• HiddenLayer555@lemmy.ml

    If we’re going to this amount of trouble, wouldn’t it be better to replace the monolithic kernel with a microkernel, plus servers that provide the same APIs for Linux apps? Maybe even seL4, whose behaviour has been formally verified. That way the microkernel can spin up arbitrary instances of whatever services are needed most.

  • geneva_convenience@lemmy.ml

    Docker has little overhead, and wouldn’t this require running the entire kernel multiple times, taking up more RAM?

    Also, dynamically allocating RAM seems more efficient than having to assign each kernel a fixed portion at boot.

  • IHave69XiBucks@lemmygrad.ml

    This seems to be a pretty niche use case brought about by changes in the hardware available for servers. Likely they have servers with copious amounts of RAM and more CPU cores than the task at hand needs, or can even make use of due to software constraints. So this is a way for them to run different tasks on the same hardware without having to worry about virtualization, effectively turning one bare-metal server into two. They mention in their statement that, “The primary use case in mind for parker is on the machines with high core counts, where scalability concerns may arise.”

    • Karna@lemmy.ml (OP)

      If you consider the core counts in modern server-grade CPUs, this makes sense.

    • LeFantome@programming.dev

      I run a Proxmox homelab, and I just had to shut down everything it runs in order to upgrade Proxmox. If I could hot-reload the kernel, I would not have had to do that. Sounds pretty handy to me. But that may be the multikernel approach, not this partitioning.

      Honestly, even on the desktop. On distros like Arch or Chimera Linux, the kernel is getting updated all the time. It would be great to avoid restarts there too.

    • LeFantome@programming.dev

      Xen runs full virtual machines: complete operating systems on virtualized hardware, where the real “host” operating system is the hypervisor (Xen). Inside a VM you have the concept of one or more CPUs, but you do not know which actual CPU cores they map to; the real host can distribute the load across any of them.

      In something like Docker, you only run a single host kernel. On top of it you run sandboxed environments that “think” they have a machine to themselves but are actually sharing that one kernel, which directly manages the real hardware. Processes can run on any of the CPUs the host kernel manages.
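
      A quick way to see the shared-kernel point for yourself (a minimal sketch in Python; it assumes Docker is installed with its daemon running, and the alpine image is just an arbitrary small image):

      ```python
      # Compare the kernel release on the host with the one reported inside a
      # container. Because containers share the single host kernel, they match.
      import platform
      import subprocess

      host_kernel = platform.release()  # host's kernel release, e.g. "6.8.0-45-generic"
      container_kernel = subprocess.run(
          ["docker", "run", "--rm", "alpine", "uname", "-r"],
          capture_output=True, text=True, check=True,
      ).stdout.strip()

      print(f"host kernel:      {host_kernel}")
      print(f"container kernel: {container_kernel}")
      # Same release string: the container is a namespaced view of the host
      # kernel, not a separate kernel. A Xen guest, by contrast, would report
      # whatever kernel it booted itself.
      ```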

      In both of the above, updating the host means shutting the system down.

      With this new approach, you have multiple kernels, all running natively on real hardware. Any given CPU is being managed by only one of the kernels. No hypervisor.