If we’re going to this much trouble, wouldn’t it be better to replace the monolithic kernel with a microkernel plus servers that provide the same APIs for Linux apps? Maybe even seL4, whose behaviour has been formally verified. That way the microkernel could spin up arbitrary instances of whatever services are needed most.
I always thought that Minix was a superior architecture to be honest.
Docker has little overhead, and wouldn’t this require running the entire kernel multiple times, taking up more RAM?
Also, dynamically allocating RAM seems more efficient than having to assign each kernel a fixed portion at boot: split a 512 GB box into four 128 GB kernels and a 200 GB job has nowhere to run, even though most of the machine is idle.
This seems to be a pretty niche use case brought about by changes in the available server hardware. Likely they have servers with copious amounts of RAM, and more CPU cores than the task at hand needs, or can even make use of due to software constraints. So this is a way for them to run different tasks on the same hardware without having to worry about virtualization, effectively turning one bare-metal server into two bare-metal servers. They mention in their statement that “The primary use case in mind for parker is on the machines with high core counts, where scalability concerns may arise.”
If you consider the core count in modern server-grade CPUs, this makes sense.
I run a Proxmox homelab. I just had to shut down everything it runs in order to upgrade Proxmox. If I could hot-reload the kernel, I would not have had to do that. Sounds pretty handy to me. But that may be the multikernel approach, not this partitioning.
Honestly, even on the desktop. On distros like Arch or Chimera Linux, the kernel is getting updated all the time. It would be great to avoid restarts there too.
And they said k8s was overengineered!
I mean, isn’t this just Xen revisited? I don’t understand why this is necessary.
Xen runs full virtual machines: complete operating systems on virtualized hardware. The real “host” operating system is the hypervisor (Xen). Inside a VM you have the concept of one or more CPUs, but you do not know which actual CPU cores they map to; the real host can distribute the load to any of them.
In something like Docker, you run only a single host kernel. On top of that you run sandboxed environments that “think” they have a machine to themselves but are actually sharing that one kernel, which directly manages the real hardware. Processes can run on any of the CPUs the host kernel manages.
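A rough way to see this (my own sketch, nothing from Docker itself): the namespace primitives that container runtimes are built on just wrap ordinary processes, and both the host and the “container” report the same kernel release, because there is only one kernel. Needs root to run:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/utsname.h>
#include <sys/wait.h>
#include <unistd.h>

static int child(void *arg)
{
    (void)arg;
    /* We're in a fresh UTS namespace, so this rename is invisible to the host. */
    sethostname("sandbox", strlen("sandbox"));

    struct utsname u;
    uname(&u);
    /* Same kernel release as the host: one shared kernel, not a second one. */
    printf("\"container\": hostname=%s kernel=%s\n", u.nodename, u.release);
    return 0;
}

int main(void)
{
    static char stack[1024 * 1024];

    /* CLONE_NEWUTS/CLONE_NEWPID are the primitives runtimes like Docker
       build on: the child is an ordinary process with a private view of
       the hostname and PIDs, nothing more. */
    pid_t pid = clone(child, stack + sizeof(stack),
                      CLONE_NEWUTS | CLONE_NEWPID | SIGCHLD, NULL);
    if (pid < 0) {
        perror("clone");
        return 1;
    }
    waitpid(pid, NULL, 0);

    struct utsname u;
    uname(&u);
    printf("host:        hostname=%s kernel=%s\n", u.nodename, u.release);
    return 0;
}
```

Afterwards the host’s hostname is untouched, because the child only changed it inside its own UTS namespace.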
In both of the above, updating the host means shutting the system down.
With this new approach, you have multiple kernels, all running natively on real hardware. Any given CPU is being managed by only one of the kernels. No hypervisor.
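There’s no userspace API for the multikernel side to show yet (it’s an RFC patch series), so purely as intuition: under today’s single kernel, the closest thing to “this core belongs to that workload” is an affinity hint, as in the sketch below, and the one scheduler still owns every core. The multikernel design moves that boundary down a level, so each core is owned by exactly one kernel.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    /* Single-kernel status quo: one scheduler owns every core, and
       "dedicating" a core to a task is just an affinity hint. */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                     /* restrict ourselves to CPU 0 */

    if (sched_setaffinity(0, sizeof(set), &set) != 0) {  /* 0 = this process */
        perror("sched_setaffinity");
        return 1;
    }

    /* The pin worked, but the same kernel still runs its own threads and
       handles interrupts on this core, and manages every other core too. */
    printf("now running on CPU %d\n", sched_getcpu());
    return 0;
}
```

Even with the pin, the kernel keeps scheduling kthreads and servicing interrupts on that core; kernel-level partitioning is what would remove even that sharing.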
GTFO, you’re the brainrot-AI-slop-hosting TikTok company.
the only brainrot here is your own