Desktop virtualization is entering the corporate limelight after many years as a consumer toy. Recently, for example, Citrix and Intel announced a partnership to deliver an embedded bare-metal client hypervisor by the second half of 2009.
Startup contender Virtual Computer, founded by server virtualization guru and Virtual Iron ex-CTO Alex Vasilevsky, has had its NxTop product in beta since the fourth quarter of 2008. And it’s likely VMware will follow suit with a desktop hypervisor.
Linux as Hypervisor?
Bare-metal hypervisors are all the rage, especially for corporate desktops, as they provide a thin embeddable layer that separates VMs from the hardware and makes IT types feel warm and cozy about security and the ability to sandbox between VMs. Until now, desktop virtualization has largely been implemented as a hosted architecture, whereby a mainstream OS such as Windows or Linux controls the hardware. But having a big, fat OS driving the hardware doesn't sit well with corporate IT types, first because of apparent security issues and second because it presents yet another software image to maintain.
Not long ago, I blogged about how Linux could be viewed as a bare-metal hypervisor. Mostly I feel the bare-metal vs. hosted debate is largely academic and marketecture where Linux is concerned, since Linux can be thinned down, embedded, hidden and managed like any other bare-metal hypervisor.
When you consider the overall footprint of the management layers, storage, authentication, the Domain 0 stack in Xen-style hypervisors, special guest drivers and so on, the hypervisor's footprint is put in better perspective. That said, it is true that modularity and smaller, more manageable components are generally better designed and easier to audit for errors and security issues. In fact, it's just damn good software architecture.
The Size Problem
I think the Linux community is missing out on a big opportunity, and risks losing its recent gains on the desktop, if it doesn't come together to push Linux as a capable bare-metal hypervisor. But let me point out a sticking point with regard to the Linux kernel's viability as a bare-metal hypervisor: it's HUGE!
When the Linux kernel was initially designed, it was a monolithic chunk of code (i.e., drivers could not be loaded as modules). Some years later, after much debate, loadable kernel module support was added. That was a big step in the right direction for code quality and modularity; few people could imagine Linux without it today. Then came one of the more storied debates, over adding a pluggable scheduler, since one size does not fit all workloads. The net result is in recent Linux kernels, and again the consensus is that this was a very good thing. Unfortunately, for these sorts of developments, the span from initial debate to introduction into the mainline kernel has been measured in years.
Extra Baggage
So what has become of the size of the Linux kernel after such developments? Well, not that long ago I compiled an extremely stripped-down recent Linux kernel for a VM. Uncompressed, the size of the kernel binary still rhymes with megabytes. That tells me there's a lot of extra baggage that needs to be modularized out. Whether it's perception, marketing or reality, I think we need to get a slim kernel proper down to a few hundred kilobytes or less before people will consider it a bare-metal hypervisor.
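For anyone who wants to repeat the experiment, here's a rough sketch, assuming a Linux kernel source tree in the current directory (the targets and paths are for recent x86 kernels and may vary by version):

```shell
# Start from the smallest possible configuration: answer "no" to
# every option that can be disabled.
make allnoconfig

# Build the kernel proper (adjust -j to your CPU count).
make -j4

# Inspect the uncompressed kernel image: text/data/bss in bytes.
size vmlinux

# The compressed boot image is smaller still, but the uncompressed
# figure is the honest measure of the resident kernel proper.
ls -lh arch/x86/boot/bzImage
```

Even with nearly everything switched off, the `size vmlinux` numbers make the point about baggage better than any argument.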
Any place where layered functionality exists is a candidate for being pulled out and made into a pluggable counterpart. The mantra I'd like to offer: if a piece of code does not absolutely need to be in the monolithic part of the kernel, then it shouldn't be.
Beyond consideration as a hypervisor, taking time to modularize and clean house is always a good thing, and, as past modularization efforts have shown, it produces a better design and encourages more innovation. Smaller modular chunks mean that people, or groups of people, can comprehend, analyze, audit, innovate and improve within a more manageable domain. This would go a long way toward enabling academic projects, which need to be completed in one semester (a nod to Andrew Tanenbaum's MINIX philosophy). And with Linux development picking up steam, I think this is well needed in any case.
Dream Jobs
With these thoughts in mind, I offer the following project ideas, as I realize people are often looking for ways to help.
- Modularization Guru: Oversee modularization of anything that can possibly be modularized, and drive changes into the mainline kernel. This gig is not for the faint of heart — you’ll need a flame-proof shield.
- Bloat Tracker: Do minimal compiles across a history of Linux kernels and plot the size of the kernel proper with everything modularized out. Keep track of new code that adds bloat creep. The job is to shame people into helping the Modularization Guru.
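As a starting point for the Bloat Tracker, here's a minimal sketch of the reporting side: given a file of measured sizes, print the growth between consecutive releases. The version strings and byte counts below are made-up placeholders, not real measurements.

```shell
# sizes.txt holds one "<version> <bytes>" pair per line, oldest first.
# These figures are hypothetical; real numbers would come from running
# "make allnoconfig && make" on each release and measuring vmlinux.
printf '%s\n' 'v2.6.20 900000' 'v2.6.25 990000' 'v2.6.28 1100000' > sizes.txt

# Report the percentage growth between each pair of consecutive releases.
awk 'NR > 1 { printf "%s -> %s: %+.1f%%\n", pv, $1, 100 * ($2 - ps) / ps }
     { pv = $1; ps = $2 }' sizes.txt
```

Wiring this up to a chart, and to a script that checks out and builds each tagged release, is the real work of the job.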
And I'd note to the Linux crowd that the hypervisor is becoming the new OS. Either Linux adapts, or it becomes subjugated.
Kevin Lawton is a pioneer in x86 virtualization, a serial entrepreneur, a founding team member in a microprocessor startup, and the author and lead of two open source projects: Bochs and plex86.
I’ve talked about this issue before, and go into a little more depth at http://looseaffiliation.blogspot.com/2009/02/thin-vs-thick-hypervisors.html . The basic tradeoff is footprint vs functionality… though I agree that some of the Linux functionality causing bloat could be lost without shedding many tears. 🙂