
FOSS Debates, Part 1: Kernel Truths

This is the first installment in a three-part series presenting a detailed look at the current state of opinion on the Linux kernel.

The free and open source software community is known for many things, but perhaps none more than its propensity for passionate debate.

No topic is too small for the community’s spirited analysis, so it should come as no surprise that the Linux kernel — one of the most central elements of the FOSS world — figures so frequently and so prominently in those discussions.

Indeed, ever since Linux inventor Linus Torvalds released the first version of the Linux kernel back in 1991, it has been the topic of regular discussion, debate and downright dispute, particularly on questions of its importance, its size and its security.

Torvalds’ Take

As the father of Linux and coordinator of the kernel’s development, Linus Torvalds might be expected to have strong views on its importance. In fact, in a recent interview with LinuxInsider, he said he looks at the question in a few different ways.

“The first one is the purely personal one: I think system programming — and kernels in particular — is just more interesting than doing the more mundane kinds of software engineering,” Torvalds told LinuxInsider. “So to me, the kernel is simply more important than anything else, because it does things that no other piece of software does: It’s the thing between the hardware and ‘ordinary programs.’”

Of course, “the kernel really is the heart of the OS and tends to be pretty central even aside from my personal opinions,” Torvalds added. “The kernel ends up being involved in everything you do, which means that if there is a performance issue or a security issue with the kernel, you have fundamental problems with anything that builds on top of it — which ends up being absolutely everything, of course.”

‘It’s the Heart and Soul’

Indeed, “the kernel is Linux,” agreed Elbert Hannah, coauthor of the O’Reilly book, Learning the vi and Vim Editors. “Everything else is GNU software. Linux is the kernel — it’s the heart and soul of Linux and the heart and soul of a movement. Without the Linux kernel, nothing else exists, nothing else runs.”

The kernel is also “the traffic cop to serve and protect software running in Linux,” Hannah told LinuxInsider. “The kernel is the counselor, to ensure all get their fair share in processor power. The kernel is the foundation upon which everything else grows.”

‘It Should Be Almost Invisible’

At the same time, however, “the kernel on its own is rather unimportant” in the end, Torvalds asserted. “What matters is really how well it allows other programs to do their jobs: not getting in the way, and hopefully never being the limiting factor. So while the kernel is both important and challenging, in the end it should also be almost invisible.”

For some, the kernel has achieved such levels of performance that it’s also diminished in importance from a development perspective.

“The kernel is at the heart of Linux and is extremely important, but I think that development in the Linux world needs to change its focus to things farther up the stack and improve the user interface and apps,” Montreal consultant Gerhard Mack told LinuxInsider. “The kernel is exceptional when it comes to performance, so I think some of that talent could be better put to use on parts of Linux that are more lacking.”

10 Million Lines and Counting

When Linux version 0.01 was first released more than 17 years ago, it included some 10,000 lines of code; last fall, the kernel surpassed 10 million lines.

Though blank lines, comments and text files are included in that count, the kernel’s size has been a source of growing concern among many observers, not a few of whom charge that it has become unwieldy and bloated.
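
Exactly what gets counted matters, of course. As a purely illustrative sketch (not the methodology behind the figures above), a tally of a kernel source tree might look something like the following Python fragment, in which the source path and the file-type filter are assumptions:

    import os

    # Illustrative only: count lines in a kernel source tree, noting how many
    # are blank, to show how much such totals depend on what is included.
    # The path and the file-name filter are assumptions.
    def count_lines(tree="/usr/src/linux"):
        total = blank = 0
        for root, _dirs, files in os.walk(tree):
            for name in files:
                if not name.endswith((".c", ".h", ".S", ".txt", "Kconfig", "Makefile")):
                    continue
                try:
                    with open(os.path.join(root, name), errors="ignore") as f:
                        for line in f:
                            total += 1
                            if not line.strip():
                                blank += 1
                except OSError:
                    continue
        return total, blank

    if __name__ == "__main__":
        total, blank = count_lines()
        print(f"{total} lines, {blank} of them blank")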

“Size is not necessarily a problem in itself, but it does result in certain challenges,” Torvalds admitted — “the biggest one being the ability to maintain a big body of code and not let quality suffer.”

It’s difficult to scale development, and smaller projects are much easier to maintain, he explained. “With a big project, you inevitably end up in the situation that no single person knows all of the details, and that certainly makes maintenance more challenging.”

‘A *Potential* Problem’

So, size is certainly “a *potential* problem,” he added. “It can make it harder for people to approach the project, because it can be simply overwhelming. It also obviously tends to imply a higher level of complexity, which again is not necessarily a problem in itself, but that then can make fixing other problems much harder.”

On the other hand, “I have to say that I think we’ve been pretty good at combating these issues,” Torvalds asserted. “Our development model has scaled very well, and we have a rather large number of developers, and I think they are actually productive and not bogged down in unnecessary ‘administrativia.’”

In addition, “while the kernel has also grown in complexity, we’ve maintained a pretty good modular architecture, and much of the bulk ends up being things like drivers — which are complex, but most of the complexity is of the ‘local’ kind,” he added. “That again makes it much more manageable — you need to have knowledge about a particular piece of hardware to be able to write a driver for it, but you can often mostly ignore other issues.”

‘So Few People Need to Care’

Similarly, “I don’t see the increasing size as a problem because so few people need to actually care about the download size,” Mack said.

“Most of the kernel tarball is architecture code and device drivers; if it becomes more of a problem, I’m sure someone can write a script to split the kernel into per-architecture tar files and possibly remove some of the really old and rare drivers,” he added.
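
The sort of script Mack describes would not be hard to write. A rough, hypothetical Python sketch (the tree path, version number and naming scheme are invented for illustration) might simply repackage each arch/ subdirectory of an unpacked source tree as its own tarball:

    import os
    import tarfile

    # Hypothetical sketch of a per-architecture split; paths and names are
    # assumptions, not an existing kernel.org tool.
    SRC = "linux-2.6.28"        # an unpacked kernel source tree
    OUT = "split-tarballs"

    os.makedirs(OUT, exist_ok=True)
    arch_root = os.path.join(SRC, "arch")
    for arch in sorted(os.listdir(arch_root)):
        arch_dir = os.path.join(arch_root, arch)
        if not os.path.isdir(arch_dir):
            continue
        out_path = os.path.join(OUT, f"linux-arch-{arch}.tar.gz")
        with tarfile.open(out_path, "w:gz") as tar:
            # keep the arch/<name> layout inside each tarball
            tar.add(arch_dir, arcname=os.path.join("arch", arch))
        print("wrote", out_path)

Removing the “really old and rare drivers” is a policy question rather than a scripting one, which is why the sketch leaves drivers/ alone.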

There are ways of paring down the kernel, but “the simple truth is that nobody (relatively speaking) is using floppies anymore, so the best reason to fit the kernel into an incredibly tiny space has gone away,” Slashdot blogger Martin Espinoza told LinuxInsider. “Disk space and RAM are both so cheap these days that the size of the Linux kernel barely even merits a mention on those grounds.”

‘One of the Most Visionary Thoughts’

It was “almost scandalous” when — back in 1997 — Torvalds said something like, “‘No longer will I constrain my decisions around Linux on a small memory model,’” Hannah recounted. “Back then, 32M was a lot of memory, and it seemed to break with everything Linux (and Linus) held near and dear: run lean, run clean,” Hannah noted.

However, “in my opinion, it turned out to be one of the most visionary thoughts in Linux’s history,” Hannah asserted. By all standards, Linux “has continued to be lean and clean, but Linus embraced and foresaw that memory was soon to be the commodity it became,” he explained. “That simple recognition carries today.

“As Linux increases in complexity, it must increase in size, but that’s a curve that eventually plateaus,” Hannah went on. “Far more risk and damage to computing comes from the careless and glib attitudes about bloat in software. Linux is still one of the most nimble OSs out there today, especially considering its breadth of services.”

‘It Is Just Fine With Us’

As long as the kernel performs “like a scalded cat, I have no problems with its speed on PCs,” educator and blogger Robert Pogson noted. “Where there is an issue is thin clients. These generally have CPU power a few generations back and some boot PXE, so a stripped-down kernel is important.”

For now, though, it is possible to custom-build a kernel with only the needed drivers and get “excellent performance,” he added. Overall, “whatever the kernel boys and girls are doing, it is just fine with us,” he concluded.
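
The custom build Pogson refers to is the ordinary configure-and-compile cycle, run from the top of an unpacked kernel source tree. A minimal, purely illustrative Python wrapper around the standard make targets might look like the following; the localmodconfig step trims the configuration to the modules currently loaded, and on kernels that lack it the same trimming is done by hand in menuconfig:

    import multiprocessing
    import subprocess

    # Illustrative sketch: run the usual kernel build steps from the top of an
    # unpacked source tree. The install steps normally require root.
    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run("make", "localmodconfig")                    # keep only drivers for loaded modules
    run("make", f"-j{multiprocessing.cpu_count()}")  # build the kernel and modules
    run("make", "modules_install")                   # install the modules
    run("make", "install")                           # install the kernel image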

‘Bugs Are Inevitable’

And what of security — another issue that’s frequently harped upon by concerned Linux fans?

“Security always ends up being one of the things that kernel developers need to keep in mind, but bugs are inevitable — which in a kernel means that security problems *will* happen,” Torvalds said. “We’re careful, but you’ll never avoid it entirely.”

The good news, he added, “is that we tend to have several layers of security, and the core code — which is orders of magnitude smaller than the bulk of drivers and filesystem code — tends to be better vetted and have a lot more people looking closely at it than, say, a random device driver.”

That core code “tends to be the code that needs to be more conscious about security,” Torvalds explained. It also “does things like validate arguments against buffer overflow issues, for example, so that low-level filesystems or drivers don’t generally even need to do range checking for the normal operations, because those have been done by the core layers.”
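
That division of labor is easy to picture in miniature. The sketch below is written in Python rather than kernel C, and every name in it is invented, but it shows the shape of the arrangement Torvalds describes: the core layer validates and clamps a request once, and the driver behind it can assume its arguments are sane.

    # Toy illustration, not kernel code: a "core layer" does the range checking
    # so the driver never sees an out-of-bounds request.
    class RamDisk:
        """A toy driver that trusts its (already validated) arguments."""
        def __init__(self, size):
            self.size = size
            self.data = bytearray(size)

        def read(self, offset, count):
            return bytes(self.data[offset:offset + count])

    def core_read(device, offset, count):
        """The toy core layer: validate once, on behalf of every driver."""
        if offset < 0 or count < 0:
            raise ValueError("invalid arguments")
        if offset >= device.size:
            return b""                                # read past the end: nothing to do
        count = min(count, device.size - offset)      # clamp rather than overflow
        return device.read(offset, count)

    disk = RamDisk(4096)
    print(len(core_read(disk, 4000, 1000)))           # 96: the driver never saw a bad range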

‘Eternal Vigilance Is Needed’

“I don’t see any security implications in the kernel’s increasing size and complexity,” Mack said. “Most of the kernel is very modular, and the interfaces are designed to make driver writers’ lives less complex. I have also been very impressed by the tools they use to scan the source for possible bugs.”

The security issue may be far less severe than on, say, Microsoft operating systems, Pogson added.

“I have never seen malware on a PC running GNU/Linux, but I see it every week on machines running that other OS,” he noted. “I think GNU/Linux is OK on security, but eternal vigilance is needed.”

‘I Give Linux the Edge’

It’s not entirely clear whether “Linux (and Unix) remains relatively secure because most attacks go for Microsoft, or if it’s that Linux really is more secure,” Hannah said. “Ultimately I think most security scares are overblown (yes, even for Microsoft), and that security in computers is far scarier around social engineering attacks.

“Why bother with attacks against obfuscated reverse-engineered code when you can pretend to be someone else and simply connive your way into the hen house?” he pointed out. “Certainly I’ve not seen or encountered what I’d describe as unusual Linux security weaknesses, and I give Linux the edge in overall architectural integrity — this begets better security.”

‘Linux Is the Poster Child’

Ultimately, any software becomes “harder to secure as the size increases, at least in theory,” Espinoza said.

In practice, however, “the Linux kernel isn’t just a blob of code sitting on a server somewhere — different pieces of the kernel have authors and maintainers,” he noted. “Of all the pieces of code you would expect to be protected by ‘many eyes,’ the Linux kernel is about at the top.”

So, “in the end, size is a poor metric of quality,” Espinoza concluded. “If one person had to understand the whole thing, it would be pretty much hopeless, but Linux *is* the poster child for collaboratively developed, free, open source software.”

FOSS Debates, Part 2: Standard Deviations

FOSS Debates, Part 3: Mission Control

