About two weeks ago Linus Torvalds
Unfortunately, experience has taught me to believe that about half of everything I know for sure is wrong. So how wrong am I about this?
The issue seems to come down to two forms of one question: What can you do with Linux today that you could not do with something like AT&T/NCR’s SVR4 MP-RAS in 1992, and what implementations exist in both but are done better in Linux?
Notice that these questions are not at all about the hardware and only indirectly about the applications that now come with Linux. You couldn’t run Apache with Tomcat on an NCR 3450 in 1992 because the software didn’t exist, not because there were technical reasons stopping the implementation.
To get at the answers, I need your help — because I don’t have either the documentation or the programming experience needed to assess how products like AT&T/NCR System VR4.2 really stack up against Linux.
I’m reasonably sure there were things you couldn’t do then and can now. For example, I remember playing with an early log-structured filesystem (Sprite LFS) on SunOS, but I have no record of anything comparable on the System V machines I was responsible for; that doesn’t mean the capability wasn’t available in products like AT&T System V/MLS for NCR. I’m far more certain that some ideas are better implemented today than in System VR4. Solaris 10, for example, includes code based on the first serious rethinking of the basic TCP/IP implementation since Bill Joy’s original translation of protocol to code.
Notice, however, that what matters here is not the differences reflecting networking’s new centrality or Sun’s intention to move packet I/O into hardware, but that (non-threaded) applications could count on functionally the same services from both implementations.
I’ve got some books that will help, but they tend to focus on SunOS/BSD and what’s needed here is documentation and programming experience relevant to commercial System V Release Four Unix — so if you have that to share, please get in touch (murph at winface.com).
I’m particularly interested in programming documentation or experience with the AT&T/NCR products, because the port to Intel had just been completed and much of the MC680X0 throughput-management hardware missing from the x86 had been added externally on the NCR CPU board. That is very much what Sun is rumored to be doing with its next-generation Athlon servers, and it was the basis, then, for NCR’s performance advantage over cheaper Intel-based machines from Compaq or IBM.
At the time that performance advantage seemed worth the money, but the cost of things back then seems almost unbelievable now. In January of 1993, an NCR 3450 CPU board had a list price of $27,500 (Canabucks, then about US$0.78 each), and that was for a “speed doubled” 33MHz i80486! Adding 256 MB to a dual-processor machine cost $60,515, after a 32 percent discount from list. A 32-user Oracle RDBMS license was going to cost $39,400, but we bought Informix 5.0 for $19,000 instead, and a storage pack containing two 2.1 GB SCSI drives listed at $26,800, with NCR throwing in a dedicated controller for each of the two we bought. On the other hand, the machine did the job, running without an unplanned powerdown (or any upgrades) from March of 1993 to April of 1998, when we replaced the entire system with a $28,200 Sun 2270 running Solaris 2.5.1.
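For readers who think in US dollars, here is a rough conversion of those Canadian-dollar list prices, treating the quoted US$0.78 exchange rate as exact (all figures are the ones cited above; the rounding is mine):

```python
# Rough conversion of the January 1993 NCR list prices quoted above
# from Canadian to US dollars, at the ~US$0.78 rate mentioned in the text.
CAD_TO_USD_1993 = 0.78

prices_cad = {
    "NCR 3450 CPU board (33MHz i80486)": 27_500,
    "256 MB memory upgrade (after discount)": 60_515,
    "Informix 5.0 RDBMS": 19_000,
    "Storage pack (two 2.1 GB SCSI drives)": 26_800,
}

for item, cad in prices_cad.items():
    usd = cad * CAD_TO_USD_1993
    print(f"{item}: CAD ${cad:,} ~= USD ${usd:,.0f}")
# The CPU board alone, for instance, works out to about USD $21,450.
```

Even converted, a single 486 board cost more than the complete Sun system that replaced the machine five years later.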
How Does BSD Fit In?
Release 2.5.1 was really the first Solaris release that could be trusted with production work. Prior to its release on UltraSPARC in 1996, people like me hung onto SunOS 4.1.4 as if our jobs depended on it, which, of course, they mostly did: choosing to run Unix always magnifies your personal responsibility for any failures.
That brings us to the second part of the main issue: It’s easy enough to think of a half dozen things you can do with Solaris that you can’t do with Linux — like swapping processes across a resource network, managing trusted communities (or using containers to blackmail vendors into dropping per-CPU licensing), automating disk resource pooling, using event ports for control processing, mapping processes to electronically nearby memory, or upgrading some processors in a box by a whole generation without shutting down production — but where does BSD fit?
How do the BSD and Linux kernel technologies compare on the same box? If we pick one or two key releases of each one, are there really things one can do that the other can’t? Again, I have strong biases here, but not enough in the way of fact — and this question begs for facts. So if you can help, I’d appreciate a note.
More Secure, Reliable?
Specifically, what’s needed here is the low-level programmer’s view: not of what’s out there by way of applications, but of what could be out there if the kernel and device technologies were the sole determinants.
Is BSD really more secure and reliable? If you think so, can you point at some structural issue, programming construct, or convention that objective reviewers could nod over? Conversely, can you point at something in a major Linux version that makes it a better OS than a BSD variant? Notice, please, that’s “in Linux” not “about Linux” — there are lots of things about Linux that make it a better business choice for many people, but I’d like to focus on fundamentals: what it does better, not applications and not popularity.
Either way I want to hear from anyone with facts to share: that’s murph at winface.com, please.
Paul Murphy, a LinuxInsider columnist, wrote and published The Unix Guide to Defenestration. Murphy is a 20-year veteran of the IT consulting industry, specializing in Unix and Unix-related management issues.