OPINION

What Differentiates Linux from Windows?

What really are the most fundamental differences between Windows variants like 2003/XP and Unix variants like Linux?

From a practical perspective, cost is an obvious differentiator, as are access to source and the ability to run outside the Intel processor environment. But it’s possible to argue that those differences are neither real nor important. For example, cost is usually important in business only if the products being compared are otherwise very similar. Some companies have negotiated access to Windows source, and NT 4.0 Server on Alpha was, until quite recently, the fastest way to run any Microsoft OS.

To get beyond superficialities like these, we must look at the fundamental functions of a modern business-oriented operating system and ask how these are implemented by the two groups: Microsoft and the Unix community. Conceptually, all major business-oriented operating systems, including Linux and Windows 2003/XP, are pretty similar because they use similar hardware to achieve similar goals.

Specifically, all of them act as interfaces between hardware and user applications, with most able to provide a single virtual interface to the hardware for multiple — often concurrent — user applications. Thus, most have four interlocking layers — the user (or applications) layer communicates with the OS services layer, which uses kernel services to share access to hardware controllers — and deliver five kernel functions. The scheduler mediates CPU resource sharing, the memory manager mediates memory sharing, the virtual file system abstracts the hardware to present a common file management interface to all applications, the network interface manages network I/O, and the Inter-Process Communication (IPC) module controls interprocess messaging.
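
As a purely illustrative sketch (the names below are invented for this column, not taken from either system), those five kernel functions can be pictured as a table of services the kernel presents to the layers above it:

    /* Hypothetical sketch: the five kernel functions as a service table.
       None of these names belong to any real kernel's API. */
    #include <stddef.h>
    #include <stdio.h>

    struct kernel_services {
        void  (*schedule)(void);              /* scheduler: shares the CPU */
        void *(*alloc_pages)(size_t n);       /* memory manager */
        int   (*vfs_open)(const char *path);  /* virtual file system */
        int   (*net_send)(int sock, const void *buf, size_t len); /* network I/O */
        int   (*ipc_send)(int pid, const void *msg, size_t len);  /* IPC */
    };

    /* Stubs so the sketch compiles and runs. */
    static void  stub_schedule(void)      { puts("scheduling"); }
    static void *stub_alloc(size_t n)     { (void)n; return NULL; }
    static int   stub_open(const char *p) { (void)p; return -1; }
    static int   stub_net(int s, const void *b, size_t l) { (void)s; (void)b; (void)l; return 0; }
    static int   stub_ipc(int p, const void *m, size_t l) { (void)p; (void)m; (void)l; return 0; }

    int main(void)
    {
        struct kernel_services k = { stub_schedule, stub_alloc, stub_open, stub_net, stub_ipc };
        k.schedule();                  /* an application request travels down the layers */
        (void)k.vfs_open("/etc/motd");
        return 0;
    }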

Take any one of these, and the technical differences between how Unix and Microsoft implement the function overwhelm the commonality of terminology and purpose. It is more or less true, for example, that both Windows NT 5.X and Unix variants like Mach and some of the BSDs use a modified microkernel design with a preemptive scheduler focused on interruptible thread execution, but that use of the same words is just about as far as the actual similarity goes.

Looking at Implementations

Look at how those ideas are implemented, and what you see is that core design philosophies influence how developers make thousands of small decisions on exactly what the terms mean and how things actually get done. Because the core philosophies behind the operating system design are diametrically opposed, these microdecisions tend to go in opposite directions and thereby most fundamentally differentiate the Microsoft operating systems from Linux.

To the extent, for example, that we know what decisions the Microsoft people made, it appears that they generally made choices preferring efficiency for — and external controls over — a small number of processes over scalable multiprocessing and internal process control. In contrast, Unix developers, whether aiming at a true microkernel, as in Darwin or some of the BSDs, or a monolithic kernel like Linux, generally made the opposite choices to favor multiple processes running under adaptive internal controls.

That difference in design philosophy shows up everywhere. In memory management, for example, Windows NT 5.0 and its successors use clustered paging, a working-set memory analogue and a free-memory manager that fires up exactly once per second, while Unix uses an adaptive, page-specific algorithm — often least-recently used — to control paging. In Unix there is no working-set equivalent, and the free-memory manager runs only when needed.
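
To make that contrast concrete, here is a minimal sketch of least-recently-used page replacement, the adaptive style of algorithm the Unix approach favors. The frame count and the page-reference trace are made up for illustration:

    /* Minimal LRU page-replacement sketch. The three frames and the
       page-reference trace are hypothetical. */
    #include <stdio.h>

    #define NFRAMES 3

    int main(void)
    {
        int frame[NFRAMES];      /* which page each frame holds (-1 = empty) */
        long stamp[NFRAMES];     /* logical time of each frame's last access */
        long clock = 0;
        int faults = 0;

        for (int i = 0; i < NFRAMES; i++) { frame[i] = -1; stamp[i] = -1; }

        int trace[] = { 1, 2, 3, 1, 4, 2, 5, 1, 2, 3 };
        int n = sizeof trace / sizeof trace[0];

        for (int t = 0; t < n; t++) {
            int hit = -1, victim = 0;
            for (int i = 0; i < NFRAMES; i++) {
                if (frame[i] == trace[t]) hit = i;
                if (stamp[i] < stamp[victim]) victim = i;  /* least recently used */
            }
            if (hit >= 0) {
                stamp[hit] = clock++;         /* refresh recency on a hit */
            } else {
                faults++;                     /* page fault: evict the LRU frame */
                frame[victim] = trace[t];
                stamp[victim] = clock++;
            }
        }
        printf("%d faults on %d references\n", faults, n);
        return 0;
    }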

Another way in which the preference for a small number of core processes expresses itself in the Windows kernel is that the kernel runs nonthreaded internally. This choice avoids "object blockage," trading away concurrency and cheap context switching in favor of increased efficiency for, and better control of, a small number of key processes. Similarly, multiprocessor memory management and interprocess communications are tightly integrated with process control to get better use out of Intel's rather limited memory-management hardware, in part by simplifying page management.
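
One way to picture that nonthreaded style (purely illustrative, not Microsoft's actual code) is a single loop draining a work queue: because exactly one item runs at a time, shared state needs no internal locking, at the price of concurrency:

    /* Illustrative single-threaded dispatch loop. One work item runs at
       a time, so shared state needs no locks, but nothing overlaps. */
    #include <stdio.h>

    #define QSIZE 8                 /* no overflow handling in this sketch */

    static int queue[QSIZE];
    static int head = 0, tail = 0;

    static void submit(int work)
    {
        queue[tail] = work;
        tail = (tail + 1) % QSIZE;
    }

    static void dispatch_loop(void)
    {
        while (head != tail) {      /* drain the queue, strictly in order */
            int work = queue[head];
            head = (head + 1) % QSIZE;
            printf("processing work item %d\n", work);
        }
    }

    int main(void)
    {
        submit(1);
        submit(2);
        submit(3);
        dispatch_loop();
        return 0;
    }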

In contrast, the Unix approach generally has been to favor process creation and context switching at the cost of some efficiency for long-running processes, to favor multiprocessor memory management at the cost of increased hardware complexity, and to favor process or thread-level independence at the cost of making interprocess communication more difficult.

Consequences Beyond Differentiation

These kinds of decisions have consequences beyond fundamentally differentiating the multiuser communications orientation embedded in the Unix approach from the single-user, control-oriented focus in the Microsoft designs. Among those consequences, three groups — affecting security, scalability and adaptability — stand out as particularly relevant in today's business environment.

In Windows NT 5.X, for example, the hard-wired nature of the one-second interval at which the balance set manager runs almost certainly allows an attacker with application-level access to crash the kernel more or less at will. Similarly, the hard 50:50 division of the available 32-bit memory space in NT 5.2 and earlier releases can be expected to cause serious application incompatibilities when some future service pack or new release changes that division in the run-up to 64-bit compatibility.

In contrast to intrinsic weaknesses affecting reliability and security, most simple problems affecting scalability can be kludged: as each problem is recognized, Microsoft can add code to isolate and work around that special case. Thus the "stack" idea found everywhere in NT 5.X — in which one processing object calls another, which calls another, until the request happens to hit one that deals with the problem — presents an object lesson in institutionalized kludging.
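
The pattern is easy to sketch (with invented names; this is not NT code): each object either handles a request or passes it to the next one down, so a new special case just means bolting another handler onto the chain:

    /* Illustrative handler "stack": each handler claims a request or
       passes it along. Handler names and request codes are made up. */
    #include <stdio.h>

    struct handler {
        const char *name;
        int (*handle)(int request);     /* nonzero if this handler claims it */
        struct handler *next;
    };

    static int handle_disk(int r)    { return r == 1; }
    static int handle_net(int r)     { return r == 2; }
    static int handle_default(int r) { (void)r; return 1; }   /* catch-all */

    static void dispatch(struct handler *h, int request)
    {
        for (; h != NULL; h = h->next)           /* walk down the stack */
            if (h->handle(request)) {
                printf("request %d handled by %s\n", request, h->name);
                return;
            }
        printf("request %d fell off the stack\n", request);
    }

    int main(void)
    {
        struct handler fallback = { "default", handle_default, NULL };
        struct handler net      = { "net",     handle_net,     &fallback };
        struct handler disk     = { "disk",    handle_disk,    &net };

        dispatch(&disk, 1);     /* claimed immediately by the disk handler */
        dispatch(&disk, 2);     /* passed down to the net handler */
        dispatch(&disk, 9);     /* falls through to the catch-all */
        return 0;
    }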

Unix, of course, also has had its share of such kludges. But a key research direction, particularly in the Solaris and BSD communities, has been to remove them and so bring the core OS closer and closer to a clean realization of the original design ideas — something that’s both commercially and practically impossible for Microsoft to do.

For example, although we don’t know what Microsoft’s interprocess communications management code really looks like, it’s a safe bet that the company’s code for this is at least an order of magnitude longer, and correspondingly more complex, than that used in a typical BSD kernel — despite the fact that the BSD approach is both more general and conceptually more complex.

New Ideas Require Change

Some external changes are too complex to be dealt with via kludges, and these limit the OS's lifetime by constraining what can be achieved before the fundamental design breaks down. For example, the page-management philosophy now embedded in the network, file-system and memory-management stacks makes it functionally impossible for Microsoft to copy the page-placement optimizations available for large multiprocessor systems in Solaris 2.8 and later releases without first making fundamental changes to NT 5.X.
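
The idea behind such optimizations is simple to sketch (with a made-up topology; this is not Solaris code): home each newly faulted page on the memory node local to the CPU that first touches it, so later accesses stay off the interconnect:

    /* Illustrative "first touch" page placement on a multiprocessor.
       The CPU-to-node topology here is hypothetical. */
    #include <stdio.h>

    #define NCPUS 4

    /* CPUs 0-1 sit on memory node 0, CPUs 2-3 on node 1. */
    static const int node_of_cpu[NCPUS] = { 0, 0, 1, 1 };

    /* Home a newly faulted page on the faulting CPU's local node. */
    static int place_page(int faulting_cpu)
    {
        return node_of_cpu[faulting_cpu];
    }

    int main(void)
    {
        for (int cpu = 0; cpu < NCPUS; cpu++)
            printf("page faulted on CPU %d -> homed on node %d\n",
                   cpu, place_page(cpu));
        return 0;
    }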

Because the change needed to take advantage of new ideas like this tends to be quite fundamental, such changes historically have been accompanied by the addition of new layers of kludged code intended to maintain some semblance of backward compatibility with previous kludges.

Unix hasn’t had this problem: its fundamental philosophy and research-based development processes have allowed it to grow consistently closer to an ideal representation of the underlying ideas. Thus a device-dependent application — like a 1991 copy of Vsifax for SunOS 4.4 — works perfectly under Solaris 2.9, while Windows 2003/XP Server, which now contains both a Posix-compliant interface set and four generations of the Win32 interface, still often fails to run code written explicitly for devices supported by previous generations.

Similarly, Solaris-on-Sparc users will need no software changes when products like the forthcoming eight-way Niagara CPU assembly hit the market. But Microsoft — and Intel — remain trapped in the megahertz race because Microsoft’s basic Windows OS design is unable to take full advantage of even today’s limited two-way thread concurrency.

So, what’s really the difference between a Unix variant like Linux and any Windows OS? It’s that Microsoft reacts to marketing pressure to make design decisions favoring running a few processes faster but then finds itself forced first to layer in backward compatibility and then to engage in a patch-and-kludge upgrade process until the code becomes so bloated, slow and unreliable that wholesale replacement is again called for.

In total contrast, Unix developers advance systems research to provide both long-term continuity and continuous improvement in the software’s ability to do more, and do it better, on measures like throughput, reliability, security and communications.


Paul Murphy, a LinuxInsider columnist, wrote and published The Unix Guide to Defenestration. Murphy is a 20-year veteran of the IT consulting industry, specializing in Unix and Unix-related management issues.


Comments

  • Hmmm, what differentiates Linux from Windows. I could rant forever about this, but I’ll keep it short ‘n’ sweet. When I run Windows it locks up many times per session. I run it at more than 95% system resources free. I run 3 applications, 2 of those are Explorer and Systray, the third being whatever I’m using Windows for, and it locks up. I use Linux to develop my programs and to surf the internet. I run fifty-some applications on 4 different desktops as three different users on a computer that only has a 333 megahertz processor, and I get perhaps 2 lockups a year. I think the better O/S is the one that has more eyes. Open Source: support it!

  • /*
    Written by – Paul, best Linux guy in the WORLD!
    Feel free to distribute freely and review this code. It’s the way all the smartest people work.
    p.s. I am not responsible for anything in this code and can’t indemnify anything or anyone.
    */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Tell everyone how smart I am at the one thing I know. */
        printf("This program, from a 20-year veteran of one technical discipline, just cost you $50,000\n");
        printf("But I’m really good at being a one-trick pony, so the services fees are worth it\n");
        /* Linux is it. Maybe I should mention GNU? */
        printf("Ooops, but the OS was free!!!\n");
        exit(0);
    }

  • I just received this email:

    From: "Mark Russinovich" <[email protected]>
    To: <[email protected]>
    Subject: Linux and Windows
    Date: Thu, 11 Mar 2004 17:30:24 -0600
    Hi Rudy (aka Paul Murphy),
    I read your article (http://www.linuxinsider.com/perl/story/33089.html)
    posted today at Linux Insider comparing Windows and Linux from a design
philosophy point of view and am writing to tell you that it’s full of blatant
inaccuracies, misconceptions and ridiculous postulations on the reasons
    behind the way Windows is architected. Your descriptions of Windows memory
    management, process management, and kernel behavior demonstrate almost
    complete ignorance of the Windows OS.
It’s exactly this type of irresponsible writing that the Linux community
    always accuses the Windows community of using to promote FUD. If you’re
    interested in maintaining journalistic integrity for Linux Insider (or your
pseudonym of Paul Murphy), reply to this e-mail and I’ll provide you
    point-by-point corrections for you to publish. You can also research the OS
    yourself by reading the official book on the internals of Windows NT/2000
that I coauthored, Inside Windows 2000.
    -Mark Russinovich

    • I agree with Mark; there are a number of factual inaccuracies in the article. For example, it was claimed that NT 5.2 only supports a 50:50 virtual memory split, but I’m currently successfully using it with a 75:25 split (/3GB mode). Current Linux kernels go one better with a 100:0 mode.
      It’s only relatively recently that Linux has been able to run large x86 applications. The lengthy VM stabilization process in the 2.4 kernel series meant that it took many months before Linux’s VM could perform as well as NT’s. I wouldn’t write off either Linux or NT until you’ve tried them running the applications you want to run.
      Michael.

      • "Its exactly this type of irresponsible writing that the Linux community always accuses the Windows community of using to promote FUD."
        I do not understand this comment. LinuxInsider is part of the Windows community. Did you not know?
