
INDUSTRY ANALYSIS

Apple: Up the Market Without a CPU

For the last three weeks I’ve been talking about the impact the new Sony, Toshiba and IBM Cell processor is likely to have on Linux desktop and datacenter computing. The bottom line there is that this thing is fast, inexpensive and deeply reflective of very fundamental IBM ideas about how computing should be managed and delivered. It’s going to be a winner, probably the biggest thing to hit computing since IBM’s decision to use the Intel 8088 led Bill Gates to drop Xenix in favor of an early CP/M clone with its kernel separation hacked out.

Sun has the technology to compete. Its throughput-computing initiative — coupled with some pending surprises on floating point — gives it the hardware cost and performance basis needed to compete on software, where it has the best server-to-desktop story in the industry.

No one else does. Microsoft’s software can’t take x86 beyond some minor hyperthreading on two cores without major reworking — and Itanium simply doesn’t cut it. The Wintel oligopoly could spring a surprise — a multicore CPU built from the RISC-like core at Xeon’s heart, along with a completely rewritten Longhorn kernel to use it. But no one has reported them stuffing this rabbit into their hat. So, for now at least, they seem pretty much dead-ended.

Wintel’s Dilemma and Apple’s Problem

If, as I expect, the Linux community shifts massively to the new processor, Microsoft and its partners in the Wintel oligopoly will face some difficult long-run choices. It’s interesting, for example, to wonder how long key players like Intel and Dell can survive as stand-alone businesses once the most innovative developers leave them to Microsoft’s exclusive mercy.

Wintel’s dilemma is, however, a fairly long-term issue. Much closer at hand is Apple’s immediate problem. Just recently Steve Jobs has had to apologize to the Apple community for not being able to deliver on last year’s promise of a 3-GHz G5 by mid-2004. IBM promised to make that available, but has not done so.

A lot of people have excused this on the grounds that the move to 90-nanometer manufacturing has proven more difficult than anticipated, but I don’t believe that. The PowerPC does not have the absurd complexities of the x86, and 90-nanometer production should be easily within reach for IBM. The Cell processor, furthermore, is confidently planned for mass production at 65 nanometers early next year.

This will get more interesting if, as reported on various sites such as Tom’s Hardware, IBM has been burning the candle at both ends and will also produce a three-way, 3.5-GHz version of the PowerPC for use in Microsoft’s Xbox.

Whether that’s true or not, however, my belief is that IBM chose not to deliver on its commitment to Apple because doing so would have exacerbated the already embarrassing performance gap between its own server products and the higher-end Macs. Right now, for example, Apple’s 2-GHz Xserve is a full generation ahead of IBM’s 1.2-GHz p615, but costs about half as much.

Consequences of Apple’s Decision

Unfortunately, this particular consequence of Apple’s decision to have IBM partner on the G5 is the least of the company’s CPU problems. The bigger issue is that although the new Cell processor is a PowerPC derivative, and thus broadly compatible with previous Apple CPUs, its attached processors are not compatible with AltiVec and neither is the microcode needed to run the thing. Most importantly, the graphics and multiprocessor models are totally different.

As a result, it will be relatively easy to port Darwin to the new machine, but extremely difficult to port the Mac OS X shell, and almost impossible to achieve backward compatibility without significant compromise along the lines of a “fat binary” solution.
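
Since that term carries weight here, a quick illustration may help: a “fat” binary packs slices of machine code for several CPU types into one file, and the loader picks the matching slice at launch time. The short C sketch below lists the slices in such a file; it assumes the fat-header layout Apple documents in <mach-o/fat.h>, with the structs redeclared so it compiles anywhere — a minimal reader, not Apple’s actual tooling.

```c
/* fatls.c -- list the architecture slices in a Mach-O "fat" binary.
 * A minimal sketch; the structs mirror Apple's <mach-o/fat.h> and are
 * redeclared here so the program compiles on any platform. */
#include <stdio.h>
#include <arpa/inet.h>          /* ntohl: fat headers are big-endian on disk */

#define FAT_MAGIC 0xcafebabe

struct fat_header { unsigned int magic, nfat_arch; };
struct fat_arch   { unsigned int cputype, cpusubtype, offset, size, align; };

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s binary\n", argv[0]); return 1; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror(argv[1]); return 1; }

    struct fat_header h;
    if (fread(&h, sizeof h, 1, f) != 1 || ntohl(h.magic) != FAT_MAGIC) {
        fprintf(stderr, "%s: not a fat binary\n", argv[1]);
        fclose(f);
        return 1;
    }

    unsigned int n = ntohl(h.nfat_arch);
    printf("%u architecture slice(s):\n", n);
    for (unsigned int i = 0; i < n; i++) {
        struct fat_arch a;
        if (fread(&a, sizeof a, 1, f) != 1) break;
        printf("  cputype %u: %u bytes at offset %u\n",
               ntohl(a.cputype), ntohl(a.size), ntohl(a.offset));
    }
    fclose(f);
    return 0;
}
```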

In other words, what seemed like a good idea for Apple at the time — the IBM G5 — is about to morph into a classic choice between the rock of yet another CPU transition and the hard place of being left behind by CPU performance improvements in the broader market.

Look at this from IBM’s perspective and things couldn’t be better. Motorola’s microprocessor division — now Freescale Semiconductor — is mostly out of the picture, despite having created the PowerPC architecture. Thus, if Apple tries to stay with the PowerPC-AltiVec combination, it can either be performance-starved out of the market or driven out by the costs of maintaining its own CPU design team and low-volume fabrication services.

If, on the other hand, Apple bites the bullet and transitions to the Cell processor, IBM will gain greater control while taking away Apple’s long-term ability to prevent people from running Mac OS on non-Apple products. Either way, Apple will go away as a competitive threat, because the future Mac OS will either be out of the running or running on IBM Linux desktops.

Apple-Sun Partnership

I think there’ll be an interesting signal here. If IBM thinks Apple is going to let itself be folded into the Cell-processor tent, it will probably allow as many others to clone the new Cell PC as it can make CPU assemblies for. If, on the other hand, IBM thinks Apple plans to hang in there as an independent, it might just treat the Cell PC as its own Mac and keep the hardware proprietary. Notice, in thinking about this, that IBM doesn’t have to make an immediate decision: There will be CPU assembly shortages for the first six months to a year, if not longer.

So what can Apple do? What the company should have done two years ago: hop into bed with Sun. Despite its current misadventure with Linux, Sun isn’t in the generic desktop computer business. The Java desktop is cool, but it’s a solution driven by necessity, not excellence. In comparison, putting Mac OS X on the Sun Ray desktop would be an insanely great solution for Sun, while having Sun’s salespeople push Sparc-based Macs onto corporate desktops would greatly strengthen Apple.

Most importantly, Sparc is an open specification with several fully qualified fabrication facilities. In the long term, Apple wouldn’t be trapped again, and in the short term the extra volume would improve prospects for both companies. Strategically, it just doesn’t get any better than that.

Some Important Footnotes

I am not suggesting that Sun buy Apple, or Apple buy Sun. Neither company has adequate management bandwidth as things stand. I’m suggesting informed cooperation, not amalgamation.

The transition to Sparc would be easier than the transition to Cell. It might look like the bigger change, but the programming model needed for Cell is very different, whereas existing Mac OS software, from any previous generation, need only be recompiled to run on Sparc.
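
One supporting detail worth making concrete (my gloss, not part of the original argument): PowerPC and Sparc are both big-endian, so even byte-order-sensitive code produces identical results on the two, which is part of why a straight recompile is plausible. A trivial C check:

```c
/* endian.c -- byte-order-sensitive code gives the same answer on
 * PowerPC and Sparc (both big-endian), which is part of why a straight
 * recompile can suffice.  On x86 the output differs. */
#include <stdio.h>

int main(void)
{
    unsigned int word = 0x11223344;
    unsigned char *bytes = (unsigned char *)&word;

    /* Big-endian machines (PowerPC, Sparc) print 11 22 33 44;
     * little-endian machines (x86) print 44 33 22 11. */
    printf("%02x %02x %02x %02x\n", bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}
```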

In particular, the graphics libraries delivered with the Cell PC will likely focus on GNOME-KDE compatibility to make porting applications easy, but Apple would have to redo its interface-management libraries at the machine level — something it would not face in a move to Sparc, where PostScript-based display support is well established.

In addition, existing Sun research on compiler automation suggests that multithreaded CPUs like Niagara and Rock could automatically convert PowerPC and even MC68000 executables to Sparc on the fly — meaning that “fat binaries” would not be needed, although a Mac OS 9.0 compatibility box would probably still make sense.
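
Sun has published no details of any such translator, so the following C sketch is strictly a toy: an interpreter-style dispatch loop over an invented miniature instruction set, showing the general shape of on-the-fly conversion. A real translator would map registers and emit native Sparc code rather than re-decoding every instruction, and would also have to handle condition codes and self-modifying code.

```c
/* dbt.c -- toy sketch of on-the-fly executable conversion (NOT Sun's
 * design; the opcodes and structure are invented for illustration). */
#include <stdio.h>

enum { OP_LOAD, OP_ADD, OP_STORE, OP_HALT };   /* imaginary guest ISA */

struct insn { int op, dst, src; };

/* Dispatch one guest instruction against a guest register file; a real
 * translator would cache and run native code for hot paths instead. */
static int execute(const struct insn *i, int *reg, int *mem)
{
    switch (i->op) {
    case OP_LOAD:  reg[i->dst] = mem[i->src];   return 1;
    case OP_ADD:   reg[i->dst] += reg[i->src];  return 1;
    case OP_STORE: mem[i->dst] = reg[i->src];   return 1;
    default:                                    return 0;  /* halt */
    }
}

int main(void)
{
    struct insn program[] = {          /* guest code: mem[2] = mem[0] + mem[1] */
        { OP_LOAD, 0, 0 }, { OP_LOAD, 1, 1 },
        { OP_ADD,  0, 1 }, { OP_STORE, 2, 0 }, { OP_HALT, 0, 0 },
    };
    int reg[4] = { 0 }, mem[4] = { 40, 2, 0, 0 };

    for (struct insn *pc = program; execute(pc, reg, mem); pc++)
        ;                              /* fetch-decode-dispatch loop */
    printf("mem[2] = %d\n", mem[2]);   /* prints 42 */
    return 0;
}
```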

Sun’s Throughput-Computing Initiative

People I greatly respect tell me that Sun’s throughput-computing direction isn’t suited to workstations like the Mac where single-process execution times are critical to the user experience. The more I study this question, the more I disagree. Fundamentally this issue is about software, not hardware.

Consider, for example, what could be achieved with the shared-memory access and eight-way parallelism inherent in the lightweight process model Sun is building into products like Niagara. This won’t matter for applications like Microsoft Word, where the 1.2-GHz nominal rate is far faster than users need anyway, but can make a big difference on jobs like code compilation, JVM operations or image manipulation in something like Adobe’s Photoshop.
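
To make that concrete, here is a simplified C sketch of the sort of data parallelism involved: an image-brightness pass split across eight POSIX threads, one per hardware thread on a hypothetical eight-way part. The image dimensions and brightness adjustment are invented for the example.

```c
/* par_bright.c -- split an image-brightness pass across eight threads,
 * one per hardware thread on a hypothetical eight-way CPU.
 * Compile with: cc -o par_bright par_bright.c -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define THREADS 8
#define WIDTH   4096
#define HEIGHT  4096

static unsigned char *image;            /* shared pixel buffer */

struct slice { int first_row, last_row; };

static void *brighten(void *arg)
{
    struct slice *s = arg;
    for (int y = s->first_row; y < s->last_row; y++)
        for (int x = 0; x < WIDTH; x++) {
            int v = image[y * WIDTH + x] + 32;        /* add brightness */
            image[y * WIDTH + x] = v > 255 ? 255 : v; /* clamp to 8 bits */
        }
    return NULL;
}

int main(void)
{
    image = calloc((size_t)WIDTH * HEIGHT, 1);
    if (!image) return 1;

    pthread_t tid[THREADS];
    struct slice s[THREADS];
    int rows = HEIGHT / THREADS;

    for (int i = 0; i < THREADS; i++) {   /* one row-band per thread */
        s[i].first_row = i * rows;
        s[i].last_row  = (i == THREADS - 1) ? HEIGHT : (i + 1) * rows;
        pthread_create(&tid[i], NULL, brighten, &s[i]);
    }
    for (int i = 0; i < THREADS; i++)
        pthread_join(tid[i], NULL);

    printf("done: %d rows in %d slices\n", HEIGHT, THREADS);
    free(image);
    return 0;
}
```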

Given the much higher cache hit rates and better I/O capabilities offered by the relatively low cycle rate, theory suggests that truly compute-intensive workstation software could hit somewhat better than 85 percent system use. Run the arithmetic: eight threads at 1.2 GHz and 85 percent utilization deliver roughly 8 x 1.2 x 0.85, or about 8.2 GHz of effective throughput, meaning that an eight-way Niagara-1 running at 1.2 GHz would easily outperform a Pentium 4 at 8 GHz, whose much lower cache hit rate would leave a large fraction of those nominal cycles stalled on memory.

Making that happen would, of course, take serious software change, but if the preprocessors now thought to be under development at Sun work as expected, most of that would be automated — thereby greatly reducing the barriers to effective CPU use on the Mac for PC-oriented developers like Adobe.


Paul Murphy, a LinuxInsider columnist, wrote and published The Unix Guide to Defenestration. Murphy is a 20-year veteran of the IT consulting industry, specializing in Unix and Unix-related management issues.


8 Comments

    • "not enough management bandwidth" means that they’re operating at or past the limit of their management’s abilities.
      Sun, in particular, is strong at the very top and strong at the bottom but has a horribly weak middle management layer.
      For either of these companies to take on another workload would be to over extend tremendously.

  • Makes me think about the SNAPPLE jokes that were popular a few years back when the fortunes of Apple and SUN were reversed. The author neglected to mention that NeXT did a version of OpenStep for the Sparc architecture and that SUN’s internal politics kept it from ever being supported in a meaningful way. The joke of an interface known as CDE was promoted instead of OpenStep. Then there’s the Macintosh Application Environment for Solaris… Had Steve Jobs not killed the Mac OS clones, Apple would now be an interesting footnote in the Wiki history of PCs.
    If memory serves, IBM also had a license to sell the PC version of OpenStep/NeXTSTEP and never managed to do anything with it; that OS just added to IBM’s sad mid-’90s OS story: OS/2, Windows, AIX, Taligent, Pink, etc.
    Anyone remember "Project Star Trek?"
    The author should also have mentioned that SUN is having quite a few problems with its CPU roadmap and had to kill its next-generation Ultra chip in favor of one developed by Fujitsu. SUN’s move to LINUX is one of the brighter things it’s done in the last year. The faster SUN can move toward standardizing on the Gnome desktop for Linux, Sparc Solaris and PC Solaris, the better.
    The idea of a SUNRAY for Apple’s OSX is interesting, but what’s the point, especially if you can use Apple Remote Desktop in conjunction with VNC on just about any box you might have lying around?
    Apple and IBM have had an interesting relationship that’s spanned close to 15 years. Remember things like OpenDoc, PReP/CHRP hardware and the like? IBM’s chip games are not news to anyone at Apple/NeXT. On the other side of the fence, even Intel has had to pull a U-turn with respect to clock speeds and CPU designs. At least there wasn’t mention of AMD and its 64-bit wonder.
    That IBM and Apple didn’t hit 3 GHz is not that big of a deal. This kind of schedule slip is SOP in the world of tech – and is not a sign that the sky is falling or that Apple needs to find yet another savior. Odds are the R&D going into IBM’s new Cell chip will power iBooks in 2006. Any takers on this particular bet?
    PS: Apple was doing fat binaries when Microsoft was still trying to transition folks from 16- to 32-bit applications. At the same time, NeXT was doing quad-fat binaries…

    • This is sort of true. IBM invented the POWER architecture, but Motorola modified the architecture to simplify implementation in silicon. The IBM POWER line of CPUs were multi-chip monsters that wouldn’t work in a desktop machine (I think some of them still are). So Motorola trimmed a few features and instructions, making the architecture implementable on a single chip of the day.
      They later added the great AltiVec instructions and unit to the chip when silicon technology allowed them the transistor budget.
      So I think it is fair to say that Motorola created the PowerPC architecture. It might have been clearer to add "based on IBM’s POWER architecture."

  • The Cell isn’t the only way to go; Power-architecture server chips will fuel much, if not most, of PowerPC technology development.
    As it stands, watt for watt, a PowerPC is a much more attractive chip. Many embedded developers have realised this, cluster builders are catching on, and large hosting companies will soon follow suit.
    If the Linux community ever gets out of its myopic, x86-centric worldview, PowerPC will be the obvious choice for efficient, reliable computing.
    John Klos

  • It may be that Apple doesn’t WANT to use 3-GHz G5s because it’s already busy engineering products that use the Cell processor. Apple already has an OS portable across two similar processors (and Intel, if rumors are to be believed). They’re not that far past a multiyear drought brought on by their reliance on Motorola. Is it really that farfetched to believe there might be a contingency plan in case their star processor ends up on a milk carton?

  • But…
    You make a big point about how Cell won’t support AltiVec but completely leave out any mention of how AltiVec would (or wouldn’t) fit with Sun CPUs and products like Niagara.

  • "Motorola’s microprocessor division — now Freescale Semiconductor — is mostly out of the picture, despite having created the PowerPC architecture."
    .
    Actually IBM created the PowerPC architecture, which was based on its POWER line of CPUs, and brought it to Apple for consideration. Apple liking what they saw then got Motorola involved since they were long time partners. Thus the AIM alliance was formed.
    IBM has the trademark on the name "PowerPC"
