
Apple Nearly Moved to SPARC 257

Posted by CmdrTaco
from the we-that-woulda-been-interesting dept.
taskforce writes "Sun Microsystems co-founder Bill Joy claims that Apple nearly moved to Sun's SPARC chips instead of IBM's PPC platform back in the mid-1990s. From the article: 'We got very close to having Apple use Sparc. That almost happened,' Joy said at a panel discussion featuring reminiscences by Sun's four co-founders at the Computer History Museum. An account of his entire presentation can be found on CNET."
This discussion has been archived. No new comments can be posted.

  • Dupe (Score:2, Informative)

    by FTL (112112)
    Here we go again [slashdot.org].
  • Fine dining (Score:5, Funny)

    by Mostly a lurker (634878) on Sunday January 22, 2006 @10:26AM (#14532358)
    Sun Microsystems CEO Scott McNealy had to be wined and dined at a Silicon Valley McDonald's before he gave up his reluctance to help launch the workstation maker in 1982.
    History does not record which of the many fine vintages available at McDonald's was selected on this illustrious occasion.
    • by Neo-Rio-101 (700494) on Sunday January 22, 2006 @10:42AM (#14532421)
      So that explains the "Happy Meal Ethernet" driver for Linux on SPARC systems....

      • Re:Fine dining (Score:5, Informative)

        by Megane (129182) on Sunday January 22, 2006 @11:12AM (#14532538) Homepage
        Actually, "Happy Meal Ethernet" is the 100Mbit sequel to the 10Mbit "Big Mac Ethernet".

        static void happy_meal_tcvr_write(struct happy_meal *hp,
                                          unsigned long tregs, int reg,
                                          unsigned short value)
        {
                int tries = TCVR_WRITE_TRIES;

                ASD(("happy_meal_tcvr_write: reg=0x%02x value=%04x\n", reg,
                     value));

                /* Welcome to Sun Microsystems, can I take your order please? */
                if (!(hp->happy_flags & HFLAG_FENABLE))
                        return happy_meal_bb_write(hp, tregs, reg, value);

                /* Would you like fries with that? */
                hme_write32(hp, tregs + TCVR_FRAME,
                            (FRAME_WRITE | (hp->paddr ...
        • I always thought Big Mac Ethernet was the first 100Mbit chipset (Lance Ethernet being their 10Mbit option), which was only available as an add-on SBus card and never became widely popular (whereas the Happy Meal shipped as standard on newer UltraSPARC systems).
      • For Linux?

        It's the name of the driver on Solaris too, man hme. Maybe, just maybe, it's the Sun codename for the hardware concerned.
    • can I suggest the McMerlot, July was a truly remarkable vintage!
  • by TTL0 (546351) on Sunday January 22, 2006 @10:34AM (#14532385)
    Almost only counts in horseshoes and hand grenades.
  • by CyricZ (887944) on Sunday January 22, 2006 @10:34AM (#14532388)
    For serious workstations, the SPARC was basically the dominant chip at the time. Indeed, it was at the top of its game. Even now we still see it used for mission-critical and high-performance tasks. So it's really no wonder that Apple would have considered such a switch.

    • Weren't most heavy workstations in Apple's primary domain (graphics, video, design) based on MIPS at that time, in Silicon Graphics' workstations?
      • You might get that impression, because SGI was the best-known maker of graphics workstations, and for a long time they used MIPS chips exclusively. But I don't recall any other GWS company using MIPS at all. I think HP's PA-RISC [wikipedia.org] was as big as MIPS for a while. Plus POWER and SPARC had significant market shares.

        I don't know the hard figures, but I think SGI never really dominated the GWS market. They just got most of the press because a ton of Hollywood SFX were generated on their MIPS workstations.

        The f

    • SPARC was the oldest, weakest, most primitive processor design at the time. Truly horrible. It was only successful to the extent Sun was successful. Even the dead Moto 88K was better.
      • Heh, the rumor was that Sun, before opening up the SPARC CPU, took it to Motorola and asked if Moto would build it. Moto looked at it and said, no, but we can make you something better, and showed Sun the 88k. Unfortunately, the 88k died, but at least its bus lived on in the first PPC processors.
      • The SPARCv7 architecture was the pinnacle of 32-bit RISC ISA elegance. As a compiler writer, I found it to be far and away the best ISA target for code generation. The tagged integer instructions made it a dream for higher-level language compilers, the register windows made function calls cheap, and the orthogonality level of the ISA was far and away superior to MIPS and 88k, which were ad hoc and low level in comparison. Moreover, the upward path through superscalar pipelining, branch prediction, etc. -
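        For anyone who never targeted it: the tagged-add instructions (taddcc, and the trapping taddcctv) performed the add while simultaneously checking the low two bits of both operands, which is exactly the small-integer check a Lisp or Smalltalk runtime otherwise emits by hand. A rough C sketch of the idea follows; it is illustrative only, the names and tag layout are made up, and none of it is actual SPARC code.

        ```c
        #include <stdint.h>

        /* Small integers ("fixnums") carry a 00 tag in their low two bits;
         * pointers and other boxed objects do not. SPARC's taddcc folded this
         * operand check into the add itself. All names here are hypothetical. */
        #define TAG_MASK ((intptr_t)0x3)

        static int tagged_add(intptr_t a, intptr_t b, intptr_t *sum)
        {
            if ((a | b) & TAG_MASK)   /* some operand is not a fixnum...    */
                return -1;            /* ...so take the runtime slow path   */
            *sum = a + b;             /* both tags are 00: result is tagged */
            return 0;
        }
        ```

        Because fixnums are stored pre-shifted (value << 2), the plain machine add of two tagged values already yields the correctly tagged sum; the whole fast path is one test plus one add, which taddcc fused into a single instruction.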
        • How would a deal with Apple have changed anything?

          They'd still have a tiny market share and be moving to
          Intel because 1.2GHz SPARCs aren't that impressive.

          Granted if you throw 1024 or so of them together they're pretty slick...

          Hell throw 1024 of anything together and they're pretty slick (except maybe 6502's or something like that! Wow fastest Apple II array ever!)
    • Sparc probably would have been the wrong way to go as there never was a good portable version of the chips. Sure, there were some SparcBooks but IIRC, they never worked out well. PPC worked out a lot better as a general purpose architecture for longer.
    • As I remember from running simulations at the time, the top of the game was IBM's Power, with DEC's Alpha close behind, then followed by SGI/MIPS. The MIPS R8000 was the first hard-core contender, but they were already having trouble keeping up with DEC/IBM. Sparc was only on anyone's radar because it was cheap (relatively), and all of the software written for the previous 3/XX generation could still run.

      We used to be really psyched that the PowerMacs had a version of IBM's workstation chip inside (PPC 601/
    • Apple has always gone with trying to be the most creative. The Apple Newton was a good example. Great idea, creative for the era. In many ways, it wasn't Apple's fault that the technology of the time simply wasn't up to what they wanted with the Newton. The same is true for the original Macs and why they were not simply an evolution of their very popular Apple II line.

      So, if innovative and creative are the name of the game, I'm actually surprised the Sparc was considered at all. The MIPS would have been a b

    • for the fact that for 1993 (one year prior to Apple's introduction of the PowerPC Macs, so contemporaneous with the decision process for the move to a RISC CPU), market shares were:

      HP PA-RISC 31%
      Sun SPARC 25%
      MIPS 20%
      IBM RS/6000 12% (the architecture upon which the PowerPC was based)

      I don't think most people consider "middle of the pack" to be a dominant position.

    • SPARC was only dominant in market share, and it was only dominant because the dominant vendor was using it. It's never been a high performance chip compared with other RISCs, and rarely even when compared with Intel
  • by Bloater (12932) on Sunday January 22, 2006 @10:37AM (#14532402) Homepage Journal
    Sun Microsystems Boasts "We're not quite good enough."
    • They're still at it.. Remember Jonathan Schwartz trying to convince Apple to save Solaris from the ash heap of history a couple of months back?

      -jcr
  • Good decision (Score:5, Informative)

    by lordholm (649770) on Sunday January 22, 2006 @10:39AM (#14532409) Homepage
    The SPARC V8 is quite clean and nice to work with, and is fairly sane, with the exception of the tagged arithmetic, the trap model, the visible pipeline, and the missing standard interface to the MMU (yes, I know of the ref-mmu).

    On the other hand, the SPARC V9 is a horrendous monster that is just plain scary when dealing with supervisor-level code. IMHO the PPC64 is much nicer than the V9 in many aspects.

    But, on the other hand, the PPC has gone out-of-order, while the SPARC has stayed in-order, making the CPU hell to compile code for.

    Architecturally, the PPC is a slight bit nicer than the SPARC, and as a plus, the PPC64 was defined at exactly the same time as the PPC32, and thus they (PPC32 & 64) are very similar.

    In my eyes, it was a good decision to go with the PPC.
    • Perhaps, but in business the question isn't necessarily about the technology of today. It's the old story of big fish, little pond vs. little fish, big pond.

      If Apple had gone with the Sparc chip/platform, could Apple have influenced SPARC International more than they did with the Motorola/IBM/Apple setup? Interesting question. I know that one of the reasons cited for Apple moving from the PPC arch is that IBM has only been interested in investing in the POWER arch, all but ignoring the consumer-grade PPC system
  • by CyricZ (887944) on Sunday January 22, 2006 @10:42AM (#14532422)
    There has always been much speculation as to what the computing landscape would look like today had the non-Intel vendors worked together to produce a superior chip.

    Indeed, the combined talents of the Alpha crew from DEC, with the PA-RISC developers from HP, the SPARC group from Sun, those behind the MIPS at SGI and MIPS Technologies, and the PPC people from IBM, for instance, could have come up with a CPU that completely trumped what Intel was putting out at the time.

    • by dfghjk (711126) on Sunday January 22, 2006 @10:53AM (#14532457)
      There's no reason to believe this at all. Adding more of the same level of engineering expertise doesn't necessarily get you anywhere. Besides, it could be argued that all the processor groups you mentioned produced processors that were better than Intel offered at the time. They simply weren't enough better to make a difference. Odds are that combining the efforts of the competition would have made them all fail even sooner. HP joined Intel for IA64 and look where that got them.
      • Indeed. In particular, it's unclear where the transistor budget would have come from for any sort of major innovation that none of the companies had thought of individually, yet could come up with as a group.

        Now the market backing of 10 major companies working in concert might have made something sell, but technologically, there's just no reason to believe there would have been any magic.
        Adding more of the same level of engineering expertise doesn't necessarily get you anywhere

        It has nothing to do with engineering expertise -- it's fab investment. None of the RISC companies could afford to keep up with Intel in process technology, and the enormous cost of designing and producing your own chip basically sunk DEC and SGI.

        I agree that it was probably politically infeasible, but the RISC crowd invested far too much money into niche CPUs and it killed all of them (except IBM).
        • Don't know that I agree. Binary compatibility counted for a lot and it still does.

          If the investment in fabs was the issue then the problem must have been cost, yet IBM with the PPC architecture specifically targeted lower cost with its designs and it was ultimately unsuccessful. IBM has always been competitive with Intel regarding fab technology, yet PPC ultimately failed on the desktop. Would the joining of all the other vendors have changed that? I don't think so.
          • None of the RISC CPUs were binary compatible with the CPUs previously used by their sponsors.

            As for PowerPC, the 970 wasn't that competitive with Intel's process, the chips were low-volume and ran very hot. But mainly Apple did it to themselves by creating a low-growth business model that wasn't attractive to CPU vendors.

            > Would the joining of all the other vendors have changed that?

            No probably not, because Intel largely caught up. But it might have kept the RISC workstation/lowend server market alive.
    • Back in the day.. (Score:3, Interesting)

      by turgid (580780)

      ...late '80s/very early '90s there was something called the ACE Consortium [byte.com].

      This was formed by the likes of DEC, Compaq and SCO at the time when IBM had not long brought out the dreadfully underpowered, expensive and proprietary PS/2 line of personal computers running the pathetic MS-DOS [wikipedia.org] and mediocre OS/2 [wikipedia.org].

      Most people were running PeeCees which were essentially 16-bit with a single-user, single-tasking operating system running on dreadfully slow CISC (8086, 80286, 80386) processors with pitifully small amo

      • Re:Back in the day.. (Score:4, Informative)

        by dfghjk (711126) on Sunday January 22, 2006 @11:33AM (#14532637)
        First, when the ACE Consortium was formed Intel was selling 486's. The 486 was not dreadfully slow compared to RISC competition although its floating point lagged. Intel PC's also had far more memory than you suggest and Windows (even OS/2) was well established at that time. The competition for ACE was not 16-bit, single-tasking low performance DOS machines like you say.

        Second, Microsoft was a member of ACE and Windows NT was built to run on ACE machines as well as PC's. For those who wonder why NT/2000/XP boots the way it does, the reason is that PC's run special boot code that emulates an ACE bootstrap environment. It could be argued that ACE was the preferred platform for NT and MS internally built ACE workstations as reference platforms. Much of the NT code was developed on them. The ACE machines inside MS had EISA busses and used PC peripherals. ACE even included a spec that allowed ACE machines to use PC expansion cards with modified option ROMS.

        It's conceivable that ACE intended the workstations to run a UNIX derivative but I doubt MS saw it that way. It's far more likely, had ACE succeeded, that its main platform would have been Windows. ACE machines, despite their MIPS processors, ran DOS applications! Sorry, ACE wasn't a UNIX workstation, it was a PC replacement that ran MS OS'es in addition to UNIX variants.

        Now, about ARC---the PowerPC version of ACE...
        • The 486 was a dog compared to a 30MHz SPARC, both in integer, and especially floating-point.

          When the Pentium 100 came out, it was almost as fast as RISC processors that had come out 5 years previously.

          Yes, in 1990, some people were buying PCs with 2MB RAM, but most people were still running machines with MSDOS with 1MB of RAM at most.

          x86 processors finally caught up with RISC workstations when the AMD Athlon came out. The Pentium III nearly caught up, but not quite. We're now into 1999. That's a good dec

          • Re:Back in the day.. (Score:4, Informative)

            by dfghjk (711126) on Sunday January 22, 2006 @12:45PM (#14533043)
            ACE was formed in 1991. At that time the 386 was dead and the 486 was available at 50MHz. The Pentium was introduced in 1993. It was superscalar and offered integer performance similar to the best RISC processors of that day, certainly faster than RISC from 1988! Such comparisons are silly. Incidentally, the first Pentiums were 60 and 66 MHz. It took another cycle and a different pinout before the Pentium went 100 MHz.

            As for GUI's, OS/2 1.1 (the first with a GUI) was introduced in 88. Windows/386, the first fully virtual, fully preemptive version of Windows was introduced in 87. Windows 3.0 in 90 and 3.1 in 92. Windows was not the exclusive desktop at the time but it was certainly established. Compelling Windows apps that forced the PC world over to Windows started appearing around 92, not much after the creation of ACE. Word started dominating WP beginning in 92. There was still a lot of DOS use but the PC world was hardly as you describe (slow 286's and 386's).

            Memory cost the same for PC's as it did for workstations. If anything, PC's with their compact instruction sets and small footprint OS'es made better use of memory than workstations did. Don't know what your point is there. Workstations had more memory typically but they needed it and their prices reflected it. Business ppl didn't buy workstations.

            Claiming that the Athlon was substantially better than the P3 is silly. It had a slight IPC advantage and eventually a clockrate advantage, but the two designs offered similar performance. While the Athlon was introduced in 99, 8 years after ACE (not a good decade), the first of the P6 designs that the P3 descends from was introduced in 95, only 4 years after ACE.

            AMD's Opterons aren't Alphas and it's a good thing. Alphas sucked and the P4 looks much more like an Alpha than the Opterons do. DEC had good engineers and contributed nicely to the PC world, most notably with their PCI work, not their processor designs. They gave us PCI bridges and a nice ethernet controller.

            If we are comparing experience with these machines, my first PC was an IBM 8088 machine. I started work for a major PC manufacturer in 87. I did OS/2 1.0 and 1.1 work, UNIX systems programming and NT driver development. I did firmware programming work for that company starting in 88. My first machine there was a 10MHz 286 and I used every type of processor and most speed grades since then. I had extensive experience with the 960, Alpha, and PPC 603 in addition to all the Intel x86 processors. I worked some with the i860, the Moto 88K and the Itanium. I'm quite familiar with the history of the processors, OS'es and ACE. You can have your Slackware 486 machine. I got rid of mine long ago and wouldn't be bragging if I was still using one.
            • > There was still a lot of DOS use but the PC world was hardly as you describe (slow 286's and 386's).

              Turgid is right about this one. In 1991, there were still AT and even XT machines on the market, and 1MB would have been the stock RAM. The early 486 machines cost well over $5000 and it took a couple years for the chip to filter down to regular machines.
            • AMD's Opterons aren't Alphas and it's a good thing. Alphas sucked and the P4 looks much more like an Alpha than the Opterons do.

              Um, what? You're right that the Opterons aren't Alpha, but wrong in saying they are closer to the P4. The Opteron's FPU pipeline looks incredibly like the 21264's, right down to the dual asymmetric pipes. The load/store setup and L1 cache setup of the two architectures are very similar, down to the 64KB/64KB L1 cache sizes. The 21264's integer pipeline is much shorter than the O
              • Alphas in their day had far deeper pipelines than competitive processors with the specific goal of running at much higher clock rates, much as the P4 is today.

                Comparing design specifics for processors of different eras with hugely different transistor counts and process technologies doesn't make a lot of sense to me. If you care to believe the Opteron is the evolution of the Alpha then more power to you.
          • x86 processors finally caught up with RISC workstations when the AMD Athlon came out.

            Actually, the original 200MHz Pentium Pro had higher SPEC scores than any RISC chip available at the time (although there was a revised Alpha a couple months later). The PPro pretty much put the final nail in the coffin of ACE/ARC/PReP and all the other RISC PC efforts, and marked the beginning of the end of the RISC workstation. By the time the Athlon came out, everyone had already pretty much given up except Sun.
            • By the time Athlon came out, everyone had already pretty much given up except Sun.

              Yes, *sigh*. They all climbed aboard the itanic, which is still promising jam tomorrow.

              Don't read too much into CPU SPEC scores. Yes, the PPro was impressive when it came out, but as with all x86 Intel CPUs, the memory and I/O bandwidth was a problem. They were never intended for anything other than PeeCees. Server and workstation applications were an afterthought, as anyone with any experience of SMP systems will tell you.

              • "PeeCee"

                It's hard to take you seriously when you say that.

                "They were never intended for anything other than PeeCees. Server and workstation applications were an afterthought, as anyone with any experience of SMP systems will tell you."

                Actually, they optimized for 32-bit protected mode performance at the expense of real-mode performance. It hurt them because there was still a lot of real-mode software being used, but they were fine for UNIXes and NT.
                • Actually, they optimized for 32-bit protected mode performance at the expense of real-mode performance.

                  So? What the hell has that got to do with SMP servers? RISC chips had been "optimised for 32-bit" for a decade already, and most had long moved on to 64-bit.

                  It hurt them because there was still a lot of real-mode software being used, but they were fine for UNIXes and NT.

                  They sucked and still do for Unix on SMP boxen.

                  In a previous life, I used to build software (many gigabytes daily) on 64-bit SMP RI

                    "So? What the hell has that got to do with SMP servers? RISC chips had been "optimised for 32-bit" for a decade already, and most had long moved on to 64-bit."

                    Merely pointing out an example of where they picked server/workstation performance over traditional PC performance. Pentium Pros were adequate for 1 and 2-way systems, and they cost significantly less than the RISC systems you would have needed to beat them.

                    "They sucked and still do for Unix on SMP boxen.

                    In a previous life, I used to build software (
              • The i860 sucked as a general purpose processor and was intended as a coprocessor for PC's. Its exception handling was the worst ever devised! Don't cry for the i860, it was the i960 that truly suffered. It was Intel's original plan for 32-bit until they were compelled to do the 386.
    • I think you can basically ignore the Alpha developers: after the theft of Alpha technologies for the Pentium, and the theft of David Cutler's old work for developing NT (which David Cutler himself took illegally along with his merry band of software pirates he hired from DEC), repeating the Alpha work for a non-Intel chipset would have been playing to Intel's and Windows' strengths.

      Unless the old Alpha developers in a cooperative environment were able to sidestep old Alpha issues, or completely avoid th
      • by dfghjk (711126) on Sunday January 22, 2006 @11:44AM (#14532686)
        haha, Alpha had the grimmest, most threadbare instruction set imaginable. Its strength was its ferocious clock rates that were enabled by abnormally deep pipelines and instructions that did relatively little (no integer divide!). The characteristics that Alpha had that caused it to be so loved are the same ones that cause the P4 to be so hated: relatively poor IPC, very deep pipelines, very high clockrates, huge caches to cover its design weaknesses, and excessive power consumption. The love of Alpha was a cult. Yeah it was fast and 64-bit but it was a tremendous power hog for its generation. No need to love Alpha. No one did but DEC.

        BTW, Intel didn't steal anything from Alpha for the x86's. It owned the team at the time. Cutler didn't steal anything from DEC either. A person owns the knowledge and experience inside his head. I'm sure if there was evidence of theft it would have been dealt with. DEC was a dinosaur that wasn't showing any signs of interest in Cutler's continued work. He left to take up his projects at a company that was interested in pursuing them.
        • No need to love Alpha. No one did but DEC.

          sgi (through its cray research division at the time) used alphas in their supercomputers.

          it kicked serious butt. and they were NOT DEC, last time I checked.

          (although sgi and cray will probably go the way of DEC, sadly to say).

          ob disc: I worked at both DEC and SGI in my past.
        • by argent (18001) <peterNO@SPAMslashdot.2006.taronga.com> on Sunday January 22, 2006 @02:46PM (#14533683) Homepage Journal
          Its strength was its ferocious clock rates that were enabled by abnormally deep pipelines and instructions that did relatively little (no integer divide!).

          7 stages is not an "abnormally deep pipeline", and divide-step is absolutely conventional RISC design. The Berkeley RISC used divide-step. Sparc started out with divide-step. There really isn't a huge difference between Alpha's ISA and any other RISC, the difference is in the small details... whatever criticism you have of the Alpha, you can't in fairness leave the other RISCs out.

          Alpha also had great execution control. The memory barrier instruction (also in Power, by the way, and eventually picked up by Sparc) let the compiler control the pipeline far better than Itanium's "I can't believe it's not VLIW" design or MIPS "just guess" delayed branch. And the huge register file gave the compiler much more leeway in scheduling instructions.

          The biggest problem with the Alpha was that it jumped prematurely into 64-bit with both feet, so that even if the compiler generated 32-bit code (the -taso option) it was still moving 64-bit words around and throwing away half the result.
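          For anyone who hasn't seen divide-step: the ISA exposes one shift-and-conditional-subtract step, and the compiler strings 32 of them together (or loops) to build a full divide. The equivalent restoring-division loop, sketched here in C rather than assembly purely as an illustration:

          ```c
          #include <stdint.h>

          /* One restoring-division step per bit: shift the next dividend bit
           * into the remainder, then conditionally subtract the divisor and
           * record a quotient bit. A divide-step instruction is one iteration
           * of this loop; the compiler emits 32 of them for a full divide. */
          static uint32_t div_by_steps(uint32_t dividend, uint32_t divisor)
          {
              uint32_t rem = 0, quot = dividend;
              for (int i = 0; i < 32; i++) {
                  rem = (rem << 1) | (quot >> 31); /* shift in next bit        */
                  quot <<= 1;
                  if (rem >= divisor) {            /* the conditional subtract */
                      rem -= divisor;
                      quot |= 1;                   /* set a quotient bit       */
                  }
              }
              return quot;
          }
          ```

          (Assumes divisor != 0; a real runtime has to check that separately, which is one reason full divide instructions eventually won out.)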
        • It looks like you've never actually been paid for code you've written, or you're so young you don't remember a time before DEC's development teams had already been reaped by corporate thieves. The company that pays you for the code normally owns it: it's released under whatever copyrights they use, or that are in your contract. David Cutler was a core author of VMS: DEC-owned, copyrighted code and trade secrets that he developed in his time at DEC turned up throughout the NT core when he and his merry gang
        • haha, Alpha had the grimmest, most threadbare instruction set imaginable. Its strength was its ferocious clock rates that were enabled by abnormally deep pipelines and instructions that did relatively little (no integer divide!).

          The 21264's pipelines were actually quite shallow for an out-of-order processor. At 7 stages, it was closer to the Pentium's 5 than the Pentium Pro's 10. It was quite a bit shorter than the pipelines of any modern, comparable processor. The PPC970 has a 16-stage pipeline, the Ult
    • >>Indeed, the combined talents of the Alpha crew from DEC, with the PA-RISC
      >>developers from HP, the SPARC group from Sun, those behind the MIPS at SGI and
      >>MIPS Technologies, and the PPC people from IBM, for instance, could have come up
      >>with a CPU that completely trumped what Intel was putting out at the time.

      ROFL - that is hilarious. Can you imagine the politics in a chip like this? By the time the chip meets the requirements of everyone at these companies you would have a horrific ch
    • by poot_rootbeer (188613) on Sunday January 22, 2006 @03:16PM (#14533813)
      Indeed, the combined talents of the Alpha crew from DEC, with the PA-RISC developers from HP, the SPARC group from Sun, those behind the MIPS at SGI and MIPS Technologies, and the PPC people from IBM, for instance, could have come up with a CPU that completely trumped what Intel was putting out at the time.

      Hey, this broth isn't tasty enough! Better bring in a few more cooks...
  • ..and Apple was about to. Sun was probably an obvious partner for Apple.
    However, I think going PowerPC was by far the best choice at the time, with massive backing by almost everyone.
    My take on history is that Apple has chosen the right processor architecture at any given moment, taking into account everything that was known at the time. In hindsight everything always looks different.
    • There was a lot of speculation in the early to mid 1990s that SGI would buy Apple. SGI was doing quite well at that time, considering they had just released their very successful Indy line. Considering that both provided workstations for the same type of applications (multimedia-related, desktop publishing, and so forth), the systems from Apple could have offered a solid low-end line to complement SGI's more powerful systems.

      What could have happened is an infusion of IRIX with Mac OS. We could have seen Mac
      • It's far more likely that Mac OS would have been phased out entirely since, as you said, Apple would have been a low end line for SGI. SGI was an incredibly arrogant company and it would not have seen Mac OS as offering anything of interest to their current platforms (and that would have been right). If SGI had bought Apple, macs would have run SGI's OS'es until they ceased to be called macs, and of course they'd be out of business entirely today.
      • by jandrese (485) * <kensama@vt.edu> on Sunday January 22, 2006 @01:52PM (#14533395) Homepage Journal
        It always strikes me that if you listen to the rumors, there is ALWAYS a company gearing up to buy out Apple for one reason or another. I don't know what it is about Apple, but people really want to see it bought by some huge conglomerate for some reason.

        I doubt SGI ever had any interest in Apple. They were positioning themselves in the server market at the time and Apple had nothing to offer them.

        Of course that was back when Apple was tanking and there was speculation that everybody from SGI to Microsoft to Pepsi was going to buy them out.
      • Thank god that didn't happen.

        Out of all the UNIX systems I've used, Irix beats out HPUX and SCO, but I'd rather have seen just about anything else as the base of Mac OS X than Irix.

        And I don't know exactly what the timing was, but if SGI had a consumer OS they might not have had the same incentive to open up GL.
    • Almost everyone? Who are you talking about?

      PowerPC was not considered the best choice by anyone outside IBM, Moto and Apple. It was clear at the time that ALL other processor alternatives offered superior performance to both Intel and PPC since IBM didn't design PPC to be the fastest processors of the group, it wanted PPC to be speed competitive with Intel at far lower cost. Apple bit on that. The downside of PPC was that Motorola proved just as incompetent carrying the family forward as it was with the
      • I still think that in some ways Apple is making the wrong move again. I STILL say that within a few short years, Mac OS X will be bootable on ANY Intel machine and Apple will stop trying to fight letting it run on only Mac Intel machines. I still think Apple may make hardware, but it will always be the super high tech and absolutely gorgeous design. Microsoft or someone else will figure out how to get Windows programs running natively on Mac OS X on Intel and that will be all she wrote for Microsoft.
    • "My take on history is that Apple have chosen the right processor architecture at any given moment taking account everything that was known at the time."

      Even if PowerPC was the right choice in the early 90s, it's been the wrong one for 5+ years. IBM and Moto/Freescale don't care about desktop chips, and the time has passed when other chips can be pushed into service against AMD and Intel's best and brightest.
  • by DaveRexel (887813) on Sunday January 22, 2006 @10:47AM (#14532442) Homepage Journal
    -TFA-
    "McNealy added that he went to Steve Jobs' house to try to hammer out the user interface agreement. The Apple co-founder and CEO was "sitting under a tree, reading 'How to Make a Nuclear Bomb,'" with bare feet and wearing jeans with holes torn in the knees, McNealy said."
    ---

    From just this one anecdote one does get the feeling that Steve might have taken over Sun eventually. The disappointment expressed by Bill Joy over the failed "close encounters" with Apple does indicate that they would have followed Steve's leadership.

    On a more serious note, the clash of the raging CEO egos would not have been beneficial for either company.
    • On a more serious note, the clash of the raging CEO egos would not have been beneficial for either company.

      .. but it would have been fun to watch. It would probably make Larry Ellison look like he's been on Prozac all these years.. :)

      -'fester

  • by RetiredMidn (441788) on Sunday January 22, 2006 @10:52AM (#14532453) Homepage
    I seem to recall seeing a demo of a Mac with a Motorola 88000 RISC processor running my 68000 binary code (Lotus 1-2-3) under emulation, a predecessor to the PowerPC effort.

    Oops, I may be in violation of an NDA...

    /. sure is a good place for dredging up obscure technical memories.

    • Wasn't the "Star Trek" Intel port done at about the same time? I also have heard stories that DEC Alpha was considered. So it does sound like they looked at everything.
      • by RetiredMidn (441788) on Sunday January 22, 2006 @01:01PM (#14533105) Homepage
        Wasn't the "Star Trek" Intel port done at about the same time?

        Now that you mention it, yeah. We were given a separate presentation at Lotus about Star Trek, including a demo. (Damn, there goes another NDA.)

        To be honest, I remember thinking at the time that Star Trek wasn't really thought through. Certainly the execs at Lotus didn't get it (which says more about the execs than it says about Star Trek). DOS/Windows apps were not going to run under Star Trek (certainly not with the desired user experience). "Porting" these apps to the Mac OS APIs wasn't going to be all that easy. And converting Mac applications of the day, many of which were written in processor-dependent ways, to a new processor architecture would be much more difficult than the conversion of more modern applications today.

        It was neat technology, but it didn't solve a problem people thought they had.

        I kinda went off topic there; please don't hurt my karma.

        • > It was neat technology, but it didn't solve a problem people thought they had.

          I always had the impression that Star Trek was a dry run of their 68K emulator technology, which was a problem that needed to be solved.

          And I suppose you could argue that if they were going to switch to Intel eventually, they should have done it sooner rather than later.
          • by RetiredMidn (441788) on Sunday January 22, 2006 @03:19PM (#14533828) Homepage
            I always had the impression that Star Trek was a dry run of their 68K emulator technology

            Interesting thought, but I really don't think so. AFAIK, Star Trek was not emulation; it was the Mac OS APIs recompiled and re-hosted on a different platform. I've seen conflicting reports about how it was really implemented, but (forgive me), Cringely's [pbs.org] is the most credible, IMHO. It is possible they learned a thing or two that helped them with the PowerPC platform transition.

            And I suppose you could argue that if they were going to switch to Intel eventually, they should have done it sooner rather than later.

            Personally, I've never believed that. I worked closely with both the 680x0 and 80x86 architectures in the '80s, and, from my perspective as a user of the instruction set, I found the 68K vastly superior to work with; the only thing the Intel platform had going for it was the fact that IBM had made it a de facto standard.

            Architecturally, the Pentium started to close the gap, but the power consumption issues were pretty significant. My five-year-old fanless PowerBook G3 is still a pleasure to use over the Dell laptops my last employer supplied me with.

            IMNSHO, Apple's Intel switch wasn't inevitable, it just makes sense at the moment. And I harbor a suspicion that Apple won't necessarily stay mono-architectured. Mac OS X binaries, by design, can accommodate multiple (not just two) processor architectures. Apple will pursue the direction(s) that make the most sense as things play out over the next few years.

            • And I harbor a suspicion that Apple won't necessarily stay mono-architectured. Mac OS X binaries, by design, can accommodate multiple (not just two) processor architectures. Apple will pursue the direction(s) that make the most sense as things play out over the next few years.

              If this is the case, one wonders why Apple hasn't tried emulating something like CLR or JVM as a standard "architecture" to forestall future such changes. Obviously native code is required for some things, but at this point, a well-cra
  • I hear.... (Score:3, Funny)

    by Slashcrap (869349) on Sunday January 22, 2006 @11:20AM (#14532579)
    ...that Sun are also considering switching to Sparc for their servers. You know, if things don't work out with the Opteron they need a backup strategy.

    I kid, I kid....
  • by penguin-collective (932038) on Sunday January 22, 2006 @11:41AM (#14532670)
    Sun almost created several great desktop window systems. Sun almost set a standard for web-based application delivery with Java. Apple almost picked Sun's SPARC architecture. Sun almost set the standard for server operating systems. And then there are things that Sun achieved, briefly, and lost, like dominance of university departments.

    I leave it to others to diagnose the exact causes of Sun's repeated failures. I can say this much for myself: I won't buy another Sun product again, ever, nor will I ever trust any of Sun's promises again.
    • by Animats (122034) on Sunday January 22, 2006 @02:47PM (#14533695) Homepage
      That's very insightful.

      Someone should write a book on how Sun blew it with client-side Java. They gave the product away and spent tens of millions marketing it. In a marketing sense, they succeeded; everybody has a Java interpreter on their desktop. Yet almost nobody uses them any more. Why?

      Part of the problem is that Sun's top technical people, including Joy, never really figured out GUIs. Sun went through three bad in-house window systems before finally giving up and going with X-Windows. Then in the Java era, they went through the AWT and Swing eras, both of which combine complexity with poor performance.

      So Sun ended up as a "server company", the place SGI went after they failed to survive the transition to low-cost graphics.

      • by Decaff (42676)
        Yet almost nobody uses them any more. Why?

        You are wrong. Java client-side development is far from dead - it is growing, and at the end of last year overtook MS WinForms as the most popular client-side development platform in North America. There are even 'shrink-wrapped' commercial Java applications based on Swing that are amongst the best in their class (the financial package Moneydance is a good example).

        Then in the Java era, they went through the AWT and Swing eras, both of which combine complexity wit
        • >> Yet almost nobody uses them any more. Why?

          > You are wrong. Java client-side development is far from dead - it is growing, and at the end of last year overtook MS WinForms as the most popular client-side development platform in North America.

          Hey! You didn't count all the gazillions of mobile phones out there that all (well, >95%) run Java.

          > Swing went through years of poor performance, but .... got better. Now it is hardware accelerated.

          What you mean is that some of the drawing
          • by Decaff (42676) on Sunday January 22, 2006 @06:11PM (#14534664)
            The event handling framework is quite complex (you can do practically anything with it) and the fact that each java class behaves almost like a dynamically linked library in more static languages will keep the start-up performance forever behind.

            You might think so, but it really doesn't. Try the following: install a significant Java application like JEdit or Moneydance. Time its startup. I typically get start-up times of 3-4 seconds. That is faster than most KDE apps on the same machine!

            The memory usage hasn't shrunk since I was introduced to Java. The extra hit that comes from the VM and GC is a major pain in small applications but negligible in bigger ones.

            I don't find this. I can start up trivial Java apps in just a few megabytes, and even Swing apps like JEdit can run in 8MB. That is nothing on modern machines. As for the GC being a major pain - it can be finely tuned these days, so much so that real-time APIs can be implemented even on standard VMs.

            My impression is that performance and memory efficiency has improved significantly since Java 1.4.x.
  • by Black Parrot (19622) * on Sunday January 22, 2006 @11:59AM (#14532783)
    Isn't that kinda like "I almost got laid"?

  • They should have gone with the Cell. It's even better than Sparc.

    NOTE FOR THE SARCASM-IMPAIRED: This comment is meant as a spoof of the unavoidable Cell comments that come up in any Apple CPU discussion. The anachronism is intentional.
  • by Alon Tal (784059) on Sunday January 22, 2006 @12:40PM (#14533009)
    Most probably, the only difference today would have been that we would be reading about Apple dumping _Sun_ for Intel, rather than dumping IBM for same. Reminds me of an Isaac Asimov story called "What If-", in which a newlywed couple meets a man who owns a gadget that can show them alternate realities, if key events in their past had taken a different course. For example: Would they be married had they not accidentally met on a train ride, etc. They keep going back to different points in their past: The day they met, the date of their wedding, and of course, everything is radically different, which aggravates the wife to no end ("This marriage is just based on chance, an accident..."). Right before everything gets really ugly, the husband desperately says: "Show us what we would have been doing at this very moment, had we not met on that train", and, surprisingly, they see themselves, exactly as they are right now, sitting together, happily married.
    • Would have made a huge difference. If Apple had gone to SPARC they'd have gone out of business before Jobs got back, because SPARC is almost as bad a processor architecture as x86, and Sun doesn't have the resources of Intel to just bull through the problems by sheer force of process.
  • by FrankDrebin (238464) on Sunday January 22, 2006 @01:18PM (#14533206) Homepage
    The Macintosh line would have been replaced by the SPARtan, leading to memorable models like the iSpart.
  • by Laebshade (643478) <laebshade@gmail.com> on Sunday January 22, 2006 @02:01PM (#14533448)
    "here's a phrase that apparently the airlines simply made up: near miss. Bullshit, my friend. It's a near hit! A collision is a near miss." - Airline Announcements, George Carlin
  • Of all sad words of tongue or pen, the saddest are these: "It might have been."

    (Sorry, I couldn't think of anything Whittier).
