
Apple to Use Intel Chips? 920

Stack_13 writes "The Wall Street Journal reports that Apple will agree to use Intel chips. Neither Apple nor Intel confirms this. Interestingly, PCMag's John C. Dvorak predicted this for 2004-2005. Are even cheaper Mac Minis coming?"
This discussion has been archived. No new comments can be posted.

  • Original source? (Score:4, Informative)

    by ctr2sprt ( 574731 ) on Monday May 23, 2005 @09:30AM (#12611280)
    The WSJ reports it, but no link to the WSJ's actual story? Well, here it is [].
  • Apple Denies (Score:5, Informative)

    by nbharatvarma ( 784546 ) on Monday May 23, 2005 @09:32AM (#12611300)
    Some links I found some 30 mins ago on Google News: _denies_intel_rumour [] ing-intel-chips.html [] []

    Of course, one could argue that Apple wouldn't want this news to be leaked

  • Re:Does this mean - (Score:5, Informative)

    by Oculus Habent ( 562837 ) * <(oculus.habent) (at) (> on Monday May 23, 2005 @09:32AM (#12611303) Journal
    This could be the same tactic Dell uses with Intel... "We could go with AMD, but about those prices..."

    Cheaper because of Intel? I doubt it. Even if Apple does start using x86 - or more likely x86-64 - they would still likely use their own controller chips (Note that Apple uses a single, integrated controller rather than a north/southbridge approach) and custom boards.

    It's not impossible that Apple will switch to Intel processors. We already know they keep a copy of the OS up to date on Intel hardware, and even released Darwin x86. The problems come from all the things they would leave behind:

    Compatibility - The PowerPC architecture emulates x86 better than the other way 'round. To keep from eliminating all old software with one fell swoop, they would need to emulate PowerPC. This would cause old software to run like death.

    VMX - Much of Apple's current power comes from the AltiVec/VMX/Velocity Engine available on the G4 & G5 processors. It is what offers Apple serious performance benefits in certain applications, and makes possible many of the near/realtime capabilities in programs like iPhoto, iMovie, and even Final Cut Pro. Unless Intel tacks on a VMX unit, I don't see Apple switching.

    Maybe a dual-processor system: one PowerPC and one Intel? Not likely, I grant you.
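The vector-unit point above can be illustrated with a toy model: a SIMD unit like AltiVec applies one instruction to several data lanes at once, which is why it shines on the pixel- and sample-crunching loops in programs like iPhoto and Final Cut. A minimal Python sketch (the 4-lane width and the grouping are illustrative only; nothing here models real AltiVec semantics):

```python
def simd_add(a, b, width=4):
    """Toy SIMD: process `width` lanes per 'instruction' instead of one."""
    out = []
    instructions = 0
    for i in range(0, len(a), width):
        # One vector instruction handles a whole group of lanes at once.
        out.extend(x + y for x, y in zip(a[i:i + width], b[i:i + width]))
        instructions += 1
    return out, instructions

pixels_a = list(range(16))
pixels_b = [1] * 16
result, n_instr = simd_add(pixels_a, pixels_b)
# 16 element-wise adds issue as only 4 vector "instructions" at width 4.
print(result[:4], n_instr)
```

The win is exactly this issue-count reduction: the same element-wise work retires in a quarter of the instructions, which is where the near-realtime filter performance comes from.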
  • Re:rumor? (Score:1, Informative)

    by leomekenkamp ( 566309 ) on Monday May 23, 2005 @09:36AM (#12611341)
    coup de tat (is that how you spell it??)

    coup d'état
  • by Anonymous Coward on Monday May 23, 2005 @09:41AM (#12611397)
    El Reg has thought about it some more [].
  • by stratjakt ( 596332 ) on Monday May 23, 2005 @09:41AM (#12611398) Journal
    Flash memory, network controllers, raid controllers, memory controllers, etc and so on.

    Maybe they just plan to use an Intel PIC to control the little blinky power light. Or one of Intel's DSPs to make the iPod not sound like a '70s-era 8-track.

    It's highly unlikely this means OS/X for the PC. Apple would never give up their fiefdom.
  • by dick johnson ( 660154 ) on Monday May 23, 2005 @09:43AM (#12611407)
    The codebase for the OS is already done. Nextstep, currently known as OS X, has always run on Intel chips.

    As for the developer base, it's a trivial matter to port native (cocoa, not carbon) applications to Intel from PowerPC

    You would need to recompile it. But that is not a huge matter for someone who has programmed their application in cocoa.
  • by syntax ( 2932 ) on Monday May 23, 2005 @09:47AM (#12611430) Homepage
    Things may have changed from the original base stations, but back then the stations were running a 200MHz AMD x86 chip.
  • Re:Does this mean - (Score:3, Informative)

    by Oculus Habent ( 562837 ) * <(oculus.habent) (at) (> on Monday May 23, 2005 @09:52AM (#12611470) Journal
    Fat binaries are great for the transition phase, but don't do anything for old apps. If I just started my DTP company and plunked down $7500 for various software packages, I would not be happy to hear that none of it will run on the next Mac I buy. Just as they emulated the 680x0 on the PowerPC - which is still available under Classic - they would need to emulate the PowerPC under the x86.
  • Cell (Score:3, Informative)

    by mikeee ( 137160 ) on Monday May 23, 2005 @09:53AM (#12611480)
    I wouldn't think so. You could, but Cell is a weird chip; not so much like a G6 as a G4 with 7 (!) super-VMX coprocessors.

    With proper coding, performance for something like Photoshop might be fabulous, but I doubt it would be efficient for general-purpose computing.
  • Re:Does this mean - (Score:5, Informative)

    by Halo1 ( 136547 ) on Monday May 23, 2005 @09:57AM (#12611500)
    The gigabit ethernet chip in my old G4/400 in fact is an Intel chip.
  • Re:Does this mean - (Score:3, Informative)

    by Kyro ( 302315 ) on Monday May 23, 2005 @10:04AM (#12611564)
    The latest powerbooks actually have a USB interface for keyboard and mouse. however you are correct, the ibooks, and all previous powerbooks used ADB.
  • Re:Does this mean - (Score:2, Informative)

    by Halo1 ( 136547 ) on Monday May 23, 2005 @10:07AM (#12611596)
    No, it's a chip soldered on the motherboard.
  • 100% of the gaming world will be using PowerPC or PowerPC derivatives in the next year - year and a half -

    You couldn't be more wrong. 100% of the next-gen console gaming world will be using PowerPC in the next year and a half; however, everyone who plays handhelds (Nintendo DS, Game Boy Advance, Sony PSP, etc.) and all us PC gamers (of which there are considerable numbers) will still be using chips other than PowerPC/Cell

  • Re:Dvorak (Score:5, Informative)

    by Quiet_Desperation ( 858215 ) on Monday May 23, 2005 @10:09AM (#12611606)
    That's an even funnier quote when you consider the mouse had been invented 16 years earlier at SRI. The mouse was hardly "experimental" in 1984, and was already in use in CAD workstations. Dvorak is another one of those dumbass media figures that people inexplicably listen to. Good gig if you can get it.
  • by stevel ( 64802 ) * on Monday May 23, 2005 @10:15AM (#12611671) Homepage

    Intel now owns the largest stake in ARM (bought from Apple)

    Sigh - how soon they forget. Intel's ARM technology was acquired from DEC, not Apple. It was DEC's StrongARM that was "bought" by Intel as part of the settlement of the patent infringement lawsuit brought by DEC. Not just the rights to the processor and the architecture license, but the Hudson, MA chip fab that made the processors.

    As far as I know, Apple has had no involvement in ARM.

  • Re:Does this mean - (Score:1, Informative)

    by Anonymous Coward on Monday May 23, 2005 @10:21AM (#12611722)
    PPC and 68k are entirely different architectures with different instruction sets.
  • by Utopia ( 149375 ) on Monday May 23, 2005 @10:27AM (#12611765)
    The PS3 CPU makes sense when you have lots of floating-point operations.
    It isn't any faster when it comes to general-purpose functions.
    Plus, by the time it goes into mass production next year, the standard CPUs will be much faster.
  • XServe uses Intel (Score:1, Informative)

    by Anonymous Coward on Monday May 23, 2005 @10:35AM (#12611824)
    The Xserve currently uses an Intel RAID controller chip. I can see them using more non-CPU Intel chips.
  • Re:Does this mean - (Score:4, Informative)

    by NMerriam ( 15122 ) <> on Monday May 23, 2005 @10:35AM (#12611835) Homepage
    Whether you call it an "upgrade" or a change is semantics. The PPC and 680x0 had different instruction sets and required completely different programming at the system level -- that Apple built 680x0 system-level software emulation (and later on-the-fly dynamic recompilation) and made it completely transparent to the end-user was a pretty significant feat.

    Not to mention, the PowerPC processor is the only edge Macs have left on PC hardware. If Apple goes x86 the Mac will simply be an overpriced PC running a pretty gui on top of BSD.

    Whatever. When Ferrari builds a car with an automatic transmission, it's just an overpriced Taurus with a pretty body kit, right?

    After all, what kind of crazy computer USER would buy a computer based on the USER interface? Everybody knows your decision should be based on whether the system is little-endian or big-endian!
  • Re:Does this mean - (Score:3, Informative)

    by jdgeorge ( 18767 ) on Monday May 23, 2005 @11:09AM (#12612157)
    Exactly. Bottom line is that open architecture is superior to closed architecture. x86 is actually a shitty architecture when you get right down to it, but at least it is open.

    Err... The x86 chip architecture is NOT open. If you want to produce a clone of the chip, you have to license the technology from Intel. For example, AMD has a license [] which allows it to produce microprocessors that are compatible with the Intel x86 CPUs.

    PPC chips are at least as "open" as Intel's:

    Apple, IBM, and Motorola collaborated in creating the PowerPC architecture. Apple was buying all its PowerPC parts from Motorola until they started having yield problems that prevented them from producing the high-performance parts that Apple required. Freescale Semiconductor, Motorola's former chip division, was spun off from Motorola and is currently selling PowerPC processors for embedded applications. []
  • Re:Why cheaper!? (Score:2, Informative)

    by mnemonic_ ( 164550 ) <> on Monday May 23, 2005 @11:16AM (#12612220) Homepage Journal
    Actually, the Pentium Ms run far cooler than any G4 or PPC970 processor, and Intel plans to extend the M line further into the desktop market (there are already some desktops using the Pentium M). The P4s probably run cooler than the PPC970s; after all, P4 mobile chips have been available for several years now, whereas PPC970s have been desktop-only for 2 years. Intel's processors are cheaper because they're more massively produced. Compare the case of the APG-77 radar, whose price plummeted (by millions of dollars) after common circuitry in the T/R modules became used in consumer wireless ethernet gear (market expansion from .5M chips to 9M+). The 3-5% of the market Apple holds is nothing compared to the dominance of x86, especially when you factor in the vicious competition between AMD and Intel that drives prices down further. Competition that does not exist in the cathedral Apple world, unfortunately.
  • Re:Does this mean - (Score:1, Informative)

    by Anonymous Coward on Monday May 23, 2005 @11:35AM (#12612408)
    This might be semi-offtopic, but you make it sound like you haven't yet seen this [].
  • Re:Original source? (Score:3, Informative)

    by hab136 ( 30884 ) on Monday May 23, 2005 @11:37AM (#12612428) Journal
    The WSJ reports it, but no link to the WSJ's actual story? Well, here it is.

    Requires a WSJ subscription, that's why.

  • by Momoru ( 837801 ) on Monday May 23, 2005 @11:42AM (#12612480) Homepage Journal
    With a wireless card (no wires where my office is) on a 1.2GHz Mac Mini, the price is $578. My current computer, a Dell with a 3GHz P4, with monitor, keyboard, optical mouse, etc. was only $500. So the Mini is still pretty expensive in comparison. But that aside, $580 might be an impulse buy for you, but $600 is an awful lot for some of us just to "try something out". And with those barebones specs of the Mini, if I liked it, I would probably have to buy a "real Mac" a couple of months later anyways. My point is that I already have some bucks invested in hardware... not having to scrap all that would be a serious selling point for picking up an x86 version of Tiger.
  • Re:Why cheaper!? (Score:3, Informative)

    by skribble ( 98873 ) on Monday May 23, 2005 @12:11PM (#12612801) Homepage
    Provided Apple does its job right, this isn't too complex. Essentially it's a rebuild for a Cocoa app (NeXT/OpenStep ran on multiple platforms). Remember, with the exception of very low-level programming (i.e., some drivers, and almost no productivity apps), you write code for an OS, not a hardware platform. If the OS is ported properly along with the supporting low-level libraries, porting an app is trivial.
  • Re:OMG! (Score:2, Informative)

    by nullhero ( 2983 ) on Monday May 23, 2005 @12:13PM (#12612829) Journal
    What are the chances of seeing an x86 port of OSX??

    There is; it's called OpenDarwin. It runs on x86. The only thing is, what makes Mac OS X Mac OS X is its window server and OS libraries - Core Foundation, Cocoa, etc. Check out the site for Darwin: []. They have a link to the x86 port.

  • Apple uses ARM in the iPod already and hired a person to work on GCC for ARM. (This was all based on public knowledge.)
  • by Been on TV ( 886187 ) on Monday May 23, 2005 @12:22PM (#12612949) Homepage
    Having worked in Apple product management, and having been recruited to and worked for IBM at a time when they wanted to put Mac OS on IBM PPC hardware (gosh, that's got to be a decade ago...), I would think that the article in the Post is a sign that Jobs has just about had it with IBM internal politics.

    There are parts of IBM that do not give diddly about Apple - actually, a lot of IBMers talk about Apple as if they wished it off the surface of the earth. There are of course folks in Microelectronics and some Linux-on-POWER guys who care, but the rest...

    If IBM really cared about getting more PPC-based systems into the market, they'd have IBM Software make sure Apple was properly supported, both on the client side and on the Xserve with their server software products. You don't see much of that.

    The PowerPlay(TM) that is going on inside of IBM, and what is probably seriously hampering Apple these days, is that IBM is trying by all means to protect its high-end server business, in which the POWER processors (and dual-core) play an all-important role in both the iSeries (former AS/400) and pSeries (former RS/6000). These are low-volume, very high-margin products that sustain two ecosystems in IBM, with revenues and margins that far exceed any business IBM will do with Apple this century.

    With Apple eagerly wanting to use dual-core PPC chips not only in dual-processor systems (customers cheering on the side), but possibly bringing both 4- and 8-processor systems - both workstations and servers - to the market, IBM's Enterprise Division will increasingly see this as a threat to the i and pSeries servers. Apple will, with a completely different price point on servers in particular, significantly threaten to alter the margins IBM has on the low-end to midrange i and pSeries systems.

    IBM got a very rude awakening seeing Apple Xserve hardware finding its way into some of the world's fastest supercomputer configurations at a fraction of the cost of IBM hardware as it was priced at the time.

    Now, with a possible 4- and 8-processor Xserve out the door, the rocking of IBM's boat would still continue. Why? Well, IBM is to a larger and larger degree touting the iSeries' and pSeries' ability to run Linux software, both natively in AIX and OS/400 and in logical partitions, as one of its major features and selling points. Guess what? Apple can run Linux software too.

    The relative ease with which Linux software can be made to run natively under Mac OS X, combined with much lower priced hardware, will make IBM's iSeries and pSeries customers increasingly ask why they shouldn't switch if all they want is the ability to run Linux software on PPC.
    Such a scenario could put tremendous strain on the Enterprise Division's margins. Which is why there are forces internally in IBM who do not want Apple to have the powerful PPC chips Steve Jobs needs to transform Apple into a success in the enterprise market. They will probably try to put all kinds of restrictions on what systems he can build with those chips, if he gets them.

    Intel does not play these games. Which is why a processor switch may be attractive for Jobs.

    Of course there are all kinds of problems with the existing installed base in terms of binary compatibility of software, but they have lived through this before without too many problems. Apple knows how to handle a processor switch from before and I think the OS will handle another chip excellently given the long time Apple has had to prepare for this.

    Now for the market? As another guy so excellently put it in a post: 95% of the market does not have the problem of binary compatibility of software under Mac OS X.

  • Re:Does this mean - (Score:0, Informative)

    by Anonymous Coward on Monday May 23, 2005 @12:30PM (#12613033)
    The gigabit ethernet chip in my old G4/400 in fact is an Intel chip.

    I doubt it. I believe the gigabit ethernet controllers for the PowerMac G4 clockwork models were Broadcom parts, and they usually sat under a heatsink.

    If your G4/400 had an Intel chip, it was actually an Intel-branded DEC 21154 for 10/100 fast ethernet. Later on, the Apple 10/100 finally worked and the external controller was no longer needed.

  • by mihalis ( 28146 ) on Monday May 23, 2005 @12:32PM (#12613064) Homepage

    I contacted a reporter who filed this story for one of the on-line financial news websites. He confirmed the WSJ did actually say Intel CPUs for Apple PCs.

  • Re:unbelievable (Score:3, Informative)

    by AKAImBatman ( 238306 ) * <> on Monday May 23, 2005 @12:52PM (#12613322) Homepage Journal
    That's a bit of a blanket statement...

    No, it's not.

    As someone who lives in Photoshop, InDesign and Illustrator for 45+ hours a week, I wouldn't complain if my filters were twice as fast. Not every cpu-intensive process is bus- or disk-limited.

    Your Photoshop filter probably *is* bus- and memory-limited. For one, your processor needs to move the data in and out of the processor quickly. If the bus is saturated, then it can't feed the CPU any more than it already has, which means the CPU can't run any faster than what the bus can feed it. And if you run out of memory, you'll have to start accessing the disk - which is several orders of magnitude slower than memory. Improved disk performance (e.g. Serial ATA) can help hide some of that swapping.

    I regularly peg my dual 2GHz G5s. If it were dual 4GHz, I would peg that too, just not as often.

    Pegging a CPU is not the same as getting 100% of the CPU's performance. If the bus can't feed the CPU, it's going to start running wait states which look like normal CPU usage to the OS. Most people would be amazed to know that the time spent in wait states can easily be half or more of a CPU's processing time. i.e. It's all about how efficiently the bus can service the CPU.

    Trust me. Every time you get a new machine, most of the performance increase comes from systems other than the CPU.
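The bus-versus-CPU argument above can be made concrete with a back-of-the-envelope "roofline" calculation: attainable throughput is capped either by the CPU's peak rate or by how fast the bus can deliver bytes, whichever is lower. A Python sketch with made-up numbers (none of these figures are measurements of any real G5 or Pentium):

```python
def attainable_flops(peak_flops, bus_bytes_per_s, flops_per_byte):
    """Roofline model: performance is capped by compute or memory bandwidth."""
    memory_bound_flops = bus_bytes_per_s * flops_per_byte
    return min(peak_flops, memory_bound_flops)

# Illustrative numbers only (not real hardware specs).
peak = 8e9   # 8 GFLOP/s of raw CPU throughput
bus = 4e9    # 4 GB/s of memory bandwidth

# A filter doing 0.5 FLOPs per byte touched is bus-bound:
print(attainable_flops(peak, bus, 0.5))  # capped at 2e9 by the bus
# A compute-heavy kernel at 4 FLOPs per byte hits the CPU ceiling:
print(attainable_flops(peak, bus, 4.0))  # capped at 8e9 by the CPU
```

In the first case the CPU sits in wait states for three quarters of its cycles while looking 100% busy to the OS, which is exactly the "pegged but not fully utilized" situation described above.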
  • Re:Dvorak (Score:3, Informative)

    by tgibbs ( 83782 ) on Monday May 23, 2005 @01:24PM (#12613835)
    I forgot about MacDraw, but it wasn't out when Dvorak said that anyway.

    MacDraw was released with the Mac; it was derived from LisaDraw.

    What Dvorak said was of course trivially true, but abysmally stupid--there was of course "no evidence" at the time that people wanted to use mice, because when he wrote that statement, hardly anybody except for a handful of Xerox Star and Apple Lisa users had even touched one.
  • Re:Does this mean - (Score:3, Informative)

    by shotfeel ( 235240 ) on Monday May 23, 2005 @01:32PM (#12614016)
    At one time, IBM was developing a PPC that also included an x86 core. I believe it was the PPC 615.
  • by wkcole ( 644783 ) on Monday May 23, 2005 @02:01PM (#12614514)
    As far as I know, Apple has had no involvement in ARM.

    As you appear to be completely ignorant of ARM's origins, why bother making such a statement?

    See [] and scroll down to where ARM describes their origin as an independent company. ARM was initially a joint venture of Apple, Acorn, and VLSI. Selling off their shares of ARM was part of what kept Apple alive in the late 90's.

  • Re:Does this mean - (Score:3, Informative)

    by LionMage ( 318500 ) on Monday May 23, 2005 @02:37PM (#12615110) Homepage
    This "JVMs are ass-slow" argument gets repeated a lot. There have been multiple benchmarks published demonstrating definitively that this is not the case -- in fact, Java can outperform C and C++ in many applications. Here [] is one reference for you to ponder. Of course, it's easier for most people to parrot hearsay rather than actually rely on empirical evidence to support their opinions.

    Note that I'm not saying JVMs are inherently superior for all, or even most, applications. I'm just saying that things are a little more complicated than your one-line zinger would imply. I'm also saying that virtual machines can be very performant, with the right code-translation and caching strategies. (See my other post in another branch of this thread.)
  • Re:Does this mean - (Score:3, Informative)

    by RzUpAnmsCwrds ( 262647 ) on Monday May 23, 2005 @05:48PM (#12617552)

    The "Java is faster than C/C++" argument argues this

    No, it argues that a JIT compiler can make optimizations at runtime that could not be made at compile time. Which, indeed, is true. Certain specific algorithms are faster in Java than in C++.

    Of course, we're talking about technicalities here. I'll agree that C is generally faster than Java. But Java isn't necessarily slow - take the T-Mobile Sidekick, which has a quasi-Java OS and Java applications, yet maintains excellent performance on a 50MHz ARM CPU.
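The runtime-optimization point above can be sketched in miniature: a JIT that observes a value at run time can fold it into the generated code, e.g. turning a multiply by a power of two into a shift. A toy Python illustration using a closure in place of real code generation (this is not how any actual JVM is implemented):

```python
def make_specialized_scaler(factor):
    """Specialize a 'multiply by factor' routine once factor is known.

    A JIT can fold a runtime constant into the generated code; here a
    closure plays that role, with strength reduction for powers of two.
    """
    if factor and (factor & (factor - 1)) == 0:  # power of two?
        shift = factor.bit_length() - 1
        return lambda x: x << shift  # the multiply becomes a shift
    return lambda x: x * factor      # generic fallback

scale_by_8 = make_specialized_scaler(8)
scale_by_3 = make_specialized_scaler(3)
print(scale_by_8(5), scale_by_3(5))  # 40 15
```

A static compiler could only do this if `factor` were a compile-time constant; the JIT gets to do it for whatever value actually shows up in the hot path.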
  • Re:Mod parent wrong (Score:3, Informative)

    by dcam ( 615646 ) <david@uberconcept.TEAcom minus caffeine> on Monday May 23, 2005 @07:09PM (#12618381) Homepage
    Under no circumstances actually open the Pynchon novel. You will be scarred for life.

    I made the mistake of reading Gravity's Rainbow on the recommendation of an (at the time) girlfriend. In my entire life I do not believe I have come across a more useless and unpleasant way to spend my time.
  • Re:Does this mean - (Score:3, Informative)

    by Trillan ( 597339 ) on Monday May 23, 2005 @08:10PM (#12618895) Homepage Journal

    No, they are not. The PPC does not support m68k code.

    m68k support was done through a software emulator in the Mac OS, similar to Virtual PC emulating Windows on the Mac, or something like MAME emulating arcade games. The difference is that Apple did such a good job on the emulator that people like you didn't realize it existed. :)
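A software emulator of the kind described above is, at heart, a fetch-decode-execute loop. A toy Python interpreter for an invented two-register machine shows the shape (the opcodes here are made up for illustration and bear no resemblance to real 68k encodings):

```python
def run(program):
    """Interpret a made-up ISA where each instruction is (opcode, operand)."""
    regs = {"d0": 0, "d1": 0}
    pc = 0
    while pc < len(program):
        op, arg = program[pc]      # fetch the next instruction
        if op == "movei":          # decode + execute
            regs["d0"] = arg       # load immediate into d0
        elif op == "add":
            regs["d1"] += regs["d0"] + arg
        elif op == "halt":
            break
        pc += 1                    # advance the program counter
    return regs

# d0 = 7; d1 += d0 + 3  ->  d1 == 10
print(run([("movei", 7), ("add", 3), ("halt", 0)]))
```

Apple's real emulator went further, later recompiling hot 68k code into native PPC on the fly rather than interpreting one instruction at a time, which is why the transition felt transparent.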

  • FX!32 (Score:2, Informative)

    by falconwolf ( 725481 ) <<falconsoaring_2000> <at> <>> on Monday May 23, 2005 @10:38PM (#12619826)

    Digital did this with their FX!32 emulator. They could run Windows binaries at pretty decent speeds on NT Alpha

    Not even close. I got an Alpha several years ago, set up as a dual boot with NT and Linux, but I've hardly even used NT because I wasn't able to install that many apps, even using FX!32. The only commercial software I was able to install was Borland C++ Powerbuilder. I got the Alpha because it was said you could install almost any app that ran on NT, but I came to find out later that only "well behaved" apps will install. Now, what "well behaved" means, I don't know.

  • hmm.. (Score:3, Informative)

    by Cryptnotic ( 154382 ) * on Tuesday May 24, 2005 @04:02AM (#12621326)
    You weren't paying attention either. NP-Complete problems are definitely not unsolvable algorithmically. They are problems for which there exists a solution solvable by a non-deterministic machine (a fictional machine which can be in any number of states at a time) in a polynomial amount of time (i.e., O(N^i), where i is some constant integer). NP-Complete problems are also such that every problem in NP can be mapped onto them. The problem with these, though, is that they usually require exponential or even factorial time and/or space to solve. Sometimes these solutions are "try every possible solution, pick the best one". It's a definite algorithm, but as the problem size grows, it becomes impractical.

    The fundamental question in complexity theory is "Does P equal NP?". (Recall that P is the set of problems which can be solved by a deterministic machine (more like a real machine) in polynomial time.) NP-Complete is interesting because if a polynomial-time algorithm were found for any NP-Complete problem, all of the problems in NP would be solvable in polynomial time on real machines. Last time I checked, the answer to this question was unknown (no one has been able to prove that P and NP are different). Obviously, P is a subset of NP, but is it a strict subset, or could the two be equal?

    So, anyway, P and NP both consist of algorithmically solvable problems. For an example of an unsolvable problem, look up the "halting problem". The problem basically is, "given an input Turing machine A and input I, decide whether or not the machine, if run, would ever halt (finish)." It's generally impossible (other than in a few simple cases) to figure out without running or emulating machine A. That problem is unsolvable.
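The "try every possible solution" approach above can be made concrete with subset-sum, a classic NP-complete problem: the brute-force algorithm is perfectly well-defined, it just examines up to 2^n subsets. A Python sketch:

```python
from itertools import combinations

def subset_sum(nums, target):
    """Brute force: check subsets in increasing size order.

    A definite algorithm, but the candidate space doubles with
    every element added -- up to 2^n subsets in the worst case.
    """
    checked = 0
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            checked += 1
            if sum(combo) == target:
                return combo, checked
    return None, checked

hit, tried = subset_sum([3, 9, 8, 4, 5, 7], 15)
print(hit, tried)  # a subset summing to 15, found well within 2^6 checks
```

At 6 elements this is instant; at 60 elements the same algorithm would need up to 2^60 checks, which is the exponential blowup that makes brute force impractical even though it is a perfectly valid algorithm.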
