Apple to Use Intel Chips? (920 comments)

Stack_13 writes "The Wall Street Journal reports that Apple will agree to use Intel chips. Neither Apple nor Intel has confirmed this. Interestingly, PCMag's John C. Dvorak predicted this for 2004-2005. Are even cheaper Mac minis coming?"
  • Does this mean - (Score:3, Interesting)

    by thewldisntenuff ( 778302 ) on Monday May 23, 2005 @09:26AM (#12611230) Homepage
    We'll see Mac OS X on x86 anytime soon?
  • AMD? (Score:2, Interesting)

    by InsideTheAsylum ( 836659 ) on Monday May 23, 2005 @09:28AM (#12611244)
    Wouldn't using AMD be even cheaper..?
  • by antifoidulus ( 807088 ) on Monday May 23, 2005 @09:30AM (#12611270) Homepage Journal
    All it says is that "Apple will use Intel chips." It doesn't state what kind of chips, but it does repeat itself over and over again. Maybe Apple will use Intel chips in an embedded device; maybe they are considering bringing back the Mac/PC hybrid. There is really no "meat" to this story, but we can all speculate anyway.
  • Re:Does this mean - (Score:2, Interesting)

    by taskforce ( 866056 ) on Monday May 23, 2005 @09:30AM (#12611277) Homepage
    It was rumoured a while back that Apple had an internal x86 build of OS X. Given OS X's FreeBSD roots, it's not unlikely that such a build was part of the development cycle, at least for the early versions. (I don't see why they would continue to develop the later versions on x86 if they weren't planning this all along.)
  • by Anonymous Coward on Monday May 23, 2005 @09:32AM (#12611301)
    The Register already has an analysis of this: http://www.theregister.co.uk/2005/05/23/apple_intel/ [theregister.co.uk]

    The conclusions are: Apple already uses a lot of non-PowerPC chips (iPod, AirPort base stations), so these talks may well have nothing to do with Macs. Also, it could be a scare tactic to make IBM a bit more eager as a chip supplier.
  • by TempusMagus ( 723668 ) * on Monday May 23, 2005 @09:33AM (#12611305) Homepage Journal
    Well, for one, it would make the whole confusing comparison of clock speeds across platforms go away. It would also make it easier to emulate Windows software and port games. However, the new IBM PPC chips seem to kick all sorts of major ass. Why give that up? I'm betting anything this is for iPod chips.
  • Re:Why move now? (Score:2, Interesting)

    by NextGaurd ( 844638 ) on Monday May 23, 2005 @09:34AM (#12611318)
    It would be really ironic - Microsoft's Xbox 2 goes PowerPC but Apple goes x86? Something just doesn't feel right.
  • Re:Does this mean - (Score:5, Interesting)

    by Anonymous Coward on Monday May 23, 2005 @09:34AM (#12611320)
    The thing that sets Apple apart from all other companies in this area is that they aren't just a hardware company or a software company. They are both. Most people buy the hardware because of the excellent software they offer on top. It's the combined experience that makes their hardware stand above the rest.
  • Re:Apple Chips (Score:2, Interesting)

    by turgid ( 580780 ) on Monday May 23, 2005 @09:34AM (#12611327) Journal
    Apple doesn't use Motorola PowerPC chips any more and hasn't for years. It gets them from IBM.

    Intel chips run hot, offer poor performance at a given clock frequency, and the 64-bit, AMD64-like models are in short supply.

    No, this is nonsense and a hoax. If Apple were going to switch to the x86 architecture, it would make far more sense technically and economically to choose Opteron/Athlon 64.

    Nothing to see here folks. Move along please.

  • Re:Does this mean - (Score:5, Interesting)

    by /ASCII ( 86998 ) on Monday May 23, 2005 @09:41AM (#12611385) Homepage
    My guess is they really are planning on using Intel chips - just not processors. Remember, Intel produces wireless chips, Flash memory, Ethernet chips, and Salt and Vinegar chips.
  • Summary of issues (Score:5, Interesting)

    by G4from128k ( 686170 ) on Monday May 23, 2005 @09:41AM (#12611396)
    Here's why this is not that likely:
    1. It's just Apple trying to get better terms/service from IBM (think Dell's "talks" with AMD)
    2. It will be the death of Apple's hardware division
    3. Apple will have a hard time supporting the myriad boards, chipsets, and peripherals of PCs
    4. Piracy/sharing (pick your preferred new-speak term) will mean a revenue-less expansion of the install base
    That said, Apple has made some strange moves in the past. If PC users could just buy OS X for x86 at $99, they might give the Mac a try. It wouldn't take a high conversion rate for OS software profits to easily replace hardware profits. I'd bet that Apple makes nearly as much profit on a sale of Tiger as it does on the sale of its lower-end machines.
  • unbelievable (Score:2, Interesting)

    by cg0def ( 845906 ) on Monday May 23, 2005 @09:43AM (#12611404)
    It is amazing how many people still believe that PPC is vastly superior to x86 (and yes, it is x86 and NOT 386). What exactly do you think a PPC can do that an x86 cannot? It is not that hard to imagine the Mac switching over to silicon provided by Intel. The IBM CPUs cost them a bundle, and Intel has the capacity to produce huge quantities of CPUs while IBM doesn't. Also, IBM, being so huge and going in so many directions, does nothing to promote the CPUs that it makes. On the other hand, Intel does a great deal of advertising, and this would help the Mac a lot. Not to mention that an x86-compatible Mac OS X would bring a boatload of new customers to Apple and probably double the value of their shares, if not triple it. So moving over to x86, especially now that x86 is moving towards multi-core CPUs, is a great idea, and I am all the way behind Apple if this turns out to be more than a rumor. Hey, I'll be one of the first to buy the x86 version of Mac OS X.
  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Monday May 23, 2005 @09:43AM (#12611410)
    Comment removed based on user account deletion
  • Re:rumor? (Score:2, Interesting)

    by Batzerto ( 543710 ) on Monday May 23, 2005 @09:44AM (#12611412)
    This would be GREAT news for Microsoft. No more whining that they are a monopoly that leaves consumers nowhere else to turn.
  • Re:rumor? (Score:5, Interesting)

    by cgenman ( 325138 ) on Monday May 23, 2005 @09:46AM (#12611420) Homepage
    The WSJ does have an excellent reputation, but remember what it says: "chips." Nowhere does it say x86. This could be an agreement for Intel to get into the PPC business, which would be a great supplier coup for Apple, or it could be an agreement to switch to cheaper Intel wireless networking chips. Maybe Intel will build Apple's ROMs. There are a lot more chips in a computer than the main processor, and nowhere does it say they're thinking about switching suppliers for the main CPU or changing the base architecture.

    And maybe they won't be used at all. The WSJ says they are in talks that "could" lead to using Intel chips. It's known that at least one version of Apple's OS was up and running on an x86 chip, in the same way that Microsoft had Windows up and running on a PPC architecture. It's also known that Apple talks a lot.

    I'd say the chance of a complete platform shift is slight, as backwards compatibility from x86 to PPC would be a nightmare. But Intel supplying PPC chips to Apple, after the years of languishing Apple went through before IBM could deliver a G5? That's a lot more likely.
  • Never Happen (Score:2, Interesting)

    by kaltekar ( 464545 ) <kaltekar AT gmail DOT com> on Monday May 23, 2005 @09:50AM (#12611449) Homepage
    'Analysts' have been predicting this for years, saying it's cheaper, more flexible, etc. Apple has too much invested in PPC; if Apple switched to x86, EVERY program on the current platform that is optimized for PPC would have to be reworked to run on x86. And Apple has too much of a performance advantage with the G5.
  • Weird timing (Score:3, Interesting)

    by Per Abrahamsen ( 1397 ) on Monday May 23, 2005 @09:52AM (#12611469) Homepage
    With IBM CPUs powering both the new Xbox and the PlayStation, one would imagine that volume production for cheap Macs would be possible. Is there any reason you couldn't use an Xbox 360 CPU in a Mac?
  • Re:Does this mean - (Score:2, Interesting)

    by Klivian ( 850755 ) on Monday May 23, 2005 @09:54AM (#12611484)
    >Cheaper because of Intel? I doubt it.
    Exactly. The big price difference between Apple and generic x86 hardware is not mainly caused by the price of the processor. The difference lies in commodity hardware versus Apple's custom boards, and in the competition around commodity parts, where price is one of the driving factors. Since Apple has no competition on its hardware, it can choose a much more comfortable price margin. Given that Apple is a hardware company, any possible shift to x86 will not bring significantly lower prices, as Apple still will not open the platform to clone makers.

    Besides, Apple has already changed processor architectures once, so they have experience with the process involved.
  • Re:Apple Chips (Score:1, Interesting)

    by Anonymous Coward on Monday May 23, 2005 @09:56AM (#12611495)
    > Apple doesn't use Motorola PowerPC chips

    Wrong. All G4 chips are from Motorola/Freescale.

    > Intel chips are hot, offer poor performance at a given clock frequency

    Wrong again. The Intel Pentium M runs much cooler than a G4 chip and just as fast as a G5, clock for clock.
  • It's not about CPUs (Score:4, Interesting)

    by CptSkippy ( 793400 ) on Monday May 23, 2005 @09:58AM (#12611513)
    Who said this had anything to do with CPUs?

    Intel = Flash Memory God

    iShuffle = Flash Memory MP3 Player
  • by Frobozz0 ( 247160 ) on Monday May 23, 2005 @10:15AM (#12611672)
    *ahem* bullsh*t.

    There's a saying about hell freezing over and _something_ to do with Steve Jobs using Intel CPUs.... hmmmm.

    I think these people misinterpreted the evidence for CPUs. Apple uses Intel chips in their computers NOW... just not CPUs. And for good reason:

    1) Intel chips are NOT cheaper. Any difference is negligible.
    2) They don't run faster (AMD keeps pace, but Intel doesn't).
    3) They'd have to recompile every app made for one architecture to run on another.
    4) They run hotter.
    5) Steve doesn't like Intel CPUs.
    6) Steve doesn't want to piss off Microsoft by being THAT aggressive in their turf.
  • Re:Does this mean - (Score:3, Interesting)

    by chasingporsches ( 659844 ) on Monday May 23, 2005 @10:20AM (#12611711)
    nope. you might see intel making powerpc though. if apple ports mac os x to x86, that's cool and all, but none of the applications will work, and then they'll go back to the mess of the 68k-to-powerpc switchover, where you have 2 versions of every product. considering powerpc is working so well for apple, and steve jobs just said he's happy with the powerpc architecture, this story seems bogus. not worth apple's effort.
  • Re:Does this mean - (Score:1, Interesting)

    by Anonymous Coward on Monday May 23, 2005 @10:23AM (#12611738)
    USB and USB 2.0 are also covered by Intel patents. There's lots of Intel-built or Intel-licensed stuff in a typical Mac.
  • Re:Does this mean - (Score:5, Interesting)

    by chasingporsches ( 659844 ) on Monday May 23, 2005 @10:23AM (#12611741)
    or Intel XScale processors for a video ipod...
  • Re:Dvorak (Score:3, Interesting)

    by tgibbs ( 83782 ) on Monday May 23, 2005 @10:36AM (#12611842)
    > That's true. There was no evidence that people wanted to use mice. The Mac sold quite poorly early on -- it wasn't until people wrote software that really showed what a mouse could do that it caught on.

    Nonsense. The Mac came with software that showed what a mouse can do -- MacWrite, MacPaint. Microsoft Excel was available almost immediately. There is very little that a mouse does on modern computers that was not demonstrated in the software included with the very first Macintosh. The Macintosh took a while to catch on because people were locked into legacy DOS software, and because people needed to be convinced that the features offered by the Macintosh -- GUI, mouse control, fully bitmapped screen with software fonts -- really were so valuable that they justified the performance hit. Most other computers of the time, with low-resolution screens driven by character generators, were hardware-incapable of animating a cursor (there were attempts to implement mouse pointers on character-mapped displays, but they were horribly ugly and jumpy and unpleasant to use).
  • Re:Does this mean - (Score:3, Interesting)

    by quelrods ( 521005 ) <`quel' `at' `quelrod.net'> on Monday May 23, 2005 @10:37AM (#12611848) Homepage
    By closed architectures, are you referring to the PPC? This is not produced by Apple but by IBM, and the architecture docs are quite good (IBM will mail you a hardcopy set of the books for free). In fact, AMD's and Intel's x86 docs are on par with IBM's for the PPC. I think you meant to say that some of their peripherals are closed products (like their wireless).
  • > Currently all of Intel's stuff runs hotter, so Apple would have to work significantly harder at heat dissipation issues in all but their tower designs.

    That's a design issue, not a manufacturing issue.

    And what, pray tell, do you expect them to do with little-endian issues, backwards compatibility, and all those little details?

    It's commonly known that Apple keeps a version of OS X for Intel current and ready to go if they should ever have to switch because of supply problems (which has always been a real threat).

    The biggest reason they don't switch is because Apple likes incompatibility where they can do it -- it locks people in. If Apple used the Intel architecture, it would be a lot easier to run their software on less expensive hardware.

  • Not for Macs.... (Score:3, Interesting)

    by SJ ( 13711 ) on Monday May 23, 2005 @10:45AM (#12611939)
    XServe RAID already uses an Intel IO chip.

    Airport Base Stations use (or at least they used to use) a 486.

    iPod probably has (or will have) some sort of ARM chip in it.

    The XNU Kernel has the ability to assign certain types of tasks to certain types of CPU. There is no reason why a Mac could not use both a PPC and an x86 in the same box.

    Intel make kick-arse network chips.

    Who said anything about these going into a Mac? (New product?)
  • by Eccles ( 932 ) on Monday May 23, 2005 @10:51AM (#12611989) Journal
    Intel could be arranging to supply Apple with non-CPU chips.

    Intel could conceivably be arranging to be a source for G5 or G5-compatible chips. As one of the world's largest chip manufacturers, they've got lots of resources and access to technology. Chip manufacturing is also more central to Intel's business than it is to IBM's.

    The OS X on x86 path seems the least likely to me. App writers don't want to support what is essentially another platform, nor does Apple.
  • by el_womble ( 779715 ) on Monday May 23, 2005 @11:13AM (#12612189) Homepage

    I can't imagine why Apple would want to move towards x86 hardware, but there are many reasons why I can see Apple and Intel having a lot to talk about.

    • Intel make a lot of chips. Apple and IBM, in comparison, do not, but that doesn't mean Apple doesn't want to. Intel could become a licensed manufacturer and pick up the slack if volumes get too much for IBM to handle (in the wake of the PS3 and Xbox 360).
    • Intel know a lot about 90nm technology. They have several patents that would no doubt make IBM's life a lot easier when it comes to making a G5 that works in a laptop (without sterilizing the user) and pushing the G5 beyond the 3GHz barrier.
    • Intel make other technologies that Apple would be interested in, WiMAX being the most obvious.
    • Intel have the potential to be great innovators. They're reaching the limits of what they can achieve with x86, because Microsoft are unlikely to want to support a new architecture anytime soon. Apple could offer them an opportunity to try something new, and maybe make the next big thing in processors (if they don't already have it up their sleeves).
    • I could even imagine a 'G6' or similar with an x86 instruction decoder. We all know that x86 instructions are internally reduced to RISC-like micro-operations; why not bolt a decoder onto the front of a G5 and remove the software emulation in Virtual PC? (OK, this is scraping the barrel.)
    • 'Intel Inside' sells 200 million units a year; maybe that badge could make a difference to Apple sales - even if they used a different instruction set.
  • Re:Does this mean - (Score:1, Interesting)

    by Anonymous Coward on Monday May 23, 2005 @11:14AM (#12612194)
    Another thing Apple could use is Intel's fab plants to manufacture their chip designs - Intel has more fabs than anyone else (although much lower "quality"/technology than AMD's Dresden plant or IBM's Fishkill one).

    This could mean cheaper chips (at least from Apple's point of view).
  • Re:rumor? (Score:3, Interesting)

    by timster ( 32400 ) on Monday May 23, 2005 @11:16AM (#12612216)
    I'm certainly wondering about the prospects for an Intel PowerPC chip. Maybe the CPU geeks out there can tell me how feasible it would be for Intel to develop a microcode layer (or other relatively simple modification) that would let a version of the Pentium M run the PowerPC instruction set. Possibly this could be the chip for the next PowerBook, if indeed the G5 turns out to be too big and too hot to make it into a portable, like, ever. A PowerPentium, as unholy as it sounds, could be a great notebook chip.
  • NOPE (Score:2, Interesting)

    by kaiwai ( 765866 ) on Monday May 23, 2005 @11:31AM (#12612362)
    I'd say these are the most likely scenarios, as outlined by some:

    1) An XScale (ARM-based) processor for the iPod, possibly to be able to power the H.264 videos that they'll be selling soon - music videos first, then working up to movies once the studios feel comfortable with the idea - I'm sure Steve will win them over with his charm.

    2) Another supplier for the wireless chipset. Currently Broadcom supplies the wireless chipset for Apple's AirPort Extreme; coupled with the use of Intel NICs and the push to lower costs with the Mac mini, you might just find that Apple tries to negotiate a better deal if the computer chipset + wireless + NIC were all Intel.
  • Re:Dvorak (Score:2, Interesting)

    by tedhiltonhead ( 654502 ) on Monday May 23, 2005 @11:39AM (#12612448)
    I thought it was invented at Xerox PARC?
  • Re:rumor? (Score:2, Interesting)

    by sld126 ( 667783 ) on Monday May 23, 2005 @12:27PM (#12613002) Journal
    It's also well known that Apple already uses AMD chips:
    http://www.vonwentzel.net/ABS/Evolution/index.html [vonwentzel.net]

    Doesn't it seem more reasonable that they're upgrading their wireless systems from AMD to Intel, than fundamentally changing their core machines/processes/software? I think so.
  • Re:Does this mean - (Score:3, Interesting)

    by MarcQuadra ( 129430 ) * on Monday May 23, 2005 @12:36PM (#12613092)
    Whoa there, you're very wrong. The 68k compatibility was done entirely in software. There was no magic dust on the early PowerPC chips. The firmware of the PowerPC line had a 68k emulator in it, and the OS could supplant it with a newer emulator once it was past the boot stage.

    And nobody ever turned it off; my Dad runs HyperCard 2.1, which came out a LONG time before Apple even considered PPC, on his G4 in the Classic environment, which includes the 68k emulation as part of OS 9.

    As for what OS version finally got rid of 68k code: 8.6 still had some 68k code, and I'll bet 9.2 has a few tidbits (I'd have to dig with ResEdit, and I don't feel like it now). Many of the original 'toolbox' APIs for the Mac were hand-crafted in assembler; it was really hard to take the old Pascal blueprints and refactor everything for PPC, especially when you know that the Next Big Thing is going to obsolesce the toolbox anyway.
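
    [Ed: a toy fetch-decode-execute loop, to illustrate the "emulation done entirely in software" point above. The opcodes and registers are invented for the example; this is nothing like the real 68k emulator.]

        /* emu_sketch.c -- illustrative software CPU emulation, invented ISA. */
        #include <stdint.h>
        #include <stdio.h>

        enum { OP_HALT, OP_LOAD, OP_ADD, OP_PRINT };

        int main(void)
        {
            /* invented program: r0 = 2; r1 = 3; r0 += r1; print r0 */
            uint8_t code[] = { OP_LOAD, 0, 2, OP_LOAD, 1, 3,
                               OP_ADD, 0, 1, OP_PRINT, 0, OP_HALT };
            uint32_t reg[4] = { 0 };
            size_t pc = 0;

            for (;;) {  /* the classic fetch-decode-execute loop */
                uint8_t op = code[pc++];
                if (op == OP_HALT)
                    break;
                switch (op) {
                case OP_LOAD: {   /* reg[r] = immediate */
                    uint8_t r = code[pc++];
                    reg[r] = code[pc++];
                    break;
                }
                case OP_ADD: {    /* reg[a] += reg[b] */
                    uint8_t a = code[pc++];
                    uint8_t b = code[pc++];
                    reg[a] += reg[b];
                    break;
                }
                case OP_PRINT: {  /* show a register's value */
                    uint8_t r = code[pc++];
                    printf("r%u = %u\n", (unsigned)r, (unsigned)reg[r]);
                    break;
                }
                }
            }
            return 0;  /* prints "r0 = 5" */
        }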
  • Re:Nope (Score:4, Interesting)

    by javaxman ( 705658 ) on Monday May 23, 2005 @12:53PM (#12613342) Journal
    > Plus I *REALLY* don't see how Apple can switch architectures at this point.

    They could quite reasonably switch architectures, or even support and produce both. This was a new reality starting with OS X, and it's strange, but it's true. Pretty much every single bit of the Apple software stack could run on a different gcc-supported CPU with a simple recompile; Darwin x86 is totally doable (see the sketch at the end of this comment). Device drivers would be the biggest problem, but just switching out CPUs on a slightly different motherboard and keeping everything else the same should make writing device drivers unnecessary. What wouldn't work would be very old OS 9 applications, but the vast majority of stuff would "just work" with a recompile. The structure already exists to distribute multiple-target binaries in OS X. It's been done before, with NeXTStep; it could certainly be done again.

    > If they were to do this they would need a damn good reason, and that's what's missing: what's the *REASON*?

    Very insightful, that bit. It almost makes up for the rest of your post... what, indeed, would be the reason? Unless there's some sort of cost savings, or we're talking about a non-PC device, I don't see it. But it certainly could happen.

    Apple hardware is expensive for two reasons, one being volume, and the other being the fact that Apple actually does R&D. But R&D is not the major factor. If they could increase volume... I'm going to guess that's going to be a difference of more like $200 between Linux on Intel and Apple on Intel, and to the average user, it'll be money well spent.

    I am among those who are 100% certain that somewhere, in the bowels of 1 Infinite Loop, behind several layers of locked doors, is a PC lab with Darwin, Cocoa, and major portions of the OS X software stack running on Intel (or AMD) hardware. It may never see the light of day, but the simple fact that you *can* compile Darwin for x86 tells me it's there...
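
    [Ed: a minimal sketch of the multiple-target-binary point above. Apple's gcc driver (not stock FSF gcc) accepts repeated -arch flags and emits a "fat" Mach-O with one slice per architecture; the flags and the __ppc__/__i386__ macros below are Apple-toolchain conventions.]

        /* hello.c -- one source file, one fat binary, two CPU slices.
         *
         * Build (Apple gcc driver, with both SDKs installed):
         *   gcc -arch ppc -arch i386 -o hello hello.c
         * Inspect the result:
         *   file hello        (reports a Mach-O fat file)
         *   lipo -info hello  (lists the architectures: ppc i386)
         */
        #include <stdio.h>

        int main(void)
        {
        /* Apple's compilers predefine __ppc__ / __i386__ per slice. */
        #if defined(__ppc__)
            printf("running the PowerPC slice\n");
        #elif defined(__i386__)
            printf("running the x86 slice\n");
        #else
            printf("running some other architecture\n");
        #endif
            return 0;
        }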

  • Re:Does this mean - (Score:4, Interesting)

    by EvilTwinSkippy ( 112490 ) <yoda AT etoyoc DOT com> on Monday May 23, 2005 @01:03PM (#12613501) Homepage Journal
    (Ahem).

    I'm a computer professional. A router-programming, server-building, cable-splicing, code-hacking computer professional, if you care at all. In college I used to build embedded systems, and I've had more than my fair share of processor architecture courses.

    When I go home, I just want to turn the damn machine on and have it do what I need it to do. And that machine is an iBook. I know it has a G4 processor running at 1 GHz, and I know the memory bus and most of the I/O architecture. My next desktop is going to be a Mac, because my wife, a professional computer teacher who specializes in Windows, feels the same way.

  • by saddino ( 183491 ) on Monday May 23, 2005 @01:08PM (#12613595)
    As usual with rumors of this type (although I suppose this source is somewhat more credible than Dvorak's wild guesses), people assume that such a move (Mac hardware based around x86) means that Apple would sell a standalone version of OS X that runs on any x86 hardware. My guess: not likely.

    Apple would still be selling a closed-box solution and thus provide drivers for their platform only. OS X would still run on Macintoshes only, notwithstanding their x86 internals. A custom chipset similar to the old MacOS ROM could also help prevent tinkerers from trying to "extend" OS X to non-Apple machines. As others have mentioned, Apple doesn't want to get into the "support every x86 platform" game..."it just works" is still a motto for them.
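
    [Ed: purely illustrative sketch of the kind of chipset/ROM gate described above. The "APPL" tag and the read_platform_rom_tag() helper are invented for the example; nothing here reflects an actual Apple mechanism.]

        /* platform_check.c -- hypothetical boot-time platform gate.
         * Idea: refuse to proceed unless a vendor ROM answers with a
         * known tag, tying the OS to the vendor's hardware. */
        #include <stdio.h>
        #include <string.h>

        /* Stand-in for querying a custom chipset/ROM; a real check
         * would talk to firmware instead of returning a constant. */
        static const char *read_platform_rom_tag(void)
        {
            return "APPL"; /* pretend the ROM answered */
        }

        int main(void)
        {
            if (strcmp(read_platform_rom_tag(), "APPL") != 0) {
                fprintf(stderr, "unsupported platform, halting\n");
                return 1;
            }
            printf("platform check passed, continuing boot\n");
            return 0;
        }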
  • Re:Does this mean - (Score:3, Interesting)

    by composer777 ( 175489 ) * on Monday May 23, 2005 @01:15PM (#12613701)
    I'd be curious to see if the extra registers included in amd64 would be enough to speed up x86 emulation of PPC. How many registers does the PPC have?
  • Re:Does this mean - (Score:3, Interesting)

    by nokiator ( 781573 ) on Monday May 23, 2005 @01:36PM (#12614095) Journal
    My gut feeling is also that the subject of Apple-Intel talks is not x86 chips. There are two other, more viable options:

    1. Intel's PXA family [intel.com]. These are mid-range embedded processors targeted at applications that require very low power consumption but a decent amount of compute power. Assuming that Apple is not likely to bring the Newton back, the obvious target would be a video-iPod kind of device.

    2. To be able to run an iFlix movie store, Apple needs much more bandwidth to end customers. Even if this is a background service where customers maintain lists (similar to Netflix) and the top few items from the list are downloaded in the background, the phone and cable companies that own the last mile will not just sit back and watch Apple make money by using their precious bandwidth. At least in metropolitan areas, one way to bypass cable and DSL providers is WiMAX, which is especially suitable for broadcasting (or multicasting) content. Intel is one of the leaders in the WiMAX effort and was one of the first vendors [intel.com] to come up with WiMAX silicon.

  • by G4from128k ( 686170 ) on Monday May 23, 2005 @02:00PM (#12614497)
    > Why is there always the presumption that a system with an x86 CPU will be PC compatible?

    I agree with you that Apple does not necessarily have to go PC compatible. But there are two very strong reasons that Apple would only buy Intel CPUs as part of a conversion to PC-compatible hardware.

    First, Apple has always suffered from the "it's more expensive" criticism (some, including me, disagree that Apple is more expensive, but that's beside the point of public perception). Adding an Intel chip will do nothing for Apple's costs (it may even increase them). Only if Apple goes PC compatible will it gain access to the ultra-low prices associated with Intel hardware. Apple will never beat Dell at the low-cost PC game, but it could co-opt Dell's economies of scale by releasing OS X for x86.

    Second, Apple gains access to a massive install base of PCs. Apple sells only 3 million machines per year versus worldwide PC production of 200 million machines per year. I'd bet there are more than half a billion PCs that have the heft needed to run Tiger (from what I have seen, Tiger can run on a 400 MHz G3). If just 3% of those PCs converted, Apple would sell 15 million copies of Tiger immediately and another 6 million copies per year afterward. This alone would triple Apple's market share. Given the high gross margins on software (vs. hardware), Apple could afford to lose some hardware sales. But this is only possible if Apple sells a true PC-compatible version of OS X.
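
    [Ed: the arithmetic above, written out:]

        3% of 500,000,000 existing PCs      = 15,000,000 copies of Tiger up front
        3% of 200,000,000 new PCs per year  =  6,000,000 copies per year thereafter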
  • Where is AsSeenOnTV? (Score:3, Interesting)

    by IdahoEv ( 195056 ) on Monday May 23, 2005 @02:27PM (#12614936) Homepage
    ASOT is strangely silent in this thread.
  • Re:Does this mean - (Score:3, Interesting)

    by laird ( 2705 ) <lairdp@gmail.TWAINcom minus author> on Monday May 23, 2005 @06:49PM (#12618170) Journal
    "We will see Windows on PowerPC long before we ever see the full OS X on x86."

    Ironically enough, we've already seen Windows on PowerPC and OS X on x86. Specifically, Microsoft shipped Windows NT (3.x and 4.0) and Windows CE for PowerPC, and Apple shipped (to developers) Rhapsody for x86, and currently ships WebObjects for many processors, including x86. WebObjects includes the Cocoa runtime, which means that (except for Apple's license prohibiting it) developers could ship their Cocoa applications as a single binary installer for all WebObjects platforms (MacOS X, Windows 2000+, Solaris). I've been told by friends at Apple that they still make sure that MacOS X runs on a wide range of processors (x86, PPC, etc.) in order to make sure that they don't accidentally break the portability that NeXTSTEP gave them. Of course, this doesn't mean that they'll ship it to consumers, but it's important that they keep the option open.
  • Re:Mod parent wrong (Score:2, Interesting)

    by brwski ( 622056 ) on Monday May 23, 2005 @11:38PM (#12620245)

    Do not listen to this advice!

    Gravity's Rainbow is well worth reading. It does, however, require that you pay attention. There is no hand-holding in this novel. Give it a chance --- this book is *funny*, while also being a great WWII novel.

  • Re:Does this mean - (Score:3, Interesting)

    by LionMage ( 318500 ) on Tuesday May 24, 2005 @12:16PM (#12624319) Homepage
    This is starting to get WAY off topic. However, I should point out that your point number 1 is attributable to many things, none of which are directly VM-related. Poor programming is indeed to blame in some cases. You speak of "Java desktop apps," but realistically this must be broken down into three sub-cases: AWT, Swing, and SWT, the three main GUI toolkits available to Java programmers these days. SWT has performance problems on every platform besides Windows, because SWT has only been optimized under MS Windows. AWT is no longer promoted, and seldom used, mainly because it looks clunky and doesn't provide much flexibility. Swing widgets are supposed to be 100% Pure Java, although Swing has been optimized on some platforms (notably OS X optimizes Swing drawing operations using Quartz 2D or similar, depending on version).

    Obviously, having GUI widgets that are written in an interpreted/JIT-compiled language is not a recipe for performance, especially for applications with rapid visual updates. But then again, a well-designed Swing app will intelligently handle redraw, something I've noticed is hard for some programmers to cope with. After all, when you can live in a Windows world and rely on hardware acceleration to conceal the fact that your application redraws the same thing a dozen times, your programming techniques will fall flat in Swing. (I've been around long enough to see examples of GUI code in Java written by people who cut their teeth in Windows and X11, and almost invariably, profiling the code revealed cases where redraw was happening far more than it should have.)

    Of course, this "desktop apps" argument is kind of a straw man, because these days almost 100% of commercial Java development is on the server side, in J2EE environments like WebSphere and WebLogic. Major corporations use J2EE (and JSP/servlet engines like Tomcat) because this stuff works. It's easy to develop for, the performance is very good when using the server VM (as opposed to the client VM, which is optimized for processes that don't run 24/7), and the toolset is very rich, which makes productivity very high.

    Let's put it this way: Java wouldn't survive if it weren't performant in the server space. So obviously, for some applications, it's superior to the alternatives.

    Your point number 2 is another straw man argument in disguise. The fact is that there is a lot of crap C/C++ code out there, and much of it is in libraries that are leveraged by user-written C/C++ applications. However, even ignoring this fact, you can demonstrate empirically, by experiment (not by so-called "intuition," which often gives incorrect answers -- ask any real scientist), that for some algorithms, a direct implementation in Java may in fact outperform a similar implementation in C/C++. The paper I cited in fact does this type of comparison -- it is an "apples to apples" comparison, implementing the same algorithms in each language and comparing the results.

    Nice job on the straw man arguments, though. It's very easy to misrepresent someone else's argument and then tear it down, because you're not really addressing the other person at all; it only looks like you are.

    Now, let's talk about your intuition. Most modern JVMs use what is called JIT (or Just-In-Time) compilation of the Java bytecodes. A VM that is tuned to run long-term, such as the server JVM that Sun ships, will cache the JIT-compiled bytecode for immediate retrieval and execution. As another poster pointed out, there are optimizations that can be performed at runtime by inspecting the running code's behavior; these optimizations can and do outperform compile-time optimizations, where the optimizer can only make assumptions about the run-time profile of the running code. Basically, in any case where a static compile-time optimization fails on the C/C++ side, a code implementation on the Java side will be more performant. So there's the counter-argument to your "intuition." Not that this is likely to conv
  • Re:Does this mean - (Score:3, Interesting)

    by Lally Singh ( 3427 ) on Tuesday May 24, 2005 @02:19PM (#12625709) Journal
    Don't worry about being off topic, once the /. post leaves the front page, nobody else but us cares :-)

    The desktop app example is actually really straightforward: the delays in using Java are immediately visible to the user, and immediately comparable to C/C++ desktop apps. Delays imposed by Java on the server side are less visible, as they can only be seen indirectly by a client app (and web browsers are C/C++!), and there aren't many comparable C/C++ apps on the server side.

    And, btw, the document you linked to before was all scientific number-crunching -- something that wouldn't run as a server app. And I did read through the source; the implementations are reasonably close to each other. In a sec I'll describe why C/C++ is still going to be faster in a production environment (even with the same number-crunching code!). See the next paragraph.

    As for the intuition claims, what happens when you feed some profiling data back into the compiler (which the better ones accept)? Any (good) C/C++ development house will have automated tests running, which provide gobs of good profiling data (see the sketch at the end of this comment). And again, a compiler back-end can optimize more aggressively than a JVM can, as it has high-level knowledge of the system (e.g. the source code) versus only the low-level knowledge a JVM has (bytecode only). Not to mention that the low-level intermediate representation can be made highly compatible with the target's architecture, versus having one imposed by Sun (which, btw, was made for interpretation, not recompilation; compare with .NET's CLR, which was).

    And I was being nice about GC, and I'll be nice again. I'll save the debugging arguments for another day -- Java programmers tend to fear problems solved in C++ quite easily. Btw, the arguments for JIT are usually similar to those for GC: in the "big picture" (e.g. a fantasy land where I can claim whatever I want), the overhead is nothing compared to code that we don't actually talk about.

    And some more niceness: it's easier to tweak Java code's performance; the JVM does most of the heavy lifting for you. Java's often easier to develop for (unless you use the libraries too heavily; they're mostly shit), and there are many areas where the dev-effort/speed trade-off favors Java over C/C++. Java has a richer runtime with some nice parts in there (reflection comes to mind).

    However, the claim that Java can be faster than C/C++, when good people are put on both sides, is false. A C/C++ compiler simply has more knowledge of the system being compiled (the source text vs. Java bytecode), essentially unlimited time to optimize it (for an extreme case, check out http://www.cs.utk.edu/~rwhaley/papers/icpp05_8.ps [utk.edu]), and full freedom for transformations (a C/C++ compiler, knowing what you originally asked for in the source code, can generate anything it wants that fills your request; a JVM's JIT can only guess at what your source wanted from the bytecode it has to work with, limited to the strict letter of the law given by the bytecode). The paper you listed was an honest attempt at comparison, but GCC isn't optimal on many platforms, most notably Intel or PPC.

    This is a fair set of tradeoffs; don't let the marketing people and the Kool-Aid drinkers tell you different.
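
    [Ed: a minimal sketch of the profile-feedback workflow mentioned above, assuming GCC 4.x's -fprofile-generate/-fprofile-use flags; older GCCs spell this -fprofile-arcs/-fbranch-probabilities.]

        /* pgo_demo.c -- toy candidate for profile-guided optimization.
         *
         * Step 1: build with instrumentation
         *   gcc -O2 -fprofile-generate -o pgo_demo pgo_demo.c
         * Step 2: run a representative workload (writes profile data)
         *   ./pgo_demo 1000000
         * Step 3: rebuild, feeding the profile back to the optimizer
         *   gcc -O2 -fprofile-use -o pgo_demo pgo_demo.c
         *
         * After step 3 the compiler knows which branches are hot and
         * can lay out and inline code accordingly -- the "gobs of
         * good profiling data" point above.
         */
        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char **argv)
        {
            long n = (argc > 1) ? strtol(argv[1], NULL, 10) : 1000000;
            long evens = 0;
            long i;

            for (i = 0; i < n; i++) {
                if (i % 2 == 0)   /* a branch whose bias PGO can measure */
                    evens++;
            }
            printf("%ld even numbers below %ld\n", evens, n);
            return 0;
        }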
  • by LionMage ( 318500 ) on Tuesday May 24, 2005 @05:15PM (#12627604) Homepage

    > This is a fair set of tradeoffs; don't let the marketing people and the Kool-Aid drinkers tell you different.

    OK, jackass, you just crossed the line into insulting me. First off, I have a Master's Degree in Computer Science. Secondly, I have been a professional software engineer for well over a decade now. You, on the other hand, are coming across as an amateur, and probably one with a specific axe to grind.

    I tried to be as polite as possible, to counter your "arguments" with facts and observations based on years of experience in the field. But you choose to believe your little fantasies. Fine.

    Here's a postcard from reality: I've worked on projects where the scientific number-crunching that you claim would never be present in a server app was, in fact, present in a server app. Here's another postcard from reality: in a real production environment, you're lucky if you get a chance to profile your code once or twice to identify the top two or three areas with the most CPU utilization (or the biggest I/O bottlenecks, whatever). This is typically not done as part of automated testing, as it is time-consuming, and automated testing is usually intended to handle QA issues and to ensure that bugs stay fixed (i.e., regression testing).

    You make a lot of unsubstantiated claims that further bolster my previous assertion that you're simply ignorant and talking out of your ass. For instance, you claim that the Java class libraries are "mostly shit," but that's a pretty broad blanket statement that flies in the face of years of experience to the contrary. Essentially unlimited time to optimize in static compilation? That's great, assuming that the optimizer (a) is guaranteed to generate equivalent code and not break your logic, and (b) actually has runtime knowledge of your code in order to make more effective optimizations. Neither of these is automatically true. The former is often not true -- even commercial compilers such as those offered by Microsoft can generate broken code when optimizing, necessitating disabling the optimizer. The latter case is almost always untrue for C and C++ (and similar languages). Even if your compiler is designed to take profiling data as input, all that does is provide hints to the optimizer on where it should spend its time being the most aggressive. There are only so many tricks you can bring to bear in compile-time optimization.

    Again, you're arguing in abstractions. I challenge you to put up or shut up, and by that, I mean show me definitive empirical tests. You don't like the paper I cited because it has lots of scientific computation? Fine. Pick your own test case. But you can't deny that there are always going to be some cases where I'm right, and Java will come out on top -- not because it's a better language, not because virtual machines are just that good, but simply because you can always come up with a few cases where Java simply will perform better.

    Of course, I don't care about "performs better." I only care about "performs about as well as." And that's certainly the case almost 100% of the time, except for some very low-level bit banging type applications (e.g., video transcoding, audio processing, etc.)

    But there's really no point in abusing this thread any further, since it's obvious that no matter what I say, you're going to insist that I'm wrong, even though I never claimed that Java was always faster than C/C++, or even most of the time; I only ever claimed that Java was roughly equally performant to natively compiled languages like C/C++, and that it could outperform such languages in certain cases. I proved my point by citing a research paper.

    Your claim is that C/C++ is always more performant, and I proved that your claim is wrong because it is absolute. Your only response was to dismiss the research paper I cited because, to paraphrase you, "nobody does lots of scientific computation in the real world." That's also an
