Posted by Hemos from the makin'-the-move-with-powerpc dept.
ohmygod2 wrote to us with a story from SF Gate that Apple, unsurprisingly, is going to be one of the purchasers of IBM's PowerPC 970. At this time, though, it's unclear where Apple is going to actually *use* said chip.
Update: 10/14 15:53 GMT by H: Follow-up to Tim's story.
This discussion has been archived.
No new comments can be posted.
This kind of technology can be more easily implemented by burning porn directly into ROM, reducing lookup times to almost the speed of the bus.
Apple could appeal to the hardware hackers with offers of ROM upgrades packaged in convenient easy-to-bend pinned chips using tightly machined push-down sockets. Within months there would be a "Burn your own Porn ROM HOWTO" and instructions on how to mill the pin thickness down to permit easy insertion!
(puns, unfortunately, were intended)
Headline: Apple employees seen putting new IBM chips into new computer cases
It is still unclear whether Apple is going to sell these computers, or switch to Intel at the last second for no good reason.
Give it up, people! Apple is stuck with PowerPC chips whether they like it or not. What are they going to do, release OS X for Intel and suddenly realize that there are *no* applications or drivers available for it? It would take a while for the application base to build up again, and some older applications would never be recompiled. Then would new applications continue to be released in both Intel and PowerPC versions? If there's one thing Apple cannot afford, it is to lose market share due to a messy transition.
Google News [google.com] of course has pretty much all the articles. They are all based upon the same IBM press release, but many make slightly different predictions.
It's used in the IBM z series servers, and those servers can serve up like 100,000 pages per second; it's insane. This chip is second only to the DEC Alpha in FPU processing! Macs running on these are going to be smokin'.
How could a processor be in servers when it is not even made yet? If you read the press release you would see it is IBM announcing details of a chip that is as yet unfabbed. Maybe you are thinking of the POWER4, kind of this chip's big brother.
Ummm no, actually PowerPC is a subset of POWER, not an extension. PowerPC code would run on a POWER, but not necessarily vice versa.
It is a non-strict subset plus a non-strict superset. In other words, they removed some POWER instructions (like the fused multiply-add) and added others (PPC has a whole set of single-precision FP instructions). There are other POWER extensions, PowerAS for example (which I think is strictly a superset for 256-bit addressing, but I'm not positive!).
Of course many POWER CPUs implement the PowerPC instruction set also, but there is no requirement whatsoever that they do so! The POWER4 does at least POWER, PowerPC, and PowerAS. However...
...the PowerPC instruction set doesn't include the AltiVec SIMD instructions, not technically. So you can have a fast PowerPC CPU that is mostly useless to Apple, because while Apple doesn't depend on AltiVec (they run on the G3, after all, which has no AltiVec!), they really, really run faster with it. A 1.8GHz AltiVec-less CPU may well run significantly slower than a 667MHz AltiVec CPU for some commonish tasks (MP3 encoding, for example, and some screen effects).
Complex, innit? Welcome to IBM's world.
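To make the AltiVec point concrete, here is a minimal sketch of the kind of 4-wide single-precision loop AltiVec chews through. This is illustrative only, assuming GCC's <altivec.h> intrinsics and -maltivec; it is not anything Apple ships:

    /* One vec_madd does four multiply-adds at once; a scalar CPU needs
     * four separate instructions.  Assumes n is a multiple of 4 and the
     * arrays are 16-byte aligned, as vec_ld/vec_st require. */
    #include <altivec.h>

    static const float twos[4] __attribute__((aligned(16))) = {2, 2, 2, 2};

    void scale_add(float *dst, const float *a, const float *b, int n)
    {
        vector float two = vec_ld(0, twos);          /* 2.0 in all four lanes */
        for (int i = 0; i < n; i += 4) {
            vector float va = vec_ld(0, &a[i]);      /* load 4 floats */
            vector float vb = vec_ld(0, &b[i]);
            vec_st(vec_madd(va, two, vb), 0, &dst[i]); /* dst = 2*a + b */
        }
    }

This per-instruction width, not clock rate, is why the 667MHz-beats-1.8GHz comparison above isn't crazy for vectorizable work.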
Apple has been slowly expanding in the last little while... OS X is becoming ever more popular, and Apple's hardware is slowly but surely getting much better as time goes on... If Apple creates a 64-bit arch market before Microsoft does, Apple could really take off and beat MS... Dreams CAN come true!!
Sure they could, but it is unlikely that masses of people are going to move to the 64-bit platform. Apple still lacks the software base that MS has (unfortunately, I suppose) and the hardware will be out of the price range that most people will be looking to spend (I just purchased a second PC for $500 including monitor; I have no desire to pay as much as the PPC platform will cost).
Dreams in this case will most likely remain just that (no matter how bad I want them to come true :)
Who cares about market share? Apple is making profits... buy their stock!!!!
Anyway... the software gap is non-existent for 90% of computer users. Games are the only thing that lacks, and ALL big-title games are released at the same time for the Mac as they are for the PC... only the crappy stuff does not come out at the same time, or ever... though, crappy is subjective.
The $500 PC might be around for a long time, but it certainly is not being pushed by Dell or Gateway... why is that... because Dell and Gateway have realized that there is no money in a market with high volume and no profit... if you had not noticed, Dell and Gateway have been pushing their $1200-$1500 PCs and almost never mention their $500, under-featured systems, which are basically now just a way to package outdated hardware and move it out of the factory in an attempt to at least recoup the costs of the hardware.
I haven't seen the numbers, but I think Apple is poised to make inroads into corporate IT, particularly if they ship systems with this 64 bit chip.
The consumer market will probably make a much smaller splash. The real market breakthroughs that Apple needs on the consumer side are more software than hardware.
I have seen no evidence that Apple has been increasing any market share. In fact, if Apple suddenly took the majority of market share away from Microsoft, I don't think we'd be in any better shape. To have one company control both the hardware and the software end would be suffocating.
I don't think Apple is going anywhere because of its high costs and its inability to produce machines with superior value and/or price. IBM's 64-bit PowerPC chip may be priced more like the Itanic than it will be priced like the Hammer. The Apple Tax is for colored, moulded plastic. So if Apple takes up 64-bit, that only means their survival will be extended for 5 or so more years.
I'm looking forward to Hammer machines running Linux, not an overpriced 64-bit Macintosh.
When Apple started selling FireWire-based Macs, Intel immediately tried to marginalize it by saying that the technology only appealed to a niche of consumers, and oh-by-the-way here's our specs for ATA/66 and USB 2.0 (for which the detailed specs hadn't been finalized, and which didn't start hitting mainstream systems until some 2 years later).
Intel takes seriously Andy Grove's words about only the paranoid surviving.
> oh-by-the-way here's our specs for ATA/66 and USB 2.0 (for which the detailed specs hadn't been finalized, and which didn't start hitting mainstream systems until some 2 years later).
Disclaimer: I used to work for Intel's server motherboard division. I don't think I'm biased, but wanted to get that out of the way.
USB 2.0 still isn't in 'mainstream' systems. I'd give it another 6 months.
It has been more than two years since FireWire came out. The first FireWire Mac was the 'Blue and White' G3 in January 1999, and FireWire cards were an option even before that.
Intel is a member of the IEEE1394 working group, and early in FireWire's life, Intel supported it, only to distance themselves when USB 2.0 was announced.
Intel has Intel-branded motherboards with FireWire onboard.
ATA/66 has nothing to do with FireWire or USB at all... Intel doesn't even dictate ATA standards, although I'm sure they have a lot of clout. (Heck, Maxtor got their 'FastDrives', a.k.a. ATA/133 accepted by the ATA standards board, against Intel's objections...)
And IBM didn't see a world-wide demand for more than a dozen mainframes.
By the time you factor in biometric security, voice recognition and Christ's own gaming engines, VR generation, desk-top video editing and so on, 64 bits gets chewed up pretty fast even if you offload some processing to custom chips (and anyway who wants to build boxen with more ASICs that cost more money?)
64 bits on the desktop? In five years the majority of new box builds may be 64-bit, and 32-bit will be for folks stuck on Windows without a migration path.
The 970 is a derivative of the Power 4 chip (with what I assume to be the Altivec extensions)
These run in the 1.6-2.0GHz range, as a RISC chip working in 64-bit chunks.
Granted, I am unsure as yet whether Darwin runs 64-bit natively, but when it does, imagine a dual processor of these (with, of course, Quartz Extreme pushing all of the video over to the graphics processor).
Maybe I am getting my hopes up, but this is what I have been waiting for. New Macintosh, here I come :)
> The 970 is a derivative of the Power 4 chip (with what I assume to be the Altivec extensions)
Altivec is just the Motorola marketing name for a set of SIMD extensions; Apple markets it under the name "Velocity Engine". IBM's chip will supposedly contain similar extensions to take advantage of the same thing, so Apple could simply swap them and still retain the Velocity Engine moniker.
BTW, from what I read, OS X's underpinnings were designed with 64-bit in mind; it doesn't sound like it would be too big of a development job to convert over.
I think what the above poster was referring to was the time and energy that Apple has reportedly spent making sure that OSX can be ported to multiple platforms. There have been rumors of OSX (or at least Rhapsody) running on everything from ARM through Alphas and UltraSparc processors.
How many of those rumors are true? Well, without being inside Apple, it's hard to know, but I wouldn't be surprised. NetBSD and its associated utilities run on just about everything under the sun. I have no idea how portable the Mach kernel is, but I'm guessing it's been ported to the ARM and the Alpha. That leaves the interface. Keeping that platform-independent might be tricky, but I'll bet Apple's been keeping it in mind. They've known that the G5 was going to be 64-bit for a year or two now...
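For what it's worth, most of the grunt work in a 32-to-64-bit port is hunting down code that assumes pointers and longs are 32 bits wide. A generic C sketch of the issue (nothing Apple-specific, just the usual LP64 story):

    /* Under ILP32 all of these print 4; under the common LP64 model,
     * long and void* grow to 8 bytes while int stays 4.  Code that
     * stuffs a pointer into an int breaks; code that uses intptr_t
     * (or just keeps pointers as pointers) ports cleanly. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        printf("int=%zu long=%zu void*=%zu intptr_t=%zu\n",
               sizeof(int), sizeof(long), sizeof(void *), sizeof(intptr_t));
        return 0;
    }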
So I'm guessing that this is the first iteration of this proc (even though, from what I read, it's just a stripped-down version of the chips they use in their own servers), and that their roadmap indicates some kind of wild and crazy ramping up in chip speed (since 1.8GHz will be puny compared to whatever Intel and AMD have out by then), 'cuz that's the only way they'll stay competitive with the x86 hoopla (unless somehow consumers magically understand the difference between chip clock and the speed of the chip).
Anyone know if IBM's PowerPC architecture allows them to do this?
> On top of that, I think you should add about 30% for 64-bit processing
Not really. A few specialized apps will benefit from 64-bit, just as a few specialized apps benefit more from Altivec than SSE. But in general 64 bits is irrelevant to typical apps and tools.
64-bit may help indirectly in that the processor will need to fetch instructions and data more quickly, so more transistors will probably be dedicated to this. However these improvements could have been made in a 32-bit CPU as well. It's just that with 64-bit it becomes more of a necessity.
All just speculation, looking forward to seeing what Apple eventually comes up with. That said I'm a bit skeptical since we've had so many PowerPC advancements that were finally supposed to let PowerPC catch up to Intel, and of course nothing really changed.
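One narrow but concrete case where the wider registers do pay off directly: arithmetic on 64-bit integers, which a 32-bit CPU has to synthesize from multiple instructions. A hedged sketch:

    /* On 32-bit PowerPC each 64-bit add becomes an addc/adde pair
     * (add with carry); on a 64-bit part it is a single add.  Most
     * code, full of 32-bit ints, sees no such win. */
    #include <stdint.h>

    uint64_t checksum64(const uint64_t *words, int n)
    {
        uint64_t sum = 0;
        for (int i = 0; i < n; i++)
            sum += words[i];    /* one instruction per add on a 64-bit CPU */
        return sum;
    }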
Essentially a derivative of the company's Power4 microprocessor, IBM's PowerPC 970 adds 64-bit PowerPC compatibility, an implementation of the Altivec multimedia instruction-set extensions and a fast processor bus supporting up to 16-way symmetric multiprocessing.
I hope they use a memory controller that does at least DDR 333.
Over at AppleInsider [appleinsider.com] there has been much talk of IBM using an on-chip integrated memory controller. This would be good because it would be FAST, but bad because it would probably use a proprietary form of RAM. So I guess we'll see.
I doubt IBM would design a memory controller that uses a proprietary memory interface. This would raise their memory costs enormously without providing much in return.
I would guess that IBM would build a multichannel switched memory interface to DDR SDRAM. The controller would handle say four simultaneous requests for 64-bit memory values. This is what Nvidia does in their GPUs, and it seems to work for them. I believe the Sparc chip also has a similar on-chip memory controller.
The presence of Altivec is a clear indication that Apple was involved and/or will be using the chip. Up until now, IBM has resisted adding Altivec to its version of PPC, and Apple depends on it heavily.
In terms of die size, a rough measure of cost, the PowerPC 970 measures 118 mm², against 131 mm² for the Northwood 2.X-GHz Pentium 4. Both the IBM and Intel parts are being made in 130-nanometer CMOS on 300-mm wafers.
This indicates that the price could be competitive in desktops. Way to go IBM!
The 970 also sports a cache-coherent, 900-MHz processor bus capable of data rates up to 6.4 Gbytes/second.
Keep in mind that DDR333 runs at 167MHz, so this new processor has a bus that can do DDR at 450MHz (DDR900), or quad-pumped 225MHz (QDR900?), or maybe <Sarcasm>PC900 SDRAM</Sarcasm>.
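The arithmetic behind those labels is just width times transfer rate. A quick sanity check in C (the 64-bit-wide data path is my assumption; the quoted 6.4 GB/s presumably reflects protocol overhead or a narrower effective path):

    /* Peak bandwidth = bus width (bytes) x transfers per second. */
    #include <stdio.h>

    int main(void)
    {
        double bus    = 8.0 * 900e6;     /* 64-bit path at 900M transfers/s */
        double ddr333 = 8.0 * 167e6 * 2; /* 64-bit DDR: 167MHz, 2 transfers/clock */
        printf("970 bus: %.1f GB/s (6.4 quoted)\n", bus / 1e9);    /* 7.2  */
        printf("DDR333:  %.1f GB/s\n", ddr333 / 1e9);              /* ~2.7 */
        return 0;
    }

Either way, the processor bus comfortably outruns a single channel of DDR333, which is why the memory-controller speculation above matters.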
Not speaking as a pro here, but I do know that Apple's mobo architecture recently has been to split as many system pieces onto their own independent buses as possible. Surfing over to apple.com's hardware section should provide some insight, as should ars technica & tom's hardware, which recently had some articles about this.
It's been cited as a key difference between the Mac system architecture and the PC system architecture - different buses for AGP data, processor-to-memory, processor-to-PCI, etc.
I imagine this will continue to be the case - don't know if it impacts the aforementioned speeds, though.
http://www.wired.com/news/mac/0,2125,55722,00.html [wired.com]
This is being discussed all over (here, Ars, Macworld) but the Wired article takes a much more "done-deal" tone than any of the other commentary I have seen yet. It suggests the possibility of Macs with 4TB of RAM too :-)
"Apple, IBM and Motorola declined to comment on the switch, which has been rumored as the processors in Macintosh computers have trailed Windows-based counterparts in clock speed."
Wake me when one of the companies comments please. They will, but be patient before yelling CONFIRMED!
Exactly. This is pure speculation, once again elevated to implied fact by a lazy, unverified summary. The story said no such thing, and quoted no verifiable source.
Okay, actually read the stories. "According to industry sources..." is what it says. Nowhere is there confirmation from Apple or IBM that Apple has committed to purchasing them. This is not new; this is just the same news as the last story, only centered on one specific rumor instead of the main story.
Until Apple or IBM officially states that Apple has committed to purchasing these processors, don't title the story 'Apple is Buyer...', since we still aren't sure.
Yeah, I'll admit, I've been expecting it since IBM announced the chip, and I fully expect that Apple will be the main customer. BUT, my belief (or the belief of any 'industry source', without hard proof) doesn't make it a fact.
I'm not asking you not to rumormonger on it, I'm just asking that it not be presented as fact when it is still just rumor.
(Bah, and now I've forfeited three of my moderator points by posting in a thread I moderated in... :-( It just got me pissed off when I finally noticed that there still isn't any proof.)
Cupertino, CA - Apple Computer (AAPL) is expected to buy a record number of the new "PowerPC 970" CPU, but in a surprise move, isn't expected to actually do anything with them.
"We're doing great with the iPod, the warehouse is totally empty," said Apple VP Phil "All your Jaguar" Schiller. "Steve thought it would look more lived-in if we had some big boxes of stuff in there."
Steve Jobs was hard at work developing a new way to mispronounce the name of the new CPU and was unavailable for comment.
Despite the fact that the PPC 970 will be introduced at 1.8GHz while the P4 is expected to be around 3GHz, the 970 will execute 8 instructions per cycle. I can't recall how many instructions per cycle the P4 executes, but I believe it is far fewer than 8. In the handful of articles I read about it, somebody said that the 970 would effectively compete with a 4-6GHz P4 as a result of the instructions-per-cycle efficiency of the chip.
Plus, it's gotta run cooler than a 6GHz P4 would. As a laptop owner, ignoring the superior performance potential of this chip, the cooling and power requirements alone would make me choose a 970 architecture over a Pentium.
The P4 executes 1 instruction per cycle; the G4 does 3 (the basis of Apple's "megahertz myth" myth), so this is a huge step up.
As for the laptop part, hell yeah. My TiBook by the end of 2003 should be nearing the end of its "useful lifespan" - whatever that is - and I'll probably sell it for half of what I bought it for then and buy the latest, greatest "G5" laptop once it's available. That's the plan, at least. I'm in college, after all... and Apple has a tendency to take forever to release a new laptop based on a new processor design.
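Taking the thread's figures at face value (8 instructions/cycle for the 970 and 1-2.5 for the P4 are posters' claims, not confirmed specs), the comparison everyone is gesturing at is just a product:

    /* Throughput ~= clock x average instructions per cycle (IPC).
     * All IPC numbers below are the thread's claims, not vendor specs. */
    #include <stdio.h>

    int main(void)
    {
        double ppc970 = 1.8 * 8.0;  /* 1.8GHz x claimed 8 IPC = 14.4 G instr/s */
        double p4     = 3.0 * 2.5;  /* 3GHz x ~2.5 real-world IPC (per a post below) = 7.5 */
        printf("PPC970: %.1f, P4: %.1f (G instr/s)\n", ppc970, p4);
        return 0;
    }

As the replies point out, peak IPC is rarely sustained, so treat this as an upper bound, not a benchmark.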
By "in flight" I'm assuming you mean "in some stage of the processing pipeline at any given moment" - I believe the P4 has something like a 20stage pipeline, the G3/G4 I believe is more along the lines of an 8 stage pipeline, if memory serves.
Part of what's at stake here is how many instructions are decoded/dispatched each clock cycle and then other factors like branch-prediction and such muddy the waters a bit more. In the end, the 'instructions per cycle' is really more of an average than anything else, as not every instruction will be a candidate for sending through the parallel functional units, etc. Taking into account the efficiency of the branch-prediction unit is important, too, since you could take a wrong turn and have to clear out all your functional units, at every stage of the pipeline and start over again, in certain circumstances. The fewer times this happens, the more effective your CPU will be at pushing the bits around.
Bottom line: modern processor mechanics are far more sophisticated than can be easily summarized by any one number or neat phrase. Just ask AMD about that one..
Now here's a case of when life hands you lemons you make lemonade. The parent was talking about simultaneous execution, i.e. how many instructions per cycle can come out of the end of the pipeline. You're twisting it around to take that number and multiply it by the horribly long pipeline.
Let's go back to basics, every time the processor makes a mistake in guessing what's going to happen next, the pipeline has to be cleared. Every modern CPU faces this problem so you want short pipelines so your penalty is low. Intel has vastly longer pipelines and thus they pay a higher price every time predictive branching screws up.
So having a large number of instructions being simultaneously worked on is a *bad* thing unless they are also being pumped out and executed in large numbers as well. AFAIK, in the P4 they aren't.
According to Ars Technica [arstechnica.com] the P4 in the real world gets 2.5 instructions per cycle done. With the new G5 getting 8 done per cycle with half the pipeline depth, performance should once again favor the Mac side of the PC wars.
I agree with you, but I hope you're not confusing instructions per cycle with length of the pipeline.
The P4 processes instructions in a pipeline. The pipeline can contain 20 instructions at any one time, but each instruction is only finished once it exits the pipeline.
Same goes for the 970, I'd imagine.
To truly increase instructions per cycle, you have to add extra pipelines (and a lot of extra circuits to prevent instructions from stepping on each other)
If pipelines were always full, and all instructions were equivalent, the P4 would beat the pants off of the 970. But the pipeline is not always full because instructions often depend on the results of other instructions, and not all operations are equal in their requirements.
So shorter pipelines often handle instruction dependencies better, resulting in better performance, while (for other reasons) longer pipelines are easier to design for higher GHz.
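A tiny illustration of the dependency point, in generic C (not tied to either chip): the first loop is a serial chain, so no amount of superscalar width helps; the second hands the CPU independent work it can overlap.

    /* Why 'instructions per cycle' is an average, not a guarantee. */
    double serial_sum(const double *a, int n)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += a[i];              /* each add waits on the previous one */
        return s;
    }

    double paired_sum(const double *a, int n)
    {
        double s0 = 0.0, s1 = 0.0;  /* two independent chains */
        for (int i = 0; i + 1 < n; i += 2) {
            s0 += a[i];
            s1 += a[i + 1];         /* can issue alongside the s0 add */
        }
        if (n & 1)
            s0 += a[n - 1];         /* odd leftover element */
        return s0 + s1;
    }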
" Despite the fact that the PPC 970 will be introduced at 1.8 GHz while the P4 is expected to be around 3GHz, the 970 will execute 8 instructions per cycle."
The IBM processors are RISC processors. The Intel ones are CISC. RISC chips do less per instruction; therefore, it is stupid to compare them the way you do.
All currently shipping Intel and AMD desktop microprocessors internally translate x86 instructions to much smaller instructions that are functionally similar to RISC style instructions.
Intel based chips typically have deep pipelines and fewer execution units and registers. Chips designed like this lose speed every time a pipeline flush is necessary (bad branch prediction, for instance), or during pipeline stalls (caused during some "exclusive" instructions and some synchronization tasks, notably when you need to stall out the pipeline). Intel makes up for this with higher clock speeds and larger high speed caches.
Number of parallel units and parallel execution is a very important factor in some performance tests - the original Pentium could either do int/float or MMX and required a context switch to flip between them, while Altivec could run at the same time as int and float operations (and multiple - I think 3 - could be processed at the same time). Alas, Motorola was slow out of the gate, delaying the G4 multiple times, and Intel released their parallel-able SIMD around the same time (if not first) and had kicked performance well above the Motorola chips shortly afterward, which mostly made up for the aforementioned flaws.
Also, I believe some of the extra non-general purpose registers are used for context switching by the processor in PPC systems, where Intel's chips grab this information from L1 cache. I don't know if this is true for newer chips, though (even circa Pentium 2)
From the article: "Critics -- notably Intel -- argue that most desktop users have no need for 64-bit processing. In fact, Microsoft Corp. has yet to release a 64-bit version of Windows that will run on AMD's Hammer chips."
Is it any wonder, given they just lost their defense against Intergraph's patent lawsuit which may result in them not being able to release the Itanium series?
Hey, Intel, last I checked, no one had a use for 32-bit processing or 640K of RAM on the desktop, either.</sarcasm>
"In its marketing, Apple has stressed the megahertz and gigahertz is not necessarily indicative of a machine's performance. Still, the fastest Motorola processor for the Mac, the G4, runs at 1.25 gigahertz; Intel Corp.'s fastest Pentium 4 chip runs at 2.8 gigahertz."
It's like he never even thought about what he wrote. Someone conveys the thought that marketing hype may be costing you money, but let's ignore that and perpetuate the marketing hype.
On the other hand, the "Megahertz Myth" is marketing hype aimed at countering marketing hype, so who really cares what either Apple or Intel offer as the "fastest"?
My PowerBook G3 runs just fine, my Pentium III runs just fine. If you need the power, go for it, but if you don't, go refurbished. Just my opinion.
I'm wondering though.
I remember part of the reason Apple went with Motorola G4s was for the AltiVec engine. Back when Motorola and IBM split, they forked the PowerPC chip (the then G3); when this happened, the definition for the chips changed slightly. Motorola's definition of the G4 was a faster chip with the AltiVec engine. This is what allows for superfast processing during high floating-point calculations (similar to MMX, only phatter). This was also the part Apple was talking about when they used to advertise "twice as fast as a Pentium PC", because during those moments of super-intense number crunching, they were. IBM's definition of the G4 was a chip made with copper, shorter pipelines, things like that. How is the switch to an IBM chip going to affect AltiVec? Since it's Motorola technology, I think it's safe to assume it won't be on the IBM chip. Will the IBM chip suffer at all during those slowdowns? Or will the extra 32-bit data path, in conjunction with copper, etc., be more than enough to make up the difference?
I believe it was Ars Technica or the Register that mentioned the coincidence that IBM's new PowerPC based on the Power4 included "extra instructions".
Yea, it was The Register (slowly...remembering...)
The new IBM chip had the same number of instructions as AltiVec, and when some digging was done, the name of the instruction set was the same as the generic name for AltiVec (also mentioned in the Register article; verified at IBM's & Motorola's sites?)
I'm sorry I haven't posted links, but I gotta grab lunch before I get pulled aside for troubleshooting again.
And IBM said no one needed the power of the 80386. Then Compaq released their 386 monster and IBM stopped mattering in the PC world.
The difference is that we have had plenty of 64 bit processors aimed at the lower end and they just don't work. It is too expensive to bring in 64 bits from RAM to cache when the average variable has less than 8 significant bits. Hence the packed words of VLIW Itanium.
Back when my job description included developing code for the Alpha and the Pentium, just paging in the larger 64 bit code killed the speed advantage of the Alpha chip.
> The difference is that we have had plenty of 64 bit processors aimed at the lower end and they just don't work. It is too expensive to bring in 64 bits from RAM to cache when the average variable has less than 8 significant bits. Hence the packed words of VLIW Itanium.
> Back when my job description included developing code for the Alpha and the Pentium, just paging in the larger 64 bit code killed the speed advantage of the Alpha chip.
What a load of bullshit!
First, what's this crap about 64-bit processors not working? There are plenty of MIPS, Alpha, Sparc and PowerPC based 64-bit systems that work just fine. Aren't the current crop of Nintendo game consoles powered by a 64-bit MIPS? How much more low-end do you need to go?
Second, what's this crap about most variables having less than 8 significant bits? Most variables have a minimum of 8 significant bits. The average length of a character string is in the 8-12 byte range (64-96 bits!) and integers and pointers are all (at least) 32-bits wide in modern systems (Windows, MacOS, and all unices).
Third, what's this crap about it being "too expensive" to transfer 64-bits of data in from RAM? All modern processors have 64-bit wide data busses and transfer data in 4-beat blocks (meaning 4*64-bits, or 256-bits at a time). This is true for the Pentium as well, at least since the Pentium II!
Finally, what's this crap about "paging in 64 bit code"? Just because Alpha (or Sparc, MIPS, or PowerPC) have 64-bit wide data registers doesn't make the code any bigger! Both the 32-bit and 64-bit variants of Sparc, MIPS, PowerPC and PA-RISC use the same size instruction words (32-bits), so there is no difference in code size.
Whatever your job description might have said, you clearly don't know what you are talking about.
Clock speed does not measure processor speed. These chips running at 1.8GHz are faster than P4s running at twice that speed. IBM's Power4 has a huge die and processes tons more information per cycle than a P4. So, if clock speed did measure speed, a 100GHz chip that does 1 operation per second would be 1000x faster than a 100MHz 486!!! Right?
Both the Power4 and Itanium are tremendously powerful processors. See this page [hp.com], ironically intending to promote the Itanium2 (which is a tremendously powerful chip), to see how a 1.3Ghz Power4 compares with a P4.
Also, notice that according to the press release the 970 will be the single-core version of the Power4, so you should look at the green box closer to Sun's suckers, not at the orange one. The press release also mentions an "economy version" of the Power4, so it may be even slower.
Ahem. 128MB L3 cache (on the POWER4 in the benchmark)? Daaaamn. I'm not saying that a fat L3 cache has anything to do with SPEC benchmarks (I'm guessing it doesn't), I'm just making an observation: that's a lot of cache! And it's probably bloody expensive to get 128MB of cache-speed memory. HP's comments allude to that but it also has 64GB of RAM so it's sort of a straw man ("let's overconfigure a system and then make fun of how overpriced it is").
I think it's quite silly of HP to say that "IBM's Power4 architecture is outclassed in performance". Really? A 10% difference qualifies as outclassed? I don't agree. And the POWER4's SPECint score is better. "Outclassed?" Hah.
Of course the proof is in the pudding. Let's see what actually hits the streets. Apple has now been "just around the corner from really kicking Wintel's butt" performance-wise for about 8 or 9 years, but it has yet to happen. We were all led to believe that the PPC would blow away x86 and that never happened. With luck, IBM will actually deliver a really kick-ass CPU at clock speeds close to the x86 family, and the superior per-clock performance will actually make it faster. But there would still be the question of price/performance. If Joe PC Buyer can buy a faster PC for the price of a Mac, it doesn't matter that the Mac runs cooler, or at a lower clock speed, or in 64-bit mode. Joe will just say "my $500 PC is faster than your $1500 Mac, end of story". And he will have a good point. Until that changes, the only people who care are the people who are willing to pay a premium for OS X and the Mac experience, and people who need something faster than the fastest desktop PC but still want a user-friendly mainstream desktop OS. The folks who use Office and Outlook all day won't be able to justify the extra $1000 or whatever it would cost to get a Mac that performs similarly.
I'd also like to remind everybody that benchmarks don't necessarily reflect real-world performance. This is a very synthetic benchmark that is great for telling you what the best-case raw CPU performance of these CPUs can be, but it doesn't prove that $REAL_APP will see those performance gains over older CPUs.
In particular, it's not clear what the performance cost of using code compiled for a PPC604 would be vs. using code compiled with the very best compiler for the POWER4. I'm sure that Steve Jobs will crow about another highly-optimized Photoshop benchmark that we can all wish represented overall system performance, but it doesn't. That said, I imagine that the really important professional creative apps (you know, the ones that cost thousands of dollars per seat and really beat the @%$%@$% out of the CPU) will be quickly updated for the new CPUs because their customers will demand it. (To be fair, the same is true for the Itanium.)
SPEC int2000 [spec.org] consists of gcc, gzip, perl, bzip2, crafty (a Free chess engine), and some other stuff. I happen to be interested in building a computer to run crafty fast, so it's very handy to have good benchmark results for it on recent AMD and Intel CPUs. (Athlons kick P4 butt on crafty, probably because of bit shifts and things like that that P4 is slow on.) Many people would find the gcc, perl, and compression benchmarks interesting when buying a *NIX workstation.
SPEC fp2000 [spec.org] includes Mesa, but only doing software rendering. The other programs are mostly scientific computing apps. (Not just synthetic matrix multiplies or things like that.)
Keep in mind most of these articles are coming from the BusinessWeek article, or an IBM press release. In the IBM release, *nothing* about a real date of shipping was stated. What was stated was "Second Half of 2003".
As for the GHz issue, the chip does more per-clock than the P4. This means that it can still be competitive. Just wait another day for the MPF, and maybe we'll be able to see some initial SPEC numbers.
MHz and GHz are fine, but that's just RPMs. As anyone who has driven a bored out V8 or massive V10 will tell you, there is no replacement for displacement. You can rev a crappy 2L engine to 7,000rpm and make your itty bitty wheels spin and make a nice smell. But if you want to throw asphalt into the air and strike terror into living things you put the pedal to 8 or 10L of fire-breathing cast iron.
The Power line from IBM has that kind of displacement. You don't need GHz, or at least not as many, to get a lot of torque out the back end. And of course once you get torque, you can work on the revs. As we've all seen, higher revs happen with improvements to production technique, and are a given. But more torque (i.e., more and better logic on the die) takes a strategic investment, and some amount of risk. But I'll take a big-bore Dodge Viper over this year's higher-revving econo Tondabishi any day.
> MHz and GHz are fine, but that's just RPMs. As anyone who has driven a bored out V8 or massive V10 will tell you, there is no replacement for displacement. You can rev a crappy 2L engine to 7,000rpm and make your itty bitty wheels spin and make a nice smell. But if you want to throw asphalt into the air and strike terror into living things you put the pedal to 8 or 10L of fire-breathing cast iron.
Bad analogy. A 3L F1 engine puts out in excess of 850HP, courtesy of a high tech design that can go in excess of 18,000 RPMs. A 3.5L modern engine in the Altima can put out 240HP, versus the 220 or so horsepower of a 5L Mustang engine of but a few years ago.
5GHz of spinning its wheels is a fraction of what a few GHz of actual work is worth.
Last I checked, the G4 had a 4-stage pipeline and the P4 had a 20-stage pipeline. Although the P4 at full efficiency can really move, the extra pipeline stages make it very hard to ever utilize the chip at its full rating, as instructions must "predict" that they won't interfere with the other 19 operations already in progress.
Add to that a growing lack of interest in GHz, as even the lowest-powered machines are amply powered to run a word processor / spreadsheet.
There are many other points to argue (like 32-bit processing vs 64-bit processing) but I don't think that 1.8GHz will hurt Apple in the least. Especially with the history of their 500MHz machines outperforming 1GHz Intels.
Well, as I wanted to indicate, "average" is very vague. SPECint has 14-16% conditional branches. SPECfp has 3-12%.
>On the first run [...] 50% prediction rate
Not really. Statistical prediction and profiling can be applied. 85% of backward branches are taken (loops), and 60% of forward branches are taken. With predict-taken you get roughly a misprediction rate (MPR) of 35%. With profiling you can get an MPR of 10%-20%.
>[...] and a good branch prediction unit can give 90% correct predictions.
Let's say an average one. (At best)
A two-level adaptive scheme (T. Yeh, Y. Patt) delivers a MPR of 3%. Hybrid Branch Predictors deliver even better results.
>We then need to add the pipeline length
The penalty is not always the whole pipeline length. For the P4 the pipeline has 28 stages, but only 19 have to be flushed (8 are needed for the trace cache).
So let's review your calculation:
>So this leads us to need a pipeline flush every 45 instructions (on average)
20% branches x 10% MPR means to me 2% pipeline flushes. How do you arrive at one every 45 instructions?
I'd say something more like this: conditional branch instructions: 20% (your guess is as good as mine). Pipeline flush probability: MPR 10% = 2E-2; MPR 5% = 1E-2; MPR 3% = 6E-3.
So, at least according to my estimation, the P4 actually has not an 18% penalty relative to the PPC970, but only one of 6%.
> multi-tasking
Umm, you are running something at 1GHz for something like 10ms, so you'll have 10M instructions. So the penalty for a cold BTB is most probably negligible. Otherwise, you're probably IO-bound anyway, and the CPU will be the least of your problems.
The reason for better (or worse) performance most probably lies somewhere else. Actually, the increase of other pipeline hazards may be one of them. How long instructions take is another one. (Well, for a RISC processor a (non-FP) instruction takes 1 cycle, but for x86...) Not to mention caches and memory.
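The back-of-envelope model both posters are using fits in a few lines of C; plug in your own branch fraction, misprediction rate, and flush depth (the numbers below are the thread's, not measured):

    /* Cycles lost to mispredictions, per the thread's simple model:
     *   extra cycles per instruction = branches x MPR x flush_depth */
    #include <stdio.h>

    static double mispredict_cpi(double branches, double mpr, int flush_depth)
    {
        return branches * mpr * flush_depth;
    }

    int main(void)
    {
        /* P4: ~20% branches, 10% MPR, 19 flushed stages (per the posts above) */
        printf("P4 estimate:    %.2f extra cycles/instr\n",
               mispredict_cpi(0.20, 0.10, 19));
        /* Same pipeline with a Yeh/Patt-style 3% MPR */
        printf("P4 at 3%% MPR:   %.2f extra cycles/instr\n",
               mispredict_cpi(0.20, 0.03, 19));
        /* A short 8-stage pipeline at the same 10% MPR */
        printf("Short pipeline: %.2f extra cycles/instr\n",
               mispredict_cpi(0.20, 0.10, 8));
        return 0;
    }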
This is an IBM chip. The fabled G5 is the next generation chip from Motorola, Apple's current supplier of G3 and G4 chips. It seems Motorola is aiming the G5 squarely at the embedded market, either because Apple already decided to go with IBM or Motorola did not feel the development effort was worth designing for Apple's needs.
The 'G5' will be whatever chip Apple slaps on their next 'big' processor upgrade. The G3, G4, G5 designations have nothing to do with the chips themselves or their model numbers. They're just spin that Apple uses to compete with the Pentium 3, Pentium 4, etc. lineup. Apple could decide to throw AMD Hammers in their next-generation systems and would still call the chip the 'G5'. Ignorant consumers are unlikely to perceive any performance improvements in models unless there is some underlying technology that gets a new name or a new version number. It's like model years in cars: the 2002 has a higher number than the 2001 model, so it MUST be better, and people drool over it.
The people that are spouting about the G5 being Motorola have forgotten or never realized that the G3 is an IBM chip. Apple could call this G5 or anything else they wanted.
Open up the hardware, because I believe they could give Micro$oft a run for their money...
Microsoft is not Apple's competitor. In many ways, Apple depends on Microsoft. Having Office v.X for Mac OS X is a good thing and all, but Apple has also spent considerable time and effort developing features that allow OS X systems to integrate into Windows workgroups easily; they even fully integrated Samba into OS X in Jaguar!
Apple's competitors are Dell, HP, and other PC makers. If Microsoft were to evaporate and those companies were all to start selling only PCs running Linux tomorrow, you can bet your bottom dollar that Apple would start running new Switch ads by Thursday morning.
Because of these facts, Apple would be committing corporate suicide if they were to, as you say, "open up the hardware." (What you mean by this, of course, is to allow competitors to build computers that can run Mac OS X.) Apple is successful only in proportion to the number of Macs it sells. If other companies sold Macs, or Mac clones, Apple would be less successful, not more.
Apple being dependent on MS does not mean MS is not a competitor to Apple. MS is every bit a monopolizing competitor for Apple (or Apple wouldn't be losing market share in education to Dell just because it's a PC in a largely PC world).
MS markets. MS competes. MS dominates. Apple copes because it would be suicide otherwise.
Perhaps you meant MS is an investor in Apple? They're still competing, but MS is invested in Apple. How does that work? MS kept Apple alive so that it would have a survivor in the PC market to compete against (you see, MS is also "competing" with the Justice Department).
> MS is invested in Apple. How does that work? MS kept Apple alive so that it would have a survivor in the PC market to compete against
Actually this is very far from the truth. Microsoft bought approximately $150 million of Apple's non-voting stock. This was a drop in the bucket, since Apple had several billion dollars in the bank at the time. In fact the deal was more in Microsoft's favor since Apple's stock was on the rise. You can read all about the original deal in this article at pcmag.com [pcmag.com]
Over the past few years Microsoft has sold most, if not all, of the $150 million in stock and has made a handsome profit on it. Right now Microsoft has very little stake, if any, in Apple as a stockholder. That being said, the 5 year co-development deal between Apple and Microsoft came to an end this past summer and now it is open season between the two. They both say that they will continue to cooperate for their mutual benefit but you can already see some of the signs of the fierce competition showing.
There aren't 2 architectures... they're all PowerPC chips, thus the same instruction set... and the new IBM chips are also 32 bit compatible, so they will run code from current PPC chips without a recompile.
No need. The PowerPC 32bit ISA is a subset of the 64bit version. 32bit apps run in 32bit address space perfectly happily.
You only need to recompile if you need to see the full 64bit address space.
Oh, and don't worry about AltiVec. The AIM alliance jointly developed the Vector SIMD extensions. Apple calls the unit Velocity Engine, Moto uses AltiVec and IBM calls it VMX.
>You can run the same binaries on a 80486 that you can on a Pentium 4.
Not true. If you use MMX or SSE2 instructions then it'll barf on the 486. I imagine there are other new things on the P4 that code *can* use that aren't available on the 486.
Of course a common workaround is to ship both kinds of code (with new features, and without) in your binary and let it decide at runtime / install time which to use. I imagine that some folks also ship source and decide at runtime which code to compile.
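A hedged sketch of that runtime-dispatch idea in C, using GCC's __builtin_cpu_supports (other compilers spell the CPUID check differently; the fast path here is just a stand-in, not real SSE2 code):

    /* Ship both code paths, pick one when the program starts. */
    #include <stdio.h>

    static void copy_plain(char *d, const char *s, int n) { /* 486-safe path */
        for (int i = 0; i < n; i++) d[i] = s[i];
    }

    static void copy_fast(char *d, const char *s, int n) {  /* stand-in for an SSE2 path */
        for (int i = 0; i < n; i++) d[i] = s[i];
    }

    static void (*copy_impl)(char *, const char *, int) = copy_plain;

    int main(void)
    {
        if (__builtin_cpu_supports("sse2"))  /* decided once, at runtime */
            copy_impl = copy_fast;
        char src[8] = "hello", dst[8];
        copy_impl(dst, src, 8);
        printf("%s\n", dst);
        return 0;
    }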
The first iteration of these chips, if Apple buys from IBM, is likely to be desktop-only. Only once Apple is content with the power consumption and heat emission will we see these in portables. Also, Apple might want to sell a line of towers before they start selling portables with these.
I know this doesn't answer the question, but there isn't really much in the way of specs at the moment.
If they put it into one of those sexy Titanium PowerBooks, they've got themselves a convert. Woot! I would love to be able to afford one.
More likely they will start in the Xserve. The server crowd is much more likely to be able to use 64-bit and much more likely to be able to afford the new chip.
urr (Score:5, Funny)
I predict that Apple will use the chip in a high end personal computer.
Apple Employee Reads Slashdot (Score:5, Funny)
Wow! That's an even better idea!
Re:Apple Employee Reads Slashdot (Score:5, Funny)
What other rational explanation is there?
my favorite line (Score:5, Insightful)
Then, to be redundant, Intel should face up to the fact that most users have no need for 2.8GHz processors.
Re:my favorite line (Score:5, Funny)
>Then, to be redundant, Intel should face up to the fact that most users have no need for 2.8GHz processors.
Ah Grasshopper! You've obviously never tried to compile the KDE source tree.
where are they going to use it?!? (Score:4, Funny)
There are so many options:
hmmm.... (Score:2, Funny)
>SELECT * FROM users WHERE clue > 0
0 rows returned
Go back to SQL school... (Score:2)
Re:Go back to SQL school... (Score:5, Funny)
1 rows affected (0.01 sec)
Re:Power 4, here we come (Score:2, Funny)
and then... imagine a Beowulf cluster of these...
Re:only 1.8 GHz? (Score:3, Funny)
Mac 3600+, etc.
Critics (Score:5, Funny)
Critics -- notably Microsoft -- have argued that most desktop users have no need for more than 640kb of memory.
"Corporate rock still sucks. What are you gonna do about it?"
Still not confirmation! (Score:5, Insightful)
Okay, actually read the stories. "According to industry sources..." is what it says. Nowhere is there confirmation from Apple or IBM that Apple has comitted to purchasing them. This is not new, this is just the same news as the last story, only centered on one specific rumor, instead of the main story.
As soon as Apple or IBM officially states that Apple has committed to purchasing these processors, don't title the story 'Apple is Buyer...' since we still aren't sure.
Yeah, I'll admit, I've been expecting it since IBM announced the chip, and I fully expect that Apple will be the main customer. BUT, my belief (or the belief of any 'industry source', without hard proof) doesn't make it a fact.
I'm not asking that you not to rumormonger on it, I'm just asking that it not be presented as fact when it is still just rumor.
(Bah, and now I've forfietted three of my moderator points by posting in a thread I moderated in... :-( It just got me pissed off when I finally noticed that there still isn't any proof.)
Apple to buy, but not use (Score:5, Funny)
"We're doing great with the iPod, the warehouse is totally empty," said Apple VP Phil "All your Jaguar" Schiller. "Steve thought it would look more lived-in if we had some big boxes of stuff in there."
Steve Jobs was hard at work developing a new way to mispronounce the name of the new CPU and was unavailable for comment.
Should compete with Pentium 4. Even at 1.8GHz. (Score:5, Informative)
Plus, it's gotta run cooler than a 6GHz P4 would. As a laptop owner, ignoring the superior performance potential of this chip, the cooling and power requirements alone would make me choose a 970 architecture over a Pentium.
Re:Should compete with Pentium 4. Even at 1.8GHz. (Score:4, Interesting)
as for the laptop part, hell yeah. my tibook by the end of 2003 should be nearing the end of it's "useful lifespan" - whatever that is, and i'll probably sell it for half of what i bought it for then and buy the latest, greatest "G5" laptop once it's avalible. that's the plan, at least. i'm in college after all.... and apple has a tendancy to take forever to release a new laptop based on a new processor design.
Re:Should compete with Pentium 4. Even at 1.8GHz. (Score:4, Interesting)
Part of what's at stake here is how many instructions are decoded/dispatched each clock cycle and then other factors like branch-prediction and such muddy the waters a bit more. In the end, the 'instructions per cycle' is really more of an average than anything else, as not every instruction will be a candidate for sending through the parallel functional units, etc. Taking into account the efficiency of the branch-prediction unit is important, too, since you could take a wrong turn and have to clear out all your functional units, at every stage of the pipeline and start over again, in certain circumstances. The fewer times this happens, the more effective your CPU will be at pushing the bits around.
Bottom line: modern processor mechanics are far more sophisticated than can be easily summarized by any one number or neat phrase. Just ask AMD about that one
Re:Should compete with Pentium 4. Even at 1.8GHz. (Score:4, Informative)
Let's go back to basics, every time the processor makes a mistake in guessing what's going to happen next, the pipeline has to be cleared. Every modern CPU faces this problem so you want short pipelines so your penalty is low. Intel has vastly longer pipelines and thus they pay a higher price every time predictive branching screws up.
So having a large number of instructions being simultaneously worked on is a *bad* thing unless they are also being pumped out and executed in large numbers as well. AFAIK, in the P4 they aren't.
According to Ars Technica [arstechnica.com] the P4 in the real world gets 2.5 instructions per cycle done. With the new G5 getting 8 done per cycle with half the pipeline depth, performance should once again favor the Mac side of the PC wars.
Re:Should compete with Pentium 4. Even at 1.8GHz. (Score:4, Interesting)
The P4 processes instructions in a pipeline. The pipeline can contain 20 instructions at any one time, but each instruction is only finished once it exits the pipeline.
Same goes for the 970, I'd imagine.
To truly increase instructions per cycle, you have to add extra pipelines (and a lot of extra circuitry to prevent instructions from stepping on each other).
If pipelines were always full and all instructions were equivalent, the P4 would beat the pants off the 970. But the pipeline is not always full, because instructions often depend on the results of other instructions, and not all operations are equal in their requirements.
So shorter pipelines often handle instruction dependencies better, resulting in better performance, while (for other reasons) longer pipelines are easier to drive to higher clock speeds.
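To put rough numbers on that trade-off, here's a toy model in C that charges every mispredicted branch a full pipeline flush. Every input is a made-up illustration value, not a spec for any real chip:

#include <stdio.h>

/* Toy model: effective instructions per cycle (IPC) for a superscalar CPU.
 * CPI = 1/issue_width + branch_frac * mispredict_rate * pipeline_depth,
 * i.e. the ideal issue rate plus a full-flush charge per mispredicted branch. */
static double effective_ipc(int issue_width, int pipeline_depth,
                            double branch_frac, double mispredict_rate)
{
    double cpi = 1.0 / (double)issue_width
               + branch_frac * mispredict_rate * (double)pipeline_depth;
    return 1.0 / cpi;
}

int main(void)
{
    /* Same issue width and predictor, different depth (hypothetical numbers). */
    printf("20-stage pipe: %.2f IPC\n", effective_ipc(6, 20, 0.20, 0.05));
    printf("10-stage pipe: %.2f IPC\n", effective_ipc(6, 10, 0.20, 0.05));
    return 0;
}

With those inputs the short pipeline wins (3.75 vs. 2.73 IPC) even though both hypothetical chips issue the same number of instructions per cycle.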
Re:Should compete with Pentium 4. Even at 1.8GHz. (Score:3, Insightful)
The IBM processors are RISC; the Intel ones are CISC. RISC chips do less per instruction, so comparing them the way you do is meaningless.
Re:Should compete with Pentium 4. Even at 1.8GHz. (Score:3, Informative)
Re:Should compete with Pentium 4. Even at 1.8GHz. (Score:3, Interesting)
The number of parallel units and parallel execution is a very important factor in some performance tests - the original Pentium could do either int/float or MMX and required a context switch to flip between them, while AltiVec could run at the same time as int and float operations (and multiple AltiVec instructions - I think 3 - could be processed at once). Alas, Motorola was slow out of the gate, delaying the G4 multiple times, and Intel released its parallel-capable SIMD around the same time (if not first) and had kicked performance well above the Motorola chips shortly afterward, which mostly made up for the aforementioned flaws.
Also, I believe some of the extra non-general-purpose registers are used for context switching by the processor in PPC systems, whereas Intel's chips grab this information from L1 cache. I don't know if this is true for newer chips, though (even circa the Pentium 2).
Reminds me... (Score:3, Funny)
This reminded me of an exchange from the animated series "Freakazoid," when Douglas Douglas received a long-coveted computer chip for Christmas.
"Can I put it in, Mom?" he asked.
"Okay, but only in your computer."
(Well, I thought it was just as silly...)
Well, Duh. (Score:5, Interesting)
"Critics -- notably Intel -- argue that most desktop users have no need for 64-bit processing. In fact, Microsoft Corp. has yet to release a 64-bit version of Windows that will run on AMD's Hammer chips."
Is it any wonder, given they just lost their defense against Intergraph's patent lawsuit which may result in them not being able to release the Itanium series?
Hey, Intel, last I checked, no one had a use for 32-bit processing or 640K of RAM on the desktop, either.</sarcasm>
from the horse's mouth (Score:5, Informative)
Arg? (Score:5, Insightful)
It's like he never even thought about what he wrote. Someone conveys the thought that marketing hype may be costing you money, but let's ignore that and perpetuate the marketing hype.
On the other hand, the "Megahertz Myth" is marketing hype aimed at opposing marketing hype, so who really cares what either Apple or Intel offers as the "fastest"?
My PowerBook G3 runs just fine; my Pentium III runs just fine. If you need the power, go for it, but if you don't, go refurbished.
Just my opinion.
AltiVec repercussions? (Score:4, Insightful)
I remember part of the reason Apple went with Motorola G4s was the AltiVec engine. Back when Motorola and IBM split, they forked the PowerPC chip (the then-current G3), and the two companies' definitions of the chips diverged slightly.
Motorola's definition of the G4 was a faster chip with the AltiVec engine. This is what allows for superfast processing during heavy floating-point calculations (similar to MMX, only phatter). This was also the part Apple was talking about when they used to advertise "twice as fast as a Pentium PC," because during those moments of super-intense number crunching, they were. IBM's definition of the G4 was a chip made with copper, shorter pipelines, things like that. How is the switch to an IBM chip going to affect AltiVec? Since it's Motorola technology, I think it's safe to assume it won't be on the IBM chip. Will the IBM chip suffer at all during those slowdowns? Or will the extra 32-bit data path, in conjunction with copper, etc., be more than enough to make up the difference?
Re:AltiVec repercussions? (Score:3, Interesting)
Yeah, it was The Register (slowly... remembering...)
The new IBM chip had the same number of instructions as AltiVec, and when some digging was done, the name of the instruction set was the same as the generic name for AltiVec (also mentioned in the Register article; verified at IBM's & Motorola's sites?)
I'm sorry I haven't posted links, but I gotta grab lunch before I get pulled aside for troubleshooting again.
Need? (Score:4, Insightful)
And IBM said no one needed the power of the 80386. Then Compaq released their 386 monster and IBM stopped mattering in the PC world.
Re:Need? (Score:4, Interesting)
The difference is that we have had plenty of 64-bit processors aimed at the lower end, and they just don't work out there. It is too expensive to bring 64 bits in from RAM to cache when the average variable has fewer than 8 significant bits. Hence the packed words of the VLIW Itanium.
Back when my job description included developing code for the Alpha and the Pentium, just paging in the larger 64-bit code killed the speed advantage of the Alpha chip.
Re:Need? (Score:3, Insightful)
What a load of bullshit!
First, what's this crap about 64-bit processors not working? There are plenty of MIPS, Alpha, Sparc and PowerPC based 64-bit systems that work just fine. Aren't the current crop of Nintendo game consoles powered by a 64-bit MIPS? How much more low-end do you need to go?
Second, what's this crap about most variables having less than 8 significant bits? Most variables have a minimum of 8 significant bits. The average length of a character string is in the 8-12 byte range (64-96 bits!) and integers and pointers are all (at least) 32-bits wide in modern systems (Windows, MacOS, and all unices).
Third, what's this crap about it being "too expensive" to transfer 64-bits of data in from RAM? All modern processors have 64-bit wide data busses and transfer data in 4-beat blocks (meaning 4*64-bits, or 256-bits at a time). This is true for the Pentium as well, at least since the Pentium II!
Finally, what's this crap about "paging in 64 bit code"? Just because Alpha (or Sparc, MIPS, or PowerPC) has 64-bit wide data registers doesn't make the code any bigger! Both the 32-bit and 64-bit variants of Sparc, MIPS, PowerPC and PA-RISC use the same size instruction words (32 bits), so there is no difference in code size.
Whatever your job description might have said, you clearly don't know what you are talking about.
Good Power 4 Review.. (Score:3, Informative)
http://www.digit-life.com/articles/ibmpower
+1 insightful (Score:2, Insightful)
Re:+1 insightful (Score:5, Informative)
Re:+1 insightful (Score:5, Interesting)
Re:+1 insightful (Score:5, Interesting)
I think it's quite silly of HP to say that "IBM's Power4 architecture is outclassed in performance". Really? A 10% difference qualifies as outclassed? I don't agree. And the POWER4's SPECint score is better. "Outclassed?" Hah.
Of course the proof is in the pudding. Let's see what actually hits the streets. Apple has now been "just around the corner from really kicking Wintel's butt" performance-wise for about 8 or 9 years, but it has yet to happen. We were all led to believe that the PPC would blow away x86, and that never happened. With luck, IBM will actually deliver a really kick-ass CPU at clock speeds close to the x86 family, and the superior per-clock performance will actually make it faster.
But there would still be the question of price/performance. If Joe PC Buyer can buy a faster PC for the price of a Mac, it doesn't matter that the Mac runs cooler, or at a lower clock speed, or in 64-bit mode. Joe will just say "my $500 PC is faster than your $1500 Mac, end of story," and he will have a good point. Until that changes, the only people who care are those willing to pay a premium for OS X and the Mac experience, and those who need something faster than the fastest desktop PC but still want a user-friendly mainstream desktop OS. The folks who use Office and Outlook all day won't be able to justify the extra $1000 or whatever it would cost to get a Mac that performs similarly.
I'd also like to remind everybody that benchmarks don't necessarily reflect real-world performance. This is a very synthetic benchmark that is great for telling you what the best-case raw CPU performance of these CPUs can be, but it doesn't prove that $REAL_APP will see those performance gains over older CPUs.
In particular, it's not clear what the performance cost of using code compiled for a PPC604 would be vs. using code compiled with the very best compiler for the POWER4. I'm sure Steve Jobs will crow about another highly optimized Photoshop benchmark that we can all wish represented overall system performance, but it doesn't. That said, I imagine that the really important professional creative apps (you know, the ones that cost thousands of dollars per seat and really beat the @%$%@$% out of the CPU) will be quickly updated for the new CPUs, because their customers will demand it. (To be fair, the same is true for the Itanium.)
SPEC CPU is not meant to be a synthetic benchmark (Score:3, Interesting)
SPEC int2000 [spec.org] consists of gcc, gzip, perl, bzip2, crafty (a Free chess engine), and some other stuff. I happen to be interested in building a computer to run crafty fast, so it's very handy to have good benchmark results for it on recent AMD and Intel CPUs. (Athlons kick P4 butt on crafty, probably because of bit shifts and things like that that P4 is slow on.) Many people would find the gcc, perl, and compression benchmarks interesting when buying a *NIX workstation.
SPEC fp2000 [spec.org] includes Mesa, but only doing software rendering. The other programs are mostly scientific computing apps. (Not just synthetic matrix multiplies or things like that.)
Re:1.8ghz..... (Score:5, Interesting)
As for the GHz issue, the chip does more per-clock than the P4. This means that it can still be competitive. Just wait another day for the MPF, and maybe we'll be able to see some initial SPEC numbers.
I think you'll be pleasantly surprised.
Re:1.8ghz..... (Score:4, Informative)
The Power line from IBM has that kind of displacement. You don't need GHz - or at least not as many - to get a lot of torque out the back end. And of course, once you have torque, you can work on the revs. As we've all seen, higher revs come with improvements in production technique and are a given, but more torque (i.e., more and better logic on the die) takes a strategic investment and some amount of risk. I'll take a big-bore Dodge Viper over this year's higher-revving econo Tondabishi any day.
Re:1.8ghz..... (Score:2, Interesting)
Bad analogy. A 3L F1 engine puts out in excess of 850HP, courtesy of a high-tech design that can run in excess of 18,000 RPM. The modern 3.5L engine in the Altima puts out 240HP, versus the 220 or so horsepower of a 5L Mustang engine from just a few years ago.
Re:1.8ghz..... (Score:3, Informative)
5GHz of spinning its wheels is worth a fraction of what a few GHz of actual work is worth.
Last I checked, the G4 had a 4-stage pipeline and the P4 had a 20-stage pipeline. Although the P4 at full efficiency can really move, the extra stages make it very hard to ever run the chip at its full rating, since each instruction must be scheduled so it won't interfere with the other 19 operations already in flight.
Add to that a growing lack of interest in GHz, as even the lowest-powered machines are amply powered to run a word processor or spreadsheet.
There are many other points to argue (like 32-bit vs. 64-bit processing), but I don't think 1.8GHz will hurt Apple in the least - especially given the history of their 500MHz machines outperforming 1GHz Intels.
Re:1.8ghz..... (Score:4, Informative)
Re:1.8ghz... Ignoring Pipeline Length (Score:3, Redundant)
Well, as I wanted to indicate, "average" is very vague. SPECint has 14-16% conditional branches; SPECfp has 3-12%.
>On the first run [...] 50% prediction rate
Not really. Statistical prediction and profiling can be applied. Roughly 85% of backward branches are taken (loops), and 60% of forward branches are taken. With predict-taken you get a misprediction rate (MPR) of roughly 35%. With profiling you can get an MPR of 10-20%.
>[...] and a good branch prediction unit can give 90% correct predictions.
Let's call that an average one, at best.
A two-level adaptive scheme (T. Yeh, Y. Patt) delivers a MPR of 3%. Hybrid Branch Predictors deliver even better results.
>We then need to add the pipeline length
The penalty is not always the whole pipeline length.
For the P4, the pipeline has 28 stages, but only 19 have to be flushed (8 are needed for the trace cache).
So let's review your calculation:
>So this leads us to need a pipeline flush every 45 instructions (on average)
20% branches x 10% MPR means 2% pipeline flushes to me. How do you arrive at one every 45 instructions?
I'd say something more like this:
Conditional branch instructions: 20% (your guess is as good as mine).
MPR 10% = 2E-2 pipeline flush probability
MPR 5% = 1E-2
MPR 3% = 6E-3
Some Guesses:
MPR: P3 5%, P4 3%, PPC970 3%
Pipeline penalty: P3 10, P4 19, PPC970 10
Overhead: P3: 10%, P4: 12%, PPC970 6%
So, at least according to my estimation, the P4 does not carry an 18% penalty relative to the PPC970, but only one of 6%.
> multi-tasking
Umm, you are running at something like 1GHz for something like 10ms, so you'll execute on the order of 10M instructions. The penalty for a cold BTB is therefore most probably negligible. Otherwise, you're probably I/O-bound anyway, and the CPU will be the least of your problems.
The reason for better (or worse) performance probably lies somewhere else. The increase in other pipeline hazards may be one factor; how long instructions take is another. (Well, for a RISC processor a non-FP instruction takes 1 cycle, but for x86...) Not to mention caches and memory.
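For what it's worth, the whole estimate above collapses into one line: overhead = branch fraction x MPR x flush penalty. A minimal C sketch using the guessed inputs from this thread (none of them measured):

#include <stdio.h>

/* Fraction of cycles lost to branch mispredictions:
 * overhead = branch_frac * mispredict_rate * flush_penalty_stages.
 * All inputs are the guesses from the post above, not measurements. */
static double flush_overhead(double branch_frac, double mpr, int penalty)
{
    return branch_frac * mpr * (double)penalty;
}

int main(void)
{
    const double branch_frac = 0.20; /* guessed share of conditional branches */

    printf("P3:     %4.1f%%\n", 100.0 * flush_overhead(branch_frac, 0.05, 10));
    printf("P4:     %4.1f%%\n", 100.0 * flush_overhead(branch_frac, 0.03, 19));
    printf("PPC970: %4.1f%%\n", 100.0 * flush_overhead(branch_frac, 0.03, 10));
    return 0;
}

That prints 10.0%, 11.4%, and 6.0% - the 10%/12%/6% figures above, before rounding.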
Re:How does this relate to the G5? (Score:3, Informative)
Re:How does this relate to the G5? (Score:2)
Re:How does this relate to the G5? (Score:5, Informative)
Sorry, but you're wrong. IBM currently supplies Apple with the G3s (for the iBook). Motorola only supplies Apple with the G4s.
The Gn style of naming is Apple's doing. Motorola and IBM use names like PPC 750 or PPC 7440.
If Apple uses this chip in the future, you can bet it will be called the G5 (if they decide to keep that naming convention).
Re:How does this relate to the G5? (Score:2)
Re:How does this relate to the G5? (Score:5, Informative)
Ignorant consumers are unlikely to perceive any performance improvements in new models unless some underlying technology gets a new name or a new version number. It's like model years in cars: the 2002 has a higher number than the 2001 model, so it MUST be better, and people drool over it.
Re:How does this relate to the G5? (Score:3, Insightful)
Re:How does this relate to the G5? (Score:4, Informative)
Uhm, what are you talking about? This IS a PowerPC chip. It uses the PPC instruction set and is backward compatible with the 32-bit G3 and G4.
Re:If only they would... (Score:5, Insightful)
Microsoft is not Apple's competitor. In many ways, Apple depends on Microsoft. Having Office v.X for Mac OS X is a good thing and all, but Apple has also spent considerable time and effort developing features that allow OS X systems to integrate into Windows workgroups easily; they even fully integrated Samba into OS X in Jaguar!
Apple's competitors are Dell, HP, and other PC makers. If Microsoft were to evaporate and those companies all started selling only PCs running Linux tomorrow, you can bet your bottom dollar that Apple would start running new Switch ads by Thursday morning.
Because of these facts, Apple would be committing corporate suicide if they were to, as you say, "open up the hardware." (What you mean by this, of course, is to allow competitors to build computers that can run Mac OS X.) Apple is successful only in proportion to the number of Macs it sells. If other companies sold Macs, or Mac clones, Apple would be less successful, not more.
Re:If only they would... (Score:3)
MS markets. MS competes. MS dominates. Apple copes because it would be suicide otherwise.
Perhaps you meant that MS is an investor in Apple? They're still competing, but MS is invested in Apple. How does that work? MS kept Apple alive so that it would have a surviving competitor in the PC market (you see, MS is also "competing" with the Justice Department).
Yes, business is weird. Get over it.
Re:If only they would... (Score:5, Informative)
Over the past few years Microsoft has sold most, if not all, of the $150 million in stock and has made a handsome profit on it. Right now Microsoft has very little stake, if any, in Apple as a stockholder. That being said, the 5 year co-development deal between Apple and Microsoft came to an end this past summer and now it is open season between the two. They both say that they will continue to cooperate for their mutual benefit but you can already see some of the signs of the fierce competition showing.
Re:If only they would... go out of business (Score:3, Funny)
Many people don't get the strange position they're in, where from one point of view they use the Mac OS to sell their hardware.
If they 'opened up' the hardware, they would quickly run out of resources to develop the OS, and the whole thing would fall apart.
Re:Does anyone know.... (Score:2)
Re:Will this require application rewritting? (Score:4, Informative)
You only need to recompile if you need to see the full 64-bit address space.
Oh, and don't worry about AltiVec. The AIM alliance jointly developed the Vector SIMD extensions. Apple calls the unit Velocity Engine, Moto uses AltiVec and IBM calls it VMX.
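To see why most code recompiles cleanly, note that under the LP64 data model used by most 64-bit Unix systems (an assumption about what a 64-bit OS X would do, not something the 970 itself dictates), only longs and pointers widen while int stays 4 bytes. A trivial C check:

#include <stdio.h>

int main(void)
{
    /* Typical ILP32 build: 4/4/4 bytes.  Typical LP64 build: 4/8/8.
     * Code that assumes sizeof(int) == sizeof(void *) needs fixing
     * before a 64-bit recompile; most everything else just works. */
    printf("int:    %zu bytes\n", sizeof(int));
    printf("long:   %zu bytes\n", sizeof(long));
    printf("void *: %zu bytes\n", sizeof(void *));
    return 0;
}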
Re:Will this require application rewritting? (Score:2)
Not true. If you use MMX or SSE2 instructions then it'll barf on the 486. I imagine there are other new things on the P4 that code *can* use that aren't available on the 486.
Of course a common workaround is to ship both kinds of code (with new features, and without) in your binary and let it decide at runtime / install time which to use. I imagine that some folks also ship source and decide at runtime which code to compile.
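A minimal sketch of that runtime-dispatch trick, assuming GCC on x86 (__builtin_cpu_supports is a GCC extension, and the two encode functions here are hypothetical stand-ins for the SSE2 and plain code paths):

#include <stdio.h>

/* Hypothetical stand-ins for two separately compiled code paths. */
static void encode_sse2(const float *in, float *out, int n)
{
    (void)in; (void)out; (void)n; /* ...SSE2 implementation here... */
}

static void encode_plain(const float *in, float *out, int n)
{
    (void)in; (void)out; (void)n; /* ...plain x86 implementation here... */
}

typedef void (*encode_fn)(const float *, float *, int);

/* Pick the fastest implementation the CPU actually supports, once. */
static encode_fn pick_encoder(void)
{
    return __builtin_cpu_supports("sse2") ? encode_sse2 : encode_plain;
}

int main(void)
{
    float in[64] = {0}, out[64];
    encode_fn encode = pick_encoder();
    encode(in, out, 64);
    puts(__builtin_cpu_supports("sse2") ? "dispatched to SSE2 path"
                                        : "dispatched to plain path");
    return 0;
}

The same idea would work on a 970: check for AltiVec/VMX at runtime (OS X exposes this through sysctl, if memory serves) and fall back to scalar code on a G3.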
Re:Power Consumption (Score:2)
I know this doesn't answer the question, but there isn't really much in the way of specs at the moment.
Re:Well... (Score:4, Insightful)
More likely they will start in the Xserve. The server crowd is much more likely to be able to use 64-bit and much more likely to be able to afford the new chip.
Re:Well... (Score:5, Funny)
Re:Well... [NOPE] (Score:3, Interesting)
Yes... they are both Mach, but not quite the same.
I wouldn't call the Mach that Tru64 is based on the same kernel as the other two, either [it's Mach 2.5, I think].
Re:It hasn't been said, (Score:3, Funny)
Imagine clustering for anybody.