New Power Macs Have Crippled DDR Memory? 82

eggboard writes "According to Rob Art Morgan, who has tested this, the new Power Macs from Apple that use DDR (double data rate) memory -- like the Xserve rack-mount unit -- cannot access the memory any faster than the cheaper and slower SDRAM found in the previous system architecture. A controller limits the data rate to 1 GB/s, while DDR could work more than twice as fast. Unfortunately, this makes mincemeat of the architecture, as it bus-/memory-bounds 2D and 3D graphics and rendering."
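For a rough sense of the gap the summary describes: peak memory bandwidth is just bus width times clock times transfers per clock. The figures below (PC133 SDRAM versus DDR on a 64-bit, 166 MHz bus) are illustrative assumptions for the sketch, not specs quoted from the article:

```python
# Peak memory bandwidth = bus_width_bytes * clock_hz * transfers_per_clock.
# Bus figures below are assumptions (PC133 SDRAM vs. DDR on a 64-bit, 166 MHz bus).

def peak_bandwidth_gbs(bus_bytes, clock_mhz, transfers_per_clock):
    """Peak transfer rate in GB/s (1 GB = 1e9 bytes)."""
    return bus_bytes * clock_mhz * 1e6 * transfers_per_clock / 1e9

sdram = peak_bandwidth_gbs(8, 133, 1)   # PC133 SDRAM: ~1.06 GB/s
ddr   = peak_bandwidth_gbs(8, 166, 2)   # DDR, double-pumped: ~2.66 GB/s

print(f"SDRAM ~{sdram:.2f} GB/s, DDR ~{ddr:.2f} GB/s")
```

On these assumed numbers, DDR's peak is roughly 2.5x the old SDRAM's, which is why capping the effective rate near 1 GB/s draws complaints.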
This discussion has been archived. No new comments can be posted.

  • More information (Score:4, Informative)

    by go-low ( 149672 ) on Friday August 16, 2002 @12:04PM (#4082987)
    http://www.macosrumors.com/ has a similarly unfavourable article
  • by DLWormwood ( 154934 ) <wormwood&me,com> on Friday August 16, 2002 @12:09PM (#4083012) Homepage
    I've already discussed this on MacSlash, but the problem is that the G4 processors currently shipped with Macs don't support DDR memory via a direct connection.

    The closest Moto has gotten is an 8xxx-series "G5" processor that supports a RapidIO interconnect. However, this new processor, despite the existence of demo units dating back years, is still effectively vaporware. My understanding is that Apple is backing an interconnect technology called HyperTransport instead.

    Any insiders willing to clarify or correct this? Motorola's current financial state is distressing, especially since I live near where they are based. All those layoffs...
    • Rumour is that MOT is still pissed about the "Mac Clone" fiasco - they were expecting a huge increase in G3 production due to more and different Macs being sold, and invested accordingly. Apple killed that dream, and MOT hasn't been too eager to invest in anything Apple needs.

      Just a rumour though. File it away in the round-file.

      • Uh, actually, most of the clones were PPC 601/604 models. The OS licensing/clones debacle was over by the time the G3 came out. I'm not saying that Motorola's not still peeved about the way the licensing went down, just that the G3 wasn't involved.
        • You're quite correct.

          The clones were older PowerPC based - however, they were all going to transition to the G3 sooner rather than later; some even went so far as to include a G3 daughter card in order to get around Apple's obnoxious legal department. See here for more info [everymac.com].

          Not only that, but MOT had its own Mac clone that got squashed - though it was a small endeavour compared to the rumoured G3 ramp-up.

          More info. [wired.com]

          According to the article, this move by Apple cost MOT $95 million; who knows how much money MOT wasted on the G3?

          • by Golias ( 176380 ) on Friday August 16, 2002 @06:12PM (#4086004)
            Except the total number of G3's didn't really go down with the death of cloning. In spite of the claims to the contrary in Power Computing ads, the clone market utterly failed to grow the Mac OS market.

            Besides, how many G3's would MOTO have sold if Apple went bankrupt? If cloning had continued, they would be gone by now.

      • Rumour is that MOT is still pissed about the "Mac Clone" fiasco

        Yes, and before that the whole company ran on Macs (both Apple and Mot)... after the clone fiasco they started switching to PCs.

    • The closest Moto has gotten is a 8xxx series "G5" processor that supports a RapidIO interconnect. However, this new processor, despite the existence of demo units dating back years, is still effectively vaporware.

      Motorola already has a G5 out... the MPC8560: PowerQUICC III Integrated Communications Processor.

      It's an embedded processor, and not very fast either (600 MHz - 1 GHz), but this is of interest:

      The PowerQUICC III also offers two integrated 10/100/1000 Ethernet controllers, a DDR SDRAM memory controller, a 64-bit PCI-X/PCI controller, and a RapidIO(TM) interconnect.

  • by Walker ( 96239 ) on Friday August 16, 2002 @12:18PM (#4083086)
    This is not new with the Power Macs; it is true of the Xserve as well. This has been well discussed on the Ars Technica forums. Please read

    before drawing any conclusions from this article.

  • I don't exactly blame Apple for this. I mean, some people might perhaps consider it deceptive (if it is accurate), but the way Motorola has been dragging its feet for the past several years has put Apple in a really tough position. They can just barely get Motorola to squeeze out enough improvements that the hardware is usable. Everybody has been telling Apple they need to move to DDR, and meanwhile old man Motorola just lets Apple down again and again.

    Maybe this move is partly meant to twist Motorola's arm - this is how it's gonna be with RAM, so they should get their act together?

    I actually just hope that Apple ditches Motorola altogether and lets IBM do their thing.
    • Perhaps this is why the speculation [com.com] about moving to the Intel-ish platform. As long as they do their usual good job with integration, it's a good idea.
      It's a shame, though, that they didn't use plain SDRAM and make the system cheaper... unless there's something in the works for a plug-in replacement CPU supporting faster memory throughput?
      • Re:Motorola (Score:2, Insightful)

        by Dephex Twin ( 416238 )
        I think Apple is definitely positioning themselves to be capable of moving to Intel quickly if needed. But I have a feeling they will go IBM if at all possible. The PPC hardware is superior to Intel's hardware, and I think Apple wants to stick with this sign of quality.

        Then if that falls through somehow, they'll be ready to move to Intel.

        Just my feeling on it.
        • Personally, if they go with any PC chip company other than IBM, I have a feeling that Apple would choose AMD. Not for any technically minded reasons, but simply because of image. For a long time (and still today) Intel has been associated with Windows, and having the logo that says Intel Inside implies that a machine is PC compatible.

          If Apple went with Intel, they would piss off the rabid fans, the PC community, and the anti-Intel people, and they would cause severe issues for new users.

          But that's just me talking.
          • You make a good point, and I must say, if I imagined Apple switching to a PC chip company, the thought of AMD sits much better with me. Also, I get the feeling that AMD is overall better quality than what Intel puts out.

            Haven't heard any rumors about that though, surprisingly.
        • I completely agree. No one seems to have mentioned it yet, but I'll chime in. Apple has moved their Quartz rendering engine from a very CPU-specific (highly AltiVec-optimized) design to OpenGL. Now they just make deals with the driver writers to highly optimize these OpenGL drivers for whatever processor they're using. They've managed not only to offload a ton of processing to the GPU, but at the same time to move their windowing system to a CPU-independent model. At one time Quartz seemed to be the one big piece of the puzzle that would be difficult to move to Intel/AMD/etc., and that no longer seems to be the case.
      • of COURSE they're ramping up for new processors. I mean, they'll eventually get the new G4/G5 that actually SUPPORTS the RAM, but for now they're laying the groundwork needed for the newer processors.

        If I may do a bit of rumourmongering, I'd like to say that we may expect a quad-processor G5, but with the G5 running at, say, 800 MHz. I mean, we may not have superhigh frequencies on our processors (or multiplier values such as 21.5), but you can be sure we'll have the most processors we can get in there!
  • Server? (Score:5, Funny)

    by SteveX ( 5640 ) on Friday August 16, 2002 @12:34PM (#4083214) Homepage
    My rackmount server is going to suck at 3D games. Crap.
  • by Gil Da Janus ( 586153 ) on Friday August 16, 2002 @01:00PM (#4083426)
    are really badly done.

    The base configs of each machine are NOT listed.

    The base OS configs of each machine are NOT listed.

    The combined running configs - i.e., size of objects, optional software (especially 3rd-party apps and GUI players), etc. - are NOT listed either.

    Guess what: each of the above, without running a single line or click of a benchmark, can help in determining the outcome.

    I'll wait to see how bad or good the new machines are - but I can tell you in advance, the old dual 1GHz machines and the new ones are not identical at all in the CPU area.

    Some folks have to learn to read and understand specs before jumping up and down and screaming.

    Just my 2 cents, from the peanut gallery here in NY

    Gil

    • Well, the real question is why Apple doesn't submit SPEC benchmarks on well-defined configurations. Instead, they just keep people in the dark and rely on marketing for claims of "supercomputer performance".
  • by AHumbleOpinion ( 546848 ) on Friday August 16, 2002 @01:30PM (#4083682) Homepage
    The DDR is underutilized only for CPU-based operations. DMA and AGP-based operations will get a boost from DDR.
  • by tbmaddux ( 145207 ) on Friday August 16, 2002 @01:45PM (#4083850) Homepage Journal
    A controller limits the data rate to 1 GB/s, while DDR could work more than twice as fast. Unfortunately, this makes mincemeat of the architecture, as it bus-/memory-bounds 2D and 3D graphics and rendering.

    The data rate between CPU and RAM is limited to 1.3 GB/s. However there is still more than 1.3 GB/s of bandwidth for the GPU (AGP 4x which goes at about 0.5 GB/s), DMA calls from hard disks, etc. So graphics and rendering are not strictly bus-limited, as the GPU can never fully stress the bus. Furthermore, the GPU wasn't tested in the BareFeats benchmarks!

    Also, don't forget that the L3 cache on the new 1GHz Macs is only 1 MB, not 2 MB as it was in the previous 1GHz Macs (and as it remains in the 1.25 GHz Macs).

    All these benchmarks teach us is that CPU-limited tasks like those posted at BareFeats are not a good test of the added throughput between the system controller and RAM. We need to see benchmarks that stress all of the throughput, not just the portion between CPU, controller, and RAM.

    • The design is about acquiring an incremental gain in performance, and seems to follow the Jaguar theme of enhanced multitasking and anti-spinlocking. This rev speeds overall system throughput compared to the QuickSilver by allowing other bus masters to access RAM without stealing cycles from the CPU.

      What I haven't heard anyone talking about is some of the groundwork laid out for later, when they can remove the CPU bottleneck. Some of the more interesting features of the Xserve architecture are Intervention [apple.com] and Write Combining [apple.com]. Funny, the things revealed by a little research...

      I'll keep my QuickSilver 933 for now. Jaguar promises good performance gains, and that's worth the $129 if I save about one hour for one client (or worth about $500 if Jag gives me an extra hour of "free" time).

    • I think the "1 GB/s" was taken from a comment in the original review on Bare Feats. There was an "explanation" posted from a reader that made the bogus claim that the memory bandwidth was limited to 1 GB/s even in the models with a 166MHz system bus. I notice that the revised Bare Feats report has removed that whole section.

      Anyway, the thing that stands out about these benchmarks is that the new dual 1GHz has a system bus (and memory bandwidth) that is 25% faster than the old one, yet this made no discernible difference in these particular benchmarks. This isn't surprising considering that these are CPU-intensive tests, but the bizarre thing is the number of people who are claiming that this somehow proves that the DDR implementation is useless, a fraud, etc. etc. They seem to think these benchmarks would improve dramatically if the DDR bandwidth was passed on to the CPUs.

      This isn't logical. Why would a real 25% improvement in memory bandwidth have no influence at all on the benchmarks, yet a 100% improvement, which would come from a "real DDR" implementation, suddenly make a big difference?

      The results of these particular benchmarks would be the same because they are CPU-intensive tasks and are therefore bound by CPU speed. (That is, for these tasks much more time is spent in CPU processing than in reading or writing memory.) There's no magic change Apple could make in other parts of the system that will make 1GHz CPUs process faster than 1GHz.

      That isn't to say that the 166MHz system bus and the DDR implementation Apple is using aren't advantageous in general system usage; it is just to say that these benchmarks will not reveal those advantages.

      - Dennis D.
  • by h0tblack ( 575548 ) on Friday August 16, 2002 @01:46PM (#4083856)
    So we have an article at the generally iffy barefeats which misses a few important points; this is then compounded by the comment "cannot access the memory any faster than the cheaper and slower SDRAM", which misses the mark even further. It's a real shame when these things spread around as "the truth", especially somewhere like Slashdot.
    Yes, the new motherboards are not full DDR; this is mainly because the processors available from Motorola cannot handle DDR FSBs, and therefore a full DDR motherboard. This is a shame, but it is far from crippled DDR RAM. Many early DDR x86 motherboards were the same: only the RAM was DDR, not the full motherboard and processor FSB. While this does mean there is still a bottleneck (in certain tasks) between the processor and other components, there are advantages to having DDR RAM.
    The tests at barefeats use purely CPU-limited operations, which will obviously show no real improvement as there has been no CPU or bus change (although the new 1GHz procs have only 1MB of DDR L3 cache versus the old 2MB, and a 167MHz-bus version is available). What DDR RAM will help with is when a variety of components (CPU, HD, network, AGP, PCI, FireWire, etc.) are all vying for valuable memory bandwidth. It's in these 'real-world' situations that we will see a performance increase. If you just run single-process, purely CPU-intensive tasks then maybe these machines aren't for you, but if you run a lot of stuff at the same time, or anything that uses CPU, HD, AGP, etc. intensively and concurrently, then you should see an improvement. Things like Quartz Extreme will be throwing a LOT of data at the AGP bus; with DDR RAM this won't have to wait its turn while, say, your CPU is busy grabbing all the bandwidth. I'd say many users have a lot of HD, CPU, GPU and network activity going on simultaneously, especially 'power' users.
    Hopefully we'll see some more benchmarks that show a variety of tasks being performed on these new machines once more people (and more reputable sites) get hold of them. While not fulfilling everyone's dreams, I'm sure that the statements about the DDR RAM additions being a "waste" or "crippled" will be shown to be entirely false.
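The "many bus masters contending for memory" point can be sketched numerically. The per-device figures here are rough assumptions chosen for illustration (they are not from the comment or from Apple's specs):

```python
# Rough, assumed peak demands (GB/s) of bus masters contending for RAM.
consumers = {
    "CPU over SDR FSB": 1.3,
    "AGP 4x GPU traffic": 1.0,
    "disk DMA": 0.1,
    "gigabit Ethernet": 0.125,
}
demand = sum(consumers.values())  # ~2.5 GB/s of aggregate demand

# Assumed peak capacities: PC133-class SDRAM vs. a DDR memory system.
for capacity, label in [(1.06, "SDRAM-class memory"), (2.7, "DDR memory")]:
    verdict = "saturated" if demand > capacity else "has headroom"
    print(f"{label}: {demand:.2f} GB/s demanded vs {capacity} GB/s -> {verdict}")
```

The point of the sketch: a single CPU-bound benchmark never exercises the aggregate demand column, which is exactly where DDR's extra capacity would show up.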
    • And even for typical CPU/memory intensive tasks I really don't think the G4 CPU is FSB bandwidth limited.

      I've seen a lot of comments on various web sites about the new 1GHz machines bemoaning their lack of a DDR front-side bus. Though I too am a little disappointed, I think everyone has got sucked into the Apple marketing distortion field. I'm also disappointed to see comments on a few sites saying "clearly" the dual CPUs in the "Wind Tunnel" G4s are FSB bandwidth limited.

      These claims require proof, and the proof just isn't there. The best counter-argument I've seen so far was a comment on xlr8yourmac.com

      http://xlr8yourmac.com/archives/aug02/081402.html#S14256

      where a user reported:

      1) A quick check shows it to be 3 (and a bit) times the performance of my 667 MHz G4 system (7450 processor). It scaled linearly (e.g. 2 * (1000/667)) despite the improved memory system. [BTW - the new dual 1GHz has 1MB DDR L3 per CPU, vs 2MB DDR L3 cache per CPU with the dual 1GHz Quicksilver model] The FSB is clearly SDR from the documentation and performance. The memory system is DDR. I need to run more tests.

      Hmm that "clearly" word again.

      Well, the 7455 bus is still SDR, but the thing to note in this report is that performance "scaled linearly" with clock speed. As both machines use similar CPUs (the 7450 in one, the newer 7455 in the other; there are no large changes in the CPU design), the conclusion we draw from this is that the CPU is *not* memory I/O bound (i.e. FSB bandwidth bound). If it were, the increase in performance would be less than 2 * (1000/667) times. So running both CPUs flat out doesn't saturate the memory bus (and all the usual other traffic is kept off the internal bus by the I/O controller if it is moved by DMA transfers).

      It also implies that for most applications (the tester doesn't describe the tests they used) the 1MB-per-CPU cache is sufficient, and its loss doesn't hurt much.

      I'd like to see more measurements done to confirm this hypothesis, but it looks like magically speeding up the bus won't cause CPU throughput to improve dramatically. The way to check (if anyone has the chance) is to run some CPU- and memory-bound applications on all three models (and try to correct for the different cache sizes and FSB speeds); if you see a close-to-linear relationship, then the CPUs are certainly not held back by the front-side bus.

      And h0tblack has a point that DMA will get more of a workout with Quartz Extreme though I suspect it will be less than he expects (most of the stuff should be in the GPU VRAM for compositing and anything that gets there will have to be worked on by the CPU to some extent at least once).
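The linear-scaling argument above can be restated as a quick back-of-the-envelope check (the "observed" figure is the rough "3 (and a bit) times" from the xlr8yourmac report, not a precise measurement):

```python
# A purely CPU-bound task speeds up in proportion to (number of CPUs x clock).
ideal_speedup = 2 * (1000 / 667)   # dual 1 GHz vs. single 667 MHz: ~3.0x

# The xlr8yourmac report saw roughly "3 (and a bit) times" -- essentially the
# ideal ratio. If the FSB were the bottleneck, the observed speedup would
# fall noticeably short of this number instead of matching it.
print(f"ideal CPU-bound speedup: {ideal_speedup:.2f}x")
```

Matching the ideal ratio is consistent with the comment's conclusion that the tested workloads were not starved by FSB bandwidth.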
      • by stux ( 1934 ) on Saturday August 17, 2002 @02:57PM (#4089712) Homepage
        Running fully optimized AltiVec code, all G4s are currently memory bound for most operations.

        It is really, really hard to keep a dual 1GHz machine fed when a single instruction (taking a single cycle) can process 16 bytes of information.

        If you had a simple filter, for example a blur... that could be executed in perhaps 10 cycles...

        which would require 3.2GB/s of bandwidth to run at full speed (1.6GB/s in, 1.6GB/s out), and on a dual... 6.4GB/s (which happens to be the bandwidth on that new IBM PowerPC ;))

        The current bus can only provide 1.3GB/s

        Which means this filter would run at 40% of the full speed...

        If it's running on two CPUs, then it's going to run at 20% of full speed.

        This means DDRing the bus would double the performance... but you'd still only be running at 40% of full speed.

        AltiVec generally converts almost any relatively complex operation into a memory benchmark.

        Since altivec is used for the most time critical parts of a program, faster memory would allow these time critical parts to run x times faster...
        Anywho, when it takes only two cycles to multiply 16 values by another 16 values, then add another 16 values, and saturate the result (something which would maybe take 80 cycles without AltiVec), memory bandwidth becomes the limiting factor. (For those counting, that's a 40x improvement - the equivalent of a 40GHz chip if it was running scalar code.)

        It's even worse on 7450s because the AV unit can execute multiple instructions concurrently.

        G4s *ARE* memory constrained, I'd say even seriously.

        Small benchmarks will not expose this as they'll almost always run out of L3, or even L2 (L1!) cache.

        BUT real world operations normally work on massive data sets...

        (be it video, audio, 2D, 3D, genetic sequences, or just your window being composited with a menu)

        Incidentally, the speed improvements from AltiVec can generally be worked out as 4x, 8x, or 16x for most uses, depending on whether you can use 8-, 16- or 32-bit math. Some operations can make use of tricks AltiVec can do and scalar can't, which allows speedups of 32x (or even more).

        Running a highly optimized calculation which is NOT memory bound we've managed to come up with some interesting numbers ;)

        The algorithm was highly optimized for MMX and AltiVec.

        Running on a single G4/500, with many other applications running, etc., the calculation was over twice as fast as the same calculation on an Athlon 1.3GHz. The G4 has a 100MHz bus, the Athlon has DDR266, but it doesn't matter because the process is not memory bound.

        This means it took 15 mins on the G4/500 and 30 mins on the Athlon/1300.

        (The Athlon was running NOTHING else.)
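stux's blur-filter arithmetic can be reproduced directly (a sketch of the numbers given in the comment: a hypothetical 10-cycle filter, 16 bytes read and 16 written per iteration, a 1 GHz clock, and a 1.3 GB/s bus):

```python
clock_hz = 1e9          # 1 GHz G4
cycles_per_iter = 10    # hypothetical blur filter from the comment
bytes_per_iter = 32     # 16 B streamed in + 16 B streamed out per iteration

# Bandwidth needed to keep one CPU fed at full speed, in GB/s.
needed = clock_hz / cycles_per_iter * bytes_per_iter / 1e9   # 3.2 GB/s
bus = 1.3                                                    # available GB/s

single = bus / needed        # ~0.41 of full speed on one CPU
dual = bus / (2 * needed)    # ~0.20 with two CPUs sharing the bus

print(f"needs {needed:.1f} GB/s; runs at ~{single:.0%} single, ~{dual:.0%} dual")
```

Even doubling the bus (a full DDR FSB) would only lift the single-CPU fraction to about 0.8, which matches the comment's point that AltiVec turns such filters into memory benchmarks.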
  • Apple's pro iron today is quite expandable. If a new IBM chip arrives that can handle DDR memory as it should, your investment in a new Power Mac will pay off in spades with an upgrade from places like PowerLogix and Sonnet. I just upgraded a now-3-year-old Power Mac G3 Blue and White (the first Macs with the current pro chassis) from a 350MHz G3 to a 550MHz G4. With OS X on it, this system rocks... and now I'm reluctant to sell it as planned.

    In any case, the new systems are still a great buy. It's a UNIX box, folks. More processors mean more processes. At least the systems aren't SLOWER. I take the benchmarks from Bare Feats with a grain of salt. As the saying goes, your mileage may vary. I'm betting these systems will rock when the Mac version of Jedi Knight II shows up.
  • why Apple gets all of its chips from Motorola instead of IBM? Is there some kind of contractual obligation, or does Moto just come out with new chips first? AFAIK, IBM has much more manufacturing prowess (first with copper, first with 0.13, higher yields, etc., etc.) and it seems silly for Apple to limit itself to one supplier that has had consistent manufacturing problems ever since G4s went above 800 MHz...
    • by LenE ( 29922 ) on Friday August 16, 2002 @04:11PM (#4085129) Homepage
      IBM won't license Altivec from Motorola, so they can't make the "G4" Chips that Apple wants and needs. Actually, if I remember right, it was an Apple Engineer who came up with Altivec, and Motorola implemented it because they could also benefit from it.

      Unfortunately, the AIM alliance partners seem to have increasingly divergent needs from the partnership. IBM wanted PowerPC for servers, and sees Altivec as a gaudy tack-on to their architecture. They still produce the "G3" chips, at ever higher and higher clock speeds. Apple can't use them, though, because of the MHz myth. IBM's stance towards Altivec appears to be weakening though with their upcoming chip.

      Motorola wants PowerPC for embedded stuff, and Altivec makes it easy to do DSP like functions in a general purpose processor.

      Apple needs the PowerPC for everything but the iPod. They need Altivec to make MacOS X so cool for consumers and scientists. Since IBM won't license it, they are stuck with the only producer, Motorola.

      It's times like these that I wish there were some truth to the old rumors about Apple buying Motorola's PowerPC fabs. If that were the case, Apple could produce the exact chips that Apple needs, not what IBM or Moto wants. Unfortunately, there isn't any indication that this would be profitable or feasible for them.

      -- Len
      • Just a little addition about "IBM's stance towards Altivec appears to be weakening though with their upcoming chip", namely, The Register [theregister.co.uk] has something to say about it.
      • For the record: Apple does use IBM chips. Every G3 they ship is an IBM G3, including the 700 MHz models, which are, I think, the top of IBM's line right now. My money's on the "G5" being a version of the Power4 chip, and Apple and Moto going their separate ways.
      • Wrong, wrong, wrong (Score:5, Informative)

        by Aapje ( 237149 ) on Saturday August 17, 2002 @06:18AM (#4088326) Journal
        Altivec was created in a joint effort between Apple, IBM and Motorola. They all have patents on the thing, but use different names for it. Motorola owns the name Altivec, Apple the name Velocity Engine and IBM used to call it VMX. AFAIK they all have the right to use the instruction set (but not a name or specific implementation they don't own). The reason why IBM didn't use it was because they didn't see the use of a vector processing unit (in the past). Of course, the G4, Pentium III & IV and the Athlon have shown the usefulness of a vector unit and IBM has changed their stance. The best proof is of course the new 64 bit PowerPC. It has a vector processing unit which almost certainly is Altivec (although they won't use that name). The Power5 will probably contain a vector processing unit as well.

        It is clear that Motorola and Apple have grown apart. Apple has had big problems with them for a long time and has looked at other options. They didn't choose to buy Motorola's awful fabs, and it's too late to do that now (nor is it smart). No, credible rumors point to IBM. It makes a lot of sense:
        - IBM wants to sell more low-range (Linux) servers, so they already need a fast desktop CPU. Why not sell it to someone else as well?
        - Apple has a lot of experience with Altivec, it makes sense to work with them to produce this chip (Apple employs some very smart chip-designers).
        - Altivec is a respected instruction set. It's proven to work (no need to reinvent the wheel on a risky venture). Tools are available. GCC supports it (and since their servers will run Linux...).

        In 6-9 months we'll probably have a 64 bit PowerMac that is very competitive. I can't wait.
        • In 6-9 months we'll probably have a 64 bit PowerMac that is very competitive. I can't wait.

          I'm sorry, but I think that this is a wildly optimistic time frame. The chip won't even be discussed at a conference for another two months, and you expect it to be shipping in a Mac remade for 64-bit 4-7 months after that? I don't think it will be happening any time next year.

          • I agree that I may have been a bit optimistic. However, if the processor is taped out before/at the time of the conference, it may very well ship 6 months later. Unfortunately I don't have any reliable info on the new chip's schedule. I hope the conference will tell us when to expect the beast.
      • They still produce the "G3" chips, at ever higher and higher clock speeds. Apple can't use them though because of the MHz. myth.

        By "MHz myth", you must be referring to the fact [heise.de] that G4's are not performing much faster than a Pentium of similar clock speed.

        Apple makes nice machines and software, but they really do need to get their act together on performance.

        • "Flamebait"? Come on, contribute something constructive. Heise's applications of the SPEC benchmarks show that G4's are not performing much faster than a Pentium at similar clock speed. If you believe that those benchmarks are wrong, describe your reasoning. So far, I have not seen a single criticism of Heise's benchmarks that would alter those conclusions.
  • IIRC the northbridge chip in all new world macs is on the daughtercard and can be replaced along with the CPU. These boards could be next-generation-CPU-ready.

    The benchmarks are also poor. They appear to mainly be CPU dependent, not memory bandwidth dependent.
  • Alright, that website told a nice story, but it isn't true. The logo was designed at Regis McKenna Inc., a Silicon Valley business consulting firm that at the time also had an internal graphic design house. They did much of the work for Apple Computer's business and marketing strategy, especially for the introduction of the Lisa and the Macintosh.

    The six-color logo was inspired by a series of print posters made for the Ford Motor Company; think late-seventies design.

    How do I know this? I used to work in MIS there, and we had piles of old Macs, including lots of the really early models. Alas, they no longer do consulting for Apple, nor do they use Macintoshes anymore. They held out until 1998 though, so don't bag on them too much.
  • are not a good measure for memory performance! All these benchmarks are essentially CPU-intensive, or CPU-bound benchmarks. Even if the RAM is 1000x faster, it will hardly show on them. It's so easy to lie in a benchmark...
  • Why does the Apple world keep spinning in its own little universe when it comes to benchmarks? We have the SPEC benchmarks, industry-standard benchmarks that are acceptable for a wide variety of processors and applications. It's not only Intel and AMD that compare their processors with it, it's all the RISC manufacturers as well. In fact, SPEC started before Pentium architectures even appeared on the scene. SPEC isn't perfect--no benchmark ever is--but people have a reasonably good idea of how to interpret them.

    The only major manufacturer that seems to be missing official SPEC results is Apple. Instead, we get bogus and irreproducible benchmarks like Photoshop and Bryce, both from Apple and from benchmark sites like these.

    Why? Is Apple afraid of backing up their claims of "supercomputer performance" with actual facts? Unofficial SPEC benchmarks have shown the G4 not to be all that much faster than a Pentium with a similar clock speed.

    • OK, the problem with the SpecInt and especially the SpecFloat benchmarks is that the code can't be optimised for a particular processor architecture. Unless you have an auto-vectorising compiler, no SPEC benchmark will use the Altivec unit on G4s.

      A G4's integer performance is acceptable, but its floating-point unit is sadly lacking. Altivec is the only reason that G4's scream at floating-point math. With a vector processing unit that handles floats like that, you don't need a really strong general-purpose floating-point unit. The only annoying thing missing from Altivec is support for double-precision floats.

      If GCC were made to auto-vectorise code for Altivec, then the SPEC benchmark results on the G4 would increase dramatically. Then you might see something closer to the real picture, and they might be worth Apple publishing.
  • Does anyone know if these benchmarks test memory throughput at all? I have no idea what the Photoshop "MP" action file test is, or why it's there twice, so I'm mainly curious about that. I can't imagine that MP3 encoding or rendering in Bryce aren't completely CPU-bound.
