
Paradox's Journal: Micro-rant #1
So, here's a question from an annoyed Mac user. Why are so many people willing to believe AMD's (excellent) Opteron chip has great performance potential despite its less-than-impressive benchmarks? GamePC mentions in their review that the Opteron has better real-world performance than "synthetic" performance, and their various real-world timed tests seem to bear this out.
No one calls them on this. People cite the report and nod and agree and buzz about how cool the Opteron and the upcoming Athlon 64 will be.
But when Apple says this about the G5s, when they are in the EXACT SAME SITUATION, no one seems to believe them. Their processors "suck" and are "just slow."
Further, if you take the normalized-compiler issue into account, the G5s do BETTER than even the Opterons at keeping up in SPECmarks. So if IBM improves the quality of their compiler and tweaks the auto-vectorization to work well, we may see some absurdly large "real-world" speed increases. But it may not raise SPEC at all.
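To make the auto-vectorization point concrete, here's a minimal C sketch, purely my own illustration and nothing from GamePC or IBM's toolchain, of the kind of loop an auto-vectorizing compiler could turn into AltiVec/VMX code on a G5:

    /* Illustrative only: a simple loop that an auto-vectorizing compiler
       could compile down to AltiVec/VMX instructions, handling four
       floats per instruction instead of one. Whether it actually does
       depends entirely on the compiler and its flags. */
    #include <stddef.h>

    void scale_and_add(float *dst, const float *a, const float *b,
                       float k, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = a[i] * k + b[i];   /* same operation on every element */
    }

The source code doesn't change at all; only the compiler's ability to recognize the pattern does, which is exactly why compiler quality can move "real-world" numbers without the hardware changing.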
Which leads me to my conclusion: synthetic benchmarks that only test one aspect of a processor or machine are useless. They're pointless for any real comparison, because you can't test every situation.
This won't stop people from holding Apple to a double standard, though.
Cannot understand CPU performance (Score:1)
The G3 was a 64-bit chip. The G4 was a 128-bit chip. The G5 was the first 64-bit chip. But how can the G3 and G4 not be 64-bit if they were?
I think now that the G3 and G4 being 64- and 128-bit had something to do with moving chunks of data, while the G5 being 64-bit is related to memory addressing, where the G3 and G4 were only 32-bit. I don't want to understand these measurements.
I don't need to worry about primitive 32-bit and 64-bit computers. The iMac that I use has 64 megabytes, that is 536870912 bits.
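For the record, the 64-/128-bit figures for the G3 and G4 describe how wide a chunk of data the chip can move or crunch at once, while the G5's 64 bits describe how much memory it can address. Here's a small illustrative calculation in plain C, not specific to any of those chips:

    /* Illustrative arithmetic only: RAM size versus address width. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long ram_bytes = 64ULL * 1024 * 1024; /* 64 MB of RAM     */
        unsigned long long ram_bits  = ram_bytes * 8;       /* = 536870912 bits */
        unsigned long long addr32    = 1ULL << 32;          /* 32-bit addressing
                                                               reaches 4 GiB    */

        printf("64 MB = %llu bytes = %llu bits\n", ram_bytes, ram_bits);
        printf("32-bit addressing tops out at %llu bytes\n", addr32);
        /* 64-bit addressing covers 2^64 bytes, far beyond any installed RAM. */
        return 0;
    }

So the amount of RAM in a machine and the width of its addresses are unrelated numbers: a 32-bit machine can address at most 4 GiB, while a 64-bit one can address vastly more.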