Gigahertz Mac Finally SPEC'd
FrkyD writes "c't magazine published a story with the results of a test they designed, using a Mac OS X adaptation of CPU2000, the benchmark suite from the Standard Performance Evaluation Corporation (SPEC). SPEC allows comparisons with the Intel competition to be made within a common framework.
They compared the G4/1 GHz running Mac OS X with a PIII/1 GHz (Coppermine) running Windows and Linux."
So. they are fast, but not WHOA fast.... (Score:1)
Now, where are the PowerPC chips made on IBM's new process and running at 40 GHz?...
Re:So. they are fast, but not WHOA fast.... (Score:1)
It all comes down to application (Score:3, Insightful)
Having said that, there will always be applications that are optimized enough to kick some butt on a G4, like Photoshop. If you are a programmer, it is nice not to be starved for registers, and a RISC CPU gives you plenty. Choose the right tool for the right job. If it comes down to a push, then use your favorite. :P
Re:It all comes down to application (Score:1, Offtopic)
Linux vs. Windows (Score:3, Interesting)
With a SPECint_base value of 306, Apple's 1 GHz machine under Mac OS X ran almost head to head with the equally clocked Pentium III under Linux and GCC, which scored a SPECint_base value of 309. Under Windows, the poor quality of Microsoft's run-of-the-mill compiler pushed the system down to a SPECint_base value of 236.
That means Linux is over 30% faster than Windows!
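For what it's worth, the arithmetic behind that figure, using nothing but the SPECint_base scores quoted above, is a one-liner (my own sketch, not from the article):

```python
# SPECint_base scores quoted in the article (same PIII/1 GHz hardware)
linux_gcc = 309      # Linux + GCC
windows_msvc = 236   # Windows + Microsoft's compiler

# Relative advantage of the Linux/GCC result over the Windows result
speedup = (linux_gcc / windows_msvc - 1) * 100
print(f"Linux/GCC advantage: {speedup:.0f}%")   # roughly 31%
```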
Too bad they didn't give similar floating point numbers (or at least I didn't find them in the article), especially seeing as how the Mac is faring so poorly against the Linux PIII in that area.
Re:Linux vs. Windows (Score:4, Insightful)
Of course it doesn't. It means that GCC is somewhat better at compiling the SPECint_base benchmark than Visual Studio is.
I won't pretend to be educated about the inner workings of SPECint, but one would suppose that, because it's purported to be a hardware benchmark rather than an OS benchmark, it is completely independent of the standard C library, or any other OS-level service. One would expect the compiled benchmark to just run pure code inside the CPU, without any system calls or any of that stuff.
So the same benchmark compiled with the same compiler but run under two different OSs should return exactly the same result, within a certain statistical margin.
Somebody with more time on their hands could either test this hypothesis, or confirm that it's already been done by somebody else.
Re:Linux vs. Windows (Score:2)
Of course, since everything on a standard Linux machine (kernel, libraries, X, etc.) is compiled with GCC, while I imagine everything Windows runs is compiled with Visual Studio's compiler, if GCC consistently turns out faster code than Visual Studio does, we could extrapolate that Linux is significantly faster than Windows.
Re:Linux vs. Windows (Score:2)
No, you couldn't. Code written in C is generally faster than code written in Java. However, surely you've seen a C program do something slower than a Java program: compare starting up Nautilus with running a hello-world program in Java. The example is extreme, but the point holds. There is a big difference in how a Java hello world and Nautilus do things, and the same goes for Linux and Windows. The speed of the C compiler says absolutely nothing about the amount of abstraction, the amount of bloat, the amount of optimization, the sheer quantity of code, or the quality of the code in either Linux or Windows, among plenty of other factors.
Re:Linux vs. Windows (Score:3, Insightful)
Maybe you could, if GCC could be thus characterized. But there's no evidence in this article that points to that conclusion. Rather, this article says that GCC did a better job of compiling the SPEC benchmarks. As everybody knows-- or should know-- benchmarks are to real applications as fish are to bicycles.
Re:Linux vs. Windows (Score:3, Interesting)
Snide comments on "supercomputer" show bias (Score:5, Informative)
No one ever claimed that the FPU alone on the G4 was at supercomputer status, just that the G4, in conjunction with Altivec, could crunch FLOPS at "supercomputer" speeds.
Keep in mind that OS X is hardly optimized for this kind of test. OS X has just recently reached the point where it is useful as a general purpose platform. But Apple is making a big push in the scientific computing area so I expect that you will find vast improvements in the SPEC FP suite in the future.
Re:Snide comments on "supercomputer" show bias (Score:1)
Re:Snide comments on "supercomputer" show bias (Score:3, Informative)
As far as I understand the problem, Altivec is very fast but only handles single-precision floats, not doubles.
While single-precision floats are largely sufficient for multimedia processing (filters, compression, etc.), number crunching is generally done in double precision, and the floating-point tests of SPEC reflect this. You don't always need doubles for scientific calculations, but that is altogether another discussion.
Maybe one day we'll see a multimedia component of SPEC, or Altivec will support double-precision numbers (the author even mentions this at the end of the article), but until then Altivec is out, and this has nothing to do with any bias on the author's part.
As for OS X being optimised for this kind of stuff, we are talking about applications that almost never call the OS for anything, so the impact of OS X is probably nil. The truth is, floating-point calculation is not really important for most users, and both Intel and PowerPC processors are optimised for integer calculation. There was a good article about this [arstechnica.com] on Ars Technica.
One reason I could see to explain the large difference lies in the compiler: there has been much more work on gcc to optimise for the Intel instruction set than for the PPC instruction set. Like most RISC processors, the performance of a PPC processor is hugely influenced by the compiler.
Re:Snide comments on "supercomputer" show bias (Score:1)
I don't know much about how these benchmarks are written or how the compilers actually generate FP code but if they use a standard OS library that isn't particularly optimized then that would show up in the SPEC FP tests.
SPEC doesn't just measure CPU speed, it measures it in conjunction with the complete system that is being used to run the test. Unless they've changed their charter, this was always acknowledged by the SPEC consortium.
I would love to know what kind of impact OS X has on the benchmarks. Has anyone done the equivalent study using Yellow Dog Linux?
Re:Snide comments on "supercomputer" show bias (Score:1)
Never call the OS??? (Score:1)
That doesn't sound right. Most Unix systems, OS X included (and NT, FYI), don't allow direct hardware calls. You can only access system resources through operating system APIs.
DOS is the only system I know of that lets you access the hardware directly. (I think NT lets you access the graphics hardware directly too. However, that has nothing to do with this test.)
Vanguard
Just make your compiler produce Altivec then! (Score:1, Interesting)
The rules are simple: You can do anything you want to your system, compiler, libraries, optimization flags, but you are NOT allowed to touch the code.
This is *GOOD* since it means any optimization introduced by the hardware vendor or compiler authors will benefit all programs, not only hand-tuned assembly.
So, it's completely OK to use vector processing (and some of the benchmarks would benefit from it), but you must do it in the compiler rather than hand-tune each executable.
Of note to GNU and Linux Users (Score:2)
gus
Processor only test (Score:3, Insightful)
It might be interesting to see a comparison with Linux running on both machines... Anyone have one of these?
which libraries? (Score:1)
I don't see that info here.
Is it possible that unoptimized libraries like libm would hobble the Mac's results under OS X?
and what version of Darwin?? (Score:1)
FYI: Good articles on P4 v G4e architecture.... (Score:4, Informative)
Part I [arstechnica.com].
Part II [arstechnica.com].
my own experience (Score:3, Interesting)
Even a low-end PC these days ($700 or so) will run Windows FAST, whereas Apple's low end runs OS X slowly.
Most of the Mac's "speed problems" lie in the OS, not the hardware. Linux on the iBook described above flies.
Re:my own experience (Score:2, Insightful)
I was actually quite surprised by the poor performance of the G4. Although, if the other posters are correct that SPEC doesn't test single-precision floats or SIMD very well, it's probably not a very good test of the capabilities of the CPU. Of course, Apple has crippled their systems with PC133 memory. That doesn't help either.
Re:my own experience (Score:2, Informative)
Re:my own experience (Score:2, Informative)
My own experience is that a 300 MHz G3 will blow a 500 MHz Pentium out of the water; that's running Mac OS 9.
System configurations matter, memory matters, &c.
Re:my own experience (Score:1)
OS 9 is not a fair comparison - it's more like Windows 98 than 2000. Mac users who want a stable system must use OS X. The equivalent PC operating system is Windows 2000.
Windows 2000 is far faster than OS X with the same amount of memory.
Assembler anyone? (Score:1)
It's not even reasonable to take readings when you KNOW your data will be inaccurate. Sheesh. Anyone who can code VB will call themselves a "computer scientist" these days...
Why a P3? (Score:3, Interesting)
Wouldn't a P4 be a better test?
Re:Why a P3? (Score:2)
Re:Why a P3? (Score:1)
In case you don't read the article... (Score:3, Informative)
Buried in this article is this note: "and switched off the second supporting processor of the dual machines." That means the dual 1 GHz machines were run as single 1 GHz machines only, and would therefore be much faster in the real world, so cost comparisons should be made accordingly.
Not necessarily (Score:1, Flamebait)
Re:Not necessarily (Score:1)
Re:Not necessarily (Score:1)
Now, whether the SPEC benchmarks would reflect that is another question entirely. Not knowing anything about how they are written, I can't make any predictions. However, if they are simply single-threaded tests, then, no, they won't show much of an improvement on the dual systems.
Comparing sci/pro app execution speeds (Score:1)
For the 'silver standard' DNA sequencing application BLAST, Apple claims that a dual 1 GHz G4 sequences up to 5 times as fast as a 2 GHz Pentium 4. (Sounds like we may have at least a fair RISC:CISC speed advantage there). Can't complain about the digital video rendering speed either. Mathematica seems to kick @ss. And then there's Maya... yas....
Remind me again why these G4 boxes are too expensive for the professional working scientist or media producer. Say, can anyone spec me a cheaper dual P4 box with built-in 1000Base-T for cluster or render-farm building?
Warning: Failure to religiously upgrade your PC's Red Hat installation may be an early sign of OS X intoxication.
same OS would have been better (Score:1)