Factual 'Big Mac' Results 566
danigiri writes "Finally, Varadarajan has put some hard facts on the speed of the VT 'Big Mac' G5 cluster. Undoubtedly after some weeks of tuning and optimization, the home-brewed supercluster is happily rolling along at 9.555 TFlops in LINPACK.
The revelations were made by the parallel computing voodoo master himself at the O'Reilly Mac OS X conference. It seems they are expecting an additional 10% speed boost after some more tweaking. Srinidhi received a standing ovation from the audience.
Wired News is also running a cool piece on it. Lots of juicy technical and cost details not revealed before. Myth-dispelling redux: yes, VT paid full price; yes, it's running Mac OS X Jaguar (soon Panther); yes, errors in RAM are accounted for; and Varadarajan was not an Apple fanboy in the least... read the articles for more booze."
Re:Full price (Score:2, Insightful)
The x86 cluster would have been twice as expensive. And this outperforms the highest-ranking x86 cluster, which has more processors.
Super computer? (Score:3, Insightful)
Full Price? WHY?!? (Score:5, Insightful)
This is disgraceful! Hundreds of Macs on one purchase order, and they couldn't (or chose not to!) negotiate a deal? The Virginia taxpayers should be outraged! Good grief, if I bought 600 loaves of bread from the corner market, I'd expect a discount. Perhaps they were more interested in making the press than in being good stewards of the public trust. After all, the college knows the taxpayers will have to pay the bills, sooner or later.
Shameful.
Simply amazing (Score:3, Insightful)
Damn!
Too slow/expensive (Score:3, Insightful)
So he went full price with the G5 ($3000 apiece) and for only $5.2 million has the number 3 slot and is shooting for a 10% boost.
Re:price (Score:2, Insightful)
Open source the code (Score:3, Insightful)
This is in addition to consulting where they are helping others build similar clusters.
Re:Full Price? WHY?!? (Score:3, Insightful)
That VT wasn't able - or didn't think - to do the same is pretty shocking. A savings of $330,000 isn't anything to sneeze at.
Re:Too bad some software patents will be filed (Score:5, Insightful)
The patent system needs to be overhauled, then maybe we can start opening up the Universities again (and give them some more funding too!)
Full price? (Score:5, Insightful)
You'd think Apple would at least sell G5s to VT without SuperDrives and Radeon 9600s. I seriously doubt those things (especially the video cards) will get a lot of use in a giant cluster.
But hey, even with all that pointless extra hardware, this cluster is still less than half the price of a comparable Intel system from Dell or IBM. Weird.
Re:interesting points (Score:5, Insightful)
Re:Full price (Score:1, Insightful)
The x86 cluster was built a year and a half ago.
OF COURSE this thing will be faster.
Why don't YOU RTFA with some perspective.
Re:Why didn't they use Darwin or Gentoo? (Score:1, Insightful)
The Truth Revealed (Score:2, Insightful)
system from IBM? (Score:3, Insightful)
When IBM comes out with the $3,500 4-way 970 (G5 in Apple-speak) workstation, it will be interesting to see what people do with it. Imagine a cluster that is 17% more expensive but has twice as many processors...
Optimize This, Optimize That (Score:4, Insightful)
Look at what they built: a complete COTS supercomputer, at a minuscule price, functional in six months, with public data in a year. They have >9 TFlops right outta the box.
Yes, they have written their own software, but name a company that doesn't. They modded the machines (cooling, I think, but I couldn't find data, only pics). They bribed students with pizza and soda; they didn't have to buy, make, or gut a building. What is amazing is they showed that any simple Slashdot pundit could build one if given these resources.
Re:Too slow/expensive (Score:1, Insightful)
No, he couldn't have. That's kinda the point. There were no Opteron systems available at the time. They would have had to buy component parts straight from AMD and then contract somebody to assemble the machines for them, which would have cost considerably more than just buying them already designed, built, and tested from Apple.
I pity you.
Pull the other one.
Re:interesting points (Score:5, Insightful)
Your professor's opinion is... well... flawed.
Itanium is an excellent architecture. Its flaws come from politics:
1: Itanium requires good compilers. For now, that means compilers from Intel. GCC will be fine for running Mozilla on an Itanium, but technical apps compiled with GCC simply won't come anywhere near the machine's potential.
2: Intel wants to market Itanium as a server chip. That means they are putting 3MB or 6MB of cache on the high-end Itaniums. Soon they will have a 9MB cache version. Lots of cache means lots of transistors, which means lots of heat.
3: Intel is not fabbing Itanium on a state-of-the-art process. Intel leads the world in process technology, yet Itanium is still on a 130nm process. Before Madison (about a year ago), it was on a 180nm process.
Some misconceptions:
1: Itanium is "inefficient". This couldn't be further from the truth. At 1.5GHz, it beats *anything* else in SPECfp (by a margin of 1.5x or more) and matches the 3.2GHz P4 or 2.2GHz Opteron in SPECint.
2: Itanium is "slow". Wrong again, see above.
3: Itanium doesn't scale. Wrong again. Itanium scales better than any other current architecture, getting nearly 100% of clock in both int and fp. Opteron gets around 99% int and 95% fp. Pentium 4 gets around 85% int and 80% fp. I don't have data for PPC970.
4: Itanium is expensive. This is true, but it has to do with politics rather than architecture. Itanium uses *fewer* transistors and executes *more* instructions per clock than a RISC architecture. Itanium takes much of the logic out of the CPU and puts it into the compiler (this is why you need good compilers). Itanium's architecture is called EPIC, or explicitly parallel instruction computing, because each instruction is "tagged" by the compiler to tell the CPU which instructions can and cannot be executed in parallel.
EPIC scales better than RISC architectures. It does more work with a lower clock and fewer transistors. That means that it will ultimately result in a cooler, cheaper, smaller, faster CPU than anything else. Intel's politics prevents this from happening.
So, please don't say that Itanium is a poor architecture. Itanium is a proven architecture. It uses fewer transistors and lower clock speeds than comparable RISC CPUs. Yes, it has problems, but most of them have to do with Itanium the CPU (too much cache, too expensive, not latest process) instead of EPIC the architecture.
Re:Anyone find the efficiency of this thing? (Score:3, Insightful)
Yes and no. The only way the G5 can do 4 FP operations per cycle is if each of its 2 FP units executes a fused multiply-add instruction. Obviously no code is going to consist entirely of these, so the actual theoretical peak is less than the theoretical theoretical peak. Or something like that.
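To put rough numbers on that: here's a back-of-the-envelope sketch of the FMA-based peak versus the reported LINPACK score. The node count and clock (1,100 dual-2.0GHz G5 nodes) are my assumptions, not figures from the article.

```python
# Back-of-the-envelope peak vs. LINPACK efficiency for the VT cluster.
# Assumed config: 1,100 dual-processor 2.0 GHz G5 nodes, 2 FP units per
# core, each retiring one fused multiply-add (2 flops) per cycle.

PROCESSORS = 1100 * 2       # dual-CPU nodes (assumption)
CLOCK_HZ = 2.0e9            # 2.0 GHz PowerPC 970 (assumption)
FLOPS_PER_CYCLE = 2 * 2     # 2 FPUs x (multiply + add per FMA)

peak_tflops = PROCESSORS * CLOCK_HZ * FLOPS_PER_CYCLE / 1e12
linpack_tflops = 9.555      # reported figure

efficiency = linpack_tflops / peak_tflops
print(f"peak: {peak_tflops:.1f} TFlops, LINPACK efficiency: {efficiency:.1%}")
```

Under those assumptions the FMA-everywhere peak is 17.6 TFlops, so 9.555 TFlops is only ~54% of it, which is exactly why the "real" achievable peak for non-FMA-saturated code sits somewhere below the marketing number.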
Re:Simply amazing (Score:2, Insightful)
Dude, get your facts straight... it's the largest university in Virginia. 25,000 undergrads alone. I did my undergrad there... Phi Beta Kappa, class of 2000.
Re:Memory errors? (Score:1, Insightful)
One poster mentioned a redundancy scheme: doing a calculation multiple times in different memory locations. This does not prevent errors; it only reduces the chance of one going undetected. Yes, it reduces them substantially, but it is still possible to get the same error in two different calculations.
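A quick sketch of why duplication shrinks but doesn't eliminate the risk. The per-run error probability here is a made-up figure purely for illustration.

```python
# Why run-it-twice-and-compare reduces, but does not eliminate,
# undetected memory errors. p is a hypothetical per-run corruption rate.

p = 1e-6  # made-up chance a single run is corrupted by a bit flip

# For the comparison to miss an error, BOTH independent runs must be
# corrupted (pessimistically assuming corrupted runs always agree).
# That happens with probability at most p squared.
undetected = p * p
print(f"single run: {p:.0e}, duplicated-and-compared: at most {undetected:.0e}")
```

So duplication buys you roughly a squaring of the error rate, not a guarantee; that matches the poster's point that two runs can still agree on the same wrong answer.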
Would still be cheaper (Score:1, Insightful)
yes and no... (technical arguing) (Score:3, Insightful)
Desktops take up more room, correct. And yes, the desktop G5 does not have a console serial port like the Xserve does. But seriously, how many modern clusters do you see with a terminal server connecting to each node's serial port? These days it's all install-and-run. OS X is UNIX... you can do a lot with a remote shell. These folks will never need to sit down at a GUI for each node. If you look at their setup photos, you'll see that they even removed the gfx card from each node.
And... desktops DO NOT put out more heat than a similar rackmount unit. The hard drives are the same, the processors are the same. A larger case does not create more heat. More heat may be expelled due to better fans, but that is a GOOD THING: you don't want your board, RAM, and processors to cook. The only difference between the two is the power supply. Slim rackmount machines generally have smaller power supplies. But with modern switching power supplies, there is nearly no difference in power consumption (and, by the laws of thermodynamics, heat output).
Once you go rack, you never go back. I much prefer a rack of 1U units that are built to be used in cluster situations.
Yes and no. A rack of 1U servers is small, compact, snazzy looking, and neat. But, you also increase the number of processors per square foot, which can be a cooling issue. With a concentration of heat in that area, more cool air will need to be directed to the rack.
I guess VT also has the luxury of running CPU-intensive tasks. Those machines can only hold 8 GB of RAM while other offerings can hold 16 GB, and if they start to swap... ouch, not having SCSI drives will hurt.
4 GB per processor is pretty good for the current HPC world. A lot of monster supercomputers are still sold with 2-4 GB per processor. The G5 can unofficially support 16 GB via 2 GB DIMMs, but Apple has not certified this. SCSI drives are great for a big RAID; Fibre Channel is even better. But for the drive in each node, IDE is fine. Even Google uses IDE drives in its nodes (which it uses as a distributed filesystem too!).
All in all, this setup is very impressive when just considering CPU performance. Wonder what is going to happen when a professor needs to run a few hundred jobs that use 10 or so GB of RAM each.
The prof will have to rewrite his code to use less RAM per processor. This is a cluster, after all, and code for clusters has to work with a fixed amount of RAM per node. This is not a Cray X1, SunFire 15K, or SGI Origin with high-throughput, low-latency global shared memory. Very, very few supercomputers, and even fewer clusters, have 10 GB of RAM per processor. Even 8 GB per proc is pretty rare today.
If a thread did need that much RAM, it would be possible to pool memory between several nodes; it wouldn't be too fast, though (but still WAY faster than swapping to any hard drive). I believe they're currently getting a little over 800 MBytes/sec real-world throughput via the 20 Gbit full-duplex InfiniBand interconnects.
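For a sense of scale, here's the arithmetic on pooling a 10 GB working set over that interconnect versus swapping it to a local disk. The 800 MB/s figure is the real-world InfiniBand throughput quoted above; the 50 MB/s disk rate is my assumption for a 2003-era IDE drive.

```python
# Rough timing: moving a 10 GB working set over InfiniBand vs. local disk.
# 800 MB/s is the quoted interconnect throughput; 50 MB/s is an assumed
# sustained rate for a single IDE drive of the era.

working_set_mb = 10 * 1024   # 10 GB
infiniband_mb_s = 800
disk_mb_s = 50

t_net = working_set_mb / infiniband_mb_s
t_disk = working_set_mb / disk_mb_s
print(f"interconnect: {t_net:.1f} s, disk swap: {t_disk:.1f} s")
```

Roughly 13 seconds over the fabric versus 3.5 minutes to disk, which is why pooling RAM across nodes beats swapping even though it's far slower than local memory.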
Yeah, one of my professors was pointing it out (Score:1, Insightful)
Wrong (Score:3, Insightful)
1. Earth simulator
2. ASCI Q
3. Virginia Tech G5 cluster (9.555 Tflops and rising, $5.2M HARDWARE ONLY)
4. PNL Itanium2 cluster (8.633 Tflops, $24.5M HARDWARE ONLY [pnl.gov])
So nope: not only will the PNL Itanium2 cluster not be #2, it will also be almost 1 Tflop behind the Virginia Tech cluster, having cost almost 5 times as much. Bravo!
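The price-performance gap is even starker per delivered TFlops, using just the hardware figures quoted above:

```python
# Cost per delivered LINPACK TFlops, hardware-only figures from the thread.
vt_cost_m, vt_tflops = 5.2, 9.555      # Virginia Tech G5 cluster
pnl_cost_m, pnl_tflops = 24.5, 8.633   # PNL Itanium2 cluster

vt_per_tflop = vt_cost_m / vt_tflops     # ~$0.54M per TFlops
pnl_per_tflop = pnl_cost_m / pnl_tflops  # ~$2.84M per TFlops
ratio = pnl_per_tflop / vt_per_tflop     # ~5.2x

print(f"VT: ${vt_per_tflop:.2f}M/TFlops, PNL: ${pnl_per_tflop:.2f}M/TFlops, "
      f"ratio: {ratio:.1f}x")
```

So on a dollars-per-TFlops basis the Itanium2 cluster cost a bit over five times as much as Big Mac.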
Re:Anyone find the efficiency of this thing? (Score:5, Insightful)
You could calculate a new marketing-BS peak number where a multiply-add only counted as a single flop, or where you took into account some realistic cache-miss rate. The new, lower theoretical peak would give you a much higher efficiency.
Re:interesting points (Score:1, Insightful)
Re:interesting points (Score:3, Insightful)
Using fewer transistors is good for reducing heat and manufacturing costs, but the Itanium is neither cheap nor cool (130W!). In the performance arena, Moore's law is useless unless chip designers figure out how to use MORE transistors to compute more quickly. Otherwise there's nothing to do with all those transistors except... more cache?
Re:interesting points (Score:5, Insightful)
Or do you mean scaling with clock speed? In that case, the bigger the cache and the faster the system bus and RAM, the better it will scale, but the CPU architecture itself is hardly a factor. Unfortunately I haven't seen any transistor counts for an Itanium2 core, but I don't think the claim is true. The Itanium saves some logic on the instruction decoder, but has more execution units in parallel (which should lead to better performance, but ONLY IF it's actually possible to build a well-optimizing compiler that manages to keep those execution units busy, and it's completely feasible that this is just not possible in the general case). So I really don't think the scaling claim holds: scaling is independent of the CPU core architecture.
I will agree that EPIC (which, btw, isn't quite Intel's invention; it shares most of its ideas with VLIW) is a nice concept, but for some reason it just doesn't work in practice as well as it should.
Re:Wrong (Score:1, Insightful)
CPU manufacturers estimate the performance to some extent, and where it's not estimated it depends on hardware that is simply not going to fly in the mass market.
So you end up with pie-in-the-sky numbers that really don't mean anything except when being compared to other pie-in-the-sky numbers.
When someone compares those fanciful numbers against a system builder's numbers, guess which one is going to be lower?
Gotta love platform religion backed up by absurd figures, only to conveniently "forget" to use IBM's absurd figures in the comparison.
Re:Answers (Score:3, Insightful)