More on Virginia Tech G5 Cluster: 17.6 Tflops

daveschroeder writes "BBC World's Click Online has a video report (with text transcript) on Virginia Tech's new 1100-node dual 2.0 GHz G5 Terascale Cluster. The report quotes the performance as 17.6 Tflops. As a point of reference, that would place the cluster at number 2 on the most recent June Top 500 list, behind only Japan's Earth Simulator, and at considerably more than double the performance of the current number 3, the 1152-node dual 2.4 GHz Xeon MCR Linux cluster. Assuming the figure accurately reflects the LINPACK score (which it should: the Oct 1 submission deadline for the upcoming list has already passed, so one would imagine VT is quoting that figure), and depending on new entries for November's list, the cluster should almost certainly rank in the top 5 - all for only US$5.2 million. The video report is available in Windows Media 9 and Real formats; the relevant portion starts at 13:00."
  • Twice as fast...? (Score:3, Interesting)

    by suwain_2 ( 260792 ) on Sunday October 12, 2003 @11:14AM (#7194278) Journal
    ...considerably more than doubling the performance of the current number 3 1152-node dual 2.4 GHz Xeon MCR Linux cluster.

    If I understand this correctly, it's saying that a dual 2.0 GHz G5 node is more than twice as fast as a dual 2.4 GHz Xeon node? (1152 dual 2.4 GHz Xeon nodes vs 1100 dual 2.0 GHz G5 nodes -- there are fewer G5 nodes and they run at a lower clock speed.)

    This is a pretty staggering statistic. I hadn't really believed the hype about how fast the new G5s were.
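
    A quick back-of-the-envelope check of that reading (a sketch only: the ~7.6 Tflops LINPACK figure assumed here for the Xeon MCR cluster comes from the June 2003 Top 500 list, not from the report itself):

```python
# Rough per-node comparison; the MCR Rmax value is an assumption, not from the story.
g5_nodes, g5_tflops = 1100, 17.6
xeon_nodes, xeon_tflops = 1152, 7.6      # assumed LINPACK Rmax for the MCR cluster

g5_per_node = g5_tflops * 1000 / g5_nodes        # Gflops per dual-G5 node
xeon_per_node = xeon_tflops * 1000 / xeon_nodes  # Gflops per dual-Xeon node

print(f"G5 node:   {g5_per_node:.1f} Gflops")
print(f"Xeon node: {xeon_per_node:.1f} Gflops")
print(f"ratio:     {g5_per_node / xeon_per_node:.2f}x per node")
```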
  • by Waffle Iron ( 339739 ) on Sunday October 12, 2003 @12:38PM (#7194621)
    So they bog down the software doing something that could be done in hardware?

    Just because it's in hardware doesn't mean it's free. The ECC logic is going to add a small delay to each of trillions of memory accesses. Plain memory can most likely be tuned to run faster than ECC memory.

    If you're running a constrained problem and can verify the results at the end, a single error check in software could consume far less overall time than the continuous ECC hardware checks. The software check would probably catch other types of errors as well (including many errors caused by software bugs).
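
    A minimal sketch of the kind of end-of-run software check being described (a hypothetical example, not anything VT has said it does): run the solve once, then verify the result with a single residual check instead of paying a per-access hardware cost:

```python
import numpy as np

# Hypothetical end-of-run verification: rather than checking every memory
# access in hardware (ECC), check the final answer once in software.
rng = np.random.default_rng(0)
n = 2000
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

x = np.linalg.solve(A, b)                       # the "real" computation

rel_residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
if rel_residual > 1e-8:                         # tolerance is problem-dependent
    raise RuntimeError(f"verification failed (relative residual {rel_residual:.2e})")
print(f"verified: relative residual {rel_residual:.2e}")
```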

  • by daveschroeder ( 516195 ) on Sunday October 12, 2003 @12:40PM (#7194634)
    Easy: you yourself point out that 1100 * 15.7 = 17.27 ... not 17.6.

    Since the call for papers for the new Top 500 list was Oct 1, and the BBC show aired on Oct 9 with a companion BBC News story [bbc.co.uk] dated Oct 12, you'd hope that VT was simply regurgitating the figure that has already been sent to the Top 500 organization.

    And why are you trolling around with one of those super-old benchmarking stories? We've already established that every manufacturer does what they can to show their products in the best possible light. At least Apple documented their test [veritest.com] results [apple.com] and methods in full.

    So actually, your logic doesn't make any sense: you jump to the conclusion that these aren't real results - even though real results already exist and have been submitted, and the entire story is pretty much about that process, making the performance figures a critical piece to get accurate - and that they must have just multiplied some benchmark number by 1100. Then, even though the subject of your own post shows you recognize that "it doesn't add up", you still apparently assume the results are somehow doctored, this time for the worse, and you manage to weave in one of the stories that tries to make it look like Apple lied with its benchmarks - which it didn't - and which is unrelated to the current issue! How does it "assume" the original scores were accurate?? YOU are assuming they just multiplied. You might have been onto something if the multiplication actually came out right, but it doesn't, meaning that is NOT what they did.

    Bravo, +1 Troll.
  • Interesting math (Score:3, Interesting)

    by lexcyber ( 133454 ) on Sunday October 12, 2003 @01:11PM (#7194783) Homepage
    The VT cluster cost about $5.2 million and gets approx. 17 TFlops - the NEC Earth Simulator gets 35 TFlops and cost one billion dollars. That makes it 192 times more expensive, so you could build 192 VT clusters and, in theory, get 3.2 PFlops for the same amount of money. Even if you deduct some performance for cable length etc., you will most definitely get around 1 PFlops.

    So, you supercomputer users out there - build a 1 PFlops cluster NOW!
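
    Spelling out that arithmetic (using the comment's own round figures; the one billion dollar Earth Simulator price is the commenter's estimate, not an official number):

```python
# Cost/performance comparison with the comment's own round numbers.
vt_cost, vt_tflops = 5.2e6, 17.0   # VT G5 cluster
es_cost, es_tflops = 1.0e9, 35.0   # Earth Simulator, cost as estimated in the comment

clusters = es_cost / vt_cost                   # VT clusters per Earth Simulator budget
aggregate_pflops = clusters * vt_tflops / 1000

print(f"{clusters:.0f} VT clusters for the price of one Earth Simulator")
print(f"theoretical aggregate: {aggregate_pflops:.1f} PFlops")
print(f"cost per Tflops: VT ${vt_cost / vt_tflops:,.0f} vs ES ${es_cost / es_tflops:,.0f}")
```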
  • MOD PARENT UP (Score:2, Interesting)

    by Mark of THE CITY ( 97325 ) on Sunday October 12, 2003 @02:37PM (#7195159) Journal
    I'm glad someone out there thinks these things through...

    BTW, an acquaintance told me about her ILLIAC IV days. With 64 independent processors it was the fastest pre-Cray machine, but it sometimes produced wrong values. Standard practice was to have 3 processors run the same problem and compare the results at the end - the 3x performance hit was considered worth it if the results actually meant something...
  • Re:MOD PARENT UP (Score:3, Interesting)

    by sakusha ( 441986 ) on Sunday October 12, 2003 @03:11PM (#7195331)
    Yep, that's an accepted practice in mission-critical real-time systems. I recall reading about the IBM computers used in the Space Shuttle: they have triple redundancy, all 3 computers operate in parallel, and they "vote" on all results. If one computer doesn't agree with the other two, it is outvoted. Of course this is an extreme oversimplification of the software design, but you get the idea.
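
    A toy illustration of the voting scheme described above (a sketch only - real flight software is vastly more involved, and exact-equality comparison is a simplification):

```python
from collections import Counter

def vote(results):
    """Majority-vote over redundant results: return the agreed value and the
    indices of any units that were outvoted (toy N-modular redundancy)."""
    winner, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority - results cannot be trusted")
    outvoted = [i for i, r in enumerate(results) if r != winner]
    return winner, outvoted

# Three redundant units compute the same quantity; one disagrees and is outvoted.
value, disagreed = vote([42.0, 42.0, 41.9])
print(f"accepted {value}, outvoted units: {disagreed}")
```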
