Big Mac Benchmark Drops to 7.4 TFlops

coolmacdude writes "Well it seems that the early estimates were a bit overzealous. According to preliminary test results (in postscript format) on the full range of CPUs at Virginia Tech, the Rmax score on Linpack comes in at around 7.4 TFlops. This puts it at number four on the Top 500 List. It also represents an efficiency of about 44 percent, down from the roughly 80 percent achieved earlier on a subset of the machines. Perhaps in light of this, VT is apparently now planning to devote an additional two months to improving the stability and efficiency of the system before any research begins. While these numbers will no doubt come as a disappointment to Mac zealots who wanted to blow away all the Intel machines, it should still be noted that this is the best price/performance ratio ever achieved on a supercomputer. In addition, the project succeeded in meeting VT's goal of building an inexpensive top-5 machine. The results have also been posted at Ars Technica's openforum."
  • by bluethundr ( 562578 ) * on Wednesday October 22, 2003 @02:09PM (#7283258) Homepage Journal


    I've always been sort of intrigued by Top500 [top500.org]. Has there ever been a good comparison written about the similarities/differences between a 'supercomputer' and the lowly PC sitting on my desk running Linux/XP? At what point does the computer in question earn the title "Super"?
    • by Anonymous Coward
      Thanks for asking!!
    • The big difference is that a "supercomputer" is usually heavily optimized towards vector operations: performing the same operation on many data elements at once. Think of it as SIMD (MMX, SSE, etc), only more so. A "supercomputer" would be pretty useless at ordinary tasks such as web browsing or word processing, as those can't be vectorized or parallelized very well. A "supercomputer" might be good as a graphics or physics engine for gaming, but that's sort of like using a cannon to swat a fly: a lot of
    • by BostonPilot ( 671668 ) on Wednesday October 22, 2003 @02:32PM (#7283496) Homepage
      Nah, the real definition is:

      Super computers cost more than 5 million dollars

      Mainframes cost more than 1 million dollars

      Mini-Super computers cost more than 1/4 million dollars

      Everything else is by definition a Plain Jane (TM) computer

      btw, I've worked on all 4 kinds ;-)

    • by Jungle guy ( 567570 ) <brunolmailbox-generico.yahoo@com@br> on Wednesday October 22, 2003 @02:56PM (#7283718) Journal
      Jack Dongarra says that a "supercomputer" is simply a computer that, by today's standards, is REALLY fast. I saw a presentation of his in which he said he ran the Linpack benchmark on his notebook (a 2.4 GHz Pentium 4) and it would have made the bottom of the Top500 list in 1992. So the definition of a supercomputer is very fluid.
  • by JUSTONEMORELATTE ( 584508 ) on Wednesday October 22, 2003 @02:10PM (#7283270) Homepage
    Way to go /. -- updated the logo from G4 to G5 just in time.

    • While these numbers will no doubt come as a disappointment for Mac zealots who wanted to blow away all the Intel machines, it should still be noted that this is the best price/performance ratio ever achieved on a supercomputer.

      Way to go there; let's just keep encouraging their terrorism.

  • by daveschroeder ( 516195 ) * on Wednesday October 22, 2003 @02:10PM (#7283274)
    It's worth noting a few important things:

    First, from an Oct. 22 New York Times [nytimes.com] story:

    Officials at the school said that they were still finalizing their results and that the final speed number might be significantly higher.

    This will likely be the case.

    Second, they're only 0.224 Tflops away from the only Intel-based cluster above it. So saying "all the Intel machines" in the story is kind of inaccurate, as if there were all kinds of Intel-based clusters that will still be faster; in fact there is only one Intel-based cluster above it, and that's with only preliminary numbers for the Virginia Tech cluster.

    Third, this figure is with around 2112 processors, not the full 2200 processors. With all 1100 nodes, even with no efficiency gain, it will be number 3, as-is.

    Finally, this is a cluster of several firsts:

    First major cluster with PowerPC 970

    First major cluster with Apple hardware

    First major cluster with Infiniband

    First major cluster with Mac OS X (Yes, it is running Mac OS X 10.2.7, NOT Linux or Panther [yet])

    Linux on Intel has been at this for years. This cluster was assembled in 3 months. There is no reason for the Virginia Tech cluster to remain at ~40% efficiency. It is more than reasonable to expect higher than 50%.

    It's still destined for number 3, and its performance will likely even climb for the next Top 500 list as the cluster is optimized. The final results will not be officially announced until a session on November 18 at Supercomputing 2003.

    • I wonder how dual Xeon boxes would do using Infiniband? Probably a lot better than they're doing at the moment.
    • Officials at the school said that they were still finalizing their results and that the final speed number might be significantly higher.

      This will likely be the case.


      Why is this likely? The number dropped, why is it more likely to go up rather than down (or nowhere, for that matter)?
    • by Durinia ( 72612 ) * on Wednesday October 22, 2003 @02:21PM (#7283401)
      On the other side of the issue is that it places 4th in the current Top 500 list, which was released in June. We won't really know where it places on this "moving target" until the next list is released in November.
      • Not really (Score:5, Informative)

        by daveschroeder ( 516195 ) * on Wednesday October 22, 2003 @02:26PM (#7283442)
        The preliminary performance report at http://www.netlib.org/benchmark/performance.pdf contains the new entries for the upcoming list as well (see page 53).
      • On the other side of the issue is that it places 4th in the current Top 500 list, which was released in June. We won't really know where it places on this "moving target" until the next list is released in November.

        The deadline for submission to the Nov 2003 Top 500 list was Oct. 1st (see call for proposals) [top500.org], so it has already passed. Any further improvements that they make to the scalability of the cluster should not be included. This is true for all the machines.

    • Also Important? (Score:3, Informative)

      by ThosLives ( 686517 )
      If you read the fine print, the Nmax for the G5 was 100,000 higher than for the Linux cluster. Now, that's kind of interesting, because the G5 cluster was then only slightly slower doing a much bigger (450,000 Nmax vs 350,000 Nmax on the Xeons) problem. I wonder why they don't somehow scale the FLOPs to reflect this fact.

      Anyone know how much merit there is to using Nmax (or N1/2) to compare different systems?

    • My (completely untrained) guess is they are dealing with network saturation. The computers themselves don't get slower because there are more of them, so...

      Could they add NICs to each computer, bond them (probably need to write something for this), and set up parallel networks with each set of cards to improve bandwidth?

      I don't know enough about the cluster's setup to say much at this point.
      • No, they could not bond NICs, because they're using Infiniband and not ethernet. Besides, I think they are being limited more by latency than bandwidth, so adding bandwidth isn't going to help much. What's worse, their bandwidth limit is being reached inside the computer, with their chip-to-chip interconnect having less bandwidth than their computer-to-computer interconnect.

        This is not altogether surprising, given that they are using a desktop computer and trying to shoehorn it into a super
    • Second, they're only 0.224 Tflops away from the only Intel-based cluster above it.
      So saying "all the Intel machines" in the story is kind of inaccurate


      I was trying to refer to the fact that sometimes the Mac zealots, in the midst of their zealotry,
      lose sight of reality and simply lump all non-Mac related things into one huge category, even if it really isn't one.
  • by Anonymous Coward on Wednesday October 22, 2003 @02:11PM (#7283284)
    What they're not telling you is that the real reason they are building a supercomputer is because the only copy of the router passwords is GPG-encrypted, and they lost the key.
  • by mrtroy ( 640746 ) on Wednesday October 22, 2003 @02:12PM (#7283289)
    That 80% efficiency simply sounded too good to be true, and it was.

    Now it's at 44%. That's not a small drop, that's a MASSIVE drop.

    They didn't predict any loss in going from a small subset to the whole system? Or was it a publicity stunt (we can outperform everyone! our names are __________!)
  • by jandrese ( 485 ) * <kensama@vt.edu> on Wednesday October 22, 2003 @02:12PM (#7283295) Homepage Journal
    That's nothing, last time I benchmarked my Big Mac Cluster (100 Big Macs) it came to almost 57.6 megacalories. Those Apples will never be able to match that!
    • Calorie is a unit of heat transfer, 1 Cal (uppercase C) is the amount of heat required to raise the temperature of 1g of water 1 degree celsius. Lower case calorie is 1/100th that.
      • 1 Cal (uppercase C) is the amount of heat required to raise the temperature of 1g of water 1 degree celsius

        which brings up a totally off topic question.... a can of coke is 350 ml. it contains 300 calories.

        now, let's say i drink this coke. it is really cold - say 4 degrees. my body temperature is a nice, mammal-ish 37 degrees. by drinking this coke i am warming up 350 g of what is essentially water from the temperature of the can to that of my body - a difference of 33 degrees.

        33c * 350ml = 11550 calor

        • by cK-Gunslinger ( 443452 ) on Wednesday October 22, 2003 @02:41PM (#7283585) Journal
          I find your ideas intriguing and would like to subscribe to your newsletter.
        • Re:Big mac cluster.. (Score:4, Informative)

          by zulux ( 112259 ) on Wednesday October 22, 2003 @02:43PM (#7283598) Homepage Journal
          since the coke is only 300ish calories in the first place...

          For consumers, food calories are really kilocalories. So in this case, your coke has 300,000 physics-style calories.

          If you look at European food labels, you can sometimes see them written as kcal.

        • I've wondered that myself. It would seem that drinking cold water (or ice!) would be an excellent way to lose weight, but it doesn't seem to be that way...
        • Re:Big mac cluster.. (Score:5, Informative)

          by Graff ( 532189 ) on Wednesday October 22, 2003 @03:06PM (#7283809)
          The original poster was wrong when he said:
          1 Cal (uppercase C) is the amount of heat required to raise the temperature of 1g of water 1 degree celsius

          A Calorie (the one used on food labels) is actually a kilocalorie. A Calorie is therefore 1000 calories. 1 calorie is basically the amount of heat needed to raise 1g of water 1 degree celsius. (A calorie is actually 1/100 of the amount of heat needed to get 1 gram of water from 0 degrees C to 100 degrees C, but that works out almost the same.)

          This is explained a bit on this web page. [reference.com]

          So warming a 4 degrees C, 350mL Coke to 37 degrees C would take (37 - 4) * 350 = 11550 calories. This is 11.55 kilocalories or 11.55 Calories. The Coke has around 300 Calories in nutritive value therefore you would gain 300 - 11.55 = 288.45 Calories of energy from a 4 degrees C, 350mL can of Coke.
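          A quick sketch of that arithmetic in Python (an illustration only; the 350 mL, 4 C, 37 C, and 300 Calorie figures are the ones quoted above, and the drink is treated as plain water):

            # Heat needed to warm the drink, using the values quoted in the thread.
            # 1 food Calorie = 1 kcal = 1000 small calories (approximately).
            volume_g = 350                   # 350 mL of Coke, treated as 350 g of water
            delta_t_c = 37 - 4               # fridge temperature to body temperature, in degrees C
            heat_cal = volume_g * delta_t_c  # 11,550 small calories
            heat_kcal = heat_cal / 1000.0    # 11.55 food Calories
            net_kcal = 300 - heat_kcal       # roughly 288.45 Calories left over
            print(heat_cal, heat_kcal, net_kcal)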
      • Our thanks to the Anal Retentive Chef [goodeatsfanpage.com] for his guest editorial.
      • It's not a measure of heat transfer, it's a measure of energy. You could measure the output of an automobile engine in calories if you like. Convert calories to watts to HP to torque (more or less) to thrust, it's all a different scale of the same thing.
    • ...my Big Mac Cluster...

      Um, yeah, could I get some fries with that?

    • Seeing as a large apple is about 100 kilocalories, you'd need a cluster of maybe 580 apples to best your Big Mac Cluster. If you go to an apple orchard I'm sure you could find a better price-performance ratio with apples than you could with Big Macs at McDonalds. Plus, most orchards will probably let you gather virtually unlimited quantities of fallen apples for free.
  • Instant Numbers... (Score:3, Insightful)

    by Dracolytch ( 714699 ) on Wednesday October 22, 2003 @02:12PM (#7283297) Homepage
    Not terribly surprising. Much like estimated death tolls for disasters, never believe the first set of benchmarks for a computer. Wait until thorough testing can be done before you start believing the numbers.

    Y'all should know this by now. ;)
    ~D
  • by ikewillis ( 586793 ) on Wednesday October 22, 2003 @02:14PM (#7283313) Homepage
    "best price performance" and "Apple" in their minds?
    • While some people have given the parent a flamebait mod and hostile replies, the poster makes a good (and humorous) point. Apple is not typically thought of in terms of best price/performance any more than, say, Cadillac is in the car industry. Macs are bought by those willing to pay a premium for that distinct Apple styling, OS X's slick interface with the power of Unix behind the scenes, the "it just works" factor, and so on. Those who don't care about the amenities and just want bang for the buck go for a

    • I guess the original submission didn't see the slashdot article [slashdot.org] from August 23 about our KASY0 [aggregate.org] supercomputer breaking the $100 per GFLOPS barrier.

      KASY0 achieved 187.3 GFLOPS on the 64-bit floating point version of HPL, the same benchmark used on "Big Mac". While "Big Mac" is about 40 times faster on that benchmark, it is about 130 times the cost of KASY0 (~$40K vs ~$5200K). Considering the size difference, "Big Mac" is VERY impressive, but it can't claim to be the best price/performance supercomputer on
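      A rough sanity check of those ratios, as a Python sketch using the rounded figures quoted above (approximations from the comment, not official numbers):

        # "Big Mac" vs. KASY0 on the 64-bit HPL benchmark, per the parent comment.
        kasy0_gflops, kasy0_cost = 187.3, 40_000          # roughly $40K
        bigmac_gflops, bigmac_cost = 7_400.0, 5_200_000   # ~7.4 TFLOPS preliminary Rmax, ~$5.2M

        print(bigmac_gflops / kasy0_gflops)  # about 39.5x the performance
        print(bigmac_cost / kasy0_cost)      # 130.0x the cost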

  • by humpTdance ( 666118 ) on Wednesday October 22, 2003 @02:14PM (#7283314)
    Best Price/Performance ratio = promotional video with the phrase:

    "Virginia Tech: Home of the Poor Man's Supercomputer and Michael Vick."

  • by dbirchall ( 191839 ) on Wednesday October 22, 2003 @02:16PM (#7283331) Journal
    A single G5 FPU (each CPU has 2) can do one 64-bit (double precision) FLOP per cycle, or two if and only if those two are a MULTIPLY and an ADD.

    Apparently there are a lot of cases where a MULTIPLY and an ADD do come together like that, but I'm not surprised if LINPACK doesn't consist entirely of those pairs. ;)

    The 17.6 TFLOP theoretical peak assumed a perfect case consisting entirely of MULTIPLY-ADD pairs. In a case assuming no MULTIPLY-ADD pairs, the theoretical peak is 8.8 TFLOPs.

    7.4 TFLOPs is only 42% of 17.6 TFLOPs, but it's 84% of 8.8 TFLOPs. I suspect the actual "efficiency" of the machine lies somewhere in the middle.

    (As for me, I'm happy with just ONE dualie...)
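    To make the peak-rate arithmetic explicit, here is a minimal Python sketch (assuming 2,200 CPUs at 2.0 GHz with 2 FPUs each, as described above; the 7.4 TFlops Rmax is the preliminary figure from the story):

      # Theoretical peak under the two assumptions in the parent comment.
      cpus, fpus_per_cpu, ghz = 2200, 2, 2.0

      peak_fma_tflops = cpus * fpus_per_cpu * ghz * 2 / 1000    # every cycle a fused multiply-add: 17.6
      peak_plain_tflops = cpus * fpus_per_cpu * ghz * 1 / 1000  # one flop per cycle: 8.8

      rmax_tflops = 7.4
      print(rmax_tflops / peak_fma_tflops)    # about 0.42, the "42% efficiency" figure
      print(rmax_tflops / peak_plain_tflops)  # about 0.84, the "84% efficiency" figure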

    • Until these applications are written in 64 bit code, it won't matter. Smeagol and Panther will still have to cross that bridge so old utilization rates will continue to apply.

      From: http://www.theregister.co.uk/content/39/31995.html [theregister.co.uk]

      The PowerPC architecture was always defined as a true 64-bit environment with 32-bit operation defined as a sub-set of that environment and a 32/64-bit 'bridge', as used by the 970, to "facilitate the migration of operating systems from 32-bit processor designs to 64-bit p

      • Uhh, you really don't know what you're talking about here do you? We're talking floating point code here, not integer code! You don't need Smeagol or Panther or any other cat to get 64-bit floating point code, DOS can handle that just fine!

        Essentially ALL processors with a floating point unit do 64-bit precision calculations. The old G4 and G3 did, the Pentium 4 does, the old 486 did, etc. etc. The whole 32-bit vs. 64-bit argument with these PowerPC 970 chips (and, in a similar light, AMD64 chips) ha
    • by hackstraw ( 262471 ) * on Wednesday October 22, 2003 @02:36PM (#7283544)
      FWIW here are the efficiencies for the top 10 on www.top500.org:

      87.5 NEC Earth-Simulator
      67.8 Hewlett-Packard ASCI Q
      69.0 Linux Networx MCR Linux Cluster Xeon
      59.4 IBM ASCI White
      73.2 IBM SP Power3
      71.5 IBM xSeries Cluster
      45.1 Fujitsu PRIMEPOWER HPC2500
      79.2 Hewlett-Packard rx2600
      72.0 Hewlett-Packard AlphaServer SC
      77.7 Hewlett-Packard AlphaServer SC
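      Those percentages are just Rmax divided by Rpeak from the Top500 table. A minimal Python sketch (the helper function is illustrative; the example numbers are the Earth-Simulator's published Rmax and Rpeak in GFlops):

        # Top500-style efficiency: sustained Linpack rate over theoretical peak.
        def efficiency_pct(rmax_gflops: float, rpeak_gflops: float) -> float:
            return 100.0 * rmax_gflops / rpeak_gflops

        print(round(efficiency_pct(35_860, 40_960), 1))  # NEC Earth-Simulator: 87.5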
    • Correct. Also note that one of the strengths of the G5 (and G4) is its vector units, which (afaik) can't be used for Linpack, because of the 64-bit precision requirements. For jobs that can use Altivec, the performance should be substantially better.
  • by BWJones ( 18351 ) on Wednesday October 22, 2003 @02:19PM (#7283363) Homepage Journal
    While these numbers will no doubt come as a disappointment for Mac zealots who wanted to blow away all the Intel machines, it should still be noted that this is the best price/performance ratio ever achieved on a supercomputer.

    It still bests all other Intel hardware with only the Alpha hardware on top. And given the CPU count, even the Alpha hardware does not match it. Look at the numbers... The Linux-based 2.4GHz cluster has almost 200 more CPUs on board with a 217 Gflop/sec difference. The Alpha clusters are running anywhere from 1,984 to 6,048 more CPUs.

    • The Mac cluster is still on top per CPU

      From the same document the Mac proponents have been quoting from: Dongarra doc [netlib.org]

      Table 3 - page 53:

      Big Mac -> Rmax: 8164 Processors: 1936
      Cray X1 -> Rmax: 2932.9 Processors: 252

      Please be careful when making general statements. Thank you.

      That said, yes, it has the highest per CPU performance of the machines with commodity processors. (that are listed, at least - including the year-old Xeons)

      • Big Mac -> Rmax: 8164 Processors: 1936
        Cray X1 -> Rmax: 2932.9 Processors: 252


        I did say It still bests all other Intel hardware... Commodity clusters are entirely different beasts than dedicated supercomputers and this is exactly why I chose the terminology "clusters" rather than supercomputers. Also, check out the architecture of real "supercomputers". Most of the real costs are in CPU interconnectivity.

    • Remember them? Manufacturer of the highest-performance x86 processors available? An array of dual-Opteron systems could be built with a dramatically lower price/performance ratio than any other platform, especially G5s or Intel Xeons.

  • by daveschroeder ( 516195 ) * on Wednesday October 22, 2003 @02:19PM (#7283366)
    See http://www.netlib.org/benchmark/performance.pdf [netlib.org] page 53.

    Since yesterday's release at 7.41 Tflop, the G5 cluster has already gained almost a Tflop, and is now ahead of the current #3 MCR Linux cluster, and about 0.5 Tflop behind a new Itanium 2 cluster.
  • by Anonymous Coward on Wednesday October 22, 2003 @02:19PM (#7283367)
    /Watched WarGames too many times as a kid.
  • If someone used off-the-shelf machines that my company made, and got even into the top 10, you can bet your bottom dollar that the next thing in my job pile would be to "make an announcement that we're one of the top 10 fastest computers in the world."

    This is fantastic, no matter which way you cut it! Using commodity components, these folks have turned the G5 into a real champion. No longer do budgets have to be in the hundreds, or even tens of millions to get a top-notch supercomputer. And this is not even th
    • Umm, they've got the POWER4, which is internally the same thing as the G5 (which they also make). WHY would they use the consumer-grade G5 (that Apple is demanding in mass quantities) when they can use the POWER4 that does the same thing and is server-grade (and IBM already uses)?
      • by gerardrj ( 207690 ) * on Wednesday October 22, 2003 @02:57PM (#7283725) Journal
        Because the Power4 is hotter and uses more current than the G5. To use 2200 Power 4 CPUs they would have to about triple the cooling capacity of the room. For all the heat and power, the Power4 lacks the AltiVec units that allow the G4/G5 to process vector operations so quickly.

        The G5 is also significantly lower cost than the Power4.
  • by ianscot ( 591483 ) on Wednesday October 22, 2003 @02:20PM (#7283385)
    Yet another Apple product that failed to save the world. Lately they do nothing but disappoint us. Boo.

    First you have the iTunes store, which doesn't do anything but give the average user basically everything he or she might have wanted in an online music store. Despite its being free, we're all cheesed off that it doesn't support OGG, or it's meant partly to push iPods (duh), or whatever.

    Now this -- a supercomputer that has, to quote that again, the "best price/performance ratio ever achieved on a supercomputer." But dang it all, it doesn't completely blow away every established precedent -- it's just in the top five on the usual list of comparisons. One more crushing disappointment.

    From Microsoft, we just want products that don't completely ream us. From Apple, we want the entire world to seem a little friendlier and cooler with every product release, every dot-increment OS update. They both disappoint us, but the expectations seem a little different...

    • I know this is really nitpicking, and is somewhat offtopic (but there isn't a front page iTunes thread at the moment), but it probably needs to be said.

      iTunes for Windows, just like Mac iTunes, does its decoding using Quicktime. As crappy as you think the Quicktime Player software is, the backend Quicktime library is very nice, especially in regard to its modularity.

      Any app that uses Quicktime Lib can now play AAC files (even the iTMS 'protected' ones), not just iTunes. Of course, not many Windows apps us
  • Do these benchmark results take into account the software they have to run to check for memory errors?
  • by DavidBrown ( 177261 ) on Wednesday October 22, 2003 @02:28PM (#7283472) Journal
    of all of these so-called "benchmark" discussions. Everyone really knows, in their heart of hearts, that the only valid benchmark is to be found in real-world applications such as Quake III. I want to know how many fps this alleged "supercomputer" gets.

  • Moore's Law applied (Score:3, Interesting)

    by moof-hoof ( 678977 ) on Wednesday October 22, 2003 @02:34PM (#7283517)
    ...it should still be noted that this is the best price/performance ratio ever achieved on a supercomputer.

    Yes, but don't Moore's Law and the commodification of computer hardware suggest that each new-generation supercomputer will have the best price/performance ratio?

    • An excellent point. I was also wondering how much time passed between when the Intel cluster and this Apple cluster were constructed. That would put the cost into a little more perspective.
  • by mfago ( 514801 ) on Wednesday October 22, 2003 @02:36PM (#7283541)
    Efficiency is strongly dependent on the interconnect. Does anyone know if the 128 node benchmark (that supposedly showed ~80% efficiency) was run with only one Infiniband switch -- i.e. all nodes connected through only one switch?

    BTW, the performance never was stated to be 17 TF, so it did not drop to 7.4 (or whatever it ends up being).
  • Price vs Performance (Score:2, Interesting)

    by Metex ( 302736 )
    While I am amazed at the initial price vs performance this cluster of Macs has obtained, I am worried about what the electricity and cooling for the cluster will eventually cost. I remember reading in some random article that the electricity used to cool and power the computer was estimated at around 3,000 midrange homes. Just from a quick calculation of homes x $100 x 12 months we get the horrible figure of 3.6mil. So over a 10-year lifespan of the cluster it will cost 36mil more than the current price
    • I remember reading in some random article that the electricity used to cool and power the computer was estimated at around 3,000 midrange homes.


      That can't possibly be right. There's no way that the cluster's power requirements are over 1 home's worth per CPU. Maybe they just added a zero and it's supposed to be 300, but even that sounds very high.

    • That makes no sense. There are only 1100 nodes. That means that each node takes as much electricity as 3 mid-range homes?
    • by G4from128k ( 686170 ) on Wednesday October 22, 2003 @03:23PM (#7284041)
      I think that magazine article must be wrong. If 1100 Macs use as much power as 3000 homes, then each Mac is using about 3 houses' worth of power. That seems excessive unless the home is in a 3rd world country or those 9 fans are really, really running full blast. More likely, each G5 (with networking and cooling equipment) uses a few hundred watts. Even at 500 W/Mac, 1100 Macs, $0.15/kWH, 24 Hr/day, 365 day/year, the cluster costs about $722,700/year. More likely, each Mac probably only consumes an average of 300 W max and is not running full tilt 24x7, so the cost is maybe around $300-$400k/year.

      But your point is a good one. I often wonder about the environmental economics of people running SETI, Folding@Home, etc. on older machines. Most of those older "spare" CPU-cycles are quite costly in terms of electricity relative to newer faster machines that do an order of magnitude more computing with the same amount of electricity.
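      A quick sketch of that cost estimate in Python (500 W per Mac, 1,100 Macs, $0.15/kWh, running around the clock; all of these are the assumptions from the comment above, not measured figures):

        # Worst-case annual electricity cost under the stated assumptions.
        watts_per_mac = 500
        macs = 1100
        dollars_per_kwh = 0.15
        hours_per_year = 24 * 365

        kwh_per_year = watts_per_mac * macs / 1000.0 * hours_per_year
        print(kwh_per_year * dollars_per_kwh)  # about 722,700 dollars per year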
      • You're forgetting the AC costs... If you've ever worked in a DC you know that the room itself can get mighty toasty, and toasty air leads to cooked systems.

        Each processor, drive, and switch generates heat which is dissipated into the air. Untouched, that heat accumulates and will kill the entire thing. With 1100 dual-processor nodes running (and you can bet they'll each be running at pretty close to full tilt) constantly, that's a hell of a lot of heat that needs to be removed from the air.
  • by madpierre ( 690297 ) on Wednesday October 22, 2003 @02:40PM (#7283583) Homepage Journal
    I installed a button on the front of my cluster to manually clock the CPUs.

    So far I've managed ONE whole flop.

    My record is for the slowest supercomputer on the planet.
  • If this Mac cluster were anywhere near the size/cost of the other clusters it would easily be number 1, assuming of course they work out those efficiency problems.
  • Check out the EnLight256 [lenslet.com]... Coming soon to a military installation near you...
  • Does anyone know if Linpack was optimized for PPC hardware, specifically the 64-bit G5 with all its bells and whistles? That makes quite a difference.
  • ... it should still be noted that this is the best price/performance ratio ever achieved on a supercomputer.

    Noted. And go VT, go Apple! Now, with the cheerleading out of the way, I wonder something - with Moore's law and all still applying pretty well, just getting the latest and greatest of any home computer architecture will all but guarantee you pretty good price/performance.

    As another poster pointed out, someone's recent laptop could do as well on Linpack as a 1992 supercomputer.

    So what I think would

  • Interesting link describing the processor architecture and application performance in modern supercomputers.

    Good read for anyone interested in some of the background on current supercomputers and what they used for testing.
    Here's the link. [jukasisters.com]
  • Scalability (Score:5, Informative)

    by jd ( 1658 ) <`imipak' `at' `yahoo.com'> on Wednesday October 22, 2003 @03:16PM (#7283935) Homepage Journal
    First, scalability is highly non-linear. See Amdahl's Law. Thus, the loss of performance is nothing remarkable, in and of itself.


    The degree of loss is interesting, and suggests that their algorithm for distributing work needs tightening up at the high end. Nonetheless, none of these are bad figures. When this story first broke, you'll recall the quote from the Top500 list maintainer who pointed out that very few machines keep high performance ratings once they get into large numbers of nodes.


    I'd say these are extremely credible results, well worth the project team congratulating themselves. If the team could open-source the distribution algorithms, it would be interesting to take a look. I'm sure plenty of Mosix and BProc fans would love to know how to ramp the scaling up.


    (The problem of scaling is why jokes about making a Beowulf cluster of these would be just dumb. At the rate at which performance is lost, two Big Macs linked in a cluster would run slower than a single Big Mac. A large cluster would run slower than any of the nodes within it. Such is the Curse that Amdahl inflicted upon the superscalar world.)


    The problem of producing superscalar architectures is non-trivial. It's also NP-complete, which means there isn't a single solution which will fit all situations, or even a way to trivially derive a solution for any given situation. You've got to make an educated guess, see what happens, and then make a better informed educated guess. Repeat until bored, funding is cut, the world ends, or you reach a result you like.


    This is why it's so valuable to know how this team managed such a good performance in their first test. Knowing how to build high-performing clusters is extremely valuable. I think it not unreasonable to say that 99% of the money in supercomputing goes into researching how to squeeze a bit more speed out of reconfiguring. It's cheaper to do a bit of rewiring than to build a complete machine, so it's a lot more attractive.


    On the flip-side, if superscaling ever becomes something mere mortals can actively make use of, understand, and refine, we can expect to see vastly superior - and cheaper - SMP technology, vastly more powerful PCs, and a continuation of the erosion of the differences between micros, minis, mainframes and supercomputers.


    It will also make packing the car easier. (* This is actually a related NP-complete problem. If you can "solve" one, you can solve the other.)
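    For anyone who hasn't met it, Amdahl's Law is the bound the parent is leaning on. A minimal illustration in Python (the 99.9% parallel fraction is just an example value, not a measurement of this cluster):

      # Amdahl's Law: speedup on n processors when a fraction p of the work parallelizes.
      def amdahl_speedup(p: float, n: int) -> float:
          return 1.0 / ((1.0 - p) + p / n)

      # Even with 99.9% of the work parallelizable, 2,200 processors yield only
      # about a 688x speedup, which is one reason efficiency falls as nodes are added.
      print(round(amdahl_speedup(0.999, 2200), 1))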

  • point missed (Score:2, Insightful)

    by gerardrj ( 207690 ) *
    Most responses in here are about how the G5 should be performing better, or should have better numbers than the Xeon or Sparc, or whatever.
    What seems to be missing from most of the conversation is that it's not the Macs that are losing efficiency per se, it's the network (the interconnects) that is slowing the machine as a whole down. I know little about the Linpack test, but I would assume that it's written to test/stress the entire machine: CPU, disk, memory and interconnects. If the Macs can finish par
  • So, in all these "maximum speed tests", what is being used, 32-bit reals or 64-bit reals? The difference is that in solving large non-linear systems, the higher-precision numbers result in a faster solution, but operations involving doubles will result in a lower gflops measurement in benchmarks (although a solution may in fact take 10x fewer iterations).
  • (Disclaimer: I'm a Mac user).

    I still think #4 in the world is pretty damn impressive for Apple hardware! And it looks like there might be some small performance improvements to come.

    I think everyone involved did a pretty damn good job! Have a beer on me.

    -psy
  • by Anonymous Coward
    the first supercomputer to feature exactly 1 mouse-button.


    (hides/ducks - I ain't an anonymous coward for nothing!)

  • seti@home not listed (Score:5, Interesting)

    by suitti ( 447395 ) on Wednesday October 22, 2003 @04:09PM (#7284530) Homepage
    The 21st version of this list does not show the SETI@Home project. The top entry is NEC at 35 teraflops. Today's SETI@Home average for the last 24 hours is 61 teraflops. It may be a virtual supercomputer, but it is producing real results.
