
Big Mac Officially Ranks 3rd

An anonymous reader notes that, according to Wired, it will be announced officially on Monday that the Big Mac supercomputer is the third-fastest supercomputer. The article also talks about some of the amazing supercomputers in the planning stages -- the sort of stuff that will make Big Mac look like that old TI-85 collecting dust in your drawer.
  • by Anonymous Coward on Sunday November 16, 2003 @09:16AM (#7486871)
    ...you insensitive clod!
  • Kudos to the Mac (Score:4, Interesting)

    by Space cowboy ( 13680 ) on Sunday November 16, 2003 @09:17AM (#7486874) Journal
    It has to be said that Macs haven't been famous for their speed, always pushing the "it does more" or "there are 2 procs" arguments, but this gives them some serious ammunition. Perhaps they'll even get their advert on the air in the UK now :-)

    Simon
    • by goombah99 ( 560566 ) on Sunday November 16, 2003 @09:54AM (#7487039)
      Macs have never been known for being cheap, just a good value. Now even this seems to be fading: Macs are cheap too!

      Now this system is the cheapest of the top 10. It's cheaper than many it beat by a factor of ten (more than that, considering some of the building infrastructure is in that figure). Even more interesting, these were stock Macs at full price, loaded with DVD-ROMs, FireWire, Bluetooth, the OS, etc. -- not some stripped-down model.

      It's a good bet, too, that this thing is going to have lower maintenance costs and higher uptime, given the Macs' attention to cooling and their use of high-quality hard drives, power supplies, and memory chips. (On our cluster, a tenth that size, we blew 60 hard drives in the first 6 months and had to replace 10% of the motherboards.)

      • True.

        Most of the 1984 Macs still run! I heard a story on Slashdot about an Apple laser printer suddenly not working. Upon inspection, the guy found an old Mac II with an inch of dust on it hidden behind a bookcase -- running since 1987 and forgotten until recently, when the NIC failed. Heheh

        How many of you have old 486s or Pentiums that still run? Mine have all died.

        But a Mac will run forever.

        • They used to, anyway, back when they were still being manufactured in Cupertino by Apple themselves. Now that Apple doesn't make all their own components the average life span of a modern Mac is nothing compared to what you found in the 80's. Though I suppose one could say the same thing to some extent of VCRs and televisions, too.
        • How many of you have old 486s or Pentiums that still run?

          I do, I do! :)

          I have an old 486, and it still works fine. I leave it off most of the time, simply because it's just not useful to me. It runs a recent LFS system (maybe a year old). Slow as hell, of course, but I think it's impressive that a very recent version of Linux runs on an old 486. That impresses me; just try running XP on it :)
    • by tychay ( 641178 ) on Sunday November 16, 2003 @10:09AM (#7487103) Homepage

      Hmm, guess this means my submission a couple hours ago won't go through (dangit, Wired!)...

      Here is the official press release [top500.org] and the list [top500.org].

      There are a lot of good points to note all around. The first is that the G5 Terascale cluster at Virginia Tech at #3 (10.28 Tflops, 2200 CPUs, Infiniband) is the first academic computer to break 10 teraflops. This extra performance was promised at the Mac OS X Developer's conference [macdevcenter.com] last month. Not too sure if the price is a testament to Infiniband ($1.5 million for cabling, cards, and routers) or the Macs ($4.2 million list).

      Good thing, too, because in a surprise move the NCSA cluster made the list at #4 (9.82 Tflops, 2500 CPUs, Myrinet). This cluster is built using Dells running Pentium 4 Xeons and Red Hat Linux [uiuc.edu]! One subtle point to note is that they didn't get all the systems online in time (there should be 2900 CPUs, not 2500). I bet a certain programmer at PSC [chaosmint.com] and ex-Chief Scientist of SDSC appreciates having had a hand in edging out NCSA for #3 -- not to mention Apple beating Dell for #3.

      The fastest Itanium cluster is at #5 (8.63 Tflops, 1936 CPUs, Quadrics), which is looking like the odd man out, boxed in by PC-based systems using Myrinet: the P4 Xeon above and the most powerful Opteron system at #6 (8.05 Tflops, 2816 CPUs, Myrinet). Another point of similarity: did I mention it's also running Linux?

      And finally, it's easy to overlook #73, a single compute node of BlueGene/L (1.44 Tflops, 1024 CPUs). Imagine 128 of these [com.com] connected together and you have something that will easily take #1 when it's completed, even if we handicap it 20-40%. As noted on Slashdot earlier [slashdot.org], this will be running Linux.

  • already official (Score:5, Informative)

    by Anonymous Coward on Sunday November 16, 2003 @09:21AM (#7486887)
  • by Anonymous Coward on Sunday November 16, 2003 @09:22AM (#7486888)
    Excuse me, but Big Mac is a registered trademark of McDonald's according to http://mcdonalds.com/legal/index.html

    So what's going on here? Can they actually do that?
    • by lost_it ( 44553 ) on Sunday November 16, 2003 @09:37AM (#7486961)
      Virginia Tech has not (and will not) call the computer "Big Mac". The BBC used the name when it first started appearing in the news, and everyone else picked it up, IIRC.

      The people in charge of the cluster don't want to call it "Big Mac" because (1) they don't want a lawsuit from McDonalds, and (2) who wants to be associated with nasty, greasy fast food?

      They've worked out a solid candidate for a name (it's not official yet) that isn't quite as catchy as "Big Mac", but it also doesn't have any of the downsides.
  • by ergo98 ( 9391 ) on Sunday November 16, 2003 @09:22AM (#7486890) Homepage Journal
    Given the basic benchmarks used to rank supercomputers, could a cluster of loosely coupled machines compete, or are the bandwidth demands of the benchmark set too high? I'm just curious how projects like the ones detailed at distributed.net compare: 1100 dual-processor Macs would be vastly outranked by the hundreds of thousands (or millions) of PCs taking part in distributed processing for various code-cracking or cancer-curing purposes.
    • Interconnect is very important.

      This is nothing like distributed.net.

      For a problem that can be broken into millions of discrete, independent chunks, sure, distributed.net's model is fantastic, and works really well... (seti, folding, distributed.net, etc)

      For something where you need lots of feedback from nodes, (like these benchmarks, and lots of simulation work), bandwidth is everything.
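
      (To make the distinction concrete, here is a toy Python cost model of the two regimes; every number in it is an illustrative assumption, not a measurement from any of these systems.)

```python
# Toy cost model contrasting the two regimes described above.
# distributed.net-style: work units are independent, so wall time is
# just batched compute time. Benchmark/simulation-style: every step
# ends with a data exchange, so network cost is paid per iteration.

def embarrassingly_parallel(units, workers, t_compute):
    """Independent chunks: no communication between workers."""
    batches = -(-units // workers)        # ceiling division
    return batches * t_compute

def tightly_coupled(steps, t_compute, t_exchange):
    """Each step waits for an exchange before the next can begin."""
    return steps * (t_compute + t_exchange)

# 10,000 independent units on 1,000 workers at 1 s each: 10 s.
print(embarrassingly_parallel(10_000, 1_000, 1.0))    # 10.0
# 10 coupled steps with a 0.5 s exchange each: half again as long.
# The exchange term is where interconnect latency lives.
print(tightly_coupled(10, 1.0, 0.5))                  # 15.0
```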
  • Yes but... (Score:4, Funny)

    by Nefrayu ( 601593 ) on Sunday November 16, 2003 @09:28AM (#7486917) Homepage
    Can the Big Mac play games [moviesoundscentral.com] like the WOPR?
    How about a nice game of chess?
  • by Spoing ( 152917 ) on Sunday November 16, 2003 @09:29AM (#7486925) Homepage
    The Top 500 supercomputer list [top500.org] does provide the basic comparison information, though nothing like the OS used or I/O speed (network and storage). For that, you have to dig through each site, and even then it is not easy to find. (The Earth Simulator uses SUPER-UX for the OS -- another Unix tuned to this type of task [jamstec.go.jp].)

    That said, for what is provided, the Earth Simulator seems to be the current king by about 2x. (Corrections appreciated.)

  • Check the #5 and #6 (Score:5, Interesting)

    by Jesrad ( 716567 ) on Sunday November 16, 2003 @09:34AM (#7486946) Journal
    The Top500 site [top500.org] lists two competing 64-bit clusters: the Integrity rx2600, with 1936 Itanium 2s at 1.5 GHz (must be pricey), and a 2816-processor 2 GHz Opteron cluster that achieves only three fourths of Big Mac's performance. Now that's a defeat for AMD.

    Also, the Virginia Tech cluster is the only "self-made" supercomputer in the Top 50 (the next one is ranked 63rd, based on the SunFire V60). The original #3 slipped to 7th place because of the new supercomputers. Competition for that third place was tough!

    Now where's the G5 XServe? It was supposed to be out when OS X Server 10.3 was released.
    • Also, the Virginia Tech cluster is the only "self-made" supercomputer in the Top 50
      And I guess the only one not using ECC memory. This is a *major* problem. Doing computations on many nodes over a long period *will* cause memory errors. Non-ECC is of course both cheaper and a bit faster (ECC has a performance hit), but you won't be able to do long computations, and short computations must be run at least twice to check the results.
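
      (A minimal Python sketch of the run-twice check described above; the summation is a stand-in workload, not anything these clusters actually run.)

```python
# Run the job twice and accept the result only when both runs agree;
# a mismatch suggests a memory error, so the pair is retried.

def run_with_verification(compute, *args, retries=3):
    for _ in range(retries):
        first, second = compute(*args), compute(*args)
        if first == second:
            return first
    raise RuntimeError("results never agreed; suspect the hardware")

# Stand-in workload:
print(run_with_verification(sum, range(1_000_000)))   # 499999500000
```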
      • by damiam ( 409504 ) on Sunday November 16, 2003 @10:14AM (#7487122)
        If you had read anything about this project, you would know that they have implemented their own low-overhead software error correction to compensate for the non-ECC memory. Presumably the benchmarks were done with this enabled.
        • by justins ( 80659 )
          Presumably the benchmarks were done with this enabled.

          Why would they? It's not a requirement for the benchmark.

          The benchmark is purely a measure of performance, not reliability or anything else. Of course the benchmark might end up being tainted by the creation of systems that just plain ignore reliability issues, but I think everyone involved in supercomputing knows to take these figures with a grain of salt.
        • Virginia Tech thinks they can handle it.
        • If it had ECC, it would still be faster.
        • The point of having a fast computer is to have shorter computations.
      • The lack of ECC is a very good point. I built a small Beowulf cluster a few years ago; we had 20 nodes with 1/2 GB of ECC memory each. When running computations 24/7, I would get emails about single-bit memory errors from the ECC reporting software I wrote about once a week. That was after I replaced a marginal DIMM that had errors about once a day.

        My memory (128MB PC133 DIMMs from Crucial; this was several years ago...) had an average of around 0.01428 errors per day per gigabyte. If Big Mac's memory is
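
        (Finishing that arithmetic with the thread's own figure: a rough Python estimate, assuming 1100 nodes with 4 GB each -- the per-node memory size is an assumption here, not a number from this thread.)

```python
# Expected single-bit errors per day for a Big Mac-sized memory pool,
# at the parent's measured rate. The 4 GB-per-node figure is assumed.
errors_per_day_per_gb = 0.01428
total_gb = 1100 * 4                        # 4400 GB, assumed config
print(errors_per_day_per_gb * total_gb)    # ~62.8 errors per day
```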

    • and a 2816-processor 2 GHz Opteron cluster that achieves only three fourths of Big Mac's performance. Now that's a defeat for AMD.

      Keep in mind that VT did a lot of assembly-level hacking before they managed to reach the number 3 spot. Presumably, LANL didn't bother since they probably didn't feel like they had anything to prove. Perhaps AMD will invest some time in doing something similar now.

      Now where's the G5 XServe?

      Apparently, the G5s run so hot that making 1U rack mounts is difficult.
      • by tychay ( 641178 )

        No, VT did not do a lot of "assembly-level hacking". One man working two months [macslash.org] did port a bunch of code, and he did use the best compiler and LINPACK on the market (Professor Goto's libraries). If LANL didn't do the same or better, I'd be disappointed.

        Also you keep harping on the fact that it was "self-assembled." But then you go on to compare it to a system not provided by IBM, HP, NEC, or Cray but one provided by Linux Networx. Perhaps if VA Tech had gone to them, Linux Networx might have beat out IBM's Op

  • by Faust7 ( 314817 ) on Sunday November 16, 2003 @09:35AM (#7486953) Homepage
    The sort of stuff that will make Big Mac look like that old TI-85 collecting dust in your drawer.

    Cluster a billion TI-85s together and then we'll see who's collecting dust.
  • There are some other interesting new additions based on semi-commodity hardware in the Top 500, right under VT's #3 slot.
    BigMac is certainly impressive, but even if these systems can't quite match its scores, they deserve a mention.

    #4 -- NCSA, United States, 2003 -- "Tungsten": PowerEdge 1750, P4 Xeon 3.06 GHz, Myrinet, 2500 CPUs (Dell) -- Rmax 9819, Rpeak 15300

    #5 -- Pacific Northwest National Laboratory, United States, 2003 -- "Mpp2": Integrity rx2600, Itanium2 1.5 GHz, Quadrics, 1936 CPUs (HP) -- Rmax 8633, Rpeak 11616

    #6 -- Los Alamos National Laboratory, United States/
    • Yes, I agree - these machines do deserve a mention. However, not necessarily a favorable one.

      For example, the Tungsten machine at #4 uses 14% more processors at a 53% higher clock rate to achieve 95.5% of the Rmax and 87% of the Rpeak.

      The Lightning machine at #6 uses 28% more processors at the same clock speed to achieve 82% of Rmax and 74% of Rpeak.

      I'm not impressed.
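
      (Those ratios check out against the numbers quoted upthread; Big Mac's Rpeak of about 17600 isn't quoted anywhere in this discussion and is an assumption below.)

```python
# Verifying the parent's ratios from the Top500 figures in this thread.
bigmac   = dict(cpus=2200, ghz=2.0,  rmax=10280, rpeak=17600)  # rpeak assumed
tungsten = dict(cpus=2500, ghz=3.06, rmax=9819,  rpeak=15300)

print(f"CPUs:  +{tungsten['cpus'] / bigmac['cpus'] - 1:.0%}")   # +14%
print(f"clock: +{tungsten['ghz'] / bigmac['ghz'] - 1:.0%}")     # +53%
print(f"Rmax:   {tungsten['rmax'] / bigmac['rmax']:.1%}")       # 95.5%
print(f"Rpeak:  {tungsten['rpeak'] / bigmac['rpeak']:.0%}")     # 87%
```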
  • by mindstrm ( 20013 ) on Sunday November 16, 2003 @09:38AM (#7486966)
    I mean, I understand reasonably well the benchmarks used... but my question is this:

    In the past, we always looked to the DoE or DoD for who had the fastest computers... they had stuff we could only dream of.. huge, fast clusters of funky computers we've never heard of.

    Now, a university built one out of macs... and it competes with the same benchmarks.

    What I wonder is: are there applications the old-style supercomputers are still better at, or has technology simply advanced since then (things like 10-gigabit Ethernet, GHz processors, memory buses, etc.)? Have we simply surpassed them? Don't just feed me some line about I/O either...
    • There are numerous applications in the applied sciences where it is very difficult to take advantage of tens or hundreds of thousands of processors. A few applications are termed "embarrassingly parallel" because they are easy to run, but most are not. On the harder applications, it is much easier to get some degree of parallel efficiency on a few fast CPUs (the NEC/Fujitsu type of vector supercomputer) than on a cluster of G5s.
    • But there are quite a few computers out there whose actual power is classified... How do we know this is really the fastest computer out there? We don't.
    • have we simply surpassed them?

      "we"? Are you on a "side"? I'd guess that you have nothing to do with any manufacturer or group involved with making new computer technology. I always thought it odd to take sides in something that has nothing to do with the person.

      But I digress.

      I think it just might be economies of scale. Billions of dollars are spent on CPU design and improved manufacturing to make a cheaper product, and to make that cheap product better. When one can amortize the cost over millions
  • by caffeine_monkey ( 576033 ) on Sunday November 16, 2003 @09:41AM (#7486982)
    I used to run an Intel-based supercomputer, but then one night, I was modelling a nuclear explosion on it, and all of a sudden it went berserk, the screen started flashing, and the model just disappeared. All of it. And it was a good model of a nuclear explosion! I had to cram and remodel it really quickly. Needless to say, my rushed model wasn't nearly as good, and I blame that Intel supercomputer for the fact that DARPA yanked our funding.
  • **IBM has managed to cram 1,024 PowerPC 440GX processors into a slanted cabinet the size of a dishwasher. The unit -- described by IBM as a small-scale prototype of Blue Gene/L -- is already ranked 73rd in the new Top500 list.**

    Now that's interesting... you could fit one hell of a cluster in your basement.
  • Imagine a beowulf cluster of..... oh, wait.
  • Well now we know where they are being stockpiled. ;)
  • by gsdali ( 707124 ) on Sunday November 16, 2003 @10:39AM (#7487239)
    Now to get on with the research. It's a credit to them that this computer got from the drawing board to fruition in the tiny amount of time that it did. It's raised the bar for price/performance in the research computing world, and hopefully many less wealthy institutions will follow suit (I'm looking at UK universities especially here). At the end of the day, it's about the research they put into it and the results they get out of it.
  • by Spoing ( 152917 ) on Sunday November 16, 2003 @10:49AM (#7487295) Homepage
    Aren't there supposed to be comments about Microsoft here? This is Slashdot -- isn't it?

    Well, to get the ball rolling, here is a query on the top 500 supercomputers using Microsoft Windows [google.com]. Corrections and insight are appreciated.

  • Yes, it might do some scientific calculations really, really fast, but I want to know how fast it does some real-world stuff. Give me some Quake framerates or Photoshop Gaussian blur benches.
    • Give me some Quake framerates, or Photoshop gaussian blur benches.

      Or if you're PC World, some Premiere or MS Word benchmarks.
    • I'm assuming you're joking, but here's a serious answer: for a supercomputer, scientific calculations are real world stuff. A significant chunk of the processing power in the world goes to things that the average desktop user will never see directly. I like a fast framerate on Quake or a fast Photoshop filter as much as anyone, but as a comp. bio. grad student, I also really appreciate a system that can run my bioinformatics apps in a reasonable time.
  • Infiniband, not G5 (Score:5, Informative)

    by binkleym ( 723884 ) on Sunday November 16, 2003 @10:58AM (#7487331) Homepage
    The G5 is a cool processor, but it isn't the reason the VT cluster is so fast; the Infiniband interconnect is. The LINPACK benchmark that is used to determine position on the Top 500 list depends very strongly on the latency of the network connection.

    Infiniband has ~ 8-12 us latency (probably even less by now), while ethernet is an order of magnitude slower. In real-life applications it's actually worse than this suggests.

    We have tested a real-life application (socorro) using both gigabit ethernet and Myrinet (slightly slower than Infiniband), and gigE took 600 seconds to finish a run, while Myrinet took 4.

    VT's cluster is using the largest Infiniband network yet built (or at least announced). The previous largest Infiniband network was O(100) machines. VT could have built the cluster using Xeons, Itaniums, or Opterons and arrived at roughly the same level of performance.
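
    (A quick alpha-beta cost model shows why latency, not bandwidth, dominates the small messages these codes exchange. The latency figures follow the parent comment; the bandwidth figures are assumptions.)

```python
# Message cost model: t = latency + size / bandwidth.

def msg_time(size_bytes, latency_s, bandwidth_bps):
    return latency_s + size_bytes / bandwidth_bps

gige = dict(latency_s=100e-6, bandwidth_bps=125e6)   # ~1 Gbit/s, assumed
ib   = dict(latency_s=10e-6,  bandwidth_bps=1e9)     # ~8 Gbit/s, assumed

for size in (1_000, 1_000_000):                      # small vs. large message
    t_g, t_i = msg_time(size, **gige), msg_time(size, **ib)
    print(f"{size:>9} B   gigE {t_g * 1e6:7.1f} us   IB {t_i * 1e6:7.1f} us")
# 1 kB: 108 us vs 11 us -- a ~10x gap, almost entirely latency.
# 1 MB: 8100 us vs 1010 us -- bandwidth matters only for large transfers.
```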
    • by 2nd Post! ( 213333 ) <gundbearNO@SPAMpacbell.net> on Sunday November 16, 2003 @11:16AM (#7487441) Homepage
      Yes, you're quite right, the networking hardware is important.

      But as researched by the VT folk, the G5 is significant: It was cheaper for their needs than the Xeons, Itaniums, and Opterons of similar performance and energy consumption!

      So both component choices were critical to their achieving number 3.
    • by afantee ( 562443 )
      >> VT could have built the cluster using Xeons, Itaniums, or Opterons and arrived at roughly the same level of performance.

      This is not true at all. VT has clearly stated in their presentation that the G5 has the best price/performance for what they do.

      The 1.5 GHz Itanium 2 costs over $3000 per chip, and even the 32-bit Xeon 3.06 GHz is about $1000, while the 2 GHz PPC 970 is about $300 or $400. In addition, VT wants 64-bit chips, so Xeon is a nonstarter.
    • Maybe the G5 has the cheapest CPU that is interoperable with Infiniband, and that is why they chose it? Remember that the G5 is a computer, not a processor, and you have to look at it as an integrated solution, not a selection of components. The PPC970 would be far less useful in a poorly designed computer.
    • not in LINPACK (Score:3, Insightful)

      by green pizza ( 159161 )
      Dig around the Top500 list and you'll see that for this benchmark (LINPACK), Myrinet and Infiniband don't do much better than plain GigE. (Which is one reason why the Cray X1 systems aren't ranked higher).

      In fact, there are some nearly-identical setups in which there is no difference between GigE and Myrinet.

      LINPACK is a good benchmark for generating big numbers for clusters, but it's a pretty poor supercomputing benchmark in general. The faster your machine can multiply and add fp numbers, the better its
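
      (To put numbers on that: LINPACK factors a dense n-by-n system, roughly (2/3)n^3 floating-point operations, which is why the score tracks multiply-add throughput. A back-of-the-envelope run time for Big Mac's quoted Rmax; the problem size is illustrative.)

```python
# Rough LINPACK wall-time estimate at a machine's sustained Rmax.
n = 500_000                      # problem size, an illustrative assumption
flops = (2 / 3) * n ** 3         # ~8.3e16 floating-point operations
rmax = 10280e9                   # 10280 Gflops, quoted in this thread
print(f"{flops / rmax / 3600:.1f} hours")   # ~2.3 hours
```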
  • by Boone^ ( 151057 ) on Sunday November 16, 2003 @11:12AM (#7487424)
    Run down the list and look at processor counts. We've got 5120 at the top (vector), but number 2 needed 8192 to get the job done. BigMac at #3 drops to 2200 and the processor counts hover in that 2000+ category. Until #19, when Cray's X1 jumps in at 252 processors.

    Having a fast computer is cool and all, but if you can do it with 252 CPUs instead of 1024 (#22, P4 2.4), isn't that a win?

    Besides, LINPACK doesn't stress interconnect latency and bandwidth, only cache and memory performance. When you run "real" codes on these Mac/Xeon clusters and get 5% efficiency, suddenly the Earth Simulator (and the small Cray X1s) look good when they blow well past the 50% efficiency mark.
    • It means you solve different problems with different computers. Isn't that obvious?

      There are solutions where numerous nodes are fairly efficient: something with coarse granularity and high compute effort, so that you allocate work per node infrequently. In those situations, something like the VT cluster is cheaper, more cost-effective, and more capable than the Earth Simulator, because you can build many of them for the same cost.

      Different tools for different problems, I think is the conversant i
    • Having a fast computer is cool and all, but if you can do it with 252 CPUs instead of 1024 (#22, P4 2.4), isn't that a win?
      It really depends on whether you think liquid cooling is a win.

      Besides, this article implies that the ORNL machine is only half finished. [ornl.gov]

      The rest of the initial setup of eight cabinets won't arrive until ORNL completes construction of its new 40,000-square-foot computational sciences facility.

      At 64 processors per cabinet, the "initial installation" will be 512 processors.

  • by Whatah ( 650786 ) on Sunday November 16, 2003 @11:16AM (#7487443)
    Why does every product Apple makes have its own icon?

    And yet equally, if not more, important products like AMD64 don't have their own icons?

    Additionally, why does this CPU have a G5 icon and not a PPC970 icon?

    Has Slashdot sold out to Apple?

  • The article is cool enough (first official confirmation of the 3rd spot), but after stating that in simple terms, it goes on bashing Big Mac with the 'future' clusters that will beat it big time.

    It may well be true that BM will be beaten, but please, a more positive spin on the present achievement would be in good order. If it gets beaten in 6 months, yeah, cool, but that will be *then*, not now. Pleaaseee, wasn't it over, that pointless subliminal Apple bashing?

    dani++
    • I suppose it's no worse than Slashdot bashing: The sort of stuff that will make Big Mac look like that old TI-85 collecting dust in your drawer??? Regardless of the success of future supercomputers, it is complete hyperbole to compare Big Mac's eventual demotion in the SC ranks to today's utility of a TI-85. Slashdot editorializing at its best, folks!
  • by aliquis ( 678370 )
    My TI-85 isn't collecting dust in my drawer. This summer I had a small accident with flavoured oat milk. What started as a tilted backpack ended as a TI-85 with some IC connectors magically eroded away after 5 hours or so of traveling.

    Kids, remember this: oat milk with salt = ionized water. Batteries = electricity. Ionized water + electricity isn't healthy for those small metal pieces of yours.

    Whoa, do I smell an interesting+informative moderation?
  • I still use mine. Heheh.

    I left high school in '96, and this year I rejoined college, since tech jobs now require a college degree.

    Hell, the el cheapo TIs have more memory than my high-end model, but at the same time mine has a lot of functions reserved for the TI-89.

  • ...is not collecting dust. It sits with all my other calculators. They are so tightly crammed into a little shoebox, along with their instruction booklets, that there is no room for air to get in or dust to settle. It's actually kinda sad that the very machines that bore my appetite for tech toys and knowledge are reduced to box-dwelling on a cabinet shelf directly below my stereo.

    On the other hand, it may even be more sad that they still sit so close to human touch, considering that no one has gotten any

  • by afantee ( 562443 ) on Sunday November 16, 2003 @01:21PM (#7488151)
    The 1.5 GHz Itanium 2 costs over $3000 per chip, and even the 32-bit Xeon 3.06 GHz is about $1000, while the 2 GHz PPC 970 is about $300 or $400. In addition, VT wants 64-bit chips, so Xeon is a nonstarter.

    Excluding the Earth Simulator, the 2 GHz G5 has the highest flops per CPU, even 5% higher than the 1.5 GHz Itanium 2 and 10 times cheaper:

    #2 Alpha: 13880 / 8192 = 1.69

    #3 G5: 10280 / 2200 = 4.67

    #4 Xeon: 9819 / 2500 = 3.93

    #5 Itanium: 8633 / 1936 = 4.46

    #6 Opteron: 8051 / 2816 = 2.86
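
    (The same figures, recomputed directly from the Rmax and CPU counts quoted in this thread:)

```python
# Rmax (Gflops) and CPU counts as quoted upthread; Gflops per CPU.
systems = {
    "#2 Alpha":   (13880, 8192),
    "#3 G5":      (10280, 2200),
    "#4 Xeon":    (9819,  2500),
    "#5 Itanium": (8633,  1936),
    "#6 Opteron": (8051,  2816),
}
for name, (rmax, cpus) in systems.items():
    print(f"{name:12} {rmax / cpus:.2f} Gflops/CPU")
# G5 4.67 vs Itanium 4.46: about 5% apart, as the parent says.
```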

  • by yroJJory ( 559141 ) <meNO@SPAMjory.org> on Sunday November 16, 2003 @01:32PM (#7488202) Homepage
    Can it run Maya AND Photoshop at the same time?
  • BlueGene/L (Score:3, Interesting)

    by frenchs ( 42465 ) on Sunday November 16, 2003 @05:58PM (#7489652) Homepage
    If BlueGene/L interests you, take a look at the next member of the family, BlueGene/P (the P means petaflop). If I recall correctly, the petaflop version is going to have more than a million processors in it. These computers are used mostly for biological applications, and they are going to benefit from some serious hardware, software, and networking.

    Here is the project update from a while back, talks a bit about each level of the blue gene project. It also talks about the biological motivations for supercomputing.
    http://www.research.ibm.com/bluegene/BG_External_Presentation_January_2002.pdf [ibm.com]

    And more generally, the BlueGene homepage: http://www.research.ibm.com/bluegene/ [ibm.com]

    -SF
