Factual 'Big Mac' Results

danigiri writes "Finally Varadarajan has put some hard facts on the speed of the VT 'Big Mac' G5 cluster. Undoubtedly after some weeks of tuning and optimization, the home-brewn supercluster is happily rolling around at 9.555 TFlops in LINPACK. The revelations were made by the parallel computing voodoo master himself at the O'Reilly Mac OS X conference. It seems they are expecting an additional 10% speed boost after some more tweaking. Srinidhi received standing ovations from the audience. Wired News is also running a cool news piece on it. Lots of juicy technical and cost details not revealed before. Myth dispelling redux: yes, VT paid full price, yes, it's running Mac OS X Jaguar (soon Panther), yes, errors in RAM are accounted for, Varadarajan was not an Apple fanboy in the least... read the articles for more booze."
  • FACT: (Score:2, Funny)

    by Anonymous Coward
    Big Macs are bad for your health.
  • by devphaeton ( 695736 ) on Thursday October 30, 2003 @04:12PM (#7351726)
    ....ok, we've really got real numbers THIS time!!

  • I haven't seen a cluster of Macs this big and powerful since the last annual pimp convention!

    Now, where did all the tricks go?
  • Brewn? (Score:3, Interesting)

    by FatAlb3rt ( 533682 ) on Thursday October 30, 2003 @04:13PM (#7351747) Homepage
    Is that a word? How about brewed? Hate to nit, but .... aw... nevermind.

  • Super computer? (Score:3, Insightful)

    by ludky132 ( 720414 ) on Thursday October 30, 2003 @04:16PM (#7351766)
    I've always been sort of intrigued by Top500 [top500.org]. Has there ever been a good comparison written about the similarities/differences between a 'supercomputer' and the regular PC sitting on my desk running Linux/2k? At what point does the computer in question earn the title "Super"?
    • Re:Super computer? (Score:2, Interesting)

      by Carnildo ( 712617 )
      A "supercomputer" is usually one that is optimized for vector operations: operations that take a data set and perform the same operation on each element of that data set -- sort of a "Super SIMD/SSE/AltiVec/whatever" (a tiny illustration follows below). Your desktop computer is designed around performing a series of different operations on a single data element at a time. The graphics card of your computer could be considered a very specialized supercomputer.

      In terms of raw processing power, the computer on your desk is more powerful than
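      To make the distinction above concrete, here's a tiny C sketch (the array size, names, and constants are all made up for illustration): the first loop applies the same multiply-add to every element of an array -- the data-parallel sort of work a vector unit or a SIMD engine like AltiVec/SSE can chew through several elements at a time -- while the second loop is a dependent chain on a single value, the scalar-style work a desktop CPU's pipeline is built around.

      #include <stdio.h>

      #define N 1024

      int main(void)
      {
          static float x[N], y[N];
          float a = 2.5f, s = 0.0f;

          for (int i = 0; i < N; i++) {   /* make up some data */
              x[i] = (float)i;
              y[i] = 1.0f;
          }

          /* "Vector" work: the same multiply-add on every element; iterations
           * are independent, so SIMD hardware can do several per cycle. */
          for (int i = 0; i < N; i++)
              y[i] = a * x[i] + y[i];

          /* Scalar-style work: each step needs the previous result, so there
           * is nothing here to run in parallel. */
          for (int i = 0; i < N; i++)
              s = s * 0.5f + y[i];

          printf("y[N-1] = %f, s = %f\n", y[N - 1], s);
          return 0;
      }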
    • by isoga ( 670113 ) on Thursday October 30, 2003 @04:26PM (#7351889) Journal
      When you get on the list. Then you have a supercomputer
    • when it can rate on the list. And since it's a fluid list, with big computers regularly falling off, the definition changes. The real question is, what happens to a "former" super computer? If you want a comparison, run the linpack code on your machine.
  • Full Price? WHY?!? (Score:5, Insightful)

    by JonTurner ( 178845 ) on Thursday October 30, 2003 @04:16PM (#7351768) Journal
    >>yes, VT paid full price

    This is disgraceful! Hundreds of Macs on one purchase order, and they couldn't (or chose not to!) negotiate a deal? The Virginia taxpayers should be outraged! Good grief, if I bought 600 loaves of bread from the corner market, I'd expect a discount. Perhaps they were more interested in making the press than being good stewards of the public trust. After all, the college knows the taxpayers will have to pay the bills, sooner or later.
    Shameful.
    • I agree. As an employee of a state-run university, I can attest that I'm eligible for a 10% discount off the purchase price of one of the dual 2GHz G5s. (Originally $2999, discounted to $2699).

      That VT wasn't able - or didn't think - to do the same is pretty shocking. A savings of $330,000 isn't anything to sneeze at.
    • by zeno_2 ( 518291 ) on Thursday October 30, 2003 @04:24PM (#7351860)
      Derek Bastille of the Arctic Region Supercomputing Center in Fairbanks said that they just built a supercomputer but spent about 30 million using Cray and IBM equipment. He got quotes from other companies (Dell) and the price was going to be about 10 million. They only ended up spending 5.2 million on the Apples. I'd say if I lived in Virginia and paid taxes, I would be happy.
      • by david614 ( 10051 ) *
        Well, I *do* live in Virginia -- and this is one of the greatest things to happen at a publicly funded university in years! Great science, ingenuity, huge potential. Now *that* is why public funding is an essential part of R&D.
    • Ya know, I could not figure out if this was a troll or if you were being serious. I decided that you were being serious, so I will simply say that there is always going to be someone who is going to bitch and moan about something. Apple gave VT the best deal going (out of IBM, Dell, HP, etc.) and they also got an educational discount. What more do you want from a company that gives you the fastest for the cheapest?

    • by Patrick Lewis ( 30844 ) * on Thursday October 30, 2003 @04:25PM (#7351874)
      I imagine that it was because the G5s were very scarce at launch. These aren't loaves of bread we are talking about. Apple could ship and sell at full price as many of these as they could make, so VT really had no leverage to try and get lower prices. 3-6 months after the launch, then sure, they might be able to get it cheaper. But first in line? I don't think that it is surprising at all to hear they paid full price.
    • I'm sure they got lots of extra special support to offset the MSRP purchase price.
  • interesting points (Score:5, Interesting)

    by kaan ( 88626 ) on Thursday October 30, 2003 @04:17PM (#7351772)
    I think it's interesting that he wasn't a Mac fan at all before this project. He says he chose it because it had better performance than everything else out there ("Ironically, they lost the gigahertz game," he said of Intel. "(The G5) is extremely faster than the Itanium II, hands down."), and was cheaper too (Dell and other manufacturers quoted prices between $10 and $12 million, vs. the $5.2 million for G5s).

    What more do you need? Faster systems, cheaper total cost, and slick looking cases.
    • by davidstrauss ( 544062 ) <david@davidstrauPERIODss.net minus punct> on Thursday October 30, 2003 @04:29PM (#7351928)
      Itanium is a poor architecture. This isn't just my opinion, it's the opinion of the professor here at UT Austin working on the multi-core lightweight processor (a.k.a. TRIPS) that IBM will hopefully be fabbing soon. Seeing a cost comparison with the Athlon64/Opteron would be more enlightening. Also consider that it would be almost impossible to buy Itanium or any other "enterprise" system without all the redundant hardware (ECC RAM, etc.) for which the G5 cluster compensates in software.
      • by RzUpAnmsCwrds ( 262647 ) on Thursday October 30, 2003 @06:19PM (#7353064)
        "Itanium is a poor architecture. This isn't just my opinion, it's the opinion of the professor here at UT Austin working on the multi-core lightweight processor"

        Your professor's opinion is... well... flawed.

        Itanium is an excellent architecture. Its flaws come from politics:

        1: Itanium requires good compilers. For now, that means compilers from Intel. GCC will be fine for running Mozilla on an Itanium, but technical apps compiled with GCC simply won't get anywhere near the machine's potential.

        2: Intel wants to market Itanium as a server chip. That means that they are putting 3MB or 6MB of cache on the high-end Itaniums. Soon they will have a 9MB cache version. Lots of cache means lots of transistors means lots of heat.

        3: Intel is not fabbing Itanium with a state of the art process. Intel leads the world in process technology, yet their Itanium is still on a 130nm process. Before Madison (about a year ago), it was on a 180nm process.

        Some misconceptions:

        1: Itanium is "inefficient". This couldn't be further from the truth. At 1.5GHz, it whoops *anything* else in SPECfp (by a margin of 1.5x or more) and matches the 3.2GHz P4 or 2.2GHz Opteron in SPECint.

        2: Itanium is "slow". Wrong again, see above.

        3: Itanium doesn't scale. Wrong again. Itanium scales better than any other current architecture, getting nearly 100% of clock in both int and fp. Opteron gets around 99% int and 95% fp. Pentium 4 gets around 85% int and 80% fp. I don't have data for PPC970.

        4: Itanium is expensive. This is true, but it has to do with politics rather than architecture. Itanium uses *fewer* transistors and does *more* instructions per clock than a RISC architecture. Itanium takes much of the logic out of the CPU and puts it into the compiler (this is why you need good compilers). Itanium's architecture is called EPIC, or explicitly parallel instruction computing, because each instruction is "tagged" by the compiler to tell the CPU what instructions can and cannot be executed in parallel (a tiny illustration of that idea follows below).

        EPIC scales better than RISC architectures. It does more work with a lower clock and fewer transistors. That means that it will ultimately result in a cooler, cheaper, smaller, faster CPU than anything else. Intel's politics prevents this from happening.

        So, please don't say that Itanium is a poor architecture. Itanium is a proven architecture. It uses fewer transistors and lower clock speeds than comparable RISC CPUs. Yes, it has problems, but most of them have to do with Itanium the CPU (too much cache, too expensive, not latest process) instead of EPIC the architecture.
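        A minimal C illustration of that "tagged by the compiler" idea (purely conceptual -- this is ordinary C, not IA-64 code, and the variable names are invented): an EPIC compiler could bundle the first three statements into one issue group because none of them reads another's result, while the last three form a dependent chain that no amount of compiler tagging or hardware reordering can overlap.

        #include <stdio.h>

        int main(void)
        {
            double a = 1.5, b = 2.5, c = 3.5, d = 4.5;

            /* Independent operations: a compiler can schedule all three in one
             * issue group, since none depends on another. */
            double p = a * b;
            double q = c + d;
            double r = a - c;

            /* Dependent chain: each line needs the line before it, so these
             * cannot be overlapped no matter who does the scheduling. */
            double s = p + q;
            double t = s * r;
            double u = t + 1.0;

            printf("%f %f %f\n", s, t, u);
            return 0;
        }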
        • EPIC scales better than RISC architectures. It does more work with a lower clock and fewer transistors. That means that it will ultimately result in a cooler, cheaper, smaller, faster CPU than anything else.

          Doing more per clock isn't necessarily good if it pushes your clock speed too low. Itanium2 is only available up to about 1.3 GHz. As the article says, it's ironic that Intel should now lose the MHz race.

          Using fewer transistors is good for reducing heat and manufacturing costs, but the Itanium is neither cheap nor cool (130W!).

          • by RzUpAnmsCwrds ( 262647 ) on Thursday October 30, 2003 @08:30PM (#7354180)
            "Itanium2 is only available up to about 1.3 GHz."

            If by "about 1.3 GHz", you mean 1.5GHz, then, yes, Itanium only goes up to 1.5GHz. But at 1.5GHz it is faster than the fastest 3.2GHz Pentium 4. With a decent process and less cache, it could easily scale to 2+ GHz.

            " but the Itanium is neither cheap nor cool (130W!)"

            This has to do with the fact that the CPU has 3MB of cache on it. That makes the die huge which makes the CPU expensive. It also makes it heat up like a toaster. As a comparison, the latest Pentium 4s are ~90W, and they only have 512K of cache.

            "In the performance arena, Moore's law is useless unless chip designers figure out how to use MORE transistors to compute more quickly."

            My statement was that, for a given performance level, Itanium uses fewer transistors than RISC. Itanium was *designed* to use fewer transistors. That's why the instruction set is designed to produce code that runs well in parallel. RISC CPUs have to figure out what can be run in parallel in hardware - Itanium does it in the compiler.
        • Intel wants to market Itanium as a server chip. That means that they are putting 3MB or 6MB on the high end Itaniums. Soon they will have a 9MB cache version. Lots of cache means lots of transistors means lots of heat.

          I don't see your point here. More cache does not make it a better processor architecture.

          Intel is not fabbing Itanium with a state of the art process. Intel leads the world in process technology, yet their Itanium is still on a 130nm process.

          The PPC970 and Power4+ are both fabricated i

        • by mczak ( 575986 ) on Thursday October 30, 2003 @08:43PM (#7354252)
          Itanium is an excellent architecture.
          Can't agree there. It's certainly not as bad as the first Itanics made it look, and it has lots of interesting ideas, but overall it seems the architecture didn't reach the goals Intel probably had.
          Itanium requires good compilers. For now, that means compilers from Intel.
          Certainly. However, it looks like it is very, very hard (if not impossible) to write a good compiler for it - Intel has certainly invested a LOT of time and money and increased performance quite a bit (quite a bit of the performance difference in published SPEC scores between Itanium 1 and 2 is just down to a newer compiler), but if rumours are true the compiler still isn't all that good - after what, 5 years?
          Lots of cache means lots of transistors means lots of heat.
          Not quite true. Cache transistors aren't very power hungry - look at P4 vs. P4EE with an additional 2MB L3 cache, the power consumption hardly changed (5W or so isn't much compared to the total of 90W).
          Intel is not fabbing Itanium with a state of the art process.
          Well, their 130nm process sounds quite good to me. Nobody really uses much better process technologies yet - AMD might have a slight edge with their 130nm SOI process, which should help a bit with power consumption.
          Itanium is "inefficient". This couldn't be further from the truth. At 1.5GHz, it whoops *anything* else in SPECfp (by a margin of 1.5x or more) and matches the 3.2GHz P4 or 2.2GHz Opteron in SPECint.
          The Itanium makes up for its inefficiency with large caches (compared to the P4 / Opteron). Compare the Dell PowerEdge 3250 SPEC results with 1.4GHz/1.5MB cache and 1.5GHz/6MB cache, otherwise configured the same (unfortunately using slightly older compilers, so don't take the absolute values too seriously). The smaller cache (which is still more than Opteron/Pentium 4 have) costs it (factoring in the 6% clock speed disadvantage) about 20% in SPECint (making it definitely slower than the Opteron 146 and Pentium 4, even considering that the results would be higher with the newer compiler). In SPECfp it's about the same 20% difference, which means it still beats the P4 and Opterons, but no longer by such impressive margins.
          Itanium doesn't scale. Wrong again.
          I'm too lazy to check the numbers, but the Itanium has a shared bus - granted, with quite a bit of bandwidth, but still shared (similar to the P4). 2 CPUs should scale well, 4 shouldn't be that bad, and after that you can forget it (meaning your 64-CPU boxes will be built with 4-node boards). The Opteron will scale much better beyond 4 nodes - its point-to-point communication is probably overkill for 2 nodes, should show some advantages with 4 nodes, and scale very well to 8 nodes - too bad nobody builds 8-node Opteron systems...
          Or do you mean scaling with clock speed? In that case, the bigger the cache and the faster the system bus and RAM, the better it will scale, but the CPU architecture itself is hardly a factor.
          Itanium uses *fewer* transistors and does *more* instructions per clock than a RISC architecture.
          Unfortunately I haven't seen any transistor counts for an Itanium 2 core, but I think it's not true. The Itanium saves some logic on the instruction decoder, but has more execution units in parallel (which should lead to better performance, but ONLY IF it's actually possible to build a well-optimizing compiler which manages to keep the execution units busy, and it's completely feasible that this is just not possible in the general case).
          EPIC scales better than RISC architectures.
          I really don't think this is true. Scaling is independent of the CPU core architecture.

          I will agree that EPIC (which, btw, isn't quite Intel's invention, it shares most of the ideas with VLIW) is a nice concept, but for some reason it just doesn't work in practice as well as it should.
  • Dumb Question... (Score:4, Interesting)

    by devphaeton ( 695736 ) on Thursday October 30, 2003 @04:18PM (#7351788)
    ....maybe i'm obtuse, but i keep hearing about this thing as "..and we're only seeing X% of its real potential right now!"....

    1) Why can't they just shout "Let 'er rip!!" and crank the thing wide open?

    2) Why all the media buzz concerning this as a `surprise' when they've already got its performance figured out, apparently?

    Sorry.
    • Re:Dumb Question... (Score:5, Informative)

      by SquareOfS ( 578820 ) on Thursday October 30, 2003 @04:22PM (#7351833)
      Because performance in a supercomputing cluster is not just the sum of the nodes.

      It's highly dependent on the interconnects, the topology of the network, the software that does the clustering (i.e., that actually makes the nodes available for parallelized word), etc.

      So minor tweaks can have major effects, and getting it tweaked properly is quite an accomplishment.
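      As a toy illustration (not VT's code -- just a generic MPI timing sketch with an arbitrary message size and repeat count): LINPACK-class runs are MPI jobs, and the time a program spends inside a collective like MPI_Allreduce is pure interconnect and clustering-software cost, which is exactly the part that this kind of tweaking claws back.

      #include <mpi.h>
      #include <stdio.h>
      #include <stdlib.h>

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);

          int rank, size;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          const int n = 1 << 20;                 /* one million doubles per rank */
          double *local  = malloc(n * sizeof *local);
          double *global = malloc(n * sizeof *global);
          for (int i = 0; i < n; i++)
              local[i] = rank + 1.0;

          MPI_Barrier(MPI_COMM_WORLD);
          double t0 = MPI_Wtime();
          for (int rep = 0; rep < 10; rep++)     /* the time spent here is communication */
              MPI_Allreduce(local, global, n, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
          double t1 = MPI_Wtime();

          if (rank == 0)
              printf("%d ranks: %.3f s spent in Allreduce alone\n", size, t1 - t0);

          free(local);
          free(global);
          MPI_Finalize();
          return 0;
      }

      Build it with an MPI compiler wrapper (e.g. mpicc) and launch it across however many nodes you like; the same binary will report very different numbers over plain gigabit Ethernet than over a fabric like InfiniBand.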

      • the nodes available for parallelized word

        Does it make Word's performance acceptable?
      • Ahhh.. thanks... Guess i couldn't see the forest for the trees on that one.

        You know, i still would have thought that they'd have all this worked out beforehand, maybe before they even build a supercomputer.

        I mean, you can't just say "well, let's go grab a metric fsck-ton of X and see what happens when we cluster it". You're talking a lot of resources and especially $$ that's being thrown on the line. I'm sure that building a supercomputer is way over my understanding and these folks probably have put m
        • Re:Dumb Question... (Score:3, Interesting)

          by Blimey85 ( 609949 )
          They did have specs beforehand. They said ok, we take this many and the max theoretical performance is X. We scale that back to Y percent and that's what we will likely achieve. We need to get to Z performance level, and Y percent of X is above the Z threshold, so we're good to go. Now let's talk price. It's the cheapest available and they can get it to us to meet our deadline? Great. Let's order.

          They knew in advance what they could likely achieve with this cluster and they have surpassed what they were expec

        • It is more likely that they had attempted to figure out exactly what they would get out of this hardware beforehand. Then when nearing completion of the project they found things turning out a little bit differently than they expected. Hence additional tweaks and expected additional performance increases. Just because you have something planned out at the beginning doesn't mean things are going to turn out exactly how you expect them to. No matter how much time and money is spent on that planning stage. I
    • > 1) Why can't they just shout "Let 'er rip!!" and crank the thing wide open?

      Statement: I know I can get my Ferrari to give me about 20% more acceleration by adjusting the fuel to air ratio.

      Question: Why can't you just shout "Let 'er rip!!" and crank the thing wide open?

    • Why can't they just shout "Let 'er rip!!" and crank the thing wide open?

      If it were that easy, I'm sure they'd do just that. But a cluster of over 1000 machines is a complex contraption and I'm willing to bet that tuning it to get the last drop of performance out of it is not a simple task. It's probably a theorize, measure, tweak loop getting small increments of performance gain on each iteration. It's probably going to take them some time before they get it fully tweaked out.
    • ....maybe i'm obtuse, but i keep hearing about this thing as "..and we're only seeing X% of its real potential right now!"....

      1) Why can't they just shout "Let 'er rip!!" and crank the thing wide open?


      The Real Potential is figured as a pure function of how many processors are in a machine and what speed they're running at (a rough back-of-the-envelope version is sketched below).

      Only a certain percentage of this processing power is aimed squarely at solving a problem, however... you also have to do things like:

      Run an operating system
      Compute error checks on co
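      For the curious, a back-of-the-envelope version of that "Real Potential" arithmetic, using the numbers floating around (1,100 dual 2GHz G5s) and the usual peak-rate assumption of two FPUs per CPU each doing a fused multiply-add, i.e. 4 flops per clock -- treat it as a rough sketch, not an official figure:

      #include <stdio.h>

      int main(void)
      {
          double nodes     = 1100;     /* dual-processor Power Mac G5s          */
          double cpus      = nodes * 2;
          double ghz       = 2.0;      /* clock speed                           */
          double flops_clk = 4.0;      /* 2 FPUs x (multiply + add) per cycle   */

          double peak_tflops = cpus * ghz * flops_clk / 1000.0;
          double linpack     = 9.555;  /* the figure reported at the conference */

          printf("theoretical peak: %.1f TFlops\n", peak_tflops);
          printf("LINPACK so far:   %.3f TFlops (%.0f%% of peak)\n",
                 linpack, 100.0 * linpack / peak_tflops);
          return 0;
      }

      Under those assumptions the peak works out to roughly 17.6 TFlops, so 9.555 TFlops is a bit over half of it -- which is why the "only X% of its real potential" phrasing keeps coming up.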
    • It used to be, at least according to the US government, that hitting the 1 gigaflop mark earned the title and meant export restrictions. However, that's no big deal these days, so I guess the definition is really dated if they haven't bumped it to 1 teraflop.
  • by Anonymous Coward on Thursday October 30, 2003 @04:18PM (#7351796)
    An audience member asked if he'd made the purchase through the Apple store. Varadarajan smiled and said that actually, yes, he had.

    • by Glonoinha ( 587375 ) on Thursday October 30, 2003 @04:29PM (#7351927) Journal
      I can see that one now ... Varadarajan surfs to www.apple.com/purchase

      Ok, max all the options. Cool.
      Now put 1100 in the quantity. Cool.
      Ok (chugga chugga chugga) $3.3 million. Who has the credit card? (silence, *crickets*, the rude sound of nobody reaching for their wallet...)

      Ok, maybe it is just me. Of course I have play-configured a few systems in the online order systems of IBM and Dell a few times (didn't actually hit 'Submit', however), and it is possible to configure a single $100k machine from Dell. I haven't found the limit at IBM yet as they seem to have more imagination than I do (although it is easy just to get the SOFTWARE on one of their systems to exceed $100k).
      • Is this a hobby?

        You should start a list of the most expensive things you can buy online with a credit card.

        For instance, could you buy an Airbus?

      • by jeholman2003 ( 311887 ) on Thursday October 30, 2003 @06:25PM (#7353115) Homepage

        I usually never reply to these things, but I think it is funny that people are arguing about how he ordered on the Apple Store. I find it even funnier that people would even go to the Apple Store and try. It was a joke! There were a lot of dedicated people at Apple, including myself, who helped to make this dream become a reality. The "myth" that I would like to clear up is that Apple DID have a clue, and a lot of great people at Apple have been working really hard for the last few months, making a lot of personal sacrifices to make sure that all the awesome work from Dr. Varadarajan and the rest of the cluster team could be possible and successful. That's my 2 cents.


        Jerome Holman
        Apple Campus Representative @ VT
        http://filebox.vt.edu/users/jeholman [vt.edu]
        • Ah, finally someone who is actually involved with the project. Can you tell me what the total cost of the supercomputer was?
          The $5.2M figure seems to just be the towers (a dual 2GHz with 4GB RAM is $4814 with the standard educational discount; multiply by 1100 and you get $5,295,400). What was the additional cost of the InfiniBand cards and switches, the Cisco switches, the racks, and the cooling equipment? Were any modifications necessary for the building (more power, etc)?
          • Answers (Score:3, Interesting)

            From http://macslash.org/article.pl?sid=03/10/28/2357235&mode=thread "The total cost of the asset, including systems, memory, storage, primary and secondary communications fabrics and cables is $5.2mil. Facilities upgrade was $2mil. 1mil for the upgrades, 1mil for the UPS and generators." Total: $7.2M + essentially "volunteer" assembly. So it's still a LOT cheaper than anything even close to comparable.
    • With that kind of purchase, it would be first class travel for years!
  • by SuperBanana ( 662181 ) on Thursday October 30, 2003 @04:18PM (#7351799)
    from the whole-lotta-clock-cycles dept.
    [snip]
    yes, it's running Mac OS X Jaguar (soon Panther)

    More like whole-lotta-CD-jockeying. Perhaps the bio department can lend a hand by donating the services of their chimps to handle the CD swapping.

    (Yes, I'm aware there are smarter ways of doing it, but isn't it a fun mental picture, 100 chimps running around a cluster of G5's and throwing bananas and CDs at each other?) Talk about your fun install-fests.

  • Simply amazing (Score:3, Insightful)

    by laird ( 2705 ) <lairdp @ g m a i l.com> on Thursday October 30, 2003 @04:19PM (#7351805) Journal
    This is simply an amazing achievement. Plenty of people have built supercomputers from huge piles of x86s, but this team managed to pull the trick off not only in less time and for less money, but on a new hardware platform. I certainly follow their logic (PPCs have always been far better than x86s for real scientific-level precision FLOPs), but it's a really gutsy move betting your entire supercomputing program on a new CPU, new hardware platform, etc., and on your ability to get everything ported to the PPC -- that's a lot of risks to take, and a school like that can't afford to fail, even building a relatively cheap supercomputer. But it clearly paid off! Not only did they get great PR for the university, they got a great computing resource for the students and faculty, and by doing it themselves rather than buying a complete system from a vendor, I am sure that those students all learned far more. And those 700 pizza-and-Coke-consuming students who cranked the code will all be able to say that they were part of this amazing thing.

    Damn!
  • by SDMX ( 668380 ) on Thursday October 30, 2003 @04:21PM (#7351827)
    'yes, errors in RAM are accounted for,' And no malloc library benchmark jumbling bullshit this time? T minus 10 minutes before some PC nut looks at all this, sees that the Mac relies on something a PC can't do, and 'blows the whistle'. T minus 15 minutes before they realize it's the OS.
  • by Colonel Panic ( 15235 ) on Thursday October 30, 2003 @04:22PM (#7351831)
    Varadarajan told the audience he would publish full documentation and release most of the code written for the machine. However, some of the software is subject to patent applications, he said, and he wasn't yet sure if it would be released under an open-source license.

    What's up with that?
    Used to be that work like this done at a University was considered 'open' as in available to anyone to help advance the state-of-the-art. Not anymore...
    • by norkakn ( 102380 ) on Thursday October 30, 2003 @04:27PM (#7351906)
      It isn't their fault.. I heard a long story on NPR about it a while ago. Universities tried to stay out of the patent game, but companies would take their research and patent it and then charge the university to use it.. researchers having to pay to use their own findings.

      The patent system needs to be overhauled, then maybe we can start opening up the Universities again (and give them some more funding too!)
        It isn't their fault.. I heard a long story on NPR about it a while ago. Universities tried to stay out of the patent game, but companies would take their research and patent it and then charge the university to use it.. researchers having to pay to use their own findings.

        Boy, I'd love to see you substantiate this assertion with a source. If what you say is true, then it sounds to me like the entire university system in this country is essentially already undermined completely. No wonder researchers are

        It isn't their fault.. I heard a long story on NPR about it a while ago. Universities tried to stay out of the patent game, but companies would take their research and patent it and then charge the university to use it.. researchers having to pay to use their own findings.

        The patent system needs to be overhauled, then maybe we can start opening up the Universities again (and give them some more funding too!)

        That's a real problem, but it has a real solution: release the findings as public domain or under

      • ...companies would take their research and patent it and then charge the university to use it.. researchers having to pay to use their own findings.

        Besides the prior art issue that others mentioned, academic research is not subject to patents. So university researchers never have to pay to license patents.
    • Comment removed based on user account deletion
    • They probably aren't his patents... someone already applied for them.
    • Used to be that work like this done at a University was considered 'open' as in available to anyone to help advance the state-of-the-art.

      Since when? Sure some work is done openly and published. Some developments are marketed. This is one way a university makes money...

    • Ever hear of pre-emptive patenting? You patent something before someone else comes along, patents it, and then offers to license it to you for a truckload of money. Just because someone obtains a patent on something doesn't mean they're out to squeeze every dollar out of it. Sometimes a patent is obtained, then freely licensed to everyone just to make sure the knowledge is freely available.

      But until you actually get the patent, you're an idiot if you let the cat out of the bag, so to speak.
    • Assuming you live in the USA - have you taken a look at your state budget recently? In particular, the dollars being spent on higher education?

      Here's a fun game to play: find the total budget for your state university system. Calculate the percentage of that budget coming from state financing. Perform the same calculation using 1995, 1990, 1980, and 1970 budgets.

      Do the math in a state like Wisconsin, and the percentage has fallen quite dramatically. That lost financing has to be made up from somewher
    • Used to be that work like this done at a University was considered 'open' as in available to anyone to help advance the state-of-the-art.
      Nothing about a patent requires charging for or restricting use of the patented ideas. What it does do is prevent Lawyers, Inc. from patenting the work, and then restricting its use.

      Holding the patent in hand is much cheaper than having a law firm fight the prior art fight for you.

  • by BWJones ( 18351 ) on Thursday October 30, 2003 @04:22PM (#7351835) Homepage Journal
    So, the other really cool thing they are doing is open sourcing the code for error checking and connectivity.

    This is in addition to consulting where they are helping others build similar clusters.

  • Full price? (Score:5, Insightful)

    by Aqua OS X ( 458522 ) on Thursday October 30, 2003 @04:29PM (#7351922)
    Wow.. I can't believe Apple didn't cut them a break for buying 1100 Dual G5s.

    You'd think Apple would at least sell G5s to VT without SuperDrives and Radeon 9600s. I seriously doubt those things (especially the video cards) will get a lot of use in a giant cluster.

    But, hey, even with all that pointless extra hardware, this cluster is still less than half the price of a comparable Intel system from Dell or IBM. Weird.
    • I seriously doubt those things (especially the video cards) will get a lot of use in a giant cluster.
      as i say every time this topic comes up, they're talking about using the GPUs for additional processing. while GPUs aren't flexible enough to perform many tasks, among some of the tasks they can do are some that they do extremely well.
    • Re:Full price? (Score:5, Interesting)

      by OECD ( 639690 ) on Thursday October 30, 2003 @05:19PM (#7352495) Journal

      You'd think apple would at least sell G5's to VT without SuperDrives

      OTOH, five years from now, when they have the world's 65,000th fastest supercomputer, they could just pull the thing apart and give/sell complete computers to their students. Then it's back to the Apple Store to order up a whole lot of G7's.

    • Um, in a roundabout way some of this is from IBM. The two CPUs in each box are from IBM.

      When IBM comes out with the $3,500 4-way 970 (G5 in Apple-speak) workstation it will be interesting to see what people do with it. Imagine a cluster that is 17% more expensive but with twice as many processors...

  • nerds (Score:2, Funny)

    by mooface ( 674033 )

    From the wired article:

    "After his presentation, a group of nerds followed him to the hotel's bar for drinks, hanging on his every word."

    How dorky did these guys have to be to have a reporter for "Wired" categorize them as nerds...damn....
  • Think how much faster it will be when they switch all the nodes over to Linux! :-)/2
  • by Epistax ( 544591 ) <(epistax) (at) (gmail.com)> on Thursday October 30, 2003 @04:33PM (#7351979) Journal
    ... but that doesn't matter. An accomplishment is an accomplishment. Besides if an AI manifests itself it'd be less likely to destroy the world and more likely to tell you that your white socks do not match your purple tie.
  • by cosmo7 ( 325616 ) on Thursday October 30, 2003 @04:38PM (#7352027) Homepage
    For your convenience I've collected the main arguments people have made against the cluster:
    • They got some special deal from Apple
    • It's running Linux, not OS X
    • Opterons would be faster and cheaper
    • The guy in charge is some Mac zealot
    • It isn't as fast as everyone expected
    • Rockets would not work in outer space as there is no atmosphere to push against
  • I am somewhat surprised that there isn't a G5 version of the Xserve yet. I guess the G5 chips are still pretty scarce. (Or else Apple's really taking the time to get the G5 Xserve right... or both.)

    However, if G5 Macintosh systems like this become "popular" in supercomputing, maybe that's a reason to get a G5 Xserve out there sooner. I'd imagine a rack-mount system would be easier to deal with than a bunch of towers.
  • by EmCeeHawking ( 720424 ) on Thursday October 30, 2003 @04:43PM (#7352083)
    To those who are wondering why the G5 is a serious contender for supercomputing applications (and why VT decided the way they did), you may want to follow this link: http://www.chaosmint.com/mac/vt-supercomputer/ [chaosmint.com]

    Here's a quick rundown:

    Dell - too expensive [one of the reasons for the project being so "hush hush" was that Dell was exploring pricing options during bidding]

    Sun (SPARC) - required too many processors, also too expensive

    IBM/AMD (Opteron) - required twice the number of processors and was twice the price in the desired configuration; had no chassis available

    HP (Itanium) - same

    Apple (IBM PPC970) - system available with chassis for lowest price
  • Power PC 970 and G5 (Score:3, Interesting)

    by mojowantshappy ( 605815 ) on Thursday October 30, 2003 @04:52PM (#7352199)
    From the O'Reilly article:

    "The IBM with a PowerPC 970 was a first choice but the earliest delivery date would have been January 2004."

    "On June 23 Apple announced the G5."

    I was under the impression that the G5 was a Power PC 970. Is it just some derivative of the Power PC 970... or what?

  • From the summary: "the home-brewn supercluster is happily rolling around at 9.555 TFlops"

    Ignoring the "brewn" part of things, since when does "home-brewed" mean "designed and funded by a major university"?

    I usually think of "home brewed" as something that someone put together at home. With their own money. In their spare time.

    This is *not* a home-brew supercomputer; it is an institution-designed and -created supercomputer.

    That is all.
  • by mojowantshappy ( 605815 ) on Thursday October 30, 2003 @05:05PM (#7352361)
    Here is a slideshow (in PDF format) with a bunch of details on the supercomputer, including decisions on what to get, etc.

    Here is da slide-show [vt.edu]

  • Memory errors? (Score:3, Interesting)

    by Hoser McMoose ( 202552 ) on Thursday October 30, 2003 @05:06PM (#7352371)
    I keep seeing reference to some sort of software that will defeat hardware memory errors.

    How, pray tell, are they planning on detecting these errors? I can understand how you could reduce the frequency of errors with only a slight loss in performance, i.e. take some sort of checksum of your data after every x number of cycles, but that doesn't eliminate the errors, only reduces their frequency. Maybe it reduces the frequency by enough that you don't need to worry about it, especially if 'x' is a sufficiently small number, but it still seems like a pretty risky prospect to me. (A rough sketch of that checksum idea is below.)

    Anyone seen any actual TECHNICAL details on this point, i.e. not just some Mac fan yelling "Deja Vu, DEJA VU!!!"?
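    For what it's worth, here's a rough C sketch of the checksum idea described above -- emphatically not how VT's (unpublished) software actually works, just the general shape of periodically re-verifying data that should not have changed, so a flipped bit in non-ECC RAM at least gets detected rather than silently corrupting a long run. The checksum routine, names, and sizes are all invented for illustration:

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Tiny Adler-style checksum over a buffer of doubles (illustrative only). */
    static uint64_t checksum(const double *buf, size_t n)
    {
        const uint8_t *p = (const uint8_t *)buf;
        uint64_t a = 1, b = 0;
        for (size_t i = 0; i < n * sizeof(double); i++) {
            a = (a + p[i]) % 65521;
            b = (b + a) % 65521;
        }
        return (b << 16) | a;
    }

    int main(void)
    {
        enum { N = 100000, CHECK_EVERY = 1000 };
        static double data[N];               /* input data, read-only during the run */
        for (size_t i = 0; i < N; i++)
            data[i] = (double)i;

        const uint64_t baseline = checksum(data, N);
        for (int step = 1; step <= 10000; step++) {
            /* ... real work that reads (but never writes) `data` goes here ... */
            if (step % CHECK_EVERY == 0 && checksum(data, N) != baseline)
                fprintf(stderr, "step %d: bit flip detected in input data!\n", step);
        }
        puts("run finished, input data still checks out");
        return 0;
    }

    As the parent says, this only detects problems (and only in data you don't intentionally write between checks); shrinking CHECK_EVERY trades compute time for a smaller window in which an error can go unnoticed.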
  • by Hoser McMoose ( 202552 ) on Thursday October 30, 2003 @05:11PM (#7352416)

    For anyone interested in learning a bit more about what some of the issues are when creating a supercomputer, you might want to have a look at the following:

    Red Storm PDF [lanl.gov]

    The article is talking about Cray/Sandia's new Red Storm machine, a supercomputer using over 10,000 AMD Opteron processors that is expected to be competitive with the Earth Simulator for the #1 spot on the Top500 list. It does, however, talk about a lot more than just the specifics of this cluster, describing what some of the bottlenecks in supercomputers are and how to avoid/work around them.

  • by Uosdwis ( 553687 ) on Thursday October 30, 2003 @05:42PM (#7352743) Journal
    Okay for everyone asking about optimizations, why do it?

    Look at what they built: a complete COTS supercomputer, minuscule price, functionality in six months, public data in a year. They have >9 TFlops right outta the box.

    Yes they have written their own software, but name a company that doesn't. They modded them (cooling, I think, but I couldn't find data, only pics). They bribed students with pizza and soda, and they didn't have to buy, make or gut a building. What is amazing is they showed that any simple Slashdot pundit could build one if given these resources.