Dell $38m Supercomputer [not] More Costly than VT's G5s

An anonymous reader writes "According to the Austin Business Journal, Dell's 3-teraflop, 600-server supercomputer cluster cost the University of Texas $38 million. As The Apple Turns has pointed out that this is 7 times the cost (and a quarter of the power) of Apple's cluster at Virginia Tech!" Update: 10/14 17:56 GMT by M : worm eater writes "The Register has posted a correction to the widely-reported story that a 3.7 teraflop Dell cluster cost the University of Texas $38 million. As it turns out, the computer cost $3 million, vs. $5.2 million for the 17.6 teraflop Mac G5 cluster at Virginia Tech."
  • by eyegor ( 148503 ) * on Tuesday October 14, 2003 @10:59AM (#7208859)
    Since the Apple site [appleturns.com] is going to get knocked flat, here's a copy of the page.

    Monday, 5:57 PM: Virginia Tech's G5-based supercomputer is (sort of) running-- with 17.6 teraflops of theoretical performance. Meanwhile, Dell tries to build something (sort of) similar, but it winds up with a quarter of the power and seven times the price, and Apple (sort of) announces Xgrid, a product for "parallel and distributed high performance computing"...

    Monday, 5:57 PM: Today's holiday episode is now broadcasting. Don't forget to take your shot for a free AtAT shirt (tee or turtleneck) by entering the Q4/03 Beat The Analysts contest; guess closest to Apple's final reported quarterly profit or loss, and you get the garment-- or your choice of creaky old software from the Baffling Vault of Antiquity(TM). You've only got until Wednesday at 4 PM, and in the likely event of a tie the earliest entry wins, so why wait? Enter now!

    Up, Running, & Kicking Tail (10/13/03) Fun fact: believe it or not, folks, AtAT's wild success isn't confined to these here United States. No, seriously, it's true! The show actually has semi-regular viewers holed up in such far-flung corners of the world as Iceland, the Dominican Republic, and Delaware-- and for the benefit of those fans, we thought we'd explain that, here in the U.S., today we celebrate a holiday called Columbus Day. Columbus Day, for the uninitiated, is one of our most sacred occasions: a day on which we reflect on the many cultural and technical achievements of the city of Columbus, Ohio. We celebrate Greater Columbus's world-famous quilts, its shrubberies recreating Pointillist masterpieces, and (most importantly) its commitment to the preservation of really old TV sets by wondering why the bank is closed and our mail never came. A good time is had by all.

    So if this is such a major holiday, why are we broadcasting, you ask? Well, normally we wouldn't, but faithful viewer Nathaniel Madura pointed out that Slashdot just referenced a BBC World report on that G5 supercomputer down at Virginia Tech, and we're just a little giddy about the existence of a Mac-based cluster that can chew through 17.6 trillion floating point operations per second. Yes, the thing is up and running (at least enough to run performance testing), and reportedly it pumps out 17.6 teraflops of raw perforated aluminum muscle while sucking down enough juice to power 3,000 average homes. Wow, is it getting warm in here, or is it just us? (It's just us-- the G5s are cooled by means of 1,500 gallons of chilled water pumped through every minute. Ooooo, frosty.)

    Kudos to the Virginia Tech team who pulled this off, because frankly, this is the sort of technological triumph we'd normally only expect to come out of, say, Columbus, Ohio. Now, what's interesting about that 17.6 teraflop figure is that if you scope out the last compiled list of the world's top 500 supercomputers (from last June), you'll notice that, if 17.6 teraflops is Virginia Tech's "theoretical peak performance" score, it'll probably slot in nicely at number three. (Scores are ranked by "Maximal LINPACK performance achieved," so it's just guesswork so far.) The top-ranked Intel-based cluster is currently ranked at number three, with 2,304 2.4 GHz Xeons and a theoretical peak performance of 11 teraflops. Gee, more processors, a higher clock speed per processor, and 63% of the performance. Now that's efficiency, baby!

    We'll have to wait until the next top 500 list comes out in November to see if "Big Mac" (as the VA Techies apparently call it) really takes third place, or if the real-life LINPACK scores push it down lower-- but we figure a top five placement is a safe bet. One of the world's bestest supercomputers made of Macs and running Mac OS X? Why, it's a Columbus Day miracle!

    4x The Bang, 1/7 The Buck (10/13/03) Meanwhile, we know that the G5 supercomputer is delivering more pluck per processor than any other supercomputer out there, but what about bang for the b

    • by Anonymous Coward on Tuesday October 14, 2003 @12:13PM (#7209847)
      Tina Romanella de Marquez, communications and development manager at the Texas Advanced Computing Center (TACC), says that the $38 million mentioned by MacNN is for far more than just a supercomputer.

      She said: "The $38M total you refer to was NOT for a single supercomputer. It was announced in February for a total package that included:

      "The establishment of the new Institute for Computational Engineering & Sciences (ICES) at UT, including:

      four new endowed faculty chairs in ICES at UT
      additional funding for the research endowment and the visiting scholars endowment in ICES
      the completion of construction of the ACES building (the 4th floor) for use by ICES and TACC

      "and the establishment of a terascale distributed computing infrastructure at UT, hosted by TACC, including:

      two supercomputers at TACC (the cluster you refer to, and the other IBM system)
      two massive storage systems at TACC
      three leading-edge components to increase UT's networking infrastructure
      increases in operations funding over five years for ICES and TACC".

      She adds: "There are many more things that were needed to create ICES and establish a terascale distributed computing architecture at TACC. This point was made by TACC Director, Jay Boisseau, during the Lonestar dedication ceremony. The value of the specific computer referred to was approximately $3.0 million. And, no tuition funds were used in this process. Most of the money did not even come from UT. The package included $8M in discounts and donations from about 10 leading technology vendors, and over $15M from a generous foundation." And, she continued: "The VaTech number ONLY includes the actual computer, not the cost of the building, power, cooling, people, or anything else needed to actually operate it."
  • [Nelson]Ha-ha![/Nelson]
  • But using a cheaper system that had more power would make sense.
  • See?! (Score:2, Insightful)

    I told you Macs were cheaper!

    Seriously, though: How?
    • "'Lonestar' consists of 300 Dell PowerEdge 1750 and PowerEdge 2650 servers running the Linux operating system from Raleigh, N.C.-based Red Hat Inc. [Nasdaq: RHAT]. Dell worked with Seattle.-based Cray Inc. [Nasdaq: CRAY] to design and install the cluster."

      If Dell had to contract Cray to help them build the cluster, I'm sure that aspect wasn't cheap. I'd imagine that most of the actual costs associated with getting the Dells into an efficient cluster were more mundane things than buying the servers themselv
    • Re:See?! (Score:4, Insightful)

      by sql*kitten ( 1359 ) * on Tuesday October 14, 2003 @12:10PM (#7209810)
      I told you Macs were cheaper!

      Then I hope they're waterproof too, since the $38M also included the cost of a building to put them in...
  • Costs (Score:4, Interesting)

    by BWJones ( 18351 ) on Tuesday October 14, 2003 @11:02AM (#7208886) Homepage Journal
    Of course Apple gave them a little bit of a deal on these systems, but on the whole, the bid was awarded based upon who gave them the best deal. Apple won out in the free market, making this supercomputer cluster one of the most inexpensive supercomputers in the world. Imagine it, we have ASCI Blue, ASCI Red and ASCI White guarded by guys with guns, and here we have a tech school that looks like it will enter the Top 500 list at potentially number 2. Cool.

    • Re:Costs (Score:5, Funny)

      by Eric_Cartman_South_P ( 594330 ) on Tuesday October 14, 2003 @11:11AM (#7209003)
      Imagine it, we have ASCI Blue, ASCI Red and ASCI White guarded by guys with guns... ...Cool.

      Wanna really go for cool? Guard the Apples using girls with guns!

    • And I'm sure UT got one heck of a deal from Dell.

      More things that I'm sure people will talk about: The Dells are 1U and 2U boxes designed for rack enclosures, meaning they'll be more heat- and power-efficient, not to mention they take up about 1/3 the physical space of the enormous Power Mac G5.

      If VT had waited for the Xserve with G5s in it, they would have a better cluster. But I realize the desire to be on a big computing list with the other big dogs.
      • Re:Costs (Score:5, Informative)

        by sloth jr ( 88200 ) on Tuesday October 14, 2003 @11:27AM (#7209230)
        More things that I'm sure people will talk about: The Dells are 1U and 2U boxes designed for rack enclosures, meaning they'll be more heat- and power-efficient, not to mention they take up about 1/3 the physical space of the enormous Power Mac G5.
        Rack-designed enclosures don't mean that there'll be less heat - 1655MC blade servers, for instance, effectively consume .5U, but pump out gobs and gobs of heat, and are rated at 6A nominal/12A maximal. Placing 11 kW in one 42U rack is enormously difficult to cool. A recent article in Computerworld [computerworld.com] talks about this issue.
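        For rough scale, here's a back-of-the-envelope sketch turning that ~11 kW/rack figure into a cooling load. Only the rack wattage comes from the comment above; the conversion factors are generic, so treat the output as illustrative, not a spec for either cluster.

```python
# Back-of-the-envelope cooling load for a densely packed rack.
# 1 W of IT load dissipates ~3.412 BTU/hr; 1 "ton" of cooling = 12,000 BTU/hr.
BTU_PER_WATT_HR = 3.412
BTU_PER_TON = 12_000

def cooling_tons(rack_watts):
    """Cooling capacity (in tons) needed to remove a rack's heat output."""
    return rack_watts * BTU_PER_WATT_HR / BTU_PER_TON

rack_watts = 11_000  # ~11 kW in one 42U rack, as cited above
print(f"{rack_watts / 1000:.0f} kW rack -> {rack_watts * BTU_PER_WATT_HR:,.0f} BTU/hr "
      f"-> {cooling_tons(rack_watts):.1f} tons of cooling")
```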
      • The lack of knowledge of basic physics around here simply astounds me.

        If you put a processor into a large box, it does not output any more heat than if you put it into a small box. If anything, the large box will have a more efficient means of heat exchange. Of course, this also means that the building will need the same cooling requirements too - since the same amount of heat is generated, the same amount of cooling is required.

        It might be more space efficient, yes, but space mostly means money, and obvi
    • Re:Costs (Score:5, Interesting)

      by robbieduncan ( 87240 ) on Tuesday October 14, 2003 @11:21AM (#7209140) Homepage
      Apple did not offer them a special deal on the pricing. They sold them 1100 G5s at standard educational discount. What they did offer was a bump up the queue to ensure that the cluster would be running in time to make the November list.
      • Apple essentially gave them quite a bit more than that (throwing in 4 GB of memory, for one, which is nearly $900 or so alone, per computer).

        Moreover, Apple wouldn't be able to provide these computers consistently at that price point to companies, making the whole comparison of "they should have bought Apples" quite a stupid argument in these other, separate bids (if they're not bidding, how do you choose them? you don't).

        The G5s ain't bad on price/perf/usability, but the VT cluster really yells 'marketing' a
      • Are you serious?!? I sure as hell would expect pretty steep discounts if I ordered 1000 of ANYTHING from ANYONE if an individual unit cost more than a few bucks...
  • by Daniel Dvorkin ( 106857 ) on Tuesday October 14, 2003 @11:02AM (#7208890) Homepage Journal
    UT is in Austin. Dell is in Austin.

    Can you say "sweetheart deal," boys and girls? I knew you could.
    • And I'm sure that VT paid full retail for their Apple boxes, too. Riiiight.
      • And I'm sure that VT paid full retail for their Apple boxes, too. Riiiight.

        Here's where you give some sort of justification for why you say that Apple has diverted a huge chunk of their first shipment of top-end G5s to a project where you think they're subsidizing the cost of the hardware. What? You don't have an explanation? Just look at the costs: there's plenty of room above the $3,000/machine cost for interconnect technology and extra RAM. Nice troll, but not true in this case.
      • "Sweetheart deal" doesn't mean the deal where a manufacturer gives a discount to a customer who buys in volume; that's standard business practice. It means the deal where a large purchase is made as a favor to a friend, rather than on the basis of rational cost-benefit analysis. Often there's some kind of direct kickback involved.

        This kind of thing happens all the time in small businesses, of course, and I have a hard time arguing with it; often, it's the only way a small business can survive those first
  • Please don't expect to "scale" supercomputing with any sort of linearity, either in terms of "cost" or "power" (doublequotes because methinks both concepts are rather ill-defined in this situation)...
    • Indeed. However, a lot of it comes down to your code design and your network layout. If your model is written well enough, you'll be maxing out CPU just before you max out network bandwidth. Otherwise, the whole idea behind having a massive number of CPUs goes down the drain with dropped packets.
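      As a toy illustration of that balance (both numbers below are assumptions chosen for the example, not specs for either cluster): a node stays compute-bound only if its code does more work per byte communicated than the ratio of peak FLOP/s to interconnect bandwidth.

```python
# Toy compute-vs-communication balance check (illustrative numbers only).
peak_flops_per_node = 16e9   # ASSUMED: ~16 GFLOP/s theoretical peak per node
link_bytes_per_sec = 1.25e9  # ASSUMED: ~10 Gb/s interconnect per node

# Minimum arithmetic intensity (FLOPs per byte sent) to keep the CPUs,
# rather than the network, as the bottleneck.
min_flops_per_byte = peak_flops_per_node / link_bytes_per_sec
print(f"Need at least ~{min_flops_per_byte:.0f} FLOPs of work per byte "
      "communicated before the network stops being the limiting factor.")
```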
  • Shocking! (Score:5, Funny)

    by Bob9113 ( 14996 ) on Tuesday October 14, 2003 @11:03AM (#7208906) Homepage
    So wait, let me see if I get this straight. Are they actually implying that supercomputers built starting five years ago are more expensive per unit of computing power than supercomputers built today? Why, if that were true, and if the same thing applied to workstations, then you'd be able to get a 2 gigahertz machine today for what a 500 megahertz machine cost five years ago. Ludicrous, I tell you. Simply ludicrous.
  • by gregor_b_dramkin ( 137110 ) on Tuesday October 14, 2003 @11:04AM (#7208907) Homepage
    $38e6 / 300 servers = 1.2667e+05 $/server

    Methinks the price tag includes a lot more than the hardware costs.

    The comparison with the VT supercomputer is almost certainly not apples to apples (so to speak)
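    A minimal sketch of that arithmetic, using the node counts and price tags quoted in this thread (the $38M figure was later corrected to roughly $3M for the computer itself); nothing here is audited, it just makes the per-node division explicit.

```python
# Naive per-node cost from the figures quoted in this discussion.
def cost_per_node(total_dollars, node_count):
    """Total price divided by node count; ignores interconnect, storage, staff."""
    return total_dollars / node_count

clusters = {
    "UT/Dell, $38M five-year package, 300 nodes": (38e6, 300),
    "UT/Dell, corrected $3M hardware figure":     (3e6, 300),
    "VT/Apple, $5.2M for 1,100 G5s":              (5.2e6, 1100),
}

for name, (total, nodes) in clusters.items():
    print(f"{name}: ${cost_per_node(total, nodes):,.0f} per node")
```

    The gap between the first two lines of output is the whole story: the headline number covers a five-year program, not just a rack of servers.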

    • Sure. However, if one excludes support (the article contained no info about this and I'm not sure about the Apple/VT deal), it looks like UT paid far more for the interconnect and network stuff in their cluster.

      VT uses Mellanox InfiniBand (InfiniScale 96-port 10 Gb/sec switches and InfiniHost dual-port host channel adapters).
      On the other hand, UT probably paid many millions for the Cray solution. Faster(?) and with lower latency, but with a worse price/performance ratio.

    • Absolutely!

      A direct comparison is way out there in left field. Somebody didn't bother to do the basic math. The actual article is full of grand assumptions. They may have a point that the Apple cluster is a better value than the Dell cluster, but you'd have to look at several factors before you can really say that....

      What was UT using previously to perform their calculations/simulations, and what did it cost to operate? How much does it cost to customize/code the apps that they will be running on the clu
    • I maxed out a Dell 2650 at less than $15K, and you know an order of 600 computers is going to get you AT LEAST 25% off. Let's call it $12K per box, making it a grand total of $7.2M. Where are they spending the other $31M? Floor space, power, cooling, and maintenance don't cost that much. Some taxpayer in Texas is gonna be PISSED!
    • The article makes it plain that this is just the beginning of a five-year project that will eventually spend $38 million, and which will end up with a lot more than 300 systems (200 to be added next year alone, for example), too. Comparing this to another project without knowing all the details of both is pointless.

  • by platypus ( 18156 ) on Tuesday October 14, 2003 @11:04AM (#7208913) Homepage
    but the numbers jumped out at me:

    $38E+6 / 6E+2 is over $60,000 per machine. Seems a little much for a cluster of "cheap" machines, right?
    Isn't there more to it?

    OK, off to read the article.

    • Something definitely doesn't add up - the Dell cluster price probably includes labor and other hardware, while the G5 price quote is ONLY FOR THE MACHINES.

      1,100 G5 machines at 1/7 the price of the Dell cluster = roughly $4,545 per machine.

      I call shenanigans; wish someone would look into these biased articles before posting them.

      ~Berj
      • From the article: The cost of the five-year project is about $38 million

        How is it fair to compare a five-year cost (which includes all kinds of things like upkeep, all the supporting hardware, etc.) to the base machine cost of the G5 cluster?

        ~Berj
        • In my experience, hardware is just 1-10% of TCO per unit in a regular IT environment, depending on whether it is a workstation or a server. I am not sure how much it is in clusters.
        • After reading the article on this Apple site, it's clear that neither the author nor his expected audience has a clue about clusters (what did I expect, they're Mac-heads after all ;) ).

          Additionally, as one can read at
          http://www.tacc.utexas.edu/resources/hpcsystems/
          TACC seems to have invested quite a bit into the network, interconnect and storage technology, while by the calculations of your parent poster, there shouldn't be much financial leeway for stuff like that in the Apple cluster (and no, gigabit
      • I seriously doubt that labor is going to cost them $32 million (the difference between the two price tags.)
  • by Triumph The Insult C ( 586706 ) on Tuesday October 14, 2003 @11:04AM (#7208914) Homepage Journal
    went towards mice with more than one key
  • Supporters of the University of Texas' $38 million Dell cluster 'investment' today asked for their money back so they can build their own G5 cluster and hopefully regain at least a portion of their self-respect.

    One UT supporter was quoted as saying, "We didn't get lousy clusters... we got lousy cluster planners!"
  • WHACK! BANG! Ouch! That hurts! I'll bet DULL never even kissed U. of Texas on the back before the back orifice insertion, to let them know it was coming. There's nothing quite like the feeling of getting SCREWED, especially with cheap commodity PC hardware. And to add salt, vinegar, and isopropyl alcohol to the injury (mustn't allow any infections), the DULL supercomputer has SUPER electricity usage too, and even more power-hungry COOLING requirements. Well, we all live and learn. RISC-vectorized comp
  • The Dells have twice as many mouse buttons, so THERE!

    (/me happy new mac user)

  • The average Mac is probably 7 times more expensive than the average Dell, so how is this possible? I know that G5s are powerful, but are they really that much more powerful?
    • But here's the thing. My bet is that this Dell supercomputer may have more redundant items in each node than the G5 cluster does. The PowerEdge 1750 has dual hot-swappable power supplies. If one node loses a power supply, the other supply keeps the node up and running. G5s don't have this. You lose a power supply, the node is done. Depending on what you want your supercomputer for, you probably want a little redundancy in your setup! :) I'd take the redundancy over the power! :) But these and oth
    • The average BMW costs a lot more than the average Kia, too. Dell makes some really cheap machines, so their average is low. Apple makes only moderately high to high-end machines, so their average is higher.

      A better comparison would also consider what you get for the money. There is little price difference between similarly configured Dell and Apple machines -- and Apple usually comes out cheaper by a bit.
  • Too bad the Austin Business Journal didn't realize that the cluster is only 1/4 the power at 7x the price of the Apple G5 cluster just created. Now imagine if they had spent all that money on an Apple G5 cluster; it would easily be the fastest supercomputer cluster in the world. Now if only people would give Apple credit when it is due, because it is certainly due at this point in time.

    Let's just hope Apple can keep up the good work and keep the G5 line updated and in pace with the x86 lines from Intel and AMD.
  • Apples and Oranges (Score:2, Informative)

    by schnuf ( 103708 ) *
    Well, VT are clearly getting a very good deal on their hardware. $5m for 1100 nodes works out at $4,500 a node.

    Speccing up a dual G5 at the Apple store comes out at over $5,000. They also need to pay for the power and cooling hardware to run the thing.

    Looks like they are getting a very good price from all their suppliers/contractors...

    The retail price for the processing hardware for the UT cluster is very similar; a dual PowerEdge 2650 with 4 GB of RAM is also about $5,000. If they had taken the workstati
  • by clmensch ( 92222 ) on Tuesday October 14, 2003 @11:12AM (#7209022) Homepage Journal
    Austin has its nose so far up Dell's butt that they would make a supercomputer out of their PocketPCs if they were asked to. You think there was even a QUESTION of who would build their supercomputer?

    And don't try to tell me that the Company-Formerly-Known-as-Compaq had a shot even though they're based in Houston...well not really anymore anyway.
  • Hey...let's just remember the standard response. "PCs are so much cheaper than Macs". "I can build a PC cheaper than your Mac". Am I missing any?
  • You're gettin' dope slapped by the provost and the comptroller and the Texas lege...

    Not to mention the next site visit by NSF - that should be a hoot.

    This should be on the Apple Hot News page.
  • How much REAL WORK does that G5 cluster do?

    Just because you have a cluster with mega TFlops doesn't mean it'll do more real work than something more expensive with less mythical peak performance.
  • by jimfrost ( 58153 ) * <jimf@frostbytes.com> on Tuesday October 14, 2003 @11:15AM (#7209054) Homepage
    I knew something had to be wrong with that $38M number; 600 Dell servers, good ones, should run in the ballpark of $3M, not $30M. Even with the high-performance interconnect you'd need you ought to be able to do it for under $5M.

    So I hit Dell's website and at educational pricing the servers they bought run around $4k apiece. Which means that this solution should be very price/performance competitive with the VT cluster.

    I hit the UT page and found that the $38M number came from a press release about their investment in quite a bit of stuff, including the "Institute for Computational Engineering and Sciences (ICES), a new center for interdisciplinary research and graduate study in the computational sciences." I.e. at least one new building.

  • I realize that this is /., but even the submitter should have read the article. The $38 million price tag was the five year cost of the supercomputer project, not the cost of the computers. Project costs include much more than just the cost of the computers. That money also has to pay for infrastructure, consulting and management costs, as well as the normal day-to-day costs to keep the thing running. I suspect that the actual value of the first 300 computers themselves was well under a million dollars.
  • by Anonymous Coward
    Turn That PC Into a Supercomputer By Leander Kahney
    Story location: http://www.wired.com/news/technology/0,1282,60791,00.html

    02:00 AM Oct. 14, 2003 PT

    A small chip-design firm will unveil a new processor Tuesday it says will transform ordinary desktop PCs and laptops into supercomputers.

    At the Microprocessor Forum in San Jose, California, startup ClearSpeed Technologies will detail its CS301, a new high-performance, low-power floating-point processor.

    The new chip is a parallel processor capable of perf
  • weird (Score:3, Informative)

    by duph ( 27605 ) on Tuesday October 14, 2003 @11:16AM (#7209071)
    The cost in that journal article seems much, much too high. I poked around and found this article at InfoWorld: http://www.infoworld.com/article/03/10/03/HNdellcluster_1.html [infoworld.com]

    They quote a Dell spokeswoman saying that a configuration like that costs about $3 million with installation. It also states that UT gets an educational discount, but doesn't say how much they got off the $3 million.
    If the $38 million were correct, they'd be spending on the order of $120,000 per machine... a 2650 with the highest processors and max RAM only comes out to $13,500 on Dell's site... yeesh
    • The Slashdot heading (and maybe the article - I haven't read it :-) is wildly misleading. The $38 million for the Dell machine is the total cost, including administration, support and maintenance, application development, etc., for 5 years.

      The '1/7 cost' for the Apple machine is for the hardware only.

      Someone should wake up and smell the coffee. In these days of commodity hardware, differences like a factor of 7 (overall a factor of 28 in price/performance!) are at best highly improbable.

  • $38 million works out to $63,000 for each one of the 600 Dell nodes. But the 1750 and 2650 are your run-of-the-mill $3,500-$4,000 Xeon rack mounts. Where is the other $59,000 per machine going? Someone is comparing apples and oranges (no pun intended).

    Also, "flops" is a pretty meaningless measure of performance. If you want to use benchmarks, at least use SPEC, and on those, the Macs are not particularly outstanding in terms of price/performance.

    Finally, the Macs aren't rack mounts and they require manua [vt.edu]
  • Cost of Operation (Score:3, Interesting)

    by toupsie ( 88295 ) on Tuesday October 14, 2003 @11:17AM (#7209084) Homepage
    Does anyone have a comparison of the cost of operation for each system, i.e., the cost to power the nodes, cool the nodes, manage the nodes, etc.? Is an Apple or a Dell "supercomputer" cheaper to run?

    Stupid question: Are these really supercomputers or superclusters? I always think of a computer as one unit not a collection of units.

  • ... can you imagine the behemoth they could produce with their budget if they used a similar approach?
  • but it's 3.67 TFlops - which, if you insist on rounding to the nearest TFlop, is 4. Anyway, read all about the 'Lonestar' system here: http://www.tacc.utexas.edu/resources/hpcsystems/
  • There's more to such clusters than just the cost of the CPU. Things like storage, interconnect (no, you can't use the $5 Dlink cards from BestBuy), cooling, powerplant, etc. come to mind.

    Of course, there's also this concept of "theoretical FLOPS", which is open to interpretation... we'll have to wait and see what the LINPACK (and other such benchmark) numbers report.

  • Whereas the previously quoted $5M cost of the Big Mac is hardware only. Unless given the breakout of the costs on the Austin machine, I doubt any fair comparison can be made.

  • Not that kind of purchasing power, but rather supercomputing power...

    According to this Wired article [wired.com], a small firm in CA called ClearSpeed [clearspeed.com] will soon revolutionize the PC space with supercomputing power.

    I know we will all believe it when we can find these chips in Best Buy aisle no. 4, but still, currently, from where I am sitting (I am sitting in a Microsoft BizTalk Server 2004 training session, boring as hell, being inundated by claims of innovation by a clueless trainer who programmed in Visual Bas
  • by meggito ( 516763 ) <npt23@drexel.edu> on Tuesday October 14, 2003 @11:27AM (#7209220) Homepage
    Dell is Austin's biggest employer. Universities like to give back to their town and area, endearing themselves to the locals. Also, by pumping more money into the area, it becomes a better place and more people would be willing to come there. I'd be willing to wager that most, if not all, of the work was done by the local Dell plant. This isn't a coincidence, guys; this was more than just a business deal, it was giving back to the community that supports the school.
  • ...but at least it cost 7 times as much.
  • To make it onto the Top 500 Supercomputer list, you have to post more than theoretical performance. Sorry, the story is interesting, but you need hard benchmark numbers from both machines to make a fair judgement.

    On a supercomputer, communications overhead between the nodes is a big deal. Fast processors that can't be adequately fed don't cut it.
  • I'm sure a large chunk of the mark-up in the price for this cluster was due to them involving Cray.

    It's interesting that this comes up. This month's Linux Journal had an article about a guy with no cluster experience building a large cluster using... drumroll... Dell systems. Cheap. And he did it without expensive consulting services from big names. He's now on one of those international comparison lists for top-performing clusters.
  • Notice that the Apple system is a 64-bit system, while the Intel-based system uses Xeons and is 32-bit. That is an interesting difference in a configuration this size. Cluster or not.
  • And I feel really stupid when I pay $15 more for a video game. Can you imagine being off by many millions of dollars?
  • by lazn ( 202878 ) on Tuesday October 14, 2003 @12:08PM (#7209789)
    It is my understanding that the G5 doesn't support ECC RAM, so how can you trust its results? With that many machines, the probability of a bit error in RAM gets quite high.

    So you have fast incorrect data.

    Please correct me if I am wrong.
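    To illustrate the scaling argument (not to measure anything), here is a toy calculation of how a per-node memory error rate compounds across a large cluster; the per-node rate below is an assumed placeholder, not a figure for the G5 or any particular DIMM.

```python
import math

# ASSUMED placeholder: uncorrectable bit flips per node per hour (illustrative only).
errors_per_node_per_hour = 1e-4
nodes = 1100      # size of the VT cluster
run_hours = 24    # length of one job

# Treat errors as a Poisson process: expected count, then P(at least one).
expected_errors = errors_per_node_per_hour * nodes * run_hours
p_at_least_one = 1 - math.exp(-expected_errors)

print(f"Expected errors in a {run_hours}-hour run: {expected_errors:.2f}")
print(f"P(at least one flip somewhere in the cluster): {p_at_least_one:.1%}")
```

    Even a rate that is negligible for one desktop becomes near-certain across a thousand nodes, which is the poster's point about ECC.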
  • Too much Spin (Score:3, Informative)

    by Tokerat ( 150341 ) on Tuesday October 14, 2003 @12:41PM (#7210211) Journal

    Hey, I'm a Mac zealot, I know. I love to make Apple look good, for any reason. But this story has too many inconsistencies.

    The Slashdot blurb and the article don't even match. And As The Apple Turns is being quoted as a technically informed Mac news source?

    Anyways, yes, $5 million for the G5s. Now let's add in the price of those racks they sit on. Let's add in the price of the cooling system, the network equipment, cables, power supply. Contractors. $38 million doesn't sound so far off the mark when you think about all that stuff.
  • by Infonaut ( 96956 ) <infonaut@gmail.com> on Tuesday October 14, 2003 @12:48PM (#7210315) Homepage Journal
    It's interesting to see how many people in here are getting themselves in a lather refusing to believe that it might cost less to run a supercomputer on Macs than on Dell boxes.

    If this were an OS to OS comparison between Windows and OS X, perhaps we wouldn't be getting so frothed up. But this is hardware, and dammit, PC hardware is supposed to *always* be cheaper than Mac hardware!

    To summarize what others have said:

    1) Dell gave UT a sweetheart deal

    2) Apple gave VT a sweetheart deal

    3) Nobody has dredged up any information to indicate that the $38M UT spent includes the cost of a building. As csoto pointed out: "A 'Center' at UT is a special term for a particular type of organized unit, often a research unit. It does not necessarily mean this place gets its own building. In fact, at UT, space is at such a premium that most 'Centers' don't have their own (yeah, the place is huge, but has lots of people). In fact, I'd venture to guess that NO center has its own building."

    4) Hardware is only a portion of the total cost, obviously. UT and VT have set up their supercomputing projects differently. This again is obvious.

    5) The really important point of all this is that VT managed to put together a very powerful supercomputing cluster using Macs at a cost that in no way can be considered more expensive than if they'd used PC hardware.

    You can argue that costs would have been cheaper had they built their own, or used PCs from some source cheaper than Dell. But they still would have had to deal with labor costs in assembling the PCs, or higher maintenance costs associated with keeping all of those commodity PCs running properly.

    UCLA [ucla.edu] is already using OS X to run Beowulf-style clusters. Tokyo University [asahi.com] is replacing over 1,100 Linux PCs with OS X boxes.

    Even when the total cost of installing, operating, and maintaining large numbers of Macs running OS X is cheaper than either PCs running Windows or PCs running Linux, people often seem incapable of absorbing that information.

    You can talk all you want about the Reality Distortion Field, but the truth is that Apple is always working against an incredibly strong bias that says Apple is always more expensive.

    That's simply no longer true.

  • Price per Teraflop (Score:3, Informative)

    by wembley ( 81899 ) on Tuesday October 14, 2003 @02:13PM (#7211319) Homepage
    As it turns out, the computer cost $3 million, vs. $5.2 million for the 17.6 teraflop Mac G5 cluster at Virginia Tech.

    3.7 tflops / $3M = 1.23 tflops/$1M

    17.6 tflops / $5.2M = 3.38 tflops/$1M

    So with the Apples you get 2.75x more computing power for the same $$.
    Sounds like an easy argument to get by the Dean's office...
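    The same comparison as a small script, using the corrected costs and theoretical-peak figures from the story update; real LINPACK numbers would shift the ratio somewhat, so treat this as illustrative.

```python
# Price/performance from the corrected figures in the story update.
systems = {
    "UT 'Lonestar' (Dell/Cray)": {"tflops": 3.7,  "cost_musd": 3.0},
    "VT 'Big Mac' (Apple G5)":   {"tflops": 17.6, "cost_musd": 5.2},
}

for name, s in systems.items():
    s["tflops_per_musd"] = s["tflops"] / s["cost_musd"]
    print(f"{name}: {s['tflops_per_musd']:.2f} TFlops per $1M")

ratio = (systems["VT 'Big Mac' (Apple G5)"]["tflops_per_musd"]
         / systems["UT 'Lonestar' (Dell/Cray)"]["tflops_per_musd"])
print(f"G5 cluster advantage: ~{ratio:.2f}x TFlops per dollar")
```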
