Virginia Tech Supercomputer Up To 12.25 Teraflops

gonknet writes "According to CNET News and various other news outlets, the 1150-node Hokie supercomputer rebuilt with new 2.3 GHz Xserves now runs at 12.25 Teraflops. The computer, the fastest owned by an academic institution, should still be in the top 5 when the new rankings come out in November."
  • hrm (Score:5, Funny)

    by gutterandthestars ( 782754 ) on Tuesday October 26, 2004 @06:04AM (#10629540)
    6.40tflops should be enough for anybody
    • Re:hrm (Score:5, Interesting)

      by tmj0001 ( 704407 ) on Tuesday October 26, 2004 @06:28AM (#10629612)
      Hans Moravec's book "Robot" suggests that 100 teraflops is about the level required for human intelligence, so we are up to about 12% of his target. But human intelligence still seems very far away, so either he has badly underestimated, or our collective programming skills need significant improvement.
      • Re:hrm (Score:5, Interesting)

        by TimothyTimothyTimoth ( 805771 ) on Tuesday October 26, 2004 @06:50AM (#10629673)
        I think Moravec's method of simulating human intelligence involves modelling a scanned copy of the human brain, in real time, at a neuronal level. It would be similar to modelling the global weather system, a software capability we already have. Current neuroscience would expect this model to be functionally equivalent to a human mind in terms of matching inputs and outputs. As an aside, I know that Ray Kurzweil has a much higher estimate of a 20 petaflop (20,000 teraflop) computer, based on more conservative assumptions. 20 petaflops is due around 2009/10 under Moore's law. (And I for one offer an early welcome to our expected new AI overlords ...)
        • Re:hrm (Score:4, Interesting)

          by TimothyTimothyTimoth ( 805771 ) on Tuesday October 26, 2004 @07:00AM (#10629703)
          By the way, IBM's BlueGene/L is going to produce 360 teraflops by the end of 2004, so if the report of Moravec's estimate is correct, and he is correct, that AI Overlord welcome could come pretty soon.

          (Although I don't believe brain scanning quite hits the resolution mark required yet.)

          • Re:hrm (Score:5, Funny)

            by Randy Wang ( 700248 ) on Tuesday October 26, 2004 @07:49AM (#10629888)
            I, for one, welcome our new Beowulf overlord...
          • Re:hrm (Score:3, Informative)

            by Glock27 ( 446276 )
            By the way, IBM's BlueGene/L is going to produce 360 teraflops by the end of 2004, so if the report of Moravec's estimate is correct, and he is correct, that AI Overlord welcome could come pretty soon.

            If you read the article (I know, I know) you'll find that the peak performance of this Cray system is 144 teraflops with 30,000 processors.

        • Re:hrm (Score:3, Insightful)

          by Chatsubo ( 807023 )
          What is really interesting is that when we get these human-brain-equivalent machines, the technology will not stop there.

          So the intelligence level of this thing would probably double in accordance with Moore's law, and within a year outclass its master twofold. In about another year it would be four times as intelligent as any human being. And, of course, it doesn't stop there....

          The implications this would have for society would be very interesting. Would we believe everything it told us, or claimed
        • Re:hrm (Score:4, Interesting)

          by diersing ( 679767 ) on Tuesday October 26, 2004 @07:59AM (#10629953)
          I have a question from a casual observer who comes across this Hokie machine and the top 500 list every now and then. What is it these computers do?

          Hearing it referenced in terms of AI helps, but is that the only purpose for a research facility to build one of these mammoths? Are there practical applications for the business world (other than the readily available (read: commercial) clustered data warehousing)?

          I'm not trolling, just curious.
          • Simulations (Score:5, Informative)

            by Ian_Bailey ( 469273 ) on Tuesday October 26, 2004 @08:51AM (#10630300) Homepage Journal
            The vast majority of clusters are for simulating very complex systems that require lots and lots of calculations.

            You can get a few hints by looking just at their names.

            The number one "Earth Simulator Centre" [top500.org] is fairly self-explanatory; going to their website [jamstec.go.jp] shows they create a variety of models for things such as weather, tectonic plate movement, etc.

            The number 3 LANL supercomputer [com.com] "is a key part of DOE's plan to simulate nuclear weapons tests in the absence of actual explosions. The more powerful computers are designed to model explosions in three dimensions, a far more complex task than the two-dimensional models used in weapons design years ago." I imagine that most US government simulations would be doing something similar.
          • Re:hrm (Score:2, Informative)

            by fitten ( 521191 )
            Depends on the site and their main focus. The Earth Simulator in Japan (#1 on the list), for example, is used to simulate and predict weather. Various machines at some of the national labs in the USA are used to simulate nuclear events. Some other machines in the biotech industries are used to do protein folding and things like attempting to simulate a human cell. Financial institutions use them to attempt to predict the economy, the stock market, and the like. Automobile manufacturers use them to simul
          • Re:hrm (Score:5, Informative)

            by autophile ( 640621 ) on Tuesday October 26, 2004 @09:14AM (#10630483)
            According to Wired [wired.com]...
            Now that the upgrade is complete, System X is being used for scientific research. Varadarajan said Virginia Tech researchers and several outside groups are using it for research into weather and molecular modeling. Typically, System X runs several projects simultaneously, each tying up 400 to 500 processors.

            "At the end of the day, the goal is good science," he said. "We're just building the tools. The top 500 is nice, but the goal is science."

            --Rob

        • It would be similar to modelling the global weather system, a software capability we already have.

          Where do we have this amazing capability ?

          I mean not just a rough model on a very coarse grid that ignores a lot of important influences on weather, like water temperatures in the oceans, as we have now, but a really accurate weather model.

          A recent article I read about NEC's Earth Simulator stated that even if this amazing machine was supposed to deliver, among other things, climate calculations wit
      • Re:hrm (Score:4, Insightful)

        by Quobobo ( 709437 ) on Tuesday October 26, 2004 @06:54AM (#10629687)
        I think the reason lies within the latter.

        Think about it; how is throwing more and more hardware at it going to solve the problem? What we're lacking is the software itself needed to do this, and it's obviously not going to be an easy task to write. I see no reason why an AI as intelligent as a human couldn't be implemented on a slower system, unless "thinks as fast as a human" is among the requirements.

        (disclaimer, I've never read the book, these are just my opinions)
        • In theory, the software required is easy. All you need is enough inputs, outputs (doesn't have to be speech) and enough neurones (either real or simulated) to connect it all together.

          After that, the complicated bit (training the neural network) is much the same as it is with a baby - talk to it, show it simple things, put liquidised food in one end, keep the other end as clean as possible.

          The only minor snag with current technology is the limits to how much it can learn and how long it takes to do so.
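
          A minimal sketch of that idea in Python with NumPy: a toy feedforward net with invented sizes, "trained" by repeatedly showing it examples (here XOR) and nudging the weights. This only illustrates "neurons + inputs + outputs + training"; it is nowhere near brain scale.

          # Toy illustration of "enough simulated neurones plus training":
          # a two-layer feedforward net learning XOR by gradient descent.
          # All sizes and rates here are arbitrary choices for the sketch.
          import numpy as np

          rng = np.random.default_rng(0)
          X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
          y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

          W1 = rng.normal(0, 1, (2, 8))   # input -> hidden weights
          W2 = rng.normal(0, 1, (8, 1))   # hidden -> output weights

          def sigmoid(z):
              return 1.0 / (1.0 + np.exp(-z))

          for step in range(5000):                 # "training" = repeated exposure
              h = sigmoid(X @ W1)                  # hidden activations
              out = sigmoid(h @ W2)                # the net's current answers
              err = out - y                        # how wrong they are
              d_out = err * out * (1 - out)        # backpropagate the error...
              d_h = (d_out @ W2.T) * h * (1 - h)
              W2 -= 0.5 * h.T @ d_out              # ...and nudge the weights
              W1 -= 0.5 * X.T @ d_h

          print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))  # usually close to [0, 1, 1, 0]
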
          • 2:14am EDT August 29, 1997...

            Researcher: "Go to your machine room! And no Command and Conquer until you do your homework!"

            Joshua:"Oh yeah? Would you LIKE TO PLAY A GAME?"
          • Everything works in theory, but not in practice.

          • Re:hrm (Score:5, Interesting)

            by benhocking ( 724439 ) <benjaminhocking&yahoo,com> on Tuesday October 26, 2004 @08:18AM (#10630075) Homepage Journal

            Actually, it's not quite that simple. As someone whose research is in modeling the hippocampal region CA3 (about 2.5 million neurons in humans, 250k neurons in rats), I can tell you that the connectivity of the system is a very important variable. And there is still much we don't know about the connectivity of the human brain. Furthermore, there are hundreds of different types of neurons in the human brain. Why so many different types if only 2 or 3 would do? Seems evolution took an inefficient path - unless, as is probably the case, the differences in the neuron types are crucial for the human computer to work the way it does. Granted, some differences might be due to speed or energy efficiencies which are not absolutely critical for early stages, but I suspect that many differences have to do with the software (or wetware in this case) that makes us intelligent.

            After we've solved that minor problem, I think teaching the system will be relatively trivial. I.e., if we understand the wetware enough to reconstruct it, we most likely understand how its inputs relate to our inputs, etc., and we could teach it much the same as we teach a human child. Of course, we might also figure out a better way to teach it, and in so doing we might even find a better way to teach human children. (Some of our research has recreated certain known best learning strategies; it is probably only a matter of time before simulators discover a better one!)

            • Any comments on the recent news on the artificial hippocampus (link #1 [newscientist.com] link #2 [wired.com])?
              • I doubt that it will be entirely successful, but will be happy if I'm proven wrong. Nevertheless, I'm certain that we (as a community) will learn much by studying its shortcomings. I'm really excited about the project!

                I hasten to add that when they claim

                The brain region they are trying to replace is the hippocampus, which is vital for forming memories. The hippocampus has a well-understood three-part circuit. It also has a regular repeating structure, so elements of all three parts of the hippocampal ci

                • They said they've got it to 95% accuracy. But that's for rat samples.

                  What if it only works for 95% of the people who aren't really exceptional mentally.

                  Or maybe it only works for narrow-minded people ;).
            • OK, clearly the bit I did on AI was slightly over simplified.
      • Re:hrm (Score:5, Interesting)

        by SnowZero ( 92219 ) on Tuesday October 26, 2004 @06:58AM (#10629697)
        I actually asked Hans a similar question at a talk he gave a while back, and he didn't really answer it, to my disappointment. My question was: "In nature the algorithm and the computer evolved together, so we'd expect them to be at a similar level of advancement. So even if we get a computer as fast as a human, might it not be far less smart, since our programs don't use it efficiently enough?" In other words, Moore's law isn't helping us write better software (in some ways quite the contrary).

        I'm a robotic software researcher, so this notion really affects me. IMO software will lag well behind hardware, since it doesn't scale out nearly as well. Representation is of course a huge problem I won't even try to touch... But rest assured lots of people are working on all these things. Btw, it also doesn't help that CPU designs aren't even trying to make AI-style algorithms fast, but we can't blame manufacturers for that until there is demonstrable money to be made.
        • Re:hrm (Score:3, Interesting)

          by segmond ( 34052 )
          I don't think CPUs should be designed for AI-style algorithms when said algorithms have not been proven. Assume we finally succeed in implementing the Holy Grail of AI right; then we can seek out ways to optimize and make it fast, and thus custom CPUs will come in. Right now, most of the algorithms are a joke.

          • Re:hrm (Score:4, Interesting)

            by ca1v1n ( 135902 ) <snookNO@SPAMguanotronic.com> on Tuesday October 26, 2004 @10:16AM (#10631051)
            I'm not really sure what you mean that they haven't been proven. In the sense that they don't give the best answer all the time, this much is obvious. That's why we call it artificial intelligence instead of algorithmics. That said, we know quite well that they work. Most adaptive spam filters are based on Bayesian networks. The best of these are better than humans at identifying spam. We don't typically run the best because the computational load is far too high. Bayesian networks have a delightfully simple evaluation procedure that is basically glorified matrix multiplication. Neural networks are a little more complicated, but not by a whole lot. Recall a recent development that used a neural network inside an 802.11 driver to predictively avoid collisions to improve total network throughput in dense environments. It doesn't reduce collisions to 0, as that would require clairvoyance, but it does a good job. You didn't hear about this 5 years ago because putting a neural net inside an 802.11 driver without killing performance to both network and computer is difficult, particularly without processor instructions dedicated to the task.

            It's true that designing a CPU to *be* a neural or bayesian network is infeasible, but that doesn't mean we can't add instructions to accelerate their evaluation. The evaluation and update of a neural net, traditional or biologically modeled, is a rather simple algorithmic process, though people who have worked with such simulations (see Ben Hocking's post above, he was my quite capable AI TA) will tell you that they make rather obscene optimizations to make it run reasonably fast. I'm talking about things that might sound familiar to graphics people, like removing all multiplications from a program that's supposed to be doing them more than all other operations combined. It's a particularly good candidate for SIMD instructions. Most large neural nets are sparsely connected, so even if your net is substantially larger than your cache, you can beat that with prefetching. Threshold conditional addition is an example of something that can be done very quickly in hardware, and is much more of a pain to code and optimize in software.

            If you prefer RISC to CISC, recall that even the original SPARC had special DSP instructions. Putting the sigmoid function and arctan on silicon is really not all that outrageous.
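
            Since adaptive spam filters came up: a toy "Bayesian" (naive Bayes) classifier in Python, with invented training messages and crude Laplace smoothing. Real filters train on thousands of messages, but the evaluation really is just sums of per-word log probabilities, which is why it vectorizes and accelerates so well.

            from collections import Counter
            import math

            spam = ["buy cheap pills now", "cheap pills cheap"]   # invented corpus
            ham = ["meeting notes attached", "lunch now or later"]

            def word_counts(msgs):
                c = Counter()
                for m in msgs:
                    c.update(m.split())
                return c

            sc, hc = word_counts(spam), word_counts(ham)
            vocab = set(sc) | set(hc)

            def log_prob(msg, counts, total):
                # Laplace-smoothed log probability of the message's words.
                return sum(math.log((counts[w] + 1) / (total + len(vocab)))
                           for w in msg.split())

            def is_spam(msg):
                s = log_prob(msg, sc, sum(sc.values())) + math.log(0.5)  # P(spam) = 1/2 here
                h = log_prob(msg, hc, sum(hc.values())) + math.log(0.5)
                return s > h

            print(is_spam("cheap pills"))       # True
            print(is_spam("meeting at lunch"))  # False
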
          • Look at graphics chips. They're specialized and can be used for other types of work.
        • There were AI CPUs (Score:2, Informative)

          by scattol ( 577179 )
          For a while there were CPUs specifically designed to run LISP [andromeda.com], aka AI. Symbolics was one of the better known ones.

          It ended in bankruptcy. My vague understanding was that designing dedicated LISP processors was hard and slow, and with little resources they could not keep up. Essentially, the Symbolics computers ran LISP pretty quickly given the MHz, but Sun and Intel kept moving up the MHz faster than Symbolics could manage. In the end there was no speed advantage to a dedicated LISP machine, just a


        • I'm a robotic software researcher, so this notion really affects me.

          This post deserves its own slashdot article all to itself. Not only has an AI-driven robot posted on slashdot, but apparently someone has designed the robot to research software. So it would make sense that the robot would be reading slashdot. I think the editors should set up an interview with this AI drone known as SnowZero.
      • Re:hrm (Score:2, Insightful)

        by beders ( 245558 )
        In an object-oriented system it should be a case of modelling individual neurons and their interactions; the hard part might be getting these tied into the inputs/outputs.
      • Re:hrm (Score:4, Interesting)

        by RKBA ( 622932 ) * on Tuesday October 26, 2004 @07:14AM (#10629748)
        His estimate was probably based on the common, and incorrect, belief that neurons are purely digital.
        • His estimate was probably based on the common, and incorrect, belief that neurons are purely digital.

          Worse... even if they're analog, they're probably noncomputable [edge.org].

          --Rob

          • Re:hrm (Score:3, Interesting)

            by ca1v1n ( 135902 )
            But their aggregate behavior is quite easily computable. In the human brain, 70% of neuron firings randomly fail to register in their successor. This not only makes our behavior somewhat random, but also implies that there's quite a bit of redundancy and that our brains operate on aggregate behavior of a large neural net, rather than precise behavior of a small one, otherwise we'd be completely unpredictable, rather than just mostly unpredictable. While it's true that you can't model a human brain reliab
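
            A quick Monte Carlo sketch of that aggregate-behavior point in Python (the 70% failure figure is from the post; the population sizes are arbitrary): each firing is random, but the fraction that registers becomes very stable as the population grows.

            import random

            def registered_fraction(n_neurons, p_fail=0.70):
                # Each firing independently fails to register 70% of the time.
                return sum(random.random() >= p_fail for _ in range(n_neurons)) / n_neurons

            random.seed(42)
            for n in (10, 1000, 100000):
                print(n, [round(registered_fraction(n), 3) for _ in range(5)])
            # Small populations swing wildly around 0.30; large ones barely move.
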
        • His estimate was probably based on the common, and incorrect, belief that neurons are purely digital.

          So.... are they partly digital? Entirely analog? Quantum in nature?

          Don't tease.
      • Re:hrm (Score:2, Funny)

        by dr_d_19 ( 206418 )
        ...or perhaps Hans Moravec was just plain wrong :)
      • Re:hrm (Score:3, Interesting)

        by Deorus ( 811828 )
        I think the difference between human and computer intelligence is that our software (the conscious) is able to hard-wire the hardware (the unconscious). We may not be able to consciously perform certain tasks such as floating point calculations because our software lacks low-level access, but we can hard-wire our hardware for those tasks; this is why our unconscious is so quick and accurate when trained to recognize and respond to specific patterns, regardless of their complexity.
      • Re:hrm (Score:4, Insightful)

        by segmond ( 34052 ) on Tuesday October 26, 2004 @07:34AM (#10629814)
        He is wrong. Intelligence is not about speed. I have met people who are very very smart, but they think very slowly. You ask questions, and the I-too-knows (ITKs) will blurt out an answer so damn fast, but Mr. Smarty Pants will think and think, and you would think they are clueless; but when they finally answer, you can't tear their answer apart.

        We can build a machine that has human intelligence and run it on a 2 GHz processor. The only issue is that instead of answering a question in a second, perhaps it will take 1 or 2 hours to deliver an intelligent reply. But it should be able to pass a Turing test with time thrown out the window.

        Go read what 3D researchers said about graphics in the 70's. I bet they believed 10 GHz would be good enough for real-life 3D graphics.

        What is hindering us is not speed, but our approach to AI research.
        • He is wrong. Intelligence is not about speed. I have met people who are very very smart, but they think very slowly. You ask questions, and the I-too-knows (ITKs) will blurt out an answer so damn fast, but Mr. Smarty Pants will think and think, and you would think they are clueless; but when they finally answer, you can't tear their answer apart.

          That's because it takes no time at all to answer a question, whether you're a man or a machine. To provide an accurate or informed answer, though, takes time. It

      • Re:hrm (Score:4, Funny)

        by hackstraw ( 262471 ) * on Tuesday October 26, 2004 @07:44AM (#10629864)
        Hans Moravec's book "Robot" suggests that 100 teraflops is about the level required for human intelligence.

        Yeah. I've been waiting for years for those dumbasses to make a computer that can outperform my ability to perform 100 trillion double precision floating point operations a second flawlessly.
      • Hans Moravec's book "Robot" suggests that 100 teraflops is about the level required for human intelligence, so we are up to about 12% of his target. But human intelligence still seems very far away, so either he has badly underestimated, or our collective programming skills need significant improvement.

        Well, that's probably because it requires more than just computing hardware. (Disclaimer: I haven't read the aforementioned book). Right now, programming is nowhere near enough to simulate a human intelligence,

  • Speed at top (Score:4, Interesting)

    by luvirini ( 753157 ) on Tuesday October 26, 2004 @06:08AM (#10629548)
    Reflecting on the comment: "should still be in the top 5 when the new rankings come out in November." There seems to be a serious push for multiprocessor systems. Currently the rankings seem to consist of a couple of stars, a few big ones (this computer among them), a huge third category, and then the "used to be great" computers. But from my reading of the trends, there will be more and more crowding near the top, so I expect the second category to become much larger, with much smaller differences.
    • Re:Speed at top (Score:5, Insightful)

      by TAGmclaren ( 820485 ) on Tuesday October 26, 2004 @07:01AM (#10629709)
      currently the rankings seem to consist of a couple of stars, a few big ones (this computer among them), a huge third category, and then the "used to be great" computers


      That's an interesting way of looking at it, but I think so far most of the commentators have failed to pick up what makes this system so incredible. Srinidhi Varadarajan, the designer of the system:
      Varadarajan said competing systems cost $20 million and up, compared to System X's approximately $5.8 million price tag ($5.2 million for the initial machines, and $600,000 for the Xserve upgrade).

      "We will keep the price-performance crown," he said. "We don't know anyone who's within a factor of two even of our system. We'll probably keep the price-performance lead until someone else shows up with another Mac-based system."


      Think about that for a second. The system isn't just in the top 5 (or at least top 10), but it's the cheapest by a factor of at least 2. What's even funnier from a tech standpoint is that the creator doesn't expect it to be beaten until another Apple system is built - which puts a very interesting spin on the old "Apple's more expensive" refrain.

      Anyway as to in/out of the top 5, Varadarajan reckons there's another 10-20% in optimisations left in the tank...

      Data taken from the recent Wired Article [wired.com] on the subject.
      • Re:Speed at top (Score:3, Informative)

        by Anonymous Coward

        The system isn't just in the top 5 (or at least top 10), but it's the cheapest by a factor of at least 2.

        The $5.8M number is how much the computers (and maybe racks) cost, not the whole system. AFAICT, that number appears to leave out US$2-3M worth of InfiniBand hardware that somebody (probably Apple) must've "donated" so it wouldn't show up as part of the purchase price. IB gear costs ~US$2k/node in bulk, on top of the cost of the node itself. It's highly unlikely someone else could build this exact con

        • don't forget... (Score:2, Interesting)

          by Geek_3.3 ( 768699 )
          (those that go to despair.com will recognize this) that "You can do anything you set your mind to when you have vision, determination, and an endless supply of expendable labor." Point being, I'm sure having essentially free labor (sans pizza, of course... ;-) might have cut the price down just a little bit too...

          Not to poo-poo their efforts, but the whole system was essentially a "loss leader" for future supercomputer projects using the G5 and Xserve....
  • Density (Score:5, Interesting)

    by GerbilSocks ( 713781 ) on Tuesday October 26, 2004 @06:08AM (#10629549)
    VT could theoretically pack 4x the number of nodes into the same space that the original System X occupied. Could we be looking at at least a 50 TFlop (minus 10% overhead) supercomputer with 8,800 cluster nodes?

    If that were feasible, you could be looking at toppling Earth Simulator at a fraction of the cost.

    • Re:Density (Score:3, Insightful)

      by Anonymous Coward
      At Linpack. Of course, the Earth Simulator wasn't built (just) to run Linpack.

      Also, the Earth Simulator has been around for how many years? 2? 3? Quite frankly, it would be downright embarrassing if it couldn't be toppled at a fraction of its cost by now.
      • Re:Density (Score:3, Funny)

        by koi88 ( 640490 )

        Of course, the Earth Simulator wasn't built (just) to run Linpack.

        I think most super computers weren't built just to run benchmark tests.
        Well, at least I hope.
    • Re:Density (Score:2, Funny)

      by Ingolfke ( 515826 )
      And if we could harness the heat from this machine we could probably power most of the North Eastern United States.
    • Re:Density (Score:5, Informative)

      by UnknowingFool ( 672806 ) on Tuesday October 26, 2004 @06:36AM (#10629639)
      Not necessarily. Processing power doesn't really scale linearly like that. Adding 4 times as many processors doesn't mean the speed will increase 4x.

      First, as they try to increase the speed of the system, the bottlenecks start becoming more of a factor. Interconnects are one big obstacle. While the new System X may use the latest and greatest interconnects between the nodes, they still run at a fraction of the speed of the processors.

      Also the computing problems that they are trying to solve may not scale either with more processors. For example, clusters like this can be used to predict and simulate weather. To do so, the target area (Europe for example) is divided into small parts called cells. Each node takes a cell and handles the computations of that cell.

      In this case, adding more processors does not necessarily mean that each cell is processed faster. Getting 4 processors to do one task may hurt performance, as they may interfere with each other. More likely the cell is further subdivided into 4 smaller cells, and the detail of the information is increased, not the speed. So adding 4x the processors only increases the data 4x; it doesn't mean the problem is solved any faster.
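
      A rough sketch of that decomposition in Python/NumPy (the grid size and node counts are invented): quadrupling the nodes shrinks each node's cell, buying resolution rather than making any single cell finish sooner.

      import numpy as np

      def split_grid(grid, nodes_x, nodes_y):
          # Carve a 2D domain into nodes_x * nodes_y cells, one per node.
          rows = np.array_split(grid, nodes_y, axis=0)
          return [cell for row in rows for cell in np.array_split(row, nodes_x, axis=1)]

      europe = np.zeros((512, 512))        # pretend temperature field
      cells_4 = split_grid(europe, 2, 2)   # 4 nodes -> 256x256 cells
      cells_16 = split_grid(europe, 4, 4)  # 4x the nodes -> 128x128 cells
      print(cells_4[0].shape, cells_16[0].shape)  # (256, 256) (128, 128)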

      • Re:Density (Score:4, Informative)

        by luvirini ( 753157 ) on Tuesday October 26, 2004 @06:55AM (#10629691)
        Indeed. Breaking up computational tasks into smaller pieces that can be processed by these architectures is one of the biggest challenges in high-end computing.

        Many processes are indeed easy to divide into parts. Take, for example, ray tracing: you can have one processor handle each ray if you want, getting huge benefits compared to single-processor designs. But many tasks are such that the normal way of calculating them requires you to know the previous result. Trying to break up these tasks is one of the focuses of research around supercomputing.
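
        A toy Python contrast of the two cases (trace() is a stand-in for real shading work, not a renderer): the rays farm out to a process pool with no coordination, while the iterative loop cannot, because each step needs the previous result.

        from multiprocessing import Pool

        def trace(ray_id):
            return ray_id * 0.5  # pretend per-ray shading computation

        def sequential(n):
            x = 1.0
            for _ in range(n):   # each step depends on the last -> can't be farmed out
                x = 0.5 * (x + 2.0 / x)
            return x

        if __name__ == "__main__":
            with Pool(4) as pool:                       # one worker per "processor"
                pixels = pool.map(trace, range(10000))  # rays computed independently
            print(len(pixels), sequential(20))          # 10000, ~sqrt(2)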

  • "Dick factor" aside (Score:4, Interesting)

    by ceeam ( 39911 ) on Tuesday October 26, 2004 @06:09AM (#10629555)
    Would be interesting to know exactly what stuff these machines do. Maybe they would even be able to share some code so that people can fiddle around with optimizing it (should be fun).
    • Currently they aren't doing anything with them except getting them up and running. Status is listed as...
      Assembly - Completed!
      System Stabilization - In Progress
      Benchmarking - In Progress

      When up and going, the system will probably do some high-end scientific calculations.
    • by joib ( 70841 ) on Tuesday October 26, 2004 @06:36AM (#10629638)

      Would be interesting to know exactly what stuff these machines do. Maybe they would even be able to share some code so that people can fiddle around with optimizing it


      I don't know about the VT cluster specifically, but here's a couple of typical supercomputer applications that happen to be open source:

      ABINIT [abinit.org], a DFT code.

      CP2K [berlios.de], another DFT code, focused more on Car-Parinello MD.

      Gromacs [gromacs.org], a molecular dynamics program.


      (should be fun)


      Well, if optimizing 200,000-line Fortran programs parallelized using MPI sounds like fun to you, jump right in! ;-)

      Note: the above applies to abinit and cp2k only; I don't know anything about gromacs except that it's written in C, not Fortran (though the inner loops are in Fortran for speed).

      Oh, and then there's MM5 [ucar.edu], a weather prediction code which I think is also open source. I don't know anything about it, though.
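
      For a taste of the MPI style mentioned above, here is a minimal sketch in Python via mpi4py (assumed installed; the production codes above are Fortran): every rank sums its own slice of the work and a reduction combines the pieces. Run it with something like: mpiexec -n 4 python partial_sum.py

      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n = 1_000_000
      local = sum(range(rank, n, size))  # this rank's slice of 0..n-1

      total = comm.reduce(local, op=MPI.SUM, root=0)  # combine partial sums
      if rank == 0:
          print(total == n * (n - 1) // 2)  # True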

      • Yeah, talk about Car-Parinello. Great stuff, but I know past versions have sucked up >1GB per node for even small jobs. But I'd love to get my hands on some CP simulations with 400-500 CPUs at once.

        Other open-source comp. chemistry packages include MPQC (Massively Parallel Quantum Chemistry): http://www.mpqc.org/ [mpqc.org]

        -Geoff
    • I have a friend who is finishing up his master's and starting his PhD in computer engineering at VT. I asked him about it and he simply said: "they haven't found anything to actually _do_ with it"
  • by ericdano ( 113424 ) on Tuesday October 26, 2004 @06:10AM (#10629557) Homepage
    The school said it spent about $600,000 to rebuild the system and add the additional nodes. The original cost of System X was $5.2 million.

    Compare it to this new Cray system [slashdot.org]. Bang for the buck would make the Apple system better.

    • Crays... (Score:5, Insightful)

      by CaptainPinko ( 753849 ) on Tuesday October 26, 2004 @06:16AM (#10629576)
      are not designed for the same type of work as clusters. If a problem is not efficiently parallelizable and requires shared memory, then a Cray is the only feasible option. A Cray is not a cluster. It's like comparing mph for a sports car and a truck: the car is faster, but they are meant for different types of loads.
      • Re:Crays... (Score:5, Interesting)

        by Coryoth ( 254751 ) on Tuesday October 26, 2004 @06:45AM (#10629659) Homepage Journal
        are not designed for the same type of work as clusters. If a problem is not efficiently parallelizable and requires shared memory, then a Cray is the only feasible option. A Cray is not a cluster. It's like comparing mph for a sports car and a truck: the car is faster, but they are meant for different types of loads.

        To be fair to the original poster, the Cray system he was referencing is a cluster system. Then again, it's a cluster system with very impressive interconnects to which System X just isn't comparable (i.e. the Cray system will scale far, far better), not to mention the Cray software (UNICOS, CRMS, SFW), and the fact that the Cray system is an "out of the box" solution. So you are right, there is no comparison.

        Jedidiah.
    • by Coryoth ( 254751 ) on Tuesday October 26, 2004 @06:39AM (#10629647) Homepage Journal
      Compare it to this new Cray system. Bang for the buck would make the Apple system better.

      Yup, except the Cray comes with far superior interconnect technology, a better range of hardware and software reliability features built in, software designed (by people who do nothing but supercomputers) specifically for monitoring, maintaining, and administering massively parallel systems, and most importantly it all works "out of the box". You buy a cabinet, you plug it in, it goes.

      Why do these Apple fans, who justifiably claim that comparing a homebuilt PC to a "take it out of the box and plug it in" Apple system is silly, want to compare a build it yourself supercomputer to one that's just plug and go?

      And yes, comparing MacOS X to UNICOS for supercomputers is like comparing Linux to OS X for desktops (in fact that's very flattering to OS X as a cluster OS).

      Jedidiah.
    • Bang for the buck would make the Apple system better.

      Sure, but what would you rather say: "I just bought an Apple computer" or "I just bought a Cray computer"?

  • by ehmdjii ( 622451 ) on Tuesday October 26, 2004 @06:20AM (#10629590) Homepage Journal
    this is the official homepage of the listing:

    http://www.top500.org/
  • Obligatory: (Score:2, Funny)

    by Dorsai65 ( 804760 )
    but will it run Longhorn?
  • Old stuff... (Score:3, Insightful)

    by gustgr ( 695173 ) <gustgr&gmail,com> on Tuesday October 26, 2004 @06:28AM (#10629611)
    Before you guys ask: I RTFA. I was wondering, what do they do with the old processors?
  • and yet... (Score:4, Funny)

    by BobWeiner ( 83404 ) on Tuesday October 26, 2004 @06:35AM (#10629630) Homepage Journal
    ...it still doesn't come with a floppy disk drive.

    /sarcasm

  • by Alcimedes ( 398213 ) on Tuesday October 26, 2004 @06:56AM (#10629693)
    I have it on good insider knowledge that this entire cluster is going to be put to the best possible use.

    Not disease solving, not genetic mapping, not calculating weather patterns.

    No, what they're going to do is remaster the Original Star Wars series, right from the laser disc versions!!!!

    Imagine, a digitally remastered bar scene where Han shoots first!!@$!@#!one!@

    /kidding
    • No, what they're going to do is remaster the Original Star Wars series, right from the laser disc versions!!!!

      So, the question of how much it would cost to get an unadulterated version of Star Wars is finally answered: it would cost $5.8 million.

      Alright Slashdot, everyone chip in ten bucks!

  • by Animaether ( 411575 ) on Tuesday October 26, 2004 @07:46AM (#10629869) Journal
    I'm curious as to the answer to the question (What is a supercomputer?).

    The reason is this: more and more of these 'supercomputer' entries appear to be many machines hooked up together, possibly doing a distributed calculation.

    However, would projects such as SETI, GRID, and UD qualify with their many thousands of computers all hooked up and performing a distributed calculation?

    If not, then what about the WETA/Pixar/ILM/Digital Domain/Blur/You-name-it renderfarms? Any one machine on those renderfarms could be put to use for only a single purpose: to render a movie sequence. Any one machine could be working on a single frame of that sequence. Does that count?

    I seem to think more and more that the answer is 'no', from my perspective. They mostly appear to me to be rather simple computers (very often not even the top of the line in their own class), with the only thing going for them being that there are many of them.

    The definition of supercomputer (thanks Google, and by linkage dictionary.reference.com) is:
    A mainframe computer that is among the largest, fastest, or most powerful of those available at a given time.


    And for mainframe:
    A large powerful computer, often serving many connected terminals and usually used by large complex organizations.

    The central processing unit of a computer exclusive of peripheral and remote devices.


    Doesn't the above imply that a supercomputer should really be just a single computer, and not a network or cluster of many computers?
    (The mention of 'terminals' does not mean they're nodes. Terminals are, after all, chiefly CPU-less devices intended for data entry and display only. They are not part of the mainframe's computing capabilities.)

    If the above holds true, then what is *really* the world's top 3 of supercomputers? I.e., which aren't 'simply' a cluster of nodes?

    Any mistakes in the above write-up/thought process? Please do point them out :)
    • I don't think there exists any non-ambiguous way to define what a supercomputer is.

      Anyway,

      I think we can disqualify @HOME-style projects, since the individual nodes are not under the control of the manager. Similarly, you can't submit some small batch job to an @HOME system and expect to have results within a short time. Uh, that wasn't a very good description, but I hope you understand what I mean, i.e. that to qualify as a supercomputer all the nodes should be dedicated to the supercomputing stuff, and be
    • Doesn't the above imply that a supercomputer should really be just a single computer, and not a network or cluster of many computers?

      But if all of the networked/clustered computers are working on the same task, with information flowing between nodes dependent on other nodes' processing, doesn't that make them all effectively one large computer?

      A renderfarm is similar in many ways to a supercomputer, but I wouldn't think of it as one. Renderfarm nodes generally work on a specific task t
    • The reason is this: more and more of these 'supercomputer' entries appear to be many machines hooked up together, possibly doing a distributed calculation.

      However, would projects such as SETI, GRID, and UD qualify with their many thousands of computers all hooked up and performing a distributed calculation?

      If not, then what about the WETA/Pixar/ILM/Digital Domain/Blur/You-name-it renderfarms? Any one machine on those renderfarms could be put to use for only a single purpose: to render a movie sequence.
  • by daveschroeder ( 516195 ) * on Tuesday October 26, 2004 @08:17AM (#10630061)
    Prof. Jack Dongarra of UTK is the keeper of the official list in the interim between the twice yearly Top 500 lists:

    http://www.netlib.org/benchmark/performance.pdf [netlib.org] (see page 54)

    There have been some new entries, including IBM's BlueGene/L at 36 TFlops, finally displacing Japan's Earth Simulator, and a couple of other new entries in the top 5.

    Here's just the top 16 as of 10/25/04:

    http://das.doit.wisc.edu/misc/top500.jpg [wisc.edu]

    No matter what anyone says, Virginia Tech pulled an absolute coup when they appeared on the list at the end of 2003: no one will likely EVER be able to be #3 on the Top 500 list for a mere US$5.2M...even if the original cluster didn't perform much, or any, "real" work, the publicity and recognition that came of it was absolutely more than worth it.

    Also interesting is that there is also a non-Apple PowerPC 970 entry in the top 10, using IBM's JS20 blades...
    • They built the original and, as you say, it didn't perform any real work. So what's the point? It's like rich guys who buy Ferraris and never drive them.
      • by daveschroeder ( 516195 ) * on Tuesday October 26, 2004 @08:40AM (#10630222)
        Rich guys that buy Ferraris and never drive them don't get untold amounts of recognition, publicity, free advertising, news articles, and the capability to catapult themselves to the forefront of the supercomputing community overnight for a paltry sum of money, thus attracting millions of dollars of additional funding and grants to build clusters that WILL be doing real work, such as the one we're talking about now, and the several additional clusters they plan to build in the future, not to mention the benefit of proving that a new architecture, interconnect, and OS will perform well as a supercomputer, allowing more choice, competition, and innovation to enter the scene, which ultimately results in more and better choices for everyone.

        Does that answer your question?
        • When they do real work, all of my questions and concerns will be taken care of. Until then, it's a bit frustrating. A tool is only useful when it is used for its intended purpose. Maybe they built the first supercomputer with the idea that it would never be used, but that's just a little sad to me. It's as if Michelangelo had created a smaller version of David, but destroyed it before anyone could see it because it was just a model for the larger work.
          • It was my impression that the original G5 cluster was up and performing real work for several months before it was torn down for the rebuild.
            It's probably safe to assume that the work in setting up the first cluster sped up the deployment of the second cluster, so it wasn't just an exercise in vanity, even if it didn't perform a lot of production calculations.
            • Yes, it was up for a while, but mostly for testing and tuning.

              The one critical problem with the initial cluster was that the Power Mac G5 didn't have ECC memory, meaning that any long calculation would really have to be run twice - or at least until the result was the same - to essentially ensure a soft error did not go unnoticed (and no, VT's special "error detecting" software didn't account for this).

              The Xserve G5, however, does have ECC memory, making the current cluster just as capable as anything els
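
              A sketch of that run-twice discipline in Python (the corruption probability is invented): repeat the computation and only accept agreeing results, on the theory that independent soft errors almost never corrupt both runs identically.

              import random

              def flaky_square(x):
                  result = x * x
                  if random.random() < 0.001:  # rare bit-flip-style soft error
                      result += 1
                  return result

              def checked_square(x, tries=5):
                  for _ in range(tries):
                      a, b = flaky_square(x), flaky_square(x)  # compute twice
                      if a == b:      # agreement -> almost certainly no soft error
                          return a
                  raise RuntimeError("results never agreed")

              random.seed(0)
              print(checked_square(12345))  # 152399025
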
  • by elemur ( 7613 ) on Tuesday October 26, 2004 @08:29AM (#10630148)
    If you add in VirtualPC... presumably the clustered version.. you should start to get to the level of compute power that was recommended by Microsoft for Longhorn... though it still wouldn't be the high end. Expect some sluggishness..
  • We'll have the puppet master, but it'll be contained to only a couple of computers in the whole world (until you can get 100 TFlops on your desktop)...
  • Rankings (Score:3, Funny)

    by thopkins ( 70408 ) on Tuesday October 26, 2004 @09:51AM (#10630830)
    should still be in the top 5 when the new rankings come out in November.

    Wow, ranked higher than the Virginia Tech football team this year.

  • If I were you guys, I wouldn't be calling their supercomputer a "Hokie supercomputer." Some of them thar Virginians might get a wee rankled thinkin' you said "Hokey supercomputer," and 12+ teraflops ain't too hokey. Who says? The end of my buckshot Blue Ridge rifle, that's who!

    IronChefMorimoto

    P.S. - Take my word on this as an ex-North Carolinian-- I called an Appalachian State University server farm rather "dairy" and nearly got my ass shot off. ;-)
  • From the CNET story headline:

    "The fastest Mac supercomputer has gotten faster, thanks to an Xserve makeover." (http://news.com.com/Virginia+Tech+beefs+up+Mac+s u percomputer/2100-1016_3-5426091.html?tag=nefd.top [com.com])

    Was that neurotic TLC-to-ABC crossover Ty Pennington (http://abc.go.com/primetime/xtremehome/bios/ty_pennington.html [go.com]) onsite to help with the installation upgrades?

    Sorry -- this post was in honor of my wife, who tortures me with that damned show every Sunday night.

    IronChefMorimoto
  • Can somebody not at VT rent time on this, or is it purely in-house?

    Are there any supercomputer rental outfits out there?

    I've heard IBM will truck in a box for you, but that's not really 'net savvy. There was a story about Weta leasing during downtime, but that's a side-line.
