Macintosh Clustering 618
HiredMan writes: "Wired is running an article comparing the set-up and admin of Linux Beowulf clusters versus Mac-based clusters. The slant of the article is that the Macs are easier to set up and maintain, and are more flexible. They note that the Linux "how to" manual is 230 pages while the corresponding Apple document is a 1-page PDF file. Dauger Research, of former AppleSeed fame, is mentioned as well, of course. MacSlash is also covering the article. Let the on-topic (for once) Beowulf comments fly..."
Cost? (Score:2, Interesting)
I think there are pros and cons of both clusters.
Hardware == Cheap. Humans == Expensive. (Score:3, Interesting)
Re:Cost? (Score:4, Redundant)
Dumb businesses look at the quick, short-term costs, relegating longer-term costs to secondary status. Most semi-smart people know that longer-term (and usually recurring) costs tend to dominate over the long term. Even though there are many studies showing Macs having higher ROI, WinTel gets the vote. Just look at arguments here on Slashdot: "I can get (whatever) much more for the price Apple charges!" But it is proven that Macs need less maintenance and less configuration work.
With Linux, it's less clear because you can run it on even cheaper hardware, and the OS and most apps are also free. But it takes more human time to get it working right, for the most part, depending on what you want to do. Standard install, no prob. Something special, now you start running into human costs, which are way higher than equipment costs. Put it this way, it would be cheaper for a company to buy a brand new computer than to hire me for 1 day to fix it (minus data loss of course; but they should've backed up anyway!)
Meh.
Re:Cost? (Score:3, Insightful)
Indeed! Well, I'm salaried, but it's still about 10 days of my time = new computer.
Nothing speaks more about the falsehood of the market choosing wisely than the tech sector, as it relates to the perception of technology costs versus people costs. Who cares if it's 75% as fast if I need to spend less time thinking, caring, stressing over it.
It's kind of funny
Re:Cost? (Score:3, Interesting)
Luti
Re:Cost? (Score:3, Insightful)
Okay, that was a bit of flamebait. But still, Apple has a good thing going now. They piss some people off, but they are being pretty darned successful compared to the other major box manufacturers. I honestly don't think a new Dell product could get the cover of Time.
Success... (Score:3, Insightful)
Very hacker-unfriendly, and more monopolist than Microsoft. In PC land, there are almost always at least three suppliers for every major component. (e.g. CPUs: Intel, AMD, VIA, Transmeta; Motherboard chipsets: Intel, AMD, VIA; etc...)
You may have a point about the jealousy, though. Although my hardware is neither beige nor ugly -- and each of my components was selected at my choosing -- I have to admit that it would be kind of neat to mess around with OS X for a while. Now, if only Apple would let down its sometimes-whimsical sometimes-Microsoft-esque-monopolist schizophrenia for long enough to realize that it could really change the world and make a killing at the same time by entering the PC OS market... But, alas; Star Trek was crushed long ago in favour of the misguided hardware company vision.
Heck... If they leave the price/performance ratio wins and the majority market share to PC land, the Dells of the world will gladly reward Apple with all of the "cover of Time" success it wants. =)
The Manuals (Score:4, Funny)
Manual length and Macs vs. PC (Score:2, Insightful)
In fact, as far as I'm concerned, I wouldn't go with a solution claiming to make computer clusters "easy" with a one page manual.
Besides, if you are going to have a cluster, you want cheap, off-the-shelf machines such as PCs, with plenty of spare parts, that can be customised to suit your needs: why pay for a good 3D graphics card in every PC if you are going to do number crunching!
Re:Manual length and Macs vs. PC (Score:5, Insightful)
The fact that the manual is shorter (VASTLY shorter in this case) does in fact imply that accomplishing the task is easier.
Here's the skinny: Human factors. A one-page PDF is easier to scan and reference than a 200-page text file without references or pointers. If references and pointers are added along with a TOC, then scanning for specific instructions becomes much easier.
Comparing a 200-page document written by programmers to a one-page document made possible by a more graceful GUI and architecture, and written by professional tech writers, is ludicrous. Fewer instructions to accomplish the same task = easier. Plain and simple.
Re:Manual length and Macs vs. PC (Score:5, Insightful)
In fact, my first inclination is to try to use the Beowulf stuff rather than Pooch simply because such a detailed work exists and is available for Beowulf clusters, but I don't know if any such information exists for Pooch.
Re:Manual length and Macs vs. PC (Score:3)
My first inclination (if I was just getting started in cluster computing) would be to try Pooch and see if it meets my needs. If it doesn't, I've wasted an afternoon. If I try the Beowulf option first, I may do more work than necessary to get the results I want. If I try Pooch, and it works, but I need more horsepower, that's the time to dive into the Beowulf option.
Remember, "Linux is free (as in beer) if your time has no value".
Re:Manual length and Macs vs. PC (Score:5, Insightful)
Re:Manual length and Macs vs. PC (Score:4, Informative)
The situation was that Guy Kawasaki (an Apple "evangelist" at the time) challenged some PC folks to a "bake off," to determine which system made some tasks easier.
When the day came, Kawasaki sent out a 10-year-old to go head-to-head with the PC geek.
The full details of the story are at http://www.halcyon.com/kegill/mac/win95/faceoff.h
Maybe we should have a new challenge where a Linux geek and a 10-year-old compete to see who can set up a compute cluster the fastest.
Re:Manual length and Macs vs. PC (Score:2)
Re:Manual length and Macs vs. PC (Score:3, Redundant)
But I really do have to agree here. Short documentation doesn't necessarily mean a simpler product -- it could just mean bad documentation. In the case of Apple, unfortunately, that's very likely to be true; I've always found that their products come with unbearably flimsy documentation addressed to the most newbie user. Of course, this works most of the time, since their market is largely made up of newbie users, and since many of their products are, in fact, much easier to use than their Windows/Linux counterparts. But when you're doing something like clustering, well, you know, I'd rather have a big manual, thanks.
Re:Manual length and Macs vs. PC (Score:5, Funny)
Mmmmmmm, Big Macs....
Re:Manual length and Macs vs. PC (Score:5, Informative)
Agreed, however if you'd ever actually tried to use the product you'd realise that this is not the case. Let me show you through exactly how simple it is in just 10 simple steps:
If you've ever set up a Mac Beowulf cluster you'll very quickly realise that there is no comparison in ease of use, and anyone who argues otherwise is clearly uninformed.
Like always, don't bash what you haven't tried...
Re:Manual length and Macs vs. PC (Score:3, Informative)
"The fact that a manual is shorter doesn't mean that it is a better or easier to install program."
While this is true, it's not even to the point. They didn't compare manuals. They took a book written on building a Linux cluster, and compared it to what is basically a step-by-step outline for plugging together a G4 cluster. There are similar outlines out there for Linux clusters, too:
The SCL Cluster Cookbook by the folks at Ameslab is a bit longer than 1 page, but still shorter than 230. (http://www.scl.ameslab.gov/Projects/ClusterCookb)
How to Build a Beowulf Cluster -- this is 10 pages long, but goes into such detail as processor, network, RAM, and disk speeds separately for both master and slave nodes. (http://www.mcsr.olemiss.edu/bookshelf/articles/h)
But the point is, this article was written by pro-Mac people, so obviously they're going to take a pro-Mac stance. I mean, if these G4 clusters get to be useful, someone is going to write a 230 page book on how to build one of them. Right now, all the documentation that may be out there could be contained in this one page outline. The books come later, if the technology becomes accepted.
Re:Manual length and Macs vs. PC (Score:2)
Re:Manual length and Macs vs. PC (Score:5, Informative)
I would agree that comparing manual length is not a reliable way to judge the relative complexity of two programs. The one-page doc is really a "quick start guide," not a complete manual. But I still suspect that the writer is correct that AppleSeed clusters are easier to set up and maintain than a Beowulf cluster. Reading over the directions myself, it looked pretty brain-dead simple - most of that one page didn't even have much to do with the actual installation of the program, but with such complicated tasks as connecting your Mac to an Ethernet hub: "For each Mac, plug one end of a cable to the Ethernet jack on the Mac and the other end to a port on the (ethernet) switch." and noting a few system requirements (CarbonLib 1.2 or OS X 10.1). The installation instructions consist of "Double-click the Pooch Installer and select a drive for installation." Instructions on how to use it consist of dragging and dropping the program you want to run in parallel onto the Pooch app, then "click Select Nodes..., select the computers you want to run it on, and, in the Job Window, click on Launch Job."
Besides, if you are going to have a cluster, you want cheap, off-the-shelf machines such as PCs, with plenty of spare parts, that can be customised to suit your needs: why pay for a good 3D graphics card in every PC if you are going to do number crunching!
This is only the case if the individual PCs are dedicated nodes and not being used for anything else. Most AppleSeed clusters are made up of computers that are primarily being used for something else: school Mac computer lab by day, clustered "supercomputer" by night. The cluster that did 233 gigaflops [daugerresearch.com] (76 dual G4's, mostly 533's with a few 450's) was simply all of the Macs at UMC working as a cluster over Christmas break. This is where the easy setup and maintenance, and the ability to cobble together computers with different processors and even different OS's (some nodes may be running MacOS 9 and some nodes may be running OS X), is an advantage. The AppleSeed clusters that are made up of dedicated machines are probably discarded computers they already had kicking around, so cost is not an issue there either.
Re:Manual length and Macs vs. PC (Score:3, Insightful)
This is patently absurd.
Apple's documentation was thin because it could be thin. If PC makers ship thin documentation in an attempt to be "like Apple" and fail to provide adequate information (because, given the complexity of PCs and Windows, they can't actually get away with a thin manual), that is solely the PC manufacturers' fault.
There are plenty of things to hate Apple for; you don't have to go looking for stupid and pointless ones.
Re:Manual length and Macs vs. PC (Score:3, Informative)
Nice flamebait, Reality Master, but I think cost savings had *way* more to do with PC manufacturers ditching the paper manuals than Apple ever did.
As far as that commercial goes, it was true for the average user (the commercial's target) -- the Mac didn't need a ton of manuals to do the equivalent tasks that you needed those manuals for on PCs.
It doesn't mean there shouldn't be better documentation nowadays, of course (apart from keeping David Pogue et al. in business). But let's try to keep our pre-conditioned biases as tenuously connected to reality as possible.
Re:Manual length and Macs vs. PC (Score:2, Troll)
Only Apple users think having the company hide as much as possible is a GOOD thing.
Re: (Score:3, Informative)
False economy (Score:3, Interesting)
Not if your primary concern is getting the most FLOPS/$. Given that a brand-new $1000 computer will be something like 10 times as fast as your old ones, at the same power consumption, it doesn't take very long before your new computer pays for itself with the money you save in electricity not running 9 additional machines.
Consider:
150 Watts (low for a PC, probably average for a Mac) x $0.10/KWH x 24 Hr/day x 30 day/mo. x 9 retired machines = about $97 per month. Your $1000 new machine will pay for itself in less than a year, from electrical savings alone.
Of course, this assumes dedicated compute servers running all the time. If you run the cluster software as a backgound task on desktop machines with many users, it's a different story.
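The payback arithmetic above can be checked with a quick sketch. The wattage, electricity rate, and $1000 price are the comment's assumed figures; only the nine retired machines count toward savings, since the new box still draws power:

```python
# Payback calculation for replacing 10 old compute nodes with 1 new one,
# using the comment's assumed figures (150 W/machine, $0.10/kWh, $1000).
WATTS_PER_MACHINE = 150
RATE_PER_KWH = 0.10
HOURS_PER_MONTH = 24 * 30
NEW_MACHINE_COST = 1000
MACHINES_RETIRED = 9  # 10 old nodes replaced by 1 new (still-running) one

# Monthly electricity cost of one always-on machine, in dollars.
kwh_per_month = WATTS_PER_MACHINE * HOURS_PER_MONTH / 1000
monthly_cost_per_machine = kwh_per_month * RATE_PER_KWH

monthly_savings = MACHINES_RETIRED * monthly_cost_per_machine
payback_months = NEW_MACHINE_COST / monthly_savings

print(f"Savings: ${monthly_savings:.2f}/month, payback in {payback_months:.1f} months")
```

That works out to roughly $97/month in savings and a payback period of about ten months, comfortably inside the "less than a year" claim.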
Re:Manual length and Macs vs. PC (Score:3, Informative)
Pooch won't run on those, however, because it requires MacOS 9 or later. Those versions of MacOS won't run on 68K Macs. A Beowulf might be doable under one of the 68K Linux distros (only one that comes to mind is Debian)...but I've found Linux to be almost unbearably slow on my Quadra 610. (Linux probably has been nowhere near as optimized as MacOS, which has (or had) large amounts of hand-coded 68K assembly in it.)
Oh my God (Score:5, Funny)
this is news? (Score:2)
Whoo. That's not tough.
If Linux is to make further inroads (and I by all means wish Apple luck in the same) against Microsoft in the server arena, contributors must work towards this goal. It's the interface, stupid! I don't care how many geeks' grannies can send e-mail from the command prompt; the MCSE-in-a-box crowd aren't going to go for it if it isn't simple to set up. Reading a 200-page howto isn't going to cut it, especially with the level of technical writing skill out there....
Recent MacSlash Thread (Score:5, Informative)
The old NextStep API won't hurt either (Score:4, Informative)
I remember Richard Crandall and the Mathematica guy (Wolfram) using Zilla (an old NeXT distributed computing program) to find the world's largest known prime in the mid-nineties...
Anyone know if Zilla is back on OS X?
Also the Gigabit ethernet on motherboad and the large 2MB cache on the PowerPC chips will go a long way on making these machines a good cluster.
It's been a while since I've done distributed computing (hey, I am out of academia), but OS X will hopefully make the whole shebang easier...
Re:The old NextStep API won't hurt either (Score:4, Informative)
And here's the documentation for using distributed objects in Cocoa [apple.com].
Easier vs. cheaper... (Score:3, Insightful)
-- Initial build costs are much lower (dual Athlon 2000+ right now without graphics hardware is way cheaper than a dual G4 1GHz).
-- Maintenance costs are much, much lower. Anything goes wrong with a PC node, just swap out that part with another commodity part. Mac repair or parts replacement costs will eat you, especially if you start to have many, many nodes.
Plus you can modify bits of Linux if you need to optimize the behavior of your cluster for the sort of computing you do, which you can't do with Mac OS.
My $0.02.
Re:Easier vs. cheaper... (Score:5, Informative)
True.
-- Maintenance costs are much, much lower. Anything goes wrong with a PC node, just swap out that part with another commodity part. Mac repair or parts replacement costs will eat you, especially if you start to have many, many nodes.
Wrong. Commodity parts such as memory and hard drives are exactly the same on the Mac. I have bought memory and hard drives at Sam's club, and they work just fine in my Mac.
Plus you can modify bits of Linux if you need to optimize the behavior of your cluster for the sort of computing you do, which you can't do with Mac OS.
Wrong again. At the level of the OS where you might need to have some custom tweaks (the kernel) you can customize OS X to your hearts content. See Darwin [apple.com].
Now this article may have been talking about OS 9 clusters, but there is nothing preventing anyone from using OS X.
Re:Easier vs. cheaper... (Score:2)
Here's the thing. If you buy a Mac (any Mac), to run any of the MacOSes (classic or X), you expect the thing to work, and work flawlessly. You do not have to worry about driver conflicts, you do not have to worry whether the kernel version works correctly with your hardware, and you really should not expect any component failures, given the price you are paying.
Plus, every Mac comes with a 1-year warranty. The whole thing is guaranteed by Apple to Just Work for a whole year. If one of the nodes doesn't work right, you send it back to Apple and they either fix it or send you a new one.
Besides, if you are doing clustering computing, by the time the Macs start failing, you'll be putting together a new cluster anyway. By the time that starts happening, you can just sell the old cluster on the used market, machine by machine, and probably build a cluster 5x as powerful at a similar price to what you paid for the last one.
Re:Easier vs. cheaper... (Score:2, Informative)
Run a stable kernel; there are no "driver" problems and there are no kernel problems. You don't need to run 2.5.3 for clustering. You don't even need to run 2.4.
And if you want, you can buy PCs with a warranty. Having said that, I built my own computer 4 years ago from different parts from different stores, and the only thing that's failed in that time is my mouse.
Re:Easier vs. cheaper... (Score:5, Insightful)
The market always wins. The social costs (ease of maintenance, accessibility, at the (granted) cost of performance) are almost always ignored when people vote with individual wallets.
Natch:
> Anything goes wrong with a PC node
That's because stuff goes wrong far more often in a PC environment. I say this with 10 years of computing experience on both platforms. YMMV, and I'm sure I'll collect anywhere from 2 to 200 replies either quoting amazing PC/Linux uptimes or terrible Mac-related experiences, but I've worked, at length and in technical situations, with MacOS, Windows, Linux, FreeBSD, HPUX, AIX, and Solaris.
> Plus you can modify bits of Linux
On OS X, the kernel is open source, so you are free to munge around with it, although I haven't gotten a chance to look deep into it, so I'm not sure of the extent of the validity of this.
On OS 9, removing kernel modules from the OS is a simple point-and-click, although I think there is obviously more code in the base system than on a bare-bones Linux system. Again, trade-offs are unavoidable.
It is only because Apple sells their OS as "easy to use" that people assume this is equivalent to "non-customizable." Any dedicated Mac techie knows that while MacOS isn't as granular as Linux in its customizability, the performance lost putting your CPU against superfluous tasks is paid back in the other advantages of the platform.
Note that I'm not arguing that MacOS is better to cluster than Linux
What I love the most is how people expect computers to be cars. I.e., if it's more expensive, it had better be faster. Man, I'll take a slower and more enjoyable and pain-free computing experience any day of the week, which is why my dream setup would be OS X by default, then Linux or some BSD variant (I'm a programmer on FreeBSD), and then Windows. This holds true even in computationally intensive tasks. If I can't enjoy the experience of doing it, I don't want to do it, even if it can be done faster or cheaper. My happiness and level of stress is more important than speed.
Linux on PPC? (Score:3, Insightful)
It's interesting: all the comments I've read so far, including yours, seem to deal with this as a dichotomy between Linux/Intel and OS10/PPC. Don't forget you can run Linux on PPC. For a high-performance dedicated cluster, that would definitely be an option I would look at.
Of course, there are situations where the Mac software has advantages that will really shine. Like if your "Cluster" is really just the lab machines at the college, acting as a cluster when not being used for DTP and Video editing or whatever. In that case the ease of setting this up with Mac OS10 would be a real plus.
Re:Easier vs. cheaper... (Score:3, Insightful)
Thanks for the anecdotal evidence. No, wait. You've never used a Mac cluster. Nevermind.
There are many, many studies of overall maintenance costs of various platforms (Win, Mac, UNIX, etc). I have never seen a single one that does not conclude x86 to cost more than Apple HW. If you know of one, I would be happy to read it.
Re:Easier vs. cheaper... (Score:2)
Nevermind that the Athlon has a superior CPU bus and a fast DDR memory bus, while the Mac has a shared CPU bus and a rather pathetic 800MB/s PC133 memory bus.
Physical Space issue (Score:3, Insightful)
A 45U rack will hold 45 1U dual-CPU systems. Even more if you use the server-blade type systems (280 of the Compaq blades in a 42U rack).
The only way to rackmount a G4 that I can find is at Marathon Computer. A set of replacements for the "handles" for $225 or a whole new case which is 4U but is $550. Given a 45U square-hole 19" rack, you could squeeze in 11 dual CPU G4s.
I don't care what your performance fantasies are about the G4 systems, they're not more than 4x faster than dual x86 systems.
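The density math behind that jab can be sketched directly from the figures quoted above (45U rack, 1U dual-CPU x86 nodes, 4U rackmounted dual-G4s):

```python
# Rough rack-density comparison using the numbers quoted in the comment.
RACK_UNITS = 45

x86_nodes = RACK_UNITS // 1   # 45 1U dual-CPU systems
g4_nodes = RACK_UNITS // 4    # 11 4U dual-CPU G4s (44U used)

x86_cpus = x86_nodes * 2
g4_cpus = g4_nodes * 2

print(f"x86: {x86_cpus} CPUs vs. G4: {g4_cpus} CPUs per rack")
# Per-CPU speedup the G4s would need just to match the rack on throughput:
print(f"Break-even G4 advantage: {x86_cpus / g4_cpus:.1f}x per CPU")
```

That break-even works out to roughly 4x per CPU, which is where the poster's "not more than 4x faster" line comes from.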
Swapping isn't the solution (Score:4, Interesting)
I just priced out some Compaq Workstations yesterday and compared them to Apple Powermacs (Apple's workstations) for doing some OpenGL game development.
Apple Powermac with dual monitors and the upgrades we'd want... $5k. Compaq Workstations... $5k.
In the price-conscious area, Apple's iMacs/iBooks offer a good solution at a reasonable price. You can't compare Apple's workstation line with your "look ma, I built it myself" machine.
Apple does QC. You don't. You and your screwdriver do not equal scientific requirements for reliable and predictable. If a node fries, you likely need to start over again. You can't just try to fix the damage.
Linux is great, OS X is great. They are very different UNIXes in different markets.
Alex
Imagine (Score:4, Funny)
Strange, it doesn't seem to have the same comedic value this way...
Apple's biggest problem ... (Score:3, Insightful)
Before someone accuses me of saying they never break, always work flawlessly, and the like: They do need support. It's just that the ideal career environment is when there is more work than workers. An underwhelmed support staffer soon finds the company wants him to help unload pallets in his spare time.
When all the IT staffers know one platform, what do you think they're going to recommend come upgrade time?
Sounds like an old commercial... (Score:2)
Sounds like an old Apple commercial called "Manuals" (sorry, I spent ten minutes Googling and still did not come up with link to the ad) that showed an IBM PC with a stack of huge binders thundering down from the sky into a heap next to it... then panned over to the 128K Mac, as its single, thin manual fluttered down like a feather in comparison.
~Philly
Mac OS X (Score:2, Interesting)
I mean, this shit flew under Mac OS 9 and 400MHz G3s. Now we have Mac OS 10.1 and *dual* GHz machines with Gigabit ethernet. I can't imagine the power.
maya, photoshop, etc. on a cluster? (Score:5, Interesting)
Nothing against Linux (I use it myself for a router), but a three-day setup for Beowulf clustering isn't a great deterrent if your calculations will be going for a month or two.
The type of clustering we're talking about here is something that could potentially appeal to the average SOHO or school, where they have five to 500 general-use Macs that have processor cycles to spare.
My question is this:
What would it involve to make Mac OS X and every program that runs natively on it to be able to take advantage of clustering right out of the box? If they can natively use multiprocessing, how much of a leap is it to patch the OS to natively support clustering?
Not only would this be great for techies, but it seems that this would be a great incentive to volume sales from Apple, where they now generally only get one or two Macs per site and the rest are Wintel workstations.
Re:maya, photoshop, etc. on a cluster? (Score:2)
I learned 3D on 3D Studio (DOS) R4. Without the hardware lock installed it would run as a render-slave. Another guy in class was also one of the student admins of the campus network. The machines in the computer labs would be forced to wipe their HDs every day and reinstall a fresh copy of the OS. He inserted a copy of 3DS R4 into the "clean directory". The next night all of the machines on campus had 3DS ready to go.
The renderfarm for 3DS R4 was driven by a batch script. The script had the IP addys of all the machines you wanted to use. He wrote a script that slaved the entire IP range of the campus network.
One morning, at about 2am, in the graphics lab, we fired it up. Worked like a charm. We actually made use of it every night for about a week before a paid admin twigged to it and made us shut it down.
Re:maya, photoshop, etc. on a cluster? (Score:5, Insightful)
Actually, native Cocoa apps have the capability to be built using distributed objects quite easily. In fact, the mechanisms used for multithreaded communication (NSConnection, NSPort, etc) are the same classes you use to communicate with other processes - on ANY machine.
The mechanism they use relies heavily on the dynamic nature of Objective-C objects, so I'm guessing it's NOT based on some standard (SOAP, CORBA, RPC, .NET). That would make it hard to integrate into any cross-platform clusters, but we were talking about Photoshop, weren't we?
So, it boils down to simply this: If you write a Mac OS X app, write it threaded and use Cocoa. If you do that, you'd be amazed what sort of functionality you get for 'free' - including being able to distribute your app across clusters!
Down with Carbon!
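For readers outside Cocoa-land, the idea being described (the caller hands work to workers through one uniform interface, whether they're local threads or remote processes) can be sketched in Python. This is a cross-language analogy using the standard multiprocessing module, not the Cocoa NSConnection/NSPort API itself:

```python
# Analogy only -- in Cocoa, the same proxy machinery (NSConnection/NSPort)
# that coordinates threads can transparently reach processes on other
# machines. Here, Python's multiprocessing plays the role of the workers.
from multiprocessing import Pool

def render_tile(tile_id: int) -> tuple[int, int]:
    """Stand-in for one chunk of parallel work (e.g. one image tile)."""
    return tile_id, tile_id * tile_id  # pretend computation

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # The calling code stays identical whether the workers are local
        # processes or (via a manager/proxy layer) on another machine.
        results = dict(pool.map(render_tile, range(8)))
    print(results[7])  # -> 49
```

The point of the comment stands: when the distribution plumbing lives in the framework, the application code barely changes between one box and many.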
The much vaunted 1 page pdf... (Score:2)
About the same... (Score:5, Insightful)
Cost of 10 good Highend Macs, (about $30,000)...
Both are in the trivial range compared to the costs of time, energy, etc.
There is a more important question, which machine gives you the most bang for your buck?
We know that Photoshop runs better on the G4, what about your operation?
If the Mac gets a 2:1 performance advantage, then the costs are equal. If the Mac out-performs it regardless, you get an advantage.
For the moment, let's assume that you are getting real machines that are tested, not parts off of a sketchy vendor from pricewatch.com. If you are really trying to build a parallel computer, you want real systems, not junk that may or may not work.
This also rules out eMachines, or home computers. You are basically in the Compaq Workstation, Dell Workstation, HP Workstation, or IBM Workstation area. You aren't setting up a bunch of Presarios.
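The 2:1 break-even claim above can be made concrete with a toy cost-per-throughput check. Note the PC-side price is an assumption: the comment only quotes the Mac figure ($30,000 for 10 machines), and its 2:1 break-even implies roughly $15,000 for the PC cluster:

```python
# Toy price/performance comparison. mac_cost is from the comment;
# pc_cost is ASSUMED (implied by the comment's 2:1 break-even claim).
pc_cost, mac_cost = 15_000, 30_000

def better_value(mac_speedup: float) -> str:
    """Which cluster wins on cost per unit of throughput?"""
    pc_cost_per_perf = pc_cost / 1.0           # PC throughput = baseline
    mac_cost_per_perf = mac_cost / mac_speedup
    if mac_cost_per_perf < pc_cost_per_perf:
        return "mac"
    if mac_cost_per_perf > pc_cost_per_perf:
        return "pc"
    return "tie"

print(better_value(1.5))  # Mac only 1.5x faster: "pc"
print(better_value(2.0))  # exactly the 2:1 break-even: "tie"
print(better_value(3.0))  # Mac 3x faster on your workload: "mac"
```

So the whole argument reduces to the one question the comment asks: how fast does your actual operation run on each box?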
Re:About the same... (Score:4, Insightful)
Cost of 10 good Highend Macs, (about $30,000)...
Another thing to consider: If you use a cluster of Macs for a year, you could resell them and recoup most of your hardware costs. Beige x86 boxes sink in resale value much faster than the shiny Apple boxes do.
Re:About the same... (Score:5, Informative)
If it can be optimized for AltiVec, almost nothing will be faster than a G4.
Just take a look at these RC5 stats [xlr8yourmac.com] (mid-way down the page). G4s smoke everything, because the RC5 client is optimized for AltiVec, thus it can compute four keys in a single clock cycle. By comparison, Athlons do one key per clock cycle, and Pentium 4s do one key every four clock cycles.
So if you've got an operation that can benefit from the G4's SIMD capabilities, Macs are your best bet.
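Taking the per-clock figures claimed above at face value, the implied raw key rates are easy to work out (the clock speeds here are illustrative round numbers, not measured benchmarks):

```python
# Implied RC5 key rates from the comment's per-clock claims:
# 4 keys/clock on a G4 (AltiVec), 1 on an Athlon, 1/4 on a Pentium 4.
chips = {
    "G4 1.0 GHz":      (1.0e9, 4.0),
    "Athlon 1.53 GHz": (1.53e9, 1.0),
    "P4 2.0 GHz":      (2.0e9, 0.25),
}

for name, (clock_hz, keys_per_clock) in chips.items():
    rate = clock_hz * keys_per_clock
    print(f"{name}: {rate / 1e9:.2f} billion keys/s")
```

On those assumptions, the 1 GHz G4 cracks keys over 2.5x faster than the Athlon and 8x faster than the P4, despite the lower clock, which is exactly the SIMD point the comment is making.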
That's why I mentioned photoshop (Score:3, Insightful)
If you application does better on the Intel, you are likely better off considering a Linux cluster. However, if it isn't much better, you might be better off with the Mac cluster by adding a few more machines to compensate... depends on the costs of time.
If you are running an application that, LIKE Photoshop, does better on the G4, you will see the price performance favor the Mac line. That's my point.
If this market was a decent size, I bet Apple could get some really competitive cluster systems. It would be nice to see an Apple dual or quad G4-1 GHz, with a CD-ROM, ATI Rage 128, and Gigabit Ethernet for the scientific community.
They could make the machine without PCI slots and fit in a 1U case for OS X processing goodness.
However, the reality is that the extras (better video card, Superdrive, etc.) don't add much to the Apple's price. Still, the right form factor could make them tremendous cluster machines.
Alex
Why am I taking the bait... (Score:5, Insightful)
So, I took the bait... I went to Compaq's site and spec'ed out an equivalent workstation. Note, I'm not souping up the video card or CD-ROM like the Apple workstations. No need to waste money.
Compaq Evo Workstation W6000, Intel Xeon 2.00 GHz/512K processor, dual processor... Upgrading to 512MB RAM. $3521.00. Note that this machine only has 10/100 networking. The Apple has Gigabit. This should matter in a cluster.
Dell Workstation 530. Intel Xeon 2.0 GHz x2, 512MB RAM, and an upgraded sound card (Dell won't sell a dual-proc workstation without an $80 soundcard upgrade... weird). Dell did let me downgrade the video card and monitor... Price: $3878.00. Unlike Compaq, I could buy the Dell workstation with Linux (supported) instead of NT and needing to swap OSes.
Next I went to Big Blue. They push Linux, they should sell me good Linux workstations. When I bought my last round of Penguin Computing machines (to run OpenBSD and Linux) I looked at IBM first...
IBM's only dual processor workstation, the IBM Intellistation M Pro 6850 Tower. With a second 2.0 GHz Xeon processor, $5218.
Real computers cost money. Flaky machines that hardware-lock from time to time do not. You can't compare the Apple workstations to the bottom-of-the-barrel systems.
In fact, at $1300 for the lowend iMac (700 MHz G4), admittedly with a silly flatscreen for this project, or $2300 for the midrange (933MHz) G4, Apple hits some good price points for this.
Look, the new G4s (in the 933MHz and 1GHz-dual models) are sporting a 2MB L3 cache! That's damned impressive. A 2MB L3 cache should make cache misses SO infrequent that the slower memory bus speed is irrelevant.
Look, if you need lots of power, you used to need to spend millions. You're not going to cut corners on your machines. You're looking at $3500 for an Intel dual-Xeon based solution or $3000 for the dual-G4 based Apple solution. Sure, you get an unneeded Superdrive, but who cares? When the project is over, I bet you everyone in the lab is happy to take one of the Superdrives home...
Geeze people, get a grip.
Apple's G4 workstations are not the same quality as the computer you have in your room in your parent's house. These are real machines with:
Gigabit Ethernet (very significant for a cluster): unlike the PC's 32-bit, 33 MHz bus, real machines like the Apple, Compaq, or Dell workstations have 64-bit or 66 MHz (sometimes both) PCI buses, so you can actually USE the Gigabit Ethernet.
The Apple's L3 cache is 2MB of DDR SDRAM at up to 500MHz; this is much faster than the 266 MHz DDR in PCs and comparable to the PC800 RDRAM in the Dell/IBM workstations. Sure, the system RAM is slower, but a 2MB L3 cache makes this less relevant.
The Superdrive, Firewire, and video cards are all unnecessary here, but they are actually really nice features if these machines will be reassigned as desktop machines when the project is over. You could buy new PowerMacs when the G5s ship within 6 months and reassign these as desktop machines. The real workstations are the same. Your $45000 cluster of crap machines won't take you very far. They are trash when replaced, and if the machine hasn't been QC'd? Well, time to explain that your project needs to start over.
Come on people... Quake != scientific computing
These Xeons... (Score:4, Informative)
I can't compare the Apples to the P4s... P4s don't go dual processor, so the PPC G4 wins here. I can't get a dual-proc P4.
Athlon? None of the vendors I checked have Athlon workstations, so they weren't in consideration.
However, after realizing the lack of Athlons, I remembered that Penguin Computing has a line of Athlon based workstations.
I went to their website, and priced out an Athlon MP system, the Tempest 210MP Workstation.
With two Athlon MP 1900+ processors (not really competitive with the new 1 GHz G4s, but close enough for our comparison, and matching your assertion that they are in the same league), 512MB of PC2100 RAM, and an upgrade to the Gigabit Ethernet card (they have one; might as well try to be fair), my workstation price is $2707.
Congratulations, we have a winner. An Athlon MP 1900+ (running at 1.53 GHz, if I recall) with specs similar to the Apple workstation comes in $300 cheaper. The Apple has some advantages: the better video card and Superdrive are nice features when the machine is recycled as a desktop machine, but for now they are superfluous.
What is the point of my work?
You're all full of shit. Apple's computers are extremely price competitive. They are cheaper than Xeons from the real vendors with similar specs (Xeons had faster RAM, equal L2 cache, no L3 cache, and no gigabit ethernet).
Apple puts out a really competitively priced Unix workstation to Linux workstations from major vendors.
Apple puts out really competitively priced consumer machines (iMac/iBook) compared to Wintel machines from major vendors.
You can choose to use an Apple solution or not, but stop spreading the bullshit about Apple being more expensive.
* pb imagines (Score:2)
All you have to do is write an application for Pooch (that, for example, does your linear algebra homework, or perhaps ping-floods slashdot.org) and run it on all the cable/DSL Mac machines that now run Pooch because of slashdot.org, and enjoy the amazing technology!
Now, if what I have outlined isn't possible, please let me know; this is all from the article and the incredibly meager documentation I have read. But as usual, it looks like the security ramifications for this are enormous, perhaps worse than other common and incredibly boneheaded ideas, such as auto-updating software, and executing code in e-mails from random people...
ahh... (Score:2)
My imagination originally came up with a similar scenario, and then it all FIT! That's why the POLAR ICE CAPS are MELTING! It isn't global warming; it's a BEOWULF CLUSTER! I figure they have TUNNELS connecting the supercomputing centers to the ESCAPE ROUTES for the ARK.
Work on the ark continues...
* pb types into google (Score:2)
Why the Mac won't be a good clustering choice (Score:2, Insightful)
Macs are luxury computers. They are generally more expensive than their custom PC counterparts, and Apple limits the BTO options that you can use to reduce the price of their G4 towers.
If you wanted to cluster 10 G4 towers, you'd be paying for 10 superdrives, 10 3d accelerated video cards, 10 snazzy cases etc etc. Most people building a cluster will want each system to only have the components they need: processor, memory, network IO, backplane bandwidth etc. You won't want to pay for components you won't use (like 9 extra superdrives).
So unless Apple decides to offer special deals for those who want clustering, I think the economics of the situation will work against Macs and in favour of x86 PCs running Linux, where economies of scale conspire to lower component costs to the minimum.
Hardly a comparison (Score:3, Insightful)
And they don't note that the manual (if it's the Beowulf book everyone cites) is mostly about how to PROGRAM it (e.g., it includes an intro to MPI).
since when (Score:2, Insightful)
Does anybody remember Zila? (Score:2)
I guess the solution you're discussing is actually inherited from that one.
No wonder he doesn't get consulting jobs... (Score:2, Informative)
HAH!! (Score:2)
It's too bad I haven't got mod points; I'd give you +1 Funny. Thanks for bringing a smile to my day.
Why the Mac could be the perfect clustering device (Score:2)
* Gigabit ethernet networking
* Numbercrunching processors
Ditching the screens and stacking large numbers in racks might be a problem, though; how about power requirements?
Speaking for myself, I only need one screen and one computer with a disk drive, but I'd like to see better ways of using multiple computers (as long as any one program can only crash one of them).
How to set up a Mac cluster (Score:5, Funny)
Step One: Plug them in.
Step Two: Turn them on.
Step Three... there's no Step Three! There's no Step Three...
To be a fair comparison... (Score:3, Insightful)
MOSIX clusters are a one-liner to set up, for example. I challenge Apple to beat that!
I'm not sure about Compaq's One-Stop Linux Clustering. I've never got it to compile. But, assuming it can be made to work, I bet it'd be pretty decent, too.
Last, but by no means least, clustering in the Real World tends to be through PVM or MPI, which are platform-independent. Hardly anyone uses OS-specific clustering, because hardly anyone but high-energy physicists ever develop large clusters in the first place!
Something tells me this guy has never set one up.. (Score:3, Insightful)
How is a Mac "easier to set up" in a Beowulf cluster than a group of identical PCs?
I can see where the author might make a point to say that the Mac is nice to use for a cluster because Mac hardware doesn't really change much from box to box, but the same could be said for a group of identically built PCs. In fact, most real-world (read: not your bedroom) Beowulf cluster nodes are NOT loosely conglomerated machines with wildly different capabilities from node to node. Most clusters are planned out well in advance, where each node is precisely equal in terms of its hardware and horsepower.
"It's easy to set up because all of your nodes are the same with a Mac!!" ceases to be a valid "advantage" when the same can be said of a group of SGI O2 boxes, a group of Sun E10K boxes, or a group of lowly 386 PC boxes.
Besides, "it's see-thru orange!!!" shouldn't top your list of reasons to purchase Macs for your cluster. You buy a pile of 1U rackmounts because you normally don't have a whole room to dedicate to a cluster. (Duh.)
Cheers,
Well how about that? (Score:2)
Why does AppleSeed use Ethernet?? (Score:2)
OK, no parallel port on Macs... but I wonder how FireWire ports perform?
Yeah, Firewire would be better (Score:2)
So why not network a cluster of G4s together with FireWire? Seems like it would perform much better than Ethernet.
Price/Performance (Score:4, Informative)
USC Macintosh Cluster Running the AltiVec Fractal Benchmark achieves over 1/5 TeraFlop on 152 G4's and demonstrates excellent scalability.
KLAT2's complete results are: Rmax = 64.459 GFLOPS with 64 700 MHz Athlons, each with 128MB of PC100 CAS2 SDRAM.
So a 1 TFLOP Apple machine would cost about $440,000 in hardware for 152 1000 MHz G4s, versus 270 1400 MHz T-birds at about $160,000.
The difference, $280,000, could certainly hire someone literate enough to read the long Linux manual.
Correction... (Score:5, Informative)
At the "Macs in Science and Engineering" user conference at Macworld, they gave the general specs of this cluster, and all of the machines were dual-processor, but of different hardware generations. Although the fastest machines were dual 800 MHz on a 133 MHz bus, the majority were slower dual 450 and 500 MHz machines with 100 MHz buses.
Given that all were dual, and ignoring depreciation on the older hardware, the cost would be at most $220,000. Even if you were using dual 1 GHz G4s, it would still be only $220,000. My notes are on my laptop, but I believe that the actual cost of the USC cluster was less than $200,000.
Also, I assume that you think that the 270 uni-processor T-birds will scale performance linearly as well. I doubt it would only cost ~$600 per node, as you would have to use Myrinet or some other fast fabric, and with three and a half times as many nodes, the latencies, hardware, and administration costs would be crippling. I make the same cost argument if you use dual Athlons, as the boards are quite rare, and the node count is still almost double the Mac node count.
Your price/performance assertions don't stand up!
-- Len
Re:Price/Performance (Score:4, Informative)
First the Apple critique:
A G4 is capable of between 7 and 8 gigaflops, so your number of 152 G4s is reasonable. However, your price of $440,000 divided by the 152 G4s indicates a per-unit price of $2894.
This is bullshit.
At $2894 you are $100 away from getting a dual G4. You can get a single-proc 800 MHz G4 at $1400, not counting the quantity discount.
Using the dual G4s you would need 67 of them, for a total price of $201,000. With the single 800 MHz G4s you would need 156 of them, for a total of $218,000.
Now, the Athlon critique:
Let me see here. 64.5 GFlops with 64 machines, that's 1 GFlop per machine at 700 MHz. At 1400 MHz, like you said, that's 2 GFlops per machine. So you need 500 of them. Using your figure of $600 per machine, this would be $300,000. If you went with dual Thunderbirds you could get this down to 250 machines at closer to $1000 a piece, taking it down to $250,000. Not counting the quantity discount.
So we have $250k versus $201k using my rough mathematics. This is a $49k price difference in favor of the Apples, not counting the fact that you need nearly four times as many Athlon boxes.
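For what it's worth, the back-of-envelope math traded in this thread can be sketched in a few lines. All the per-node prices and GFLOPS figures below are the posters' rough circa-2002 assumptions, not vendor quotes or measured results:

```python
# Reproduce the thread's rough cluster math: nodes needed to reach
# ~1 TFLOP, and total hardware cost. Inputs are the posters' assumed
# figures, not measured numbers.
TARGET_GFLOPS = 1000

def nodes_and_cost(gflops_per_node, dollars_per_node):
    """Nodes needed (rounded up) to hit TARGET_GFLOPS, and total cost."""
    nodes = -(-TARGET_GFLOPS // gflops_per_node)  # ceiling division
    return nodes, nodes * dollars_per_node

# Dual G4: ~7.5 GFLOPS per CPU claimed, two CPUs, ~$3000 per box
print(nodes_and_cost(15, 3000))  # (67, 201000)

# Athlon 1400: KLAT2 did ~1 GFLOPS per 700 MHz node, naively doubled
# with clock speed to 2 GFLOPS, at the asserted ~$600 per box
print(nodes_and_cost(2, 600))    # (500, 300000)
```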
Other miscellaneous critiques:
Doubling the clock speed of the Athlon does NOT double the throughput in gigaflops. That was a nice try, though.
Anyway, have fun,
Justin Dubs
Re:Price/Performance (Score:3, Insightful)
67 56k modems (not optional)
67 Superdrives (DVD-RAM, not optional)
67 GeForce4 video cards (not optional)
67 sets of hyper-inflated Apple RAM which you could otherwise get from any other vendor at half the price. (512 Meg, not optional on that model).
If Apple would work a deal where I could get the same boxes without these add-ons for say, $1500 a piece, THEN we could make a deal on a cluster.
Not to mention, you'd probably want to hack the OS in some way so that you could kill the CPU-hogging Aqua.
I'm just trying to point out that Apple's desktop machine is not necessarily optimal for this kind of application.
Some skepticism (Score:2, Informative)
It would be a really good idea to make clustering easier, but there is a trade-off between ease of use and performance. Making cluster creation easy ("a few G4 Macs, some Ethernet cables, a hub and the Pooch software") by talking only about the easy-to-use software and not about optimized network topology (correct me if I'm wrong, but the Beowulf handbook probably covers a lot of that) will definitely keep performance quite low.
BTW, on the Wired site it says:
while almost the first sentence in the one-page PDF says:
One page version of Linux Clustering (Score:2, Insightful)
Requirements: Linux Network with rsh enabled, preferably with firewall and IP Masquerade.
1.) Download jobmanager and bWatch rpm's
2.) Do an rpm -ivh *.rpm
3.) Add list of nodes to
4.) List all nodes in
5.) In a terminal issue: jr -q [process command]
Voila! You're distributed computing!
! == goatse.cx [perl.org]
Another way of looking at it. (Score:2)
I mean who cares how many pages a reference manual is? I would rather have a complete manual than an incomplete one.
Meanwhile... (Score:5, Funny)
They note that the Linux "how to" manual is 230 pages while the corresponding Apple document is a 1 page PDF file.
Meanwhile, documenters have been developing a "What to do with a linux beowulf cluster" list. That document has grown to 230 pages. The corresponding mac list has come up with one idea (And it fits on a 1 page PDF file): "Create a system that allows us to use Photoshop to edit super-high resolution pictures of Natalie Portman eating hot grits."
(j/k!, and, btw, I'm using a Mac right now. :-)
Everybody's missing the point (Score:5, Insightful)
The point isn't flexibility: sure you can be more flexible with a Linux-based cluster. You can tweak and tune a Linux-based cluster to meet your specific needs. This is why Google uses such a cluster.
The point isn't about cost: the real difference between a decent name-brand PC and a Mac is negligible. In the case of these Mac-based clusters, since the clustering software is just another app, a Mac cluster can be set up and torn down quite readily. You come into the lab on Wednesday to find your workstation has been appropriated for the cluster.
The point is accessibility! If you're a physicist at a small school looking to model some complex interaction, you can rent some computer time from somebody (expensive), build a cluster (very expensive, because you'll have to hire somebody to do it; physicists aren't likely to be Beowulf experts), or use the Mac clustering software (expensive, because you'll have to buy the machines if you don't already have them, but you can do it yourself, quickly, without much bother).
Accessibility! It's what keeps Apple in business. This is another example of it.
I'm pretty disappointed in the posters who knock it, because it strikes me that they are a bit put out that they won't remain the Technical Elite because they've got the spare time to read the 230-page Beowulf manual.
That's NOTHING... (Score:2, Funny)
um, where's the hard part? (Score:3, Funny)
The faster the machines and the network the better, of course, but you could do this with the cheapest iMac ($799 new, ~$400 used) or a bunch of Cubes (banking finally on their close-packing ability) if you want AltiVec in the mix.
Gosh, a reason to make a headless iMac2 - that would be quite the aesthetic eh? Seventy six of those snuggling on a ping pong table...
Communication can be over AirPort, too, so you can imagine ad hoc Mac clustering being set up during the first half of every Jobs keynote (you know, the part where he just says stuff) to churn through all possible iterations of the product to be intro'd in the second half.
The real point here is... (Score:5, Informative)
So, what does all this mean to us? As an atmospheric scientist, having some serious number crunching power is mighty helpful. Weather modeling is quite the processor intensive task, and then interpreting the results can take years after all the computing is done, including further computations and visualization routines. To put it shortly, we can easily tax our computers.
So, now you know that we need computing power, but money is at a premium for us in many cases, so why shouldn't we just get some cheap Intel boxes and *nix-cluster them? Well, we could, but then we'd need to hire a systems admin: someone who is tech-savvy enough to keep everything running decently well for us. That requires another person who REALLY understands what's going on in many cases, which is another salary on the payroll. For us, it all balances out in the end. The $5-10K that we save in clustering our 8 Intel boxes over the Macs is eaten up in one year or less by the guy (or woman) who has to set up the whole thing. So, for us, the ease of setup and use is something that can translate into some good savings, and we don't have to worry as much about having to rely on another person to save us if something goes wrong. That's the benefit of simplicity for us.
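The break-even reasoning above boils down to one line of arithmetic. Both dollar figures below are illustrative, taken from the post (the midpoint of the quoted $5-10K hardware saving, and a guessed share of an admin's salary attributable to the cluster):

```python
# How long until admin labor eats the hardware savings? Figures are
# illustrative: midpoint of the post's $5-10K saving, and an assumed
# $10K/year of sysadmin time spent on the hand-built cluster.
hardware_savings = 7500
admin_cost_per_year = 10000

breakeven_years = hardware_savings / admin_cost_per_year
print(breakeven_years)  # 0.75 -- the savings are gone in under a year
```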
I agree that it is important to know, as one person said, "The nature of the beast", but that's something that takes time to do, and when you're not being paid to learn about how to cluster computers, but to figure out how the atmosphere works, then things like "The nature of the beast" are just further complications. I would rather have something that I can slap together, know that it works, and get back to my work, without the interference of others if I don't need it.
And that brings me to another rebuttal, about someone mentioning that if you buy the Macs, you're also going to pay for all the extra Superdrives and video cards and all that. I say to that, "Good." That way, if the cluster doesn't need to be used, then I don't have a bunch of mostly useless boxes sitting around... or if a collaborator comes around and needs a computer, I can just remove one of the computers from the cluster and let them use that for as long as they need. The point is that there are advantages and disadvantages to each setup. Now you've heard some advantages and why the scientific community might care about this. Remember, not everyone here can compile their own kernels and not everyone cares about being able to do that. Some of us, thank the deity of your choice, actually want to do something with this power and not care how it works in depth. To each their own.
-Jellisky
exactly. (Score:3, Interesting)
The money saved by using a free OS is quickly eaten up by the salary of someone who has to make them run smoothly, which is damning if you're a small business with only a few employees, or in your case, a research group.
Ph.D required!! (Score:3, Funny)
Yeah, I guess there wouldn't be any qualified people among those running tokamak fusion simulations or 100-million mutually interacting particle simulations.
A diskless Linux system is cake to set up, and as far as different kernels are concerned, the article is clueless: you can use LAM/MPI to mix different platforms (i.e., Sun, SGI, Intel Linux, Alpha Linux) in a single cluster.
Disclaimer: I have a Ph.D.
My Manual is Smaller than Your Manual! (Score:5, Insightful)
I recall, back when CD-ROMs were fairly newfangled, the "manual" that came with the CD, if it was a dual-platform disk, often offered an interesting contrast.
The Windows instructions would go on for pages, discussing running the installer application, how to get the right drivers, etc.
The Macintosh instructions were usually:
I never understood why Apple didn't market that advantage heavily.
They did, but it didn't work (Score:3, Insightful)
The original 128K Macintosh came with a thin manual and a cassette tape (which you played along with a movie running on the screen). This was enough. One of the first Mac commercials showed a PC with a stack of books falling on the table, and a Mac with the thin manual floating down.
However, they made the same error that you make: thinking that people select for ease of use. They don't. This is what happens:
The sum total of this is what I call "the Acolyte effect." An Acolyte is someone studying for the priesthood. Computer acolytes are attracted by the pseudo-mystical nature of software; learning its ins and outs is for them a rush. The choice of computers and software becomes a social hierarchy.
what if the manuals really ARE an indicator? (Score:4, Insightful)
Everyone here seems to be suggesting that the manuals indicate nothing. "Apple has weak docs!" seems to be the summary. But can we entertain the notion that perhaps while 1 page is too short, 230 pages is far too long? If so, is this because the people who wrote the manual are not professional authors, and got too wordy? Or is it because Linux just isn't usable enough?
And whatever you think, isn't it reasonable to suggest that making Linux more intuitive and the manuals more succinct might help rid us of idiot lusers who won't RTFM? They won't really go away, but if we actually take usability seriously, perhaps developers can get half those people to solve their own problems. Wouldn't this be a good thing? I guess that's a rhetorical question -- I am sure it is a good thing. I spend my entire workday building apps for people, and one usability tweak can mean the difference between 20 nagging people a day and 2. My team even has blacklisted a couple people in the company, whose projects are always time-sinks to build and time-sinks to maintain. Why? Because those people are control freaks who won't let us fix usability errors, and my team ends up spending their days on support. If you can build something intuitive and usable, both the users and the developers will be much happier.
And this is convincing because...? (Score:3, Informative)
Yes. Wonderful. This says nothing. This is one of "those" statistics. The Linux "how to" could be 230 pages because it not only tells you how to set it up, but gives you advice on customizing, creating optimized programs, hacking the kernel, and FAQs covering every single problem or question you might have.
The Mac PDF might be an almost blank page that says, "Call tech. support." Furthermore, why mention that it's a PDF at all? Are you saying that it's somehow better to use a proprietary document format (e.g. Proprietary Document Format - PDF, get it?) instead of plain text? Is the information somehow MORE relevant because it's in PDF?
Please. I've seen neither, but all this tells me is that someone wouldn't know a relevant comparison if it widdled on his shoes and stole his wallet.
Re:And this is convincing because...? (Score:3, Informative)
Well, at least you admit that this comment wasn't based on any actual facts. Here is the complete text of the PDF:
As you can see, there are only two sentences about actually installing the program, and three paragraphs about how to use it. Is the entire Beowulf book about installation and setup? Of course not, but a good few dozen pages are, so I'd say Pooch's two-thirds of a page wins.
Comment removed (Score:3, Informative)
Power vs. Cost vs. Maintenance... (Score:3, Insightful)
1.) Power vs. cost. The G4, with AltiVec-enabled MPI code, can blast data through in 128-bit chunks. Steve Jobs loves to term this the "Velocity Engine", and it is much, much more powerful when doing solid number crunching, which is exactly what would be taking place on these clusters. It's not as amazing for day-to-day operations, but the capability is there to quadruple the data flow of a traditional processor when doing clustered computing. Typical AMD/Intel processors simply cannot do this.
2.) Maintenance. This is key. I maintain a Linux cluster and have worked with others in the past, and wonderful as they are, they require lots of maintenance. It's pure and simple math. You probably built all 16 or whatever nodes with individual parts made by various companies, and inevitably, each of those elements will have problems. This makes debugging and fixing hardware problems unbelievably painful, especially when you also have to deal with multiple parts vendors. When you use Apple Power Macs, ALL hardware problems can go through ONE support source, and that's Apple. Plus, they are pre-built, tested, and refined in Apple's R&D labs far before they make it to your cluster room. This saves such incredible amounts of time and money, it definitely pays for the extra cost of the computers themselves. I wish I could explain to you the sheer pain of keeping a cluster alive which constantly had one part go bad here and there -- but one part, sixteen computers, each with eight or nine significant custom-attached parts... well, it meant a lot of troubleshooting time, a lot of replacement time, and having to deal with far too many different companies to get the parts and support I needed.
3.) MacOS X. Clustering under previous MacOS versions was, despite the best efforts of AppleSeed, absolutely reprehensible. The operating system was simply not designed to do massive computing projects, and it was not efficient at all. Definitely not worth it despite the work of the pioneers in the field. With OS X, you now have a BSD operating system, one that has done clustered parallel computing for over a decade. MPI, with AltiVec enhancements; gcc with multiprocessor compilation support, you name it, it now runs under OS X and, with the operating system natively supporting the G4, it does it DAMNED fast.
"What the heck do you know," you might ask. Again, I maintain a 16-node Linux cluster for a plasma simulation group at the University of Colorado, and am also the CU campus rep for Apple Computer. I am well-versed in both OS X and Linux, and their scientific computing environments, and have experience in clustering in both environments. I am in the process of establishing a scientific computing initiative at CU, and I am doing it on behalf of Apple because the G4s (and soon, G5s) are simply the best platform for multi-platform scientific and high-intensity computing.
The best saving grace from a sysadmin's point of view, is that I will never have to worry about maintaining the variety of parts in those damned Linux clusters. The operating system is wonderful for scientific computing, yes, but there's simply no cost-effective way to purchase and maintain Linux-based PC hardware that could ever compare to the Mac. From an overall perspective, and this is definitely the most important aspect, those who are using massive parallel clusters of computers need their data crunched fast, and the G4 processor, combined with AltiVec-enhanced code, is simply the fastest way to crunch data, straight and simple.
I hope that clears up the issues for people, because that's how it is. Just the facts, ma'am.
Ryan Bruels
Apple Campus Representative
University of Colorado, Boulder
bruels@mac.com * 303-332-5434
Re:BEOWULF CLUSTER! (Score:2)
Re:Clusters and clusters (Score:2)
If you need a high-performance computing environment, you need a batch-process mainframe and an elite band of nerds to run it. You plebs can just wait patiently outside the machine room.
If you want a toy, go get one of those piddling PCs.
I just love ridiculous condescension!
Re:Clusters and clusters (Score:4, Funny)
You've come to the right place.
Re:Clusters and clusters (Score:4, Informative)
The software used to accomplish the clustering for AppleSeed is Mac MPI, which is based upon the *standard* for parallel computing, MPI. The reason that the PDF doesn't talk about programming MPI is that there is no need for redundant documentation. Go find a book on MPI if you want to learn to program to that API.
And yes, I will get quite far telling you it's easier to upgrade Mac OS X to its latest version. Thanks to Apple's Software Update control panel, this can all take place automatically according to any schedule you desire. Two clicks of a mouse is all it takes to set this up, as opposed to spending quite a lot of time figuring out how to use the incredibly arcane "apt". In fact, AFAIR, Software Update is now set to operate automatically by default.
Gee, I didn't realize that particle physics simulations involving millions of particles wasn't a *real* application...
The fact that your comment has been moderated up to four (so far) is simply an empirical demonstration of the lack of knowledge of most Slashdot readers.
NetBoot (Score:4, Informative)
Just have all of your OS X clients boot off of a disk image on a Mac OS X Server machine.
http://www.apple.com/education/k12/networking/differ/index.html#macmanager [apple.com]
and this is... your oppinion? (Score:2)
Building a true multi-user environment (I mean with multiple people at multiple machines) isn't all that easy. I doubt support costs are really less.
I've seen people say this before, but personally I doubt it's anything other than random Apple hype (like the 230-page manual vs. the 1-page PDF, even though much shorter Beowulf docs exist).
Re:Text of the one-page PDF file (Score:2)
"Wow, I know a couple of friends on the 'net who have Macs, if we all install this will we build a distributed system?"
Those instructions are pretty flimsy; I seriously doubt it would work across disparate IP addresses (a guy in India and a guy on AOL aren't going to be able to link up just by installing the software). And even then, without some information on building the actual network, you aren't going to get much performance on problems that require much crosstalk.