Myrinet Available for Mac OS X
KeithOSC writes "Looks like Apple may be one step closer to real parallel computing. Myricom has just released drivers for its high-speed cluster-computing interconnect. I've been beta testing them for two months now. Based on my findings, Mac OS X may be a real Beowulf cluster option. (Now, if Apple would just give us faster memory and PCI buses.)"
Expensive? (Score:2, Informative)
Re:Expensive? (Score:2, Informative)
You can use G4 Briqs [terrasoftsolutions.com] for building a cluster.
Although, for the price of one new LOADED G4 Mac, you can get a whole 8-way Intel box, or a couple of really nice P4s....
Re:Expensive? (Score:2)
Replacement for Research Clusters (Score:2, Interesting)
Re:Replacement for Research Clusters (Score:2, Informative)
Inconsistent Application (Score:2)
Re:Inconsistent Application (Score:1)
The only possible advantages I can see are:
Strange Priorities (Score:1, Troll)
Myrinet itself thinks this is news:
"The IWR parallel high-performance computer was installed at the beginning of this year and consists of 512 AMD Athlon MP processors, two of them are placed into one computing node. These processors have frequencies of 1.4GHz and reach a theoretical maximum performance of 2.4 billion floating point operations per second (Gflops). The total system indicates a theoretical peak performance of more than 1.4 Teraflops, which well exceeds even all present installed Myrinet PC cluster in the USA. First performance measurements by using the well known Linpack Benchmark show an extraordinary performance of 825 Gflops, which would have placed this supercomputer in 24th position of last November on the list of the Top 500 most powerful computers in the world."
You can use Lego Mindstorms bricks as web servers and you can use Macs in a cluster, but who would want to?
Re:Distributed object support in Cocoa (Score:2)
In other words, you can have about 6 dual Athlons for every one dual Apple at the same price. Since there are absolutely no benchmarks where the G4 performs 6 times better than an Athlon, and clustering is about building supercomputers from commodity hardware, the Athlon is a much better value.
Re:Distributed object support in Cocoa (Score:1)
$70,000 does not go very far in development, so if you're doing research (i.e. potentially requiring *new* code), you cannot always just trade off time against money.
Clusters of Apple hardware will fill a niche; it doesn't matter one bit that you want to commoditize the market. Maybe you should think about being a market analyst and leave this kind of thinking to engineers.
That last statement was off-topic, like your post.
Gimme! (Score:2)
The Obligatory . . . (Score:1)
Why are they doing this? (Score:5, Interesting)
There are rumors that Steve Jobs has started visiting major special effects houses, etc., literally approaching the people who work there (people who have said they would love to be able to use a Mac from start to finish) and asking "what do you need from Apple to make your work better?" (assuming they think the work would be done better all on Macs, etc.). From the replies I have seen (again, rumors): at least dual processors (quad would be nice), a 1U rack case, a faster bus, faster memory, etc. These places don't care about cost if it helps them get work done better (and isn't obscene) and actually saves them money.
The power required to run a full rack of dual G4 1U machines (70 machines) is considerably less than that of the equivalent dual AMD machines. When you are clustering lots and lots of these machines, that really starts to add up. And less power usage means less heat, so lower AC costs. And those are just the rarely considered expenditures.
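A rough illustration, with wattages that are my own assumptions rather than measured figures: if a dual G4 node draws around 160 W under load and a dual Athlon node around 300 W, then across a 70-machine rack

$$ 70 \times (300 - 160)\,\text{W} \approx 9.8\,\text{kW} $$

is saved continuously, before counting the comparable reduction in air-conditioning load.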
Apple is moving to make some big splashes in the high-end market. They now have an operating system that can be used in the high-end market and the consumer market at the same time, which means applications can move from one market to the other easily, cross-pollinating along the way (iMovie bringing digital video editing to the masses is something of an example).
I doubt these guys wrote the drivers entirely by themselves. This would require some very low-level stuff and lots of help from Apple (because low-level stuff is still being tweaked, etc.), meaning that Apple had to take part in it. Apple probably prompted them to do this in the first place. Why? Because in a few months, after the drivers have stabilized, Apple will be announcing products that use them.
I think it's cool that Apple seems to have found a balance between keeping a product secret and still getting field testing of their more obscure stuff. (High-speed networking/clustering of this type is a foreign beast to their current hardware and to the market that would be using it, because Mac OS X has never worked with it before.) The same is true with Bluetooth. They're showing you their cards, knowing they still have an ace up their sleeve.
Re:Why are they doing this? (Score:1)
I doubt these guys wrote the drivers entirely by themselves. This would require some very low-level stuff and lots of help from Apple (because low-level stuff is still being tweaked, etc.)...
Nothing was "tweaked." The driver is available as a binary or source kernel module. No kernel patches or special kernels are required.
While I'm not a huge fan of IOKit (just the opposite, in fact), I will say that Apple has fairly decent high-level APIs for pinning a user's memory (the only really tricky thing an OS-bypass driver needs to do), because it's based upon BSD and Mach. The machine-dependent driver code to pin and unpin a user's buffer in OS X is 80 lines of code including comments (and it is roughly similar in FreeBSD and Tru64). Linux, on the other hand, requires 300 lines to do the same job and does involve low-level access to the OS (low-level enough that new kernel versions require driver updates).
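For a flavor of what that looks like, here is a minimal, hypothetical sketch of pinning a user buffer through IOKit's IOMemoryDescriptor class. The function names and error handling are illustrative, not Myricom's actual driver code:

#include <IOKit/IOMemoryDescriptor.h>

// Sketch only: wire down (pin) a user-space buffer so the NIC can DMA
// into it while the owning process keeps running.
IOMemoryDescriptor *
pin_user_buffer(task_t user_task, vm_address_t uaddr, IOByteCount len)
{
    // Build a descriptor for the range in the *user* task's address
    // map, not the kernel's.
    IOMemoryDescriptor *md =
        IOMemoryDescriptor::withAddress(uaddr, len, kIODirectionInOut,
                                        user_task);
    if (md == NULL)
        return NULL;

    // prepare() wires the pages into physical memory; after this the
    // driver can hand physical addresses to the hardware via
    // getPhysicalSegment().
    if (md->prepare() != kIOReturnSuccess) {
        md->release();
        return NULL;
    }
    return md;
}

void
unpin_user_buffer(IOMemoryDescriptor *md)
{
    md->complete();   // unwire the pages
    md->release();    // drop the descriptor reference
}

Everything beyond this sketch (building the card's address-translation tables, teardown on process exit, and so on) is where the real driver work lives.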
Re:Why are they doing this? (Score:2)
I still get the sense they got some "urging" from Apple to do this, or at least some really nice machines to test the hardware on, for not much expense.