Myrinet Available for Mac OS X

KeithOSC writes "Looks like Apple may have moved one step closer to real parallel computing. Myricom has just released drivers for its high-speed cluster-computing interconnect. I've been beta testing for two months now. Based on my findings, Mac OS X may be a real Beowulf cluster option. (Now, if Apple would just give us faster memory and PCI buses.)"

  • Expensive? (Score:2, Informative)

    by spencerogden ( 49254 )
    Granted, Myrinet is not cheap, but it doesn't seem to make much sense to use expensive Apple hardware and software in a distributed setting. Are the G4s still so much faster that they outweigh the cost? I haven't seen recent benchmarks. Is there evidence that Apple hardware and OS X are great for distributed processing? If not, why bother?
  • This is a great thing for research facilities, where the Mac has always held a strong foothold (along with academia). Also, when a *large* cluster is not needed, running Mac OS would be a bit friendlier for researchers administering their own little 10-node Beowulf...

  • You can buy four dual-processor Athlon rigs for every single-processor Apple. Since a cluster's advantage is its ability to create supercomputers from commodity hardware, why even attempt to use proprietary hardware? This is a dead-end clustering approach. If you need double precision, use Alphas. Their cost is comparable to Apple's, and they perform significantly better, especially for number crunching (the term everyone used before "bioinformatics" became a hip buzzword).
    • The only possible advantages I can see are:

      1. The computation really takes advantage of the AltiVec processing unit on the G4
      2. The app that uses the data runs on Mac OS X, and life is easier when the cluster is the same platform. This one is a bit of a stretch.
  • Slashdot thinks "Myrinet Available for Mac OS X" is news.

    Myrinet itself thinks this is news:

    "The IWR parallel high-performance computer was installed at the beginning of this year and consists of 512 AMD Athlon MP processors, two of them are placed into one computing node. These processors have frequencies of 1.4GHz and reach a theoretical maximum performance of 2.4 billion floating point operations per second (Gflops). The total system indicates a theoretical peak performance of more than 1.4 Teraflops, which well exceeds even all present installed Myrinet PC cluster in the USA. First performance measurements by using the well known Linpack Benchmark show an extraordinary performance of 825 Gflops, which would have placed this supercomputer in 24th position of last November on the list of the Top 500 most powerful computers in the world."

    You can use Lego Mindstorms as web servers and you can use Macs in a cluster, but who would want to? (A quick sanity check of the quoted peak figures follows below.)
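    As a rough sanity check of those figures (the two-flops-per-cycle number below is my own assumption, not something from the quote): two floating-point operations per 1.4GHz clock gives about 2.8 Gflops per CPU and roughly 1.43 Tflops for 512 CPUs, which lines up with the quoted system peak better than the quoted 2.4 Gflops per processor does. A small, hypothetical C++ sketch of the arithmetic:

        // Back-of-the-envelope check of the quoted peak and Linpack figures.
        // Assumption (not from the quote): an Athlon MP retires 2 flops per cycle.
        #include <cstdio>

        int main() {
            const double clock_hz        = 1.4e9; // 1.4 GHz, from the quote
            const double flops_per_cycle = 2.0;   // assumed: one FP add + one FP multiply
            const int    processors      = 512;   // from the quote

            const double per_cpu_peak = clock_hz * flops_per_cycle;  // ~2.8 Gflops
            const double system_peak  = per_cpu_peak * processors;   // ~1.43 Tflops
            const double linpack      = 825e9;                       // measured, from the quote

            std::printf("per-CPU peak : %.2f Gflops\n", per_cpu_peak / 1e9);
            std::printf("system peak  : %.2f Tflops\n", system_peak / 1e12);
            std::printf("Linpack eff. : %.0f%% of peak\n", 100.0 * linpack / system_peak);
            return 0;
        }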
  • by fm6 ( 162816 )
    (Now, if Apple would just give us faster memory and PCI buses.)
    Why should they? Their chosen market is the end user. High-performance computing is almost the exact opposite of that!
  • Imagine a Beowulf cluster of these [wired.com]
  • by Pfhor ( 40220 ) on Sunday April 21, 2002 @01:02AM (#3382025) Homepage
    Not Just Because...

    There are rumors that Steve Jobs has started visiting major special-effects houses, literally approaching the people who work there (and who have said they would love to be able to use a Mac from start to finish) and asking, "What do you need from Apple to make your work better?" (assuming they think it would be done better all on Macs). From the replies I have seen (again, rumors): at least dual processors (quad would be nice), a 1U rack case, a faster bus, faster memory, etc. These places don't care about cost if it helps them get work done better (and isn't obscene) and actually saves them money.

    The power required to run a full rack of 1U dual-G4 machines (70 machines) is considerably less than that of the equivalent dual-AMD machines. When you are clustering lots and lots of these machines, that really starts to add up. And less power means less heat, so less AC cost. Those are just the rarely considered expenditures (a rough back-of-the-envelope sketch is appended at the end of this comment).

    Apple is moving to make some big splashes in the high-end market. They now have an operating system that can be used in the high-end market and the consumer market at the same time, which means applications can move from one market to the other easily, cross-pollinating along the way (iMovie bringing digital video editing to the masses is something of an example).

    I doubt these guys wrote the drivers entirely by themselves. This would require some very low-level work and lots of help from Apple (because the low-level stuff is still being tweaked), meaning that Apple had to take part in it. Apple probably prompted them to do this in the first place. Why? Because in a few months, after the drivers have stabilized, Apple will be announcing products that use it.

    I think it's cool that Apple seems to have found a balance between keeping a product secret and still getting field testing of their more obscure stuff. (High-speed networking and clustering of this type is a foreign beast to their current hardware, and to the market that would be using it, because Mac OS X has never worked with it before.) The same is true with Bluetooth. They're showing you their cards, knowing they still have an ace up their sleeve.
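    Here is the back-of-the-envelope sketch of the power argument. The per-node wattages are placeholder assumptions made up for illustration, not measured figures; only the node count (a 70-machine rack) comes from the comment above.

        // Rough power/cooling comparison; the wattages are made-up placeholders.
        #include <cstdio>

        int main() {
            const int    nodes             = 70;    // one full rack of 1U nodes
            const double watts_dual_g4     = 150.0; // assumed draw per dual-G4 node
            const double watts_dual_athlon = 250.0; // assumed draw per dual-Athlon node

            const double g4_kw     = nodes * watts_dual_g4 / 1000.0;     // ~10.5 kW
            const double athlon_kw = nodes * watts_dual_athlon / 1000.0; // ~17.5 kW

            // Every watt drawn by the rack ends up as heat the AC has to remove,
            // so the cooling load roughly tracks the power draw.
            std::printf("dual-G4 rack    : %.1f kW\n", g4_kw);
            std::printf("dual-Athlon rack: %.1f kW\n", athlon_kw);
            std::printf("difference      : %.1f kW of draw (and of heat to remove)\n",
                        athlon_kw - g4_kw);
            return 0;
        }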
    • I doubt these guys wrote the drivers entirely by themselves. This would require some very low-level work and lots of help from Apple (because the low-level stuff is still being tweaked)...

      Nothing was "tweaked." The driver is available as a binary or source kernel module. No kernel patches or special kernels are required.

      While I'm not a huge fan of IOKit (just the opposite, in fact), I will say that Apple has fairly decent high-level APIs for pinning a user's memory (the only really tricky thing an OS-bypass driver needs to do), because it's based upon BSD and Mach. The machine-dependent (MD) driver code to pin and unpin a user's buffer in OS X is 80 lines including comments (and it is roughly similar in FreeBSD and Tru64). Linux, on the other hand, requires 300 lines to do the same job and does involve low-level access to the OS (low-level enough that new kernel versions require driver updates).
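      For the curious, here is a minimal sketch of what that pinning flow can look like with Apple's IOMemoryDescriptor class. The method names (withAddressRange, prepare, getPhysicalSegment, complete) are the generic IOKit API, the exact signatures have shifted between OS X releases, and the helper name pinUserBuffer is made up; this is not Myricom's actual driver code.

          // Hypothetical sketch: wiring ("pinning") a user buffer from an IOKit driver.
          #include <IOKit/IOMemoryDescriptor.h>
          #include <IOKit/IOReturn.h>

          // Made-up helper: wires the user's buffer and walks its physical segments,
          // which is roughly the job an OS-bypass driver has to do before DMA.
          IOReturn pinUserBuffer(task_t userTask, mach_vm_address_t userAddr,
                                 mach_vm_size_t length)
          {
              // Describe the user-space range; the device may read and write it.
              IOMemoryDescriptor *md = IOMemoryDescriptor::withAddressRange(
                  userAddr, length, kIODirectionOutIn, userTask);
              if (md == NULL)
                  return kIOReturnNoMemory;

              // prepare() wires the pages so they cannot be paged out during DMA.
              IOReturn rc = md->prepare();
              if (rc != kIOReturnSuccess) {
                  md->release();
                  return rc;
              }

              // Walk the physical segments; a real driver would hand these to the NIC.
              IOByteCount offset = 0, segLen = 0;
              while (offset < length) {
                  addr64_t phys = md->getPhysicalSegment(offset, &segLen);
                  if (phys == 0 || segLen == 0)
                      break;
                  // ... program the interconnect's DMA engine with (phys, segLen) ...
                  offset += segLen;
              }

              // complete() unwires the pages; release() frees the descriptor.
              md->complete();
              md->release();
              return kIOReturnSuccess;
          }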

      • Wow, that's really cool; I didn't realize that.

        I still get the sense they got some "urging" from Apple to do this, or at least some really nice machines to test the hardware on, for not much expense.
