Apple Releases Cluster Node Xserve 57
JHromadka writes "Apple today released a cluster node version of its Xserve rackmount server. The Cluster Node is a dual 1.33GHz G4 that has 256 MB RAM, no optical drive, Gigabit Ethernet only on the logic board, no graphics card, and only 10 client licenses. Starting price is $2799, which is a grand less than the normal Xserve."
256 Megs RAM, huh? (Score:1, Flamebait)
Re:256 Megs RAM, huh? (Score:2, Insightful)
No one in their right mind buys RAM through Apple; they charge insane markups. Having one supplier is convenient, but not worth that kind of price, IMHO.
This way, if someone buys a rack full of XServes, they can spend under $10K filling them up with ram instead of two or three times that buying preinstalled RAM from Apple.
Re:256 Megs RAM, huh? (Score:3, Informative)
-psy
10 Client Licenses? (Score:2, Interesting)
Comment removed (Score:4, Informative)
Re:10 Client Licenses? (Score:1, Informative)
Re:10 Client Licenses? (Score:2, Funny)
Re:10 Client Licenses? (Score:4, Informative)
Of course, if you have OS X clients, you can always use SMB or NFS on the client to connect to an OS X server. Only OS 9 or lower Macs would use up the AFP client licenses. Go figure.
CC
License info, from Apple (Score:4, Informative)
Sorry it can't be linked directly, but if you go to http://store.apple.com/, click "Apple Software" under "Software and Books" on the left, and scroll down to "Mac OS X Server v10.2 (10-User Lic.)", you will see:
A 10 User license should be used if your server load is no more than 10 simultaneous file sharing connections (for more connections, please select the Unlimited license).
A very good thing (Score:2, Interesting)
By offering more processors for the dollar, this configuration really plays to the platform's strengths in computational clusters for applications that can use Altivec.
Think Clustered... (Score:4, Funny)
Re: (Score:3, Funny)
Re:Think Clustered... (Score:1)
Why don't you READ my webpage before you assume it's lame? There are a lot of lame websites on *ALL* ISP hosting spaces, but there are a few good ones.
If you think my site is stupid, try this stupid teenage girl's page [aol.com].
Re:Think Clustered... (Score:2)
Re:Think Clustered... try: (Score:1)
*runs and hides*
Comment removed (Score:3, Informative)
Well (Score:1)
Whole point of the Cluster Node (Score:2)
My guess is that a lot of people will actually network boot them (if possible) and then run everything from there. This machine is designed for people who need extra grunt for processor-intensive stuff such as Shake.
Re:Only one hard drive (Score:2)
Someone removed the Blinkenlights joke! (Score:4, Interesting)
Hey! Earlier today this page: XServe Design [apple.com] included a cool joke:
I looked again, and now it's gone. Spoilsports! Did anyone cache the original? It's quoted here: at Macintouch [macintouch.com]. I swear I am not making this up!
Re:Someone removed the Blinkenlights joke! (Score:1)
Just wait until some freak with cash to spare sets up a cluster of these (no, not *that* kind of cluster) and runs SETI@home or one of those crypto-cracking things... interesting.
it's the strangest thing... (Score:3, Funny)
Think different (Score:5, Funny)
Re:Think different (Score:1)
Re:Think different (Score:2)
Re:Think different (Score:2)
*ponderponder* Nah-uh.
good price (Score:5, Interesting)
When building a very large cluster, this latter feature is massively important unless you have a free sysadmin; dealing with failures is a crucial part of running a cluster. I've seen too many cases where the individual units work fine but overheat in a cluster, or have too much downtime, or some fraction of the units fail more often. I'll pay double for reliability, and in fact on the last two systems I did pay double and got reliability (Supermicro P4s and RLX blades).
Stripping cluster units down is a good idea. Having the fastest possible system, or the one with the most disk space, is not always important in a cluster; it's throughput per dollar and reliability that count most. In my humble opinion, P3s sometimes outperform P4s on reliability and cost of ownership per unit of throughput.
Many types of clusters don't require even a local disk. One of the more important developments in the Linux world is the Linux boot and bproc work (from Los Alamos), which allows a cluster to run without any moving parts other than the fans (no CD, floppy, or hard drives need ever be present). Adding redundant power supplies, or better yet an external power supply, is yet another desirable feature.
A while back I bought two Xserves, and they are built to impressive design standards and, from what I can tell, are highly reliable. They are super easy to sysadmin and to keep patched, since Apple provides easy-to-use tools.
The main problem with the Apples, and the reason I still use x86 Linux boxes for my clusters, has been that sometimes there are one or two pieces of code that I can't get for the PPC cluster. This is not a big deal, just a nuisance. The other problem is the price-to-throughput ratio. If all of my code worked well with the AltiVec unit, my estimates convince me that the PPCs smoke x86 boxes of comparable quality in throughput per dollar; but if I can't compile well for AltiVec, the PCs win on price. Since my main apps aren't written with AltiVec in mind (they are in Fortran and have branches inside loops), I'm hosed.
What I have found is that the Apples do make very cost-competitive disk servers when you include the total cost of ownership and the high quality.
Re:good price (Score:2)
For those writing custom software, though, x86 clusters are probably more economical, if only because dual Xeons or dual Athlons are so much faster. Still it is
Re:good price (Score:4, Informative)
I'm thinking about getting this for an upcoming project.
Re:good price (Score:4, Interesting)
In my case, all of my loops were optimally bad for VAST to have any effect on.
Namely, I am working with single- and double-precision floating-point 3D (3-coordinate) vectors, and the tight loop in the interior goes from 1 to 3. Also, there are if-statements inside some of the loops.
VAST works best when loops are several multiples of four in size: multiples of four are nice for optimal memory-boundary effects and for fully utilizing AltiVec. It works best on integers, pretty well on floats, and hardly at all on doubles. Having if-statements inside loops kills it.
I'm not perfectly positive about this, but from reading the literature I believe VAST's handling of Fortran is sub-optimal: what it really does is make calls to C code from Fortran, which increases the overhead. Optimizing C, I think, should work better than Fortran.
The other interesting thing is that one would think Fortran 90 would be a fantastic language for converting to AltiVec, since it has syntax that allows out-of-order loop evaluation. So you may be surprised that VAST handles Fortran 90 by first converting it to Fortran 77!
Speaking of Fortran, my guess is that with the rise of vector processors and parallel processing, Fortran 95 is due for a small comeback in high-performance computing. The out-of-order loop handling is perfect for telling the compiler it can parallelize a section of code (e.g., a loop can be declared that says: let k span from 1 to 100, but I don't care what order k gets evaluated in, since all the steps are independent). The simpler, more direct memory structures may help as well. Fortran 95 also explicitly declares which variables in a subroutine call will or won't be modified by the call, again letting a compiler count on a variable not changing after the call. Thus one processor can handle a subroutine evaluation while another skips ahead of it and processes subsequent instructions, knowing which memory locations the concurrent subroutine call won't change. Finally, Fortran 95 can replace an if inside a loop over an array with a precomputed memory map of which array elements to act on, which again allows for vectorization and parallelization.
I believe that when people start using AltiVec-optimized BLAS and ATLAS libraries, the AltiVec advantage will show up in ordinary code. The problem right now is that if you want to write portable code, you are not going to specialize it for AltiVec; hence using AltiVec is hard. Having widespread optimized libraries is the solution.
dear moderators (Score:1)
use your points for something worth moderating.
Re:G4 processors 128? (Score:3, Informative)
Re:G4 processors 128? (Score:3, Informative)
Re:G4 processors 128? (Score:4, Informative)
macslash article [macslash.org]
Re:G4 processors 128? (Score:3, Interesting)
Re:G4 processors 128? (Score:3, Informative)
For spicy rumors... (Score:4, Interesting)
Final Cut Pro 4 will come out next month. Shake 3.0 is also supposedly around the corner as well.
Either of these programs, coupled with an XRaid and HD footage, will give small vfx houses an interesting way to tackle production.
Anything that lowers the bar of entry is a good thing, IMO.
-Brett
I see pieces on the board. (Score:4, Insightful)
- Transition to UNIXy OS complete
- Xserve has appeared
- Node clusters have appeared
- Xraid
- high-end video editing (Final Cut Pro)
- high-end compositing software (Shake)
- high-end audio production (Logic)
- Maya support
and let's not forget existing pieces:
- QuickTime (now the basis of MPEG-4)
- relationship with Adobe (Photoshop, et al.)
- Avid (just ported Symphony to OS X)
- the other big audio guys (MOTU etc.)
If they get that processor situation sorted out come July, they are poised to totally pull it off, too. The slow-processor argument is the chief complaint about Apple. Take that away, and they are looking more impressive for content creators than at any time in their history.
(Oh, and incidentally, on a personal note - just my opinion, don't flame me - the above are all reasons why Quark can go to hell.)