
Apple Releases Cluster Node Xserve

JHromadka writes "Apple today released a cluster node version of its Xserve rackmount server. The Cluster Node is a dual 1.33GHz G4 that has 256 MB RAM, no optical drive, Gigabit Ethernet only on the logic board, no graphics card, and only 10 client licenses. Starting price is $2799, which is a grand less than the normal Xserve."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • 10 Client Licenses? (Score:2, Interesting)

    by absurdhero ( 614828 ) on Tuesday March 18, 2003 @07:43PM (#5540541) Homepage
    What exactly does this mean? Doesn't Apple usually give unlimited client licenses, or is there something different going on here? "10 client licenses" seems like something you'd see on a Windows XP box.
  • A very good thing (Score:2, Interesting)

    by RalphBNumbers ( 655475 ) on Tuesday March 18, 2003 @07:46PM (#5540555)
    Configurability is always a good thing, and this specific package looks very attractive for people who want to do some fairly large scale low budget vector computing.

    By offering more processors for the dollar, this configuration really plays to the platform's strengths in computational clusters for applications that can use Altivec.
  • by lordpixel ( 22352 ) on Tuesday March 18, 2003 @07:56PM (#5540616) Homepage

    Hey! Earlier today this page: XServe Design [apple.com] included a cool joke:

    Designed for the computational clusters and distributed applications, this Xserve configuration delivers high-density processing power without the server features you won't need in a cluster environment. A single drive bay offers space for the operating system, and there's no optical drive, which means the front panel can offer more ventilation.
    This does result in fewer blinkenlights. [Emphasis mine]

    I looked again, and now it's gone. Spoilsports! Did anyone cache the original? It's quoted here: At Macintouch [macintouch.com]. I swear I am not making this up!

  • good price (Score:5, Interesting)

    by goombah99 ( 560566 ) on Tuesday March 18, 2003 @08:19PM (#5540742)
    Having built several clusters now, I'd say this is close to an excellent price point. My experience is that cluster nodes tend to run about $900 to $1400 per CPU. You can get them for a lot less, but not from manufacturers you know will be reliable. That is to say, Joe Blow might build cluster units just as good as IBM's for $300 each, but unless I can actually distinguish Joe Blow from Sam Blow (who builds sucky ones), I won't buy from Joe Blow. Thus knowing something is reliable is as important as it being reliable. Reputation matters.

    When building a very large cluster, this latter point is massively important unless you have a free sysadmin; dealing with failures is a crucial part of running a cluster. I've seen too many cases where the individual units work fine but overheat in a cluster, or have too much downtime, or some fraction of the units fail more often. I'll pay double for reliability, and in fact on the last two systems I did pay double and got reliability (Supermicro P4s and RLX blades).

    Stripping cluster units down is a good idea. Having the fastest possible system or the most disk space is not always important in a cluster; it's throughput per dollar and reliability that count most. In my humble opinion, P3s sometimes outperform P4s on reliability and cost of ownership per unit of throughput.

    Many types of clusters don't even require a local disk. One of the more important developments in the Linux world is Linux boot and bproc (from Los Alamos), which allow a cluster to run without any moving parts other than the fans (no CD, floppy, or hard drives need ever be present). Adding redundant power supplies, or better yet an external power supply, is yet another desirable feature.

    A while back I bought two Xserves, and they are built to impressive design standards and, from what I can tell, are highly reliable. They are super easy to administer and to keep patched, since Apple provides easy-to-use tools.

    The main problem with the Apples, and the reason I still use x86 Linux boxes for my clusters, has been that sometimes there are one or two pieces of code that I can't get for a PPC cluster. This is not a big deal, just a nuisance. The other problem is the price-to-throughput ratio. If all of my code worked well with the AltiVec unit, my estimates convince me that the PPCs would smoke x86 boxes of comparable quality in throughput per dollar. But if I can't compile well for AltiVec, the PCs win on price. Since my main apps aren't written with AltiVec in mind (they are in Fortran and have branches inside loops), I'm hosed.

    What I have found is that the Apples do make very cost-competitive disk servers when you include the total cost of ownership and the high build quality.

  • For spicy rumors... (Score:4, Interesting)

    by asparagus ( 29121 ) <koonce@gm a i l . com> on Wednesday March 19, 2003 @12:35AM (#5542005) Homepage Journal
    Apple is targeting mid-range-complexity tasks with low-end hardware.

    Final Cut Pro 4 will come out next month. Shake 3.0 is also supposedly around the corner.

    Either of these programs, coupled with an Xserve RAID and HD footage, will give small VFX houses an interesting way to tackle production.

    Anything that lowers the bar of entry is a good thing, IMO.

    -Brett
  • by jweatherley ( 457715 ) <james@nosPam.weatherley.net> on Wednesday March 19, 2003 @07:35AM (#5543038) Homepage
    I'm using the new rc5-72 client and getting about 19 million keys per second [distributed.net] on a dual 1.0GHz tower. A cluster of XServes would be awesome at this task.
  • Re:good price (Score:4, Interesting)

    by goombah99 ( 560566 ) on Wednesday March 19, 2003 @12:53PM (#5544718)
    VAST is a good program when it works. The best part: it's fully automatic and fully transparent to the user. While human tweaking can do even better, getting a free 2x improvement (VAST claims this is typical) is great.

    In my case, all of my loops were pessimally suited for VAST to have any effect on: I am working with single- and double-precision 3D (3-coordinate) vectors, so the tight loop in the interior goes from 1 to 3, and there are if-statements inside some of the loops.

    VAST works best when loop trip counts are multiples of four: multiples of four are nice for memory-alignment reasons and for optimally utilizing AltiVec. It works best on integers, pretty well on floats, and hardly at all on doubles. Having if-statements inside loops kills it.
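
    To make the trip-count point concrete, here is a minimal hypothetical C sketch (the function names and the scale-by-s operation are my own invention, not from the post): the same 3D-vector scaling written first with the inner loop of length 3 described above, and again as flat, unit-stride, branch-free loops whose length can be padded to a multiple of four, which is the shape a 4-wide SIMD unit like AltiVec wants.

    ```c
    #include <assert.h>

    struct vec3 { float c[3]; };

    /* Array-of-structs: the hot inner loop runs only k = 0..2, which is
       too short for a 4-wide vector unit to do anything useful with. */
    void scale_aos(struct vec3 *v, int n, float s) {
        for (int i = 0; i < n; i++)
            for (int k = 0; k < 3; k++)
                v[i].c[k] *= s;
    }

    /* Struct-of-arrays: three flat, unit-stride, branch-free loops whose
       trip count n can be padded to a multiple of 4, letting a vectorizer
       process four floats per instruction. */
    void scale_soa(float *x, float *y, float *z, int n, float s) {
        for (int i = 0; i < n; i++) x[i] *= s;
        for (int i = 0; i < n; i++) y[i] *= s;
        for (int i = 0; i < n; i++) z[i] *= s;
    }
    ```

    Both versions compute identical results; only the data layout and loop shape differ, and that difference is what decides whether a tool like VAST can vectorize the code.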

    I'm not perfectly positive about this, but from reading the literature I believe VAST's handling of Fortran is suboptimal: what it really does is make calls to C code from Fortran, increasing the overhead. Optimizing C, I think, should work better than Fortran.

    The other interesting thing is that one would think Fortran 90 would be a fantastic language for converting to AltiVec, since it has syntax that allows out-of-order loop evaluation. Thus you may be surprised that VAST handles Fortran 90 by first converting it to Fortran 77!

    Speaking of Fortran: my guess is that with the rise of vector processors and parallel processing, Fortran 95 is due for a small comeback in high-performance computing. The out-of-order loop handling is perfect for telling the compiler it can parallelize a section of code (e.g., loops can be declared that say "let k span from 1 to 100, but I don't care what order k gets evaluated in, since all the steps are independent"). The simpler, more direct memory structures may help as well. Fortran 95 explicitly declares which variables in a subroutine call will or won't be modified by the call, again letting a compiler count on a variable not changing across a subroutine call. Thus one processor can handle a subroutine evaluation while another skips ahead of it and processes subsequent instructions, knowing which memory locations won't be changed by the concurrent subroutine call. Finally, Fortran 95 can replace an if inside a loop over an array with a precomputed memory map of which array elements to act on; again this allows vectorization and parallelization.
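
    The "precomputed memory map" trick at the end can be sketched in C (hypothetical function names of my own; in Fortran 95 this is roughly the effect of WHERE or PACK): a first pass records the indices that pass the test, and the hot loop then runs branch-free over just those indices.

    ```c
    #include <assert.h>
    #include <stddef.h>

    /* Pass 1: build the "memory map" -- the indices of the elements the
       condition selects. Returns how many indices were written to map. */
    size_t build_map(const float *a, size_t n, float thresh, size_t *map) {
        size_t m = 0;
        for (size_t i = 0; i < n; i++)
            if (a[i] > thresh)
                map[m++] = i;
        return m;
    }

    /* Pass 2: the hot loop. No if inside -- every iteration does the same
       work, which is what a vectorizing or parallelizing compiler wants. */
    void apply_map(float *a, const size_t *map, size_t m, float s) {
        for (size_t j = 0; j < m; j++)
            a[map[j]] *= s;
    }
    ```

    The branch cost is paid once, up front, in a loop that can stay cheap; the compute loop that runs over and over has a uniform body the compiler can vectorize or split across processors.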

    I believe that when people start using AltiVec-optimized BLAS and ATLAS libraries, the AltiVec advantages will show up in real code. The problem right now is that if you want to write portable code, you are not going to write it specialized to AltiVec; hence using AltiVec is hard. Having widespread optimized libraries is the solution.
