Xgrid Agent for Unix

mac-diddy writes "Someone on Apple's mailing list for Xgrid, Apple's clustering software, just announced an 'Xgrid agent for Linux and other Unix platforms' available for download. There are still some issues being worked on, like large file support, but it does allow you to simply add a Unix node to your existing Xgrid cluster. Just goes to show that when companies embrace open standards and code, the world doesn't fall apart."
This discussion has been archived. No new comments can be posted.

  • My Experience (Score:5, Interesting)

    by artlu ( 265391 ) on Wednesday June 23, 2004 @06:24PM (#9513200) Homepage Journal
    My company has had experience using XGRID on our G4 notebooks. We always leave XGRID running, and when we are at the office it is like having 20-30 CPUs available at any given time. Now with Linux, we can have about 300 CPUs available; I just wonder how efficient it really is in the non-OS X atmosphere.
    Time to find the download.

    • I pride myself on being a "jack of all trades, master of none" kinda guy. Unfortunately, I know very little about clustering. I often wonder why corporations with tons of workstations in one building don't do something like this to help out their intensive programs. Can I install clustering software on a workstation and give it the lowest priority so that it doesn't interfere with a user's work at all?
      • Re:My Experience (Score:5, Informative)

        by JamieF ( 16832 ) on Thursday June 24, 2004 @12:05AM (#9515320) Homepage
        Here are three gotchas that can make this sort of thing less appealing than it may seem at first:

        Problem type: The problem may not be well suited to running on a bunch of PCs (especially when the agent app isn't allowed to take 100% of the machine's resources to accomplish the task) over typical office networks. Basically, if the app needs to communicate frequently with other nodes, or if a huge data set is involved (or both), latency or bandwidth issues might outweigh the possible advantage of putting more CPUs to work.

        Security: The data may be highly sensitive, in which case you might not want to put it on ordinary desktop PCs that might have untrustworthy users, spyware, etc.

        Configuration: The configuration of your office's PCs may vary enough to make the cost of getting a companywide desktop cluster working unacceptably high. You'd have to pick a few target configurations and settle for that. Hopefully drivers and such wouldn't matter as much as CPU, RAM, disk, and OS version, but there are still companies that are just now getting their desktops updated to Win2K. There's also the headache of installing yet another required application on a large number of heterogeneous machines, which is virtually guaranteed to result in confusing installation problems. Oops, our app crashes if the user has this or that service pack installed. Oops, our app requires strong encryption. You could build your app on top of some sort of moderately portable framework or VM or whatever but that will have system requirements too, and probably will have some surprising gotchas when deployed in a real-world environment.
      • Before this whole clustering thing blew up I used to work for a company called Silicon Engineering which did IC design work. They (we) used a clustered batch processing package called DQS [] to handle distributing verilog (and similar) jobs out to all the assorted systems. The company had almost nothing but SunOS on SPARC at the time so this was a highly successful concept. We used the berkeley automount daemon, nfs, and nis to make sure that users and their rights and all their files existed in the same place
  • How many clusters (Score:5, Interesting)

    by Anonymous Coward on Wednesday June 23, 2004 @06:24PM (#9513209)
    actually have heterogeneous hardware platforms? It would be interesting to see a G5/Xeon/Athlon cluster make the top 10 in speed.
    • by aixou ( 756713 ) on Wednesday June 23, 2004 @06:29PM (#9513249)
      That would bring a tear to my eye. "I have a dream that one day, all different architectures can work together in a single cluster, and processors will be judged not by the flavor of their bits, but by the speed of their results."

      • by foidulus ( 743482 ) * on Wednesday June 23, 2004 @06:30PM (#9513261)
        I have a dream that one day, all different architectures can work together in a single cluster, and processors will be judged not by the flavor of their bits, but by the speed of their results.
        You forgot, "as long as they don't run Windows!"
      • "I have a dream that one day the state of Redmond, whose CEO's lips are presently dripping with the words of FUD and nullification, will be transformed into a situation where little Linux boys and Linux girls will be able to join hands with little Microsoft boys and Microsoft girls and compute together as peers-to-peers. I have a dream today. I have a dream that one day every bad GUI shall be exalted, every pricetag and TCO shall be made low, the open ports will be made closed, and the closed source will be
  • imagine (Score:4, Funny)

    by Anonymous Coward on Wednesday June 23, 2004 @06:26PM (#9513219)
    imagine a beo...oh...
  • Mixed Company (Score:4, Insightful)

    by jasno ( 124830 ) on Wednesday June 23, 2004 @06:26PM (#9513224) Journal
    Somewhat silly, but wouldn't you incur a bit of overhead mixing machines of different endian-ness? I suppose for non-communication intense algorithms this wouldn't be a big deal.
    • Re:Mixed Company (Score:5, Informative)

      by Carnildo ( 712617 ) on Wednesday June 23, 2004 @06:35PM (#9513297) Homepage Journal
      Somewhat silly, but wouldn't you incur a bit of overhead mixing machines of different endian-ness? I suppose for non-communication intense algorithms this wouldn't be a big deal.

      Not really. Everyone uses network byte order for communication, so you won't have more overhead in a mixed system than you would in a homogeneous system.
      • Re:Mixed Company (Score:5, Informative)

        by kma ( 2898 ) on Wednesday June 23, 2004 @07:40PM (#9513731) Homepage Journal
        Not quite. "Network byte order" is big endian. So on big endian ppc's, which macs are, all those "ntos" macros, etc., expand to NOPs. Once you introduce little endian machines into the mix, they start doing real work to transform internal representations for the wire.

        The real tragedy is when you have homogenously little endian machines; e.g., a network that only has PCs on it. An integer gets byteswapped twice to end up in exactly the same byte order it was all along.
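kma's point about redundant swaps can be seen directly in Python, whose `socket` module exposes the same `htonl`/`ntohl` conversions (the value here is arbitrary):

```python
import socket
import struct
import sys

value = 0x0A0B0C0D

# "Network byte order" is big-endian: pack with '!' to see the wire bytes.
wire = struct.pack("!I", value)
assert wire == b"\x0a\x0b\x0c\x0d"

# On a little-endian host, htonl really reorders bytes; on big-endian
# hosts (like the PPC Macs discussed above) it is the identity.
if sys.byteorder == "little":
    assert socket.htonl(value) == 0x0D0C0B0A
else:
    assert socket.htonl(value) == value

# Two little-endian peers still pay for two swaps, ending where they began:
assert socket.ntohl(socket.htonl(value)) == value
```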
        • Re:Mixed Company (Score:5, Interesting)

          by DonGar ( 204570 ) on Wednesday June 23, 2004 @07:51PM (#9513819) Homepage
          What's worse.... that often ends up happening for loopback connections.
        • Re:Mixed Company (Score:4, Informative)

          by joe_bruin ( 266648 ) on Wednesday June 23, 2004 @08:04PM (#9513891) Homepage Journal
          So on big endian ppc's, which macs are, all those "ntos" macros, etc., expand to NOPs. Once you introduce little endian machines into the mix, they start doing real work to transform internal representations for the wire.

          not quite.
          first, i think you mean "ntohs" (and ntohl and friends).
          second, they are not macros. they are, in fact, real functions (in glibc, bsd libc, and windows' winsock library). i'd imagine it's the same on macs.
          third, a macro that does nothing is not expanded to a NOP, it is simply removed by the preprocessor.

          so, assuming the macs are conforming to bsd networking standards, ntohs is required to be a function, so there is still a function call per conversion (which is much more costly than doing the actual byteswap).

          The real tragedy is when you have homogenously little endian machines; e.g., a network that only has PCs on it. An integer gets byteswapped twice to end up in exactly the same byte order it was all along.

          a real high performance implementation (ie, the kernel) would not use ntohl, it would implement a similar byteswap macro. a byteswap can be done on x86 in one instruction, so it is fairly trivial to do.
          • Re:Mixed Company (Score:4, Informative)

            by Tjp($)pjT ( 266360 ) on Wednesday June 23, 2004 @08:40PM (#9514145)
            Unless ntohs is an inline function. Most compilers will optimize out inlines that return their calling argument unchanged. Of course reality differs and they are actually null macros on OS/X.
            These routines convert 16 and 32 bit quantities between network byte order and host byte order. On machines which have a byte order which is the same as the network order, routines are defined as null macros.
            The above quote brought to you by HMUG [].
        • Receiver swaps. (Score:3, Informative)

          by tlambert ( 566799 )
          Receiver swaps.

          In DCE RPC, the receiver does the byte swapping, if necessary. One of the main reasons Windows network services are built on DCE RPC is that between homogeneous systems, there's no swapping taking place: all that data goes out in host byte order, and there's no such thing as network byte order.

          One of the big arguments about this had to do with Windows machines on Intel not "playing fair" with systems that natively implement network byte order as their host byte order. When talking to Intel
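The receiver-swaps scheme tlambert describes can be sketched in Python; the one-byte endianness tag and message layout are invented for illustration, not DCE RPC's actual wire format:

```python
import struct
import sys

def send_native(value):
    # Sender writes in its own host order and prepends an endianness tag.
    little = sys.byteorder == "little"
    tag = b"L" if little else b"B"
    return tag + struct.pack("<I" if little else ">I", value)

def receive(message):
    # Receiver-swaps: decode using the *sender's* order, so two
    # like-endian machines never byteswap at all.
    fmt = "<I" if message[:1] == b"L" else ">I"
    return struct.unpack(fmt, message[1:])[0]

assert receive(send_native(0xDEADBEEF)) == 0xDEADBEEF
assert receive(b"B" + b"\x00\x00\x00\x2a") == 42
```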
  • by Grant29 ( 701796 ) * on Wednesday June 23, 2004 @06:26PM (#9513229) Homepage
    This is really great news, as it's becoming more popular to add CPU clusters to improve performance. Google is probably not the originator of this type of computing, but they have definitely pushed it into the mainstream. Anyone living in NC might want to check out this new cluster going into RTP, NC. I wonder if this will be the biggest cluster ever []

  • Kinda Cool (Score:5, Insightful)

    by hypermike ( 680396 ) on Wednesday June 23, 2004 @06:27PM (#9513231)
    Imagine waking up one day to find your Mac has solved a vexing scientific problem. While the cure to cancer, super-efficient solar power and ending world hunger are a ways off, you can combine your computing resources using Xgrid -- and help usher in a new era of biological breakthroughs, rocket science and advanced models of scientific phenomena.

    Everything is better clustered...

    • by Anonymous Coward on Wednesday June 23, 2004 @06:30PM (#9513257)
      [iMac] GOOD MORNING

      [Me] Good morning, computer. How are you?


      [Me] Oh is that so.


      [Me] Huh.


      [Me] .. That's... nice. So how about some Doom 3 then?

      [iMac] OK
  • by numbski ( 515011 ) * on Wednesday June 23, 2004 @06:29PM (#9513250) Homepage Journal
    I have my G4 PowerBook (866MHz) and my 800MHz iMac on my LAN at home.

    If I use XGrid on the two, what kind of performance could I use it for day to day?

    Faster compiles of applications would be the first thought. Any usefulness, say running photoshop? How about Quake? MAME?
    • by Colol ( 35104 ) on Wednesday June 23, 2004 @06:44PM (#9513364)
      Of the applications you've mentioned, only compiling things in Xcode would have any benefit. To utilize Xgrid, the application has to be written for it, which most apps simply aren't (and given turnaround issues, it would suck for things like Quake and MAME).

      Xgrid's main benefit is in "grunt work" calculations that aren't necessarily needed immediately. Things like SETI@Home or Folding@Home would be the sort of thing Xgrid excels at: throw some data out, have it processed, get it back when it's done.

      While Apple has made clustering drop-dead easy, it's really not targeted at the home or small-business user, and the potential uses are pretty limited in that field.
      • by Twirlip of the Mists ( 615030 ) on Wednesday June 23, 2004 @08:35PM (#9514099)
        To utilize Xgrid, the application has to be written for it

        Not so, not so.

        If your problem is embarrassingly parallel, chances are you can use Xgrid to run it right now.

        For example, let's say you're rendering a 3D animation. (I haven't done real 3D work since the PowerAnimator days, so pardon me if some of my jargon is antiquated.) You've got a scene file on which you can run a render command. A command-line argument tells the renderer which frame to render.

        No problem. Just use Xgrid's Xfeed plugin. Xfeed lets you set up a job that runs a single command with a variety of command-line arguments. You tell Xfeed that you want to run the "render" command with "-f" and the numbers 1 through 720.

        Xgrid goes to the first available machine on the grid and says, "Run render -f 1." Then it goes to the second machine and says, "Run render -f 2." And so on, until there are no available machines. Then it waits until a machine becomes available and says, "Run render -f n."

        As each output file (a frame, in this case) becomes available, Xgrid (the client application itself, I mean) collects them in whatever directory you specified when you submitted the job.

        The cool part comes when you realize that this isn't a cluster. It's a grid. That means machines can come and go as they please. If this job is running overnight, when I come in the next morning and sit down at my workstation, the agent on my computer stops the job and de-registers itself. The job goes back in the controller's queue for processing on whatever the next available machine is.

        And you don't have to have any special software for this. It can be done right now with the tools that already exist in Preview 2.
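The Xfeed pattern described above (one command, a range of -f values, each task handed to the next free machine) can be mimicked locally with a worker pool; the `render` stand-in here is hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def render(frame):
    # Stand-in for invoking "render -f <frame>" on whichever agent is free.
    return "frame_%04d.tif" % frame

# Like Xfeed: a single command, argument values 1 through 720, each task
# handed to the next available worker; results come back in frame order.
with ThreadPoolExecutor(max_workers=4) as grid:
    frames = list(grid.map(render, range(1, 721)))

assert len(frames) == 720
assert frames[0] == "frame_0001.tif"
assert frames[-1] == "frame_0720.tif"
```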
        • Aye, you don't need to be running an XCode application. My friend and I were running XGrid on our PowerBooks (867 MHz Ti and 1.5 GHz Al), and he was able to write something that would allow us to use our combined power for Blender rendering. It was rather awesome, breaking 2.something GHz on G4 processors.

          Mind you, I don't know how he did it, as I am still a code monkey-in-training.
        • Pardon the expression, but "Not so, not so".

          While the Xgrid application does indeed allow you to create custom interfaces for command line programs, there is still the issue of data. Xgrid will start processes on remote machines but as to how data is read and distributed is another matter.

          i.e. If you have an application that simply generates data(eg. a calendar) then that would work well with the custom plug-in feature. However, if your program needs to be fed data(eg. sort a list read from stdin), you
          • by Graymalkin ( 13732 ) * on Thursday June 24, 2004 @04:17AM (#9516141)
            Feeding a remote process data is not as difficult as you're describing. If the program you're using doesn't support ranges in arguments it isn't too difficult to wrap a script around it that does understand input ranges. The Xgrid client makes it pretty simple to use ranges as arguments for programs. It's possible to use the likes of Blender [] and Xgrid to do distributed rendering. Ergo input control doesn't seem to be a terribly difficult hurdle to overcome with Xgrid.
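Graymalkin's wrapper-script idea can be sketched in Python; the `render` command name and `-f` flag are made up, and the runner is injected so the sketch doesn't actually execute anything:

```python
import subprocess

def run_range(cmd, start, stop, runner=subprocess.run):
    # Hypothetical wrapper: expand a range into one invocation per value,
    # for tools that only accept a single "-f <frame>" at a time.
    for n in range(start, stop + 1):
        runner(cmd + ["-f", str(n)], check=True)

# Collect the would-be invocations instead of executing anything:
calls = []
run_range(["render"], 1, 3, runner=lambda argv, check: calls.append(argv))
assert calls == [["render", "-f", "1"], ["render", "-f", "2"], ["render", "-f", "3"]]
```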
      • Clueless n00b question:

        Would it be possible to get this [] to work over Xgrid?

        At the high school I am teaching at, we have a lot of hardly used G4 eMacs and iMacs, and I would like to use them for something, and perhaps even earn a little news blurb for the school. I have been thinking about working with the SysAdmin to cluster the things and put them to good use. Xgrid seems like a good way to get them all working together, but I am very inexperienced in these sorts of things...

        Any suggestions?

        On a side

        • Publicity? Set up a cluster and benchmark it. Its numbers won't be all that much, but you'll get a good press release out of it. The politicians will eat that sh*t right up. You might even be able to leverage it into some grant money.

          Your side note is a sad but true sign of the times.
      • So does that mean I can run SETI@Home on a bunch of computers without alteration? Cause that would rock.

        I think the next step would be a freeware internet grid computing platform, a la SETI, but where people can lease time from Apple. Now that would be sweet.

    • basically, for your kind of applications: nothing.

      I doubt you compile applications that big.
      photoshop: get an SMP machine instead, and plugins that support it.
      quake, mame: you're kidding, get a faster GPU instead.
    • by mrchaotica ( 681592 ) on Wednesday June 23, 2004 @06:53PM (#9513430)
      You don't need XGrid for faster compilation - Developer Tools already includes distcc
  • by Anonymous Coward
    I wonder how the developers actually benefit from OSS. The way I see it is that these people put the time and effort in to make a great product, which they give away for free.

    Large corporations then download and use these products to increase productivity and get better results without paying a cent, possibly making themselves even richer in the process. This isn't a troll, I'm just after an answer. I'm not saying OSS is bad, but I'm curious as to what motivates developers.
    • by Anonymous Coward
      I wonder how the developers actually benefit from OSS. The way I see it is that these people put the time and effort in to make a great product, which they give away for free.

      In this case, Apple, the developer of XGrid, is benefiting because in order to use XGrid you have to buy hardware. Apple sells hardware.
    • by SuperKendall ( 25149 ) * on Wednesday June 23, 2004 @06:49PM (#9513394)
      In the past, as I have moved between jobs, I've written a number of Object->relational mapping tools.

      After a while they cease to be fun to write, and you'd rather just get on with writing code that does something instead of infrastructure. By using and contributing to OSS projects, you can use the same code no matter what company you end up at. Because the code is portable, it can become part of the package you can offer to a potential employer: they not only get an employee, but potentially one who can be productive almost right away because they are familiar with the tools they'll be using, with no cost to the company for said tools.

      So it makes life easier for you: less re-work. And it makes life easier for employers, as they get richer products sooner. And if the employee becomes really proficient at a widely used OSS project, they can write their own ticket through consulting or training.
      • >
        I've written a number of Object->relational mapping tools

        You mean object-SQL mapping. SQL ain't relational.

        This specific problem simply wouldn't exist in a relational system.

        So while I see your point and agree, your example was a technological, not licensing, one.

    • by tupps ( 43964 ) on Wednesday June 23, 2004 @07:38PM (#9513713) Homepage
      Stop thinking of developers as individuals who are trying to sell a product and think of developers as people who work/contract for organisations.

      Instead of buying a product that is 95% of what I want, I can take an OSS package that is 90% of the way there and pay a developer to customise it to exactly my needs. Now I have a solution that is perfect for my business, and I have maybe given something back to the OSS community, while if I had bought the product I would probably have had to change my business to use it. The company is now also free of licensing and upgrade issues, and does not have to worry about the vendor going out of business or introducing a new version with no support for the old one.

      If you think of software as tools for business rather than something that a developer tries to sell, OSS makes a lot more sense.
    • by Anonymous Coward on Wednesday June 23, 2004 @07:42PM (#9513743)
      Maybe that's not the right way to look at it. The way I see it is that these people put the time and effort into making great tools, not end product. Now they, and everyone else, are free to use those tools to create great products, which they don't give away for free.

      The mistake I see in every Microsoft attack on OSS and the great fallacy behind every purchased white-paper that predicts that OSS will destroy the economy is that writing and selling software is only a very small part of the economy! Most of the economy is involved in creating real, tangible things like cars and planes and food, etc, etc. Most of the economy is not involved in endlessly copying and selling the same pattern of bits.

      OSS creates tools that promise to improve the creation of many, many things on this planet and improve the prosperity of all. The only ones threatened by this are those that have made a business of monopolizing ideas: ideas that are so easy to duplicate or recreate that they are deliberately trying to set up and use the force of law to keep people from producing ideas on their own.

      OSS is really a "paradigm shift". This phrase has been used so emptily so many times by senseless marketing droids that it has lost impact over the years. But it is here, it is now, and it is unstoppable. How can they stop it? We have the source!
    • by eamacnaghten ( 695001 ) on Wednesday June 23, 2004 @08:08PM (#9513912) Homepage Journal
      I am involved in both proprietary and open software.

      In the proprietary model the software is becoming worth less and less. 5 years ago run time licenses accounted for over 80% of the income of commercial software provider companies, now you will be lucky to see it account for 40% and it is going down rapidly. The rest being made up of support, training and other services.

      However, the cost of producing software is the same, and what is more, it is an upfront cost. You cannot get money for it until after you have paid a programmer to write it.

      Open source takes the above to a logical conclusion. As software is becoming relatively worthless (as far as run-time licenses go) you do not lose by giving the software away for free, and if you Open Source it you have available a 90% solution from free software out there before you begin thus cutting down on the production costs.

      It is not about "giving stuff away" or people "not paying a cent" to use your software, it is about facilitating an extremely cost effective way for which software companies can provide services to the customer by using open source predecessors, and passing the benefit on to successors.

    • by Anonymous Coward on Wednesday June 23, 2004 @08:09PM (#9513914)
      Well, when I develop a piece of software (or hire someone to do it for me), I solve one of my problems. That's the benefit. End of story, really.

      For instance, I need a special library for an app. And none of the off-the shelf ones exactly match it. So I write it.

      Now, I find out that other people have a similar problem. So I think to myself "well, I already got my ROI, so to speak. I solved my problem. So now I'll put this software out as open source and see what happens".

      And people use the software in ways I didn't think of. They give suggestions on how it might be better. A few send in patches. Suddenly my solution is an even better solution, at no cost to me.

      On the flip side: I download an open source library. It works okay, but there are some bugs and it needs a little refactoring. It will cost $1000 in labor to fix this library, vs. $5000 to write it from scratch. So I do it and send the patch to the author. The author is happy (free patch), I'm happy ($4000 worth of code for free), and I don't have to re-do my fixes in the next version! I sure wish commercial software worked that way.

      A lot of folks make it seem that OSS is a bunch of people working for others, for free, like communists or something. Not true .. I write software *to benefit myself only*. I am a capitalist. I fully believe in free markets. I believe people should make as much as they can and get to keep it all. I also believe there's no justification for charging for something that costs nothing to copy, so I don't. It goes against my thinking: the only way something that costs nothing can be charged for is if you have authoritarian government enforcing it (which we do). Charge for service, sure. Charge for installation, sure. Charge for consulting, yup. Charge for the box, the CD, whatever. All of that takes time or materials and I can't "copy" it for the next guy.

      Of course, you don't have to explain *how* OSS works. Just look and see that it exists and is self-sustaining, that's enough to prove that *something* about it works!
    • If you already have a setup of hundreds of non-Mac machines running some clustering package that doesn't work on Mac, you might be tempted to try replacing it with this code. It might work better, it might be more nicely designed, it might be cheaper, or you might like it because it works on Macs or because the source is available. In any case there is a chance that you will like it and switch to it.

      If you switch, you are suddenly in the position of being able to add Macintosh products to your cluster, and you may go ou
    • One thing worth noting is that a few companies I have worked for would rather spend an extra month reinventing something that already exists, than paying up any license fees. This usually means we are late because something as simple as a list manager and string handlers are being redone. At least, that is the way it is in C/C++. For the company I work for, who develops in Java, we are willing to pay for the big solutions, such as Weblogic (as long as they don't get too greedy), but for the smaller componen
  • by Alphanos ( 596595 ) on Wednesday June 23, 2004 @06:43PM (#9513352)

    Just goes to show that when companies embrace open standards and code, the world doesn't fall apart.

    Don't get me wrong, I support open standards/code, but it doesn't show any such thing if this linux client has only just been released. I bet Apple, and others for that matter, will be watching sales of Mac machines for use in clusters. If they drop because everyone starts using linux PCs, then Apple will probably not try this again.

    • by xchino ( 591175 ) on Wednesday June 23, 2004 @06:55PM (#9513442)
      If everyone was going to be using Linux PCs in their cluster, they would just use one of the existing clustering applications. The only reason anyone would use Xgrid is because they plan on using SOME Mac nodes, which is better than none. This could increase sales by opening up Macintosh hardware to projects that couldn't use it before. Due to the cost of Mac hardware, it is often not feasible to build an all-Mac cluster, but if I can throw some G5's in here and there, Apple gets some of my money as opposed to none.
      • by aristotle-dude ( 626586 ) on Wednesday June 23, 2004 @07:21PM (#9513579)
        Actually, no it is quite feasible if you do it on a large scale and depending on what you use the cluster for. Big Mac and the Army cluster are two examples of where a mac cluster can be cheaper.
      • Setting up an Xgrid cluster is braindead easy for someone familiar with installing standard Mac software. You don't have to commit much time or energy to the task to get it drawing Mandelbrot fractals using all of the computer power you have at hand. Even over wifi.

        Once you get it running and figure out something useful to do with it, you could add a stack of linux boxes for a lot less than a stack of Macs. How much is a used 1Ghz PC? $50? I would consider adding 10 of those to my 3 Mac Xgrid, just for t
  • Home cluster (Score:4, Interesting)

    by vaguelyamused ( 535377 ) on Wednesday June 23, 2004 @06:46PM (#9513374)
    I wonder how effective this really is for home use? Will the performance improvement on my PowerBook be worth running XGrid on it and firing up a couple of older computers (600MHz iMac, 1.0GHz Pentium III) on Linux/OS X and adding them to the cluster? Would 100Mbps Ethernet cut it, and what about WLAN?
  • but.... (Score:5, Funny)

    by jwcorder ( 776512 ) on Wednesday June 23, 2004 @06:48PM (#9513391)
    "Just goes to show that when companies embrace open standards and code, the world doesn't fall apart."

    But the world hasn't fallen apart using Microsoft either...oops, I said that out loud....

    • Re:but.... (Score:3, Insightful)

      by javaxman ( 705658 )
      what world are *you* living in that hasn't fallen apart over the past few years??

      I might like to move there, but I suspect, like some other folks, you've simply stopped following the news...

      So, you're saying your PCs are completely problem-free? You don't get tons of spam and haven't heard of major web hosting services DDoSed by zombified Windows users? Huh.

    • Re:but.... (Score:5, Funny)

      by Tjp($)pjT ( 266360 ) on Wednesday June 23, 2004 @08:46PM (#9514184)
      It is OK. You didn't utter the name three times.
  • GridEngine (Score:2, Interesting)

    by Anonymous Coward
    I find Sun Grid Engine better than other similar grid tools...

  • great job (Score:2, Insightful)

    by rainman1976 ( 774702 )
    Good job with the clustering... as for the pro-Mac users who believe that this should not be, keep in mind that the computer is just a tool to simplify a job. Using a pipe on the base of a wrench to solve a problem more easily doesn't mean that Sears Craftsman is now going to start making longer wrenches; it just shows that people will use whatever they have to solve/simplify problems, and if that means clustering in non-Mac computers, then so be it. Job done: cheaper, simpler, and quicker. -Rainman
  • Can anybody confirm if the linux and unix ports are smp aware?
    • by Novajo ( 177012 ) on Wednesday June 23, 2004 @07:58PM (#9513857) Homepage
      Can anybody confirm if the linux and unix ports are smp aware?

      (I wrote the xgridagent).

      As the other poster said, XGrid does not care what the binary does (so it can be SMP-aware, multi-threaded, whatever). However, the xgridagent itself is not explicitly SMP-aware, but it is multi-threaded. Each task is started in its own thread, and depending on the OS(?) I guess they could spread to other CPUs. The other aspect of the question is "Does the Unix XGrid agent support MPI like Apple's GridAgent for OS X?". It does not, and I can't say for sure how difficult it would be to support it. However, since all communication is done via the XGrid protocol, I don't see what would prevent it from being implemented. But other things need to be done first.

      The most pressing issue is to fix the annoying "large message" issue which makes the agent hang (while it waits forever for the controller to accept more frames). I am convinced it is trivial, I just don't know enough about BEEP to fix it. I am hoping somebody who knows BEEP will take a look at xgridagent-profile.c and fix the xgridagent_SengMSG() function and send me the patch.

      Daniel Côté

  • Why bother? (Score:2, Informative)

    by jweage ( 472545 )

    There are many other open source cluster/queuing systems available.

    The one I prefer is OpenPBS []. It works very well for engineering compute clusters, and there are many different resource schedulers available which use the PBS job and node management system.

    • Re:Why bother? (Score:2, Interesting)

      by oudzeeman ( 684485 )
      OpenPBS is shit. It needs about 25 third-party patches (many of which have to be applied in order) to be halfway decent, and Altair Engineering doesn't actively develop it anymore, so these patches won't be integrated into OpenPBS. It also often falls flat on its face. I wouldn't want to use OpenPBS unless I had a trivially small cluster.

      If you want something free, TORQUE is OK. It is an OpenPBS derivative (they started with the last OpenPBS version and added all the popular scalability and fault tolerance patc
    • by csoto ( 220540 ) on Wednesday June 23, 2004 @08:48PM (#9514202)
      The other packages require a bit of planning, whereas Xgrid excels at locating nearby resources for pawning off processing tasks. Rendezvous (ZeroConf) is exactly about the need for ad hoc networking. Xgrid extends that to the cluster...
  • by Gordon Bennett ( 752106 ) on Wednesday June 23, 2004 @07:51PM (#9513821)
    Some households have a mix of computers and one can begin to see the benefits - for example, to halve the video compression time of iMovie when making a DVD.
    Considering Apple's ease-of-use for heavyweight *NIX apps this would empower more people to have more computing resources available rather than the big fish out there - schools with low budgets would be able to stretch their capabilities that bit further. And so on.
    • for example, to halve the video compression time of iMovie when making a DVD.

      Video compression is a difficult task to parallelize. If each frame were compressed individually it'd be easy: just send an uncompressed frame to a node and get the compressed frame back. But that's not how it works.

      Now, for something like Pixlet, which is frame-based, there's the possibility of distributing the task. But you will never use Pixlet. It was designed to compress 2K or 1080 material losslessly at a ratio of about 2:1.
      • by zalas ( 682627 ) on Wednesday June 23, 2004 @09:28PM (#9514442) Homepage
        Might video compression work if the first scanning pass is done on one computer and the keyframe locations are extracted and then each computer in the grid/cluster would render the chunks between keyframes in parallel?
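(A sketch of that idea, with placeholder functions standing in for a real scan pass and a real encoder: once keyframe boundaries are known, the chunks between them are independent and can be farmed out in parallel.)

```python
from concurrent.futures import ThreadPoolExecutor

def find_keyframes(frames):
    # Placeholder scan pass: pretend every 4th frame starts a chunk.
    return list(range(0, len(frames), 4))

def compress_chunk(chunk):
    # Placeholder encoder; a real one would run e.g. MPEG on the chunk.
    return [f // 2 for f in chunk]

def parallel_compress(frames):
    keys = find_keyframes(frames) + [len(frames)]
    chunks = [frames[keys[i]:keys[i + 1]] for i in range(len(keys) - 1)]
    # Each chunk could go to a different thread, process, or grid node.
    with ThreadPoolExecutor() as pool:
        compressed = list(pool.map(compress_chunk, chunks))
    return [f for chunk in compressed for f in chunk]

print(parallel_compress(list(range(10))))  # [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
```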
      • It might work well for a different usage scenario: ripping a DVD.

        The first step would be getting the data off of the DVD. Clearly that wouldn't be parallelizable unless you had multiple copies of the DVD (or some sort of hideous DVD multicast SAN).

        The second step (decryption) would probably be parallelizable. I don't know much about how CSS works but I imagine that it would be a fairly easy task to split the ciphertext into blocks for distribution and decryption.
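(Sketching the split-and-decrypt idea: CSS scrambles 2048-byte DVD sectors independently, so sectors can be handed to separate workers. The "decryption" below is a toy XOR stand-in, not real CSS descrambling.)

```python
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 2048  # DVD sector size, the unit CSS scrambles

def toy_decrypt(block, key=0x42):
    # Stand-in for real CSS descrambling; XOR is its own inverse.
    return bytes(b ^ key for b in block)

def parallel_decrypt(ciphertext):
    blocks = [ciphertext[i:i + BLOCK_SIZE]
              for i in range(0, len(ciphertext), BLOCK_SIZE)]
    # Sectors are independent, so each can go to a different worker.
    with ThreadPoolExecutor() as pool:
        return b"".join(pool.map(toy_decrypt, blocks))
```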

        The third step (recompression) could be sp
      • Compressing video is very easy to do in a parallel manner. The first step is to perform a DCT (or DWT or similar) on each frame. This is embarrassingly parallel, especially for DCT where each macro block (usually an 8x8 pixel square) can be done in parallel. Next some form of quantisation is applied to the result. This, again, can be done in parallel on a per-frame basis. Finally, delta frames are computed for inter-frame compression on codecs that support it (MPEG and friends). Since key frames are u
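(Illustrating the embarrassingly parallel macroblock stage described above: split a frame into 8x8 blocks and transform each independently. The per-block transform here is a trivial sum standing in for a real DCT.)

```python
from concurrent.futures import ThreadPoolExecutor

BLOCK = 8  # macroblock edge length, as in the parent comment

def transform_block(block):
    # Placeholder for a real 8x8 DCT; just sums the block to show
    # that each macroblock is processed independently.
    return sum(sum(row) for row in block)

def split_blocks(frame):
    h, w = len(frame), len(frame[0])
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            yield [row[x:x + BLOCK] for row in frame[y:y + BLOCK]]

def parallel_dct(frame):
    # Every macroblock can be transformed in parallel.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(transform_block, split_blocks(frame)))
```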
  • embracing? (Score:5, Insightful)

    by dekeji ( 784080 ) on Wednesday June 23, 2004 @08:20PM (#9513974)
    Xgrid is proprietary, closed-source software. I think that hardly counts as "embracing" open-source software. Many other parts of the Macintosh platform are proprietary and closed source as well.

    I'm not disputing that Apple released Darwin source code. But before you start cheering, keep in mind that Darwin started out as open source: the CMU Mach kernel and bits and pieces of BSD. And it's not like Apple made a big sacrifice in releasing a kernel that looks and feels like half a dozen other open source kernels.
    • Re:embracing? (Score:2, Insightful)

      by DeifieD ( 725831 )
      Well... At least they are releasing software for Linux... I don't think Photoshop would be open source either, but Adobe releasing it for Linux would be considered embracing.
    • Re:embracing? (Score:3, Informative)

      by babbage ( 61057 )

      Xgrid is proprietary, closed-source software

      Actually, that is completely false []:

      Xcode uses distcc to manage distributed builds. The distcc client manages the setup and distribution of build tasks. The server process (distccd) manages communication with the remote computers hosting the build tasks. The server process runs on both the local, or client, computer and on the remote computer.
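(For reference, a typical distcc invocation; the host names below are hypothetical placeholders for your own build machines.)

```shell
# Tell distcc which machines may receive compile jobs.
export DISTCC_HOSTS="localhost mac1.local mac2.local"
# Run a parallel build with distcc wrapping the compiler.
make -j6 CC=distcc
```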

      As 30 seconds of Googling will tell you, distcc [is] a fast, free distributed C/C++ compiler [].

      As they have done wit

    • Re:embracing? (Score:5, Informative)

      by jurv!s ( 688306 ) on Thursday June 24, 2004 @12:30PM (#9520111) Journal
      But Xgrid uses the BEEP [] protocol for all communication, which is open, and allowed this project to interoperate easily. The closed source part of Xgrid is just a Cocoa GUI that was thrown together with Interface Builder. This made it a lot easier to interoperate than say, the nasty Exchange/Outlook communication combo.

      If Apple breaks this intentionally (meaning not for adding significant, enhanced functionality) in their next release, I will stand with you as an anti-Apple nay-saying zealot and deride them all up and down /.

      -Potentially recovering Mac zealot (it's so hard with WWDC right around the corner :-( )

  • wake up! (Score:2, Insightful)

    So, this is an example of those open standards, and the world not falling apart over it?

    shall I quote from the download page? yes, yes I shall ...


    Several notes on compilation:

    1. If you use this for anything other than testing, you are insane.
    2. The configure script isn't great: it does not check for all compatibility issues and might even fail to run properly without telling you. /Quote

    I'll assume that Pudge is just another Michael in disguise, endlessly posting over hyped BS articles that are ea
  • How about when you play the next Beastie Boys record into your optical drive, it secretly installs an Xgrid client, and *whoosh* you're borged into its Internet cluster?

The world is coming to an end--save your buffers!