
Apple Freezes Snow Leopard APIs

DJRumpy writes in to alert us that Apple's new OS, Snow Leopard, is apparently nearing completion. "Apple this past weekend distributed a new beta of Mac OS X 10.6 Snow Leopard that altered the programming methods used to optimize code for multi-core Macs, telling developers they were the last programming-oriented changes planned ahead of the software's release. ... Apple is said to have informed recipients of Mac OS X 10.6 Snow Leopard build 10A354 that it has simplified the APIs for working with Grand Central, a new architecture that makes it easier for developers to take advantage of Macs with multiple processing cores. This technology works by breaking complex tasks into smaller blocks, which are then dispatched efficiently to a Mac's available cores for faster processing."
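For readers wondering what "breaking tasks into smaller blocks and dispatching them to the available cores" might look like in code, here is a rough, illustrative C sketch against the libdispatch C API (the array and the squaring work are invented; this is not Apple's sample code; build on Snow Leopard with clang -fblocks):

    #include <dispatch/dispatch.h>
    #include <stdio.h>

    #define N 8
    static double results[N];   /* global so the block can write to it directly */

    int main(void) {
        dispatch_queue_t queue =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        /* dispatch_apply hands N independent chunks of work to the global queue,
           which spreads them over whatever cores are free, and returns only once
           every block has finished. */
        dispatch_apply(N, queue, ^(size_t i) {
            results[i] = (double)(i * i);   /* stand-in for a real chunk of work */
        });

        for (size_t i = 0; i < N; i++)
            printf("%zu -> %.0f\n", i, results[i]);
        return 0;
    }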
  • by Anonymous Coward

    Haven't video game programmers been doing it forever, doing some things on the CPU, some on the graphics card?

    And I heard functional languages like Lisp/Haskell are good at these multi-core tasks, is that true?

    • by A.K.A_Magnet ( 860822 ) on Tuesday May 12, 2009 @05:11AM (#27919525) Homepage

      Haven't video game programmers been doing it forever, doing some things on the CPU, some on the graphics card?

      The problem is shared memory, not multi-processor or multi-core itself. Graphics cards have dedicated memory or reserve a chunk of the main memory.

      And I heard functional languages like Lisp/Haskell are good at these multi-core tasks, is that true?

      It is true, because they privilege immutable data structures which are safe to access concurrently.
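      To make the shared-memory point concrete, here's a toy C/pthreads sketch of my own (the counter and names are made up; link with -lpthread). With immutable data there is simply nothing to lock:

        #include <pthread.h>
        #include <stdio.h>

        static long counter = 0;                 /* shared, mutable state */
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        static void *worker(void *arg) {
            (void)arg;
            for (int i = 0; i < 1000000; i++) {
                pthread_mutex_lock(&lock);       /* drop these two calls and the   */
                counter++;                       /* classic "lost update" race     */
                pthread_mutex_unlock(&lock);     /* appears                        */
            }
            return NULL;
        }

        int main(void) {
            pthread_t a, b;
            pthread_create(&a, NULL, worker, NULL);
            pthread_create(&b, NULL, worker, NULL);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            printf("counter = %ld\n", counter);  /* 2000000 only thanks to the mutex */
            return 0;
        }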

      • And I heard functional languages like Lisp/Haskell are good at these multi-core tasks, is that true?

        It is true, because they privilege immutable data structures which are safe to access concurrently.

        Only partly true. Even in pure functional languages like Haskell, the functional-programming dream of automatic parallelization is nowhere near here yet; in theory the compiler could just run a bunch of thunks of code in parallel, or speculatively, or whatever it wants, but in practice the overhead of figuring out which are worth splitting up has doomed all the efforts so far. It does make some kinds of programmer-specified parallelism easier; probably the most interesting experiments in that direction, IMO, are Clojure [clojure.org]'s concurrency primitives (Clojure's a Lisp-derived language with immutable data types, targeting the JVM).

        Lisp, FWIW, doesn't necessarily privilege immutable data structures, and isn't even necessarily used in a functional-programming style; "Lisp" without qualifiers often means Common Lisp, in which it's very common to use mutable data structures and imperative code.

        • by A.K.A_Magnet ( 860822 ) on Tuesday May 12, 2009 @05:37AM (#27919655) Homepage
          I know it was only partially true; I should remember not to be too lazy when posting on /. :).

          Note that I was not talking about automatic parallelization, which is indeed possible only with pure languages (and GHC is experimenting with it); but simply about the fact that it is easier to parallelize an application with immutable data structures, since you need to care a lot less about synchronization. For instance, the Erlang actor model (also available in other languages like Scala on the JVM) still requires the developer to define the tasks to be parallelized, yet immutable data structures make the developer's life a lot easier with respect to concurrent access and usually provide better performance.

          My "It is true" was referring to "functional languages" which do usually privilege immutable data structures, not to Haskell or Lisp specifically (which as you said has many variants with mutable structures focused libraries). As you said, Clojure is itself a Lisp-1 and it does privilege immutable data structures and secure concurrent access with Refs/STM or agents. What is more interesting in the Clojure model (compared to Scala's, since they are often compared even though their differences, as functional languages and Java challengers on the JVM) is that it doesn't allow unsafe practices (all must be immutable except in variables local to a thread, etc).

          Interesting times on the JVM indeed.
          • Yeah that's fair; I kind of quickly read your post (despite it being only one sentence; hey this is Slashdot!) so mistook it for the generic "FP means you get parallelization for free!" pipe dream. :)

            Yeah, I agree that even if the programmer has to specify parallelism, having immutable data structures makes a lot of things easier to think about. The main thing that still seems to be shaking out is to what extent STM will be the magic bullet some people are proposing it to be, and to what extent it can be "good enough" as a concurrency model even in non-functional languages (e.g. a lot of people are pushing STM in C/C++).
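            For the curious, the optimistic "read, compute, commit or retry" idea underneath STM can be sketched even in plain C with a compare-and-swap builtin. This is just my own toy illustration, not a real STM (the balance counter is made up; gcc/clang, link with -lpthread):

            #include <pthread.h>
            #include <stdio.h>

            static long balance = 0;             /* shared value, updated optimistically */

            static void *deposit(void *arg) {
                (void)arg;
                for (int i = 0; i < 100000; i++) {
                    long seen, wanted;
                    do {
                        seen   = balance;        /* read without taking a lock */
                        wanted = seen + 1;       /* compute the new value      */
                        /* commit only if nobody changed balance in the meantime;
                           otherwise loop and retry, which is roughly what an STM
                           runtime does for a whole transaction */
                    } while (!__sync_bool_compare_and_swap(&balance, seen, wanted));
                }
                return NULL;
            }

            int main(void) {
                pthread_t t[4];
                for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, deposit, NULL);
                for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
                printf("balance = %ld (expected 400000)\n", balance);
                return 0;
            }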

        • Even in pure functional languages like Haskell, the functional-programming dream of automatic parallelization is nowhere near here yet; in theory the compiler could just run a bunch of thunks of code in parallel, or speculatively, or whatever it wants, but in practice the overhead of figuring out which are worth splitting up has doomed all the efforts so far.

          Forgive my profound ignorance on the topic, but am I to understand that the determination of which can run in parallel is done a

      • Re: (Score:3, Insightful)

        by Lumpy ( 12016 )

        Only the CRAPPY video cards use any of the main memory. Honestly, with how cheap real video cards are, I can't believe anyone would intentionally use a memory-sharing video card.

        It's like the junk winmodems of yore. DON'T BUY THEM.

        • by AndrewNeo ( 979708 ) on Tuesday May 12, 2009 @08:19AM (#27920779) Homepage
          Unfortunately that's not the issue at hand. You're referring to the video card using system RAM for its own use, but the issue they're talking about (which only occurs in the 32-bit world, not 64-bit, due to the MMU) is that to address the memory on the video card, it has to be mapped into the same 32-bit addressable space as the RAM, which cuts into how much of the RAM you can actually use. At least, that's how I understand it works.
      • Re: (Score:2, Informative)

        by moon3 ( 1530265 )
        Heavily parallelized tasks can also be leveraged by utilizing CUDA and your GPU; even the cheap GPUs of today have some 128-512 SPU cores.

        What do you think the GPU-driven supercomputer buzz is all about?
    • No, they haven't, with few exceptions. Doing multiple things at the same time isn't really the issue here; we're trying to figure out how to effectively split one task between multiple 'workers'. Video games are one of the harder places to apply this technique, because they run in real time and are also constantly responding to user input. Video encoding is the opposite. One of the big problems with multicore is coordinating the various worker threads.

      You could learn a lot by taking the time to re

      • I see a lot of talk about programming data structures, but what if they're tackling this at a much lower level? Taken to a simple extreme description, each core is simply processing a single task at a time for X number of clock cycles or less (although I understand they can process multiple tasks via pipelining or somesuch). There is already a balancing act between the CPU and memory as it has to sync I/O between the two due to differing clock speeds. What if they are doing something similar here?

        By that I mea
        • by DrgnDancer ( 137700 ) on Tuesday May 12, 2009 @07:17AM (#27920213) Homepage

          I'm by no means a multiprocessing expert, but I suspect the problem with your approach is in the overhead. Remember that the hardest part of multiprocessing, as far as the computer is concerned, is making sure that all the right bits of code get run in time to provide their information to the other bits of code that need it. The current model of multi-CPU code (as I understand it) is to have the programmer mark the pieces that are capable of running independently (either because they don't require outside information, or they never run at the same time as other pieces that need the information they access/provide), and tell the program when to spin off these modules as separate threads and where it will have to wait for them to return information.
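          As a hypothetical illustration of that "mark the independent pieces" model (my own invention, nothing to do with what Apple is shipping), an OpenMP-style annotation in C looks something like this; the arrays and the squaring work are made up, and it builds with gcc -fopenmp:

          #include <stdio.h>

          #define N 16

          int main(void) {
              double in[N], out[N];
              for (int i = 0; i < N; i++) in[i] = i;

              /* The pragma is the programmer's promise that these iterations are
                 independent; the runtime spins off the threads and joins them. */
              #pragma omp parallel for
              for (int i = 0; i < N; i++)
                  out[i] = in[i] * in[i];      /* no iteration reads another's result */

              /* implicit join: every thread has finished before we get here */
              for (int i = 0; i < N; i++) printf("%g ", out[i]);
              printf("\n");
              return 0;
          }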

          What you're talking about would require the program to break out small chunks of itself, more or less as it sees fit, whenever it sees an opportunity to save some time by running in parallel. This first requires the program to have some level of analytical capability for its own code (let's say we have two if statements one right after the other: can they be run concurrently, or does the result of the first influence the second? What about two function calls in a row?). The program will have to erect mutex locks around each piece of data it uses too, just to be sure that it doesn't cause deadlocks if it misjudges whether two particular pieces of code can in fact run simultaneously.

          It also seems to me (again, I'm not an expert) that you'd spend a lot of time moving data between CPUs. As I understand it, one of the things you want to avoid in parallel programming is having a thread "move" to a different CPU. This is because all of the data for the thread has to be moved from the cache of the first CPU to the cache of the second, a relatively time-consuming task. Multicore CPUs share level 2 cache I think, which might alleviate this, but the stuff in level 1 still has to be moved around, and if the move is off die, to another CPU entirely, then it doesn't help. In your solution I see a lot of these moves being forced. I also see a lot of "Chunk A and Chunk B provided data to Chunk C. Chunk A ran on CPU1, Chunk B on CPU2, and Chunk C has to run on CPU3, so it has to get the data out of the cache of the other two CPUs".

          Remember that data access isn't a flat speed. L1 is faster than L2, which is much faster than RAM, which is MUCH faster than I/O buses. Anytime data has to pass through RAM to get to a CPU you lose time. With lots of little chunks running around getting processed, the chances of having to move data between CPUs go up a lot. I think you'd lose more time on that than you gain by letting the bits all run on the same CPU.

          • I think I understand what you're saying and it makes sense.

            A = 1
            A = A + 1
            If you ran the first line on Core 1 and the second on Core 2, how would it know that the second line needs to be processed after the first (other than its place in the code itself)?

            I wonder if they are using this parallel processing only for isolated threads then? I thought any modern OS already did this? Does anyone know exactly how they are tweaking the OS to better multitask among cores (in semi-technical layman's terms)?
            • On top of the fact that Core 2 needs to somehow know not to run the instruction until after Core 1 runs its preceding instruction, you're also moving the value of A from Core 1 to Core 2. Normally, when one instruction follows another and the same variable is used, that variable's value is cached in CPU level 1 cache. It's almost instantly accessed. In your example you have to move it between Cores; that means it has to go out to CPU level 2 cache from Core 1's L1, and back into Core 2's L1 so it can be a
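              To put the same dependency problem in loop form (again, just a toy C sketch, not anything Apple is doing), the first loop below could be split across cores, but the second can't, because each iteration needs the previous one's result:

              #include <stdio.h>

              #define N 8

              int main(void) {
                  int a[N], b[N];

                  /* Independent iterations: these could be handed to different cores. */
                  for (int i = 0; i < N; i++)
                      a[i] = i * i;

                  /* Loop-carried dependency: b[i] needs b[i-1], so iteration i can't
                     start until iteration i-1 has finished (and its result has reached
                     this core's cache); this is the A = A + 1 situation above, N times. */
                  b[0] = 1;
                  for (int i = 1; i < N; i++)
                      b[i] = b[i - 1] + a[i];

                  printf("b[%d] = %d\n", N - 1, b[N - 1]);
                  return 0;
              }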

      • by tepples ( 727027 )

        Video games are one of the harder places to try to apply this technique to, because they run in real time and are also constantly responding to user input. Video encoding is the opposite.

        Since when does encoding of live video not need to run in real time? An encoder chain needs to take the (lightly compressed) signal from the camera, add graphics such as the station name and the name of the speaker (if news) or the score (if sports), and compress it with MPEG-2. And it all has to come out of the pipe to viewers at 60 fields per second without heavy latency.

        • I didn't specify live video encoding. That sentence does not make sense if interpreted to be referring to live video encoding. I would be remarkably misinformed to have used live video encoding as an example of something that does not run in real time. Live video encoding is not often encountered in a desktop PC environment, and I would go so far as to say that the majority of video broadcasts are not live.

          I am somewhat confused as to why you're talking about live video encoding. Does this relate to multico

          • I didn't specify live video encoding.

            Your wording gave off the subtext that you thought live video encoding was commercially unimportant. I was just trying to warn you against being so dismissive.

            Live video encoding is not often encountered in a desktop PC environment

            Citation needed [wikipedia.org].

            I would go so far as to say that the majority of video broadcasts are not live.

            And you'd be right, but tell that to my sports fan grandfather or my MSNBC-loving grandmother.

            Most PCs have VGA or DVI-I output abilities, and the conversion to the RCA connectors requires no special electronics.

            Most PCs won't go lower than 480p[1] at 31 kHz horizontal scan rate, and they output RGB component video. SDTVs need the video downsampled to 240p or 480i at 15.7 kHz, and most also need red, green, and blue signals to be multiplexed into co

    • Haven't video game programmers been doing it forever, doing some things on the CPU, some on the graphics card?

      Yeah, and they sync an unknown (but often quite large) number of "cores" (i.e., the shaders, etc. in the GPU) quite easily too.

      Of course, the only reason it's so easy for video game programmers is that raster graphics are one of the easiest things ever to parallelize (since pixels rarely depend on other pixels), and APIs like OpenGL and Direct3D make the parallelism completely transparent. If they had to program each individual pixel pipeline by hand, we'd still be stuck with CPU rendering. The purpose of Gra

    • Re: (Score:2, Informative)

      by mdwh2 ( 535323 )

      Haven't video game programmers been doing it forever, doing some things on the CPU, some on the graphics card?

      Not really - although it's easy to use both the CPU and the GPU, normally this would not be at the same time. What's been going on "forever" is that the CPU would do some stuff, then tell the GPU to do some stuff, then the CPU would carry on.

      What you can do is have a thread for the CPU stuff updating the game world, and then a thread for the renderer, but that's more tricky (as in, at least as diffi

  • G5? (Score:5, Interesting)

    by line-bundle ( 235965 ) on Tuesday May 12, 2009 @04:30AM (#27919361) Homepage Journal

    What is the status of 10.6 on the PowerPC G5?

    • It's working well, thanks for asking...
      • Thanks for Playing (Score:4, Informative)

        by Anonymous Coward on Tuesday May 12, 2009 @07:11AM (#27920167)
        I'm one of the seed testers, and even posting anonymously, I am concerned not to violate Apple's NDA. So, I'll put it like this: I have 2 PPC machines and an Intel machine. I have only been able to get the SL builds to work on the Intel machine due, I'm pretty sure, to no fault of my own.
    • Re:G5? (Score:5, Informative)

      by chabotc ( 22496 ) <chabotc&gmail,com> on Tuesday May 12, 2009 @09:00AM (#27921299) Homepage

      Snow Leopard is going to be the first version of Mac OS X that only runs on Intel Macs, so I'm afraid you're going to be stuck on plain old Leopard.

  • Spread your tiny wings and fly away,
    And take the snow back with you
    Where it came from on that day.
    The one I love forever is untrue,
    And if I could you know that I would
    Fly away with you.

    In a world of good and bad, light and dark, black and white, it remains very hopeful that Apple still sees itself as a beacon of purity. It pushes them to do good things to reinforce their own self-image.

    I can't wait to try this latest OS!

  • by BlueScreenOfTOM ( 939766 ) on Tuesday May 12, 2009 @06:20AM (#27919871)
    Alright guys, I know the advantages (and challenges) of multi-threading. With almost all new processors coming with > 1 core, I can tell there's now a huge desire to start making apps that can take advantage of all cores. But my question is why? One thing I love about my quad-core Q6600 is the fact that I can be doing so many things at once. I can be streaming HD video to my TV while simultaneously playing DOOM, for example. However, when I fire up a multithreaded app that takes all 4 of my cores and I start doing something heavy, like video encoding for example, everything tends to slow down like it did back when I only had one core to play with. Yeah, my encoding gets done a lot faster, but honestly I'd rather it take longer than make my computer difficult to use for any period of time...

    I realize I can throttle the video encoding to a single core, but I'm just using that as an example... if all apps start using all cores, aren't we right back where we started, just going a little faster? I love being able to do so much at once...
    • One thing I love about my quad-core Q6600 is the fact that I can be doing so many things at once. I can be streaming HD video to my TV while simultaneously playing DOOM, for example.

      Doom can run on a Game Boy Advance [idsoftware.com], rendering in software on a 16.8 MHz ARM7 CPU. You could emulate the game and your quad-core wouldn't break a sweat.

      if all apps start using all cores, aren't we right back where we started, just going a little faster?

      That's what developers want: the ability to use all the cores for a task where the user either isn't going to be doing something else (like on a server appliance) or has another device to pass the time (like a GBA to run Doom).

    • Remember, video encoding requires tremendous amounts of CPU power in the encoding process, far more so than audio encoding. That's why, when Pixar renders the images for their movies, they use thousands of Apple Xserve blade servers running in massively parallel fashion to do rendering at reasonable speeds.

      We can make all the Beowulf cluster jokes on this forum, :-) but one reason why Beowulf was developed was the ability to synchronize hundreds to thousands of machines in a massively parallel fashion to speed

    • by AlpineR ( 32307 )

      On a UNIX system (like Mac OS X) you should be able to "nice" the low-priority processes to give them less attention. If I'm running a twelve-hour, max-the-CPU simulation and I want to play a game while I'm waiting, I nice the simulation to a low priority. That way it yields most of the CPU to the game while I'm playing, yet runs at full dual-core speed when I'm not.

      I'm not sure this is actually working in Mac OS X 10.5, though. Since I got my dual-core system, the activity monitors don't seem to show that
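      For what it's worth, a long-running job can also nice itself from inside the code. Here's a toy C sketch of my own (the busy loop just stands in for the simulation) that drops the process to the lowest priority with setpriority(), which is what the shell's nice/renice commands do from the outside:

      #include <sys/resource.h>
      #include <stdio.h>

      int main(void) {
          /* Drop this process to the lowest scheduling priority (nice 19) so an
             interactive app gets the CPU whenever it wants it. */
          if (setpriority(PRIO_PROCESS, 0, 19) != 0)
              perror("setpriority");

          /* stand-in for the twelve-hour, max-the-CPU simulation */
          volatile unsigned long x = 0;
          for (unsigned long i = 0; i < 1000000000UL; i++)
              x += i;

          printf("done (x = %lu)\n", x);
          return 0;
      }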

      • Re: (Score:2, Informative)

        In theory 'nice' or 'renice' would do the right thing. But in most OSes it seems to only affect CPU scheduling. The I/O scheduling is often left unmodified, meaning that a single I/O-bound application may effectively block the hard drive from access by other applications.

        These days, the relatively lower memory and I/O speeds are often the real performance bottlenecks for ordinary applications. So improved I/O scheduling might do more than multiple cores for the perceived performance of a specific system or
    • by jedidiah ( 1196 ) on Tuesday May 12, 2009 @07:23AM (#27920275) Homepage

      Yup. If applications start getting too good at being able to "use the whole machine" again, then that's exactly what they will try to do. The fact that they really can't is a really nice "release valve" at this point. As an end user managing load on my box, I actually like it better when an app is limited to only 100% CPU (IOW, one core).

      • Re: (Score:3, Insightful)

        by DaleGlass ( 1068434 )

        That only works because you have few cores.

        Once we get to the point where a consumer desktop has 32 cores, you're not going to be able to use even half of that CPU by running independent tasks simultaneously. You'll need to have apps that can take advantage of many cores. The more cores you have, the more power a single core application fails to take advantage of.

      • by solios ( 53048 )

        Agreed.

        There are a few applications I prefer either single-cored or with limited memory access. After Effects, for example, is so incredibly poorly behaved on the Mac that I'd rather use version 6 - which can only see around 1.5 gigs of RAM - than 7 or higher, which can see much more. In my experience it's made almost no difference in rendering time - where it DOES make a difference is in whether or not I'm actually able to use my machine for anything else while the render is grinding.

        Of course, you can te

    • by slimjim8094 ( 941042 ) on Tuesday May 12, 2009 @08:11AM (#27920707)

      You ought to be able to set your program to only run on certain processors. I know Windows has this feature (set affinity in task manager) so I assume Linux/Mac does as well.

      I'd recommend putting heavy tasks on your last core or two, and anything you particularly care about on the second core - leave the first for the kernel/etc.
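      On Linux the same thing can be done from inside a program with sched_setaffinity() (or from the shell with taskset); the sketch below pins the calling process to two cores. I don't know of a direct Mac OS X equivalent, so take that part of the comparison with a grain of salt:

      #define _GNU_SOURCE
      #include <sched.h>
      #include <stdio.h>

      int main(void) {
          cpu_set_t set;
          CPU_ZERO(&set);
          CPU_SET(2, &set);   /* from here on, only run on cores 2 and 3 */
          CPU_SET(3, &set);

          if (sched_setaffinity(0, sizeof(set), &set) != 0) {
              perror("sched_setaffinity");
              return 1;
          }

          /* ...kick off the heavy encoding job here... */
          printf("pinned to cores 2 and 3\n");
          return 0;
      }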

    • Re: (Score:3, Informative)

      One area: graphics rendering. And I'm not talking about games, but Lightwave et al., especially when one is rendering a single very large image (say, a billboard). Currently most renderers allow splitting that frame across several machines/cores, where each one renders a smaller block and the larger image is then reassembled. However, not all the rendering engines out there allow the splitting of a single frame. Also, if the render farm is currently being tasked for another project (animation) and you need to

    • I start doing something heavy (...)

      You just answered your own question.

    • Re: (Score:3, Interesting)

      by curunir ( 98273 ) *

      Applications shouldn't be concerned with limiting themselves so that they cannot under any circumstances slow down other applications. It's the job of the OS to provide the tools to prioritize applications according to the desires of the user.

      OS X, by virtue of its Unix underpinnings, should support nice/renice to alter the priorities of processes. One would hope that with additional support for developers to make use of multiple cores, Apple would also provide users with increased ability to easily alter t

  • by Anonymous Coward on Tuesday May 12, 2009 @06:44AM (#27920019)

    I always read it as "Slow Leopard"

  • My biggest problem with this upgrade is that it seems more like a Windows Service Pack than a true Mac OS X upgrade. Are we going to have to pay for "new APIs" and "multi-core processing"?

    How does all this help the average user (i.e. my Mom)? WooHoo! They are building a YouTube app and you can record directly off the screen! Big whoop. You can do that today without too much trouble with third party applications. Is the Mac OS X user interface and built-in apps already so perfect that they can't find things to improve?

    I'm usually a pretty big Mac fan-boy but I just can't seem to get excited about this one. Hell, I'm even thinking (seriously) about ditching my iPhone and getting a Palm Pre. Sigh... how the world is changing. Has Apple lost its mojo?

    • by HogGeek ( 456673 )

      I doubt she will be motivated to...

      I think some of the changes affect the corporate user more than they do the home user.

      From what I've read, the mail, calendar and contacts apps now communicate with MS Exchange (using the ActiveSync technology Apple licensed from MS for use in the iPhone).

      While I'm sure there are other changes, I think those are some of the "bigger" ones that a lot of people have been waiting for, myself included...

    • by dzfoo ( 772245 )

      >> Is the Mac OS X user interface and built-in apps already so perfect that they can't find things to improve?

      I thought that concentrating on performance optimizations and stability was an improvement to the current version.

            -dZ.

    • By marketing it as a completely new version, they're likely to sell more of the expensive hardware they bundle with the OS to people who want to upgrade but think buying the $100 upgrade would be too difficult. Of course, even paying for a $100 upgrade from Leopard doesn't make a whole lot of sense. Myself, I'm still on Tiger with my Mini, which means I can't even try to develop for the iPhone, so I may just end up jumping to Snow Leopard just because, that is unless I can nab a copy of Leopard for much ch
    • Re: (Score:2, Interesting)

      by MightyYar ( 622222 )

      My biggest problem with this upgrade is that it seems more like a Windows Service Pack than a true Mac OS X upgrade.

      I much prefer frequent, incremental updates. The $100 that Apple charges for the OS is peanuts compared to the amount of use it gets.

      Maybe you like the MS upgrade cycle, but look at all the bad press they get for it... you can hardly blame Apple for wishing to avoid that.

    • by jcnnghm ( 538570 )

      I think every major version is a service pack, except Apple charges $150 for it, and changes the API enough that you can't run new software. I wanted to run XCode on my 10.4 laptop, so I had to go buy a 10.5 upgrade, even though it didn't have any new features I actually cared about. I still think it should have been a $30 minor feature pack, not a whole OS.

      I think it's the most annoying part about Apple. They definitely seem to nickel and dime you, especially by not shipping with a full-screen media pl

      • by Corrado ( 64013 )

        Well, with past major versions of Mac OS X we at least got some newfangled toys to play with (the Dock, Spotlight, Spaces, etc.). But with SL, we get APIs and back-end stuff. That may be neat and all, but it doesn't do much for me, immediately.

        Now granted, it will be faster and more stable, which is a good reason to upgrade, but I'm not sure it's a good enough reason to pay $100. Even the "enterprise" features won't do much for the average person. I guess Apple is just using SL to get a foot into the corpora

        • by niteice ( 793961 )
          Snow Leopard is Intel-only, and apparently even encourages developers to target 64-bit primarily (thus leaving out the pre-Core 2 machines).

          To be pedantic, the PPC iMac was discontinued in January of 2006. If the machine is really two years old it will run Snow Leopard fine.
      • Of course, a decent copy of XP Pro costs as much as two of those Mac OS X upgrades combined, and a copy of Vista Ultimate would pay for pretty much every OS X update that has been released.

        Not to mention the fact that in terms of features, the jump from XP to SP3 has been smaller than any of the OS X upgrades.

        XCode never stopped working on an upgrade; you just can't necessarily run the new XCode because of the new APIs that are part of the new operating system... sounds kind of normal to me. Stuff that the

    • Cleanup (Score:5, Interesting)

      by copponex ( 13876 ) on Tuesday May 12, 2009 @10:46AM (#27922937) Homepage

      From what I've read, they are cleaning up the code and optimizing it for the Intel platform. Supposedly it will take up less hard drive space and memory, but I'll believe that when I see it. Even if they fail, I'm glad they attempted this cleanup, even if it just inspires Microsoft to do some similar scrubbing with Windows 8. It's about time someone stopped and said, "Hey, instead of shiny feature 837, can we make sure that our web browser isn't leaking memory like a paper boat?"

      It's not really for your mom - it's so she doesn't call you as often.

      I'm usually a pretty big Mac fan-boy but I just can't seem to get excited about this one. Hell, I'm even thinking (seriously) about ditching my iPhone and getting a Palm Pre. Sigh... how the world is changing. Has Apple lost its mojo?

      I had the same thought. Apple is getting too greedy with their hardware prices, and they continue to screw customers over with their overpriced parts for repair. Plus, the computer world is changing, and they don't seem to understand what's happening.

      Try remotely controlling a Mac with VNC over a cellular broadband connection. It's like sucking a watermelon through a straw. Try creating a virtual network of virtual machines for testing before deployment, which is illegal under Apple's TOS except for their server software. You'll be dragging your toaster into the bathtub by the end of the day.

      Netbooks are evidence that people want computers for convenient access to information, usually located on the internet, and to have something to sync their iPod to. I'm not sure how much longer Apple can charge twice what their competitors are charging and get away with it. And they still have no chance of entering the enterprise market with their hardware costs and licensing restrictions.

      I'm due for a laptop upgrade, and given the choice of a Dell Precision, RGBLED screen, and a dock that supports legacy ports and dual 30" displays, or a slower MacBook Pro with a crappier display for the same price, they're really making the decision for me. I'll continue recommending Macs for friends and family that may call me with technical questions, but if Windows 7 offers the same kind of robustness for half the price, what's the point?

      • Re: (Score:3, Informative)

        by MojoStan ( 776183 )

        Supposedly it will take up less hard drive space and memory, but I'll believe that when I see it.

        I think it's safe to believe the part about less hard drive space, because Apple will save a lot of space with a very simple method. According to AppleInsider [appleinsider.com], Snow Leopard will trim the standard install size by "several gigabytes" (4GB according to Ars Technica [arstechnica.com]) by only installing printer drivers for currently connected printers. Drivers for newly attached printers will install over the network via Software Update, so this works best with an always-on connection.

        Personally, I'm blown away by the fact tha

    • by yabos ( 719499 ) on Tuesday May 12, 2009 @12:06PM (#27924113)
      Your mom will upgrade when new software requires Snow Leopard. Mac developers are pretty quick to adopt new APIs, since they usually make things really easy to do compared to the previous OS (such as Core Animation, Core Audio/Video, etc.).
