
Apple Freezes Snow Leopard APIs

DJRumpy writes in to alert us that Apple's new OS, Snow Leopard, is apparently nearing completion. "Apple this past weekend distributed a new beta of Mac OS X 10.6 Snow Leopard that altered the programming methods used to optimize code for multi-core Macs, telling developers they were the last programming-oriented changes planned ahead of the software's release. ... Apple is said to have informed recipients of Mac OS X 10.6 Snow Leopard build 10A354 that it has simplified the ... APIs for working with Grand Central, a new architecture that makes it easier for developers to take advantage of Macs with multiple processing cores. This technology works by breaking complex tasks into smaller blocks, which are then ... dispatched efficiently to a Mac's available cores for faster processing."
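
For readers who haven't seen this programming model, here is a minimal sketch of the block-dispatch idea the summary describes, written against the libdispatch C API as Apple later documented it (dispatch_apply fans iterations out across a concurrent queue); the squaring loop is just a stand-in workload:

    #include <dispatch/dispatch.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        const size_t n = 8;
        double *results = malloc(n * sizeof *results);

        /* A concurrent global queue; the runtime decides how many cores to use. */
        dispatch_queue_t q = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        /* Submit n small blocks of work and wait for all of them; GCD spreads
           the iterations across whatever cores are available. */
        dispatch_apply(n, q, ^(size_t i) {
            results[i] = (double)i * i;   /* stand-in for a "smaller block" of work */
        });

        for (size_t i = 0; i < n; i++)
            printf("%zu -> %.0f\n", i, results[i]);
        free(results);
        return 0;
    }

Compile with clang -fblocks on Mac OS X 10.6 or later, where libdispatch ships as part of libSystem.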
  • by Fex303 ( 557896 ) on Tuesday May 12, 2009 @05:51AM (#27919427)

    Why... is there... so much... punctuation in the summary?

    Because the summary is directly quoting the article and using ellipses [wikipedia.org] to indicate that certain parts of the quotation have been omitted. Usually there would be a space on either side of the ellipsis when this is done, but this is /. so I'll let this one slide.

  • Re:G5? (Score:1, Informative)

    by Anonymous Coward on Tuesday May 12, 2009 @05:51AM (#27919429)

    As I understand it, there is no 10.6 on PPC.

  • Re:G5? (Score:1, Informative)

    by Anonymous Coward on Tuesday May 12, 2009 @05:53AM (#27919447)

    Rumour has it that the betas are Intel only. No official word from Apple.

  • by noundi ( 1044080 ) on Tuesday May 12, 2009 @06:04AM (#27919483)
    Because it's a quote. You see, there are rules to any language, and one of them in English concerns quoting. When you quote a source, the text you write must match the source word for word. When the quote contains text that's unnecessary to the topic at hand, you cut that part out and replace it with three periods. This indicates that a piece of the original quote is missing, in case, say, someone questions the quote. So you see, quoting is not interpreting, and must, at all times, match the source word for word.

    Turn to side B for the next lesson.
  • by A.K.A_Magnet ( 860822 ) on Tuesday May 12, 2009 @06:11AM (#27919525) Homepage

    Haven't video game programmers been doing it forever, doing some things on the CPU, some on the graphics card?

    The problem is shared memory, not multiple processors or cores as such. Graphics cards have dedicated memory or reserve a chunk of the main memory.

    And I heard functional languages like Lisp/Haskell are good at these multi-core tasks, is that true?

    It is true, because they favor immutable data structures, which are safe to access concurrently.
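
    A small illustration of the point, in plain C with pthreads rather than Lisp or Haskell (the names here are invented): data that is never written after initialization can be read from any number of threads with no locking at all, which is exactly the property immutable structures give you for free.

        #include <pthread.h>
        #include <stdio.h>

        /* Shared and never modified after initialization: safe to read from any
           number of threads without locks. Mutating it would require a mutex. */
        static const int table[4] = { 2, 3, 5, 7 };

        static void *worker(void *arg) {
            long id = (long)arg;
            long sum = 0;
            for (int i = 0; i < 4; i++)
                sum += table[i] * id;      /* read-only access, no lock needed */
            printf("thread %ld: sum %ld\n", id, sum);
            return NULL;
        }

        int main(void) {
            pthread_t t[4];
            for (long i = 0; i < 4; i++)
                pthread_create(&t[i], NULL, worker, (void *)i);
            for (int i = 0; i < 4; i++)
                pthread_join(t[i], NULL);
            return 0;
        }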

  • by jbolden ( 176878 ) on Tuesday May 12, 2009 @07:17AM (#27919851) Homepage

    No, an ellipsis is not a change to the text but a deletion from it.

  • by moon3 ( 1530265 ) on Tuesday May 12, 2009 @08:09AM (#27920155)
    Heavily parallelized tasks can also be offloaded to the GPU with CUDA; even the cheap GPUs of today have some 128-512 stream processor cores.

    What do you think the GPU-driven supercomputer buzz is all about?
  • Thanks for Playing (Score:4, Informative)

    by Anonymous Coward on Tuesday May 12, 2009 @08:11AM (#27920167)
    I'm one of the seed testers, and even posting anonymously, I am concerned not to violate Apple's NDA. So, I'll put it like this: I have 2 PPC machines and an Intel machine. I have only been able to get the SL builds to work on the Intel machine due, I'm pretty sure, to no fault of my own.
  • by DrgnDancer ( 137700 ) on Tuesday May 12, 2009 @08:17AM (#27920213) Homepage

    I'm by no means a multiprocessing expert, but I suspect the problem with your approach is the overhead. Remember that the hardest part of multiprocessing, as far as the computer is concerned, is making sure that all the right bits of code get run in time to provide their information to the other bits of code that need it. The current model of multi-CPU code (as I understand it) is to have the programmer mark the pieces that are capable of running independently (either because they don't require outside information, or because they never run at the same time as other pieces that need the information they access/provide), and to tell the program when to spin off these modules as separate threads and where it will have to wait for them to return information.

    What you're talking about would require the program to break out small chunks of itself, more or less as it sees fit, whenever it sees an opportunity to save some time by running in parallel. This first requires the program to have some level of analytical capability for its own code (say we have two if statements, one right after the other: can they be run concurrently, or does the result of the first influence the second? What about two function calls in a row?). The program would also have to erect mutex locks around each piece of data it uses, just to stay safe if it misjudges whether two particular pieces of code can in fact run simultaneously.

    It also seems to me (again, I'm not an expert) that you'd spend a lot of time moving data between CPUs. As I understand it, one of the things you want to avoid in parallel programming is having a thread "move" to a different CPU, because all of the data for the thread has to be moved from the cache of the first CPU to the cache of the second, which is relatively time consuming. Multicore CPUs share level 2 cache, I think, which might alleviate this, but the stuff in level 1 still has to be moved around, and if the move is off-die, to another CPU entirely, it doesn't help at all. In your solution I see a lot of these moves being forced. I also see a lot of "Chunk A and Chunk B provided data to Chunk C. Chunk A ran on CPU1, Chunk B on CPU2, and Chunk C has to run on CPU3, so it has to pull the data out of the caches of the other two CPUs."

    Remember that data access isn't a flat speed: L1 is faster than L2, which is much faster than RAM, which is MUCH faster than the I/O buses. Any time data has to pass through RAM to get to a CPU, you lose time. With lots of little chunks running around getting processed, the chances of having to move data between CPUs go up a lot. I think you'd lose more time on that than you'd gain by letting the bits all run on the same CPU.
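
    A rough pthreads sketch of the current model described above, with made-up tasks: the programmer, not the program, decides which pieces are independent, spins them off as threads, and explicitly waits for both before the dependent step runs.

        #include <pthread.h>
        #include <stdio.h>

        /* Two pieces of work the programmer has decided are independent. */
        static void *task_a(void *arg) { *(long *)arg = 21; return NULL; }
        static void *task_b(void *arg) { *(long *)arg = 2;  return NULL; }

        int main(void) {
            long a = 0, b = 0;
            pthread_t ta, tb;

            /* Spin off the independent pieces as separate threads... */
            pthread_create(&ta, NULL, task_a, &a);
            pthread_create(&tb, NULL, task_b, &b);

            /* ...and explicitly wait for both before the dependent piece
               ("Chunk C") is allowed to use their results. */
            pthread_join(ta, NULL);
            pthread_join(tb, NULL);

            printf("combined result: %ld\n", a * b);
            return 0;
        }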

  • by AndrewNeo ( 979708 ) on Tuesday May 12, 2009 @09:19AM (#27920779) Homepage
    Unfortunately that's not the issue at hand. You're referring to the video card using system RAM for its own purposes, but the issue they're talking about (which only occurs in the 32-bit world, not 64-bit, due to the MMU) is that to address the memory on the video card, it has to be mapped into the same 32-bit address space as the RAM, which cuts into how much of the RAM can actually be used (e.g., with 4GB of RAM and a 1GB video card, roughly a gigabyte of RAM ends up unaddressable). At least, that's how I understand it works.
  • by mdwh2 ( 535323 ) on Tuesday May 12, 2009 @09:24AM (#27920833) Journal

    Haven't video game programmers been doing it forever, doing some things on the CPU, some on the graphics card?

    Not really - although it's easy to use both the CPU and the GPU, normally this would not be at the same time. What's been going on "forever" is that the CPU would do some stuff, then tell the GPU to do some stuff, then the CPU would carry on.

    What you can do is have a thread for the CPU stuff updating the game world, and then a thread for the renderer, but that's more tricky (as in, at least as difficult as any CPU multiprocessing), and hasn't been done by all games "forever". (There are also a few GPU functions that can be run in parallel, but these can be tricky to do right.)

    The main area of multiprocessing is on the GPU itself - the reason that's easy is because graphics rendering is an embarrassingly parallel problem (plus the graphics card/libraries will do all the setting up of the threads for you - it's harder to do that for CPUs, because the needs for CPU SMP vary for each program).
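
    A toy pthreads sketch of the "one thread updating the game world, one thread rendering" split mentioned above (the names and frame counts are invented): a mutex guards the shared state, the render loop grabs a snapshot under the lock, and everything else happens without it.

        #include <pthread.h>
        #include <stdio.h>
        #include <unistd.h>

        /* Shared game state, guarded so the update and render threads
           never touch it at the same time. */
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        static int world_frame = 0;
        static int running = 1;

        static void *update_thread(void *arg) {
            (void)arg;
            for (int i = 0; i < 5; i++) {
                pthread_mutex_lock(&lock);
                world_frame++;              /* "CPU stuff updating the game world" */
                pthread_mutex_unlock(&lock);
                usleep(10000);
            }
            pthread_mutex_lock(&lock);
            running = 0;
            pthread_mutex_unlock(&lock);
            return NULL;
        }

        int main(void) {
            pthread_t updater;
            pthread_create(&updater, NULL, update_thread, NULL);

            int done = 0;
            while (!done) {
                pthread_mutex_lock(&lock);
                int snapshot = world_frame;     /* copy the state under the lock... */
                done = !running;
                pthread_mutex_unlock(&lock);
                printf("render frame %d\n", snapshot);  /* ...then "draw" without it */
                usleep(5000);
            }
            pthread_join(updater, NULL);
            return 0;
        }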

  • by ducomputergeek ( 595742 ) on Tuesday May 12, 2009 @09:33AM (#27920963)

    One area: graphics rendering. And I'm not talking about games, but Lightwave et al., especially when one is rendering a single very large image (say, a billboard). Currently most renderers allow splitting that frame across several machines/cores, where each one renders a smaller block and the larger image is then reassembled. However, not all of the rendering engines out there allow splitting a single frame. Also, if the render farm is currently tasked with another project (animation) and you need to render, I could see where four cores acting as one, with all the available RAM, would be a fair bit faster than splitting the job into separate tasks and rendering on each core with limited RAM.
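
    The tile-splitting idea maps quite naturally onto the block-dispatch model in the summary; here is a hypothetical sketch using GCD's dispatch_apply to hand one horizontal strip of a frame to each available core (shade() is a stand-in for the real renderer):

        #include <dispatch/dispatch.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define WIDTH  1024
        #define HEIGHT 1024
        #define STRIPS 8

        /* Stand-in for a real renderer: shade one pixel. */
        static unsigned char shade(size_t x, size_t y) {
            return (unsigned char)((x ^ y) & 0xff);
        }

        int main(void) {
            unsigned char *image = malloc((size_t)WIDTH * HEIGHT);

            /* Split the frame into horizontal strips and let the runtime render
               them on whatever cores are free, one strip per iteration. */
            dispatch_apply(STRIPS,
                           dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                           ^(size_t strip) {
                size_t rows = HEIGHT / STRIPS;
                for (size_t y = strip * rows; y < (strip + 1) * rows; y++)
                    for (size_t x = 0; x < WIDTH; x++)
                        image[y * WIDTH + x] = shade(x, y);
            });

            printf("first pixel: %d\n", image[0]);
            free(image);
            return 0;
        }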

  • Re:G5? (Score:5, Informative)

    by chabotc ( 22496 ) <chabotc AT gmail DOT com> on Tuesday May 12, 2009 @10:00AM (#27921299) Homepage

    Snow Leopard is going to be the first version of Mac OS X that only runs on Intel Macs, so I'm afraid you're going to be stuck on plain old Leopard.

  • by MightyYar ( 622222 ) on Tuesday May 12, 2009 @10:25AM (#27921669)

    Erm, so what is this Windows XP installation that I have been using since XP Service Pack 1 that I have *incrementally upgraded* through to Service Pack 3 with all the additional Microsoft updates then?

    Apple does "updates" as well.

    You can't deny that the move from XP to Vista is a big one. SP1 to SP2 may have affected some business users or home power users, but most users didn't really notice a difference... the overall XP experience was mostly unchanged. SP3 was even more slight. Apple updates tend to add marketable features. For instance, Leopard added Time Machine and Spaces along with the service-pack style under-the-hood stuff.

    but I wish you Apple fanbois would occasionally go read a technical book or something so that you can at least have some degree of intelligent conversation with those of us who do.

    Considering that I run XP, Ubuntu, and Apple stuff, I think you might be barking up the wrong tree. I suspect you don't know as much about Apple's stuff as you think you do, especially if you think the changes from 10.0 to 10.5 are anything like the changes XP went through over the same period.

    Ubuntu has a very aggressive update schedule, but the upgrades have not been as seamless for me as the Apple updates. In particular, every upgrade seems to step on my GRUB configuration. Otherwise I'm quite happy with the frequency of their updates.

  • by UnknowingFool ( 672806 ) on Tuesday May 12, 2009 @12:19PM (#27923403)

    Not really, PCs had disk drives for many more years. It was only when DVD writers became standards did it stop appearing in models.

    I would argue that floppy drives were discontinued once USB flash drives became cheap. By the way, many motherboard manufacturers and PC makers still include PS/2 connections even though most keyboards and mice are USB these days. Backwards compatibility is hard to break, even when the technology is obsolete.

  • Re:Nice (Score:2, Informative)

    by Per Cederberg ( 680752 ) on Tuesday May 12, 2009 @12:29PM (#27923571)
    In theory 'nice' or 'renice' would do the right thing. But in most OSes it seems to affect only CPU scheduling. I/O scheduling is often left unmodified, meaning that a single I/O-bound application may effectively block other applications from accessing the hard drive.

    These days, the relatively low memory and I/O speeds are often the real performance bottlenecks for ordinary applications. So improved I/O scheduling might do more than multiple cores for the perceived performance of a given system or workload.

    Since everyone always refers to BeOS in these types of discussions, I guess CPU and I/O scheduling must have been among the things it got right.
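
    For reference, a minimal C sketch of what "nicing yourself" amounts to: the POSIX nice() call lowers only the CPU scheduling priority of the calling process, and on most systems the I/O scheduler goes on treating the process exactly as before (the increment of 10 is arbitrary).

        #include <unistd.h>
        #include <stdio.h>
        #include <errno.h>

        int main(void) {
            /* Lower this process's CPU priority by 10 nice levels. This affects
               CPU scheduling only; I/O priority is typically left untouched. */
            errno = 0;
            int level = nice(10);
            if (level == -1 && errno != 0) {
                perror("nice");
                return 1;
            }
            printf("now running at nice level %d\n", level);

            /* ...CPU- and I/O-heavy work would follow here... */
            return 0;
        }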
  • by DECS ( 891519 ) on Tuesday May 12, 2009 @03:57PM (#27926935) Homepage Journal

    Apple's solution was to enable Remote DVD sharing, so that the "BIOS" (EFI) of the disc-less MacBook Air can install its OS from scratch via the DVD drive of another computer on the local network.

    But yes, a generic PC would have a problem installing Windows without a local DVD drive, because generic PCs have a completely retarded, ancient BIOS firmware that rarely offers any functional network boot support, and Windows makes '70s-era assumptions about what CP/M drive letters it is installing on.

  • by DECS ( 891519 ) on Tuesday May 12, 2009 @05:01PM (#27928137) Homepage Journal

    Since it shipped in 2001 - 8 FREAKING YEARS AGO - WinXP has gotten three SP releases. Microsoft's SPs don't often add significant new features; they fix broken things. Although sometimes things are so broken (such as USB, or firewall/security) that an SP appears to "add new features".

    Apple doesn't call it a service pack when they release a minor update to Mac OS X, but they deliver these far more often than Microsoft. Apple is gearing up to deliver its *seventh* significant free update to Leopard!

    Ten Myths of Leopard: 2 It's Only a Service Pack! [roughlydrafted.com]

    Since 2001, Apple has shipped 40 free updates to Mac OS X at regular intervals, compared to the three SPs you outlined for XP.

    There's no way to dance your way out of that corner. Apple has consistently out-delivered Microsoft across the board, in both paid upgrades with major new features (six major reference releases this decade, compared to Microsoft's three desktop OS releases: Win 5.1 (XP), 6 (Vista), and 6.1 ("7")) and 40 minor free releases compared to Microsoft's 5 SPs.

  • by DECS ( 891519 ) on Tuesday May 12, 2009 @05:07PM (#27928217) Homepage Journal

    This is wrong:

    "encourages developers to target 64-bit primarily (thus leaving out the pre-Core 2 machines)"

    On Windows, targeting 64-bit might leave out 32-bit PCs, but Apple's Universal Binary architecture makes it easy to compile applications that support both 32-bit and 64-bit hardware in the same application package, and 64-bit Macs running OS X can run both natively. Windows requires the WOW64 emulation layer to run 32-bit EXEs on the separate 64-bit versions of XP/Vista, which is part of the reason only a minority of Windows users have moved to 64-bit.

    Road to Mac OS X Snow Leopard: the future of 64-bit apps [roughlydrafted.com]
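
    As an illustration of the universal binary point (file names and output here are only illustrative): one source file compiled with Apple's multi-arch driver flags produces a fat Mach-O, and the loader picks the 32-bit or 64-bit slice at launch with no WOW-style layer in between.

        /* fat.c -- build both slices in one step with Apple's compiler driver:
         *     gcc -arch i386 -arch x86_64 fat.c -o fat
         *     file fat   ->  Mach-O universal binary with 2 architectures
         */
        #include <stdio.h>

        int main(void) {
            printf("pointer size in this slice: %zu bits\n", sizeof(void *) * 8);
            return 0;
        }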

  • by Anonymous Coward on Tuesday May 12, 2009 @06:06PM (#27929071)

    Apple's solution was to enable Remote DVD sharing, so that the "BIOS" (EFI) of the disc-less MacBook Air can install its OS from scratch via the DVD drive of another computer on the local network.

    Note that installing OS X takes about five and a half hours [arstechnica.com] using Remote Install and 802.11n. I don't want to imagine how long it would take if your base station is only 802.11b/g.

    This would be simpler over ethernet or FireWire, but the MacBook Air has neither. Apple sells a USB ethernet dongle [apple.com] for $29 and a 7-foot ethernet cable [apple.com] for $15.

    But yes, a generic PC would have a problem installing Windows without a local DVD drive, because generic PCs have a completely retarded, ancient BIOS firmware that rarely offers any functional network boot support

    Name a single Windows PC released after the MacBook Air that does not support network boot and Windows network installation.

    and Windows makes '70s-era assumptions about what CP/M drive letters it is installing on.

    You're an Apple-worshipping douchebag that hasn't installed Windows in at least 10 years.

  • Re:Cleanup (Score:3, Informative)

    by MojoStan ( 776183 ) on Tuesday May 12, 2009 @06:41PM (#27929587)

    Supposedly it will take up less hard drive space and memory, but I'll believe that when I see it.

    I think it's safe to believe the part about less hard drive space, because Apple will save a lot of space with a very simple method. According to AppleInsider [appleinsider.com], Snow Leopard will trim the standard install size by "several gigabytes" (4GB according to Ars Technica [arstechnica.com]) by only installing printer drivers for currently connected printers. Drivers for newly attached printers will be fetched over the network via Software Update, so this works best with an always-on connection.

    Personally, I'm blown away by the fact that printer drivers alone take up anything close to a gigabyte, let alone 4GB.

    Even if they fail, I'm glad they attempted this cleanup, even if it just inspires Microsoft to do some similar scrubbing with Windows 8.

    I think netbooks have done enough to "inspire" MS (I prefer the word "panic") to scrub their OS.

  • Version 7.2 - download it, it's free from evil Apple, and even works with 10.4 - and it's been available for almost 2 years now

    Oh, and it looks like PRO is gone with Snow Leopard - more for you to whine about.
