How Snow Leopard Cut ObjC Launch Time In Half
MBCook writes "Greg Parker has an excellent technical article on his blog about the changes to the dynamic linker (dyld) for Objective-C that Snow Leopard uses to cut launch time in half and cut about 1/2 MB of memory per application. 'In theory, a shared library could be different every time your program is run. In practice, you get the same version of the shared libraries almost every time you run, and so does every other process on the system. The system takes advantage of this by building the dyld shared cache. The shared cache contains a copy of many system libraries, with most of dyld's linking and loading work done in advance. Every process can then share that shared cache, saving memory and launch time.' He also has a post on the new thread-local garbage collection that Snow Leopard uses for Objective-C."
I thought this was how shared libs always worked (Score:3, Interesting)
I take it that, for a while now, shared library machine code has had to be patched in memory after it's loaded, thus preventing easy sharing among processes and causing the patched pages to need their own space in the swap file.
Sounds like this latest improvement brings things back to the way they were, by effectively writing the patched version back to disk so that it can be mapped read-only as before and doesn't have to be patched every time the library is loaded into a process. It's odd, because I thought the OS already did this several versions ago with prebinding [wikipedia.org].
Re:I thought this was how shared libs always worked (Score:2, Informative)
The shared libs are shared. What was not shared before is the linking table that must be built for accessing that shared code. prelink precalculates that table, and this Apple feature does more or less the same.
Re: (Score:1, Interesting)
I'm not so sure about that. Typically, code in shared libraries is re-entrant and code pages are loaded once, then mapped into the address space of the process using them. I don't know of any modern OS that wastefully makes a copy of the library's code for each process.
Re: (Score:1)
I don't know about other platforms, but I wouldn't be surprised if they do something to share the memory as well.
dyld (Score:5, Funny)
dyld - noun. A reminder that regardless of age, you'll always have an adolescent sense of humor.
Re: (Score:1)
Oh...
That's what she said.
Common Sense Shared Library (Score:1, Troll)
I do not wish to be a poo-poo, but since dynamic libraries and shared libraries have been around for just about forever, and even a second-year CS major would immediately notice this could be done, why is it such big news now?
The first thing I would have done is build a cache for the library system. Linux has one; why not the Mac?
So certainly I congratulate the Mac community. But wow, DUH, a cache for the linkage editor. :-)
-Hack
Re: (Score:3, Interesting)
Methinks you (and many other readers) are mistaking this feature for more traditional static dyld caching.
This enhancement is actually about caching a runtime computation for Objective-C purposes. In practice, as the linked article indicates, this computation is consistent most of the time. In some cases it is not. So to handle the general and most common case, these computations (selector uniquing) are cached and used across different processes.
So the fair question is: does Linux cache selector uniquing?
Re: (Score:1)
Re:Common Sense Shared Library (Score:4, Insightful)
Does Linux need selector uniquing if it doesn't use Objective-C?
No, it doesn't. Since the average executable on Linux is static code linked to dynamic libraries made up of static code, you get your "selector uniquing" at compile time: you don't get a method selector description; instead you get a pre-calculated, already-unique address of the method or function.
To me this sounds like an inefficiency in Objective-C, one that made it less efficient than C++ (the other OO flavour of C), and that inefficiency has now been improved somewhat.
It is a tradeoff. You get to worry about the performance of shared library selector uniquing, but you get all the benefits of a dynamic language and runtime. In practice such inefficiencies matter most in cases where you are very constrained for resources - e.g. on a phone, as hinted in TFA. I doubt that, in the context of the rest of the performance and efficiency improvements in Snow Leopard and on a reasonably modern computer, the 1/10 of a second or the few megabytes of memory saved matter all that much.
Re: (Score:1)
Re: (Score:2)
The article on Ars Technica about Snow Leopard goes into some detail about the advantages of Obj-C being a dynamic language... primarily due to the new inclusion of closures ("blocks"): functions that can be assigned to variables, so that you can pass a function to another function along with its arguments. A small sketch follows below.
This doesn't necessarily make for a better-performing language, but it does make for an easier, more efficient, and less buggy one.
It's still likely a personal coding preference of course.
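To make the blocks point concrete, here's a minimal sketch. The variable names and the toy sorting example are mine, not from the Ars Technica article; sortedArrayUsingComparator: is one of the block-taking APIs that appeared in 10.6.

    #import <Foundation/Foundation.h>

    int main(void) {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

        // A block literal: an anonymous function assigned to a variable.
        int (^square)(int) = ^(int x) { return x * x; };
        NSLog(@"%d", square(7));   // prints 49

        // Passing a block to an API that takes one (new in 10.6):
        NSArray *names = [NSArray arrayWithObjects:@"pear", @"fig", @"apple", nil];
        NSArray *sorted = [names sortedArrayUsingComparator:
            ^NSComparisonResult(id a, id b) {
                return [(NSString *)a compare:(NSString *)b];
            }];
        NSLog(@"%@", sorted);

        [pool drain];
        return 0;
    }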
What? (Score:2)
Also, so what if they're "different languages"? If they were the "same language" there'd be nothing to compare. Do you go around comparing your right eye to your right eye?
Apple made a rod for their own back with Obj-C (Score:1, Insightful)
Perhaps Obj-C has a few nice features, but personally I don't see it. If they'd stuck to C or C++ like every other version of Unix, then this would never have been an issue in the first place. Plus, a lot more people would have been able to cross-code for OS X without having to learn an obscure OO version of C which never caught on in the wider IT world and is still used on practically no other system.
Re: (Score:1, Insightful)
Objective-C simplifies many design patterns. Some have language-level support at the core. Objective-C is a very simple, elegant language. It's not hard to learn at all. In fact, compared to C++ it's downright plain. Personally, I don't want to waste my time and brain cells learning all the intricate, ass-chapping C++ quirks. I'd be better off devoting those brain cells to Haskell.
Re:Apple made a rod for their own back with Obj-C (Score:4, Insightful)
Right, because obviously the ultimate evolution of computer languages for all time is C and C++. There's never any need to further innovate that technology whatsoever.
Are you @#$@ kidding? It wasn't that funny.
I take issue with the assertion that nobody ever caught on with it. GNUstep? NeXT has been around for something like 15 years in industry now. EDS and others used it. Ross Perot was so impressed he invested in it and became a director at NeXT. It has a very feature-rich set of frameworks associated with it, depending on your OS deployment. The only thing that sucks is Apple dropping OPENSTEP / Obj-C for Windows. But Steve didn't care about the enterprise market anymore at the time, and it might have eroded some Mac hardware sales, and you couldn't very well charge a license for it. (I disagree, I think you could and can)
Re: (Score:2)
Re: (Score:2)
Considering what Apple did to WebObjects, my guess is that not even charging for it would have changed that decision.
Re: (Score:2)
What in God's name are you talking about?
Re: (Score:2)
What in God's name are you talking about?
Back when Apple's acquisition of NeXT was first announced, Apple had indicated that they'd continue to support OPENSTEP Enterprise (the Windows implementation of OpenStep). But it was killed off pretty quickly.
Re: (Score:2)
Thank you. I understand. As many times as I read the sentence I kept interpreting it as "dropping OPENSTEP / Obj-C for [i.e. in favor of] Windows." Was that before or after M$ dropped a bunch of dollar bills on Apple?
Re: (Score:2)
ObjC is a modern, rich platform (Score:3, Interesting)
Perhaps Obj-C has a few nice features but personally I don't see it. If they'd stuck to C or C++ like every other version of Unix then this would never have been an issue in the first place.
But you can use either of those perfectly well mixed with ObjC calls.
ObjC is a relatively small set of additions to standard-C, so it really doesn't take that long to pick up the syntax changes if you've encountered C before, while at the same time it allows for some very nice dynamic behavior and things like introspecti
Re: (Score:2)
Re: (Score:2)
Yeah, Viol8. Quit slacking off, roll up your sleeves, and get going. You're behind the curve.
Re: (Score:2)
Re: (Score:2)
Obviously, the answer is "no", and obviously asshole moderators abound.
Re: (Score:2)
"No one can be...told...what ObjC is. You have to experience it for yourself."
Re: (Score:2)
Obviously the answer is "no", and obviously asshole moderators abound.
I don't get it. (Score:2, Insightful)
Why don't they (any OS) just add something onto the generic filesystem caching layer to keep executable bits in RAM as long as the input files stay the same? If it was done that way you could theoretically reuse it for interpreted code as well.
Kernel (Score:2)
They probably just rolled in the 2.6.31 kernel.
Here comes update_prebinding fashion again (Score:2)
Having used OS X on the Mac since 10.2.8, I haven't seen a tool as abused as "update_prebinding", even though it is a very risky process on pre-10.5 systems, since it deals with actual binary headers.
Also, thanks to uninformed IT blogs and the like, people always considered prebinding something that would go away in the next release, as if Apple were really stupid to do such a thing. They basically misunderstood the flexibility added to the prebinding scheme, whereby tools without prebinding (or with broken prebinding) will continue to run.
Anyway, want to see
Re: (Score:1, Informative)
Or the GAC [wikipedia.org]
Re: (Score:1)
And reintroduced dll hell as well?
Re: (Score:2, Informative)
No, we've had a variant, if lesser, hell for years, which has led to a series of cargo-cult-like maintenance procedures for Mac OS.
Re:I've heard that before.... (Score:4, Funny)
Well okay then, Apple were the ones who "popularised" it! ("Well I hadn't heard about Superfetch, but I heard about Apple doing it first, therefore, Apple did it first")
Or um ... they "integrated" it better. Yeah, that's it.
Re:I've heard that before.... (Score:4, Interesting)
Re: (Score:3, Informative)
It's nothing like Superfetch. Superfetch preloads applications into system memory [microsoft.com], and this shared cache doesn't do that; instead, from what I understand, it performs in advance some of the work the linker would do at load time.
The whole dyld sounds a lot like some of the basic features of the .NET runtime...
Or maybe some of the features in this advanced futuristic os:
http://blogs.technet.com/askperf/archive/2008/02/06/ws2008-dynamic-link-library-loader-and-address-space-load-randomization.aspx [technet.com]
Re: (Score:2)
dyld (the dynamic loader) existed before Snow Leopard. It is used extensively in all Mac OS X versions, since it is at the base of the system. Similar loaders also appear in BSD and Linux under slightly different names. This just explains how they found a way to do caching and preloading better than previously. It's like Microsoft finding a way to automagically load all necessary DLLs, correctly pick the right DLL amongst several different versions of the same DLL for a program, and preload them befor
Re: (Score:1)
I understand, and Apple did a great job with it. It just really disturbs me when Apple (and their ilk) keep rehashing work previously completed on other platforms and claiming that they somehow invented it. I have yet to see a feature that's really *new* in Snow Leopard, yet I can't stop hearing about how many amazing new technologies Apple has created.
Apple did an intelligent new optimization of their dynamic loader on their platform. This is a good thing. They did not invent the concept of a dynamic loader.
Re: (Score:2)
Dude, I think you're hearing things that people aren't saying. The article describes how apple has improved their dynamic loading. The author doesn't claim they invented it. Apple doesn't claim they invented it. Nobody said anything about amazing new technologies Apple created.
Doesn't sound like this is loading apps. (Score:2, Interesting)
Sounds like they've just updated their dynamic (shared) library loader to be able to handle Objective C (aka Cocoa) instead of just plain C, and to be a little smarter about keeping track of what it's already got going on, so it doesn't duplicate things.
As a long-time UNIX and Linux (and other more esoteric OSes) geek, this alone doesn't impress me too much. The idea that they went through the whole OS and worked to get little efficiency/performance gains like this all over the place impresses me a little
Re:Doesn't sound like this is loading apps. (Score:5, Informative)
Re: (Score:2)
Good point. I should have said "as used in Cocoa" or something to be clearer. I'm not sure whether there are people out there writing Objective-C apps for the Mac without Cocoa, though. I guess there's always someone who won't use the nifty library and shortcuts and all that, because they're hardcore, efficiency nuts, or just masochists...
GNUstep Is Not Cocoa (Score:2)
I'm not sure whether there are people out there writing Objective-C apps for the Mac without Cocoa, though. I guess there's always someone who won't use the nifty library and shortcuts and all that, because they're hardcore, efficiency nuts, or just masochists...
Or they use only a subset of Cocoa because they plan to port the app to GNU/Linux, *BSD, and Windows using GNUstep, an OpenStep-compatible toolkit that implements only some of Cocoa.
Re: (Score:2)
GNUstep FTW indeed! And thanks, because I think you've just come up with an answer to something I was pondering the other day - what Adobe will do for Creative Suite 5, since they want to go 64-bit on the Mac, and that can't be done using the transitional Carbon library.
They're going to have to totally rewrite it in Cocoa (which I think is frankly a good thing, since the current codebase probably dates back to the early-to-mid 1990s, and has just had more and more crap glued onto it over the years), but I
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re:Doesn't sound like this is loading apps. (Score:4, Interesting)
Re:I've heard that before.... (Score:5, Informative)
Moderators, please mod the parent down -- it completely misses the point.
Objective-C selector uniquing caching is NOT the same as Windows Superfetch.
Objective-C uses a two-phase dispatch for method calls. When you see a call in the Objective-C source code that looks like [myObject init], the dispatch system first resolves the method name to a selector, then looks up the function pointer that the receiving class's dispatch table associates with that selector, and finally calls through that pointer.
The problem arises in the method dispatch table when you have multiple methods named "init" -- which is very common. When an application is loaded the dynamic loader ("dyld") needs to separately identify all of the methods named "init" (and any other methods with conflicting names) that apply to different classes. This is done by "tagging" each method in the dispatch table, a process called selector uniquing.
Now, this has to be done not only for the application binary itself, but also for any Objective-C classes in shared libraries that are loaded. Almost all apps on Mac OS X load the libobjc.dylib library, which is cached to improve performance. As a part of the caching process, Snow Leopard now does the selector uniquing only once, and then stores the uniqued selectors in the cache. Thus, any application that links against libobjc.dylib (or any other library that is in the cache) only has to unique its own selectors, not those of the library as well. This significantly reduces the amount of overhead for launching an application compared to previous versions of Mac OS X.
This process does not attempt to retain application binary code in memory in the face of page-outs as Superfetch does. Selector uniquing caching speeds application launch times by reducing the amount of computation that has to happen at launch, not by pre-loading the application's binary.
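To make the payoff of uniquing concrete, here's a tiny sketch (mine, not from the article) showing that two independent references to the same method name end up as the same selector pointer, which is what lets the runtime compare pointers instead of strings:

    #import <Foundation/Foundation.h>
    #import <objc/runtime.h>

    int main(void) {
        // Two independent references to the selector "description".
        SEL a = @selector(description);
        SEL b = sel_registerName("description");   // runtime registration path

        // Because selectors are uniqued, both refer to the same address,
        // so dispatch can compare pointers instead of strings.
        NSLog(@"same pointer? %d", a == b);             // expect 1
        NSLog(@"sel_isEqual?  %d", sel_isEqual(a, b));  // expect 1
        return 0;
    }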
Thread-local garbage collection is NOT the same as Windows Superfetch.
Thread-local garbage collection is a third phase of garbage collection added on top of the Objective-C 2.0 garbage collection system, which speeds up the garbage collection system even further. By concentrating GC to what has occurred in a single thread, the GC system can delay and reduce the cost of a slow global sweep even beyond the generational GC algorithm.
Windows Superfetch is a response to poorly written software.
To quote from the Wikipedia article:
The intent is to improve performance in situations where running an anti-virus scan or back-up utility would result in otherwise recently-used information being paged out to disk, or disposed from in-memory caches, resulting in lengthy delays when a user comes back to their computer after a period of non-use.
In my opinion as an experienced application developer the user should never run into the problem that Superfetch attempts to solve. Anti-malware scans or backups are generally limited by I/O transfer rates, not by CPU. In such situations, using lots of memory to pre-load data makes no sense. It is relatively easy to write a two-buffer, threaded, streaming system for situations that are constrained by disk transfer rates without consuming scads of memory.
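For what it's worth, here is roughly what I mean by a two-buffer streaming reader: a toy plain-C sketch with pthreads, where one thread reads ahead into the spare buffer while the other buffer is processed. The names, buffer size, and the trivial "process" stand-in are mine; it's an illustration, not production code.

    #include <pthread.h>
    #include <stdio.h>

    #define BUFSZ (1 << 20)

    struct chunk { char data[BUFSZ]; size_t len; };
    static struct chunk bufs[2];

    struct read_job { FILE *f; struct chunk *dst; };

    static void *read_chunk(void *arg) {
        struct read_job *job = arg;
        job->dst->len = fread(job->dst->data, 1, BUFSZ, job->f);
        return NULL;
    }

    static void process(const struct chunk *c) {
        /* stand-in for the real work: scanning, hashing, copying to backup... */
        fwrite(c->data, 1, c->len, stdout);
    }

    int main(int argc, char **argv) {
        if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        int cur = 0;
        bufs[cur].len = fread(bufs[cur].data, 1, BUFSZ, f);  /* prime buffer 0 */

        while (bufs[cur].len > 0) {
            pthread_t reader;
            struct read_job job = { f, &bufs[1 - cur] };
            pthread_create(&reader, NULL, read_chunk, &job); /* read ahead */
            process(&bufs[cur]);                             /* overlaps the I/O */
            pthread_join(reader, NULL);
            cur = 1 - cur;                                   /* swap buffers */
        }
        fclose(f);
        return 0;
    }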
In the bigger picture, Superfetch attempts to learn the times of day when apps are used and pre-loads their binaries. This is a nice concept, but I have serious doubts as to how useful it really is. The penalty for guessing wrong is fairly high, and users are more tolerant of consistent small slowdowns than they are of occasional long hangs (see the Mac literature on the spinning beach ball).
Mac OS X is less likely to need such anti-malware scans in the first place as the application binaries are now digitally signed by the developer. Any malware that attempts to insert itself into applications will run into problems. This is not to say that the Mac is immune -- I can think of a number of holes that could be exploited (such as the fact that unsigned binaries w
Re: (Score:2, Funny)
mod parent : should be the article itself :D
Start working at 9 AM (Score:2)
Objective-C uses a two-phase dispatch for method calls. When you see a call in the Objective-C source code that looks like [myObject init], the dispatch system first resolves the method name to a selector, then looks up the function pointer that the receiving class's dispatch table associates with that selector, and finally calls through that pointer.
C++ follows the same steps. The big difference is that in Objective-C, the dispatch table is an associative array (C++ unordered_map, Java HashMap, Python dict) from strings to function pointers, not a plain array (C++ vector, Java ArrayList, Python list).
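To illustrate (my own toy example, not from the parent): an Objective-C message send is essentially a call into the dispatcher with the receiver and a selector, and you can even pull the resulting function pointer (IMP) out yourself. The cast-and-call spelling of objc_msgSend is just for illustration.

    #import <Foundation/Foundation.h>
    #import <objc/message.h>

    int main(void) {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        id obj = [[[NSObject alloc] init] autorelease];

        // The normal spelling: the compiler routes this through the dispatcher.
        NSString *a = [obj description];

        // Roughly what it lowers to: hand the receiver and the selector to the
        // runtime, which finds the matching function pointer and calls it.
        NSString *b = ((id (*)(id, SEL))objc_msgSend)(obj, @selector(description));

        // You can also fetch that function pointer (IMP) yourself:
        IMP imp = [obj methodForSelector:@selector(description)];
        NSString *c = ((id (*)(id, SEL))imp)(obj, @selector(description));

        NSLog(@"%@ | %@ | %@", a, b, c);
        [pool drain];
        return 0;
    }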
In my opinion as an experienced application developer the user should never run into the problem that Superfetch attempts to solve. Anti-malware scans or backups are generally limited by I/O transfer rates, not by CPU. In such situations, using lots of memory to pre-load data makes no sense. It is relatively easy to write a two-buffer, threaded, streaming system for situations that are constrained by disk transfer rates without consuming scads of memory.
But then you rely on the operating system to provide a method for applications to provide cache hints, and you rely on the antivirus software to provide such hints. SuperFetch tries to infer these even for applications developed prior to widespread knowled
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
But who signs the developer's certificate? And what keeps malware publishers from signing their trojans??
IIRC the intent here is that any change in the binary - for example, an automatic updater - would have to be signed by the cert that's in the original package, and as such can validate that it's a "genuine" update, as opposed to the binary being changed by a remote exploit of any sort.
Re: (Score:2)
The big difference is that in Objective-C, the dispatch table is an associative array (C++ unordered_map, Java HashMap, Python dict) from strings to function pointers, not a plain array (C++ vector, Java ArrayList, Python list).
There are a whole bunch of tricks used (e.g., the map is actually from interned strings, which makes it far quicker to do the check) yet it has the flexibility to do things like dynamic dispatch and at a speed that isn't too horrible. Clever compromise (and has much in common with the way dynamic languages manage method invocations). And one of the cleverest things about it is that only method calls are routed through this: nobody even pretends that normal function calls need the overhead of fancy dispatch.
Re: (Score:2)
But then you rely on the operating system to provide a method for applications to provide cache hints, and you rely on the antivirus software to provide such hints. SuperFetch tries to infer these even for applications developed prior to widespread knowledge of these hints or ported from systems that lack these hints.
As I understand it, neither function pointer uniquing caching nor Superfetch require that the A/V or backup software provide cache hints. On Mac OS X, almost all apps load the libobjc.dylib library so caching the uniquing of function pointers is a big win for app launch times.
Having my applications ready to start at 08:57 when I'm about to grab the mouse at 08:58 improves my productivity. Consider that employees have sued their employers for requiring that employees be present during application startup time but not paid until the application has fully started up.
This may work for some, but not for others. The problem is the lack of consistency -- e.g., if I grab the mouse at 8:58 AM I get my e-mail quickly, but if I come in a little early at 8:30 AM I have to wait for it. This leads to user fr
Re: (Score:2)
As I understand it, neither function pointer uniquing caching nor Superfetch require that the A/V or backup software provide cache hints.
True, because SuperFetch is an improvement to Windows caching that works around the lack of such hints.
No, [code signing] won't prevent a malware writer from signing his or her code, but it accomplishes two things:
3. The price of a certificate from a trusted root makes it uneconomic for some people to sign their software. Or at least that's what I've seen from Authenticode on Windows: most non-corporate-backed free software and freeware and much shareware is distributed without a signature. Likewise, homebrew applications for video game consoles use holes in the operating system's signature verification to start exe
Re: (Score:2)
The price of a certificate from a trusted root makes it uneconomic for some people to sign their software. Or at least that's what I've seen from Authenticode on Windows: most non-corporate-backed free software and freeware and much shareware is distributed without a signature. Likewise, homebrew applications for video game consoles use holes in the operating system's signature verification to start executing. At least Mac OS X has an option for self-signed certificates, which do #2 (make sure two binaries have the same publisher) without having to do #1 (make sure each publisher is part of a private club).
Grr... I just checked and a 1-year code signing cert from Comodo is $179.95, with discounts for multi-year certs. Other vendors also seem to have pretty reasonable prices. Anyone who has the time to put together a serious app (even for freeware) can afford that amount. Verisign charges an unconscionable amount (around $900 for one year!) for a code signing cert. Bleah!
Video console homebrews are a different story, as the console makers won't sign an app that hasn't been through their (rather expensive) deve
Cross-platform code signing costs (Score:2)
I just checked and a 1-year code signing cert from Comodo is $179.95, with discounts for multi-year certs. Other vendors also seem to have pretty reasonable prices.
That's at least on the order of $100 per platform. The certificate for Windows is $179.95 per year, and the certificate for a secure web site from which to distribute copies of the software is another $99 per year. It gets even more expensive to target more than one platform: the certificate for XNA is $99 per year, the certificate for iPod Touch is $99 per year, and by the time one has ported an application to all the platforms that his audience uses, he'd be out of his hobby money.
Anyone who has the time to put together a serious app (even for freeware) can afford that amount.
Say I develop a video ga
Re: (Score:2)
I just checked and a 1-year code signing cert from Comodo is $179.95, with discounts for multi-year certs. Other vendors also seem to have pretty reasonable prices.
That's at least on the order of $100 per platform. The certificate for Windows is $179.95 per year, and the certificate for a secure web site from which to distribute copies of the software is another $99 per year. It gets even more expensive to target more than one platform: the certificate for XNA is $99 per year, the certificate for iPod Touch is $99 per year, and by the time one has ported an application to all the platforms that his audience uses, he'd be out of his hobby money.
Hold on, most code signing CA's include both the codeSigning and msCodeCom usage extensions in the same certificate so there's no need to buy multiple code signing certs. Unless you're conducting an e-commerce transaction (in which case you're no longer a hobbyist), there's no need for a website cert -- and even then I've found website certs for as little as $15/year. Mac OS X/iPhone code signing certs just require the code signing extension, so they just work. Ditto XNA. To join the iPhone developer progra
Re: (Score:2)
The intent is to improve performance in situations where running an anti-virus scan or back-up utility would result in otherwise recently-used information being paged out to disk, or disposed from in-memory caches, resulting in lengthy delays when a user comes back to their computer after a period of non-use.
In my opinion as an experienced application developer the user should never run into the problem that Superfetch attempts to solve. Anti-malware scans or backups are generally limited by I/O transfer rates, not by CPU. In such situations, using lots of memory to pre-load data makes no sense. It is relatively easy to write a two-buffer, threaded, streaming system for situations that are constrained by disk transfer rates without consuming scads of memory.
I don't think you understood it right: the perf problem is not for the anti-malware programs; rather, once they have run, they have thrown everything out of the cache, and subsequent applications have to re-populate it again, slowing everything down. There used to be the same kind of problem under Linux after the 'locate' cronjob.
Re: (Score:2)
Thread-local garbage collection is a third phase of garbage collection added on top of the Objective-C 2.0 garbage collection system, which speeds up the garbage collection system even further. By concentrating GC to what has occurred in a single thread, the GC system can delay and reduce the cost of a slow global sweep even beyond the generational GC algorithm.
Do you know how they detect that a pointer has escaped? Is there some sort of write barrier? How does this work with the somewhat unsafe base lang
Re: (Score:2)
Re: (Score:2, Insightful)
Gretchen, stop trying to make Superfetch happen. It's not going to happen.
Re:I've heard that before.... (Score:5, Informative)
Did you even read the article? I suppose not... this is Slashdot, after all.
The article states that prebinding (similar to prelink) was used in previous versions of OS X and has been replaced by a much faster shared cache.
Re: (Score:2, Informative)
For reference, normally when a program is launched without prebinding, the program has to look into the symbol table for the shared library and "bind" it (basically, tell the program where it is). Prebinding basically does that in advance and saves the lookup table, but any time the library is changed, the bindings have to be regenerated.
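To make "look into the symbol table and bind it" concrete, here's the same kind of name-to-address lookup done explicitly with dlopen()/dlsym(). This is only an illustration of what a by-name symbol lookup costs (the library path is macOS-specific and the example is mine); it's not what dyld does internally for ordinary load-time binding.

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        void *handle = dlopen("/usr/lib/libSystem.dylib", RTLD_LAZY);
        if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        /* Bind the name "cos" to an address in the loaded image. */
        double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
        if (cosine)
            printf("cos(0) = %f\n", cosine(0.0));

        dlclose(handle);
        return 0;
    }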
The article says prebinding is actually quite efficient for C/C++ code, but Objective-C (used by Mac OS X and the iPhone) is structured more like Smalltalk or
Re:I've heard that before.... (Score:5, Informative)
selectors, which I believe can't be prebound (for you java programmers, these are equivalent to interfaces - C/C++ does not have this concept and instead allows direct access to the classes using protected or public)
I'm sorry, but this and most of the rest of your description is completely wrong. Selectors are nothing like Java interfaces. Interfaces are Java's version of Objective-C Protocols. Selectors are abstract method names (Smalltalk calls them symbols). Each Objective-C class has some data structure mapping these to function pointers. When you send a message (call a method) you look up the function pointer corresponding to the selector in the receiving class. To make this fast, all selector comparisons are done as pointer comparisons. To make this work, the runtime needs to make sure that selectors are unique. This process involves building a large hash table and inserting every selector referenced by every compilation unit into it. By making the linker handle this uniquing, you have several advantages. The first is that the resulting table can be shared more easily between processes, resulting in a memory efficiency gain. The second is that the runtime can first try doing pointer comparison when registering a new selector, and only use the hash if the linker didn't unique the selector.
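If you want to see that per-class mapping from selectors to function pointers directly, the public runtime API exposes it. A small sketch of mine:

    #import <Foundation/Foundation.h>
    #import <objc/runtime.h>
    #include <stdlib.h>

    int main(void) {
        unsigned int count = 0;
        // Walk a few entries of NSString's method table: each one maps a
        // selector (an abstract, uniqued method name) to a function pointer (IMP).
        Method *methods = class_copyMethodList([NSString class], &count);
        for (unsigned int i = 0; i < count && i < 3; i++) {
            SEL name = method_getName(methods[i]);
            IMP  imp = method_getImplementation(methods[i]);
            NSLog(@"%s -> %p", sel_getName(name), (void *)imp);
        }
        free(methods);
        return 0;
    }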
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
When you compile any C-family language, you get an object code file. This contains symbols which will be resolved by either the static or dynamic linker and various sections. One of these sections contains an array of pointers to functions which are called by the loader when that object file is loaded. These are not used in C. In C++, they are used for static constructors. In Objective-C, there is one that issues a call to __objc_exec() with a pointer to the struct objc_module created for that compilat
Re: (Score:2)
Re: (Score:3, Informative)
selectors, which I believe can't be prebound (for you java programmers, these are equivalent to interfaces - C/C++ does not have this concept and instead allows direct access to the classes using protected or public)
I'm sorry, but this and most of the rest of your description is completely wrong. Selectors are nothing like Java interfaces. Interfaces are Java's version of Objective-C Protocols. Selectors are abstract method names (Smalltalk calls them symbols). Each Objective-C class has some data structure mapping these to function pointers.
Although I will agree with you that the GPP is somewhat misinformed, I take issue with your statement that selectors are nothing like Java interfaces.
It is true that the class structure of Objective-C (one root NSObject class, at least in common practice) and the class structure of Java (one root Object class) are virtually identical. And it is true that an Objective-C protocol has feature parity with a Java Interface and when you think of formal interfaces in Java the equivalent to that in Objective-C is a pro
Re: (Score:2)
With Objective-C, a method may not even exist at compile time. It is impossible to define formal interfaces in many cases. respondsToSelector allows you to ensure that any arbitrary object walks, swims, and quacks like a duck without strict class definitions.
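A minimal example of what I mean (the Duck class and the quack method are made up for illustration):

    #import <Foundation/Foundation.h>

    @interface Duck : NSObject
    - (void)quack;
    @end

    @implementation Duck
    - (void)quack { NSLog(@"Quack!"); }
    @end

    // No protocol, no common base class beyond NSObject: just ask at runtime.
    static void quackIfPossible(id anything) {
        if ([anything respondsToSelector:@selector(quack)]) {
            [anything performSelector:@selector(quack)];
        } else {
            NSLog(@"%@ can't quack", anything);
        }
    }

    int main(void) {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        quackIfPossible([[[Duck alloc] init] autorelease]);
        quackIfPossible(@"just a string");
        [pool drain];
        return 0;
    }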
Re: (Score:3, Insightful)
Re: (Score:2)
Also sounds like the prelink [wikipedia.org] application in Linux.
No, OS X has always done that. Except they call it prebinding. [wikipedia.org]
Re:I've heard that before.... (Score:4, Informative)
Well, the FP is claiming that they're the same thing, and your post seemed to be agreeing with him. But yea, that troll rating probably should have gone to the post you replied to.
I don't know anything about prelink, but Superfetch sounds completely different from dyld. Superfetch keeps frequently launched applications in memory to make them launch faster (much like Winamp Agent does for Winamp). dyld, OTOH, shortens application launch times by not reloading a shared library each time an application is launched. Keeping the shared library loaded in a shared cache also reduces the number of copies of that library you need loaded in memory. It doesn't sound like Superfetch does that.
Both a turbocharger and a cold air intake can improve car performance, but that doesn't make them the same thing.
Re: (Score:2)
You phrased things much better than I could, fine sir. The link to the Wikipedia article [wikipedia.org] that the original poster included read nothing like TFA [sealiesoftware.com]. Though because of the glaring placement of the post toward the top of the page and how straightforwardly it is written, I think most mods probably modded it without checking against either of the articles. That is how human nature is :). It's nice that Slashdot moderation tends to correct itself over time thanks to insightful replies lower in the tree.
Re: (Score:1)
Re: (Score:1)
Re: (Score:1)
Depends on how you look at it. I was talking about how my post was rated troll which could and usually does turn into a talk about the content itself (which is on topic). A number of posts from other people on this article have been inaccurately moderated "Troll" for some reason though.
At this point, I do believe we may both be off topic (thanks to the mods). It will be interesting to see if you get rated off topic, and this post here gets rated troll, too. Here's hoping for helpful meta-moderation.
Re: (Score:1)
Re: (Score:1, Offtopic)
Apple II [wikipedia.org] had this in 1979. Back then we called it a "jump table". :)
Re: (Score:2)
Applesoft BASIC on the Apple II mapped keyword tokens directly to a jump table of ROM entry points.
Other than the fact that it was not object-based, didn't reference system or external libraries, and all of the token offsets were fixed and predetermined...
It's exactly the same. :)
Re:enough fucking (Score:5, Funny)
You've had enough of fucking, and would like more Snow Leopard stories? Each to his own, I guess.
Re: (Score:3, Funny)
Re:enough fucking (Score:5, Funny)
Re: (Score:2, Funny)
Hey, it makes a refreshing change from the daily "Do XYZ On Your iPhone" stories! I love the variation today here on Apple- er, Slashdot.
Re: (Score:2, Insightful)
Considering that the whole point of Snow Leopard was to refine the internal structure of the operating system and introduce new features for developers, it should come as no surprise that there are far more /. appropriate stories about it than the more eye-candy oriented releases.
Re: (Score:1)
Re: (Score:3, Informative)
Re:enough fucking (Score:5, Insightful)
Comparing this to Superfetch is ignorant beyond belief. Superfetch is part of the paging system on Windows and attempts to trigger page faults before the data is actually needed so that it's already cached when it is needed. This is quite a nice feature and one I am naturally prejudiced to like because my PhD was in this topic.
This is entirely different. Part of it is similar to the existing prebinding / prelinking stuff in Leopard / Linux, which generates the relocation tables in position-independent code. This is nothing like anything in Windows, because Windows doesn't use position-independent code for shared libraries (it uses a horribly ugly hack which performs better in the best case and much worse in the worst case). The article is a bit too light on details to understand exactly why the new version performs so much better.
The other half, however, is very clever. By caching the selector uniquing information, they are saving a lot of time when loading compilation units containing Objective-C code. Even better is the fact that, because these symbols are now not modified, they can be shared between processes without triggering copy-on-write faults. This isn't actually that hard to implement for the GNU runtime; just give the selector symbols mangled names and mark them as having common linkage (it's a bit harder on Darwin because Mach-O is weird), then you can use pointer comparison as a first step in the runtime and avoid the strcmp() call. Combine this with the prelinking support and you get the caching for free, which is very nice. I actually implemented this in Clang while writing this post, so expect to see it on non-Apple platforms soon too.
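For anyone curious what "pointer comparison as a first step, strcmp() only as a fallback" might look like, here's a toy registration function. This is my sketch of the idea, not code from either the Apple or GNU runtime (which use real hash tables and handle far more cases); bounds checks are omitted.

    #include <string.h>
    #include <stdio.h>

    typedef struct toy_selector { const char *name; } toy_selector;

    #define MAX_SELECTORS 4096
    static toy_selector *registered[MAX_SELECTORS];
    static unsigned n_registered;

    /* Called once per selector reference emitted by a compilation unit. */
    toy_selector *toy_register(toy_selector *candidate) {
        unsigned i;

        /* Fast path: if the linker already coalesced identically named selector
         * symbols (common linkage / the shared cache), this exact pointer may
         * already be registered, so no string work is needed. */
        for (i = 0; i < n_registered; i++)
            if (registered[i] == candidate)        /* pointer comparison */
                return candidate;

        /* Slow path: the same name may be registered under another address. */
        for (i = 0; i < n_registered; i++)
            if (strcmp(registered[i]->name, candidate->name) == 0)
                return registered[i];              /* hand back the canonical one */

        registered[n_registered++] = candidate;    /* first sighting becomes canonical */
        return candidate;
    }

    int main(void) {
        static toy_selector a = { "init" }, b = { "init" }, c = { "dealloc" };
        printf("%p %p %p\n",
               (void *)toy_register(&a),   /* new: becomes canonical "init"      */
               (void *)toy_register(&b),   /* same name, other address -> gets &a */
               (void *)toy_register(&c));  /* new name -> itself                  */
        return 0;
    }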
Re: (Score:2)
Sounds like this "solution" is only a benefit because Apple chose to standardize on a language that generates very slow executables by default. Thus, the problem (and solution) are of their own making.
Re: (Score:2)
Yes, wouldn't it be terrible if Apple actually fixed issues with their OS...
Re: (Score:3, Funny)
that sound you hear is NoYob's spirit being completely crushed.
bravo, sir.
Re: (Score:2)
I still don't see what the point is. My solution to this: More RAM and *Autostart*. Really. I start everything I need at boot time. Which is quite rare. So I never felt the need to speed up the first start of any programs.
Re: (Score:3, Funny)
erm... these don't stick?
Re: (Score:2)
erm... these don't stick?
Not even sticky files stick when applied to glue languages like Python. Must be glue incompatibility or something.