Mac OS X Built For CISC, Not RISC 82

WCityMike writes "One of the programmers at Unsanity, maker of haxies, recently posted a rather shocking revelation on the company's weblog. He says that Mac OS X's Mach-O runtime ABI (Application Binary Interface) comes from a NeXTStep design for 68K processors, and is not designed for the PowerPC architecture. Had it been, things would have been approximately 10-12 percent faster. And supposedly, they can't fix it now without breaking all existing applications." The developer mentions there are workarounds in the newest GCC, but only for newly compiled programs.
This discussion has been archived. No new comments can be posted.


  • by tps12 ( 105590 ) on Monday October 21, 2002 @07:35AM (#4494462) Homepage Journal
    This is good news for long-time Mac fans. Back in the day ("the day" was 1994 or so, IIRC) we Mac users took seeking out the correct 68k or PPC binaries as a sign of our superiority to PC users. While Windozers happily downloaded software that would run on circa 1987 hardware, we enlightened ones could narrow our searches to programs specifically compiled for our platforms. We could even get "fat" binaries, and optionally remove the unneeded binary code using a small freeware app.

    With OS X, I had hoped we would again have a situation where just using the Mac required that extra step of compatibility checking, setting us apart from the drooling masses of Gates-worshippers. Sadly, with the Classic compatibility layer, it did not come to pass. Hopefully this revelation will set things aright.
  • by Nutrimentia ( 467408 ) on Monday October 21, 2002 @07:50AM (#4494537) Homepage
But tell me, if they could slide a PPC ABI in with the new journaling system update, couldn't they just get the performance hit and the gain to cancel out? It'd be like getting the journaling system for free! How hard can it be? 10.2.5 maybe?
Real World Tech [realworldtech.com] talks about the 970 in depth... I wonder how the addition of the 64-bit arch AND the 32-bit compat mode will affect things.

    Like the Itanium, with its poor backwards compat performance? Or will it be speedy?
    • by BoomerSooner ( 308737 ) on Monday October 21, 2002 @10:30AM (#4495761) Homepage Journal
      The tweaking is at the Kernel/OS level so applications can run without modification (in theory). My guess is some apps will need patches while others will be okay. A perfect example of this is apps that run under 10.0 and 10.1 but not 10.2.

With the speed of current and future processors, the delivery of a stable OS is preferable to a 3-years-late, tweaked OS that runs the same things just a little faster.
      • With the speed of current and future processors the delivery of a stable OS is preferable to a 3 year late, and tweaked OS that runs the same things just a little faster.


Exactly. When Copland comes out it's gonna be smokin'.
      • I believe the reason why some apps will run under 10.0.x and 10.1.x but not Jaguar is the change to a new version of gcc. I think before Jaguar it was gcc 2.9.something and now it's gcc 3.1(.1?). In reality, the fact that Apple was able to pull off such a switch relatively flawlessly is amazing and most people don't realize it.
      • What always amazes me are the kinds of apps that break. Typically they *aren't* ones that ought to be doing low level stuff. I mean perhaps they are relying on questionable libraries. But still. The typical "high level" app shouldn't be breaking as one moves from 10.0 to 10.1 to 10.2.
    • A 64-bit PowerPC chip doesn't need to "emulate" anything to run 32-bit code, unlike the Itanium, which uses completely different instructions. There should be no speed hit: the only real difference is that the CPU can perform calculations using all 64 bits. This also won't remove the speed hit caused by the ABI.
    • From The PowerPC Compiler Writer's Guide [ibm.com] (warning: PDF):
Both 32-bit and 64-bit implementations support most of the instructions defined by the PowerPC architecture. The 64-bit implementations support all the application instructions supported by 32-bit implementations as well as the following application instructions: [...]

      The 64-bit implementations have two modes of operation determined by the 64-bit mode (SF) bit in the Machine State Register: 64-bit mode (SF set to 1) and 32-bit mode (SF cleared to 0), for compatibility with 32-bit implementations. Application code for 32-bit implementations executes without modification on 64-bit implementations running in 32-bit mode, yielding identical results. All 64-bit implementation instructions are available in both modes. Identical instructions, however, may produce different results in 32-bit and 64-bit modes:

      Addressing--Although effective addresses in 64-bit implementations have 64 bits, in 32-bit mode, the high-order 32 bits are ignored during data access and set to zero during instruction fetching. This modification of the high-order bits of the address might produce an unexpected jump following the transition from 64-bit mode to 32-bit mode.
      Status Bits--The register result of arithmetic and logical instructions is independent of mode, but setting of status bits depends on the mode. In particular, recording, carry-bit-setting, or overflow-bit-setting instruction forms write the status bits relative to the mode. Changing the mode in the middle of a code sequence that depends on one of these status bits can lead to unexpected results.
      Count Register--The entire 64-bit value in the Count Register of a 64-bit implementation is decremented, even though conditional branches in 32-bit mode only test the low-order 32 bits for zero.
IOW, even if they use "32-bit compat mode", there should be no speed penalty whatsoever.
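Per the quoted manual text, the only addressing difference in 32-bit mode is that the high-order 32 bits of an effective address are ignored on data access. A minimal C sketch of that one rule (illustrative only, not vendor code; `sf_bit` models the Machine State Register's SF bit):

```c
#include <stdint.h>

/* Sketch of the quoted rule: in 32-bit mode (SF cleared to 0), the
 * high-order 32 bits of a 64-bit effective address are ignored on
 * data access; in 64-bit mode (SF set to 1) the full address is used. */
uint64_t data_effective_address(uint64_t ea, int sf_bit)
{
    return sf_bit ? ea                  /* 64-bit mode: full address   */
                  : (ea & 0xFFFFFFFFu); /* 32-bit mode: low 32 bits only */
}
```

So 32-bit code on a 970-class chip sees exactly the addresses it would on a 32-bit part, which is why one would expect no compatibility-mode penalty.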
  • by paradesign ( 561561 ) on Monday October 21, 2002 @08:13AM (#4494643) Homepage
that Apple knows what they're doing, and probably had a very good reason for doing what they did. I mean, it's not like they haven't had the past four years to change it or anything, but whatever.
    • by mikedaisey ( 413058 ) on Monday October 21, 2002 @08:21AM (#4494676) Homepage

      I would agree with you, but this is a legacy decision that would have saved them crucial months early in the OSX creation process...so I can just as easily see them making the choice then, to get a shipping system out the door, THEN discovering that OSX's biggest problem is that it isn't fast enough, and now trying to retrofit a solution. That matches all the information I've gleaned from mac sites in the past.

      It sucks, but if they hadn't gotten X out the door when they did Apple would have been dead in the water--it was already horrendously late for those who started waiting for Copland.

      • by Anonymous Coward
        ... And I would agree with you, except no programmer worth a cent will ever wait until the last minute to wonder if his OS is fast enough. A good programmer will ALWAYS assume his OS is NEVER fast enough.

        But yes, this sounds like a 'hi, we're from NextStep, we're going to save you' attitude. The detail got lost in the fury and the excitement. That's what it sounds like at any rate. Sure they thought about it; but by now they know they didn't think about it 'right'.
The fact that Darwin appears to have a CISC-oriented design suggests that Apple doesn't want to tie themselves to the PPC platform. This could mean one of two things:

        1) Apple wants to ensure that should Motorola ever tank on them, they have the ability to port OS X to a wider range of processors, and/or

        2) Apple has specific possible platforms for OS X in mind in the future. x86 immediately comes to mind. Granted, this is a rumor that has run its course a million times over, but don't jump to conclusions. This may not mean an OS X port to x86. Looking at OS X and its multiple APIs (Carbon, Cocoa, Classic, "native Java"...) perhaps Apple may be plotting to roll out some sort of cross-platform API that would enable Mac OS X apps to run on, say, Windows. Support for such an API close to the kernel, as Apple would be capable of producing, would make such an API much faster on the Mac than, say, applications based on certain browser cum cross-platform COM system [mozilla.org].

        Just some speculation here.
    • I have a feeling they don't know. Look how fast MacOS X does 2D rendering and you'll see that they are a little lacking in the optimization department. Even with Quartz Extreme the performance can be painfully slow as it draws things like Postscript documents.

  • Here is the text:

    Unsanity.org
    October 21, 2002
    Mach-O ABI

    Here I go again, ranting about Mac OS X bowels. This time I want to talk about particular implementation details of Mach-O runtime ABI (Application Binary Interface). Before you get too confused, there are two different things under the 'Mach-O' name:

    * Mach-O ABI, which defines how every application in the system executes and calls functions (stack conventions, register usage, and more);
    * Mach-O File Format, which defines a way for compiled executables to store different parts of them in the same file (compiled code, data, strings, etc).

    The latter is not what I want to talk about today; the first is what puzzles me most. I admit I am just a "small programmer" with no relationship with the powers-that-be at Apple at all (this means, no insider contacts who can explain the reasoning behind the particular important design decisions to me), so my impressions, judgements and guesses expressed in this article may be slightly or totally off the mark. I, however, as many other developers who have dug deep into the implementation of such things, can see obvious drawbacks and oddities about Mach-O ABI, and this is what I am going to talk about.

    Mach-O originates from NeXTstep, an operating system created at NeXT for its NeXTstation machines, and later expanded to x86 hardware. NeXTstations were originally based on Motorola 68k CPUs, just like old Macintoshes. Mac OS (classic), on the other hand, used an ABI for PowerPC which followed the ABI principles defined in a document by IBM/Moto for PPC processors. So as you all may already know, m68k and x86 are CISC architectures; PowerPC used in all new Macs is RISC.

    To make a long story short, Mac OS X uses an ABI designed for CISC processors, mostly ignoring RISC design principles.

What do I mean by that? The Mach-O ABI we see now used in Mac OS X is more or less a direct port of NeXT's Mach-O designed for m68k - it relies on the PC (program counter) register to perform various manipulations with data (for the geeks: PC-relative addressing). There's nothing wrong with that, as it's an effective and common practice, except for one little thing: there is no programmatically accessible PC register in RISC processors. That is not a show-stopper though - Mach-O for PowerPC just takes one of the 32 general purpose registers and turns it into a program counter-style register, to base all offset calculations off it. That works, as you can see, as all Mac OS X applications (except for the ones compiled with Carbon/CFM) use the Mach-O ABI.

That approach works, except for one small thing: global/static data access adds about a 7-cycle overhead per function, and about triple that for cross-context calls (that is for a G4-class processor), compared to the old Mac OS Classic ABI (excuse me for the geek talk). The Mac OS Classic CFM ABI, in comparison, needed almost 0 cycles for static data access and about 5 for cross-context calls. To rephrase - applications in Mac OS X could be faster if the Mach-O ABI followed the principles set for the PowerPC chip, and not the ones created over a decade ago for CISC ones.
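The pattern that pays this per-function cost is ordinary global/static data access. A hypothetical fragment (names made up) of the kind of code affected:

```c
/* Any touch of global/static data from a function is where the
 * per-function addressing overhead described above is paid: under
 * Mach-O PIC, the function must first materialize a base register
 * before it can compute the address of `counter`. */
int counter = 0;  /* global: reached relative to the borrowed base register */

int bump(void)
{
    counter += 1;  /* base-register-relative load and store */
    return counter;
}
```

Nothing in the C source hints at the cost; it is purely a property of how the ABI tells the compiler to address `counter`.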

This brings us to the question, "how much faster would the applications be if the ABI was done right?". The answer is, according to some tests done by my friends on the #macdev IRC channel, the speed gain would be 10-30%, depending on each particular application (how often it calls functions). Realistically, the speed gain would be around 10 to 12 percent (how I got these numbers is explained below).

So why did Apple use an outdated ABI for a modern operating system? Frankly, I don't know the reason. The best explanation I have heard is that it saved Apple a few months of Mac OS X development time, since they didn't have to do massive updates to its NeXT-derived tool chain.

There are signs of change though -- the recent update to GCC, the compiler shipped with OS X, lets it perform the so-called -mdynamic-no-pic optimization, which hard-codes data addresses in the code, so the result is roughly equivalent to the CFM ABI used in Mac OS Classic -- GCC itself, compiled with that optimization, is 10% faster. Applications need to be recompiled to take advantage of that, so it doesn't affect the 80% of titles already shipped for Mac OS X. Then again, the optimization above only works for executables and not shared libraries.

    Either way, there is no way to change the ABI now, as it would break all of the existing applications - which is obviously not what Apple (or us) would want.

    And after all, who cares about a 10% speed loss? You can always get a faster Mac, right?

    Further reading:
    Mach-O Runtime Architecture [developer.apple.com]
    CFM-Based Runtime Architecture [developer.apple.com]

    Thanks to #macdev regulars and an anonymous Apple engineer for helping me with this article.

    Update 10/21: fixed a few phrases in the text to make it more clear; I've also been told OPENstep runs on RISC processors (non-PowerPC) - however, I have not investigated how the Mach-O ABI works there - quite possible it obeys the PowerPC guidelines, although I am pretty sure it does the same as on PowerPC.
    Posted by slava at October 21, 2002 03:40 AM
If Apple would just use CFM/PEF natively we wouldn't need the PEF shim crap for PEF Carbon apps.

      I mean where does Apple go from here? Do they create a 3rd spec so they have to upkeep two non-native formats? Will they have to create three sets of libraries or shims?

      Not very forward-thinking Apple.
  • by Leimy ( 6717 ) on Monday October 21, 2002 @08:22AM (#4494682)
Uhm... this is news? I am not shocked at all. In order to get the product out sooner rather than later they stuck with the old ABI that was used for the Motorola 68k [probably wrong but I have had no coffee yet]. Anyway, some people say that the performance loss as a result of this "corner cutting" may be up to 7 cycles per function call, which just means we should all write our code as inlines and macros :).

Just kidding.... Anyway, it may or may not be easy/hard to fix... the problem is that now that it's out there, changing the ABI [the C ABI!!! the way functions get called and parameters get passed] is going to break everything. Maybe they can fix it, but not without significant cost to 3rd-party software... I could be wrong and not have thought this out well enough though.
    • Uhm... This is news?

Sorry, Leimy. Didn't realize you'd known about this and therefore it shouldn't have been posted. Slashdot editors, in the future please check with this guy before posting any stories.

Anyone who has been paying much attention around the web should/would have been able to read this information elsewhere, just like me :). There are MANY people who knew this problem existed well before the release of Jaguar.

        Perhaps because I look into things and try to tinker and understand them as well as do silly things like assembly language programming I picked up on this sooner than others. This isn't news to a lot of people I hang with... :)

        Sorry if I came off sounding arrogant :)
        • by Anonymous Coward
          >Anyone who has been paying much attention around the web should/would have been able to read this information elsewhere just like me :).

This can be said about ANYTHING and EVERYTHING that gets posted to Slashdot. Why don't we just close down the site, call it a day, and stop wasting our time? Google, after all, is all we need, right?

          Yeah...any-way. Consider that the stories themselves are only half of what makes Slashdot an interesting place. I come for the DISCUSSION more than the stories.

          >This isn't news to a lot of people I hang with... :)

I hate to break it to you, but the people who read Slashdot are a lot more diverse than just the people you happen to hang with. I come here for the occasional interesting science article (and the DISCUSSION that follows). I don't know shit about CS, although I'm interested in computers, and particularly Macs. This is the first I've heard of this issue (I'm sure loads of other Slashbots can say the same). I may never have heard about this issue if it wasn't for this Slashdot story. Please accept my formal apology.

          >Sorry if I came off sounding arrogant :)

          You did. You're forgiven. Don't let it happen again.

          -ac
  • Yay! FUD! (Score:5, Insightful)

    by Anonymous Coward on Monday October 21, 2002 @08:57AM (#4494915)
    And supposedly, they can't fix it now without breaking all existing applications.

There is no reason that an operating system can't support multiple ABIs. That means that new applications wouldn't work on older versions of the OS, but it certainly doesn't mean that they can't fix the "problem" without breaking current applications.
    • Re:Yay! FUD! (Score:2, Informative)

      by anarkhos ( 209172 )
      So what ABI will the libraries use?

Supporting an extra ABI isn't difficult at the execution level provided you have a separate set of libraries or shims like they do for PEF binaries now. That's also how you can run 32- and 64-bit apps on the same system.

      But what now? Another set of shims?

      The whole situation is ridiculous. They should use PEF/CFM natively.
  • by jafuser ( 112236 ) on Monday October 21, 2002 @09:05AM (#4494960)
    Who says they need to "fix" it? Perhaps Motorola may be losing a big customer in the future... I've heard from more than one source now that big changes may be coming in Mac hardware... Absolutely all rumors of course, but this fact fits in nicely with what I've already heard...
  • by constantnormal ( 512494 ) on Monday October 21, 2002 @10:15AM (#4495604)
    While I am in complete agreement that it was originally done this way in the interests of expediency, we can all see a point very soon where the instruction set will be in a (minor/major?) state of upheaval -- when they revamp OS X for 64-bit operation on the IBM 970 chipset.

However, it's not quite as easy as rolling it into that architecture, as they will probably rely on the 32-bit PPC compatibility mode of the 970 to bring along a lot of the existing baggage, ruling out a wholesale conversion to another ABI. Which means they will either implement a foundation to migrate toward the new ABI, or invoke yet another ABI (probably 64-bit 970 only) that uses the appropriate model. Either way, it will be some years (if ever, as we can still code 68K apps using an API from decades ago that run under emulation on OS X) before we see an efficient ABI in widespread use.

In any event, they will certainly retain a CISC-oriented ABI in the OS X stable of architectures, if only to be able to continue to wave the specter of an open-source OS X on x86 in front of Microsoft, as a sort of "mutual assured destruction" weapon to prevent Microsoft from wiping them out, and possibly as a negotiating tool in keeping Microsoft coding for the Mac.

    But -- since Apple is pretty well (apparently) hamstrung on making great strides in hardware performance over the next year or so, maybe they will push the software changes as the next best way to get needed speed. It wouldn't be the first time Apple capriciously honked off developers by changing all the rules and rendering years of development obsolete.

    And the whole thing may be moot, as it appears that one can get equivalent performance improvement by compiling with gcc3.
    • by Anonymous Coward
      Keep in mind that gcc3 optimizations mentioned would only affect applications, *not* the dynamic shared libraries, which are a significant part of the OS (think /System/Library/Frameworks)
  • by kuwan ( 443684 ) on Monday October 21, 2002 @10:32AM (#4495774) Homepage

    It seems that Apple could easily correct this when they update OS X for a 64-bit chip (namely the PowerPC 970). Applications will need to be recompiled to be 64-bit anyway, so why not update the ABI in the process? It would certainly be incentive for developers to update their apps...

    Imagine:

    Apple:"Update your apps to 64-bit and see a 10% performance gain."

    (Of course most apps really won't need to be 64-bit, but this would be incentive for developers to update them and users to buy new machines.)

As for Carbon, the article states that only Mach-O binaries use the CISC-style ABI; Carbon is not affected and uses a PowerPC-style ABI. This could be a way to "prove" his theory that you could get a 10-12% performance increase: build two test apps, one in Cocoa and one in Carbon, and then compare them to see if there really is a 10-12% speed difference.

    • by Anonymous Coward
      As for Carbon, it states in the article that only Mach-o binaries use the CISC-style ABI, Carbon is not affected and uses a PowerPC-style ABI. This could be a way to "prove" his theory that you could get a 10-12% performance increase. Build two test apps, one in Cocoa and one in Carbon and then compare them to see if there really is a 10-12% speed difference.
Wrong. Carbon CFM apps (only CodeWarrior and MPW can make these) don't use Mach-O when calling functions INSIDE themselves. They use some Mach-O CFM glue when calling functions in the system libraries. Carbon Mach-O apps (like iTunes 3.0) still use Mach-O.
This [ulb.ac.be] leads me to believe that PowerPC does run-of-the-mill PC-relative addressing, so that, for example, branches only take one instruction width, rather than having to store an entire 32-bit address in another location, etc. I'm confused by the purpose of the article, but I think the point was that PC-relative addressing forces the compiler to compute the address of a branch target in a function call to a fully-linked part of the binary. I don't think that address computation of that sort is responsible for a 10-15% performance hit, because wouldn't full-time absolute addressing require more memory accesses on the already saturated MPX bus? If I'm incorrect in this, please tell me, and I will look for more information.
    • Re:I'm confused (Score:5, Informative)

      by nadador ( 3747 ) on Monday October 21, 2002 @11:27AM (#4496311)
It just so happens that a friend of mine has a copy of "PowerPC Microprocessor Family: Programming Environments for 32-Bit Microprocessors" sitting on his desk, which I grabbed. Here is how PowerPC processors branch (from section 4.2.4.1 of said dead-tree document):

1. Branch relative addressing mode - the immediate displacement operand is sign extended and added to the current instruction address to produce the branch target address. So, PC relative addressing. There is no need for a programmatically accessible program counter because this is all done by the branch execution unit. Single 32-bit instruction.

      2. Branch conditional to relative addressing mode - same as branch relative addressing, except that the branch is only executed if the proper condition codes are set. Single 32-bit instruction.

      3. Branch to absolute addressing - the operand address is sign extended and used as the branch target. As the name implies, this is absolute addressing. Only problem is, the operand address is only 23 bits wide in a 32-bit implementation, and with the zero pad, it gives only 25 bits of absolute address (word alignment required). So, if you absolute address anything, you can only absolute address 25 bits worth of the address space.

4. Branch conditional to absolute - same as regular absolute addressing, except that you have to encode condition codes, so the operand address is now only 13 bits if I read the diagrams correctly, meaning that you can only absolutely address 15 bits of address space with the zero pad.

5. Branch conditional to link register - if you clobber the link register, you can branch to a 32-bit address. Of course, you have to clobber the link register, so I would think this would be most helpful in returning from a function call, not going to it, since the link register holds the return address. And if you use it forward instead of returning, you have to load the link register.

      6. Branch conditional to count register - same as link register branching as above.

      All of that said, the reason that the Mac OS ABI uses PC relative addressing is because the only way to fully address a 32-bit address space is to do PC relative addressing. According to this book, there is no two instruction width branch, eg a branch instruction which encodes an entire 32-bit absolute address in two 32-bit words (one word for branch encoding and condition codes, one word for the whole 32-bit address).

      This leads me to believe that there is no way to do all absolute addressing on PowerPC unless you implement new instructions (which will take more time to get to the processor, and to decode) or limit yourself to 15 or 25 bits of the address space.

So, the short version is that there is no way for the Mac OS ABI to do absolute addressing.
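Mode 1 above can be written out as plain arithmetic. A sketch (field widths as described in the IBM books; this is an illustration, not production decoder code) of how the branch unit forms a PC-relative target from the signed 24-bit LI immediate:

```c
#include <stdint.h>

/* Sign-extend the 24-bit LI immediate of a relative branch. */
int32_t sign_extend24(uint32_t li)
{
    return (li & 0x800000u) ? (int32_t)(li | 0xFF000000u) : (int32_t)li;
}

/* Branch relative: target = current instruction address plus the word
 * displacement (LI concatenated with two zero bits), sign-extended.
 * No software-visible PC register is involved; the branch unit does it. */
uint32_t branch_target(uint32_t pc, uint32_t li)
{
    return pc + (((uint32_t)sign_extend24(li)) << 2);
}
```

Forward and backward branches both fall out of the sign extension: an LI of all ones is a displacement of -1 word.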
      • Re:I'm confused (Score:1, Insightful)

        by Anonymous Coward
It's not about branching. It's about data references using PC-relative addressing. The PowerPC has no PC-relative data addressing modes.
        • Re:I'm confused (Score:4, Interesting)

          by nadador ( 3747 ) on Monday October 21, 2002 @11:40AM (#4496429)
          > Its not about branching. Its about data references using PC relative addressing. The PowerPC has no PC relative data addressing modes.

          Point taken. Section 4.2.3.1 of the same book is "Integer load and store address generation".

          1. Register indirect with immediate index addressing for integer loads and stores - In this case, you get a 16-bit index in the instruction added to the value in a general purpose register, which is used to compute the effective address.

          2. Register indirect with index addressing for integer loads and stores - this is the same as above, except that two registers are used and there is no encoded index.

          3. Register indirect addressing integer loads and stores - use just one general purpose register as an address for a load or store.

So, the point is that in every case, some form of relative addressing is used. In order to make relocatable code, i.e. code that can be linked happily with other binary objects, you have to have some sort of reference address, and PC-relative addressing is the only way to do this.

Even though there is no PC-relative addressing mode, the only way to guarantee that the relative addresses used in different object files won't clash is to do PC-relative addressing. The fact that this is not easy on the PowerPC doesn't make it any less necessary.
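For comparison, mode 1 of the load/store list above computes its effective address like this (a sketch under the same field widths; `rA_value` is the contents of the base register, with the architecture's special `(rA|0)` case for register 0 left to the caller):

```c
#include <stdint.h>

/* Sign-extend the 16-bit displacement field of a D-form load/store. */
int32_t sign_extend16(uint32_t d)
{
    return (d & 0x8000u) ? (int32_t)(d | 0xFFFF0000u) : (int32_t)d;
}

/* Register indirect with immediate index: EA = (rA|0) + sext(d).
 * Every addressing mode listed above is relative to a register value;
 * none of them is relative to the program counter. */
uint32_t load_store_ea(uint32_t rA_value, uint32_t d)
{
    return rA_value + (uint32_t)sign_extend16(d);
}
```

The ±32 KB reach of that 16-bit displacement is exactly the window the TOC discussion further down is about.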
          • Re:I'm confused (Score:5, Informative)

            by Anonymous Coward on Monday October 21, 2002 @11:59AM (#4496632)
            > So, the point is that in every case, some form of relative addressing is used. In order to make relocatable code, ie code that can be linked happily with other binary objects, you have to have some sort of reference address, and PC-relative addressing is the only way to do this.

            This is wrong. The PowerPC ABI, as defined by IBM, uses r2 as a TOC (Table of Contents) pointer. The PC is never needed or used as all data space references are made relative to the TOC, not the PC. Apart from being faster, this has several other advantages, not the least of which is that one copy of code can have multiple data contexts without involving VM.

            int foo;
            int bar(void) { return foo; }

            with macho:
            _bar:
mflr r0
            bl *+4
            mflr r2
            mtlr r0
            addis r3,r2,ha16(foo)
            lwz r3,lo16(foo)(r3)
            blr

            with IBM conventions:
            .bar:
            lwz r3,foo(rTOC)
            blr
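The TOC scheme the parent describes can be mimicked in portable C (an illustrative model, not the actual AIX ABI machinery): r2 holds the address of a per-module table of addresses, and each global is one table-relative load away.

```c
/* Model of TOC-relative data access: `module_toc` stands in for the
 * table that rTOC (r2) points at; a slot index plays the role of the
 * `foo(rTOC)` offset. Code never needs the PC to find its data. */
int foo = 42;                    /* the global from the example above */
void *module_toc[] = { &foo };   /* one address slot per external      */

int bar(void)
{
    /* one table-relative load, like `lwz r3,foo(rTOC)` */
    return *(int *)module_toc[0];
}
```

Because the code only ever dereferences through the table, pointing r2 at a different table gives the same code a different data context, which is the "multiple data contexts without involving VM" advantage mentioned above.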

            • Cool. I agree. I was wrong.
            • Re:I'm confused (Score:5, Interesting)

              by clem.dickey ( 102292 ) on Monday October 21, 2002 @01:35PM (#4497762)
              (The parent needs to be modded up. He may be an AC, but his information is accurate.)

One problem with TOC is that you are limited to 16K external addresses. Offset "foo" in the TOC example is 16 bits, and the low two are zero. With 64-bit addressing I suppose that drops to 8K externals.

Another characteristic: calling a separately compiled function requires that you load a different TOC. The PC-relative scheme requires that you load one value: the PC; the TOC scheme requires that you load two new values: the new PC and the new TOC.

              On the plus side, TOC makes shared libraries easier to manage because external addresses are bound to a non-shared data area.
              • Actually it is 64K and still 64K in 64bit because you can load from unaligned locations and in 64 bit PPC the instructions are still the same.
                • Actually it is 64K and still 64K in 64bit because you can load from unaligned locations

But the TOC entries are addresses, which are 4 or 8 bytes long. Unless you manage to find addresses which can overlap due to common bit patterns, you have to divide the 64K maximum TOC size (the offset of an lwz instruction is 16 bits long) by 4 or 8.

                  As for alignment, section 6.4.6.1 here [ibm.com] indicates that one response to unaligned integers is an alignment exception. So you can load unaligned, and the PowerPC may fix things up in hardware, but if it doesn't your O/S is expected to simulate the instruction.
But you still get 64 Kbytes of memory. Because a byte is one address, you only get 16*2^10 words or 8*2^10 double words. Anyway, you can load 8 bits at a time using lbz, so you get 64 KB of memory.
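The arithmetic the subthread is circling is straightforward: a 16-bit offset spans a 64 KB window, and the number of TOC entries depends only on the slot size.

```c
/* A 16-bit lwz offset reaches 2^16 bytes of TOC. Divide by the slot
 * size to get how many address entries fit in that window. */
enum {
    TOC_WINDOW_BYTES  = 1 << 16,              /* 65536 bytes          */
    TOC_ENTRIES_32BIT = TOC_WINDOW_BYTES / 4, /* 4-byte addresses     */
    TOC_ENTRIES_64BIT = TOC_WINDOW_BYTES / 8  /* 8-byte addresses     */
};
```

So both posters are right about something: the window really is 64 KB of bytes, but as a table of pointers it holds 16K entries with 32-bit addresses and 8K with 64-bit ones.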
  • by Doktor Memory ( 237313 ) on Monday October 21, 2002 @11:39AM (#4496415) Journal
    Millions of dollars in burned money from VA-Linux, thousands of man-hours invested in slashcode, untold numbers of CPUs and hard drives sacrificed to the cause, and slashdot's editors/maintainers still can't be bothered to put a spellchecker into their story posting system.

    Why, exactly, do they expect people to pay money for this again?
    • can't be bothered to put a spellchecker into their story posting system.

Get yourself a machine to run OS X on and every app written in Cocoa can spell-check every text entry field (including my web browser).

      -Z
1. Don't use extern or static variables.
2. If you are going to use an extern variable in a tight loop, use a local variable instead and assign it back after the loop.
3. Pass the option -mdynamic-no-pic to gcc only if the source ends up in the final executable, because it does not work in a bundle or a dynamic library (or framework).

The AIX ABI/PEF ABI uses a register called the TOC pointer for PIC code, but it is stored with the function reference, so you lose one register if the Darwin ABI goes over to the PEF ABI. You get one more register to play with if you do not use extern or static variables.
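The extern-caching tip above, sketched as C (function and variable names are made up for illustration): read the extern once into a local, keep the tight loop on the local, write back once afterwards.

```c
int total = 0;  /* extern-style global: each direct access pays the
                   base-register-relative addressing cost */

void add_all(const int *v, int n)
{
    int t = total;            /* read the global once            */
    for (int i = 0; i < n; i++)
        t += v[i];            /* loop body touches only the local */
    total = t;                /* write it back once, after the loop */
}
```

The compiler can keep `t` in a register for the whole loop, whereas it may have to re-address `total` through the PIC base on every iteration.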
  • by JQuick ( 411434 ) on Monday October 21, 2002 @11:57AM (#4496607)
    Before jumping to conclusions and flinching lest we be struck by the falling sky, let's take a step back.

    First look at the most crucial benefits of the runtime environment. Mach supports an efficient and flexible framework for multiple memory objects. Objc leverages this by supporting the efficient mapping and unmapping of new bundles.

    You may think of a bundle as a set of related objects in a language like Java, but don't take that analogy too far. The concepts of delegation and protocols usually mean that different bundles of code have clean interfaces that do not require recompilation when one or the other changes. Sometimes even knowledge of each other's types is irrelevant.

    In any case, the best design for objc applications is a collection of separate UI definitions or nib files, and one or more libraries of code which are searched and loaded as needed at runtime.

    Statically linked code is more efficient for some tasks, but in the context of good objc design it does not fit very well. Statically linked text is also more fragile: it must be recompiled more often. It can take more time to initialize and load as well; with late binding and lazy loading, only the sections of text and definitions of objects actually called will be mapped in memory.

    Position independent code is absolutely needed for this kind of flexibility at runtime. The gcc compiler grew up on CISC, on 16-bit or 32-bit architectures. Position independent code had to be relative to something, and the most commonly useful location was always "You are here", the program counter.

    I'm not sure whether a less frequently changed relative address such as a start of bundle address makes more sense for gcc on ppc. In any event, however, I would certainly not be willing to categorize Apple's reliance on position independent code as a bug. By default, use pic.

  • by guuyuk ( 410254 ) on Monday October 21, 2002 @12:41PM (#4497150) Homepage
    It's entirely possible that they are using the m68k ABI to allow really old Classic applications (pre-PowerPC) to continue to run in Classic... I still run into the occasional program that was built for 680X0 machines, even though Apple switched to the PowerPC back in 1995.

    Some of the educational software that a lot of schools use is still built with MS-DOS 5 and Apple System 7.5 in mind. Until you can get some of these developers to move to something a little more modern, you will still have a lot of excess baggage to carry in your OS. Perhaps that is why Apple is moving to systems that won't boot MacOS 9 in January 2003.
    • I remember those days: A4 and A5 worlds, one used for globals in applications while the other was used for globals in resource code (i.e. extensions, control panels, and modules for programs). But Apple never used the PC as the basis for position independence on the 68K or the PPC until Darwin (Mac OS X). The problem with using the PC is that you have to do a branch and store the result whenever you need externs and such. The problem with using a register that holds the value all the time is that you waste a register (68K case: A4 and A5; PPC case: RTOC, or r2).
    • OS X still runs a lot of old 68k software very well under Classic -- surprising, even runs some stuff my old trusty 604 [lowendmac.com] refused to. Unfortunately, a lot of old 68k stuff is hard to find anymore. My mother wishes they'd carbonize Gunshy one of these days, the Classic launch is a bit of a pain.
    • The needs of the Classic runtime have nothing to do with the native ABI. As for targetting System 7.5 in schools... when most of the computers in use are LCIIIs and earlier, you don't have a lot of choice.
    • photoshop 0.63b runs a dream on my dual g4.

      so yep i am fine with this. if they break compatibility with my favourite photoshop i will be pissed.
  • Apple supposedly saved some time using Mach-O because they wouldn't have to rewrite their linker to use CFM/PEF. If we're going to hack Mach-O so it uses the standard PowerPC ABI why not just support PEF/CFM?!

    There are many reasons for this. Not only are there tools already available to handle PEF/CFM, you wouldn't need the whole PEF interpreter w/shim libraries to run PEF Carbon apps!
  • Big deal. (Score:5, Insightful)

    by 0x0d0a ( 568518 ) on Monday October 21, 2002 @06:52PM (#4500335) Journal
    This is mindblowingly unimportant. Can *anyone* think of *any* company that has a perfect ABI? No, because processors evolve, and the ABI has to stay the same. When I write an x86 program on my Linux box, it pushes *all* the arguments onto the stack. Is that the best way to do things? No. Is it done anyway? Yup. Does anyone go into a tizzy about it? No.

    Seriously, the x86 Linux ABI is probably worse off...different (worse) byte alignment from Windows, the abovementioned everything-goes-on-the-stack....
  • It makes logical sense. OS X takes its roots (no pun intended) from UNIX; UNIX grew up on x86 architectures long before Apple started using it, so it only makes sense that the code would be mostly optimised for CISC.
  • portability (Score:5, Interesting)

    by Nomad37 ( 582970 ) on Monday October 21, 2002 @07:39PM (#4500626)
    It seems the real reason Apple did this is to maintain portability. Jobs has said multiple times that 'We like to have options' when asked about future chips in Macs. It's been backed up by the top brass at Apple too (e.g., Infoworld article with VPs). [infoworld.com]

    It seems that they can emulate a pc-register in a risc architecture, but could they (easily) do it the other way around? Perhaps this is the real reason why they kept the abi the way it is: so they could easily port os x to whatever platform they like...

  • GCC 3 (Score:4, Interesting)

    by Anonymous Coward on Monday October 21, 2002 @08:11PM (#4500787)
    Using gcc 3 with OS X Jaguar 10.2.1, I checked this out using gcc's option to produce assembly-language code (gcc -S). It turns out that a function call uses only a single additional instruction (branch to link register). Since Apple has compiled OS X Jaguar with gcc 3, only legacy shared libraries and pre-OS 10.2 applications should be affected by the RISC/CISC problem. This is still a significant performance hit, but I would imagine that it is less than the 10 percent figure given earlier, and also easier to fix.
  • PowerOpen [ic.ac.uk] would be a suitable ABI for any UNIX app that's POSIX compliant. Perhaps it could even be extended to work with cocoa apps?
