
Rosetta 2 is Apple's Key To Making the ARM Transition Less Painful (theverge.com)

At WWDC 2020 earlier this week, Apple announced that it's moving Macs away from Intel processors to its own silicon, based on ARM architecture. To help ease the transition, the company announced Rosetta 2, a translation process that allows users to run apps that contain x86_64 instructions on Apple silicon. The Verge reports: Rosetta 2 essentially "translates" instructions that were written for Intel processors into commands that Apple's chips can understand. Developers won't need to make any changes to their old apps; they'll just work. (The original Rosetta was released in 2006 to facilitate Apple's transition from PowerPC to Intel. Apple has also stated that it will support x86 Macs "for years to come," as far as OS updates are concerned. The company shifted from PowerPC to Intel chips in 2006, but ditched support for the former in 2009; OS X Snow Leopard was Intel-only.) You don't, as a user, interact with Rosetta; it does its work behind the scenes. "Rosetta 2 is mostly there to minimize the impact on end-users and their experience when they buy a new Mac with Apple Silicon," says Angela Yu, founder of the software-development school App Brewery. "If Rosetta 2 does its job, your average user should not notice its existence."

There's one difference you might perceive, though: speed. Programs that ran under the original Rosetta typically ran slower than those running natively on Intel, since the translator needed time to interpret the code. Early benchmarks found that popular PowerPC applications, such as Photoshop and Office, ran at less than half their native speed on the Intel systems. We'll have to wait and see if apps under Rosetta 2 take similar performance hits. But there are a couple of reasons to be optimistic. First, the original Rosetta converted every instruction in real time, as it was executed. Rosetta 2 can convert an application right at installation time, effectively creating an ARM-optimized version of the app before you've opened it. (It can also translate on the fly for apps that can't be translated ahead of time, such as browser, Java, and JavaScript processes, or if it encounters other new code that wasn't translated at install time.) With Rosetta 2 front-loading the bulk of the work, we may see better performance from translated apps.
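The install-time versus on-the-fly distinction described above can be sketched in miniature. This toy translator uses invented mnemonics and a hypothetical one-to-one mapping, purely for illustration; it translates a whole program ahead of time and falls back to per-instruction translation for code it hasn't seen:

```python
# Toy binary translator with a hypothetical one-to-one mapping from
# x86-style mnemonics to ARM-style mnemonics (invented for illustration).
X86_TO_ARM = {
    "mov":  "MOV",
    "add":  "ADD",
    "call": "BL",
    "ret":  "RET",
}

def translate_instruction(insn: str) -> str:
    op, _, operands = insn.partition(" ")
    return f"{X86_TO_ARM[op]} {operands}".strip()

def translate_at_install(program: list[str]) -> dict[str, str]:
    # Rosetta 2-style ahead-of-time pass: translate the whole binary
    # once, before first launch.
    return {insn: translate_instruction(insn) for insn in program}

def execute(program: list[str], cache: dict[str, str]) -> list[str]:
    # At runtime, reuse ahead-of-time translations; translate on the
    # fly for code not seen at install time (e.g. JIT-generated code).
    return [cache.get(insn) or translate_instruction(insn) for insn in program]

app = ["mov eax, 1", "add eax, 2", "ret"]
print(execute(app, translate_at_install(app)))
# ['MOV eax, 1', 'ADD eax, 2', 'RET']
```

The point of the ahead-of-time pass is that the expensive translation work happens once, not on every execution of every instruction.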
The report notes that the engine won't support everything. "It's not compatible with some programs, including virtual machine apps, which you might use to run Windows or another operating system on your Mac, or to test out new software without impacting the rest of your system," reports The Verge. "(You also won't be able to run Windows in Boot Camp mode on ARM Macs. Microsoft only licenses the ARM version of Windows 10 to PC manufacturers.) Rosetta 2 also can't translate kernel extensions, which some programs leverage to perform tasks that macOS doesn't have a native feature for (similar to drivers in Windows)."
This discussion has been archived. No new comments can be posted.


  • "for years to come" (Score:5, Interesting)

    by Narcocide ( 102829 ) on Friday June 26, 2020 @09:28PM (#60232860) Homepage

    translation: you have 2 years

    • by msauve ( 701917 ) on Friday June 26, 2020 @09:56PM (#60232954)
      Yep. They unceremoniously dropped the first Rosetta without warning, destroying a lot of investment into applications. Even though it could have been left in place as an option for those who were willing to use the few MB of disk space it took.
      • by WankerWeasel ( 875277 ) on Saturday June 27, 2020 @11:25AM (#60234530)
        That would have meant developers would have continued to drag their feet. Sorry, at some point you have to force their hand. It's not as if they weren't given MORE than ample time to make the transition.
      • Re: (Score:2, Insightful)

        by NoMoreACs ( 6161580 )

        Yep. They unceremoniously dropped the first Rosetta without warning, destroying a lot of investment into applications. Even though it could have been left in place as an option for those who were willing to use the few MB of disk space it took.

        They dropped the original Rosetta so fast for a few reasons:

1. To push certain major software publishers, notably Adobe and Avid (and to a lesser extent, Microsoft), to release Intel-native versions of key applications (Photoshop, Illustrator, Pro Tools and MS Office, to name a few). This was made all the more urgent by the second reason, below.

2. Performance, or rather lack thereof. The biggest barrier to decent performance for Rosetta was also the one that could never be satisfactorily fixed: endianness. PowerPC (G5,

        • I have no idea what GP and parent are talking about... Rosetta still works fine on Mac OS X 10.4 Tiger and Mac OS X 10.5 Leopard. I think this is PEBKAC, that irresistible urge many users have to update their production machines without thinking. If your rig works, and you are not affected by the bugs they fix, and you do not expect to use the new features (this is the tradeoff), then please let me give you some advice: DON'T UPGRADE. If your stuff works without being on the bleeding edge, so what? Stay the

          • by msauve ( 701917 ) on Saturday June 27, 2020 @02:39PM (#60235304)
            You run into the situation where some new application only runs on a newer OS version, but that OS version _deliberately_ breaks support for older software which was working just fine.
            • You run into the situation where some new application only runs on a newer OS version, but that OS version _deliberately_ breaks support for older software which was working just fine.

              Thanks for not beating me up for being annoyed, thanks for ignoring that. You're a pro.

The problem I see with that is "some new application." That's the problem you don't think it is: that it isn't really a problem. Now, I wouldn't say there will never be any brand new innovation in software that rises to "killer app." But they're going to be fewer and farther between. Everything has been done that can be done by now. I hate this idiom, but there is more than one way to skin a cat. UNI

        • by kriston ( 7886 )

          Computers "thunk" endianness more than you realize. The internet is big endian, while Intel and most ARM platforms are little-endian. Flipping endianness is cheap and hardware accelerated.
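As a concrete illustration of how trivial the operation is, here is a 32-bit byte swap using only the Python standard library; it's the kind of operation hardware can do in a single instruction:

```python
import struct

def swap32(value: int) -> int:
    """Reinterpret a 32-bit integer's bytes in the opposite byte order."""
    # Pack the value big-endian, then unpack those same four bytes as
    # little-endian (the direction of the swap is symmetric).
    return struct.unpack("<I", struct.pack(">I", value))[0]

# 0x12345678 stored big-endian reads as 0x78563412 on a little-endian host.
print(hex(swap32(0x12345678)))  # 0x78563412
```

Network code performs exactly this conversion constantly (big-endian wire format to little-endian hosts), which is why it is cheap and well optimized.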

    • by fermion ( 181285 )
I had no issues with Rosetta. The new machines were fast, so there was no issue with slowdown. By the time Rosetta was taken out, in 10.7 I believe, I had moved to the current apps. The move is due to the same issues as before: speed, heat issues, power. You recall that Apple made the same monstrosity Mac back at the end of the IBM days that they made last year. Back then it was because the IBM chip required liquid cooling. MS is good at getting an OS to work on any piece of junk that falls off the ba
    • ...and no time at all if you dual boot Windows.
      • by micheas ( 231635 )
        Or use docker.
        • by k2r ( 255754 )

You need to improve your understanding of what Docker is and how Docker works.
You can even run Docker containers on a Raspberry Pi.
Docker on macOS has always run in a VM, and so, of course, have the containers.
          There is no reason why you wouldn’t be able to run docker containers on some ARM-Linux using the macOS hypervisor or other virtualization solutions if available.

          I think what you meant to say: docker on Linux running on Intel. We’ll see if that is actually necessary and if there will be solut

          • Things in Docker containers are just jailed processes running on the host. There's more to it than that, but that's the essence of it.

            You will not be running x86-64 docker containers on ARM docker without Apple doing some serious magic with that container and everything inside of it. And, Docker runs in a VM on Mac because the XNU kernel doesn't have hooks necessary for Docker to properly jail processes. Rather than wait for Apple to deliver some implementation of that which was acceptable, they chose to

            • by rl117 ( 110595 )

              Docker on the mac has always been terrible due to the need for VirtualBox.

              I would think sensible people would use ARM containers instead of amd64, where possible. The platform-dependence of docker containers has always been a weakness, but while we were all on amd64 it wasn't quite so apparent. Hopefully this might have benefits beyond the Mac, such as an increasing prevalence of ARM Linux systems, with the consequent improvement in containers for non-amd64 platforms.

        • Or use docker.

          They specifically mentioned Docker support in the Keynote. Start watching (and listening) at 1:26:02.

      • ...and no time at all if you dual boot Windows.

We'll see what happens when they work out an OEM license for Windows 10 ARM (which has x86 support, albeit a bit pokey, and is supposed to get x64 support at the end of this year).

  • I'm out... (Score:3, Interesting)

    by sizzlinkitty ( 1199479 ) on Friday June 26, 2020 @09:50PM (#60232926)

    If Apple goes thru with this, I will stop buying macbook pros for myself and stop my employer from buying them. That would be roughly 4000 purchases every 3 years.

    • Re:I'm out... (Score:4, Insightful)

      by 93 Escort Wagon ( 326346 ) on Friday June 26, 2020 @10:04PM (#60232970)

      I'm playing it by ear - I want to see how the new machines fit with my needs before I make that sort of decision. I suspect I'll be moving away from Mac, though.

      In any case: on general principles, I'm planning to migrate some things out of the Apple ecosystem - that way I can move across OSes with less pain. The biggest pain point is my passwords... I've been just using Apple's Keychain for the last decade plus. I've started the long, tedious transition over to BitWarden. It'll still work if I stick with macOS, but it'll also work should I decide to switch to Linux (or, Lord help me, Windows).

    • If Apple goes thru with this, I will stop buying macbook pros for myself and stop my employer from buying them. That would be roughly 4000 purchases every 3 years.

      Perhaps you could explain your reasoning? Abandoning the OS would be far more difficult for most people than abandoning the underlying CPU architecture.

      • by caseih ( 160668 )

        Why? The applications people are familiar with are available on Windows. MS Office is the main one. My own parents have been using Macs for many years but recently in a volunteer capacity had to use the supplied Windows 10 computers. They found the transition quite easy. In fact my mother remarked that Windows 10 was much more Mac-like than older versions of Windows. The jump from Finder to Explorer was pretty simple (despite the lack of a paned interface in Windows). MS Office seemed about the same to t

It seems Adobe CC and Office 2019 have already been ported to ARM Macs, and most apps are a recompile away.

          The x86 emulation demo was impressive and they could always throw more hardware at the problem, the ARM chip is 1/3 the cost for Apple to produce and consumes a lot less power. Having an x86 code interpreter on chip is what Intel and AMD both do already, why not on ARM?

          • Re: I'm out... (Score:4, Informative)

            by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Saturday June 27, 2020 @11:38AM (#60234582) Homepage Journal

            The x86 emulation demo was impressive and they could always throw more hardware at the problem, the ARM chip is 1/3 the cost for Apple to produce and consumes a lot less power.

Apple didn't produce x86 chips, so it's just "the ARM chip is 1/3 the cost for Apple". But with their piss-poor volumes they could have gone AMD instead, probably saved just as much money (since they wouldn't have to spend money making their own desktop processors), and they'd have retained binary compatibility.

            Apple is sitting on some of the world's largest corporate cash reserves, they aren't taking this step to save money. They're doing it specifically to be incompatible with everyone else, in order to push people further into their walled garden. They just want to benefit from vendor lock-in.

          • Having an x86 code interpreter on chip is what Intel and AMD both do already, why not on ARM?

            In a word: Licensing. That is it.

            But functionally, you are correct (I believe).

        • Why? The applications people are familiar with are available on Windows. MS Office is the main one. My own parents have been using Macs for many years but recently in a volunteer capacity had to use the supplied Windows 10 computers. They found the transition quite easy. In fact my mother remarked that Windows 10 was much more Mac-like than older versions of Windows. The jump from Finder to Explorer was pretty simple (despite the lack of a paned interface in Windows). MS Office seemed about the same to them, other than superficial UI differences. Firefox, Thunderbird are nearly the same too. Plus the use of web-based tools meant the platform didn't really matter. They had no complaints about Windows 10, which surprised me.

          Have you had your parents checked for mental deterioration or memory issues? (I kid; but you get my point)

I work with both Windows (10 and 7, plus several versions of Windows Server) and macOS every single day, and there is simply no comparison in smoothness, ease of use, and features that users can actually use. macOS wins all of those, hands down; particularly if you have other Apple devices in your household. The level of integration and "this is what computers are supposed to be" in macOS, particularly in the pas

    • Re: (Score:3, Insightful)

      by Luckyo ( 1726890 )

It's fairly common knowledge that Apple has been treating MacBooks as a "garbage legacy product that we can't really abandon for image reasons" for the last ten years or so. Desktop is borderline dead in terms of support already, where they're primarily churning out ridiculously priced low-volume items (hello, infamous $1k monitor stand; it's not really hidden any more). They're tiny in their revenue streams, and they're nowhere near as high margin as their main products like iPhones. CPU architecture isn't theirs

    • by k2r ( 255754 )

      Why would you make this your personal vendetta?
      That really sounds like a strange thing to spend time on.

    • by Osgeld ( 1900440 )

that's the problem: Apple doesn't want the pro market anymore; they are too demanding and critical. 1% of a 3% PC market? You and your employer can go stuff yourselves when they unleash the new generation of Chromebook killers and all the bobbleheads in the world go OH FUCK IT'S AN IPAD WITH A KEYBOARD, HERE, HAVE 4 GRAND!

      • Well, it's going to turn out to be a problem for Apple. You can't have the plebe market without the pro market. You can have it for a while, but the plebes need the pros' help to use their devices, make purchasing decisions, etc. And the pros will be saying "well, I don't use it because [baffling reasons the plebes don't understand] so I can't really help you" and they will turn away from it in the end.

        Style matters a lot, but it's not everything.

  • by Mostly a lurker ( 634878 ) on Friday June 26, 2020 @09:50PM (#60232930)

    It remains to be seen how well this emulation works. It reminds me of the DEC Alpha, an excellent architecture that, in my view, would have been more successful if it had never added x86 emulation. The trouble is that developers have much less incentive to migrate to a new architecture when emulation is available. If the performance of the emulation is mediocre, it gives the whole system a bad name.

    • DEC's FX!32 trans/emulation was pretty damned good. On the first run of any x86 code, it ran pure emulation, analyzed the paths through the code and wrote the translated bits into the user's directory. On subsequent runs it used the translated code and emulated any new code as needed.

      Since each user has their own translated binary code, the bits used by each user would run faster and no time was wasted translating unused bits. For example, if I used MS Word's dictionary a lot but didn't use anything from
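The translate-and-cache scheme described above can be sketched in deliberately simplified form (the function bodies and names are stand-ins, not FX!32 internals):

```python
# Toy FX!32-style lazy translation: emulate x86 code the first time a
# block runs, cache a native translation per user, and reuse it on
# later runs.
def emulate(block: str) -> str:
    return f"emulated:{block}"       # slow interpretive path, first run only

def translate(block: str) -> str:
    return f"native:{block}"         # stand-in for emitting native code

cache: dict[str, str] = {}           # per-user translation store

def run_block(block: str) -> str:
    if block not in cache:
        emulate(block)                   # first encounter: interpret...
        cache[block] = translate(block)  # ...then save the translation
    return cache[block]                  # later runs hit the fast path

print(run_block("word_dictionary"))  # native:word_dictionary
```

Code paths that never execute never enter the cache, which is the point the comment makes: no time is wasted translating the parts of Word a given user never touches.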
      • Did fx!32 kill DEC and the Alpha? Hardly. Piss poor marketing/sales and business strategy did.

        I remember the Alpha with fondness. It was the one architecture throughout the '90's that genuinely excited me. I still get the jollies thinking back to playing with NT 4.0 running on an AlphaStation. DEC had a huge opportunity. They were in the right place at the right time, and the marketing droids still managed to fuck it all up. It's one of the great travesties of computing hardware history that they bungled it on such an epic scale.

      • Did fx!32 kill DEC and the Alpha? Hardly. Piss poor marketing/sales and business strategy did.

        Yep. Alpha systems were just too expensive to justify. And PC processors ate their lunch completely as a result.

    • Alpha was nice, a little hot but nice.

    • by Windrip ( 303053 )

      I assume you're talking about the AlphaStations (pizza boxes), since it would make zero sense to offer FX!32 on VMS.
The hardware architecture was pure Digital: no compatibility with other vendors' layouts. Daughter boards had to be designed for horizontal mounting, so only DEC boards could be used. The CD-ROM hardware was a disaster. I tried playing They Might Be Giants' "Apollo 18" in the manner suggested by the artists. It killed the drive. The local DEC office in Tucson was happy to help me replace it. T

    • by dfghjk ( 711126 )

      "It reminds me of the DEC Alpha, an excellent architecture..."

      An "excellent architecture" based on what? Its lack of an integer multiply instruction? Its massive power requirements? Its enormous size? Its utter failure to generate interest among other computer manufacturers?

      Anyone who says that DEC Alpha was great didn't actually know anything about the processors of the day. It was, by far, the most primitive architecture in the industry and only gained notice because it was 64 bit before anyone cared

  • by niknah ( 3494485 ) on Friday June 26, 2020 @09:58PM (#60232960)
Intel has something similar in reverse, libhoudini, which runs ARM code on Intel chips. From my experience with it, there are lots of ifs and buts. Some apps work, some work but certain features in them don't, and some won't start at all. As mentioned in the article, they already know that not everything will run properly.
  • Itanium (Score:3, Informative)

    by backslashdot ( 95548 ) on Friday June 26, 2020 @10:03PM (#60232966)

    Itanium ruined Intel.

It failed spectacularly, and then Intel and the industry were forced to adopt AMD64. That essentially makes most of the core x86_64 instruction set legal to emulate today, because its most important instructions rely on now-expired x86 patents. It will be really difficult for Intel or AMD to sue Apple. It's already difficult to sue to block software emulation, but MIPS did it successfully when it sued Lexra for software-emulating unaligned load and store instructions.

    • by AC-x ( 735297 )

      That essentially today makes most of the core x86_64 legal to emulate because its most important instructions rely on now expired x86 patents. It will be really difficult for Intel or AMD to sue Apple

      Ok, I know software patents are a bit out of control, but has anyone ever been sued for software emulating a CPU instruction set? Wasn't Power PC covered by patents when the original Rosetta was around? Weren't the Intel patents still valid when programs like VirtualPC were emulating x86 on PowerPC macs?

    • That essentially today makes most of the core x86_64 legal to emulate because its most important instructions rely on now expired x86 patents.

      Patents never prevented you from emulating x86. Just ask Transmeta which was very much in the business at the time when the patents were still valid.

      • by dfghjk ( 711126 )

        Why ask incompetent failures their opinions? Transmeta was a running joke among (other) processor design teams at the time. Also, Transmeta was 20 years ago, any "patent" they might have known about would be expired by now.

        In order for a patent to prevent emulation of x86, there must be some aspect of the emulation that unavoidably infringes on a patent. If you want to ask something informative, ask what that patent is and why emulation can't avoid using it.

    • by dfghjk ( 711126 )

      Why would Intel or AMD sue Apple? What would be the justification? How does being "forced" to use "AMD-64" make "x86_64" "legal to emulate"? Why was Intel "forced to AMD-64"?

This is an awfully strained narrative you've got going here. Not clear if it's some kind of lawsuit entitlement or you just want to take potshots at Itanium, but I'd say Intel has done remarkably well in the wake of the Itanium failure, to their credit. Also remember that their x86 architecture was a catastrophe at that time, so not

  • Who remembers the move to x86/Unix, away from RISC chips (I believe)? There was celebration in the digital streets (at least on my particular block of it).
    My friends dreamed that Macs would be able to run all PC software and it would be great. Cross-compatibility, and lollipops and unicorns for everyone. Now it appears that Apple has gotten the market share it wanted and is ready to move back to a new version of the old days.

    I still don't particularly like Macs, and we can go back to the polarized Mac vs PC

    • Who remembers the move to x86/Unix, away from RISC chips (I believe)?

      The PowerPC chips are RISC. But all processors are internally "RISCy" - multi-cycle [x86] instructions are decomposed into multiple micro-ops which are dispatched to the functional units. Meanwhile, x86's variable-length instructions function as a kind of compression, and the x86 instruction decoder is a minuscule percentage of the area of a modern processor, even if you leave aside the cache (which dominates the die in most cases.)

      There was celebration in the digital streets (at least on my particular block of it).
      My friends dreamed that Macs would be able to run all PC software and it would be great.

      They essentially could, if you were willing to dual-boot or virtualize Windo

      • by dfghjk ( 711126 )

        "But all processors are internally "RISCy" - multi-cycle [x86] instructions are decomposed into multiple micro-ops which are dispatched to the functional units."

        "Functional units" are not "RISCy" and modern architectures are no more "RISCy" than they are "CISCy". Also, x86 instructions are no more "multi-cycle" than RISC instructions.

        RISC is a term without meaning today. Originally it was an approach that simplified processor design so that processors could be implemented in programmable logic. Then it e

  • by nateman1352 ( 971364 ) on Friday June 26, 2020 @11:22PM (#60233128)

While it is good that Apple has built Rosetta 2 to help with this transition, it's not going to be nearly as important this time around. The Intel transition was a pretty big lift for a lot of software developers. During the PPC --> Intel transition, most large Mac applications were built with Metrowerks CodeWarrior. CodeWarrior was the first PPC C compiler, and had the ability to produce binaries that ran on both OS X and classic Mac OS. It was the tool of choice for most large OS X applications during that era, though even in the early 2000s most could see the writing on the wall for it. At the same time, the free availability of Xcode killed any further development of CodeWarrior. Many applications had considerable quantities of hand-written PPC assembly code as well. If you were already using Xcode and didn't have any assembly code, it was very easy; but not everyone was so fortunate.

Photoshop had many filters written in hand-optimized assembly. Perhaps strangest of all, Microsoft Office was one of the worst hit: the entire Visual Basic macro interpreter was written in PPC assembly. While the Windows version was x86 assembly, it had a ton of Win32 calls mixed in and was not portable. The first version of Office for Intel Macs didn't have VBA support at all.

  • You have to upgrade your software (hint:$$$).
    • by Jeremi ( 14640 )

      Literally the entire article is about Apple going out of their way with Rosetta 2 to ensure that you won't have to upgrade your software.

  • Management Engine (Score:5, Interesting)

    by Big Bipper ( 1120937 ) on Saturday June 27, 2020 @12:11AM (#60233202)
    I wonder if the Apple chips will include the Management Engine. If not, Apple should have a huge opportunity selling chips to the security and privacy conscious.
    • I wonder if the Apple chips will include the Management Engine. If not, Apple should have a huge opportunity selling chips to the security and privacy conscious.

      *EVERY* processor has a Management Engine. It is an essential component of most. The only remaining possible question is how open someone is with their design of it.

    • by AmiMoJo ( 196126 )

      Apple already has something far worse than the Management Engine in current Macs.

      Their T2 chip is not only a black box containing closed source firmware like the Management Engine, it also stops you repairing your own Mac or recovering data when it fails.

  • Wouldn't it be great if Apple collaborated with Microsoft in developing the x86-64 ARM emulation? The Microsoft Surface Pro X is an aarch64 laptop that runs 32-bit x86 code through emulation just fine but MSFT haven't yet released support for x86-64 to beat Apple's announcement of their x86-64 emulation layer. Indeed, MSFT actively recommends you instead buy a Surface Pro 7 if you plan to use 64-bit applications.

    Wouldn't it be nice if AAPL and MSFT collaborated on a common x86- and x86-64 emulation solutio

    • Would not surprise me if something like that came out of this migration.

    • Wouldn't it be nice if AAPL and MSFT collaborated on a common x86- and x86-64 emulation solution for ARM?

      In short, no. Microsoft doesn't know any more than anyone else about emulating x86. For example, FX!32 was written by Digital. Involving Microsoft only means EEE, incompetence, or both.

  • by swell ( 195815 ) <jabberwock@poetic.com> on Saturday June 27, 2020 @01:26AM (#60233376)

    Having been with Apple in many roles since the Apple ][ (no, not the Apple ][+, the one before it that stored programs and data on an audio cassette), I've survived many upheavals. All were very inconvenient for most users, and some users were lost along the way. In many cases, as Narcocide noted, there was a two year interruption in functionality. Workarounds were the best hope and much productivity was lost over compatibility problems.

    My plan for now is to buy up some serviceable used Intel Macs for parts and backup. Then I'll ride out the two years after which I'll consider a new Mac for certain functions. The problem with my plan is that Apple will come out with some killer app, or some really surprising cross-functionality with other Apple devices or services that will be irresistible.

    Be warned that the Apple Store may then be your only source for applications and also that most important software may be by subscription only (you will never own it).

  • by dwywit ( 1109409 ) on Saturday June 27, 2020 @03:42AM (#60233580)

IBM's AS/400 solved this with dual-layer abstraction. That way a program *binary* could be ported from one model to any other in the AS/400 range. The program did not need re-compiling; the binary's first run triggered a process that created a second, model-specific binary. Google TIMI: technology-independent machine interface.

  • by Joe2020 ( 6760092 ) on Saturday June 27, 2020 @04:12AM (#60233626)

PowerPC is, or was, a RISC design. To translate it to x86 instructions means it had to be translated to a CISC design. This meant that the instruction translator had to figure out how to map the fewer instructions of PowerPC onto the more complex set of x86 instructions. So it either had to use only a few of the x86 CISC instructions, or it had to perform pattern matching or even run more complex algorithms in order to make full use of the x86 instruction set.

    Now it's the reverse case. All the translator needs to do now is to translate the more complex x86 instruction set onto the fewer Arm RISC instructions. The translator will now be able to just map single x86 instructions, even the more complicated ones, onto an Arm RISC instruction or a series of instructions, by using only a translation table. This then becomes so simple that it can be done fast and at installation time.

    Thus, in theory, the transition from x86 to Arm should prove easier and yield better results.
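The one-to-many direction described here (one complex instruction expanding to a short fixed sequence of simpler ones) can be illustrated with a toy lookup table; the mnemonics and expansions below are invented for illustration, not real encodings:

```python
# Hypothetical expansion table: each complex x86-style instruction maps
# to a fixed sequence of simpler ARM-style instructions, so translation
# is a table lookup rather than pattern matching.
EXPANSION = {
    "push": ["SUB sp, sp, #8", "STR {0}, [sp]"],
    "pop":  ["LDR {0}, [sp]", "ADD sp, sp, #8"],
    "inc":  ["ADD {0}, {0}, #1"],
}

def expand(insn: str) -> list[str]:
    """Expand one complex instruction into its simpler equivalents."""
    op, _, operand = insn.partition(" ")
    return [line.format(operand) for line in EXPANSION[op]]

print(expand("push x0"))  # ['SUB x0... -> ['SUB sp, sp, #8', 'STR x0, [sp]']
```

Because each source instruction expands independently, this kind of table-driven pass can run once over a whole binary, which is what makes install-time translation practical.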

As a matter of fact, x86 CISC designs these days are RISC designs "under the hood". An x86 CPU translates its x86 instructions into an internal reduced instruction set (micro-ops). Apple will most likely make use of this knowledge and have designed their translator in exactly the same way the x86 hardware does it.

    • by Megol ( 3135005 )

      So much wrong. Me head huts wrong. Not write more.

    • by dmpot ( 1708950 )

      Thus, in theory, the transition from x86 to Arm should prove easier and yield better results.

I'm afraid this theory is not based on any real-life experience. Sure, you can make up some theoretically possible architectures where it would be the case, but it is definitely not the case with the processors we have today.

The transition from PowerPC to Intel x86 processors was simpler precisely because x86 has a richer set of instructions, and no one forces you to combine many RISC instructions into one complex CISC instruction; because all modern x86 processors run microcode internally, they are as efficient wi

      • ... and no one forces you to combine many RISC instructions in one complex CISC instruction ...

        Save it. This has nothing to do with you being forced. You are trying to make up an excuse and that it's somehow going to make it better. What a rubbish sentiment that is.

        I'm not saying the reverse was perfect, but it is definitely easier to implement and allows for much easier optimisations. And no, it doesn't rely "heavily on internals only available at runtime". If this was the case then the idea of having RISC underneath x86 would never have worked. It works, because it was easy to do, not because it w

      • by dfghjk ( 711126 )

        "because all modern x86 processors run the microcode internally, so they as efficient with simple instructions as RISC processors..."

        What in the world?!?

        "...but doing so is a child play."

        Implementing anything like what Rosetta does is not child's play. I doubt very many who comment on /. are even capable of it.

        "...you do not need to add any extra memory barrier that did not exist in the original code."

        What in the world?!?

        When converting from one processor architecture to another, you have to EMULATE the be

    • As a matter of fact, x86 CISC designs these days are RISC design "under the hood".

      As long as by "these days" you mean "since 1995, when the Am586 came out".

      An x86 CPU translates its x86 instructions into an internal reduced instruction set (aka microcode). Apple will most likely make use of this knowledge and have designed their translator in exactly the same way the x86 hardware does it.

      The x86 hardware does it... with hardware. Rosetta is software. HTH

      • The x86 hardware does it... with hardware. Rosetta is software. HTH

        Rosetta 2 is software, which only has to do the translation once, and it does so at installation time. The x86 hardware has to decode every instruction, every time it fetches one. The software can therefore afford to be slow and even apply costly optimisations. The hardware cannot, and has to decode as fast as possible or risk taking high penalties, even more so with speculative execution.

        This has led to changes in the x86 CISC instructions under the hood. Their timings have changed to reflect the need of the hardware to decode instructions as quickly as possible.

        • by dfghjk ( 711126 )

          It's hard to follow this painfully tortured and fallacious logic. It appears to suggest that Rosetta does something only once that x86 processors have to do over and over, which is somehow supposed to suggest that x86 cannot possibly be good. I can't imagine how uninformed one must be to have these thoughts.

          What's truly amazing, though, is that you claim that many x86 executables aren't "even optimised for a specific CPU". How stupid is this?

          • What's truly amazing, though, is that you claim that many x86 executable aren't "even optimised for a specific CPU". How stupid is this?

            It's true, though. Most x86 code is targeted only at a generic, approximated CPU model that often doesn't even exist. You first have to instruct the compiler to build for a specific CPU, which then means the binary won't run on every CPU. Compile something for "amd64" or "x86_64" and it will run on any 64-bit x86 CPU. Compile it with, e.g., AVX2 instruction support and it won't run everywhere, because not every 64-bit x86 CPU supports AVX2.

            I'm also not saying that x86 wouldn't be good somehow. That's just what you read into it.

    • by dfghjk ( 711126 )

      "This meant that the instruction translator had to figure out how to map the fewer instructions of PowerPC onto the more complex set of x86 instructions."

      False, because PPC had a very complete instruction set. RISC did not mean "fewer instructions" except in some, particularly early, designs. PPC was in no way "reduced" compared to x86.

      As for the rest, it's all nonsense because it's built on false assumptions.

      "As a matter of fact, x86 CISC designs these days are RISC design "under the hood". "

      No they are not.

      • False, because PPC had a very complete instruction set. ...

        No, you only have a bad understanding of RISC and CISC. CISC doesn't mean the instruction set is complete, or that RISC's would be incomplete. Nor does "reduced" mean literally fewer instructions by mere count; it means fewer, simpler types of instructions.

  • With both Microsoft and Apple now providing ARM emulation for x86, the former with their Surface Pro X line, I'm keen to see a new form of benchmark: emulation efficiency.

    I'm keen to see what the percentage hit for various popular applications is on each platform. This will require x86 and ARM native binaries of each application, but it certainly would be interesting to pitch Rosetta against whatever Microsoft call their thing.

  • Nobody calls assembly or machine code instructions "commands" except non-technical journalists. The last thing that used commands was BASIC.

  • So how can Rosetta translate the Intel RdSeed instruction, which returns a full-entropy, SP800-90A & B compliant random number and relies on underlying nondeterministic hardware, when the ARM chip doesn't have that underlying hardware? Watch all your security go down the drain.

  • Oh hey look it's another person who thinks they can topple x86. That's never happened before.
