Rosetta 2 Is Apple's Key To Making the ARM Transition Less Painful (theverge.com) 153
At WWDC 2020 earlier this week, Apple announced that it's moving Macs away from Intel processors to its own silicon, based on ARM architecture. To help ease the transition, the company announced Rosetta 2, a translation process that allows users to run apps that contain x86_64 instructions on Apple silicon. The Verge reports: Rosetta 2 essentially "translates" instructions that were written for Intel processors into commands that Apple's chips can understand. Developers won't need to make any changes to their old apps; they'll just work. (The original Rosetta was released in 2006 to facilitate Apple's transition from PowerPC to Intel. Apple has also stated that it will support x86 Macs "for years to come," as far as OS updates are concerned. The company shifted from PowerPC to Intel chips in 2006, but ditched support for the former in 2009; OS X Snow Leopard was Intel-only.) You don't, as a user, interact with Rosetta; it does its work behind the scenes. "Rosetta 2 is mostly there to minimize the impact on end-users and their experience when they buy a new Mac with Apple Silicon," says Angela Yu, founder of the software-development school App Brewery. "If Rosetta 2 does its job, your average user should not notice its existence."
There's one difference you might perceive, though: speed. Programs that ran under the original Rosetta typically ran slower than those running natively on Intel, since the translator needed time to interpret the code. Early benchmarks found that popular PowerPC applications, such as Photoshop and Office, were running at less than half their native speed on the Intel systems. We'll have to wait and see if apps under Rosetta 2 take similar performance hits. But there are a couple of reasons to be optimistic. First, the original Rosetta converted every instruction in real time, as it executed them. Rosetta 2 can convert an application right at installation time, effectively creating an ARM-optimized version of the app before you've opened it. (It can also translate on the fly for apps that can't be translated ahead of time, such as browser, Java, and JavaScript processes, or if it encounters other new code that wasn't translated at install time.) With Rosetta 2 frontloading the bulk of the work, we may see better performance from translated apps. The report notes that the engine won't support everything. "It's not compatible with some programs, including virtual machine apps, which you might use to run Windows or another operating system on your Mac, or to test out new software without impacting the rest of your system," reports The Verge. "(You also won't be able to run Windows in Boot Camp mode on ARM Macs. Microsoft only licenses the ARM version of Windows 10 to PC manufacturers.) Rosetta 2 also can't translate kernel extensions, which some programs leverage to perform tasks that macOS doesn't have a native feature for (similar to drivers in Windows)."
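To make the difference concrete, here is a minimal C sketch of the two translation paths described above: whole-image translation at install time, plus on-the-fly translation for code that only appears at runtime. Every name in it is hypothetical; this illustrates the idea, not Apple's actual Rosetta interface.

```c
/* Hypothetical sketch, not Apple's API: AOT translation at install time,
 * with a JIT fallback for code that only exists at runtime. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    const char *path;    /* the x86_64 app bundle             */
    bool        has_jit; /* does it generate code at runtime? */
} image_t;

/* Translate the whole binary once, before first launch. */
static void translate_ahead_of_time(const image_t *img) {
    printf("AOT: translating all of %s at install time\n", img->path);
}

/* Translate one block of freshly generated code as execution reaches it. */
static void translate_block_jit(const image_t *img, unsigned long pc) {
    printf("JIT: translating block at 0x%lx in %s\n", pc, img->path);
}

int main(void) {
    image_t word   = { "Word.app",   false }; /* static x86_64 code      */
    image_t chrome = { "Chrome.app", true  }; /* JIT-compiles JavaScript */

    translate_ahead_of_time(&word);    /* pays the whole cost up front   */

    translate_ahead_of_time(&chrome);  /* static parts up front...       */
    if (chrome.has_jit)                /* ...dynamic code on the fly     */
        translate_block_jit(&chrome, 0x1000UL);
    return 0;
}
```

The point of the split is that the expensive work happens once, before first launch, rather than on every instruction the way the original Rosetta did it.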
"for years to come" (Score:5, Interesting)
translation: you have 2 years
Re:"for years to come" (Score:5, Informative)
Re:"for years to come" (Score:4, Informative)
Re: (Score:2, Insightful)
Yep. They unceremoniously dropped the first Rosetta without warning, destroying a lot of investment into applications. Even though it could have been left in place as an option for those who were willing to use the few MB of disk space it took.
They dropped the original Rosetta so fast for a few reasons:
1. To push certain major software publishers, notably Adobe and Avid (and to a lesser extent, Microsoft), to release Intel-native Mac versions of key applications (Photoshop, Illustrator, Pro Tools and MS Office, to name a few). This was made all the more urgent by the second reason, below.
2. Performance, or rather lack thereof. The biggest barrier to decent performance for Rosetta was also the one that could never be satisfactorily fixed: Endianness. PowerPC (G5,
Re: (Score:2)
I have no idea what GP and parent are talking about... Rosetta still works fine on Mac OS X 10.4 Tiger and Mac OS X 10.5 Leopard. I think this is PEBKAC, that irresistible urge many users have to update their production machines without thinking. If your rig works, and you are not affected by the bugs they fix, and you do not expect to use the new features (this is the tradeoff), then please let me give you some advice: DON'T UPGRADE. If your stuff works without being on the bleeding edge, so what? Stay the
Re:"for years to come" (Score:4, Interesting)
Re: (Score:2)
You run into the situation where some new application only runs on a newer OS version, but that OS version _deliberately_ breaks support for older software which was working just fine.
Thanks for not beating me up for being annoyed, thanks for ignoring that. You're a pro.
The problem I see with that is "some new application." That's the problem that you don't think it is, i.e. that it isn't really a problem. Now, I wouldn't say there will never be any brand new innovation in software that rises to "killer app." But they're going to be fewer and fewer and farther and farther between. Everything has been done that can be done by now. I hate this idiom, but there is more than one way to skin a cat. UNI
Re: (Score:3)
Computers "thunk" endianness more than you realize. The internet is big endian, while Intel and most ARM platforms are little-endian. Flipping endianness is cheap and hardware accelerated.
Re: (Score:2)
Re:"for years to come" (Score:5, Insightful)
I had no issues with Rosetta.
Try run it now.
I had moved to the current apps.
Lucky you. Must be nice to be in a position where you can simply buy new software, where all vendors are still around and everything is current.
Re: "for years to come" (Score:2)
Buy? You don't buy a perpetual software license anymore in many cases. You just rent a license to use it that month.
Perpetual rent society.
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
You need to improve your understanding of what Docker is / how Docker works.
You can even run Docker containers on a Raspberry Pi.
Docker on macOS was always running in a VM, and the containers did so too, of course.
There is no reason why you wouldn’t be able to run Docker containers on some ARM Linux using the macOS hypervisor or other virtualization solutions, if available.
I think what you meant to say: docker on Linux running on Intel. We’ll see if that is actually necessary and if there will be solut
Re: (Score:3)
Things in Docker containers are just jailed processes running on the host. There's more to it than that, but that's the essence of it.
You will not be running x86-64 docker containers on ARM docker without Apple doing some serious magic with that container and everything inside of it. And, Docker runs in a VM on Mac because the XNU kernel doesn't have hooks necessary for Docker to properly jail processes. Rather than wait for Apple to deliver some implementation of that which was acceptable, they chose to
Re: (Score:3)
Docker on the Mac has always been terrible due to the need for VirtualBox.
I would think sensible people would use ARM containers instead of amd64, where possible. The platform-dependence of docker containers has always been a weakness, but while we were all on amd64 it wasn't quite so apparent. Hopefully this might have benefits beyond the Mac, such as an increasing prevalence of ARM Linux systems, with the consequent improvement in containers for non-amd64 platforms.
Re: (Score:2)
Or use docker.
They specifically mentioned Docker support in the Keynote. Start watching (and listening) at 1:26:02.
Re: (Score:2)
...and no time at all if you dual boot Windows.
We'll see what happens when they work out an OEM license for Windows 10 on ARM (which has x86 support, albeit a bit pokey, and is supposed to have x64 support at the end of this year).
Re: "for years to come" (Score:2)
Re: (Score:3)
I'm perfectly aware of this.
However, running a "lame duck" operating system is not something I relish doing, as support tends to wither away and patches tend to stop coming.
I'm out... (Score:3, Interesting)
If Apple goes thru with this, I will stop buying macbook pros for myself and stop my employer from buying them. That would be roughly 4000 purchases every 3 years.
Re:I'm out... (Score:4, Insightful)
I'm playing it by ear - I want to see how the new machines fit with my needs before I make that sort of decision. I suspect I'll be moving away from Mac, though.
In any case: on general principles, I'm planning to migrate some things out of the Apple ecosystem - that way I can move across OSes with less pain. The biggest pain point is my passwords... I've been just using Apple's Keychain for the last decade plus. I've started the long, tedious transition over to BitWarden. It'll still work if I stick with macOS, but it'll also work should I decide to switch to Linux (or, Lord help me, Windows).
Re: (Score:2)
If Apple goes thru with this, I will stop buying macbook pros for myself and stop my employer from buying them. That would be roughly 4000 purchases every 3 years.
Perhaps you could explain your reasoning? Abandoning the OS would be far more difficult for most people than abandoning the underlying CPU architecture.
Re: (Score:2)
Why? The applications people are familiar with are available on Windows. MS Office is the main one. My own parents have been using Macs for many years but recently in a volunteer capacity had to use the supplied Windows 10 computers. They found the transition quite easy. In fact my mother remarked that Windows 10 was much more Mac-like than older versions of Windows. The jump from Finder to Explorer was pretty simple (despite the lack of a paned interface in Windows). MS Office seemed about the same to t
Re: I'm out... (Score:2)
It seems Adobe CC and Office 2019 already ported themselves to the Mac ARM and most apps are a compile away.
The x86 emulation demo was impressive and they could always throw more hardware at the problem, the ARM chip is 1/3 the cost for Apple to produce and consumes a lot less power. Having an x86 code interpreter on chip is what Intel and AMD both do already, why not on ARM?
Re: I'm out... (Score:4, Informative)
The x86 emulation demo was impressive and they could always throw more hardware at the problem, the ARM chip is 1/3 the cost for Apple to produce and consumes a lot less power.
Apple didn't produce x86 chips, so it's just "the ARM chip is 1/3 the cost for Apple". Although with their piss-poor volumes they could have gone AMD instead, probably saved just as much money (since they wouldn't have to spend money making their own desktop processors), and they'd have retained binary compatibility.
Apple is sitting on some of the world's largest corporate cash reserves, they aren't taking this step to save money. They're doing it specifically to be incompatible with everyone else, in order to push people further into their walled garden. They just want to benefit from vendor lock-in.
Re: (Score:2)
Having an x86 code interpreter on chip is what Intel and AMD both do already, why not on ARM?
In a word: Licensing. That is it.
But functionally, you are correct (I believe).
Re: (Score:2)
Why? The applications people are familiar with are available on Windows. MS Office is the main one. My own parents have been using Macs for many years but recently in a volunteer capacity had to use the supplied Windows 10 computers. They found the transition quite easy. In fact my mother remarked that Windows 10 was much more Mac-like than older versions of Windows. The jump from Finder to Explorer was pretty simple (despite the lack of a paned interface in Windows). MS Office seemed about the same to them, other than superficial UI differences. Firefox, Thunderbird are nearly the same too. Plus the use of web-based tools meant the platform didn't really matter. They had no complaints about Windows 10, which surprised me.
Have you had your parents checked for mental deterioration or memory issues? (I kid; but you get my point)
I work with both Windows (10 and 7, plus several versions of Windows Server) and macOS every single day, and there is simply no comparison in smoothness, ease of use, and features that users can actually use. macOS wins all of those, hands down; particularly if you have other Apple devices in your household. The level of integration and "this is what computers are supposed to be" in macOS, particularly in the pas
Re: (Score:2)
Most people also do not give a crap about their operating system, as long as it runs the apps they want to use, is as invisible as possible, and leaves them alone. Which is why Windows is so shite: annoyingly right in your face, a pest, and on purpose. It's also why M$ died on the phone; too much competition for an anal-retentive operating system.
Re: (Score:2)
Re: (Score:2)
There are some pretty obvious downsides that have been made public already. No Boot Camp. No native VMs. Having to re-buy all the existing applications. That's on top of all the other negative changes that have been made to Mac laptops over the last several years, like non-upgradeable memory and SSDs, the Touch Bar, the butterfly keyboard (which they reversed after THREE YEARS of it being apparent what a clusterfuck it is). From the software side the OS is pretty damn locked down, you can't install third-party drivers anymore, and almost all the Unixy software is badly out of date. Basically, for the last several years, when Apple announces a new macOS my question is: what are they taking away, not what are they adding. Because I cannot think of a single useful thing they have added in years, and I'm speaking as a developer. I think they added some API back in High Sierra that I used once?? Dunno.
There were times when Apple felt it had to compete on price with PCs; that is not happening right now. You get to pay out the ass for all of what I said. And for what tangible benefit? OP was talking as a corporate IT department, where the benefits of macOS are slight. As a company, everything runs in Chrome or is available on Windows.
Nobody cares if ARM is slightly more efficient than a Ryzen laptop.
No amd64 docker.
Re:uh, no bootcamp? (Score:4, Insightful)
Re:uh, no bootcamp? (Score:5, Insightful)
Hoping whatever AU plugins you use come over as easily too.
Re: (Score:3, Insightful)
It's fairly common knowledge that Apple has been treating MacBooks as "garbage legacy product that we can't really abandon for image reasons" for the last ten years or so. Desktop is borderline dead in terms of support already, where they're primarily churning out ridiculously priced low-volume items (hello, infamous $1k monitor stand; it's not really hidden any more). They're tiny in their revenue streams, and they're nowhere near as high margin as their main products like iPhones. CPU architecture isn't theirs
Re: (Score:3)
Why would you make this your personal vendetta?
That really sounds like a strange thing to spend time on.
Re: (Score:2)
that's the problem: Apple doesn't want the pro market anymore; they are too demanding and critical. 1% of a 3% PC market. You and your employer can go stuff yourselves when they unleash the new generation of Chromebook killers and all the bobbleheads in the world go OH FUCK ITS AN IPAD WITH A KEYBOARD, HERE, HAVE 4 GRAND!
Re: (Score:2)
Well, it's going to turn out to be a problem for Apple. You can't have the plebe market without the pro market. You can have it for a while, but the plebes need the pros' help to use their devices, make purchasing decisions, etc. And the pros will be saying "well, I don't use it because [baffling reasons the plebes don't understand] so I can't really help you" and they will turn away from it in the end.
Style matters a lot, but it's not everything.
Reminiscent of the DEC Alpha (Score:4, Insightful)
It remains to be seen how well this emulation works. It reminds me of the DEC Alpha, an excellent architecture that, in my view, would have been more successful if it had never added x86 emulation. The trouble is that developers have much less incentive to migrate to a new architecture when emulation is available. If the performance of the emulation is mediocre, it gives the whole system a bad name.
Re: Reminiscent of the DEC Alpha (Score:2)
Since each user has their own translated binary code, the bits used by each user would run faster and no time was wasted translating unused bits. For example, if I used MS Word's dictionary a lot but didn't use anything from
Re: (Score:2)
Did fx!32 kill DEC and the Alpha? Hardly. Piss poor marketing/sales and business strategy did.
I remember the Alpha with fondness. It was the one architecture throughout the '90's that genuinely excited me. I still get the jollies thinking back to playing with NT 4.0 running on an AlphaStation. DEC had a huge opportunity. They were in the right place at the right time, and the marketing droids still managed to fuck it all up. It's one of the great travesties of computing hardware history that they bungled it on such an epic scale.
Re: (Score:2)
Did fx!32 kill DEC and the Alpha? Hardly. Piss poor marketing/sales and business strategy did.
Yep. Alpha systems were just too expensive to justify. And PC processors ate their lunch completely as a result.
Re: (Score:2)
Alpha was nice, a little hot but nice.
Re: (Score:3)
I assume you're talking about the AlphaStations (pizza boxes), since it would make zero sense to offer FX!32 on VMS.
The hardware architecture was pure Digital: no compatibility with other vendors' layouts. Daughterboards had to be designed for horizontal mounting, so only DEC boards could be used. The CD-ROM hardware was a disaster. I tried playing They Might Be Giants' "Apollo 18" in the manner suggested by the artists. It killed the drive. The local DEC office in Tucson was happy to help me replace it. T
Re: (Score:2)
"It reminds me of the DEC Alpha, an excellent architecture..."
An "excellent architecture" based on what? Its lack of an integer multiply instruction? Its massive power requirements? Its enormous size? Its utter failure to generate interest among other computer manufacturers?
Anyone who says that DEC Alpha was great didn't actually know anything about the processors of the day. It was, by far, the most primitive architecture in the industry and only gained notice because it was 64 bit before anyone cared
Like Intel's libhoudini, but reverse (Score:4, Interesting)
Itanium (Score:3, Informative)
Itanium ruined Intel.
It failed spectacularly and then Intel and the industry were forced to AMD-64. That essentially today makes most of the core x86_64 legal to emulate because its most important instructions rely on now expired x86 patents. It will be really difficult for Intel or AMD to sue Apple. It's already difficult to sue to block software emulation... but MIPS has done it successfully when they sued Lexra for software emulating unaligned load and store instructions.
Re: (Score:2)
That essentially today makes most of the core x86_64 legal to emulate because its most important instructions rely on now expired x86 patents. It will be really difficult for Intel or AMD to sue Apple
Ok, I know software patents are a bit out of control, but has anyone ever been sued for software-emulating a CPU instruction set? Wasn't PowerPC covered by patents when the original Rosetta was around? Weren't the Intel patents still valid when programs like VirtualPC were emulating x86 on PowerPC Macs?
Re: (Score:2)
That essentially today makes most of the core x86_64 legal to emulate because its most important instructions rely on now expired x86 patents.
Patents never prevented you from emulating x86. Just ask Transmeta which was very much in the business at the time when the patents were still valid.
Re: (Score:2)
Why ask incompetent failures their opinions? Transmeta was a running joke among (other) processor design teams at the time. Also, Transmeta was 20 years ago, any "patent" they might have known about would be expired by now.
In order for a patent to prevent emulation of x86, there must be some aspect of the emulation that unavoidably infringes on a patent. If you want to ask something informative, ask what that patent is and why emulation can't avoid using it.
Re: (Score:2)
Why would Intel or AMD sue Apple? What would be the justification? How does being "forced" to use "AMD-64" make "x86_64" "legal to emulate"? Why was Intel "forced to AMD-64"?
This is an awfully strained narrative you've got going here. Not clear if it's some kind of lawsuit entitlement or you just want to take potshots at Itanium, but I'd say Intel has done remarkably well in the wake of the Itanium failure, to their credit. Also remember that their x86 architecture was a catastrophe at that time, so not
Re: (Score:2)
That's for manufacturing the actual silicon.
When it comes to emulation, there are quite a few x86 and x64 emulators for various instruction sets on the market. Just like there are plenty of emulators for ARM.
Re: (Score:2)
When it comes to emulation, there are quite a few x86 and x64 emulators for various instruction sets on the market. Just like there are plenty of emulators for ARM.
There are plenty of emulators for MIPS, too. Even qemu does it.
I remember when it was the opposite (Score:2)
Who remembers the move to x86/Unix, away from RISC chips (I believe)? There was celebration in the digital streets (at least on my particular block of it).
My friends dreamed that Macs would be able to run all PC software and it would be great. Cross-compatibility, and lollipops and unicorns for everyone. Now it appears that Apple has gotten the market share it wanted and is ready to move back to a new version of the old days.
I still don't particularly like Macs, and we can go back to the polarized Mac vs PC
Re: (Score:2)
Who remembers the move to x86/Unix, away from RISC chips (I believe)?
The PowerPC chips are RISC. But all processors are internally "RISCy" - multi-cycle [x86] instructions are decomposed into multiple micro-ops which are dispatched to the functional units. Meanwhile, x86's variable-length instructions function as a kind of compression, and the x86 instruction decoder is a minuscule percentage of the area of a modern processor, even if you leave aside the cache (which dominates the die in most cases.)
There was celebration in the digital streets (at least on my particular block of it).
My friends dreamed that Macs would be able to run all PC software and it would be great.
They essentially could, if you were willing to dual-boot or virtualize Windo
Re: (Score:2)
"But all processors are internally "RISCy" - multi-cycle [x86] instructions are decomposed into multiple micro-ops which are dispatched to the functional units."
"Functional units" are not "RISCy" and modern architectures are no more "RISCy" than they are "CISCy". Also, x86 instructions are no more "multi-cycle" than RISC instructions.
RISC is a term without meaning today. Originally it was an approach that simplified processor design so that processors could be implemented in programmable logic. Then it e
Not As Important This Time (Score:4, Interesting)
While it is good that Apple has built Rosetta 2 to help with this transition, it's not going to be nearly as important this time around. The Intel transition was a pretty big lift for a lot of software developers. During the PPC --> Intel transition most large Mac applications were built with Metrowerks CodeWarrior. CodeWarrior was the first PPC C compiler, and had the ability to produce binaries that ran on both OS X and classic Mac OS. It was the tool of choice for most large OS X applications during that era, though even in the early 2000s most could see the writing on the wall for it. At the same time, the free availability of Xcode killed any further development of CodeWarrior. Many applications had considerable quantities of hand-written PPC assembly code as well. If you were already using Xcode and didn't have any assembly code, it was very easy, but not everyone was so fortunate.
Photoshop had many filters written in hand-optimized assembly. Perhaps most strange was that Microsoft Office was one of the worst hit. The entire Visual Basic macro interpreter was written in PPC assembly. While the Windows version was x86 assembly, it had a ton of Win32 calls mixed in and was not portable. The first version of Office for Intel Macs didn't have VBA support at all.
cash grab (Score:2)
Re: (Score:3)
Literally the entire article is about Apple going out of their way with Rosetta 2 to ensure that you won't have to upgrade your software.
Management Engine (Score:5, Interesting)
Re: (Score:2)
I wonder if the Apple chips will include the Management Engine. If not, Apple should have a huge opportunity selling chips to the security and privacy conscious.
*EVERY* processor has a Management Engine. It is an essential component of most. The only remaining possible question is how open someone is with their design of it.
Re: (Score:2)
Apple already has something far worse than the Management Engine in current Macs.
Their T2 chip is not only a black box containing closed source firmware like the Management Engine, it also stops you repairing your own Mac or recovering data when it fails.
Wouldn't it be great to collaborate with Microsoft (Score:2)
Wouldn't it be great if Apple collaborated with Microsoft in developing the x86-64 ARM emulation? The Microsoft Surface Pro X is an aarch64 laptop that runs 32-bit x86 code through emulation just fine but MSFT haven't yet released support for x86-64 to beat Apple's announcement of their x86-64 emulation layer. Indeed, MSFT actively recommends you instead buy a Surface Pro 7 if you plan to use 64-bit applications.
Wouldn't it be nice if AAPL and MSFT collaborated on a common x86- and x86-64 emulation solutio
Re: (Score:2)
Would not surprise me if something like that came out of this migration.
Re: (Score:2)
Wouldn't it be nice if AAPL and MSFT collaborated on a common x86- and x86-64 emulation solution for ARM?
In short, no. Microsoft doesn't know any more than anyone else about emulating x86. For example, FX!32 was written by Digital. Involving Microsoft only means EEE, incompetence, or both.
best-laid schemes o' mice an' men gang aft agley (Score:5, Interesting)
Having been with Apple in many roles since the Apple ][ (no, not the Apple ][+, the one before it that stored programs and data on an audio cassette), I've survived many upheavals. All were very inconvenient for most users, and some users were lost along the way. In many cases, as Narcocide noted, there was a two year interruption in functionality. Workarounds were the best hope and much productivity was lost over compatibility problems.
My plan for now is to buy up some serviceable used Intel Macs for parts and backup. Then I'll ride out the two years after which I'll consider a new Mac for certain functions. The problem with my plan is that Apple will come out with some killer app, or some really surprising cross-functionality with other Apple devices or services that will be irresistible.
Be warned that the Apple Store may then be your only source for applications and also that most important software may be by subscription only (you will never own it).
Re:best-laid schemes o' mice an' men gang aft agle (Score:4, Insightful)
Be warned that the Apple Store may then be your only source for applications and also that most important software may be by subscription only (you will never own it).
Or it may not.
Re: (Score:3)
Curious, IBM did this long ago (Score:3)
Dual-layer abstraction. That way a program *binary* could be ported from one model to any other in the AS/400 range. The program did not need re-compiling; the binary's first run triggered a process that created a second, model-specific binary. Google TIMI (Technology Independent Machine Interface).
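A rough C sketch of that first-run step, with made-up program and model names: the technology-independent binary stays portable, and a model-specific binary is generated and cached the first time it runs on a given machine.

```c
/* Made-up names throughout; the shape is: the portable binary stays
 * portable, and a model-specific binary is generated and cached on
 * first run. */
#include <stdbool.h>
#include <stdio.h>

/* Pretend check for a cached native image (always misses in this sketch). */
static bool native_cache_exists(const char *prog, const char *model) {
    (void)prog; (void)model;
    return false;
}

static void generate_native(const char *prog, const char *model) {
    printf("first run: translating %s for %s and caching it\n", prog, model);
}

int main(void) {
    const char *prog  = "payroll.timi"; /* technology-independent binary */
    const char *model = "9406-720";     /* hypothetical AS/400 model     */

    if (!native_cache_exists(prog, model))
        generate_native(prog, model);   /* happens once per model        */
    printf("running cached native image of %s\n", prog);
    return 0;
}
```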
Why it's going to be better than before (Score:4, Interesting)
PowerPC is, or was, a RISC design. Translating it to x86 instructions means it had to be translated to a CISC design. This meant that the instruction translator had to figure out how to map the fewer instruction types of PowerPC onto the more complex set of x86 instructions. So it either had to use only a few of the x86 CISC instructions, or it had to perform pattern matching or even run more complex algorithms in order to make full use of the x86 instruction set.
Now it's the reverse case. All the translator needs to do now is to translate the more complex x86 instruction set onto the fewer Arm RISC instructions. The translator will now be able to just map single x86 instructions, even the more complicated ones, onto an Arm RISC instruction or a series of instructions, by using only a translation table. This then becomes so simple that it can be done fast and at installation time.
Thus, in theory, the transition from x86 to Arm should prove easier and yield better results.
As a matter of fact, x86 CISC designs these days are RISC designs "under the hood". An x86 CPU translates its x86 instructions into an internal reduced instruction set (aka microcode). Apple will most likely make use of this knowledge and have designed their translator in exactly the same way the x86 hardware does it.
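A toy, table-driven C sketch of the one-to-many mapping this comment describes; the rules are invented strings, and a real translator would decode machine code and deal with flags, memory ordering and register allocation, none of which this does.

```c
/* Toy table-driven "translator": invented textual rules only, mapping one
 * x86-ish instruction to one or more ARM64-ish instructions. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *x86;      /* source instruction (as text, for the toy) */
    const char *arm64[3]; /* up to three target instructions           */
} rule_t;

static const rule_t table[] = {
    /* 1:1 mappings */
    { "mov eax, ebx", { "mov w0, w1" } },
    { "add eax, 1",   { "adds w0, w0, #1" } }, /* adds: sets flags like x86 add */
    /* 1:3 mapping: an x86 read-modify-write on memory becomes load/add/store */
    { "add [mem], 1", { "ldr w2, [x0]", "add w2, w2, #1", "str w2, [x0]" } },
};

static void translate(const char *insn) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
        if (strcmp(table[i].x86, insn) == 0) {
            for (int j = 0; j < 3 && table[i].arm64[j]; j++)
                printf("    %s\n", table[i].arm64[j]);
            return;
        }
    }
    printf("    ; no rule for \"%s\"\n", insn);
}

int main(void) {
    const char *prog[] = { "mov eax, ebx", "add eax, 1", "add [mem], 1" };
    for (size_t i = 0; i < sizeof prog / sizeof prog[0]; i++) {
        printf("%s ->\n", prog[i]);
        translate(prog[i]);
    }
    return 0;
}
```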
Re: (Score:2)
So much wrong. Me head hurts. Not write more.
Re: (Score:2)
Thus, in theory, the transition from x86 to Arm should prove easier and yield better results.
I'm afraid this theory is not based on any real-life experience. Sure, you can make up some theoretically possible architectures where it would be the case, but it is definitely not the case with processors that we have today.
The transition from PowerPC to Intel x86 processors was simpler exactly because x86 has a richer set of commands, and no one forces you to combine many RISC instructions in one complex CISC instruction, because all modern x86 processors run the microcode internally, so they are as efficient wi
Re: (Score:2)
... and no one forces you to combine many RISC instructions in one complex CISC instruction ...
Save it. This has nothing to do with you being forced. You are trying to make up an excuse and that it's somehow going to make it better. What a rubbish sentiment that is.
I'm not saying the reverse was perfect, but it is definitely easier to implement and allows for much easier optimisations. And no, it doesn't rely "heavily on internals only available at runtime". If this was the case then the idea of having RISC underneath x86 would never have worked. It works, because it was easy to do, not because it w
Re: (Score:2)
"because all modern x86 processors run the microcode internally, so they as efficient with simple instructions as RISC processors..."
What in the world?!?
"...but doing so is a child play."
Implementing anything like what Rosetta does is not child's play. I doubt very many who comment on /. are even capable of it.
"...you do not need to add any extra memory barrier that did not exist in the original code."
What in the world?!?
When converting from one processor architecture to another, you have to EMULATE the be
Re: (Score:2)
As a matter of fact, x86 CISC designs these days are RISC designs "under the hood".
As long as by "these days" you mean "since 1995, when the Am586 came out".
An x86 CPU translates its x86 instructions into an internal reduced instruction set (aka microcode). Apple will most likely make use of this knowledge and have designed their translator in exactly the same way the x86 hardware does it.
The x86 hardware does it... with hardware. Rosetta is software. HTH
Re: (Score:2)
The x86 hardware does it... with hardware. Rosetta is software. HTH
Rosetta 2 is software, which only has to do it once, and it does so at installation time. The x86 hardware has to decode it every time it fetches an instruction. Thus the software can be slow and even apply costly optimisations. The hardware cannot, and has to decode as fast as possible or risk taking high penalties, even more so for speculative execution.
This has led to changes in the x86 CISC instructions under the hood. Their timings have changed to reflect the need of the hardware to decode instructions a
Re: (Score:2)
It's hard to follow this painfully tortured and fallacious logic. It appears to suggest that Rosetta does something only once that x86 processors have to do over and over, which somehow suggests that x86 cannot possibly be good somehow. I can't imagine how uninformed one must be to have these thoughts.
What's truly amazing, though, is that you claim that many x86 executables aren't "even optimised for a specific CPU". How stupid is this?
Re: (Score:2)
What's truly amazing, though, is that you claim that many x86 executables aren't "even optimised for a specific CPU". How stupid is this?
It's true, though. Most x86 code is only targeted at a generic, approximated CPU model that often doesn't exist. You first have to instruct compilers to build for a specific CPU, which then means it won't run on every CPU. Compile something for "amd64" or "x86_64" and it will run on any 64-bit x86 CPU. Compile it with, e.g., AVX2 instruction support and it won't run everywhere, because not every 64-bit x86 CPU supports AVX2.
I'm also not saying that x86 wouldn't be good somehow. It's just what you read int
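A small C example of the dispatch pattern that follows from this: build for the generic x86_64 baseline, then pick a faster path at runtime only if the CPU reports the feature. The builtins are real GCC/Clang ones (x86 only); the two work functions are stand-ins.

```c
/* Real GCC/Clang builtins (x86 only); the two paths are stand-ins.
 * The binary itself targets the generic x86_64 baseline. */
#include <stdio.h>

static void work_baseline(void) { puts("generic x86_64 path"); }
static void work_avx2(void)     { puts("AVX2 path");           }

int main(void) {
    __builtin_cpu_init();                 /* populate CPU feature flags */
    if (__builtin_cpu_supports("avx2"))
        work_avx2();                      /* only where AVX2 exists     */
    else
        work_baseline();                  /* runs on any 64-bit x86     */
    return 0;
}
```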
Re: (Score:3)
"This meant that the instruction translator had to figure out how to map the fewer instructions of PowerPC onto the more complex set of x86 instructions."
False, because PPC had a very complete instruction set. RISC did not mean "fewer instructions" except in some, particularly early, designs. PPC was in no way "reduced" compared to x86.
As for the rest, it's all nonsense because it's built on false assumptions.
"As a matter of fact, x86 CISC designs these days are RISC design "under the hood". "
No they are
Re: (Score:2)
False, because PPC had a very complete instruction set. ...
No, you just have a bad understanding of RISC and CISC. CISC doesn't mean it's complete, or that RISC would be incomplete. Nor does "reduced" or "fewer" instructions mean there are literally fewer instructions by mere count; it means there are fewer types of instructions.
Keen to compare emulation technologies (Score:2)
With both Microsoft and Apple now providing x86 emulation on ARM, the former with their Surface Pro X line, I'm keen to see a new form of benchmark: emulation efficiency.
I'm keen to see what the percentage hit for various popular applications is on each platform. This will require that we have x86 and ARM native binaries for each application, but it certainly would be interesting to pit Rosetta against whatever Microsoft call their thing.
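The metric itself is simple arithmetic; a C sketch with made-up timings:

```c
/* Made-up timings; the metric is just the relative slowdown of the
 * translated binary versus the same app built natively. */
#include <stdio.h>

int main(void) {
    double native_secs     = 10.0;  /* hypothetical native ARM run     */
    double translated_secs = 12.5;  /* same workload through emulation */
    double hit = (translated_secs - native_secs) / native_secs * 100.0;
    printf("emulation hit: %.1f%%\n", hit);  /* prints 25.0%           */
    return 0;
}
```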
"Commands"? (Score:2)
Nobody calls assembly or machine code instructions "commands" except non-technical journalists. The last thing that used commands was BASIC
Uh oh! (Score:2)
So how can Rosetta translate the Intel RdSeed instruction, which returns a full-entropy, SP800-90A & B compliant random number and relies on underlying nondeterministic hardware, when the ARM chip doesn't have that underlying hardware? Watch all your security go down the drain.
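For reference, here is what calling RDSEED looks like on the x86 side, using Intel's real intrinsic (build with -mrdseed on GCC/Clang); the retry-on-failure contract is part of what a translator would have to reproduce, with ARMv8.5's optional RNDR being the closest hardware analogue:

```c
/* Intel's real RDSEED intrinsic; build with -mrdseed on GCC/Clang.
 * RDSEED reports failure when the entropy conditioner is drained, so
 * callers must loop; a translator has to preserve exactly that contract. */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    unsigned long long seed;
    while (!_rdseed64_step(&seed))  /* returns 0 on failure: retry */
        ;
    printf("hardware seed: %llu\n", seed);
    return 0;
}
```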
History repeating. (Score:2)
Re: Will be OK for most (Score:2)
Seems the wrong way to do it, if it's really true that ARM can outperform x86 on the same silicon process.
Re: (Score:2)
LOL, this is meaningless. Modern architectures decouple the instruction set from the functional units that do the work. Your observation about CISC instructions affects only what happens at the decode stage; it says nothing about functional units and their ability to perform "operations".
Thank goodness you "understand the instruction sets" though.
Re: (Score:2)
Ever since the Pentium era, Intel chips are RISC as well, with a CISC interpreter (the software your OS occasionally uploads into your CPU).
This is like saying "Google Translate is good at doing English to French; therefore it should also be good at doing English to Chinese." (It's really not, unless you think "Carefully slide" is an adequate translation of "Caution: slippery".) English to French is easy; they're part of the same language family. English to Chinese is hard. Intel's RISC is entirely designed to run x86; everything in the CISC instruction set maps to minimal numbers of instructions in the RISC set. ARM is not designed to run x86; a
Re: thread performance scales non-linearly (Score:2)
If you want to compile x86, you just take LLVM, translate x86 to IR, translate IR to ARM. And guess what: if you start with badly optimised x86 code, you end up with optimised ARM. If x86 ran out of registers, ARM won't. With 30 GPRs it can move plenty
Re: (Score:2)
Spoken just like a person who understands nothing about the process but saw an Apple presentation. How does x86 code "run out of registers"? Does a call to regmalloc() fail?
Re: (Score:2)
RISC that supports variable-size operations, supports condition codes, has complicated internal instructions, uses variable-length internal instructions, requires complications in register handling, ....
No interpretation, not RISC, just using an appropriate internal architecture to execute x86 code.
Re: (Score:3)
"Ever since the Pentium era, Intel chips are RISC as well with a CISC interpreter (the software your OS occasionally upload in your CPU)."
This is stupid even for a layman. The rest of the comment gets worse.
Re: thread performance scales non-linearly (Score:5, Insightful)
Basically, same as ARM servers, but with "Apple gets to leverage iphone/ipad CPUs". Dog slow, but good enough for basic browser and email "experience" with GPU accelerated graphics.
That’s why ARM will never have a significant place in the realm of supercomputers. Oh no, look, an ARM-based system is currently number one. How could that happen? You can’t be wrong, can you?
https://www.top500.org/news/ja... [top500.org]
Re: thread performance scales non-linearly (Score:5, Insightful)
That's why ARM will never have a significant place in the realm of supercomputers. Oh no, look, an ARM-based system is currently number one. How could that happen? You can't be wrong, can you?
He might not be right, but neither are you; supercomputer and desktop performance are wildly different.
The Fujitsu A64FX processor replaces the Fujitsu SPARC64 XIfx. It isn't now, nor will that CPU ever in any way be, a substitute for Intel's mobile processors, for much the same reason the SPARC64 XIfx didn't dominate the laptop market.
Fujitsu have a few things: an interconnect (Tofu), a very fast, wide vector FPU (SVE), some on-chip networking tech, and a bunch of whacky RAS crap. They basically need a core to marshal data from the interconnect through the local network to the FPU. They probably got fed up putting so much work into their own core and tooling, and figured it was cheaper and faster to buy off the shelf.
Supercomputer chips have very very different performance requirements from desktop ones. There aren't any x86 CPU[*] supercomputers: there are only x86+GPU, where the job of the x86 is basically to feed the large floating point units on the GPU. Supercomputers don't need immense amounts of single threaded straight line performance on irregular code. If your problem isn't parallel, you won't be running it on a supercomputer after all. A machine that works well as a supercomputer chip can utterly stink at running GMail's javascript mush at a decent speed.
Take the Sunway SW26010, for example (currently #4; it was #1 for a while). It's a bit Cell-like in architecture: it has many, many slow (1.4GHz), slightly wide, simple and efficient cores for floating point, which only have access to scratchpad memory, and a slightly different core to manage them. Good for a supercomputer (and it had a decent peak/max ratio), useless in a laptop.
So TL;DR just because there's an ARM based supercomputer doesn't mean that ARM desktops and laptops would be any good. On the other hand that's not to say the ARM chip Apple has won't be a decent desktop chip, it's just that use in supercomputers isn't really relevant.
[*] There are some pure x86 ones, but they use the Xeon Phi, which is a big x86-powered accelerator, not a standard CPU. In fact it fits the model of everything else: weedy cores feeding massive FPUs. I went through the top 30. Some of them look like they're CPU-only, but if you find the institution, it invariably mentions GPUs.
Re: (Score:2)
Because supercomputers are massively parallel. Those machines have hundreds of thousands of individual CPU cores.
Apple's ARM CPUs currently only have two high-performance cores. It looks good on benchmarks because single-threaded performance is decent, but for general-purpose use they fall way behind quad-core CPUs (of course, their single-thread advantage is not 2x the competition).
Presumably Apple will be releasing some new chips for their mid and high end Macs with 8+ cores. Performance will depend on how we
Re: thread performance scales non-linearly (Score:2)
Re: (Score:2)
Supercomputers are about parallelization. It's why they prefer GPUs to CPUs, even though ARM can run circles around a GPU in performance per core. Modern shader units are effectively very simple CPU cores, and they absolutely crush everything else for simple parallel tasks, because where you can fit a couple of ARM cores or a single x64 one, you can fit a hundred GPU cores.
So if your claim is true that this is a definitional feature of a desktop/laptop CPU, then Apple has made a terrible mistake. It should have
It’s Worldwide Developers Conference 2020 (Score:2)
So many articles are about Apple. You’ll see a repeating pattern once you have been in IT for a few years.
Re:Enough Apple already! (Score:4, Informative)
Seriously, scroll back through the last several pages. Slashdot has become Apple PR central.
It's almost like Apple had an event that is relevant to nerds or something and Slashdot is a site that aggregates news from nerd sites which attend these events.
Seriously if you want more politics go read infowars or some shit like that. Hearing about actual technology discussed on Slashdot is a refreshing change from the usual garbage, and I don't even like Apple.
Re: (Score:3)
Consumers should hold the power, not mega corporations.
The power that consumers have been given in the past has only shown that they don't want it. They don't care, or have given up when confronted with the complexity. Corporations like Apple and Microsoft make their money because the vast majority of consumers want to be shielded from it and do not want to know all there is to know about it.
Re: (Score:2)
Lots of people seem to think they're CPU-bound. Personally, I'm clinging to the last good version of Microsoft Word, or at least the last one I was willing to pay for, which is 32-bit and a lot slimmer than what came after it. I also need VMware to run Windows Quicken, at least until it's ported to ARM Windows, Mac Quicken gets its act together, or Greeks reckon time by the Calends.
Re: (Score:2)
Office for Mac 2019 is the 64-bit version you don't have to subscribe to.