Unpatchable Vulnerability in Apple Chip Leaks Secret Encryption Keys (arstechnica.com)
A newly discovered vulnerability baked into Apple's M-series of chips allows attackers to extract secret keys from Macs when they perform widely used cryptographic operations, academic researchers have revealed in a paper published Thursday. From a report: The flaw -- a side channel allowing end-to-end key extractions when Apple chips run implementations of widely used cryptographic protocols -- can't be patched directly because it stems from the microarchitectural design of the silicon itself. Instead, it can only be mitigated by building defenses into third-party cryptographic software that could drastically degrade M-series performance when executing cryptographic operations, particularly on the earlier M1 and M2 generations. The vulnerability can be exploited when the targeted cryptographic operation and the malicious application with normal user system privileges run on the same CPU cluster.
The threat resides in the chips' data memory-dependent prefetcher, a hardware optimization that predicts the memory addresses of data that running code is likely to access in the near future. By loading the contents into the CPU cache before it's actually needed, the DMP, as the feature is abbreviated, reduces latency between the main memory and the CPU, a common bottleneck in modern computing. DMPs are a relatively new phenomenon found only in M-series chips and Intel's 13th-generation Raptor Lake microarchitecture, although older forms of prefetchers have been common for years. Security experts have long known that classical prefetchers open a side channel that malicious processes can probe to obtain secret key material from cryptographic operations. This vulnerability is the result of the prefetchers making predictions based on previous access patterns, which can create changes in state that attackers can exploit to leak information. In response, cryptographic engineers have devised constant-time programming, an approach that ensures that all operations take the same amount of time to complete, regardless of their operands. It does this by keeping code free of secret-dependent memory accesses or structures.
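The constant-time defense described above can be illustrated with a table lookup. This is a generic sketch, not code from the paper, and the function names are my own: the first version's load address depends on a secret, while the second touches every entry and selects the wanted byte with a mask, so the sequence of memory accesses is the same for every secret value.

```c
#include <stdint.h>

/* Secret-dependent lookup: the address loaded depends on 'secret', so a
 * cache-probing attacker (or a prefetcher watching access patterns) can
 * learn which entry was touched. */
uint8_t lookup_leaky(const uint8_t table[256], uint8_t secret) {
    return table[secret];
}

/* Constant-time variant: read every entry and keep only the wanted byte
 * via a mask, so the addresses accessed never depend on the secret. */
uint8_t lookup_ct(const uint8_t table[256], uint8_t secret) {
    uint8_t result = 0;
    for (uint32_t i = 0; i < 256; i++) {
        /* diff is 0 only when i == secret; (diff - 1) then wraps to
         * 0xFFFFFFFF, making mask 0xFF; otherwise mask is 0x00.
         * No branch, so timing and access pattern stay uniform. */
        uint32_t diff = i ^ (uint32_t)secret;
        uint8_t mask = (uint8_t)((diff - 1u) >> 24);
        result |= table[i] & mask;
    }
    return result;
}
```

Real cryptographic libraries use the same masking idea throughout; the point of the article is that the M-series DMP can undermine even code written this carefully.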
Re: (Score:3)
Recently, there was a report of unknown registers in an apple chip that exposed encryption. Unknown as in apple didn't even have an explanation of their use.
Re: (Score:3)
Recently, there was a report of unknown registers in an apple chip that exposed encryption. Unknown as in apple didn't even have an explanation of their use.
That would be a backdoor. As in "they were not allowed to give the explanation they doubtlessly have". One reason I actually trust FOSS software encryption mechanisms a lot more than hardware. The exception is block ciphers, because there is no way to place backdoors in those or leak keys. They can only be made weak, but I doubt even the evil part of the NSA would risk that for AES or could really have done it.
Re: (Score:3)
The backdoor in the GPU chiplet?
You have to poke a specific series of 256 bytes into a specific undocumented address in 64-bit memory space and then you get access to extra instructions to do anything you want on the SoC.
Turn the iPhone into a listening device, read Tucker Carlson's Signal messages - whatever.
It's so well constructed that they can't claim it's an accident.
Kaspersky spent IIRC four years reverse engineering it after they were targeted by no such apple.
Re: (Score:2)
Mysterious Apple SoC Feature Exploited to Hack Kaspersky Employee iPhones [securityweek.com]
Re: (Score:1)
It is not a remote vulnerability. The bad guy has to be running an app on the same device.
I rarely share my laptop with random people I don't know while I'm encrypting data.
Re:In other words... (Score:5, Insightful)
And the "encrypting data" could be your SSL handshake with your bank.
Re: (Score:2)
Yes but are the two the same thing at once? The app could be JavaScript, and the data it is extracting could be your SSL handshake. But is it really, and what can we do with this info in the required time?
The problem with all these side channel attacks, to do anything useful you either need to know intricate details of the machine which you can only know if you're already an admin on it, or you need to exfiltrate all available memory (not something you can do without detection) and then trawl through it afterwards.
Re: (Score:2)
The problem with all these side channel attacks, to do anything useful you either need to know intricate details of the machine which you can only know if you're already an admin on it, or you need to exfiltrate all available memory (not something you can do without detection) and then trawl through it afterwards.
No, not quite.
In worst-case (for the attacker) scenarios, this is true.
But there are stochastic methods to get you far under brute-force locations of good bits.
Re: (Score:2)
But there are stochastic methods to get you far under brute-force locations of good bits.
How far? Give it to me in terms of time taken to achieve a successful attack, or in terms of attack space size. Because here we are, 5+ years after a litany of unpatchable side channel attacks announced in everything from the latest Intel CPUs to little crappy ARM microprocessors, and yet not only are we not seeing widespread exploitation, there have literally been zero confirmed attacks, and no demonstrations outside of a lab environment of this for your average PC.
Handwaving saying "yes but you can optimise it" isn't enough. We need to know if it is optimisable to the point of actually being a viable attack method against which we need to protect ourselves, and right now I've seen no evidence that *any* of the side channel attacks so far has gotten to that point.
Re: (Score:2)
How far? Give it to me in terms of time taken to achieve a successful attack, or in terms of attack space size.
Minutes.
Because here we are, 5+ years after a litany of unpatchable side channel attacks announced in everything from the latest Intel CPUs to little crappy ARM microprocessors, and yet not only are we not seeing widespread exploitation, there have literally been zero confirmed attacks, and no demonstrations outside of a lab environment of this for your average PC.
That's because mitigation was successful.
Handwaving saying "yes but you can optimise it" isn't enough. We need to know if it is optimisable to the point of actually being a viable attack method against which we need to protect ourselves, and right now I've seen no evidence that *any* of the side channel attacks so far has gotten to that point.
These attacks are all very viable until they're mitigated.
POCs are real code running on real hardware. Quit trying to pretend like "running in a lab" means something here.
The fact that you live in a world protected by mitigations doesn't mean the dangers no longer exist.
Re: (Score:1)
I am not sure about this particular issue, but the "app" you speak of could be a bit of JavaScript on a web page you are visiting. And the "encrypting data" could be your SSL handshake with your bank.
In theory, yes. In practice - Meltdown and Spectre have been out for 6 years. Anyone seen them (or other speculative execution bugs) actually being exploited in the wild? Serious question.
Re: (Score:2)
I am not sure about this particular issue, but the "app" you speak of could be a bit of JavaScript on a web page you are visiting.
No. JavaScript doesn't execute with normal user permissions and can't execute arbitrary machine code.
A browser plugin might work, but not JS code running from a website.
Re:In other words... (Score:4, Insightful)
Do you run JavaScript or WebAssembly in your browser that comes from websites you do not fully trust? I am not saying that is enough for the attack here, and I am not saying you do that (others might), but there have been side-channel attacks in the past where it was enough. Unless you are very careful, it is unfortunately easy today to run software from potential attackers on your device without really realizing it.
Re: (Score:2)
Exactly this, Apple devices are almost always single user so this has a lot less impact than processors which are being used to run virtual machine instances for multiple different customers.
Re: (Score:2)
Probably best to stick it in a faraday bag.
Re: In other words... (Score:2)
ACPI allows remote power-on, if so configured in the BIOS/UEFI. :)
But your ethernet cable needs to be a thin one to fit through the Faraday cage
Re: (Score:2)
There are always edge cases though. Some web page running WebAssembly in Safari, or someone running VMs and Docker/Kubernetes containers.
As it stands, it may not be a direct, easy to use exploit, but combined with other stuff, it might be something that can allow remote intrusions.
Re: (Score:2)
What about unprivileged applications you're testing out?
You don't get to downplay this while saying shit like Meltdown is the end of Intel.
In the end, one thing is true- every CPU from every manufacturer is going to succumb to many of these side-channel attacks. It's a cat and mouse game that nobody will ever win, only keep ahead of.
Make your peace with it.
Re: (Score:3)
A back door is useless once discovered as it gets blocked in some way or it gets used by people who weren't intended to be able to slip through. Designing something clever enough to be exploitable, but practically impossible to detect is far more improbable than designers making a mistake when dealing with something incredibly complicated.
The NSA doesn't need something like this anyway when they can scoop up communications between a computer and the res
Re: (Score:2)
Not necessarily. One backdoor placement technique is to use people of limited insight to implement security mechanisms and then have a close look. More expensive, but basically impossible to find out from the backdoor alone that it was placed. This can also take the form of security reviewers (which you need for such a design or it will be bad) not pointing out flaws they see.
Side note: That $5 wrench does not help against protocols with perfect forward secrecy, which even TLS has these days. If the target
Re: (Score:2)
I wouldn't write it off just yet. Spectre was supposed to be hard to exploit in real life according to Intel, but then someone made it work in Javascript when you visit a website (or load a dodgy ad - make sure you have uBlock installed).
This kind of flaw is useful to the NSA etc. because they can use it to remotely hack iPhones in other countries. They can't attack the leaders of those countries with a wrench, but they can get malware onto their iPhones.
Re:Academic Attack (Score:5, Insightful)
Since the attack requires a process running with normal user permissions, and then a lot of CPU resources...
That set of conditions is common to pretty much all attacks that exploit speculation. So that argument won't save Apple here.
Then again, Apple and its customers never hesitate to embrace a double standard, so maybe it's all good.
Re: (Score:2)
That really demonstrates only one thing: The attackers found easier to exploit attack vectors. It does not mean that Spectre is hard to exploit, just "harder". And it says pretty bad things about the general state of IT security that nobody found it necessary to exploit Spectre.
Re: (Score:2)
That really demonstrates only one thing: The attackers found easier to exploit attack vectors. It does not mean that Spectre is hard to exploit, just "harder". And it says pretty bad things about the general state of IT security that nobody found it necessary to exploit Spectre.
I mean, not really from a fundamental security standpoint: Spectre, as well as this new Apple attack, require userland permissions. Once an attacker has gained those permissions, it's often game over already.
It's not the "state of IT security," it's that exploiting Spectre requires user access already, which means the device is already compromised. Extracting encryption keys from a compromised device doesn't require advanced processor side-channel exploits, just copying files and keystrokes.
The fact
Re: (Score:2)
Spectre, as well as this new Apple attack, require userland permissions. Once an attacker has gained those permissions, it's often game over already
I see that as a pretty sad and pathetic state of affairs. Attacks like these should, at the very least, require root permissions. On hardened Unix and Unix-like installations, that is typically the case. Obviously, forget about that under OSX, or, worse, Windows.
Re: (Score:2)
I am aware of that. We are talking about stealing keys from the system though.
Re: (Score:2)
Apple devices tend to be single user. Most macs don't have more than one account and it's even rarer for multiple accounts to be logged in at the same time, and ios devices don't even have a concept of multiple users.
If you were to get code execution on a mac, chances are it would either be as root or as the same user who's using the machine, so you'd have access to their processes and data anyway.
Such a vulnerability would be more serious on a server that was hosting multiple virtual machines for different customers.
Re: (Score:3)
Apple devices tend to run code from all kinds of people, including randos on the fucking internet through your web browser.
Single-user only has meaning in one context- and it doesn't apply to anyone alive right now- and that's single-author of the code running on your machine.
Re: (Score:2)
Nothing is single user anymore. With how much stuff a web app has available, including gigs of RAM, persistent storage, CPU, it is more than many machines had as bare metal hardware not too long ago.
Even a "single user" OS still has contexts: The web browser context should be separated as a different "user" just for safety reasons, otherwise, if code escapes the web browser. Most devices wind up with a locked down root, or admin context, so that is also a second "user". Which means at minimum, a device
Re: (Score:2)
AMD fanbois are just as bad, and utilize the exact same argument to hand-wave away side-channels that impact AMD parts.
Re: (Score:3)
Attacks only get better over time. And while this one is probably academic at this time, it is a _practical_ academic attack, not a theoretical one. The distance to a real-world usable attack is not large.
Re: (Score:2)
It also appears Spectre was a more wide-reaching and easier to use exploit than this one.
Re: (Score:2)
Quoting the authors,
Specifically, we find that any value loaded from memory is a candidate for being dereferenced (literally!). This allows us to sidestep many of Augury's limitations and demonstrate end-to-end attacks on real constant-time code
The only real defense is constant-time programming, much like some of the Spectre side-channels that can't be easily mitigated against.
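The quoted finding is the crux: classical constant-time rules police which *addresses* a program accesses, while the DMP also reacts to which *values* it loads. A hedged sketch of the distinction follows; the names are mine, and nothing here performs an actual attack, which requires careful cache timing on affected hardware.

```c
#include <stdint.h>
#include <stddef.h>

#define BUF_LEN 64
static uint64_t victim_buffer[BUF_LEN];

/* Constant-time by the classical definition: the same instructions run
 * and the same addresses are written no matter what 'secret' is. Only
 * the stored VALUES depend on the secret. A classic prefetcher ignores
 * stored values; per the researchers, the M-series DMP may treat a
 * stored value that looks like a valid pointer as an address and
 * prefetch it, giving the value a secret-dependent cache footprint. */
void victim_mix(uint64_t attacker_input, uint64_t secret) {
    for (size_t i = 0; i < BUF_LEN; i++) {
        victim_buffer[i] = attacker_input ^ secret;
    }
}
```

The attacker's job in a GoFetch-style attack is then to choose `attacker_input` so that the mixed value becomes pointer-like exactly when a guess about the secret is correct, and to detect the resulting prefetch through cache timing.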
Re: (Score:3)
You are correct, they require something to use a fair amount of CPU. They're still vulnerabilities though, and they are major. Is there a high likelihood of actual random-targeted exploitation? Not really. It just means that the risk profile for things you may have considered sandboxed before is higher.
Running ssh while going to a website? Someone could have just taken the symmetric key for your session. Is that terribly useful? Pro
Re: (Score:2)
... In the real world I don't see a good use case for this. ...
A tool like https://github.com/n0fate/chai... [github.com] will work on M processors, maybe.
In the past I've used it to dump the certificate along with the key installed by the company on my work MacBook Pro. Then I would make a second, clean installation (dual boot) of the OS without the company spyware, but still have access to all company resources since I had re-imported the certificate.
This worked nicely on MacBooks with Intel CPUs, but once I migrated to an M1 one I noticed it can't dump the key for the cert anymore
Re: (Score:2)
Regardless, a certificate is a file stored on disk (in this case it is in a keychain db file) so you don't need to use the side-channel attack to grab it. As I mentioned, good ol' fashioned copying of files is all that's needed here. For someone
Re: (Score:2)
I am not familiar with how it is implemented, but I suspect the key is kept on the T2 security chip. You can send data to T2 to sign/verify it, but you can't export the key normally. With this attack it might be possible to extract the private key from T2.
That sucks, but points to something I keep saying. (Score:5, Interesting)
I really feel like we've taken the complexity of computer systems and networks past the point where it's possible to engineer any of it without it all containing serious flaws/bugs/vulnerabilities.
When I say this to many I.T. people, they just shrug it off or make snarky comments about the field just needing to get some better-trained/educated workers.
But in recent years, we've seen this move to bake security directly into hardware that's not feasible to just swap out when bugs are found. Either that, or at least it requires vendor-specific firmware upgrades (Intel trusted-platform tech, for example) and these updates are intrusive (require hard reboots and often following extra steps to click through dialog boxes, etc). And networks are getting to the point where the hardware is treated as a disposable part of your annual maintenance agreement, with people running lots of vulnerable gear because someone stopped paying for the ability to upgrade it.
Nobody can wrap their heads around any of this stuff anymore. They just throw things out there and see what breaks in production.
This is very much not a new issue (Score:2)
TEMPEST has been around a long time. [giac.org]
It's not popular because when you take the threat into consideration, it turns you into a paranoid person. But it is what it is.
Re: (Score:2)
TEMPEST has been around a long time. [giac.org]
It's not popular because when you take the threat into consideration, it turns you into a paranoid person. But it is what it is.
Let’s be real here. TEMPEST isn’t popular because it tends to turn your Bendgate-thin product into a solid brick.
Great for sturdiness, but you’re probably not going to get many sales bragging about your new smartphone shoulder straps laser-welded to the chassis for better weight distribution.
Re: (Score:2)
Heh. But people should still know that this complexity and risk they complain about already exists anyway...and actually doing something real about it will require some shoulder work at the gym.
Re: (Score:2)
>I really feel like we've taken the complexity of computer systems and networks past the point where it's possible to engineer any of it without it all containing serious flaws/bugs/vulnerabilities.
I disagree in the context of this sort of vulnerability.
It is entirely possible to build on-silicon cryptographic hardware with resistance to side channels both remote and local, fault injection, cryptanalysis and many other classes of attack. I know this because it's my job. It takes a combination of overlap
Re: (Score:2)
But it still happened, which means the failure was in management, not on the techy side, and you have to consider that. I think the previous commenter was right.
Apparently these guys didn't use any sort of tooling to look for these kind of "rookie" mistakes, or it wasn't any good.
Re: (Score:2)
I completely agree. In particular that the CS/IT field has really forgotten about KISS or never really understood it. Complexity is rising, and events like the full compromise of MS Azure and O365 last year show that even companies like MS are not capable of keeping up with rising requirements anymore. Well, MS never was able to do good engineering, but they are not the only example.
So while better education would be a real requirement, it is with the _designers_ (not the workers) and it must be making it cl
Re: (Score:3)
The CS/IT field has given up KISS for "it's good enough". A lot of the dev teams I interacted with tended to have no care about tech debt, refactoring, or whatnot. All that mattered was getting their Jira tickets closed, regardless of whatever stood in the way, be it security, reliability, or whatnot. If the house of cards fell, that's for the next people. And because devs were often offshored, there was usually a high turnover rate (100% annually), since the good overseas devs would find better work elsewhere,
Re: (Score:2)
I think it is also a cultural failure. Other engineering communities have standards and ethics and a responsibility to society. The IT field seems to have none of that, and one reason may be that it was (is?) too easy to get into it. The ACM tried for a long time with the "ACM Code of Ethics", but even CS students approaching the end of their studies usually have never heard of it. Obviously, programmers, admins, etc. that are not academically educated will have heard of it even less. In addition, ther
Re: (Score:2)
In the 1990s, there was some vestige of this. Sysadmins had the keys to the city and could burn a company to the ground in just a few commands. However, with the offshoring pushes of the 2000s led by Carly Fiorina (IIRC), what happened is that IT was shifted to the absolute lowest, bargain-basement workers who, just because of how it was done, had no ethical standards in place (if they had any standards, work would be shifted elsewhere.) Because of this, IT and code development never caught up to other fields
Re: (Score:2)
Indeed. Well, the damage done and the threat to prosperity and, in fact, survival becomes larger and larger. At some point IT will get drastic regulation, because nothing else can help anymore. The EU is slowly starting with KRITIS, which is a good thing, but obviously not enough.
Re: (Score:2)
I recall, long ago, a detailed study of how likely fixing a bug would be to introduce a new bug. Their conclusion was that this is not a fixed ratio; the more lines of code in a program, the more likely each bug fix would introduce a new bug. At 1 million lines of code, they estimated that on average, each bug fix introduced 1.2 new bugs.
(Windows, at the time, was something like 100 million lines of code.)
Gives you a bit of perspective on why Microsoft is sometimes reluctant to fix things.
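A toy model makes the quoted 1.2 figure concrete. The model is my own simplification, not from the study: if each fix spawns r new bugs on average, an initial count of b bugs becomes roughly b·r^n after n rounds of "fix everything", so with r at or above 1 the count never shrinks no matter how many fixes land.

```c
/* Toy model (hypothetical beyond the 1.2 figure quoted above):
 * expected bug count after 'rounds' passes of fixing every known bug,
 * when each individual fix introduces new_bugs_per_fix new bugs. */
double bugs_after_rounds(double initial_bugs, double new_bugs_per_fix, int rounds) {
    double bugs = initial_bugs;
    for (int i = 0; i < rounds; i++) {
        /* every current bug gets fixed; each fix spawns new ones */
        bugs *= new_bugs_per_fix;
    }
    return bugs;
}
```

At the claimed rate of 1.2, 100 bugs become 120 after one pass and roughly 144 after two, which is one way to read Microsoft's reluctance.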
Re: (Score:2)
But in recent years, we've seen this move to bake security directly into hardware that's not feasible to just swap out when bugs are found. Either that, or at least it requires vendor-specific firmware upgrades (Intel trusted-platform tech, for example) and these updates are intrusive (require hard reboots and often following extra steps to click through dialog boxes, etc).
After reading this, it tends to raise a question; Did the automotive industry take their cues from the IT industry, or did the IT industry take their cues from the automotive industry?
Either way, it explains why being a tech in either industry is becoming more of a pain in the ass than a profession.
Re: (Score:2)
I really feel like we've taken the complexity of computer systems and networks past the point where it's possible to engineer any of it without it all containing serious flaws/bugs/vulnerabilities.
The solution to any excessive complexity issue is to reduce the complexity. There's a lot of crap in the world of IT that really shouldn't be there and is just more fluff to please idiots. See also UEFI (a complete OS environment in ROM that replaced a simple subroutine whose sole job was to find and load some other program to load the real OS) / Secure Boot (a vulnerable system in and of itself that used to be a simple phy
Re: (Score:2)
The solution to any excessive complexity issue is to reduce the complexity. There's a lot of crap in the world of IT that really shouldn't be there and is just more fluff to please idiots.
That is the core of the problem. Many CS types still think that creating complexity and then handling it makes them "real men", when in fact it just marks them as bad at designing things. Finding simple, robust solutions is much, much harder, and takes much, much longer than just creating a complex mess. "Move fast and break things." is an abject failure and a fundamental failure to understand how solid engineering works. The only thing it allows you to do is make a lot of money at the expense of society, a
Re: (Score:2)
The issue here is that the CPU doesn't implement the ABI correctly. ARM defined what is supposed to happen, but Apple screwed up the implementation. The spec is perfectly secure, it's just a design flaw, one that Intel also made.
It has nothing to do with the security features of the CPU.
Re: (Score:2)
I really feel like we've taken the complexity of computer systems and networks past the point where it's possible to engineer any of it without it all containing serious flaws/bugs/vulnerabilities.
This is an unflattering view of your personal experiences. There are ways to manage complexity. The fact that corporations don't want to pay for the person that is properly trained to manage the complexity is on the corporations, not humanity. Humans are capable of building things vastly more complex than we currently build.
Re: (Score:2)
Humans are capable of building things vastly more complex than we currently build.
There is no evidence for that and there is a lot of evidence that we actually cannot. It seems quite likely that some things we currently build are already outside of what we can build to be reliable, secure and maintainable.
Re: (Score:1)
... or make snarky comments about the field just needing to get some better-trained/educated workers
It's always interesting to me that the folks who like to pat themselves on the back the loudest for being geniuses can't seem to understand basic concepts.
First, by definition, technology IS getting more complex. Can't deny that.
But, more importantly, the pool of "better-trained/educated workers" is constant. Better training and education requires that the individual has a certain minimum level of native intelligence/aptitude -- and only a small percentage of humanity possesses that as well as the op
Re: (Score:2)
Indeed. Well said.
Not a good time for Apple (Score:2)
Seems they have (again) been lazy, arrogant and stupid.
Re: (Score:3)
One of these days, some CPU in existence will not be made by people who are lazy, arrogant, and stupid, but we're a long fucking way away from that.
Re: (Score:2)
Well, Apple at least pretends to be better than others. You are right that this is not the case.
Re: (Score:2)
I don't think you're right that Apple is different in this regard, nor are their fanbois. I find AMD fanbois just as laughably ridiculous, as were AMD's corporate announcements during the start of the Spectre era.
Re: (Score:3)
So they're the Boeing of the tech world?
Re: (Score:2)
Seems they have (again) been lazy, arrogant and stupid.
For implementing the same non-issue of a bug that is in every other modern CPU? You are on a roll today. Normally I see your comments occasionally and think, oh that's silly. But it's like every post you made today has been an effort to be more stupid than the last.
What's going on at home? Is everything okay?
Re: (Score:2)
Bring back Steve Jobs. Oh wait... :(
The Crippling Parallels We Live. (Score:2)
..it can only be mitigated by building defenses into third-party cryptographic software that could drastically degrade M-series performance..
Soo, you’re saying it’ll be pretty easy to play Spot the Fed(erally Protected) fully patched M-series in the future.
It’ll be the one crippled with a touch of bureaucracy. For Security’s sake of course.