
Unpatchable Vulnerability in Apple Chip Leaks Secret Encryption Keys (arstechnica.com) 85

A newly discovered vulnerability baked into Apple's M-series of chips allows attackers to extract secret keys from Macs when they perform widely used cryptographic operations, academic researchers have revealed in a paper published Thursday. From a report: The flaw -- a side channel allowing end-to-end key extractions when Apple chips run implementations of widely used cryptographic protocols -- can't be patched directly because it stems from the microarchitectural design of the silicon itself. Instead, it can only be mitigated by building defenses into third-party cryptographic software that could drastically degrade M-series performance when executing cryptographic operations, particularly on the earlier M1 and M2 generations. The vulnerability can be exploited when the targeted cryptographic operation and the malicious application with normal user system privileges run on the same CPU cluster.

The threat resides in the chips' data memory-dependent prefetcher, a hardware optimization that predicts the memory addresses of data that running code is likely to access in the near future. By loading the contents into the CPU cache before it's actually needed, the DMP, as the feature is abbreviated, reduces latency between the main memory and the CPU, a common bottleneck in modern computing. DMPs are a relatively new phenomenon found only in M-series chips and Intel's 13th-generation Raptor Lake microarchitecture, although older forms of prefetchers have been common for years. Security experts have long known that classical prefetchers open a side channel that malicious processes can probe to obtain secret key material from cryptographic operations. This vulnerability is the result of the prefetchers making predictions based on previous access patterns, which can create changes in state that attackers can exploit to leak information. In response, cryptographic engineers have devised constant-time programming, an approach that ensures that all operations take the same amount of time to complete, regardless of their operands. It does this by keeping code free of secret-dependent memory accesses or structures.
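As a sketch of the constant-time discipline described above (illustrative C of my own, not code from the paper or from any Apple library), compare a byte-comparison routine whose running time depends on secret data with a branch-free version that always does the same work:

```c
#include <stdint.h>
#include <stddef.h>

/* Leaky: runtime depends on where the first mismatch occurs,
 * so an attacker timing this learns the position of the difference. */
int leaky_compare(const uint8_t *a, const uint8_t *b, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i]) return 0;   /* early exit leaks position */
    return 1;
}

/* Constant-time: touches every byte, no secret-dependent branch.
 * Differences are accumulated with XOR/OR, checked once at the end. */
int ct_compare(const uint8_t *a, const uint8_t *b, size_t n) {
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];          /* nonzero iff any byte differs */
    return diff == 0;
}
```

The DMP vulnerability is notable precisely because it can leak from code that already follows this discipline: the prefetcher acts on the data values being processed, not just on the timing of the code path taken.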

  • Since the attack requires a process running with normal user permissions, and then a lot of CPU resources... to extract crypto keys. In the real world I don't see a good use case for this. Hackers with user permissions will just try to escalate to root or kernel permissions and copy keys to your Bitcoin wallet the ol' fashioned way: copy the file and keylog the password. It seems like this attack is academic... definitely possible, should be patched and fixed in future versions, but won't affect 99.999999
    • Re:Academic Attack (Score:5, Insightful)

      by Tailhook ( 98486 ) on Thursday March 21, 2024 @01:05PM (#64334199)

      Since the attack requires a process running with normal user permissions, and then a lot of CPU resources...

      That set of conditions is common to pretty much all attacks that exploit speculation. So that argument won't save Apple here.

      Then again, Apple and its customers never hesitate to embrace a double standard, so maybe it's all good.

      • Yes, and those speculative attacks are really hard to use. Can you give me an example of the 2017 Spectre bug ever being used in the wild? AFAIK, there are no real world examples of Spectre being used by a real threat actor. Just pentesting tools that leverage it under ideal circumstances.
        • by gweihir ( 88907 )

          That really demonstrates only one thing: The attackers found easier to exploit attack vectors. It does not mean that Spectre is hard to exploit, just "harder". And it says pretty bad things about the general state of IT security that nobody found it necessary to exploit Spectre.

          • That really demonstrates only one thing: The attackers found easier to exploit attack vectors. It does not mean that Spectre is hard to exploit, just "harder". And it says pretty bad things about the general state of IT security that nobody found it necessary to exploit Spectre.

            I mean, not really from a fundamental security standpoint: Spectre, as well as this new Apple attack, require userland permissions. Once an attacker has gained those permissions, it's often game over already.

            It's not the "state of IT security," it's that exploiting Spectre requires user access already, which means the device is already compromised. Extracting encryption keys from a compromised device doesn't require advanced processor side-channel exploits, just copying files and keystrokes.

            The fact

            • by gweihir ( 88907 )

              Spectre, as well as this new Apple attack, require userland permissions. Once an attacker has gained those permissions, it's often game over already

              I see that as a pretty sad and pathetic state of affairs. Attacks like these should, at the very least, require root permissions. On hardened Unix and Unix-like installations, that is typically the case. Obviously, forget about that under OSX, or, worse, Windows.

              • You're right from a theoretical computing standpoint: obviously, root/kernel privileges should be necessary to run attacks like this involving the processor and other processes. But in the real world, hackers are looking to get the data. Often it's not 'admin' who has access to the data, it's postgres_readonly. In the case of personal computer systems, it's the logged-in user. That user has access to all the data, so voila, no need to run a speculative attack. Privilege escalation to admin/root is only usu
      • by Bert64 ( 520050 )

        Apple devices tend to be single user. Most Macs don't have more than one account, and it's even rarer for multiple accounts to be logged in at the same time; iOS devices don't even have a concept of multiple users.
        If you were to get code execution on a Mac, chances are it would either be as root or as the same user who's using the machine, so you'd have access to their processes and data anyway.
        Such a vulnerability would be more serious on a server that was hosting multiple virtual machines for different

        • Stop spreading this ignorant horse-shit.
          Apple devices tend to run code from all kinds of people, including randos on the fucking internet through your web browser.
          Single-user only has meaning in one context, and it doesn't apply to anyone alive right now: single authorship of all the code running on your machine.
        • Re:Academic Attack (Score:4, Interesting)

          by bloodhawk ( 813939 ) on Thursday March 21, 2024 @04:44PM (#64334779)
          That's nice that you don't use a browser, any 3rd-party apps, games, or any sort of internet-connected app on your Mac; however, that doesn't apply to 99.999% of users.
        • Nothing is single user anymore. With how much stuff a web app has available (gigs of RAM, persistent storage, CPU), it is more than many machines had as bare-metal hardware not too long ago.

          Even a "single user" OS still has contexts: the web browser context should be separated as a different "user" just for safety reasons, to contain any code that escapes the web browser. Most devices wind up with a locked-down root or admin context, so that is also a second "user". Which means at minimum, a device

      • It's not just Apple. It's all fanbois.
        AMD fanbois are just as bad, and utilize the exact same argument to hand-wave away side-channels that impact AMD parts.
    • by gweihir ( 88907 )

      Attacks only get better over time. And while this one is probably academic at this time, it is a _practical_ academic attack, not a theoretical one. The distance to a real-world usable attack is not large.

      • The distance is probably too large to matter, based on Spectre, which, after 7 years, has not been exploited in the wild; only in pentesting kits. That means all the M1 and M2 Apple devices will be end of life before a real-world usable attack is used. Based on Spectre's non-use by threat actors, it is likely this exploit will never be used in the wild.

        It also appears Spectre was a more wide-reaching and easier to use exploit than this one.

        • It's about identical to any of the Spectre leaks, really.

          Quoting the authors,

          Specifically, we find that any value loaded from memory is a candidate for being dereferenced (literally!). This allows us to sidestep many of Augury's limitations and demonstrate end-to-end attacks on real constant-time code

          The only real defense is constant-time programming, much like some of the Spectre side-channels that can't be easily mitigated against.
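For context on what "constant-time" rules out, the other half of the discipline is avoiding secret-dependent memory *addresses*, not just secret-dependent branches. A minimal illustrative sketch in C (my own example, not from the paper):

```c
#include <stdint.h>

/* Leaky: the address touched depends on the secret, so a cache
 * or prefetcher side channel can reveal which entry was read. */
uint32_t leaky_select(const uint32_t table[2], uint32_t secret_bit) {
    return table[secret_bit & 1];     /* secret-indexed load */
}

/* Constant-time: both entries are always read; a bitmask derived
 * from the secret picks the result without changing which
 * addresses are accessed. */
uint32_t ct_select(const uint32_t table[2], uint32_t secret_bit) {
    uint32_t mask = (uint32_t)0 - (secret_bit & 1);  /* 0 or 0xFFFFFFFF */
    return (table[1] & mask) | (table[0] & ~mask);
}
```

The quoted finding is why even this can fail on a DMP: if a value loaded or computed by the program happens to look like a valid pointer, the prefetcher may dereference it regardless of how carefully the code avoided secret-indexed loads.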

    • It's the same use as the Spectre and Meltdown type side-channel attacks.
      You are correct, they require something to use a fair amount of CPU. They're still vulnerabilities though, and they are major. Is there a high likelihood of actual randomly targeted exploitation? Not really. It just means that the risk profile for things you may have considered sandboxed before is higher.

      running ssh while going to a website? Someone could have just taken the symmetric key for your session. Is that terribly useful? Pro
    • ... In the real world I don't see a good use case for this. ...

      A tool like https://github.com/n0fate/chai... [github.com] will work on M processors, maybe.
      In the past I've used it to dump the certificate along with the key installed by the company on my work MacBook Pro. Then I would make a second, clean installation (dual boot) of the OS without the company spyware, but still have access to all company resources since I had re-imported the certificate.
      This worked nicely on MacBooks with Intel CPUs, but once I migrated to an M1 one I noticed it can't dump the key for the cert anymo

      • Why would you need to use a speculative execution attack to dump a certificate stored in the keychain? I'm going to guess you're referring to an X.509 certificate used by mobile device management to authenticate that the computer is approved and/or company-owned.

        Regardless, a certificate is a file stored on disk (in this case it is in a keychain db file) so you don't need to use the side-channel attack to grab it. As I mentioned, good ol' fashioned copying of files is all that's needed here. For someone

        • Yes, you can get the cert file easily, but you can't get the private key for it this way.
          I am not familiar with how it is implemented, but I suspect the key is kept on the T2 security chip. You can send data to T2 to sign/verify it, but you can't export the key normally. With this attack it might be possible to extract the private key from T2.
          • This bug does not affect the secure enclave. So, those keys properly configured to be stored (P-256 elliptic curve only) in the enclave are not vulnerable to this attack. It only affects user-space, and potentially kernel-space, not enclave-space.
  • by King_TJ ( 85913 ) on Thursday March 21, 2024 @01:05PM (#64334201) Journal

    I really feel like we've taken the complexity of computer systems and networks past the point where it's possible to engineer any of it without it all containing serious flaws/bugs/vulnerabilities.

    When I say this to many I.T. people, they just shrug it off or make snarky comments about the field just needing to get some better-trained/educated workers.

    But in recent years, we've seen this move to bake security directly into hardware that's not feasible to just swap out when bugs are found. Either that, or at least it requires vendor-specific firmware upgrades (Intel trusted-platform tech, for example) and these updates are intrusive (require hard reboots and often following extra steps to click through dialog boxes, etc). And networks are getting to the point where the hardware is treated as a disposable part of your annual maintenance agreement, with people running lots of vulnerable gear because someone stopped paying for the ability to upgrade it.

    Nobody can wrap their heads around any of this stuff anymore. They just throw things out there and see what breaks in production.

    • TEMPEST has been around a long time. [giac.org]

      It's not popular because when you take the threat into consideration, it turns you into a paranoid person. But it is what it is.

      • TEMPEST has been around a long time. [giac.org]

        It's not popular because when you take the threat into consideration, it turns you into a paranoid person. But it is what it is.

        Let’s be real here. TEMPEST isn’t popular because it tends to turn your Bendgate-thin product into a solid brick.

        Great for sturdiness, but you’re probably not going to get many sales bragging about your new smartphone shoulder straps laser-welded to the chassis for better weight distribution.

        • by HBI ( 10338492 )

          Heh. But people should still know that this complexity and risk they complain about already exists anyway...and actually doing something real about it will require some shoulder work at the gym.

    • >I really feel like we've taken the complexity of computer systems and networks past the point where it's possible to engineer any of it without it all containing serious flaws/bugs/vulnerabilities.

      I disagree in the context of this sort of vulnerability.

      It is entirely possible to build on-silicon cryptographic hardware with resistance to side channels both remote and local, fault injection, cryptanalysis, and many other classes of attack. I know this because it's my job. It takes a combination of overlap

      • But it still happened, which means the failure was in management, not on the techy side, and you have to consider that. I think the previous commenter was right.

        Apparently these guys didn't use any sort of tooling to look for these kinds of "rookie" mistakes, or it wasn't any good.

    • by gweihir ( 88907 )

      I completely agree. In particular, the CS/IT field has really forgotten about KISS, or never really understood it. Complexity is rising, and events like the full compromise of MS Azure and o365 last year show that even companies like MS are not capable of keeping up with rising requirements anymore. Well, MS never was able to do good engineering, but they are not the only example.

      So while better education would be a real requirement, it is with the _designers_ (not the workers) and it must be making it cl

      • The CS/IT field has given up KISS for "it's good enough". A lot of the dev teams I interacted with tended to have no care about tech debt, refactoring, or the like. All that mattered was getting their Jira tickets closed, regardless of what stood in the way, be it security, reliability, or whatnot. If the house of cards fell, that was a problem for the next people, and because devs were often offshored, there was usually a high turnover rate (100% annually) because the good overseas devs would find better work elsewhere,

        • by gweihir ( 88907 )

          I think it is also a cultural failure. Other engineering communities have standards and ethics and a responsibility to society. The IT field seems to have none of that, and one reason may be that it was (is?) too easy to get into it. The ACM tried a long time ago with the "ACM Code of Ethics", but even CS students approaching the end of their studies usually have never heard of it. Obviously, programmers, admins, etc. that are not academically educated will have heard of it even less. In addition, ther

          • In the 1990s, there was some vestige of this. Sysadmins had the keys to the city and could burn a company to the ground in just a few commands. However, with the offshoring pushes of the 2000s led by Carly Fiorina (IIRC), what happened is that IT was shifted to the absolute lowest, bargain-basement workers who, just because of how it was done, had no ethical standards in place (if they had any standards, work would be shifted elsewhere.) Because of this, IT and code development never caught up to other f

            • by gweihir ( 88907 )

              Indeed. Well, the damage done and the threat to prosperity and, in fact, survival become larger and larger. At some point IT will get drastic regulation, because nothing else can help anymore. The EU is slowly starting with KRITIS, which is a good thing, but obviously not enough.

    • by taustin ( 171655 )

      I recall, long ago, a detailed study of how likely fixing a bug was to introduce a new bug. Their conclusion was that this is not a fixed ratio; the more lines of code in a program, the more likely each bug fix would introduce a new bug. At 1 million lines of code, they estimated that on average, each bug fix introduced 1.2 new bugs.

      (Windows, at the time, was something like 100 million lines of code.)

      Gives you a bit of perspective on why Microsoft is sometimes reluctant to fix things.

    • But in recent years, we've seen this move to bake security directly into hardware that's not feasible to just swap out when bugs are found. Either that, or at least it requires vendor-specific firmware upgrades (Intel trusted-platform tech, for example) and these updates are intrusive (require hard reboots and often following extra steps to click through dialog boxes, etc).

      After reading this, it tends to raise a question; Did the automotive industry take their cues from the IT industry, or did the IT industry take their cues from the automotive industry?

      Either way, it explains why being a tech in either industry is becoming more of a pain in the ass than a profession.

    • Disclaimer: I'm human and make mistakes. I also don't know everything.

      I really feel like we've taken the complexity of computer systems and networks past the point where it's possible to engineer any of it without it all containing serious flaws/bugs/vulnerabilities.

      The solution to any excessive complexity issue is to reduce the complexity. There's a lot of crap in the world of IT that really shouldn't be there and is just more fluff to please idiots. See also UEFI (which is a complete OS environment in ROM that replaced a simple subroutine whose sole job was to just find and load some other program to load the real OS) / Secure Boot (a vulnerable system in and of itself that used to be a simple phy

      • by gweihir ( 88907 )

        The solution to any excessive complexity issue is to reduce the complexity. There's a lot of crap in the world of IT that really shouldn't be there and is just more fluff to please idiots.

        That is the core of the problem. Many CS types still think that creating complexity and then handling it makes them "real men", when in fact it just marks them as bad at designing things. Finding simple, robust solutions is much, much harder, and takes much, much longer than just creating a complex mess. "Move fast and break things." is an abject failure and a fundamental failure to understand how solid engineering works. The only thing it allows you to do is make a lot of money at the expense of society, a

    • by AmiMoJo ( 196126 )

      The issue here is that the CPU doesn't implement the ABI correctly. ARM defined what is supposed to happen, but Apple screwed up the implementation. The spec is perfectly secure, it's just a design flaw, one that Intel also made.

      It has nothing to do with the security features of the CPU.

    • I really feel like we've taken the complexity of computer systems and networks past the point where it's possible to engineer any of it without it all containing serious flaws/bugs/vulnerabilities.

      This is an unflattering view of your personal experiences. There are ways to manage complexity. The fact that corporations don't want to pay for the person that is properly trained to manage the complexity is on the corporations, not humanity. Humans are capable of building things vastly more complex than we currently build.

      • by gweihir ( 88907 )

        Humans are capable of building things vastly more complex than we currently build.

        There is no evidence for that and there is a lot of evidence that we actually cannot. It seems quite likely that some things we currently build are already outside of what we can build to be reliable, secure and maintainable.

    • ... or make snarky comments about the field just needing to get some better-trained/educated workers

      It's always interesting to me that the folks who like to pat themselves on the back the loudest for being geniuses can't seem to understand basic concepts.

      First, by definition, technology IS getting more complex. Can't deny that.

      But, more importantly, the pool of "better-trained/educated workers" is constant. Better training and education requires that the individual has a certain minimum level of native intelligence/aptitude -- and only a small percentage of humanity possesses that as well as the op

  • Seems they have (again) been lazy, arrogant and stupid.

    • Yes, because they have done... what literally every other CPU manufacturer has done, and designed a machine that isn't side-channel proof.

      One of these days, some CPU in existence will not be made by people who are lazy, arrogant, and stupid, but we're a long fucking way away from that.
      • by gweihir ( 88907 )

        Well, Apple at least pretends to be better than others. You are right that this is not the case.

        • Doesn't every manufacturer, when it suits them?
          I don't think you're right that Apple is different in this regard, nor are their fanbois. I find AMD fanbois just as laughably ridiculous, along with AMD's corporate announcements at the start of the Spectre era.
    • by taustin ( 171655 )

      So they're the Boeing of the tech world?

    • Seems they have (again) been lazy, arrogant and stupid.

      For implementing the same non-issue of a bug that is in every other modern CPU? You are on a roll today. Normally I see your comments occasionally and think, oh that's silly. But it's like every post you made today has been an effort to be more stupid than the last.

      What's going on at home? Is everything okay?

    • by antdude ( 79039 )

      Bring back Steve Jobs. Oh wait... :(

  • I'm sure Apple is hard at work thinking of a way to blame this on someone else, or the old standby: it only affects a small number of users
  • And the sky didn't fall. Apple will fare just as well as AMD and Intel did when it turned out that their CPUs had unpatchable vulnerabilities.
  • ..it can only be mitigated by building defenses into third-party cryptographic software that could drastically degrade M-series performance..

    Soo, you’re saying it’ll be pretty easy to play Spot the Fed(erally Protected) fully patched M-series in the future.

    It’ll be the one crippled with a touch of bureaucracy. For Security’s sake of course.
