Hacker Claims To Have Decrypted Apple's Secure Enclave Processor Firmware (iclarified.com)
According to iClarified, a hacker by the name of "xerub" has posted the decryption key for Apple's Secure Enclave Processor (SEP) firmware. "The security coprocessor was introduced alongside the iPhone 5s and Touch ID," reports iClarified. "It performs secure services for the rest of the SoC and prevents the main processor from getting direct access to sensitive data. It runs its own operating system (SEPOS) which includes a kernel, drivers, services, and applications." From the report: The Secure Enclave is responsible for processing fingerprint data from the Touch ID sensor, determining if there is a match against registered fingerprints, and then enabling access or purchases on behalf of the user. Communication between the processor and the Touch ID sensor takes place over a serial peripheral interface bus. The processor forwards the data to the Secure Enclave but can't read it. It's encrypted and authenticated with a session key that is negotiated using the device's shared key that is provisioned for the Touch ID sensor and the Secure Enclave. The session key exchange uses AES key wrapping, with both sides providing a random key that establishes the session key, and uses AES-CCM transport encryption. Today, xerub announced that the decryption key "is fully grown." You can use img4lib to decrypt the firmware and xerub's SEP firmware split tool to process it. Decryption of the SEP firmware will make it easier for hackers and security researchers to comb through the SEP for vulnerabilities.
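For readers who want to see the shape of what the summary describes, here is a rough, hypothetical sketch of a wrap-then-CCM handshake in Python using the pyca/cryptography package. The key names, sizes, and the XOR combining step are invented stand-ins for illustration; Apple's actual protocol details are not public in this form.

    # Hypothetical sketch: a provisioned shared key wraps freshly generated
    # key material (AES key wrap, RFC 3394), and the resulting session key
    # protects traffic with AES-CCM. Nothing here is Apple's real protocol.
    import os
    from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap
    from cryptography.hazmat.primitives.ciphers.aead import AESCCM

    shared_key = os.urandom(32)   # stands in for the factory-provisioned pairing key

    # Each side contributes random key material, exchanged under AES key wrap.
    sensor_half = os.urandom(16)
    sep_half = os.urandom(16)
    wrapped = aes_key_wrap(shared_key, sensor_half)            # what crosses the SPI bus
    assert aes_key_unwrap(shared_key, wrapped) == sensor_half  # receiver unwraps it

    # Combine both contributions into a session key (XOR is one simple choice).
    session_key = bytes(a ^ b for a, b in zip(sensor_half, sep_half))

    # Transport encryption with AES-CCM, as the summary states.
    ccm = AESCCM(session_key)
    nonce = os.urandom(13)        # CCM accepts 7- to 13-byte nonces
    ct = ccm.encrypt(nonce, b"fingerprint frame", b"frame header")
    assert ccm.decrypt(nonce, ct, b"frame header") == b"fingerprint frame"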
Re:Not really a surprise (Score:5, Interesting)
Re: (Score:2)
It <em>could</em> have been brute forced, and the guy just got lucky.
Re: (Score:1)
Re: (Score:3)
Re: (Score:3)
Space is big. Really big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist, but that's just peanuts to space.
2^256 is just as massive. The lower bound for the estimated number of atoms in the observable universe is 10^78, which is only slightly more than the roughly 1.2 × 10^77 possible values of a 256-bit key. So everything that exists in that really big space is only slightly more numerous than the possible combinations for one key.
It
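If you want to check that comparison yourself, it's a few lines of plain Python:

    # Back-of-the-envelope check of the 2^256 vs. 10^78 comparison above.
    key_space = 2 ** 256   # possible 256-bit keys, ~1.16e77
    atoms = 10 ** 78       # lower-bound estimate of atoms in the observable universe
    print(f"keys:  {key_space:.2e}")
    print(f"atoms: {atoms:.2e}")
    print(f"atoms per key: {atoms / key_space:.1f}")   # roughly 8.6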
Re: (Score:2)
yeah... hence the <em> tags around "could"
Re: Not really a surprise (Score:2)
Re: Not really a surprise (Score:1)
You are wrong. AES supports either a 128 bit or 256 bit key.
Re: (Score:1)
On filesystems, there's one method of setting things up that basically requires two keys: this is sometimes called "(whatever) 512" for a 256-bit cipher. So "AES-512", while not accurate, is how some people have read AES-256 XTS, with one 256-bit key for the cipher and one 256-bit key for the tweak (often loosely called the IV).
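For the curious, a minimal sketch of AES-256 XTS with the pyca/cryptography package, just to show where the 512 bits of key material go (keys and tweak are random placeholders):

    # AES-256-XTS takes 512 bits of key material: two independent 256-bit
    # keys, one for the data cipher and one for the tweak.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(64)    # 512 bits total = 2 x 256-bit keys
    tweak = os.urandom(16)  # per-sector tweak (normally derived from the sector number)

    cipher = Cipher(algorithms.AES(key), modes.XTS(tweak))
    enc = cipher.encryptor()
    ct = enc.update(b"sixteen byte blk") + enc.finalize()

    dec = cipher.decryptor()
    assert dec.update(ct) + dec.finalize() == b"sixteen byte blk"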
Re: (Score:2)
192 bit keys are also defined.
Re: (Score:2)
I have seen some people try to do "AES 512" by chaining AES, or even doing triple AES (encrypt-decrypt-encrypt) the way triple DES was done. This is brain dead, and shows the person who wrote it doesn't have a clue about crypto.
Want more bits? Cascade different encryption algorithms, similar to how TrueCrypt/VeraCrypt does it. It is wise to use a multi-algorithm cascade anyway, because if one gets broken, there are two others. For asymmetric algorithms, same thing. Cascade RSA, some elliptic encryption,
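A minimal sketch of the cascade idea with independent keys per layer; pyca/cryptography doesn't ship Twofish or Serpent, so AES-256-CTR and ChaCha20 stand in here purely to illustrate the layering:

    # Two-cipher cascade with independent keys, in the spirit of
    # TrueCrypt/VeraCrypt's AES/Twofish/Serpent cascades. All keys and
    # nonces are generated here for the example.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    msg = b"secret payload"

    k1, n1 = os.urandom(32), os.urandom(16)   # layer 1: AES-256-CTR key + nonce
    k2, n2 = os.urandom(32), os.urandom(16)   # layer 2: ChaCha20 key + nonce

    inner = Cipher(algorithms.AES(k1), modes.CTR(n1)).encryptor()
    outer = Cipher(algorithms.ChaCha20(k2, n2), mode=None).encryptor()
    ct = outer.update(inner.update(msg) + inner.finalize()) + outer.finalize()

    # Decrypt by peeling the layers off in reverse order.
    outer_d = Cipher(algorithms.ChaCha20(k2, n2), mode=None).decryptor()
    inner_d = Cipher(algorithms.AES(k1), modes.CTR(n1)).decryptor()
    assert inner_d.update(outer_d.update(ct) + outer_d.finalize()) + inner_d.finalize() == msg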
Re: (Score:2)
Does combining algorithms really combine the key sizes? Or does it actually just have the same key size, but in a third algorithm you just haven't discovered?
Re: (Score:2)
Having two algorithms result in an unexpected third transformation is called a "group". Realistically, with TrueCrypt/VeraCrypt's cascades, you are not getting 3*256, or 768 bits, but somewhere between 256 bits and 768 bits. There is a lot of advanced crypto study on this, especially if one algorithm when used may weaken the second algorithm used after that. However, 99.99999% of the cryptographic breaks tend to be along the lines of weak key management and storage, or failure to implement existing algorithms correctly.
Re: (Score:2)
...especially if one algorithm when used may weaken the second algorithm used after that.
That is a concept I can't wrap my head around. Wouldn't that open up the possibility of using an intermediate encryption before an attack? Or is that just if you re-use the same key for the second algorithm?
Re: (Score:2)
There might be an algorithm C which is equivalent to A(B(x)). However, the chance of that is quite unlikely, especially if a proper mode of operation is used (CBC, for example, or XEX), which makes this almost impossible... likely less than the chance of brute-forcing the key.
As for reusing keys, if one has three different algorithms (AES, Twofish, and Serpent), three different keys should be used. This ensures that if one of the algorithms coughs up a key somehow, the other two are still working.
Re: (Score:3)
Unless you believe in magic, not really.
a five-year-old could do it (Score:2)
1. Simply create a pocket universe where time runs many, many times faster than ours
2. Drop your computer in there
3. Run the brute force
4. Retrieve the computer with the results from the pocket universe
5. Make sure to safely deallocate the pocket universe when done.
good grief, do I have to tell you monkeys how to do everything?!!!
Re: Not really a surprise (Score:1)
Re: (Score:2)
ostensibly, it only exists inside the secure enclave
If they're referring to unmodified TrustZone then it's secure by emphatic assertion of Arm's marketing department. "This bit is secure, because we've said it is". I don't know whether Apple have made further changes or done their own firmware, but hacking TrustZone isn't that hard.
Re: (Score:2)
Re: (Score:2)
Following up to my own post: OK, it's not (don't)TrustZone but a distinct processor [blackhat.com]. Well done Apple for doing it properly (although this ref [theiphonewiki.com] then claims it's just TrustZone, which doesn't seem to be the case). I'm assuming the guy found a flaw in the SEP, which for example has its own I/O lines for GPIO, SPI, I2C, etc., so you've got a large attack surface and direct access to the CPU.
From what I understand from previous flame-wars on the subject, it is NOT TrustZone-based, but rather completely home-grown by Apple. Since Apple has an Architecture-level license with ARM (one of the few companies that do), they can pretty much do what they please inside of even the ARM core, let alone any peripheral subsystems.
Re: (Score:2)
Is TrustZone like a hypervisor, or is it like LXC, managing containers/worlds? It would be nice if ARM supported VMs at the chip level with a standardized hypervisor.
Re: (Score:2)
I would recommend reading "Hacking the Xbox" by Andrew "bunnie" Huang. Also, most of the security community assumes that an attacker with physical access will get in eventually, with the only possible exception being a real HSM (at, say, $50,000 and up per piece). An HSM would eventually get hacked as well, I expect, but they are too expensive and too uninteresting (due to very limited deployment) for that to happen.
I do hope we get a detailed write-up of the hack, these are always pretty interesting.
Re: (Score:2)
Actually, I would bet a true HSM (host security module) would not get hacked, for the main reason that it must have physical security surrounding the key storage: if there is a physical breach, the keys are destroyed.
Re: (Score:2)
1. It is called "Hardware Security Module"
2. What makes you think there is only one device involved in finding out how to bypass exactly these detectors?
Re: (Score:2)
Well, first, since both the Thales and Futurex manuals I have refer to their own products as "host security modules", I still stand by my first statement. Second, if you actually look at the internal setup, you will notice that any attempt to breach the area where the master key is stored damages the electronics to where the keys are permanently unrecoverable.
Re: (Score:2)
Interestingly, their websites call it "Hardware Security Module", you know, like everybody else does:
- https://www.futurex.com/produc... [futurex.com]
- https://www.thalesesecurity.co... [thalesesecurity.com]
As to your second remark, do you actually think an HSM is unhackable? That would be pretty dumb.
Re: (Score:2)
And, ostensibly, it only exists inside the secure enclave and in Apple's care
It only exists unencrypted there. It also exists encrypted in the firmware update blobs that Apple ships. It's entirely possible that Apple's use of encryption in their distribution chain included some flaws. This wouldn't be the first time: there was a vulnerability in FileVault2 (Apple's full-disk encryption code) caused by incorrect use of AES keys, which dramatically reduced the search space.
Great news for law enforcement ... (Score:5, Interesting)
Re: (Score:3)
I've long been thinking that we need a time-limited storage system for secrets like encryption keys.
I'd suggest storing such data in SRAM. A small capacitor can keep it powered (it only needs nanoamps to maintain).
If the phone is powered off for too long, or powered but the user doesn't enter the passkey for a day or two, it wipes itself.
That prevents this kind of attack; in fact, it prevents any kind of slow attack.
Re: (Score:2)
I like this plan, but the SRAM itself should still be encrypted with the device key HMAC'd with some other identifier as well (PIN ideally).
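Something like this, perhaps; a stdlib-only sketch assuming a hypothetical fused per-device key mixed with the user's PIN via HMAC-SHA-256 (the scheme and all names are illustrative, not Apple's design):

    # Derive the SRAM wrapping key by mixing a per-device secret with the
    # user's PIN via HMAC-SHA-256. Everything here is a made-up stand-in.
    import hashlib
    import hmac
    import os

    device_key = os.urandom(32)   # stand-in for a fused, per-device secret
    pin = b"123456"               # would come from the user at unlock time

    sram_wrapping_key = hmac.new(device_key, pin, hashlib.sha256).digest()
    print(sram_wrapping_key.hex())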
Re: (Score:2)
New standard procedure: Clip this thing onto that circuit board until the nerds arrive with their magic box.
Re: (Score:2)
There is nothing you could attach that would extend the time limit. Also, cops aren't going to take the phone apart and connect stuff to the PCB.
Re:Great news for law enforcement ... (Score:5, Insightful)
Suicide chips were common for a long time. And although effective, they are MUCH more trouble than they're worth.
For example, you'll lose ten times more "genuine" evidence (e.g. witnesses willingly handing their phones over for evidence, then the chip dying while in court storage) than anything you'll save on personal privacy.
Not to mention, get one duff battery/capacitor and one day your phone just stops working permanently with no possibility of restoration whatsoever.
This isn't an attack stopped by a suicide chip, either. You buy one device, let it wipe itself a thousand times in testing, get the key out of it eventually, and then you can attack ALL the security chips in ALL the phones way within your "day or two".
Plus, there's almost no way to ensure the timer is running. Isolate the suicide chip's clock (especially if it has to track real time and be running all the time) and you can pretty much stop it dead so it never gets to the point it can do anything about wiping the data.
Look into the old arcade stuff. Lots of old arcade games had suicide chips. Lots of them are still emulated. In many cases, people just ignored it and - like this - determined the keys in other ways (lots of arcade games have the equivalent of rainbow tables for their encryption in common emulators because the key itself was never found), in others they de-capped and imaged the chips while they were still working, which lets you basically pluck the stored data and logic of their semiconductors out of a microscope image of the silicon.
It's a lot of effort to go to, for a lot of risk. But what you describe is basically a proper TPM chip. I don't think anyone has ever successfully broken a TPM chip / keys, have they?
Re: (Score:2)
It's certainly not for everyone. The clock thing is a non-issue, though: an ultra-low-power RC oscillator on chip is only accurate to +/-30%, but that's all you need to measure a couple of days approximately. It would be protected the same way as the secure enclave.
Re: (Score:2)
If the phone is powered off for too long or powered but the user doesn't enter the passkey for a day or two it wipes itself.
And when you end up in the emergency room because you were in a traffic accident, you'll find that everything you had on your phone is gone forever. For the vast majority, the problem is that they lose information because they don't have backups and forget keys. The second biggest problem is that the user is hacked, tricked, or forced so the attacker has the user's credentials; it doesn't matter how good the lock is if the attacker has the key. If you're trying to hack a locked iPhone, you're a little bit past script kiddie level
Re: (Score:2)
I generally don't worry too much about losing the information on my phone, because I cross borders regularly and have to back up and wipe it anyway. Encrypted backups stored on my server mean I don't need to carry sensitive stuff through border security.
My main concern is that an attacker gets hold of the phone before I can sanitize it. I want to be able to store private information on it, but obviously can't if it isn't secure.
Re: (Score:2)
Re: (Score:1)
And Apple will get to sell a lot more of the next model iPhones with a different security scheme.
Re: (Score:2)
They wouldn't need to. They have all the pipe wrenches they'd ever need. (And by pipe wrenches, I mean physical pipe wrenches, as well as whatever metaphorical pipe wrenches you can think of for compelling someone's cooperation.)
Re: (Score:2)
The SEP is a completely separate SoC that is not really accessible from the application processor except through the kernel.
And through several hundred pins on the chip, and its I/O ports and protocols, and its boot loader, and its memory, and the shared memory areas with the AP, and ...
Re: Great news for law enforcement ... (Score:2)
No, having access to the unencrypted firmware only makes a system less secure if you believe in security by obscurity.
This will make the system MORE secure in the end.
Re: Great news for law enforcement ... (Score:1)
Re: (Score:1)
Could be many reasons. I would rather they communicated with the company first, then published. But it's better to get the knowledge out there. Vulnerabilities that are hidden are ripe for zero-day exploits. Those can be abused by criminals, law enforcement, and intelligence agencies.
But getting credit for it is probably what most hackers want. Curiosity is a large part of it. Money, maybe. Embarrassing the company is likely low on the list.
Re: Honest Question (Score:3, Informative)
Well, firstly, there is no vulnerability here.
This does not affect the ability of the Secure Enclave to protect the user; it does not help law enforcement or anyone else crack user data.
This is simply the code showing what it does. If upon examination someone finds a vulnerability, then presumably they will let Apple know...
Re: (Score:2)
If upon examination someone finds a vulnerability, then presumably they will let Apple know...
Or sell it for a million on the vulnerability market.
Re:Honest Question (Score:5, Informative)
Re: Honest Question (Score:2, Informative)
Because just saying "look at this bug I found" gets you ignored.
If you want the problem solved, you give everyone a tool to exploit it for the quickest fix. Also, even going that far, you may still be ignored.
Re: Honest Question (Score:2)
Re: (Score:1)
Mistrust authority—promote decentralization.
You'll love this Hacker Ethic until the day your own and your loved ones' personal details are hacked and uploaded to a torrent site or Reddit.
This is great! (Score:3)
What people aren't grasping is that this is actually good news. When someone breaks security, it forces the device maker to improve their security tactics (lest they be considered insecure devices). The result is that people will get better security. The same is not true about cell towers because telecom companies don't care if your shit is insecure. :/
Re:This is great! (Score:5, Informative)
Re: (Score:2)
The encryption was essentially negated. That's breaking a layer of security if ever there was one. Otherwise, why would the encryption be in place in the first place? That is the point of encryption, correct? To secure things?
Re: (Score:2)
A minor form. The secure enclave firmware doesn't actually have to be encrypted; it could just be signed. All you need to do is, when you start up, verify that the image is properly signed, and then you start the processor. Encryption makes it so you don't actually have to verify it decrypte
Re:This is great! (Score:5, Interesting)
This was a small shade of "security through obscurity", but it's only a thin veil. The performance of GOOD security or cryptography isn't affected by exposure of its methods. Like you see in the movies, where the criminals get the floorplan of the vault, the schedule of the guards, the placement of the cameras, etc., and manage to come up with a plan. That means the security was actually quite poor. They should have looked at it and said, "Well... I guess there's just NO way to break into this place without getting caught." Now, that still doesn't mean they publish their guards' schedule on the web page.
The reason of course is that vulnerabilities may (and usually DO) still exist, and obfuscation or hiding of your security information does help a bit to mitigate that, but should not be seen as a solution. That's why good security is constantly changing and improving itself.
You could even look at this as a good thing for them. Hackers love a challenge. A few of them will find a few holes, and publish them. (either for the credit, or the bounty, or on the darkweb etc) And any of those that are made public will get patched. There'll still be a few zero-days, the kind that either lurk in the kernel for years in plain sight without being discovered (think ShellShock) or the kind that teams of state-actors dig up and use for espionage. (think NSA dump)
In this case, there are two types of encryption going on. One is just obfuscation. The reason is that the key is there. The hardware decrypts the firmware and runs it. It has to be able to decrypt it unless you're going to key in a 128/256 bit key every time you turn on your phone, hence the symmetrical cipher. So it may as well not really even BE encrypted. To say you "negated" something that was already negated is silly. I wouldn't even call this "encrypted". The key is right there, so it's really more "encoded" than "encrypted". "Encrypted" means you know the process but you need the key, "encoded" means you have the key (if there even is one) but you need to figure out the process.
The other encryption, the asymmetric one, is the one that signed the firmware. The hardware decrypts the firmware, then checks the signature to make sure it hasn't been changed. No amount of searching the hardware or firmware will reveal the code used to do the signing, because it isn't there. The public key is present, but not the private key. Now, if the hackers had figured out THAT one, okay, NOW you could call it actually hacked. This wasn't hacked, it was simply researched. BIG difference.
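As a toy illustration of that verify-only model, with Ed25519 via pyca/cryptography standing in for whatever scheme Apple actually uses:

    # The device ships only the public key; the private (signing) key never
    # leaves the vendor. Ed25519 here is a stand-in for Apple's real scheme.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Vendor side: the private key stays here.
    signing_key = Ed25519PrivateKey.generate()
    firmware = b"...decrypted firmware image..."
    signature = signing_key.sign(firmware)

    # Device side: only the public key is embedded.
    public_key = signing_key.public_key()
    try:
        public_key.verify(signature, firmware)   # raises if the image was altered
        print("firmware accepted")
    except InvalidSignature:
        print("firmware rejected")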
TL/DR time?
OK, that was a bit long-winded (but necessary) groundwork. What does this mean for ME? It's always safest to assume that people can and will do anything that's reasonably possible to be done. Digging out the obfuscation key is just something that's going to happen sooner or later. Where does that leave us? Since there's no private key to be dug out, and the crypto that's used in the signing isn't going to be brute-forced anytime soon, here are your options on how to leverage the firmware:
1) You could find a bug in it that you can take advantage of. Maybe a timing condition or a race. Maybe a back door. (VERY unlikely in this case.) For example, they may find that if you wait EXACTLY 83 seconds between passcode attempts, there's a bug in the firmware that doesn't increment the attempt count toward a device wipe. LEA would find this useful, and someone would make a Lego Mindstorms or Arduino contraption that would guess your PIN in a few weeks. (Go look for the Garmin ones on YouTube.) They may even find a way to get it to unlock without the correct code, but this is far less likely. (though not com
Re: This is great! (Score:1)
Is this all iPhones, or just the iPhone 5s? (Score:3)
Phone Wiki [theiphonewiki.com]
Re: (Score:3)
It's a big step anyway, because once you know what's inside an encrypted file it is easier to decrypt later generations of that same file.
So, hackers are probably already out there splitting up this decrypted firmware and attempting to decrypt later versions.
Re: (Score:2)
I am assuming each version of the iPhone SEP firmware is encrypted with its own unique key
It can't be. The SEP has to decrypt it, which means it needs the decryption key. Doesn't matter if you use symmetric or public key crypto, in the end you need to store a secret key in the SEP and compromising that key means you can decrypt every bit of firmware released for it.
This is similar to how battery management processor firmware updates work on their laptops. AES encrypted with a secret key, which they accidentally leaked themselves. The key is protected by the processor's secure memory, which has p
Re: (Score:2)
NSA responds that's so 2015 (Score:2, Insightful)
Given the assets available to the NSA, and their propensity to hide defects they find, I would not be surprised if this was already known to the NSA.
Re: (Score:3)
I wish you were an expert in this field too.
If the US has a copy of all root digital certificates in the world, it doesn't help them decrypt a conversation one jot.
Those certs have a private and public key. Public keys encrypt. Private keys decrypt. You can't make/discover/etc. the private one from the public one. You literally GIVE AWAY the public key to anyone and never reveal the private one. They can then encrypt a message to you knowing that ONLY the private key can unlock that message.
A cert is g
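To make the encrypt-with-public / decrypt-with-private asymmetry concrete, here is a small RSA-OAEP sketch with pyca/cryptography (key size and padding are just reasonable example defaults, not tied to any particular cert):

    # Anyone holding the public key can encrypt; only the private-key
    # holder can decrypt.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()   # this part is safe to give away

    oaep = padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    )
    ciphertext = public_key.encrypt(b"meet at noon", oaep)   # sender needs only the public key
    plaintext = private_key.decrypt(ciphertext, oaep)        # undoing it requires the private key
    assert plaintext == b"meet at noon"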