For the first time in Apple’s history, the company has been forced to confront the reality that an overreaching government can turn Apple into its own adversary. When we think about computer security, our threat models almost always look outward, rarely inward. That ultimately shows up in our designs, and Apple is no exception. Engineers working on encryption projects are at a particular disadvantage, as the use (or abuse) of their software is increasingly at the mercy of legislation. The functionality of encryption-based software boils down to its design: is its privacy enforced through legislation, or is it enforced through code?
My philosophy is that code is law. Code should be the angry curmudgeon that doesn’t trust even its creator without the end user’s consent. Even at the top, outside factors can determine how code is compromised, and at the end of the day you can’t trust yourself when someone’s got a gun to your head. When the courts can press the creator of code into becoming an adversary against it, there is ultimately only one design conclusion to draw: once the device is provisioned, it should trust no one, not even its creator, without direct authentication from the end user.
Apple has done well to maintain control at the top, which gives it a certain advantage: it can assure product consistency and customer service, and keep the risk to customer data low when problems arise. This has one tragic flaw, however: Apple’s devices are too trusting of Apple itself (and of anyone with Apple’s signing keys). The latest charade with the FBI’s AWA order is an example of how private industry can be made to swallow its own tail, and when devices are inherently designed to trust their creator, that presents some major concerns for privacy. The third law of robotics, as Asimov put it, dictates that a robot (or machine) must protect its own existence. We rarely see this in computer science, but we should.
The only way to ensure that privacy cannot be legislated away by the courts is to include yourself in your own threat model. Rather than keeping the power structure at the top, security of the device’s core components should be a two-factor power shared between Apple and the device’s owner; this ensures that Apple can’t do anything to your device without the user’s permission. We’ve started to see traces of this surface. For example, Apple’s anti-theft feature prevents a device from being restored until you enter your iCloud password, and firmware updates have recently begun requesting the user’s passcode before they install. But this design needs to go much deeper than that, and the thinking behind it needs to shift from device theft to compromise by the manufacturer.
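To make the idea concrete, here is a minimal sketch of what that two-party check could look like for a firmware update: the update installs only if Apple’s signature verifies *and* the user authorizes it with their passcode. Every name and check below is a placeholder of my own, not Apple’s actual interface.

```c
/* Hypothetical sketch of two-party authorization for a firmware update.
 * Nothing here is Apple's real interface; the names and checks are
 * placeholders standing in for real signature verification and a real
 * SEP-backed passcode check. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    const uint8_t *payload;       /* firmware image */
    size_t         payload_len;
    const uint8_t *apple_sig;     /* manufacturer signature over the payload */
} firmware_update_t;

/* Stub: a real device would verify the signature against a public key
 * fused into the hardware at manufacture. */
static bool verify_apple_signature(const firmware_update_t *u) {
    return u->apple_sig != NULL;                       /* placeholder check */
}

/* Stub: a real device would derive a key from the passcode inside the
 * secure enclave and refuse to proceed unless it unlocks. */
static bool user_authorizes_with_passcode(const char *passcode) {
    return passcode != NULL && strlen(passcode) >= 4;  /* placeholder check */
}

/* The update installs only when BOTH parties consent: Apple (signature)
 * and the owner (passcode). Apple's signature alone is not enough. */
static bool install_update(const firmware_update_t *u, const char *passcode) {
    if (!verify_apple_signature(u))
        return false;             /* not from the manufacturer: reject */
    if (!user_authorizes_with_passcode(passcode))
        return false;             /* no user consent: reject */
    /* ... write the payload to flash ... */
    return true;
}

int main(void) {
    uint8_t image[16] = {0}, sig[16] = {0};
    firmware_update_t update = { image, sizeof image, sig };

    printf("signed, no passcode:   %s\n",
           install_update(&update, "")     ? "installed" : "refused");
    printf("signed, with passcode: %s\n",
           install_update(&update, "1234") ? "installed" : "refused");
    return 0;
}
```

The point of the structure is that neither branch can be short-circuited by the other: a valid manufacturer signature buys nothing without the owner’s consent, and vice versa.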
If code truly is law, the device itself should be autonomous, and only take its cues from Apple with the user’s authentication… on a much deeper level than we see implemented today. Apple is starting to head in this direction; however, much of this is still managed in software (which Apple can execute on the device), when it should be managed deep within the secure enclave, or even at the chip level. A device’s boot loader shouldn’t even be willing to load without the SEP being unlocked by a user boot password. Mission-critical security components, such as the passcode delay and the wipe-on-fail mechanism, should be hard-coded into the chip’s microcode so that they can never be disabled or even updated. Encrypting the operating system partition itself with a user key can help prevent trojan or backdoor installations. There are many other great design ideas out there that I’m sure will trickle into Apple over time.
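And here is an equally rough sketch of what a hard-wired passcode policy could look like if it lived in silicon rather than in updatable software: a fixed attempt limit, escalating delays, and a wipe after too many failures, none of it reachable from code that Apple (or anyone holding Apple’s keys) could ship later. The constants, names, and storage model below are my own assumptions, not a description of the actual SEP.

```c
/* Hypothetical sketch of a passcode policy "baked into the chip": fixed
 * attempt limit, escalating delays, wipe-on-fail. The constants, names,
 * and storage model are assumptions for illustration, not the real SEP. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

enum { MAX_ATTEMPTS = 10 };   /* immutable if burned into microcode */

/* Escalating delay (seconds) enforced in hardware after each failure. */
static const uint32_t delay_schedule[MAX_ATTEMPTS] =
    { 0, 0, 0, 0, 60, 300, 900, 3600, 3600, 3600 };

typedef struct {
    uint32_t failed_attempts;  /* persisted in tamper-resistant storage */
    bool     key_erased;       /* set once the volume key is destroyed */
} sep_state_t;

/* Stub: a real SEP would tangle the passcode with a hardware UID to
 * derive the volume key; here a fixed string stands in for "correct". */
static bool passcode_derives_volume_key(const char *passcode) {
    return passcode != NULL && strcmp(passcode, "1234") == 0;
}

static void hardware_delay_seconds(uint32_t seconds) { (void)seconds; }

static void erase_volume_key(sep_state_t *s) {
    s->key_erased = true;      /* user data becomes unrecoverable */
}

/* The policy itself is not reachable from software, so neither Apple nor
 * anyone holding Apple's signing keys could later disable it. */
static bool try_unlock(sep_state_t *s, const char *passcode) {
    if (s->key_erased)
        return false;                          /* already wiped */
    hardware_delay_seconds(delay_schedule[s->failed_attempts]);
    if (passcode_derives_volume_key(passcode)) {
        s->failed_attempts = 0;
        return true;
    }
    if (++s->failed_attempts >= MAX_ATTEMPTS)
        erase_volume_key(s);                   /* wipe after ten misses */
    return false;
}

int main(void) {
    sep_state_t sep = { 0, false };
    for (int i = 0; i < 10; i++)       /* ten wrong guesses... */
        try_unlock(&sep, "0000");
    return sep.key_erased ? 0 : 1;     /* ...and the volume key is gone */
}
```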
Whether you’re building a mobile operating system or an electronic health records system, code should be self-enforcing, and the consequences of losing credentials should be stressed rather than compensated for with recovery mechanisms. The walled garden approach only gets you so far; you have to build a wall around yourself too if you want effective security in your design. Do you want to change the law? Do you want to legislate how your privacy tools are used? Then legislate it into your code.