San Bernardino: Behind the Scenes

I wasn’t originally going to dig into some of the ugly details about San Bernardino, but FBI Director Comey’s latest actions to publicly embarrass Hillary Clinton (whom I don’t support), or possibly to tip the election toward Donald Trump (whom I also don’t support), have prompted me to learn more about James Comey, and from what I’ve learned, a pattern of pushing a private agenda seems to be emerging. This is relevant because the San Bernardino iPhone matter drew numerous accusations that Comey was pushing a private agenda there as well: that it was a power grab for the Bureau and an attempt to secure a court precedent to force private business to backdoor encryption, while lying to the public and possibly misleading the courts, all under the guise of terrorism.

Continue reading

WhatsApp Forensic Artifacts: Chats Aren’t Being Deleted

Sorry, folks: while experts are saying the encryption checks out in WhatsApp, the latest version of the app I tested leaves a forensic trace of all of your chats, even after you’ve deleted, cleared, or archived them… even if you “Clear All Chats”. In fact, the only way to get rid of them appears to be to delete the app entirely.


To test, I installed the app and started a few different threads. I then archived some threads, cleared some, and deleted some. I made a second backup after running the “Clear All Chats” function in WhatsApp. None of these deletion or archival options made any difference in how deleted records were preserved: in all cases, the deleted SQLite records remained intact in the database.

Just to be clear: WhatsApp is deleting the record (they don’t appear to be trying to intentionally preserve data); however, the record itself is not being purged or erased from the database, leaving a forensic artifact that can be recovered and reconstructed back into its original form.
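If you want to check this yourself, SQLite will happily tell you how much “deleted” data is sitting around waiting for reuse. Here’s a minimal sketch in Python against a chat database pulled from a backup; the filename is illustrative, not WhatsApp’s exact path:

```python
import sqlite3

# Illustrative filename for a chat database extracted from a backup.
DB = "ChatStorage.sqlite"

conn = sqlite3.connect(DB)
page_size = conn.execute("PRAGMA page_size").fetchone()[0]
page_count = conn.execute("PRAGMA page_count").fetchone()[0]
free_pages = conn.execute("PRAGMA freelist_count").fetchone()[0]
conn.close()

# Free-list pages hold deleted-but-not-erased records until SQLite
# happens to reuse them; a nonzero count means recoverable residue.
print(f"{free_pages} of {page_count} pages free "
      f"(~{free_pages * page_size} bytes of potential artifacts)")
```

Note that this only counts whole free pages; deleted records can also linger as free blocks inside pages that are still in use.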

A Common Problem

Forensic trace is common in any application that uses SQLite, because SQLite by default does not vacuum databases on iOS (likely in an effort to prevent flash wear). When a record is deleted, it is simply added to a “free list”, and free records do not get overwritten until later, when the database needs the extra storage (usually after many more records are created). If you delete large chunks of messages at once, large chunks of records end up on this free list, and it takes even longer for the data to be overwritten by new data. There is no guarantee the data will be overwritten by the next set of messages. In other apps, I’ve often seen artifacts remain in the database for months.
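To make the free list concrete, here’s a rough sketch that walks a database’s freelist by hand, using the offsets from the published SQLite file format, and carves printable strings out of the “deleted” pages. It’s a simplification (it ignores free blocks inside live pages), and the filename is again illustrative:

```python
import re
import struct

DB = "ChatStorage.sqlite"  # illustrative filename

with open(DB, "rb") as f:
    data = f.read()

# Offsets per the published SQLite file format:
# page size at byte 16, first freelist trunk page at byte 32.
page_size = struct.unpack(">H", data[16:18])[0]
if page_size == 1:
    page_size = 65536  # the format encodes 64 KB pages as 1

def page(n):  # SQLite pages are numbered from 1
    return data[(n - 1) * page_size : n * page_size]

free_pages = []
trunk = struct.unpack(">I", data[32:36])[0]
while trunk:  # walk the chain of freelist trunk pages
    p = page(trunk)
    next_trunk, count = struct.unpack(">II", p[:8])
    free_pages.append(trunk)
    free_pages.extend(struct.unpack(f">{count}I", p[8:8 + 4 * count]))
    trunk = next_trunk

# Carve printable ASCII runs out of pages the database considers deleted.
for n in free_pages:
    for m in re.finditer(rb"[ -~]{8,}", page(n)):
        print(f"page {n}: {m.group().decode()}")
```

A real reconstruction would parse the record and cell formats rather than just carving strings, but even this crude pass is enough to surface “deleted” message content.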

The core issue here is that ephemeral communication is not ephemeral on disk. This is a problem that Apple has struggled with as well, which I explained, along with design recommendations, in a recent blog post.

Apple’s iMessage has this problem and it’s just as bad, if not worse. Your SMS.db is stored in an iCloud backup, but copies of it also exist on your iPad, your desktop, and anywhere else you receive iMessages. Deleted content also suffers the same fate.

The way to measure “better” in this case is by the level of forensic trace an application leaves. Signal leaves virtually nothing, so there’s nothing to worry about; no messy cleanup. Wickr takes advantage of Apple’s CoreData and encrypts its database using keys stored in the keychain (much more secure). Other apps would do well to respect the size of the forensic footprint they’re leaving.

Continue reading

WSJ Describes Reckless Behavior by FBI in Terrorism Case

The Wall Street Journal published an article today citing a source who says the FBI plans to tell the White House that it “knows so little about the hacking tool that was used to open terrorist’s iPhone that it doesn’t make sense to launch an internal government review”. If true, this should be taken as an act of recklessness by the FBI with regard to the Syed Farook case: the FBI apparently allowed an undocumented tool to run on a piece of high-profile, terrorism-related evidence without adequate knowledge of the tool’s specific function or its forensic soundness.

Best practices in forensic science dictate that any forensic instrument be tested and validated, and accepted as forensically sound, before it is used on live evidence. Such a tool must yield predictable, repeatable results, and an examiner must be able to explain its process in a court of law. Our court system expects this, and allows tools (and examiners) to face numerous challenges based on the credibility of the tool, which can only be determined by rigorous analysis. The FBI’s admission that they have so little knowledge of how the tool works is an admission that they failed to evaluate the science behind it; its core functionality was never evaluated in any meaningful way. Knowing how the tool managed to get into the device is the bare minimum I would expect anyone to know before shelling out over a million dollars for a solution, especially one that was going to be used on high-profile evidence.

A tool should not make changes to a device, and any changes it does make should be documented and repeatable. There are several other variables to consider in such an effort, especially when imaging an iOS device. Apart from changes made directly by the tool (such as overwriting unallocated space, or portions of the file system journal), simply unlocking the device can cause the operating system to make a number of changes, start background tasks that could lead to destruction of data, or cause other changes unintentionally. Without knowing how the tool works, what portions of the operating system it affects, what vulnerabilities are exploited, what the payload looks like, where the payload is written, what parts of the operating system are disabled by the tool, or a host of other important things – there is no way to effectively measure whether or not the tool is forensically sound. Simply running it against a dozen other devices to “see if it works” is not sufficient to evaluate a forensics tool – especially one that originated from a grey hat hacking group, potentially with very little actual in-house forensics expertise.
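Even a black box can be subjected to a basic repeatability check before anyone touches live evidence. Here’s a crude sketch of the idea: image the same test device before and after running the tool, then diff the images block by block to document exactly what changed (the filenames are placeholders):

```python
import hashlib

BLOCK = 4096  # compare in small blocks so changes can be localized

def block_hashes(path):
    """Hash a raw device image in fixed-size blocks."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK):
            hashes.append(hashlib.sha256(chunk).digest())
    return hashes

# Placeholder filenames: images of the same test device taken before
# and after the unvalidated tool was run against it.
before = block_hashes("device-before.img")
after = block_hashes("device-after.img")

changed = [i for i, (a, b) in enumerate(zip(before, after)) if a != b]
print(f"{len(changed)} of {len(before)} blocks modified by the tool")
for i in changed[:20]:
    print(f"  block {i} differs (offset {i * BLOCK:#x})")
```

Run across several devices and firmware versions, a record like this is the bare beginning of the documentation an examiner would need to defend a tool in court.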

Continue reading

Hardware-Entangled APIs and Sessions in iOS

Apple has long enjoyed a security architecture whose security, in part, rests on the entanglement of their encryption to a device’s physical hardware. This pairing has demonstrated to be highly effective at thwarting a number of different types of attacks, allowing for mobile payments processing, secure encryption, and a host of other secure services running on an iPhone. One security feature that iOS lacks for third party developers is the ability to validate the hardware a user is on, preventing third party applications from taking advantage of such a great mechanism. APIs can be easily spoofed, as a result, and sessions and services are often susceptible to a number of different forms of abuse. Hardware validation can be particularly important when dealing with crowd-sourced data and APIs, as was the case a couple years ago when a group of students hacked Waze’s traffic intelligence. These types of Sybil attacks allow for thousands of phantom users to be created off of one single instance of an application, or even spoof an API altogether without a connection to the hardware. Other types of MiTM attacks are also a threat to applications running under iOS, for example by stealing session keys or OAuth tokens to access a user’s account from a different device or API. What can Apple do to thwart these types of attacks? Hardware entanglement through the Secure Enclave.
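As a sketch of what this could look like from the service side: imagine Apple let an app generate a key pair inside the Secure Enclave and export only the public half at enrollment. The API could then bind every session to that physical chip by challenging the device to sign a nonce. A rough sketch of the server’s half, using the pyca/cryptography package (the enrollment step and names here are hypothetical, not an existing Apple API):

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

def load_enrolled_key(pem_bytes: bytes):
    # Hypothetical enrollment artifact: the public half of a P-256 key
    # whose private half was generated inside the Secure Enclave and
    # can never leave the chip.
    return serialization.load_pem_public_key(pem_bytes)

def issue_challenge() -> bytes:
    # The API hands the device a random nonce to sign.
    return os.urandom(32)

def verify_device(public_key, nonce: bytes, signature: bytes) -> bool:
    """A valid signature over our nonce proves possession of the
    enclave-resident key, entangling the session with one physical
    device; a farm of emulated clients can't forge it."""
    try:
        public_key.verify(signature, nonce, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False
```

A Sybil attack would then cost one physical Secure Enclave per phantom user, rather than one HTTP request.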

Continue reading

An Open Letter to FBI Director James Comey

Mr. Comey,

Sir, you may not know me, but I’ve impacted your agency for the better. Since the advent of the iPhone, I have been assisting law enforcement, including the Federal Bureau of Investigation, as a private citizen. I designed the original forensics tools and methods used to access content on iPhones, which were eventually validated by NIST/NIJ and ingested by the FBI into your own version of my tools for internal use. Prior to that, the FBI issued a major deviation allowing my tools to be used without validation, due to the critical need to collect evidence from iPhones. They later became the foundation for virtually every commercial forensics tool to make it to market at the time. I’ve supported thousands of agencies worldwide over several years, trained state, federal, and military personnel in iOS forensics, assisted hands-on in numerous high-profile cases, and invested thousands of hours of continued research and development in a suite of tools I provided at no cost – for the purpose of helping to solve crimes. I’ve received letters from a former lab director at FBI’s RCFL, DOJ, NASA OIG, and other agencies citing numerous cases my tools have helped solve. I’ve done what I can to serve my country, and asked for little in return.

First, let me say that I am glad the FBI has found a way to get into Syed Farook’s iPhone 5c. Having assisted with many cases, I understand from firsthand experience what you are up against, and have enormous respect for what your agency does, as well as others like it. Oftentimes it is that one finger that stops the entire dam from breaking. I would have been glad to assist your agency with this device, and even reached out to my contacts at the FBI with a method I’ve since demonstrated in a proof-of-concept. Unfortunately, in spite of my past assistance, FBI lawyers prevented any meetings from occurring. Nonetheless, I am glad someone was able to reach you with a viable solution.

Continue reading

How Apple Can Make Their FBI Problems Go Away

An adversary has an unknown exploit, and it could be used on a large scale to attack your platform. Your threat isn’t just your adversary, but also anyone who developed or sold the exploit to the adversary. The entire chain of information from conception to final product could be compromised anywhere along the way, and sold to a nation state on the side, blackmailed or bribed out of someone, or just used maliciously by anyone with knowledge or access. How can Apple make this problem go away?

The easiest technical solution is a boot password. The trusted boot chain has been impressively solid for the past several years, since Apple minimized its attack surface after the 24kpwn exploit in the early days. Apple’s trusted boot chain consists of a multi-stage boot loader, with each phase of boot checking the integrity of the next. Having been stripped down, it is now a shell of the hacker’s smorgasbord it used to be. It’s also a very direct and visible execution path for Apple, and so if there ever is an alleged exploit of it, it will be much easier to audit the code and pin down points of vulnerability.
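For the unfamiliar, the chain works roughly like the toy model below: each stage ships knowing how to verify the next, and the first stage is implicitly trusted silicon. (The real chain verifies Apple-signed firmware images, not bare hashes; this is only a conceptual sketch.)

```python
import hashlib

# Toy trusted boot chain: each stage carries the digest of the next,
# and the Boot ROM at the head of the chain is implicitly trusted.
stages = [
    {"name": "Boot ROM", "code": b"rom code"},
    {"name": "LLB",      "code": b"low level bootloader"},
    {"name": "iBoot",    "code": b"iboot"},
    {"name": "kernel",   "code": b"kernel"},
]

# Baked in at build/sign time: each stage's expected successor digest.
for cur, nxt in zip(stages, stages[1:]):
    cur["next_digest"] = hashlib.sha256(nxt["code"]).digest()

def boot(chain):
    for cur, nxt in zip(chain, chain[1:]):
        if hashlib.sha256(nxt["code"]).digest() != cur["next_digest"]:
            raise RuntimeError(f"{cur['name']}: {nxt['name']} failed "
                               "integrity check; halting boot")
        print(f"{cur['name']} verified {nxt['name']}")

boot(stages)
```

Tamper with any stage and the stage before it refuses to hand off; a boot password would simply gate this chain behind a secret only the user knows.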

Continue reading

Viability of Software Exploit and SEP Devices

A number of people have been asking me my thoughts on the viability of a software exploit against Secure Enclave enabled devices, so here’s my opinion:

First, it is interesting to note that the FBI categorizes this tool’s capabilities by “5c” and “9.0” – namely, hardware model and firmware version. They won’t confirm that this is the only combination the tool runs on, but they have noted that these are the two factors they categorize it by. This is consistent with how exploit-based forensics tools have functioned in the past. My own forensics tools (for older iPhones) came in different modules tailored to a specific hardware platform and firmware version. This is because most exploits require taking Apple’s own firmware and patching it; those patches require slightly different offsets in the kernel (and possibly the boot loader). The software to patch is also going to be slightly different for each hardware and firmware combination. So without really saying anything, the FBI has hinted that this might be a software exploit. Had this been a hardware attack, such as a NAND mirroring technique, firmware version likely wouldn’t be a point of discussion, as that technique’s feasibility depends on hardware revision. This is all conjecture, of course, but it is worth noting that the hints are already there.

If the FBI did in fact use a software exploit, the question then becomes one of how viable it is on other platforms. Typically, a software exploit of this magnitude takes advantage of vulnerabilities that have long existed in the firmware, making it more than likely that the exploit would also be effective (possibly with a little tailoring) against older versions of iOS. Even if the exploit today was tailored specifically for this device, adjusting offsets and patching slightly different copies of Apple’s firmware is a relatively painless process. A number of open source tools even exist to find and patch the correct bytes in decrypted Apple kernels.
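The find-and-patch step those tools automate is conceptually simple. A toy sketch (the byte patterns below are made up for illustration and are not real iOS offsets or instructions):

```python
# Toy kernel patcher: locate a known byte sequence in a decrypted
# kernelcache and overwrite it. Real patches target specific security
# checks, and the bytes differ per hardware/firmware combination.
FIND = bytes.fromhex("00f0a0e3")     # hypothetical instruction bytes
REPLACE = bytes.fromhex("01f0a0e3")  # hypothetical patched bytes

with open("kernelcache.decrypted", "rb") as f:  # illustrative filename
    kernel = bytearray(f.read())

offset = kernel.find(FIND)
if offset < 0:
    raise SystemExit("pattern not found; wrong firmware version?")

kernel[offset:offset + len(FIND)] = REPLACE
print(f"patched {len(FIND)} bytes at {offset:#x}")

with open("kernelcache.patched", "wb") as f:
    f.write(kernel)
```

Retargeting an exploit to a neighboring firmware build is often little more than refreshing those patterns and offsets.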

Continue reading

FBI Breaks Into San Bernardino iPhone

As expected, the FBI has succeeded in finding a method to recover the data on the San Bernardino iPhone, and now the government can see all of the cat pictures Farook was keeping on it. We don’t know what method was used, as it’s been classified. Given the time frame and the details of the case, it could have been either a hardware method (NAND mirroring) or a software method (exploitation). Many have speculated on both sides, but your guess is as good as mine. What I can tell you are the implications.

Continue reading

NAND “Mirroring” Concept Demonstration

This is a simple “concept” demonstration / simulation of a NAND mirroring attack on an iOS 9.0 device. I wanted to demonstrate how copying back disk content could allow for unlimited passcode attempts. Here, instead of using a chip programmer to copy certain contents of the NAND, I demonstrate it by copying the data using a jailbreak. For Farook’s phone, the FBI would remove the NAND chip, copy the contents into an image file, try passcodes, and then copy the original content back over onto the chip.

I did this here, only with a jailbreak: I made a copy of two property lists stored on the device, then copied them back and rebooted after five attempts. At the NAND level, actual blocks of encrypted disk content would be copied back and forth, whereas I’m working with files here. The concept is the same, and serves only to demonstrate that unlimited passcode attempts can be achieved by back-copying disk content. Again, NO JAILBREAK IS NEEDED to do this to Farook’s device, as the FBI would be physically removing the NAND to copy this data.
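In pseudocode terms, the loop looks something like the sketch below. The file paths are placeholders for the two property lists in my demo (on a real mirroring rig you’d be restoring raw flash blocks, not files), and PIN entry is device-specific, so it’s stubbed out:

```python
import itertools
import shutil
import subprocess

# Placeholder paths standing in for the two property lists that track
# failed attempts in my jailbreak demo.
STATE_FILES = ["attempt_state_1.plist", "attempt_state_2.plist"]
ATTEMPTS_PER_CYCLE = 5  # restore + reboot before escalating delays kick in

def snapshot():
    for f in STATE_FILES:
        shutil.copy2(f, f + ".orig")  # analogous to imaging the NAND once

def restore_and_reboot():
    for f in STATE_FILES:
        shutil.copy2(f + ".orig", f)  # copy the original content back
    subprocess.run(["reboot"])        # device boots with the old state

def try_passcode(pin):
    # Device-specific: in my demo this was manual entry; an IP-BOX-style
    # rig would inject the PIN over USB and watch for an unlock.
    raise NotImplementedError

snapshot()
for i, digits in enumerate(itertools.product("0123456789", repeat=4), 1):
    try_passcode("".join(digits))
    if i % ATTEMPTS_PER_CYCLE == 0:
        restore_and_reboot()  # counter never reaches the wipe threshold
```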

Other techniques can be used to speed this up. For example, the clock could be fudged by giving the device a data connection and rerouting its time requests to a local server – think IMSI catcher. This could be used to continuously bump the time forward five or ten minutes so that more passcode attempts could be tried per reboot, with fewer delays. The NAND chip could also be socketed or reworked in other ways to make swaps seamless. Lastly, the same techniques used in the IP-BOX, such as entering PINs over USB and using a light sensor to detect an unlock, could help automate this and make it more efficient. Overall, I think this puts to bed any notion that the technique “doesn’t work”.

UPDATE: This technique was later proven by another researcher, who wrote about it in this paper: http://arxiv.org/pdf/1609.04327v1.pdf


iOS / OS X and Forensics Trace Leakage

As I sit here, trying to be a good parent by reviewing my daughter’s text messages, I have to assume I’ve made my kids miserable by having a father with DFIR skills. I’m able to dump her device wirelessly using a pairing record and a tool I designed to talk to the phone. I generate a wireless desktop backup, decrypt it, and this somehow makes my kids safer. What bothers me, as a parent, is the incredible trove of forensic artifacts I find in my children’s data every month: deleted messages, geolocation information, even drafts and thumbnails that had all been deleted months ago – sometimes thousands of messages. In 2008, I wrote a small book with O’Reilly named iPhone Forensics that detailed this forensic mess. Apple has made some improvements since then, but the overall situation has gotten worse. The biggest reason the iPhone is so heavily targeted for forensics and intelligence is the very wide data recovery surface it provides: it’s a forensics gold mine.

To Apple’s credit, they’ve done a stellar job of improving the security of iOS devices in general (the “front door”), but we know that they, just like every other manufacturer, are still dancing on the lip of the volcano. Aside from the risk of my device being hacked, there’s a much greater issue at work here. Whether it’s a search warrant, an unlawful traffic stop in Michigan, someone stealing my backups, an ex-lover, or just leaving my phone unlocked by accident at my desk, it only takes a single violation of the user’s privacy to obtain an entire lengthy history of private information that was thought deleted. This problem is also prevalent in desktop OS X.

Continue reading

AceDeceiver: Breaking Apple’s Cryptographic Leash

Over the past week, I’ve been writing about cryptographic leashes and how easily they could be broken in the case of controlling the FBI’s proposed iOS backdoor. Surprisingly, the first serious example of this surfaced this week. The researchers at Palo Alto Networks, who have been killing it lately with great iOS research, did a breakdown of a piece of Chinese malware known as AceDeceiver. AceDeceiver breaks the cryptographic leash baked into the iPhone’s App Store system, allowing an attacker to install applications on the host iPhone even after they’ve been revoked by Apple. While the malware, in its present form, isn’t likely to cause widespread damage, the vulnerabilities in Apple’s DRM that it exposes could be used for far more malicious purposes.

AceDeceiver starts its life as malware on your desktop. In its present form, you’d have to be dumb enough to install a Chinese pirate app store in order to have to worry about this, but in a more malicious form, something like it could potentially be embedded as a trojan in legitimate software. The malware performs a man-in-the-middle attack between your computer and the App Store, and fudges the authorizations used to let your iPhone run purchased software. Think of the attack as forging a receipt, like paying for a set of towels at Target, then returning a different set. Apple has no way to check the towels (your apps) to make sure they’re the same ones, so the iPhone lets the app run since you have a valid receipt. It’s even worse than this, because the receipts aren’t tied to your iTunes account – you can pull someone else’s receipt out of the trash and return towels you never purchased. It’s this receipt that is re-used to install the malware’s own software on your iPhone by impersonating iTunes. The malware author can use his or her own receipts to load previously approved App Store software onto your phone.
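To see why the receipt is replayable, consider this toy model of the authorization: the “receipt” authenticates the app, but nothing in it binds it to the buyer’s account or to the device presenting it. (The names and the HMAC stand-in here are mine; the real FairPlay exchange is far more involved.)

```python
import hashlib
import hmac

STORE_KEY = b"store-private-key"  # toy stand-in for Apple's signing secret

def issue_receipt(app_id: bytes) -> bytes:
    # Toy model: the receipt proves "this app was purchased" but carries
    # no binding to the purchasing account or the installing device.
    return hmac.new(STORE_KEY, app_id, hashlib.sha256).digest()

def device_accepts(app_id: bytes, receipt: bytes) -> bool:
    return hmac.compare_digest(receipt, issue_receipt(app_id))

# An attacker MITMs a victim's purchase and keeps a copy of the receipt...
stolen = issue_receipt(b"previously-approved-app")

# ...then impersonates iTunes and replays it to install on YOUR phone.
assert device_accepts(b"previously-approved-app", stolen)
print("replayed receipt accepted: app installs without a purchase")
```

Had the receipt been bound to the device and account (and made single-use), returning towels you never bought wouldn’t work.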

Continue reading

Apple vs. FBI: Where We Are Now, and Where We’re Going

Much has happened since a California magistrate court granted an order for Apple to assist the FBI under the All Writs Act. For one, most of us now know what the All Writs Act is: an ancient law passed before the Fourth Amendment even existed, now somehow relevant to modern technology a few hundred years later. Use of this act has exploded into a legal argument about whether it grants the government carte blanche to demand anything and everything from private companies (and, incidentally, individuals) if it helps them prosecute crimes. Of course, that’s just the tip of the iceberg. We’ve seen strong debates about whether any person should be allowed to have private conversations, thoughts, or ideas that can’t later be searched; whether forcing others to work for the government violates the constitution; whether other countries will line up to exploit technology if America does; and ultimately – at the heart of all of these – whether fear of the word “terrorism” is enough to cause us all to burn our constitution.

Over the past few weeks, the entire tech community has gotten behind Apple, filing a barrage of friend-of-the-court briefs on Apple’s behalf. Security experts such as myself, Crypto “Rock Stars”, constitutionalists, technologists, lawyers, and 30 Helens all agree that Apple is in the right, and that backdooring iOS would cause irreparable damage to the security, privacy, and safety of hundreds of millions of diplomats, judges, doctors, CEOs, teenage girls, victims of crimes, parents, celebrities, politicians, and all men and women around the world. Throughout the past month, legal exchanges have escalated from ice cold to white hot, and from professional to a traveling flea circus as ridiculous terms such as “lying dormant cyber pathogen” have been introduced. Congress, the courts, and the public have seen strong technical and legal arguments, impassioned pleas from victims, attempts at reason by the law enforcement community, name calling, proverbial mugs-thrown-across-the-room, uncontrollable profanity on media briefings, and just about any other form of pressure manifesting itself that one can imagine.

Continue reading

A Bomb on a Leash

The idea of a controlled explosion comes to mind when I think about the pending proceedings with Apple. The Department of Justice argues that a backdoored version of iOS can be controlled, in that Apple’s existing security mechanisms can prevent it from blowing up any device other than Farook’s. This is quite true: the code signing and TSS signing mechanisms used to install firmware have controls that can most certainly bind a firmware bundle to a given device UDID. What’s not true is the amount of real control and protection this provides.

Think of Apple’s signing mechanisms as a kind of “leash”, if you will; they provide a means of digital rights management to control any payload delivered onto the device. Where the DOJ’s argument goes wrong is that it focuses too much on this leash, and too little on the payload itself. The payload in this scenario is a modified version of iOS with a direct line into a device’s security mechanisms, able both to disable them and to manipulate them to rapidly brute force a passcode (remotely, mind you). It’s the electronic equivalent of an explosive for an iPhone that will blow the safe open (FBI’s analogy, not mine). What Apple is being forced to design, develop, test, validate, and protect is essentially a bomb on a leash.

Continue reading

Apple Should Own The Term “Warrant Proof”

The Department of Justice, in a March 10 filing, accused Apple of outright making “warrant proof” devices, and of obstructing justice by making those devices so secure that they cannot be searched, even with a warrant. While these words belonged to the DOJ, I think Apple should own them. If you study our state laws, federal laws, and international treaties, you’ll find many examples of property that actually is protected against warrants. Yes, there are things in this country that are deemed warrant proof.

According to the State Department, Article 27.3 of the Vienna Convention on Diplomatic Relations states that a diplomatic pouch “shall not be opened or detained”. In other words, it’s warrant proof. No law enforcement agency in our country is permitted – under international treaty – to open a diplomatic pouch, and any warrants issued are null and void. Guidelines even allow for unaccompanied diplomatic pouches traveling without a diplomat or courier, which further underscores the need for such pouches to be secure: they should have locks, and strong ones at that. Do we still have spying? Absolutely, and it’s illegal. It is not only reasonable, then, but important to have a device like the iPhone – secure against illegal search and seizure.

Continue reading

An Example of “Warrant-Friendly Security”

The encryption on the iPhone is clearly doing its job. Good encryption doesn’t discriminate between attackers; it simply protects data – that’s its job – and it’s frustrating both criminals and law enforcement. The government has recently made arguments insisting that we must find a “balance” between protecting your privacy and providing a method for law enforcement to procure evidence with a warrant. If we don’t, the Department of Justice and the President himself have made it clear that such privacy could easily be legislated out of our products. Some think having a law enforcement backdoor is a good idea. Here, I present an example of what “warrant friendly” security looks like. It already exists. Apple has been using it for some time. It’s integrated into iCloud’s design.

Unlike the desktop backups your iPhone makes, which can be protected with a backup password, the backups sent to iCloud are not encrypted this way. They are absolutely encrypted, but differently – in a way that allows Apple to provide iCloud data to law enforcement with a subpoena. Apple has advertised iCloud as “encrypted” (which is true) and secure. It still advertises this today, in fact, the same way it has for the past few years:

“Apple takes data security and the privacy of your personal information very seriously. iCloud is built with industry-standard security practices and employs strict policies to protect your data.”

So with all of this security, it sure sounds like your iCloud data should be secure, and also warrant friendly – on the surface, this sounds like a great “balance between privacy and security”. Then, the unthinkable happened.

Continue reading

On Dormant Cyber Pathogens and Unicorns

Gary Fagan, the Chief Deputy District Attorney for San Bernardino County, filed an amicus brief with the court in defense of the FBI’s bid to compel Apple to backdoor Farook’s iPhone. In this brief, DA Michael Ramos made the outrageous claim that Farook’s phone might contain a “lying dormant cyber pathogen” – a term that doesn’t actually exist in computer science, let alone in information security.


Continue reading

CIS Files Amici Curiae Brief in Apple Case

CIS sought to file a friend-of-the-court, or amici curiae, brief in the case today. We submitted the brief on behalf of a group of experts in iPhone security and applied cryptography: Dino Dai Zovi, Charlie Miller, Bruce Schneier, Prof. Hovav Shacham, Prof. Dan Wallach, Jonathan Zdziarski, and our colleague in CIS’s Crypto Policy Project, Prof. Dan Boneh. CIS is grateful to them for offering up their expert take on the serious implications of the court’s order for the entire security ecosystem. We hope the court will listen.

Read more at https://cyberlaw.stanford.edu/blog/2016/03/cis-files-amici-curiae-brief-apple-case-behalf-iphone-security-experts-and-applied

Mistakes in the San Bernardino Case

Many sat before Congress yesterday and made their cases for and against a backdoor into the iPhone. Little was said, however, of the mistakes that led us here before Congress in the first place, and many inaccurate statements went unchallenged.

The most notable mistake the media has caught onto has been the blunder of changing the iCloud password on Farook’s account, and Comey acknowledged this mistake before Congress.

“As I understand from the experts, there was a mistake made in that 24 hours after the attack where the [San Bernardino] county at the FBI’s request took steps that made it hard—impossible—later to cause the phone to back up again to the iCloud,”

Comey’s statements appear to be consistent with court documents, all suggesting that both Apple and the FBI believed the device would begin backing up to the cloud once it was connected to a known WiFi network. This essentially establishes that interference with the evidence ultimately destroyed the trusted relationship between the device and its iCloud account, which prevented evidence from being available. In other words, the mistake of trying to break into the safe caused the safe to lock down in a way that made it more difficult to get evidence out of it.

Continue reading

Shoot First, Ask Siri Later

You know the old saying, “shoot first, ask questions later”. It refers to the notion that careless law enforcement officers can be short-sighted in solving the problem at hand. It’s impossible to question a dead person, and officers who shoot first have blown their only chance of getting answers from the suspect by failing to apply their training and good judgment. The same scenario applies to digital evidence. Many law enforcement agencies do not know how to properly handle digital evidence, and make mistakes that effectively kill their one shot at getting the answers they need.

In the case involving Farook’s iPhone, two things went wrong, each of which destroyed a chance of lifting evidence off the device.

First, changing the iCloud password prevented the device from being able to push an iCloud backup. As Apple’s engineers were walking the FBI through the process of getting the device to start sending data again, it became apparent that the password had been changed (suggesting they may have even seen the device try to authorize with iCloud). Had the backup succeeded, there would have been very little, if anything, on the phone that couldn’t have been recovered from the iCloud backup.

Second, and equally damaging to the evidence, the device was apparently either shut down or allowed to drain after it was seized. Shutting a device down is a common – but outdated – practice in field operations. Modern device seizure requires not only that the device be kept powered up, but also that all of the protocols leading up to the search and seizure be tuned so that it happens quickly enough to keep the battery from draining before you even arrive on scene. Letting the device power down effectively shot the suspect dead, removing any chance of doing the following:

Continue reading

Apple’s Burden to Protect or Perpetually Create a “Weapon”

As the Apple/FBI dispute continues, court documents reveal the argument that Apple has been providing forensic services to law enforcement for years without tools being hacked or leaked from Apple. Quite the contrary: information leaks out of Foxconn all the time, and in fact some of the software and hardware tools used to hack iOS products over the past several years (IP-BOX, Pangu, and so on) originated in China, where Apple’s manufacturing takes place. Outside of China, jailbreak after jailbreak has taken advantage of vulnerabilities in iOS, some with the help of tools leaked out of Apple’s HQ in Cupertino. Devices have continually been compromised, and even today Apple’s security response team releases dozens of fixes for vulnerabilities that have been exploited outside of Apple. Setting all of this aside for a moment, however, let’s take a look at the more immediate dangers of such statements.

By affirming that Apple can and will protect such a backdoor, Comey’s statement admits that Apple will be faced not only with the burden of breathing this forensics backdoor into existence, but also with taking perpetual steps to protect it once it’s been created. In other words, the courts are forcing Apple to create what would be considered a weapon under the latest proposed Wassenaar rules, and charging Apple with the burden of preventing that weapon from getting out – either the code itself, or the weaknesses that Apple would have to keep baked into its products for the weapon to work.

Continue reading