Category: Apple

Joining Apple

I’m pleased to announce that I’ve accepted a position with Apple’s Security Engineering and Architecture team, and am very excited to be working with a group of like-minded individuals so passionate about protecting the security and privacy of others. This decision marks the conclusion of what I feel has been a matter of conscience

Read More

Slides: Crafting macOS Root Kits

Here are the slides from my talk at Dartmouth College this week; this was a basic introduction / overview of the macOS kernel and how root kits often have fun with the kernel. There’s not much new here, but the deck might be a good introduction for anyone looking to get into developing security tools

Read More

Resolving Kernel Symbols Post-ASLR

There are some 21,000 symbols in the macOS kernel, but all but around 3,500 are opaque even to kernel developers. The reasoning behind this was likely twofold: first, Apple is continually making changes and improvements in the kernel, and they probably don’t want kernel developers mucking around with unstable portions of the code. Second, kernel development used to be the wild wild west, especially before you needed a special code signing cert to load a kext, and a lot of bad devs wrote awful code that made macOS completely unstable. Customers running such software probably blamed Apple for it, instead of the developer. Apple now has tighter control over who can write kernel code, but that doesn’t mean developers have gotten any better at it. Looking at some commercial products out there, there’s unsurprisingly still terrible code doing things in the kernel that should never be done.

So most of the kernel is opaque to kernel developers for good reason, and this has reduced the amount of rope they have to hang themselves with. For those doing really advanced work, though (especially in security), the kernel can sometimes feel like a Fisher-Price steering wheel because of this, and so many have found ways around privatized functions by resolving these symbols and using them anyway. After all, if you’re going to combat root kits, you have to act like a root kit in many ways, and if you’re going to combat ransomware, you have to dig your claws into many of the routines that ransomware would use – some of which are privatized.

Today, there are many awful implementations of both malware and anti-malware code out there that resolve these private kernel symbols. Many of them do idiotic things like opening and reading the kernel from a file, scanning memory for magic headers, and other very non-portable techniques that risk destabilizing macOS even more. So I thought I’d take a look at one of the good examples that particularly stood out to me. Some years back, Nemo and Snare wrote some good in-memory symbol-resolving code that walked the LC_SYMTAB without having to read the kernel from disk, scan memory, or do any other disgusting things, and did it in a portable way that worked on whatever new versions of macOS came out.
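
To make the technique concrete, here’s a condensed sketch of that approach: walk the Mach-O load commands to find LC_SYMTAB and the __LINKEDIT segment, translate the symbol table’s file offsets through the __LINKEDIT mapping, then match names and apply the ASLR slide. This is my own simplification of the idea – not Nemo and Snare’s code – and it omits the bounds and sanity checks that production kernel code would need.

```c
#include <mach-o/loader.h>
#include <mach-o/nlist.h>
#include <stdint.h>
#include <string.h>

/* Resolve a symbol by walking LC_SYMTAB in a 64-bit Mach-O image
 * already mapped in memory at `base` and slid by `slide` bytes.
 * Sketch only: no bounds checking. */
static uint64_t
resolve_symbol(const struct mach_header_64 *base, uint64_t slide,
               const char *name)
{
    const struct load_command *lc = (const struct load_command *)(base + 1);
    const struct symtab_command *symtab = NULL;
    const struct segment_command_64 *linkedit = NULL;

    for (uint32_t i = 0; i < base->ncmds; i++) {
        if (lc->cmd == LC_SYMTAB)
            symtab = (const struct symtab_command *)lc;
        else if (lc->cmd == LC_SEGMENT_64 &&
                 !strcmp(((const struct segment_command_64 *)lc)->segname,
                         SEG_LINKEDIT))
            linkedit = (const struct segment_command_64 *)lc;
        lc = (const struct load_command *)((uintptr_t)lc + lc->cmdsize);
    }
    if (!symtab || !linkedit)
        return 0;

    /* LC_SYMTAB stores file offsets; translate them through the
     * __LINKEDIT segment's mapping to get live addresses. */
    uintptr_t linkedit_base =
        (uintptr_t)(linkedit->vmaddr + slide - linkedit->fileoff);
    const struct nlist_64 *syms =
        (const struct nlist_64 *)(linkedit_base + symtab->symoff);
    const char *strings = (const char *)(linkedit_base + symtab->stroff);

    for (uint32_t i = 0; i < symtab->nsyms; i++) {
        if (syms[i].n_un.n_strx &&
            !strcmp(strings + syms[i].n_un.n_strx, name))
            return syms[i].n_value + slide;  /* slid, usable address */
    }
    return 0;  /* not found */
}
```

Because everything is read from the image the kernel itself mapped, the same code keeps working across macOS releases – which is exactly what made this approach so much cleaner than scanning disk or memory for magic headers.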

Read More

Technical Analysis: Meitu is Junkware, but not Malicious

Last week, I live tweeted some reverse engineering of the Meitu iOS app, after it got a lot of attention on Android for some awful things, like scraping the IMEI of the phone. To summarize my own findings, the iOS version of Meitu is, in my opinion, one of thousands of types of crapware that you’ll find on any mobile platform, but it does not appear to be malicious. In this context, I considered exfiltration or destruction of personal data to be a key indicator of malicious behavior, along with any kind of unauthorized code execution on the device or other nefarious tasks… but Meitu does not appear to go beyond basic advertiser tracking. The application comes with several ad trackers and data mining packages compiled into it – which appear to be primarily responsible for the app’s suspicious behavior. While it’s unusually overloaded with tracking software, it doesn’t seem to be performing any kind of exfiltration of personal data, with the possible exception of location tracking. One of the reasons the iOS app is likely less disgusting than the Android app is that it can’t get away with most of that kind of behavior on the iOS platform.

Read More

Configuring the Touch Bar for System Lockdown

The new Touch Bar is often dismissed as a gimmick, but one powerful capability it has is to function as a lockdown mechanism for your machine in the event of a physical breach. By changing a few power management settings and customizing the Touch Bar, you can add a button that will instantly lock the machine’s screen and then begin a configurable countdown (e.g. 5 minutes) to lock down the entire system. Lockdown disables the fingerprint reader, removes power to the RAM, and discards your FileVault keys, effectively locking the encryption, protecting you from cold boot attacks, and preventing the system from being unlocked with a fingerprint.

One of the reasons you may want to do this is to allow the system to remain live while you step away, answer the door, or run to the bathroom, but to lock things down in the event that you don’t come back within a few minutes. It can be ideal for the office, hotels, or anywhere you feel your system may become physically compromised. This technique offers the convenience of being able to unlock the system with your fingerprint if you come back quickly, but the safety of having the system secure itself if you don’t.

Read More

San Bernardino: Behind the Scenes

I wasn’t originally going to dig into some of the ugly details about San Bernardino, but with FBI Director Comey’s latest actions to publicly embarrass Hillary Clinton (whom I don’t support), or to possibly tip the election toward Donald Trump (whom I also don’t support), I am getting to learn more about James Comey, and from what I’ve learned, a pattern of pushing a private agenda seems to be emerging. This is relevant because the San Bernardino iPhone matter saw numerous accusations that Comey was pushing a private agenda as well: that it was a power grab for the bureau and an attempt to set a court precedent forcing private business to backdoor encryption, while lying to the public and possibly misleading the courts under the guise of terrorism.

Read More

WhatsApp Forensic Artifacts: Chats Aren’t Being Deleted

Sorry, folks: while experts are saying the encryption checks out in WhatsApp, it looks like the latest version of the app tested leaves a forensic trace of all of your chats, even after you’ve deleted, cleared, or archived them… even if you “Clear All Chats”. In fact, the only way to get rid of them appears to be to delete the app entirely.

To test, I installed the app and started a few different threads. I then archived some, cleared some, and deleted some threads. I made a second backup after running the “Clear All Chats” function in WhatsApp. None of these deletion or archival options made any difference in how deleted records were preserved. In all cases, the deleted SQLite records remained intact in the database.

Just to be clear, WhatsApp is deleting the record (they don’t appear to be trying to intentionally preserve data); however, the record itself is not being purged or erased from the database, leaving a forensic artifact that can be recovered and reconstructed back into its original form.
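
This is a property of SQLite itself rather than anything WhatsApp added: with the default secure_delete setting off, a deleted row’s bytes simply move to a free list inside the database file. A minimal sketch of the artifact – the schema and message are illustrative, not WhatsApp’s actual database:

```c
#include <sqlite3.h>
#include <stdio.h>
#include <string.h>

/* Insert a message, delete it, then scan the raw file: with SQLite's
 * default settings, the "deleted" row's bytes remain on a free page. */
int main(void)
{
    static char buf[1 << 20];
    const char *needle = "meet me at midnight";
    sqlite3 *db;

    sqlite3_open("demo.db", &db);
    sqlite3_exec(db, "CREATE TABLE msg(body TEXT);", 0, 0, 0);
    sqlite3_exec(db, "INSERT INTO msg VALUES('meet me at midnight');", 0, 0, 0);
    sqlite3_exec(db, "DELETE FROM msg;", 0, 0, 0);  /* the "clear" */
    sqlite3_close(db);

    /* The row is gone from every query, but not from the file. */
    FILE *f = fopen("demo.db", "rb");
    size_t n = fread(buf, 1, sizeof(buf), f);
    fclose(f);

    for (size_t i = 0; i + strlen(needle) <= n; i++)
        if (!memcmp(buf + i, needle, strlen(needle)))
            printf("deleted chat still on disk at offset %zu\n", i);
    return 0;
}
```

Issuing PRAGMA secure_delete = ON before deleting, or running a VACUUM afterward, makes the raw scan come up empty – the sort of scrubbing you’d want a messaging app to do on your behalf.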

Read More

WSJ Describes Reckless Behavior by FBI in Terrorism Case

The Wall Street Journal published an article today citing a source saying the FBI plans to tell the White House that “it knows so little about the hacking tool that was used to open [the] terrorist’s iPhone that it doesn’t make sense to launch an internal government review”. If true, this should be taken as an act of recklessness by the FBI with regard to the Syed Farook case: The FBI apparently allowed an undocumented tool to run on a piece of high-profile, terrorism-related evidence without having adequate knowledge of the specific function or the forensic soundness of the tool.

Best practices in forensic science dictate that any forensics instrument must be tested and validated. It must be accepted as forensically sound before it can be used on live evidence. Such a tool must yield predictable, repeatable results, and an examiner must be able to explain its process in a court of law. Our court system expects this, and allows for tools (and examiners) to face numerous challenges based on the credibility of the tool, which can only be determined by rigorous analysis. The FBI’s admission that they have so little knowledge about how the tool works is an admission of failure to evaluate the science behind it; its core functionality cannot have been evaluated in any meaningful way. Knowing how the tool managed to get into the device should be the bare minimum I would expect anyone to know before shelling out over a million dollars for a solution, especially one that was going to be used on high-profile evidence.

A tool should not make changes to a device, and any changes it does make should be documented and repeatable. There are several other variables to consider in such an effort, especially when imaging an iOS device. Apart from changes made directly by the tool (such as overwriting unallocated space, or portions of the file system journal), simply unlocking the device can cause the operating system to make a number of changes, start background tasks that could lead to destruction of data, or cause other changes unintentionally. Without knowing how the tool works, what portions of the operating system it affects, what vulnerabilities are exploited, what the payload looks like, where the payload is written, or what parts of the operating system are disabled by the tool – among a host of other important things – there is no way to effectively measure whether or not the tool is forensically sound. Simply running it against a dozen other devices to “see if it works” is not sufficient to evaluate a forensics tool – especially one that originated from a grey hat hacking group, potentially with very little actual in-house forensics expertise.
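
To illustrate just one small slice of that bar: a validation harness would, at minimum, hash the acquired image across runs and confirm the tool never alters the evidence. A trivial sketch using macOS’s CommonCrypto – the two image paths are whatever your acquisitions produced, and real validation obviously covers far more than a checksum:

```c
#include <CommonCrypto/CommonDigest.h>
#include <stdio.h>
#include <string.h>

/* Compare SHA-256 hashes of two acquired images (e.g. two runs of the
 * same tool on the same evidence) to spot-check repeatability. */
static int sha256_file(const char *path,
                       unsigned char out[CC_SHA256_DIGEST_LENGTH])
{
    unsigned char buf[65536];
    CC_SHA256_CTX ctx;
    size_t n;
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;
    CC_SHA256_Init(&ctx);
    while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
        CC_SHA256_Update(&ctx, buf, (CC_LONG)n);
    fclose(f);
    CC_SHA256_Final(out, &ctx);
    return 0;
}

int main(int argc, char *argv[])
{
    unsigned char a[CC_SHA256_DIGEST_LENGTH], b[CC_SHA256_DIGEST_LENGTH];
    if (argc != 3 || sha256_file(argv[1], a) || sha256_file(argv[2], b))
        return 1;
    puts(memcmp(a, b, sizeof(a)) == 0
             ? "images identical: run was repeatable"
             : "images differ: the tool changed the evidence");
    return 0;
}
```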

Read More

Hardware-Entangled APIs and Sessions in iOS

Apple has long enjoyed a security architecture that rests, in part, on the entanglement of its encryption with a device’s physical hardware. This pairing has proven to be highly effective at thwarting a number of different types of attacks, allowing for mobile payments processing, secure encryption, and a host of other secure services running on an iPhone. One security feature that iOS lacks for third party developers is the ability to validate the hardware a user is on, preventing third party applications from taking advantage of such a great mechanism. As a result, APIs can be easily spoofed, and sessions and services are often susceptible to a number of different forms of abuse. Hardware validation can be particularly important when dealing with crowd-sourced data and APIs, as was the case a couple of years ago when a group of students hacked Waze’s traffic intelligence. These types of Sybil attacks allow for thousands of phantom users to be created from a single instance of an application, or even spoof an API altogether without a connection to the hardware. Other types of MiTM attacks are also a threat to applications running under iOS, for example by stealing session keys or OAuth tokens to access a user’s account from a different device or API. What can Apple do to thwart these types of attacks? Hardware entanglement through the Secure Enclave.
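
Here is roughly what such a primitive could look like from an app’s perspective. Everything below is hypothetical – iOS exposes no such API to third parties – but it shows the shape of the defense: a per-device key pair whose private half lives only in the Secure Enclave, used to answer server challenges.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical hardware-entangled session scheme. se_sign() stands in
 * for a Secure Enclave signing primitive Apple does not expose to
 * third-party apps; the private key would never leave the chip. */
extern size_t se_sign(const uint8_t *msg, size_t len,
                      uint8_t *sig, size_t sig_max);          /* hypothetical */
extern bool ecdsa_verify(const uint8_t *pubkey, size_t pubkey_len,
                         const uint8_t *msg, size_t msg_len,
                         const uint8_t *sig, size_t sig_len);  /* any ECDSA lib */

/* Client: answer the server's random challenge with a signature that
 * can only come from this physical device's enclave-resident key. */
size_t answer_challenge(const uint8_t nonce[32], uint8_t sig_out[128])
{
    return se_sign(nonce, 32, sig_out, 128);
}

/* Server: accept the session only if the signature verifies against
 * the public key enrolled for this device at registration. A Sybil
 * farm can clone the app and spoof the API, but without the hardware
 * it can't mint valid signatures for thousands of phantom users. */
bool session_is_hardware_backed(const uint8_t *enrolled_pubkey,
                                size_t pubkey_len,
                                const uint8_t nonce[32],
                                const uint8_t *sig, size_t sig_len)
{
    return ecdsa_verify(enrolled_pubkey, pubkey_len, nonce, 32, sig, sig_len);
}
```

Because the hardware, not the session token, is what gets proven, a stolen session key or OAuth token would no longer be enough on its own to impersonate a user from another device.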

Read More

An Open Letter to FBI Director James Comey

Mr. Comey,

Sir, you may not know me, but I’ve impacted your agency for the better. For several years, since the advent of the iPhone, I have been assisting law enforcement as a private citizen, including the Federal Bureau of Investigation. I designed the original forensics tools and methods that were used to access content on iPhones, which were eventually validated by NIST/NIJ and ingested by FBI into your own internal version of my tools. Prior to that, FBI issued a major deviation allowing my tools to be used without validation, due to the critical need to collect evidence on iPhones. They were later the foundation for virtually every commercial forensics tool to make it to market at the time. I’ve supported thousands of agencies worldwide for several years, trained our state, federal, and military personnel in iOS forensics, assisted hands-on in numerous high-profile cases, and invested thousands of hours of continued research and development for a suite of tools I provided at no cost – for the purpose of helping to solve crimes. I’ve received letters from a former lab director at FBI’s RCFL, DOJ, NASA OIG, and other agencies citing numerous cases that my tools have been used to help solve. I’ve done what I can to serve my country, and asked for little in return.

First let me say that I am glad FBI has found a way to get into Syed Farook’s iPhone 5c. Having assisted with many cases, I understand from firsthand experience what you are up against, and have enormous respect for what your agency does, as well as others like it. Oftentimes it is that one finger that stops the entire dam from breaking. I would have been glad to assist your agency with this device, and even reached out to my contacts at FBI with a method I’ve since demonstrated in a proof-of-concept. Unfortunately, in spite of my past assistance, FBI lawyers prevented any meetings from occurring. Nonetheless, I am glad someone has been able to reach you with a viable solution.

Read More

How Apple Can Make Their FBI Problems Go Away

An adversary has an unknown exploit, and it could be used on a large scale to attack your platform. Your threat isn’t just your adversary, but also anyone who developed or sold the exploit to the adversary. The entire chain of information from conception to final product could be compromised anywhere along the way, and sold to a nation state on the side, blackmailed or bribed out of someone, or just used maliciously by anyone with knowledge or access. How can Apple make this problem go away?

The easiest technical solution is a boot password. The trusted boot chain has been impressively solid for the past several years, since Apple minimized its attack surface after the 24kpwn exploit in the early days. Apple’s trusted boot chain consists of a multi-stage boot loader, with each phase of boot checking the integrity of the next. Having been stripped down, it is now a shell of the hacker’s smorgasbord it used to be. It’s also a very direct and visible execution path for Apple, and so if there ever is an alleged exploit of it, it will be much easier to audit the code and pin down points of vulnerability.
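
As a mental model, each stage refuses to hand off control until it has authenticated the next image against a key rooted in hardware. A toy sketch of one link in that chain – verify_signature() is a hypothetical stand-in for Apple’s actual checks, and on real hardware every stage carries its own copy of this logic:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Toy model of one link in a multi-stage trusted boot chain
 * (Boot ROM -> LLB -> iBoot -> kernel). Not Apple's implementation;
 * just the shape of the handoff. */
struct boot_stage {
    const uint8_t *image;      /* the next stage's code */
    size_t         len;
    const uint8_t *signature;  /* signature over that image */
};

extern bool verify_signature(const uint8_t *image, size_t len,
                             const uint8_t *sig,
                             const uint8_t *root_pubkey);  /* hypothetical */
extern void jump_to(const uint8_t *image);                 /* never returns */

void boot_next(const struct boot_stage *next, const uint8_t *root_pubkey)
{
    /* Authenticate the successor before transferring control; a
     * tampered or unsigned image halts the boot rather than running. */
    if (!verify_signature(next->image, next->len, next->signature,
                          root_pubkey))
        for (;;)
            ;  /* integrity failure: halt */
    jump_to(next->image);
}
```

A short, linear chain like this is exactly why the execution path is so direct and auditable: there are only a handful of links to review, and each one either verifies or halts.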

Read More

Why a Software Exploit Would be a Threat to Secure Enclave Devices

As speculation continues about the FBI’s new toy for hacking iPhones, the possibility of a software exploit continues to be a point of discussion. In my last post, I answered the question of whether such an exploit would work on Secure Enclave devices, but I didn’t fully explain the threat that persists regardless.

For the sake of argument, let’s go with the theory that the FBI’s tool is using a software exploit. The software exploit probably doesn’t (yet) attack the Secure Enclave, as Farook’s 5c didn’t have one. But this probably also doesn’t matter. Let’s assume for a moment that the exploit being used could be ported to work on a 64-bit processor. The 5c is 32-bit, so this assumes a lot. Some exploits can be ported, while others just won’t work on the 64-bit architecture. But let’s assume that the work either has already been done or will be done shortly; a very plausible scenario.

Read More

Viability of Software Exploit and SEP Devices

A number of people have been asking me my thoughts on the viability of a software exploit against Secure Enclave enabled devices, so here’s my opinion:

First, it is interesting to note that the way the FBI categorizes this tool’s capabilities is “5c” and “9.0”; namely, hardware model and firmware version. They won’t confirm that it’s the only combination the tool runs on, but they have noted that these are the two factors they’re categorizing it by. This is consistent with how exploit-based forensics tools have functioned in the past. My own forensics tools (for older iPhones) came in different modules that were tailored for a specific hardware platform and firmware version. This is because most exploits require taking Apple’s own firmware and patching it; those patches require slightly different offsets in the kernel (and possibly the boot loader). The software to patch is also going to be slightly different for each hardware and firmware combination. So without really saying anything, the FBI has kind of hinted that this might be a software exploit. Had this been a hardware attack, such as a NAND mirroring technique, firmware version likely wouldn’t be a point of discussion, as the technique’s feasibility depends only on hardware revision. This is all conjecture, of course, but it is worth noting that the hints are already there.

If the FBI did in fact use a software exploit, the question then becomes one of how viable it is on other platforms. Typically, a software exploit of this magnitude could very well take advantage of vulnerabilities that have long existed in the firmware, making it more than likely that the exploit would also be effective (possibly with a little tailoring) against older versions of iOS. Even if the exploit today was tailored specifically for this device, adjusting offsets and patching slightly different copies of Apple’s firmware is a relatively painless process. A number of open source tools even exist to find and patch the correct bytes in decrypted Apple kernels.
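
The find-and-patch step really is that mechanical. A hedged sketch of the idea – the signature and patch bytes below are placeholders, not any real kernel patch:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Locate a byte signature in a decrypted firmware image and patch it
 * in place. The patterns are placeholders to show the mechanics; real
 * tools ship per-firmware signatures or compute offsets per build. */
static long find_pattern(const uint8_t *img, size_t len,
                         const uint8_t *pat, size_t patlen)
{
    for (size_t i = 0; i + patlen <= len; i++)
        if (!memcmp(img + i, pat, patlen))
            return (long)i;
    return -1;  /* not found: likely the wrong firmware version */
}

static int patch_image(uint8_t *img, size_t len)
{
    static const uint8_t sig[]   = { 0xDE, 0xAD, 0xBE, 0xEF };  /* placeholder */
    static const uint8_t patch[] = { 0x00, 0x00, 0x00, 0x00 };  /* placeholder */

    long off = find_pattern(img, len, sig, sizeof(sig));
    if (off < 0)
        return -1;  /* refuse to patch blindly */
    memcpy(img + off, patch, sizeof(patch));
    return 0;
}
```

This is why retargeting an exploit to a neighboring firmware build is cheap: the vulnerability is the hard part, while the offsets are bookkeeping.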

Read More

FBI Breaks Into San Bernardino iPhone

As expected, the FBI has succeeded in finding a method to recover the data on the San Bernardino iPhone, and now the government can see all of the cat pictures Farook was keeping on it. We don’t know what method was used, as it’s been classified. Given the time frame and the details of the case, it’s possible it could have been the hardware method (NAND mirroring) or a software method (exploitation). Many have speculated on both sides, but your guess is as good as mine. What I can tell you are the implications.

Read More

My Take on FBI’s “Alternative” Method

FBI acknowledged today that there “appears” to be an alternative way into Farook’s iPhone 5c – something experts have been shouting about for weeks now; in fact, we’ve been saying there are several viable methods. Before I get into which method I think is being used here, here are some other viable methods and why I don’t think they’re part of the solution being used:

  1. A destructive method, such as de-capping or deconstruction of the microprocessor, would preclude the FBI from being able to come back in two weeks to continue proceedings against Apple. Once the phone is destroyed, there’s very little Apple can do with it. Apple cannot repair a destroyed processor without losing the UID key in the process. De-capping, acid and lasers, and other similar techniques are likely out.
  2. We know the FBI hasn’t been reaching out to independent researchers, and so this likely isn’t some fly-by-night jailbreak exploit out of left field. If respected security researchers can’t talk to FBI, there’s no way a jailbreak crew is going to be allowed to either.
  3. An NSA 0-day is likely also out, as the court briefs suggested the technique came from outside USG.
  4. While it is possible that an outside firm has developed an exploit payload using a zero-day, or one of the dozens of code execution vulnerabilities published by Apple in patch releases, this likely wouldn’t take two weeks to verify, and the FBI wouldn’t stop a full court press (literally) against Apple unless the technique had been reported to have worked. A few test devices running the same firmware could easily determine such an attack would work, within perhaps hours. A software exploit would also be electronically transmittable, something that an outside firm could literally email to the FBI. Even if that two weeks accounted for travel, you still don’t need anywhere near this amount of time to demonstrate an exploit. It’s possible the two weeks could be for meetings, red tape, negotiating price, and so on, but the brief suggested that the two weeks was for verification, and not all of the other bureaucracy that comes after.
  5. This likely has nothing to do with getting intel about the passcode or reviewing security camera footage to find Farook typing it in at a cafe; the FBI is uncertain about the method being used and needs to verify it. They wouldn’t go through this process if they believed they already had the passcode in their possession, unless it was for fasting and prayer to hope it worked.
  6. Breaking the file system encryption on one of the NSA’s or CIA’s computing clusters is unlikely; that kind of brute forcing doesn’t give you a two-week heads-up that it’s “almost there”. It can also take significantly longer – possibly years – to crack.
  7. Experimental techniques such as frankensteining the crypto engine or other potentially niche edge techniques would take much longer than two weeks (or even two months) to develop and test, and would likely also be destructive.

Read More

iOS / OS X and Forensics Trace Leakage

As I sit here, trying to be the good parent in reviewing my daughter’s text messages, I have to assume that I’ve made my kids miserable by having a father with DFIR skills. I’m able to dump her device wirelessly using a pairing record and a tool I designed to talk to the phone. I generate a wireless desktop backup, decrypt it, and this somehow makes my kids safer. What bothers me, as a parent, is the incredible trove of forensic artifacts that I find in my children’s data every month: deleted messages, geolocation information, even drafts and thumbnails that had all been deleted months ago. Thousands of messages sometimes. In 2008, I wrote a small book with O’Reilly named iPhone Forensics that detailed this forensic mess. Apple has made some improvements since then, but the overall situation has gotten worse. The biggest reason the iPhone is so heavily targeted for forensics and intelligence is the very wide data recovery surface it provides: it’s a forensics gold mine.

To Apple’s credit, they’ve done a stellar job of improving the security of iOS devices in general (e.g. the “front door”), but we know that they, just like every other manufacturer, are still dancing on the lip of the volcano. Aside from the risk of my device being hacked, there’s a much greater issue at work here. Whether it’s a search warrant, an unlawful traffic stop in Michigan, someone stealing my backups, an ex-lover, or just leaving my phone unlocked by accident at my desk, it only takes a single violation of the user’s privacy to obtain an entire lengthy history of private information that was thought deleted. This problem is also prevalent in desktop OS X.

Read More

AceDeceiver: Breaking Apple’s Cryptographic Leash

For the past week, I’ve been writing all about cryptographic leashes and how easily they could be broken in the case of controlling FBI’s iOS backdoor. Surprisingly, the first serious example of this has surfaced this week. The researchers at Palo Alto Networks, who have been killing it lately with great iOS research, did a breakdown of a piece of Chinese malware known as AceDeceiver. AceDeceiver breaks the cryptographic leash baked into the iPhone’s App Store system, allowing an attacker to install applications on the host iPhone even after they’ve been revoked by Apple. While the malware, in its present form, isn’t likely to cause widespread damage, the vulnerabilities in Apple’s DRM that it exposes could be used for far more malicious purposes.

AceDeceiver starts its life as malware on your desktop. In its present form, you’d have to be dumb enough to install a Chinese pirate app store in order to have to worry about this, but in a more malicious form, something like it could potentially be embedded as a trojan in legitimate software. The malware performs a man-in-the-middle attack between your computer and the App Store, and fudges the authorizations used to let your iPhone run purchased software. Think of the attack as forging a receipt, like paying for a set of towels at Target, then returning a different set. Apple has no way to check the towels (your apps) to make sure they’re the same ones, so the iPhone lets the app run since you have a valid receipt. It’s even worse than this, because the receipts aren’t tied to your iTunes account – you can pull someone else’s receipt out of the trash and return towels you never purchased. It’s this receipt that is re-used to install the malware’s own software on your iPhone by impersonating iTunes. The malware author can use his or her own receipts to load previously approved App Store software onto your phone.
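
In code terms, the flaw is a missing binding check. The toy model below is my own illustration of that failure mode – not Apple’s actual FairPlay protocol: the device verifies that an authorization (the “receipt”) is genuinely Apple-signed, but never checks that it belongs to the account presenting it, so a replayed receipt passes.

```c
#include <stdbool.h>
#include <string.h>

/* Toy model of the authorization flaw AceDeceiver abuses; fields and
 * checks are illustrative, not Apple's actual FairPlay protocol. */
struct receipt {
    bool        signed_by_apple;  /* signature verifies against Apple's key */
    const char *account;          /* who actually bought the app */
};

/* Flawed check: any validly signed receipt is honored, even one
 * pulled from someone else's purchase and replayed by a fake iTunes. */
bool device_accepts_flawed(const struct receipt *r)
{
    return r->signed_by_apple;
}

/* The conceptual fix: bind the receipt to the presenting account so
 * a captured receipt can't be replayed on someone else's device. */
bool device_accepts_bound(const struct receipt *r, const char *this_account)
{
    return r->signed_by_apple
        && r->account
        && strcmp(r->account, this_account) == 0;
}
```

In the towel analogy, the second check is the store asking for ID and matching the receipt to the buyer before accepting the return.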

Read More

Apple vs. FBI: Where We Are Now, and Where We’re Going

Much has happened since a California magistrate court originally granted an order for Apple to assist the FBI under the All Writs Act. For one, most of us now know what the All Writs Act is: an ancient law that was passed before the Fourth Amendment even existed, now somehow relevant to modern technology a couple hundred years later. Use of this act has exploded into a legal argument about whether or not it grants the government carte blanche to demand anything and everything from private companies (and incidentally, individuals) if it helps them prosecute crimes. Of course, that’s just the tip of the iceberg. We’ve seen strong debates about whether any person should be allowed to have private conversations, thoughts, or ideas that can’t later be searched, whether forcing others to work for the government violates the constitution, whether other countries will line up to exploit technology if America does, and ultimately – at the heart of all of these – whether fear of the word “terrorism” is enough to cause us all to burn our constitution.

Over the past few weeks, the entire tech community has gotten behind Apple, filing a barrage of friend-of-the-court briefs on Apple’s behalf. Security experts such as myself, Crypto “Rock Stars”, constitutionalists, technologists, lawyers, and 30 Helens all agree that Apple is in the right, and that backdooring iOS would cause irreparable damage to the security, privacy, and safety of hundreds of millions of diplomats, judges, doctors, CEOs, teenage girls, victims of crimes, parents, celebrities, politicians, and all men and women around the world. Throughout the past month, legal exchanges have escalated from ice cold to white hot, and from professional to a traveling flea circus as ridiculous terms such as “lying dormant cyber pathogen” have been introduced. Congress, the courts, and the public have seen strong technical and legal arguments, impassioned pleas from victims, attempts at reason by the law enforcement community, name calling, proverbial mugs-thrown-across-the-room, uncontrollable profanity on media briefings, and just about any other form of pressure manifesting itself that one can imagine.

Read More

A Bomb on a Leash

The idea of a controlled explosion comes to mind when I think about pending proceedings with Apple. The Department of Justice argues that a backdoored version of iOS can be controlled, in that Apple’s existing security mechanisms can prevent it from blowing up any device other than Farook’s. This is quite true. The code signing and TSS signing mechanisms used to install firmware have controls that can most certainly bind a firmware bundle to a given device UDID. What’s not true is the amount of real control and protection this provides.

Think of Apple’s signing mechanisms as a kind of “leash”, if you will; they provide a means of digital rights management to control any payload delivered onto the device. Where the DOJ’s argument falls into error is that it focuses too much on this leash, and too little on the payload itself. The payload in this scenario is a modified version of iOS that has a direct line into a device’s security mechanisms, able both to disable them and to manipulate them to rapidly brute force a passcode (remotely, mind you). It’s the electronic equivalent of an explosive for an iPhone that will blow the safe open (FBI’s analogy, not mine). What Apple is being forced to design, develop, test, validate, and protect is essentially a bomb on a leash.

Read More

Apple Should Own The Term “Warrant Proof”

The Department of Justice, in a March 10 filing, accused Apple of outright making “warrant proof” devices, and of obstruction of justice for making these devices so secure that they could not be searched, even with a warrant. While these words belonged to the DOJ, I think Apple should own them. If you study our state laws, federal laws, and international treaties, you’ll see many examples of property that actually is protected against warrants. Yes, there are things in this country that are deemed warrant proof.

As the State Department notes, Article 27.3 of the Vienna Convention on Diplomatic Relations states that a diplomatic pouch “shall not be opened or detained”. In other words, it’s warrant proof. No law enforcement agency in our country is permitted – under international treaty – to open a diplomatic pouch, and any warrants issued are null and void. Guidelines even provide for unaccompanied diplomatic pouches that travel without a diplomat or courier, which further emphasizes the impetus for securing such pouches: they should have locks, and strong ones at that. Do we still have spying? Absolutely, and it’s illegal. It is not only reasonable, then, but important to have a device like the iPhone – secure against illegal search and seizure.

Read More
