Introducing Little Flocker: File Access Enforcement for macOS

Little Flocker is a utility for keeping your personal data safe from spyware, ransomware, misbehaving applications, and other common threats to your computer’s security by preventing any application from accessing your files without explicit permission. Available for OS X El Capitan and macOS Sierra, Little Flocker is easy to install and relatively unobtrusive to the user.

Little Flocker protects your files by integrating with the operating system at a low (kernel) level, and it runs with capabilities beyond those of root, making it effective against a number of types of rootkits and malware. It is ideal for users who need to run certain applications that they don’t necessarily trust.

One of the worst parts about being compromised is not knowing you’re compromised, and having your data stolen or encrypted by malware. Most security experts cannot even be positive that they are not compromised. Whether it’s spyware, ransomware, government hacking, trojanized applications, or apps that don’t respect your privacy, Little Flocker works with macOS to prevent unauthorized software from stealing your data.

More information: http://www.zdziarski.com/blog/?page_id=6171

WhatsApp Forensic Artifacts: Chats Aren’t Being Deleted

Sorry, folks: while experts are saying the encryption in WhatsApp checks out, the latest version of the app I tested leaves a forensic trace of all of your chats, even after you’ve deleted, cleared, or archived them… even if you “Clear All Chats”. In fact, the only way to get rid of them appears to be to delete the app entirely.

To test, I installed the app and started a few different threads. I then archived some threads, cleared some, and deleted some. I made a second backup after running the “Clear All Chats” function in WhatsApp. None of these deletion or archival options made any difference in how deleted records were preserved: in all cases, the deleted SQLite records remained intact in the database.

Just to be clear: WhatsApp is deleting the record (the developers don’t appear to be intentionally preserving data); however, the record itself is not being purged or erased from the database, leaving a forensic artifact that can be recovered and reconstructed back into its original form.

A Common Problem

Forensic trace is common in any application that uses SQLite, because SQLite does not vacuum databases by default on iOS (likely in an effort to reduce flash wear). When a record is deleted, it is simply added to a “free list”, and free records are not overwritten until the database later needs the extra storage (usually after many more records are created). If you delete large chunks of messages at once, large chunks of records end up on this free list, and it ultimately takes even longer for the data to be overwritten by new data. There is no guarantee the data will be overwritten by the next set of messages; in other apps, I’ve often seen artifacts remain in the database for months.
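
To illustrate, here is a minimal sketch using Python’s built-in sqlite3 module (the table name and message text are invented for demonstration), showing a deleted row surviving in the raw database file, and the secure_delete pragma that zeroes freed content:

    import sqlite3

    # Write one "chat" record, then delete it the way an app would.
    con = sqlite3.connect("demo.db")
    con.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, body TEXT)")
    con.execute("INSERT INTO messages (body) VALUES ('meet at the usual place')")
    con.commit()
    con.execute("DELETE FROM messages")
    con.commit()
    con.close()

    # The row is gone from the table, but its page sits on the free list
    # until SQLite reuses the space, so the plaintext remains on disk.
    print(b"usual place" in open("demo.db", "rb").read())   # True: artifact

    # Mitigation: secure_delete overwrites freed content with zeros
    # (VACUUM also rebuilds the file, at the cost of the wear noted above).
    con = sqlite3.connect("demo2.db")
    con.execute("PRAGMA secure_delete = ON")
    con.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, body TEXT)")
    con.execute("INSERT INTO messages (body) VALUES ('meet at the usual place')")
    con.commit()
    con.execute("DELETE FROM messages")
    con.commit()
    con.close()
    print(b"usual place" in open("demo2.db", "rb").read())  # False: zeroed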

The core issue here is that ephemeral communication is not ephemeral on disk. This is a problem Apple has struggled with as well, and one I explained, along with design recommendations, in this blog post.

Apple’s iMessage has this problem and it’s just as bad, if not worse. Your SMS.db is stored in an iCloud backup, but copies of it also exist on your iPad, your desktop, and anywhere else you receive iMessages. Deleted content also suffers the same fate.

The way to measure “better” in this case is by the level of forensic trace an application leaves. Signal leaves virtually nothing, so there’s nothing to worry about – no messy cleanup. Wickr takes advantage of Apple’s CoreData and encrypts its database using keys stored in the keychain (much more secure). Other apps would do well to mind the size of the forensic footprint they’re leaving.

Continue reading

Moving Semi-Auto Rifles Into the National Firearms Act

I’m a longtime responsible gun owner and enthusiast who, like many, would like to see more controls on semi-automatic weapons; particularly, on what many refer to as “assault rifles”. Indeed, I’m well aware of all the Kool-Aid on both sides surrounding assault weapons, and I think both sides have some ridiculous notions about them. The extreme left seems to have developed an irrational fear and hatred of all guns, and the right believes the only solution to guns is more guns. I used to drink the gun rights Kool-Aid. I’ve shot and smithed guns for over 15 years now, earned the NRA certifications to supervise ranges and to carry concealed weapons, and have been an avid long-range shooter. Up until a few years ago, when I sold the rights to it, I produced the #1 ballistics computer in the App Store, and made a very comfortable income off of the gun community with just a few hours a month of coding.

I’ve spent over 15 years in the gun community, listened to all of the political arguments surrounding “assault weapons”, and have met some intelligent people. I’ve met far more wildly ignorant people, who don’t even know how their AR-15s work, let alone have any sense of the world. The political dogma surrounding assault weapons is downright embarrassing on the gun rights side. If we really believe that guns are keeping the government in check, then they’ve quite failed at that task. If guns supposedly make for a more peaceful society, then why is our country’s gun violence worse than that of most third-world countries, and why are virtually all other first-world countries statistically safer by an order of magnitude? Perhaps if more gun owners traveled and saw how other societies live – many without the distrust and predisposition to violence that we hold even toward our own neighbors – they might see how sorely mistaken their mindset has been all this time. Instead, many choose to live on ignorantly believing that the rest of the world is as vicious and violent as many are in the US. It’s simply not true.

Continue reading

General Motors 2015-2016 Safety Issue w/Cruise Control [Ignored by Chevrolet]

I’ve filed the following safety issue with the NHTSA, after spending considerable time attempting to explain it to Chevrolet, only to get incoherent answers from people who don’t appear competent enough to understand the problem. If you’ve been in an accident caused by GM’s speed control, it’s possible this glitch came into play. I’ve been able to reproduce it in 2015-2016 Silverado models; however, it’s likely to affect any vehicle with the same speed control. It most likely affects the GMC Sierra, as well as other trucks and vehicles using the same speed control system (possibly the Yukon, Suburban, Escalade, and Tahoe).

In the case below, the speed control acts directly contrary to the behavior stated in the owner’s manual, and to how the driver expects it to behave. Chevrolet appears either not to understand the safety implications below or to have dismissed them. If you’ve been affected by this, I recommend you contact your attorney.

The final response I received from Chevrolet is to hold the “set” button in rather than press it multiple times – in spite of the fact that their own owner’s manual specifically states that pressing it briefly multiple times will lower the speed:

“To slow down in small increments, briefly press the SET– button. For each press, the vehicle goes about 1.6 km/h (1 mph) slower”

So Chevrolet’s “solution”, rather than fixing cruise control so that it behaves the way it’s documented in the manual, is to have me change my driving habits and use cruise control in a way that is counterintuitive and nonstandard compared to other vehicles, including other Chevrolet models. It is sad that software bugs like this are among the easiest to fix and issue a recall for, yet they also appear to be among the types of problems most likely to be dismissed or rationalized by Chevrolet. In the event this costs someone their life, I wanted this documented publicly, since Chevrolet has expressed no interest in correcting the problem or issuing a recall.

A CONDITION EXISTS WHERE, AFTER THE DRIVER HAS USED THE GAS PEDAL TO ACCELERATE, THEN HAS REMOVED THEIR FOOT FROM THE PEDAL, THEN PRESSES THE CRUISE “SET” BUTTON IMMEDIATELY OR A BRIEF MOMENT LATER, AND THEN IMMEDIATELY ATTEMPTS TO DECELERATE BY REPEATEDLY PRESSING MINUS “-” ON THE CRUISE CONTROL, THAT THE SPEED CONTROL BECOMES CONFUSED AND DISPLAYS MULTIPLE DIFFERENT SPEEDS, WHILE MAINTAINING THE ORIGINAL SPEED, EVEN THOUGH THE DRIVER BELIEVES THEY ARE DECELERATING. THIS CAN BE REPRODUCED ON ANY 2015-2016 SILVERADO MODEL BY FOLLOWING THESE STEPS: THROTTLE UP AND ACCELERATE (TO PASS, FOR EXAMPLE), REMOVE FOOT FROM ACCELERATOR, THEN IMMEDIATELY PRESS THE “SET” BUTTON, FOLLOWED BY 5-10 PRESSES ON THE DECELERATE “-” BUTTON; THE SPEED WILL SET AT 65, FOR EXAMPLE, THEN FLIP BETWEEN 64, 65, 63, 65, 62, 65, 61, 65, 60, 65, AND SO ON, MAINTAINING SPEED AT 65 EVEN THOUGH THE DRIVER IS INSTRUCTING THE VEHICLE TO DECELERATE AND THE REDUCED SPEED IS TEMPORARILY DISPLAYED. IT MAY TAKE 5-10 SECONDS FOR THE SPEED CONTROL TO CLEAR ALLOWING THE DRIVER TO MAKE CHANGES, HOWEVER THEY WILL STILL BE CRUISING AT 65. DURING THIS PERIOD, THE DRIVER DOES NOT REALIZE THAT THEY WERE NOT DECELERATING AT WHICH POINT THEY MAY TAP THE BRAKES TO DISENGAGE CRUISE, BUT HAVE LOST 5-10 SECONDS OF REFLEX TIME. THIS HAS PRESENTED A DANGEROUS CONDITION WHERE THE DRIVER BELIEVES THEY’RE DECELERATING WHEN TOO QUICKLY APPROACHING ANOTHER VEHICLE, RISKING COLLISION.
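
To make the reported failure mode concrete, here is a toy model in Python of the difference between what the owner’s manual documents and what the complaint describes. All names, timings, and the “settling window” below are my guesses for illustration only; this is not GM’s actual control logic.

    # What the manual documents: every brief SET- press lowers the set
    # speed by about 1 mph.
    def expected_speed(set_speed_mph, presses):
        return set_speed_mph - 1 * presses

    # What the complaint describes: presses made while the controller is
    # still settling after SET are briefly displayed, then discarded.
    def observed_speed(set_speed_mph, press_times_s, settle_window_s=7):
        kept = [t for t in press_times_s if t > settle_window_s]
        return set_speed_mph - 1 * len(kept)

    presses = [0.5, 1.0, 1.5, 2.0, 2.5]      # five quick presses after SET
    print(expected_speed(65, len(presses)))  # 60 mph: what the driver expects
    print(observed_speed(65, presses))       # 65 mph: speed actually held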

Backdoors: A Technical Definition

A clear technical definition of the term backdoor has never reached wide consensus in the computing community. In this paper, I present a three-prong test to determine whether a mechanism is a backdoor: “intent”, “consent”, and “access”; all three tests must be satisfied for a mechanism to meet the definition of a backdoor. This three-prong test may be applied to software, firmware, and even hardware mechanisms that establish a security boundary, either explicitly or implicitly, in any computing environment. These tests, as I will explain, take more complex issues such as disclosure and authorization into account.

The technical definition I present is rigid enough to identify the taxonomy that backdoors share, but flexible enough to allow for valid arguments and discussion.
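
As a rough sketch of how the three prongs compose (this is my paraphrase of the prongs for illustration, not the paper’s formal criteria):

    from dataclasses import dataclass

    @dataclass
    class Mechanism:
        intent: bool    # designed to bypass a security boundary
        consent: bool   # disclosed to and authorized by the system's owner
        access: bool    # actually yields data or control when exercised

    def is_backdoor(m: Mechanism) -> bool:
        # All three prongs must be satisfied: intentional, unconsented access.
        return m.intent and not m.consent and m.access

    print(is_backdoor(Mechanism(intent=True, consent=False, access=True)))  # True
    print(is_backdoor(Mechanism(intent=True, consent=True, access=True)))   # False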

Backdoors: A Technical Definition

WSJ Describes Reckless Behavior by FBI in Terrorism Case

The Wall Street Journal published an article today citing a source saying that the FBI is planning to tell the White House it “knows so little about the hacking tool that was used to open terrorist’s iPhone that it doesn’t make sense to launch an internal government review”. If true, this should be taken as an act of recklessness by the FBI with regard to the Syed Farook case: the FBI apparently allowed an undocumented tool to run on a piece of high-profile, terrorism-related evidence without adequate knowledge of the tool’s specific function or its forensic soundness.

Best practices in forensic science dictate that any forensics instrument be tested and validated, and accepted as forensically sound, before it is used on live evidence. Such a tool must yield predictable, repeatable results, and an examiner must be able to explain its process in a court of law. Our court system expects this, and allows tools (and examiners) to face numerous challenges based on the credibility of the tool, which can only be determined by rigorous analysis. The FBI’s admission that it has so little knowledge about how the tool works is an admission of failure to evaluate the science behind the tool, and of failure to evaluate its core functionality in any meaningful way. Knowing how the tool managed to get into the device is the bare minimum I would expect anyone to know before shelling out over a million dollars for a solution, especially one that was going to be used on high-profile evidence.

A tool should not make changes to a device, and any changes should be documented and repeatable. There are several other variables to consider in such an effort, especially when imaging an iOS device. Apart from changes made directly by the tool (such as overwriting unallocated space, or portions of the file system journal), simply unlocking the device can cause the operating system to make a number of changes, start background tasks which could lead to destruction of data, or cause other changes unintentionally. Without knowing how the tool works, or what portions of the operating system it affects, what vulnerabilities are exploited, what the payload looks like, where the payload is written, what parts of the operating system are disabled by the tool, or a host of other important things – there is no way to effectively measure whether or not the tool is forensically sound. Simply running it against a dozen other devices to “see if it works” is not sufficient to evaluate a forensics tool – especially one that originated from a grey hat hacking group, potentially with very little actual in-house forensics expertise.
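
One minimal example of what “documented and repeatable” looks like in practice is hashing an evidence image before and after a tool runs. The file path below is a placeholder, and identical digests are a necessary but not sufficient check of soundness:

    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        # Stream the (potentially large) image file through SHA-256.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            while block := f.read(chunk_size):
                digest.update(block)
        return digest.hexdigest()

    before = sha256_of("evidence.img")   # hypothetical image file
    # ... run the tool against a working copy, never the original ...
    after = sha256_of("evidence.img")
    assert before == after, "tool modified the evidence image"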

Continue reading

Hardware-Entangled APIs and Sessions in iOS

Apple has long enjoyed a security architecture that rests, in part, on the entanglement of its encryption with a device’s physical hardware. This pairing has proven highly effective at thwarting a number of different types of attacks, allowing for mobile payments processing, secure encryption, and a host of other secure services running on an iPhone. One security feature that iOS lacks for third party developers is the ability to validate the hardware a user is on, preventing third party applications from taking advantage of such a great mechanism. As a result, APIs can be easily spoofed, and sessions and services are often susceptible to a number of different forms of abuse. Hardware validation can be particularly important when dealing with crowd-sourced data and APIs, as was the case a couple of years ago when a group of students hacked Waze’s traffic intelligence. These types of Sybil attacks allow thousands of phantom users to be created off of one single instance of an application, or even spoof an API altogether without any connection to the hardware. Other types of MiTM attacks are also a threat to applications running under iOS, for example by stealing session keys or OAuth tokens to access a user’s account from a different device or API. What can Apple do to thwart these types of attacks? Hardware entanglement through the Secure Enclave.
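
As a conceptual sketch of hardware-entangled sessions (runnable Python, with a plain HMAC key standing in for a secret that, in the scheme above, would be generated and held inside the Secure Enclave and never leave the device):

    import hashlib, hmac, os, secrets

    device_key = secrets.token_bytes(32)   # stand-in for a hardware-bound key

    def sign_request(key, payload, nonce):
        # Device side: entangle each API call with the hardware-held key.
        return hmac.new(key, nonce + payload, hashlib.sha256).digest()

    def verify(key, payload, nonce, tag):
        # Server side: a phantom "Sybil" user without the hardware key
        # cannot forge a tag, and a stolen session token alone is useless.
        return hmac.compare_digest(tag, sign_request(key, payload, nonce))

    nonce = os.urandom(16)   # fresh per request, to defeat replay
    tag = sign_request(device_key, b"POST /report_traffic", nonce)
    print(verify(device_key, b"POST /report_traffic", nonce, tag))  # True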

Continue reading

Open Letter to Congress on Encryption Backdoors

To the Honorable Congress of the United States of America,

I am a proud American who has had the pleasure of working with the law enforcement community for the past eight years. As an independent researcher, I have assisted on numerous local, state, and federal cases and trained many of our federal and military agencies in digital forensics (including breaking numerous encryption implementations). Early on, there was a time when my skill set was unique, and I provided assistance at no charge to many agencies, which flew agents out to my small town for help or had detectives meet with me while I was on vacation. I have developed an enormous respect for the people keeping our country safe, and continue to help anyone who asks in any way that I can.

With that said, I have seen a dramatic shift in the core competency of law enforcement over the past several years. While there are many incredibly bright detectives and agents working to protect us, I have also seen an uncomfortable number who have regressed to a state of “push button forensics”, often referred to in law enforcement circles as “push and drool forensics”; that is, rather than using the skills they were trained with to investigate and solve cases, many have developed an unhealthy dependence on forensics tools that can produce the “smoking gun” for them, literally at the touch of a button. As a result, I have seen many open-and-shut cases that received only the most abbreviated of investigations, where much of the evidence was largely ignored for the sake of these “smoking guns” – including much of the evidence on the mobile device, which oftentimes conflicted with the core evidence used.

Continue reading

The Dangers of the Burr Encryption Bill

The Burr Encryption Bill – Discussion Draft dropped last night, and proposes legislation to weaken encryption standards for all United States citizens and corporations. The bill itself is a hodgepodge of technical ineptitude combined with pockets of contradiction. I would cite the most dangerous parts of the bill, but the bill in its entirety is dangerous, not just for its intended uses but also for all of the uses that aren’t immediately apparent to the public.

The bill, in short, requires that anyone who develops features or methods to encrypt data must also decrypt the data under a court order. This applies not only to large companies like Apple, but could also be used to punish developers of open source encryption tools, or even encryption experts who invent new methods of encryption. Its broad wording allows the government to hold virtually anyone responsible for what a user might do with encryption. A good parallel would be holding a vehicle manufacturer responsible for a customer who drives into a crowd. Only it’s much worse: the proposed legislation would allow the tire manufacturer, as well as the scientists who invented the tires, to be held liable as well.

Continue reading

An Open Letter to FBI Director James Comey

Mr. Comey,

Sir, you may not know me, but I’ve impacted your agency for the better. For several years, since the advent of the iPhone, I have been assisting law enforcement as a private citizen, including the Federal Bureau of Investigation. I designed the original forensics tools and methods that were used to access content on iPhones, which were eventually validated by NIST/NIJ and ingested by FBI into your own internal version of my tools. Prior to that, FBI issued a major deviation allowing my tools to be used without validation, due to the critical need to collect evidence on iPhones. They were later the foundation for virtually every commercial forensics tool to make it to market at the time. I’ve supported thousands of agencies worldwide for several years, trained our state, federal, and military personnel in iOS forensics, assisted hands-on in numerous high-profile cases, and invested thousands of hours of continued research and development for a suite of tools I provided at no cost – for the purpose of helping to solve crimes. I’ve received letters from a former lab director at FBI’s RCFL, DOJ, NASA OIG, and other agencies citing numerous cases that my tools have helped solve. I’ve done what I can to serve my country, and asked for little in return.

First, let me say that I am glad FBI has found a way to get into Syed Farook’s iPhone 5c. Having assisted with many cases, I understand from firsthand experience what you are up against, and have enormous respect for what your agency does, as well as others like it. Oftentimes it is that one finger that stops the entire dam from breaking. I would have been glad to assist your agency with this device, and even reached out to my contacts at FBI with a method I’ve since demonstrated in a proof-of-concept. Unfortunately, in spite of my past assistance, FBI lawyers prevented any meetings from occurring. Nonetheless, I am glad someone has been able to reach you with a viable solution.

Continue reading

How Apple Can Make Their FBI Problems Go Away

An adversary has an unknown exploit, and it could be used on a large scale to attack your platform. Your threat isn’t just your adversary, but also anyone who developed or sold the exploit to the adversary. The entire chain of information from conception to final product could be compromised anywhere along the way, and sold to a nation state on the side, blackmailed or bribed out of someone, or just used maliciously by anyone with knowledge or access. How can Apple make this problem go away?

The easiest technical solution is a boot password. The trusted boot chain has been impressively solid for the past several years, since Apple minimized its attack surface after the 24kpwn exploit in the early days. Apple’s trusted boot chain consists of a multi-stage boot loader, with each phase of boot checking the integrity of the next. Having been stripped down, it is now a shell of the hacker’s smorgasbord it used to be. It’s also a very direct and visible execution path for Apple, and so if there ever is an alleged exploit of it, it will be much easier to audit the code and pin down points of vulnerability.

Continue reading

Why a Software Exploit Would be a Threat to Secure Enclave Devices

As speculation continues about the FBI’s new toy for hacking iPhones, the possibility of a software exploit continues to be a point of discussion. In my last post, I answered the question of whether such an exploit would work on Secure Enclave devices, but I didn’t fully explain the threat that persists regardless.

For the sake of argument, let’s go with the theory that FBI’s tool uses a software exploit. The exploit probably doesn’t (yet) attack the Secure Enclave, as Farook’s 5c didn’t have one. But this probably doesn’t matter. Let’s assume for a moment that the exploit being used could be ported to work on a 64-bit processor. The 5c is 32-bit, so this assumes a lot: some exploits can be ported, while others just won’t work on the 64-bit architecture. But let’s assume that the work has either already been done or will be done shortly; a very plausible scenario.

Continue reading

Viability of Software Exploit and SEP Devices

A number of people have been asking me my thoughts on the viability of a software exploit against Secure Enclave enabled devices, so here’s my opinion:

First, it is interesting to note that the FBI categorizes this tool’s capabilities by “5c” and “9.0”; namely, hardware model and firmware version. They won’t confirm that this is the only combination the tool runs on, but have noted that these are the two factors they’re categorizing it by. This is consistent with how exploit-based forensics tools have functioned in the past. My own forensics tools (for older iPhones) came in different modules tailored to a specific hardware platform and firmware version. This is because most exploits require taking Apple’s own firmware and patching it; those patches require slightly different offsets in the kernel (and possibly the boot loader). The software to patch is also going to be slightly different for each hardware and firmware combination. So without really saying anything, FBI has hinted that this might be a software exploit. Had this been a hardware attack, such as a NAND mirroring technique, firmware version likely wouldn’t be a point of discussion, as that technique’s feasibility depends on the hardware revision. This is all conjecture, of course, but it is worth noting that the hints are already there.

If the FBI did in fact use a software exploit, the question then becomes one of how viable it is on other platforms. Typically, a software exploit of this magnitude could very well take advantage of vulnerabilities that have long existed in the firmware, making it more than likely that the exploit is also effective (possibly with a little tailoring) against older versions of iOS. Even if the exploit today was tailored specifically for this device, adjusting offsets and patching slightly different copies of Apple’s firmware is a relatively painless process. A number of open source tools even exist to find and patch the correct bytes in decrypted Apple kernels.
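
For illustration only, here is a generic find-and-patch helper of the sort those open source tools implement. The byte patterns below are invented placeholders, not real kernel offsets or patches:

    # Hypothetical example: patch bytes in a decrypted kernelcache image.
    def patch(image: bytes, find: bytes, replace: bytes) -> bytes:
        assert len(find) == len(replace), "patch must not change file size"
        offset = image.find(find)
        if offset < 0:
            raise ValueError("pattern not found; wrong firmware version?")
        print(f"patching {len(find)} bytes at offset {offset:#x}")
        return image[:offset] + replace + image[offset + len(find):]

    kernel = open("kernelcache.decrypted", "rb").read()  # placeholder input
    kernel = patch(kernel, b"\x01\x20\x70\x47", b"\x00\x20\x70\x47")  # fake bytes
    open("kernelcache.patched", "wb").write(kernel)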

Continue reading

FBI Breaks Into San Bernardino iPhone

As expected, the FBI has succeeded in finding a method to recover the data on the San Bernardino iPhone, and now the government can see all of the cat pictures Farook was keeping on it. We don’t know what method was used, as it’s been classified. Given the time frame and the details of the case, it’s possible it could have been the hardware method (NAND mirroring) or a software method (exploitation). Many have speculated on both sides, but your guess is as good as mine. What I can tell you are the implications.

Continue reading

NAND “Mirroring” Concept Demonstration

This is a simple “concept” demonstration / simulation of a NAND mirroring attack on an iOS 9.0 device. I wanted to demonstrate how copying back disk content could allow for unlimited passcode attempts. Here, instead of using a chip programmer to copy certain contents of the NAND, I demonstrate it by copying the data using a jailbreak. For Farook’s phone, the FBI would remove the NAND chip, copy the contents into an image file, try passcodes, and then copy the original content back over onto the chip.

I did this here, only with a jailbreak: I made a copy of two property lists stored on the device, then copied them back and rebooted after five attempts. When doing this on a NAND level, actual blocks of encrypted disk content would be copied back and forth, whereas I’m working with files here. The concept is the same, and serves only to demonstrate that unlimited passcode attempts can be achieved by back-copying disk content. Again, NO JAILBREAK IS NEEDED to do this to Farook’s device, as the FBI would be physically removing the NAND to copy this data.
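
A rough, runnable simulation of that copy-back loop in Python follows. Everything here is a stand-in: real NAND mirroring images raw flash rather than files, and the state file name and counter format are invented for the demonstration.

    import shutil

    STATE = "passcode_state.plist"             # stand-in for the file(s)
    open(STATE, "w").write("attempts=0")       # holding the attempt counter

    shutil.copy2(STATE, STATE + ".orig")       # snapshot pristine state once

    def try_passcode(pin):
        # Stub: each failed attempt bumps the counter, as the device would.
        n = int(open(STATE).read().split("=")[1]) + 1
        open(STATE, "w").write(f"attempts={n}")
        print(f"tried {pin:04d}, counter now {n}")

    for i, pin in enumerate(range(10000)):
        try_passcode(pin)
        if i % 5 == 4:                         # before delays/wipe escalate
            shutil.copy2(STATE + ".orig", STATE)   # copy-back: counter reset
            print("snapshot restored; reboot (simulated)")
        if i == 11:                            # stop the demo early
            break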

Other techniques can be used to speed this up. For example, the clock could possibly be fudged by giving the device a data connection and rerouting time requests to a local server (think IMSI catcher). This could be used to continuously bump the time five or ten minutes ahead so that more passcode attempts could be tried per reboot without as many delays. The NAND chip could also be socketed or reworked in other ways to make switches seamless. Lastly, the same techniques used in the IP Box, such as entering PINs over USB and using a light sensor to detect an unlock, could help automate this and make it more efficient. Overall, I think this puts to bed any notion that the technique “doesn’t work”.


My Take on FBI’s “Alternative” Method

FBI acknowledged today that there “appears” to be an alternative way into Farook’s iPhone 5c – something that experts have been shouting for weeks now; in fact, we’ve been saying there are several viable methods. Before I get into which method I think is being used here, here are some other viable methods and why I don’t think they’re part of the solution being utilized:

  1. A destructive method, such as de-capping or deconstruction of the microprocessor would preclude FBI from being able to come back in two weeks to continue proceedings against Apple. Once the phone is destroyed, there’s very little Apple can do with it. Apple cannot repair a destroyed processor without losing the UID key in the process. De-capping, acid and lasers, and other similar techniques are likely out.
  2. We know the FBI hasn’t been reaching out to independent researchers, and so this likely isn’t some fly-by-night jailbreak exploit out of left field. If respected security researchers can’t talk to FBI, there’s no way a jailbreak crew is going to be allowed to either.
  3. An NSA 0-day is likely also out, as the court briefs suggested the technique came from outside USG.
  4. While it is possible that an outside firm has developed an exploit payload using a zero-day, or one of the dozens of code execution vulnerabilities published by Apple in patch releases, this likely wouldn’t take two weeks to verify, and the FBI wouldn’t stop a full court press (literally) against Apple unless the technique had been reported to have worked. A few test devices running the same firmware could easily determine such an attack would work, within perhaps hours. A software exploit would also be electronically transmittable, something that an outside firm could literally email to the FBI. Even if that two weeks accounted for travel, you still don’t need anywhere near this amount of time to demonstrate an exploit. It’s possible the two weeks could be for meetings, red tape, negotiating price, and so on, but the brief suggested that the two weeks was for verification, and not all of the other bureaucracy that comes after.
  5. This likely has nothing to do with getting intel about the passcode or reviewing security camera footage to find Farook typing it in at a cafe; the FBI is uncertain about the method being used and needs to verify it. They wouldn’t go through this process if they believed they already had the passcode in their possession, unless it was for fasting and prayer to hope it worked.
  6. Breaking the file system encryption on one of NSA/CIA’s computing clusters is unlikely; that kind of brute forcing doesn’t give you a two-week heads-up that it’s “almost there”. It can also take significantly longer – possibly years – to crack.
  7. Experimental techniques such as frankensteining the crypto engine or other potentially niche edge techniques would take much longer than two weeks (or even two months) to develop and test, and would likely also be destructive.

Continue reading

iOS / OS X and Forensics Trace Leakage

As I sit here, trying to be the good parent in reviewing my daughter’s text messages, I have to assume that I’ve made my kids miserable by having a father with DFIR skills. I’m able to dump her device wirelessly using a pairing record and a tool I designed to talk to the phone. I generate a wireless desktop backup, decrypt it, and this somehow makes my kids safer. What bothers me, as a parent, is the incredible trove of forensic artifacts that I find in my children’s data every month. Deleted messages, geolocation information, even drafts and thumbnails that had all been deleted months ago. Thousands of messages sometimes. In 2008, I wrote a small book with O’Reilly named iPhone Forensics that detailed this forensic mess. Apple has made some improvements, but the overall situation has gotten worse since then. The biggest reason the iPhone is so heavily targeted for forensics and intelligence is because of the very wide data recovery surface it provides: it’s a forensics gold mine.

To Apple’s credit, they’ve done a stellar job of improving the security of iOS devices in general (e.g. the “front door”), but we know that they, just like every other manufacturer, are still dancing on the lip of the volcano. Aside from the risk of my device being hacked, there’s a much greater issue at work here. Whether it’s a search warrant, an unlawful traffic stop in Michigan, someone stealing my backups, an ex-lover, or just leaving my phone unlocked at my desk by accident, it only takes a single violation of the user’s privacy to obtain an entire lengthy history of private information that was thought deleted. This problem is also prevalent on desktop OS X.

Continue reading

Free Software Always Costs Something

Back in the late 1970s, the University of California, Berkeley began releasing BSD Unix through CSRG, a research group inside Berkeley, under what became the permissive BSD licenses promoting free software that could be reused by anyone. BSD laid the foundation for many operating systems (including Mac OS X) as we know them today. It gradually evolved over time to support the socket model, TCP/IP, Unix’s file model, and a lot more. You’ll find traces of all of these principles – and very often, core code itself – still used decades later in cutting-edge operating systems. The idea of “free software” (whether “free as in beer” or “free as in freedom”) is credited as a driving force behind today’s technology, multi-billion-dollar companies, and even the iPhone or Android device sitting in your pocket. Here’s the rub: none of it was ever really free.

Continue reading

AceDeceiver: Breaking Apple’s Cryptographic Leash

For the past week, I’ve been writing all about cryptographic leashes and how easily they could be broken in the case of controlling FBI’s iOS backdoor. Surprisingly, the first serious example of this surfaced this week. The researchers at Palo Alto Networks, who have been killing it lately with great iOS research, did a breakdown of a piece of Chinese malware known as AceDeceiver. AceDeceiver breaks the cryptographic leash baked into the iPhone’s App Store system, allowing an attacker to install applications on the host iPhone even after they’ve been revoked by Apple. While the malware, in its present form, isn’t likely to cause widespread damage, the vulnerabilities in Apple’s DRM that it exposes could be used for far more malicious purposes.

AceDeceiver starts its life as malware on your desktop. In its present form, you’d have to be dumb enough to install a Chinese pirate app store in order to have to worry about this, but in a more malicious form, something like it could potentially be embedded as a trojan in legitimate software. The malware performs a man-in-the-middle attack between your computer and the App Store, and fudges the authorizations used to let your iPhone run purchased software. Think of the attack as forging a receipt: like paying for a set of towels at Target, then returning a different set. Apple has no way to check the towels (your apps) to make sure they’re the same ones, so the iPhone lets the app run since you have a valid receipt. It’s even worse than that, because the receipts aren’t tied to your iTunes account – you can pull someone else’s receipt out of the trash and return towels you never purchased. It’s this receipt that is reused, by impersonating iTunes, to install the malware’s own software on your iPhone. The malware author can use his or her own receipts to load previously approved App Store software onto your phone.

Continue reading

Apple vs. FBI: Where We Are Now, and Where We’re Going

Much has happened since a California magistrate court originally granted an order for Apple to assist the FBI under the All Writs Act. For one, most of us now know what the All Writs Act is: an ancient law that was passed before the Fourth Amendment even existed, now somehow relevant to modern technology a few hundred years later. Use of this act has exploded into a legal argument about whether or not it grants the government carte blanche to demand anything and everything from private companies (and, incidentally, individuals) if it helps them prosecute crimes. Of course, that’s just the tip of the iceberg. We’ve seen strong debates about whether any person should be allowed to have private conversations, thoughts, or ideas that can’t later be searched, whether forcing others to work for the government violates the constitution, whether other countries will line up to exploit technology if America does, and ultimately – at the heart of all of these – whether fear of the word “terrorism” is enough to cause us all to burn our constitution.

Over the past few weeks, the entire tech community has gotten behind Apple, filing a barrage of friend-of-the-court briefs on Apple’s behalf. Security experts such as myself, Crypto “Rock Stars”, constitutionalists, technologists, lawyers, and 30 Helens all agree that Apple is in the right, and that backdooring iOS would cause irreparable damage to the security, privacy, and safety of hundreds of millions of diplomats, judges, doctors, CEOs, teenage girls, victims of crimes, parents, celebrities, politicians, and all men and women around the world. Throughout the past month, legal exchanges have escalated from ice cold to white hot, and from professional to a traveling flea circus as ridiculous terms such as “lying dormant cyber pathogen” have been introduced. Congress, the courts, and the public have seen strong technical and legal arguments, impassioned pleas from victims, attempts at reason by the law enforcement community, name calling, proverbial mugs-thrown-across-the-room, uncontrollable profanity on media briefings, and just about any other form of pressure manifesting itself that one can imagine.

Continue reading