I originally published this in 2012, after the Sandy Hook shooting, and I dust it off every time there’s another mass shooting in the news. This post has risen to the top of my feed year after year, as politicians continue to offer nothing – nothing but thoughts and prayers.
I’ve been a long-time responsible gun owner, by the old definition of what that used to mean. Like a majority of gun owners, I’ve long wanted more controls on semi-automatic rifles – particularly assault rifles. There’s idiocy on both sides of this debate, and both hold some questionable notions. The extreme left seems to have developed an irrational fear and hatred of all guns, while the extreme right believes the only solution to guns is more guns. Consider this more realistic perspective from someone who spent over a decade shooting and working on guns, held NRA certifications to supervise ranges and carry concealed weapons, and up until some years ago – when I sold the rights to it – produced the #1 ballistics computer in the App Store.
While often obscure to most, there is – today – a system in place to perform intensive checks on individuals looking to own firearms categorized as highly lethal; the problem is that it isn’t being used to control most assault rifles. Introduced in the National Firearms Act, this system was applied to machine guns, short-barrel rifles, silencers, sawed-off shotguns, and other types of firearms that individuals can still legally own today, but with more than the casual regulation applied to AR-15s and other firearms. It could be extended to include semi-automatic rifles with the stroke of a pen. In my opinion, it should be, and in this post I’ll argue why I’d like the President and legislators to push for this.
In the book of Chronicles, King Josiah breaks down the altars of false gods, tears down carved images, and rids Judah and Jerusalem of the ungodliness of the time. When his priest finds the Book of the Law, Josiah tears his robe and institutes moral rule according to the laws of the book. The chronicler Ezra writes, “Josiah removed all the detestable idols from all the territory belonging to the Israelites, and he had all who were present in Israel serve the Lord their God. As long as he lived, they did not fail to follow the Lord, the God of their ancestors.” An often-overlooked detail in this story is that in spite of a society living under (and clearly practicing!) moral law, God tells Josiah that he will take his life early so that he will not see the disaster God still plans to bring about. A useful object lesson can be found here: perceived morality counts for little when it is compelled.
White evangelical Christians woke up to some rather unexpected news. A draft opinion, somehow leaked out of the Supreme Court, suggests that Roe v. Wade is soon to be overturned. I single out white evangelicals here because, according to a recent Pew Research study, they are twice as likely as other Americans to want to see abortion outlawed. It would be an error, though, to conclude that this means white evangelicals are the most pro-life; this is not the case at all. White evangelicals are likely no more pro-life than other religious groups, Christian or otherwise – they are, however, the most autocratic. While many other Christians value life just as much, where we differ from evangelicals is on a solution to the number of unwanted pregnancies in the country. Evangelicals largely believe outlawing abortion is the only solution, while most others believe it is an ineffective and dangerous one. At the center of the controversy is not Christian doctrine at all, or even morality, but rather love of money. I’ll explain.
I only regret that I have but one life to lose for my country.
On the day of Nathan Hale’s execution, a British officer wrote of Hale, “he behaved with great composure and resolution, saying he thought it the duty of every good Officer, to obey any orders given him by his Commander-in-Chief; and desired the Spectators to be at all times prepared to meet death in whatever shape it might appear.” Nearly ten years ago, I viewed Edward Snowden as a slightly nerdier patriot in the mold of such greats. I wanted to believe he was serving his country, and was unfairly targeted by the state for standing up for those beliefs. Much of tech did too, which is why this is an important discussion to have. It’s affected how the tech community views and interacts with government in many ways, with all of the prejudices it brought into play. For all the pontificating about freedom that Snowden has done since then, his taking up permanent citizenship in Russia, and his silence since the beginning of the war with Ukraine (except, more recently, to criticize the US once more), today I see in Snowden the pattern of a common deserter rather than the champion of free speech that some position him as. If Snowden is to set the narrative for how tech views and responds to government, then our occasional criticism of his own behavior should be fair game.
During his time in Russia, we have seen the whistleblower system work effectively here at home. The details of Trump’s Ukraine call, and the subsequent freezing of security aid, seem rather relevant today. More impressively, this same whistleblower system Snowden criticized worked against a sitting president with no capacity for restraint. The fruits of it were significant, and the process brought both public dissemination and a full press by Congress to protect the whistleblower. Mr. X, whose identity is still somewhat contested, was a hero. He stood up to the bully, knowing better than most how lawless the tyrant was, and of the angry mob he commanded. What happened to X? Very little – certainly far less than the charges Snowden brought on himself or the freedoms he gave up by not using the right channels. Instead of following process, Snowden fled the country under the Obama administration, which was a teddy bear compared to Trump’s. Snowden rejected this government process, insisting the whistleblower system was corrupt, and used that as justification to leak classified documents shortly before departing the country. In 2020, he asked us to excuse him again while he applied for Russian citizenship “for the sake of his kids”. Yet even after being proved wrong by a true hero like X while the country lived under a tyrant, Snowden continues to hide from the consequences of this terrible miscalculation.
First they came for the socialists, and I did not speak out— Because I was not a socialist.
Then they came for the trade unionists, and I did not speak out— Because I was not a trade unionist.
Then they came for the Jews, and I did not speak out— Because I was not a Jew.
Then they came for me—and there was no one left to speak for me.
Post-war Confessional, Rev. Martin Niemöller
Watching the world respond to Russia with quiet acquiescence has been nearly as horrifying as watching the events themselves unfold. Our lines in the sand have always been drawn exclusively around our country club. Instead of action, we offer compliance with a deranged man’s orders to stay, like his obedient dog. Through our inaction, the world has professed that human life in Ukraine is not equal to human life elsewhere. The words “never again” were, in an instant, exchanged for a simple “meh”.
I am in disbelief. It has been sobering to watch the world refuse to stand up to evil, to instead allow the innocent to continue to be murdered, solely because they don’t serve American or European interests. Even American businesses leave one by one, yet are too scared to use their remaining influence or capabilities to become a much-needed megaphone inside Russia, still afraid to defy Russian law even though Russia has become lawless herself. The time for compliance is quite over. Bridges must be burned, and the leaders on them.
Will we really stand idly by while this country’s fresh embrace with democracy is snuffed out by a warmonger? How humiliated we should be to stand by and watch the innocent be helplessly murdered at the pleasure of such a tiny little man. America, the home of the brave, unwilling to hear the screams for help of those who don’t serve our interests. Where is this bravery we speak of? Has our own freedom been of any significance if we won’t rescue the oppressed? Instead, we choose to sacrifice their lives to preserve our own. Such bravery.
If we are not willing to take up the cause of the innocent, we’ve lost far more than we stand to risk by answering the call. Our identity as a people hinges on our willingness to sacrifice for those in need. God and history will judge us one day for these sins.
Our ancestors swore never again. We’ve let them down immeasurably. The courage of former generations puts us to shame. Freedom is no longer worth fighting for, unless it interferes with our own Twitter posts. We’ve become desensitized to oppression, and in the process become prisoners of a different kind.
Wash and make yourselves clean. Take your evil deeds out of my sight; stop doing wrong. Learn to do right; seek justice. Defend the oppressed. Take up the cause of the fatherless; plead the case of the widow.
In the beginning wickedness did not exist. Nor indeed does it exist even now in those who are holy, nor does it in any way belong to their nature. Athanasius, Against the Heathen

I’ve devoted much of the past 30 years as an evangelical Christian “layperson” to Christian studies to try and become an…
The Biden administration is having a little Twitter fight about whether or not to reset the followers of the @potus account. While followers were rolled over from the Obama administration to Trump’s, the Trump administration, which views Twitter followers as if they represented actual voters-who-love-Donald, doesn’t think the incoming president should get to inherit all of those bots and disenfranchised twelve-year-olds. Let us stop and reflect on the stupidity and pettiness of this argument. What the Biden administration really should be thinking about is whether to close @potus and get the White House off of Twitter completely.
Social media, especially Twitter, has year after year devolved further into one of the most toxic and unpleasant public gatherings on the Internet. Long before Trump took office, social media was the leading source of disinformation, threats, harassment, toxicity, and division. Combined with a platform that adopts thought-terminating, loaded-language hashtags (e.g. #StopTheSteal) and abbreviated messaging that lacks critical thought, Twitter has long been designed to capitalize on the cult phenomenon. Twitter has been not only markedly complicit, but positioned to profit off of the toxicity, disinformation, and abuse it has allowed from the Trump administration and other public officials who’ve started emulating the behavior.
If you watched yesterday’s Senate Judiciary hearings with the CEOs of Twitter and Facebook, two things would have stuck out to you. First, why is Jack Dorsey addressing the Senate from the kitchen department at an IKEA? Second, how did a judiciary hearing about misinformation campaigns somehow turn into a misinformation campaign itself? At the heart…
Is anyone surprised the Obama-era whistleblower directive put into place actually worked? I bet Edward Snowden is. Not only did it work, but Congress wouldn’t have given it such weight had the information been otherwise leaked in a Snowden or Manning-esque style, nor would the IG have had the chance to acknowledge the information as…
Social media is rife with analysis of an FBI joint report on Russian malicious cyber activity, and whether or not it provides sufficient evidence to tie Russia to election hacking. What most people are missing is that the JAR was not intended as a presentation of evidence, but rather a statement about the Russian compromises, followed by a detailed scavenger hunt for administrators to identify the possibility of a compromise on their systems. The data included indicators of compromise, not the evidentiary artifacts that tie Russia to the DNC hack.
One thing that’s been made clear by recent statements by James Clapper and Admiral Rogers is that they don’t know how deep inside American computing infrastructure Russia has been able to get a foothold. Rogers cited his biggest fear as the possibility of Russian interference by injection of false data into existing computer systems. Imagine the financial systems that drive the stock market, criminal databases, driver’s license databases, and other infrastructure being subject to malicious records injection (or deletion) by a nation state. The FBI is clearly scared that Russia has penetrated more systems than we know about, and has put out pages of information to help admins go on the equivalent of a bug bounty.
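The JAR’s indicators of compromise lend themselves to simple automated matching, which is essentially what the report asks administrators to do. As a rough sketch – the network range and log format here are hypothetical, not taken from the actual report – an admin might flag connections against published address ranges like so:

```python
import ipaddress

# Hypothetical indicator networks of the kind published in the JAR.
IOC_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]

def flag_connections(log_entries):
    """Return log entries whose source address falls inside
    a published indicator network."""
    hits = []
    for entry in log_entries:
        addr = ipaddress.ip_address(entry["src"])
        if any(addr in net for net in IOC_NETWORKS):
            hits.append(entry)
    return hits
```

A hit here indicates only the possibility of compromise – which is exactly why the JAR reads as a scavenger hunt rather than a presentation of evidence.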
I wasn’t originally going to dig into some of the ugly details about San Bernardino, but with FBI Director Comey’s latest actions to publicly embarrass Hillary Clinton (who I don’t support), or to possibly tip the election towards Donald Trump (who I also don’t support), I am getting to learn more about James Comey and from what I’ve learned, a pattern of pushing a private agenda seems to be emerging. This is relevant because the San Bernardino iPhone matter saw numerous accusations of pushing a private agenda by Comey as well; that it was a power grab for the bureau and an attempt to get a court precedent to force private business to backdoor encryption, while lying to the public and possibly misleading the courts under the guise of terrorism.
The Wall Street Journal published an article today citing a source saying the FBI plans to tell the White House that “it knows so little about the hacking tool that was used to open terrorist’s iPhone that it doesn’t make sense to launch an internal government review”. If true, this should be taken as an act of recklessness by the FBI with regard to the Syed Farook case: the FBI apparently allowed an undocumented tool to run on a piece of high-profile, terrorism-related evidence without adequate knowledge of the tool’s specific function or forensic soundness.
Best practices in forensic science dictate that any forensics instrument be tested and validated. It must be accepted as forensically sound before it can be put to live evidence. Such a tool must yield predictable, repeatable results, and an examiner must be able to explain its process in a court of law. Our court system expects this, and allows for tools (and examiners) to face numerous challenges based on the credibility of the tool, which can only be determined by rigorous analysis. The FBI’s admission that they have so little knowledge about how the tool works is an admission of failure to evaluate the science behind it; its core functionality has not been evaluated in any meaningful way. Knowing how the tool managed to get into the device is the bare minimum I would expect anyone to know before shelling out over a million dollars for a solution, especially one that was going to be used on high-profile evidence.
A tool should not make changes to a device, and any changes should be documented and repeatable. There are several other variables to consider in such an effort, especially when imaging an iOS device. Apart from changes made directly by the tool (such as overwriting unallocated space, or portions of the file system journal), simply unlocking the device can cause the operating system to make a number of changes, start background tasks which could lead to destruction of data, or cause other changes unintentionally. Without knowing how the tool works, or what portions of the operating system it affects, what vulnerabilities are exploited, what the payload looks like, where the payload is written, what parts of the operating system are disabled by the tool, or a host of other important things – there is no way to effectively measure whether or not the tool is forensically sound. Simply running it against a dozen other devices to “see if it works” is not sufficient to evaluate a forensics tool – especially one that originated from a grey hat hacking group, potentially with very little actual in-house forensics expertise.
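To make the repeatability requirement concrete, here is a minimal sketch of the kind of validation I’m describing – the acquire callable is a hypothetical stand-in for a real acquisition tool, not any vendor’s API:

```python
import hashlib

def is_repeatable(acquire, evidence: bytes, runs: int = 3) -> bool:
    # A forensically sound acquisition must be deterministic:
    # the same evidence should always produce the same image.
    digests = {hashlib.sha256(acquire(evidence)).hexdigest()
               for _ in range(runs)}
    return len(digests) == 1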
To the Honorable Congress of the United States of America,
I am a proud American who has had the pleasure of working with the law enforcement community for the past eight years. As an independent researcher, I have assisted on numerous local, state, and federal cases and trained many of our federal and military agencies in digital forensics (including breaking numerous encryption implementations). Early on, there was a time when my skill set was exclusively unique, and I provided assistance at no charge to many agencies flying agents out to my small town for help, or meeting with detectives while on vacation. I have developed an enormous respect for the people keeping our country safe, and continue to help anyone who asks in any way that I can.
With that said, I have seen a dramatic shift in the core competency of law enforcement over the past several years. While there are many incredibly bright detectives and agents working to protect us, I have also seen an uncomfortable number who have regressed to a state of “push button forensics”, often referred to in law enforcement circles as “push and drool forensics”; that is, rather than using the skills they were trained with to investigate and solve cases, many have developed an unhealthy dependence on forensics tools, which can produce the “smoking gun” for them, literally at the touch of a button. As a result, I have seen many open-and-shut cases that received only the most abbreviated of investigations, where much of the evidence was largely ignored for the sake of these “smoking guns” – including much of the evidence on the mobile device, which oftentimes conflicted with the core evidence used.
The Burr Encryption Bill – Discussion Draft dropped last night, and proposes legislation to weaken encryption standards for all United States citizens and corporations. The bill itself is a hodgepodge of technical ineptitude combined with pockets of contradiction. I would cite the most dangerous parts of the bill, but the bill in its entirety is dangerous, not just for its intended uses but also for all of the uses that aren’t immediately apparent to the public.
The bill, in short, requires that anyone who develops features or methods to encrypt data must also decrypt the data under a court order. This applies not only to large companies like Apple, but could be used to punish developers of open source encryption tools, or even encryption experts who invent new methods of encryption. Its broad wording allows the government to hold virtually anyone responsible for what a user might do with encryption. A good parallel to this would be holding a vehicle manufacturer responsible for a customer that drives into a crowd. Only it’s much worse: the proposed legislation would allow the tire manufacturer, as well as the scientists who invented the tires, to be held liable as well.
Sir, you may not know me, but I’ve impacted your agency for the better. For several years, I have been assisting law enforcement as a private citizen, including the Federal Bureau of Investigation, since the advent of the iPhone. I designed the original forensics tools and methods that were used to access content on iPhones, which were eventually validated by NIST/NIJ and ingested by the FBI into your agency’s own version of my tools. Prior to that, the FBI issued a major deviation allowing my tools to be used without validation, due to the critical need to collect evidence on iPhones. They later became the foundation for virtually every commercial forensics tool to make it to market at the time. I’ve supported thousands of agencies worldwide for several years, trained our state, federal, and military in iOS forensics, assisted hands-on in numerous high-profile cases, and invested thousands of hours of continued research and development for a suite of tools I provided at no cost – for the purpose of helping to solve crimes. I’ve received letters from a former lab director at FBI’s RCFL, DOJ, NASA OIG, and other agencies citing numerous cases that my tools have been used to help solve. I’ve done what I can to serve my country, and asked for little in return.
First let me say that I am glad the FBI has found a way to get into Syed Farook’s iPhone 5c. Having assisted with many cases, I understand from firsthand experience what you are up against, and have enormous respect for what your agency does, as well as others like it. Oftentimes it is that one finger in the dike that stops the entire dam from breaking. I would have been glad to assist your agency with this device, and even reached out to my contacts at FBI with a method I’ve since demonstrated in a proof-of-concept. Unfortunately, in spite of my past assistance, FBI lawyers prevented any meetings from occurring. But nonetheless, I am glad someone has been able to reach you with a viable solution.
An adversary has an unknown exploit, and it could be used on a large scale to attack your platform. Your threat isn’t just your adversary, but also anyone who developed or sold the exploit to the adversary. The entire chain of information from conception to final product could be compromised anywhere along the way, and sold to a nation state on the side, blackmailed or bribed out of someone, or just used maliciously by anyone with knowledge or access. How can Apple make this problem go away?
The easiest technical solution is a boot password. The trusted boot chain has been impressively solid for the past several years, since Apple minimized its attack surface after the 24kpwn exploit in the early days. Apple’s trusted boot chain consists of a multi-stage boot loader, with each phase of boot checking the integrity of the next. Having been stripped down, it is now a shell of the hacker’s smorgasbord it used to be. It’s also a very direct and visible execution path for Apple, and so if there ever is an alleged exploit of it, it will be much easier to audit the code and pin down points of vulnerability.
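The chain-of-trust idea can be sketched in a few lines. This is a deliberate simplification – Apple’s real boot stages verify cryptographic signatures, not bare digests, and the stage names below are illustrative assumptions:

```python
import hashlib

def digest(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

def verify_boot_chain(chain, rom_digest: str) -> bool:
    # Each stage is (image, expected_digest_of_next_image).
    # The first expectation is rooted in immutable Boot ROM,
    # so tampering with any later stage breaks the chain.
    expected = rom_digest
    for image, next_expected in chain:
        if digest(image) != expected:
            return False
        expected = next_expected
    return True
```

Because each link is checked against a value anchored one stage earlier, an attacker has to compromise the root to compromise anything after it – which is why a minimized, auditable boot chain is so valuable.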
As speculation continues about the FBI’s new toy for hacking iPhones, the possibility of a software exploit continues to be a point of discussion. In my last post, I answered the question of whether such an exploit would work on Secure Enclave devices, but I didn’t fully explain the threat that persists regardless.
For sake of argument, let’s go with the theory that FBI’s tool is using a software exploit. The software exploit probably doesn’t (yet) attack the Secure Enclave, as Farook’s 5c didn’t have one. But this probably also doesn’t matter. Let’s assume for a moment that the exploit being used could be ported to work on a 64-bit processor. The 5c is 32-bit, so this assumes a lot. Some exploits can be ported, while others just won’t work on the 64-bit architecture. But let’s assume that either the work has already been done or will be done shortly to do this; a very plausible scenario.
As expected, the FBI has succeeded in finding a method to recover the data on the San Bernardino iPhone, and now the government can see all of the cat pictures Farook was keeping on it. We don’t know what method was used, as it’s been classified. Given the time frame and the details of the case, it’s possible it could have been the hardware method (NAND mirroring) or a software method (exploitation). Many have speculated on both sides, but your guess is as good as mine. What I can tell you are the implications.
FBI acknowledged today that there “appears” to be an alternative way into Farook’s iPhone 5c – something that experts have been shouting about for weeks now; in fact, we’ve been saying there are several viable methods. Before I get into which method I think is being used, here are some other viable methods and why I don’t think they’re part of the solution being utilized:
A destructive method, such as de-capping or deconstruction of the microprocessor would preclude FBI from being able to come back in two weeks to continue proceedings against Apple. Once the phone is destroyed, there’s very little Apple can do with it. Apple cannot repair a destroyed processor without losing the UID key in the process. De-capping, acid and lasers, and other similar techniques are likely out.
We know the FBI hasn’t been reaching out to independent researchers, and so this likely isn’t some fly-by-night jailbreak exploit out of left field. If respected security researchers can’t talk to FBI, there’s no way a jailbreak crew is going to be allowed to either.
An NSA 0-day is likely also out, as the court briefs suggested the technique came from outside USG.
While it is possible that an outside firm has developed an exploit payload using a zero-day, or one of the dozens of code execution vulnerabilities published by Apple in patch releases, this likely wouldn’t take two weeks to verify, and the FBI wouldn’t stop a full court press (literally) against Apple unless the technique had been reported to have worked. A few test devices running the same firmware could easily determine such an attack would work, within perhaps hours. A software exploit would also be electronically transmittable, something that an outside firm could literally email to the FBI. Even if that two weeks accounted for travel, you still don’t need anywhere near this amount of time to demonstrate an exploit. It’s possible the two weeks could be for meetings, red tape, negotiating price, and so on, but the brief suggested that the two weeks was for verification, and not all of the other bureaucracy that comes after.
This likely has nothing to do with getting intel about the passcode or reviewing security camera footage to find Farook typing it in at a cafe; the FBI is uncertain about the method being used and needs to verify it. They wouldn’t go through this process if they believed they already had the passcode in their possession, unless it was for fasting and prayer to hope it worked.
Breaking the file system encryption on one of NSA/CIA’s computing clusters is unlikely; that kind of brute forcing doesn’t give you a two-week heads-up that it’s “almost there”. It can also take significantly longer – possibly years – to crack.
Experimental techniques such as frankensteining the crypto engine or other potentially niche edge techniques would take much longer than two weeks (or even two months) to develop and test, and would likely also be destructive.
Much has happened since a California magistrate court originally granted an order for Apple to assist the FBI under the All Writs Act. For one, most of us now know what the All Writs Act is: An ancient law that was passed before the Fourth Amendment even existed, now somehow relevant to modern technology a few hundred years later. Use of this act has exploded into a legal argument about whether or not it grants carte blanche rights of the government to demand anything and everything from private companies (and incidentally, individuals) if it helps them prosecute crimes. Of course, that’s just the tip of the iceberg. We’ve seen strong debates about whether any person should be allowed to have private conversations, thoughts, or ideas that can’t later be searched, whether forcing others to work for the government violates the constitution, whether other countries will line up to exploit technology if America does, and ultimately – at the heart of all of these – whether fear of the word “terrorism” is enough to cause us all to burn our constitution.
Over the past few weeks, the entire tech community has gotten behind Apple, filing a barrage of friend-of-the-court briefs on Apple’s behalf. Security experts such as myself, Crypto “Rock Stars”, constitutionalists, technologists, lawyers, and 30 Helens all agree that Apple is in the right, and that backdooring iOS would cause irreparable damage to the security, privacy, and safety of hundreds of millions of diplomats, judges, doctors, CEOs, teenage girls, victims of crimes, parents, celebrities, politicians, and all men and women around the world. Throughout the past month, legal exchanges have escalated from ice cold to white hot, and from professional to a traveling flea circus as ridiculous terms such as “lying dormant cyber pathogen” have been introduced. Congress, the courts, and the public have seen strong technical and legal arguments, impassioned pleas from victims, attempts at reason by the law enforcement community, name calling, proverbial mugs-thrown-across-the-room, uncontrollable profanity on media briefings, and just about any other form of pressure manifesting itself that one can imagine.
The idea of a controlled explosion comes to mind when I think about pending proceedings with Apple. The Department of Justice argues that a backdoored version of iOS can be controlled, in that Apple’s existing security mechanisms can prevent it from blowing up any device other than Farook’s. This is quite true. The code signing and TSS signing mechanisms used to install firmware have controls that can most certainly bind a firmware bundle to a given device UDID. What’s not true is the amount of real control and protection this provides.
Think of Apple’s signing mechanisms as a kind of “leash” if you will; they provide a means of digital rights management to control any payload delivered onto the device. Where the DOJ’s argument falls into error is that their focus is too much on this leash, and too little on the payload itself. The payload in this scenario is a modified version of iOS that has a direct line into a device’s security mechanisms to both disable them and manipulate them to rapidly brute force a passcode (remotely, mind you). It’s the electronic equivalent of an explosive for an iPhone that will blow the safe open (FBI’s analogy, not mine). What Apple is being forced to design, develop, test, validate, and protect is essentially a bomb on a leash.
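To illustrate the leash-versus-payload distinction, here is a rough sketch of device-bound signing. The HMAC is a stand-in for Apple’s actual asymmetric signature scheme, and the key and message layout are assumptions for illustration only:

```python
import hashlib
import hmac

SIGNING_KEY = b"hypothetical signing key"  # stand-in for Apple's key

def personalize(firmware: bytes, udid: str) -> bytes:
    # The signature covers the firmware digest and the device UDID,
    # so a ticket issued for one device won't validate on another.
    msg = hashlib.sha256(firmware).digest() + udid.encode()
    return hmac.new(SIGNING_KEY, msg, hashlib.sha256).digest()

def device_accepts(firmware: bytes, udid: str, ticket: bytes) -> bool:
    # Performed by the device's boot loader before installing firmware.
    return hmac.compare_digest(personalize(firmware, udid), ticket)
```

The ticket constrains where the firmware can run; nothing about this mechanism constrains what the firmware does once installed, which is why the payload, not the leash, is the real danger.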