Reading about the OceanGate tragedy this week has been heartbreaking, and my prayers are with the families of those lost. Yet while the investigation gets underway, most in the deep-sea community seem to have already concluded that this accident was entirely preventable. Stockton Rush’s alleged disregard for safety and process in the name of innovation was well known throughout that world, and this failure came as no surprise. Yet while we all want to hate on the seemingly reckless CEO who may have caused this accident, we should pause to consider that in America, we seem entirely comfortable with sociopaths driving progress at the expense of human life, and we even continue to financially support them.
Stockton Rush reminds me a lot of another, not entirely different CEO who is well known for his lack of restraint and cavalier approach to safety. According to a Washington Post analysis of NHTSA data, Tesla’s Autopilot has been involved in 736 crashes and 17 fatalities since 2019, and crashes have been on the rise since Tesla expanded its Full Self-Driving technology. Over 800,000 Teslas have reportedly been under investigation, and an internal leak from last month alleges thousands of safety complaints. In spite of a fatality count more than triple that of the OceanGate Titan, our own government seems perfectly content with these deaths and has taken little corrective action. Neither have investors, nor Tesla’s cult following of customers. Many supporters of Tesla are quick to point out the good that such technology will do for safety once it works, but that isn’t in question. What’s at issue is the lack of process and the seemingly unnecessary urgency, much like OceanGate’s, to rush forward without the proper safety controls in place. While Teslas might still make fewer mistakes than humans, the Post’s report seems to indicate those mistakes are far more biased toward school children, first responders, and pedestrians. While human drivers who hit children are typically prevented from driving again, AI drivers who hit them not only remain on the road, but their mistakes are replicated across the entire fleet. Similarly, Stockton Rush is dead, but the logic that allegedly led to 17 fatalities is still very much alive in hundreds of thousands of other vehicles. The value alignment problem has long been a known issue in AI, but we’ve become accustomed to treating it with a wrench and a reset button as if it were still in the lab, rather than as the new reality in which human life is on the line with every error.
There seem to be other problems with Tesla’s information gathering due to a lack of adequate sensory equipment; the AI should be able to better explore its environment to modify its future percepts. Removing sensors was, in my opinion, another risky gamble Tesla has made with others’ lives, making the environment less observable to the AI.
Mechanical arts are of ambiguous use, serving as well for hurt as for remedy.
It’s been a long time since I’ve worked in the field of ML (or what some call AI), and we’ve come a long way from simple text classification to what’s being casually called generative AI today. While the technology has made many advances, the foundational concepts of machine learning have remained analogous over time. ML depends heavily on a large set of training data, which is analyzed to pull out its most interesting and defining features, and this becomes the basis for training a model. The process might involve parsing text, or performing analysis like object identification or analyzing stylistic features in art. Each of these is, in itself, a smaller – but mathematical – process. I experimented with a primitive form of meta-level learning in text classification several years ago, which may help convey the general idea: the process identifies “features” of the reference samples being trained on. The features this process pulls out can be simple, like words in a document or pixels from a handwriting sample, though today they can be more sophisticated “critical patterns” correlated to literary authorship or artistry, such as patterns within art and music composition, sometimes stored in other models. Whatever the content is, the purpose of the training algorithm is to identify patterns and correlations across the data to build a weighted or structured model. The most interesting patterns in the training data influence weights or probabilities, creating a hidden layer: millions of “gears” that converge to compute the most statistically significant outcomes. In this sense, the term “learning” is a bit of a stretch; what’s happening is more along the lines of statistical transcription of a set of features. Feature selection is one of the key differences between various ML models, and why you have some constructing music while others render art.
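To make that concrete, here is a toy sketch of the kind of primitive text classification described above: it treats individual words as “features” and scores new text against each label’s word statistics, in the style of naive Bayes with add-one smoothing. The sample data and labels are invented purely for illustration, not taken from any real system.

```python
from collections import Counter
import math

def features(text):
    # The simplest possible "feature" extraction: lowercase words.
    return text.lower().split()

def train(samples):
    # samples: list of (text, label). Count word frequencies per label;
    # these counts are the "weights" of this primitive model.
    counts = {}
    for text, label in samples:
        counts.setdefault(label, Counter()).update(features(text))
    return counts

def classify(model, text):
    # Score each label by summed log-probabilities of the input's
    # features, with add-one smoothing so unseen words don't zero out.
    def score(label):
        c = model[label]
        total = sum(c.values())
        vocab = len(c)
        return sum(math.log((c[w] + 1) / (total + vocab))
                   for w in features(text))
    return max(model, key=score)

# Hypothetical two-label training set.
model = train([
    ("free money win prize now", "spam"),
    ("meeting agenda for project review", "ham"),
])
print(classify(model, "win free prize"))  # prints "spam"
```

The point of the sketch is how mechanical it is: no understanding, just counting and comparing statistics across a set of features.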
The math is pretty consistent – more sophisticated machines like neural nets are typically trained using backpropagation and gradient descent, while other machines such as chat bots and text generators might use weighted Markov models or Bayesian networks. These approaches have been applied to everything from natural language processing and handwriting recognition to today’s work in genome sequencing and autonomous driving. Still, these traditional forms of machine learning are not much more than sophisticated pattern recognizers. It is largely a deconstructive process of weights and statistical magic.
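The gradient descent piece can be shown in a few lines: a toy model y = w·x fit to made-up data by repeatedly nudging the weight against the gradient of the squared error. Real training does this simultaneously across millions of weights, but the arithmetic is the same in spirit. The data, learning rate, and iteration count here are arbitrary choices for illustration.

```python
# Fit y = w * x to toy data by plain gradient descent on squared error.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # underlying truth: w = 2

w = 0.0    # initial guess
lr = 0.05  # learning rate

for _ in range(200):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    # Step downhill against the gradient.
    w -= lr * grad

print(round(w, 3))  # converges to 2.0
```

Backpropagation is, at heart, a bookkeeping method for computing that same gradient through many layers of weights at once.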
Today’s generative AI still goes through this type of deconstructive process, but also has a formative element. Where these new approaches excel is in going beyond parsing information into a knowledge base to also applying a formative process to that information – what we might conflate with intelligence, but which still falls short of what most would consider the result of human reasoning. To present the data in some coherent form, this involves training not just on the information, but on the many dimensions of that information (such as the number of different contexts a word may be used in), or on the constructs and critical patterns of that information (ABBA, or 1-4-5, as very basic examples), enabling it to formulate an output in the pattern of an existing set of learned reference samples. Even modern training approaches, such as those used in the transformer model, still require supervised testing to tell the model what bits of its output are garbage, so that the output eventually looks intelligent; it is actually closer to “filtered garbage”. So identifying the pattern of iambic pentameter, for example, is still an artificial process; it can be computed adaptively with a large enough data set. Moving from atomic and factored learning into structured learning allows a system to fingerprint complex patterns much more efficiently. Scale those patterns to music, art, literature, and the more sophisticated patterns that make up our repertoire of human creativity, and it is impressive – but still synthesized. Information processing is still very primitive, and lacks many of the traits of human understanding. The inability to conceive of tradition, authority, and prejudice is why all of this advanced technology still leaves us with Nazi chatbots. Some would call this confirmation theory, an area that remains quite underdeveloped (and the AI reading this wouldn’t disagree).
Even the raw objectives of AI are based on human-engineered goals, and evaluated using performance metrics to select the best behavior. This is a very mechanical process. Certain behaviors we may view as creative tasks may in fact be simple randomness introduced into most AIs to avoid infinite logic loops. In short, a lot of what you see is quite the opposite of the autonomous, self-motivated behavior it looks like. Any good AI behaves rationally only because someone programmed good objectives into it. Garbage in, garbage out.
One of the big differences between traditional forms of ML and generative AI is the direction in which the data flows. Traditionally, inputs flow into the system for training and queries. To train traditional systems, you’d suck in “a bunch of other people’s stuff”, and it identifies all of the interesting patterns that are then compared with the input sample. Generative AI takes this a step further, and flips the switch on the vacuum cleaner – and now all of the dirt that was initially fed into the system is shot out the pipe to produce the equivalent of a digital dust cloud of the original training medium. The output of generative AI takes the critical patterns and concepts weighted during the AI’s training and applies some formative computation to produce its own reference sample as a result. Neat-o. Nice parlor trick.
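A toy illustration of “flipping the switch on the vacuum cleaner”: a word-level Markov chain sucks in a training sample, recording which words follow which, then runs in the other direction to blow out new text stitched entirely from the patterns it absorbed. The training sentence is invented; a real generator trains on enormous corpora and far richer structures, but the principle is the same – every word of the output came from someone else’s input.

```python
import random

def train_markov(text):
    # Deconstruction: record, for each word, the words that follow it.
    words = text.split()
    model = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length, seed=0):
    # Formation: walk the chain, sampling a successor seen in training.
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break  # dead end: no word ever followed this one
        out.append(rng.choice(successors))
    return " ".join(out)

model = train_markov("the cat sat on the mat and the cat ran")
print(generate(model, "the", 5))
```

The output reads vaguely like the training text because it is the training text, rearranged along statistically observed seams – the “digital dust cloud” in miniature.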
With billions of dollars, this ML scales to perform impressive computational tasks. The risk of this type of system goes beyond the traditional vision of a robot building a better chair, or replacing a worker at a plant. Today’s ML systems are white collar professionals and don’t require mechanical bodies; the computational capabilities of these systems can replace a broad array of professions using the thought product of millions of humans at once – so how could anyone compete with that? No one was ever supposed to, in fact. Douglas Engelbart, a pioneer in the field of human-computer interaction, saw AI’s value more in intelligence augmentation (that is, IA rather than AI), as a means of assisting the worker. Corporate greed has already led to the recent misapplication of AI, using its advanced capabilities to replace, rather than augment, humans. Hollywood’s ML generation of “extras” is a quite extreme and literal example of this. But corporate greed isn’t AI generated. AI is replacing employees for very human reasons that have little to do with artificial intelligence itself. Yet a correct computer-human interface is a fundamental principle that many computer scientists and science fiction authors alike fear will be broken. Should you hate AI? No, you should hate greed.
The cold irony is this: at a deconstructed level, the output of generative AI represents the collective intelligence of other people’s thought products – their ideas, writings, music, theology, facts, opinions, and so on, likely also including those of the people who lose their jobs to it. This also means others’ patents and copyrighted works, either directly or indirectly. ML has proven wildly successful at identifying the most effective critical patterns and gluing them together in some coherent form that communicates a desired result – but at the end of the day, all of its intelligence indeed belongs to the other people whose content was used to train it, almost always without their permission. In the end, generative AI takes from the world’s best authors, artists, musicians, philosophers, and other thinkers – erasing their identities, and taking their credit in its output. Without the proper restraints, it will produce the master forgeries of our generation. Should we forget its limitations and begin to rely on it for information, AI will easily blur the lines between what we view as real facts and synthesized ones. Consider a recent instance of this, where an attorney got himself in hot water for citing case law that didn’t exist – AI had seemingly fabricated it, while the attorney thought he was leveraging AI to do research. Imagine the impact on future case law should courtroom outcomes rest on fictional precedent that fails to be fact-checked every time.
I’ve recently written about the problems with social media in provoking speech and conformity, as well as the cult phenomenon that social media companies capitalize on. Elon Musk’s recent purchase of Twitter seems an apropos time to address the direct suppression of free speech.
Among Musk’s poorly thought out misadventures, he recently and rightfully reinstated the Twitter accounts of several journalists who had been critical of him in the past, and whom he had previously rage-banned without warning. What’s really appalling to me isn’t that he suspended them in the first place (which was deeply troubling), but rather the guise under which he reinstated them. Like many of his twit-decisions, Musk started with a Twitter poll, regarded as having roughly the same credibility as a Russian election. This was followed by a decree that “the people have spoken”, referring to the disenfranchised twelve-year-olds, Russian trolls, and bots that vote on Twitter. Musk uses this business strategy, which cost $44 billion in research, whenever he wants to make a public policy decision that doesn’t involve putting people out of work. This policy-related polling seems almost an attempt to make the Twitterverse feel empowered by the new CEO.
Yet while Musk might have his users believe that they are now participants in the free speech narrative, the very concept of free speech itself is at odds with – even downright hostile to – the notion of crowd-sourced policy. The Bill of Rights was designed intentionally to “prevent a sheep and two wolves from voting on what’s for dinner”. It seems to elude Musk that the right of free speech exists at a level higher than himself; that, rather than handing it out by vote, he is a mere steward of it with the responsibility of defending it. The Twitterverse at large has not been and should not be empowered to make decisions about what speech to permit, because doing so destroys free speech. Failing to understand the requirements of such a basic human right is a dangerous thing for someone dictating policy for any system that depends on it. Musk, rather, seems to lack either the capacity or the restraint to make responsible decisions about free speech, or to distinguish free speech from misinformation (today’s “Fire!” in a crowded theatre). Musk’s inability to handle such a delicate instrument of civil society is truly terrifying given the sheer amount of unilateral power he now has over public discourse.
Twitter was already a sick animal when Musk took over not long ago; the idea of giving all users a popular vote on speech policy is not just the adolescent prank it looks like – it stands to set a dangerous norm across all social media platforms unless users push back on such an offensive thing. A society that believes the people should be allowed to choose what speech is acceptable is a society that burns books and compels conformity. Musk is simply taking the first step by normalizing this type of behavior among the online community. Anyone who is a free speech advocate should be condemning this, not participating in it. If Musk doesn’t start to apply his brain here rather than his ego, Twitter 2.0 could very easily resemble German Student Union 1.0. Empowering children over others was how things started to go wrong back then too.
I had struggled to propose a solution to this problem, at least as far as Twitter is concerned, and then awoke to the most appropriate and fitting news on the subject: Musk created another poll, in which Twitter users voted he resign his post as CEO. It seems he occasionally does poll before putting people out of a job.
The priest shall bring her and have her stand before the Lord. Then he shall take some holy water in a clay jar and put some dust from the tabernacle floor into the water. After the priest has had the woman stand before the Lord, he shall loosen her hair and place in her hands the reminder-offering, the grain offering for jealousy, while he himself holds the bitter water that brings a curse. Then the priest shall put the woman under oath and say to her, “If no other man has had sexual relations with you and you have not gone astray and become impure while married to your husband, may this bitter water that brings a curse not harm you. But if you have gone astray while married to your husband and you have made yourself impure by having sexual relations with a man other than your husband”— here the priest is to put the woman under this curse—“may the Lord cause you to become a curse[d] among your people when he makes your womb miscarry and your abdomen swell. May this water that brings a curse enter your body so that your abdomen swells or your womb miscarries. Then the woman is to say, “Amen. So be it.”
Documented use of an Abortifacient, Numbers 5:16-22
In May 2022, white evangelical Christians woke up to some rather unexpected news. A draft opinion had somehow leaked out of the Supreme Court, suggesting that Roe v. Wade would soon be overturned. Shortly after, it was. I single out white evangelicals here because, according to a recent Pew Research study, they are twice as likely as other Americans (including other Christians) to want to see abortion outlawed. It would be an error, though, to conclude that this means white evangelicals are the most pro-life. No no no, this is not the case at all. White evangelicals are no more pro-life than other religious groups, Christian or otherwise – they are, however, the most autocratic. Yet those who would use the Bible to institute government-sponsored morality seem to have forgotten where the bodies are buried: also in their Bible.
The concept of abortion is nothing new. The practice of inducing an abortion as punishment for unfaithful women was once conducted as part of priestly duties in pre-Christian Judaism. A woman suspected of adultery, yet maintaining her innocence, would be partially stripped, treated as an animal (right down to the presentation of an animal’s meal offering), and made to drink a type of holy water concoction; it was believed that, were she guilty, an unfaithful woman would abort her lover’s fetus and die within as long as three years (Mishnah Sotah 3). Holy water has a long tradition of being used to cleanse and purify, and so the implication was that the illegitimate fetus was evil, and therefore must be purged from the woman. Behind the scenes, this seemed to have more to do with the financial aspects of marriage contracts and intimidation than it did holiness, and the practice was eventually ended prior to the destruction of the second temple. Today’s American evangelicals take the opposing viewpoint of their ancestors – namely, against all forms of abortion – yet still firmly hold onto the practice of controlling women in much the same way. Yet while many other Christians value life just as much as autocratic evangelicals, we differ greatly from them particularly on a solution to the number of unwanted pregnancies in the country. The earliest Christians opposed abortion by adopting others’ discarded and unwanted live babies – a Roman practice known as “infant exposure” would leave abandoned babies in the trash or otherwise discarded after birth, left to die or be raised as slaves and prostitutes by others. It was this practice that many early writers condemned as “the worst abomination of all” (Philo of Alexandria). They wrote about Roman abortion practices far less.
Yet while early Christians put their faith into action by sacrificially taking in these babies to save them from such a fate, today’s evangelicals largely believe opposing abortion through politics and legislation is the only solution. Most others believe it is an ineffective and dangerous solution – perhaps just as dangerous as the ancient practice that once caused them (or at least was perceived to; the practice’s effectiveness was highly questionable among rabbis).
Forced morality is likewise nothing new either. In the book of Chronicles, King Josiah breaks down the altars of false gods, tears down carved images, and rids Judah and Jerusalem of the ungodliness of the time. When his priest finds the Book of the Law, Josiah tears his robe and imposes moral rule according to the laws of the book. The chronicler Ezra writes, “Josiah removed all the detestable idols from all the territory belonging to the Israelites, and he had all who were present in Israel serve the Lord their God. As long as he lived, they did not fail to follow the Lord, the God of their ancestors.” An often overlooked detail in this story is that in spite of a society living under (and clearly practicing!) moral law, God tells Josiah that he will take his life early so that he will not see the disaster God plans to bring about. A useful object lesson can be found here: perceived morality counts for little when it is compelled. At the center of today’s controversy is not really Christian doctrine at all (there is no Christian doctrine concerning abortion), or even morality, but rather the same desire for power; today, that translates to the church’s desire for socio-economic power.
I only regret that I have but one life to lose for my country.
On the day of Nathan Hale’s execution, a British officer wrote of Hale, “he behaved with great composure and resolution, saying he thought it the duty of every good Officer, to obey any orders given him by his Commander-in-Chief; and desired the Spectators to be at all times prepared to meet death in whatever shape it might appear.” Nearly ten years ago, I viewed Edward Snowden as a slightly nerdier, yet similar patriot to the greats. I wanted to believe he was serving his country, and was unfairly targeted by the state for standing up for those beliefs. Much of tech did too, which is why this is an important discussion to have. It’s affected how the tech community views and interacts with government in many ways, with all of the prejudices it brought into play. For all the pontificating about freedom that Snowden has done since then, his taking up permanent citizenship in Russia, and his silence since the beginning of the war with Ukraine (except, more recently, to criticize the US once more), today I see in Snowden the pattern of a common deserter rather than the champion of free speech that some position him as. If Snowden is to set the narrative for how tech views and responds to government, then our occasional criticism of his own behavior should be fair game.
During his time in Russia, we have seen the whistleblower system work effectively here at home. The details of Trump’s Ukraine call, and the subsequent freezing of security aid, seem rather relevant today. More impressively, this same whistleblower system Snowden criticized worked against a sitting president with no capacity for restraint. The fruits of it were significant, and the process brought both public dissemination and a full press by Congress to protect the whistleblower. Mr. X, whose identity is still somewhat contested, was a hero. He stood up to the bully, knowing better than most how lawless the tyrant was, and of the angry mob he commanded. What happened to X? Very little – certainly far less than the charges Snowden brought on himself or the freedoms he gave up by not using the right channels. Instead of following process, Snowden rejected the whistleblower system as corrupt, used that as justification to leak classified documents, and fled the country under the Obama administration, which was a teddy bear compared to Trump’s. In 2020, he asked us to excuse him again while he applied for Russian citizenship “for the sake of his kids”. Yet even after being proved wrong by a true hero like X while the country lived under a tyrant, Snowden continues to hide from the consequences of this terrible miscalculation.
The Biden administration is having a little Twitter fight about whether or not to reset the followers of the @potus account. While followers were rolled over from the Obama administration to Trump’s, the Trump administration, which views Twitter followers as if they represented actual voters-who-love-Donald, doesn’t think the incoming president should get to inherit all of those bots and disenfranchised twelve-year-olds. Let us stop and reflect on the stupidity and pettiness of this argument. What the Biden administration really should be thinking about is whether to close @potus and get the White House off of Twitter completely.
Social media, especially Twitter, has been devolving year after year into one of the most toxic and unpleasant public gatherings on the Internet. Long before Trump took office, social media was the leading source of disinformation, threats, harassment, toxicity, and division. Combined with a platform that adopts thought-terminating loaded-language hashtags (e.g. #StopTheSteal) and abbreviated messaging that lacks critical thought, Twitter has long been a platform designed to capitalize on the cult phenomenon. Twitter has been not only markedly complicit, but in a position to profit from the toxicity, disinformation, and abuse it allows from the Trump administration and other public officials who’ve started emulating the behavior.
If you watched yesterday’s senate judiciary hearings with CEOs from Twitter and Facebook, two things would have stuck out to you. First, why is Jack Dorsey addressing the senate from the kitchen department at an IKEA? Second, how did a judiciary hearing about misinformation campaigns somehow turn into a misinformation campaign itself?
I was just a teenager when I got involved in the open source community. I remember talking with an old bearded guy once about how this new project, GNU, was going to change everything. Over the years, I mucked around with a number of different OSS tools and operating systems, got excited when symmetric multiprocessing came to BSD, screwed around with Linux boot and root disks, and became both engaged and enthralled with the new community that had developed around Unix over the years. That same spirit was simultaneously shared outside of the Unix world, too. Apple user groups met frequently to share new programs we were working on with our ][c’s, and later our ][gs’s and Macs, exchange new shareware (which we actually paid for, because the authors deserved it), and to buy stacks of floppies of the latest fonts or system disks. We often demoed our new inventions, shared and exchanged the source code to our BBS systems, games, or anything else we were working on, and made the agendas of our user groups community efforts to teach and understand the awful protocols, APIs, and compilers we had at the time. This was my first experience with open source. Maybe it was not yours, although I hope yours was just as positive.
It wasn’t open source that people were excited about, and we didn’t really even call it open source at first. It was computer science in general. Computer science was a brand new world of discovery for many of us, and open source was merely the by-product of natural curiosity and the desire to share knowledge and collaborate. You could call it hacking, but at the time we didn’t know what the hell we were doing, or what to call it. The environment, at the time, was positive, open, and supportive; words that, unfortunately, you probably wouldn’t associate with open source today. You could split hairs and call this the “computing” or “hacking” community, but at the time all of these things were intertwined, and you couldn’t tease them apart without destroying them all. Perhaps that’s what went wrong: eventually, we did.
To the Honorable Congress of the United States of America,
I am a proud American who has had the pleasure of working with the law enforcement community for the past eight years. As an independent researcher, I have assisted on numerous local, state, and federal cases and trained many of our federal and military agencies in digital forensics (including breaking numerous encryption implementations). Early on, there was a time when my skill set was exclusively unique, and I provided assistance at no charge to many agencies flying agents out to my small town for help, or meeting with detectives while on vacation. I have developed an enormous respect for the people keeping our country safe, and continue to help anyone who asks in any way that I can.
With that said, I have seen a dramatic shift in the core competency of law enforcement over the past several years. While there are many incredibly bright detectives and agents working to protect us, I have also seen an uncomfortable number who have regressed to a state of “push button forensics”, often referred to in law enforcement circles as “push and drool forensics”; that is, rather than using the skills they were trained with to investigate and solve cases, many have developed an unhealthy dependence on forensics tools, which can produce the “smoking gun” for them, literally at the touch of a button. As a result, I have seen many open-and-shut cases that received only the most abbreviated of investigations, where much of the evidence was largely ignored for the sake of these “smoking guns” – including much of the evidence on the mobile device, which oftentimes conflicted with the core evidence used.
Sir, you may not know me, but I’ve impacted your agency for the better. For several years, I have been assisting law enforcement as a private citizen, including the Federal Bureau of Investigation, since the advent of the iPhone. I designed the original forensics tools and methods that were used to access content on iPhones, which were eventually validated by NIST/NIJ and adapted by the FBI into your own internal version of my tools. Prior to that, the FBI issued a major deviation allowing my tools to be used without validation, due to the critical need to collect evidence on iPhones. They were later the foundation for virtually every commercial forensics tool to make it to market at the time. I’ve supported thousands of agencies worldwide for several years, trained our state, federal, and military personnel in iOS forensics, assisted hands-on in numerous high-profile cases, and invested thousands of hours of continued research and development for a suite of tools I provided at no cost – for the purpose of helping to solve crimes. I’ve received letters from a former lab director at the FBI’s RCFL, DOJ, NASA OIG, and other agencies citing numerous cases that my tools have been used to help solve. I’ve done what I can to serve my country, and asked for little in return.
First let me say that I am glad the FBI has found a way to get into Syed Farook’s iPhone 5c. Having assisted with many cases, I understand from firsthand experience what you are up against, and have enormous respect for what your agency does, as well as others like it. Oftentimes it is that one finger that stops the entire dam from breaking. I would have been glad to assist your agency with this device, and even reached out to my contacts at the FBI with a method I’ve since demonstrated in a proof-of-concept. Unfortunately, in spite of my past assistance, FBI lawyers prevented any meetings from occurring. But nonetheless, I am glad someone has been able to reach you with a viable solution.
Back in the late 1970s, the University of California, Berkeley, began releasing BSD Unix under permissive licensing that promoted free software anyone could reuse. The releases, maintained by CSRG, a research group inside Berkeley, laid the foundation for many operating systems (including Mac OS X) as we know them today. BSD gradually evolved over time to support the socket model, TCP/IP, Unix’s file model, and a lot more. You’ll find traces of all of these principles – and very often, core code itself – still used decades later in cutting-edge operating systems. The idea of “free software” (whether “free as in beer” or “free as in freedom”) is credited as a driving force behind today’s technology, multi-billion-dollar companies, and even the iPhone or Android device sitting in your pocket. Here’s the rub: None of it was ever really free.