May 4, 2023

AI is Just Someone Else’s Intelligence

Mechanical arts are of ambiguous use, serving as well for hurt as for remedy.

Francis Bacon


It’s been a long time since I’ve worked in the field of ML (or what some call AI), and we’ve come a long way from simple text classification to what’s being casually called generative AI today. While the technology has made many advances, the foundational concepts of machine learning have remained largely consistent over time. ML depends heavily on a large set of training data, which is analyzed to pull out its most interesting and defining features, and this becomes the basis for training a model. The process might involve parsing text, or performing analysis like object identification or analyzing stylistic features in art. Each of these is, in itself, a smaller – but mathematical – process. I experimented with a primitive form of meta-level learning in text classification several years ago, which may help convey the general idea: the process identifies “features” of the reference sample being trained on. The features this process pulls out can be simple, like words in a document or pixels from a handwriting sample, though today they can be more sophisticated “critical patterns” correlated to literary authorship or artistry, such as patterns within art and music composition, sometimes stored in other models.
Whatever the content is, the purpose of the training algorithm is to identify patterns and correlations across the data to build a weighted or structured model. The most interesting patterns in the training data influence weights or probabilities, creating a hidden layer: millions of “gears” that converge to compute the most statistically significant outcomes. In this sense, the term “learning” is a bit of a stretch; what’s happening is more along the lines of statistical transcription of a set of features. Feature selection is one of the key differences between various ML models, and why some models compose music while others render art.
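To make the idea concrete, here is a deliberately toy sketch of that kind of feature-based training: words become features, counts become weights, and “classification” is just scoring against those weights. This is my own minimal illustration, not any production system or the specific classifier mentioned above:

```python
from collections import Counter

def train(labeled_docs):
    """labeled_docs: list of (text, label) pairs.
    Builds a per-label Counter of word features -- the 'weighted model'."""
    model = {}
    for text, label in labeled_docs:
        model.setdefault(label, Counter()).update(text.lower().split())
    return model

def classify(model, text):
    """Score each label by its normalized feature weights; highest wins."""
    words = text.lower().split()
    def score(counts):
        total = sum(counts.values()) or 1
        return sum(counts[w] / total for w in words)
    return max(model, key=lambda label: score(model[label]))

docs = [
    ("cheap pills buy now", "spam"),
    ("limited offer buy cheap", "spam"),
    ("meeting notes attached", "ham"),
    ("see notes from the meeting", "ham"),
]
model = train(docs)
print(classify(model, "buy cheap pills"))  # -> spam
print(classify(model, "meeting notes"))    # -> ham
```

The entire “intelligence” here is a table of counts derived from someone else’s documents – which is the point.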
The math is pretty consistent – more sophisticated machines like neural nets are typically trained using backpropagation and gradient descent, while other machines such as chatbots and text generators might use weighted Markov models or Bayesian networks. These approaches have been applied to everything from natural language processing and handwriting recognition, to today’s work in genome sequencing and autonomous driving. Still, these traditional forms of machine learning are not much more than sophisticated pattern recognizers. The process is largely deconstructive, driven by weights and statistical magic.
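A weighted Markov model, for instance, fits in a few lines: count which word follows which, then sample new text in proportion to those weights. This is a hedged, toy sketch of the general technique, not any particular chatbot’s implementation:

```python
import random
from collections import defaultdict, Counter

def train_markov(text):
    """Count word -> next-word transitions: the 'weights' of the model."""
    transitions = defaultdict(Counter)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        transitions[cur][nxt] += 1
    return transitions

def generate(transitions, start, length=6, seed=0):
    """Walk the chain, sampling each next word by its transition weight."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = transitions.get(out[-1])
        if not nxt:
            break
        choices, weights = zip(*nxt.items())
        out.append(rng.choices(choices, weights=weights)[0])
    return " ".join(out)

corpus = "the cat sat on the mat the cat ran on the mat"
chain = train_markov(corpus)
print(generate(chain, "the"))
```

Every word pair the generator emits already existed in the training text; the model recombines, it doesn’t reason.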

Today’s generative AI still goes through this type of deconstructive process, but it also has a formative element. Where these new approaches excel is in going beyond parsing information into a knowledge base: they also apply a formative process to that information – something we might conflate with intelligence, but which still falls short of what most would consider the result of human reasoning. To present the data in some coherent form, this involves training not just on the information itself, but on the many dimensions of that information (such as the number of different contexts a word may be used in), and on the constructs and critical patterns within it (ABBA, or 1-4-5, as very basic examples), enabling the model to formulate output in the pattern of an existing set of learned reference samples. Even modern training approaches, such as those used in the transformer model, still require supervised testing to tell the model what bits of its output are garbage, so that the output eventually looks intelligent; it is actually closer to “filtered garbage”. So identifying the pattern of iambic pentameter, for example, is still an artificial process; it can be computed adaptively with a large enough data set. Moving from atomic, feature-level learning into structural learning allows a system to fingerprint complex patterns much more efficiently. Scale those patterns to music, art, literature, and the more sophisticated patterns that make up our repertoire of human creativity, and it is impressive – but still synthesized. Information processing is still very primitive, and lacks many of the traits of human understanding. The inability to conceive of tradition, authority, and prejudice is why all of this advanced technology still leaves us with Nazi chatbots. Some would call this confirmation theory, which is an area quite underdeveloped (and the AI reading this wouldn’t disagree).
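Fingerprinting a structural pattern like an ABBA rhyme scheme is, at bottom, a mechanical labeling exercise. The crude heuristic below (last two letters of each line’s final word as a stand-in “rhyme key”) is a hand-coded illustration of my own – a real system would learn such patterns statistically from a large corpus rather than use a fixed rule:

```python
def rhyme_scheme(lines, key_len=2):
    """Assign each line a letter based on a naive end-sound key,
    producing a scheme string such as 'ABBA' or 'AABB'."""
    keys, scheme, labels = {}, [], "ABCDEFGH"
    for line in lines:
        key = line.strip().split()[-1].lower()[-key_len:]
        if key not in keys:
            keys[key] = labels[len(keys)]
        scheme.append(keys[key])
    return "".join(scheme)

stanza = [
    "I saw the light",
    "along the way",
    "at break of day",
    "and lost the night",
]
print(rhyme_scheme(stanza))  # -> ABBA
```

The point is that the “pattern” is a label computed from surface features, not an appreciation of poetry – which is exactly the artificiality described above.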
Even the raw objectives of AI are based on human-engineered goals, and evaluated using performance metrics to select the best behavior. This is a very mechanical process. Certain behaviors we view as creative may in fact be simple randomness, introduced into most AIs to avoid infinite logic loops. In short, a lot of what you see is quite the opposite of the autonomous, self-motivated behavior it appears to be. Any good AI behaves rationally only because someone programmed good objectives into it. Garbage in, garbage out.

One of the big differences between traditional forms of ML and generative AI is the direction in which the data flows. Traditionally, inputs flow into the system for training and queries. To train traditional systems, you’d suck in “a bunch of other people’s stuff”, and the system would identify all of the interesting patterns, which were then compared against the input sample. Generative AI takes this a step further and flips the switch on the vacuum cleaner – now all of the dirt that was initially fed into the system is shot out the pipe, producing the equivalent of a digital dust cloud of the original training medium. The output of generative AI takes the critical patterns and concepts weighted during the AI’s training and applies some formative computation to produce its own reference sample as a result. Neat-o. Nice parlor trick.

With billions of dollars, this ML scales to perform impressive computational tasks. The risk of this type of system goes beyond the traditional vision of a robot building a better chair, or replacing a worker at a plant. Today’s ML systems are white-collar professionals and don’t require mechanical bodies; the computational capabilities of these systems can replace a broad array of professions using the thought product of millions of humans at once – so how could anyone compete with that? No one was ever supposed to, in fact. Doug Engelbart, a pioneer in the field of human-computer interaction, saw AI’s value more in intelligence augmentation (that is, IA rather than AI), as a means of assisting the worker. Corporate greed has already led to the recent misapplication of AI, using its advanced capabilities to replace, rather than augment, humans. Hollywood’s ML generation of “extras” is a quite extreme and literal example of this. But corporate greed isn’t AI generated. AI is replacing employees for very human reasons that have little to do with artificial intelligence itself. Yet the correct computer-human interface is a fundamental principle that many computer scientists and science fiction authors alike fear will be broken. Should you hate AI? No, you should hate greed.

The cold irony is this: at a deconstructed level, the output of generative AI represents the collective intelligence of other people’s thought products – their ideas, writings, music, theology, facts, opinions, and so on, likely also including those of people who lose their jobs to it. This also means others’ patents and copyrighted works, either directly or indirectly. ML has proven wildly successful at identifying the most effective critical patterns and gluing them together in some coherent form that communicates a desired result – but at the end of the day, all of its intelligence indeed belongs to the other people whose content was used to train it, almost always without their permission. In the end, generative AI takes from the world’s best authors, artists, musicians, philosophers, and other thinkers – erasing their identities, and taking their credit in its output. Without the proper restraints, it will produce the master forgeries of our generation. Should we forget its limitations and begin to rely on it for information, AI will easily blur the lines between what we view as real facts and synthesized ones. Consider a recent instance of this, where an attorney got himself in hot water for citing case law that didn’t exist – he thought he was leveraging AI to do research, but the AI had fabricated the cases. Imagine the impact on future case law should courtroom outcomes rest on fictional precedent that isn’t fact-checked every time.
