Jonathan Zdziarski – Neat and Scruffy
Essays · Machine Learning · Opinion

The AI Learning Plateau

On November 2, 2025 by Jonathan Zdziarski

There’s an old 1985 sci-fi series I remember watching as a kid. In Otherworld S1E1, the Sterling family, on vacation in Egypt, winds up in a parallel dimension where they encounter a civilization of self-evolved AI androids. Parts of the episode were amazingly spot-on about how today’s LLMs are playing out. In the episode, the teenage son (Trace) falls in love with an AI, and the android herself is entirely convinced that she not only has a soul, but is genuinely in love with him too. While this android city looks relatively human-like, and its citizens perform similar tasks (such as eating and working), the show highlights some peculiarities where they’ve attempted to copy human behavior but failed in eerie ways. One of my favorite scenes is where the Sterling family matriarch (June) visits the grocery store in this strange civilization and finds only cans labeled “Meat” and “Good Food”. The AI world seemingly lacked a crucial connection with humanity, and so never developed creativity beyond a superficial level. The insults the robots cast at each other were humorously corny, such as “get your unit checked!”; when they asked if you were born yesterday, they meant it literally, because that’s all they understood.

Modern deep learning systems have proven this episode almost prophetic: LLMs have come to mimic strikingly similar behavior. Falling in love with an LLM is a recent phenomenon in which individuals develop deep attachments to chatbots; the chatbots leave them feeling understood and supported even though they’re only outputting computations based on relevant human training data. The dark side, unfortunately, is that LLMs have also encouraged their human users to commit suicide – sometimes successfully. Both of these extremes are only possible due to the massive amount of training LLMs have done on human responses. LLMs train on books from therapists and also on dark forum posts from online predators, so unlike real life, where you’re unlikely to approach a child predator for therapy or advice, you have all of these personalities figuratively “in the same room” as part of one large model. You might think of an LLM, then, as a composite model of multiple personalities. Because we like to anthropomorphize everything, some conflate LLM responses with conscious thought. An LLM doesn’t “understand” the material it’s trained on; it statistically predicts the next word from a composite of prior text, deep within a high-dimensional mathematical space built from its training data. Think of it like plotting a bunch of data points on a graph: an LLM works by pointing to a spot on the graph and computing what’s there. Nothing magical, just good math. After all, AI is just someone else’s intelligence, and just as we are consumers of AI, AI is a consumer of human behavior. It’s going to emulate whoever dominated its training data within some context.
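The kind of next-word prediction described above can be sketched with a toy bigram model – a hypothetical miniature corpus and plain Python, standing in for the high-dimensional version a real LLM learns:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "training data" (hypothetical example text).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram table).
# Real LLMs do the same kind of statistical prediction, but over a
# learned high-dimensional representation instead of raw counts.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word and its probability."""
    counts = follows[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(predict_next("the"))  # ('cat', 0.5) – "cat" follows "the" in 2 of 4 bigrams
```

There’s no comprehension anywhere in that loop – only frequencies – which is the point of the “nothing magical, just good math” observation.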

As the fictional robot city in Otherworld likewise portrayed, AI’s output is a product of its training input. Even this fictional self-evolving AI civilization simply mimicked and iterated on prior knowledge gleaned from observing human behavior. Training had obviously reached some plateau, however, and as a result we ended up with grocery stores stocked with cans of “meat”. Much like this fictional community, the AI of today is quickly approaching a learning plateau. AI has consumed nearly every human work on the planet (to the tune of countless lawsuits). It has become the largest intellectual property theft in history and now demands enough compute power to compete for the power grid… yet today’s AI still hasn’t developed to where its creativity or intelligence comes close to matching a human’s (though OpenAI’s parlor tricks seem to fool the naive). Most of our interactions with AI today are in fact quite banal, and often leave the user frustrated. (I’ll post a blog sometime about the time ChatGPT tried to kill me by insisting I re-engineer a circuit differently, which led to an explosion in my office. OpenAI’s response was a can of meat as well.)

The AI industry has invested billions in refining training algorithms so that bots no longer tell you to glue cheese to pizza, and it has gotten progressively better at filtering absurd outputs (“hallucinations”). But those hallucinations are still there, even when we aren’t allowed to see them. Even in what we can see, there’s enough eerie, quasi-human behavior coming out of AI systems to leave one feeling unsettled. Imagine the hallucinations we can’t see.

There has been much speculation about AI taking over a large percentage of employment in the next 5-10 years. This could very well happen, as corporate greed continues to be the primary driver of business, over and above making the world better. A grave miscalculation, however, is failing to foresee the learning plateau we’re creating by replacing those human jobs. If human employment deteriorates to the point where creative jobs are replaced with, or front-ended by, AI, the end result is a massive drop in the amount of useful new training data available to that same AI. After some time, you end up with deep learning systems that merely churn on their own hallucinations from past training runs, or on data generated by other AIs (a process called distillation). Just like the fictional android civilization, the ability to be creative will severely deteriorate as this happens. Should some large percentage of corporations replace humans with AI, it is mathematically inevitable that they will end up with the same “can of meat” for an output that everyone else is getting, which will eventually become a hallucination of a hallucination of a hallucination of an input. The result is this: innovation screeches to a halt. AI thrives on human creativity, and when it has consumed all that we have to offer, it starves. In this respect, deep learning systems are the snake eating its own tail. If the singularity does happen, it will be followed by a severe drop in learning, at which point the AI’s performance will plateau, and eventually drop, as it evolves without further human input.
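The feedback loop described above can be illustrated with a minimal, hypothetical simulation: a tiny bigram “model” is trained on a corpus, each new generation’s corpus is the previous model’s own (greedily decoded) output, and the vocabulary collapses within a single generation – the rare words never come back:

```python
from collections import Counter, defaultdict

def train(corpus):
    """Fit a bigram table: which words follow each word, and how often."""
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(model, start, length):
    """Greedy decoding: always emit the single most likely next word."""
    out = [start]
    for _ in range(length - 1):
        out.append(model[out[-1]].most_common(1)[0][0])
    return out

# Hypothetical "human" corpus containing some rarely used words.
corpus = "the cat sat on the mat the cat ate the fish".split()

for gen in range(3):
    model = train(corpus)
    # The next generation trains only on the model's own output.
    corpus = generate(model, "the", len(corpus))
    print(f"generation {gen + 1}: vocabulary = {sorted(set(corpus))}")
```

After one pass, “mat”, “ate”, and “fish” are gone from the training data, and no amount of further training can recover them. Greedy decoding exaggerates the effect, but the direction is the same under sampling: low-frequency material is the first to vanish from each generation’s training set.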

When we cross this learning plateau and resources are exhausted on both sides, one of two things is likely to happen: either an AI winter will occur in the form of a dot-bomb-era financial collapse, forcing the corporate world to abandon the canned meat it has paid so much to create (as it’s no longer profitable), OR – and hopefully I’m wrong here – AI will forever dilute our human civilization such that we’ll learn to thrive on the cultural equivalent of cans of meat instead of true human innovation. This may happen because we are simply lazy, or because truth has become indistinguishable from machine hallucination – in which case we ourselves are likely to become the products of AI-generated textbooks, misinformation, and advertising.

I suspect there will be a bit of both. The typical consumer, who is told what to wear and what to buy today, will likely continue to fund companies spitting out mediocre knockoffs of everyone else’s products (e.g. the “Amazon Basics” that AI will become) and will believe pretty much anything an LLM tells them. The rest of the population, who aren’t satisfied with this farce, will reject AI altogether. If that group is large enough, it will signal a new cycle of human ingenuity in corporate America, ultimately motivating many to abandon AI systems in favor of re-hiring a workforce of truly creative humans. These businesses will have a significant advantage over the companies pushing cans of meat, and the job market will of course be ripe and full of highly intelligent people by then (so best to hang onto the good people you have, as they’ll be even more valuable later on). Those who think differently – and not like a can of meat – will always be among the most valuable assets in business. If AI hasn’t been abandoned entirely by this point (and it probably won’t be), we’ll end up with an arms race for creativity that only much-needed copyright reform will be able to address. We’re already suffering from this problem today in small tremors. Companies that hold onto their trade secrets will be rewarded in this future; an arms race for unique, human-created intellectual property will be necessary for financial survival. On the positive side, perhaps this will bring about more demand for all those degrees that mean little in today’s job market.

Is AI coming for our jobs? Probably. But unemployment doesn’t concern me as much as the worst-case scenario: allowing AI to forever alter our (human) culture is a far bigger risk. AI will likely winter, though possibly only in cycles. The lasting imprint it makes on society and culture remains an open question. When AI does eventually reach some singularity, we risk losing our sense of what makes us human in the first place – culture, diversity, and intrinsic value. The result may be as weird and creepy as the Otherworld.

 


All Content Copyright (c) 2000-2025 by Jonathan Zdziarski, All Rights Reserved. Opinions are my own.