Little fanfare accompanied the story of a glitch in an experimental AI game from 2019, but the results struck me as rather poignant. TL;DR: the AI decided that committing suicide at the beginning of the game was the best strategy, because the game was too hard and an early death cost fewer points. For any kid who grew up in the 80s, the idea of a computer learning the concept of futility should register as a significant accomplishment. Learning futility had always struck me as an exclusively human trait, one computers would never grasp, at least until I read this story. As the author of the piece put it, “it’s hard to predict what conditions matter and what doesn’t to a neural network”. The implications for computer science are fascinating, though, and a good object lesson for anyone contemplating the Trolley Dilemma in technology.