As a longtime fan of Neal Stephenson—nobody’s better at inserting long, absorbing asides about esoteric nonfiction topics into otherwise blisteringly paced novels—I was intrigued by his latest missive about artificial intelligence. Stephenson is one of those wizards with an uncanny ability to predict the future; for instance, his novel “Cryptonomicon,” which I devoured in high school, anticipated the rise of cryptocurrency by roughly ten years (although his fictional characters back their online currency with real gold as opposed to a principle of artificial scarcity). His other books have toyed with the “metaverse” and other concepts that, for better or worse, will likely hit maturity over the next several years.
Anyway, Stephenson thinks that the AI ecosystem, which in its current state consists largely of a bunch of LLMs and a gaggle of tech bros bullshitting about the potential of agentic AI, will eventually come to mimic our biological one. For example, we may eventually have AIs that follow their own obscure missions, with relatively little interaction with humans, like insects; we’ll also have AIs that exist in a more symbiotic relationship with us, like lapdogs or horses. Or as he puts it:
“More interesting and more important in the long run will be AIs that are like sheepdogs, in that they do useful things for us that we can’t do ourselves. But I think that we’ll also have AIs that are like ravens, in that they are aware of us but basically don’t care about us, and ones like dragonflies that don’t even know we exist. What people worry about is that we’ll somehow end up with AIs that can hurt us, perhaps inadvertently like horses, or deliberately like bears, or without even knowing we exist, like hornets driven by pheromones into a stinging frenzy.”
Then he gets even funkier. If you’re going to have an ‘ecosystem,’ you need creatures eating each other, right?
“I am hoping that even in the case of such dangerous AIs we can still derive some hope from the natural world, where competition prevents any one species from establishing complete dominance. Even T. Rex had to worry about getting gored in the belly by Triceratops, and probably had to contend with all kinds of parasites, infections, and resource shortages. By training AIs to fight and defeat other AIs we can perhaps preserve a healthy balance in the new ecosystem. If I had time to do it and if I knew more about how AIs work, I’d be putting my energies into building AIs whose sole purpose was to predate upon existing AI models by using every conceivable strategy to feed bogus data into them, interrupt their power supplies, discourage investors, and otherwise interfere with their operations. Not out of malicious intent per se but just from a general belief that everything should have to compete, and that competition within a diverse ecosystem produces a healthier result in the long run than raising a potential superpredator in a hermetically sealed petri dish where its every need is catered to.”
Well, that’s a terrifying thought to have over your morning coffee: all around you—including that phone in your pocket—systems whose existence you can’t quite comprehend will be locked in a Darwinian battle for survival, leading to chaos both felt and invisible. Your headlines are lies, the people on your screen are totally synthetic, and the nukes are warming up. Happy Monday!
“Why are you talking about AI?” you might be asking at this juncture. “I’m here for the occasional snarky post about movies or books.”
I’m glad you asked, my hypothetical friend! Right around the same time I stumbled on Stephenson’s column, I noted the general furor around the Chicago Sun-Times using generative AI to create a list of its hottest books of the summer. There was just one little problem: only five of the books on said list actually existed. "Huge mistake on my part and has nothing to do with the Sun-Times. They trust that the content they purchase is accurate and I betrayed that trust. It's on me 100 percent," said the freelancer who put the list together, and whose name I’m not mentioning because they’ve already been thoroughly dragged by the whole internet. (Some of the fake book summaries sounded pretty good, honestly—I’d read a Percival Everett book about literal rainmakers in a post-apocalyptic American West.)
This was all an idiotic blunder, of course, but view it through the lens of Stephenson’s predicted future. Imagine an AI with actual agency that deliberately creates a list of fake books to ruin a newspaper’s reputation, or to screw with another AI tasked with pumping up a real book’s publicity numbers. Every author I know is afraid that the publishing industry will drown beneath a tide of AI slop, especially considering the gazillion AI-generated books released on Kindle every year; now they also need to worry that AI will screw with everything else, from marketing campaigns to real-world supply chains, perhaps for reasons beyond human understanding. “Your social media campaign died on the vine because two warring AIs decided to take down TikTok for five minutes this morning, and we think they’re doing it for the laughs,” etc.
I don’t know if there’s a solution here. The internet might become a lost cause, at least in its current form, forcing authors to revert to old-school methods: more hand-selling, more in-person events, less reliance on the algorithm. Which doesn’t seem all that bad, come to think of it, except to those who’ve burned years of their precious lives building up their BookTok presence. And meanwhile, the Web, which thirty years ago was sold as the ultimate way to unlock human potential, is becoming the province of other entities entirely.