I recently vacationed in Italy. As one does these days, I ran my itinerary past GPT-5 for sightseeing suggestions and restaurant recommendations. The bot reported that the top choice for dinner near our hotel in Rome was a short walk down Via Margutta. It turned out to be one of the best meals I can remember. When I got home, I asked the model how it chose that restaurant, which I hesitate to reveal here in case I want a table sometime in the future. (Hell, who knows if I’ll even return: It is called Babette. Call ahead for reservations.) The answer was complex and impressive. Among the factors were rave reviews from locals, notices in food blogs and the Italian press, and the restaurant’s celebrated combination of Roman and contemporary cooking. Oh, and the short walk.
Something was required from my end as well: trust. I had to buy into the idea that GPT-5 was an honest broker, picking my restaurant without bias; that the restaurant wasn’t shown to me as sponsored content and wasn’t getting a cut of my check. I could have done deep research on my own to double-check the recommendation (I did look up the website), but the point of using AI is to bypass that friction.
The experience bolstered my confidence in AI results but also made me wonder: As companies like OpenAI get more powerful, and as they try to pay back their investors, will AI be prone to the erosion of value that seems endemic to the tech apps we use today?
Word Play
Writer and tech critic Cory Doctorow calls that erosion “enshittification.” His premise is that platforms like Google, Amazon, Facebook, and TikTok start out aiming to please users, but once the companies vanquish competitors, they intentionally become less useful to reap bigger profits. After WIRED republished Doctorow’s pioneering 2022 essay about the phenomenon, the term entered the vernacular, mainly because people recognized that it was totally on the mark. Enshittification was chosen as the American Dialect Society’s 2023 Word of the Year. The concept has been cited so often that it transcends its profanity, appearing in venues that normally would hold their noses at such a word. Doctorow just published an eponymous book on the subject; the cover image is the emoji for … guess what.
If chatbots and AI agents become enshittified, it could be worse than Google Search becoming less useful, Amazon results getting plagued with ads, or even Facebook showing less social content in favor of anger-generating clickbait.
AI is on a trajectory to be a constant companion, giving one-shot answers to many of our requests. People already rely on it to help interpret current events and get advice on all sorts of buying choices—and even life choices. Because of the massive costs of creating a full-blown AI model, it’s fair to assume that only a few companies will dominate the field. All of them plan to spend hundreds of billions of dollars over the next few years to improve their models and get them into the hands of as many people as possible. Right now, I’d say AI is in what Doctorow calls the “good to the users” stage. But the pressure to make back the massive capital investments will be tremendous—especially for companies whose user base is locked in. Those conditions, as Doctorow writes, allow companies to abuse their users and business customers “to claw back all the value for themselves.”
When one imagines the enshittification of AI, the first thing that comes to mind is advertising. The nightmare is that AI models will make recommendations based on which companies have paid for placement. That’s not happening now, but AI firms are actively exploring the ad space. In a recent interview, OpenAI CEO Sam Altman said, “I believe there probably is some cool ad product we can do that is a net win to the user and a sort of positive to our relationship with the user.” Meanwhile, OpenAI just announced a deal with Walmart so the retailer’s customers can shop inside the ChatGPT app. Can’t imagine a conflict there! The AI search platform Perplexity has a program where sponsored results appear in clearly labeled follow-ups. But, it promises, “these ads will not change our commitment to maintaining a trusted service that provides you with direct, unbiased answers to your questions.”
Will those boundaries hold? Perplexity spokesperson Jesse Dwyer tells me, “For us, the number one guarantee is that we won’t let it.” And at OpenAI’s recent developer day, Altman said that the company is “hyper aware of the need to be very careful” about serving its users rather than serving itself. The Doctorow doctrine doesn’t put much credence in statements like that: “Once a company can enshittify its products, it will face the perennial temptation to enshittify its products,” he writes in his book.
Putting ads in chatbot conversations or in search results is not the only way that AI can become enshittified. Doctorow cites examples where companies, once they dominate a market, change their business model and fees. For instance, in 2023, Unity, the most popular provider of videogame development tools, decided to charge a new “runtime fee.” That misbehavior was so egregious that users revolted and got the fee walked back. But look at what has happened to streaming services like Amazon Prime Video: It used to be an ad-free service. Now it makes you watch commercials before and during the movie. You have to pay to turn them off. Oh, and the price of Amazon Prime keeps rising. So it might be standard big-tech practice to lock users into a service and then charge ever higher fees. It could even be that in order to maintain the same level of intelligence in a chatbot’s results, users one day might have to upgrade to a higher, even more expensive tier—another enshittification trick. Maybe companies that once promised that your chatbot activities would not be used to train future models will change their minds about that—simply because they can get away with it.
Cory Speaks
Doctorow didn’t address AI in his book, so I gave him a call to see whether he thinks the category is destined to travel down defecation row. I expected that he might outline the various ways that AI companies will fall prey to his smelly syndrome. To my surprise, he had a different take. He is not a fan of AI, and he claims the field has not even reached the “good to users” stage I outlined earlier. Nonetheless, he says, it could be that the enshittification process happens anyway. Because it’s so hard to see what goes on inside the “black boxes” of LLMs, he says, “they have an ability to disguise their enshittifying in a way that would allow them to get away with an awful lot.” Most of all, he says, the “terrible economics” of the field mean the companies can’t afford to wait and will enshittify even before they deliver value. “I think they’ll try every sweaty gambit you can imagine as the economics circle the drain,” he says.
I disagree with Doctorow about the value of AI. Hey, it found Babette for me! But I do fear that the technology might be prone to the enshittification process that he unerringly identified in the current tech giants. And guess what—GPT-5 agrees with me. When I posed the question to the chatbot, it replied, “Doctorow’s ‘enshittification’ framework (platforms start good for users, then shift value to business customers, then extract it for themselves) maps disturbingly well onto AI systems if incentives go unchecked.” GPT-5 then proceeded to lay out a number of methods by which AI companies could degrade their products for profit and power. AI companies might assure us they won’t enshittify. But their own products have already written the blueprint.
This is an edition of Steven Levy’s Backchannel newsletter. Read previous newsletters here.