AI Is the Bubble to Burst Them All


AI may not simply be “a bubble,” or even an enormous bubble. It may be the ultimate bubble. What you might cook up in a lab if your aim was to engineer the Platonic ideal of a tech bubble. One bubble to burst them all. I’ll explain.

Since ChatGPT’s viral success in late 2022, which drove every company within spitting distance of Silicon Valley (and plenty beyond) to pivot to AI, the sense that a bubble is inflating has loomed large. There were headlines about it as early as May 2023. This fall, it became something like the prevailing wisdom. Financial analysts, independent research firms, tech skeptics, and even AI executives themselves agree: We’re dealing with some kind of AI bubble.

But as the bubble talk ratcheted up, I noticed few were analyzing precisely how AI is a bubble, what that really means, and what the implications are. After all, it’s not enough to say that speculation is rampant, which is clear enough, or even that there’s now 17 times as much investment in AI as there was in internet companies before the dotcom bust. Yes, we have unprecedented levels of market concentration; yes, on paper, Nvidia has been, at times, valued at almost as much as Canada’s entire economy. But it could, theoretically, still be the case that the world decides AI is worth all that investment.

What I wanted was a reliable, battle-tested means of evaluating and understanding the AI mania. This meant turning to the scholars who literally wrote the book on tech bubbles.

In 2019, economists Brent Goldfarb and David A. Kirsch of the University of Maryland published Bubbles and Crashes: The Boom and Bust of Technological Innovation. By examining some 58 historical examples, from electric lighting to aviation to the dotcom boom, Goldfarb and Kirsch develop a framework for determining whether a particular innovation led to a bubble. Plenty of technologies that went on to become major businesses, like lasers, freon, and FM radio, did not create bubbles. Others, like airplanes, transistors, and broadcast radio, very much did.

Where many economists view markets as the product of sound decisions made by purely rational actors—to the extent that some posit that bubbles don’t exist at all—Goldfarb and Kirsch contend that the story of what an innovation can do, how useful it will be, and how much money it stands to make creates the conditions for a market bubble. “Our work puts the role of narrative at center stage,” they write. “We cannot understand real economic outcomes without also understanding when the stories that influence decisions emerge.”

Goldfarb and Kirsch’s framework for evaluating tech bubbles considers four principal factors: the presence of uncertainty, pure plays, novice investors, and narratives around commercial innovations. The authors identify and evaluate the factors involved, and rank their historical examples on a scale of 0 to 8—8 being the most likely to predict a bubble.
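To make the tally concrete, here is a minimal sketch of how such a checklist could be scored. The four factor names come from the framework described above; the equal 0-to-2 weighting per factor is my own simplifying assumption to land on the book’s 0-to-8 scale, not a detail the authors spell out.

```python
# Illustrative sketch of a four-factor bubble checklist (hypothetical weighting).
# Each factor is scored 0 (absent), 1 (partial), or 2 (strongly present),
# so four factors sum to the 0-to-8 scale Goldfarb and Kirsch use.

FACTORS = ("uncertainty", "pure_plays", "novice_investors", "coordinating_narrative")

def bubble_score(scores: dict[str, int]) -> int:
    """Sum the per-factor scores, clamping each to the 0-2 range."""
    return sum(max(0, min(2, scores.get(f, 0))) for f in FACTORS)

# Generative AI, per the article's conclusion: every factor maxed out.
print(bubble_score({f: 2 for f in FACTORS}))  # -> 8
```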

As I began to apply the framework to generative AI, I reached out to Goldfarb and asked him to weigh in on where Silicon Valley’s latest craze stands in terms of its bubbledom, though I should note that these are my conclusions, not his, unless stated otherwise.

Uncertainty

In 1895, the city of Austin, Texas, purchased 165-foot-tall “moonlight towers” and installed them in public hot spots. The towers were equipped with arc lighting, which burned carbon rods. Spectators gathered to stare up in awe as ash rained down upon them.

With some technologies, Goldfarb says, the value is obvious from the start. Electric lighting “was so clearly useful, and you could immediately imagine, ‘Oh, I could have this in my house.’” Still, he and Kirsch write in the book, “as marvelous as electric light was, the American economy would spend the following five decades figuring out how to fully exploit electricity.”

“Most major technological innovations come into the world like electric arc lighting—wondrous, challenging, sometimes dangerous, always raw and imperfect,” Goldfarb and Kirsch write in Bubbles. “Inventors, entrepreneurs, investors, regulators, and customers struggle to figure out what the technology can do, how to organize its production and distribution and what people are willing to pay for it.”

Uncertainty, in other words, is the cornerstone of the tech bubble. Uncertainty over how an innovation might make money, which parts of a value chain it might replace, how many competitors will flock to the field, and how long it will take to come to fruition. And if uncertainty is the foundational element of a tech bubble, alarm bells are already ringing for AI. From the beginning, OpenAI’s Sam Altman has bet the house on building AGI, or artificial general intelligence—to the point that, when a crowd of industry observers once asked him about OpenAI’s business model, he told them with a straight face that his plan was to build a general intelligence system and simply ask it how to make money. (He has since moved away from that bit, saying AGI is not “a super useful term.”) Meta is aiming for “superintelligence,” whatever that means. The goalposts keep moving.

In the nearly three years since AI took center stage in Silicon Valley, the major players (with the exception of Nvidia, whose chips would likely still be in use post-bust) still haven’t demonstrated what their long-term AI business models will be. OpenAI, Anthropic, and the AI-embracing tech giants are burning through billions, inference costs haven’t fallen (those companies still lose money on nearly every user query), and the long-term viability of their enterprise programs is a big question mark at best. Is the product that will justify hundreds of billions in investment a search engine replacement? A social media substitute? Workplace automation? How will AI companies price in the costs of energy and computing, which are still sky-high? If copyright lawsuits don’t break their way, will they have to license their training data, and will they pass that additional cost on to consumers? A recent MIT study made waves—and helped stoke this most recent wave of bubble fears—with its finding that 95 percent of firms that adopted generative AI did not profit from the technology at all.

“Usually over time, uncertainty goes down,” Goldfarb says. People learn what’s working and what’s not. With AI, that hasn’t been the case. “What has happened in the last few months,” he says, “is that we’ve realized there is a jagged frontier, and some of the earliest claims about the effectiveness of AI have been mixed or not as great as initially claimed.” Goldfarb thinks the market is still underestimating the difficulty of integrating AI into organizations, and he’s not alone. “If we are underestimating this difficulty as a whole,” Goldfarb says, “then we will be more likely to have a bubble.”

AI’s closest historical analogue here may be not electric lighting but radio. When RCA was founded in 1919, it was immediately clear that it had a powerful information technology on its hands. But it was less clear how that would translate into business. “Would radio be a loss-leading marketing tool for department stores? A public service for broadcasting Sunday sermons? An ad-supported medium for entertainment?” the authors write. “All were possible. All were subjects of technological narratives.” As a result, radio turned into one of the biggest bubbles in history—peaking in 1929, before losing 97 percent of its value in the crash. This wasn’t an incidental sector; RCA was, along with Ford Motor Company, one of the most heavily traded stocks on the market. It was, as The New Yorker recently wrote, “the Nvidia of its day.”

Pure Play

Tech investors are supposed to invest in tech products—tangible tools and services with provable profit streams. So what happens when there’s frothy new tech but no real killer app idea yet? Then the venture capitalists go for “pure plays.” They bet on companies that, in turn, have bet their own survival on being the first to discover a marketable product.

So far this year, according to Silicon Valley Bank, 58 percent of all VC investment has gone to AI companies. When a sector sees a lot of pure plays, according to Goldfarb and Kirsch’s framework, it’s more likely to overheat and produce a bubble. SoftBank has plans to sink tens of billions of dollars into OpenAI, the purest AI play there is, though it’s not yet open to retail investors. (If and when it finally is, analysts speculate that OpenAI may become the first trillion-dollar IPO.) Investors have also backed pure-play companies such as Perplexity (now valued at $20 billion) and CoreWeave ($61 billion market cap). In the case of AI, these pure-play investments are especially worrying, because the biggest companies are increasingly bound up with one another. Nvidia just announced a proposed $100 billion investment in OpenAI, which in turn relies on Nvidia’s chips. OpenAI relies on Microsoft’s computing power, the result of a $10 billion partnership, and Microsoft, in turn, depends on OpenAI’s models.

“The big question is how much of that is in the private markets, and how much of that is in the public markets?” Goldfarb says. If most of the money is in private markets, then it’s mostly private investors who would lose their shirts in a crash. If it’s mostly in public markets, such as stocks and mutual funds, then the crash would bleed regular people’s pensions and 401(k)s. (Though many market watchers have been pointing to the rise of private credit as an increasing source of systemic risk, as more small investors have been able to dump their money into opaque deals over the past year.) Either way, the sums are huge. As of late summer 2025, Nvidia accounts for about 8 percent of the value of the entire stock market.

Novice Investors

Twenty-five years ago, on March 10, 2000, the stock market hit a milestone: The tech-heavy Nasdaq reached a then-record high of 5,132. At the time it appeared merely to be continuing its rapid ascent—it had risen an astonishing 86 percent in the previous year alone—buoyed by an investor gold rush for internet companies like eToys, CDNow, Amazon, and, yes, Pets.com.

Today, hordes of novice retail investors are pumping money into AI through platforms like E-Trade and Robinhood. In 2024, Nvidia was the single most-bought equity among retail traders, who plowed nearly $30 billion into the chipmaker that year. And AI-interested retail investors are similarly flocking to other big tech stocks like Microsoft, Meta, and Google.

Most of the investment thus far has come from institutional investors, but along with Nvidia and the giants, more pure-play—and riskier—AI startups like CoreWeave are going public or preparing to. CoreWeave’s March IPO was initially seen as lackluster, but the stock has been on the rise since, giving retail investors yet another way to push money into AI.

As Goldfarb points out, everyone is something of a novice investor when it comes to AI, because it’s such a new field and technology, because there’s so much uncertainty, because no one knows how it’s going to play out. What makes today different from 100 years ago, Goldfarb and Kirsch note in the book, is that anyone can get in on the action. A hundred years ago, stocks were simply too expensive for most working people to buy, which sharply limited the capacity to inflate bubbles (though that didn’t stop the Depression from happening). Now stocks of every size and stripe can be bought with a tap on the Robinhood app; and between the casino-ification of the economy and the breakdown of any meaningful regulatory apparatus to rein it all in, novice investors have been handed, just in time, a vehicle for sinking their savings into the vague promise of superintelligence.

Coordination or Alignment of Beliefs Through Narratives

In 1927, Charles Lindbergh flew the first solo nonstop transatlantic flight from New York to Paris. The aviation industry had been underwritten by government subsidies for a quarter of a century by then, but the flight made news around the world. It was the biggest tech demo of the day, and it became an enormous, ChatGPT-launch-level coordinating event—a signal to investors to pour money into the industry.

“Expert investors appreciated correctly the importance of airplanes and air travel,” Goldfarb and Kirsch write, but “the narrative of inevitability largely drowned out their caution. Technological uncertainty was framed as opportunity, not risk. The market overestimated how quickly the industry would achieve technological viability and profitability.”

As a result, the bubble burst in 1929—from their peak in May of that year, aviation stocks dropped 96 percent by May 1932.

When it comes to AI, this inevitability narrative is probably the easiest and clearest factor to mark as a huge affirmative on the bubble matrix. There’s no bigger narrative than the one AI industry leaders have been pushing since before the boom: AGI will soon be able to do just about anything a human can do, and will usher in an age of superpowerful technology the likes of which we can only begin to imagine. Jobs will be automated, industries transformed, cancer cured, climate change solved; AI will do quite literally everything. And if that weren’t reason enough, in the US, the specter of China “beating” us only makes it more important not to slow down or, God forbid, regulate the market. Full steam ahead!

“Is this a good story?” Goldfarb says. “The answer is profoundly yes.”

Previous tech bubbles were not necessarily an indictment of the technology; radio, aviation, and the internet all proved to be revolutionary leaps forward, even if the economic hype left considerable damage in its wake. But what aviation would be good at—moving people from one place to another, much more quickly than was possible with cars, trains, or horses—was clear enough early on. This is, to me, what elevates AI bubbledom to another level altogether: The promise of AI, to investors, is that it can do just about anything. Different threads of the AI story, whether “AI will cure cancer” or “AI will automate all jobs,” appeal to different investors and partners, and that’s what makes it so uniquely powerful in its bubble-inflating capacities. And so dangerous to the economy.

So yes, Goldfarb says, AI has all the hallmarks of a bubble. “There’s no question,” he says. “It hits all the right notes.” Uncertainty? Check. Pure plays? Check. Novice investors? Check. A great narrative? Check. On that 0-to-8 scale, Goldfarb says, it’s an 8. Buyer beware.
