OpenAI announced on Monday that it will acquire several data centers’ worth of chips from AMD in a blockbuster deal that could also give OpenAI the option to acquire a roughly 10 percent stake in the chipmaker.
It’s another bold bet from OpenAI that demand for generative artificial intelligence will continue rising—bubble be damned.
“Excited to partner with AMD to use their chips to serve our users!” OpenAI CEO Sam Altman said on X, adding that the company will also ramp up its investments in Nvidia chips. He added: “The world needs much more compute …”
OpenAI said in a blog post this morning that it would commit to purchasing 6 gigawatts’ worth of AMD chips over the next several years. The first deployment of a gigawatt worth of AMD Instinct MI450 GPUs will take place in the second half of 2026, the company said.
The deal also involves AMD issuing OpenAI the right to purchase up to 160 million shares of AMD common stock (roughly a 10 percent stake). OpenAI will be able to acquire the stock as it deploys AMD’s chips.
“Our view is that we’re nowhere near the top of the demand curve as every major enterprise, cloud provider, and sovereign initiative is scaling AI infrastructure in parallel,” says Forrest Norrod, an executive vice president at AMD. He added: “We see this as the foundation of a long-term growth cycle for AI infrastructure, not a short-term surge.”
Today’s announcement should also help AMD challenge Nvidia in supplying chips for AI training and inference.
“This is a breakthrough achievement for AMD,” Patrick Moorhead, a prominent chip industry analyst, wrote on X in response to today’s news. “It will be a strategic supplier for the leading AI company and this will attract even more tier 1 customers.”
This deal is just the latest in a string of data center investments involving OpenAI and other tech firms. Shortly after his inauguration in January, US president Donald Trump announced that OpenAI, SoftBank, and Oracle would invest $100 billion in building AI data centers on US soil and would pour up to $500 billion into AI infrastructure over time.
The data center gold rush hinges in part on the idea that models will keep improving in line with scaling laws, so long as they’re trained with more data and compute. “There’s this basic story that scale is the thing, and that’s been true for a while, and it might still be true,” Jonathan Koomey, a visiting professor at UC Berkeley who studies computing and data center efficiency, told WIRED in September, talking about an OpenAI-Oracle deal to create three new data center sites. “That’s the bet that many of the US AI companies are making.”
Derek Thompson, a prominent economics reporter, noted in a recent post that the tech industry is projected to spend around $400 billion on AI infrastructure this year—while demand for AI services from US consumers stands at just about $12 billion a year, according to one study.
OpenAI had a strategic relationship with AMD prior to today’s announcement. At AMD’s Advancing AI event in Silicon Valley in June, Altman briefly joined AMD CEO Lisa Su on stage. Su said that AMD has been getting feedback from customers for a few years as the company has been designing the upcoming MI400 series of chips, and that OpenAI is one of those marquee customers.
Altman noted on stage that the industry’s move toward reasoning models has been putting pressure on AI model makers in terms of efficiency and long-context rollouts and that OpenAI needs “tons of compute, tons of memory, and tons of CPUs,” in addition to the Nvidia GPUs that the generative AI industry is so reliant on.
Su, at that event, described Altman as a “great friend” and an “icon in AI.”
Lauren Goode contributed to this report.
Update 10/02/25 3:00pm ET: This story has been updated to include additional comment from AMD.