
Why OpenAI is betting on custom chips with Broadcom for AI expansion

OpenAI’s Sam Altman said that customizing chips for AI will make models faster and more efficient. (Bloomberg)

OpenAI signed a multiyear agreement with Broadcom Inc. to collaborate on custom chips and networking equipment, marking the latest step in the AI startup’s ambitious plan to add computing infrastructure. Broadcom shares jumped.

As part of the pact, OpenAI will design the hardware and work with Broadcom to develop it, according to a joint statement on Monday. The plan is to add 10 gigawatts’ worth of AI data center capacity, with the companies beginning to deploy racks of servers containing the gear in the second half of 2026.

By customizing the processors, OpenAI said it will be able to embed what it has learned from developing AI models and services “directly into the hardware, unlocking new levels of capability and intelligence.” The hardware rollout should be completed by the end of 2029, according to the companies.


For Broadcom, a maker of components for everything from the iPhone to optical networking, the move vaults the company deeper into the booming AI market. Monday’s agreement confirms an arrangement that Broadcom Chief Executive Officer Hock Tan had hinted at during an earnings conference call last month.

Shares of Broadcom rose as much as 8.9% after markets opened in New York.

OpenAI, the creator of ChatGPT, has inked a number of blockbuster deals this year, aiming to ease constraints on computing power. Nvidia Corp., whose chips handle the majority of AI work, said last month that it will invest as much as $100 billion in OpenAI to support new infrastructure, with a goal of at least 10 gigawatts of capacity. And just last week, OpenAI announced a pact to deploy 6 gigawatts of Advanced Micro Devices Inc. processors over multiple years.


While purchasing chips from others, OpenAI has also been working on designing its own semiconductors. They’re mainly intended to handle the inference stage of running AI models — the phase after the technology is trained.

OpenAI CEO Sam Altman said that his company has been working with Broadcom for 18 months.

The startup is rethinking its technology from the transistor level all the way up to what happens when someone asks ChatGPT a question, he said on a podcast released by his company. “By being able to optimize across that entire stack, we can get huge efficiency gains, and that will lead to much better performance, faster models, cheaper models.”

When Tan referred to the agreement last month, he didn’t name the customer, though people familiar with the matter identified it as OpenAI.


“If you do your own chips, you control your destiny,” Tan said in the podcast Monday.

Broadcom has increasingly been seen as a key beneficiary of AI spending, helping propel its share price this year. The stock was up 40% in 2025 through the end of last week, outpacing a 29% gain by the benchmark Philadelphia Stock Exchange Semiconductor Index. OpenAI, meanwhile, has garnered a $500-billion valuation, making it the world’s biggest startup by that measure.

By tapping Broadcom’s networking technology, OpenAI is hedging its bets. Broadcom’s Ethernet-based options compete with Nvidia’s proprietary technology. OpenAI will also be designing its own gear as part of its work on custom hardware, the startup said.

Broadcom won’t be providing the data center capacity itself. Instead, it will deploy server racks with custom hardware to facilities run by either OpenAI or its cloud-computing partners.


As AI and cloud companies announce large projects every few days, it’s often not clear how the efforts are being financed. The interlocking deals have also boosted fears of a bubble in AI spending.

There’s no investment or stock component to the deal, OpenAI said, making it different from the agreements with Nvidia and AMD. An OpenAI spokesperson declined to comment on how the company will finance the chips, but the underlying idea is that more computing power will let the company sell more services.

A single gigawatt is about the capacity of a conventional nuclear power plant. Still, 10 gigawatts of computing power alone isn’t enough to support OpenAI’s vision of achieving artificial general intelligence, said OpenAI co-founder and President Greg Brockman.

“That is a drop in the bucket compared to where we need to go,” he said.

Getting to the level under discussion isn’t going to happen quickly, said Charlie Kawwas, president of Broadcom’s semiconductor solutions group. “Take railroads — it took about a century to roll it out as critical infrastructure. If you take the internet, it took about 30 years,” he said. “This is not going to take five years.”

Bass and Ghaffary write for Bloomberg.
