Say farewell to the AI bubble, and get ready for the crash
Most people not deeply involved in the artificial intelligence frenzy may not have noticed, but perceptions of AI’s relentless march toward becoming more intelligent than humans, even becoming a threat to humanity, came to a screeching halt Aug. 7.
That was the day when the most widely followed AI company, OpenAI, released GPT-5, an advanced product that the firm had long promised would put competitors to shame and launch a new revolution in this purportedly revolutionary technology.
As it happened, GPT-5 was a bust. It turned out to be less user-friendly and in many ways less capable than its predecessors in OpenAI’s arsenal. It made the same sort of risible errors in answering users’ prompts, was no better in math (or even worse), and not at all the advance that OpenAI and its chief executive, Sam Altman, had been talking up.
AI companies are really buoying the American economy right now, and it’s looking very bubble-shaped.
— Alex Hanna, co-author, “The AI Con”
“The thought was that this growth would be exponential,” says Alex Hanna, a technology critic and co-author (with Emily M. Bender of the University of Washington) of the indispensable new book “The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want.” Instead, Hanna says, “We’re hitting a wall.”
The consequences reach beyond the expectations, even fears, that so many business leaders and ordinary Americans have been led to harbor about AI’s penetration into our lives. Hundreds of billions of dollars have been invested by venture capitalists and major corporations such as Google, Amazon and Microsoft in OpenAI and its multitude of fellow AI labs, even though none of those labs has turned a profit.
Public companies have scurried to announce AI investments or claim AI capabilities for their products in the hope of turbocharging their share prices, much as an earlier generation of businesses promoted themselves as “dot-coms” in the 1990s to look more glittery in investors’ eyes.
Nvidia, maker of the high-performance chips that power AI research, plays much the same stock market leadership role that Intel Corp., another chipmaker, played in the 1990s — helping to prop up the bull market in equities.
If the promise of AI turns out to be as much of a mirage as the dot-coms were, stock investors may face a painful reckoning.
The cheerless rollout of GPT-5 could bring the day of reckoning closer. “AI companies are really buoying the American economy right now, and it’s looking very bubble-shaped,” Hanna told me.
The rollout was so disappointing that it shone a spotlight on the degree to which the whole AI industry depends on hype.
Here’s Altman, speaking just before the unveiling of GPT-5, comparing it with its immediate predecessor, GPT-4o: “GPT-4o maybe it was like talking to a college student,” he said. “With GPT-5 now it’s like talking to an expert — a legitimate PhD-level expert in anything any area you need on demand ... whatever your goals are.”
Well, not so much. When one user asked it to produce a map of the U.S. with all the states labeled, GPT-5 extruded a fantasyland, including states such as Tonnessee, Mississipo and West Wigina. Another prompted the model for a list of the first 12 presidents, with names and pictures. It only came up with nine, including presidents Gearge Washington, John Quincy Adama and Thomason Jefferson.
Experienced users of the earlier models were appalled, not least by OpenAI’s decision to shut down access to those versions and force users onto the new one. “GPT5 is horrible,” wrote a user on Reddit. “Short replies that are insufficient, more obnoxious ai stylized talking, less ‘personality’ … and we don’t have the option to just use other models.” (OpenAI quickly relented, reopening access to the older versions.)
The tech media was also unimpressed. “A bit of a dud,” judged the website Futurism, and Ars Technica termed the rollout “a big mess.” I asked OpenAI to comment on the dismal public reaction to GPT-5, but didn’t hear back.
None of this means that the hype machine underpinning most public expectations of AI has taken a breather. Rather, it remains in overdrive.
A projection of AI’s development over the coming years published by something called the AI Futures Project under the title “AI 2027” states: “We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.”
The rest of the document, mapping a course to late 2027 when an AI agent “finally understands its own cognition,” is so loopily over the top that I wondered whether it wasn’t meant as a parody of excessive AI hype. I asked its creators if that was so, but haven’t received a reply.
GPT-5’s underwhelming rollout also exploded one of the AI world’s most cherished principles: that “scaling up” — endowing the technology with more computing power and more data — would bring the grail of artificial general intelligence, or AGI, ever closer to reality.
That’s the principle undergirding the AI industry’s vast expenditures on data centers and high-performance chips. The demand for more data and more data-crunching capability will require about $3 trillion in capital by 2028 alone, in the estimation of Morgan Stanley. That would outstrip the capacity of the global credit and derivative securities markets. But if AI won’t scale up, most if not all of that money will be wasted.
As Bender and Hanna point out in their book, AI promoters have kept investors and followers enthralled by relying on a vague public understanding of the term “intelligence.” AI bots seem intelligent, because they’ve achieved the ability to seem coherent in their use of language. But that’s different from cognition.
“So we’re imagining a mind behind the words,” Hanna says, “and that becomes associated with consciousness or intelligence. But the notion of general intelligence is not really well-defined.”
Indeed, as long ago as the 1960s, that phenomenon was noticed by Joseph Weizenbaum, the designer of the pioneering chatbot ELIZA, which replicated the responses of a psychotherapist so convincingly that even test subjects who knew they were conversing with a machine thought it displayed emotions and empathy.
“What I had not realized,” Weizenbaum wrote in 1976, “is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.” Weizenbaum warned that the “reckless anthropomorphization of the computer” — that is, treating it as some sort of thinking companion — produced a “simpleminded view of intelligence.”
That tendency has been exploited by today’s AI promoters. They label the frequent mistakes and fabrications produced by AI bots as “hallucinations,” which suggests that the bots have perceptions that may have gone slightly awry. But the bots “don’t have perceptions,” Bender and Hanna write, “and suggesting that they do is yet more unhelpful anthropomorphization.”
The general public may finally be cottoning on to AI’s failed promise. Predictions that AI will lead to large-scale job losses in creative and STEM fields (science, technology, engineering and math) might inspire feelings that the whole enterprise was a tech-industry scam from the outset.
Predictions that AI would yield a burst of increased worker productivity haven’t been fulfilled either; in many fields productivity has declined, in part because workers must be deployed to double-check AI outputs, lest the bots’ mistakes or fabrications find their way into mission-critical applications — legal briefs citing nonexistent precedents, medical prescriptions with life-threatening ramifications and so on.
Some economists are throwing cold water on predictions of economic gains more generally. MIT economist Daron Acemoglu, for example, forecast last year that AI would produce an increase of only about 0.5% in U.S. productivity and an increase of about 1% in gross domestic product over the next 10 years, mere fractions of the AI camp’s projections.
The value of Bender and Hanna’s book, and the lesson of GPT-5, is the reminder that “artificial intelligence” isn’t a scientific term or an engineering term. It’s a marketing term. And that’s true of all the chatter about AI eventually taking over the world.
“Claims around consciousness and sentience are a tactic to sell you on AI,” Bender and Hanna write. So, too, is the talk about the billions, or trillions, to be made in AI. As with any technology, the profits will go to a small cadre, while the rest of us pay the price ... unless we gain a much clearer perception of what AI is, and more importantly, what it isn’t.