Dov Greenbaum and Mark Gerstein

Can AI developers avoid Frankenstein’s fateful mistake?

Oscar Isaac stars as Victor Frankenstein in the latest adaptation of Mary Shelley’s gothic masterwork.
(Ken Woroner / Netflix)


Audiences already know the story of Frankenstein. The gothic novel — adapted dozens of times, most recently in director Guillermo del Toro’s haunting revival now available on Netflix — is embedded in our cultural DNA as the cautionary tale of science gone wrong. But popular culture misreads author Mary Shelley’s warning. The lesson isn’t “don’t create dangerous things.” It’s “don’t walk away from what you create.”

This distinction matters: The fork in the road comes after creation, not before. All powerful technologies can become destructive — the choice between outcomes lies in stewardship or abdication. Victor Frankenstein’s sin wasn’t simply bringing life to a grotesque creature. It was refusing to raise it, insisting that the consequences were someone else’s problem. Every generation produces its Victors. Ours work in artificial intelligence.


Recently, a California appeals court fined an attorney $10,000 after 21 of 23 case citations in their brief proved to be AI fabrications — nonexistent precedents. Hundreds of similar instances have been documented nationwide, growing from a few cases a month to a few cases a day. This summer, a Georgia appeals court vacated a divorce ruling after discovering that 11 of 15 citations were AI fabrications. How many more went undetected, ready to corrupt the legal record?


The problem runs deeper than irresponsible deployment. For decades, computer systems were predictable and verifiable — a pocket calculator returns the mathematically correct answer every time. Engineers could demonstrate how an algorithm would behave. Failures meant implementation errors, not uncertainty about the system itself.

Modern AI changes that paradigm. A recent study reported in Science confirms what AI experts have long known: plausible falsehoods — what the industry calls “hallucinations” — are inevitable in these systems. They’re trained to predict what sounds plausible, not to verify what’s true. When confident answers aren’t justified, the systems guess anyway. Their training rewards confidence over uncertainty. As one AI researcher quoted in the report put it, fixing this would “kill the product.”

This creates a fundamental veracity problem. These systems work by extracting patterns from vast training datasets — patterns so numerous and interconnected that even their designers cannot reliably predict what they’ll produce. We can only observe how they actually behave in practice, sometimes not until well after damage is done.

This unpredictability creates cascading consequences. These failures don’t disappear; they become permanent. Every legal fabrication that slips in undetected enters databases as precedent. Fake medical advice spreads across health sites. AI-generated “news” circulates through social media. This synthetic content is even scraped back into training data for future models. Today’s hallucinations become tomorrow’s facts.


So how do we address this without stifling innovation? We already have a model in pharmaceuticals. Drug companies cannot be certain of all biological effects in advance, so they test extensively, with most drugs failing before reaching patients. Even approved drugs face unexpected real-world problems. That’s why continuous monitoring remains essential. AI needs a similar framework.

Responsible stewardship — the opposite of Victor Frankenstein’s abandonment — requires three interconnected pillars. First: prescribed training standards. Drug manufacturers must control ingredients, document production practices and conduct quality testing. AI companies should face parallel requirements: documented provenance for training data, contamination monitoring to prevent reuse of problematic synthetic content, prohibited content categories and bias testing across demographics. Pharmaceutical regulators require transparency; AI companies currently need to disclose little.

Second: pre-deployment testing. Drugs undergo extensive trials before reaching patients — randomized controlled trials, developed to demonstrate safety and efficacy, were a major scientific achievement. Most candidate drugs fail. That’s the point: Testing catches subtle dangers before deployment. AI systems for high-stakes applications, including legal research, medical advice and financial management, need structured testing to document error rates and establish safety thresholds.

Third: continuous surveillance after deployment. Drug companies are obligated to track adverse events of their products and report them to regulators. In turn, the regulators can mandate warnings, restrictions or withdrawal when problems emerge. AI needs equivalent oversight.


Why does this need regulation rather than voluntary compliance? Because AI systems are fundamentally different from traditional tools. A hammer doesn’t pretend to be a carpenter. AI systems do, projecting authority through confident prose, whether retrieving or fabricating facts. Without regulatory requirements, companies optimizing for engagement will necessarily sacrifice accuracy for market share.

The trick is regulating without crushing innovation. The EU’s AI Act shows how hard that is. Under the Act, companies building high-risk AI systems must document how their systems work, assess risks and monitor them closely. A small startup might spend more on lawyers and paperwork than on building the actual product. Big companies with legal teams can handle this. Small teams can’t.

Pharmaceutical regulation shows the same pattern. Post-market surveillance prevented tens of thousands of deaths when the FDA discovered that Vioxx — an arthritis medication prescribed to more than 80 million patients worldwide — doubled the risk of heart attacks. Still, billion-dollar regulatory costs mean only large companies can compete, and beneficial treatments for rare diseases, perhaps best tackled by small biotechs, go undeveloped.

Graduated oversight addresses this problem, scaling requirements and costs with demonstrated harm. An AI assistant with low error rates gets light-touch monitoring. Higher rates trigger mandatory fixes. Persistent problems? Pull it from the market until they’re resolved. Companies either improve their systems to stay in business, or they exit. Innovation continues, but with more accountability.

Responsible stewardship cannot be voluntary. Once you create something powerful, you’re responsible for it. The question isn’t whether to build advanced AI systems — we’re already building them. The question is whether we’ll require the careful stewardship those systems demand.

The pharmaceutical framework — prescribed training standards, structured testing, continuous surveillance — offers a proven model for critical technologies we cannot fully predict. Shelley’s lesson was never about the creation itself. It was about what happens when creators walk away. Two centuries later, as del Toro’s adaptation reaches millions this month, the lesson remains urgent. This time, with synthetic intelligence rapidly spreading through our society, we might not get another chance to choose the other path.


Dov Greenbaum is professor of law and director of the Zvi Meitar Institute for Legal Implications of Emerging Technologies at Reichman University in Israel.

Mark Gerstein is the Albert L. Williams Professor of Biomedical Informatics at Yale University.

