Iddo Gefen

The human brain doesn’t learn, think or recall like an AI. Embrace the difference

Brains aren’t computers. (Stan Grossfeld / Boston Globe / Getty Images)

Recently, Nvidia founder Jensen Huang, whose company builds the chips powering today’s most advanced artificial intelligence systems, remarked: “The thing that’s really, really quite amazing is the way you program an AI is like the way you program a person.” Ilya Sutskever, co-founder of OpenAI and one of the leading figures of the AI revolution, also stated that it is only a matter of time before AI can do everything humans can do, because “the brain is a biological computer.”

I am a cognitive neuroscience researcher, and I think that they are dangerously wrong.

The biggest threat isn’t that these metaphors confuse us about how AI works, but that they mislead us about our own brains. During past technological revolutions, scientists, as well as popular culture, tended to explore the idea that the human brain could be understood as analogous to one new machine after another: a clock, a switchboard, a computer. The latest erroneous metaphor is that our brains are like AI systems.

I’ve seen this shift over the past two years in conferences, courses and conversations in the field of neuroscience and beyond. Words like “training,” “fine-tuning” and “optimization” are frequently used to describe human behavior. But we don’t train, fine-tune or optimize in the way that AI does. And such inaccurate metaphors can cause real harm.

The 17th-century idea of the mind as a “blank slate” imagined children as empty surfaces shaped entirely by outside influences. This led to rigid education systems that tried to eliminate differences in neurodivergent children, such as those with autism, ADHD or dyslexia, rather than offering personalized support. Similarly, the early 20th-century “black box” model from behaviorist psychology claimed only visible behavior mattered. As a result, mental healthcare often focused on managing symptoms rather than understanding their emotional or biological causes.


And now there are new misbegotten approaches emerging as we start to see ourselves in the image of AI. Digital educational tools developed in recent years, for example, adjust lessons and questions based on a child’s answers, theoretically keeping the student at an optimal learning level. This is heavily inspired by how an AI model is trained.

This adaptive approach can produce impressive results, but it overlooks less measurable factors such as motivation or passion. Imagine two children learning piano with the help of a smart app that adjusts for their changing proficiency. One quickly learns to play flawlessly but hates every practice session. The other makes constant mistakes but enjoys every minute. Judging only on the terms we apply to AI models, we would say the child playing flawlessly has outperformed the other student.

But educating children is different from training an AI algorithm. That simplistic assessment would not account for the first student’s misery or the second child’s enjoyment. Those factors matter; there is a good chance the child having fun will be the one still playing a decade from now, and they might even end up a better and more original musician because they enjoy the activity, mistakes and all. I do think AI in learning is both inevitable and potentially transformative for the better, but if we assess children only in terms of what can be “trained” and “fine-tuned,” we will repeat the old mistake of emphasizing output over experience.

I see this playing out with undergraduate students, who, for the first time, believe they can achieve the best measured outcomes by fully outsourcing the learning process. Many have been using AI tools over the past two years (some courses allow it and some do not) and now rely on them to maximize efficiency, often at the expense of reflection and genuine understanding. They use AI as a tool that helps them produce good essays, yet the process in many cases no longer has much connection to original thinking or to discovering what sparks the students’ curiosity.

If we continue thinking within this brain-as-AI framework, we also risk losing the vital thought processes that have led to major breakthroughs in science and art. These achievements did not come from identifying familiar patterns, but from breaking them through messiness and unexpected mistakes. Alexander Fleming discovered penicillin by noticing that mold growing in a petri dish he had accidentally left out was killing the surrounding bacteria. It was a fortunate mistake by a messy researcher, one that went on to save hundreds of millions of lives.

This messiness isn’t just important for eccentric scientists. It is important to every human brain. One of the most interesting discoveries in neuroscience in the past two decades is the “default mode network,” a group of brain regions that becomes active when we are daydreaming and not focused on a specific task. This network has also been found to play a role in reflecting on the past, imagining and thinking about ourselves and others. Dismissing this mind-wandering as a glitch rather than embracing it as a core human feature will inevitably lead us to build flawed systems in education, mental health and law.


Unfortunately, it is particularly easy to confuse AI with human thinking. Microsoft describes generative AI models like ChatGPT on its official website as tools that “mirror human expression, redefining our relationship to technology.” And OpenAI CEO Sam Altman recently highlighted his favorite new feature in ChatGPT called “memory.” This function allows the system to retain and recall personal details across conversations. For example, if you ask ChatGPT where to eat, it might remind you of a Thai restaurant you mentioned wanting to try months earlier. “It’s not that you plug your brain in one day,” Altman explained, “but … it’ll get to know you, and it’ll become this extension of yourself.”

The suggestion that AI’s “memory” will be an extension of our own is again a flawed metaphor — leading us to misunderstand the new technology and our own minds. Unlike human memory, which evolved to forget, update and reshape memories based on myriad factors, AI memory can be designed to store information with much less distortion or forgetting. A life in which people outsource memory to a system that remembers almost everything isn’t an extension of the self; it breaks from the very mechanisms that make us human. It would mark a shift in how we behave, understand the world and make decisions. This might begin with small things, like choosing a restaurant, but it can quickly move to much bigger decisions, such as taking a different career path or choosing a different partner than we would have, because AI models can surface connections and context that our brains may have cleared away for one reason or another.

This outsourcing may be tempting because this technology seems human to us, but AI learns, understands and sees the world in fundamentally different ways, and doesn’t truly experience pain, love or curiosity like we do. The consequences of this ongoing confusion could be disastrous — not because AI is inherently harmful, but because instead of shaping it into a tool that complements our human minds, we will allow it to reshape us in its own image.

Iddo Gefen is a PhD candidate in cognitive neuroscience at Columbia University and author of the novel “Mrs. Lilienblum’s Cloud Factory.” His Substack newsletter, Neuron Stories, connects neuroscience insights to human behavior.
