
Bold Plans to Learn How We Learn


The information age comes with a lot of excess baggage.

While we are inundated with new information borne on the wings of new technologies, experts debate fiercely over whether much of it really helps. We can find the answers to almost any question these days on the Internet, but are we learning anything, or are we just collecting data?

How can we be sure that the considerable funds spent on classroom computers these days wouldn’t be better spent on more tutors? At issue are some very fundamental questions. How do we learn? How can new technologies help us learn? How can we best deal with the avalanche of new data that threatens to bury us?

In an effort to answer those questions, the National Science Foundation has launched a major new initiative called Learning and Intelligent Systems.


The NSF, which funds most academic research in this country, recently awarded 28 grants worth more than $22.5 million in the first leg of a three-pronged effort that will involve experts from a wide range of disciplines at institutions across the country.

“It’s a very, very challenging project,” says Juris Hartmanis, assistant director for computer and information science and engineering at the NSF. “What we want to do is simply understand the nature of learning and intelligence and see how that can be utilized to improve our educational processes.”

Hartmanis says the “tools” used in the project will range from the ubiquitous computer to classroom observations to such new technologies as functional magnetic resonance imaging, which identifies which part of the brain is employed during different learning situations.

Researchers using magnetic imaging, for example, have determined “where in the brain you store the language you learned first as a child,” Hartmanis says. Surprisingly, people who learn a second language store that information in a different area of the brain.

At Carnegie-Mellon University, scientists will use that technique to help develop a “systems-level neural theory of incremental learning.” They will study the brains of rats, monkeys and humans to see which parts of the brain are used to learn different things.

Researchers at the University of Massachusetts will work on the ground floor, dealing mostly with infants. They want to know how an infant armed with a relatively empty vessel for a brain can progress to somewhat sophisticated levels of understanding in the short span of a couple of years.


The key goal, according to Neil Berthier, the principal investigator, is “to understand how highly complex intelligent systems could arise from simple initial knowledge through interactions with the environment.”

His team is intrigued by the fact that a baby learns very quickly how to use its arms, and team members wonder if that learning ability might somehow be transferred to intelligent robots.

Building on that theme, researchers at Brown University will study the role of the mind in “sensory-motor activities.”

*

A child learns how to “navigate” so quickly that “it seems almost effortless in normal circumstances,” says principal investigator William H. Warren. Researchers at Brown will take what they learn from observing humans and other animals and try to incorporate it into robots in hopes of gaining a broader understanding of “spatial cognition,” or how we know when we are about to fall into a manhole, even if we’ve never done it before.

Researchers at UCLA will go into the classroom to seek answers to their part of the project. Psychology professor Rochel Gelman says her nine-member team will focus on how learning occurs in complex environments where the data can be both rich and confusing.

The UCLA program will look at such things as how math and science are taught in schools, and Gelman says it will seek to answer this question:


“How do we come to learn about concepts that we might not have been prepared for?”

Gelman says her team will work with the Keck Institute for Math and Science at Crossroads High School in Santa Monica, where students will be monitored as they struggle to learn. Cameras will record the students’ activities.

“The cameras give us immediate access, and we can track the kids’ responses as they are doing all kinds of things on computers,” she says.

The students’ computers will also be connected directly to Orville Chapman’s laboratory on the UCLA campus, where researchers in biology, psychology and genetics can examine the results. That, she hopes, will define the kinds of research that need to be done on memory.

The unifying theme running throughout the project, she says, “is the interaction between the structure of the brain’s learning mechanisms and the structure of the data that support learning.”

In other words, how does one package stuff so that it is intellectually palatable, and what does the brain do with it when it receives the package?

Of course, what we learn means nothing if we don’t know how to use it, and other researchers at Carnegie-Mellon will study how students transfer their new knowledge “to situations outside the original learning context,” according to principal investigator Marsha C. Lovett.


Students will be studied as they try to apply new skills and new understanding to different areas. And that, Lovett says, is not easy.

Some researchers will try to steal the thunder from one of the most effective teaching aids available today--the tutor. Recognizing that students learn best if they have to think for themselves rather than being fed information, researchers at the University of Memphis are building a “fully automatic computer tutor.”

The computerized tutor will give hints, prod and prompt the students, but it won’t provide a substitute for thinking. It will be used first in courses on computer literacy and introductory medicine.

Nora Sabelli, a theoretical chemist with the NSF and one of the organizers of the nationwide project, says some research will be directed at a common problem in education: Too often, she says, we assume the student knows too little.

“Very often when we teach people we assume that we are starting from scratch,” she says. “We don’t take into account what they already know, and what they already know may make the teaching much faster.”

She hopes the initiative will lead to new technologies that will better measure prior knowledge. That could be a major boost in retraining people for the workplace.


*

What it all boils down to is this, according to NSF director Neal Lane:

We have enormous amounts of information now available to us. But if we don’t understand how the brain works and how we can use that information, it won’t do us a lot of good.

Access is useless, Lane says, without “intelligently absorbing, refining and analyzing the information.”


Smart Money

National Science Foundation awards for the Learning and Intelligent Systems initiative:

Boston U.: 1 award, $624,980

Brown U.: 3 awards, $2,322,340

Carnegie Mellon: 4 awards, $3,242,900

Case Western: 1 award, $775,000

Hampshire College: 1 award, $1,092,500

Johns Hopkins: 2 awards, $1,548,900

New York U.: 1 award, $875,000

N.C. State U.: 1 award, $600,470

Northwestern: 2 awards, $2,156,750

UC Berkeley: 1 award, $725,000

UCLA: 1 award, $751,000

U. of Chicago: 1 award, $881,000

U. of Ill. (Chicago): 1 award, $250,000

U. of Maryland: 1 award, $775,000

U. of Massachusetts: 1 award, $624,050

U. of Memphis: 1 award, $900,000

U. of Pittsburgh: 2 awards, $1,500,000

Wash. U. (St. Louis): 1 award, $706,400

SRI (Menlo Park): 1 award, $1,450,000

Source: National Science Foundation

*

Lee Dye can be reached via e-mail at leedye@compuserve.com
