
SCIENCE / MEDICINE : PROOF POSITIVE : Certainty is a comforting concept, but scientists say it hardly ever exists. History is littered with convictions that were eventually found to be false.

TIMES SCIENCE WRITER

Suppose Juliet had asked Romeo if he loved her and he had responded, “I’m 99% sure.”

Would that have been enough?

No way.

“It’s hard for human beings to maintain a sort of intermediate state between belief and disbelief in which you just suspend judgment,” says James Woodward, philosopher of science at Caltech. “We tend to gravitate to one of two possibilities and say, ‘No way, that can’t possibly be right,’ or else we say, ‘Yeah, that is right.’ I think human beings are sort of built this way.”

Traditionally, humans have looked to the world of science for a firm understanding of what is true and what is false because science, after all, is based upon experimentation and verification that ultimately leads to proof. Or at least so goes popular legend.

Yet most scientists would jump with glee if they thought they could “prove” anything with Romeo’s 99% level of certainty. In fact, the word proof practically never appears in scientific literature because most scientists do not think it is appropriate beyond the narrow confines of the pure world of mathematics.


“Knowledge, aside from mathematics, never has absolute certainty,” says Peter Galison, who teaches both physics and philosophy at Stanford University.

That has led to a sense of frustration among many who believe that if scientists can do something as grand as splitting the atom, then they ought to be able to predict next month’s weather and tell us for certain if oat bran really does lower the level of cholesterol in the human body.

That perceived failure is particularly troubling at a time when science is faced with the specter of potential worldwide disasters, such as global warming or the destruction of the ozone layer that protects us from lethal solar radiation. Scientists today can demonstrate that some precursors seem to be taking place, but no one can say with certainty exactly what the consequences will be.

Why should that be the case? One reason lies in the complexity of so many issues that confront the world today. It is much easier, some experts maintain, to split an atom than to collect the evidence and reach a consensus on complex matters that are global in scope, such as subtle atmospheric changes, or even to understand the inner workings of that wondrous organism, the human body.

That leaves policy makers in the precarious position of having to make decisions on the basis of what scientists think will “probably” be the case, not what they know to be true. In the years ahead, civilizations will almost certainly be asked to make great sacrifices because evidence suggests that inaction could lead to disaster, but don’t expect any scientist to deliver incontrovertible “proof” that inaction would be a mistake.

For the great issues that will have to be resolved in the years ahead, “proof” is a false concept.


“We’re going to have to make decisions on a policy level before we have certainty in many cases,” Galison added. “You have to be able to understand what the confidence level is, and that’s hard. It’s easy to convey certainty to somebody.”

A concept such as “statistical probability” would hardly warm Juliet’s heart.

Yet as scientists probe an increasingly complex world of dramatic environmental changes, people will have to get used to the idea of uncertainty, even when they are asked to make considerable sacrifices.

That is largely because of changes in the basic tools of science. Many instruments are so sophisticated today that they are invalidating major findings of the past--about which chemicals can produce cancer, for example. And the ubiquitous computer, the bedside companion of every scientist, has changed all the rules of the game. Computers have made it possible for scientists to create models of the entire universe, thus granting them a testing ground for the most complex theories, such as global warming.

While computerization has led to major scientific advances, it has also increased the level of uncertainty in many areas because scientists are now better equipped to grapple with issues for which there may be no definitive answers.

At best, in many areas of great public concern, scientists can only offer odds and probabilities, not proof.

Some experts, however, believe that the public has already gone through a few recent learning experiences that should have prepared it to accept the limitations of science.


For example, there is an abundance of evidence today showing that the amount of carbon dioxide in the atmosphere is increasing because of the burning of fossil fuels. Scientists generally agree on that point, and there is further agreement on the evidence that carbon dioxide can trap heat in the atmosphere.

However, there is much debate over how much effect that will have. Will it lead to significant global warming and a rise in sea level that could have catastrophic effects all over the planet? Although most experts believe the stage has been set for just such a scenario, the evidence that global warming has begun is woefully inadequate.

In fact, a team of scientists from the University of Alabama and the National Aeronautics and Space Administration reported on March 29 that an exhaustive, 10-year study of satellite measurements of atmospheric temperature changes found no obvious trend toward global warming. Yet scientists are convinced that the level of carbon dioxide in the atmosphere has increased dramatically since the birth of the Industrial Revolution, and that should, at some point, lead to an increase in atmospheric temperatures, according to most computer models.
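Part of the problem is statistical: over a single decade, year-to-year variability can swamp a slow underlying trend. A minimal sketch--using made-up numbers, not the actual satellite record--shows how a real trend can hide in noisy data:

```python
import random

# Illustrative only: synthetic annual temperature anomalies, NOT real
# measurements. Assume a slow warming trend of 0.02 C/year buried in
# year-to-year noise of about 0.15 C.
random.seed(1)
years = list(range(10))
temps = [0.02 * y + random.gauss(0, 0.15) for y in years]

# Ordinary least-squares slope: covariance(x, t) / variance(x).
n = len(years)
mx = sum(years) / n
mt = sum(temps) / n
slope = (sum((x - mx) * (t - mt) for x, t in zip(years, temps))
         / sum((x - mx) ** 2 for x in years))

print(f"fitted trend: {slope:+.3f} C/year")  # often far from the true 0.02
```

Depending on the noise, the fitted slope over so short a span can come out near zero or even negative, which is one reason a decade of measurements can fail to show an “obvious” trend even if warming is under way.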

That leaves policy makers faced with making decisions on a global scale that will be very costly for many, and no one can say with absolute certainty that action at this point is really necessary.

It is in areas such as global warming, which cannot be taken into the laboratory and dissected like a rat, that “we have a lack of consensus, and proof is hardest to get,” says UCLA chemist David Eisenberg.

“It may be difficult or impossible to make the right predictions,” he added.

Another example is the controversy over smoking. Most experts are comfortable with the validity of evidence that links smoking to lung cancer, although there has never been any proof in the popular sense of the word. What has been shown is that smoking significantly increases the odds of getting cancer.


“Hasn’t the public begun to get the notion that the idea of causality--that A causes B--has a statistical component to it?” asked Robert McGinn, vice chairman of Stanford University’s Values, Technology, Science and Society Program.

“There has been this, if you will, ‘statistic-ization’ of the notion of cause,” said McGinn, a philosopher. “Perhaps the public is more ready than it was 15 or 20 years ago to think in terms of likelihoods.”
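McGinn’s point can be made concrete with a toy calculation. The counts below are invented for illustration--they are not real epidemiological data--but they show what it means for a cause to have a statistical component:

```python
# Hypothetical 2x2 table (invented numbers, for illustration only):
#                 cancer   no cancer
# smokers            90        9,910
# nonsmokers         10        9,990
smokers = {"cases": 90, "total": 10_000}
nonsmokers = {"cases": 10, "total": 10_000}

risk_smokers = smokers["cases"] / smokers["total"]           # 0.009
risk_nonsmokers = nonsmokers["cases"] / nonsmokers["total"]  # 0.001

relative_risk = risk_smokers / risk_nonsmokers
print(f"relative risk: {relative_risk:.1f}x")  # 9.0x in this toy example
```

A ninefold increase in risk is compelling evidence, but it remains a statement about odds across a population--not a “proof” that any one smoker’s cancer was caused by smoking.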

That does not mean, however, that scientists come away from their experiments devoid of conviction.

“We come to things with the full strength of our convictions,” said Galison, who also teaches science history. “We come to believe them as much as we come to believe anything.

“It’s absolutely true that we can’t have absolute certainty, but nonetheless our knowledge can be made very secure.”

History, however, is littered with the confident pronouncements of scientists who thought their knowledge was very secure. That includes some of the great names of science, such as the Scottish physicist William Thomson, better known as Lord Kelvin, who formulated the laws of thermodynamics.


“Kelvin was a very brilliant scientist, and yet he made a colossal error in a whole series of papers,” said UCLA’s Eisenberg. Kelvin, a deeply religious man who opposed Charles Darwin’s theory of evolution, produced “two independent proofs” that the world was no more than 50,000 years old and thus too young for the evolution that Darwin had postulated.

Both his “proofs,” however, rested on the assumption that the sun’s heat came from burning gases; from the sun’s cooling rate, Kelvin concluded that the Earth would have been a molten mass as recently as 50,000 years ago.

But half a century later Ernest Rutherford made a series of discoveries that created the field of nuclear physics.

“It wasn’t until the discovery by Rutherford of atomic energy and the later recognition that the sun’s heat came not from the burning of gases but rather from atomic energy” that Kelvin’s concept of the age of the sun was discredited, Eisenberg said.

Many scientific “findings” turn out to be wrong because of procedural errors. Astronomers who announced last year that they had discovered a pulsar spinning faster than anyone had thought possible sent other scientists off on a wild chase.

Months later, the astronomers admitted they had made an embarrassing mistake. The signal they thought had been coming from the pulsar was in fact coming from a television camera used in the observation.


Sometimes, scientists get into trouble because they don’t follow the basic scientific method of subjecting their work to the evaluation of their peers before announcing major discoveries. A year ago, two chemists at the University of Utah stunned the world when they announced that they had achieved nuclear fusion at room temperature with a simple device, promising to end the world’s energy woes.

But other scientists at other laboratories around the world were unable to duplicate the results, and today all but a handful of scientists believe the Utah announcement was a major blunder. Had the chemists presented their work first to their peers before announcing it in a controversial press conference, they might have saved themselves profound embarrassment.

In many cases, though, errors creep into scientific research because of a basic lack of sound data. Many experts, for example, doubt the dire conclusions about global warming because most atmospheric models do not include the effects of clouds. That is not because scientists have overlooked the obvious role clouds perform. The problem stems from the fact that no one fully understands the complex part that clouds play.

Clouds cool by screening out solar radiation, but they also heat by trapping the Earth’s warmth in the lower atmosphere. But how do clouds fit into the overall global warming problem? No one is certain, so they are simply left out of many models.

The difficulty in dealing with clouds underscores one of the fundamental weaknesses of computer models, which are very dependent upon the quality of the data and extremely vulnerable to error and oversimplification. Nonetheless, they serve a valuable purpose because they provide a framework to study a subject that is so complex that “you can’t follow each step,” Galison said.

A computer model is “halfway between a theory and an experiment,” Galison said. “It’s like a theory because you are using the computer to make believe that certain things are happening, and it’s like an experiment in the sense that every time you run a computer model you get a slightly different answer.”
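Galison’s description can be made concrete with a deliberately crude sketch--a toy energy-balance rule, not any published climate model. The “theory” is the one-line update equation; the “experiment” is running it with random weather noise; and, as the article notes of real models, the cloud term is simply left out:

```python
import random

def toy_run(years=50, co2_forcing=0.03, noise=0.1, seed=None):
    """Crude energy-balance sketch: the anomaly is nudged up by a fixed
    CO2 forcing each year, pulled back toward equilibrium, and jostled
    by random 'weather' noise. Cloud effects are omitted entirely--the
    very gap in real models that the article describes."""
    rng = random.Random(seed)
    temp = 0.0  # temperature anomaly, degrees C
    for _ in range(years):
        temp += co2_forcing - 0.02 * temp + rng.gauss(0, noise)
    return temp

# Like an experiment, each run gives a slightly different answer.
for run in range(3):
    print(f"run {run}: anomaly after 50 years = {toy_run(seed=run):+.2f} C")
```

The spread across runs is a rough picture of the uncertainty Galison describes, and adding or dropping a term such as clouds shifts every run at once--which is why the missing cloud physics matters so much.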
