COLUMN ONE: Why Can’t Science Get It Right? A rush to publish and widely differing techniques leave the truth buried in an avalanche of conflicting studies.

TIMES SCIENCE WRITER

A reputable scientist recently reported in a prestigious journal that his research found no link between childhood cancer and exposure to electric transmission lines and appliances.

A few months later, equally reputable scientists reported contrary evidence: Children living near power lines were up to four times as likely to develop leukemia or brain cancer.

Then, in March, scientists turned the research on its head again, reporting no link between power lines and cancer among electrical utility workers.

Not surprisingly, other researchers said that more studies are on the way--and some may well contradict these earlier reports, as well as one another.

But wait. Either electromagnetic fields cause cancer or they don’t.

Why can’t scientists get it right?

The same could be asked about contradictory scientific evidence on the benefits of oat bran and the danger of eggs, the miracle of cold fusion and the value of vitamins, the life span of left-handers and the marriage prospects of women over 30.

Surely, the truth is out there.

It usually is, experts say, and more often than not the scientists eventually come up with it. But because computers have fueled a boom in published research with widely varying approaches, that truth may wind up buried for a time in an ever-growing pile of scientific papers with conflicting results.

“People are naturally becoming confused. . . . I find myself reacting the same way,” said Marcel LaFollette, associate professor of science and technology policy at George Washington University in Washington.

LaFollette and others who study scientists acknowledge that some research can conflict--even contradict--other research. Knowledge is acquired by exploring the unknown, they said, and some steps inevitably will be in the wrong direction.

But they insist that errors and falsehoods are eventually corrected as others attempt to confirm new findings.

For people frustrated by the apparent muddle in certain subjects, scientists have some simple advice: When a startling new study recommends some fundamental change in lifestyle, file it in the back of your mind and do nothing until a scientific consensus emerges.

“We have to educate the public to be more skeptical,” said LaFollette, author of a recent book, Stealing Into Print, about fraud in scientific research. “People need to treat science news the same way they treat medical advice: They should seek a second opinion.”

However, skepticism must not become cynicism, she warned. Conflicting evidence on whether low-fat diets reduce the risk of developing one particular disease, such as breast cancer, may prompt people to discount all recommendations about low-fat diets. Ample evidence supports the notion that a low-fat diet is more healthful overall.

Scientists have many options in how they approach a problem.

Relationships between food additives and health, for example, can be studied with computer models, animals or human subjects. Each has advantages and shortcomings. Computer models may oversimplify biochemistry, lab animals may need abnormally high doses to produce a measurable response quickly, and people may have genetic histories or other factors that skew results.

“You can approach the same question in different ways and get inconsistent results,” said Alex Michalos, a social scientist at the University of Guelph in Ontario, Canada. “It doesn’t mean the world is different or people are different, it just means that two people looked at something in slightly different ways and saw slightly different things.”

Sometimes the differences are not so slight, and scientists can seem like the proverbial blind people studying an elephant: Those who grasp the trunk first may conclude that elephants resemble snakes. Those who grasp a leg first may conclude that elephants resemble tree trunks.

At other times, otherwise solid scientific studies have been challenged because they failed to take some critical factor into account.

When University of Rochester scientists looked for a link between gender and heart attacks, for example, they found that women who suffered heart attacks were 43.8% more likely than men to die while hospitalized. They later softened their assertion, which contradicted a Yale University study, when others noted that they had not studied medical records to see if the women had histories of coronary problems before their heart attacks. That factor may have skewed the results.

Scientists said the issue often comes down to how many factors they can afford to consider. In the Rochester study, researchers believed the large number of patients being studied--5,839 men and women--would factor out the influence of individual patient histories. But if the subjects’ medical histories are important, how about those of their parents, grandparents and other relatives?
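The worry about unmeasured histories can be made concrete with a toy simulation. In the Python sketch below, every number is invented: sex has no direct effect on surviving a heart attack, and only prior coronary history raises mortality. Because that history is distributed unevenly between the two groups, a raw comparison still shows a gap:

```python
import random

random.seed(2)

def simulate(n, p_history, p_die_history=0.35, p_die_clean=0.15):
    """Share of hypothetical patients who die in the hospital when
    mortality depends only on prior coronary history, not on sex."""
    deaths = 0
    for _ in range(n):
        has_history = random.random() < p_history
        p_die = p_die_history if has_history else p_die_clean
        deaths += random.random() < p_die
    return deaths / n

# Invented history rates: the confounder, not sex, drives the gap.
men = simulate(3000, p_history=0.20)    # expected mortality about 19%
women = simulate(2839, p_history=0.60)  # expected mortality about 27%
print(f"men: {men:.1%}  women: {women:.1%}  excess: {women / men - 1:.0%}")
```

Comparing only patients with similar histories would shrink the apparent sex difference toward zero--which is why critics wanted the medical records checked.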

“The more questions you ask, the better the questionnaire is and the more reliable the data from it,” Michalos said. “But you can’t always ask as many as you want.”

If researchers cannot afford to ask a lot of questions of a large number of people, they can try to compensate by intensively questioning a relatively small number of people, a strategy that LaFollette said has its own problems.

“Statistics are a wonderful thing, and you can draw good inferences from even small sample sizes,” she said, “but only if you are very careful in selecting the sample in the first place.”
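Her point can be sketched in a few lines of Python. In this hypothetical example, a trait occurs in 30% of one subgroup and 10% of another; a small but properly randomized sample lands near the true overall rate, while an equally careful sample drawn only from the convenient subgroup does not:

```python
import random

random.seed(0)

# Hypothetical population of 10,000 split into two subgroups with
# different trait rates (30% and 10%); the true overall rate is ~20%.
group_a = [1 if random.random() < 0.30 else 0 for _ in range(5000)]
group_b = [1 if random.random() < 0.10 else 0 for _ in range(5000)]
population = group_a + group_b

# A small random sample of the whole population gets close to the truth.
sample = random.sample(population, 200)
print(sum(sample) / len(sample))   # roughly 0.20

# Sampling only the easy-to-reach subgroup is biased no matter how
# carefully the 200 subjects are measured afterward.
biased = random.sample(group_a, 200)
print(sum(biased) / len(biased))   # roughly 0.30
```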

“Science is not reality,” said Raymond Eve, a professor of social psychology at the University of Texas at Arlington who writes about the sociology of science and technology. “It is, at best, a good approximation of reality--a model of it.”

Eve said models often are “very simplified versions of reality” because most people, even scientists, find it too difficult to track problems with a lot of variables. “They will leave out details to capture the bigger picture,” he said, “because that makes it easier to grasp in the human mind.”

It also makes it easier for others, including science writers and the public, to oversimplify or incorrectly generalize about the results.

“Often studies are carefully qualified,” LaFollette said. But by the time they are written up in newspapers, debated on talk shows, or discussed over the dinner table, “too often those qualifications drop away.”

Take electromagnetic fields, for example. The three studies that seemed to contradict one another over the last year all approached the EMF question in different ways.

The first study concluded that because the use of electricity had increased 25-fold since 1940 while the incidence of certain cancers had remained steady, EMF did not cause cancer. Epidemiologists criticized the study as being “fatuous” and invalid because it falsely assumed that use of electricity is the same as EMF exposure and ignored the increasing incidence of other cancers.

The second study was better-received in the scientific community because it established the “dose-response relationship” that is considered to be critical to valid epidemiological studies. It showed that the closer children lived to electric transmission lines, the greater the risk they would develop leukemia or brain cancer.
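In rough terms, a dose-response check asks whether risk climbs as exposure climbs. The Python sketch below uses invented counts--not the study’s data--to show the pattern epidemiologists look for:

```python
# Hypothetical counts: children grouped by distance from transmission
# lines, with the number of cases in each group (all figures invented).
bands = [
    ("> 200 m", 10000, 12),   # (label, children, cases)
    ("100-200 m", 5000, 9),
    ("50-100 m", 2000, 6),
    ("< 50 m", 1000, 5),
]

baseline = bands[0][2] / bands[0][1]  # rate in the least-exposed group
for label, n, cases in bands:
    rate = cases / n
    print(f"{label:>10}: rate {rate:.5f}, relative risk {rate / baseline:.1f}")
# A roughly monotonic rise in relative risk with proximity is what
# epidemiologists read as a dose-response relationship.
```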

The latest study again questioned the EMF-cancer connection by comparing cancer rates among the public with those of 36,221 Southern California Edison employees exposed to strong electromagnetic fields on the job; it found the rates statistically indistinguishable. Critics said Edison is not a disinterested party and did not study enough people to detect subtle effects.
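The “not enough people” objection is about statistical power. As a back-of-the-envelope illustration--assuming an invented baseline rate of 0.1% for a rare cancer and a simple one-sample test, not the study’s actual method--a work force of 36,221 can reliably detect a doubling of risk but will usually miss a 20% increase:

```python
import math

def norm_cdf(z):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power(p0, rel_risk, n, z_crit=1.96):
    """Approximate power to detect that a cohort of size n has a rate
    rel_risk times a known population rate p0 (normal approximation,
    two-sided test at the 5% level)."""
    p1 = p0 * rel_risk
    se0 = math.sqrt(p0 * (1 - p0) / n)   # spread if there is no effect
    se1 = math.sqrt(p1 * (1 - p1) / n)   # spread if the effect is real
    threshold = p0 + z_crit * se0        # rate needed to reach significance
    return 1.0 - norm_cdf((threshold - p1) / se1)

n = 36221
for rr in (1.2, 1.5, 2.0):
    print(f"relative risk {rr}: power = {power(0.001, rr, n):.0%}")
# Prints roughly 25%, 80% and 100%: a subtle 20% excess would be
# missed about three times out of four.
```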

Each of these studies may be accurate in its own narrow view and one may be more valid than the others, but none yielded the whole truth. More studies will be added to the mosaic until a consensus emerges, scientists said.

People, however, want to know sooner rather than later whether the EMF emitted by the power lines overhead or the computers on their desks is causing any harm. This compels decisions based on incomplete scientific evidence. If the evidence is not confirmed, if it is modified or even overturned, some people wonder if scientists knew what they were talking about.

Sometimes, the studies are at fault. Scientists who have investigated this phenomenon say research can go bad in four broad ways:

* Studies focusing on a narrow question could be used ignorantly or improperly to generalize about similar topics or a broader phenomenon. Research showing that a certain drug effectively fights the symptoms of a disease, for example, does not mean it fights the disease itself.

* Research may be badly designed from the outset, using the wrong experiment, asking the wrong question, or relying on fundamentally unrepresentative samples. Until recently, many studies were conducted only on men or only on male college students.

* Scientists can be unconsciously biased or base their work on false assumptions or outright guesswork, or researchers can simply ignore evidence that conflicts with a desired conclusion. The late Nobel laureate Irving Langmuir called this “pathological science.”

* Critical evidence and even whole studies can be deliberately falsified for personal glory, professional advancement or financial gain.

Scientific journals try to weed out these problems by asking other researchers to review every article being considered. These anonymous critics can ask for new experiments, additional analysis and writing changes--or they can recommend the article be rejected.

“Of course, peer review is never a guarantee of accuracy or legitimacy,” said LaFollette, who edits the journal Knowledge: Creation, Diffusion, and Utilization. “No peer reviewer can guarantee to me that the author really did what he or she said they did, can’t say if they got the results they’re reporting. We can’t replicate the experiments. It can’t guarantee the work is good, it can only increase the likelihood that the work is good.”

The literature is littered with examples of how peer review can work.

Chemists B. Stanley Pons and Martin Fleischmann stood to become millionaires and scientific legends when they announced in 1989 that they had demonstrated “cold fusion,” a long-sought nuclear reaction, achieved in a glass jar at room temperature, that promised unlimited energy. The promise fizzled when other scientists could not replicate the feat.

Some scientists said their cold fusion experiments produced the excess heat originally reported by Pons and Fleischmann. A few interpreted this as evidence of cold fusion, although the other fusion byproducts--helium, tritium, free neutrons and gamma radiation--were nowhere to be found. “When the data is ambiguous, the mind imposes a pattern on it, even if there is no pattern,” Eve said.

Before cold fusion, there was polywater. Soviet chemists N.N. Fedyakin and Boris Deryagin reported in 1968 that they had found a “new” form of water in which water molecules were linked in long chains, or polymers. This polymeric water--“polywater”--was as thick as Vaseline, came to a boil at 572 degrees Fahrenheit instead of 212, and froze into a glassy solid at 40 degrees below zero Fahrenheit instead of forming crystalline ice at 32 degrees above.

Polywater papers appeared in scientific journals, a polywater symposium was held, polywater production plants were proposed. Politicians feared a “polywater gap” could give the Soviets a Cold War edge. This polywater panic persisted until a Bell Telephone Laboratories scientist, Denis L. Rousseau, and his former professor at USC, Sergio P.S. Porto, found that polywater was regular water contaminated by human sweat.

Even what seems to be straightforward research can produce wildly different answers.

Take, for instance, the matter of how well copper conducts heat. In 1974, Purdue University scientists scoured scientific journals, textbooks and other sources for published measurements of this simple phenomenon.

In principle, there is a single correct value at any given temperature of the copper. However, most of the 202 published values differed markedly from one another.

“You’d think any damned fool could do it,” said Michalos, “but out of the (202) kicks at the can, there was this wide variety of answers.”

The Purdue researchers concluded that the variations were due to impurities in the copper samples and the difficulty of accurately measuring samples when chilled to tens or hundreds of degrees below zero. But each measurement--right or wrong--appeared somewhere in the literature.
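The role of lab-to-lab systematic error can be mimicked in a simulation. The Python sketch below invents 202 measurements of copper’s room-temperature thermal conductivity (about 401 watts per meter-kelvin for pure copper), each offset by a lab-specific bias standing in for impurities and apparatus quirks:

```python
import random
import statistics

random.seed(1)

TRUE_VALUE = 401.0  # thermal conductivity of pure copper near room
                    # temperature, in watts per meter-kelvin

# Each simulated lab publishes one value: the truth, plus a systematic
# offset peculiar to that lab (impure sample, miscalibrated apparatus),
# plus ordinary run-to-run noise.
measurements = [
    TRUE_VALUE + random.gauss(0, 20) + random.gauss(0, 5)
    for _ in range(202)
]

print(f"min {min(measurements):.0f}, max {max(measurements):.0f}")
print(f"mean {statistics.mean(measurements):.1f} "
      f"+/- {statistics.stdev(measurements):.1f}")
# The spread between labs dwarfs any single lab's precision; every
# value is "in the literature," but they cannot all be right.
```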

“It just goes to show,” Michalos said, “in science, as in everything else, people should treat every pronouncement of human beings as fallible in the first place, and tentative in the second place.”

Times staff researcher Cary Schneider contributed to this article.
