
Stories on What You Eat, Drink Reflect Lack of Context, Appetite for Conflict

TIMES STAFF WRITER

Few stories arouse more interest in the media and in the public than those purporting to demonstrate the health benefits or dangers of various foods, beverages and nutritional supplements.

In the past year, we have heard that tomato sauce might prevent prostate cancer, that tea might prevent heart disease, that the combination of alcohol and coffee might prevent stroke damage and that blueberries may reduce the effects of aging and improve balance, coordination and short-term memory. In previous years, we also heard that fettuccine Alfredo is “a heart attack on a plate” and that caffeine can cause cancer, birth defects and high blood pressure.

Indeed, one of the major problems with such stories is that they often seem to contradict one another. One week, stories say a particular substance is good for you; the next week, it’s bad for you. Caffeine, margarine, fish and beta carotene are only a few of the substances on which conflicting stories have appeared in recent years.


“The media characterize contradictory and conflicting scientific and medical findings as a problem, a weakness, proof that the researchers don’t really know what they’re doing,” says David Anderson, a molecular biologist at Caltech. “But this is inherent in the scientific process. It’s part of the control mechanism that enables the right answer to ultimately emerge.”

Simple Shortcomings

Journalism likes conflict, though. It makes good copy--even if it sometimes lacks context. Different results may not be quite as different as they seem--and some “differences” may occur because the researchers used different methods or populations for their studies.

Moreover, says Dr. Richard Glass, co-editor of the Journal of the American Medical Assn., “It’s very hard to do reliable, randomized, controlled clinical trials on dietary factors. The time frame for results on drug treatments is weeks or months, but with dietary trials, it takes years, and it’s hard to maintain the controls and keep the group intact that long.”

Such studies are generally epidemiological--based not on clinical experimentation but on observation of and recollection by the test subjects. Since it’s virtually impossible to find two groups of people--a test group and a control group--who are identical in every way except for the one, isolated condition being examined, such studies rarely produce proof of a causal relationship. They yield only hypotheses.

“This is the single area that I see the most repetitive problems with” in the media, says Dr. Richard Klausner, director of the National Cancer Institute. The failure of the media to grasp and convey the fundamental difference between a causal effect and a simple correlation or association “seems to be just rampant.”

Anderson offers this illustration: If you read in the paper that eating filet mignon is associated with an increased frequency of ulcers, you might jump to the conclusion that eating filet mignon causes ulcers. And it might. But it could just as well mean that having ulcers creates a craving for filet mignon. Or it could mean that stockbrokers, who tend to get ulcers for other reasons, also tend to like filet mignon--and perhaps the population sample in the study was never corrected for an overrepresentation of stockbrokers.
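Anderson's stockbroker scenario can be sketched as a small simulation. All of the rates below are invented purely for illustration: filet mignon has no effect on ulcers in this model, yet the confounder (being a stockbroker, which here raises both ulcer risk and filet consumption) still produces a strong association in the pooled data.

```python
import random

random.seed(0)

# Hypothetical rates, invented for illustration. Being a stockbroker
# raises both the chance of ulcers (stress) and the chance of eating
# filet mignon; filet mignon itself has NO effect on ulcers here.
N = 100_000
records = []
for _ in range(N):
    stockbroker = random.random() < 0.20
    ulcer = random.random() < (0.30 if stockbroker else 0.05)
    eats_filet = random.random() < (0.60 if stockbroker else 0.10)
    records.append((eats_filet, ulcer))

def ulcer_rate(want_filet):
    """Fraction of subjects with ulcers, among those who do/don't eat filet."""
    group = [ulcer for filet, ulcer in records if filet == want_filet]
    return sum(group) / len(group)

eaters_rate = ulcer_rate(True)       # roughly 0.20
non_eaters_rate = ulcer_rate(False)  # roughly 0.075

print(f"ulcer rate among filet eaters: {eaters_rate:.3f}")
print(f"ulcer rate among non-eaters:   {non_eaters_rate:.3f}")
```

The filet eaters show a much higher ulcer rate even though, by construction, filet causes nothing--exactly the correlation-without-causation trap the researchers describe.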


The other problem with media coverage of most epidemiological studies, scientists say, is a misunderstanding and misuse of statistics.

“When a study says that a woman who drinks alcohol has a 30% increase in the risk for breast cancer over the next 10 years--and there has been such a study--it sounds like a big increase,” says Marcia Angell, editor in chief of the New England Journal of Medicine.

“Multiplied by all the women in the country, it is. . . . But if you look at what it means for an individual middle-aged woman, whose chance of developing breast cancer in the next 10 years might be 3%, a 30% increase would [change that to] . . . 4%. That means her chances of remaining free of breast cancer have dropped from 97% to 96%. Is that worth giving up your dinner wine for? Probably not.”
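Angell's point is that a relative increase must be applied to the absolute baseline before it means anything for an individual. Her figures--a 3% baseline 10-year risk and a 30% relative increase--work out as follows:

```python
baseline_risk = 0.03        # 3% chance of breast cancer over the next 10 years
relative_increase = 0.30    # the "30% increase" the study reported

# A relative increase scales the baseline; it is not added to it.
new_risk = baseline_risk * (1 + relative_increase)   # 0.039, i.e. about 4%

free_before = 1 - baseline_risk   # 0.97
free_after = 1 - new_risk         # 0.961

print(f"absolute risk rises from {baseline_risk:.1%} to {new_risk:.1%}")
print(f"chance of remaining cancer-free drops from "
      f"{free_before:.0%} to {free_after:.1%}")
```

The same "30% increase" that sounds alarming as a relative figure amounts to less than one percentage point of absolute risk for the individual woman in Angell's example.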

Selecting Numbers

The media rarely frame stories that way, though. They tend to take the largest, most startling numbers in a journal story and use them, even when they may be grossly misleading.

“From the journalist’s perspective, there are interesting numbers and non-interesting numbers,” says David Anderson, research director of the Statistical Assessment Service, a nonpartisan agency based in Washington, “and interesting numbers are those that can be fit into a narrative, into a story line, a template, a moral drama [with] a hero and a villain and a dramatic outcome and breakthroughs and turnings.”
