
Studies in Confusion

TIMES HEALTH WRITER

One week medical researchers report beta carotene prevents cancer. Then they say it may cause cancer. Or we hear fiber is good, so we dutifully load up on oatmeal and green vegetables. Then we’re told that maybe it’s not so good. The list goes on and on--it’s enough to give you mental whiplash.

So how can consumers, especially those with chronic or serious ailments, sort through this contradictory data in order to make informed decisions about their health care?

“It’s very difficult,” says Michele Rakoff, a breast cancer survivor who directs a peer support mentoring program at Long Beach Memorial Hospital in Long Beach. “Going through having breast cancer is frightening. Then, there’s a news flash on TV about a new cancer cure, and desperate women start calling their doctors. When a closer look reveals it’s no big deal, everyone feels let down.”


It can be disheartening even for people who aren’t in the midst of a medical crisis and just want to stay healthy. Part of the problem is that most Americans get their health information secondhand, through television, newspapers and magazines, according to a 2000 survey conducted by the Kaiser Family Foundation. In this era of the instant news cycle and the pithy sound bite, preliminary studies are often inflated into major breakthroughs, and the subtle nuances and caveats that put findings into context are glossed over or lost.

No wonder there’s so much confusion. Unfortunately, this constant waffling “breeds distrust of the traditional channels of communication, making some people vulnerable to quack cures, or ‘X-Files’-style theories about diseases,” says David W. Murray, director of the Statistical Assessment Service, a nonprofit medical science think tank in Washington, D.C. “The public’s interpretation of what went wrong is how these medical conspiracy stories get started: ‘There really is a cure for cancer, but people have been bought off,’ or ‘We know something, but they won’t let us tell you.’ ”

With a little effort, though, consumers can separate the hope from the hype. The key is to understand how medical research works, and the different levels of evidence it produces, so you can gauge the significance of any single study.

“The public is looking for magic bullets and believes that when a study is done it means we have the answer once and for all,” says Dr. Linda Rosenstock, dean of the UCLA School of Public Health. “But the reality is that science proceeds in a series of steps that don’t always go in the same direction. And results can be overstated or oversimplified.”

Learning to Ask the Right Questions

The gold standard in medical research is the randomized, double-blind, placebo-controlled clinical trial. What that means in plain English is that half the people in the study are selected at random to get the new drug or treatment, and the other half get a dummy pill, or placebo. The study is blinded so that no one--not even the researchers--knows who is getting the real McCoy.

Such studies avoid biases that might skew the results. Otherwise, researchers may subconsciously treat subjects differently, or participants might have such a strong belief that a treatment works that they’ll feel better even if it’s not effective.


When the study is finished, researchers compare the two groups: either the people getting the therapy fared measurably better than those on the placebo, or they didn’t. This is the way virtually all new drugs are tested by pharmaceutical companies seeking clearance from the Food and Drug Administration to market them in the United States.
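To make the design concrete, here is a minimal sketch of a randomized trial as a simulation. This is hypothetical Python code written for illustration--the improvement rates are invented, not drawn from any real study:

```python
import random

def run_trial(n_subjects=200, placebo_rate=0.30, drug_rate=0.45, seed=1):
    """Simulate a randomized, placebo-controlled trial.

    Each subject is assigned to an arm by a coin flip (randomization),
    then 'improves' or not according to that arm's true rate. The rates
    are made-up numbers for illustration only. In a real double-blind
    trial, neither subjects nor researchers would see the assignments
    until the trial ends.
    """
    rng = random.Random(seed)
    outcomes = {"drug": [], "placebo": []}
    for _ in range(n_subjects):
        arm = rng.choice(["drug", "placebo"])            # random assignment
        rate = drug_rate if arm == "drug" else placebo_rate
        outcomes[arm].append(rng.random() < rate)        # did this subject improve?
    for arm, results in outcomes.items():
        print(f"{arm}: {sum(results)}/{len(results)} improved "
              f"({100 * sum(results) / len(results):.0f}%)")

run_trial()
```

Because assignment is random, any difference between the two printed rates beyond chance can be attributed to the drug rather than to who chose to take it.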

Still, even seemingly well-executed research can have inherent flaws--and a host of questions needs to be answered before accepting results as gospel. How many people were in the study? How long did it last? If there were only 50 people over a six-month period, that’s not enough time or study subjects to prove effectiveness. Who conducted the research? Research from scientists affiliated with reputable academic institutions is usually more reliable. Where was the study published? Articles in major journals tend to be more rigorously scrutinized.
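Why aren’t 50 subjects enough? A back-of-the-envelope power calculation suggests the scale a trial needs. The sketch below is hypothetical Python using the standard two-arm sample-size formula; the effect sizes are conventional statistical benchmarks, not figures from the article:

```python
from statistics import NormalDist

def subjects_per_arm(effect_size, alpha=0.05, power=0.80):
    """Approximate subjects needed per arm of a two-arm trial to detect
    a standardized effect of the given size (two-sided test).

    Standard formula: n = 2 * ((z_{1-a/2} + z_{power}) / effect_size)^2
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_power = z.inv_cdf(power)           # ~0.84 for 80% power
    return 2 * ((z_alpha + z_power) / effect_size) ** 2

# Even a "medium" effect (half a standard deviation) needs about 63
# subjects per arm -- roughly 126 in all, far more than a 50-person study.
print(round(subjects_per_arm(0.5)))   # ~63 per arm
print(round(subjects_per_arm(0.2)))   # ~392 per arm for a small effect
```

A 50-person study can only hope to detect a dramatic effect; modest but real benefits slip through unnoticed.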

There’s such a thing as publication bias, too. “The major medical journals, drug companies and scientists themselves tend not to publish negative results,” says Kay Dickersin, an associate professor in the Brown University School of Medicine in Providence, R.I. So we often only hear about the exciting “breakthroughs,” and not the follow-up studies where the treatments turned out to be duds.
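A small simulation shows why this matters. In the hypothetical sketch below, the treatment does nothing at all--yet if only the studies that happen to reach statistical significance get published, the published record still reports impressive differences (invented code and numbers, for illustration only):

```python
import random
from math import sqrt
from statistics import NormalDist

def simulate_publication_bias(n_studies=2000, n_per_arm=25, true_rate=0.30, seed=7):
    """Both arms improve at the same true rate, so the treatment has NO
    real effect. Studies that clear p < 0.05 -- the ones most likely to
    be published -- nonetheless report large differences."""
    rng = random.Random(seed)
    published = []
    for _ in range(n_studies):
        drug = sum(rng.random() < true_rate for _ in range(n_per_arm))
        plac = sum(rng.random() < true_rate for _ in range(n_per_arm))
        p1, p2 = drug / n_per_arm, plac / n_per_arm
        pooled = (drug + plac) / (2 * n_per_arm)
        se = sqrt(pooled * (1 - pooled) * 2 / n_per_arm) or 1e-9
        z = (p1 - p2) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        if p_value < 0.05:                 # only the "positive" result gets published
            published.append(p1 - p2)
    print(f"{len(published)} of {n_studies} studies were 'significant'")
    print(f"average |difference| among them: "
          f"{sum(abs(d) for d in published) / len(published):.2f}")

simulate_publication_bias()
```

Roughly 5% of these null studies come up "significant" by chance alone, and those are exactly the ones whose exaggerated differences make headlines.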

Funding sources can also subtly create built-in biases. Scientists bristle at the suggestion that taking money from drug companies can influence their results. Yet research has consistently shown that studies of new treatments or drugs are much more favorable when funded by drug makers than when funded by other sources, such as the federal government. “If you have a negative study, good luck asking the drug company to sponsor you in their speakers bureau,” says Dr. Alan M. Garber, director of the Center for Primary Care and Outcomes Research at Stanford University.

Earlier this month, for instance, when new research questioned whether cholesterol-lowering drugs reduced the risk of bone fractures from osteoporosis, some experts were skeptical about the findings because the study was financed by Procter & Gamble Pharmaceuticals, which makes Actonel, a rival osteoporosis drug.

Similarly, two weeks ago, a study in the Journal of the American Medical Assn. revealed that the herbal supplement St. John’s wort wasn’t effective in treating major depression. But the fine print at the end of the journal article, where researchers disclose their monetary ties, should have given readers pause. Pfizer, which makes the antidepressant Zoloft, not only underwrote some of this research but also had financial connections with many of the study’s investigators.


“There are lots of vested interests,” says UCLA’s Rosenstock. “And they’re not just economic--they can be emotional, too, where a scientist feels a stake in the outcome. People can use science for or against any point of view they want to promote.”

A case in point was a widely publicized 1996 study that suggested having an abortion increases a woman’s risk of developing breast cancer by 30%. Critics quickly challenged the validity of the study, noting that the absolute increase in risk was actually minuscule, and pointed out that Dr. Joel Brind, a biochemist at Baruch College in New York and the lead author of the study, had publicly spoken out against abortion.

Smoking Gun Can Be Elusive

Sometimes, it’s not possible to do the neatly designed double-blind studies. “We can’t do randomized trials for many of the questions that are of the greatest interest to the public,” says Dr. Warren S. Browner, scientific director of the research institute at California Pacific Medical Center in San Francisco. “You can’t assign someone randomly to be obese, or to smoke, or even to take up exercise.”

Consequently, scientists tend to rely on epidemiological research, which consists of vast studies that look at large groups or populations--often 25,000 or more subjects--over long periods of time, sometimes 30 or 40 years, to see if there’s a connection between such things as diet, exercise, or personal habits and health. Epidemiological research, however, is not definitive. It’s medical detective work: It generates circumstantial evidence, but not enough to convince a scientific jury, and more work in the research trenches is required to uncover the smoking gun.

When one of these mammoth studies yields an intriguing, statistically significant link, researchers study it further. The beta carotene controversy is a good example. Doctors conducting one epidemiological study noticed that people who ate foods rich in beta carotene had lower rates of cancer and heart disease. So they decided to take a closer look. But when the supplement was put under the microscope of more tightly controlled clinical trials, it showed no benefit. And in a study of a large group of smokers, it seemed to increase cancer risks.

“Not every lead generated by observational studies pans out,” says Garber of Stanford. “Sometimes, they lead to major advances, and sometimes, they take us down blind alleys.” The real world is messy, and all sorts of unforeseen--or confounding--variables can muddy the results, which is why you end up with these contradictory studies.


Often, it’s tricky teasing out the culprit from a host of likely suspects. We know that wealthier women are more prone to breast cancer, for example. But affluence in and of itself doesn’t cause breast cancer. Something about these women’s lifestyles raises their risks. Perhaps it’s because they’re more inclined to have gardeners who tend their lawns with toxic chemicals. Or maybe they use dry cleaners more often, exposing themselves to more chemicals. Or, because they’re better educated, they have fewer children and delay childbearing. Perhaps hormonal changes triggered by pregnancy later in life spark the abnormal cell growth that is the hallmark of cancer.

Right now, we don’t know the answer. “Wealth is a marker for a variable that we haven’t yet identified,” says Dickersin, who is also director of the New England Cochrane Center in Providence, which is part of an international nonprofit consortium that analyzes medical research. “Just because there’s an association doesn’t mean there’s a direct cause and effect.”

Even when there’s a significant link that seems patently obvious, it may take years to unmask the real villain. Several years ago, for instance, researchers noticed that women who smoke had much higher rates of cervical cancer. “The connection between smoking and cervical cancer seemed like a slam dunk,” says California Pacific’s Browner. Later on, however, other scientists discovered that the most common cause of cervical cancer was the human papillomavirus, a sexually transmitted microbe. The original researchers overlooked the fact that women smokers were also more sexually active--and thus more likely to be exposed to the virus, which was the real link.
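This kind of trap--a hidden factor driving both the habit and the disease--is easy to demonstrate. In the hypothetical sketch below (invented numbers, not data from any real study), the exposure does nothing, yet a crude comparison makes it look dangerous until the hidden factor is taken into account:

```python
import random

def confounding_demo(n=100_000, seed=3):
    """A hidden factor C drives both the 'exposure' and the disease;
    the exposure itself does nothing. All rates are invented."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        c = rng.random() < 0.3                            # hidden confounder
        exposed = rng.random() < (0.6 if c else 0.2)      # C makes exposure likelier
        disease = rng.random() < (0.10 if c else 0.01)    # only C causes disease
        rows.append((c, exposed, disease))

    def rate(subset):
        subset = list(subset)
        return sum(d for _, _, d in subset) / len(subset)

    # Crude comparison: the exposed look much sicker...
    print(f"crude: exposed {rate(r for r in rows if r[1]):.3f} "
          f"vs unexposed {rate(r for r in rows if not r[1]):.3f}")
    # ...but within each level of C, exposure makes no difference.
    for c_level in (True, False):
        e = rate(r for r in rows if r[0] == c_level and r[1])
        u = rate(r for r in rows if r[0] == c_level and not r[1])
        print(f"C={c_level}: exposed {e:.3f} vs unexposed {u:.3f}")

confounding_demo()
```

The crude numbers show the exposed group with roughly double the disease rate; stratifying by the hidden factor makes the "effect" vanish--just as the smoking link vanished once the virus was accounted for.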

Putting the Puzzle Together

Sometimes, seemingly contradictory evidence may all be valid. Two weeks ago, a study suggested that developing schizophrenia is associated with having an older father at the time of conception. Previous research had suggested this mental illness may be triggered by a virus; other studies suggest there’s a genetic component to the disease. “Schizophrenia might be caused by many factors, which means each of these reports could be true and they’re just different pieces of the puzzle,” says Murray of the Statistical Assessment Service. “Or they could be all wrong, and these are just fellow travelers.”

With all this uncertainty among experts, it’s not surprising that many health-conscious Americans feel like they’re trapped in the Woody Allen movie “Sleeper,” in which he plays a neurotic health food store owner who awakens 200 years in the future to discover all the things he thought were bad for him--cigarettes, chocolates, rich desserts--are actually beneficial.

The take-home message is that health care decisions shouldn’t be based on the results of one study, no matter how encouraging. It’s only when the cumulative evidence points to one inescapable conclusion that medical science feels comfortable making the connection.


Take the link between smoking and lung cancer. When studies first came out in the 1960s that suggested people who smoked were 200 to 800 times as likely to be stricken with lung cancer as nonsmokers, they were greeted with skepticism. But subsequent research showed that laboratory animals got cancer when they were habitually exposed to cigarette smoke, that cancer risks plummeted when people quit smoking, and on and on, until the case became airtight.
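Phrases like “X times as likely” refer to relative risk--the disease rate among the exposed divided by the rate among the unexposed. A tiny worked example, with invented counts rather than the actual study figures:

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Relative risk: the disease rate among the exposed divided by
    the rate among the unexposed."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# Hypothetical counts: 150 cancers among 10,000 smokers versus 10 among
# 10,000 nonsmokers gives a relative risk of 15 -- in this made-up
# cohort, smokers are 15 times as likely to be stricken.
print(relative_risk(150, 10_000, 10, 10_000))  # 15.0
```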

The wisest strategy is to take all this information with a grain of salt. “People are hungry for answers, and recognizing the complexity doesn’t make it easier for an individual,” says UCLA’s Rosenstock. “But the wheels of science grind slowly, and a healthy dose of skepticism, along with patience, is a good thing.”


Chain of Evidence

* The best evidence is the blinded clinical trial, in which researchers divide up the study group, with half getting the treatment and half receiving a dummy pill. Be sure there’s a large enough sample, at least a couple of hundred subjects, and that the study continued for a year or more before you start pestering your doctor for this new treatment.

* Epidemiological studies yield clues as to what might be causing an illness, but not definitive answers. In media reports, look for wording such as “observed,” “noticed” or “followed” a certain number of participants over several years. That’s a tip-off that the findings were based on this type of research.

* Laboratory research--experiments on lab animals or on tissue in a petri dish--serves only as a preliminary test of whether a treatment is toxic or has an effect on living tissue before it is studied in humans. New therapies often behave much differently in humans than they do in lab rats or petri dishes, and it can be decades before laboratory “breakthroughs” become available to patients.

* Expert opinion is considered the least reliable. “The big guns are quoted a lot, and what they say is taken at face value,” says Kay Dickersin of Brown University. “But that doesn’t mean they’re right.”



Where to Go for More Information

* The National Breast Cancer Coalition offers an intensive five-day educational program for breast cancer survivors that teaches them how to critically analyze scientific literature, familiarizes them with the language and concepts of science and helps them become more sophisticated consumers and advocates. For more information about this free workshop, contact the Los Angeles Breast Cancer Alliance at (310) 399-4453 or https://www.labca.com. Or visit the Web site of the national organization at https://www.stopbreastcancer.org.

* The Cochrane Collaboration is an international consortium of groups in Great Britain, Australia, Canada and the United States that analyzes and evaluates medical evidence. The San Francisco Cochrane Center is gearing up to do consumer education programs in the Bay Area and may do them in Southern California in the future. For more information on these workshops, call (415) 502-8204. For evaluations of studies, check the group’s Web site at https://www.ucsf.edu/sfcc or the Cochrane Collaboration Web site at https://www.cochrane.org.

* The Agency for Health Care Research and Quality is a U.S. government agency that evaluates medical research. For the latest information on prevention, treatments and other research, go to www.ahcpr.gov.

* To get more information than is provided in news reports, you can find medical studies in the Journal of the American Medical Assn. (www.ama-assn.org), the New England Journal of Medicine (www.nejm.org), Lancet (www.thelancet.com) and the National Library of Medicine (www.nlm.nih.gov).
