Animal studies riddled with bias, report finds

Too much good news in medicine may be bad news for science, according to a new study that suggests animal research is riddled with bias that allows too many treatments to advance to human trials.

Researchers examined data from thousands of experiments on animals for such neurological diseases as Parkinson’s, Alzheimer’s and stroke. They found that the published findings were too good to be true from a statistical perspective.

“There’s just too many significant results out there,” said Stanford University epidemiologist John Ioannidis, lead author of the study published online Tuesday in PLOS Biology.

The authors concluded that only eight of the 160 treatments they examined should have advanced to human trials, where many findings from animal studies have failed to reproduce their touted benefits.

Dozens of stroke remedies appeared promising based on the animal studies Ioannidis’ team examined. Yet when those treatments advanced to human trials, few showed much benefit, the study found.

“Something is obviously wrong,” Ioannidis said. “Apparently what is happening is there are just too many studies being done that are being selectively reported, either having negative results suppressed or having the analysis presented in a way that the results would look positive even though they are really negative.”

Ioannidis stopped far short of alleging fraud, instead blaming institutional practices that favor reports with “positive” findings.

The study is the latest reexamination of one of the core practices of medical research – testing hypotheses and remedies using random sampling, tight controls of variables, and rigorous statistical analysis. Those methods have brought untold advances in treatment, but also controversy.

Ioannidis has become the best-known practitioner of turning the power of statistics back on science. His 2005 paper, “Why Most Published Research Findings Are False,” which outlined many sources of bias in scientific literature, caused a stir in the research community.

Others have added to the controversy. A 2008 study by researchers at Oregon Health and Science University found that the published literature on antidepressants made it appear that 94% of trials had produced positive results, even though a Food and Drug Administration analysis showed that little more than half actually did.

Two psychiatrists from the University of Michigan and Yale University examined the studies presented at the American Psychiatric Assn.’s annual meetings in 2009 and 2010. They found that 97.4% of studies supported by the pharmaceutical industry reported positive findings, compared with 68.7% of those without drug-industry support.

In 2012, a former Amgen scientist revealed that the company’s researchers could confirm the results of only six of 53 so-called landmark studies in hematology and oncology.

That same year, GlaxoSmithKline LLC agreed to a $3-billion settlement of whistle-blower allegations that included suppressing data on possible adverse effects of the diabetes drug Avandia and helping publish a study that misrepresented the dangers and benefits of the antidepressant Paxil for use among children and adolescents. The company did not admit wrongdoing in the settlement.

In the latest study, Ioannidis and his colleagues examined “meta-analysis” reports that covered data from 4,445 animal studies, looking for what proportion reported statistically significant or “positive” results.

By looking more closely at the most precise studies, the team calculated how much of the published body of work should have come to statistically significant conclusions. The team found evidence of such “excess significance” in 31% of these analyses.
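The logic of such an “excess significance” check can be illustrated with a short, hypothetical sketch. Assuming a set of two-group animal experiments with known sample sizes, and taking the effect size from the most precise study as the plausible true effect, one can compute each study’s statistical power, add those powers to get the number of significant results that would be expected, and compare that with the number actually reported. The sample sizes, effect size, function names and the simple binomial comparison below are illustrative assumptions, not figures or code from the paper.

```python
# A minimal sketch of an excess-significance check, assuming two-group
# comparisons and a plausible true effect taken from the most precise study.
# All names and numbers here are hypothetical; requires scipy >= 1.7.
import numpy as np
from scipy import stats

def power_two_sample(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test to detect
    `effect_size` (Cohen's d) with `n_per_group` animals in each arm."""
    se = np.sqrt(2.0 / n_per_group)          # standard error of the mean difference, in d units
    z_crit = stats.norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_shift = effect_size / se               # expected z-statistic under the assumed effect
    return stats.norm.sf(z_crit - z_shift) + stats.norm.cdf(-z_crit - z_shift)

def excess_significance(observed_significant, ns_per_group, plausible_effect, alpha=0.05):
    """Compare the observed count of 'positive' studies with the count expected
    from each study's power, using a one-sided binomial test on the mean power
    (a simplification; the true reference distribution is Poisson-binomial)."""
    powers = [power_two_sample(plausible_effect, n, alpha) for n in ns_per_group]
    expected = sum(powers)
    mean_power = expected / len(powers)
    p_excess = stats.binomtest(observed_significant, len(powers),
                               mean_power, alternative="greater").pvalue
    return expected, p_excess

# Hypothetical meta-analysis: 20 studies of 10 animals per group, 15 of which
# reported significant results, with a plausible true effect of d = 0.5.
expected, p = excess_significance(15, [10] * 20, 0.5)
print(f"expected significant: {expected:.1f} of 20, excess-significance p = {p:.4f}")
```

In this toy example, only about four of the 20 hypothetical studies would be expected to reach significance given their size, so 15 reported positives would be flagged as excess significance.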

The biases alleged by Ioannidis’ team can’t be blamed on a single culprit. That may make them more troubling: institutional practices have created rewards and incentives that may be driving the so-called significance bias, Ioannidis said.

Publishers and the peer-review boards that examine studies, the philanthropies that fund them, and the universities and researchers whose prestige depends on such work can create a systemic bias toward studies with significant findings, said Ioannidis, who acknowledged that he is part of that system.

“I am an investigator; at the same time I review applications, I review papers, I’m an editor of one peer-review journal and a member of the editorial board of another 30 peer-reviewed journals,” Ioannidis said. “So, if anything, I am to blame in all of my roles in which I try to find significant results.”

Ioannidis suggested, among other measures, stricter guidelines for animal research and an online registry of study plans, methodology and results, regardless of whether the work winds up published.

“I think the take-home message would be that we can do better,” Ioannidis said. “It doesn’t mean that science cannot do well. I think it means that we have ways to identify where the problems are and we have ways to solve these problems.”
