In studies of human populations, don’t believe everything you read
All those studies linking broccoli, fiber, blueberries, wine, fat, sugar, this, that, with a greater or lower risk of cancer, heart disease, Parkinson’s, Alzheimer’s, headache, earache, you name it. Perhaps you sometimes wonder: How big an effect are we talking about? Or even: Should I believe this at all? Those questions are at the heart of a recent NPR interview with statistician S. Stanley Young.
The interview starts out with radio news segments -- not sure if they’re real or mock-ups -- on two items that seem to come up often in media reports (the former especially around Valentine’s Day): chocolate and coffee. (‘People who eat dark chocolate have less heart disease.’ ‘People who drink coffee are less likely to develop Alzheimer’s.’ Etcetera.)
The problem, Young says in the NPR interview, is that if you collect lots of information about people’s habits and number-crunch the bejabbers out of it, you’ll find associations of some kind. And then, ‘If you can put a good story around the probably random finding that you found, you got a paper,’ he says.
The studies often end up in very reputable journals, he says. One of his favorites was a report -- published in the respected Proceedings of the Royal Society B -- that people who ate breakfast cereal in the morning around the time of conception were more likely to have baby boys than baby girls.
Young says the problem lies with observational studies -- the kind where you go into a population and collect information about the people’s habits and diseases and such, then try to see what lifestyle factors are linked with what outcomes. Young’s point is that many of the associations kicked up by such analyses are just plain random. (And even if they aren’t, the associations certainly aren’t proof of cause and effect, though that is not a point he makes in this interview.)
Much of the data that scientists accumulate about human habits is of exactly this kind, because randomized clinical trials -- the kind in which a group of people is randomly assigned to take either a drug or a placebo, with everything else kept the same -- are very expensive to conduct. You might see the money fall into place for a potential blockbuster drug -- not for broccoli, though.
Here’s a technical paper on the issue as Young sees it, taken from a 2008 lecture he gave, which I accessed at his website. Among other things, he states: ‘The NIH has funded a large number of randomized clinical trials testing the claims coming from observational studies. Of 20 claims coming from observational studies only one replicated when tested in [randomized clinical trials]. The overall picture is one of crisis.’
Here’s a 2007 article we ran on the issue, written by freelance writer Andreas Von Bubnoff (Young is one of the scientists quoted in the article). In it, you’ll see what epidemiologists have to say in defense of their science.
-- Rosie Mestel