January 21, 2013

Journalist on science journalism

From Columbia Journalism Review (via Tony Cooper), a good long piece on science journalism by David H. Freedman (whom Google seems to confuse with statistician David A. Freedman):

What is a science journalist’s responsibility to openly question findings from highly credentialed scientists and trusted journals? There can only be one answer: The responsibility is large, and it clearly has been neglected. It’s not nearly enough to include in news reports the few mild qualifications attached to any study (“the study wasn’t large,” “the effect was modest,” “some subjects withdrew from the study partway through it”). Readers ought to be alerted, as a matter of course, to the fact that wrongness is embedded in the entire research system, and that few medical research findings ought to be considered completely reliable, regardless of the type of study, who conducted it, where it was published, or who says it’s a good study.

Worse still, health journalists are taking advantage of the wrongness problem. Presented with a range of conflicting findings for almost any interesting question, reporters are free to pick those that back up their preferred thesis—typically the exciting, controversial idea that their editors are counting on. When a reporter, for whatever reasons, wants to demonstrate that a particular type of diet works better than others—or that diets never work—there is a wealth of studies that will back him or her up, never mind all those other studies that have found exactly the opposite (or the studies can be mentioned, then explained away as “flawed”). For “balance,” just throw in a quote or two from a scientist whose opinion strays a bit from the thesis, then drown those quotes out with supportive quotes and more study findings.

I think the author is unduly negative about medical science. Part of the problem is that published claims of associations are expected to have a fairly high false positive rate, and there's not necessarily anything wrong with that as long as everyone understands the situation. Lowering the false positive rate would require either much larger sample sizes or a much higher false negative rate, and the coordination needed to assemble samples large enough to keep the error rate low is prohibitive in most settings (phase III clinical trials and modern genome-wide association studies being two partial exceptions). It's still true that most interesting or controversial findings about nutrition are wrong, that journalists should know they are mostly wrong, and that they should write as if they know this. Not reprinting Daily Mail stories would probably help, too.
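To put rough numbers on that trade-off, here is a minimal sketch (mine, not Freedman's; the prior probability, power, and effect size are made-up illustrative values) using the standard normal-approximation formulas:

```python
# A rough sketch with made-up numbers, illustrating (a) why a high false
# positive rate among published associations is expected, and (b) what a
# stricter significance threshold costs in sample size.
from math import ceil
from scipy.stats import norm

def fraction_false(prior_true, alpha, power):
    """Fraction of 'significant' findings that are false positives,
    given the prior probability that a tested association is real."""
    true_hits = prior_true * power
    false_hits = (1 - prior_true) * alpha
    return false_hits / (true_hits + false_hits)

def n_per_group(effect_size, alpha, power):
    """Per-group sample size for a two-sided two-sample z-test of a
    standardized mean difference of effect_size (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Assumed: 1 in 10 tested associations is real, studies run at 50% power.
# Then nearly half of the significant results are false positives:
print(round(fraction_false(prior_true=0.1, alpha=0.05, power=0.5), 2))  # 0.47

# Tightening alpha from 0.05 to 0.005 (80% power, smallish effect of 0.2
# standard deviations) pushes the required per-group sample size up sharply:
print(n_per_group(0.2, alpha=0.05, power=0.8))   # 393
print(n_per_group(0.2, alpha=0.005, power=0.8))  # 666
```

With these assumed but not unrealistic numbers, cutting the false positive rate tenfold costs about 70% more participants per group, which is the coordination problem in miniature.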

 


Thomas Lumley (@tslumley) is Professor of Biostatistics at the University of Auckland. His research interests include semiparametric models, survey sampling, statistical computing, foundations of statistics, and whatever methodological problems his medical collaborators come up with. He also blogs at Biased and Inefficient.