July 15, 2015

A modest proposal

Positive-looking results are more likely to be published in scientific journals, much more likely to get press releases, and hugely more likely to end up in the news. This trend is exaggerated when the size of the association is large. The most likely way to get a large association is to do a very small study and be lucky enough (by chance or sloppiness) to overestimate the strength of the association, so the news selects for small, early-stage, and poorly done research.

One way to reduce this bias would be for media to quote the lower (less impressive) end of the uncertainty interval (confidence interval, credibility interval) rather than quoting the midpoint of the interval as scientists usually do. In small studies, the lower end of the interval will be close to no association, even if the midpoint of the interval is a strong association. In large, well-designed studies the change in practice would have little impact.
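To make the contrast concrete, here is a minimal sketch in Python, assuming the usual normal-approximation interval for a relative risk and some invented numbers: two studies with the same point estimate (a tripling of risk), one with 50 people per group and one with 5000.

```python
import math

def rr_interval(events1, n1, events2, n2, z=1.96):
    """Normal-approximation 95% CI for a relative risk (on the log scale)."""
    rr = (events1 / n1) / (events2 / n2)
    # Standard error of log(RR): the usual large-sample formula
    se = math.sqrt(1/events1 - 1/n1 + 1/events2 - 1/n2)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return lo, rr, hi

# Small study: 6 vs 2 events in groups of 50; point estimate RR = 3
print(rr_interval(6, 50, 2, 50))          # lower end ~0.64: consistent with no effect
# Large study, same proportions: 600 vs 200 events in groups of 5000
print(rr_interval(600, 5000, 200, 5000))  # lower end ~2.6: still a strong effect
```

Quoting the lower end, the small study says "possibly nothing", while the large study still says "about a tripling of risk", which is exactly the behaviour the proposal is after.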

Isn’t that biased?

If you assume that in most cases the association being tested is smaller than the uncertainty in the experiment (i.e., close to zero), and that positive results are more likely to make the news, then it's less biased than using the middle of the interval.
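That claim can be checked with a quick simulation. In this hedged sketch (Python; the sample size, a true effect of exactly zero, and the selection rule are all assumptions for illustration), only studies whose 95% interval lies entirely above zero "make the news":

```python
import random, statistics

random.seed(1)
n = 20
se = 1 / n**0.5                      # standard error for a small study
published_mid, published_lo = [], []
for _ in range(100_000):
    est = random.gauss(0, se)        # unbiased estimate of a true effect of 0
    lo = est - 1.96 * se             # lower end of the 95% interval
    if lo > 0:                       # "positive" result: interval excludes 0
        published_mid.append(est)
        published_lo.append(lo)

print(statistics.mean(published_mid))  # roughly 0.5: biased well above the true 0
print(statistics.mean(published_lo))   # roughly 0.08: much closer to the truth
```

Conditional on selection, the average published midpoint sits far above the true value of zero, while the average published lower end is much closer to it; under this kind of selection the lower end is the less biased summary.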

Scientists wouldn’t be able to use tests that don’t produce confidence intervals.

How sad. Anyway, they would; they just wouldn’t be able to get their press releases into the papers.

Press releases often don’t report uncertainty estimates.

So those ones wouldn’t get in the papers. The silver linings are just piling up.


Thomas Lumley (@tslumley) is Professor of Biostatistics at the University of Auckland. His research interests include semiparametric models, survey sampling, statistical computing, foundations of statistics, and whatever methodological problems his medical collaborators come up with. He also blogs at Biased and Inefficient.

Comments

  • Joseph Delaney

    I am also unsure whether bias should be an issue in household news reporting. This is especially true if we think there is publication bias due to the challenges of publishing null studies.

    This proposal would focus the media on associations that we think are very unlikely to be null (or on extremely sloppy work that creates large associations, like immortal time bias). But the sloppy work could be called out more directly (another upside?) and it would make things look less alarming.

    I agree with you — it isn’t clear that there is a major downside to this approach, except perhaps to nudge people back towards higher quality data and better study design.


  • Simon Arnold

    One topical area where this applies is the reporting of out-of-sample projections from climate models.

    Having said that, I tend to the view that too many papers inappropriately report what are really just descriptive statistics with things called confidence limits and p tests. One needs a hypothesis and independent data for these to start to have any meaning, but they seem to be essential trappings for getting anything published.

    So perhaps ban any publications with confidence limits and significance tests :)
