February 6, 2013

The checklist: a worked example

The Herald has a story about increased stroke risk in young adults using cannabis.  Let’s run it past the JOHN HUMPHRYS checklist:

  • Just observing people:  X
  • Original information unavailable:  ?. The abstract should be available, but I can’t find it on the conference website — it will probably be out soon.  It looks as if this story may have leaked early.
  • Headline exaggerated:  The headline is fine.
  • No independent comment: X.
  • Higher risk: X.  Absolute risks are not given, and are extremely low in “young adults”.
  • Unjustified advice: ?.  The advice that cannabis smoking is probably bad for your health is justified, but not by this study.
  • Might be explained by something else: X.  Tobacco, for example, which is mentioned in the story but dismissed without justification.
  • Public relations puff: no real problem here.
  • Half the picture: this one’s ok, though the publication bias issue could have been mentioned — this is the first study to find a link, but was it the first to look for one?
  • Relevance unclear: this isn’t a problem.
  • Yet another single study: X
  • Small: X.  Only 160 strokes, and only 12 or 13 in cannabis users.
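To see why that last point matters, here is a rough back-of-the-envelope sketch (the story gives no absolute figures, so the count of 12 is the one from the checklist above, and the Poisson approximation is my assumption): with only about a dozen events in the exposed group, the event count alone makes any estimated rate quite noisy, before even worrying about confounding.

```python
import math

# Approximate 95% interval for a Poisson count n, on the log scale:
# n * exp(±1.96 / sqrt(n)).  With n = 12 strokes among cannabis users
# (a count taken from the checklist above), the interval spans roughly
# a threefold range, so any rate based on it is very imprecise.
n = 12
lo = n * math.exp(-1.96 / math.sqrt(n))
hi = n * math.exp(1.96 / math.sqrt(n))
print(f"Observed {n} events; 95% interval roughly {lo:.1f} to {hi:.1f}")
```

The point is not the exact interval but its width: a threefold range of uncertainty from the event count alone, which is why a single small study like this can only be suggestive.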

It’s not that all stories should pass all checks on the list — sometimes small observational studies can be important or at least interesting.  The problem (as with the Bechdel test) is that such a small fraction of stories pass the checks.


[Update: forgot the link originally]


Thomas Lumley (@tslumley) is Professor of Biostatistics at the University of Auckland. His research interests include semiparametric models, survey sampling, statistical computing, foundations of statistics, and whatever methodological problems his medical collaborators come up with. He also blogs at Biased and Inefficient.