Posts filed under Evidence (71)

August 2, 2014

When in doubt, randomise

The Cochrane Collaboration, the massive global conspiracy to summarise and make available the results of clinical trials, has developed ‘Plain Language Summaries’ to make the results easier to understand (they hope).

There’s nothing terribly notable about a plain-language initiative; they happen all the time. What is unusual is that the Cochrane Collaboration tested the plain-language summaries in a randomised comparison with the old format. The abstract of their research paper (not, alas, itself a plain-language summary) says

With the new PLS, more participants understood the benefits and harms and quality of evidence (53% vs. 18%, P < 0.001); more answered each of the five questions correctly (P ≤ 0.001 for four questions); and they answered more questions correctly, median 3 (interquartile range [IQR]: 1–4) vs. 1 (IQR: 0–1), P < 0.001. Better understanding was independent of education level. More participants found information in the new PLS reliable, easy to find, easy to understand, and presented in a way that helped make decisions. Overall, participants preferred the new PLS.

That is, it worked. More importantly, they know it worked.
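
The P-values in that comparison come from ordinary tests of proportions. As a rough illustration only: the group sizes below are an assumption (100 per arm; the real numbers are in the paper), but the reported 53% vs 18% gives a comfortably tiny P-value at any plausible sample size.

```python
import math

# Assumed group sizes, for illustration only; the actual numbers
# are in the Cochrane research paper.
n_new, n_old = 100, 100
p_new, p_old = 0.53, 0.18    # reported proportions who understood

x_new = round(p_new * n_new) # implied counts
x_old = round(p_old * n_old)

# Standard two-proportion z-test with a pooled variance estimate
p_pool = (x_new + x_old) / (n_new + n_old)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_new + 1 / n_old))
z = (p_new - p_old) / se

print(f"z = {z:.2f}")        # well past 3.29, so P < 0.001
```

Any z beyond about 3.29 corresponds to a two-sided P below 0.001, which is why the abstract can quote P < 0.001 without further ceremony.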

July 30, 2014

If you can explain anything, it proves nothing

An excellent piece from sports site Grantland (via Brendan Nyhan), on finding explanations for random noise and regression to the mean.

As a demonstration, they took ten baseball batters and ten pitchers who had apparently improved over the season so far, and searched the internet for news that would allow them to find an explanation. They got pretty good explanations for all twenty. Looking at past seasons, this sort of short-term improvement almost always turns out to be random noise, despite the convincing stories.

Having a good explanation for a trend feels like convincing evidence the trend is real. It feels that way to statisticians as well, but it isn’t true.
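
A quick simulation makes the point without any baseball data: give every ‘player’ exactly the same true ability, pick the ten best first-half performers, and watch them sink back toward the population average in the second half. The numbers below (200 players, a 27% success rate, 50 opportunities per half) are purely illustrative.

```python
import random

random.seed(1)

N_PLAYERS = 200
N_GAMES = 50       # opportunities per half-season

def half_season_avg():
    # Every player has identical true ability: success probability 0.27
    return sum(random.random() < 0.27 for _ in range(N_GAMES)) / N_GAMES

first = [half_season_avg() for _ in range(N_PLAYERS)]
second = [half_season_avg() for _ in range(N_PLAYERS)]

# The ten apparent stars of the first half...
stars = sorted(range(N_PLAYERS), key=lambda i: first[i], reverse=True)[:10]

# ...regress toward 0.27 in the second half, with no change in ability at all
star_first = sum(first[i] for i in stars) / 10
star_second = sum(second[i] for i in stars) / 10
print(f"stars, first half:  {star_first:.3f}")
print(f"stars, second half: {star_second:.3f}")
```

Each star’s improvement would support a perfectly plausible story, and every one of them is noise by construction.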

It’s traditional at this point to come up with evolutionary psychology explanations for why people are so good at over-interpreting trends, but I hope the circularity of that approach is obvious.

July 29, 2014

A treatment for unsubstantiated claims

A couple of months ago, I wrote about a One News story on ‘drinkable sunscreen’.

In New Zealand, it’s very easy to make complaints about ads that violate advertising standards, for example by making unsubstantiated therapeutic claims. Mark Hanna submitted a complaint about the NZ website of the company selling the stuff.

The decision has been released: the complaint was upheld. Mark gives more description on his blog.

In many countries there is no feasible way for individuals to have this sort of impact. In the USA, for example, it’s almost impossible to do anything about misleading or unsubstantiated health claims, to the extent that summoning a celebrity to be humiliated publicly by a Senate panel may be the best option.

It can at least produce great television: John Oliver’s summary of the Dr Oz event is viciously hilarious.

July 14, 2014

Multiple testing, evidence, and football

There’s a Twitter account, @FifNdhs, that has five tweets, posted well before today’s game:

  • Prove FIFA is corrupt
  • Tomorrow’s scoreline will be Germany win 1-0
  • Germany will win at ET
  • Gotze will score
  • There will be a goal in the second half of ET

What’s the chance of getting these four predictions right, if the game isn’t rigged?

Pretty good, actually. None of these events is improbable on its own, and  Twitter lets you delete tweets and delete accounts. If you set up several accounts, posted a few dozen tweets on each, describing plausible events, and then deleted the unsuccessful ones, you could easily come up with an implausible-sounding remainder.
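
A rough simulation of that scheme (the numbers are assumptions, nothing measured about @FifNdhs): post 30 plausible predictions per account, each with a 40% chance of coming true, then delete the misses.

```python
import random

random.seed(42)

N_TWEETS = 30      # tweets posted per account (assumed)
P_TRUE = 0.4       # chance each plausible prediction comes true (assumed)

def surviving_tweets():
    """Post N_TWEETS predictions, delete the ones that miss."""
    return sum(random.random() < P_TRUE for _ in range(N_TWEETS))

runs = [surviving_tweets() for _ in range(10_000)]
avg = sum(runs) / len(runs)

# On average about a dozen 'prophetic' tweets survive on every account...
print(f"average survivors per account: {avg:.1f}")

# ...even though the naive probability of four named predictions
# all coming true looks impressively small.
print(f"naive P(4 named predictions correct) = {P_TRUE ** 4:.4f}")
```

The surviving record is guaranteed to look impressive, which is exactly why it isn’t.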

Twitter can prove you made a prediction, but it can’t prove you didn’t also make a different one, so tweets are only good evidence of prediction if either they were widely retweeted before the events happened, or the event described in a single tweet is massively improbable.

If @FifNdhs had predicted a 7-1 victory for Germany over Brazil in the semifinal, that would have been worth paying attention to. Gotze scoring, not so much.

May 22, 2014

Briefly

Health and evidence edition

  • Evidently Cochrane, a blog with non-technical explanations of Cochrane Collaboration review results
  • Design process for a graphic illustrating the impact of motorbike helmet laws.  In contrast to bicycle helmet laws, laws for motorbikes do have a visible effect on death statistics
  • Stuff has quite a good story on alcohol in New Zealand.
  • The British Association of Dermatologists responds to ‘drinkable sunscreen’.
  • 3News piece on Auckland research into extracts of the lingzhi mushroom. Nice to see local science, and the story was reasonably balanced, with Shaun Holt pointing out that this is not even approaching being anywhere near evidence that drinking the stuff would do more good than harm.

May 8, 2014

Think I’ll go eat worms

This table is from a University of California alumni magazine

[screenshot of the table from the magazine]

Jeff Leek argues at Simply Statistics that the big problem with Big Data is they, too, forgot statistics.

May 2, 2014

Mammography ping-pong

Hilda Bastian at Scientific American

It’s like a lot of evidence ping-pong matches. There are teams with strongly held opinions at the table, smashing away at opposing arguments based on different interpretations of the same data.

Meanwhile, women are being advised to go to their doctors if they have questions. And their doctors may be just as swayed by extremist views and no more on top of the science than anyone else.

She explains where the different views and numbers come from, and why the headlines keep changing.

April 25, 2014

Sham vs controlled studies: Thomas Lumley’s latest Listener column

How can a sham medical procedure provide huge benefits? And why do we still do them in a world of randomised, blinded trials? Thomas Lumley explores the issue in his latest New Zealand Listener column. Click here.

April 23, 2014

Citation needed

I couldn’t have put it less clearly myself, but if you follow the link, you do get to one of those tall, skinny totem-pole infographics, and the relevant chunk of it says:

[image: excerpt from the infographic]

What it doesn’t do is tell you why they believe this. Neither does anything else on the web page, or, as far as I can tell, the whole set of pages on distracted driving.

A bit of Googling turns up this New York Times story from 2009

The new study, which entailed outfitting the cabs of long-haul trucks with video cameras over 18 months, found that when the drivers texted, their collision risk was 23 times greater than when not texting.

That sounds fairly convincing, though the story also mentions that a study of college students using driving simulators found only an 8-fold increase, and notes that texting might well be more dangerous when driving a truck than a car.

The New York Times doesn’t link, but with the name of the principal researcher we can find the research report, and Table 17, on page 44, does indeed include the number 23. There’s a pretty huge margin of error: the 95% confidence interval goes down to 9.7. More importantly, though, the table header says “Likelihood of a Safety-Critical Event”.
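
If, as is usual for risk ratios, that interval was computed on the log scale, the upper limit not quoted in the news story can be sketched by symmetry around log 23. This is an assumption about the report’s method; the actual upper limit is in Table 17.

```python
import math

estimate = 23.0   # point estimate from Table 17
lower = 9.7       # reported lower 95% confidence limit

# Risk-ratio confidence intervals are usually symmetric on the log
# scale, so log(upper) - log(estimate) = log(estimate) - log(lower).
upper = math.exp(2 * math.log(estimate) - math.log(lower))
print(f"implied upper limit: about {upper:.0f}")   # about 55
```

So “23 times” could, on the same data, just as easily have been reported as “10 times” or “50 times”: the interval is enormous.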

A “Safety-Critical Event” could be a crash, but it could also be a near-crash, or a situation where someone else needed to alter their behaviour to avoid a crash, or an unintentional lane change. Of the 4452 “safety-critical events”, 21 were crashes.  There were 31 safety-critical events observed during texting.

So, the figure of 23 is not actually for crashes, but it is at least for something relevant, measured carefully. Texting, as would be pretty obvious, isn’t a good thing to do when you’re driving. And even if you’re totally rad, hip, and cool like the police tweetwallah, it’s ok to link. Pretend you’re part of the Wikipedia generation or something.

March 26, 2014

Are web-based student drinking interventions worthwhile?

Heavy drinking, and the societal harm it causes, is a big issue and attracts a lot of media and scholarly attention (and StatsChat’s, too). So we were interested to see today’s new release from the Journal of the American Medical Association. It describes a double-blind, parallel-group, individually randomised trial of moderate to heavy student drinkers from seven of our eight universities, testing whether a web-based alcohol screening and intervention programme reduced their unhealthy drinking behaviour.

And the short answer? Not really. But if they identified as Māori, the answer was … yes, with a caveat. More on that in a moment.

Statistician Nicholas Horton and colleagues used an online questionnaire to identify students at Otago, Auckland, Canterbury, Victoria, Lincoln, Massey, and Waikato who had unhealthy drinking habits. Half the students were randomly assigned to receive personalised feedback on their answers; the other half, the controls, received no feedback. Five months later, researchers followed up with the students on certain aspects of their drinking.

The overall result? “The intervention group tended to have less drinking and fewer problems than the control group, but the effects were relatively modest,” says Professor Horton. The take-away message: a web-based alcohol screening and intervention programme had little effect on unhealthy drinking among New Zealand uni students. Restrictions on alcohol availability and promotion are still needed if we really want to tackle alcohol abuse.

But among Māori students, who comprise 10% of our national uni population, those receiving intervention were found to drink 22% less alcohol and to experience 19% fewer alcohol-related academic problems at the five-month follow-up. The paper suggests that Māori students are possibly more heavily influenced by social-norm feedback than non-Māori students. “Māori students may have a stronger group identity, enhanced by being a small minority in the university setting.” But the paper warns that the difference could also be due to chance, “underscoring the need to undertake replication and further studies evaluating web-based alcohol screening and brief intervention in full-scale effectiveness trials.”
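
The chance warning is easy to illustrate: simulate trials in which the intervention does nothing for anyone, then run the test only in a 10%-sized subgroup, as here. A nominally significant subgroup ‘effect’ still turns up about one time in twenty. The sample sizes below are assumptions, not the trial’s.

```python
import random
import statistics

random.seed(7)

N = 1000           # assumed total trial size; the subgroup is 10% of it
SUB = N // 10      # 100 in the subgroup, 50 per arm

def subgroup_z():
    """One null trial: the intervention does nothing; test the subgroup only."""
    treat = [random.gauss(0, 1) for _ in range(SUB // 2)]
    ctrl = [random.gauss(0, 1) for _ in range(SUB // 2)]
    se = (statistics.pvariance(treat) / len(treat) +
          statistics.pvariance(ctrl) / len(ctrl)) ** 0.5
    return (statistics.mean(treat) - statistics.mean(ctrl)) / se

rate = sum(abs(subgroup_z()) > 1.96 for _ in range(2000)) / 2000
print(f"'significant' subgroup effects under the null: {rate:.1%}")
```

And with several subgroups examined, the chance of at least one spurious finding climbs well past one in twenty, which is exactly why the paper asks for replication in full-scale effectiveness trials.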

The paper is here. Read the JAMA editorial here.