Posts filed under Evidence (69)

July 29, 2014

A treatment for unsubstantiated claims

A couple of months ago, I wrote about a One News story on ‘drinkable sunscreen’.

In New Zealand, it’s very easy to make complaints about ads that violate advertising standards, for example by making unsubstantiated therapeutic claims. Mark Hanna submitted a complaint about the NZ website of the company selling the stuff.

The decision has been released: the complaint was upheld. Mark gives more description on his blog.

In many countries there is no feasible way for individuals to have this sort of impact. In the USA, for example, it’s almost impossible to do anything about misleading or unsubstantiated health claims, to the extent that summoning a celebrity to be humiliated publicly by a Senate panel may be the best option.

It can at least produce great television: John Oliver’s summary of the Dr Oz event is viciously hilarious.

July 14, 2014

Multiple testing, evidence, and football

There’s a Twitter account, @FifNdhs, that has five tweets, posted well before today’s game:

  • Prove FIFA is corrupt
  • Tomorrow’s scoreline will be Germany win 1-0
  • Germany will win at ET
  • Gotze will score
  • There will be a goal in the second half of ET

What’s the chance of getting these four predictions (the last four tweets) right, if the game isn’t rigged?

Pretty good, actually. None of these events is improbable on its own, and Twitter lets you delete tweets and delete accounts. If you set up several accounts, posted a few dozen tweets on each, describing plausible events, and then deleted the unsuccessful ones, you could easily come up with an implausible-sounding remainder.
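
Here’s a minimal sketch of that strategy in Python (the 30% success rate per prediction, and the other numbers, are assumptions purely for illustration):

    import random

    random.seed(1)

    # Assumed numbers, for illustration only: 10 accounts, each posting 30
    # plausible predictions, each with a 30% chance of coming true.
    N_ACCOUNTS, TWEETS_PER_ACCOUNT, P_CORRECT = 10, 30, 0.3

    def correct_tweets():
        """Predictions on one account that come true (the failures get deleted)."""
        return sum(random.random() < P_CORRECT for _ in range(TWEETS_PER_ACCOUNT))

    survivors = [correct_tweets() for _ in range(N_ACCOUNTS)]
    print(survivors)                       # e.g. [7, 11, 8, ...]
    print(all(n >= 4 for n in survivors))  # True almost always

With numbers like these, essentially every account is left with at least a handful of ‘correct’ predictions once the failures are deleted.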

Twitter can prove you made a prediction, but it can’t prove you didn’t also make a different one, so it’s only good evidence of a prediction if either the predictions were widely retweeted before the events happened, or the event described in a single tweet is massively improbable.

If @FifNdhs had predicted a 7-1 victory for Germany over Brazil in the semifinal, that would have been worth paying attention to. Gotze scoring, not so much.

May 22, 2014

Briefly

Health and evidence edition

  • Evidently Cochrane, a blog with non-technical explanations of Cochrane Collaboration review results
  • Design process for a graphic illustrating the impact of motorbike helmet laws.  In contrast to bicycle helmet laws, laws for motorbikes do have a visible effect on death statistics
  • Stuff has quite a good story on alcohol in New Zealand.
  • The British Association of Dermatologists responds to ‘drinkable sunscreen’.
  • 3News piece on Auckland research into extracts of the lingzhi mushroom. Nice to see local science, and the story was reasonably balanced, with Shaun Holt pointing out that this is nowhere near evidence that drinking the stuff would do more good than harm.

May 8, 2014

Think I’ll go eat worms

This table is from a University of California alumni magazine:

[screenshot: the table]

Jeff Leek argues at Simply Statistics that the big problem with Big Data is that they, too, forgot statistics.

May 2, 2014

Mammography ping-pong

Hilda Bastian at Scientific American:

It’s like a lot of evidence ping-pong matches. There are teams with strongly held opinions at the table, smashing away at opposing arguments based on different interpretations of the same data.

Meanwhile, women are being advised to go to their doctors if they have questions. And their doctors may be just as swayed by extremist views and no more on top of the science than anyone else.

She explains where the different views and numbers come from, and why the headlines keep changing.

April 25, 2014

Sham vs controlled studies: Thomas Lumley’s latest Listener column

How can a sham medical procedure provide huge benefits? And why do we still do them in a world of randomised, blinded trials? Thomas Lumley explores the issue in his latest New Zealand Listener column. Click here.

April 23, 2014

Citation needed

I couldn’t have put it less clearly myself, but if you follow the link, you do get to one of those tall, skinny totem-pole infographics, and the relevant chunk of it says:

[image: excerpt from the infographic]

What it doesn’t do is tell you why they believe this. Neither does anything else on the web page, or, as far as I can tell, the whole set of pages on distracted driving.

A bit of Googling turns up this New York Times story from 2009:

The new study, which entailed outfitting the cabs of long-haul trucks with video cameras over 18 months, found that when the drivers texted, their collision risk was 23 times greater than when not texting.

That sounds fairly convincing, though the story also mentions that a study of college students using driving simulators found only an 8-fold increase, and notes that texting might well be more dangerous when driving a truck than a car.

The New York Times doesn’t link, but with the name of the principal researcher we can find the research report, and Table 17, on page 44, does indeed include the number 23. There’s a pretty huge margin of error: the 95% confidence interval goes down to 9.7. More importantly, though, the table header says “Likelihood of a Safety-Critical Event”.

A “Safety-Critical Event” could be a crash, but it could also be a near-crash, or a situation where someone else needed to alter their behaviour to avoid a crash, or an unintentional lane change. Of the 4452 “safety-critical events”, 21 were crashes.  There were 31 safety-critical events observed during texting.
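
Two back-of-envelope checks on those numbers, as a sketch (the log-scale symmetry in the first step is an assumption, since only the lower limit is quoted above):

    # 1. A ratio of 23 with a 95% interval reaching down to 9.7: ratio CIs
    #    are usually roughly symmetric on the log scale, so the implied
    #    upper limit (an assumption -- the report has the exact value):
    estimate, lower = 23.0, 9.7
    upper = estimate ** 2 / lower  # exp(2*log(est) - log(lower))
    print(round(upper, 1))  # about 54.5: a very wide interval

    # 2. If crashes occurred at the same rate among the 31 texting events
    #    as among all 4452 safety-critical events (21 crashes), we'd expect:
    expected_crashes = 31 * 21 / 4452
    print(round(expected_crashes, 2))  # about 0.15 crashes during texting

In other words, the study can hardly have observed any crashes during texting at all, which is why the estimate has to be for the broader outcome.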

So, the figure of 23 is not actually for crashes, but it is at least for something relevant, measured carefully. Texting, as would be pretty obvious, isn’t a good thing to do when you’re driving. And even if you’re totally rad, hip, and cool like the police tweetwallah, it’s OK to link. Pretend you’re part of the Wikipedia generation or something.

March 26, 2014

Are web-based student drinking interventions worthwhile?

Heavy drinking, and the societal harm it causes, is a big issue and attracts a lot of media and scholarly attention (and StatsChat’s, too). So we were interested to see today’s new release from the Journal of the American Medical Association. It describes a double-blind, parallel-group, individually randomised trial that studied moderate to heavy student drinkers from seven of our eight universities to see if a web-based alcohol screening and intervention programme reduced their unhealthy drinking behaviour.

And the short answer? Not really. But if they identified as Māori, the answer was … yes, with a caveat. More on that in a moment.

Statistician Nicholas Horton and colleagues used an online questionnaire to identify students at Otago, Auckland, Canterbury, Victoria, Lincoln, Massey, and Waikato who had unhealthy drinking habits. Half the students were randomly assigned to receive personalised feedback; the other half received no feedback. Five months later, researchers followed up with the students on certain aspects of their drinking.

The overall result? “The intervention group tended to have less drinking and fewer problems than the control group, but the effects were relatively modest,” says Professor Horton. The take-away message: a web-based alcohol screening and intervention programme had little effect on unhealthy drinking among New Zealand uni students. Restrictions on alcohol availability and promotion are still needed if we really want to tackle alcohol abuse.

But among Māori students, who comprise 10% of our national uni population, those receiving intervention were found to drink 22% less alcohol and to experience 19% fewer alcohol-related academic problems at the five-month follow-up. The paper suggests that Māori students are possibly more heavily influenced by social-norm feedback than non-Māori students. “Māori students may have a stronger group identity, enhanced by being a small minority in the university setting.” But the paper warns that the difference could also be due to chance, “underscoring the need to undertake replication and further studies evaluating web-based alcohol screening and brief intervention in full-scale effectiveness trials.”
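
As a reminder of why that warning matters: even when there is no real effect anywhere, running several subgroup comparisons makes a ‘significant’ finding fairly likely. A minimal sketch, assuming five independent subgroup tests at the usual 5% level (the number five is purely illustrative):

    # With k independent subgroup tests and no true effects anywhere, the
    # chance that at least one is 'significant' at the 5% level is:
    k = 5  # assumed number of subgroup comparisons, for illustration
    p_any_false_positive = 1 - 0.95 ** k
    print(round(p_any_false_positive, 2))  # about 0.23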

The paper is here. Read the JAMA editorial here.

March 18, 2014

Your gut instinct needs a balanced diet

I linked earlier to Jeff Leek’s post on fivethirtyeight.com, because I thought it talked sensibly about assessing health news stories, and how to find and read the actual research sources.

While on the bus, I had a Twitter conversation with Hilda Bastian, who had read the piece (not through StatsChat) and was Not Happy. On rereading, I think her points were good ones, so I’m going to try to explain what I like and don’t like about the piece. In the end, I think she and I had opposite initial reactions to the piece from the same starting point: the importance of separating what you believe in advance from what the data tell you.

January 15, 2014

Fancy packaging of plain packaging impact

The Sydney Morning Herald has a story on the impact of plain packaging for cigarettes in Australia. Cancer researchers in Sydney found a big spike in calls to Quitline after the packaging change, and interpreted this as evidence it was working:

The researchers said although the volume of calls to Quitline was an “indirect” measure of people’s quitting intentions and behaviour, it was more objective than community surveys where people can answer questions in a socially desirable and biased way.

On the other side, tobacco companies say there hasn’t been any actual fall in smoking.

“In November 2013, a study by London Economics found that since the introduction of plain packaging in Australia there has been no change in smoking prevalence … What matters is whether fewer people are smoking as a result of these policies – and the data is clear that overall tobacco consumption and smoking prevalence has not gone down,” he said.

In this setting you might reasonably be concerned that either side is putting their results in fancy packaging. So what should you believe?

In fact, the claims are consistent with each other and don’t say much either way about the success of the program. If you look at the research paper, they found an increase peaking at about 300 calls per week and then falling off by about 14% per week. That works out to be a total of roughly 2000 extra calls attributed to the packaging change, i.e., just over half a percent of all smokers in Australia, or perhaps a 10% increase in the annual Quitline volume. If the number of people actively trying to quit by methods other than Quitline also goes up by 10%, you still wouldn’t expect to see much impact on total tobacco sales after one year.
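
The arithmetic, for anyone who wants to check it (a sketch using the paper’s approximate numbers):

    # A peak of about 300 extra calls per week, decaying by about 14% per
    # week, sums as a geometric series: 300 * (1 + 0.86 + 0.86**2 + ...)
    peak_calls, weekly_decay = 300, 0.14
    total_extra_calls = peak_calls / weekly_decay
    print(round(total_extra_calls))  # about 2143, i.e. 'roughly 2000'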

The main selling point for the plain packaging (eg) was that it would prevent young people from starting to smoke. That’s what really needs to be evaluated, and it’s probably too early to tell.

[Update: Of course, other countries that were independently considering changing their policies shouldn't wait for years just because Australia started first. That would be silly.]

[Update: the Quitline data are just for NSW; so perhaps 1.5% of smokers]