April 6, 2012

When in doubt, randomise.

This week, John Key announced a package of mental-health funding, including some new treatment initiatives.  For example, Whanau Ora will be piloting a whanau-based approach, initially with 40 Maori and Pacific young people.

It’s a pity that the opportunity wasn’t taken to get reliable evidence of whether the new approaches are beneficial, and by how much.  For example, there must be many more than 40 Maori and Pacific youth who could potentially benefit from Whanau Ora’s approach, if it is indeed better.  Rather than picking the 40 test patients by hand from the many potential participants, a lottery system would ensure that the 40 were initially comparable to those receiving the current treatment strategies.  If the youth in whanau-based care did better, we would then know for sure that the approach worked, and could compare its cost and effectiveness and decide how far to expand it.  Without random allocation we won’t ever be sure, and it will be a lot easier for future government cuts to remove expensive but genuinely useful programs and leave ones that are cheaper but don’t actually work.
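The lottery system described above is simple to run in practice. Here is a minimal sketch (the pool of 200 eligible participants and the ID numbers are made up for illustration; only the 40 pilot places come from the announcement):

```python
import random

def lottery_allocation(participant_ids, n_treatment, seed=None):
    """Randomly select n_treatment participants for the new programme.

    Everyone not selected stays with the current treatment strategies,
    forming a comparable control group."""
    rng = random.Random(seed)  # seeded generator so the draw is reproducible
    treatment = set(rng.sample(participant_ids, n_treatment))
    control = [p for p in participant_ids if p not in treatment]
    return sorted(treatment), control

# Hypothetical example: 200 eligible young people, 40 pilot places
ids = list(range(1, 201))
treated, comparison = lottery_allocation(ids, 40, seed=2012)
```

Because every eligible person has the same chance of a place, the two groups differ only by chance at the start, which is exactly what lets a later comparison of outcomes be attributed to the programme.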

In some cases it’s hard to argue for randomisation, because it seems better at least to try to treat everyone.  But if we can’t treat everyone and have to ration a new treatment approach in some way, a fair and random selection is no worse than other rationing approaches and has the enormous benefit of telling us whether the treatment works.

Admittedly, statisticians are just as bad as everyone else on this issue.  As Andrew Gelman points out in the American Statistical Association’s magazine “Chance”, when we have good ideas about teaching, we typically just start using them on an ad hoc selection of courses. We have, over fifty years, convinced the medical community that it is possible, and therefore important, to know whether things really work.  It would be nice if the idea spread a bit further.


Thomas Lumley (@tslumley) is Professor of Biostatistics at the University of Auckland. His research interests include semiparametric models, survey sampling, statistical computing, foundations of statistics, and whatever methodological problems his medical collaborators come up with. He also blogs at Biased and Inefficient.

Comments

  • Amen, brother Thomas. It is sad that there are so few attempts at using even simple experimental designs when evaluating the effects of public policy.

    Concerning Andrew Gelman’s comments, I tend to agree. However, in my mind there is a difference between one individual affecting a small group and public policy (funded by taxpayers) affecting the overall population. Same issue when I see the changes to teaching mathematics in the NZ school curriculum. The changes are based more on faith than on any independent assessment of efficacy of the ‘new’ methods.
