February 20, 2015

Why we have controlled trials

[Figure joc80747f2: average weight change over 12 weeks, Garcinia cambogia vs placebo]

The graph is from a study — a randomised, placebo-controlled trial published in a top medical journal — of a plant-based weight loss treatment, an extract from Garcinia cambogia, as seen on Dr Oz. People taking the real Garcinia cambogia lost weight, an average of 3kg over 12 weeks. That would be at least a little impressive, except that people getting pretend Garcinia cambogia lost an average of more than 4kg over the same time period. It’s a larger-than-usual placebo response, but it does happen. If just being in a study where there’s a 50:50 chance of getting a herbal treatment can lead to 4kg weight loss, being in a study where you know you’re getting it could produce even greater ‘placebo’ benefits.
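Put concretely, using the averages reported above (with the placebo figure rounded to 4kg for illustration — the trial reported “more than 4kg”): a single-arm reading of the same data would credit the extract with the whole 3kg, while the randomised comparison shows the difference runs the other way.

```python
# Average 12-week weight loss in kg, as described above.
# Placebo figure rounded to 4.0 for illustration ("more than 4kg").
garcinia_loss = 3.0  # active Garcinia cambogia arm
placebo_loss = 4.0   # placebo arm

# What an uncontrolled, single-arm study would report as the "effect":
print(f"single-arm reading: {garcinia_loss:.1f} kg lost")

# What the randomised comparison actually estimates (negative means
# the active arm did *worse* than placebo):
effect = placebo_loss - garcinia_loss
print(f"controlled estimate: {garcinia_loss - placebo_loss:+.1f} kg vs placebo")
```

That sign flip — from “3kg lost” to “1kg worse than placebo” — is the whole argument for having a control group.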

If you had some other new, potentially wonderful natural plant extract that was going to help with weight loss, you might start off with a small safety study. Then you’d go to a short-term, perhaps uncontrolled, study in maybe 100 people over a few weeks to see if there was any sign of weight loss and to see what the common side effects were. Finally, you’d want to do a randomised controlled trial over at least six months to see if people really lost weight and kept it off.

If, after an uncontrolled eight-week study, you report results for only 52 of 100 people enrolled and announce you’ve found “an exciting answer to one of the world’s greatest and fastest growing problems” you perhaps shouldn’t undermine it by also saying “The world is clearly looking for weight-loss products which are proven to work.”
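Reporting results for only 52 of 100 people matters because dropout is rarely random. Here’s a minimal sketch with purely made-up numbers (not from the trial): if people who aren’t losing weight are more likely to quit, the average among completers overstates the average for everyone enrolled — even when the treatment does nothing at all.

```python
import random

random.seed(1)

# Simulate 100 enrollees whose true 8-week weight change (kg) averages
# zero: some lose, some gain, none of it due to any treatment.
changes = [random.gauss(0, 2) for _ in range(100)]

# Assume (illustratively) that people who are losing weight are much
# more likely to stay in the study than people who aren't.
completers = [c for c in changes if random.random() < (0.8 if c < 0 else 0.3)]

mean_all = sum(changes) / len(changes)
mean_completers = sum(completers) / len(completers)

print(f"enrolled: {len(changes)}, completers: {len(completers)}")
print(f"mean change, everyone:   {mean_all:+.2f} kg")
print(f"mean change, completers: {mean_completers:+.2f} kg")
```

The completers-only average shows apparent weight loss purely from who stuck around, which is why the analysis should include everyone enrolled.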


[Update: see comments]


Thomas Lumley (@tslumley) is Professor of Biostatistics at the University of Auckland. His research interests include semiparametric models, survey sampling, statistical computing, foundations of statistics, and whatever methodological problems his medical collaborators come up with. He also blogs at Biased and Inefficient.

Comments

  •

    “If just being in a study where there’s 50:50 chance of getting a herbal treatment can lead to 4kg weight loss, being in a study where you know you’re getting it could produce even greater ‘placebo’ benefits.”

    That’s a great point that hadn’t occurred to me. Given that the “fat mates” trial didn’t include a placebo arm, instead meaning to compare their trial data with reference placebo data from other studies, this sounds like a very significant source of bias.

    I imagine it could be possible to get comparable reference placebo data if you give all participants a placebo and lie to them all to say they’re getting an experimental treatment, but surely that would be unlikely to get ethical approval.

    9 years ago

    • Thomas Lumley

      Ideally, I’d want to find past uncontrolled trials of treatments that we’re now pretty sure don’t work, and use the weight loss data from those. Unfortunately, those studies tend not to be published anywhere accessible.

      9 years ago

  • Thomas Lumley

    Via the Science Media Centre, I hear that the trial in this post wasn’t used in the placebo analysis because it had diet and exercise recommendations. That’s a fair point, but I don’t think it changes the conclusions:

    1. First, a request not to change diet or exercise is very unnatural when you’re trying to help people lose weight. If the trial had been controlled, this wouldn’t be a problem.

    2. The proposed (and entirely sensible) mechanism for Satisfax is that people feel fuller and so eat less. The press release says “Most reported eating less of the same food and were now able to exercise.”

    3. I wouldn’t be worried about the lack of controls if it wasn’t for the very short duration and high dropout. If the study lasted six months and included everyone or nearly everyone in the analysis, and there was weight loss on average or weight loss above 5% in a large proportion, I’d be pretty convinced. It’s rare for people to lose substantial weight and keep it off for a long time.

    4. In the other direction, if the study had been effectively blinded and controlled, and the dropout had been similar in the treatment and control groups, I would be less worried about dropout.

    With a short-term study, no blinding and no control group, we’d need to be very sure that the combination of dropout and placebo effect was much less than the observed change. Even stipulating the choice of trials in the analysis, there were only five of them, and they were all controlled, so they can’t provide strong evidence that the bias is reliably small in this trial.

    I’m not saying the product doesn’t work. It’s got a plausible mechanism, and it might be effective. However, there is no way the trial provided strong evidence that it works.

    9 years ago