Posts filed under Design of experiments (20)

February 27, 2015

What are you trying to do?


There’s a new ‘perspectives’ piece (paywall) in the journal Science, by Jeff Leek and Roger Peng (of Simply Statistics), arguing that the most common mistake in data analysis is misunderstanding the type of question. Here’s their flowchart:


The reason this is relevant to StatsChat is that you can use the flowchart on stories in the media. If there’s enough information in the story to follow the flowchart you can see how the claims match up to the type of analysis. If there isn’t enough information in the story, well, you know that.


February 20, 2015

Why we have controlled trials



The graph is from a study — a randomised, placebo-controlled trial published in a top medical journal — of a plant-based weight loss treatment, an extract from Garcinia cambogia, as seen on Dr Oz. People taking the real Garcinia cambogia lost weight, an average of 3kg over 12 weeks. That would be at least a little impressive, except that people getting pretend Garcinia cambogia lost an average of more than 4kg over the same time period.  It’s a larger-than-usual placebo response, but it does happen. If just being in a study where there’s a 50:50 chance of getting a herbal treatment can lead to 4kg weight loss, being in a study where you know you’re getting it could produce even greater ‘placebo’ benefits.

If you had some other, new, potentially-wonderful natural plant extract that was going to help with weight loss, you might start off with a small safety study. Then you’d go to a short-term, perhaps uncontrolled, study in maybe 100 people over a few weeks to see if there was any sign of weight loss and to see what the common side effects were. Finally, you’d want to do a randomised controlled trial over at least six months to see if people really lost weight and kept it off.

If, after an uncontrolled eight-week study, you report results for only 52 of 100 people enrolled and announce you’ve found “an exciting answer to one of the world’s greatest and fastest growing problems” you perhaps shouldn’t undermine it by also saying “The world is clearly looking for weight-loss products which are proven to work.”


[Update: see comments]

January 31, 2015

Big buts for factoid about lying

At StatsChat, we like big buts, and an easy way to find them is unsourced round numbers in news stories. From the Herald (reprinted from the Telegraph, last November)

But it’s surprising to see the stark figure that we lie, on average, 10 times a week.

It seems that this number comes from an online panel survey in the UK last year (Telegraph, Mail) — it wasn’t based on any sort of diary or other record-keeping, people were just asked to come up with a number. Nearly 10% of them said they had never lied in their entire lives; this wasn’t checked with their mothers.  A similar poll in 2009 came up with much higher numbers: 6/day for men, 3/day for women.

Another study, in the US, came up with an estimate of 11 lies per week: people were randomised to trying not to lie for ten weeks, and the 11/week figure was from the control group.  In this case people really were trying to keep track of how often they lied, but they were quite a non-representative group. The randomised comparison will be fair, but the actual frequency of lying won’t be generalisable.

The averages are almost certainly misleading, because there’s a lot of variation between people. So when the Telegraph says

The average Briton tells more than 10 lies a week,

or the Mail says

the average Briton tells more than ten lies every week,

they probably mean the average number of self-reported lies was more than 10/week, with the median being much lower. The typical person lies much less often than the average.
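The mean/median gap for a skewed count like this is easy to demonstrate. Here is a minimal sketch with invented numbers (not the survey’s data): a few prolific liars pull the mean above 10/week even though the typical person reports almost none.

```python
# Hypothetical weekly lie counts: most people report few or none,
# a handful report a lot. These numbers are invented for illustration.
counts = [0] * 10 + [1, 1, 2, 2, 3, 3, 4, 5, 6, 8] + [50, 70, 90]

mean = sum(counts) / len(counts)
ordered = sorted(counts)
median = ordered[len(ordered) // 2]  # 23 values, so the middle is index 11

print(f"mean:   {mean:.1f} lies/week")   # pulled up by the heavy tail
print(f"median: {median} lies/week")     # what the typical person reports
```

Here the mean is about 10.7 lies/week while the median is 1 — the same shape of distortion the “average Briton” framing produces.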

These figures are all based on self-reported remembered lies, and all broadly agree, but another study, also from the US, shows that things are more complicated

Participants were unaware that the session was being videotaped through a hidden camera. At the end of the session, participants were told they had been videotaped and consent was obtained to use the video-recordings for research.

The students were then asked to watch the video of themselves and identify any inaccuracies in what they had said during the conversation. They were encouraged to identify all lies, no matter how big or small.

The study… found that 60 percent of people lied at least once during a 10-minute conversation and told an average of two to three lies.



April 25, 2014

Sham vs controlled studies: Thomas Lumley’s latest Listener column

How can a sham medical procedure provide huge benefits? And why do we still do them in a world of randomised, blinded trials? Thomas Lumley explores the issue in his latest New Zealand Listener column. Click here.

December 27, 2013

Meet Tania Tian, Statistics Summer Scholar 2013-2014

Every year, the Department of Statistics at the University of Auckland offers summer scholarships to a number of students so they can work with our staff on real-world projects. We’ll be profiling the 2013-2014 summer scholars on Stats Chat. Tania is working with Dr Stephanie Budgett on a project titled First-time mums: Can we make a difference?

Tania (right) explains:

“This project is based on the ongoing levator ani study (LA, commonly known as the pelvic floor muscles) from the Pelvic Floor Research Group at the Auckland Bioengineering Institute (ABI), which looks at how the pelvic floor muscles change after first-time mums give birth.

“The aim is to see whether age, ethnicity, delivery conditions and other related factors are associated with the tearing of the muscle. Interestingly, the stiffness of the muscle at rest has been identified as a key factor and is being measured by a specially designed device, an elastometer, that was built by engineers at the ABI.

“Pelvic-floor muscle injury following a vaginal delivery can increase the risks for prolapse where pelvic organs, such as the uterus, small bowel, bladder and rectum, descend and herniate. Furthermore, the muscle trauma may also promote or intensify urinary and/or bowel incontinence.

“Not only do these pelvic-floor disorders cause discomfort and distress and reduce the mother’s quality of life, but, if left untreated, they may also lead to major health concerns later in life. Therefore, a statistical model based on key factors elucidated from the study may aid health professionals in deciding the best strategy for delivering a woman’s baby and whether certain interventions are needed.

“I have recently completed my third year of a Bachelor of Science majoring in Statistics and Pharmacology and intend to pursue postgraduate studies. I hope to integrate my knowledge of medical sciences and statistics and specialise in medical statistics.

“Statistics appeals to me because it is a useful field with direct practical applications in almost every industry. I had initially taken the stage one paper as a standalone in order to broaden my knowledge, but eventually realised that I really liked the subject and that it could complement whichever career I have. That’s when I decided to major in statistics, and I’m very glad that I did.

“Over this summer, aside from the project, I am hoping to spend more time with friends and family – especially with my new baby brother! I am also looking forward to visiting the South Island during the Christmas break.”


October 22, 2013

Cookies not as addictive as cocaine

Sometimes a scientific claim is obviously unreasonable, like when a physicist tells you “No, really, the same electron goes through both slots in this barrier”. You’re all “Wut? No. Can’t be.” They show you the interference pattern. “But did you think of…?” “Yes”. “Couldn’t it be..” “No, we tried that.” “But…”  “And that.”  “Still, what about…?” “That too.” Eventually you give up and accept that the universe is weird. An electron really can go through two holes at once.

On the other hand, sometimes the claim isn’t backed up that well, like when Stuff tells us “Cookies as addictive as cocaine”. For example, while some rats were given Oreo cookies and others were given cocaine, there weren’t any rats who were offered both, so there wasn’t any direct evaluation of preference, let alone of addiction. The cookies weren’t even compared to the same control as the cocaine — cookies were compared to rice cakes, and cocaine-laced water to plain water.

There’s a more detailed take-down on the Guardian site, by an addiction researcher.

August 23, 2013

Just making it easier to understand?

From the Journal of Nutritional Science

Young adult males (n 35) were supplemented with either half or two kiwifruit/d for 6 weeks. Profile of Mood States questionnaires were completed at baseline and following the intervention. No effect on overall mood was observed in the half a kiwifruit/d group; however, a 35 % (P = 0·06) trend towards a decrease in total mood disturbance and a 32 % (P = 0·063) trend towards a decrease in depression were observed in the two kiwifruit/d group. Subgroup analysis indicated that participants with higher baseline mood disturbance exhibited a significant 38 % (P = 0·029) decrease in total mood disturbance, as well as a 38 % (P = 0·048) decrease in fatigue, 31 % (P = 0·024) increase in vigour and a 34 % (P = 0·075) trend towards a decrease in depression, following supplementation with two kiwifruit/d. There was no effect of two kiwifruit/d on the mood scores of participants with lower baseline mood disturbance

From the Otago press release

Eating two kiwifruit a day can improve a person’s mood and give them extra energy, new research from the University of Otago, Christchurch (UOC) shows.

Over a six-week period, normally-healthy young men either ate two kiwifruit a day or half a kiwifruit daily as part of a research study into the potential mood-enhancing effects of the fruit.

Researchers found those eating two kiwifruit daily experienced significantly less fatigue and depression than the other group. They also felt they had more energy. These changes appeared to be related to the optimising of vitamin C intake with the two kiwifruit dose

From the Herald

Eating two kiwifruit a day can improve mood and energy levels, a new University of Otago study shows.

Those eating two kiwifruit were found to experience significantly less fatigue and depression than the others. They also felt they had more energy.

I’m not criticizing the research, which was a perfectly reasonable designed experiment, but if the findings are newsworthy, they are also worth presenting accurately.

August 19, 2013

Sympathetic magic again

Once again, the Herald is relying on sympathetic magic in a nutrition story (previous examples)

1. Walnuts: These nuts look just like a brain, so it makes sense that they’re packed with good stuff for your grey matter. The British Journal of Nutrition reported that eating half a cup of walnuts a day for eight weeks increased reasoning skills by nearly 12 per cent in students.

There’s no way that the appearance of a natural food could possibly be a guide to its nutritional value — how would the walnut know that it’s good for human brains, and why would it care? Pecans, which look a bit like brains, don’t contain the levels of n-3 fatty acids that are supposed to be the beneficial component of walnuts, and fish and flax seeds, which do contain n-3 fatty acids, don’t look like brains.

The story gets two cheers for almost providing a reference: searching on “British Journal of Nutrition walnuts reasoning skills” leads to the paper. It’s a reasonable placebo-controlled randomised experiment, with participants eating banana bread with or without walnuts.  The main problem is that the researchers tested 34 measurements of cognitive function or mood, and found a difference in just one of them.  As they admit

The authors are unable to explain why inference alone was affected by consumption of walnuts and not the other ‘critical thinking’ subtests – recognition of assumption, deduction, interpretation, and evaluation of arguments.

The prior research summarised in the paper shows the same problem, eg,  one dose of walnuts improved one coordination test in rats, but a higher dose improved a different test, and the highest dose didn’t improve anything.
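One positive result out of 34 tests is roughly what chance alone would deliver if walnuts did nothing. A back-of-the-envelope sketch (assuming the tests are independent, which correlated subtests won’t satisfy exactly, but it gives the right order of magnitude):

```python
# If all 34 cognitive/mood measures were truly unaffected by walnuts,
# how likely is at least one to come out 'significant' at p < 0.05?
# Assumes independent tests — an approximation for correlated subtests.
alpha = 0.05
n_tests = 34

p_at_least_one = 1 - (1 - alpha) ** n_tests
expected_false_positives = n_tests * alpha

print(f"P(at least one false positive): {p_at_least_one:.2f}")
print(f"Expected number of false positives: {expected_false_positives:.1f}")
```

Under these assumptions there is roughly an 83% chance of at least one spurious “significant” difference, and about 1.7 expected false positives — which is why a single unexplained hit among 34 tests isn’t persuasive.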

June 3, 2013

The research loophole

We keep going on here about the importance of publishing clinical trials.  Today (in Britain), the BBC program Panorama is showing a documentary about a doctor who has been running clinical trials of the same basic treatment regimen for twenty years, without publishing any results. And it’s not that these are trials that take a long time to run — the participants have advanced cancer. If the treatment was effective, it would have been easy to gather and publish convincing evidence by now, many times over.

These haven’t been especially good clinical trials by usual standards — not randomized, not controlled — and they have been anomalous in other ways as well. For example, patients participating in the trial are charged large sums of money for the treatment being tested (not just for other care), which is very unusual.  Unusual, but not illegal.  Without published evidence that the treatment works, it couldn’t be sold outside trials, but it’s still entirely legal to charge money for the treatment in research. It’s a bit like whaling.

According to the BBC, Dr Burzynski says it’s not his decision to keep the results secret

He said the medical authorities in the US would not let him release this information: “Clinical trials, phase two clinical trials, were completed just a few months ago. I cannot release this information to you at this moment.”

If true, that would be very unusual. I don’t know of any occasion when the FDA has restricted scientific publication of trial results, and it’s entirely routine to publish results for treatments that have not been approved or even where other research is still ongoing. The BBC also checked with the FDA:

But the FDA told us this was not true and he was allowed to share the results of his trials.

This is all a long way away from New Zealand, and we can’t even watch the documentary, so why am I mentioning it? Last year, the parents of an NZ kid were trying to raise money to send him to the Burzynski clinic, with the help of the Herald.   You can’t fault the parents for trying to buy hope at any cost, but you sure can fault the people selling it.

Wikipedia has pretty good coverage  if you want more detail.

May 17, 2013

Science survey

From the Wellcome Trust Monitor, a survey examining knowledge and attitudes related to biomedical science in the UK

The survey found a high level of interest in medical research among the public – more than seven in ten adults (75 per cent) and nearly six out of ten of young people (58 per cent). Despite this, understanding of how research is conducted is not deep – and levels of understanding have fallen since 2009. While most adults (67 per cent) and half of all young people (50 per cent) recognise the concept of a controlled experiment in science, most cannot articulate why this process is effective.

Two-thirds of the adults that were questioned trusted medical practitioners and university scientists to give them accurate information about medical research. This fell to just over one in ten (12 per cent) for government departments and ministers. Journalists scored lowest on trustworthiness — only 8 per cent of adults trusted them to give accurate information about medical research, although this was an improvement on the 2009 figure of 4 per cent.