Posts filed under Design of experiments (23)

August 19, 2013

Sympathetic magic again

Once again, the Herald is relying on sympathetic magic in a nutrition story (previous examples):

1. Walnuts: These nuts look just like a brain, so it makes sense that they’re packed with good stuff for your grey matter. The British Journal of Nutrition reported that eating half a cup of walnuts a day for eight weeks increased reasoning skills by nearly 12 per cent in students.

There’s no way that the appearance of a natural food could possibly be a guide to its nutritional value — how would the walnut know that it’s good for human brains, and why would it care? Pecans, which look a bit like brains, don’t contain the levels of n-3 fatty acids that are supposed to be the beneficial component of walnuts, and fish and flax seeds, which do contain n-3 fatty acids, don’t look like brains.

The story gets two cheers for almost providing a reference: searching on “British Journal of Nutrition walnuts reasoning skills” leads to the paper. It’s a reasonable placebo-controlled randomised experiment, with participants eating banana bread with or without walnuts. The main problem is that the researchers tested 34 measurements of cognitive function or mood, and found a difference in just one of them. As they admit:

The authors are unable to explain why inference alone was affected by consumption of walnuts and not the other ‘critical thinking’ subtests – recognition of assumption, deduction, interpretation, and evaluation of arguments.

The prior research summarised in the paper shows the same problem, eg, one dose of walnuts improved one coordination test in rats, but a higher dose improved a different test, and the highest dose didn’t improve anything.
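That’s exactly the pattern you’d expect from multiple testing. Here’s a quick simulation (a sketch that assumes the 34 tests are independent, which the paper’s subtests presumably aren’t): if walnuts did nothing at all, a study testing 34 outcomes at the 5% level would still turn up at least one ‘effect’ more often than not.

```python
import numpy as np

rng = np.random.default_rng(0)

# 10,000 simulated studies, each testing 34 true-null outcomes at alpha = 0.05.
# Under the null hypothesis, p-values are uniform on (0, 1).
n_studies, n_tests, alpha = 10_000, 34, 0.05
pvals = rng.uniform(size=(n_studies, n_tests))
frac = (pvals < alpha).any(axis=1).mean()
print(f"Studies with at least one 'significant' result: {frac:.0%}")
# Analytic check: 1 - 0.95**34 is about 0.83
```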

June 3, 2013

The research loophole

We keep going on here about the importance of publishing clinical trials. Today (in Britain), the BBC programme Panorama is showing a documentary about a doctor who has been running clinical trials of the same basic treatment regimen for twenty years, without publishing any results. And it’s not that these are trials that take a long time to run — the participants have advanced cancer. If the treatment were effective, it would have been easy to gather and publish convincing evidence by now, many times over.

These haven’t been especially good clinical trials by usual standards — not randomised, not controlled — and they have been anomalous in other ways as well. For example, patients participating in the trial are charged large sums of money for the treatment being tested (not just for other care), which is very unusual. Unusual, but not illegal. Without published evidence that the treatment works, it couldn’t be sold outside trials, but it’s still entirely legal to charge money for the treatment in research. It’s a bit like ‘scientific’ whaling: the research label provides a loophole for something that couldn’t otherwise be sold.

According to the BBC, Dr Burzynski says it’s not his decision to keep the results secret:

He said the medical authorities in the US would not let him release this information: “Clinical trials, phase two clinical trials, were completed just a few months ago. I cannot release this information to you at this moment.”

If true, that would be very unusual. I don’t know of any occasion when the FDA has restricted scientific publication of trial results, and it’s entirely routine to publish results for treatments that have not been approved or even where other research is still ongoing. The BBC also checked with the FDA:

But the FDA told us this was not true and he was allowed to share the results of his trials.

This is all a long way away from New Zealand, and we can’t even watch the documentary, so why am I mentioning it? Last year, the parents of an NZ kid were trying to raise money to send him to the Burzynski clinic, with the help of the Herald. You can’t fault the parents for trying to buy hope at any cost, but you sure can fault the people selling it.

Wikipedia has pretty good coverage if you want more detail.

May 17, 2013

Science survey

From the Wellcome Trust Monitor, a survey examining knowledge and attitudes related to biomedical science in the UK:

The survey found a high level of interest in medical research among the public – more than seven in ten adults (75 per cent) and nearly six in ten young people (58 per cent). Despite this, understanding of how research is conducted is not deep – and levels of understanding have fallen since 2009. While most adults (67 per cent) and half of all young people (50 per cent) recognise the concept of a controlled experiment in science, most cannot articulate why this process is effective.

Two-thirds of the adults who were questioned trusted medical practitioners and university scientists to give them accurate information about medical research. This fell to just over one in ten (12 per cent) for government departments and ministers. Journalists scored lowest on trustworthiness — only 8 per cent of adults trusted them to give accurate information about medical research, although this was an improvement on the 2009 figure of 4 per cent.


May 9, 2013

Counting signatures

A comment on the previous post about the asset-sales petition asked how the counting was done: the press release says:

Upon receiving the petition the Office of the Clerk undertook a counting and sampling process. Once the signatures had been counted, a sample of signatures was taken using a methodology provided by the Government Statistician.

It’s a good question and I’d already thought of writing about it, so the commenter is getting a temporary reprieve from banishment for not providing a full name.  I don’t know for certain, and the details don’t seem to have been published, which is a pity — they would be interesting and educationally useful, and there doesn’t seem to be any need for confidentiality.

While I can’t be certain, I think it’s very likely that the Government Statistician provided the estimation methodology from Statistics New Zealand Working Paper No 10-04, which reviews and extends earlier research on petition counting.

There are several issues that need to be considered:

  • removing signatures that don’t come with the required information
  • estimating the number of eligible vs ineligible signatures
  • estimating the number of duplicates
  • estimating the margin of error in the estimate
  • deciding what level of uncertainty is acceptable

The signatures without the required information are removed completely; that’s not based on sampling. Estimating eligible vs ineligible signatures is fairly easy by checking a sufficiently large random sample — in fact, they use a systematic sample, taking names at regular intervals through the petition list, which tends to give more precise results and to be more auditable.

Estimating unique signatures is tricky, because if you halve your sample size, you expect to see 1/4 as many duplicates, 1/8 as many triplicates, and so on. The key part of the working paper shows how to scale up the sample data on eligible, ineligible, and duplicate, triplicate, etc, signatures to get the unique unbiased estimator of the number of valid signatures and its variance.
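To make that scaling concrete, here is a simplified sketch (invented numbers, and not the working paper’s actual estimator: among other things it ignores the fact that someone who signed three times can also turn up in the sample as a pair):

```python
def estimate_valid_signatures(total, sample_size, eligible_in_sample,
                              dup_pairs_in_sample, triples_in_sample=0):
    """Scale sample counts up to the whole petition.

    With sampling fraction f, both copies from a double-signer land in
    the sample with probability about f^2, and all three copies from a
    triple-signer with probability about f^3; so the observed counts are
    scaled by 1/f, 1/f^2, and 1/f^3 respectively.
    """
    f = sample_size / total
    eligible_total = eligible_in_sample / f       # all eligible signatures
    double_signers = dup_pairs_in_sample / f**2   # 1 excess signature each
    triple_signers = triples_in_sample / f**3     # 2 excess signatures each
    return eligible_total - double_signers - 2 * triple_signers

# Hypothetical numbers: 400,000 signatures, 40,000 sampled (f = 0.1)
print(estimate_valid_signatures(400_000, 40_000, 36_000, 25, 1))  # 355,500
```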

Once the level of uncertainty is specified, the formulas tell you how large a sample to check and what to do with the results. I don’t know how the sample size is chosen, but it wouldn’t take a very large sample to get the uncertainty down to a few thousand, which would be good enough. In fact, since the methodology is public and the parties have access to the electoral roll in electronic form, it’s a bit surprising that the petition organisers didn’t run a quick check themselves before submitting it.
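For a back-of-envelope sense of the numbers (all inputs made up; simple random sampling, ignoring the harder duplicate problem and the finite-population correction, which would shrink the answer): getting the 95% margin of error on the eligible count down to about ±2,000 out of 400,000 signatures takes a sample of roughly 14,000.

```python
import math

def sample_size_for_margin(petition_size, p_eligible, margin, z=1.96):
    """Sample size so the 95% margin of error on the estimated number
    of eligible signatures is about `margin`."""
    se_target = margin / (z * petition_size)   # required standard error of p-hat
    return math.ceil(p_eligible * (1 - p_eligible) / se_target**2)

# Hypothetical: 400,000 signatures, ~90% eligible, margin of +/- 2,000
print(sample_size_for_margin(400_000, 0.9, 2_000))  # about 14,000
```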


May 7, 2013

Video on randomised trials — in poverty relief

The organisation Innovations for Poverty Action have made a neat little animated video explaining how and why they do randomised controlled trials of poverty-relief programs.


April 30, 2013

Return of the Pomegranate: the sequel

While distracted by a conference in late January, I missed the next exciting installment of the Edinburgh pomegranate saga.

As you will recall, a research group in Edinburgh have put out press releases in recent years about the impact of pomegranate juice on blood pressure, cortisol (a stress hormone), and testosterone.  They haven’t published any scientific papers about these findings, though they have produced a presentation at a scientific conference.

The most recent installment claims that pomegranate extract reduces hunger and food consumption. This study seems to be better designed than the previous ones: participants were randomised to pomegranate extract tablets or placebo for three weeks.  They were then given a glass of pomegranate juice and a meal.  Those who had been taking the pomegranate extract reported feeling less hungry and ate less — 22% less. It’s a pity the study didn’t measure weight, because if the 22% reduction in food consumption generalised beyond the one experimental meal it would have led to measurable weight loss over three weeks.
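The arithmetic behind that claim, with assumed numbers (the press release doesn’t report baseline intake, so these are illustrative only):

```python
# Illustrative assumptions, not figures from the study
daily_kcal = 2000          # assumed baseline intake
reduction = 0.22           # the reported 22% reduction
days = 21                  # three weeks
kcal_per_kg_fat = 7700     # rough rule of thumb

deficit = daily_kcal * reduction * days
print(f"deficit ~ {deficit:.0f} kcal ~ {deficit / kcal_per_kg_fat:.1f} kg of fat")
# ~9240 kcal, a bit over a kilogram: easily measurable on a scale
```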

Again, this was a premature and unsubstantiated press release, and the experiment has not been published in a peer-reviewed journal, although the researchers do at least say they will be presenting it at a conference later in the year.

January 23, 2013

Biologists want more statistics

An article in Nature (not free access, unfortunately) by Australian molecular biologist David L. Vaux:

 “Experimental biologists, their reviewers and their publishers must grasp basic statistics, or sloppy science will continue to grow.”

This doesn’t come as a surprise to statisticians, but it is nice to get support from the biology side. His recommendations are also familiar and welcome:

How can the understanding and use of elementary statistics be improved? Young researchers need to be taught the practicalities of using statistics at the point at which they obtain the results of their very first experiments.

[Journals] should refuse to publish papers that contain fundamental errors, and readily publish corrections for published papers that fall short. This requires engaging reviewers who are statistically literate and editors who can verify the process. Numerical data should be made available either as part of the paper or as linked, computer-interpretable files so that readers can perform or confirm statistical analyses themselves.

Professor Vaux goes on to say:

When William Strunk Jr, a professor of English, was faced with a flood of errors in spelling, grammar and English usage, he wrote a short, practical guide that became The Elements of Style (also known as Strunk and White). Perhaps experimental biologists need a similar booklet on statistics.

And here I have to quibble. Experimental biologists already have too many guides like Strunk & White, full of outdated prejudices and policies that the authors themselves would not follow. What we need is a guide that lays out how good scientists and statisticians actually do handle common types of experiment (ie, evidence-based standard recipes), together with some education on the basic principles: contrasts, blocking, randomisation, sources of variation, descriptions of uncertainty. And perhaps a few entertaining horror stories of Doing It Rong and the consequences.
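For what it’s worth, the core of one of those recipes (randomisation within blocks) fits in a few lines. A toy sketch with hypothetical names, not anything from the Nature article:

```python
import random

def randomised_blocks(blocks, treatments, seed=0):
    """Randomised complete block design: within each block, every
    treatment is assigned exactly once, in random order."""
    rng = random.Random(seed)
    allocation = {}
    for block, subjects in blocks.items():
        order = list(treatments)
        rng.shuffle(order)                    # randomise within the block
        allocation.update(zip(subjects, order))
    return allocation

# Hypothetical: two cages (blocks) of three rats, three walnut doses
cages = {"cage1": ["r1", "r2", "r3"], "cage2": ["r4", "r5", "r6"]}
print(randomised_blocks(cages, ["low", "medium", "high"]))
```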


January 13, 2013

Fascinating research into the placebo effect

Harvard Magazine has an article on Ted Kaptchuk’s research into how (not if) the placebo effect works. From a new clinical trial, his team has found that the methods of placebo administration are as important as the administration itself:

“It’s valuable insight for any caregiver: patients’ perceptions matter, and the ways physicians frame perceptions can have significant effects on their patients’ health.”


April 6, 2012

When in doubt, randomise.

This week, John Key announced a package of mental-health funding, including some new treatment initiatives.  For example, Whanau Ora will be piloting a whanau-based approach, initially on 40 Maori and Pacific young people.

It’s a pity that the opportunity wasn’t taken to get reliable evidence of whether the new approaches are beneficial, and by how much.  For example, there must be a lot more than 40 Maori and Pacific youth who could potentially benefit from Whanau Ora’s approach, if it is indeed better.  Rather than picking the 40 test patients by hand from the many potential participants, a lottery system would ensure that the 40 were initially comparable to those receiving the current treatment strategies.  If the youth in whanau-based care did better we would then know for sure that the approach worked, and could compare its cost and effectiveness, and decide how far to expand it.   Without a random allocation, we won’t ever be sure, and it will be a lot easier for future government cuts to remove expensive but genuinely useful programs, and leave ones that are cheaper but don’t actually work.
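Running such a lottery is trivial; here is a sketch with hypothetical identifiers. The unselected eligible youth automatically form the comparison group:

```python
import random

def lottery(eligible_ids, n_slots=40, seed=None):
    """Allocate limited places at random; everyone eligible has the same
    chance, and the leftover pool doubles as a comparison group."""
    rng = random.Random(seed)
    chosen = set(rng.sample(eligible_ids, n_slots))
    comparison = [i for i in eligible_ids if i not in chosen]
    return sorted(chosen), comparison

# Hypothetical pool of 200 eligible young people
pilot, controls = lottery([f"id{i:03d}" for i in range(200)], seed=1)
print(len(pilot), len(controls))  # 40 and 160
```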

In some cases it’s hard to argue for randomisation, because it seems better at least to try to treat everyone.  But if we can’t treat everyone and have to ration a new treatment approach in some way, a fair and random selection is no worse than other rationing approaches and has the enormous benefit of telling us whether the treatment works.

Admittedly, statisticians are just as bad as everyone else on this issue.   As Andrew Gelman points out in the American Statistical Association’s magazine “Chance”, when we have good ideas about teaching we typically just start using them on an ad hoc selection of courses. We have, over fifty years, convinced the medical community that it is possible, and therefore important, to know whether things really work.  It would be nice if the idea spread a bit further.

October 7, 2011

Medicine for muggles

Scientific American’s blog is running a series of posts on how clinical trials for new medicines work (the author describes the series as ‘medicine for muggles’).