Posts filed under Experiments (32)

January 27, 2013

Clinical trials in India

Stuff has a story from the Sydney Morning Herald on clinical trials in India.  The basic claims are damning if true:

…clinical drug trials are at the centre of a growing controversy in India, as evidence emerges before courts and, in government inquiries, of patients being put onto drug trials without their knowledge or consent…

With a very few exceptions (eg some trials of emergency resuscitation techniques and some minimal-risk cluster-randomised trials of treatment delivery) it is absolutely fundamental that trial participants give informed consent. Trial protocols are supposed to be reviewed in advance to make sure that participants aren't asked to consent to unreasonable things, but consent is still primary. This isn't just a technical detail, since researchers who were unclear on the importance of consent have often been bad at other aspects of research or patient care.

The Fairfax story mixes in the claimed lack of consent with other claims that are either less serious or not explained clearly. For example:

Figures from the drugs controller-general show that in 2011 there were deaths during clinical trials conducted by, or on behalf of, Novartis, Quintiles, Pfizer, Bayer, Bristol Mayer Squibb, and MSD Pharmaceutical.

Of course there were deaths in clinical trials. If you are comparing two treatments for a serious illness, the trial participants will be seriously ill.  When you need to know if a new treatment reduces the risk of death, the only way to tell is to do a study large enough that some people are expected to die.  Even if improved survival isn’t directly what you’re measuring, a large trial will include people who die. In the main Women’s Health Initiative hormone replacement trial, for example, 449 women had died by the time the trial was stopped.  The question isn’t whether there were deaths, it’s whether there were deaths that wouldn’t have occurred if the trials had been done right.
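
To put a rough number on it, here's a back-of-the-envelope calculation in Python. The trial size, mortality rate and follow-up are assumptions chosen only to be in the right ballpark for a WHI-scale trial, not figures from any of the trials in the story.

    # Expected deaths from background mortality alone in a large, long trial.
    # All numbers are illustrative assumptions, not figures from any real trial.
    n_participants = 16000     # a WHI-scale trial
    annual_mortality = 0.005   # assumed background death rate: 0.5% per year
    years_followup = 5.5       # assumed average follow-up

    expected_deaths = n_participants * annual_mortality * years_followup
    print(f"expected deaths: {expected_deaths:.0f}")   # roughly 440

Even with nobody harmed by either treatment, a trial on that scale will see hundreds of deaths; that's exactly why it can tell you something about survival.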

There’s also a claim that families of participants who died were not given adequate compensation as part of the trial.  If there had been consent, this wouldn’t necessarily matter. Lots of trials in developed countries don’t specifically compensate participants or relatives, and there’s actually some suspicion of those that do, because it provides another incentive to participate even if you don’t really want to.

Other sources: Times of India, Chemistry World, a couple of review articles, the Nuremberg Code

 

January 23, 2013

Biologists want more statistics

An article in Nature (not free access, unfortunately) by Australian molecular biologist David L. Vaux:

 “Experimental biologists, their reviewers and their publishers must grasp basic statistics, or sloppy science will continue to grow.”

This doesn't come as a surprise to statisticians, but it is nice to get the support from the biology side. His recommendations are also familiar and welcome:

How can the understanding and use of elementary statistics be improved? Young researchers need to be taught the practicalities of using statistics at the point at which they obtain the results of their very first experiments.

[Journals] should refuse to publish papers that contain fundamental errors, and readily publish corrections for published papers that fall short. This requires engaging reviewers who are statistically literate and editors who can verify the process. Numerical data should be made available either as part of the paper or as linked, computer-interpretable files so that readers can perform or confirm statistical analyses themselves.

Professor Vaux goes on to say:

When William Strunk Jr, a professor of English, was faced with a flood of errors in spelling, grammar and English usage, he wrote a short, practical guide that became The Elements of Style (also known as Strunk and White). Perhaps experimental biologists need a similar booklet on statistics.

And here I have to quibble. Experimental biologists already have too many guides like Strunk & White, full of outdated prejudices and policies that the authors themselves would not follow.  What we need is a guide that lays out how good scientists and statisticians actually do handle common types of experiment (ie, evidence-based standard recipes), together with some education on the basic principles: contrasts, blocking, randomization, sources of variation, descriptions of uncertainty. And perhaps a few entertaining horror stories of Doing It Rong and the consequences.
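
To make that concrete, here is the sort of minimal recipe I have in mind, sketched in Python with entirely made-up numbers: a two-arm comparison, randomised within blocks, with the treatment contrast reported along with a description of its uncertainty.

    import numpy as np

    rng = np.random.default_rng(1)

    # Simulated experiment: 8 blocks (e.g. litters or batches), 2 animals per block,
    # one randomised to treatment within each block.  All numbers are made up.
    n_blocks = 8
    block_effect = rng.normal(0, 2, size=n_blocks)   # variation between blocks
    true_effect = 1.5                                 # assumed treatment effect

    control, treated = [], []
    for b in range(n_blocks):
        noise = rng.normal(0, 1, size=2)              # within-block variation
        assignment = rng.permutation([0, 1])          # randomise within the block
        values = block_effect[b] + noise + true_effect * assignment
        control.append(values[assignment == 0][0])
        treated.append(values[assignment == 1][0])

    # The blocked contrast: within-block differences, with a standard error
    diffs = np.array(treated) - np.array(control)
    estimate = diffs.mean()
    se = diffs.std(ddof=1) / np.sqrt(n_blocks)
    print(f"estimated effect {estimate:.2f} +/- {se:.2f} (SE)")

The point of the blocking is visible in the code: the block-to-block variation cancels out of the within-block differences, so the contrast and its standard error reflect only the variation that matters.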

 

November 11, 2012

It’s not the sensation, it’s the neuroscience

Philosophers have argued about whether it’s even conceivable to have pain without the physical sensation. According to 3News (and other media outlets worldwide), University of Chicago neuroscientists don’t have a problem with this:

Mathematics can be difficult, and a new study shows even thinking about doing it can physically hurt.

Of course, that’s not quite what the study found (and credit to 3News for linking), though it does seem to be what the researchers said they found. The study was in people with ‘high levels of math anxiety’ and the abstract says

We show that, when anticipating an upcoming math-task, the higher one’s math anxiety, the more one increases activity in regions associated with visceral threat detection, and often the experience of pain itself (bilateral dorso-posterior insula).

That is, some of the parts of the brain that are active during pain or threat were also active when anticipating a maths task, even though there was no actual pain reported.

A simpler explanation might be that if you’re scared of maths, then your brain looks as if you’re scared of something.  Although the researchers don’t believe this, they do actually concede it is an alternative explanation in the discussion section of the paper

the INSp activity we found could be reflective of something else. For example, it has been suggested that INSp activity is not so much reflective of nociception, but rather reflects detection of events that are salient for (e.g., threatening to) bodily integrity, regardless of the input sensory modality

(via)

October 18, 2012

Never mind the numbers, look at the neuroscience.

Q:  Have you seen the headline: “Skipping breakfast makes you gain weight: study”?

A:  If that’s the one with the chocolate cupcake photo, yes.

Q:  Was this just another mouse study, or did they look at weight gain in people?

A: People, yes, but they didn’t measure weight gain.

Q: But doesn’t the headline say “makes you gain weight”?

A: Indeed.

Q: So what did they do?

A: They measured brain waves, and how much pasta people ate at lunch. The people who skipped breakfast ate more.

Q: So it was a lab experiment.

A: You can’t really tell from the Herald story, which makes it sound as though the participants just chose whether or not to have breakfast, but yes.  If you look at the BBC version, it says that the same people were measured twice, once when they had breakfast and once when they didn’t.

Q: And how much more lunch did they eat when they didn’t eat breakfast?

A: An average of 250 calories more.

Q: How does that compare to how much they would have eaten at breakfast?

A:  There were brain waves, as well.

Q: How many calories would the participants have eaten at breakfast?

A: The part of the brain thought to be involved in “food appeal”, the orbitofrontal cortex, became more active on an empty stomach.

Q: Are you avoiding the question about breakfast?

A: Why would you think that? The breakfast was 730 calories. But the MRI showed that fasting made people hungrier.

Q: Isn’t 730 more than 250?

A: Comments like that are why people hate statisticians.
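
In case the arithmetic went by too fast, here it is spelled out, using the figures quoted above:

    # Net calorie change from skipping breakfast, using the figures in the story
    breakfast_calories = 730      # the breakfast that was skipped
    extra_lunch_calories = 250    # extra eaten at the pasta lunch after skipping it

    net = extra_lunch_calories - breakfast_calories
    print(f"net change: {net} calories")   # -480: fewer calories overall on the fasting day

Which is not, on the face of it, a recipe for weight gain.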

October 14, 2012

One of the most important meals of the day

Stuff is reporting "Food and learning connection shot down", based on a local study:

Researchers at Auckland University’s School of Population Health studied 423 children at decile one to four schools in Auckland, Waikato and Wellington for the 2010 school year.

They were given a free daily breakfast – Weet-Bix, bread with honey, jam or Marmite, and Milo – by either the Red Cross or a private sector provider.

My first reaction on reading this was: why didn’t they take this opportunity to do a randomised trial, so we could actually get reliable data.  So I went to the Cochrane Library to see what randomised trials had been done in the past. These have mostly been in developing countries and have found improvements in growth, but smaller differences in school performance.

Then I tried asking the Google, and its second link was a paper by Dr Ni Mhurchu, the researcher mentioned in the story, detailing the plans for a randomised trial of school breakfasts in Auckland. At that point it was easy to find the results, and see that in fact Stuff is talking about a randomised trial. They just didn't think it was important enough to mention that detail.

To the extent that one can trust the Stuff story at this point, there seem to be three reactions:

  • I don’t believe it because my opinions are more reliable than this research
  • Lunch would work even if breakfast didn’t
  •  We should be making sure kids have breakfast even if it doesn’t improve school performance.

The latter two responses are perfectly reasonable positions to take (though they're more convincing when they were taken before the results came out). School lunches might be more effective than breakfasts, and the US (hardly a hotbed of socialism) has had a huge school nutrition program for 60 years.

Still, if we’re going to supply subsidised meals to school kids, we do need to know why we’re doing it and what we expect to gain.    This study is one of the first to go beyond just saying that the benefits are obvious.

 

October 4, 2012

Science communication training through blogging

Mind the Science Gap is a blog from the University of Michigan:

Each semester, ten Master of Public Health students from the University of Michigan participate in a course on Communicating Science through Social Media. Each student on the course is required to post weekly articles here as they learn how to translate complex science into something a broad audience can understand and appreciate. And in doing so they are evaluated in the most brutal way possible – by you: the audience they are writing for!

The post that attracted me to the blog was on sugar and hyperactivity in kids, not just for the science, but because someone has actually found a good use for animated GIFs in communicating information: click to see the effect, since embedding it in WordPress seems to kill it.

July 13, 2012

Our new robot overlords

Since I regularly complain about the lack of randomised trials in education, I really have to mention a recent US study. At six public universities, introductory statistics students who consented were randomised between the usual sort of teaching by real live instructors and a format with one hour per week of face-to-face instruction augmented by independent computer-guided instruction. Within each campus, the students were assessed in the same way regardless of their instruction method, and across all campuses they also took a standardised test of statistics competence. Statistics is a good target for this sort of experiment, because it is a widely required course, and the median introductory statistics course is not very good.

The results were interesting.  The students using the hybrid computer-guided approach found the course less interesting than those with live instructors, but their performance in the course and in the standardised tests was the same.   If you ignore the cost of developing the software (which in this case already existed), the computer-guided approach would allow more students to be taught by the same number of instructors, saving money in the long run.
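
As a very rough illustration of where the savings would come from (the contact hours below are assumptions made for the sake of the arithmetic, not figures from the study):

    # Back-of-the-envelope: instructor contact time per student per week.
    # Assumed numbers; real courses and the study's formats will differ.
    traditional_contact_hours = 3.0   # assumed weekly face-to-face hours, usual format
    hybrid_contact_hours = 1.0        # one face-to-face hour per week in the hybrid format

    ratio = traditional_contact_hours / hybrid_contact_hours
    print(f"the same instructor contact time covers about {ratio:.0f}x as many students")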

This doesn't mean instructors are obsolete (people like face-to-face classes, and we do actually care if students end up interested in statistics), but it does mean that we need to think about the most efficient ways to use class contact time. There's an old joke about lectures as a method of transferring information from the lecturer's notes into the students' notebooks without it passing through the brains of either. We've got the internet for that, now.

January 21, 2012

Eggs for breakfast

Earlier in the week I complained that the Egg Foundation and the Herald were over-interpreting a lab study of mouse brain cells.  The study was a perfectly reasonable, and probably technically difficult, piece of basic biological research.  It’s the sort of research that answers the question “By what mechanisms might different foods affect brain function differently?”.   It doesn’t answer the question “What’s for breakfast?”.

If you wanted to know whether a high-protein breakfast such as eggs really increases alertness there are at least two ways to set up a relevant study.    The first would be an open-label randomized comparison of eggs and something else; the second would be a double-blind study of high-protein and high-carbohydrate versions of the same breakfast.  In both cases, you recruit people and randomly allocate them to higher-protein breakfasts on some days and lower-protein on other days.

In an open-label study you have to be careful to minimise response bias, so you would tell participants, truthfully, that some people think protein for breakfast is better and others think complex carbohydrates are better. You would have to be careful not to indicate what you believed, and it would be a good idea to measure some additional information beyond alertness, such as hunger and what people ended up eating for lunch. There's always some potential for bias, and one strategy is to ask participants about something that you don't expect to be affected, like headaches. This strategy was used in the home heating randomized trial that underlies the government's 'warm home' advertising, which found that asthma was reduced by better heating, but twisted ankles were not.

In a blinded version of the study, you might recruit muesli eaters and, perhaps with the help of a cereal manufacturer, randomize them to higher-protein and lower-protein versions of breakfast.  This would be a bit more expensive, but perfectly feasible.  There would be less risk of reporting bias, since neither the participant nor the people recording the data would know whether the meals were higher or lower in protein on a particular day.  At the end of the study, you unmask the breakfasts and compare alertness.   The main disadvantage of this approach is the same as its main advantage — you learn about higher-protein vs lower-protein muesli, and have to make some assumptions to generalize this to eggs vs cereal or toast.
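
Here's a minimal sketch in Python of how the data from either version might be analysed; the sample size, the effect sizes, and the headache rates are all invented for illustration, and a real protocol would worry about things like carry-over and the ordering of days. The headache comparison is the negative-control idea from the open-label design: it shouldn't show a difference, and if it does, that's a hint of reporting bias.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n = 60   # assumed number of muesli eaters recruited

    # Each person is measured on a higher-protein day and a lower-protein day,
    # in randomised order.  Alertness is on some 0-10 scale; numbers are made up.
    person_effect = rng.normal(6, 1, size=n)            # people differ in baseline alertness
    alert_high = person_effect + rng.normal(0.3, 1, n)  # assumed small benefit of protein
    alert_low  = person_effect + rng.normal(0.0, 1, n)

    # Negative-control outcome: headaches shouldn't depend on breakfast protein.
    headache_high = rng.binomial(1, 0.10, n)
    headache_low  = rng.binomial(1, 0.10, n)

    # Paired comparison of alertness between the two kinds of day
    t_alert = stats.ttest_rel(alert_high, alert_low)
    print(f"alertness: mean difference {np.mean(alert_high - alert_low):.2f}, "
          f"p = {t_alert.pvalue:.3f}")

    # Same comparison for the negative-control outcome
    t_head = stats.ttest_rel(headache_high, headache_low)
    print(f"headaches: mean difference {np.mean(headache_high - headache_low):.2f}, "
          f"p = {t_head.pvalue:.3f}")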

If it really mattered whether eggs for breakfast increased alertness, these studies would be worth doing.  But the Egg Foundation is unlikely to be interested, since it wouldn’t benefit from knowing the facts.  The mouse brain study is enough of a fig-leaf to let the claim stand up in public, and they don’t want to risk finding out that it doesn’t have any clothes.

 

September 29, 2011

Faster-than-light neutrinos.

So. It turns out that there is a statistical angle to the recent report of neutrinos travelling faster than light, since it’s the assessment of uncertainty in the travel time and distance that is really at issue.  For a nice summary of the measurement issues from a card-carrying physicist and popular science writer, see Chad Orzel’s blog.
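
For anyone who wants to see where the statistics comes in, here's a minimal error-propagation sketch in Python. The numbers are round values of the sort quoted in news reports (a roughly 730 km baseline, arrival about 60 ns early, timing uncertainty on the order of 10 ns), not the collaboration's actual error budget.

    import math

    # Speed is distance/time, so relative uncertainties combine (roughly) in quadrature.
    # All numbers here are round, illustrative values, not the experiment's error budget.
    c = 299792458.0                     # speed of light, m/s
    distance_m = 730e3                  # assumed ~730 km baseline
    light_time_s = distance_m / c       # about 2.4 milliseconds
    early_s = 60e-9                     # reported early arrival, about 60 ns
    flight_time_s = light_time_s - early_s

    sigma_distance_m = 0.2              # assumed 20 cm uncertainty in the distance
    sigma_time_s = 10e-9                # assumed 10 ns uncertainty in the timing

    speed = distance_m / flight_time_s
    rel_sigma = math.hypot(sigma_distance_m / distance_m, sigma_time_s / flight_time_s)

    print(f"(v - c)/c = {(speed - c) / c:.1e}")            # a few parts in 100,000
    print(f"relative uncertainty in v = {rel_sigma:.1e}")  # dominated by the timing term

Under these assumed numbers the distance term is negligible, so the whole result rides on whether the nanosecond-level timing chain is as accurate as claimed.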

June 29, 2011

Even toddlers use statistics!

New research published in the journal Science from researchers at MIT has shown that toddlers as young as 16 months old are able to make accurate judgments about whether a toy failed to operate due to their own mistake or due to circumstances beyond their control.

The results give insight into how toddlers use prior knowledge with some statistical data to make accurate inferences about the cause of a failed action. These findings are contrary to commonly held educational assumptions that young children aren't able to distinguish among causes, and have implications for early childhood education and for how humans learn in general.

“Infants who saw evidence suggesting the failure was due to their own action tried to hand the toy to their parents for help. Conversely, babies who saw evidence suggesting that the toy was broken were more likely to reach for a new toy, as another one was always nearby.

“That’s the amazing thing about what the babies are doing,” said Schulz. “They can use very, very sparse evidence because they have these rich prior beliefs and they can use that to make quite sophisticated, quite accurate inferences about the world.”

“It was fascinating to see that they are even sensitive to this problem of figuring out whether it’s them or the world to begin with,” added Gweon, “and that they can track such subtle statistical dependence between agents, objects and event outcomes to make rational inferences.”
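
To see what "rich prior beliefs plus very sparse evidence" can do, here's a toy Bayes-rule calculation in Python; the prior and the probabilities are invented purely for illustration, and aren't taken from the paper.

    # Toy Bayes-rule calculation: after seeing the toy fail, is the problem
    # "me" (I pressed it wrong) or "the toy" (it's broken)?  All probabilities
    # are invented for illustration.
    prior_broken = 0.2         # prior belief that the toy is broken
    prior_my_fault = 0.8       # prior belief that I just did it wrong

    # Evidence: someone else tried it twice and it failed both times.
    # That is likely if the toy is broken, unlikely if the problem was my action.
    p_evidence_if_broken = 0.9 ** 2
    p_evidence_if_my_fault = 0.1 ** 2

    numerator = prior_broken * p_evidence_if_broken
    posterior_broken = numerator / (numerator + prior_my_fault * p_evidence_if_my_fault)
    print(f"P(toy is broken | evidence) = {posterior_broken:.2f}")   # about 0.95

Two failures by someone else are enough to swing even a strong "it's probably me" prior over to "it's probably the toy", which is the kind of inference the researchers describe.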

Read more about the study »