Posts filed under Experiments (21)

April 4, 2014

Thomas Lumley’s latest Listener column

…”One of the problems in developing drugs is detecting serious side effects. People who need medication tend to be unwell, so it’s hard to find a reliable comparison. That’s why the roughly threefold increase in heart-attack risk among Vioxx users took so long to be detected …”

Read his column, Faulty Powers, here.

December 23, 2013

Meet Callum Gray, Statistics Summer Scholar 2013-2014

Every year, the Department of Statistics at the University of Auckland offers summer scholarships to a number of students so they can work with our staff on real-world projects. We’ll be profiling the 2013-2014 summer scholars on Stats Chat. Callum is working with Dr Ian Tuck on a project titled Probability of encountering a bus.  

Callum (right) explains:

“If you encounter a bus on a journey, you are likely to be exposed to higher levels of pollution. I am trying to find the probability of encountering a bus and how many you will encounter when you travel from place A to place B, taking into account variables such as the time of day and mode of transport.


“This research is useful because it will give us more of an understanding about the impact that buses have on our daily exposure to pollution. We can use this information to plan journeys and learn more about an issue that is becoming more and more apparent.

“I was born in Auckland and have lived here my whole life. I just finished my third year of a Bachelor of Commerce/Bachelor of Science conjoint majoring in Accounting, Finance, and Statistics, which I will finish at
the end of 2014.

“Statistics appeals to me because it is used everyday in conjunction with many other areas. It is very useful to know in a lot of workplaces, and it is interesting because it has a lot of real-life applications.

“I am going to Napier for Christmas and Rhythm and Vines for New Year. In the rest of my spare time, I will be playing cricket and golf, as well as hanging out with friends.”
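The encounter probability Callum describes can be illustrated with a toy model. This is only a sketch under an assumed Poisson model with invented rates, not the project's actual method: if buses pass your route at an average rate λ per minute, the chance of meeting at least one on a t-minute journey is 1 − e^(−λt).

```python
import math

def p_at_least_one_bus(rate_per_min, journey_min):
    """P(encountering at least one bus) under a Poisson model:
    1 - exp(-lambda * t)."""
    return 1 - math.exp(-rate_per_min * journey_min)

def expected_buses(rate_per_min, journey_min):
    """Expected number of buses encountered on the journey."""
    return rate_per_min * journey_min

# Invented rates for illustration: peak vs off-peak, 20-minute trip
peak = p_at_least_one_bus(0.5, 20)       # 10 buses expected: near-certain
off_peak = p_at_least_one_bus(0.05, 20)  # 1 bus expected
print(round(peak, 3), round(off_peak, 3))
```

The real project would estimate the rate λ from data, and let it vary with time of day, route, and mode of transport; the Poisson assumption itself (buses arriving independently at a constant rate) is exactly the kind of thing the research would need to check.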


October 8, 2013

100% protection?

The Herald tells us

Sunscreen provides 100 per cent protection against all three types of skin cancer and also safeguards a so-called superhero gene, a new study has found.

That sounds dramatic, and you might wonder how this 100% protection was demonstrated.

The study involved conducting a series of skin biopsies on 57 people before and after UV exposure, with and without sunscreen.

There isn’t any link to the research or even the name of the journal, but the PubMed research database suggests that this might be it, which is confirmed by the QUT press release. The researcher name matches, and so does the number of skin biopsies. They measured various types of cellular change in bits of skin exposed to simulated solar UV light, at twice the dose needed to turn the skin red, and found that sunscreen reduced the changes to less than the margin of error. This looks like good-quality research, and it indicates that sunscreen definitely will give some protection from melanoma, but 100% must be going too far given the small sample and moderate UV dose.

I was also a bit surprised by the “so-called superhero gene”, since I’d never seen p53 described that way before. It’s not just me: Google hasn’t seen that nickname either, except on copies of this story.

August 18, 2013

Correlation, genetics, and causation

There’s an interesting piece on cannabis risks at Project Syndicate. One of the things they look at is the correlation between frequent cannabis use and psychosis.  Many people are, quite rightly, unimpressed with this sort of correlation, since it isn’t hard to come up with explanations for psychosis causing cannabis use or for other factors causing both.

However, there is also some genetic data.  The added risk of psychosis seems to be confined to people with two copies of a particular genetic variant in a gene called AKT1. This is harder to explain as confounding (assuming the genetics has been done right), and is one of the things genetics is useful for. This isn’t just a one-off finding; it was found in one study and replicated in another.

On the other hand, the gene AKT1 doesn’t seem to be very active in brain cells, making it more likely that the finding is just a coincidence.  This is one of the things bioinformatics is good for.

In times like these it’s good to remember Ben Goldacre’s slogan “I think you’ll find it’s a bit more complicated than that.”

June 4, 2013

Survey respondents are lying, not ignorant

At least, that’s the conclusion of a new paper from the National Bureau of Economic Research.

It’s a common observation that some survey responses, if taken seriously, imply many partisans are dumber than a sack of hammers.  My favorite example is the 32% of respondents who said the Gulf of Mexico oil well explosion made them more likely to support off-shore oil drilling.

As Dylan Matthews writes in the Washington Post, though, the research suggests people do know better. Ordinarily they give the approved politically-correct answer for their party

In the control group, the authors find what Bartels, Nyhan and Reifler found: There are big partisan gaps in the accuracy of responses. … For example, Republicans were likelier than Democrats to correctly state that U.S. casualties in Iraq fell from 2007 to 2008, and Democrats were likelier than Republicans to correctly state that unemployment and inflation rose under Bush’s presidency.

But in an experimental group where correct answers increased your chance of winning a prize, the accuracy improved markedly:

Take unemployment: Without any money involved, Democrats’ estimates of the change in unemployment under Bush were about 0.9 points higher than Republicans’ estimates. But when correct answers were rewarded, that gap shrank to 0.4 points. When correct answers and “don’t knows” were rewarded, it shrank to 0.2 points.

This is probably good news for journalism and for democracy.  It’s not such good news for statisticians.

April 1, 2013

Briefly

Despite the date, this is not in any way an April Fools post

  • “Data is not killing creativity, it’s just changing how we tell stories”, from Techcrunch
  • Turning free-form text into journalism: Jacob Harris writes about an investigation into food recalls (nested HTML tables are not an open data format either)
  • Green labels look healthier than red labels, from the Washington Post. When I see this sort of research I imagine the marketing experts thinking “how cute, they figured that one out after only four years”
  • Frances Woolley debunks the recent stories about how Facebook likes reveal your sexual orientation (with comments from me).  It’s amazing how little you get from the quoted 88% accuracy, even if you pretend the input data are meaningful.  There are some measures of accuracy that you shouldn’t be allowed to use in press releases.
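To see why a quoted 88% accuracy can mean so little, here's a base-rate sketch. The 88% is the figure from the story; reading it as both sensitivity and specificity, and the 5% base rate, are my assumptions purely for illustration:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(condition | classifier says yes), by Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# 88% "accuracy" read as 88% sensitivity and 88% specificity,
# applied to an assumed 5% base rate:
ppv = positive_predictive_value(0.88, 0.88, 0.05)
print(round(ppv, 2))  # about 0.28: most positive calls are wrong
```

With a low base rate, even an accurate-sounding classifier is wrong for most of the people it flags, which is why headline accuracy figures are such a poor measure for press releases.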

March 22, 2013

Briefly

  • A post at Scientific American about covering clinical trials, for journalists and readers.  It’s a summary from the Association of Health Care Journalists annual conference. Starts out “My message: Ask the hard questions.”
  • Asking the hard questions is also useful in covering surveys.  Stuff reports “Kiwi leaders amongst the world’s riskiest”:

        New Zealand leaders are among the most likely in the world to ignore data and fail to seek a range of opinions when making decisions

    with no provenance except that this was based on a 600,000-person survey of managers and professionals by SHL. Before trying to track down any more detail, just think: how could this have worked? How would you get reliable information to support those conclusions from each of 600,000 people?

  • You may have heard about the famous Hawthorne experiment, where raising light levels in a factory improved output, as did lowering them, as did anything else experimental. The original data have been found and this turns out not to be the case.

February 23, 2013

When in doubt, randomise.

There has been (justified) wailing and gnashing of teeth over recent year-9 maths comparisons, and the Herald reports that a ‘back to basics’ system is being considered

Auckland educator Des Rainey, who did the research with teachers to test his home-made Kiwi Maths memorisation system, said the results came as a shock to the teachers and made him doubt his programme could work.

But after a year of practising multiplication and division on the Kiwi Maths grids for up to 10 minutes a day, the students more than doubled their speed.

This programme looks promising, but why is anyone even talking about implementing a major nationwide intervention based on a small, uncontrolled before/after comparison measuring a surrogate outcome?

That is, unless you believe teachers and schoolchildren are much less individually variable than, say, pneumococci, you would want a randomised controlled comparison. And since presumably Des Rainey would agree that speed of basic arithmetic matters primarily as a foundation for actual numeracy, you’d want to measure the programme’s success on numeracy tasks rather than on arithmetic speed. The results being reported are what the medical research community would call a non-randomised Phase IIa efficacy trial: an important stepping stone, but not a basis for policy.
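A toy simulation shows why the before/after design is weak (all numbers here are invented): if students get faster with a year of ordinary schooling anyway, an uncontrolled comparison attributes the whole gain to the programme, while a randomised control arm isolates the programme's actual effect.

```python
import random

random.seed(1)

def simulate(n=200, baseline=30.0, maturation=15.0, programme_effect=3.0):
    """Problems solved per timed test. Everyone improves by
    `maturation` over the year, programme or not."""
    before = [random.gauss(baseline, 5) for _ in range(n)]
    # Randomise: first half get the programme, second half don't
    treated = [b + maturation + programme_effect + random.gauss(0, 3)
               for b in before[:n // 2]]
    control = [b + maturation + random.gauss(0, 3)
               for b in before[n // 2:]]
    mean = lambda xs: sum(xs) / len(xs)
    before_after_gain = mean(treated) - mean(before[:n // 2])  # looks huge
    randomised_gain = mean(treated) - mean(control)            # true effect
    return before_after_gain, randomised_gain

naive, rct = simulate()
print(round(naive, 1), round(rct, 1))  # naive gain far exceeds the real one
```

The uncontrolled comparison reports roughly the maturation plus the programme effect (about 18 here), while the randomised comparison recovers something near the true effect of 3.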

Of course, that’s not how education works, is it?

February 15, 2013

Overselling research findings

The Herald has a story claiming that facial proportions indicate racism (in men).  Well, they have a headline claiming that. The story (and the research paper, even more explicitly) pretty much contradicts the headline, and says that facial proportions have nothing to do with racism but indicate whether men express their racist views or hide them.

If you believe the story, the relationship is very strong

Looking at the photos from the first study, a new group of participants evaluated men with wider, shorter faces as more prejudiced, and they were able to accurately estimate the target’s self-reported prejudicial beliefs just by looking at an image of his face.

and to be fair to the journalist, that’s what the researchers said.  If you look at their actual results, it’s not what they found.

They found an average difference of 1.92 on a 6-point perceived-racism scale for men who differ by 1 unit on the facial proportion scale.  The full range of the facial proportion scale appears to be only about 0.7 units. The paper doesn’t tell us the actual distribution of the measurements, but according to another research paper I found on the internets, the standard deviation of this facial proportion scale is about 0.12.  That means two randomly chosen men would differ by about 0.17 units, and the relationship would predict a difference in the 6-point perceived-racism scale of about 0.3 units.  The association with self-reported racism was about as strong, though I haven’t been able to find enough information to compute the predicted differences (it shouldn’t be this hard).
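The arithmetic here is easy to check directly. The 1.92 slope is from the paper and the 0.12 standard deviation is from the other paper mentioned above; the rest is just scaling:

```python
import math

slope = 1.92  # points on the 6-point scale per unit of facial ratio
sd = 0.12     # SD of the facial proportion measure

# The difference between two independent draws has standard
# deviation sqrt(2) * sd, which is the "about 0.17 units" figure.
diff_sd = math.sqrt(2) * sd
predicted = slope * diff_sd
print(round(diff_sd, 2), round(predicted, 2))  # 0.17, then about 0.33
```

A typical between-person difference of a third of a point on a six-point scale is what the quoted "accurate estimate" amounts to.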

In my book, that’s not an “accurate estimate”.


January 27, 2013

Clinical trials in India

Stuff has a story from the Sydney Morning Herald on clinical trials in India.  The basic claims are damning if true:

…clinical drug trials are at the centre of a growing controversy in India, as evidence emerges before courts and, in government inquiries, of patients being put onto drug trials without their knowledge or consent…

With a very few exceptions (eg some trials of emergency resuscitation techniques and some minimal-risk cluster-randomised trials of treatment delivery) it is absolutely fundamental that trial participants give informed consent. Trial protocols are supposed to be reviewed in advance to make sure that participants aren’t asked to consent to unreasonable things, but consent is still primary.  This isn’t just a technical detail, since researchers who were unclear on the importance of consent have often been bad at other aspects of research or patient care.

The Fairfax story mixes in the claimed lack of consent with other claims that are either less serious or not explained clearly. For example

Figures from the drugs controller-general show that in 2011 there were deaths during clinical trials conducted by, or on behalf of, Novartis, Quintiles, Pfizer, Bayer, Bristol-Myers Squibb, and MSD Pharmaceutical.

Of course there were deaths in clinical trials. If you are comparing two treatments for a serious illness, the trial participants will be seriously ill.  When you need to know if a new treatment reduces the risk of death, the only way to tell is to do a study large enough that some people are expected to die.  Even if improved survival isn’t directly what you’re measuring, a large trial will include people who die. In the main Women’s Health Initiative hormone replacement trial, for example, 449 women had died by the time the trial was stopped.  The question isn’t whether there were deaths, it’s whether there were deaths that wouldn’t have occurred if the trials had been done right.

There’s also a claim that families of participants who died were not given adequate compensation as part of the trial.  If there had been consent, this wouldn’t necessarily matter. Lots of trials in developed countries don’t specifically compensate participants or relatives, and there’s actually some suspicion of those that do, because it provides another incentive to participate even if you don’t really want to.

Other sources: Times of India, Chemistry World, a couple of review articles, the Nuremberg Code