Posts filed under Research (206)

December 8, 2015

Sense of direction

From the Herald:

For a lot of men, the notion that they have a better sense of direction than women was already a fact, now a scientific study proves it.

Researchers from the Norwegian University of Science and Technology conducted a study where volunteers completed a series of navigation based tasks with brain scans taken in the process.

The results show men have a more adept sense of direction because they use a separate part of their brain to find their way.

The press release is here, and it describes the research as coming from two separate experiments. There’s a link to the research paper, but only for the second experiment involving testosterone. No link is given for the claim about men vs women.  I tried the PubMed research database, but the data aren’t in any of the other papers published by the same lead researcher.

The second experiment involved only women, half of whom were given a dose of testosterone. The story says

It was also found when women in the study had a dose of testosterone dropped onto their tongue, their navigational skills improved.

The research paper says

Surprisingly, the specific increase in MTL activity was not accompanied by increased navigation performance in the testosterone group.

That is, they saw changes in brain activity, but no change in navigation performance. The press release has this right, saying

“We hoped that they would be able to solve more tasks, but they didn’t.”

So, we have two claims. For one of them the evidence isn’t available; for the other, the evidence contradicts the story.

November 27, 2015

What should data use agreements look like?

After the news about Jarrod Gilbert being refused access to crime data, it’s worth looking at what data-use agreements should look like. I’m going to just consider agreements to use data for one’s own research — consulting projects and commissioned reports are different.

On Stuff, the police said

“Police reserves the right to discuss research findings with the academic if it misunderstands or misrepresents police data and information,” Evans said. 

Police could prevent further access to police resources if a researcher breached the agreement, he said. 

“Our priority is always to ensure that an appropriate balance is drawn between the privacy of individuals and academic freedom.

That would actually be reasonable if it only went that far: an organisation has confidential data, you get to see the data, they get to check whether you’ve reported anything that would breach their privacy restrictions. They can say “paragraph 2, on page 7, the street name together with the other information is identifying”, and you can agree or disagree, and potentially get an independent opinion from a mediator, ombudsman, arbitrator, or if it comes to that, a court.

The key here is that a breach of the agreement is objectively decidable and isn’t based on whether they like the conclusions. The problem comes with discretionary use of data. If the police have discretion over what analyses can be published, there’s no way to tell whether, or to what extent, they are misusing it. Even if their discretion extends only to who can use the data, it’s hard to tell whether they are using the implied threat of exclusion to pressure people into changing their results.

Medical statistics has a lot of experience with this sort of problem. That’s why the International Committee of Medical Journal Editors says, in their ‘conflict of interest’ recommendations

Authors should avoid entering in to agreements with study sponsors, both for-profit and non-profit, that interfere with authors’ access to all of the study’s data or that interfere with their ability to analyze and interpret the data and to prepare and publish manuscripts independently when and where they choose.

Under the ICMJE rules, I believe the sort of data-use restrictions we heard about for crime data would have to be disclosed as a conflict of interest.  The conflict wouldn’t necessarily lead to a paper being rejected, but it would be something for editors and reviewers to bear in mind as they looked at which results were presented and how they were interpreted.


November 25, 2015

Why we can’t trust crime analyses in New Zealand

Jarrod Gilbert has spent a lot of time hanging out with people in biker gangs.

That’s how he wrote his book, Patched, a history of gangs in New Zealand.  According to the Herald, it’s also the police’s rationale for not letting him have access to crime data. I don’t know whether it would be more charitable to the Police to accept that this is their real reason or not.

Conceivably, you might be concerned about access to these data for people with certain sorts of criminal connections. There might be ways to misuse the data, perhaps for some sort of scam on crime victims. No-one suggests that is  the sort of association with criminals that Dr Gilbert has.

It gets worse. According to Dr Gilbert, also writing in the Herald, the standard data access agreement for the data says police “retain the sole right to veto any findings from release.” Even drug companies don’t get away with those sorts of clauses nowadays.

To the extent these reports are true, we can’t entirely trust any analysis of New Zealand crime data that goes beyond what’s publicly available. There might be a lot of research that hasn’t been affected by censorship and threats to block future work, but we have no way of picking it out.

November 13, 2015

Blood pressure experiments

The two major US medical journals each published  a report this week about an experiment on healthy humans involving blood pressure.

One of these was a serious multi-year, multi-million-dollar clinical trial in over 9000 people, trying to refine the treatment of high blood pressure. The other looks like a borderline-ethical publicity stunt.  Guess which one ended up in Stuff.

In the experiment, 25 people were given an energy drink

We hypothesized that drinking a commercially available energy drink compared with a placebo drink increases blood pressure and heart rate in healthy adults at rest and in response to mental and physical stress (primary outcomes). Furthermore, we hypothesized that these hemodynamic changes are associated with sympathetic activation, which could predispose to increased cardiovascular risk (secondary outcomes).

The result was that consuming caffeine made blood pressure and heart rate go up for a short period,  and that levels of the hormone norepinephrine  in the blood also went up. Oh, and that consuming caffeine led to more caffeine in the bloodstream than consuming no caffeine.

The findings about blood pressure, heart rate, and norepinephrine are about as surprising as the finding about caffeine in the blood. If you do a Google search on “caffeine blood pressure”, the recommendation box at the top of the results is advice from the Mayo Clinic. It begins

Caffeine can cause a short, but dramatic increase in your blood pressure, even if you don’t have high blood pressure.

The Mayo Clinic, incidentally, is where the new experiment was done.

I looked at the PubMed research database for research on caffeine and blood pressure.  The oldest paper in English for which I could get full text was from 1981. It begins

Acute caffeine in subjects who do not normally ingest methylxanthines leads to increases in blood pressure, heart rate, plasma epinephrine, plasma norepinephrine, plasma renin activity, and urinary catecholamines.

Even in 1981, this wasn’t news.

Now, I don’t actually like energy drinks; I prefer my caffeine hot and bitter.  Since many energy drinks have as much caffeine as good coffee and some have almost as much sugar as apple juice, there’s probably some unsafe level of consumption, especially for kids.

What I don’t like is dressing this up as new science. The acute effects of caffeine on the cardiovascular system have been known for a long time. It seems strange to do a new human experiment just to demonstrate them again. In particular, it seems ethically dubious if you think these effects are dangerous enough to put out a press release about.


November 10, 2015

New blood pressure trial

A big randomised trial comparing strategies for treating high blood pressure has just ended early (paper, paywalled).  There’s good coverage in the New York Times, and there will probably be a lot more over the next week. It’s a relatively complicated story.

The main points:

  • Traditionally, doctors try to get your blood pressure below 140mmHg, but some people always thought lower would be better.
  • The study, funded by the US government, randomly allocated over 9000 people with high blood pressure and some other heart-disease risk factor (but not diabetes) to a target of either below 140mmHg or below 120mmHg.
  • A previous trial with the same targets, but in people with diabetes, had been unimpressive: the results slightly favoured more-intensive treatment, but the difference was small, and well within the variation you’d expect by chance.
  • In the new trial, blood pressure targeting worked really well: the average blood pressure in the low group was 122mmHg, and in the normal group 135mmHg.
  • Typically, people in the low group took two or three blood pressure medications, those in the normal group typically took one or two — but in both cases with quite a lot of variation.
  • There were 76 fewer ‘primary outcome events’ (heart attack, stroke, heart failure, or death from heart disease) in the low BP group, and 55 fewer deaths from any cause.
  • From the beginning, the plan was to stop whenever the difference in number of ‘primary outcome events’ exceeded a specified threshold, unless there was a good reason based on the data to continue. The difference had been just barely over the threshold at the previous analysis, and they continued. In mid-September it was clearly over the threshold, and they stopped.
  • Stopping early will tend to overestimate the benefit, but the fact that they waited for one more analysis reduces this bias (the sketch after this list shows the overestimation at work).
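
To see why, here’s a minimal simulation sketch. The sample sizes, boundary, and effect size are all made up for illustration; this is not the trial’s actual monitoring plan:

```python
# Simulate a trial with interim looks that stops when the z-statistic
# crosses a fixed boundary, then look at the estimate at stopping.
# Every number here is hypothetical.
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.1         # true mean difference, in SD units
looks = [250, 500, 750]   # cumulative sample size per arm at each interim look
threshold = 2.5           # z-statistic needed to stop early (hypothetical boundary)

early_estimates = []
for _ in range(5000):
    a = rng.normal(true_effect, 1, looks[-1])   # treatment arm outcomes
    b = rng.normal(0.0, 1, looks[-1])           # control arm outcomes
    for n in looks:
        diff = a[:n].mean() - b[:n].mean()
        z = diff / np.sqrt(2 / n)               # z = difference / SE(difference)
        if z > threshold:
            early_estimates.append(diff)
            break

print(f"true effect: {true_effect}")
print(f"mean estimate in trials stopped early: {np.mean(early_estimates):.3f}")
# The trials that cross the boundary early are the ones chance flattered,
# so this mean comes out well above the true 0.1.
```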

I’m surprised the benefit from extreme blood pressure reduction is so large (in a relative sense), but even more surprised that they managed to get so many healthy people to take their treatments that consistently for over three years.  As context for this, data from a US national survey in 2011-12 showed only about two-thirds of those currently taking medications for high blood pressure even get down to 140mmHg.

In an absolute sense the risk reduction is relatively small: for every thousand people on intensive blood pressure reduction — healthy people taking multiple pills, multiple times per day — they saw 12 fewer deaths and 16 fewer ‘events’.   On the other hand, the treatments are cheap and most people can find a combination without much in the way of side effects. If intensive treatment becomes standard, there will probably be more use of combination pills to make multiple drugs easier to take.
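
As a sanity check on those per-thousand figures, the arithmetic is just the raw differences scaled by the size of one treatment arm. The per-arm count below is an assumption, roughly half of the 9000-plus participants:

```python
# Back-of-envelope check of the absolute risk reductions quoted above.
# per_arm is assumed to be about half the total enrolment; the exact
# arm sizes aren't given in the post.
per_arm = 4700
fewer_events = 76    # fewer 'primary outcome events' in the low-BP group
fewer_deaths = 55    # fewer deaths from any cause

print(f"events prevented per 1000 treated: {1000 * fewer_events / per_arm:.0f}")  # ~16
print(f"deaths prevented per 1000 treated: {1000 * fewer_deaths / per_arm:.0f}")  # ~12
```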

There’s one moderately worrying factor: a higher rate of kidney impairment in the low BP group (higher by a couple of percentage points). The researchers indicate that they don’t know if this is real, permanent  damage, and that more follow-up and testing of those people is needed. If it is a real problem it could be more serious in ordinary medical practice than in the obsessively-monitored trial.  This may well explain why the trial didn’t stop even earlier:  the monitoring committee would have wanted to be sure the benefits were real given the possibility of adverse effects — the sort of difficult decision that is why you have experienced, independent monitoring committees. 

November 9, 2015

To each according to his needs

There’s a fairly overblown story in the Guardian about religion and altruism

“Overall, our findings … contradict the commonsense and popular assumption that children from religious households are more altruistic and kind towards others,” said the authors of The Negative Association Between Religiousness and Children’s Altruism Across the World, published this week in Current Biology.

“More generally, they call into question whether religion is vital for moral development, supporting the idea that secularisation of moral discourse will not reduce human kindness – in fact, it will do just the opposite.”

The research found that kindergarten (update: and primary school) children from religious families scored lower on an altruism test (a version of the Dictator game).  Given ten stickers, non-religious children would give about one more away on average than religious children.


While it’s obviously true that this sort of simple moral behaviour doesn’t require religion, the cause-and-effect conclusion the story is trying to draw is stronger than the data will support. I’m pretty confident the people quoted approvingly wouldn’t have been as convinced by the same sort of research if it had found the opposite result.

The research does provide convincing evidence on another point, though: three-dimensional graphics are a Bad Idea.

[Three-dimensional graph from the research paper]


November 3, 2015

Dogs and asthma

One News says

The family dog or growing up on a farm could be the keys to reducing the chances of a young person suffering from asthma.

This is pretty good research. It’s obviously not a randomised experiment, but it uses Sweden’s population-wide administrative and medical data to get a reasonable estimate of the associations, and the result is consistent with other population studies and has a plausible explanation in immunology. One News gave all the relevant numbers, and got Dr Collin Brooks from Massey in as an expert. So that’s all good.

But (you knew there was a ‘but’), the population impact is smaller than the news story suggests.  That has to be the case: New Zealand, with very high asthma rates by international standards, already has fairly high dog ownership rates.  In fact, as often happens, this new study has found less benefit than earlier, smaller studies.

At current NZ asthma rates, for every extra 100 little kids who live with dogs, the research would predict that you’d prevent one or two cases of asthma. And that’s without worrying about, say, reduced housing options for households with pets.
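
If you want the arithmetic behind that estimate: the absolute impact is the baseline risk multiplied by the relative risk reduction. Both inputs below are assumed round numbers for illustration, not figures taken from the Swedish paper:

```python
# Rough arithmetic behind 'one or two cases per 100 kids'.
baseline_risk = 0.15    # assumed NZ childhood asthma rate (about 1 in 7)
relative_risk = 0.87    # assumed risk ratio for kids living with dogs

arr = baseline_risk * (1 - relative_risk)   # absolute risk reduction
print(f"cases prevented per 100 extra kids with dogs: {100 * arr:.1f}")  # about 2
```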


August 30, 2015

Genetically targeted cancer treatment

Targeting cancer treatments to specific genetic variants has certainly had successes with common mutations — the best-known example must be Herceptin for an important subset of breast cancer. Reasonably affordable genetic sequencing has the potential to find specific, uncommon mutations in cancers where there isn’t a standard, approved drug.

Most good ideas in medicine don’t work, of course, so it’s important to see if this genetic sequencing really helps, and how much it costs.  Ideally this would be in a randomised trial where patients are randomised to the best standard treatment or to genetically-targeted treatment. What we have so far is a comparison of disease progress for genetically-targeted treatment compared to a matched set of patients from the same clinic in previous years.  Here’s a press release, and two abstracts from a scientific conference.

In 72 out of 243 patients whose disease had progressed despite standard treatment, the researchers found a mutation that suggested the patient would benefit from some drug they wouldn’t normally have got. The median time until these patients started getting worse again was 23 weeks; in the historical patients it was 12 weeks.

The Boston Globe has an interesting story talking to researchers and a patient (though it gets some of the details wrong).  The patient they interview had melanoma and got a drug approved for melanoma patients but only those with one specific mutation (since that’s where the drug was tested). Presumably, though the story doesn’t say, he had a different mutation in the same gene — that’s where the largest benefit of sequencing is likely to be.

An increase from 12 to 23 weeks isn’t terribly impressive, and it came at a cost of US$32,000 — the abstract and press release say there wasn’t a cost increase, but that’s because they looked at cost per week, not total cost. It’s not nothing, though; it’s probably large enough that a clinical trial makes sense and small enough that a trial is still ethical and feasible.
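
To see how “no cost increase” and a much bigger bill can both be true, here’s the per-week versus total arithmetic. The weekly cost is a hypothetical round number; only the week counts come from the reports:

```python
# Cost per week can stay flat while total cost nearly doubles, because
# treatment that keeps working longer gets paid for longer.
weeks_standard = 12
weeks_targeted = 23
cost_per_week = 1400   # hypothetical, assumed similar in both groups

print(f"total cost, standard: ${weeks_standard * cost_per_week:,}")  # $16,800
print(f"total cost, targeted: ${weeks_targeted * cost_per_week:,}")  # $32,200
```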

The Boston Globe story is one of the first products of their new health-and-medicine initiative, called “Stat”. That’s not short for “statistics”; it’s the medical slang meaning “right now”, from the Latin statim.

August 28, 2015

Trying again

[Graph from the paper: original effect sizes compared with replication effect sizes]

This graph is from the Open Science Framework attempt to replicate 100 interesting results in experimental psychology, led by Brian Nosek and published in Science today.

About a third of the experiments got statistically significant results in the same direction as the originals.  Averaging all the experiments together,  the effect size was only half that seen originally, but the graph suggests another way to look at it.  It seems that about half the replications got basically the same result as the original, up to random variation, and about half the replications found nothing.
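
Here’s a minimal sketch of that reading: if half the original findings are real (and replicate at full size) while half are false positives (true effect zero), the average replication effect lands at about half the average original effect. The numbers are illustrative, not taken from the project’s data:

```python
# Mixture sketch: half the studies have a real effect, half have none.
import numpy as np

rng = np.random.default_rng(0)
n_studies = 100
original_effect = 0.5                    # nominal original effect size
is_real = rng.random(n_studies) < 0.5    # half real, half false positives
true_effects = np.where(is_real, original_effect, 0.0)
replications = rng.normal(true_effects, 0.15)   # replication estimates, with noise

print(f"mean replication effect: {replications.mean():.2f}")  # about 0.25, half of 0.5
```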

Ed Yong has a very good article about the project in The Atlantic. He says it’s worse than psychologists expected (but at least now they know).  It’s actually better than I would have expected — I would have guessed that the replicated effects would average quite a bit smaller than the originals.

The same thing is going to be true for a lot of small-scale experiments in other fields.

August 5, 2015

What’s in a browser language default?

Ok, so this is from Saturday and I hadn’t seen it until this morning, so perhaps it should just be left in obscurity, but:

Claims foreign buyers are increasingly snapping up Auckland houses have been further debunked, with data indicating only a fraction of visitors to a popular real estate website are Asian.

Figures released by website realestate.co.nz reveal about five per cent of all online traffic viewing Auckland property between January and April were primary speakers of an East Asian language.

Of that five per cent, only 2.8 per cent originated from outside New Zealand meaning almost half were viewing from within the country.
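
For what it’s worth, the quoted figures only add up if the 2.8 is percentage points out of the five, not a percentage of it; that reading is assumed here:

```python
# Assumed reading of the quoted figures: 5% of all traffic had an East
# Asian language preference, and 2.8 of those 5 percentage points came
# from outside New Zealand.
east_asian_share = 5.0    # per cent of all traffic
overseas_points = 2.8     # percentage points of that five

within_nz = east_asian_share - overseas_points
print(f"East Asian-language traffic from within NZ: "
      f"{100 * within_nz / east_asian_share:.0f}%")   # 44%, 'almost half'
```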

The problem with Labour’s analysis was that it conflated “Chinese ethnicity” and “foreign”, but at least everyone on the list had actually bought a house in Auckland, and they captured about half the purchases over a defined time period. It couldn’t say much about “foreign”, but it was at least fairly reliable on “Chinese ethnicity” and “real-estate buyer”.

This new “debunking” uses data from a real-estate website. There is no information given either about what fraction of house buyers in Auckland used the website, or about what fraction of people who used the website ended up buying a house rather than just browsing (or about how many people have their browser’s language preferences set up correctly, since that’s what was actually measured). Even if realestate.co.nz captured the majority of NZ real-estate buyers, it would hardly be surprising if overseas investors who prefer non-English websites used something different. What’s worse, if you read carefully, is that they say “online traffic”: these aren’t even counts of actual people.

So far, the follow-up data sets have been even worse than Labour’s original effort. Learning more would require knowing actual residence for actual buyers of actual Auckland houses: either a large fraction over some time period or a representative sample.  Otherwise, if you have a dataset lying around that could be analysed to say something vaguely connected to the number of overseas Chinese real-estate buyers in Auckland, you might consider keeping it to yourself.