Posts filed under Medical news (341)

March 17, 2013

Briefly

  • When data gets more important, there’s more incentive to fudge it.  From the Telegraph: “senior NHS managers and hospital trusts will be held criminally liable if they manipulate figures on waiting times or death rates.”
  • A new registry for people with rare genetic diseases, emphasizing the ability to customise what information is revealed and to whom.
  • Wall St Journal piece on Big Data. Some concrete examples, not just the usual buzzwords.
  • Interesting visualisations from RevDanCat
March 15, 2013

Better evidence in education

There’s a new UK report by Ben Goldacre, “Building Evidence into Education”, which has been welcomed by the Teacher Development Trust

Part of the introduction is worth quoting in detail:

Before we get that far, though, there is a caveat: I’m a doctor. I know that outsiders often try to tell teachers what they should do, and I’m aware this often ends badly. Because of that, there are two things we should be clear on.

Firstly, evidence based practice isn’t about telling teachers what to do: in fact, quite the opposite. This is about empowering teachers, and setting a profession free from governments, ministers and civil servants who are often overly keen on sending out edicts, insisting that their new idea is the best in town. Nobody in government would tell a doctor what to prescribe, but we all expect doctors to be able to make informed decisions about which treatment is best, using the best currently available evidence. I think teachers could one day be in the same position.

Secondly, doctors didn’t invent evidence based medicine. In fact, quite the opposite is true: just a few decades ago, best medical practice was driven by things like eminence, charisma, and personal experience. We needed the help of statisticians, epidemiologists, information librarians, and experts in trial design to move forwards. Many doctors – especially the most senior ones – fought hard against this, regarding “evidence based medicine” as a challenge to their authority.

In retrospect, we’ve seen that these doctors were wrong. The opportunity to make informed decisions about what works best, using good quality evidence, represents a truer form of professional independence than any senior figure barking out their opinions. A coherent set of systems for evidence based practice listens to people on the front line, to find out where the uncertainties are, and decide which ideas are worth testing. Lastly, crucially, individual judgement isn’t undermined by evidence: if anything, informed judgement is back in the foreground, and hugely improved.

This is the opportunity that I think teachers might want to take up.

March 13, 2013

Is epidemiology 90% wrong?

There’s been a recent recurrence of the factoid that 90% of results in epidemiology are wrong. For example, @StatFact on Twitter posted ‘Empirical evidence is that 80-90% of the claims made by epidemiologists are false.’ with a link to a talk by Stanley Young at the National Institute of Statistical Sciences.  I replied “For suitable values of ‘claim’ and ‘false’”, and if you don’t want to read further, that’s a good summary.

March 11, 2013

How could we test this?

As you will have heard, there is reasonable evidence that an infant has been cured of HIV infection, by giving fairly high doses of antiretroviral drugs immediately after birth.  If this case continues to hold up to investigation, what next?

You would normally want to do a randomized trial, to get evidence that this wasn’t just a one-off fluke, but that’s going to be hard.  Obviously, parents would be very reluctant to have their children randomized. To make matters worse, since the usual antiretroviral treatments are almost completely effective in preventing mother:child transmission, most infected infants in Western countries will have been born to mothers who either didn’t know they were infected, or knew and were unable to get normal medical care.  This is not a group you want to target for research, for both practical and ethical reasons.  The same issue arises in countries where mother:child transmission is more common: antiretroviral treatment to prevent transmission is simpler and less expensive than the potentially-curative treatment for the infant, so any system that is able to deliver the cure reliably would rarely need to.

On the other hand, if this (relatively drastic) treatment really does work, not having a randomised trial is likely to slow its acceptance.  Back in the late 1980s, a new lung-bypass technique for premature infants was invented.  This technique, ECMO, appeared to dramatically improve survival, but it required major surgery.  Researchers at the University of Michigan tried a novel ‘play the winner’ trial design that was intended to reduce the number of infants randomized to an ineffective treatment. In a sense, this worked.  The trial ended up randomising 11 infants to ECMO, all of whom survived, and one to standard treatment, who died.  Unfortunately, the trial design was sufficiently unusual and unfamiliar that people didn’t seem to be able to interpret the results (it’s been the subject of multiple statistics papers). A similar design was used in a follow-up trial at Harvard, ending up with 28 infants given  ECMO (with one death) and 10 given standard treatment (with four deaths). Again, there wasn’t consensus on what the result meant, and it wasn’t until after a third, standard randomised trial was done that the treatment was widely used — and if the standard trial had been done first, fewer infants would have been randomised to standard care, and infants outside the trial would have gotten the treatment earlier.
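The ‘play the winner’ idea can be sketched as a simple urn scheme. This is my illustration, not the Michigan team’s exact algorithm, and the survival probabilities are exaggerated made-up numbers:

```python
import random

def play_the_winner(outcomes, n_patients, seed=0):
    """Two-arm randomised play-the-winner urn (a sketch only).

    Start with one ball per arm; draw a ball to choose the arm, then
    put it back along with an extra ball for that arm on success, or
    for the other arm on failure.  Apparently-successful arms are
    drawn more and more often as the trial goes on.

    `outcomes` maps arm name -> made-up probability of survival.
    """
    rng = random.Random(seed)
    urn = list(outcomes)                       # one ball per arm to start
    allocations = {arm: 0 for arm in outcomes}
    for _ in range(n_patients):
        arm = rng.choice(urn)
        allocations[arm] += 1
        success = rng.random() < outcomes[arm]
        other = next(a for a in outcomes if a != arm)
        urn.append(arm if success else other)  # reward the apparent winner
    return allocations

# Exaggerated survival rates, purely for illustration
alloc = play_the_winner({"ECMO": 0.95, "standard": 0.2}, n_patients=12)
```

Because each success adds a ball for the winning arm, allocation drifts quickly towards the better treatment, which is how a trial like this can end up with eleven patients on one arm and one on the other.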

Individual-level randomisation may well not be possible to do efficiently and ethically.  Another approach, in some country that is making efforts to provide prophylaxis against mother:child transmission and that believes treating infected infants is feasible, would be a stepped-wedge design.  This design takes advantage of the fact that we can’t do everything at once.  If treatment is being rolled out across a developing country, some areas will get it first and some will get it later.  Rather than a haphazard allocation (or one based on where the health ministry officials have relatives, or where the international TV representatives want to film) using a truly random order allows evaluation of the effectiveness of treatment policy while still delivering treatment to as many people as possible, as fast as possible.   This design also has the advantage of testing a real public-health question: does a policy of treating infected infants result in fewer infected children?  It’s conceivable, especially in a country where health care is expensive and there’s a lot of prejudice against HIV-positive people, that having treatment available for infected infants could reduce the use of HIV testing and prophylaxis by pregnant women, and the net effect could be negative.
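The allocation step of a stepped-wedge design is easy to sketch: randomise the order in which clusters cross over to treatment, then stagger the crossover times. The region names below are invented for illustration:

```python
import random

def stepped_wedge_schedule(clusters, n_periods, seed=0):
    """Assign each cluster a randomised crossover period.

    Every cluster eventually gets the treatment; randomisation only
    decides *when* each one switches, so the staggered rollout can
    still be used to estimate the effect of the policy.
    """
    rng = random.Random(seed)
    order = list(clusters)
    rng.shuffle(order)
    # Spread crossovers as evenly as possible: the first group of
    # clusters switches in period 1, the next in period 2, and so on.
    return {cluster: 1 + i * n_periods // len(order)
            for i, cluster in enumerate(order)}

# Invented region names, purely for illustration
rollout = stepped_wedge_schedule(
    ["North", "South", "East", "West", "Central", "Coast"], n_periods=3)
```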

March 10, 2013

Your media on drugs

Last night, 3News had a scare story about positive drug tests at work.  The web headline is “Report: More NZers working on drugs”, but that’s not what they had information on:

New figures reveal more New Zealanders were caught with drugs in their system at work last year.

…new figures from the New Zealand Drug Detection Agency reveal 4300 people tested positive for drugs at work last year.

but

The New Zealand Drug Detection Agency says employers are doing a better job of self-regulating. The agency performed almost 70,000 tests last year, 30 percent more than in 2011.

If 30% more were tested, you’d expect more to be positive. The story doesn’t say how many tested positive the previous year, but with the help of the Google, I found last year’s press release, which says

In 2011, 8% of men tested “non-negative”, compared with 6% of women.

Now, 8% of 70000 is 5600, and even 6% of 70000 is 4200. Given that the majority of the tests are in men, it looks like the proportion testing positive went down this year.
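The back-of-envelope check is worth writing out. Since the story doesn’t give the male/female split of tests, the sketch below assumes an even split, which understates the 2011 rate given that the majority of tests were in men:

```python
tests_2012 = 70_000      # "almost 70,000 tests last year"
positives_2012 = 4_300   # positives reported for last year

rate_2012 = positives_2012 / tests_2012   # about 6.1%

# Rates from the 2011 press release
rate_men_2011, rate_women_2011 = 0.08, 0.06

# Even a 50:50 male/female split of tests -- the real split leaned
# male, so the true blended 2011 rate was higher -- gives 7%
blended_2011 = 0.5 * rate_men_2011 + 0.5 * rate_women_2011

assert rate_2012 < blended_2011   # the positive *rate* went down
```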

The worst part of the story, statistically, is that it reports changes in the proportion of positive tests accounted for by each drug as if this was meaningful.  For example,

When it comes to industries, oil and gas had an 18 percent drop in positive tests for methamphetamine, but showed a marked increase in the use of opiates.

That’s an increase in the use of opiates as a proportion of those testing positive.  Since proportions have to add up to 100%, a decrease in the proportion of positive tests that are for methamphetamine has to come with an increase in some other set of drugs — just as a matter of arithmetic.
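The arithmetic trap is easy to see with made-up numbers: shares of positive tests must sum to 100%, so one drug’s share can only fall if another’s rises, even if no one’s behaviour changed at all:

```python
# Invented shares of positive tests, before and after
before = {"methamphetamine": 0.40, "opiates": 0.20, "other": 0.40}
after  = {"methamphetamine": 0.33, "opiates": 0.27, "other": 0.40}

# Shares must sum to 100%, so meth's 7-point fall necessarily shows
# up as a rise somewhere else -- here, opiates -- by arithmetic alone,
# even though underlying drug use need not have changed.
assert abs(sum(before.values()) - 1.0) < 1e-9
assert abs(sum(after.values()) - 1.0) < 1e-9
```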

Stuff’s story from January is just as bad, with the lead

Employers are becoming more aware of the dangers of drugs and alcohol in the workplace as well as the benefits of testing for them.

and quoting an employer as saying

“And, we have no fear of an employee turning up to work and operating in an unsafe way, putting themselves and others at risk.”

as if occasional drug tests were the answer to all occupational health and safety problems.

The other interesting thing about the Stuff story is that it’s about a different organisation: Drug Testing Services, not NZ DDA — there’s more than one of them out there! You might easily have thought from the 3News story that the figures they quoted referred to all workplace drug tests in NZ, rather than just those sold by one company.

Given the claims being made, the evidence for either financial or safety benefits is amazingly weak.  No-one in these stories even claims that introducing testing has actually reduced on-the-job accidents in their company, for example, let alone presents any data.

If you look on PubMed, the database of published medical research, there are lots of papers on new testing methods and reproducibility of test results, and a few that show people who have accidents are more likely than others to test positive.  There’s very little even of before-after comparisons: a Cochrane review on this topic found three before-after comparisons. Two of the three found a small decrease in accident rates immediately after introducing testing; the third did not.  A different two of the three found that the long-term decreasing trend in injuries got faster after introducing testing; again, the third did not.   The review concluded that there was insufficient evidence to recommend for or against testing.

There’s better evidence for mandatory alcohol testing of truck drivers, but since those tests measure current blood alcohol concentrations, not past use, it doesn’t tell us much about other types of drug testing.

March 8, 2013

Eat bacon and die

The Herald, under the arguably-overstated headline “Eating processed meats could cut your life short”, have the reasonable lead

A diet packed with sausages, ham, bacon and other processed meats appears to be linked to an increased risk of dying young, a study of half a million people across Europe suggests.

The main problem with the summaries of risk reported in the story is that they are for the people who eat the highest amounts of processed meat.  It’s notable that nowhere in the Herald story do they tell you how high this consumption level was, either as a fraction of the participants or as a weight or number of servings. (3News did better.)

It’s probably true that you would have lower risk if you ate less processed meat than this highest-consumption group, but you probably already do — they were the top half a percent of the 450,000 participants, and they averaged more than 160g per day, or 1.1kg per week.

There are also problems with how the statistics get translated into deaths.  The study estimated hazard ratios, which compare the rates of death at high and low levels of processed meat consumption; the researchers then try to turn these into proportions of deaths. The Herald quotes a study researcher as saying

“Overall, we estimate that 3 per cent of premature deaths each year could be prevented if people ate less than 20 grams of processed meat per day.”

This should get the response “define ‘premature'”, but it’s actually more carefully phrased than in the research paper, which says

We estimated that 3.3% (95% CI 1.5% to 5.0%) of deaths could be prevented if all participants had a processed meat consumption of less than 20 g/day.

suggesting that 3.3% of vegetarians would be immortal.
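For what it’s worth, the ‘3.3% of deaths’ figure is a population attributable fraction. A sketch of how such a number is computed, with a consumption distribution I’ve invented purely for illustration (the real study used its own consumption data):

```python
hr_per_50g = 1.18   # hazard ratio per extra 50 g/day, from the study

# Consumption bands (grams/day above the 20 g baseline) and the share
# of people in each band -- invented numbers, for illustration only
bands = [(0, 0.40), (30, 0.35), (80, 0.20), (150, 0.05)]

# Population attributable fraction:
#   sum p*(RR - 1) / (1 + sum p*(RR - 1))
# where the relative risk in each band scales the hazard ratio by
# how many 50 g increments the band sits above the baseline.
excess = sum(p * (hr_per_50g ** (g / 50) - 1) for g, p in bands)
paf = excess / (1 + excess)   # fraction of deaths "attributable"
```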

Turning hazard ratios into information about life expectancy or premature death is tricky.  David Spiegelhalter’s microlives are useful here. The study estimates a hazard ratio of 1.18 for 50g extra per day of processed meat.  If that really is due to the meat, not to other differences in health risk,  and if it really is approximately constant across all types of processed meat, it corresponds to about 2 microlives per 50g — about an hour of life per serving, or about the same as four cigarettes.
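Spiegelhalter’s rule of thumb is that a sustained daily exposure with a hazard ratio of about 1.1 costs roughly one microlife (30 minutes of life expectancy) per day. Since hazard ratios multiply, the ‘about 2 microlives per 50g’ figure can be roughly reproduced as:

```python
import math

MINUTES_PER_MICROLIFE = 30   # Spiegelhalter's unit: 30 minutes of life

hr_per_50g = 1.18            # hazard ratio from the study
hr_per_microlife = 1.10      # rule of thumb: sustained daily HR ~1.1
                             # costs about one microlife per day

# Hazard ratios multiply, so count how many factors of 1.1 fit in 1.18
microlives = math.log(hr_per_50g) / math.log(hr_per_microlife)  # ~1.7
minutes_lost = microlives * MINUTES_PER_MICROLIFE               # ~52
```

About 52 minutes per 50g serving, i.e. roughly an hour, matching the figure above.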

There are reasons to be a bit skeptical about the magnitude of the results: the study didn’t find any evidence of higher risk in people who eat a lot of red meat, contradicting previous studies.  Also, the analysis used statistical techniques to correct for measurement error in meat consumption, but not in any of the other risk factors they analysed.  If people with high processed meat consumption are also at higher risk in other ways (which they are), this analysis will tend to shift the apparent risk towards processed meat.

Still, I shouldn’t think anyone is really surprised that bacon’s not a health food.

March 7, 2013

Briefly

  • From the frozen north: the most pointless bar graph I’ve seen in a long time.


  • A website with interviews in data science and analytics, currently featuring UoA graduate Hadley Wickham, in his role as Chief Scientist of RStudio.


  • From the Herald, a successful HRC-funded randomised trial of an NZ-invented inhaler for asthma.  They don’t link to the paper and editorial (which are not in ‘the prestigious Lancet medical journal’, but in the perfectly respectable Lancet Respiratory Medicine journal)


  • The US Census Bureau has released data on commute times, collected in the American Community Survey.  The Census Bureau has an infographic (sigh), but since the data are available, other people can do better, in this case the New York public radio station WNYC (via)


March 6, 2013

Twitter is not a random sample

From Stuff,

If you’ve ever viewed Twitter as a gauge of public opinion, a weathervane marking the mood of the masses, you are very much mistaken.

That is the rather surprising finding of a new US study, which suggests the microblog zeitgeist differs markedly from mainstream public opinion.

Apart from being completely unsurprising, this is a useful thing to have data on.  The Pew Charitable Trusts, who do a lot of surveys, compared actual opinion polls to tweet summaries for some major political and social issues in the US, and found they didn’t agree.

Along the same lines, it was reported last month that Google’s Flu Trends overestimated the number of flu cases this year (after having initially underestimated the H1N1 pandemic), probably because the high level of publicity for the flu vaccine this year made people more aware.

These data summaries can be very useful, because they are much less expensive and give much more detail in space and time than traditional data collection, but they are also sensitive to changes in online behaviour. Getting anything accurate out of them requires calibration to ‘ground truth’, as a previous generation of Big Data systems called it.

March 4, 2013

Successful randomized trial of diet

As I have previously observed, there are far too many clever ideas and far too few actual evaluations of effectiveness in diet research, so it’s great to see something that has actually worked.

Researchers in Spain randomized 7500 people at high risk of cardiovascular disease to be told to follow a Mediterranean diet with extra olive oil, a Mediterranean diet with extra nuts, or just to get standard background dietary advice.  The trial was stopped early, after about five years’ followup, because the two Mediterranean diet groups had a substantially lower rate of major cardiovascular events.  The relative risk reduction was about 30%, and the absolute risk reduction about 1 percentage point.
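Those two numbers fit together: a 30% relative reduction alongside a 1-percentage-point absolute reduction implies a control-group event rate of about 3.3% over the trial, and a ‘number needed to treat’ of about 100. The control rate here is back-calculated from the post’s figures, not taken from the paper:

```python
relative_risk_reduction = 0.30   # "about 30%", as reported
absolute_risk_reduction = 0.01   # "about 1 percentage point", as reported

# Implied control-arm event rate over the ~5 years of follow-up
control_rate = absolute_risk_reduction / relative_risk_reduction  # ~3.3%

# Number needed to treat: people on the diet per major event avoided
nnt = 1 / absolute_risk_reduction   # ~100
```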

Getting people to adopt a Mediterranean diet may be easier in Spain, so it would be good to have similar results for other recommended diets, eg, based on south-east Asian food.

(via Simply Statistics)

February 28, 2013

Unclear on the concept

The whole point of the Alltrials.net campaign is to prevent selective publication of clinical trial results.  The problem is that drug companies (and everyone else) publish only about half of their trials and are more likely to publish results if they are positive, distorting the available evidence. The only fix is not to let them be selective.

Roche has responded to the campaign by saying it will set up a panel to approve requests for access to anonymised patient data.  That’s nice, and it will be helpful for certain types of research, but it completely misses the point of the AllTrials campaign.

As Tracey Brown, of the British organisation Sense about Science, comments: “Which bit of All and Trials do they not understand?”