Posts filed under Medical news (341)

February 27, 2013

A school-based randomised trial

From the Herald (taken from the Daily Mail)

Volunteering is good for the heart as well as the soul, researchers say.

In fact, the researchers didn’t examine either the soul or the heart, but they did look at weight, cholesterol, and biochemical measurements related to inflammation.

The study, published in the journal JAMA Pediatrics, tested 106 teenagers from Vancouver. Those involved in altruistic activities had lower levels of cholesterol and inflammation.

The paper is here, and even if you can only see the abstract, you can see the story missed an important issue. This was actually a randomised trial:

Intervention: Weekly volunteering with elementary school–aged children for 2 months vs wait-list control group.

That is, the researchers took all the students from a high school in western Canada (presumably Vancouver, though it doesn’t say).  These students are required to do some volunteer work as part of the standard curriculum, and they were randomised to do it in first or second semester.

The article doesn’t address the possibility that the volunteering might have involved an increase in exercise: even just at the level of standing up and moving around vs sitting in front of a screen.  Also, as the researchers admit, this is a very small study, intended as a pilot for larger-scale research, and they may just have been lucky.  It’s still interesting to see the reductions in cholesterol and biochemical markers of inflammation.

February 22, 2013

Drug safety is hard

There are new reports, according to the Herald, that synthetic cannabinoids are ‘associated’ with suicidal tendencies in long-term users.  One difficulty in evaluating this sort of data is the huge peak in suicide rates in young men: almost anything you can think of that might be a bad idea is more commonly done by young men than by other people, so an apparent association isn’t all that surprising.  There is also the problem of direction of causation — the sorts of problems that make suicide a risk might also increase drug use — and the difficulty of even getting a reasonable estimate of the denominator, the number of people using the drug. Serious, rare effects of a recreational drug are the hardest to be sure about, and the same is true of prescription medications.  It took big randomised trials to find out that Vioxx more than doubled your rate of heart attack, and a study of 1500 lung-cancer cases even to find the 20-fold increase in risk from smoking.
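To see why, here’s a back-of-the-envelope sample-size calculation (my sketch, not from any of these studies; the 0.75%-per-year baseline heart-attack rate is invented purely for illustration). Detecting a doubling of a rare event rate takes thousands of participants per arm:

    # Rough participants-per-arm needed to detect a doubling of a rare
    # event rate, using the standard two-proportion formula.
    from statistics import NormalDist

    def n_per_arm(p1, p2, alpha=0.05, power=0.80):
        """Approximate sample size per arm to distinguish p1 from p2."""
        z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided test
        z_b = NormalDist().inv_cdf(power)
        p_bar = (p1 + p2) / 2
        numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                     + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
        return numerator / (p1 - p2) ** 2

    # Hypothetical: heart attacks at 0.75%/year on placebo, doubled on drug.
    print(round(n_per_arm(0.0075, 0.015)))  # about 3,100 people per arm

For a recreational drug, where you can’t even count the users accurately, getting that sort of precision is essentially impossible.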

In this particular example there is additional supporting evidence. A few years back there was a lot of research into anti-cannabinoid drugs for weight loss (anti-munchies), and one of the things that sank these drugs was an increase in suicidal thoughts among patients in the early randomised trials.  It’s quite plausible that the same effect would happen as a dose of the cannabinoid wears off.

In general, though, this is the sort of effect that the proposed testing scheme for psychoactive drugs will have difficulty finding, or ruling out.

February 13, 2013

Pain and heartbreak

There’s a new paper out in PLoS Medicine that’s starting to show up in overseas news and will probably turn up here soon.  The researchers looked at painkillers (non-steroidal anti-inflammatory drugs, NSAIDS) that have been shown to increase heart disease risk, and how they are used around the world.

The background here is that NSAIDS block two related enzymes, COX-1 and COX-2.  Blocking COX-2 reduces pain and inflammation; blocking COX-1 decreases blood clotting and leads to gastrointestinal upset and potentially to ulcers.  That’s an obvious reason to look for drugs that just block COX-2, and researchers did this. The best known example is rofecoxib, or as it’s known in marketing, Vioxx.

It turns out that selectively blocking COX-2 is not as good an idea as it seemed, and leads to an increase in heart attacks, strokes, and lawsuits. Vioxx is off the market pretty much everywhere now, but there are some older NSAIDS that were popular because they caused less stomach irritation, and these also turn out to be selective blockers of COX-2. In the light of what was seen for Vioxx and Celebrex, you might expect these drugs also to increase heart attack risk, and the new paper summarises randomised trial data that shows they probably do.  The most important example is diclofenac, which is apparently the world’s most popular painkiller. It’s certainly one of the most popular in New Zealand pharmacies, where it’s called Voltaren.

The research paper estimates that diclofenac increases heart attack risk by about 40% while you’re taking it.  They don’t know how long you need to take it before the risk increases, but if the side-effect is related to blood clotting, it’s plausible that the risk is more or less immediate. The researchers argue that diclofenac should be banned, which seems a bit extreme; the increased heart attack risk is a real issue for some users, but not necessarily for others.  If you’re a 30-year-old athlete training every day, your heart attack risk is roughly zero, so increasing it by a factor of 1.4 isn’t a big deal, and the lower risk of stomach irritation might well be worth it.  On the other hand, a lot of people who take NSAIDS regularly for arthritis are at relatively high risk of heart attack and so their risk/benefit tradeoff is very much against diclofenac.
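The arithmetic is worth doing explicitly. Here’s a sketch with baseline risks that are my own rough illustrations, not numbers from the paper:

    # The same 1.4x relative risk applied to two hypothetical baselines.
    relative_risk = 1.4
    baselines = {
        "healthy 30-year-old": 0.0001,     # ~1 heart attack per 10,000/year
        "older arthritis patient": 0.02,   # ~2 per 100/year
    }
    for person, baseline in baselines.items():
        extra = baseline * (relative_risk - 1)
        print(f"{person}: {extra:.4%} extra risk per year")
    # healthy 30-year-old: 0.0040% extra risk per year   (4 per 100,000)
    # older arthritis patient: 0.8000% extra risk per year (800 per 100,000)

A 40% increase on essentially nothing is still essentially nothing; the same increase on a couple of percent per year is worth taking seriously.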

The real problem here is that no-one thought to do any thorough studies of long-term safety for these drugs when they were developed.  Even for the new COX-2 inhibitors such as rofecoxib and celecoxib, the trials that told us about cardiovascular risk were mostly being done not for safety but to look for other potential (but not actual) benefits in Alzheimer’s Disease or in colon cancer prevention.  The world’s regulatory agencies are pretty good at making sure that drugs don’t get approved unless they work, but the regulation of safety is a lot more difficult and is done a lot less well.

[ps: environmentalists may recall that diclofenac is also really bad for vultures]

[Update: now in Stuff, and the Herald]

February 10, 2013

Is cherry-picking season finally over?

Ben Goldacre writes in the New York Times about the need for all clinical trials to be published:

The Food and Drug Administration Amendments Act of 2007 is the most widely cited fix. It required that new clinical trials conducted in the United States post summaries of their results at clinicaltrials.gov within a year of completion, or face a fine of $10,000 a day. But in 2012, the British Medical Journal published the first open audit of the process, which found that four out of five trials covered by the legislation had ignored the reporting requirements. Amazingly, no fine has yet been levied.

An earlier fake fix dates from 2005, when the International Committee of Medical Journal Editors made an announcement: their members would never again publish any clinical trial unless its existence had been declared on a publicly accessible registry before the trial began. The reasoning was simple: if everyone registered their trials at the beginning, we could easily spot which results were withheld; and since everyone wants to publish in prominent academic journals, these editors had the perfect carrot. Once again, everyone assumed the problem had been fixed.

But four years later we discovered, in a paper from The Journal of the American Medical Association, that the editors had broken their promise: more than half of all trials published in leading journals still weren’t properly registered, and a quarter weren’t registered at all.

There’s a new campaign and petition at Alltrials.net, and if you’re in Auckland, you can hear Ben Goldacre in May at the Auckland Readers and Writers Festival.

February 6, 2013

The checklist: a worked example

The Herald has a story about increased stroke risk in young adults using cannabis.  Let’s run it past the JOHN HUMPHRYS checklist:

  • Just observing people: X
  • Original information unavailable: ? The abstract should be available, but I can’t find it on the conference website — it will probably be out soon. It looks as if this story may have leaked early.
  • Headline exaggerated: the headline is fine.
  • No independent comment: X.
  • Higher risk: X. Absolute risks are not given, and are extremely low in “young adults”.
  • Unjustified advice: ? The advice that cannabis smoking is probably bad for your health is justified, but not by this study.
  • Might be explained by something else: X. Tobacco, for example, is mentioned in the story but dismissed without justification.
  • Public relations puff: no real problem here.
  • Half the picture: this one’s ok, though the publication-bias issue could have been mentioned — this is the first study to find a link, but was it the first to look for one?
  • Relevance unclear: this isn’t a problem.
  • Yet another single study: X.
  • Small: X. Only 160 strokes, and only 12 or 13 in cannabis users (see the rough sketch below the list).
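To put that last item in numbers, here’s a rough sketch (mine, not the study’s) of how uncertain a count of 12 events is, using the usual Poisson approximation:

    # Rough 95% uncertainty on a count of 12 events: 1.96 * sqrt(count).
    count = 12
    half_width = 1.96 * count ** 0.5
    print(f"{count} events: 95% interval roughly "
          f"{count - half_width:.0f} to {count + half_width:.0f}")
    # -> roughly 5 to 19, so the estimated association is very unstable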

It’s not that all stories should pass all checks on the list — sometimes small observational studies can be important or at least interesting.  The problem (as with the Bechdel test) is that such a small fraction of stories pass the checks.


[Update: forgot the link originally]

February 5, 2013

A quick tongue-in-cheek checklist for assessing usefulness of media stories on risk

Do you shout at the morning radio when a story about a medical “risk” is distorted, exaggerated, or mangled out of all recognition? You are not alone. Kevin McConway and David Spiegelhalter, writing in Significance, a quarterly magazine published by the Royal Statistical Society, have come up with a checklist for scoring media stories about medical risks. Their mnemonic checklist comprises 12 items and is called the ‘John Humphrys’ scale, the said Mr Humphrys being a well-known UK radio and television presenter.


They assign one point for every ‘yes’, and try the scale out on a story about magnetic fields and asthma and another about TV and length of life. The article, called Score and Ignore: A radio listener’s guide to ignoring health stories, is here.
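For anyone who wants to play along at home, the scoring scheme is simple enough to write down (the item wording comes from the checklist in the February 6 post above; the code itself is just my toy version):

    # Toy version of the 'John Humphrys' scoring scheme.
    ITEMS = [
        "just observing people", "original information unavailable",
        "headline exaggerated", "no independent comment", "higher risk",
        "unjustified advice", "might be explained by something else",
        "public relations puff", "half the picture", "relevance unclear",
        "yet another single study", "small",
    ]

    def humphrys_score(answers):
        """One point for every 'yes'; a high score says ignore the story."""
        return sum(1 for item in ITEMS if answers.get(item))

    print(humphrys_score({"just observing people": True, "small": True}))  # 2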

Could form the basis of a useful classroom resource.

January 27, 2013

Clinical trials in India

Stuff has a story from the Sydney Morning Herald on clinical trials in India.  The basic claims are damning if true:

…clinical drug trials are at the centre of a growing controversy in India, as evidence emerges before courts and in government inquiries of patients being put onto drug trials without their knowledge or consent…

With a very few exceptions (eg some trials of emergency resuscitation techniques and some minimal-risk cluster-randomised trials of treatment delivery) it is absolutely fundamental that trial participants give informed consent. Trial protocols are supposed to be reviewed in advance to make sure that participants aren’t asked to consent to unreasonable things, but consent is still primary.  This isn’t just a technical detail, since researchers who were unclear on the importance of consent have often been bad at other aspects of research or patient care.

The Fairfax story mixes in the claimed lack of consent with other claims that are either less serious or not explained clearly. For example

Figures from the drugs controller-general show that in 2011 there were deaths during clinical trials conducted by, or on behalf of, Novartis, Quintiles, Pfizer, Bayer, Bristol-Myers Squibb, and MSD Pharmaceutical.

Of course there were deaths in clinical trials. If you are comparing two treatments for a serious illness, the trial participants will be seriously ill.  When you need to know if a new treatment reduces the risk of death, the only way to tell is to do a study large enough that some people are expected to die.  Even if improved survival isn’t directly what you’re measuring, a large trial will include people who die. In the main Women’s Health Initiative hormone replacement trial, for example, 449 women had died by the time the trial was stopped.  The question isn’t whether there were deaths, it’s whether there were deaths that wouldn’t have occurred if the trials had been done right.
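As a rough check that the number is about what you’d expect (the enrolment figure is from the published trial; the 0.5%-per-year death rate and the follow-up time are my approximations for women of that age range):

    # Order-of-magnitude check on the WHI figure quoted above:
    # ~16,600 women followed for ~5.2 years at ~0.5% mortality per year.
    expected_deaths = 16_600 * 0.005 * 5.2
    print(round(expected_deaths))  # ~430, the same ballpark as the 449 observed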

There’s also a claim that families of participants who died were not given adequate compensation as part of the trial.  If there had been consent, this wouldn’t necessarily matter. Lots of trials in developed countries don’t specifically compensate participants or relatives, and there’s actually some suspicion of those that do, because it provides another incentive to participate even if you don’t really want to.

Other sources: Times of India, Chemistry World, a couple of review articles, the Nuremberg Code


January 24, 2013

Rare disease dilemma

The Herald has a story about a new treatment for a very rare blood disorder, paroxysmal nocturnal haemoglobinuria (PNH), and the fact that Pharmac isn’t funding it.

The drug, eculizumab (brand name Soliris), is currently the world’s most expensive, at about NZ$500 000 per year. It’s also very effective.  There’s starting to be a lot of this: we now have the technology to develop specific treatments for a wider range of rare diseases, and most of the rest of that ‘most expensive’ list are replacement enzymes for rare deficiency disorders.  Another recent example is ivacaftor (brand name Kalydeco), which, in about 5% of cases of cystic fibrosis, allows the defective chloride transporter protein to work normally.  The result appears to be complete control of the disease, but at a cost of US$300 000 per year. Similar drugs for other variants of cystic fibrosis are being tested.

Funding any one of these drugs would add only a small amount to Pharmac’s total costs, because each rare disease is rare. There are only about eight people in New Zealand who would take eculizumab, which would cost only 0.5% of Pharmac’s budget; there would be about 25 who could take ivacaftor, adding up to a percent or two of the budget. The difficulty is that rare diseases collectively are not rare — the European Organization for Rare Diseases estimates that 6-8% of the European Union population have a rare disease, and applying that figure to the NZ population still gives about 270 000 people.  At $500 000 per person, Pharmac’s total budget would pay for 1500 people to get this sort of very expensive treatment.  At the moment there probably aren’t 1500 people in NZ whose rare diseases are expensively treatable, but there are a lot more than eight.
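Checking the arithmetic (the quoted figures are the ones in the paragraph above; the implied total budget is my back-calculation, and since the post rounds its numbers the results don’t match exactly):

    # Re-doing the arithmetic: NZ$500,000/patient/year, and
    # "8 patients = 0.5% of budget" implies the total budget.
    cost_per_patient = 500_000                    # NZ$ per year, eculizumab
    budget = 8 * cost_per_patient / 0.005         # NZ$800,000,000 implied
    print(f"patients the whole budget could cover: {budget / cost_per_patient:,.0f}")
    # -> 1,600, close to the post's figure of 1,500
    print(f"NZers with some rare disease: {0.06 * 4_500_000:,.0f}")
    # -> 270,000, using 6% of a ~4.5 million population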

The patient support group for people with this rare blood disorder obviously think the treatment should be funded:

The group’s founder, Auckland artist Daniel Webby, 32 – who almost died from PNH complications – said the funding process did not recognise the rights of rare-disease sufferers.

“They need to recognise that for rare diseases, [drug] development costs are higher per patient. They need to put that into their budget and make sure people get … life-saving treatments when they are available.”

I’m sure Pharmac does recognise this, but changing the national approach to subsidy of health care to give priority to ‘miracle’ treatments for rare diseases is not the sort of decision Pharmac should be making on its own, and the money shouldn’t be taken out of the current Pharmac budget (which is already on the low side).   Kiwis need to decide whether a miracle drug fund is something we want to support and are willing to pay for.


[Update: The Herald has an editorial weighing in strongly against expensive drugs even if effective.  I basically agree, but it’s a pity they don’t have the same attitude to miracle treatments that don’t work]

January 21, 2013

Journalist on science journalism

From Columbia Journalism Review (via Tony Cooper), a good long piece on science journalism by David H. Freedman (whom Google seems to confuse with statistician David A. Freedman):

What is a science journalist’s responsibility to openly question findings from highly credentialed scientists and trusted journals? There can only be one answer: The responsibility is large, and it clearly has been neglected. It’s not nearly enough to include in news reports the few mild qualifications attached to any study (“the study wasn’t large,” “the effect was modest,” “some subjects withdrew from the study partway through it”). Readers ought to be alerted, as a matter of course, to the fact that wrongness is embedded in the entire research system, and that few medical research findings ought to be considered completely reliable, regardless of the type of study, who conducted it, where it was published, or who says it’s a good study.

Worse still, health journalists are taking advantage of the wrongness problem. Presented with a range of conflicting findings for almost any interesting question, reporters are free to pick those that back up their preferred thesis—typically the exciting, controversial idea that their editors are counting on. When a reporter, for whatever reasons, wants to demonstrate that a particular type of diet works better than others—or that diets never work—there is a wealth of studies that will back him or her up, never mind all those other studies that have found exactly the opposite (or the studies can be mentioned, then explained away as “flawed”). For “balance,” just throw in a quote or two from a scientist whose opinion strays a bit from the thesis, then drown those quotes out with supportive quotes and more study findings.

I think the author is unduly negative about medical science — part of the problem is that published claims of associations are expected to have a fairly high false positive rate, and there’s not necessarily anything wrong with that as long as everyone understands the situation.  Lowering the false positive rate would require either much higher sample sizes or a much higher false negative rate, and the coordination needed to get a sample size that makes the error rate low is prohibitive in most settings (with phase III clinical trials and modern genome-wide association studies as two partial exceptions).  It’s still true that most interesting or controversial findings about nutrition are wrong, that journalists should know they are mostly wrong, and that they should write as if they know this.  Not reprinting Daily Mail stories would probably help, too.
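Here’s a sketch of that trade-off, for a two-sample z-test with a hypothetical effect of 0.2 standard deviations and 400 people per group (numbers purely for illustration):

    # For a fixed study size, a stricter false-positive threshold (smaller
    # alpha) directly raises the false-negative rate (lowers power).
    from statistics import NormalDist

    def power(n_per_group, effect=0.2, alpha=0.05):
        """Power of a two-sided two-sample z-test (effect in SD units)."""
        z_a = NormalDist().inv_cdf(1 - alpha / 2)
        se = (2 / n_per_group) ** 0.5
        return 1 - NormalDist().cdf(z_a - effect / se)

    for alpha in (0.05, 0.005):
        print(f"alpha = {alpha}: power = {power(400, alpha=alpha):.0%}")
    # alpha = 0.05:  power = 81%
    # alpha = 0.005: power = 51%, i.e. a much higher false negative rate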


January 17, 2013

Melanoma apps

Stuff has a good story about a research study looking at smartphone apps for diagnosing melanoma.  These turn out not to be very accurate: they miss quite a lot of melanomas. The story doesn’t mention the false positives, but they are just as bad: three of the four apps reported more than 60% of non-melanomas as being of concern, so you might as well cut out the middleman and get checked properly from the start.
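A rough calculation shows why (the 1% prevalence and 70% sensitivity here are my hypothetical figures; the 60% false-positive rate is from the study):

    # With a high false-positive rate, almost everyone gets sent for a
    # proper check anyway, so the app adds nothing.
    prevalence = 0.01            # hypothetical: 1% of photographed lesions
    sensitivity = 0.70           # hypothetical fraction of melanomas flagged
    false_positive_rate = 0.60   # from the study: >60% of non-melanomas flagged
    flagged = prevalence * sensitivity + (1 - prevalence) * false_positive_rate
    print(f"{flagged:.0%} of all users flagged as 'of concern'")  # ~60%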

Nitpicking: the study was done at the University of Pittsburgh, which is in Pittsburgh, not in Chicago as Stuff seems to think.  Also, the name of the journal isn’t “Online First”, it’s JAMA Dermatology.