Posts written by Thomas Lumley (1857)


Thomas Lumley (@tslumley) is Professor of Biostatistics at the University of Auckland. His research interests include semiparametric models, survey sampling, statistical computing, foundations of statistics, and whatever methodological problems his medical collaborators come up with. He also blogs at Biased and Inefficient.

September 25, 2016

Briefly

  • A post from Minding Data looking at the proportion of syndicated stories in the Herald.  I’m not sure about the definition — some stories are edited here, and it’s not clear what it takes to not have an attribution to another paper.
  • On measuring the right numbers, from Matt Levine at Bloomberg View: “The infamous number is that 5,300 Wells Fargo employees were fired for setting up fake customer accounts to meet sales quotas, but it is important — and difficult — to try to put that number in context. For instance: How many employees were fired for not meeting sales quotas because they didn’t set up fake accounts?”
  • Data Visualisation: how maps have shown elevation, from National Geographic — including why maps of European mountains are lit from the northwest, rather than from somewhere the sun might be. (via Evelyn Lamb)
  • I was unimpressed when the authors of an unconvincing paper on GMO dangers used a ‘close-hold embargo’ — allowing journalists an advance look only if they promised not to get any expert input for their stories. It’s not any better when the FDA does it.
September 18, 2016

Yo mamma so smart

Q: Did you see intelligence is inherited just from mothers?

A: Yeah, nah.

Q: No, seriously. It’s in Stuff. “Recent scientific research suggests that rather than intelligence being genetically inherited from both their parents, it comes from their mother.”

A: I don’t think so.

Q: You’re objecting to their definition of intelligence, aren’t you?

A: Not this time. For today, I’m happy to stipulate to whatever their definition is.

Q: But they have Science! The “intelligence genes originate from the X chromosome” and “Some of these affected genes work only if they come from the mother. If that same gene is inherited from the father, it is deactivated.”

A: That sounds like two different explanations grafted together.

Q: Huh?

A: Some genes are imprinted so the paternal and maternal copies work differently, but that’s got nothing to do with the X chromosome.

Q: Why not?

A: Because any given cell has only one functioning X chromosome: for men it comes from their mother; for women it’s a random choice between the ones from each parent.

Q: Ok. But are all the intelligence genes on the X chromosome?

A: No. In fact, modern studies using hundreds of thousands of genetic variants suggest that genes contributing to intelligence are everywhere on the genome.

Q: But what about the ‘recent research’?

A: What recent research? I don’t see any links.

Q: Maybe they’re in the blog post that the story mentions but doesn’t link to. Can you find it?

A: Yes.

Q: And the references?

A: Mostly in mice.

Q: But there’s one about a study in Glasgow, Scotland. In nearly 13,000 people.

A: There is, though it’s actually an analysis of the US National Longitudinal Study of Youth.  Which, strangely enough, did not recruit from Glasgow, Scotland. And less than half of the 12,686 participants ended up in the analysis.

Q: Whatever. It’s still recent research?

A: Ish. 2006.

Q: And it found mother’s intelligence was the most important predictor of child’s intelligence, though?

A: Yes, of the ones they looked at.

Q: So, more important than father’s intelligence?

A: That wasn’t one of the ones they looked at.

Q: “Wasn’t one of the ones they looked at”?

A: Nope.

Q: Ok. So is there any reason for saying intelligence genes are on the X chromosome or is it all bollocks?

A: Both.

Q: ಠ_ಠ

A: Especially before modern genomics, it was much easier to find out about the effects of genes on the X chromosome, since breaking them will often cause fairly dramatic disorders in male children.

Q: So it’s not that more intelligence-related genes are on the X chromosome, just that we know more about them?

A: That could easily be the case. And just because a gene affects intelligence when it’s broken doesn’t necessarily mean small variations in it affect normal intelligence.

Q: But wouldn’t it be great if we could show those pretentious ‘genius’ sperm-donor organisations were all useless wankers?

A: On the other hand, we don’t need more reasons to blame mothers for their kids’ health and wellbeing.

September 17, 2016

Local polls

Since we have another episode of democracy coming on, there are starting to be more stories about polls for me to talk about.

First, the term “bogus”.  Two people, at least one of whom should have known better, have described poll results they don’t like as “bogus” recently. Andrew Little used the term about a One News/Colmar Brunton poll, and Nick Leggett said “If you want the definition of a bogus poll this is it” about results from Community Engagement Ltd.

As one of the primary NZ users of the term ‘bogus poll’ I want it to mean something. Bogus polls are polls that aren’t doing anything to get the right answer. For example, in the same Dominion Post story, Jo Coughlan mentioned

“…two independent Fairfax online Stuff polls of 16,000 and 3200 respondents showing me a clear winner on 35 per cent and 50 per cent respectively.”

Those are bogus polls.

So, what about the two Wellington polls cited as support for the candidates who sponsored them? Curia gives more detail than the Dominion Post.  The results differ by more than the internal margin of error, which will be partly because the target populations are different (‘likely voter’ vs ‘eligible’), and partly because the usual difficulties of sampling are made worse by trying to restrict to Wellington.
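The quoted margin of error is just the usual simple-random-sampling calculation for a proportion. A minimal sketch, with an invented sample size and support figure rather than numbers from either poll:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# Invented illustration: a candidate on 35% support in a poll of 500 people
# has a margin of error of roughly 4 percentage points either way.
print(round(100 * margin_of_error(0.35, 500), 1))
```

The ‘maximum margin of error’ usually quoted with a poll is this value at p = 0.5; real polls also have extra uncertainty from weighting and non-response, so the true margin is larger.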

It wouldn’t be unreasonable to downweight the poll from Community Engagement Ltd just because they seem to be a new company, but the polls agree the vote will go to preferences. That’s when things get tricky.

Local elections in NZ use Single Transferable Vote, so second and later preferences can matter a lot.  It’s hard to do good polling in STV elections even in places like Australia where there’s high turnout and almost everything really depends on the ‘two-party preferred’ vote — whether you rank Labor above or below the L/NP coalition.  It’s really hard when you have more than two plausible candidates, and a lot of ‘undecided’ voters, and a really low expected turnout.

With first-past-the-post voting the sort of posturing the candidates are doing would be important — you need to convince your potential supporters that they won’t be wasting their vote.  With STV, votes for minor candidates aren’t wasted and you should typically just vote your actual preferences, and if you don’t understand how this works (or if you think you do and are wrong) you should go read Graeme Edgeler on how to vote STV.
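In the single-winner case (a mayoralty, say), STV reduces to instant-runoff voting: repeatedly eliminate the lowest-polling candidate and transfer those ballots to each voter’s next surviving preference. A minimal sketch with invented ballots, not real polling data:

```python
from collections import Counter

def irv_winner(ballots):
    """Single-winner STV (instant runoff): drop the lowest-polling candidate
    each round and transfer ballots to the next surviving preference."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot for its highest-ranked surviving candidate.
        tally = Counter(next(c for c in ballot if c in candidates)
                        for ballot in ballots
                        if any(c in candidates for c in ballot))
        leader, votes = tally.most_common(1)[0]
        if 2 * votes > sum(tally.values()):
            return leader
        candidates.remove(min(tally, key=tally.get))

# Invented ballots: A leads on first preferences, but B wins after C is
# eliminated and C's supporters' second preferences are transferred.
ballots = [("A",)] * 4 + [("B", "C")] * 3 + [("C", "B")] * 2
print(irv_winner(ballots))  # prints B
```

This is why a vote for a minor candidate isn’t wasted: if your first choice is eliminated, your ballot still counts for your next preference.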

September 15, 2016

Briefly

  • From Cardiogram: using the Apple Watch to diagnose abnormal heart rhythms
  • From MIT Technology Review, an analysis of emotional patterns in fiction. “We find a set of six core trajectories which form the building blocks of complex narratives”. They don’t really cover the possibility that they find six just because that’s as many as they can align neatly with their current approach.
  • From Hilda Bastian: The quality of a research study is rarely uniformly good across all the things it studies (though it could be uniformly bogus)
  • On diagnosing depression from Instagram photos: “But they’ve buried the real story. The depression rate among adults in the United States is 6.7%. The depression rate among the crowdsourced workers who shared their photos is 41.2%” (Medium)
September 14, 2016

Why links matter

I wrote last week about the importance of links.  Having links doesn’t guarantee the claims are justified, but it does make it a lot easier to check.  As an exhibit, consider today’s Stuff story about “Healing Foods for Spring Allergies”, which has lots of links.

  • Garlic, we are told, “has incredible antibiotic properties that can help clear mucous and fight infection.” There are two links. One is to a review article that summarises a lot of lab studies of allicin, a chemical in garlic.  These studies did not involve anyone, even mice, eating garlic — the chemical was applied directly to bacteria growing in Petri dishes.  The other is a review of a much wider set of studies of garlic and garlic extracts. Again, the studies are almost all in vitro, that is, “in lab glassware”.  This one contains the sentence “However, claims of effectiveness of garlic on common cold appear to rely largely on poor quality evidence”. And there’s nothing that even purports to be slightly related to “spring allergies”.
  • Vitamin C is supposed to be an “effective anti-histamine”.  The link to “clinical trials” refers to two studies in 1992, actually in people and published in real journals — though not available online through the journals’ websites.  However, the abstract of one of them is easily available online. It isn’t a clinical trial — it’s an uncontrolled before-after experimental study in healthy people — and they didn’t find the effect of vitamin C they were actually looking for. The other one was harder to find, and I don’t have an open-access source. It also wasn’t a clinical trial: it was an experiment in nine healthy university students.  They started off getting 500mg/day of vitamin C — a level that could be achieved by a well-chosen diet — and then were escalated to 2000mg/day. There was no effect on blood histamine levels from 500mg/day, but a 40% lowering with 2000mg/day.  On the other hand: “At the end of the third week of the study, two participants withdrew complaining of chronic diarrhea. Another reported diarrhea but completed the study. Osmotic diarrhea is considered the only major side effect of taking large doses of vitamin C (15). After several days at the 2,000-mg dosage level, two participants complained of dry nose and nosebleeds. Their dosage level was reduced to 1,000 mg daily and they completed the study without further complications. According to a document (JPI-HS-103-2) prepared in May 1991 by Janssen Pharmaceuticals of Piscataway, NJ, dryness of the eye, ear, nose, and throat is a common side effect of antihistamine drugs.” And, again, they didn’t even look at allergy symptoms.
  • “If you suffer from sinus inflammation, pineapple is the fruit for you.”  The link is to a review of studies of pineapple extract for arthritis. Sinus inflammation is not mentioned.  Even for arthritis, the article says “The data available at present indicate the need for trials to establish the efficacy and optimum dosage for bromelain and the need for adequate prospective adverse event monitoring in such chronic conditions as osteoarthritis.”
  • “Gargling salt water has recently been shown to help prevent upper respiratory infections in healthy people.” That’s quite a good study, but it compared plain water to water with povidone/iodine antiseptic.  It’s quite plausible that salt water also works, but the research didn’t show it.
  • “Chicken soup is a wives’ tale with substance!”.  There are two links. One is to Mercola.com, and so isn’t going to be useful.  The other is to an in vitro study of mixing chicken soup extracts with neutrophils, a particular type of immune-related white blood cell.  It might be relevant if you were going to stick the chicken soup up your nose instead of drinking it.

There’s nothing wrong with these foods from a health point of view. I like chicken soup, although I prefer it with lemongrass, lime, and chili. But you’d expect the links to be to the strongest evidence available. And the disappointing thing is, they might well be.

September 13, 2016

Moon and earthquakes

So, there’s a new analysis in Nature Geoscience, with a story on Radio NZ, about how some earthquakes do appear to be linked to tides.

There are a few things to note, though. First, the correlation is found only for the very largest quakes (in their analysis, they found it for magnitudes 8.2 or higher, and possibly for magnitudes 7.5 and higher). Second, they needed to look not just at the position of the moon, but at the orientation of the tidal stress relative to the fault line that slipped. Third, the correlation isn’t anywhere near big enough to use for predictions.
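This isn’t the paper’s actual analysis, but to give the flavour of the kind of check involved: if tidal stress were irrelevant, large quakes should land in (say) the top-30% tidal-stress window about 30% of the time, and an observed excess can be compared with a binomial distribution. A sketch with invented counts:

```python
import math

def binom_tail(k, n, p):
    """One-sided tail probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Invented counts: suppose 12 of 18 magnitude-8+ quakes occurred during the
# top-30% tidal-stress window; chance alone would put only 5 or 6 of them there.
print(binom_tail(12, 18, 0.3))  # a very small probability under 'tides don't matter'
```

Even a convincingly small probability here is evidence of an association, not a useful prediction rule, which is the third point above.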


September 10, 2016

You’ve got to be carefully taught

From the Guardian

The first international beauty contest judged by “machines” was supposed to use objective factors such as facial symmetry and wrinkles to identify the most attractive contestants. … But when the results came in, the creators were dismayed to see that there was a glaring factor linking the winners: the robots did not like people with dark skin.

The way statistical ‘supervised learning’ algorithms work is that you give the algorithm a lot of measurements, and a training set of ‘correctly’ classified examples. It tries to find the best way to get the ‘correct’ answers from the measurements, in an objective, unprejudiced way.

The algorithm doesn’t know or care about race. There’s nothing in the maths about skin colour or eye shape. But if it never saw a ‘beautiful’ label on anyone who looked like Zoë Kravitz, it’s not going to spontaneously generalise its idea of beauty.
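To make that concrete, here’s a toy sketch: invented data, a deliberately crude numeric stand-in for skin tone, and a bare-bones nearest-neighbour classifier rather than whatever the contest organisers actually used:

```python
import random

random.seed(1)

# Each "contestant" is a pair of numbers: a crude stand-in for skin tone,
# and the facial symmetry the organisers think they are scoring.
def contestant(group):
    tone = 0.2 if group == "dark" else 0.8
    return (tone + random.gauss(0, 0.05), random.gauss(0.5, 0.2))

# Training set: the 'beautiful' label (1) was only ever applied to
# light-skinned examples, so the label is confounded with skin tone.
train = ([(contestant("light"), 1) for _ in range(50)] +
         [(contestant("dark"), 0) for _ in range(50)])

def nearest_label(x):
    """1-nearest-neighbour: there is no variable called 'race' anywhere here."""
    return min(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))[1]

# A dark-skinned contestant with very high symmetry is still labelled 0,
# because nothing like them was ever labelled 'beautiful' in training.
print(nearest_label((0.2, 0.95)))  # prints 0
```

The maths never mentions race; the classifier just reproduces whatever pattern of labels it was trained on.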

We’re lucky people learn in a much more sophisticated way than computers, aren’t we?


Briefly

  • From Vox: Amazon is a giant purveyor of medical quackery
  • From Joel Eastwood on Twitter: different approaches to balancing geographic accuracy and number of voters in maps of the US (the ‘one cow, one vote’ problem, as I like to call it)
  • From Matt Levine at Bloomberg View: “Measurement is sort of an evil genie: It grants your wishes, but it takes them just a bit too literally. “
  • From 538: Is A 50-State Poll As Good As 50 State Polls?
September 8, 2016

For small values of ‘infinitely’

Q: Did you see “women are infinitely more likely than men to suffer from virtually every form of headache there is”?

A: What would that even mean?

Q: Well, it says ‘statistically’, so you should know.

A: No.

Q: Could it be like ‘literally’ and it just means ‘a lot’. Like ‘five times as much’ or something?

A: I suppose, but it still isn’t true.

Q: What does the story say?

A: “a review of 24 worldwide studies published in the journal Headache in 2011 found that while more than half of women polled (52 per cent) reported having a problem with headaches at the time of the research, only 37 per cent of men did.”

Q: And is that right?

A: My English teachers would say they’re missing commas after “studies” and “2011”, but I might not have the high ground in that kind of debate.

Q: Restricting to statistical pedantry, is it right?

A: Well, the first step would be to find the paper, which they don’t make all that easy.

Q: You mean they don’t link?

A: That’s part of it, as Mark Hanna eloquently complained on Twitter. But they also don’t give the author names or the month.

Q: Did you find it?

A: Yes.

Q: And?

A: Here’s the relevant graph
[Graph from the paper: prevalence of ‘current’ headache for men, women, and everyone, by region]

Q: Wait, what? More than 80% of men and women in North America have headaches right now? But only about 50% of people in total?

A: Looking up the reference that little 1 points to, it seems ‘current’ headache really means ‘in the past three months’, or even ‘in the past year’, depending on the study.

Q: And the total being much less than either men or women separately?

A: ¯\_(ツ)_/¯  If I had to guess, maybe the studies that separated out men and women used a longer time period.

Q: And is the time period why there are more headaches in North America than Europe?

A: Could be. Or the quality of the cheese. Or the fact that they’re in an election campaign, like, half the time.

Q: I’m sensing you don’t like this graph.

A: There are others
[Graph from the story: headache prevalence curves for women and men, with the percentage labels placed on the lines]

Q:  Right. 7.4% in women is less than 6.4% in men. ಠ_ಠ

A: Actually, the lines are ok, it’s just that the numbers are in the wrong places. If they’d written the numbers on the y-axis like we’ve been doing for centuries, they’d be ok.

Q: Ok. But we nearly digress. You’re saying ‘infinitely more’ means something in the range 1.5 times to three or four times more?

A: And that some of the comparisons are a bit dodgy.

Q: It seems to have taken more work than necessary to establish that.

A: It sure has.

Q: Would you like to quote some of Mark Hanna’s tweets on linking?

A: “Listen up, reporters & editors. Every day I see you publish articles using “studies say” that make it hard or impossible to find the studies. You should be improving public understanding of science, but instead you are training your audience to believe in “studies say”. 

Yes, do write about research. But let your readers read it too. Being able to criticise research is, erm, critical to scientific literacy. You are reinforcing the perception that science is opaque, impenetrable, and not for the eyes of laypeople. But that’s not true at all.

Worse than that, by training your readers to believe in “studies say” you are priming them to be fooled by pseudoscience.”

Q: Yes, that’ll preach.

Theory and data

From the Herald (from the Daily Telegraph)

A revolutionary blood test, which acts like a smoke detector to spot cancer up to 10 years before symptoms appear, could be available within five years.

It looks like this is genuinely impressive research, and deserves its spot at the British Science Festival, but it’s harder to assess the realism of the claims. What do we actually know now? Well, less than we should, because the claim is based on a press release and interviews about unpublished research. However, earlier research by the same group is available, with a bit of detective work.

In a conference abstract published in February, they report what they were trying to do: measure mutations in a specific gene in red blood cells. As the Herald story says:

Scientists at Swansea University have discovered that mutations occur in red blood cells way before any signs of cancer are evident.

But it’s more than that. Mutations in red blood cells occur before cancer even exists — another reason this test is potentially useful is for studying low levels of mutations that would have a very low chance of leading to cancer, so that the risk of realistic doses of potential carcinogens can be assessed. Since the test picks up mutations in the absence of cancer, there’s justification for worrying about false positives.

In the February abstract they had used the test on 121 people, and were claiming five-times-higher mutation rates in people with cancer than in healthy people. Now they have 300 people and are claiming ten-times-higher rates — one possible explanation is that they’ve made the test more selective somehow and so are picking up fewer uninteresting mutations.  In any case, progress. The earlier data didn’t look as if it could support a useful test; the new data might be able to.
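To see why the false-positive rate matters so much for a screening test, here’s a back-of-the-envelope sketch with entirely made-up numbers for sensitivity, false-positive rate, and ten-year risk (none of these have been reported for the new test):

```python
def positive_predictive_value(sensitivity, false_positive_rate, prevalence):
    """P(cancer | positive test), from Bayes' rule."""
    true_positives = sensitivity * prevalence
    false_positives = false_positive_rate * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Made-up illustration: a test that catches 90% of cancers, with a 5%
# false-positive rate, screening for a cancer with a ten-year risk of 0.5%.
print(round(positive_predictive_value(0.90, 0.05, 0.005), 3))  # about 0.08
```

Under those assumptions fewer than one positive result in ten would correspond to a real cancer, which is the standard problem with screening for rare diseases.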

We still don’t know about the false-positive rate — with 300 people tested, it’s too early to say.  The false-positive rate is important for another reason, though.  The Independent has another story, quoting the lead researcher:

Professor Jenkins said they needed to find evidence that it would work for other cancers, but added it would be hard to imagine that it would not.

“It would be really difficult to think why it would only affect oesophageal cancer,” he said.

As he says, it’s hard to think why oesophageal cancer would be unique — though you might expect some cancers to be different. For example, in cervical cancer, the mutations are caused by a virus that only infects certain cell types, so it might not cause mutations that show up in red blood cells.  But if we assume many cancers show the same pattern of red blood cell mutations, assessing the usefulness of the test gets more difficult. Suppose a positive result means you’re going to get some type of cancer over the next ten years, but it could be almost any type. What would the next step be?

There’s another important point in the first sentence of the Herald story. It contains two numbers. One is bigger than the other.  As far as I can tell, this test is done on freshly-collected blood, and hasn’t been done on large numbers of healthy people yet. If the test is available within five years, it will, at best, only come with reliable information for five years after testing.