February 2, 2017

Eat more kale?

From the Mail, via the Herald

Eating nuts, kale and avocado could help protect women from suffering a miscarriage, new research suggests.

Being deficient in vitamin E starves an embryo of vital energy and nutrients it needs to grow, scientists have found.

There’s a sense in which this is true. But only a weak one.  Here’s the first sentence of the research paper (via Mark Hanna):

Vitamin E (α-tocopherol, VitE) was discovered in 1922 because it prevented embryonic mortality in rats, but the involved mechanisms remain unknown 

That is, it’s been known since vitamin E was discovered 95 years ago that severe deficiency causes miscarriage in rats. In fact, the chemical name ‘tocopherol’ comes from Greek words meaning, basically, “to carry a pregnancy.” This isn’t new.  The new research was a study of severe deficiency in little tropical fish, so it wouldn’t be an improvement over rats from the point of view of a public health message.  And the research paper doesn’t try to say anything about avocados and kale for preventing miscarriage; it’s about clarifying what goes wrong with the embryos at a biochemical level.

The dietary-advice question would be whether it’s common for women to have low enough levels of vitamin E to increase miscarriage risk, and if so whether nuts, kale, and avocado would help or whether supplements make sense as they do with folate and perhaps iodine.  Somewhat surprisingly, the first published research on this question seems to be from 2014 (story, paper).  In a study in rural Bangladesh, where nearly 75% of women had vitamin E deficiency, those with low vitamin E were twice as likely to miscarry.  I don’t have data for New Zealand, but in the US less than 1% of people have vitamin E deficiency of that severity.  It doesn’t look to be a big problem. And, from the authors of the 2014 study:

Schulze says that the study may not be generalizable to higher-income nations where women of childbearing age tend to have better nutritional status.

It’s possible that slight deficiency increases miscarriage risk slightly, but there isn’t any direct evidence. And the new research doesn’t even try to address this issue.

Finally, if someone wanted to get more vitamin E, would the recommendations help? Well, according to this site, it would take 14 cups of kale a day to get up to the recommended daily intake. And we know there are problems with avocado in younger adults. So perhaps try the nuts instead.

CensusAtSchool kicks off next Tuesday

As many of you may already know, the Department of Statistics runs the magnificent, biennial CensusAtSchool TataurangaKiTeKura, a national statistics literacy programme in schools supported by the Ministry of Education and Statistics New Zealand. Students aged 9 to 18 (Year 5 to Year 13) use digital devices to answer 35 online questions in English or te reo Māori about their lives and opinions. The aim is to turn them into data detectives – and turn them on to the value of statistics in everyday life.

Teacher Priscilla Allan’s Year 9 digital maths class at Pakuranga College taking part in CensusAtSchool 2013. Photo: Stephen Barker/Barker Photography. © The University of Auckland.

The latest edition of CAS starts next Tuesday, February 7, after the Waitangi Day holiday, and we’re hoping to get more than 50,000 Kiwi students taking part, which would be a record since CAS started in Aotearoa in 2003. Registrations have been open for a few weeks and are piling in, and I can see that so far we have 780 teachers from 507 Māori-language and English-medium schools registered – and there’s also a school from the Cook Islands, Tereora College. Check out if your local school is involved here.

CAS started as a pilot programme here, in 1990, run by Sharleen Forbes. As an international educational project, it started in the UK in 2000, and now runs in the UK, New Zealand, Ireland, Australia, Canada, South Africa, Japan, and the US. Good ole NZ, still punching above its weight in stats education.

There are questions common to all the censuses so comparisons can be made, but there are locally-specific questions as well – you can see the list of questions here. This year, we’re asking students about topics such as whether they get pocket money, and how much; whether there is a limit on their screen time after school; and whether anything in their lunchbox that day had been grown at home. In each census, students also carry out practical activities such as weighing the laptops and tablets they take to school and measuring each other’s heights, as in the picture of these Pakuranga College students. From mid-June, the data will be released for teachers to use in the classroom.

As this census is the only national picture of how kids are feeling, what they’re thinking and what they’re doing, journalists love the stories that flow from the results. The publicity isn’t only fascinating – it helps raise awareness of the value of statistics to everyday life. With any luck, some of the kids who do this year’s census will end up being our statisticians of tomorrow.

February 1, 2017

Safe, but not effective

Pharmaceutical giant Eli Lilly has basically given up on its candidate Alzheimer’s treatment, solanezumab (they’re still trying for one special genetically-driven subtype).

In a Herald story in 2015, the lead was

The first drug that slows down Alzheimer’s disease could be available within three years after trials showed it prevented mental decline by a third.

That was clearly overstating the case. Previous trials had failed to find the benefits they were looking for; this report was based on hints of benefit in a new trial earlier in the disease — a reasonable hope, but nothing like good evidence.

Last year, the company changed the ‘primary endpoint’ of the trial — the definition of what they were hoping to find. That’s usually not a good sign. And it wasn’t.  The company now says

there was no scientific basis to believe they would find a “meaningful benefit to patients with prodromal Alzheimer’s disease.”

Alzheimer’s is an especially difficult condition to research, in part because scientists don’t have a good handle on exactly what’s going wrong. Solanezumab binds to the amyloid protein that causes plaques, enabling it to be removed. That makes a lot of sense as an approach — it wasn’t successful this time. Finding that out was very time-consuming and expensive, because the only way is to run large, long-term randomised trials in people with early-stage disease.

For some drugs and some conditions, you can find out about effectiveness easily: we know Sudafed works for nasal congestion, because it’s obvious. We knew penicillin worked in septicaemia and pneumococcal pneumonia, because of all the people who didn’t die. It took a lot more effort to learn that penicillin works for preventing rheumatic fever. And studies in chronic, slow-moving diseases are far harder than that.

Solanezumab is safe, unlike some previous drugs with similar mechanisms.  It’s an example of a treatment for a serious disease that would have been available years ago if it weren’t for FDA regulation. Millions of people with early dementia could have bought it and used it. It still wouldn’t work.


Briefly

  • Maps as a research communication tool: a research project into the relationship between rateable value and sales value for homes in Milwaukee, and who ends up overpaying their rates.
  • What happens when you make a major change to the definition of an important official statistic.
  • One of the minor aspects of Donald Trump’s awful executive order is the collection and reporting of crimes committed by immigrants. Whether this was a mistake for him depends on how good people are at denominators: immigrants commit less crime on average than people born in the US, and there are fewer immigrants than people think there are, so the number will be smaller than people should expect. But that takes maths. Or in the US, math.
January 30, 2017

Stat of the Week Competition: January 28 – February 3 2017

Each week, we would like to invite readers of Stats Chat to submit nominations for our Stat of the Week competition and be in with the chance to win an iTunes voucher.

Here’s how it works:

  • Anyone may add a comment on this post to nominate their Stat of the Week candidate before midday Friday February 3 2017.
  • Statistics can be bad, exemplary or fascinating.
  • The statistic must be in the NZ media during the period of January 28 – February 3 2017 inclusive.
  • Quote the statistic, when and where it was published and tell us why it should be our Stat of the Week.

Next Monday at midday we’ll announce the winner of this week’s Stat of the Week competition, and start a new one.

(more…)

January 28, 2017

Charms to soothe the savage beast

Q: Did you see dogs prefer reggae and soft rock?

A: Not rap?

Q: Rap? You mean because of the human voices? Or because of Snoop Dogg?

A: Um. Yes. Voices. Definitely the voices thing. Wouldn’t dream of the horrible pun.

Q: Anyway, how did they find out what sort of music the dogs liked? Did they give them buttons to push, like those experiments with rats?

A: No.

Q: Did they see which speaker the dogs liked to sit near?

A: No.

Q: Can you work with me here?

A: They measured how relaxed the dogs were, by heart rate and whether they were lying down, and whether they were barking.

Q: The music they ‘liked most’ was really the music that made them lie down quietly and relax?

A: Yes.

Q: Have these people ever been teenagers?

A: To be fair, the research paper didn’t claim they were looking at preferences. That seems to be an invention of the press release.

Q: That would be the research paper that none of the stories linked, and most of them didn’t even hint at the existence of?

A: Yes, that one.

Q: So what were they really looking at?

A: The Scottish SPCA wants dogs to be quiet and relaxed (and presumptively happy) in the kennels, while they’re waiting to find a new home.

Q: And soft rock and reggae were more relaxing than rap or thrash metal?

A: They didn’t look at all musical genres, just a few.  The dogs got a week of no music and a week with a different style each day (in random order, with music from Spotify).

Q: Soft rock and reggae were better than the other ones?

A: Well, Motown seemed to increase heart rate rather than decrease it, but the others were all pretty much the same.

Q: The others?

A:  Soft Rock, Reggae, Pop, Classical, Silence

Q: Wait, what? “Silence”?

A: Yes, a day of no music was about as good as a day of relaxing music.  It looks like variety might be the key. The researchers say

Interestingly, the physiological and behavioural changes observed in this study were maintained over the 5d of auditory stimulation, suggesting that providing a variety of different genres may help minimise habituation to auditory enrichment

Q: So what they really found is that playing dogs a variety of music relaxes them?

A: Yes, but that’s not such a good headline.

January 23, 2017

Stat of the Week Competition: January 21 – 27 2017

Each week, we would like to invite readers of Stats Chat to submit nominations for our Stat of the Week competition and be in with the chance to win an iTunes voucher.

Here’s how it works:

  • Anyone may add a comment on this post to nominate their Stat of the Week candidate before midday Friday January 27 2017.
  • Statistics can be bad, exemplary or fascinating.
  • The statistic must be in the NZ media during the period of January 21 – 27 2017 inclusive.
  • Quote the statistic, when and where it was published and tell us why it should be our Stat of the Week.

Next Monday at midday we’ll announce the winner of this week’s Stat of the Week competition, and start a new one.

(more…)

January 20, 2017

Recycling

Herald story: School costs: $40,000 for ‘free’ state education

Last year’s Dom Post story: Families struggle to afford the rising cost of back-to-school requirements

Recycling last year’s StatsChat post:

So, it’s a non-random, unweighted survey, probably with a low response rate, among people signed up for an education-savings programme. You’d expect it to overestimate, but it’s not clear how much. Also

Figures have been rounded and represent the upper ranges that parents can reasonably expect to pay

It’s a real issue, but these particular numbers don’t deserve the publicity they get.

Measuring accuracy

Stuff  has a story “Scanner that can detect brain bleeds to be introduced in New Zealand.” The scanner uses infrared light to see relatively small accumulations of blood in the brain, with the aim of detecting bleeding quickly.  The traditional approach of looking at symptoms can often miss a bleed until after it’s done a lot of damage.

Accuracy is important for a device like this.  You don’t want to send lots of people off for CT scans, which are expensive and expose the patient to radiation, but you also don’t want to falsely reassure someone who really has a brain injury and who might then ignore later symptoms.

The story at Stuff claims 94% accuracy, but doesn’t say exactly what they mean by ‘accuracy’. Another story, at Gizmodo, says “A green light on the scanner gives the patient the all clear, and a red light shows a 90 per cent chance of haemorrhage.” The Gizmodo figures fit better with what’s on the manufacturer’s website, where they claim “Sensitivity = 88% / Specificity = 90.7%”.  That is, of people with (the right sort of) bleed, 88% will be detected, and of people without those bleeds, 90.7% will be cleared.

The Gizmodo story still confuses the forward and backwards probabilities. Out of 100 people with brain bleeds, 88 will get a red light on the machine. That’s not the same as their claim: that out of 100 people who get a red light on the machine, 90 have a bleed.

Suppose about 10% of the people it’s used on really do have brain bleeds. Out of an average 100 uses there would be 10 actual bleeds, 9 of whom would get a red light. There would be 90 without actual bleeds, about 9 of whom would get a red light. So the red light would only indicate about a 50% chance of a haemorrhage. That’s still pretty good, especially as it can be done rapidly and safely, but it’s not 90%.
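To make that arithmetic concrete, here’s a minimal sketch in Python. The 88% sensitivity and 90.7% specificity are the manufacturer’s figures quoted above; the 10% prevalence is only the assumption used in this example, not a real figure.

```python
# Turn sensitivity and specificity into the probability that a red light
# really means a bleed, for an assumed prevalence among people scanned.

def prob_bleed_given_red(sensitivity, specificity, prevalence):
    """P(bleed | red light), by Bayes' rule."""
    true_positives = sensitivity * prevalence                # bleeds that light up red
    false_positives = (1 - specificity) * (1 - prevalence)   # non-bleeds that light up red
    return true_positives / (true_positives + false_positives)

ppv = prob_bleed_given_red(sensitivity=0.88, specificity=0.907, prevalence=0.10)
print(round(ppv, 2))  # about 0.51 -- roughly a 50% chance, not 90%
```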

The other aspect of the story that’s not clear until you read the whole thing is what the news actually is. Based on the headline, you might think the point of the story is that someone’s started using this device in NZ, maybe in rugby or in ambulances, or is trialling it, or has at least ordered it.  But no.

No-one in New Zealand has yet got their hands on an infrascanner, but the hope is for it to be rolled out among major sporting bodies, public and private ambulance services, trauma centres and remote healthcare facilities.


January 18, 2017

Recognising te reo

Those of you on Twitter will have seen the little ‘translate this tweet’ suggestions that it puts up. If you’re from or in New Zealand you probably will have seen that reo Māori is often recognised by the algorithm as Latvian, presumably because Latvian also has long vowels indicated by macrons.   I’ve always been surprised by this, because Latvian looks so different.

It turns out I’m right.  Even looking just at individual letters, it’s very easy to distinguish the two.  I downloaded 74,000 paragraphs of Latvian Wikipedia, a total of 6.5 million letters, and looked at how long the Latvians can go without using letters that don’t appear in te reo: specifically, s, z, j, v, d, c, g not as ng, the six accented consonants, and any consonant at the end of a word. On average, I only needed to wait five letters to know the language is Latvian rather than Māori, and 99% of the time it took less than 21 letters.
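Here’s a rough sketch of that check in Python (my reconstruction from the description above, not the code actually used, and the exact letter sets are assumptions):

```python
# Count how many letters go by before we hit something that can't occur in
# te reo Māori: one of the consonants listed above, a 'g' that isn't part of
# 'ng', or a consonant at the end of a word.

TELL_LETTERS = set("szjvdcšžčņļķģ")   # listed consonants plus my guess at Latvian's accented consonants
VOWELS = set("aeiouāēīōū")

def letters_until_tell(text):
    """Letters seen up to and including the first giveaway, or None if there isn't one."""
    text = text.lower()
    count = 0
    for i, ch in enumerate(text):
        if not ch.isalpha():
            continue
        count += 1
        if ch in TELL_LETTERS:
            return count
        if ch == "g" and (i == 0 or text[i - 1] != "n"):
            return count
        # a consonant ending a word is also a giveaway
        next_ch = text[i + 1] if i + 1 < len(text) else " "
        if ch not in VOWELS and not next_ch.isalpha():
            return count
    return None

print(letters_until_tell("Latvija ir valsts Baltijā"))  # 4: the 'v' in 'Latvija'
```

Running something like this over each downloaded paragraph and averaging the counts gives figures of the kind quoted above.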

Another language that Twitter often guesses is Finnish. That makes more sense: many of the letters not used in Māori are also rare or absent in Finnish, and ‘g’ appears mostly as ‘ng’.  However, Finnish does have ‘s’, ‘ä’, ‘ö’, and ‘y’, and has words ending in consonants, so it should also be feasible to distinguish.

 

Update: Indonesian is another popular guess, but it has ‘d’, ‘j’, ‘y’, and ‘b’, and it has lots of words ending with consonants.  The average time to rule out te reo is slightly longer, at nearly 6 characters, and the 99th percentile is 22 letters.  So if the algorithm can’t tell, it should probably guess it’s not Indonesian.

Update: For very short tweets, and those in mixed languages, nothing’s going to work, but this is about tweets where the answer is obvious to a human.