December 11, 2014

Very like a whale

We see patterns everywhere, whether they are there or not. This gives us conspiracy theories, superstition, and homeopathy. It’s really hard to avoid drawing conclusions about patterns, even when you know they aren’t really there.

Some of the most dramatic examples are visual:

HAMLET
Do you see yonder cloud that’s almost in shape of a camel?

LORD POLONIUS
By the mass, and ’tis like a camel, indeed.

HAMLET
Methinks it is like a weasel.

LORD POLONIUS
It is backed like a weasel.

HAMLET
Or like a whale?

LORD POLONIUS
Very like a whale.

Hamlet was probably trolling, but he got away with it because seeing shapes in the clouds is a common experience.

Just as we’re primed to see causal relationships whether they are there or not, we are also primed to recognise shapes whether they are there or not. The compulsion is perhaps strongest for faces, as in this bitter melon (karela) from Reddit:

[Image: a bitter melon that looks like a face]

and this badass mop:

[Image: a mop that looks like a face]

It turns out that computers can be taught similar illusions, according to new research from the University of Wyoming. The researchers took software that had been trained to recognise certain images. They then started with random video snow or other junk patterns and made repeated random changes, keeping the ones the software rated more highly, and so evolved images that the computer would confidently recognise.

[Image: evolved images that are meaningless to humans but confidently classified by the network]

These are, in a sense, computer optical illusions. We can’t see them, but they are very convincing to a particular set of artificial neural networks.

There are two points to this. The first is that when you see a really obvious pattern it isn’t necessarily there. The second is that even if computers are trained to classify a particular set of examples accurately, they needn’t do very well on completely different sets of examples.

In this case the computer was looking for robins and pandas, but it might also have been trained to look for credit card fraud or terrorists.
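The evolutionary procedure is simple enough to sketch. Here is a toy version (my own, not the researchers’ code): a made-up scoring function stands in for the trained network, and random changes to the image are kept whenever the ‘classifier’ is at least as convinced as before.

```python
import random

# Stand-in "classifier": reports how strongly a 64-"pixel" binary image
# matches a hidden target pattern. A real attack would instead query a
# trained neural network's confidence for one class.
TARGET = [random.randint(0, 1) for _ in range(64)]

def confidence(image):
    """Fraction of pixels agreeing with the hidden pattern."""
    return sum(a == b for a, b in zip(image, TARGET)) / len(TARGET)

def evolve(steps=5000, flip_rate=0.05):
    """Start from random 'video snow' and keep any random change
    that doesn't reduce the classifier's confidence."""
    image = [random.randint(0, 1) for _ in range(64)]
    score = confidence(image)
    for _ in range(steps):
        # flip each pixel independently with a small probability...
        mutant = [1 - p if random.random() < flip_rate else p for p in image]
        new_score = confidence(mutant)
        # ...and keep the change only if the classifier likes it at least as much
        if new_score >= score:
            image, score = mutant, new_score
    return image, score

image, score = evolve()
print(round(score, 2))  # typically close to 1.0 after enough steps
```

The end result is noise to us but scores almost perfectly with the stand-in classifier, the same mismatch the researchers exploited with real networks.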

 

December 10, 2014

Briefly

Spin and manipulation in science reporting

From The Independent

“Although it is common to blame media outlets and their journalists for news perceived as exaggerated, sensationalised, or alarmist, our principal findings were that most of the inflation detected in our study did not occur de novo in the media but was already present in the text of the press release produced by academics and their establishments,” the researchers write in the BMJ.

The study seems to be arguing that press offices are to blame for the spin, not journalists.

Ed Yong, a well-known freelance journalist and science writer, interpreted it differently on Twitter

Blame is not a zero-sum game. If exaggerations or inaccuracies end up in science/health reporting, then the journalist should always take 100% of the blame, even if the errors originated with scientists or press releases. Errors can arise anywhere; they are meant to end with us. We are meant to be bullshit filters. That is our job

It can be a hard job, with many systemic factors—editorial demands, time pressures, lack of expertise—that stop us from doing it properly. These are reasons for empathy, but they change nothing. If we publish misleading information, and try to apportion blame to our sources, we implicitly admit that we are mere stenographers—and thus useless. If we claim to matter, we must take blame.

I’d agree the blame isn’t zero-sum, and I think the scientists also deserve a lot of it.  Ben Goldacre has previously suggested that press releases should bear the name of one of the researchers as a responsible person and should appear in the journal next to the paper (easy for online journals).

In a way, the press offices of the universities and journals are the only people not really at fault, even if most of the spin originates there. They are the only people involved without a professional responsibility for getting the story accurate and in proportion. Making lemonade out of lemons is their job.

I would link to the paper and to Ben Goldacre’s commentary in the BMJ, but it isn’t available yet. You can read the Science Media Centre notes, which are based on the actual paper. The journal seems to have timed its media information releases so that there is plenty of media commentary and online discussion without the facts from the research being available.

The irony, it burns.

 

[Update: research paper is now available]

[Further update: and the research paper puts the blame more clearly on the researchers than the story in the Independent does — see comments]

Not net tax

A recurring bad statistic 

But Finance Minister Bill English told Morning Report that was not the answer, and half of all New Zealand households pay no net tax at all.

In some ways this is an improvement over one of the other versions of the statistic, where it’s all households with income under $110,000 who collectively paid no net tax. It’s still misleading. It seems to be modelled on the similar figure for the US, but the NZ version is less accurate. On the other hand, the NZ version is less pernicious — unlike Mitt Romney, Bill English isn’t saying the 50% are lazy and irresponsible.

In the US figure, ‘net tax’ meant ‘net federal income tax’, ie, federal income tax minus the subset of benefits that are delivered through the tax system.  In New Zealand, the figure appears to mean national income tax minus benefits delivered through the tax system (eg Working For Families tax credits) and also minus cash benefits delivered by other means.  In both cases, though, the big problem is the taxes that aren’t included.  In New Zealand, that’s GST.

The median household income in New Zealand is about $68,000. If we assume Mr English has done his sums correctly, this is where the ‘net tax’ starts (though the original version of the claim was 43% rather than ‘half’, which would push the cutpoint down to $50,000).  Suppose the household is paying 30% of income on housing (higher than the national average), which is GST-exempt, and that they’re saving 3%, eg, through Kiwisaver (also higher than the national average). By assumption, they get back what they pay in income tax, so they spend the rest. GST on what they spend is $6834: their tax rate net of transfers is about 10%. To get a negative “net tax” you need to include some things that aren’t taxes and leave out some things that are taxes.
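Under those assumptions (housing GST-exempt, savings untaxed, the rest spent at GST-exclusive prices) the arithmetic checks out:

```python
income = 68_000           # median NZ household income
housing = 0.30 * income   # GST-exempt housing costs
saving = 0.03 * income    # e.g. KiwiSaver
spending = income - housing - saving  # the rest, taken as GST-exclusive
gst = 0.15 * spending     # GST at the NZ rate of 15%
print(round(gst))              # 6834
print(round(gst / income, 2))  # 0.1, i.e. about 10% of income
```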

If you use this table from 2011, which David Farrar at Kiwiblog attributed to English’s office, it looks like many people in the $30k-$40k band will also pay tax net of transfers

[Table: net tax paid by income band, 2011]

If everyone in that band was at the midpoint, and they had no tax deductions (so that the $35k taxable income is all the non-transfer income they have), the total taxable income plus gross transfers for that band is about $7150 million, and 15% of 60% of that is $643 million, so they’d have to use 40% of their money in GST-exempt ways to pay no tax net of transfers.  Presumably the switch from positive to negative tax net of transfers is somewhere in this band. So, somewhere between 27% and 37% of New Zealand households pay less in tax than they receive in transfers.
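The same check for the $30k–$40k band, using the figures above:

```python
band_total = 7150    # $ million: taxable income plus gross transfers for the band
liable_share = 0.60  # if 40% of the money goes on GST-exempt uses
gst_rate = 0.15      # NZ GST
gst = band_total * liable_share * gst_rate
print(round(gst, 1))  # 643.5, i.e. about $643 million
```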

Of course, cash benefits aren’t the only thing you get from the government, and more detailed modelling of where taxes are actually paid and the value of education and health benefits estimates that the lower 60% of households (adjusted for household size) get more in direct benefits and social services than they pay in direct and indirect taxes — but a lot of that is ‘getting what you pay for’, not redistribution.

Most importantly of all, there isn’t an obvious target value for the proportion of households who pay no tax net of transfers. There’s nothing obviously special about the claimed 50% or the actual 30ish%. The question is whether increasing taxes and transfers to reduce inequality would be good or bad overall, and this statistic really isn’t relevant.

 

Previously for this set of statistics

December 9, 2014

Health benefits and natural products

The Natural Health and Supplementary Products Bill is back from the Health Committee. From the Principles section of the Bill:

(c) that natural health and supplementary products should be accompanied by information that—

   (i) is accurate; and

   (ii) tells consumers about any risks, side-effects, or benefits of using the product:

(d) that health benefit claims made for natural health and supplementary products should be supported by scientific or traditional evidence.

There’s an unfortunate tension between (c)(i) and (d), especially since (for the purposes of the Bill) the bar for ‘traditional evidence’ is set very low: evidence of traditional use is enough.

Now, traditional use obviously does convey some evidence as to safety and effectiveness. If you wanted a herbal toothache remedy, you’d be better off looking in Ngā Tipu Whakaoranga and noting traditional Māori use of kawakawa, rather than deciding to chew ongaonga.

For some traditional herbal medicines there is even good scientific evidence of a health benefit. Foxglove, opium poppy, pyrethrum, and willowbark are all traditional herbal products that really are effective. Extracts from two of them are on the WHO essential medicines list, as are synthetic adaptations of the other two. On the other hand, these are the rare exceptions — these are the ones where a vendor wouldn’t have to rely only on traditional evidence.

It’s hard to say how much belief in a herbal medicine is warranted by traditional use, and different people would have different views. It would have been much better to allow the fact of traditional use to be advertised itself, rather than allowing it to substitute for evidence of benefit.  Some people will find “traditional Māori use” a good reason to buy a product, others might be more persuaded by “based on Ayurvedic principles”.  We can leave that evaluation up to the consumer, and reserve claims of ‘health benefit’ for when we really have evidence of health benefit.

This isn’t treating science as privileged, but it is treating science as distinguished. There are some questions you really can answer by empirical study and repeatable experiment (as the Bill puts it), and one of them is whether a specific treatment does or does not have (on average) a specific health benefit in a specific group of people.

 

Diversity and segregation

A very nice interactive game and simulation by Vi Hart and Nicky Case, showing how very high levels of segregation can result even from just a preference for not being overwhelmingly outnumbered.

On the positive side, a fairly small active preference for diversity can overcome this problem.
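The mechanism is a Schelling-style segregation model, and a minimal sketch (my own, with made-up grid size and threshold, not Hart and Case’s code) shows the segregation effect:

```python
import random

def simulate(width=20, height=20, empty_frac=0.1, threshold=1/3, steps=10_000):
    # Scatter two types of agent, 'A' and 'B', on a grid, leaving some cells empty.
    cells = {}
    for x in range(width):
        for y in range(height):
            if random.random() > empty_frac:
                cells[(x, y)] = random.choice("AB")
    empties = [(x, y) for x in range(width) for y in range(height)
               if (x, y) not in cells]

    def neighbours(pos):
        x, y = pos
        return [cells[(x + dx, y + dy)]
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0) and (x + dx, y + dy) in cells]

    def like_share(pos):
        ns = neighbours(pos)
        return ns.count(cells[pos]) / len(ns) if ns else 1.0

    # An agent moves to a random empty cell only if fewer than `threshold`
    # of its neighbours are its own type: a mild dislike of being
    # overwhelmingly outnumbered, not a preference for segregation.
    for _ in range(steps):
        pos = random.choice(list(cells))
        if like_share(pos) < threshold and empties:
            dest = empties.pop(random.randrange(len(empties)))
            cells[dest] = cells.pop(pos)
            empties.append(pos)

    # Average share of like neighbours: a crude segregation measure.
    return sum(like_share(p) for p in cells) / len(cells)

print(round(simulate(), 2))  # typically well above the 1/3 anyone demanded
```

Each agent only objects to having fewer than a third of its neighbours like itself, yet the final neighbourhoods end up far more uniform than that.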

December 8, 2014

Political opinion: winning the right battles

From Lord Ashcroft (UK, Conservative) via Alex Harroway (UK, decidedly not Conservative), an examination of trends in UK opinion on a bunch of issues, graphed by whether they favour Labour or the Conservatives, and how important they are to respondents. It’s an important combination of information, and a good way to display it (or it would be if it weren’t a low-quality JPEG)

[Chart: issues plotted by importance to respondents and by Conservative vs Labour lead]

 

Ashcroft says

The higher up the issue, the more important it is; the further to the right, the bigger the Conservative lead on that issue. The Tories, then, need as many of these things as possible to be in the top right quadrant.

Two things are immediately apparent. One is that the golden quadrant is pretty sparsely populated. There is currently only one measure – being a party who will do what they say (in yellow, near the centre) – on which the Conservatives are ahead of Labour and which is of above average importance in people’s choice of party.

and Alex expands

When you campaign, you’re trying to do two things: convince, and mobilise. You need to win the argument, but you also need to make people think it was worth having the argument. The Tories are paying for the success of pouring abuse on Miliband with the people turned away by the undignified bully yelling. This goes, quite clearly, for the personalisation strategy in general.

Stat of the Week Competition: December 6 – 12 2014

Each week, we would like to invite readers of Stats Chat to submit nominations for our Stat of the Week competition and be in with the chance to win an iTunes voucher.

Here’s how it works:

  • Anyone may add a comment on this post to nominate their Stat of the Week candidate before midday Friday December 12 2014.
  • Statistics can be bad, exemplary or fascinating.
  • The statistic must be in the NZ media during the period of December 6 – 12 2014 inclusive.
  • Quote the statistic, when and where it was published and tell us why it should be our Stat of the Week.

Next Monday at midday we’ll announce the winner of this week’s Stat of the Week competition, and start a new one.


December 7, 2014

Briefly

Bot or Not?

Turing had the Imitation Game, Philip K. Dick had the Voight-Kampff Test, and spammers gave us the CAPTCHA. The Truthy project at Indiana University has BotOrNot, which is supposed to distinguish real people on Twitter from automated accounts, ‘bots’, using analysis of their language, their social networks, and their retweeting behaviour. BotOrNot seems to sort of work, but not as well as you might expect.

@NZquake, a very obvious bot that tweets earthquake information from GeoNet, is rated at an 18% chance of being a bot.  Siouxsie Wiles, for whom there is pretty strong evidence of existence as a real person, has a 29% chance of being a bot.  I’ve got a 37% chance, the same as @fly_papers, which is a bot that tweets the titles of research papers about fruit flies, and slightly higher than @statschat, the bot that tweets StatsChat post links,  or @redscarebot, which replies to tweets that include ‘communist’ or ‘socialist’. Other people at a similar probability include Winston Peters, Metiria Turei, and Nicola Gaston (President of the NZ Association of Scientists).

PicPedant, the twitter account of the tireless Paulo Ordoveza, who debunks fake photos and provides origins for uncredited ones, rates at 44% bot probability, but obviously isn’t.  Ben Atkinson, a Canadian economist and StatsChat reader, has a 51% probability, and our only Prime Minister (or his twitterwallah), @johnkeypm, has a 60% probability.