Posts filed under Just look it up (249)

October 8, 2015

He’s a lumberjack and he’s inconsistently counted

Official statistics agencies publish lots of useful information that gets used by researchers, by educators, by businesses, by journalists, and (with the help of groups like Figure.NZ) by everyone else.  A dilemma for these agencies is how to handle changes in the best ways to measure something. If you never change the definitions you get perfectly consistent reports of no-longer-useful information. If you do change the definitions, things don’t match up.

This graph is from a blog post by a Canadian economist, Livio Di Matteo. It shows the number of Canadians employed in the lumber industry over time, patched together from several Statistics Canada time series.


Dr Di Matteo is a professional, and wasn’t trying to do anything subtle here — he just wanted a lecture slide — and a lot of this data was from the time when Stats Canada was among the best in the world, so it’s not a problem that’s easy to avoid. It’s just harder than it sounds to define who works in the lumber industry. For example, are the log drivers in the lumber industry, or are they something like “transport workers, not elsewhere classified”?


September 30, 2015

Three strikes: some evidence

The usual objection to a “three-strikes” law imposing life sentences without parole, in addition to the objections against severe mandatory minimums, is

  • It doesn’t work; or
  • It doesn’t work well enough given the injustice involved; or
  • There isn’t good enough evidence that it works well enough given the potential for injustice involved.

New Zealand’s version of the law is much less bad than the US versions, but there are still both real problems, and theoretical problems (robbery and aggravated burglary both include crimes of a wide range of severity).

Graeme Edgeler (who is not an enthusiast for the law) has a post at Public Address arguing that there is, at least, evidence of a reduction in subsequent offending by people who receive a first-strike warning, based on a mixture of published data and OIA requests.

Here’s his data in tabular form, showing second convictions for offences that would qualify under the three-strikes law. The bottom row shows the convictions that counted as ‘first strikes’; the other rows did not count as strikes because the law isn’t retrospective.

Offence period   Conviction period   Convictions   Second-conviction period   Second convictions
7/05-6/10        7/05-6/10           6809          7/05-6/10                  256
Before 7/10      7/10-6/15           2437          7/10-1/15                  300
7/10-6/15        7/10-6/15           5422          7/10-6/15                  81


The first and last rows are directly comparable five-year periods. Offences that now qualify as ‘strikes’ are down 20% in the last five-year period; second convictions are down a further 62%. Data in the middle row isn’t as comparable, but there is at least no apparent support for a general reduction in reoffending in the last five-year period.

The overall 20% decrease could easily be explained as part of the long-term trends in crime, but the extra decrease in second-strike offences can’t be.  It’s also much larger than could be expected from random variation. The law isn’t keeping violent criminals off the streets, but it does seem to be deterring second offences.
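
As a rough check on that last claim, here’s a back-of-the-envelope two-proportion comparison using the counts in the table above (my sketch, not anything from Graeme’s post):

  from math import sqrt

  # Second convictions out of qualifying convictions, for the two comparable five-year periods
  x1, n1 = 256, 6809   # offences and convictions in 7/05-6/10 (before the law)
  x2, n2 = 81, 5422    # offences and convictions in 7/10-6/15 (first strikes)

  p1, p2 = x1 / n1, x2 / n2
  pooled = (x1 + x2) / (n1 + n2)
  se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
  print(f"second-conviction rates: {p1:.1%} vs {p2:.1%}")   # about 3.8% vs 1.5%
  print(f"z = {(p1 - p2) / se:.1f}")                        # roughly 7.6

A z-statistic that size is far beyond anything plausible from random variation alone.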

Reasonable people could still oppose the three-strikes law (and Graeme does) but unless we have testable alternative explanations for the large, selective decrease, we should probably be looking at arguments that the law is wrong in principle, not that it’s ineffective.


September 28, 2015

Seeing the margin of error

A detail from Andrew Chen’s visualisation of all the election polls in NZ:


His full graph is somewhat interactive: you can zoom in on times, select parties, etc. What I like about this format is how clear it makes the poll-to-poll variability.  The poll result for, say, National isn’t a line, it’s a cloud of uncertainty.

The cloud of uncertainty gets narrower for minor parties (as detailed in my cheatsheet), but for the major parties you can see it span an entire 10-percentage-point grid cell or more.
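
If you want numbers to go with the picture, the standard margin-of-error calculation is easy to reproduce (a sketch assuming a typical poll of about 1,000 respondents; the cheatsheet has the details):

  from math import sqrt

  n = 1000  # an assumed, typical poll size

  def margin_of_error(p, n):
      # Half-width of an approximate 95% interval for an estimated proportion
      return 1.96 * sqrt(p * (1 - p) / n)

  for p in (0.47, 0.25, 0.05):   # a major party, a mid-sized party, a minor party
      print(f"{p:.0%}: +/- {margin_of_error(p, n):.1%}")   # 3.1%, 2.7%, 1.4%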

September 26, 2015

US:China graph of the day

This (via @albertocairo) is from the Guardian, two years ago.


At first it looks like a pie chart, but it isn’t. It’s a set of bar charts warped into a circle, so that the ratio of blue and red areas in a wedge is the square of the ratio of the numbers. Also, the circle format means the longest wedge in each pair must be the same length: 8.6% unemployment rate is the same as 4.6% military expenditure, 104% market capitalisation, and 46 Olympic gold medals.
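
The square comes from the sector-area formula: a wedge of angle \theta and radius r has area

  A = \frac{\theta}{2}\,r^{2}

so wedges whose lengths are in ratio k have areas in ratio k^{2}.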

Many of these are proportions or per-capita figures, but not all. Carbon emissions are national totals, making China look worse. Film industry revenues and exports are totals; they are also gross revenues — because the whole visual metaphor falls apart completely for numbers that can be negative. That’s why the current-year budget surplus/deficit isn’t treated like the other numbers.

There are also some unusual definitions. “Social media”, the bar where China is furthest behind, is defined just by the proportion who use Facebook, which obviously underestimates the social-media activity of the US (and also, perhaps, of China).

The post has some discussion of the difficulties — for example, the measurement and even the definition of unemployment in the two countries — and is much better than the graph.

Here’s a different take on the same countries, in the same format, from the World Economic Forum:


They have similar problems with total vs proportion/mean variables. They solve the y-axis problem by working with international ranks, which at least gives a common scale. However, having rank 1 drawn as the largest wedge and some unspecified, much larger rank number drawn as the smallest does make the relationship between area and number fairly weird. It also means that the actual numbers for each wedge aren’t fractions of a total in any sensible way.

If the main point is to be an eye-catching hook for the story, the Guardian graph is more successful.

September 16, 2015

How many immigrants?

Before reading on, what proportion of New Zealand residents do you think were born overseas? (more…)

September 7, 2015

Some refugee numbers

First, the Gulf States. It has been widely reported that the Gulf States have taken zero refugees from Syria.  This is by definition: they are not signatories to the relevant UN Conventions, so people fleeing to the Gulf States do not count as refugees according to the UNHCR. Those people still exist. There are relevant questions about why these states aren’t signatories, and about how they have treated the (many) Syrians who fled there,  and about whether they should accept more people from Syria, and about their humanitarian record in general. The official figure of zero refugees isn’t a good starting point, though.


Second, New Zealand. The Government has announced an increase in the refugee quota, but the announcement is a mixture of annual figures and figures added up across two and a half years. It would be clearer if the numbers used the same time period.

The current quota is 750 per year. Over the next 2.5 years that would be 1875 people. We are increasing this by 600, to 2475.  The current budget is $58 million/year. Over the next 2.5 years that would be $145 million. We are increasing this by an estimated $48 million, to $193 million. Either by numbers or by dollars, this is about a 1/3 increase.
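
If you prefer to see the arithmetic in one place, on a common 2.5-year basis (just the announcement’s own numbers restated):

  years = 2.5
  people_now, people_extra = 750 * years, 600   # 1875 people under the current quota, plus 600
  dollars_now, dollars_extra = 58 * years, 48   # $145 million, plus an estimated $48 million

  print(f"people:  {people_extra / people_now:.0%} increase")    # 32%
  print(f"dollars: {dollars_extra / dollars_now:.0%} increase")  # 33%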

August 22, 2015

Changing who you count

The New York Times has a well-deserved reputation for data journalism, but anyone can have a bad day.  There’s a piece by Steven Johnson on the non-extinction of the music industry (which I think makes some good points), but which the Future of Music Coalition doesn’t like at all. And they also have some good points.

In particular, Johnson says

“According to the OES, in 1999 there were nearly 53,000 Americans who considered their primary occupation to be that of a musician, a music director or a composer; in 2014 more than 60,000 people were employed writing, singing, or playing music. That’s a rise of 15 percent.”


He’s right. This is a graph (not that you really need one):


The Future of Music Coalition give the numbers for each year, and they’re interesting. Here’s a graph of the totals:


There isn’t a simple increase; there’s a weird two-humped pattern. Why?

Well, if you look at the two categories making up the total, “Music Directors and Composers” and “Musicians and Singers”, it’s quite revealing:


The larger category, “Musicians and Singers”, has been declining.  The smaller category, “Music Directors and Composers” was going up slowly, then had a dramatic three-year, straight-line increase, then decreased a bit.

Going  into the Technical Notes for the estimates (eg, 2009), we see

May 2009 estimates are based on responses from six semiannual panels collected over a 3-year period

That means the three-year increase of 5,000 jobs/year is probably a one-off increase of 15,000 jobs (see the toy example below). Either the number of “Music Directors and Composers” more than doubled in 2009, or, more likely, there was a change in definitions or sampling approach. The Future of Music Coalition point out that the Bureau of Labor Statistics FAQs say this is a problem (though they’ve got the wrong link: it’s here, question F.1)

Challenges in using OES data as a time series include changes in the occupational, industrial, and geographical classification systems
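
To see why a rolling three-year average turns a one-off jump of 15,000 into what looks like three annual increases of about 5,000, here’s a toy example (made-up numbers, not the OES series):

  # A one-off reclassification of 15,000 jobs, seen through a three-year window
  actual = [30000] * 5 + [45000] * 5                 # made-up series with a step change in year 5
  smoothed = [sum(actual[i - 2:i + 1]) // 3          # trailing three-year average
              for i in range(2, len(actual))]
  print(smoothed)
  # [30000, 30000, 30000, 35000, 40000, 45000, 45000, 45000]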

In particular, the 2008 statistics estimate that only 390 of these people were employed in primary and secondary schools; the 2009 estimate is 6,000, and the 2011 estimate is 16,880. A lot of primary and secondary school teachers got reclassified into this group; it wasn’t a real increase.

When the school teachers are kept out of “Music Directors and Composers”, to get better comparability across years, the change is from 53,000 in 1999 to 47,000 in 2014. That’s not a 15% increase; it’s an 11% decrease.

Official statistics agencies try not to change their definitions, precisely because of this problem, but they do have to keep up with a changing world. In the other direction, I wrote about a failure to change definitions that led the US Census Bureau to report that four times as many pre-schoolers were cared for by fathers as by mothers.

August 17, 2015

More diversity pie-charts

These ones are from the Seattle Times, since that’s where I was last week.

Amazon, like many other tech companies, has been persuaded to release figures on gender and ethnicity for its employees. On the original figures, Amazon looked different from the other companies, but Amazon is unusual in being a shipping-things-around company as well as a tech company. Recently, they released separate figures for the ‘labourers and helpers’ vs the technical and managerial staff. The pie chart shows how the breakdown makes a difference.

In contrast to Kirsty Johnson’s pie charts last week, where subtlety would have been wasted  given the data and the point she was making, here I think it’s more useful to have the context of the other companies and something that’s better numerically than a pie chart.

This is what the original figures looked like:


Here’s the same thing with the breakdown of Amazon employees into two groups:


When you compare the tech-company half of Amazon to other large tech companies, it blends in smoothly.

As a final point, “diversity” is really the wrong word here. The racial/ethnic diversity of the tech companies is pretty close to that of the US labour force, if you measure in any of the standard ways used in ecology or data mining, such as entropy or Simpson’s index.   The issue isn’t diversity but equal opportunity; the campaigners, led by Jesse Jackson, are clear on this point, but the tech companies and often the media prefer to talk about diversity.
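
For anyone who wants to check that sort of claim, the two indices are easy to compute. Here’s a sketch of how, with made-up proportions rather than the real company or labour-force figures:

  from math import log

  def entropy(props):
      # Shannon entropy: higher means people are spread more evenly across groups
      return -sum(p * log(p) for p in props if p > 0)

  def simpson(props):
      # Simpson's index: the chance that two randomly chosen people are from different groups
      return 1 - sum(p * p for p in props)

  # Made-up ethnic breakdowns, for illustration only
  tech_company = [0.55, 0.30, 0.08, 0.05, 0.02]
  labour_force = [0.60, 0.18, 0.13, 0.06, 0.03]

  for name, props in (("tech company", tech_company), ("labour force", labour_force)):
      print(f"{name}: entropy = {entropy(props):.2f}, Simpson = {simpson(props):.2f}")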


August 14, 2015

Sometimes a pie chart is enough

From Kirsty Johnson, in the Herald, ethnicity in the highest and lowest decile schools in Auckland.


Statisticians don’t like pie charts because they are inefficient; they communicate numerical information less effectively than other forms, and don’t show subtle differences well.  Sometimes the differences are sufficiently unsubtle that a pie chart works.

It’s still usually not ideal to show just the two extreme ends of a spectrum, just as it’s usually a bad idea to show just two points in a time series. Here’s the full spectrum, with data from EducationCounts:



[The Herald has shown the detailed school ethnicity data before in other contexts, eg the decile drift story and graphics from Nicholas Jones and Harkanwal Singh last year]

I’ve used counts rather than percentages to emphasise the variation in student numbers between deciles. The pattern of Māori and Pacific representation is clearly different in this graph: the numbers of Pacific students fall off dramatically as you move up the ranking, but the numbers of Māori students stabilise. There are almost half as many Māori students in decile 10 as in decile 1, but only a tenth as many Pacific students.

If you’re interested in school diversity, the percentages are the right format, but if you’re interested in social stratification, you probably want to know how students of different ethnicities are distributed across deciles, so the absolute numbers are relevant.
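
The two views are just the same decile-by-ethnicity table normalised in different directions: within each decile for the diversity question, within each ethnic group for the stratification question. A sketch with hypothetical counts:

  # Hypothetical counts of students, by decile and ethnic group
  counts = {
      "decile 1":  {"Maori": 4000, "Pacific": 6000, "Other": 2000},
      "decile 10": {"Maori": 2000, "Pacific": 600,  "Other": 9000},
  }

  # Diversity view: percentages within each decile (rows sum to 100%)
  for decile, row in counts.items():
      total = sum(row.values())
      print(decile, {group: f"{n / total:.0%}" for group, n in row.items()})

  # Stratification view: how each group is spread across deciles (columns sum to 100%)
  for group in ("Maori", "Pacific", "Other"):
      total = sum(row[group] for row in counts.values())
      print(group, {decile: f"{row[group] / total:.0%}" for decile, row in counts.items()})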


August 1, 2015

NZ electoral demographics

Two more visualisations:

Kieran Healy has graphs of the male:female ratio by age for each electorate. Here are the four with the highest female proportion, with the excess of women starting rather dramatically in the late teen years.



Andrew Chen has a lovely interactive scatterplot of vote for each party against demographic characteristics. For example (via Harkanwal Singh), the number of votes for NZ First vs median age: