Posts written by Thomas Lumley (2534)

avatar

Thomas Lumley (@tslumley) is Professor of Biostatistics at the University of Auckland. His research interests include semiparametric models, survey sampling, statistical computing, foundations of statistics, and whatever methodological problems his medical collaborators come up with. He also blogs at Biased and Inefficient.

August 14, 2022

Briefly

August 5, 2022

Briefly

  • There’s a new version of ESR’s Wastewater Covid dashboard. It has information on which variants are being found, by location and over time
  • Hashigo Zake, the Wellington craft beer bar, has a new Twitter bot tweeting out the CO2 concentration inside the bar. I summarised a couple of days of it.
  • How far can you go by train in 5 hours? A map of Europe
  • How likely are people to win the lottery: the Washington Post did a quiz
  • Jamie Morton in the Herald has a good discussion of the Stats NZ review of the population denominator used in Covid vaccine stats.  The HSU undercounts somewhat, especially for Māori and Pacific Peoples, but it has the virtue of counting ethnicity the same way that the vaccination data does, and of including people in NZ who are not residents.
August 2, 2022

Homelessness statistics

Radio NZ reported an estimate by the charity Orange Sky: “One in six kiwis have been homeless and tonight about 41,000 of us will bed down without adequate access to housing”.  I saw some skepticism of these figures on Twitter, so let’s take a look.

Based on the 2018 Census, researchers at the University of Otago estimated

  • 3,624 people who were considered to be living without shelter (on the streets, in improvised dwellings – including cars – and in mobile dwellings). 
  • 7,929 people who were living in temporary accommodation (night shelters, women’s refuges, transitional housing, camping grounds, boarding houses, hotels, motels, vessels, and marae). 
  • 30,171 people who were sharing accommodation, staying with others in a severely crowded dwelling. 
  • 60,399 people who were living in uninhabitable housing that was lacking one of six basic amenities: tap water that is safe to drink; electricity; cooking facilities; a kitchen sink; a bath or shower; a toilet.

So, the figure of 41,000 is a surprisingly close match to the Census data for those first three groups (3,624 + 7,929 + 30,171 = 41,724) — counting only the first group, or the first two, would obviously give a smaller number.  Because it would be hard to estimate current homelessness from a YouGov survey panel, I suspect the number did come from the Census, and the ‘new study’ the story mentions is responsible for the ‘one in six’, though Orange Sky actually gives the number as ‘more than one in five (21%)’.

Do the two figures match? Well, if about a million people had ever been homeless (in the broad sense) and 41,000 currently are, that’s a ratio of 25.  The median age of adults (YouGov interviews adults) is probably in the 40s, so if the typical person who was ever homeless spent less than a couple of years homeless the figures would match.  The People’s Project NZ say that homelessness in NZ is mostly short-term — in the sense that most people who are ever homeless are only that way for a relatively short time (which isn’t the same as saying most people who are currently homeless will be that way for a short time).
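The duration argument can be written out as a back-of-envelope steady-state calculation. The one-million and 41,000 figures are the ones above; the 25-year adult ‘exposure window’ is an illustrative assumption, not anything from the story.

```python
# Back-of-envelope check: can 'ever homeless' and 'currently homeless' both be right?
# Figures from the post; the 25-year exposure window is an illustrative assumption.
ever_homeless = 1_000_000    # roughly 'more than one in five' NZ adults
currently_homeless = 41_000

ratio = ever_homeless / currently_homeless  # about 25

# In a rough steady state,
#   (current count) ~ (ever count) * (mean spell length) / (exposure window),
# where the window is the average number of adult years over which homelessness
# could have occurred -- call it 25 years for someone in their mid-40s.
window_years = 25  # assumption
implied_mean_spell = window_years / ratio
print(f"ratio = {ratio:.1f}, implied mean time homeless ~ {implied_mean_spell:.1f} years")
```

Under these assumptions the implied average time spent homeless is about a year, consistent with mostly short-term homelessness.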

So, the figures aren’t obviously implausible, and given that they’re presented as the result of research that should be able to get reasonable estimates, they may well be reasonably accurate.

July 28, 2022

Counting bots better

I wrote before about estimating the proportion of spam bots among the apparent people on Twitter.  The way Twitter does it seems ok. According to some people on the internet who seem to know about Delaware mergers and acquisitions law it doesn’t even matter if the way Twitter does it is ok, as long as it roughly matches what they have claimed they do.  But it’s still interesting from a statistics point of view to ask whether it could be done better given the existence of predictive models (“AI”, if you must).  It’s also connected to my research.

Imagine we have a magic black box that spits out “Bot” or “Not” for each user.  We don’t know how it works (it’s magic) and we don’t know how much to trust it (it’s magic). We feed in the account details of 217 million monetisable daily active users and it chugs and whirrs for a while before saying “You have 15696969 bots.”

We’re not going to just tell investors “A magic box says we have 15696969 bots among our daily active users”, but it’s still useful information.  We also have reviewed a genuine random sample of 1000 accounts by hand, over a couple of weeks, and we get 54 bots. We don’t want to just ignore the magic box and say “we have 5.4% bots”. What should our estimate be, combining the two? It obviously depends on how accurate the magic box is!  We can get some idea by looking at what the magic box says for the 1000 accounts reviewed by hand.

Maybe the magic box says 74 of the 1000 accounts are bots: 50 of the ones that really are, and 24 others. That means it’s fairly accurate, but it overcounts by about 40%.  Over all of Twitter, you probably don’t have 15696969 bots; maybe you have more like 11,420,000 bots.   If we want the best estimate that doesn’t require trusting the magic box and only requires trusting the random sampling, we can divide up Twitter into accounts the box says are bots and ones that it says aren’t bots, estimate the true proportion in each group, and combine.   In this example, we’d get 5.3% with a 95% confidence interval of  (4.4%, 6.2%). If we didn’t have the magic box at all, we’d get an estimate of 5.4% with a confidence interval of (4.0%, 6.8%).  The magic box has improved the precision of the estimate.  With this technique, the magic box can only be helpful. If it’s accurate, we’ll get a big improvement in precision. If it’s not accurate, we’ll get little or no improvement in precision, but we still won’t introduce any bias.
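The combined estimate is just a weighted average over the two groups defined by the box’s verdict, with binomial standard errors within each group. A minimal sketch, using the numbers from the example above:

```python
import math

# Post-stratified estimate of the bot proportion, using the worked example's numbers.
N = 217_000_000            # accounts counted as monetisable daily active users
box_bots = 15_696_969      # accounts the magic box flags as bots

n = 1000                   # hand-reviewed random sample
flagged = 74               # sampled accounts the box flags as bots
true_in_flagged = 50       # of those, actually bots
true_in_unflagged = 4      # bots the box missed (54 - 50)

# Stratum weights: the box's verdict splits all of Twitter into two strata
W1 = box_bots / N
W2 = 1 - W1

# True bot proportion estimated within each stratum from the hand review
p1 = true_in_flagged / flagged
p2 = true_in_unflagged / (n - flagged)

# Post-stratified estimate and its standard error
p_ps = W1 * p1 + W2 * p2
se_ps = math.sqrt(W1**2 * p1 * (1 - p1) / flagged +
                  W2**2 * p2 * (1 - p2) / (n - flagged))

# Simple random-sample estimate, ignoring the box entirely
p_srs = (true_in_flagged + true_in_unflagged) / n
se_srs = math.sqrt(p_srs * (1 - p_srs) / n)

print(f"post-stratified: {p_ps:.3f} +/- {1.96 * se_ps:.3f}")
print(f"ignoring box:    {p_srs:.3f} +/- {1.96 * se_srs:.3f}")
```

This reproduces the 5.3% (4.4%, 6.2%) and 5.4% (4.0%, 6.8%) intervals: the same hand review, but a narrower interval once the box’s information is used.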

The technique is called post-stratification, and it’s the simplest form of a very general approach to using information about a whole population to improve an estimate from a random sample.  Improving estimates of proportions or counts with post-stratification is a very old idea (well, very old by the standards of statistics).  More recent research in this area includes ways to improve estimation of more complicated statistical estimates, such as regression models. We also look at ways to use the magic box to pick a better random sample  — in this example, instead of picking 1000 users at random we might pick a random sample of 500 accounts that the magic box says are bots and 500 accounts that it says are people. Or maybe it’s more reliable on old accounts than new ones, and we want to take random samples from more new accounts and fewer old accounts.
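The ‘better random sample’ idea can be sketched with the same illustrative numbers: under these assumptions, hand-reviewing 500 accounts from each of the box’s two groups gives a noticeably smaller standard error than post-stratifying a simple random sample of 1000.

```python
import math

# Compare two designs for the hand review, using the worked example's numbers.
W1 = 15_696_969 / 217_000_000   # share of accounts the box flags as bots
W2 = 1 - W1
p1, p2 = 50 / 74, 4 / 926       # true bot proportion within each stratum (illustrative)

def stratified_se(n1, n2):
    """Standard error of the combined estimate W1*p1_hat + W2*p2_hat,
    with n1 reviews in the flagged stratum and n2 in the unflagged one."""
    return math.sqrt(W1**2 * p1 * (1 - p1) / n1 +
                     W2**2 * p2 * (1 - p2) / n2)

# A simple random sample of 1000 lands about 72 reviews in the flagged stratum
print(f"SRS of 1000, post-stratified: SE ~ {stratified_se(72, 928):.4f}")
print(f"500 + 500 stratified design:  SE ~ {stratified_se(500, 500):.4f}")
```

The balanced design puts more of the review effort into the small, high-uncertainty flagged stratum, which is where most of the variance comes from.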

In practical applications the real limit on this idea is the difficulty of doing random sampling.  For Twitter, that’s easy. It’s feasible when you’re choosing which medical records from a database to check by hand, or which frozen blood samples to analyse, or which Covid PCR swabs to send for genome sequencing.  If you’re sampling people, though, the big challenge is non-response. Many people just won’t fill in your forms or talk to you on the phone or whatever. Post-stratification can be part of the solution there, too, but the problem is a lot messier.

 

July 27, 2022

Attendance figures

Chris Luxon said today on RNZ Morning Report that “55% of kids aren’t going to school regularly”.  On Twitter, Simon Britten said “In Term 1 of 2022 the Unjustified Absence rate was 6.4%, up from 4.1% the year prior. Not great, but also not 50%.”

It’s pretty unusual for NZ politicians to make straightforwardly false statements about publicly available statistics, so if there are numbers that seem to disagree or are just surprising, the most likely explanation is that the number doesn’t mean what you think it means.   It sounds like we have a disagreement about facts here, but we actually have a disagreement about which summary is most useful.

New Zealand does have an ongoing problem with school attendance — according to the Government, not just the Opposition.  The new Attendance and Engagement Strategy document (PDF) says that the percentage of regular attendance was  59.7% in 2021, down from  69.5% in 2015. The aim is to raise this to 70% by 2024 and 75% by 2026.

So if the unjustified absence rate is 6.4%, how can the regular attendance rate be 59.7% or 45%?  “Regular attendance” is defined as attending school at least 90% of the time — so if you miss more than one day per fortnight, or more than one week per term, you are not attending regularly.

For example, suppose half the kids in NZ missed one week and one day (six days of a roughly 50-day term) in term 1. Those kids’ absence rate would be about 12%, making the overall absence rate about 6%, but the regular attendance rate would be 50%.  The unjustified absence rate could be anywhere from zero up to the full absence rate: it’s quite possible to have a 5% unjustified absence rate and a 50% regular attendance rate.
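The two summaries diverge because absence is averaged over all students, while ‘regular attendance’ asks whether each student individually attended at least 90% of the time. A small sketch of the hypothetical above, assuming a 50-day (roughly ten-week) term:

```python
# Hypothetical from the post: half of 1000 students miss six days of a
# 50-day term, half miss none. (The 50-day term is an assumption.)
days_in_term = 50
absences = [6] * 500 + [0] * 500

# Absence rate: total days missed as a share of total possible days
absence_rate = sum(absences) / (len(absences) * days_in_term)

# Regular attendance: each student individually attended at least 90% of days
regular = sum(1 for a in absences if (days_in_term - a) / days_in_term >= 0.9)
regular_attendance_rate = regular / len(absences)

print(f"overall absence rate:    {absence_rate:.0%}")
print(f"regular attendance rate: {regular_attendance_rate:.0%}")
```

Six days missed is 12% of the term, so every one of those students falls below the 90% threshold: a 6% absence rate coexists with a 50% regular attendance rate.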

Now we want more details. They are available here.  The regular attendance rate is down dramatically this year, from 66.8% in term 1 last year to 46.1% in term 1 this year. The proportion of half-days attended is down less dramatically, from 90.5% in term 1 last year to 84.5% in term 1 this year.  Justified absences are up 4.5 percentage points and unjustified absences up by just under 2 percentage points.

What’s different between term 1 this year and term 1 last year?

Well…

It wouldn’t be surprising if a fair fraction of NZ kids took a week off school in term 1, either because they had Covid or because they were in isolation as household contacts.  That’s what should have happened, from a public health point of view.  It’s actually a bit surprising to me that justified absences weren’t even higher. Term 1, 2022, shouldn’t really be representative of the long-term state of schools in NZ.  Attendance rates were higher before the Omicron spike; they will probably be higher in the future even without anti-truancy interventions.

It’s reasonable to be worried about school attendance, as the Government and Opposition both claim they are. I don’t think “55% of kids aren’t going to school regularly”  is a particularly good way to describe a Covid outbreak.  Last year’s figures are more relevant if you want to talk about the problem seriously.

July 26, 2022

Briefly

  • Derek Lowe writes: “Late last week came this report in Science about doctored images in a series of very influential papers on amyloid and Alzheimer’s disease. That’s attracted a lot of interest, as well it should, and as a longtime observer of the field (and onetime researcher in it), I wanted to offer my own opinions on the controversy.”  As he says, the interest in amyloid is not just (or primarily) driven by the allegedly fraudulent research. There’s a lot of support for the importance of beta-amyloid from genetics: mutations that cause early-onset Alzheimer’s, and perhaps even more convincingly, a mutation found in Icelanders that protects against Alzheimer’s. The alleged fraud is bad, as is the current complete failure of research into treatments, but the link between the two isn’t as strong as some people are implying.
  • Prof Casey Fiesler, who teaches in the area of tech ethics and governance, is developing a TikTok-based tech ethics and privacy course
  • ESR’s Covid wastewater dashboard is live.  This is important because Everyone Poops. We don’t have an exact conversion from measured viruses to active cases, and the conversion could vary with the strain of Covid and with age of the patients, but at least it won’t depend on who decides to get tested and report their test results.
  • The wastewater data will be an excellent complement for the prevalence survey that the Ministry of Health is starting up. The survey, assuming that a reasonable fraction of people go along with getting tested, will give a direct estimate of the true population infection rate, but it will not be as detailed as the wastewater data, which can give estimates for relatively small areas and short time frames.
  • Briefing on the Data and Statistics Bill from the NZ Council of Civil Liberties. If you follow StatsChat you’ve seen these points before. And you will see them again.
July 18, 2022

Briefly

  • Training data for emotions/sentiment from Google appears to be badly wrong (Inconceivable!)
  • About 12% of people surveyed in the UK said they knew “a great deal” or “a fair amount” about a non-existent candidate for leader of the Conservative Party.  More reassuringly, the proportion who had ‘never heard of’ this candidate was much higher than for the real candidates.
  • The New York Times asks what’s the chance that Trump adversaries McCabe and Comey got tax audits — and, much more usefully, shows how the answer to this question depends on how you define the comparison
  • Hilda Bastian looks at the evidence on whether female national leaders handled the pandemic better, now that we have more follow-up
  • From the President of the Royal Society (of London), the need for data literacy, but also the need to “avoid shoehorning everything to do with numbers into a box labelled “Maths”, which has negative connotations for many. If you use that box as a place to pigeonhole quantitative literacy, you are shooting yourself in the foot.” (disclaimer: he’s a statistician)
  • A re-analysis suggests that the vaccine effectiveness data for the Sputnik coronavirus vaccine cannot possibly be correct. Among other red flags, the estimated effectiveness in different age groups was far more similar than would be expected even if the true effectiveness was identical in the groups.

Sampling and automation

Q: Did you see Elon Musk is trying to buy or maybe not buy Twitter?

A: No, I have been on Mars for the last month, in a cave, with my eyes shut and my fingers in my ears

Q: <poop emoji>.  But the bots? Sampling 100 accounts and no AI?

A: There are two issues here: estimating the number of bots, and removing spam accounts

Q: But don’t you need to know how many there are to remove them?

A: Not at all. You block porn bots and crypto spammers and terfs, right?

Q: Yes?

A: How many?

Q: Basically all the ones I run across.

A: That’s what Twitter does, too. Well, obviously not the same categories.  And they use automation for that.  Their court filing says they suspend over a million accounts a day (paragraph 65)

Q: But the 100 accounts?

A: They also manually inspect about 100 accounts per day, taken from the accounts that they are counting as real people — or as they call us, “monetizable daily active users” — to see if they are bots.  Some perfectly nice accounts are bots — like @pomological or @ThreeBodyBot or @geonet or the currently dormant @tuureiti — but bots aren’t likely to read ads with the same level of interest as monetizable daily active users do, so advertisers care about the difference.

Q: Why not just use AI for estimation, too?

A: One reason is that you need representative samples of bots and non-bots to train the AI, and you need to keep coming up with these samples over time as the bots learn to game the AI

Q: But how can 100 be enough when there are 74.3 bazillion Twitter users?

A: The classic analogy is that you only need to taste a teaspoon of soup to know if it’s salty enough.   Random sampling really works, if you can do it.  In many applications, it’s hard to do: election polls try to take a random sample, but most of the people they sample don’t cooperate.  In this case, Twitter should be able to do a genuine random sample of the accounts they are counting as monetizable daily active users, and taking a small sample allows them to put more effort into each account.  It’s a lot better to look at 100 accounts carefully than to do a half-arsed job on 10,000.

Q: 100, though? Really?

A: 100 per day.  They report the proportion every 90 days, and 9000 is plenty.  They’ll get good estimates of the average even over a couple of weeks.
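The teaspoon-of-soup point can be made quantitative: the margin of error of an estimated proportion depends on the sample size, not the population size. A sketch, assuming the true bot proportion is around 5%:

```python
import math

# 95% margin of error for an estimated proportion from a simple random sample.
# Note the population size (74.3 bazillion or otherwise) appears nowhere.
p = 0.05  # assumed true proportion of bots
for n in [100, 1000, 9000]:
    moe = 1.96 * math.sqrt(p * (1 - p) / n)
    print(f"n = {n:5d}: 95% margin of error ~ +/- {moe:.1%}")
```

At 9000 the margin of error is under half a percentage point, which is plenty of precision for a number reported as ‘less than 5%’.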

 

June 15, 2022

Briefly

  • The Herald says House prices: ‘Another bloodbath’ as prices slump again in May for sixth straight month – REINZ figures. Estimates in the story range from a 10–15% drop by the end of the year, and maybe 18% peak to trough.  In January, the Herald reported that prices had risen 30% in 2021, so even an 18% drop would leave prices about 7% higher than they were at the start of 2021 (1.30 × 0.82 ≈ 1.07).  So even the most optimistic forecast has housing prices pretty much keeping up with inflation over 2021–22.
  • Emma Vitz updated her housing price maps for the Spinoff, which you probably saw this time last year
  • Len Cook, former Government Statistician of New Zealand and former Chief Statistician of the UK, is Not Happy with the proposed  Data and Statistics Bill,  replacing the old Statistics Act.  As I said on Twitter, you don’t necessarily have to agree with Len, but you do need to pay attention to what he thinks.
  • Covid has now killed more White people in the US than Hispanic, Black, or Asian people, according to a New York Times story.  As this Twitter thread points out, that’s because of age differences in the populations of different ethnicities.  Mortality rates are lower for White people <45 than for Black or Hispanic people. The same is true for White people 45–64. And 65–74, and over 75. Because the White population averages older, the total number of deaths is higher — but that’s like the way deaths are higher in Australia than New Zealand because the population is larger.  Age standardisation is really important if you want to think about reasons for differences between groups.
June 8, 2022

Briefly