July 10, 2019

Facial recognition: defining accuracy

From Sky News, ABC News, and the Guardian

Four out of five people identified by the Metropolitan Police’s facial recognition technology as possible suspects are innocent, according to an independent report.

The police prefer a different definition of accuracy, under which the error rate is about 1 in 1000.  It's not surprising they prefer that figure, but you might wonder how two definitions of the same thing can differ that much.

The full report is here (PDF). In a set of trials in UK cities, the system identified 42 matches to people on the watchlists.  Of these, 8 were confirmed as correct, 16 were identified as wrong just by looking at the pictures, 14 were identified as wrong by checking ID, and for 4 of the matches the police couldn't find the person to check their ID.  If you're feeling really generous, you could assume the police would be just as good at discarding poor matches in real life as they were in a carefully audited field trial.  You'd then have 8 right, 14 wrong, and 4 "don't know" among the cases where the police were convinced enough to go up to someone and ask for ID; that's still well under 50%, as the sketch below shows.
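
To make the arithmetic concrete, here's a minimal sketch in Python using the counts above (the variable names are mine, not the report's):

    # Counts from the independent report on the Met's trials
    matches = 42          # alerts where the system flagged a possible suspect
    correct = 8           # confirmed correct identifications
    wrong_on_sight = 16   # discarded as wrong just by comparing pictures
    wrong_on_id = 14      # shown to be wrong by an ID check
    unresolved = 4        # person couldn't be found to check ID

    # The "four out of five" figure: errors as a fraction of all alerts,
    # counting the unresolved cases as errors
    errors = wrong_on_sight + wrong_on_id + unresolved
    print(f"error rate per alert: {errors / matches:.0%}")   # 34/42, about 81%

    # The generous version: only the cases where police approached someone
    approached = correct + wrong_on_id + unresolved
    print(f"hit rate when approached: {correct / approached:.0%}")  # 8/26, about 31%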

The police definition of error rate is the number of errors as a fraction of all the faces scanned.  Whether you count 14 errors (the matches shown wrong by an ID check), 18 (adding the four unresolved cases), or 34 (adding the sixteen discarded on sight), dividing by tens of thousands of faces scanned gives a low rate.  The problem is that, under this definition, the error rate of just not using the facial recognition software at all (zero false matches per face scanned) is even lower than the 1 in 1000 from using it.
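
For comparison, here's a hypothetical version of the per-face calculation.  The trials scanned "tens of thousands" of faces; the 30,000 below is an assumed round number for illustration, not a figure from the report:

    # Assumed for illustration only: the trials scanned "tens of thousands"
    # of faces; 30,000 is a made-up round number, not a reported figure
    faces_scanned = 30_000

    # 14 = wrong by ID check, 18 = adding the 4 unresolved,
    # 34 = adding the 16 discarded on sight
    for errors in (14, 18, 34):
        print(f"{errors} errors: about 1 in {faces_scanned // errors} faces scanned")

    # Not running the software at all produces zero false matches,
    # so its error rate under this definition is exactly 0 per face scanned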

What the stories don't really do is ask what the error rate should be.  The right answer would need to combine the harm done by false matches and the benefit from true matches.  One might also want to consider the benefits from deterring crime, or the harm from giving the police more pretexts to challenge people they didn't like.  In medicine we tolerate screening tests with error rates worse than this facial recognition system's, but only in situations where people consent to be screened and to any further follow-up.
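
As a schematic of what that combination would look like (the weights below are arbitrary placeholders, not values from the report or the stories):

    # Schematic only: hypothetical weights, chosen to show that the answer
    # depends entirely on how you value true matches against false ones
    true_matches = 8
    false_matches = 34

    benefit_per_true_match = 1.0   # arbitrary units
    harm_per_false_match = 0.2     # arbitrary units

    net_benefit = (true_matches * benefit_per_true_match
                   - false_matches * harm_per_false_match)
    print(net_benefit)  # sign flips as the hypothetical weights change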

It’s hard to answer the question of what error rate would be ok, but it’s important to ask it.


Thomas Lumley (@tslumley) is Professor of Biostatistics at the University of Auckland. His research interests include semiparametric models, survey sampling, statistical computing, foundations of statistics, and whatever methodological problems his medical collaborators come up with. He also blogs at Biased and Inefficient.

Comments

  • James Sukias

    In addition to the "false positives", there may also be "false negatives" (i.e. the software failed to identify someone on the watch list), which the data don't show and which the police definition counts as correct.
