There’s a long tradition in law and ethics of thinking about how much harm to the innocent should be permitted in judicial procedures, and at what cost. The decision involves both uncertainty, since any judicial process will make mistakes, and consideration of what the tradeoffs would be in the absence of uncertainty. An old example of the latter is the story of Abraham bargaining with God over how many righteous people there would have to be in the notorious city of Sodom to save it from destruction, from a starting point of 50 down to a final offer of 10.
With the proposed new child protection laws, though, the arguments have mostly been about the uncertainty. The bills have not been released yet, but Paula Bennett says they will provide for protection orders keeping people away from children, to be imposed by judges not only on those convicted of child abuse but also ‘on the balance of probabilities’ for some people suspected of being a serious risk.
We’ve had two stat-of-the-week nominations for a blog post about this topic (arguably not ‘in the NZ media’, but we’ll leave that for the competition moderator). The question at issue is how many innocent people would end up under child protection orders if 80 orders were imposed each year.
In theory, the ‘balance of probabilities’ standard says that an order can be imposed (or perhaps must be imposed) if the probability of being a serious risk is more than 50%. The probability could be much higher than 50%: for example, if you were asked to decide on the balance of probabilities which of your friends are male, you would usually also be certain beyond reasonable doubt for most of them. On the other hand, there wouldn’t be any point to the legislation unless it is applied mostly to people for whom the evidence isn’t good enough even to attempt prosecution under current law, so the typical probabilities shouldn’t be that high.
Even if we knew the distribution of probabilities, we still wouldn’t have enough information to know how many innocent people will be subject to orders. The probability threshold here refers to the judge’s personal, partly subjective uncertainty, so even if we had an exact probability we’d only know how many innocent people the judge thought would be affected, and there’s no guarantee that judges have well-calibrated subjective probabilities on this topic.
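To see why calibration matters, suppose for the sake of argument that judges’ subjective probabilities were perfectly calibrated: then the expected number of innocent people under orders would just be the sum of (1 − p) over the orders imposed. A minimal sketch, using the 80 orders per year from the question above and a purely made-up probability for each:

```python
# Hypothetical illustration only: the probability 0.6 is an assumption,
# not a figure from any real case or dataset.

# Suppose 80 orders are imposed in a year, each granted with a
# subjective probability of guilt just above the 50% threshold:
probs = [0.6] * 80  # assumed, not real data

# If these probabilities were well calibrated, the expected number of
# innocent people under orders would be the sum of (1 - p):
expected_innocent = sum(1 - p for p in probs)
print(round(expected_innocent, 2))  # 32.0 under these assumed numbers
```

Even in this best case, the arithmetic only tells us how many innocent people the judges *think* are affected; if the subjective probabilities are mis-calibrated, the true number could be very different.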
In fact, the judicial system usually rules out statistical prior information about how likely different broad groups of people are to be guilty, so the judge may well be using a probability distribution that is deliberately mis-calibrated. In particular, the judicial system is (for very good but non-statistical reasons) very resistant to using as evidence the fact that someone has been charged, even though people who have been charged are statistically much more likely to be guilty than random members of the population.
At one extreme, if the police were always right when they suspected people, everyone who turned up in court with any significant evidence against them would be guilty. Even if the evidence was only up to the balance of probabilities standard, it would then turn out that no innocent people would be subject to the orders. That’s the impression that Ms Bennett seems to be trying to give — that it’s just the rules of evidence, not any real doubt about guilt. At the other extreme, if the police were just hauling in random people off the street, nearly everyone who looked guilty on the balance of probabilities might actually just be a victim of coincidence and circumstance.
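These two extremes can be made concrete with a small simulation. All the numbers here are assumptions for illustration: guilty people are assumed to pass the evidential threshold with probability 0.9, innocent people with probability 0.3, and the base rate of actual guilt among people brought before the court is varied between the two extremes.

```python
import random

random.seed(1)

def innocent_fraction(base_rate, n=100_000):
    """Simulate people who 'look guilty on the balance of probabilities'.

    Guilty people pass the evidential threshold with probability 0.9,
    innocent people with probability 0.3 (both assumed figures).
    Returns the fraction of those passing who are actually innocent.
    """
    passed_innocent = passed_guilty = 0
    for _ in range(n):
        guilty = random.random() < base_rate
        passes = random.random() < (0.9 if guilty else 0.3)
        if passes:
            if guilty:
                passed_guilty += 1
            else:
                passed_innocent += 1
    return passed_innocent / (passed_innocent + passed_guilty)

# Police nearly always right: very few innocents among those passing.
print(innocent_fraction(0.95))  # roughly 0.02
# Police hauling in people almost at random: mostly innocents.
print(innocent_fraction(0.05))  # roughly 0.86
```

The evidential standard is the same in both runs; only the base rate changes, and the innocent fraction swings from a few percent to the large majority.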
So, there really isn’t an a priori mathematical answer to the question of how many innocent people will be affected, and there isn’t going to be a good way to estimate it afterwards either. It will be somewhere between 0% and 100% of the orders that are imposed, and reasonable people with different beliefs about the police and the courts can have different expectations.
Thomas Lumley (@tslumley) is Professor of Biostatistics at the University of Auckland. His research interests include semiparametric models, survey sampling, statistical computing, foundations of statistics, and whatever methodological problems his medical collaborators come up with. He also blogs at Biased and Inefficient.