Q: It’s shocking how computers can be so sexist
A: Not really the computers; more the users
Q: But they took this computer program and showed it lots of people’s applications, and it downrated the ones from women
A: Yes, but that’s because they also trained it with information about which applications they thought were best, and it learned from them that women’s applications weren’t as good
Q: Couldn’t it just have seen that more men than women were accepted because more men applied, and over-generalised?
A: Not really. It should be looking at the probability of acceptance, which wouldn’t be affected by overall proportions, but would be affected by human bias.
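An aside, with made-up numbers rather than anything from either real case: if the model is estimating the probability of acceptance, the fact that five times as many men apply does nothing by itself; only a difference in the reviewers’ acceptance rates can teach it to prefer men.

```python
# Made-up numbers (not from the St George's or Amazon cases), just to show that
# the acceptance *rate* is unaffected by how many people in each group apply.
applicants = {"men": 1000, "women": 200}

scenarios = {
    # Unbiased reviewers: 30% of each group accepted.
    "unbiased labels": {"men": 300, "women": 60},
    # Biased reviewers: women accepted at half the rate.
    "biased labels": {"men": 300, "women": 30},
}

for name, accepted in scenarios.items():
    rates = {g: accepted[g] / applicants[g] for g in applicants}
    print(name, rates)

# unbiased labels -> men 0.30, women 0.30: a model estimating P(accept | application)
#   gains nothing from using gender, even though 5x more men applied.
# biased labels   -> men 0.30, women 0.15: the only way the model can reproduce this
#   gap is by learning it from the human decisions it was trained to imitate.
```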
Q: Could the bias all have come in via word associations, like in that ‘how to make a racist AI’ blog post?
A: Perhaps. But only if they weren’t really trying. In particular, however the bias came in, they should have been aware of the potential and audited the results. I mean, this is a respectable organisation; you’d assume they were that responsible
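Another aside, this one a toy sketch with invented two-dimensional word vectors (not real embeddings from that blog post or from either system): once female-associated words sit in a direction the learned weights penalise, any phrase containing them scores lower, whatever the rest says.

```python
# Toy illustration of bias arriving via word associations. The "embeddings" are
# hand-made two-dimensional vectors, not real GloVe/word2vec vectors, and the
# weight vector is simply assumed to have been learned from skewed labels.
import numpy as np

embeddings = {
    "captain": np.array([0.9, 0.1]),
    "chess":   np.array([0.8, 0.0]),
    "club":    np.array([0.5, 0.1]),
    "women's": np.array([0.1, 0.9]),  # placed near other female-associated words
}
weights = np.array([1.0, -0.8])  # hypothetical learned weights

def score(phrase):
    """Score a phrase as the average of its word vectors dotted with the weights."""
    vecs = [embeddings[w] for w in phrase.split()]
    return float(np.mean(vecs, axis=0) @ weights)

print(score("chess club captain"))          # ~0.68
print(score("women's chess club captain"))  # ~0.36, lower purely from the association
```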
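And a sketch of the sort of audit being suggested: compare the model’s selection rate across groups before trusting its scores. The field names and the four-fifths threshold are illustrative assumptions, not anything from either case.

```python
# Minimal group-rate audit: tally the model's positive decisions per group and
# compare the lowest rate to the highest (the "four-fifths" rule of thumb).
from collections import defaultdict

def selection_rates(records, group_key="gender", decision_key="shortlisted"):
    """records: dicts carrying a group label and a boolean model decision."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += int(r[decision_key])
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

records = [
    {"gender": "M", "shortlisted": True}, {"gender": "M", "shortlisted": True},
    {"gender": "M", "shortlisted": False},
    {"gender": "F", "shortlisted": True}, {"gender": "F", "shortlisted": False},
    {"gender": "F", "shortlisted": False},
]
rates = selection_rates(records)
print(rates, disparate_impact_ratio(rates))  # flag for review if well below ~0.8
```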
Q: That sounds like a simple piece of advice
A: Yes, but even 30 years later, people are still making the same mistakes
Q: Wait, what? Aren’t we talking about Amazon?
A: No, St George’s Hospital Medical School, London. Reported in the BMJ in 1988, about a program written in the 1970s