February 18, 2019

Briefly

  • “Often these studies are not found out to be inaccurate until there’s another real big dataset that someone applies these techniques to and says ‘oh my goodness, the results of these two studies don’t overlap’,” she said. Genevera Allen (who gave one of the inaugural Ihaka Lectures here in Auckland) on machine learning in science.
  • Good piece by Jenny Nicholls from North and South on algorithm risks (based around a new book, Hello World, by Hannah Fry).
  • From the OpenAI blog, about a new neural network algorithm for generating realistic text: “Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights.” Exercise for the reader: does this feel like a good idea? Would it feel like a good idea if Facebook were saying it?
  • “PredPol claims to use an algorithm to predict crime in specific 500-foot by 500-foot sections of a city, so that police can patrol or surveil specific areas more heavily.” And they say that when police go to these areas they really do find crimes occurring there. Which…is less reassuring than PredPol seems to think.
  • “For example, a tench (a very big fish) is typically recognized by fingers on top of a greenish background. Why? Because most images in this category feature a fisherman holding up the tench like a trophy.” About how neural networks work (a bit technical); a toy sketch of this spurious-feature effect follows this list.
  • Julian Sanchez argues that, yes, online click-through agreements are bad for data privacy, but partly because data consent is genuinely a hard problem.
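
On the tench item: here’s a minimal synthetic sketch of how a classifier can latch onto context rather than the object itself. It’s made-up data and a plain logistic regression, not the paper’s actual experiment: a “context” feature co-occurs with the label in training but not at test time, and the model ends up leaning on it.

```python
import numpy as np

# Synthetic "tench" setup: x1 is a weak genuine signal (the fish itself),
# x2 is a strong context feature (the fingers / greenish background) that
# matches the label with probability `context_corr`.
rng = np.random.default_rng(0)
n = 5000

def make_data(context_corr):
    y = rng.integers(0, 2, n)
    x1 = y + 0.9 * rng.normal(size=n)                          # noisy true signal
    match = rng.random(n) < context_corr
    x2 = np.where(match, y, 1 - y) + 0.1 * rng.normal(size=n)  # crisp context cue
    return np.column_stack([x1, x2]), y

Xtr, ytr = make_data(0.95)  # training: context almost always co-occurs
Xte, yte = make_data(0.50)  # test: context decorrelated from the label

# Plain logistic regression fitted by gradient descent
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(Xtr @ w + b)))
    w -= 0.5 * (Xtr.T @ (p - ytr)) / n
    b -= 0.5 * (p - ytr).mean()

acc = lambda X, y: ((X @ w + b > 0) == y).mean()
print("weights [fish, context]:", w.round(2))  # most weight lands on context
print("train accuracy:", acc(Xtr, ytr))
print("test accuracy: ", acc(Xte, yte))        # drops once context decouples
```

The fitted weights load mostly on the context feature, so accuracy collapses when the context no longer tracks the label, much as the tench classifier fails without the fingers.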

Thomas Lumley (@tslumley) is Professor of Biostatistics at the University of Auckland. His research interests include semiparametric models, survey sampling, statistical computing, foundations of statistics, and whatever methodological problems his medical collaborators come up with. He also blogs at Biased and Inefficient.

Comments

  • Steve Curtis

    Yes, predicting crime: “Look, someone has broken a window.” But wouldn’t it possibly prevent some crime if police were regularly in the area?
    I suppose we could test that in NZ, as the Police had a major drop in roadside breath testing (they were given money from the Roading fund to do so, and some of it went on new motorways instead). Did accidents involving alcohol increase? (Especially as the breath-testing drop may have varied by region.)


    • Thomas Lumley

      It certainly could prevent crime if police were in the area. But it could also lead to more crime being detected in that area and less in other areas.

      The problem isn’t so much the targeted policing; it’s the feedback when the results of targeted policing are fed into the training of the algorithm.
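
      A toy simulation of that feedback, with made-up numbers rather than PredPol’s actual model: two areas with identical true crime rates, patrols allocated in proportion to previously detected crime, and detection probability rising with patrol intensity.

```python
import random

# Two areas with IDENTICAL true crime rates. Patrols are allocated in
# proportion to previously *detected* crime, and detection probability
# rises with patrol intensity -- so detections feed back into allocation.
random.seed(1)

TRUE_RATE = 50          # true crimes per period, the same in both areas
BASE_DETECT = 0.2       # detection probability under an even patrol split
detected = [1.0, 1.0]   # cumulative detected crime = the "training data"

for period in range(20):
    total = detected[0] + detected[1]
    shares = [detected[0] / total, detected[1] / total]  # patrol allocation
    for area in (0, 1):
        p = min(1.0, 2 * BASE_DETECT * shares[area])     # more patrols, more detection
        detected[area] += sum(random.random() < p for _ in range(TRUE_RATE))

print("final patrol shares:", [round(d / sum(detected), 2) for d in detected])
# Shares tend to drift away from 50/50 even though the true rates never differ.
```

      Because each period’s detections feed back into the next allocation, whichever area happens to get more attention early keeps getting more, even though the underlying crime rates never differ.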
