August 16, 2021

Briefly

Thomas Lumley (@tslumley) is Professor of Biostatistics at the University of Auckland. His research interests include semiparametric models, survey sampling, statistical computing, foundations of statistics, and whatever methodological problems his medical collaborators come up with. He also blogs at Biased and Inefficient.

Comments

  • Eric Crampton

    On the x-ray AI bit: any chance that the machines used at different centres provide different imaging, and that if there’s some self-identified racial sorting in who goes to which centres, the AI could be picking up on the type of image? Like “Well, this looks like a picture from an Acme X-1 unit, and previous images from an Acme X-1 have all been from India…”

    3 years ago

  • Thomas Lumley

    No, the accuracy assessment used data from different centres than the model training, with a white/black/other mix in each dataset. Also, they looked at whether any particular part of the image was more important (which would pick up at least some machine artifacts) and didn’t see anything helpful; a sketch of that kind of check follows below.

    3 years ago
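
To make the “any particular part of the image” check concrete: one standard approach is an occlusion map, which slides a blank patch across the image and records how much the predicted probability drops when each region is hidden. Regions with big drops are the ones the model relies on, so a machine-specific artifact in a fixed corner would light up. The sketch below is illustrative only: the predict function, patch size, and fill value are stand-ins, not details from the paper.

    import numpy as np

    def occlusion_map(predict, image, patch=16, stride=16, fill=0.0):
        """Slide a blank patch over `image` and record how much the
        model's predicted probability drops at each position.

        predict -- any function mapping a 2-D array to a probability
                   (a generic stand-in, not the classifier in question).
        """
        h, w = image.shape
        baseline = predict(image)
        rows = (h - patch) // stride + 1
        cols = (w - patch) // stride + 1
        heat = np.zeros((rows, cols))
        for i in range(rows):
            for j in range(cols):
                occluded = image.copy()
                y, x = i * stride, j * stride
                occluded[y:y + patch, x:x + patch] = fill
                # Large positive values mean hiding this region
                # hurt the prediction, i.e. the model was using it.
                heat[i, j] = baseline - predict(occluded)
        return heat

    # Toy usage with a trivial stand-in "model":
    img = np.random.rand(64, 64)
    heat = occlusion_map(lambda im: im.mean(), img)

A flat heat map, as reported, is what you would expect if no localised feature (such as a scanner artifact) is driving the prediction.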

  • Megan Pledger

    I wonder if it’s colour that the AI is picking up? Even images degraded to have no features still let the model get ethnicity right, and the only information left in those images appears to be colour. Differences in calcium intake might produce colour differences: Asians are more likely to be lactose intolerant than people of European ancestry, which may lead to lower calcium density in their bones and make their bones look more speckled on x-ray. (A toy way to test this idea is sketched below.)

    3 years ago
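
One way to test the overall-colour hypothesis would be to throw away all spatial structure and keep only each image’s grey-level histogram, then see whether a simple classifier on those histograms alone can recover self-reported ethnicity. The sketch below assumes images are 2-D arrays scaled to [0, 1]; the bin count and scaling are arbitrary choices, not anything from the study.

    import numpy as np

    def histogram_features(images, bins=32):
        """Summarise each image by its grey-level histogram only,
        discarding all spatial structure -- exactly the 'colour'
        information the hypothesis says might carry the signal.

        images -- iterable of 2-D arrays scaled to [0, 1] (an assumption).
        Returns an (n_images, bins) feature matrix.
        """
        return np.array([
            np.histogram(im, bins=bins, range=(0.0, 1.0), density=True)[0]
            for im in images
        ])

If a plain logistic regression on these histogram features alone predicted ethnicity well, that would support the brightness/texture explanation; if not, the signal must live in spatial structure that the histogram discards.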