The findings raise unsettling questions about AI's role in medical diagnosis, assessment, and treatment: could computer software inadvertently apply racial bias when analyzing images like these?
A multinational team of health researchers from the United States, Canada, and Taiwan trained an AI system on hundreds of thousands of X-ray images labeled with the patient's self-reported race (and no other identifying information), then tested it on X-rays the program had never seen before.
Even when the scans came from patients of the same age and sex, the AI predicted the patient's self-reported racial identity with striking accuracy. On some sets of images, the model was 90 percent accurate.
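The evaluation protocol described above, training on labeled images and measuring accuracy only on images the model has never seen, can be sketched in miniature. This is a toy illustration with synthetic data and a simple logistic-regression classifier, not the deep convolutional networks or real chest X-rays the study used; every variable here is invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: 8x8 "images" flattened to 64 features, with a
# synthetic binary label. (The actual study used hundreds of thousands
# of real radiographs and deep learning models; only the train /
# held-out-evaluation protocol is illustrated here.)
n, d = 600, 64
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w + 0.5 * rng.normal(size=n) > 0).astype(float)

# Hold out images the model never sees during training.
X_train, y_train = X[:500], y[:500]
X_test, y_test = X[500:], y[500:]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit a logistic-regression classifier by plain gradient descent.
w = np.zeros(d)
for _ in range(500):
    grad = X_train.T @ (sigmoid(X_train @ w) - y_train) / len(y_train)
    w -= 0.5 * grad

# Report accuracy only on the held-out set, as the study did.
test_acc = np.mean((sigmoid(X_test @ w) > 0.5) == y_test)
print(f"held-out accuracy: {test_acc:.2f}")
```

The key point of the protocol is the last step: a model that merely memorized its training images would score well in training but poorly here, so held-out accuracy is what made the study's 90 percent figure meaningful.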
The researchers wrote in their published report, "We wanted to undertake a comprehensive evaluation of AI's ability to determine a patient's racial identity from medical images."
"We show that standard AI deep learning models can be trained to accurately predict race from medical images across multiple imaging modalities, with high performance that was sustained under external validation conditions."
A prior study found that AI analysis of X-ray images was more likely to miss signs of illness in Black patients. To prevent that from happening, scientists first need to understand why it happens.
AI is designed to mimic human thinking so it can discover patterns in data quickly. But that also makes it vulnerable to the same kinds of biases, and worse, the complexity of these systems makes it hard to disentangle our prejudices from them.
Scientists are still unsure why the AI system is so good at identifying race from images that don't appear to contain that information. Even with minimal information, such as when clues about bone density were removed or only a small portion of the body was shown, the models remained very good at predicting the race recorded in the file.
One possibility is that the system is detecting melanin, the pigment that gives skin its color, in ways science has yet to discover.
"Our finding that AI can accurately predict self-reported race from corrupted, cropped, and noised medical images, even when clinical experts cannot," the researchers write, "creates an enormous risk for all model deployments in medical imaging."
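The degradations the researchers mention, cropping, added noise, and distortion, are standard image transforms, and a minimal sketch shows what they do to an input. The specific parameter values and the box-blur stand-in for distortion are assumptions for illustration; the study's own pipeline is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# A stand-in grayscale "X-ray": a random 224x224 array. Real inputs
# in the study were chest radiographs; this only shows the transforms.
img = rng.uniform(size=(224, 224))

def add_noise(image, sigma=0.2):
    """Additive Gaussian noise, clipped back into the [0, 1] range."""
    return np.clip(image + rng.normal(scale=sigma, size=image.shape), 0.0, 1.0)

def center_crop(image, size=64):
    """Keep only a small central patch, discarding most of the image."""
    h, w = image.shape
    top, left = (h - size) // 2, (w - size) // 2
    return image[top:top + size, left:left + size]

def box_blur(image, k=5):
    """Crude k-by-k moving-average blur (a hypothetical stand-in for
    whatever distortion the study applied)."""
    h, w = image.shape
    out = np.zeros((h - k + 1, w - k + 1))
    for i in range(k):
        for j in range(k):
            out += image[i:i + h - k + 1, j:j + w - k + 1]
    return out / (k * k)

noisy = add_noise(img)      # same size, corrupted pixel values
cropped = center_crop(img)  # tiny fraction of the original area
blurred = box_blur(img)     # fine detail averaged away
print(noisy.shape, cropped.shape, blurred.shape)
```

The study's claim is that accuracy survived transforms like these, which is why the finding is hard to explain away as the model reading one obvious cue from the image.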
The study adds to a growing body of evidence that artificial intelligence systems can reproduce human biases and prejudices, whether racism, misogyny, or something else. Skewed training data can lead to skewed results, making those results far less useful.
This must be balanced against artificial intelligence's enormous capacity to process far more data, far faster, than humans can, in everything from disease diagnosis to climate-change prediction.
The study leaves many questions unanswered, but for now it's important to be aware of the potential for racial bias in artificial intelligence systems, especially if we're going to hand them more responsibility in the future.
"We need to take a break," research scientist and physician Leo Anthony Celi of the Massachusetts Institute of Technology told the Boston Globe.
"We can't rush the algorithms into hospitals and clinics until we're convinced they're not making racist or sexist decisions," he said.