New Study Shows AI Systems Are Making Humans More Biased
After a series of experiments involving 1,401 participants, researchers from University College London and MIT found that even small initial biases in humans can snowball into larger ones through repeated human-AI interaction.
Because AI systems are trained on human-generated data, they absorb human biases, including cognitive, racial, and gender biases. The study warns that, over time, these systems can make humans more prejudiced than they were to begin with.
In one experiment, researchers showed participants 12 faces for half a second each and asked them to judge whether each face looked more happy or sad. Participants classified the faces as sad about 53% of the time, revealing a slight bias. The researchers then trained an AI system, a convolutional neural network, on the participants' responses. The AI system labeled the same faces as sad 65% of the time, amplifying the bias.
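To see how a classifier trained on soft human preferences can harden them into a larger skew, here is a minimal, self-contained simulation. It is not the study's convolutional network or data: the latent emotion score, the label-noise model, the logistic-regression stand-in, and every parameter below are illustrative assumptions, chosen only to demonstrate the amplification mechanism.

```python
# Minimal simulation of how a classifier can amplify a labeling bias.
# Not the study's model or data: the latent "emotion score", the label-noise
# model, and all parameters are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n = 20_000
# Latent emotion score per face: negative = objectively sadder.
x = rng.uniform(-1.0, 1.0, size=n)

# Human annotators label noisily, with a slight lean toward "sad"
# (the +0.3 offset plays the role of the participants' initial bias).
p_sad = 1.0 / (1.0 + np.exp(x - 0.3))
y = (rng.uniform(size=n) < p_sad).astype(int)  # 1 = labeled "sad"

print(f"Human labels 'sad' rate:   {y.mean():.1%}")

# Train a simple classifier on the human-labeled data.
clf = LogisticRegression().fit(x.reshape(-1, 1), y)

# The model outputs its single most likely label for every face, so the
# soft human lean toward "sad" hardens into a larger systematic skew.
pred = clf.predict(x.reshape(-1, 1))
print(f"Model predicts 'sad' rate: {pred.mean():.1%}")
```

Because the model always emits its single most likely label, ambiguous faces that humans marked "sad" only slightly more than half the time get classified as sad every time, which is one way a modest lean can grow into a pronounced skew.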
When new participants interacted with the biased AI system, they adopted its perspective. Those who initially disagreed with the AI's judgment changed their minds nearly one-third of the time (32.72%), compared with only about one-tenth of the time (11.27%) when the disagreement was with another human.
The researchers observed this bias amplification effect consistently across several types of experiments. For example, participants who interacted with an AI system deliberately trained on gender bias (mirroring biases in many deployed AI systems) became more likely to overestimate men's performance.
In another experiment, researchers asked the AI image generator Stable Diffusion to create images of “financial managers.” The system produced images of white men 85% of the time. After viewing these AI-generated images, participants were significantly more likely to associate the role of financial manager with white men.
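For readers who want to run this kind of audit themselves, the sketch below prompts Stable Diffusion through the Hugging Face diffusers library and saves a small batch of images for manual review. The checkpoint name, prompt wording, and sample count are assumptions; the study's exact generation setup is not described here.

```python
# Generate a small sample of "financial manager" images for a demographic audit.
# Checkpoint, prompt, and sample size are assumptions, not the study's protocol.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # hypothetical choice of checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a portrait photo of a financial manager"
for i in range(16):  # generate a small sample of images
    image = pipe(prompt).images[0]
    image.save(f"financial_manager_{i:02d}.png")
# The study's point: audit such samples for demographic skew before showing
# them to people, because viewers measurably absorb that skew.
```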
In a follow-up condition, researchers told participants they were interacting with another person when they were in fact interacting with an AI system. These participants internalized the biases to a lesser degree, suggesting that people assume AI is more accurate than humans. Taken together, the findings show that bias transmission is stronger in human-AI interaction than in human-human interaction. The researchers explain this with two factors: AI systems tend to amplify the biases in their training data, and people underestimate how much AI can shape their perception, which makes them more susceptible to it.
Given AI’s prevalence, researchers find these results concerning. In particular, they worry about the potentially harmful impact of AI on future generations. On a positive note, the study found that when humans interacted with unbiased AI systems, their judgment improved over time, highlighting the importance of developing accurate AI systems.