SU Professor's Research Shows Bias Salience Makes Patients More Accepting of AI in Medical Care
Wednesday, September 4, 2024
Seattle University's Albers School of Business and Economics Professor Mathew Isaac, PhD, a consumer psychologist, has published "To err is human: Bias salience can help overcome resistance to medical AI" and is available for interviews.
**FOR MEDIA MEMBERS:** Contact Lincoln Vander Veen at vanderv1@seattleu.edu if interested in speaking with Professor Isaac and/or receiving the published version of Isaac's research.
A free, shareable link to Isaac’s research appears below and is also available for download at the journal. His collaborators on this research are faculty members in business and psychology from Lehigh University.
In brief:
The widespread adoption of artificial intelligence, or AI, has been less rapid in health care and medicine. Prior research has attempted to increase people’s acceptance of AI by highlighting its accuracy or improving people’s understanding of how it works. Professor Isaac’s research suggests a new intervention—making the concept of discriminatory bias more salient—that can lead people to be more accepting of the application of artificial intelligence in health care contexts. It turns out that people think that bias is a uniquely human shortcoming. As a result, a bias salience intervention—for example, asking people to reflect on a time when they were the victim of age bias or gender bias—makes them think about a situation in which another person acted in a discriminatory way toward them. Isaac and his co-authors find that when asked to reflect on or consider bias, people disproportionately focus on human bias and are consequently more accepting of AI in health care.
Key takeaways:
- This research shows that increasing bias salience (i.e., making the concept of discriminatory bias more top of mind) leads to relatively greater receptiveness to AI in medicine and health care contexts.
- This is an important finding because people have been particularly resistant to AI in medicine despite the potential benefits it may offer.
- The way in which people think about bias may evolve over time. Although bias is currently viewed as a uniquely human error, as people learn more about AI hallucinations and the biased training of AI algorithms, they may become more likely to associate bias with machines.
###