Brianna White

Administrator
Staff member
Jul 30, 2019
Artificial intelligence holds great promise for healthcare, and it is already being put to use by many forward-looking hospitals and health systems.
One challenge for healthcare CIOs and clinical users of AI-powered health technologies is the bias that can creep into algorithms. Biased algorithms, such as those that improperly skew results based on race, can compromise the work of both the AI and the clinicians who rely on it.
We spoke recently with Dr. Sanjiv M. Narayan, co-director of the Stanford Arrhythmia Center, director of its Atrial Fibrillation Program and professor of medicine at Stanford University School of Medicine. He offered his perspective on how biases arise in AI – and what healthcare organizations can do to prevent them.
Q. How do biases make their way into artificial intelligence?
A. There is an increasing focus on bias in artificial intelligence, and while there is no cause for panic yet, some concern is reasonable. AI is embedded in systems from wall to wall these days, and if these systems are biased, then so are their results. This may benefit us, harm us or benefit someone else.
A major issue is that bias is rarely obvious. Think about your results from a search engine "tuned to your preferences." We are already conditioned to expect that these will differ from somebody else's search on the same topic using the same search engine. But are these searches really tuned to our preferences, or to someone else's, such as a vendor's? The same applies across all systems.
Continue reading: https://www.healthcareitnews.com/news/how-ai-bias-happens-and-how-eliminate-it