Fighting algorithmic bias in artificial intelligence 04 May 2021

In 2011, during her undergraduate degree at Georgia Institute of Technology, Ghanaian-US computer scientist Joy Buolamwini discovered that getting a robot to play a simple game of peek-a-boo with her was impossible – the machine was incapable of seeing her dark-skinned face. Later, in 2015, as a Master’s student at Massachusetts Institute of Technology’s Media Lab working on a science–art project called Aspire Mirror, she had a similar issue with facial analysis software: it detected her face only when she wore a white mask. Was this a coincidence?

Buolamwini’s curiosity led her to run one of her profile images through four facial-recognition demos, which, she discovered, either couldn’t detect a face at all or misgendered her – a bias she refers to as the “coded gaze”. She then decided to test 1270 faces of politicians from three African and three European countries, spanning different features, skin tones and genders, which became her Master’s thesis project “Gender Shades: Intersectional accuracy disparities in commercial gender classification” (figure 1). Buolamwini found that three commercially available facial-recognition technologies – made by Microsoft, IBM and Megvii – misidentified darker-skinned female faces nearly 35% of the time, while working almost perfectly (99%) on white men (Proceedings of Machine Learning Research 81 77).

Machines are often assumed to make smarter, better and more objective decisions, but this algorithmic bias is one of many examples that dispel the notion of machine neutrality and show how algorithms replicate existing inequalities in society. From Black individuals being mislabelled as gorillas, to a Google search for “Black girls” or “Latina girls” leading to adult content, to medical devices working poorly for people with darker skin, it is evident that algorithms can be inherently discriminatory (see box below).

“Computers are programmed by people who – even with good intentions – are still biased and discriminate within this unequal social world, in which there is racism and sexism,” says Joy Lisi Rankin, research lead for the Gender, Race and Power in AI programme at the AI Now Institute at New York University, whose books include A People’s History of Computing in the United States (2018 Harvard University Press). “They only reflect and amplify the larger biases of the world.”

Continue reading: https://physicsworld.com/a/fighting-algorithmic-bias-in-artificial-intelligence/
