Brianna White

Let’s play a little game. Imagine that you’re a computer scientist. Your company wants you to design a search engine that will show users a bunch of pictures corresponding to their keywords — something akin to Google Images.
On a technical level, that’s a piece of cake. You’re a great computer scientist, and this is basic stuff! But say you live in a world where 90 percent of CEOs are male. (Sort of like our world.) Should you design your search engine so that it accurately mirrors that reality, yielding images of man after man after man when a user types in “CEO”? Or, since that risks reinforcing gender stereotypes that help keep women out of the C-suite, should you create a search engine that deliberately shows a more balanced mix, even if it’s not a mix that reflects reality as it is today?
This is the type of quandary that bedevils the artificial intelligence community, and increasingly the rest of us — and tackling it will be a lot tougher than just designing a better search engine.
Computer scientists are used to thinking about “bias” in terms of its statistical meaning: A program for making predictions is biased if it’s consistently wrong in one direction or another. (For example, if a weather app always overestimates the probability of rain, its predictions are statistically biased.) That’s very clear, but it’s also very different from the way most people colloquially use the word “bias” — which is more like “prejudiced against a certain group or characteristic.”
Continue reading: https://www.vox.com/future-perfect/22916602/ai-bias-fairness-tradeoffs-artificial-intelligence
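To make the statistical definition of bias above concrete, here is a minimal Python sketch (the forecast numbers are made up purely for illustration). It measures bias as the mean signed error of a weather app's rain forecasts: a value near zero means the app is unbiased on average, while a persistently positive value means it systematically overestimates the chance of rain.

```python
# A minimal sketch of "bias" in the statistical sense described above:
# a predictor is biased if its errors consistently skew in one direction.
# All numbers below are hypothetical, for illustration only.

forecast_rain_prob = [0.80, 0.70, 0.90, 0.60, 0.75]  # app's predicted chance of rain
actually_rained    = [1,    0,    1,    0,    0]      # 1 = it rained, 0 = it didn't

# Bias = mean signed error (prediction minus outcome).
# Near zero: unbiased on average. Positive: systematically over-predicts rain.
errors = [p - y for p, y in zip(forecast_rain_prob, actually_rained)]
bias = sum(errors) / len(errors)

print(f"mean signed error (bias): {bias:+.2f}")  # about +0.35 here, i.e. it over-predicts rain
```

Note that this is only the statistical sense of the word; as the article points out, it says nothing about whether the predictions are prejudiced against a particular group.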