
When There Is Bias In AI—And What We Can Do

11 months ago

Artificial Intelligence increasingly underlies the systems and processes we interact with on a daily basis. While this brings benefits such as improved efficiency, increased capacity, and the ability to build more sophisticated applications, it can also be a double-edged sword. Code itself is unbiased, but the people who create it are not, and their unconscious biases can shape that code and everything it interacts with.

This is true of every product. We’ve seen clear and catastrophic examples with airbags: designed by mostly male teams, they resulted in a product that makes women 17% more likely to be killed in an auto collision and 73% more likely to be injured. In the tech industry, we’ve seen the same pattern in facial recognition software, where scanners are more accurate at identifying male faces and lighter skin.

When it comes to AI, this is even more dangerous, because it is not just a single biased algorithm being deployed. An AI system iteratively builds itself, evolving based on the data fed into it. If there is unconscious prejudice in the initial data, or in the way the system is designed, that bias can propagate through downstream systems indefinitely.
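The mechanism above can be sketched with a toy example. This is a hypothetical illustration, not any specific system: a "model" that simply learns rates from historically skewed records will faithfully reproduce that skew in its predictions. The dataset, group names, and `train` helper are all invented for the sketch.

```python
# Toy sketch: a model trained on skewed historical data reproduces the skew.
from collections import Counter

# Hypothetical historical hiring records: (group, hired).
# Group "A" was favored in the past, so the data is imbalanced.
history = ([("A", 1)] * 80 + [("A", 0)] * 20 +
           [("B", 1)] * 30 + [("B", 0)] * 70)

def train(records):
    """'Learn' the hire rate per group -- a stand-in for a real model
    that picks up whatever correlations exist in its training data."""
    hires, totals = Counter(), Counter()
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

model = train(history)
print(model)  # {'A': 0.8, 'B': 0.3} -- the historical bias is now the model
```

If this model's outputs are then used to generate the next round of decisions (and thus the next round of training data), the imbalance feeds back on itself, which is the indefinite-propagation problem the paragraph above describes.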


