3 principles for protecting the world from A.I. bias

Until the late 1960s, we knew very little about what went into the foods we bought. Americans prepared most food at home, with fairly common ingredients. We didn’t see much need to know more. Then, food production began to evolve. Our foods contained more artificial additives. In 1969, a White House conference recommended that the Food and Drug Administration take on a new responsibility: developing a new way to understand the ingredients and nutrition of what we eat.

That task took two decades. The FDA did not publish rules mandating nutrition labels on packaged food until 1990. In other words, from the moment in the late ’60s when we recognized what we needed, it took 20 years to get the safeguards in place.

Like the arrival of processed foods, the advent of artificial intelligence marks a new age—and whether it turns out to be good or bad for us will depend on what goes into it. The difference is, at the pace with which A.I. is developing, we do not have 20 years—or even two—to put safety measures in place. The good news: Businesses can take the first and most critical step of identifying harmful or unacceptable A.I. bias, and then rapidly coalesce around the principles that mitigate it.

A.I. bias is when software produces unintended or harmful outcomes, whatever the intent behind it. In the case of hiring, for example, we could design an A.I. system to look for the best candidates for a role. The A.I. would look for exactly what we specify: relevant work experience, a strong educational background, and perhaps community service. Over time, the A.I. could exclude an entire population just because of the classes they took in college. It might do this by drawing a correlation between community service and courses taken, even though that connection is not causal in any way. In other words, A.I. could unintentionally lead to poor hiring decisions.
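
To make that failure mode concrete, here is a minimal sketch of how a model can latch onto a non-causal proxy. The article contains no code, so everything below (the synthetic data, the feature names, and the choice of a logistic regression) is an illustrative assumption, not an actual hiring system.

```python
# Hypothetical sketch: a hiring model latching onto a non-causal proxy.
# All data, feature names, and the model choice are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Suppose candidates who did community service also tended to take
# certain electives in college (a correlation, not a causal link).
community_service = rng.binomial(1, 0.4, n)
took_electives = np.where(
    community_service == 1,
    rng.binomial(1, 0.8, n),  # most volunteers took the electives
    rng.binomial(1, 0.2, n),  # few others did
)
experience = rng.normal(5, 2, n)

# Historical hiring decisions favored community service.
hired = (0.8 * community_service + 0.1 * experience
         + rng.normal(0, 0.5, n)) > 0.9

# Train WITHOUT the community-service column, as if it were unavailable.
X = np.column_stack([experience, took_electives])
model = LogisticRegression().fit(X, hired)

print(dict(zip(["experience", "took_electives"], model.coef_[0].round(2))))
# The electives coefficient comes out large: the model now screens
# candidates by their college courses, a proxy with no causal relevance.
```

Dropping the sensitive column does not remove its influence; the model simply rediscovers it through the correlated electives feature, which is exactly the kind of proxy exclusion described above.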

It is not hard to imagine even more egregious scenarios: A developer unintentionally embedding bias in A.I. that excludes a population because of gender. Or, in the case of a bank, A.I. that rejects all loans originating in a certain zip code, without any human knowledge of that decision. Or in retail, a loyalty program only rewarding customers of a certain socioeconomic background.

Models built by humans reflect human biases. Whatever the intent, we may therefore find that the most critical decisions are being made by an irrational actor: poorly trained software. To combat this, we must proactively address bias and develop and deploy A.I. in a socially responsible way, using a governed approach to protect both individuals and our society.

We must begin to make sure that the A.I. we use makes decisions with bias mitigated, particularly in high-stakes arenas such as health care, public or financial services, and justice.
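
One concrete check, sketched below, is the "four-fifths rule" drawn from U.S. employment guidelines: compare selection rates across groups and flag the system when any group's rate falls below 80% of the highest. The article does not prescribe this particular test; the threshold choice, the group labels, and the helper function are assumptions made for illustration.

```python
# Hypothetical sketch of the "four-fifths" adverse-impact check:
# flag a system when any group's selection rate falls below 80% of
# the highest group's rate. Group labels and data are illustrative.
from collections import defaultdict

def disparate_impact(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += ok
    rates = {g: approved[g] / total[g] for g in total}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Example: loan decisions tagged by (hypothetical) zip-code group,
# echoing the bank scenario above.
decisions = ([("zip_A", True)] * 80 + [("zip_A", False)] * 20
             + [("zip_B", True)] * 45 + [("zip_B", False)] * 55)

rates, ratio = disparate_impact(decisions)
print(rates)                                  # {'zip_A': 0.8, 'zip_B': 0.45}
print(f"ratio={ratio:.2f} flagged={ratio < 0.8}")  # ratio=0.56 flagged=True
```

A check like this is only a screen, not a fix: it tells you a disparity exists, after which humans still have to decide whether the feature driving it is defensible.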

Fortunately, there is a set of principles we can follow to quickly put us on the path to socially responsible A.I., and to A.I. risk management in general.

Continue reading: https://fortune.com/2021/07/12/ai-bias-artificial-intelligence-business-protection/
