Brianna White

As developers unlock new AI tools, the risk of perpetuating harmful biases grows increasingly high, especially on the heels of a year like 2020, which reimagined many of the social and cultural norms on which AI algorithms have long been trained.
A handful of foundational models are emerging, each relying on training data of such magnitude that the models are inherently powerful. That power, however, comes with the risk of harmful biases, and we need to collectively acknowledge that fact.
Recognition in itself is easy; understanding is much harder, and so is mitigating future risks. We must first take steps to understand the roots of these biases so we can better gauge the risks involved in developing AI models.
The sneaky origins of bias
Today’s AI models are often pre-trained and open source, which allows researchers and companies alike to implement AI quickly and tailor it to their specific needs.
While this approach makes AI more commercially available, it has a real downside: a handful of models now underpin the majority of AI applications across industries and continents. These systems are burdened by undetected or unknown biases, meaning developers who adapt them for their applications are building on a fragile foundation.
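To make that inheritance concrete, here is a minimal sketch of how a bias can surface before any adaptation happens. It probes a pre-trained masked language model using the Hugging Face transformers library; the model name and the probe sentences are illustrative assumptions, not a rigorous bias benchmark:

```python
from transformers import pipeline

# Minimal sketch: probe a widely reused pre-trained model for skewed
# associations via masked-token prediction. The model name and probe
# sentences are illustrative assumptions, not a formal bias benchmark.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

probes = [
    "The doctor said that [MASK] would be late.",
    "The nurse said that [MASK] would be late.",
]

for sentence in probes:
    print(sentence)
    # top_k=3 returns the three most probable fillers for [MASK];
    # a strong pronoun skew between the two probes hints at associations
    # the model inherited from its training data.
    for candidate in fill_mask(sentence, top_k=3):
        print(f"  {candidate['token_str']!r}: {candidate['score']:.3f}")
```

Any application fine-tuned from such a model starts from those same associations, which is why probing a foundation model before adapting it matters.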
Continue reading: https://techcrunch.com/2021/09/24/ai-tradeoffs-balancing-powerful-models-and-potential-biases/
 
