Brianna White

Administrator
Staff member
Jul 30, 2019
Artificial intelligence (AI) has become an everyday reality and business tool, spurred by advances in computing, data science and the availability of huge data sets. Big tech companies – Google, Amazon and Meta – are now developing AI-based systems. The technology can mimic human speech, detect cancer, predict criminal activity, draft legal contracts, solve accessibility problems, and accomplish some tasks better than humans. For businesses, AI promises to predict business outcomes, improve processes and deliver efficiencies at substantial cost savings.
Still, there are growing concerns about AI.
AI algorithms have become so powerful – some experts have even described AI as sentient – that any corruption, tampering, bias or discrimination can have massive implications for organizations, human life and society.
AI algorithms and digital discrimination
AI decisions increasingly influence and impact people’s lives at scale. Used irresponsibly, they can exacerbate existing human biases and discriminatory practices such as racial profiling, behavioral prediction or sexual orientation identification. This inbuilt prejudice occurs because AI is only as good as the data it is trained on, and that data can carry human biases.
Biases can also occur when machine learning algorithms are trained and tested on data that under-represent certain subpopulations, such as women, people of color or people in certain age demographics. For example, studies show that people of color are particularly vulnerable to algorithmic bias in facial recognition technology.
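One common way to surface this kind of bias is to measure a model's accuracy separately for each subpopulation rather than in aggregate. Below is a minimal sketch of such a per-group audit; the function name and the toy prediction data are hypothetical, purely for illustration.

```python
# Minimal sketch of a per-group fairness audit: compute accuracy
# separately for each demographic group instead of one overall score.
# All names and data here are hypothetical, for illustration only.

def accuracy_by_group(y_true, y_pred, groups):
    """Return accuracy computed separately for each group label."""
    stats = {}
    for t, p, g in zip(y_true, y_pred, groups):
        correct, total = stats.get(g, (0, 0))
        stats[g] = (correct + (t == p), total + 1)
    return {g: correct / total for g, (correct, total) in stats.items()}

# Hypothetical predictions: group "B" is under-represented in the
# sample, and the model performs noticeably worse on it.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 1, 0, 0, 1, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B"]

per_group = accuracy_by_group(y_true, y_pred, groups)
print(per_group)  # accuracy gap between majority group A and minority group B
```

An aggregate accuracy score over this sample would look respectable while hiding the gap; reporting metrics per group is what makes the under-representation visible.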
Biases can also occur in usage. For example, an AI algorithm designed for one application may be repurposed for uses it was never built for, leading to misinterpretation of its outputs.
Validating AI algorithm performance
AI-led discrimination can be abstract, unintuitive, subtle, intangible and difficult to detect. The source code may be withheld from the public, or auditors may not know how an algorithm is deployed. The difficulty of getting inside an AI algorithm to see how it has been written and how it responds should not be underestimated.
Current privacy laws rely on notice and choice, and the resulting barrage of notifications asking consumers to agree to lengthy privacy policies are seldom read. If the same approach were applied to AI, it would have serious consequences for the security and privacy of consumers and society.
Continue reading: https://www.weforum.org/agenda/2022/08/how-the-responsible-use-of-ai-can-create-safer-online-spaces/
 
