AI bias harms over a third of businesses, 81% want more regulation
AI bias is already harming businesses, and there’s significant appetite for more regulation to help counter the problem.
The findings come from the State of AI Bias report by DataRobot in collaboration with the World Economic Forum and global academic leaders. The report involved responses from over 350 organisations across industries.
Kay Firth-Butterfield, Head of AI and Machine Learning at the World Economic Forum, said:
“DataRobot’s research shows what many in the artificial intelligence field have long known to be true: the line of what is and is not ethical when it comes to AI solutions has been too blurry for too long.
The CIOs, IT directors and managers, data scientists, and development leads polled in this research clearly understand and appreciate the gravity and impact at play when it comes to AI and ethics.”
Just over half (54%) of respondents have “deep concerns” around the risk of AI bias, while a much higher percentage (81%) want more government regulation to prevent it.
Given the still relatively limited adoption of AI across most organisations at this stage, a concerning number are reporting harm from bias.
Over a third (36%) of organisations experienced challenges or a direct negative business impact from AI bias in their algorithms. This includes:
- Lost revenue (62%)
- Lost customers (61%)
- Lost employees (43%)
- Incurred legal fees due to a lawsuit or legal action (35%)
- Damaged brand reputation/media backlash (6%)
Ted Kwartler, VP of Trusted AI at DataRobot, commented:
“The core challenge to eliminate bias is understanding why algorithms arrived at certain decisions in the first place.
Organisations need guidance when it comes to navigating AI bias and the complex issues attached. There has been progress, including the EU proposed AI principles and regulations, but there’s still more to be done to ensure models are fair, trusted, and explainable.”
Four key challenges were identified as to why organisations are struggling to counter bias:
- Understanding why an AI made a specific decision
- Comprehending patterns between input values and AI decisions
- Developing trustworthy algorithms
- Determining what data is used to train AI
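One concrete way organisations audit for the kind of bias described above is to compare a model’s decision rates across demographic groups. The sketch below is purely illustrative and not from the report: it computes per-group selection rates and a disparate-impact ratio, a common red-flag metric (the “four-fifths rule”). All group labels and data are hypothetical.

```python
# Illustrative bias-audit sketch (not from the report): compare a model's
# approval rates across groups and flag disparate impact.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates, protected, reference):
    """Ratio of the protected group's rate to the reference group's rate.
    Ratios below ~0.8 are commonly treated as a bias warning sign."""
    return rates[protected] / rates[reference]

# Hypothetical loan decisions: (group label, approved?)
decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

rates = selection_rates(decisions)          # A: 0.75, B: 0.25
ratio = disparate_impact(rates, "B", "A")   # 0.25 / 0.75 ≈ 0.33 -> flagged
```

A check like this only surfaces disparities in outcomes; explaining *why* the model produced them, as Kwartler notes, requires inspecting the model and its training data as well.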