What You Need to Know About AI Ethics
AI is transforming how enterprises function and engage with people. The technology offers the ability to automate simple and repetitive tasks, unlock insights hidden inside data, and help adopters make better, more informed decisions. Yet as AI firmly embeds itself into the IT mainstream, concerns are growing over its potential misuse.
To address the ethical problems that can arise when machines analyze data and make decisions, a growing number of enterprises are paying closer attention to how AI can be kept from producing potentially harmful outcomes.
AI is a powerful technology with an immense number of positive attributes. “However, to fully gauge its potential benefits, we need to build a system of trust, both in the technology and in those who produce it,” says Francesca Rossi, IBM's AI ethics global leader. “Issues of bias, explainability, data handling, transparency on data policies, systems capabilities, and design choices should be addressed in a responsible and open way.”
“AI ethics should be focused on understanding AI's impact on society, mitigating unintended consequences, and driving global innovation toward good,” explains Olivia Gambelin, an AI ethicist and CEO of ethics advisory firm Ethical Intelligence. “The practice of operationalizing AI ethics involves the translation of high-level principles into concrete, detailed actions and seeks to enable technology focused on human values at the core,” she says.