Brianna White

Jul 30, 2019
Creating successful artificial intelligence programs doesn't end with building the right AI system. These programs also need to be integrated into an organization, and stakeholders, particularly employees and customers, need to trust that the AI program's results are accurate and reliable.
This is the case for building enterprise-wide artificial intelligence explainability, according to a new research briefing by Ida Someh, Barbara Wixom, and Cynthia Beath of the MIT Center for Information Systems Research. The researchers define artificial intelligence explainability as "the ability to manage AI initiatives in ways that ensure models are value-generating, compliant, representative, and reliable."
The researchers identified four characteristics of artificial intelligence programs that can make it hard for stakeholders to trust them, along with ways each challenge can be overcome:
1. Unproven value. Because artificial intelligence is still relatively new, there isn't an extensive track record of proven use cases. Leaders are often uncertain whether, and how, their company will see returns from AI programs.
Continue reading: http://mitsloan.mit.edu/ideas-made-to-matter/why-companies-need-artificial-intelligence-explainability
 
