Why Is Solving For Trust In AI So Challenging?
Technology revolutions often move through predictable phases. What begins in science laboratories is put to work and scaled by organizations—and then those organizations contend with how to manage the technology toward its greatest value. Today, artificial intelligence (AI) has reached this third phase, and to capture the most potential benefit, we are called to consider AI through the lenses of effective governance, ethics and trust.
As with any technology, enterprises should be able to trust that the AI tools they use deliver the outcomes they expect. While there is no silver bullet, there are leading practices. To understand why they matter, we should first appreciate just how nuanced AI is in today's marketplace.
The many shapes of AI deployment
The qualities of an AI tool are a function of the model type, the underlying data and the factors specific to each use case. Assessing AI's impact therefore means understanding it within the context of a particular business application.
Imagine a facial recognition system. In one scenario, it is acquired by a retailer and deployed in stores to track customer age, sentiments and reactions while shopping. The AI outputs are used to inform real-time personalized advertising to influence buying decisions.
In another scenario, the same facial recognition system is trained to detect expressions consistent with someone who is being abducted or trafficked. The system is paired with CCTV cameras in airports and train stations to help law enforcement stop human trafficking.
These scenarios show how the same AI system can be deployed for unrelated use cases, with significantly different outcomes. The same holds true for AI used within the same industry, and even within the same organization.