4 Ways To Bridge The AI-Integrity Gap
Consider this cautionary tale about a major digital-economy company’s AI-driven breakthrough.
The tech company, always seeking new ways to improve advancement decisions and performance, developed an algorithm to predict the promotability of mid-level employees, including managers. Drawing on inputs such as recent quantifiable achievements and 360-degree reviews, the algorithm significantly outperformed higher-level managers at “stack-ranking” employees by potential and predicting who would succeed in advanced roles. Management was thrilled with the results: the technology seemed to offer a fairer way to make advancement decisions, and moving more of the “right” people up the organization would likely improve performance and retention.
But when management discussed rolling out the algorithm with a wide range of employees, the pushback was immediate, centering on procedural justice. People felt strongly that employees at all levels deserved to have their promotability reviewed by human beings, and some worried that people might learn to “game” the AI system. Letting the algorithm make promotion decisions simply seemed unfair. Ultimately, the company judged it too risky to replace human judgment in these decisions and shelved the algorithm; there simply wasn’t sufficient will to move forward.
This example, drawn from a real-life situation, highlights the challenge of developing and implementing AI-enabled technologies. Even proven technologies can face significant internal resistance, depending on how they are likely to be applied. Leaders must therefore find ways to navigate this challenging intersection of technology, ethics, and integrity.
Below, I discuss why people may react so strongly to AI technologies and how to help your organization close the AI-integrity gap.