Our global agreement on AI could reduce bias and surveillance
Artificial intelligence is more present in our lives than ever: it predicts what we want to say in emails, helps us navigate from A to B and improves our weather reports. The unprecedented speed with which vaccines for covid-19 were developed can also partly be attributed to the use of AI algorithms that rapidly crunched the data from numerous clinical trials, allowing researchers around the world to compare notes in real time.
But the technology isn’t always beneficial. The data sets used to build AI often aren’t representative of the diversity of the population, so the systems built on them can be biased and discriminate. One example is facial recognition technology. This is used to access our mobile phones, bank accounts and apartment buildings, and is increasingly employed by police forces. But it can have problems accurately identifying women and Black people. For three such programs released by major technology companies, the error rate was only 1 per cent for light-skinned men, but 19 per cent for dark-skinned men and up to a staggering 35 per cent for dark-skinned women. Biases in facial recognition technology have led to wrongful arrests.
This is no surprise when you look at who develops AI. Only 1 in 10 software developers worldwide are women, and only 3 per cent of employees at the top 75 tech companies in the US identify as Black. But now there’s hope that the world is about to pivot to a much better approach.