How to stop AI from recognizing your face in selfies
Uploading personal photos to the internet can feel like letting go. Who else will have access to them, what will they do with them—and which machine-learning algorithms will they help train?
The company Clearview has already supplied US law enforcement agencies with a facial recognition tool trained on photos of millions of people scraped from the public web. But that was likely just the start. Anyone with basic coding skills can now develop facial recognition software, meaning there is more potential than ever to abuse the tech in everything from sexual harassment and racial discrimination to political oppression and religious persecution.
A number of AI researchers are pushing back and developing ways to make sure AIs can’t learn from personal data. Two of the latest are being presented this week at ICLR, a leading AI conference.
“I don’t like people taking things from me that they’re not supposed to have,” says Emily Wenger at the University of Chicago, who developed one of the first tools to do this, called Fawkes, with her colleagues last summer. “I guess a lot of us had a similar idea at the same time.”
Data poisoning isn’t new. Actions like deleting data that companies hold on you, or deliberately polluting data sets with fake examples, can make it harder for companies to train accurate machine-learning models. But these efforts typically require collective action, with hundreds or thousands of people participating, to make an impact. The difference with these new techniques is that they work on a single person’s photos.
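The per-photo idea can be sketched as adding a small, bounded perturbation to each image before uploading it. The toy function below uses random noise as a stand-in; it is not the actual Fawkes algorithm, which optimizes the perturbation against a facial feature extractor so the model learns distorted features. The function name and the `epsilon` bound are illustrative assumptions, not part of any real tool’s API.

```python
import numpy as np

def cloak_image(pixels: np.ndarray, epsilon: float = 8.0, seed: int = 0) -> np.ndarray:
    """Add a small, bounded perturbation to an image (0-255 pixel values).

    Toy illustration only: a real cloaking tool such as Fawkes would
    optimize the perturbation to shift the image in a model's feature
    space while keeping the visible change imperceptible. Here, random
    noise bounded by `epsilon` stands in for that optimized perturbation.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=pixels.shape)
    # Clip back to valid pixel range so the result is still a normal image.
    cloaked = np.clip(pixels.astype(np.float64) + noise, 0, 255)
    return cloaked.astype(np.uint8)

# Stand-in for a selfie: a uniform gray 4x4 RGB image.
photo = np.full((4, 4, 3), 128, dtype=np.uint8)
cloaked = cloak_image(photo)
# Each pixel moves by at most epsilon, so the change is visually negligible.
assert np.abs(cloaked.astype(int) - photo.astype(int)).max() <= 8
```

The key property, which the real tools engineer carefully, is that the change is too small for a human to notice but large enough, in feature space, to corrupt what a scraped training set teaches a face recognizer about you.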
“This technology can be used as a key by an individual to lock their data,” says Daniel Ma at Deakin University in Australia. “It’s a new frontline defense for protecting people’s digital rights in the age of AI.”