AI Regulation: Holding Data Bad Actors Accountable
Artificial Intelligence (AI) has been part of our professional and personal lives for some time, but both the European Union (EU) and the U.S. Federal Trade Commission (FTC) have recently taken prominent steps to recognize its power and its potential for harm. As more entities rely on AI algorithms to predict, recommend, or make decisions based on troves of collected data, governments recognize the clear perils inherent in over-reliance on automated decision-making and predictive analytics.
As the noted AI thinker Cathy O’Neil has argued many times, in her book Weapons of Math Destruction and on stage, bias cannot be eliminated; it can only be identified and managed. No company wants to be on the hook the way Amazon was in 2018, when an AI tool it had built to identify ideal job candidates turned out to exhibit clear, if unintentional, bias and had to be scrapped.
In the U.S., the FTC’s role is to police trade practices that are unfair or deceptive. While it has specific enforcement tools against dishonest business practices, its jurisdiction is limited and excludes government agencies, banks, and non-profits. When companies mislead consumers or otherwise oversell their products and services, the FTC can act. Most recently, the FTC indicated that it would use this authority against companies selling biased algorithms. Given that all algorithms carry some degree of bias, this recent FTC blog post was a loud and powerful shot across the bow.