New AI Regulations Are Coming. Is Your Organization Ready?
Over the last few weeks, regulators and lawmakers around the world have made one thing clear: New laws will soon shape how companies use artificial intelligence (AI). In late March, the five largest federal financial regulators in the United States released a request for information on how banks use AI, signaling that new guidance is coming for the finance sector. Just a few weeks after that, the U.S. Federal Trade Commission (FTC) released an uncharacteristically bold set of guidelines on “truth, fairness, and equity” in AI — defining unfairness, and therefore the illegal use of AI, broadly as any act that “causes more harm than good.”
The European Commission followed suit on April 21, releasing its own proposal for the regulation of AI, which includes fines of up to 6% of a company’s annual revenues for noncompliance. Those fines exceed the historic penalties of up to 4% of global turnover that can be levied under the General Data Protection Regulation (GDPR).
For companies adopting AI, the dilemma is clear: On the one hand, evolving regulatory frameworks will significantly affect their ability to use the technology; on the other, with new laws and proposals still taking shape, it can seem unclear what companies can and should do right now. The good news is that three central trends unite nearly all current and proposed laws on AI, which means there are concrete actions companies can take today to keep their systems from running afoul of existing and future laws and regulations.
The first is the requirement to conduct assessments of AI risks and to document how such risks have been minimized (and ideally, resolved). A host of regulatory frameworks refer to these types of risk assessments as “algorithmic impact assessments” — also sometimes called “IA for AI” — which have become increasingly popular across a range of AI and data protection frameworks.
Indeed, some of these requirements are already in place, such as Virginia’s Consumer Data Protection Act, signed into law last month, which requires assessments for certain types of high-risk algorithms. In the EU, the GDPR already requires similar impact assessments for high-risk processing of personal data. (The UK’s Information Commissioner’s Office, which enforces the GDPR, publishes its own plain-language guidance on how to conduct impact assessments on its website.)