Proposed New EU AI Regulations: A Pre-Planning Guide for U.S. In-House Counsel
If the European Commission’s newly proposed harmonized rules on Artificial Intelligence (“AI”) (the “Artificial Intelligence Act”), published April 21, 2021, are adopted, U.S.-based AI companies operating in European Union (“EU”) countries (or expecting to do so) may soon be subject to significant new regulatory requirements. The proposed regulations, with few exceptions, would apply to (1) companies or individuals (“providers”) that place on the market or put into service certain high-risk AI systems in the EU; (2) “users” (including companies) of those AI systems who are located in the EU; and (3) providers and users of such AI systems that are located outside the EU but whose system outputs are used in the EU. If the timeline of the EU’s General Data Protection Regulation (“GDPR”) is any indication, it may take many months before the proposed AI regulations are adopted and become effective. Even so, U.S.-based AI companies that may be subject to the regulations would do well to use this time to map out a framework for achieving compliance.
The proposed regulations define “AI system” as software that is developed with one or more of the following: machine learning techniques, logic- and knowledge-based techniques, statistical methods, Bayesian estimation, or search and optimization methods, and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with. This broad definition is expected to sweep in many data-driven businesses.
The proposed regulations generally do not apply to low- or medium-risk AI systems, nor to the specific AI technologies and systems that would be explicitly prohibited from operating in the EU after the effective date (such systems are listed in Article 5 of the regulation). The rest, so-called “high-risk” AI systems, would need to comply with the rules (including existing systems if they are modified after the effective date). Examples of high-risk AI systems include certain medical devices, biometric identification systems, education and vocational training systems, law enforcement surveillance systems, and AI systems intended to be used as safety components of a product, among several others.