Europe Seeks to Tame Artificial Intelligence with the World’s First Comprehensive Regulation
In what could be a harbinger of the future regulation of artificial intelligence (AI) in the United States, the European Commission published its recent proposal for regulation of AI systems. The proposal is part of the European Commission’s larger European strategy for data, which seeks to “defend and promote European values and rights in how we design, make and deploy technology in the economy.” To this end, the proposed regulation attempts to address the potential risks that AI systems pose to the health, safety, and fundamental rights of Europeans.
Under the proposed regulation, AI systems presenting the least risk would be subject to minimal disclosure requirements, while at the other end of the spectrum, AI systems that exploit human vulnerabilities and government-administered biometric surveillance systems would be prohibited outright, subject to narrow exceptions. In the middle, “high-risk” AI systems would be subject to detailed compliance reviews. In many cases, such high-risk AI system reviews would come in addition to regulatory reviews that apply under existing EU product regulations (e.g., the EU already requires reviews of the safety and marketing of toys and of radio frequency devices such as smartphones, Internet of Things devices, and radios).