Brianna White

Artificial intelligence (AI) technologies have become increasingly widespread over the last decade. As the use of AI has become more common and the performance of AI systems has improved, policymakers, scholars, and advocates have raised concerns. Policy and ethical issues such as algorithmic bias, data privacy, and transparency have gained increasing attention, prompting calls for policy and regulatory changes to address the potential consequences of AI (Acemoglu 2021). As AI continues to improve and diffuse, it will likely have significant long-term implications for jobs, inequality, organizations, and competition. Premature deployment of AI products can also aggravate existing biases and discrimination or violate data privacy and protection practices. Because of AI technologies’ wide-ranging impact, stakeholders are increasingly interested in whether firms will embrace self-regulation based on ethical or policy considerations and how the decisions of policymakers and courts affect the use of AI systems. When policymakers or courts do step in and regulatory changes affect the use of AI systems, how are managers likely to respond to new or proposed regulations?
AI-RELATED REGULATION
 
In the United States, the use of AI is implicitly governed by a variety of common law doctrines and statutory provisions, such as tort law, contract law, and employment discrimination law (Cuéllar 2019). This implies that judges’ rulings on common law claims already play an important role in how society governs AI. While common law often involves decision-making that builds on precedent, federal agencies also perform important governance and regulatory tasks that may affect AI across various sectors of the economy (Barfield & Pagallo 2018). Federal autonomous vehicle legislation, for instance, carves out a robust domain for states to make common law decisions about autonomous vehicles through the court system. Through tort, property, contract, and related legal domains, society shapes how people use AI while gradually defining what it means to misuse AI technologies (Cuéllar 2019). Existing law (e.g., tort law) may, for instance, require that a company avoid any negligent use of AI to make decisions or provide information that could harm the public (Galasso & Luo 2019). Likewise, current employment, labor, and civil rights laws imply that a company using AI to make hiring or termination decisions could face liability for those human resources decisions.
Policymakers and the public also consider new legal and regulatory approaches when faced with potentially transformative technologies, as these may challenge existing legislation (Barfield & Pagallo 2018). The Algorithmic Accountability Act of 2022 is one proposal to address such perceived gaps. First introduced in 2019, the Act would regulate large firms through mandatory self-assessment of their AI systems, including disclosure of how those systems are used, developed, designed, and trained, as well as the data they gather and use.
Continue reading: https://www.brookings.edu/research/how-does-information-about-ai-regulation-affect-managers-choices/
 

Attachments

  • p0008608.m08210.ai_regulations.jpg