Brianna White

In the course of the most recent wave of expectation and hype about “artificial intelligence” (AI), roughly the last 10 years, there have been repeated attempts to define what it is. Serious documents, such as those from academics, governments, or professional bodies, typically say that there is no agreed definition and then propose their own or fall back on a well-known one (for example, the UK Government used the phrasing: “AI can be defined as the use of digital technology to create systems capable of performing tasks commonly thought to require intelligence”). Popular articles tend not to agonize over it, but use the term to imply something technically advanced or futuristic.
Mostly this doesn’t matter too much. But now that governments are crafting laws referring to AI (e.g. the EU’s AI Act and the UK National Security and Investment Act 2021), it is beginning to matter a lot. The scope of a law should include neither too much nor too little; it should be clear which cases fall within it and which do not; it should be understandable by anyone using it; anyone should be able to determine easily whether a case falls under it; and it should not need continual updating. Consequently, the debate on the scope of the EU AI Act (ongoing at the time of writing) is crucial to the impact of the eventual regulation.
Unfortunately, such debates about what “AI” is are probably unresolvable as they are based on a false premise. It is a semantic problem, but words matter, particularly in law.
We’ve gone down a blind alley
This seems like a roadblock for prospective legislators, but it should not be, because the definitional question is irrelevant. The purpose of regulation is to protect one group of people from harm resulting from the actions of other people, such as those selling or using dangerous products. The focus of attention for regulators needs to be on the point at which people may be harmed by an action, whether intentionally or accidentally, directly or indirectly. Laboratory experiments and innovations need not generally be a regulatory concern until they leave the lab (beyond ethical considerations and the health and safety of the lab workers, of course). The discovery that E = mc² didn’t immediately trouble lawmakers, but when people were able to utilize the energy released by nuclear fission, it rightly grabbed their attention.
Continue reading: https://www.themandarin.com.au/190364-defining-artificial-intelligence-for-regulation/
 
