Brianna White

As the EU’s Artificial Intelligence (AI) Act fights its way through multiple rounds of revisions at the hands of MEPs, in the US a little-known organization is quietly working up its own guidelines to help channel the development of such a promising and yet perilous technology.
In March, the Maryland-based National Institute of Standards and Technology (NIST) released a first draft of its AI Risk Management Framework, which sets out a very different vision from the EU.
The work is being led by Elham Tabassi, a computer vision researcher who joined the organization just over 20 years ago. Then, “We built [AI] systems just because we could,” she said. “Now we ask ourselves: should we?”
While the EU’s AI Act is legislation, NIST’s framework will be entirely voluntary. NIST, as Tabassi repeatedly stresses, is not a regulator. Founded at the beginning of the 20th century, NIST instead creates standards and measurement systems for technologies ranging from atomic clocks to nanomaterials, and was asked by the US Congress to work up AI guidelines in 2020.
Unlike the EU’s AI Act, NIST does not single out any particular use of AI as off-limits (the Act, by contrast, could ban facial recognition in public spaces by the authorities, albeit with exceptions for things like terrorism).
And as NIST’s guidelines dryly note, its framework “does not prescribe risk thresholds or [risk] values.” In other words, it is up to developers to weigh the risks and advantages of unleashing their AI systems on the world.
“At the end of the day, we truly believe that there isn't one size fits all,” said Tabassi. “It's up to the application owner, developer […] whoever is in charge, to do a cost benefit analysis and decide.” Facial recognition by police, say, is a much riskier prospect than using it to unlock a smartphone, she argues. Given this, prohibiting a particular use case makes no sense (though recent compromise texts on the EU AI Act suggest there may be exceptions for unlocking phones).
The EU AI Act repeatedly emphasizes that there needs to be ultimate “human oversight” of AI. NIST’s guidelines don’t mention this, because whether or not it is needed all comes down to how AI is being used. “We truly believe that AI is all about context, and ‘AI without a human’ doesn't mean much,” said Tabassi. NIST is not trying to regulate to that level of detail, of when exactly a human should be in the loop, she stresses.
Continue reading: https://sciencebusiness.net/news/how-us-plans-manage-artificial-intelligence