How much should we trust explainable AI?

As machine learning and artificial intelligence (AI) applications play a growing role in the services we use, whether we realize it or not, there is greater emphasis than ever on being able to understand how these systems make their decisions.

If a bank refused a mortgage or loan application based on an algorithm, for example, the customer should be entitled to know why.

Explainable AI is now a principle of ethical artificial intelligence. The UK’s Information Commissioner’s Office (ICO) has even put forward regulatory proposals requiring businesses and other organizations to explain decisions made by AI or face multimillion-dollar fines.

At the root of this drive is trust: by being able to access and understand an algorithm’s decision-making process, society will more openly accept the technology.
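The idea of an accessible decision-making process can be made concrete with a toy sketch (all feature names, weights, and numbers below are hypothetical, not taken from the article): for a simple linear credit-scoring model, "why was the loan refused?" has a direct answer, because each feature's weight times its value is that feature's contribution to the final score.

```python
# A minimal, hypothetical sketch of an "explainable" loan decision:
# a linear scoring model decomposed into per-feature contributions.
# Weights, bias, and threshold are illustrative, not a real credit model.

weights = {"income": 0.4, "debt_ratio": -0.6, "years_employed": 0.2}
bias = -0.1
threshold = 0.0  # approve if the score reaches the threshold

def explain_decision(applicant):
    """Return the decision, the score, and each feature's contribution."""
    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = bias + sum(contributions.values())
    decision = "approved" if score >= threshold else "refused"
    # Rank features from the strongest negative push to the strongest positive
    ranked = sorted(contributions.items(), key=lambda kv: kv[1])
    return decision, score, ranked

applicant = {"income": 0.5, "debt_ratio": 0.9, "years_employed": 0.3}
decision, score, ranked = explain_decision(applicant)
print(decision, round(score, 2))        # refused -0.38
for feature, contribution in ranked:
    print(f"  {feature}: {contribution:+.2f}")
```

Here the customer could be told that a high debt ratio was the dominant reason for refusal. Real deployed models (gradient-boosted trees, neural networks) are far less transparent, which is exactly why dedicated explanation techniques, and the regulatory pressure the article describes, exist.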

Continue reading: https://techhq.com/2020/10/how-much-should-we-trust-explainable-ai/
