The What, Why, and How of Explainable AI
Will removing AI from its impenetrable "black box" and bringing its workings out into the light help organizations sustain their AI adoption?
AI has met with some roadblocks along the way to complete adoption. One of them is simple to explain but hard to overcome: people don't know how AI works, so they don't know if they can trust it.
It's perfectly natural. Take sales as an example. Good sales reps work hard to get to know their clients and their product lines. They develop techniques and soft skills that make them better at their job. And, in time, their years of experience give them a "feel" for what to do in a given situation.
As humans, we like to trust our gut, but we're willing to be persuaded if and only if the other person gives us good reason to be. When the other "person" is a mysterious computer program and their reasons are absent or unclear, we don't want to take their advice.
As I said, this is perfectly natural. So how can we work around it?
That's the question Explainable AI (or XAI) tries to answer.
What Is Explainable AI?
According to Wikipedia, "Explainable AI … is artificial intelligence in which humans can understand the decisions or predictions made by the AI. It contrasts with the 'black box' concept in machine learning where even its designers cannot explain why an AI arrived at a specific decision. […] XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions."
In short, XAI seeks to demystify AI through processes and techniques that let humans understand what goes into a decision and why a particular action was chosen. Notably, at times even AI designers and developers can't explain why or how a certain outcome was reached; it's easy to see how that can deter users from trusting machine-based intelligence.
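To make that concrete, here is a minimal sketch of one widely used explanation technique, permutation feature importance, applied to a stand-in "black box" model. The synthetic data and the sales-flavored feature names are illustrative assumptions, not part of any particular XAI framework:

```python
# A minimal sketch of permutation feature importance: shuffle one feature at a
# time and measure how much the model's accuracy drops. Features whose
# shuffling hurts accuracy the most are the ones the model relies on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in "black box": a random forest trained on synthetic data.
# The feature names are hypothetical, purely for readability.
feature_names = ["tenure", "deal_size", "contact_count", "region_score", "discount"]
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Rank features by how much shuffling them degrades held-out accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean, result.importances_std),
                key=lambda item: item[1], reverse=True)
for name, mean, std in ranked:
    print(f"{name:>15}: {mean:.3f} +/- {std:.3f}")
```

Richer attribution methods such as SHAP or LIME build on the same idea of tying a model's output back to its inputs, which is exactly the kind of evidence end users and auditors can inspect.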
It's also quite hard to verify that a system is working correctly if you don't understand how it works. Thus, in addition to increasing end-user adoption, Explainable AI aims to improve the accuracy and effectiveness of AI itself. It can also help ensure compliance and regulatory goals are met; transparency has been called one of the hallmarks of responsible AI.
NIST's 4 Principles of Explainable AI
In 2020, the U.S. National Institute of Standards and Technology (NIST) released a report outlining the four key principles of Explainable AI. In a nutshell, they are:

1. Explanation: the system supplies evidence or reasons for its outputs.
2. Meaningful: those explanations are understandable to their intended audience.
3. Explanation Accuracy: the explanation correctly reflects how the system actually produced the output.
4. Knowledge Limits: the system operates only under the conditions it was designed for, and flags outputs it isn't sufficiently confident in.
Benefits of Explainable AI
An AI system that lets you know when it's unable to generate an accurate response and provides explanations of its actions can benefit your organization in the two ways already discussed: greater acceptance by users and an easier path for its creators. This can have a spillover effect on various parts of the AI development process itself.
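As a rough illustration of that "knows when it can't answer" behavior, which maps to NIST's Knowledge Limits principle, here is a minimal sketch of a model that abstains, and says so, whenever its confidence falls below a threshold. The wrapper function and the 0.75 cutoff are illustrative assumptions, not a prescribed design:

```python
# A minimal sketch of "knowledge limits": answer only when confident, and
# return a plain-language explanation either way. The threshold is illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def predict_or_abstain(model, x, threshold=0.75):
    """Return (label, explanation); label is None when the model abstains."""
    proba = model.predict_proba(x.reshape(1, -1))[0]
    confidence = proba.max()
    if confidence < threshold:
        return None, f"Abstaining: confidence {confidence:.2f} is below {threshold:.2f}"
    return int(proba.argmax()), f"Predicted class {proba.argmax()} with confidence {confidence:.2f}"

label, explanation = predict_or_abstain(model, X[0])
print(label, "-", explanation)
```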
Implementing XAI, as you might deduce from the above, requires planning and forethought. It's crucial to look for potential biases and inaccuracies (first in the data, and then in the model). Continuous monitoring is also essential, as is automating lifecycle management. This automation should also look for signs of drift, but manual review and analysis should never be discounted.
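As one example of what that automated monitoring might check, here is a minimal sketch that flags possible data drift by comparing a feature's live distribution against its training distribution with a two-sample Kolmogorov-Smirnov test. The synthetic data and the 0.05 significance level are illustrative assumptions:

```python
# A minimal sketch of drift monitoring: compare incoming data against the
# training baseline and escalate to manual review when they diverge.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # reference distribution
live_feature = rng.normal(loc=0.4, scale=1.0, size=1000)      # incoming data, slightly shifted

statistic, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.05:
    print(f"Possible drift detected (KS={statistic:.3f}, p={p_value:.4f}); flag for manual review")
else:
    print("No significant drift detected")
```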
It's not too difficult to imagine Explainable AI as the new standard of business AI. It has the potential to eliminate several of the pernicious roadblocks that have barred the way to higher rates of AI acceptance. It will be very interesting to watch how this progresses in the coming months; for those who develop AI tools and those who use them, XAI may just become the next evolution of Artificial Intelligence.