How to apply XAI?
Silvie Spreeuwenberg explaining something difficult to BRPN in 2013


If you believe the reports, AI is simply uploading big data, running an AI algorithm and getting results. Unfortunately, it is not that simple. Practical methods to apply AI are often limited to guidelines on choosing the right AI algorithm. As a result, AI solutions are black boxes, with unclear scope, that are not trusted by the workforce and not integrated into the organization's strategy cycle.

This post provides practical guidelines to create a sustainable AI solution that is integrated in the organization's strategy development cycle and uses explanations to guide humans in a plan-do-check-act process. The result is an XAI solution combining symbolic reasoning (business rules) to deal with common-sense knowledge and machine learning (statistics) to deal with uncertainty. In AI terms, this is called a 'hybrid solution'.

An easy-to-remember six-step process merits some warnings.

It is a generalized method, and blindly following a generalized method as a recipe for a specific case never yields good results. Understanding the underlying knowledge and thinking hard about how to best apply it in a specific situation will. So, when applying this method, make sure you understand the motive behind each step and assess how to apply it to your specific case. Furthermore, the steps need not be applied linearly; they may be revisited as often as needed.

Step 1: Concept Model Definition

It all starts by agreeing, as a business, on the meaning of the important concepts in your business. Businesses that can't state clearly who they target and what it means to be successful are unlikely to be efficient, effective and successful in applying AI and decision support systems.

A well-defined business concept has a distinct meaning that does not overlap with another business concept (unless that concept is more general), and it is known how it relates to other concepts. It should be SMART (Specific, Measurable, Achievable, Realistic and Timebound).

For most businesses a returning customer is an important concept.

If we use AI to predict the likelihood of a new customer becoming a returning customer, how do we define clearly what a returning customer is? Is it a customer who placed a second order within two months of the first order and whose last order was at most one year ago? This example demonstrates that SMART definitions form the basis for a good scope definition, result in concepts with a clear and distinct meaning, and relate a concept to other important concepts.
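To make this concrete, here is a minimal Python sketch of such a SMART definition. The two-month and one-year thresholds and the function name are just the hypothetical values from the example above, not a prescription:

    from datetime import date, timedelta

    # Hypothetical SMART definition of "returning customer": a second order
    # within two months of the first, and the most recent order at most one year ago.
    def is_returning_customer(order_dates: list, today: date) -> bool:
        if len(order_dates) < 2:
            return False                                  # needs at least a second order
        orders = sorted(order_dates)
        second_within_two_months = orders[1] - orders[0] <= timedelta(days=61)
        last_order_recent = today - orders[-1] <= timedelta(days=365)
        return second_within_two_months and last_order_recent

    # Example: second order five weeks after the first, last order a month ago
    print(is_returning_customer(
        [date(2019, 1, 10), date(2019, 2, 15), date(2019, 11, 5)],
        today=date(2019, 12, 1)))                         # True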

In my experience, businesses with well-defined concepts are better at communicating with their customers, partners and technology departments and are therefore more successful in finishing technology projects, including AI innovations.

Step 2: Task Definition

To collect the right data, you have to understand the task that you want to support. I recommend that you always start with basic statistical analysis on the available data. This opens the conversation and discussion in the organization and will lead to better understood and better defined concepts.

In our returning customer example, a distribution of orders over time may confirm that, indeed, when customers wait more than three months to place a second order, they are unlikely to return.
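For example, a minimal pandas sketch of such a basic analysis, assuming a hypothetical orders table with customer_id and order_date columns:

    import pandas as pd

    # Hypothetical orders table: one row per order
    orders = pd.DataFrame({
        "customer_id": [1, 1, 2, 2, 3, 3, 4],
        "order_date": pd.to_datetime([
            "2019-01-05", "2019-02-10",   # customer 1 reorders after ~5 weeks
            "2019-01-20", "2019-07-01",   # customer 2 reorders after ~5 months
            "2019-03-01", "2019-03-20",   # customer 3 reorders quickly
            "2019-04-15",                 # customer 4 never reorders
        ]),
    })

    # Days between each customer's first and second order
    orders = orders.sort_values(["customer_id", "order_date"])
    orders["order_rank"] = orders.groupby("customer_id").cumcount()
    first = orders[orders["order_rank"] == 0].set_index("customer_id")["order_date"]
    second = orders[orders["order_rank"] == 1].set_index("customer_id")["order_date"]
    gap_days = (second - first).dt.days                   # NaN = never reordered

    print(gap_days.describe())
    # Share of all first-time customers who reorder within roughly three months
    print("reorder within ~3 months:", (gap_days <= 90).mean())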

But what predicts that they will return when they place a first order?

The business is likely to have some good ideas and some preconceptions, which need to be brainstormed, analyzed and tested. Are multiple outcomes acceptable, or do you need just one outcome? Is the result of the task a classification, a ranking or a number? Agreeing on the answers to these questions is important for step 4.
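As an illustration, a small sketch of one possible answer: encoding the agreed outcome as a single binary classification target, using the hypothetical SMART definition from step 1:

    import pandas as pd

    # Hypothetical customer table; days_to_second_order is missing when there is no reorder
    customers = pd.DataFrame({
        "customer_id": [1, 2, 3, 4],
        "days_to_second_order": [36.0, 162.0, 19.0, None],
    })

    # The agreed outcome here is a single binary classification target, not a
    # ranking or a number: "returner" = second order within two months (step 1).
    customers["is_returner"] = customers["days_to_second_order"] <= 61   # NaN compares as False
    print(customers)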

Step 3: Data Collection

The good news about this decade is that most organizations are already collecting data. But are we collecting the right data for our objectives? Most of the time, data is collected for administrative purposes, not for predictive purposes.

For predictions we may need to collect more data or get new data sources. 

Let's use the returning customer example again. Maybe the parking conditions at the time of the first order are an important predictive factor in your business for a customer's decision to return. Did we collect that data? Most likely we did not. Can we get it? Most likely we can.
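A small sketch of what getting it could look like, assuming a hypothetical daily parking-occupancy feed that we join to each customer's first order:

    import pandas as pd

    # Hypothetical new data source: parking occupancy near the store, per day
    parking = pd.DataFrame({
        "date": pd.to_datetime(["2019-01-05", "2019-01-20", "2019-03-01"]),
        "parking_occupancy": [0.95, 0.40, 0.60],          # share of spaces taken
    })

    # First order per customer, taken from the administrative order data
    first_orders = pd.DataFrame({
        "customer_id": [1, 2, 3],
        "first_order_date": pd.to_datetime(["2019-01-05", "2019-01-20", "2019-03-01"]),
    })

    # Enrich the training data with the new candidate predictor
    enriched = first_orders.merge(
        parking, left_on="first_order_date", right_on="date", how="left"
    ).drop(columns="date")
    print(enriched)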

Step 4: Technique Selection

Now we can choose a suitable AI technique because we know from the previous steps what kind of data we have as input for our decision support task, how the data elements relate, what kind of decision-outcome we accept and what we already know about the task that needs to be supported. 


You may have heard about 'random forest’ and 'deep learning’. These are two of many AI techniques to choose from. Which technique to apply depends on the characteristics of the task and data. There will be another post about these techniques, but I can already tell you that it is definitely not the other way around: it is not the technique that tells us what to do! No, it is the task that dictates what technique to use. So, don’t use a hammer when you need a screwdriver!
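As a rough illustration (not a recommendation), a sketch in which the task characteristics from the previous steps drive the choice, with two scikit-learn model classes as stand-ins:

    # Illustrative only: the task characteristics pick the model family,
    # not the other way around. The model classes are scikit-learn stand-ins.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression

    def choose_model(outcome_type: str, n_rows: int, needs_transparency: bool):
        if outcome_type != "classification":
            raise NotImplementedError("ranking or numeric tasks need other model families")
        if needs_transparency or n_rows < 10_000:
            # small tabular data and an explanation requirement: a linear model is a sane default
            return LogisticRegression(max_iter=1000)
        # larger data where non-linear interactions are expected
        return RandomForestClassifier(n_estimators=200, random_state=0)

    model = choose_model("classification", n_rows=5_000, needs_transparency=True)
    print(type(model).__name__)                           # LogisticRegression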

Step 5: Explanation Generation

The major concern with decision support systems, in general, is that if users do not trust the underlying model or a prediction, they will not use it.

It has been shown that a good explanation can increase trust in the system.

However, if the explanation is complicated, nonsensical or difficult to understand, distrust of the system's decision will increase. So generating a good explanation is a separate, and important, step.

AI experts tend to claim that models that are good at predicting are not good at explaining, and vice versa. This stance suggests that one has to choose between transparency and accuracy. I don't agree. On the contrary, I claim that high accuracy and explainability can and should go hand in hand.

Generating the explanation does involve a second model. The explanation model has to receive information from the predictive model, get the distinguishing factors, select the factors that contribute to the user’s confidence and present those in the right way.

Suppose that, in our returning customer example, we identified that a customer is not likely to return because of the lack of parking spaces and because the purchased product is not in the top 10 for returning customers. However, the customer asked a question, and we know this increases the likelihood that a customer will return. Then it makes sense to present the user with an explanation that includes both the evidence for and the evidence against the conclusion. This makes the system intelligible for users, which has been shown to increase their trust in its advice.
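A minimal sketch of such an explanation model, assuming the predictor is a logistic regression on standardized features with hypothetical feature names; the per-feature contributions (coefficient times standardized value) are split into evidence for and against:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    # Hypothetical training data: [parking_occupancy, product_in_top10, asked_question]
    X = np.array([[0.9, 0, 0], [0.2, 1, 0], [0.8, 0, 0], [0.1, 1, 1],
                  [0.7, 1, 1], [0.3, 0, 1], [0.95, 0, 0], [0.15, 1, 0]])
    y = np.array([0, 1, 0, 1, 1, 1, 0, 1])                # 1 = customer returned
    feature_names = ["parking_occupancy", "product_in_top10", "asked_question"]

    scaler = StandardScaler().fit(X)
    predictor = LogisticRegression().fit(scaler.transform(X), y)

    def explain(x):
        """Split per-feature contributions into evidence for and against returning."""
        contributions = predictor.coef_[0] * scaler.transform([x])[0]
        evidence_for = [(n, round(c, 2)) for n, c in zip(feature_names, contributions) if c > 0]
        evidence_against = [(n, round(c, 2)) for n, c in zip(feature_names, contributions) if c < 0]
        return evidence_for, evidence_against

    # Full parking lot, product outside the top 10, but the customer asked a question
    pro, con = explain([0.9, 0, 1])
    print("evidence for returning:", pro)
    print("evidence against returning:", con)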

Step 6: System Evolution

Finally, there needs to be a way to track and analyze errors and new trends to evolve the system. Besides tracking the accuracy of the predictions automatically, we need to actively look for side effects, biases and new knowledge. That is, WE, the creative and experienced business people, must be supported by a good automated workflow to improve the decision-making process.

In our returning customer case study, each customer who returns but was not 'flagged' by the system as a 'returner' should be analyzed. Do we know something that the system did not know? For example, the customer returns every spring and only buys something for the garden. If it makes sense, it should result in an update of the system's knowledge or of the system's scope description.
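A minimal sketch of this check, assuming a hypothetical monitoring table that records the system's flag next to the observed outcome:

    import pandas as pd

    # Hypothetical monitoring table: the system's prediction versus what actually happened
    outcomes = pd.DataFrame({
        "customer_id":       [1, 2, 3, 4, 5],
        "flagged_returner":  [True, False, False, True, False],
        "actually_returned": [True, True, False, False, True],
    })

    # Customers who returned but were not flagged: candidates for manual review.
    # Each one may reveal knowledge the system lacks (e.g. seasonal garden buyers).
    missed = outcomes[outcomes["actually_returned"] & ~outcomes["flagged_returner"]]
    print(missed)

    # Track the miss rate over time as part of the check phase of plan-do-check-act
    miss_rate = len(missed) / outcomes["actually_returned"].sum()
    print(f"missed returners: {miss_rate:.0%}")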

Better explanations should result in increased trust and also in increased performance.

These are the KPIs of our XAI solution, and they should be measured.
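A small sketch of how these two KPIs could be tracked, using acceptance of the system's advice as a hypothetical proxy for trust:

    import pandas as pd

    # Hypothetical advice log: one row per prediction shown to a user
    advice_log = pd.DataFrame({
        "prediction_correct": [True, True, False, True, False, True],
        "advice_accepted":    [True, True, True, False, False, True],
    })

    kpis = {
        "performance (prediction accuracy)": advice_log["prediction_correct"].mean(),
        "trust (advice acceptance rate)":    advice_log["advice_accepted"].mean(),
    }
    for name, value in kpis.items():
        print(f"{name}: {value:.0%}")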

The six steps of XAI versus today's AI practice

Maybe you expected steps like: write use cases, create an MVP, integrate the MVP into the IT infrastructure, refine the business process, assess performance, automate the feedback loop. This is the typical terminology used in the AI world today. The focus is on getting fast results, often as part of a "fail fast, learn fast" culture. As a consequence, some of these efforts are not sustainable and will not result in business value.

AI adoption tripled in 2018, moving AI towards the peak of the Gartner hype cycle.

Now that AI is getting more mainstream, more conservative companies have good reasons to enter this arena. These companies will ask for a method that results in more reliable project outcomes and integrated business systems.

We don’t want decision support systems that result in the headline:

AI model of <ENTER YOUR COMPANY NAME HERE> uses the last name of an applicant to determine the applicant's eligibility. Is that legal?

AI solutions assume that big data contains knowledge about the correct input-output relationship. However, what this knowledge consists of remains a black box to us, and therefore the machine-made decision maker is comparable to an oracle. In XAI we still use these AI techniques but add an extra diagnostic feedback mechanism.

This is the fifth post in a series. As an expert in decision support systems development, I have been promoting transparency and self-explanatory systems to close the plan-do-check-act cycle. All too often I come across modern systems that have similar issues, and face the same fate, as the legacy systems they replaced, because the domain experts or end users are not involved in the feedback loop. My impression is that the journey is starting all over again as organizations start using AI technology as black-box systems. That is not necessary, and this is one of my contributions to a topic that is close to my heart. Next time we will describe what makes an explanation a good explanation.

Let me know if you liked this post by sharing it in your network.

Acknowledgements: I am grateful to Patricia Henao for helping me edit the articles, reflect on them and create supporting training material.

 

Dr. Curtis J. Tinsley

No Title at The Company of Man Retired Pathologist

5y

Great series, but my brain is still in the stall (metaphor for impaired symbolic reasoning). Could you please explain: does the algorithm come before the artificial intelligence or vice versa? Kind of a chicken-before-the-egg ontological conundrum.

Thank you Silvie. Let's mention that symbolic reasoning is more than business rules and explanations are just one way to build trust in a system. Maybe you could address those points in the remaining posts. Sincerely,

Fred Simkin

Developing and delivering knowledge based automated decisioning solutions for the Industrial and Agricultural spaces.

5y

Well said!
