Business Adoption of AI and Explainable AI — So close yet so far?

Cetas AI
6 min read · May 2, 2022

Introduction

In today’s world, artificial intelligence (AI) has become a critical factor for any organization, and one of its most important areas is Explainable AI (XAI), which has been receiving massive attention over the past few years.

The need to interpret the decision-making process of black-box AI models and solutions has been felt keenly in recent times, and while current XAI methods and systems have been helping the cause, many feel that they are not enough. To increase business adoption of AI, solutions and models need to be trustworthy, transparent, and easily interpretable by stakeholders. If our goal is to give the best assurance and interpretation of the decisions these AI models make, then we have to focus on delivering “Understandable AI” instead.

Why is XAI needed anyway?

In recent times, AI solutions have been increasingly making important decisions that affect our daily lives.

“Artificial intelligence will be more profound for humanity than fire and electricity.”

– Sundar Pichai (Google CEO)

From loan applications and insurance claims to medical diagnostics and employment, the number of businesses implementing AI- and ML-based systems is constantly growing. Consumers and enterprises, however, are increasingly wary of such AI integrations into everyday business processes. In the insurance sector, for example, only about 17% of organizations have actively started using AI solutions in production, largely because most are unable to understand how these black-box models reach their decisions and therefore cannot trust them to review insurance claims, which are important, life-altering decisions.

Trying to find ways to explain AI solutions is almost as old as building AI itself. In recent years, academic research has produced many promising XAI methods, and many software companies have come forward to provide XAI tools and techniques in the market. However, the major problem lies in the approach being taken, which is oriented towards the technical aspects of the model. In fact, the need for interpretability in AI is more of a business and social problem, one that requires a better solution than XAI alone can offer.

Explainable to Whom? And what are the challenges?

Let’s do a quick thought experiment: imagine you are a data scientist who has built the perfect AI solution for a highly valuable use case that currently relies on a moderately reliable but easily understandable rule-based system. Now, to pitch this solution to senior executives in your company, you have used state-of-the-art XAI tools to explain it.

This solution could create a great competitive advantage and generate a lot of revenue if it delivers on its promise; if the model operates incorrectly, it could also permanently damage the company’s reputation and brand and, in turn, drive the company’s shares down. It is therefore safe to say that the higher-ups will seek evidence before they green-light the solution and the model goes live.

But once they look at the explainability outputs from the XAI tools you implemented, they find only gobbledegook: uninterpretable and decontextualized data with none of the logic or intelligence they would expect from the word “explanation”.

Herein lies the major problem with XAI as a solution for businesses. The common issue faced by any non-technical person is that the explanations require a technical expert’s translation. The end users of these solutions are unable to understand and utilize the explanations, which in turn hinders the business adoption of AI. The interpretability of these explanations needs to traverse all levels of an organization, from data scientists and ML engineers on the ground to leadership, business, and compliance teams. Gaining their trust and confidence becomes increasingly crucial.

The fact is, most AI explainability tools are only interpretable and useful to a person with a strong technical background and a deep understanding of how the model works. XAI is an essential part of a professional’s toolkit, but it is not a practical or intuitive way to “explain” AI solutions.
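To illustrate the gap, here is a minimal sketch of what a typical model-agnostic explainability output looks like. The loan-style feature names and the random-forest model are illustrative assumptions rather than the output of any specific tool; the point is that the result is a column of unitless numbers, not a business-ready explanation.

```python
# A minimal sketch (hypothetical loan-approval feature names) of what a typical
# model-agnostic explainability output looks like to a non-technical reviewer.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

feature_names = ["credit_score", "income", "loan_amount", "age", "tenure"]
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much accuracy drops when each feature is shuffled.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(feature_names, result.importances_mean):
    print(f"{name:>12}: {score:+.4f}")

# The output is just a column of unitless numbers -- technically meaningful,
# but not an "explanation" a business or compliance reviewer can act on.
```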

Solution: Understandable and Intelligent AI — XAI+

To get to the stage where there is complete confidence in the decisions made by AI models, the only way forward is to enrich the descriptive domain and expand its audience. What we need is “Understandable AI”. It is XAI+, i.e., an XAI solution that satisfies the needs of both the non-technical teams and the professional teams in an organization.

The basis of understanding is transparency. Non-technical people should have access to all the decisions made by the models they oversee. They should be able to search the system of record by key parameters to evaluate and compile each decision. They should be able to probe individual decisions by performing counterfactual and/or what-if analysis, i.e., changing and altering the various input variables to experiment with the results (see the sketch below), and decide for themselves whether the model is reliable enough to trust and adopt into their businesses.
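As a rough illustration of what such a what-if check could look like under the hood, here is a minimal sketch using a hypothetical claims-review model and made-up field names; a real system would wrap this behind a point-and-click interface for non-technical reviewers.

```python
# A minimal what-if sketch (hypothetical claims model and fields): change one
# input at a time and compare the model's decisions side by side.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [claim_amount_k, prior_claims, policy_age_years]
X = np.array([[5, 0, 4], [60, 3, 1], [12, 1, 6], [80, 4, 2],
              [8, 0, 9], [45, 2, 1], [20, 1, 3], [70, 5, 1]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = approve, 0 = flag for review

model = LogisticRegression().fit(X, y)

original = np.array([[55, 2, 2]])   # the decision being questioned
what_if = original.copy()
what_if[0, 1] = 0                   # counterfactual: assume no prior claims

for label, case in [("original", original), ("what-if ", what_if)]:
    decision = "approve" if model.predict(case)[0] == 1 else "review"
    prob = model.predict_proba(case)[0, 1]
    print(f"{label}: inputs={case[0].tolist()} -> {decision} (p_approve={prob:.2f})")
```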

But we should not stop there. Understandable AI also needs to incorporate the larger context in which the models work. To build trust, business owners need visibility into the human decisions that preceded and guided the models throughout their AI life cycle. Here are a few key questions that everyone involved in a model’s life cycle should ask themselves (a sketch of how the answers could be recorded follows the list):

  • Why was this particular model considered the best option for the business problem being addressed? What other options were considered, and why were they ruled out?
  • What risks and limitations were noted during the selection process? How were they mitigated?
  • How was this model optimized? Were any business KPIs considered?
  • What data was selected for training the model? How were its suitability and potential problems assessed?
  • Were the data sources within or outside the company? If third-party data is used, what assurances do the vendors offer about their data management processes?
  • What did we learn during model development and training? How did those findings affect the final product?
  • How does our company ensure that potential problems are identified and fixed continuously once the model is deployed and live?
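One way to make the answers to these questions visible and searchable is to keep them as structured records alongside each model. The sketch below is purely illustrative; the field names and values are assumptions rather than a standard schema.

```python
# A hypothetical "model card"-style record capturing answers to the questions
# above in a structured, searchable form. All fields and values are illustrative.
model_card = {
    "model": "claims_review_rf_v3",
    "business_problem": "Prioritize insurance claims for manual review",
    "why_chosen": "Outperformed the rule-based baseline on recall at equal precision",
    "alternatives_rejected": ["rule-based baseline", "gradient boosting"],
    "known_risks": ["under-represents new policy types", "drift in claim amounts"],
    "mitigations": ["quarterly retraining", "drift alerts on input distributions"],
    "optimization_target": "recall at fixed review capacity (business KPI)",
    "training_data": {"source": "internal claims 2018-2021", "third_party": None},
    "monitoring": "weekly performance report reviewed by the compliance team",
}

# Non-technical reviewers could then query these records by key parameters,
# e.g. list every model whose training data includes third-party sources.
flagged = [m["model"] for m in [model_card] if m["training_data"]["third_party"]]
print(flagged or "no models using third-party data")
```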

Conclusion: XAI is just one piece of the puzzle

XAI alone will not solve the problem of intelligently interpreting and understanding how ML models and AI solutions behave. However, it is still a crucial piece of the bigger puzzle of Understandable AI. It is the strong foundation on which more intuitive, end-user-friendly solutions can be built, boosting trust in AI across the globe.

Concentrating on making AI understandable, along with shedding light across the AI pipeline, should be the true essence of Explainable AI, which will then become a key driver in the business adoption of AI.


Cetas AI

Navigating your journey “From Model centric to Value centric AI” visit: www.cetas.ai