What Is Explainable AI & How Can It Help in the Context of Drug Discovery?

Today, most life sciences experts agree that Artificial Intelligence (AI) is poised to revolutionize drug discovery. However, in order for AI to achieve its full, game-changing potential, significant changes will need to be made to the R&D status quo. For heavily regulated industries like biopharma, that will mean satisfying a high bar of explainability – especially as regulatory bodies require evidence of repeatability, and clinicians and scientists want to be able to explain the inner workings of the complex AI models challenging their established methods.

Enter: explainable AI (XAI). XAI is emerging as a methodology that can help improve trust, confidence, and successful adoption of AI-driven approaches – especially in the context of drug discovery. Let’s take a closer look.

What is explainable AI (XAI)?

Explainable artificial intelligence (XAI), sometimes also called interpretable AI, is a framework and methodology designed to help human users understand and trust the results or outputs of machine learning (ML) algorithms.

Overall, the goal of XAI is to:

  • Present AI models to non-technical audiences in clear language
  • Describe the methods used for predictions or classifications
  • Debug any questionable results that may have occurred during modeling
  • Regulate a model’s behavior to avoid bias

An organization’s XAI toolkit might include videos, tutorials, example-based explanations, model analyses, feature attributions, explainability algorithms and more. XAI toolkits allow users and regulators to explore an organization’s AI technology in more detail in order to build trust in its outcomes (e.g., accuracy, fairness/bias) and transparency in its processes.
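As an illustration of one such toolkit component, here is a minimal sketch of a feature-attribution check using permutation importance. It assumes a scikit-learn workflow; the dataset, model and feature indices are synthetic placeholders rather than part of any specific toolkit:

```python
# Minimal sketch of one XAI-toolkit component: permutation feature
# importance. The model and synthetic data are illustrative placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a modeling dataset (e.g., molecular descriptors).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the drop in held-out accuracy;
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")
```

Shuffling a feature severs its relationship to the target, so the resulting drop in held-out accuracy is a simple, model-agnostic signal of how much the model depends on that feature.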

What are some of the benefits of XAI?

XAI helps organizations adopt a responsible and ethical approach to AI development through an enhanced understanding of model data, inputs, outputs and algorithms. By providing explanations of how AI systems work and make decisions, developers and ML scientists can better ensure that project requirements are satisfied and that the concerns of non-technical audiences and stakeholders about model behavior are addressed. Not only does this increase transparency and help build trust, it also serves to mitigate many of the compliance, legal, security and reputational risks inherent in AI-based approaches.

Overall, the benefits of XAI can be summarized as follows:

  • Fairer algorithmic outputs
  • Increased transparency, confidence and reliability in AI models
  • Enhanced efficiency and effectiveness across the AI model pipeline
  • Improved organizational performance and reputation

How is explainable AI (XAI) different from other types of AI?

Although typical AI and ML techniques tend to have high accuracy rates, these models can be very difficult to interpret, especially in deep learning, where complex neural networks are often opaque. Such non-explainable systems are sometimes known as “black box” models: users (and possibly even the humans designing them) can’t articulate precisely how a model reached its conclusions. This, of course, undermines trust in the outputs, as users and regulators want to be certain these models are operating without bias or other irregularities. In addition, when a typical AI model acts in an unexpected way or fails, developers and end users may struggle to understand the root cause and find suitable solutions to address the issue.

By contrast, the XAI techniques and methods implemented across the ML lifecycle can analyze the data used to develop models (sometimes called pre-modeling), while also incorporating interpretability into the system’s architecture via explainable modeling. Doing so allows for post-modeling explanations of system behavior. These interpretable models may be deliberately constrained in order to provide this level of transparency and clarity, whereas most standard machine learning models are not designed with such interpretability constraints.
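To make the contrast concrete, here is a minimal sketch of an interpretable-by-design model: a depth-limited decision tree whose entire decision logic can be printed and audited end to end. The dataset and the depth limit are illustrative assumptions, not a prescription:

```python
# Minimal sketch of an interpretable-by-design model: a depth-limited
# decision tree whose full decision logic can be exported as rules.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()

# Constraining depth trades some accuracy for a model small enough
# to be read, explained and audited in its entirety.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# Print the complete set of if/then rules the model uses.
print(export_text(tree, feature_names=list(data.feature_names)))
```

The depth constraint is exactly the kind of deliberate limitation described above: an unconstrained tree might score slightly higher, but its logic could no longer be inspected at a glance.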

How is explainable AI (XAI) impacting drug discovery?

In drug discovery, most researchers now agree that explainability is a requirement for AI-based clinical decision support systems, as transparency and interpretability are central to decision-making between medical professionals and patients. In addition, demand for explainable deep learning methods is strong in the molecular sciences, where AI-based deep learning methods are being used for image analysis, molecular structure and function prediction, and other critical research applications.

XAI techniques like SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), along with causal-inference methods such as Bayesian networks, can give valuable insight into the particular drivers of a treatment decision and help clarify the drug-targeting phase. XAI methods are also proving able to create time and cost efficiencies in computational drug discovery studies.
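As a concrete illustration, the sketch below computes SHAP attributions for a single prediction from a tree-based classifier. It assumes the open-source shap package; the synthetic dataset is a placeholder standing in for, say, compound-activity descriptors:

```python
# Minimal sketch of SHAP feature attribution for one prediction.
# The model and synthetic features are illustrative placeholders.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=300, n_features=6, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Per-feature contributions (in log-odds) for the first sample:
# positive values push the prediction toward the positive class.
for i, value in enumerate(shap_values[0]):
    print(f"feature_{i}: {value:+.3f}")
```

Because Shapley values sum to the difference between a prediction and the average model output, each number can be read as that feature’s contribution to this particular decision – the kind of per-case rationale clinicians and regulators ask for.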

XAI is also helping to increase collaboration among medicinal chemists, chemoinformaticians and data scientists – supporting shared analysis and interpretation of complex chemical data.

What’s next for explainable AI (XAI)?

Because of the pharma industry’s highly regulated nature, we should expect to see increasingly advanced development of XAI models for healthcare diagnostics, drug design and treatment. While the field of XAI is still in its relative infancy, progress is coming quickly and its relevance is continually increasing.

VeriSIM Life has developed its own sophisticated computational platform that leverages advanced AI and ML techniques to improve drug discovery and development by greatly reducing the time and money it takes to bring a drug to market. The BIOiSIM® platform’s primary output, a Translational Index™ score, is an explainable metric for predicting drug translatability. Contact us to learn more about BIOiSIM® and how our AI-enabled platform helps de-risk R&D decisions.