AI-Informed Drug Development & Discovery: Excerpts From a Conversation with FDA Regulatory Science Leadership

Dr. Szczepan Baran
Chief Scientific Officer

Hardly a week passes without the press spotlighting the growing attention governments are paying to artificial intelligence (AI). For the pharmaceutical and biotech industries, this has created some confusion. Should the progress scientists have made in integrating AI as another method to advance exploration and discovery be reconsidered? Are regulators now more concerned about research supported by AI and machine learning (ML)? Will my investigational new drug application be slowed or rejected if it is submitted with AI-based research?

I had the opportunity to discuss the state of regulatory science as it relates to AI and ML with Director Dr. Weida Tong of the FDA's National Center for Toxicological Research to get insight into some of these questions. You can view a recording of the full discussion online, but a few takeaways follow.

Firstly, the FDA has long embraced the integration of new technologies and scientific advances developed by both academia and commercial industry. Regulators urge sponsors to use AI and ML techniques in their research and to share the details of how those techniques are integrated and applied. One example of the FDA's encouragement of exploration and innovation in new technologies for drug development is its Innovative Science and Technology Approaches for New Drugs (ISTAND) pilot program. This program encourages the development of technologies such as AI and machine learning, and states that this includes the “use of artificial intelligence (AI)-based algorithms to evaluate patients, develop novel endpoints, or inform study design.”

Another example of the FDA’s encouragement of AI/ML exploration in drug development is the Predictive Toxicology Roadmap. Developed in 2017, it is a six-part framework that describes itself as a tool for “integrating predictive toxicology methods into safety and risk assessments.” This was one of the first notable instances of the FDA considering AI/ML techniques as viable alternatives to traditional methods of drug discovery & development, as it acknowledges a need to understand and improve core processes related to predictive data in toxicology.

In all aspects of engaging with regulators like FDA, early communication is critical. “How can we utilize the information from AI? I think that the communication between sponsors and the FDA is really important,” says Dr. Weida Tong, “(as it enables us) to have the early conversation (about) how the AI is being developed, and how the AI information is used to support (analysis) of safety or efficacy.” 

Additional instances of the FDA discussing and conceptualizing the integration of AI/ML in the pharmaceutical sector are documented in its 2022 report, Advancing Regulatory Science at FDA: Focus Areas of Regulatory Science. This publication not only recognizes but also underscores the significant role of predictive analytics in drug development and highlights the critical need to understand the nuances of emerging technologies such as AI/ML. The FDA’s interest in engaging with industry and academic partners is also reinforced by the agency’s request for feedback this year (Using AI and Machine Learning in the Development of Drug & Biological Products). According to this publication, the FDA “understands that AI/ML use in drug development is diverse, and careful assessments that consider the specific context of use are needed.” This allows sponsors to leverage the technology and advance drug development within rational boundaries. Sponsors should maintain open and ongoing communication with the FDA and ensure their practices are transparent, aligning with the regulator’s understanding of a given context of use.

One primary area of concern is bias. “[Regarding bias in AI systems,] the more important issue we need to understand is what kind of data has been used to train the model, to develop the models. Once we understand that, we will be able to define the context of the use of the models,” explains Dr. Tong. Beyond bias, sponsors utilizing AI and ML technologies must also diligently consider other topics the FDA has formally identified as pertinent to its regulatory science interests. The FDA has acknowledged the importance of addressing concerns around transparency and explainability as well, encouraging drug developers to ask themselves questions such as, ‘How can the use, development and performance of AI/ML in drug development applications be inspected?’ and ‘What processes are in place to enhance and enable traceability and auditability? How can changes be captured and monitored?’

Safety is another essential aspect to keep in mind. To follow best practices, drug developers improving their processes with AI need to be able to identify any harm or problems AI could introduce before they occur. Model influence (the weight of the model in the totality of evidence for a specific decision) and decision consequence (the potential consequences of a wrong decision) are critical considerations to prioritize. The aforementioned FDA Predictive Toxicology Roadmap was developed with the importance of safety in mind.

Reliability and quality control are core areas to focus on as well. In terms of reliability, the FDA’s current statistical standard sets a high bar: the 90 percent confidence interval of the test-to-reference geometric mean ratio for both Cmax and AUC must fall within the range of 80 to 125 percent. And when considering quality control, drug developers should 1) identify which practices and documentation are used to inform and record data source selection and inclusion or exclusion criteria, 2) determine the data quality for the particular context of use, and 3) ensure that the data are representative of the population within which the AI/ML method will be utilized.
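To make that acceptance criterion concrete, here is a minimal sketch (assuming a simple paired test/reference design and hypothetical Cmax values; this is illustrative code, not an FDA method) of how the geometric mean ratio and its 90 percent confidence interval could be computed and checked against the 80 to 125 percent limits:

```python
# Minimal sketch: 90% CI of the test-to-reference geometric mean ratio for a
# pharmacokinetic metric such as Cmax, using hypothetical paired data and a
# simple log-scale t-interval.
import math
from statistics import mean, stdev
from scipy import stats

def geometric_mean_ratio_ci(test, reference, alpha=0.10):
    """Return (ratio, lower, upper): the test/reference geometric mean ratio
    with a (1 - alpha) confidence interval, from paired observations."""
    # Work on the log scale, where the ratio becomes a difference of means.
    log_diffs = [math.log(t) - math.log(r) for t, r in zip(test, reference)]
    n = len(log_diffs)
    d_bar = mean(log_diffs)
    se = stdev(log_diffs) / math.sqrt(n)
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
    lower, upper = d_bar - t_crit * se, d_bar + t_crit * se
    # Back-transform to the ratio scale.
    return math.exp(d_bar), math.exp(lower), math.exp(upper)

# Hypothetical Cmax values (ng/mL) for the same subjects on test and reference.
test_cmax = [102.0, 98.5, 110.2, 95.7, 104.3, 99.8]
ref_cmax = [100.1, 97.0, 108.9, 94.2, 101.5, 103.4]

ratio, lo, hi = geometric_mean_ratio_ci(test_cmax, ref_cmax)
within_limits = lo >= 0.80 and hi <= 1.25
print(f"GMR = {ratio:.3f}, 90% CI = ({lo:.3f}, {hi:.3f}), within 80-125%: {within_limits}")
```

In practice, such analyses are typically run with ANOVA-based models for crossover study designs; this simplified paired calculation is only meant to illustrate the arithmetic behind the 80 to 125 percent acceptance range.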

Internationally, the same level of interest and attention is being directed towards AI/ML in drug discovery, as evidenced by the report from the European Medicines Agency: Reflection paper on the use of artificial intelligence in the lifecycle of medicines. This publication states, “The use of artificial intelligence is rapidly developing in society and as regulators we see more and more applications in the field of medicines. AI brings exciting opportunities to generate new insights and improve processes.”

VeriSIM Life has been part of this conversation; we submitted our set of recommendations based on the guiding principles governing our technology platform, one example of a computational drug discovery & development platform leveraging hybrid AI.

There is no doubt that AI technology will play a pivotal role in bolstering public health. Regulators, such as the FDA, acknowledge that the pharmaceutical industry's aim to incorporate AI in preclinical drug development coincides with efforts to expedite innovations. These advancements promise to enhance the efficacy, safety, and affordability of medicines. Engaging with regulators early and often is the first step to accelerating the adoption of this innovation.

Learn more about VeriSIM Life’s BIOiSIM platform and unique Translational Index™️ technology.