50 shades of AI in regulatory science with Dr. Weida Tong from FDA

Dr. Szczepan Baran, Dr. Weida Tong

As published in Drug Discovery Today, June 6, 2024

Weida Tong, National Center for Toxicological Research, Food and Drug Administration, Jefferson, AR 72079

Szczepan W. Baran, Verisim Life Inc. 1 Sansome Street, Suite 3500, San Francisco, CA 94104


  • The integration of AI into regulatory science introduces significant advancements in how regulatory bodies approach review, oversight, compliance, and innovation.
  • AI’s roles in regulatory applications are categorized into multiple shades, including assisting, augmenting, autocratic, and autonomous, each with distinct implications and applications.
  • Understanding the context-of-use for each AI shade is crucial to address biases, ensure transparency, and enhance decision-making processes within regulatory frameworks.
  • The authors emphasize the need for tailored regulatory measures to accommodate AI’s diverse roles, ensuring AI enhances rather than complicates regulatory processes.
  • Effective governance frameworks must be adaptable to the varying degrees of human-AI interaction inherent in the different shades of AI, promoting responsible and ethical AI integration in regulatory science.


With the rapid advancement of artificial intelligence (AI), particularly generative AI, its integration into regulatory science practice has never been more challenging. The benefits of AI tools such as ChatGPT have been widely recognized across many domains, including regulatory applications within drug discovery and development. Still, these unprecedented opportunities come with formidable challenges. As we recognize AI’s nuanced roles, from assisting and augmenting human capabilities to adopting autocratic and autonomous functionalities, it becomes clear that a one-size-fits-all approach to applying AI is impractical and counterproductive without considering context-of-use1. Within this context, the delineation of AI’s roles into “50 shades of ‘A’” (i.e., assisting, augmenting, autocratic, and autonomous) not only offers a framework for understanding but also highlights the necessity for tailored regulatory measures that recognize each function’s distinct characteristics and implications.

Integrating AI into regulatory science marks a significant evolution in how regulatory bodies approach review, oversight, compliance, and innovation. AI's role extends from enhancing operational efficiencies to redefining the boundaries of human-machine collaboration. To navigate the landscape of AI in this field, it is crucial to understand the foundational categories that define its application and the context within which each is being used. Additionally, as AI is being incorporated into every stage of drug development and chemical risk assessment, aligning on shades of AI terminology makes communication easier and collaborations more effective.

Within regulatory science, AI is part of a toolbox, and how we use this tool depends on the regulatory application and the shade of AI related to it. Each shade drives how we treat AI in terms of statistical performance, reproducibility, and explainability; this matrix can help us address AI within the context of use in regulatory applications. Importantly, each shade represents a unique interaction between human intelligence and machine capabilities, from automating tedious tasks to enabling machines to learn and operate with minimal human intervention. Here we discuss these distinctions, presenting definitions alongside exemplified applications of these AI categories in the regulatory domain. Through this exploration, we aim to stimulate a dialogue among researchers, policymakers, and practitioners about AI’s transformative potential and challenges within regulatory frameworks.

Presented below is Table 1, which outlines the four pivotal definitions of AI alongside examples pertaining to regulatory application. We recognize that this is just a subset of AI shades, and that the shades can also overlap. Confidence in AI within regulatory science hinges on the specific context-of-use and on each model’s degree of explainability, transparency, and statistical boundaries, which vary based on the significance of the AI-driven decisions and the extent of human involvement in the decision-making process. Essentially, the criticality of the outcome influenced by AI necessitates varying levels of understanding and clarity about how AI models arrive at their conclusions, underlining the importance of these factors in situations where AI plays a pivotal role alongside human experts (Figure 1).

Understanding the context of use for each shade of AI is crucial, as it helps identify and address specific biases inherent in different AI systems. For instance, augmenting intelligence requires an understanding of data and algorithmic biases, such as patient population prevalence in clinical settings. Similarly, autocratic intelligence necessitates awareness of echo-chamber biases, where a narrow set of perspectives derived from limited input data might influence decision-making. Autonomous intelligence, by contrast, has yet to be fully realized within regulatory frameworks, and its eventual realization remains uncertain. There is, however, potential for the development of partially autonomous intelligent systems, akin to partially autonomous vehicles: a vehicle may be able to execute tasks such as parallel parking independently, yet rely on human intervention for highway navigation. Varying degrees of partial autonomy could also be envisaged; extending the vehicle example, the driver might relinquish control even on highways while continuous monitoring by the driver is still mandated. Although the ideal of fully autonomous vehicles entails zero tolerance for accidents, the current reality still witnesses occasional incidents. In such cases, thorough investigation allows causative factors to be identified and corrective measures implemented to preclude recurrence. Within regulatory frameworks, however, the occurrence of such incidents is untenable, necessitating a preference for partially autonomous intelligence, where oversight and corrective action can be more readily ensured.

As AI becomes increasingly integrated into regulatory science, the need for comprehensive governance frameworks becomes paramount. These frameworks must be adaptable to the varying degrees of human-AI interaction inherent in the different “shades” of AI.
Each category embodies a unique aspect of AI’s utility in regulatory science, from automating mundane tasks to enabling machines to learn and adapt independently. By understanding these definitions and their implications, regulatory bodies and stakeholders can better navigate the complexities of integrating AI into their workflows, ensuring that technology serves to enhance, rather than complicate, the regulatory landscape. These definitions, examples, and matrices offer a gateway to further exploration of each category and its practical applications, benefits, and ethical considerations in this crucial field.

In conclusion, navigating the shades of AI in regulatory science requires a comprehensive and forward-thinking approach. By embracing the spectrum of AI's roles and potential, we can develop a regulatory framework that is both responsive and responsible.

The complete article can be accessed with your subscription to Drug Discovery Today, or purchased online.