Prescribing AI: Contrasting Approaches to AI in Drug Development across the Atlantic

According to a common quip, the U.S. innovates on the technology front while Europe innovates on the regulatory front. It has been argued that regulating AI with more emphasis on risks than on innovation is a common European theme so far, especially with the AI Act on the horizon. In this context it is interesting to observe and compare how regulatory bodies in the U.S. and Europe position themselves. For the pharmaceutical sector, the U.S. Food and Drug Administration (“FDA”) and the European Medicines Agency (“EMA”) have recently published papers which give some insight into their positions on the regulation of artificial intelligence (“AI”) and machine learning (“ML”) in medicinal product development (Artificial Intelligence and Machine Learning (AI/ML) for Drug Development | FDA; The use of Artificial Intelligence (AI) in the medicinal product lifecycle | European Medicines Agency). But does it hold true in this case that the EMA is more compliance-focused, not prioritizing the innovation potential of AI, while the FDA leaves more room for innovation, with a lesser compliance focus?

Basis for the Evaluation

FDA and EMA both follow a risk-based approach, meaning that regulatory requirements depend on the specific context of use and consider the specific risks arising in the phases of the lifecycle of medicinal products, while also taking into account the advantages of AI and the impact of regulation on innovation.[1] Both papers discuss different areas in which regulation can be considered, which are outlined in the following sections.


Both papers discuss the need for human governance of data, algorithms, and AI/ML models throughout the lifecycle of medicinal products, to ensure compliance with relevant laws (e.g., data protection laws) and with ethical standards and principles on AI, but also to promote trust in the effectiveness, reliability, and fairness of AI through accountability and transparency.[2]


The FDA paper touches on ensuring data quality, reliability, representativeness, integrity, privacy, and security, as well as documentation of data acquisition (“provenance”), relevance of the data for the intended use, and replicability and reproducibility of results.[3] With regard to these areas, the EMA paper goes into more detail, describing how data should be acquired, processed, analyzed, augmented, and documented to create a balanced dataset, free of biases.[4] It also references legal standards for data protection and discusses approaches to ensure the integrity of the data.[5]

Take data integrity as an example. Where the EMA states that personal data used for model development must be evaluated to mitigate risks of re-identification, the FDA merely notes that potential data-related issues to consider include data integrity, before defining “integrity” and finally asking stakeholders what practices they are currently utilizing to assure integrity.[6] This leaves room for interdisciplinary discussion and regulation.


Furthermore, both papers describe the need to monitor, document, and assess model risks, planning, development, performance, and function. This is generally seen as necessary to identify, manage, mitigate, and review risks, but also to ensure transparency and explainability of the models’ outputs.[7] Both bodies also agree that corrective action should be taken whenever necessary to achieve intended results.[8] Interestingly, in this context the EMA paper mentions risk management plans especially for models “where there is no human-in-the-loop”.[9] At first this seems to contradict the principle of human governance, but it is most likely a reference to the guidelines of the High-Level Expert Group on AI established by the European Commission.[10] These guidelines differentiate between three approaches: (i) “human-in-the-loop” (the capability for human intervention in every decision cycle of the system, which is often neither possible nor desirable); (ii) “human-on-the-loop” (where humans intervene during the design cycle and monitor the system); and (iii) “human-in-command” (the capability to oversee the overall activity of the AI system and to decide when and how to use the system in any particular situation).[11] Thus, the EMA seems to see a particular need for risk management plans for models that do not allow for human intervention in every decision cycle. In medicinal product development, conceivable examples are the use of AI in manufacturing, in post-authorization market monitoring, and generally in every area where large quantities of data are processed and human intervention would be inefficient. In any case, the FDA seems to prefer a simpler approach, as it simply considers risk management plans a part of [human-led] governance and questions stakeholders on good practices for providing meaningful human involvement.[12]

While the FDA does promote accountability for AI in drug development, it does not explicitly allocate any responsibilities to specific stakeholders.[13] The EMA, on the other hand, follows a different approach, partially outlining responsibilities of marketing authorization holders for model development and data protection.[14] Both papers also discuss the need for verification and validation of data.[15]

Finally, when assessing performance, both bodies recommend using transparent models if they have a similar performance to complex, non-transparent models, to ensure explainability. Therefore, non-transparent models should only be used if they show better performance.[16] This is a noteworthy admission, since the general approach thus far has been primarily focused on promoting transparent models, and it could be due to the fact that a different approach would exclude many exciting models from being placed on the market.

Regulatory Interactions

Another difference between the papers is that the EMA paper discusses the necessity for interaction with regulators.[17] Although the FDA paper asks whether “transparency” explicitly includes communication of information with regulators and/or stakeholders, it does not give insight into its position on the topic.[18] The EMA approach undoubtedly provides more legal certainty for companies, which can influence their willingness to invest time and money in innovative projects. While the FDA’s restrained approach to interaction with stakeholders could in this context seem to suppress innovation by not providing sufficient legal certainty, in the U.S. the approach to regulation is generally more liberal and regulators are not known for trying to catch companies offside in grey areas of law. Hence, the FDA’s restrained approach is in line with its innovation-friendly approach to regulation.

Regulatory Impact

Finally, the EMA paper refers to the need to consider regulatory impact but does not outline risks of overregulation for the development and use of AI.[19]

In this area the FDA paper does not explicitly reference regulatory impact, as it is not a guidance or policy, nor does it endorse specific approaches. Instead, it considers it implicitly when aiming to gather feedback, initiate discussions, and inform future regulatory activities.[20] This could be seen as an attempt to balance risk regulation and the promotion of innovation. The difference in approach could play a significant role, for example, in the areas of documenting, monitoring, and assessing models and data.

Conclusion: Contrasting Approaches

The papers discuss similar advantages, risks, and potential requirements. However, the EMA provides relatively concrete considerations for the use of AI/ML, giving detailed insight into risks and the resulting requirements to mitigate those risks. The FDA, on the other hand, explicitly states that its paper is neither a guideline nor a policy, before similarly discussing the different phases of development of medicinal products, but primarily highlighting advantages.[21]

Finally, while both authorities seek stakeholder feedback on their reflection papers, the EMA’s considerations seem to represent a more developed regulatory concept, as they are also to be read in coherence with legal requirements and overarching EU principles, and they discuss the application of currently established regulatory principles, guidelines, and best practices to AI in medicinal product development.[22] This implies that a general legal framework for regulation already exists, narrowing the scope in which stakeholders can give feedback.

The FDA meanwhile refers to overarching standards and practices, but also acknowledges that these have not been tailored specifically for drug development and that the utility and applicability of these standards is yet to be explored.[23] Instead, the feedback and discussions with stakeholders are to be used to help inform future regulatory activities, before adapting standards and practices to address the use of AI/ML in the context of drug development. This approach is additionally highlighted by the questions for stakeholders formulated for all key areas discussed above, showing an open-minded approach to potential future regulation.[24]

Based on a side-by-side comparison of the papers, it appears that one of the main drivers for the EMA’s approach was how to fit AI solutions into the current regulatory framework, whereas the FDA’s approach rather aims at triggering general discussions on how to structure future regulatory frameworks. Both approaches have their pros and cons in terms of openness to innovative future approaches.

While the EMA’s approach may provide guardrails for solutions currently being developed/implemented, thereby attempting to create regulatory certainty that could arguably encourage innovation, it could also be seen as creating an overly prescriptive framework with regulatory requirements that, by their very nature, have the potential to stifle innovation. The FDA’s approach, on the other hand, may lack this legal certainty and therefore leave companies uncertain (at least for the time being) about the applicable requirements, which may likewise stifle innovation. Conversely, however, it may also encourage innovation by taking a more ‘observatory approach’, allowing companies to test the current AI environment and only regulate where actual risks arise, based on the real-world experience gained during the ‘experimental phase’. Either way, it remains exciting to see where the journey of both regulators takes us.


This article is part of our AI & Life Sciences series. Please find our first and other articles here. In case of questions, do reach out to Nils Lölfing, Christian Lindenthal or Hester Borgers.


[1] EMA, Reflection Paper on the use of Artificial Intelligence (AI) in the medicinal product lifecycle, l. 90ff.; FDA, Using Artificial Intelligence & Machine Learning in the Development of Drug & Biological Products, l. 66ff., 591ff.

[2] EMA, l. 359ff., 410ff.; FDA, l. 609ff., 698 (1).

[3] FDA, l. 698 (2).

[4] EMA, l. 255ff.

[5] EMA, l. 363ff., 382ff.

[6] EMA, l. 385f.; FDA, l. 698 (2).

[7] EMA, l. 90ff., 250ff., 298ff., 327ff., 354ff., 364ff., 316ff., 415, 426ff.; FDA, l. 698 (1), (3).

[8] EMA, l. 356; FDA, l. 698 (3).

[9] EMA, l. 354f.

[10] High-Level Expert Group on Artificial Intelligence | Shaping Europe’s digital future.

[11] High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI, p. 16.

[12] FDA, l. 698 (1).

[13] FDA, l. 698 (1).

[14] EMA, l. 299ff., 364ff.

[15] EMA, l. 281ff.; FDA, l. 698 (3).

[16] EMA, l. 327ff.; FDA, l. 698 (3).

[17] EMA, l. 234ff.

[18] FDA, l. 698 (1).

[19] EMA, l. 92.

[20] FDA, l. 680ff.

[21] EMA, l. 114ff.; cf. FDA, l. 88ff.

[22] EMA, l. 420ff.

[23] FDA, l. 609ff.

[24] FDA, l. 671ff.

Written by
Hester Borgers
As an associate in our Intellectual Property Group in Amsterdam, I specialise in patent law and life sciences regulatory. I have experience in complex patent litigation, with a strong focus on the medical devices industry and pharmaceutical sector. Furthermore, I assist a broad range of life sciences clients with their regulatory matters, including litigation. My regulatory experience also includes advising with regard to all things digital health - from telemedicine to AI.
Christian Lindenthal
As a partner in our Munich team, I advise clients from the life sciences sector on matters at the interface between IP, unfair competition and regulatory law. I am a member of our Intellectual Property Practice Group as well as our Life Sciences and Healthcare Sector Group. Having an IP background, I specialise in advising clients from heavily regulated industries, most notably from the life sciences sector, but also from related sectors such as food and beverages, and cosmetics. My clients range from global players to innovative start-ups.
Nils Loelfing
I am a counsel in our Technology & Communications Sector Group. I provide pragmatic and solution-driven advice to our clients on all issues around data and information technology law, with a strong focus on and experience with AI and machine learning projects. My experience includes providing expert advice to clients on various aspects of technology, data and data protection law, in particular GDPR, mobile commerce, ecommerce and the commercialization of data ("Data as an asset"). I have provided advice on complex GDPR and data law issues to a number of major companies including from the life sciences sector. Over the years, I have gained particular experience in hot topics such as the Internet of things, (personal and non-personal) data, machine learning and AI.
Dylan Bossmann Everitt
I am an intern at Bird & Bird's office in Hamburg.
