In December 2023, the joint HMA-EMA Big Data Steering Group (BDSG) published a multi-annual (2023-2028) AI workplan to guide the use of AI in medicines development. The HMA and EMA recognise that pharmaceutical companies are increasingly using AI-powered tools in the research, development and monitoring of medicines. The workplan states that the aim is to embrace the opportunities of AI for personal productivity, automating processes and systems, increasing insights into data and supporting more robust decision-making, to the benefit of public and animal health. The workplan focuses on four key areas to facilitate the development and use of responsible and beneficial AI:
Guidance, policy and product support
The development and evaluation of AI guidance across the medicines lifecycle continues to be a focus point, including exploration of the need for guidance in domain-specific areas such as pharmacovigilance. An important part of this will be finalising the EMA's current draft reflection paper, taking into consideration the feedback from stakeholders (find our previous article on the EMA draft reflection paper here). More specifically, guidance on the use of large language models (LLMs) will be developed. In addition, work will be initiated in 2024 in preparation for the AI Act coming into force. This also includes the creation of a so-called AI observatory (including horizon scanning) at the end of 2024, to enable continuous monitoring of the impact of AI and of the emergence of new systems.
Tools & technologies
The workplan notes that LLMs, particularly chatbots in the role of personal assistants, are very likely to become an important tool. In 2024, their implementation will be phased and monitored to ensure the benefits are maximised. At the end of 2024, a Network Tools policy on open and collaborative AI development will be published, which aims to foster collaboration, integration and reusability of AI tools and models.
Even though the workplan only refers to LLMs in the form of chatbots used as personal assistants, in recent years we have also seen an increased use of LLMs, and chatbots in particular, in patient engagement programmes, so the upcoming guidance will be of importance here as well.
Collaboration and change management
The workplan lists several partners with which the European Medicines Regulatory Network will (continue to) collaborate, including the International Coalition of Medicines Regulatory Authorities (ICMRA). At EU level, an AI Virtual Community will be established together with other EU agencies, the medical devices sector and academia.
The European Specialised Expert Community of the EMA Methodology Working Party will establish a so-called Special Interest Area on AI.
Possibly the most interesting topic for the (pharmaceutical) industry is experimentation and the (regulatory) leeway companies will have while incorporating or developing AI.
The workplan underlines the importance of experimentation, deeming it fundamental to expedite learning and reduce uncertainty about a technology or system. So-called experimentation cycles of up to six months are proposed. Principles and guidelines for these cycles will be developed, including the identification of research priorities with which the cycles will align. The experimentation process also includes proposed technical 'deep dives', which will focus on specific tools and techniques.
All in all, the workplan touches upon many different areas and focus points, and mainly identifies areas in which further guidance will be needed. It does show, however, that the EMA and HMA are keen to develop solid regulatory guidance with regard to AI, while also understanding the importance of specific regulatory 'room' for experimentation.
This article is the start of our AI & Life Sciences series. In our next articles we will dive deeper into some of the topics mentioned; see below for our future articles. In case of questions, do reach out to Christian Lindenthal or Hester Borgers.