Increasing regulatory oversight of AI now includes mitigation plan expectations
Many pharmaceutical and medical device companies now use AI-powered chatbots that respond to human language, or other general-purpose AI tools, for their in-house work. When a life sciences company uses general-purpose AI, that use often supports employees across the life cycle of a pharmaceutical or device product, i.e., in research and development, clinical trials, application procedures, postmarket surveillance, promotion, or reimbursement.
Where companies use company-adapted, general-purpose AI
(e.g., their own in-house adapted chatbot), they must ensure
compliance with nation-specific AI legislation. Compliance
requires determining whether the in-house adaptation of general-purpose AI means the company is now deemed a “provider” of that AI, rather than merely a “deployer” of the AI tool. Significantly greater legal obligations are imposed on AI “providers”; for example, providers need to map their different AI use cases to ensure that each use happens in a compliant and informed manner.
Where AI is used in a life science product, such as a medical
device, that use needs to comply with sector-specific regulatory
requirements, e.g., the performance and safety requirements
pursuant to the European MDR, or FDA requirements.
However, beyond that, the use of AI in the life cycle of a regulated pharmaceutical or device product (where the product does not itself incorporate AI) also requires attention from a regulatory perspective. Some global regulators have asserted that any use of AI in the research, development, and approval process of pharmaceutical and device products also needs to undergo a risk assessment process.
The European Medicines Agency has released a reflection
paper on the use of AI in medicinal product development
and regulation, emphasizing the need for greater transparency,
accountability, and ethical considerations in AI applications.
It identifies two primary risks that companies using AI
should address:
- Regulatory risk: the potential effect of AI use on the quality of data submitted within a dossier or on authority decision-making.
- Patient risk: the possibility that AI use in preparing a clinical trial design, or in adverse event or incident tracking or reporting, may pose risks to patients.
Once these AI risks are identified, regulators expect them to be appropriately addressed in a “risk mitigation plan.”
Dr. Jörg Schickert
Partner
Munich