Increasing regulatory oversight of AI chatbots used by HCPs and patients
The use of AI-facilitated chatbots by health care professionals (HCPs) – and by patients in relation to pharmaceutical and biological products – is accelerating rapidly. Chatbots may recommend when a product can be prescribed, or address the reimbursement and coding landscape for a product. For patients, chatbots that offer instructions on the proper use of a drug and answer related questions are becoming increasingly common as well.
In the EU, under the AI Act, AI-facilitated chatbots for HCPs or patients may be classified as "high risk" if they are a "medical device" or part of one. Sponsors of these products must therefore consider whether the chatbot has a specific medical use and meets the definition of a "medical device." If so, and the product is classified as "high risk," the AI system must comply with strict legal requirements, including risk management, data governance, and a conformity assessment procedure.
Beyond the legal requirements stemming from AI legislation, there are regulatory considerations and requirements as well. Where a chatbot is used by HCPs and patients alongside the product itself – e.g., in a clinical trial or in real-world use – it may affect patient safety as well as compliance with regulatory obligations. For example, in a clinical trial, where the accuracy of the data submitted in the dossier depends on correct product use, a chatbot must support use of the product in accordance with the label.
However, even where a chatbot that deploys AI is not deemed a "medical device," certain minimum requirements of the AI Act must still be met. These include AI literacy obligations for those operating the system, transparency requirements, and privacy considerations, among others.
The European Medicines Agency (EMA) has released a reflection paper on the use of AI in medicinal product development. Under this paper, deployers of AI have to perform a risk assessment that considers and addresses both patient risk and regulatory risk. This has to be done in a structured and documented process, ideally based on underlying company SOPs.
Dr. Jörg Schickert
Partner
Munich
Visit our website to read more on Digital Health and AI.