Product liability in the EU for AI-powered medical devices
Current European Union (EU) product liability law and
industry-specific regulations, including the Medical
Device Regulation (MDR), were primarily designed for
traditional medical devices that rely on predictable
algorithms and well-established protocols. By contrast,
AI-powered medical devices are often highly complex,
interconnected with other devices, and built on machine
learning (ML) algorithms that can autonomously evolve
over time as they process new data. This complexity
introduces new product liability challenges, including
the risks of algorithmic bias, AI hallucination, and
software malfunction.
To address these regulatory challenges in the era of AI, the EU
legislature has, in particular, enacted the AI Act (Regulation (EU)
2024/1689) and adopted a revised Product Liability Directive
(Directive (EU) 2024/2853, PLD), which entered into force on
1 August 2024 and 8 December 2024, respectively. The EU
Commission had also proposed a dedicated AI Liability Directive,
which would have eased claimants' burden of proof in certain
respects. In February 2025, however, the Commission unexpectedly
withdrew this legislative initiative.
The AI Act establishes harmonized rules for AI systems across
various sectors, including medical devices, following a risk-based
approach. AI systems deemed to pose an unacceptable risk are
prohibited, while those classified as limited risk must meet
transparency requirements; for minimal-risk AI systems, only
voluntary codes of conduct apply. AI-powered medical devices,
which are typically classified as high-risk AI systems, must
undergo a conformity assessment before being placed on the EU
market. Furthermore, the AI Act introduces additional obligations
beyond those under the MDR, including requirements concerning
data governance, transparency, and human oversight.
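For illustration only, the AI Act's tiered logic can be pictured in code. The sketch below is a simplification, not statutory text: the tier names, the obligation summaries, and the obligations_for helper are hypothetical shorthand for the risk-based approach described above.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # e.g. most AI-powered medical devices
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # voluntary codes of conduct only

# Simplified, non-authoritative summary of the consequences per tier.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: [
        "conformity assessment before placing on the EU market",
        "data governance, transparency, and human oversight duties",
    ],
    RiskTier.LIMITED: ["transparency requirements"],
    RiskTier.MINIMAL: ["voluntary codes of conduct"],
}

def obligations_for(tier: RiskTier) -> list:
    """Return the (simplified) obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

# An AI-powered medical device is typically a high-risk AI system:
print(obligations_for(RiskTier.HIGH))
```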
The obligations set forth in the AI Act will be introduced
progressively. Although the general application date is
2 August 2026, conformity assessments for high-risk AI systems
will not be required until 2 August 2027. Stakeholders will need
to perform a gap analysis: identify their existing AI technologies,
categorize them by risk level, and incorporate the additional AI
Act requirements into their governance framework (a sketch follows
below). At the same time, stakeholders must continue to comply with
sector-specific requirements under the MDR, particularly where
these obligations are more stringent than those under the AI Act,
as is the case with reporting requirements for (potential) serious
incidents.
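A minimal sketch of such a gap analysis follows, assuming a hypothetical inventory format: the AISystem record, its field names, and the gap descriptions are illustrative placeholders, not a compliance tool.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class AISystem:
    name: str
    risk_tier: str                # "unacceptable" | "high" | "limited" | "minimal"
    mdr_class: str | None = None  # MDR class, if the system is a medical device
    gaps: list[str] = field(default_factory=list)

def gap_analysis(inventory: list[AISystem]) -> list[AISystem]:
    """Flag simplified AI Act items per system; MDR duties run in parallel."""
    for system in inventory:
        if system.risk_tier == "high":
            system.gaps += [
                "AI Act conformity assessment (required from 2 August 2027)",
                "data governance, transparency, human oversight",
            ]
        elif system.risk_tier == "limited":
            system.gaps.append("transparency requirements")
        # Where MDR obligations are stricter (e.g. serious-incident
        # reporting), the stricter sector-specific rule prevails.
        if system.mdr_class is not None:
            system.gaps.append("verify stricter MDR vigilance/reporting duties")
    return inventory

inventory = [AISystem("ML-based triage software", "high", mdr_class="IIa")]
for s in gap_analysis(inventory):
    print(s.name, "->", s.gaps)
```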