HHS expects AI product monitoring throughout its life cycle
In overseeing health care AI, the U.S. Department of Health and
Human Services (HHS) is increasingly signaling a focus on
organizations' ongoing assessment of products throughout their
entire life cycle. This focus is demonstrated by HHS's recently
released Strategic Plan and by the frameworks and guidance
issued by several HHS agencies.
For example, the FDA's recent guidance outlined key regulatory
expectations for the use of AI in regulatory decision-making
regarding the safety, effectiveness, or quality of drugs and biologics,
including a risk-based framework for assessing AI credibility. The
FDA expects companies to develop a life cycle maintenance plan
that is integrated into the manufacturing quality system and
submitted with marketing applications. HHS's actions emphasize
specific compliance measures that regulators expect companies
to implement for AI products used in health care:

Compliance with data privacy and cybersecurity:
Awareness of and adherence to appropriate privacy practices
and security safeguards are necessary to comply with existing
and evolving privacy and cybersecurity requirements. This
includes the need to protect sensitive patient information
and to confirm that companies and associated third parties
involved in developing and deploying AI systems have the
appropriate permissions to collect, use, and share this information.
Bias mitigation: To create AI systems that are fair, accurate,
ethical, and trustworthy, potential biases must be addressed.
AI systems should be tested on diverse, representative
datasets and in real-world scenarios to reduce the risk of
discrimination based on race, gender, socioeconomic status,
or other factors, and to help ensure that the system works
equally well across different population groups.
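As an illustration of what such subgroup testing can look like in practice, the minimal Python sketch below scores a binary classifier separately for each demographic group and flags groups that fall well below overall performance. Every name, column, and tolerance here is a hypothetical example, not drawn from HHS or FDA guidance.

```python
# Illustrative sketch only: checking whether a model performs comparably
# across population subgroups. Column names and the 0.05 tolerance are
# hypothetical choices, not regulatory requirements.
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_performance(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Compute AUROC per subgroup; df must hold 'y_true' and 'y_score' columns."""
    rows = []
    for group, sub in df.groupby(group_col):
        if sub["y_true"].nunique() < 2:
            continue  # AUROC is undefined when a subgroup has only one class
        rows.append({group_col: group,
                     "n": len(sub),
                     "auroc": roc_auc_score(sub["y_true"], sub["y_score"])})
    return pd.DataFrame(rows)

def flag_disparities(perf: pd.DataFrame, overall_auroc: float,
                     tolerance: float = 0.05) -> pd.DataFrame:
    """Return subgroups whose AUROC trails the overall model by more than tolerance."""
    return perf[perf["auroc"] < overall_auroc - tolerance]
```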
Continuous monitoring: Regular monitoring of AI systems
supports sustained accuracy and consistency in system
performance. AI systems should be monitored after deployment
to detect emerging risks and performance issues, such as
performance drift. Ongoing monitoring should also address
data quality and management throughout the AI system's
life cycle to support long-term performance.
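One simple form such post-deployment monitoring can take is a rolling comparison of live performance against the validation baseline. The sketch below is a minimal illustration under stated assumptions; the window size and alert threshold are arbitrary example values, not a prescribed method.

```python
# Illustrative sketch only: a rolling-window drift check that raises a
# flag when live accuracy falls meaningfully below the validation
# baseline. window_size and max_drop are hypothetical example values.
from collections import deque

class DriftMonitor:
    def __init__(self, baseline_accuracy: float,
                 window_size: int = 500, max_drop: float = 0.05):
        self.baseline = baseline_accuracy
        self.window = deque(maxlen=window_size)  # recent correct/incorrect flags
        self.max_drop = max_drop

    def record(self, prediction, outcome) -> bool:
        """Log one labeled prediction; return True if drift is suspected."""
        self.window.append(prediction == outcome)
        if len(self.window) < self.window.maxlen:
            return False  # too little data for a stable estimate
        live_accuracy = sum(self.window) / len(self.window)
        return live_accuracy < self.baseline - self.max_drop
```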
Validation of algorithms: Confirming the validity of
algorithms is essential to protecting patient safety.
Systems should undergo rigorous validation to demonstrate
that they are appropriately trained and correctly deployed
in each specific context.
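As a sketch of what context-specific validation might look like, the example below gates deployment at a given site on a held-out sample from that site meeting minimum performance thresholds. The metrics and floor values are illustrative assumptions only.

```python
# Illustrative sketch only: gating deployment on site-specific validation,
# so the model is checked on data from the context where it will be used.
# The acceptance thresholds are hypothetical, not regulatory values.
from sklearn.metrics import roc_auc_score, accuracy_score

ACCEPTANCE = {"auroc": 0.80, "accuracy": 0.75}  # illustrative floor values

def validate_for_site(y_true, y_score, threshold: float = 0.5) -> dict:
    """Score the model on a held-out sample drawn from the target deployment site."""
    y_pred = [int(score >= threshold) for score in y_score]
    results = {"auroc": roc_auc_score(y_true, y_score),
               "accuracy": accuracy_score(y_true, y_pred)}
    results["approved"] = all(results[m] >= ACCEPTANCE[m] for m in ACCEPTANCE)
    return results
```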
User feedback and reporting: Establishing a clear process
for collecting and responding to feedback is critical for
optimizing system performance and building trust in AI
systems. This includes transparency about where and how
AI systems are deployed, and about the data and entities
involved in their training and deployment.
As AI adoption in health care continues to grow and evolve
rapidly, regulators are prioritizing transparency, equity,
and privacy as core guiding principles for responsible AI use.
A life cycle approach to AI risk management facilitates prompt
identification and mitigation of potential AI risks, enabling
trustworthy AI that safeguards patients and fosters continued
support for innovative technologies.
Alyssa Golay
Senior Associate
Washington, D.C.
Ashley Grey
Associate
Washington, D.C.
Surya Swaroop
Associate
Washington, D.C.
Read the Life Sciences and Health Care chapter
AI Trends guide