2025 Horizons Life Sciences and Health Care
Use of AI in compliance and investigations:
Expectations from regulators and enforcement agencies
The rise of AI has profoundly impacted industries across the board. Recent AI advances are not only redefining business processes but also reshaping how governments respond to the technology's use. The life sciences industry faces a noticeable shift in expectations from regulators and enforcement agencies: key jurisdictions have adopted significant policy updates targeting the misuse of AI, while also setting expectations on where AI should be used.
Compliance expectations are tightening around the world.
In September 2024, then-Principal Deputy Assistant Attorney General (PDAAG) Nicole Argentieri of the U.S. Department of Justice (DOJ) announced revisions to the Evaluation of Corporate Compliance Programs (ECCP) guidance addressing the use and assessment of risks associated with emerging technologies. The 2024 changes drew prosecutors’ attention to the “deliberate or reckless misuse” of new and emerging technologies (especially AI). At the same time, the updates made clear that compliance programs need to use AI and technology where doing so helps achieve compliance goals.
Just two months later, new Guidance to Organisations on the Offence of Failure to Prevent Fraud was published in the United Kingdom. The guidance accompanies the new corporate offence of failure to prevent fraud introduced by the Economic Crime and Corporate Transparency Act 2023. In describing the required elements of a compliance system, the guidance likewise expects organisations to use appropriate technology in managing fraud risks. Notably, the Serious Fraud Office (SFO) has considerably increased its AI-trained staff in recent years.
Similarly, against the backdrop of the enactment and implementation of the EU AI Act, German enforcement agencies are watching for the potential misuse of AI. Germany’s Federal Financial Supervisory Authority (BaFin), for example, introduced principles for the use of algorithms in decision-making processes as early as 2021. At the same time, German enforcement agencies expect companies to use technology to make their compliance programs more robust and to complete investigations with the required speed and depth. In addition, the agencies increasingly work with vendors that use AI to obtain large amounts of data, thereby accelerating their investigations.
These significant recent developments call for the following:
- Company-wide AI governance frameworks. These frameworks should define clear accountability and oversight mechanisms and align on AI initiatives and acceptable uses.
- Implementation of AI and appropriate technology in compliance and investigation processes. This includes, for example, AI in compliance monitoring, in compliance spot checks, and in investigations.
- Periodic checks on the potential misuse of AI. Regulators and enforcers are wary of the misuse of AI to circumvent compliance safeguards and expect companies to adopt defensive strategies to guard against the misuse of advanced technologies by bad actors.
Désirée Maier
Partner
Munich
Jodi Scott
Partner
Denver
Peter S. Spivack
Partner
Washington, D.C.
Lilian Michaelis
Associate
Berlin