The False Claims Act, health care, and AI-related risks
As artificial intelligence (AI) develops, its use in the health care sector creates new risks and benefits. The technology is evolving quickly, and its increased utilization has grabbed the attention of regulators and lawmakers. In the past several years, the Department of Justice (DOJ) has intervened in False Claims Act (FCA) qui tam actions alleging that providers used AI to generate false diagnosis codes submitted under Medicare Advantage.4 And in 2024, the U.S. Senate Permanent Subcommittee on Investigations released a report on the “proliferation of information resources, assessment tools, and organizations that [make] case-by-case review of proposed services feasible on a large scale.”5
AI in health care has rapidly developed from a more predictive technology that uses machine learning (ML) to identify patterns and apply them in future coverage determinations, to an advanced ability to produce convincingly human work product that generates new content based on provider input. Previously, companies used predictive AI to output codes assigned to billing claims. Now, AI demonstrates a greater ability to review a patient’s medical records, compare them against a large swath of data on diagnoses, and offer a completely new conclusion for the patient’s care. As AI takes on more tasks independently, rather than working in support of human workers, this transition raises the potential for government enforcement.
Using AI in health care where government programs require precise recordkeeping creates unique FCA risks. New technology does not necessarily require new theories of liability. The government and relators can still rely on certification and false statement theories to pursue companies that use AI to generate or submit false claims. If a health care provider expressly or impliedly certifies compliance with legal requirements when submitting government claims – including, for example, certifying that the services billed were performed, or were performed accurately, despite the use of AI – and the provider was mistaken, FCA enforcement could follow.
For now, litigation over algorithms leading to the submission of false claims is largely nonexistent.6 DOJ has, however, increased its warnings to companies and individuals using AI, which could be a preview of enforcement to come. In July 2024, for example, the Criminal Division of the DOJ submitted its annual report to the U.S. Sentencing Commission, in which it recommended enhanced penalties for defendants who use AI.7 DOJ expressed its concern that AI “can make crimes easier to commit; amplify the harms that flow from crimes once committed; and enable offenders to delay or avoid detection.” Its recommendation includes a Chapter 3 enhancement for the misuse of AI during the commission of an offense, an increased penalty separate from the sophisticated-means and special-skill enhancements. Companies should not ignore the DOJ’s increased focus on AI and criminal activity: while the FCA is a civil statute, the government can also bring a parallel criminal action. If not properly monitored and audited, AI technology could lead to new FCA actions under the same familiar theories.
Resource: The False Claims Act Guide: 2024 and the road ahead
4 See, e.g., https://www.justice.gov/opa/pr/government-intervenes-false-claims-act-lawsuits-against-kaiser-permanente-affiliates (Osinek v. Permanente Medical Group, 640 F. Supp. 3d 885 (N.D. Cal. 2022)).
5 See https://www.hsgac.senate.gov/wp-content/uploads/2024.10.17-PSI-Majority-Staff-Report-on-Medicare-Advantage.pdf.
6 The consolidated Kaiser Permanente case, which led to partial government intervention, is the only standout FCA enforcement involving AI. See https://www.justice.gov/opa/pr/government-intervenes-false-claims-act-lawsuits-against-kaiser-permanente-affiliates (Osinek v. Permanente Medical Group, 640 F. Supp. 3d 885 (N.D. Cal. 2022)).
7 See https://www.justice.gov/criminal/media/1362211/dl?inline.