Client Alert
Dismissal of False Claims Act Lawsuit Tainted by Expert’s AI Hallucinations Presents Cautionary Tale
October 17, 2025
By Gary Giampetruzzi, Jessica R. Montes and Vanna Mavromatis
Introduction
On September 30, 2025, a federal judge granted the United States’ motion to intervene in and dismiss U.S. ex rel. Khoury v. Intermountain Healthcare Inc. et al.,[1] a False Claims Act (FCA) lawsuit in the District of Utah in which the relator had unwittingly disclosed an expert report containing generative AI hallucinations. The qui tam case was dismissed with prejudice to the relator and without prejudice to the United States, which, though unlikely to do so, could conceivably refile if it so chooses. The expert report in question addressed Medicare and Medicaid conditions of payment, and the hallucinations included bogus deposition testimony from a government representative and fictitious quotes from government manuals.
The United States filed its motion on the heels of extensive motion practice between the parties, in which the relator sought to substitute the expert and the defendants moved to disqualify the relator and his counsel for disclosing the report. Though the United States’ motion does not reference the hallucinations, its timing suggests the relator’s disclosure of the expert report was relevant to the government’s decision. This case adds to the growing list of cautionary tales warning litigants that they must remain vigilant for generative AI hallucinations in all briefs and expert reports disclosed in litigation.
Relevant Procedural History
On June 16, 2020, the relator filed a lawsuit under the qui tam provisions of the FCA against Mountain West Anesthesia and individual anesthesiologists, alleging that the anesthesiologists used personal electronic devices to attend to personal matters during anesthetic care that was billed to federal healthcare programs. In 2021, the United States declined to intervene, and the relator proceeded with the lawsuit.
During discovery, a magistrate judge limited the scope of the relator’s deposition of a Centers for Medicare & Medicaid Services (CMS) representative. Consequently, the relator retained an expert (under ordinary circumstances, an unremarkable event) to opine on Medicare and Medicaid conditions of payment for anesthesia care, and on May 15, 2025, served the expert’s report on the defendants.
AI Hallucinations in the Expert Report
On July 21, 2025, the defendants deposed the expert, asking about apparently fictitious or erroneous information in the report and whether he had used AI to prepare it. After initially denying that he used AI to draft the report, the expert admitted that certain content was in fact written by a generative AI tool, that he used the tool to combine three drafts of the report and that he had failed to verify content in certain instances, despite knowing that generative AI may yield hallucinations (i.e., outputs that contain false, nonsensical or inconsistent information). The expert also testified that he did not think to disclose his use of AI to the relator or his counsel, nor did they inquire.
The report’s hallucinations included fake deposition testimony from the CMS representative, fictitious quotations from the Medicare Program Integrity Manual and Nevada and Utah Medicaid documents, and inaccurate titles for certain government and industry publications. The defendants asserted in a motion to exclude the report that 12 excerpts from the report contained fictitious or erroneous content.
Motions on the Expert Report
In briefs relating to the relator’s motion to substitute the expert and the defendants’ motions to exclude the report and for sanctions, the defendants argued that the relator knew of, should have known of or was willfully blind to the expert’s use of generative AI and incorporation of the hallucinations. For example, the defendants asserted that the relator’s counsel should have recognized that certain testimony from the CMS representative did not exist, as counsel attended the underlying deposition and examined the representative, or that certain publications were not titled or quoted correctly, given their importance to the case. Additionally, the defendants noted that the report abounded with typos, inconsistent use of “dumb” and “smart” quotation marks, and other grammatical and stylistic issues. The defendants also flagged the expert’s limited qualifications.
The defendants also argued that the relator’s counsel “knowingly abdicated their own professional responsibilities by failing to take any steps to ensure that the quotations from the testimony and documents upon which [the expert] based his opinions were accurate or even real.”[2] Additionally, the defendants claimed that the relator’s counsel failed to take accountability, shifted blame and obfuscated relevant facts. The defendants asserted that counsel’s conduct warranted their disqualification and that the relator had “grossly perverted” the FCA and should not be permitted to continue representing the United States.
In the relator’s briefs, the relator and his counsel admitted that they did not double-check the report’s citations and quotations, but explained that they intentionally limited their review to general clarity and grammar, in accordance with Federal Rule of Civil Procedure 26(a)(2)(B). According to the relator, that rule — providing that expert testimony “must be accompanied by a written report prepared and signed by the witness” — means attorneys must be careful not to unduly influence an expert report, or else a judge may exclude it. Of course, Rule 26(a)(2)(B) also requires the expert report to disclose “a complete statement of all opinions the witness will express and the basis and reasons for them” and “the facts or data considered by the witness in forming them.” On that issue, the relator and his counsel purportedly also believed the expert had thoroughly double-checked his own work, given that he billed over 150 hours for research and writing and voluntarily declared under penalty of perjury that the report was true and accurate. They also argued that the expert was appropriately qualified and that they were entitled to rely on his report.
According to the relator and his counsel, all the fabrications and errors pointed out by the defendants appeared plausible in context, and many were substantively correct. Because the expert did not disclose his use of the generative AI tool, the relator and his counsel asserted they could not have been expected to review the report for AI hallucinations. For example, the fake testimony from the CMS representative was, in their words, a “substantively correct statement and as such it never jumped out as being a fabrication.”[3] The relator and his counsel acknowledged they had made a mistake by failing to review the report more carefully but argued that disqualification was not a proportional sanction.
Conclusion
Though it is unknown how the court would have ruled on the parties’ motions, it is notable that the court agreed to stay the litigation pending a decision on the motions and that the relator did not oppose the government’s motion to intervene and dismiss, which offered no explanation as to why this case, on the brink of trial, would not serve the government’s interests. The relator may have believed that he would be unable to substitute the expert, who was retained to support a key argument that Medicare and Medicaid would not have paid for the anesthesia care at issue, or that he would be unlikely to overcome the broad dismissal authority afforded to the government in qui tam cases under United States ex rel. Polansky v. Executive Health Resources.[4]
The relator’s admitted failure to identify the hallucinations generated by the AI tool, and the motion practice that followed, should serve as an important reminder to all litigants that their opposing party will likely scrutinize briefs and expert reports for AI hallucinations and may seek sanctions. This case also underscores the role of the relator as a representative of the United States and suggests that AI hallucinations in relator documents concerning government testimony or standards may be viewed as particularly egregious. Thus, litigants generally should take reasonable steps to understand whether their attorneys or experts are using AI and, if so, to guard against hallucinations and other types of errors.
[1] No. 2:20-cv-00372 (D. Utah).
[2] Defendants’ Motion for Sanctions, Dkt. 291, at 14. Initially, the defendants’ briefs did not cite any particular rules of professional responsibility, but they later cited the Utah Rules of Professional Conduct concerning the submission of false evidence, dishonest or prejudicial conduct, and the duty of candor, as well as the court’s inherent disciplinary authority (Defendants’ Reply to Relator’s Response to Motion for Sanctions, Dkt. 303, at 3).
[3] Relator’s Response to Motion for Sanctions, Dkt. 301, at 4.
[4] 143 S. Ct. 1720 (2023).