A United States court has thrown out a motion filed by a federal prosecutor after it was discovered to contain multiple inaccurate citations generated by artificial intelligence. The embarrassing incident has raised serious questions about the use of AI tools in legal proceedings and about the adequacy of verification practices within the justice system.
The Problematic Motion
The problem came to light after Assistant US Attorney Thomas Windom submitted a motion citing several legal precedents in a criminal case. On closer examination, defence attorneys and the presiding judge discovered that many of the cited cases either did not exist or had been fundamentally misrepresented.
When questioned about the erroneous citations, Windom admitted to using an unspecified artificial intelligence tool to assist in drafting the motion. He claimed the AI had "hallucinated" the cases and legal arguments, creating seemingly plausible but entirely fictional legal precedent. The prosecutor acknowledged he had failed to properly verify the AI-generated content before submitting it to the court.
Judicial Response and Consequences
US District Judge James Selna did not mince words in his response to the AI-generated errors. He denied the motion in its entirety and issued a stern warning about the dangers of relying on artificial intelligence without adequate human oversight in legal matters.
The judge emphasised that legal professionals bear ultimate responsibility for the accuracy of all submissions to the court, regardless of what tools they use in their preparation. The incident, which occurred in November 2024, has since sparked broader discussion within legal circles about establishing proper protocols for the use of AI in law.
Broader Implications for Legal AI Use
This case represents one of the most high-profile instances of AI-generated errors affecting official court proceedings. It highlights the persistent problem of AI hallucination, in which artificial intelligence systems fabricate plausible-sounding but false information and present it as fact.
Legal experts have expressed concern that such incidents could undermine the integrity of judicial processes if they become more common. Many are now calling for clear guidelines governing how AI can be ethically and responsibly used in legal practice, including mandatory verification procedures for any AI-generated content.
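By way of illustration, the short Python sketch below shows what a minimal pre-filing verification step of that kind might look like: it extracts candidate case citations from a draft and flags any that cannot be matched against an independently confirmed source. Everything in it is a hypothetical stand-in; the citation pattern, the `verify_citations` helper, and the hard-coded `confirmed_cases` set are illustrative assumptions, and a real workflow would query an authoritative reporter or legal database rather than a local list.

```python
import re

# Matches citations of the common reporter form
# "Party v. Party, 123 F.3d 456". Hypothetical and deliberately
# simplified; real citation formats are far more varied.
CITATION_RE = re.compile(
    r"[A-Z][\w.'-]*(?:\s[A-Z][\w.'-]*)*"  # first party name
    r"\sv\.\s"                            # "v." separator
    r"[A-Z][\w.'-]*(?:\s[A-Z][\w.'-]*)*"  # second party name
    r",\s\d+\s[A-Za-z0-9.]+\s\d+"         # volume, reporter, page
)


def extract_citations(draft: str) -> list[str]:
    """Pull candidate case citations out of a draft filing."""
    return CITATION_RE.findall(draft)


def verify_citations(draft: str, confirmed_cases: set[str]) -> list[str]:
    """Return every citation that could not be confirmed; an empty
    list means each cited case was found in the reference source."""
    return [c for c in extract_citations(draft) if c not in confirmed_cases]


if __name__ == "__main__":
    # Placeholder data standing in for an authoritative lookup.
    confirmed_cases = {"Smith v. Jones, 123 F.3d 456"}
    draft = (
        "As held in Smith v. Jones, 123 F.3d 456 (9th Cir. 1997), and "
        "reaffirmed in Doe v. Roe, 999 F.2d 111 (2d Cir. 1993), ..."
    )
    # Prints the unconfirmed Doe v. Roe citation for a human to check
    # by hand before the motion is filed.
    print(verify_citations(draft, confirmed_cases))
```

A check of this kind can only confirm that a cited case exists in some reference source; confirming that the case actually says what the filing claims it says still requires a human reader, which is precisely the oversight the court found lacking here.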
The incident serves as a cautionary tale for legal professionals worldwide, including those in the UK, about the potential pitfalls of over-relying on emerging technologies without maintaining traditional standards of due diligence and verification.