Texas Judge Mandates AI Disclosure in Court: Landmark Ruling Targets ChatGPT Use in Legal Filings

In a landmark move for the US legal system, a federal judge in Texas has mandated that all attorneys appearing before his court must explicitly disclose if artificial intelligence was used to draft their legal filings.

US District Judge Brantley Starr of the Northern District of Texas has introduced a new standing order compelling lawyers to certify that no generative AI tool—such as OpenAI's ChatGPT—was employed in preparing documents, or if it was, that every citation and legal analysis was meticulously verified by a human.

The Catalyst: AI 'Hallucinations' Invade the Courtroom

The ruling comes in direct response to a growing number of alarming incidents where AI-generated legal submissions have cited non-existent cases, precedents, and rulings—a phenomenon known as 'hallucination'.

Judge Starr's order leaves no room for ambiguity, stating: "These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument. But legal briefing is not one of them."

The judge highlighted the core issue: these AI systems are trained to be persuasive, not to uncover truth. Their primary function is to predict the next word in a sequence, not to provide accurate legal reasoning.

A Proactive Measure for Legal Integrity

This isn't merely a guideline; it's an enforceable requirement. Any attorney filing documents in Judge Starr's court must now include the following certification:

"I certify that no generative artificial intelligence programme provided any text in the filing I have submitted. Alternatively, if one was used, the provided text was independently checked for accuracy by a human being."

This proactive stance aims to preserve the integrity of legal proceedings and keep fabricated information out of the court record. It places the burden of verification squarely on the legal professionals involved.

A Growing National Concern

Judge Starr's order reflects growing concern within the US judiciary over the unregulated use of AI in legal practice. It follows widely reported incidents in which lawyers faced sanctions for submitting AI-generated briefs citing fictitious cases.

This ruling sets a significant precedent, likely to be closely watched and potentially adopted by other federal and state courts across the United States as they grapple with the challenges and ethical dilemmas posed by rapidly advancing AI technology.