
Australian Federal Court Warns Lawyers Over 'Unacceptable' Use of AI in Legal Cases

The Federal Court of Australia has issued a stark warning to the legal profession regarding the use of generative artificial intelligence in court proceedings. In new guidance released on Thursday, the court emphasized that while it embraces technological advancements, lawyers must adhere to strict rules or face serious consequences, including adverse costs orders and breaches of professional obligations.

New Practice Note Addresses AI Risks in Court Filings

Amid a surge in court filings across Australia and globally that have included false citations and errors generated by AI, the federal court has introduced a comprehensive practice note. This document outlines permissible uses of AI and mandates disclosure requirements to prevent the presentation of inaccurate information. Chief Justice Debra Mortimer stated that misleading the court with AI-generated content is "unacceptable" and undermines the just, efficient, and cost-effective resolution of cases.

Mortimer highlighted specific risks, noting that AI tools can produce fictitious cases, citations, quotes, and factual errors. Lawyers are now required to confirm whether AI has been used in preparing documents, verify that cited legal authorities exist and support their arguments, and ensure affidavits and expert reports reflect genuine recollection or knowledge even if AI is employed.

Disclosure and Confidentiality Concerns

The practice note mandates that any use of generative AI must be disclosed at the start of documents, detailing how and where the technology was applied. This includes instances where AI summarises or analyses information, creates images, videos, or sound recordings presented in court, or affects evidence admissibility. Mortimer also cautioned against inputting confidential, suppressed, or private information into AI tools, warning of "serious consequences" even if unintended sharing occurs.

While acknowledging AI's potential to enhance litigation efficiency, Mortimer stressed that its use must be appropriate and careful to avoid risks to justice administration and public confidence. She reiterated that violations could lead to adverse costs orders and breaches of professional obligations.

Rising Incidents and Judicial Response

Australia has seen at least 73 identified cases where generative AI resulted in false citations, fabricated quotes, or other errors. In a notable incident last year, a Victorian lawyer became the first in the country to face sanctions for such false citations, losing his ability to practise as a principal lawyer. Regulatory bodies in Western Australia and New South Wales have launched similar investigations.

Courts have observed that unchecked AI errors can propagate, as seen in a full court judgment where a non-existent case was cited, attributed to AI "hallucination." High Court Chief Justice Stephen Gageler remarked in November that judges are acting as "human filters" for AI-generated arguments, describing the current phase as "unsustainable."

This move by the federal court aims to curb misuse and ensure integrity in legal proceedings, setting a precedent for responsible AI adoption in the legal sector.