Senior Lawyer Apologises After Filing AI-Generated Submissions in Victorian Murder Case

Senior lawyer Rishi Nathwani KC has apologised after filing submissions containing AI-generated errors in a Victorian murder case.

Nathwani, who holds the legal title of King’s Counsel, took "full responsibility" for filing incorrect information in submissions in the case of a teenager charged with murder. He apologised to Justice James Elliott on Wednesday and explained the circumstances surrounding the mistake.

"We are deeply sorry and embarrassed for what occurred," Mr Nathwani told Justice Elliott, as reported by court documents seen by The Associated Press. "At the risk of understatement, the manner in which these events have unfolded is unsatisfactory."

The AI-Generated Errors

The submissions included fabricated quotes from a speech to the state legislature and non-existent case citations purportedly from the Supreme Court. The errors were discovered by Justice Elliott's associates, who could not find the cases and asked the defence lawyers to provide copies.

"The lawyers admitted the citations 'do not exist' and that the submission contained 'fictitious quotes'," court documents say. It appears that the lawyers incorrectly assumed the accuracy of their initial citations, wrongly believing that the subsequent ones would also be correct. Unfortunately, they failed to verify the information independently.

The Consequences

The AI-generated errors caused a 24-hour delay in resolving the case, which Justice Elliott had hoped to conclude on Wednesday. The disruption underscored the need for vigilance when using artificial intelligence tools in legal submissions.

The Judge's Response

Justice Elliott ruled on Thursday that Mr Nathwani’s client, who cannot be identified because they are a minor, was not guilty of murder because of mental impairment. While the judge described the manner in which events unfolded as unsatisfactory, he praised the defence team for taking responsibility and cooperating with his office.

"It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified," Justice Elliott said. This statement highlights the importance of verifying AI-generated information before submitting it in court.

The Implications

This case is a stark reminder of the risks of relying on artificial intelligence tools in legal work. In a comparable case in the United States in 2023, a federal judge imposed fines of $US5,000 ($7,600) on two lawyers and a law firm after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury claim.

The Guidelines

Last year, the Supreme Court of Victoria released guidelines on how lawyers may use artificial intelligence, emphasising the need for independent verification and thorough review of AI-generated material before it is submitted in court.

"Providing false material as if it were genuine could be considered contempt of court or, in the 'most egregious cases', perverting the course of justice," warned British High Court Justice Victoria Sharp in June. This warning underscores the gravity of the situation and the importance of adhering to these guidelines.