Journalism begins where hype ends

“The greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”

— Eliezer Yudkowsky

“Do Not Become An Artificial Lawyer”: Judges Come Down Hard on AI in Courts

March 11, 2026 03:15 PM IST | Written by Neelam Sharma | Edited by Vaibhav Jha
With a rise in instances of AI-driven legal hallucinations, judges from Singapore to New York to Chandigarh in India are sounding the alarm, warning lawyers against using generative AI to draft pleadings or prepare arguments for court without cross-checking the research and citations. In several cases, GenAI tools like ChatGPT generated fictitious cases that were then submitted in court, a sobering moment for the legal fraternity worldwide.
Judges have also taken action against lawyers who submitted AI-hallucinated documents in court, in order to set a precedent and warn against the perils of total dependence on AI.
In Singapore, two lawyers — Goh Peck San and Amarjit Singh Sidhu — were fined SGD 5,000 each by the Singapore High Court for citing fictitious cases generated using AI in a lawsuit.
In a judgment dated March 6, Justice S. Mohan of the Singapore High Court said AI-generated “hallucinated” authorities pose a serious threat to the administration of justice if lawyers fail to verify them before presenting submissions in court.

Similar warning calls were made during a panel discussion titled “Judges: Past and Present” held on Monday evening at Chandigarh’s iconic Open Hand Monument near the Punjab and Haryana High Court as part of the ongoing India International Disputes Week 2026.

Rajasthan High Court Justice Arun Monga underlined that lawyers must cross-verify all content produced using artificial intelligence tools with the original source material.
“Otherwise, you will become an artificial lawyer. You don’t want to do that,” he remarked, cautioning against over-dependence on AI.
Justice Monga also referred to a recent case where a trial court judge cited AI-generated case laws in an order, describing the situation as “alarming”. He noted that the matter has reached the Supreme Court, which may frame guidelines under Article 142 regarding the use of AI in the legal system.
Punjab and Haryana High Court Justice Vinod Bhardwaj acknowledged that technology can assist the judicial process but warned about “AI hallucinations”. He recalled encountering a petition in which a lawyer cited a judicial precedent that did not exist.
“When I asked for the copy of the verdict, it was not there because no such citation exists. This increases the burden on judges who have to verify non-existent precedents,” he said.
Justice Bhardwaj also observed that some lawyers misuse hybrid court facilities to seek adjournments by citing technical glitches during online hearings.
Justice Hakesh Manuja said artificial intelligence could aid the judiciary but should never replace human adjudication. He added that AI-based tools could help the government assess the likelihood of success in appeals through predictive analysis.
Echoing similar concerns internationally, UK First-tier Tribunal Judge Sukhi Gill urged lawyers to cross-reference AI-generated material with legislation and authentic case law.
According to Dinesh Jotwani, Advocate, Supreme Court of India: “While AI can assist in compiling legal principles, identifying relevant issues, and drafting preliminary versions of documents, it must be emphasized that these platforms do not constitute official legal sources. Judicial authorities have made it clear that any information generated by AI must be cross-verified with authentic laws, valid case law, and official court records prior to submission or reliance in legal proceedings.”

Mr Jotwani added that lawyers must always hold themselves to uncompromising standards of honesty and competence.

“Uncritical adoption of AI-generated content without proper verification may result in judicial censure, reputational harm, and, in severe cases, disciplinary proceedings,” he added.

AI-driven legal hallucinations are not a one-off occurrence: according to this recent report, more than 300 such instances have been documented since mid-2023. As generative AI advances further into our daily lives and professions, it is critical to address its core problems of hallucination and going rogue, which still make it unreliable for automation without human verification.

Also Read: Hallucination Effect in AI

Authors

  • Neelam Sharma

Neelam Sharma is a passionate storyteller and journalist with over a decade of experience across leading Indian media houses.
    Known for her calm presence on screen and powerful storytelling off it, Neelam brings a rare blend of credibility, creativity, and empathy to journalism. Her strength lies in ground reporting and research-driven narratives that connect with the heart of the audience. Whether covering social issues, human-interest features, or breaking news, she combines factual depth with a human touch, making every story more than just informative.

  • Vaibhav Jha

    Vaibhav Jha is an Editor and Co-founder of AI FrontPage. In his decade long career in journalism, Vaibhav has reported for publications including The Indian Express, Hindustan Times, and The New York Times, covering the intersection of technology, policy, and society. Outside work, he’s usually trying to persuade people to watch Anurag Kashyap films.