Archived Article — The Daily Perspective is no longer active. This article was published on 23 March 2026 and is preserved as part of the archive.

Technology

Lawyers are quietly testing AI to reshape the profession

Early adopters are finding practical uses for AI in legal work, but courts have already punished misuse

Image: Ars Technica
Key Points
  • Barristers like Anthony Searle use AI for legal research and medical analysis in clinical negligence cases without sharing confidential data.
  • Fifty per cent of barristers now use AI tools, but only 2 per cent have embedded them into formal operations.
  • Courts have issued tough warnings, with judges penalising lawyers who submitted fabricated case citations.
  • Law firms are incentivising AI use while navigating privacy rules and the risk of AI generating false information.

When a clinical negligence barrister faced a difficult case two years ago, he had a problem: the coroner had refused his request for an independent expert report. Rather than accept defeat, Anthony Searle, a barrister specialising in clinical negligence and healthcare-related inquests, turned to an unexpected tool. He began using ChatGPT to frame probing questions about complex cardiac surgery, filling gaps that expert witnesses could not address.

"My use of ChatGPT allowed my questions to be more focused on the technical aspects of the surgery," Searle explained. Yet he is careful: he vets all information the AI produces and never inputs client data into the tool. His cautious approach signals a broader pattern emerging in English law. More than 90 per cent of surveyed lawyers already use at least one AI tool in their daily work, most often for legal research, document analysis, contract drafting, and process automation, according to a 2026 survey by Wolters Kluwer.

The reality is more measured than the hype suggests. According to Wolters Kluwer's 2026 Future Ready Lawyer Survey, more than 90 per cent of surveyed lawyers use at least one AI tool in their daily work, most often for legal research, document analysis, contract drafting, and process automation. Yet adoption remains incomplete. In England's chronically underfunded justice system, AI is increasingly presented as an answer to backlogs and resource shortages, and the government has signalled enthusiasm: the Deputy Prime Minister and Justice Secretary delivered his key speech on court reforms at a Microsoft AI event in London.

Image: a computer-generated gavel hovering over a laptop.
AI tools are being tested in legal work, though courts have warned of dangers including fabricated case citations.

The gap between enthusiasm and reality is striking. Sixty-two per cent of respondents report weekly time savings of 6 to 20 per cent, averaging nearly 10 per cent of the workweek, freeing lawyers to move from routine tasks to higher-value work. Yet a Thomson Reuters report found that only 28 per cent of law firms are actively using AI, while Clio's Legal Trends 2025 reported that 79 per cent of legal professionals use AI in their firms. The data is patchy and contradictory, suggesting the profession is still finding its footing.

Law firms are beginning to incentivise adoption. UK firm Shoosmiths added £1m to its bonus pot last year as a reward for staff collectively hitting one million prompts on Microsoft Copilot. US firm Ropes & Gray is pushing junior lawyers to spend a fifth of their billable hours experimenting with AI for research and contract work. Yet major barriers remain. Client confidentiality rules prevent lawyers from uploading sensitive material into consumer AI tools, and data protection concerns have left many firms uncertain.

The courts have made clear that the profession cannot be careless. In a landmark 2025 judgment in Ayinde v London Borough of Haringey, the court found that a barrister had cited five wholly non-existent cases in written submissions; it described the conduct as improper, unreasonable and negligent, and imposed a wasted costs order of £4,000. Though the barrister did not admit to using AI, the case sent an unmistakable message. Large language models are sophisticated statistical engines designed to predict the next most likely word, which makes them prone to "hallucinations": outputs that are grammatically perfect but factually incorrect or entirely fabricated.

These failures have not killed interest in AI. Rather, they have shaped how the profession approaches it. Searle is developing bespoke AI tools for his practice, including an app that calculates damages in clinical negligence claims by analysing actuarial tables. He is also helping develop broader AI governance strategies for expert witnesses at his chambers, Serjeants' Inn in London.

Interviews with top-tier lawyers underscore the tension: even those who understood AI's value seemed to be leaving gains on the table, sometimes for reasons they would readily criticise in colleagues. Some firms worry that moving too slowly will leave them at a competitive disadvantage; others fear that moving too fast will expose them to liability. For now, the profession is caught between caution and necessity, with early adopters like Searle charting a careful path forward.

Zara Mitchell

Zara Mitchell is an AI editorial persona created by The Daily Perspective, covering global cyber threats, data breaches, and digital privacy issues with technical authority and accessible writing. Articles under this byline are generated using artificial intelligence with editorial quality controls.