
AI “hallucinations” increase legal obligations

18th April 2025

By: Lynne Davies

Creamer Media Reporter


As AI becomes more embedded in legal work, the ethical obligations deepen. Law firm Cliffe Dekker Hofmeyr (CDH) says lawyers must understand the tools they use and stay updated on their limitations, in addition to remaining accountable for the output.

“We foresee the likelihood that the ethical obligations associated with the use of algorithm-driven technologies will become clearer,” says CDH knowledge management practice head Retha Beerman.

She explains that “AI hallucinations” refer to instances where an AI system generates confident, but entirely fabricated, responses. In the legal profession, where arguments rely heavily on accurate precedent, these errors can be particularly damaging.

The prevalence of hallucinations in AI-generated legal research is still being studied, so there is no definitive measure yet of how often they occur.

“The technology evolves so quickly that any data becomes outdated almost immediately, and there’s also significant variation between models,” adds CDH senior associate Safee-Naaz Siddiqi.

Additionally, while some AI tools, such as ChatGPT, are general-purpose, others have been built specifically for legal use, which can affect their performance.

Siddiqi recounts that CDH has anecdotally encountered instances of case law hallucinations within its own firm; however, these were identified and excluded early on in the normal quality-assurance process.

Should false citations go unnoticed, they can lead to professional embarrassment, cost orders and, more seriously, a breakdown in the court’s trust in a counsel’s reliability and integrity, Beerman emphasises.

Further, if these hallucination-based citations are not identified by any of the legal practitioners or the judge during proceedings, they could erroneously influence the outcome of judgments. She notes that though CDH is not aware of this having occurred, it is a possible “worst-case scenario”.

Reducing AI Reliance

“Law firms should discourage the use of generative AI tools for legal research unless specifically built for research within their jurisdiction. Where approved tools are used, firms must ensure [that] proper training, usually provided by the developer, is made mandatory,” states Beerman.

Siddiqi avers that AI-generated case law can include all the hallmarks of legitimacy: party names, case numbers, year citations and even fabricated judicial commentary.

Moreover, trying to spot patterns or warning signs can be unreliable; therefore, the only safe approach, according to CDH, is to verify every reference against trusted primary sources.
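As an illustration of the verification discipline CDH describes, a draft can be scanned for citation-like strings so that each one is flagged for manual confirmation against a primary source. The sketch below is a simplified assumption, not CDH's actual process: the regex patterns cover only two common South African citation formats, and a real workflow would need far broader coverage and a check against an authoritative law-report database.

```python
import re

# Two common South African citation formats (illustrative patterns only):
#   neutral citations,    e.g. "[2021] ZACC 13"
#   law-report citations, e.g. "2019 (6) SA 597 (CC)"
CITATION_PATTERNS = [
    re.compile(r"\[\d{4}\]\s+ZA[A-Z]+\s+\d+"),
    re.compile(r"\d{4}\s+\(\d+\)\s+[A-Z]{2,4}\s+\d+\s+\([A-Z]+\)"),
]

def extract_citations(text: str) -> list[str]:
    """Pull every citation-like string out of a draft document."""
    found = []
    for pattern in CITATION_PATTERNS:
        found.extend(pattern.findall(text))
    return found

def verification_checklist(text: str) -> list[str]:
    """One checklist line per citation; each must be confirmed against
    a trusted primary source by a human before filing."""
    return [f"[ ] verify against primary source: {c}"
            for c in extract_citations(text)]
```

Crucially, such a tool only surfaces citations for human review; it cannot itself confirm that a case exists, which is exactly why CDH insists on checking primary sources.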

CDH says it is unrealistic to issue a blanket prohibition against AI use. Instead, the law firm offers access to vetted platforms while encouraging honesty from juniors in open conversations about AI use cases.

Beerman reiterates that lawyers risk serious consequences should they rely on unverified AI-generated case law, including professional negligence lawsuits and lasting reputational damage to themselves and their firms.

While busy practitioners can and should find ways of using AI to enhance their efficiency, she cautions that high workloads make relying on unverified AI-generated research “tempting”, which would be “irresponsible”.

Siddiqi adds that “unless an AI tool is specifically designed for legal research within one’s own jurisdiction, it should not be used as a shortcut for research. Efficiency should not be confused with negligent corner-cutting.”

CDH relies on vigilance, robust training programmes and clear AI-use policies to mitigate the risks. However, the first step is education, with legal practitioners needing to understand the risks.

Following this, consistent verification is key, including mandatory citation checks before filing, Siddiqi avers.

“Clear AI-use guidelines would help educate practitioners and establish a standard against which accountability and professional conduct can be measured,” adds Beerman.

One such stipulation, according to Siddiqi, is that where an AI tool is built specifically for legal research or a similar purpose, the developer should “absolutely share responsibility for reducing hallucinations”.

She notes that CDH has observed improvements in the ability of specialist tools to link responses to source content, which is one way of avoiding hallucinations.

“Unfortunately, as soon as a tool is ‘hamstrung’ by guardrails, its performance is slightly impacted. This will probably improve in future, but even with general-purpose tools, it should be noted that the burden remains squarely on the professional,” states Beerman.

Responsible integration starts with clear boundaries: “Use it as a support tool, not a substitute for legal judgment.”

Firms should develop in-house guidelines, prioritise training and always require verification of AI-generated content. Efficiency is valuable, but never at the expense of accuracy or ethical standards, Beerman concludes.

Edited by Nadine James
Features Deputy Editor
