Artificial Intelligence (AI) and the Erosion of Legal Accountability (A Judicial Warning from the Kenyan Courts)

The growing presence of Artificial Intelligence (AI) in legal practice presents a contradiction. On one hand, it promises efficiency, accessibility, and innovation; on the other, it introduces risks that strike at the very core of legal accountability. The recent caution by the High Court of Kenya against the use of AI in drafting legal documents reflects a judiciary that is not resistant to technological progress, but one that is deeply conscious of its potential to undermine the administration of justice if left unchecked.

The court’s warning against the uncritical use of AI in drafting legal documents is not an outright rejection of innovation. Rather, it is a deliberate attempt to protect the integrity of judicial proceedings. This judicial intervention is both timely and necessary, as it forces a reconsideration of a fundamental question: whether the increasing reliance on AI is enhancing legal reasoning or quietly displacing it.

Importantly, the court’s position is not a prohibition of AI, but a reaffirmation of a fundamental principle: the duty of accuracy lies with the litigant or advocate, not with the tool used.

A central concern highlighted by courts globally is the phenomenon of AI “hallucinations”, where systems generate plausible but entirely fictitious legal authorities, citations, or reasoning. These outputs often appear convincing, making them particularly dangerous in legal drafting.

At the heart of any legal dispute lies the pleading. It defines the issues for determination, guides the evidentiary process and ultimately shapes the court’s decision. Kenyan courts have consistently underscored the centrality of pleadings and the obligation placed upon parties to ensure their accuracy and coherence.

In D.T. Dobie & Company (Kenya) Limited v Muchina, the court emphasized that pleadings must disclose a reasonable cause of action and must not be frivolous or vexatious. Similarly, in Independent Electoral and Boundaries Commission v Stephen Mutinda Mule & 3 Others, the Court of Appeal reaffirmed that parties are bound by their pleadings, and any departure from them is impermissible.

One of the most troubling aspects of AI-assisted drafting is its capacity to generate fictitious legal authorities. These hallucinated outputs are frequently presented in a manner that mimics legitimate legal reasoning, complete with citations and persuasive language.

The decision in Mata v Avianca, Inc. provides a stark illustration. In that case, counsel submitted legal authorities generated by AI that were entirely non-existent, leading to sanctions. The issue was not merely the use of AI, but the failure to verify its output.

The real issue is not whether AI should be used in legal drafting, but how it should be used. There is nothing improper about relying on technology to enhance efficiency, but the problem arises when convenience replaces critical thinking.

Therefore, a responsible approach to AI in legal practice requires:

    • Careful verification of all authorities and legal propositions

    • A clear understanding of the law independent of AI-generated output

    • Conscious use of AI as a drafting aid, not a decision-maker

    • Acceptance of full responsibility for any document filed in court

In essence, AI should assist the lawyer, but it should never replace the lawyer’s mind.

The integration of AI into legal drafting raises important ethical questions. Advocates are bound by duties of competence, diligence, and candour to the court, as reflected in the Advocates Act (CAP 16) and the Law Society of Kenya Code of Standards of Professional Practice and Ethical Conduct.

It would be overly simplistic to view AI as purely detrimental. For unrepresented litigants, AI offers a means of navigating an otherwise complex and inaccessible legal system. However, poorly drafted or legally unsound pleadings do not advance justice; they frustrate it.

The High Court’s caution is therefore best understood as protective rather than prohibitive. It seeks to preserve the integrity of the judicial process while allowing room for responsible technological use. As AI continues to reshape legal practice, the challenge will not be whether to adopt it, but how to do so responsibly. The law demands diligence, and diligence cannot be automated.

References

Advocates Act (CAP 16, Laws of Kenya)
Law Society of Kenya Code of Standards of Professional Practice and Ethical Conduct
D.T. Dobie & Company (Kenya) Limited v Muchina [1982] KLR 1
Independent Electoral and Boundaries Commission v Stephen Mutinda Mule & 3 Others [2014] eKLR
Mata v Avianca, Inc., No. 22-cv-1461 (S.D.N.Y. 2023)

This article is provided free of charge for information purposes only; it does not constitute legal advice and should not be relied on as such. No responsibility is accepted for the accuracy and/or correctness of the information and commentary set out in the article; specific legal advice should be sought on the subject matter. If you have any query regarding the same, please do not hesitate to contact us via info@mnadvocates.co.ke

ARTICLE BY FAITH MUMO
