Artificial Intelligence (AI) is revolutionizing work across industries, and the legal industry is no exception. In several countries, the AI fever has already reached the traditional legal profession, promising positive change: streamlining court processes, analysing legal instruments, and, in certain jurisdictions, even predicting and prescribing case outcomes.
There are undeniable advantages to these powerful modern algorithmic tools. Yet the adverse aspects of AI are being washed away by the wave of its emerging use, while mostly the positive features are discussed widely. This write-up aims to shed light on some of the overlooked limitations of AI tools, so that readers are aware of the potential consequences of relying on AI in legal proceedings.
AI is, by nature, a prediction tool. AI-powered systems are trained on vast amounts of historical data, from which they learn to identify probabilistic patterns. As a result, they do not reason, think, or question like a human being, even though their output looks convincing and can mimic the human style of reasoning. A great prevailing concern is that they sometimes generate false information that appears authoritative.
Nor does AI truly understand human language; it merely forecasts the next token in a given sequence of words. It therefore cannot rectify past human errors. Instead, it comes with its own set of biases, which stem from its training data and which it subsequently reinforces.
Law, legal proceedings, and the legal profession are not just about rules; they are about people, because they deal with people's problems. Moreover, law is neither mathematics nor a formula to be mechanically inserted into a scientific system. It is a living, ever-changing discipline that evolves with societal values, cultural norms, and ethical considerations. AI systems, by contrast, lack the emotional empathy, ethical judgment, human intuition, authority, and experience that reflect societal values and are indispensable to the legal profession.
Every case is different, and so is every legal professional, whether counsel, judge, magistrate, or court staff. Human lawyers do not merely apply the law in court; they have the skill to adapt to particular judges and courtroom environments, adjusting their submissions and responding to the mood, tone, and dynamics of the moment. Here AI fails to assess the situation, whereas humans excel at it.
There is also a valid reason why every case is judged cautiously on its own facts: the public needs to feel that justice is being done fairly. Handing people's problems over to an algorithm carries a high risk of creating a barrier between the public and the judicial process.
In the age of AI, there is a growing trend of people representing their own cases in court with the assistance of AI. Self-represented litigation is gaining popularity because it saves the cost of a lawyer; an AI chatbot does not charge for legal advice. But a chatbot will not argue for you when the judge asks a question or seeks clarification during a hearing. It will not object to improper cross-examination, or intervene if you are being treated unfairly in court. Beyond that, the courtroom is like a playground where the other side might suddenly pull a manoeuvre. You need to think promptly and make judgment calls on the spot, something AI cannot provide in self-represented cases.
Furthermore, as of today, AI takes no responsibility for any wrongdoing, and neither does its programmer or the company that trades in it. No AI chatbot will stand beside you when the court penalises you for its fault. Recently, the Upper Tribunal (Immigration and Asylum Chamber) of the UK warned lawyers about the use of AI after finding that a British Bangladeshi barrister had misled the tribunal by citing a fictitious Court of Appeal judgment generated by ChatGPT. The lawyer was also referred to the Bar Standards Board of the UK for investigation. Lawyers in several other jurisdictions have likewise faced criticism and punishment from courts and regulatory bodies for submitting ChatGPT-generated false information in real cases. Ultimately, it is you who bear the responsibility for employing AI in your profession or your case without knowing its limitations or having proper training in how it works.
The role of the judiciary is not merely to process cases efficiently but to weigh moral consequences, ensure fairness in justice, and uphold the rule of law in a way that no machine can replicate. We must therefore draw a line between the use of AI-generated outputs and the exercise of our own conscience in the complex, sensitive, life-affecting matters resolved in the courtroom.
Published in the Law & Our Rights Page of The Daily Star on 22 October 2025.
Published in the Weekly Sampratik Deshkal on 23 October 2025, on page 7.