Monday, January 20, 2025

Gavel Adopts Gadget: The Risks of Artificial Judicial Decision-Making

Artificial Intelligence (AI) systems have been integrated into many jurisdictions, including China, Estonia, Taiwan, Canada, the UK, Peru, and Mexico, to assist judges, mediators, and other adjudicators in the administration and delivery of justice. AI judging has now become a reality in the judicial decision-making process. Judges in Colombia, India, Pakistan, the USA, and the UK have admitted to using AI to adjudicate legal matters. The true scale of AI use in judicial decision-making, however, may be far greater.

AI has emerged as a useful tool for lowering the effort and cost of examining documents, for determining and applying the appropriate provisions of law to a given set of facts, and for increasing accuracy through predictive outputs. As a result, judging by AI arguably has the potential to be fairer and more neutral than human judging, and some regard it as a cheap, fast, and scalable alternative. Human judges are inherently expensive: they train for years, take time to adjudicate, retire, and are limited in number, while AI systems can work tirelessly around the clock, take no time off, and receive no wages.

Dangers of Artificial Judicial Decision-Making

However, integrating AI tools into the decision-making process can compromise judicial accountability. Judges are unlikely to be able to deliver or explain the reasons behind an AI-generated decision when the vendor's software they rely on is opaque and provides no detailed information about how it works.

Often, the inner workings of an AI system are not disclosed, whether for operational secrecy, to protect trade secrets, or to safeguard the privacy of personal information in the training data. Yet giving just and reasoned grounds for judgments is one of the fundamental principles of justice.

AI systems are often trained on public-source data, which is not always authentic. Moreover, they cannot by themselves evaluate and adapt to the social changes of the time. Nor can AI tools exercise discretion as human judges do in particular circumstances. This may produce injustice in many cases, as such tools are not equipped to weigh each case on its own merits. Some cases call for progressive intervention by the court to bring the marginalised into the mainstream, something algorithms cannot do on a case-by-case basis.

The quality of AI output is also questionable: because it draws on vast historical datasets, it may produce inaccurate, incomplete, misleading, or out-of-date results. There is therefore a high risk that algorithmic judges would replicate the mistakes, discrimination, and bias of past cases. Discrimination may also arise from the selective use of technology by human judges and from the susceptibility of algorithms to various cognitive biases.

Another challenge of integrating algorithms into judicial decision-making is accurately translating legislation into the code, commands, and functions that a computer program can execute, as the sketch below illustrates. Generative AI also often produces fictitious case references (so-called AI hallucinations), offers incorrect interpretations, or cites overruled decisions. These problems must be addressed before deploying AI in decision-making practice, especially to preserve the right to a fair trial.
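To see why this translation is hard, consider a minimal rules-as-code sketch in Python. Everything in it is hypothetical and invented purely for illustration (the statute wording, the thresholds, and the extension_granted function do not come from any real system): open-textured statutory terms force the programmer to hard-code numeric cutoffs where a judge would exercise discretion.

    # Hypothetical statute (invented for illustration): "A tenant who has
    # occupied the premises for a substantial period and faces exceptional
    # hardship may be granted an extension." Neither "substantial" nor
    # "exceptional" is defined, so the coder must invent numeric cutoffs.

    SUBSTANTIAL_PERIOD_DAYS = 5 * 365  # arbitrary choice by the programmer
    HARDSHIP_THRESHOLD = 7             # arbitrary score out of 10

    def extension_granted(occupancy_days: int, hardship_score: int) -> bool:
        """Rigid encoding of a discretionary rule (illustrative only)."""
        return (occupancy_days >= SUBSTANTIAL_PERIOD_DAYS
                and hardship_score >= HARDSHIP_THRESHOLD)

    # A tenant one day short of the cutoff is refused outright, even with
    # maximum hardship, whereas a judge could weigh the circumstances.
    print(extension_granted(1824, 10))  # False: one day short of 1825
    print(extension_granted(1825, 10))  # True

The point of the sketch is not the particular numbers but the loss of nuance: whatever thresholds are chosen, the program draws a sharp line through statutory language that was deliberately left open.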

Consequently, incorporating Algorithmic Decision-Making (ADM) into judicial processes may vitiate core judicial values such as the fairness of justice, diversity, equality before the law, the right to equal protection of the law, and the right to privacy.

Public Concerns

How can a court guarantee the security of clients' privileged data once it is shared with AI algorithms? How can the public remain confident that information shared with AI is securely protected? There is also a risk that AI algorithms could be hacked or manipulated, potentially leading to wrongful convictions, and a potential power imbalance between litigants, as wealthier parties may be better able to afford and control AI systems than marginalised ones. Public trust in the judiciary may therefore diminish, as the public may not trust AI judges to make fair and impartial decisions.

What Can Be Done

Adequate training for judges on the functions and potential harms of AI in the judiciary is essential. Institutional oversight should be put in place urgently to ensure that AI is used responsibly and that precautions are taken to mitigate the risks; regular auditing by a superior authority can create an extra shield against irresponsible and unethical use of AI.


Published on the Oxford Human Rights Hub Blog on 20 January 2025.
