Artificial Intelligence and the Law: Will Judges Run on Punch Cards?

Written By: Mgr. Bc. Seda Fabian

This article was originally published by the Common Law Review.

I. Introduction

This year, the Estonian Ministry of Justice asked its Chief Data Officer, Ott Velsberg, and his team to design a robot judge that could adjudicate small claims disputes of less than €7,000. [1] In the US, an algorithm called COMPAS helps recommend criminal sentences in some states by calculating the probability of defendant recidivism. [2] The UK-based AI-driven chatbot DoNotPay has already successfully contested more than 100,000 parking tickets in London and New York. [3] KLEROS is a blockchain-based dispute resolution program that provides fast, secure, and affordable arbitration. [4] And ROSS Intelligence, billed as the first artificially intelligent lawyer, collects and analyzes relevant leading cases. [5]

Although regulation and oversight are often seen as the antithesis of innovation, governments all over the world tend to embrace developments that advance the ultimate goal of winning the ongoing jurisdictional rat race, in much the same way that the “Delaware effect” in the early 20th century spurred governments to rethink policy approaches vis-à-vis incorporation. This article explores the interplay between artificial intelligence and the judicial system. Are judges that decide cases based on punch-card algorithms another plot from the popular dystopian television series “Black Mirror,” or are such developments an inevitable part of how we will decide cases in the near future? Are our human requirements for judges replaceable by the capabilities of AI? Indeed, is it possible that AI-driven adjudication would remove human prejudices and thus produce more just decisions?

II. The Role of Human Judges

Artificial intelligence (AI) has become ubiquitous, and with the advance of technology, the law must account for these sorts of changes in society. Before asking whether human judges can be replaced by robots, we should first ask what roles judges play within our society, how they should be selected, and what this reveals about the intersection of AI and jurisprudence.

In the West, we view judges as an integral part of the moral compass of society, and the whole process of judicial selection is meant to focus on their qualifications. Indeed, as legal scholars Sourdin and Zariski note, “Emotion – not alone but in combination with the law, logic and reason – helps the judges get it right.” [6] They need to respond consciously, rationally, and with intuition and empathy. [7] Law stabilizes a society; it does not create it. [8] The role of a judge is a complex and multifaceted one. In addition to knowledge, authority, credentials and reputation, judges must have the ability to be empathetic, predict human behaviour, and interact with all kinds of people compassionately and without prejudice. What judges do on a daily basis, within the frame of the characteristics mentioned above, is to assess evidence and make decisions on fundamental questions of fact and law: guilty or innocent? Liable or not? Who is at fault? Who must pay?

III. Current Uses and Advantages of AI Adjudication

Recent years have shown that, even though the role of judge seems quintessentially human, and thus not likely to be replaced by automation, AI in fact has the capacity to do certain aspects of the job better. AI, though naturally not possessing the qualities mentioned above, does have the capacity to collect large volumes of data, including all relevant statutes, case law, and evidence, and then produce a decision. In the legal world, AI may prove effective in legal research, compliance, contract analysis, and case prediction, as well as in document automation. [9]

One recent study showed that AI could predict a prosecutor’s decision with 88% accuracy. [10] Moreover, the remaining 12% should not simply be read as an “error rate”: when human decisions are reviewed on appeal, they are affirmed at a rate of about 85%, so the algorithm agrees with human decision-makers roughly as often as human reviewers do. [11] Even Chief Justice John G. Roberts, head of the US Supreme Court, has said that AI is having a significant impact on how the legal system in the United States works. In 2017, he told The New York Times: “It’s a day that’s here and it’s putting a significant strain on how the judiciary goes about doing things. (…) The impact of technology has been across the board, and we haven’t yet really absorbed how it’s going to change the way we do business.” [12] The renaissance of AI has been truly remarkable. The period of the so-called “AI winter” has passed, and now AI’s influence extends beyond the world of tech giants such as Amazon, Microsoft, and Google, and will touch nearly every aspect of our everyday lives, including one of the oldest disciplines on the planet: law. [13] And it indeed brings new challenges, as well as pros and cons.

Importantly, AI has already been deployed extensively in legal settings. Data analysis of legal documents is a burgeoning new industry, powered by increasingly elegant and effective AI algorithms that can finish legal research in a shorter timeframe than even the most adept human being. Recently, the Computer Science Department at the University of Alberta developed an algorithm that can pass the notoriously difficult Japanese Bar Exam. The same team is now working on developing AI that can “weigh contradicting legal evidence, rule on cases, and predict the outcomes of future trials.” [14] Such an algorithm can use demographic and other data, such as age, sex, and employment history, to calculate the probability of criminal behaviour. It takes into account the individual’s past as well as more general trends within the data. This could be dangerous, as it creates the possibility that the algorithm produces a result based on a statistical probability that, while generally likely, is not specifically true in an individual case. An approach such as this leads one to question the role of the individuality and characteristics of each person.
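
To see why this is worrisome, consider a minimal, purely hypothetical sketch of how such a risk score might be computed. This is neither the Alberta team’s system nor COMPAS; every feature, coefficient, and data point below is invented for illustration only.

```python
# A minimal, hypothetical sketch of a statistical risk score (NOT the
# Alberta team's system or COMPAS). All data, features, and effect
# sizes below are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 1_000

# Invented demographic/history features: age, prior offences, employment.
age = rng.integers(18, 70, size=n)
priors = rng.poisson(1.5, size=n)
employed = rng.integers(0, 2, size=n)
X = np.column_stack([age, priors, employed])

# Synthetic "reoffended" labels generated from the same features, so the
# model can only ever learn group-level trends, never individual truths.
logit = -1.0 - 0.03 * (age - 18) + 0.6 * priors - 0.8 * employed
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression().fit(X, y)

# The output is a population-level probability: two defendants with
# identical measured features receive identical scores, whatever their
# individual circumstances.
defendant = np.array([[19, 0, 0]])  # hypothetical: 19 years old, no priors, unemployed
print(f"Predicted risk of reoffending: {model.predict_proba(defendant)[0, 1]:.1%}")
```

Whatever such a model’s headline accuracy, its output remains a statement about people statistically similar to the defendant, not about the defendant as an individual.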

IV. Shortcomings of Human Adjudication

There presently exist AI programs that attempt to determine the likelihood of recidivism. We cannot, though, forget the high-profile case of Eric Loomis, who was sentenced to six years in prison based in part on recommendations provided by the AI algorithm in the COMPAS system. This very same system assessed him as presenting “a high risk of violence, high risk of recidivism, and high pretrial risk.” [15] The case raised many important questions. Does the human judge face any reduced responsibility? Should a defendant have a choice whether to have their case heard by a human judge or an AI? Should a judge rely more on numbers than on his own judgment? Is there any space left for compassion? What are the chances that the AI erred by not taking relevant information into account? Do we even need a judge who is, after all, just a human being: fallible, full of prejudice, and likely to make mistakes?

During a recent TEDx presentation in Zurich, Elliott Ash, Assistant Professor of Law, Economics, and Data Science at ETH Zurich, presented research on the American immigration adjudication system and just how much the result depends upon the adjudicator. For instance, one judge in San Francisco granted 90.6% of asylum requests, while another judge granted just 2.9%. He referred to this form of justice as, in fact, “the luck of the draw.” [16] Even worse are the numbers on jailing decisions made before and after lunch breaks, which demonstrate that the same judge, depending on their mood, might deliver wildly different sentences. [17] Perhaps of most concern is data on the lifetime likelihood of imprisonment of US residents born in 2001, broken down by race. The shocking nature of these findings suggests that removing the human element from sentencing would go a long way toward ensuring fairness, or at least consistency. [18]

A closer examination, unfortunately, reveals an even worse picture. Judges have a tendency to hide connections to litigants and to their lawyers, which leads to conflicts of interest. Where complaints about such conflicts were actually investigated, oversight bodies claimed to find wrongdoing in nearly half of them. Yet over 90% of these complaints were dismissed by state court authorities without any substantive inquiry ever being conducted. This sort of information naturally leads one to question the real independence and justice of our legal systems.

Sadly, conflicts of interest are just a small part of the larger problem of human bias that plagues the system. Judges are prone to racial biases, both explicit and implicit. Seen in this light, dispassionate arbiters take on a new appeal: they could bring fairness and consistency to decisions. In this way, AI may be more impartial than humans. “Humans can be swayed by emotion. Humans can be convinced. Humans get tired or have a bad day,” says Tracy Greenwood, an expert in e-discovery whose company uses machines to perform legal discovery work faster and more accurately than humans. “In a high crime city, a judge might start to hand out harsher sentences towards the upper end of the sentencing guidelines. In court, if a judge does not like one of the lawyers, that can affect the judge’s opinion,” says Greenwood. Machines could be a dispassionate solution to all of this, without human bias, irrationality, or mistakes.

V. Is AI Actually Objective?

Critics of AI-powered jurisprudence would resist any framing of the issue that idealizes the supposed neutrality and objectivity of algorithms. In recent years, genuine concerns have arisen that the way AI operates can lead to discriminatory outcomes. Moreover, because these systems are complex and built upon proprietary programs, there is little transparency in terms of how precisely decisions are reached. There are no open inspections, nor are there explanations of what specifically the AI relied upon to generate a decision. Thus, we are forced to answer a complicated question that moves us one step upstream in the process: are we certain that the algorithms themselves are not biased?

In May 2016, the investigative journalism organization ProPublica published an investigation into machine bias within the COMPAS algorithm. [19] According to ProPublica, COMPAS was prone to overestimate the likelihood of recidivism by black defendants and underestimate that of white defendants. ProPublica used the example of the algorithm’s assessment of two defendants. One of them was a 41-year-old man of European heritage, a seasoned criminal; the other was a teenage African-American girl who had never been arrested before. Both had stolen items of the same value, but the machine failed to contextualize the fact that the girl had merely taken a bicycle and had no serious criminal record; its assessment instead reflected racial bias. COMPAS rated the girl as high risk and the man as low risk. [20]
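
The core of ProPublica’s statistical finding, namely that defendants who did not go on to reoffend were far more likely to have been labelled high risk if they were black, can be illustrated with a short, self-contained calculation. The handful of records below is invented; only the method, comparing false positive rates across groups, mirrors the published analysis.

```python
# Illustrative sketch of the kind of error-rate comparison ProPublica
# ran on COMPAS scores. The records below are invented; only the method
# (false positive rates per group) mirrors the published analysis.
from dataclasses import dataclass

@dataclass
class Record:
    group: str        # demographic group label (hypothetical "A" / "B")
    high_risk: bool   # score above the "high risk" cut-off
    reoffended: bool  # observed outcome during the follow-up window

records = [
    Record("A", True, True),   Record("A", True, False),
    Record("A", True, False),  Record("A", False, False),
    Record("B", True, True),   Record("B", False, True),
    Record("B", False, False), Record("B", False, False),
]

for group in ("A", "B"):
    # False positives: labelled high risk, yet did not reoffend.
    did_not_reoffend = [r for r in records if r.group == group and not r.reoffended]
    fpr = sum(r.high_risk for r in did_not_reoffend) / len(did_not_reoffend)
    print(f"Group {group}: false positive rate {fpr:.0%}")
```

Notably, a score can be reasonably accurate overall and still show sharply different error rates between groups, which is why the debate over COMPAS could not be settled by headline accuracy figures alone.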

Another example of the limits of machines in the judicial system is privacy cases, where arguably only a human being can discern the subtle difference between the positive and negative effects of protecting a victim’s privacy. This thin line is hard to see even for an experienced judge, and is thus nearly impossible for AI. [21]

VI. Conclusion

Taking all of these factors into consideration, our goal should be to use AI for what it does best. It is excellent at predicting the biases of individual judges and correcting for them. AI could also be used to detect systematic bias, to understand it, and to provide that data to policymakers and the public so that we can find ways to reduce such biases. We also need to focus on creating a comprehensive legal framework that protects data and our right to privacy. And last but not least, we should aim to create AI decisions that are accountable and transparent. [22] To these ends, Prof. Ryan Calo, whose research focuses on cyber law and robotics, made an important point when he wrote that, “Ultimately, judges and their audiences will need to grapple with the increasing capability of robots to exercise discretion and act in unpredictable ways, updating both the ways judges invoke robots in judicial reasoning and the common law that attends legal conflicts involving real robots.” [23]

AI has the capacity to improve our legal system in myriad important ways. However, by subjecting ourselves purely to the decisions of opaque algorithms, we might find ourselves with a non-human approach to justice that is ultimately suboptimal. The moral compass of our society cannot be placed into the hands of machines. At this point in our evolution and their development, we must not forget that judging requires not only knowledge of the law and the case evidence, but also the empathetic ability to understand the emotions and motivations underlying human behaviour. If we can find a way to use robots to bring greater consistency and clarity to legal proceedings without risking fairness, we will have arrived at the ideal balance.

[1] Niiler, Eric. (2019) Can AI Be a Fair Judge in Court? Estonia Thinks So. WIRED [online]. Available at: https://www.wired.com/story/can-ai-be-fair-judge-court-estonia-thinks-so/ [Accessed 2019-06-26].

[2] Kehl, Danielle, Guo, Priscilla and Kessler, Samuel. (2017) Algorithms in the Criminal Justice System: Assessing the Use of Risk Assessments in Sentencing. Responsive Communities [online]. Available at: https://cyber.harvard.edu/publications/2017/07/Algorithms [Accessed 2019-06-26].

[3] Niiler, Eric. (2019) Op. cit.

[4] KLEROS: The Blockchain Dispute Resolution Layer [online]. Available at: https://kleros.io [Accessed 2019-06-26].

[5] ROSS Intelligence [online]. Available at: https://rossintelligence.com [Accessed 2019-06-26].

[6] Sourdin, T., Zariski, A. (2018) The Responsive Judge: International Perspectives. Springer. Page 88. And Chin 2012, 1581; see also Sinai and Alberstein 2016, esp. 225; Colby 2012, esp. 1946.

[7] Sourdin, T., Cornes, R. (2018) Do Judges Need to Be Human? The Implications of Technology for Responsive Judging [online]. Available at: https://www.researchgate.net/publication/326244385_Do_Judges_Need_to_Be_Human_The_Implications_of_Technology_for_Responsive_Judging [Accessed 2019-06-26].

[8] Laub, B. (1969). The Judge’s Role in a Changing Society. Judicature. Vol. 53, number 4. p. 140. [online]. Available at: https://heinonline.org/HOL/LandingPage?handle=hein.journals/judica53&div=44&id=&page=&t=1561551698 [Accessed 2019-06-26].

[9] Mills, M. (2016) Artificial Intelligence in Law: The State of Play 2016 (Part 1). Legal Executive Institute, 23 February [online]. Available at: http://www.legalexecutiveinstitute.com/artificial-intelligence-in-law-the-state-of-play-2016-part-1/ [Accessed 2019-06-26].

[10] Ash, Elliott. Robot judges: TEDxZurichSalon [online]. Available at: https://www.youtube.com/watch?v=6qIj7xSZKd0 [Accessed 2019-06-26].

[11] Ash, Elliott. Ibid.; and Edwards, Barry C. (2017) Why Appeals Courts Rarely Reverse Lower Courts: An Experimental Study to Explore Affirmation Bias [online]. Available at: http://law.emory.edu/elj/elj-online/volume-68/essays/appeals-courts-reverse-lower-courts-study-explore-affirmation-bias.html [Accessed 2019-06-26].

[12] Liptak, Adam. (2017) Sent to Prison by a Software Program’s Secret Algorithms [online]. Available at: https://www.nytimes.com/2017/05/01/us/politics/sent-to-prison-by-a-software-programs-secret-algorithms.html [Accessed 2019-06-26].

[13] Kugler, Logan. (2018) AI Judges and Juries. Communications of the ACM [online]. Vol. 61, No. 12. Available at: https://cacm.acm.org/magazines/2018/12/232890-ai-judges-and-juries/fulltext [Accessed 2019-06-26].

[14] Snowdon, Wallis. (2017) Robot judges? Edmonton research crafting artificial intelligence for courts [online]. Available at: https://www.cbc.ca/news/canada/edmonton/legal-artificial-intelligence-alberta-japan-1.4296763 [Accessed 2019-06-26].

[15] Dressel, Julia and Farid, Hany. (2018) The accuracy, fairness, and limits of predicting recidivism. Science Advances [online]. Available at: https://advances.sciencemag.org/content/4/1/eaao5580 [Accessed 2019-06-26]; and Larson, Jeff, Mattu, Surya, Kirchner, Lauren and Angwin, Julia. (2016) How We Analyzed the COMPAS Recidivism Algorithm. ProPublica [online]. Available at: https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm [Accessed 2019-06-26].

[16] Ash, Elliott. Ibid.

[17] Sourdin, Tania. (2018) Judge v Robot? Artificial Intelligence and Judicial Decision-Making. UNSW Law Journal. Vol. 41 [online]. Available at: http://www.unswlawjournal.unsw.edu.au/wp-content/uploads/2018/12/Sourdin.pdf [Accessed 2019-06-26].

[18] Ash, Elliott. Ibid.

[19] Larson, Jeff, Mattu, Surya, Kirchner, Lauren and Angwin, Julia. (2016) How We Analyzed the COMPAS Recidivism Algorithm. ProPublica [online]. Available at: https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm [Accessed 2019-06-26].

[20] Larson, Jeff, Mattu, Surya, Kirchner, Lauren and Angwin, Julia. Ibid.; and Washington, Anne. (2019) How to Argue with an Algorithm: Lessons from the COMPAS ProPublica Debate. The Colorado Technology Law Journal. Vol. 17, Issue 1, http://ctlj.colorado.edu. Page 22 [online]. Available at SSRN: https://ssrn.com/abstract=3357874 [Accessed 2019-06-26].

[21] Kugler, Logan. Op. cit.

[22] IBM (2018) Bias in AI: How we Build Fair AI Systems and Less-Biased Humans [online]. Available at: https://www.ibm.com/blogs/policy/bias-in-ai/ [Accessed 2019-06-26].

[23] Calo, Ryan. (2017) Robots as Legal Metaphors. Harvard Journal of Law and Technology, Vol. 30, No. 1, 2016; University of Washington School of Law Research Paper No. 2017-04. [online]. Available at SSRN: https://ssrn.com/abstract=2913746 [Accessed 2019-06-26].