
Who owns posthumously released music?

Written By: Ciana Custino-Phillips

Musicians who write and produce their own music often copyright their work to retain complete control over its use and profitability. Copyright allows artists to distribute, perform, and publicly display their work, and to profit from the music they create, while preventing others from using or reproducing it. [1] Copyrighted material is protected for seventy years after the artist’s death; however, many questions arise when artists die and have their music released posthumously. [2] This exact situation occurred recently when Malcolm James McCormick, known professionally as Mac Miller, tragically died in September 2018, at the height of his career, only weeks after his fifth studio album was released. [3]

McCormick was an established rapper, singer, songwriter, and record producer. By the age of twenty-six, McCormick had amassed millions of fans and become known for championing the genre of melodic hip-hop on several of his gold-certified albums. [4] His death was shocking and unexpected, but his fans were even more surprised when McCormick’s sixth studio album was released posthumously on January 17, 2020, after his good friend and music-making partner, Jon Brion, finished producing the records Miller created prior to his death. [5] Although it is clear that McCormick owned all of the work that he released and copyrighted prior to his death, his most recent release raises the question: who now owns McCormick’s intellectual property, and who will receive royalties from his posthumous album?

Copyrights and royalties are considered personal property, and after one’s death, personal property can be transferred through intestate succession or through assignment in trusts or wills. [6] Trusts and wills allow artists to distribute their assets to various types of beneficiaries, including individuals, charities, or even museums. [7] If an artist has not created a will or trust, their property can be obtained by their spouse or blood relatives through intestate succession. [8]

In this case, prior to his death, McCormick had made the responsible decision to put his assets in a revocable trust. [9] Revocable living trusts can be adjusted as needed throughout the creator’s lifetime and allow the trustor to transfer property to their beneficiaries. [10] Although there are rumors that McCormick left all of his assets to his mother, father, and older brother, there is no way for the public to be sure, since revocable trusts are private, as opposed to wills, which are accessible to the general public. [11] Wills are common among most people; however, musicians have been creating revocable trusts in place of wills with increasing regularity. [12] A trust allows them to protect their assets, designate who will be in charge of their property after their death, alter the trust as their financial circumstances change, and keep their assets shielded from the public. [13]

Unfortunately, McCormick’s music career was tragically shortened, but his fans can rest easy knowing that his estate is in the control of the people he hand-selected to control his career after his death. And thankfully, many artists are following in his footsteps by protecting their work through revocable trusts.

[1] 17 U.S.C. § 106 (2019).

[2] 17 U.S.C. § 302 (2019).

[3] Brendan Klinkenberg, Mac Miller Dead at 26, Rolling Stone (Sept. 7, 2018, 5:25PM), https://www.rollingstone.com/music/music-news/mac-miller-dead-at-26-720756 [https://perma.cc/C8NG-D6TV].

[4] Id.

[5] Craig Jenkins, ‘Oh My God, He’s Even Better Than I Thought’: Producer Jon Brion on the Gutting Task of Completing Mac Miller’s Final Album After His Sudden Death, Vulture (Jan. 21, 2020), https://www.vulture.com/2020/01/mac-miller-circles-jon-brion-interview.html [https://perma.cc/AP7B-7D8K].

[6] 17 U.S.C. § 201 (2019).

[7] Id.

[8] Id.

[9] Sara M. Moniuskzo, Mac Miller Left Behind A Will, But Who Will His Estate Go To, USA TODAY (Sept. 15, 2018, 1:29PM) https://www.usatoday.com/story/life/people/2018/09/15/mac-millers-left-behind-but-who/1316618002 [https://perma.cc/YH6H-SM8J].

[10] Greg Iacurci, Deceased rapper Mac Miller was 26 and had a will – similar to that of Michael Jackson, Investment News (Sept. 21, 2018), https://www.investmentnews.com/deceased-rapper-mac-miller-was-26-and-had-a-will-similar-to-that-of-michael-jackson-76162 [https://perma.cc/LQ7G-TVZX].

[11] Id.

[12] Id.

[13] Id.


The CCPA: What is it and what does it mean for consumer privacy?

Written By: Bryce Hoyt

On January 1, 2020, the California Consumer Privacy Act (“CCPA”) took effect, triggering a flood of emails from corporations stating, “We’ve updated our privacy policy.” [1] The CCPA is the most comprehensive and far-reaching consumer privacy law to date, modeled in part on the European Union’s General Data Protection Regulation (“GDPR”). [2] For example, companies with $25 million in annual revenue, or any company storing data on at least 50,000 people, must comply or face a potential fine of up to $7,500 per record in violation. [3] Although the CCPA is a state law, it applies to any business that meets the threshold requirements above and that also does business in California or collects personal information on California residents. [4] This means that many companies outside California, or even outside the United States, must still comply if they do substantial business with California residents.
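The thresholds described above can be sketched in code. This is an illustrative paraphrase only, not a legal test: the function names and the simplified applicability logic are my own, and the statute’s actual criteria are more nuanced than two numbers and a flag.

```python
# Illustrative sketch of the CCPA thresholds described above.
# Names and logic are simplified for illustration; not legal advice.

ANNUAL_REVENUE_THRESHOLD = 25_000_000   # $25 million in annual revenue
RECORDS_THRESHOLD = 50_000              # data stored on at least 50,000 people
MAX_FINE_PER_RECORD = 7_500             # potential fine per record in violation

def ccpa_applies(annual_revenue: float, records_held: int,
                 does_business_in_california: bool) -> bool:
    """Rough illustration: the act reaches a business that meets either
    size threshold AND does business in California or collects personal
    information on California residents."""
    meets_threshold = (annual_revenue >= ANNUAL_REVENUE_THRESHOLD
                       or records_held >= RECORDS_THRESHOLD)
    return meets_threshold and does_business_in_california

def max_exposure(records_in_violation: int) -> int:
    """Worst-case statutory exposure at $7,500 per record in violation."""
    return records_in_violation * MAX_FINE_PER_RECORD
```

Even this toy version makes the compliance stakes concrete: a breach touching just 10,000 records could, in the worst case, expose a company to $75 million in fines.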

A few key provisions of the act prohibit the sale of personal data on children under the age of 13 without parental authorization and require children between the ages of 13 and 16 to give affirmative consent themselves before any data is collected (also known as the “opt-in” requirement). [5] Additional provisions put more power in the hands of consumers by allowing individuals to request full disclosure of the types of data a business collects, the categories of third-party companies the data is sold to, and the purpose of selling that data. [6] One of the most distinctive provisions allows consumers to request that all personal data relating to them be permanently deleted from company records, and gives the right to a private cause of action for violations (with exceptions). [7] These are just a few key aspects of the extensive requirements and guidelines set forth in the CCPA.

Privacy organizations and firms have started releasing CCPA “readiness assessment guides” to help advise companies and clients on how to comply with the sweeping changes to consumer privacy law. [8] Although the act lays out, in detail, many necessary changes companies must make to comply, some aspects remain ambiguous, such as what constitutes a data breach “cure.” Furthermore, the degree of enforcement by the California Attorney General’s office remains unclear. It appears only future litigation will answer the questions left open by the legislation; for now, companies are diligently working to establish protocols so as to avoid becoming the defining precedent.

[1] Maria Korolov, California Consumer Privacy Act (CCPA): What you need to know to be compliant, CSO (October 4, 2019, 3:00 AM PDT), [https://perma.cc/QN8T-CW8V].

[2] Id.

[3] Id.

[4] Emily Tabatabai, Antony Kim, & Jennifer Martin, Understanding California’s Game-Changing Data Protection Law, CORPORATE COUNSEL (July 16, 2018), https://s3.amazonaws.com/cdn.orrick.com/files/UnderstandingCaliforniaDataProtectionLaw.pdf [https://perma.cc/U5X3-BSME].

[5] Cal. Civ. Code §1798.120 (West 2019).

[6] Cal. Civ. Code §1798.110 (West 2019).

[7] Cal. Civ. Code §1798.150 (West 2019).

[8] ORRICK, California Consumer Privacy Act – Are you CCPA-Ready?, https://www.orrick.com/Practices/CCPA-Readiness [https://perma.cc/D6K4-G2E9].


CRISPR Patent Dispute Unlikely To End With A Winner

Written By: Aaron Shaw

The genetic engineering tool CRISPR-Cas9 is at the center of an ongoing patent dispute between the University of California, Berkeley, and the Broad Institute of MIT and Harvard. CRISPR is a protein-based mechanism derived from bacteria that can be manipulated to precisely splice and replace portions of a plant’s or animal’s genetic material. [1] Alternative genetic engineering technologies are more expensive and take more time to produce results. As a result, CRISPR is a powerful tool that can be put in the hands of many researchers.

High-precision genetic engineering will inevitably raise ethical and legal questions about who can receive treatment and for what purpose. Will we limit ourselves to curing genetic diseases, or will we allow ourselves to add favorable traits to humans? For now, the technology remains in its infancy. [2] The current battle, playing out before the USPTO and the European Patent Office (EPO), is over who owns the patents to CRISPR.

Jennifer Doudna, of the University of California, Berkeley, and Emmanuelle Charpentier, of the University of Vienna, were the first to discover the potential of CRISPR in prokaryotes (bacteria), and the first to apply for a patent. [3] [4] They have encountered fierce opposition from the Broad Institute of MIT and Harvard, which premises its rights to CRISPR on being the first to demonstrate its use in mammalian cells. [5] [6] The Broad Institute obtained a patent from the USPTO in 2017 for CRISPR’s use in eukaryotes (plant and animal cells). [7] Some would consider this a hard knock to UC Berkeley, since it was the first to publish the genetic engineering potential of the CRISPR mechanism, and the obvious next step was to apply the mechanism to eukaryotes. [8]

One month after the Broad Institute received its patent, the EPO granted UC Berkeley a patent for the use of CRISPR in eukaryotes. [9] The apparent split decision reveals the complexity of determining who owns a technology that is groundbreaking, highly lucrative, and used internationally. The Europeans stressed that the initial discovery of CRISPR conferred the right to use it across a broad field of application. [10] The American approach was narrower, determining that the initial demonstration of CRISPR in a specific setting, such as animal cells, created the right to its use in animals.

In 2017, one of our law students wrote about this dispute on this blog. [11] At the time, the most recent news was that UC Berkeley and the University of Vienna had defended their right to the broad use of CRISPR in Europe. [12] One reason this appeared to be a never-ending battle was the rate of CRISPR innovation: fast-moving technologies invite extended legal battles because even scientists cannot yet discern the boundaries of each discovery, and multiple parties held patents for narrowly defined applications of the CRISPR system. [13] The dispute between Berkeley and the Broad Institute nonetheless lingers as a contentious debate. The EPO’s decision did not resolve the core disagreement: whether the right to use CRISPR in animal cells flows from discovering the mechanism itself or from first demonstrating its use in animal cells.

In 2018, the US Court of Appeals doubled down, confirming there was “‘no interference in fact’” between the two parties’ patents. [14] In July 2019, UC Berkeley claimed the Broad Institute deceived the PTO by withholding information. [15] The Broad Institute considers this a low blow. [16] After all, UC Berkeley has just been granted its eleventh US patent involving CRISPR-Cas9 (with six more expected); this time the patents cover precise methods and substances for targeting DNA, a vital step for the CRISPR system to modify DNA. [17] Whether or not the Broad Institute deceived the PTO, it appears Berkeley has conceded ground by filing narrower patents in the US. It is now clear Berkeley must compete with all the other discoveries being made, [18][19][20][21] even as it refuses to give up on the dispute. Neither the PTO nor the EPO is likely to change its patent application requirements any time soon. This feud is likely headed toward a dead end, but neither party can be blamed for its pursuit, given the capital at risk.

This problem underscores the international challenge biotech companies confront. Laboratories and pharmaceutical companies conduct expensive and time-consuming research, [22] and are subsequently compelled to litigate when other parties obtain similar patents in other jurisdictions. This only delays innovation around such a powerful research tool, and in the case of CRISPR, the issue may reach beyond the patent offices. The Food and Drug Administration (FDA) is primarily concerned with somatic cell therapy in humans, but CRISPR-related patent applications are constantly pushing the boundaries of gene editing beyond somatic cell therapy. New technologies advance our understanding of how to apply the law. The FDA and the European Medicines Agency must be responsive to new CRISPR products that chart unregulated waters in order to encourage potential innovation.

[1] Jennifer Doudna, Genome-editing revolution: My Whirlwind year with CRISPR, Nature (December 22, 2015), https://www.nature.com/news/genome-editing-revolution-my-whirlwind-year-with-crispr-1.19063 [https://perma.cc/QSV3-Y2JD].

[2] Id.

[3] Heidi Ledford, Why the CRISPR patent verdict isn’t the end of the story, Nature (February 17, 2017), https://www.nature.com/news/why-the-crispr-patent-verdict-isn-t-the-end-of-the-story-1.21510 [https://perma.cc/X4D7-37RC].

[4] Jinek, Marti, et. al., A programmable dual RNA-guided DNA endonuclease in adaptive bacterial immunity, 337 Science 816-821 (2012).

[5] Heidi, supra note 3.

[6] Cong, Le, et. al., Multiplex Genome Engineering Using CRISPR/Cas Systems, 339 Science 819-823 (2013).

[7] Heidi, supra note 3.

[8] Programmable DNA Scissors found for bacterial immune system, Science Daily (June 28, 2012), https://www.sciencedaily.com/releases/2012/06/120628193020.htm [https://perma.cc/72MW-DRCH].

[9] Jef Akst, UC Berkeley Receives CRISPR Patent in Europe, The Scientist (March 24, 2017) https://www.the-scientist.com/?articles.view/articleNo/48987/title/UC-Berkeley-Receives-CRISPR-Patent-in-Europe/ [https://perma.cc/B8WR-3D2B].

[10] Id.

[11] Charles Cheng, A Gene-Editing Patent Dispute – What Does it Mean?, USF Blogs: Intellectual Property and Law Journal (March 28, 2017), https://usfblogs.usfca.edu/iptlj/2017/03/28/a-gene-editing-patent-dispute-what-does-it-mean/#more-1002 [https://perma.cc/ZT9L-K5QL].

[12] Jim Daley, Berkeley CRISPR Inventors Get Another Important European Patent, The Scientist (March 12, 2018), https://www.the-scientist.com/?articles.view/articleNo/52042/title/Berkeley-CRISPR-Inventors-Get-Another-Important-European-Patent/ [https://perma.cc/NTP7-UNMS].

[13] Heidi, supra note 3.

[14] Mark Terry, UC-Berkeley Rekindles U.S. Patent Dispute with the Broad Institute over CRISPR, BioSpace (August 1, 2019), https://www.biospace.com/article/crispr-patent-battle-isn-t-quite-over-yet/ [https://perma.cc/3T9P-4CC3].

[15] Id.

[16] Id.

[17] University of California, U. Vienna, Charpentier Get 11th U.S. CRISPR Cas-9 Patent, ClinicalOmics (August 26, 2019), https://www.clinicalomics.com/topics/precision-medicine-topic/university-of-california-u-vienna-charpentier-get-11th-u-s-crispr-cas-9-patent/ [https://perma.cc/N755-F965].

[18] Jon Cohen, Nirja Desai, With its CRISPR revolution, China becomes a world leader in genome editing, Science (August 2, 2019, 8:00 AM), https://www.sciencemag.org/news/2019/08/its-crispr-revolution-china-becomes-world-leader-genome-editing [https://perma.cc/3N7J-4B3C].

[19] Rich Haridy, CRISPR breakthrough allows scientists to edit multiple genes simultaneously, Newatlas (August 15, 2019), https://newatlas.com/crispr-cas12a-gene-editing-multiple-eth-zurich/61068/ [https://perma.cc/WK2C-A5WW].

[20] CRISPR gene editing may halt progression of triple-negative breast cancer, Medical Xpress (August 26, 2019), https://medicalxpress.com/news/2019-08-crispr-gene-halt-triple-negative-breast.html [https://perma.cc/Z8J3-G4RF].

[21] Hanae Armitage, Scientists Zero in on cancer treatments using CRISPR, Stanford Medicine (August 26, 2019), https://scopeblog.stanford.edu/2019/08/26/scientists-zero-in-on-cancer-treatments-using-crispr/ [https://perma.cc/9WV9-QZT3].

[22] Biotechnology Law, HG.org, https://www.hg.org/biotechnology.html [https://perma.cc/7BNM-H34X] (last visited Jan. 26, 2020).


Artificial Intelligence and the Law: Will Judges Run on Punch Cards?

Written By: Mgr. Bc. Seda Fabian

This article was originally published by the Common Law Review.

I. Introduction

This year, the Estonian Ministry of Justice asked its Chief Data Officer, Ott Velsberg, and his team to design a robot judge that could adjudicate small claims disputes of less than €7,000. [1] In the US, an algorithm called COMPAS helps recommend criminal sentences in some states by calculating the probability of defendant recidivism. [2] The UK-based DoNotPay AI-driven chatbot has already successfully contested more than 100,000 parking tickets in London and New York. [3] KLEROS is a blockchain-based dispute resolution program that provides fast, secure, and affordable arbitration. [4] And ROSS Intelligence, billed as the first artificially intelligent lawyer, collects and analyzes relevant leading cases. [5]

Although regulation and oversight are often seen as the antithesis of innovation, governments around the world tend to embrace developments that advance the goal of winning the ongoing jurisdictional rat race, much as the “Delaware effect” of the early 20th century spurred governments to rethink their policy approaches to incorporation. This article explores the interplay between artificial intelligence and the judicial system. Are judges who decide cases based on punch-card algorithms another plot from the popular dystopian television series “Black Mirror,” or are such developments an inevitable part of how we will decide cases in the near future? Can the capabilities of AI satisfy our human requirements for judges? Indeed, is it possible that AI-driven adjudication would remove human prejudices and thus produce more righteous decisions?

II. The Role of Human Judges

Artificial intelligence (AI) has become ubiquitous, and with the advance of technology, the law must account for these sorts of changes in society. The very first question that should be asked in the context of the ability to replace human judges with robots is what roles judges play within our society, how they should be selected, and what this reveals about the intersection of AI and jurisprudence.

In the West, we view judges as an integral part of the moral compass of society and the whole process of judge selection is meant to focus on their qualifications. Indeed, as legal scholars Sourdin and Zariski note, “Emotion not alone but in combination with the law, logic and reason – helps the judges get it right.” [6] They need to respond consciously, rationally, and with intuition and empathy. [7] Law stabilizes a society; it does not create it. [8] The role of a judge is a complex and multifaceted one. In addition to knowledge, authority, credentials and reputation, judges must have the ability to be empathetic, predict human behaviour, and interact with all kinds of people compassionately and without prejudice. What judges do on a daily basis within the frame of the characteristics mentioned above is to assess evidence and make decisions on fundamental questions of fact and law: guilty or innocent? Liable or not? Who is at fault? Who must pay?

III. Current Uses and Advantages of AI Adjudication

Recent years have shown that, even though the role of judge seems quintessentially human, and thus unlikely to be replaced by automation, AI in fact has the capacity to do certain aspects of the job better. AI, though naturally not possessing the qualities mentioned above, does have the capacity to collect large volumes of data, including all relevant statutes, case law, and evidence, and then produce a decision. In the legal world, AI can be effective in legal research, compliance, contract analysis, and case prediction, as well as in document automation. [9]

One recent study showed that AI could predict a prosecutor’s decision with 88% accuracy. [10] Moreover, a closer examination shows that this does not mean there was a 12% “error rate,” because when human decisions were reviewed on appeal, they were affirmed at only an 85% rate. [11] Even the head of the US Supreme Court, Chief Justice John G. Roberts, has said AI is having a significant impact on how the legal system in the United States works. In 2017, he told The New York Times, “It’s a day that’s here and it’s putting a significant strain on how the judiciary goes about doing things. (…) The impact of technology has been across the board, and we haven’t yet really absorbed how it’s going to change the way we do business.” [12] The renaissance of AI has been truly remarkable. The period of the so-called “AI winter” has passed, and now AI’s influence extends beyond the world of tech giants such as Amazon, Microsoft, and Google, and will touch nearly every aspect of our everyday lives, including one of the oldest disciplines on the planet: law. [13] And it indeed brings new challenges, as well as pros and cons.

Importantly, AI has already been deployed extensively in legal settings. Data analysis of legal documents is a burgeoning new industry powered by more elegant and effective AI algorithms that have the capacity to finish legal research within a shorter timeframe than even the most adept human being. Recently, the Computer Science Department at the University of Alberta developed an algorithm that can pass the notoriously difficult Japanese Bar Exam. The same team is now working on developing AI that can “weigh contradicting legal evidence, rule on cases, and predict the outcomes of future trials.” [14] The algorithm can use demographic and other data such as age, sex, and employment history to calculate the probability of criminal behaviour. It takes into account the past as well as more general trends within the data. This could be dangerous, as it creates the possibility that the algorithm produces a result based on statistical probability that, while generally likely, is not specifically true in an individual case. An approach such as this leads one to question the role of individuality and the characteristics of each person.
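The kind of risk algorithm described above, which turns demographic inputs into a probability of criminal behaviour, can be sketched as a simple logistic model. To be clear, the features, weights, and function below are invented for illustration; real risk tools such as COMPAS are proprietary, and their actual inputs and parameters are not public.

```python
import math

# Invented weights for illustration only; real risk-assessment tools
# are proprietary and their features and weights are not public.
WEIGHTS = {"age": -0.05, "prior_offenses": 0.4, "employed": -0.6}
BIAS = 0.5

def recidivism_probability(age: float, prior_offenses: int,
                           employed: bool) -> float:
    """Logistic model: squashes a weighted sum of features into (0, 1)."""
    score = (BIAS
             + WEIGHTS["age"] * age
             + WEIGHTS["prior_offenses"] * prior_offenses
             + WEIGHTS["employed"] * (1 if employed else 0))
    return 1 / (1 + math.exp(-score))
```

The sketch makes the article’s structural concern visible: the output is driven entirely by how the defendant resembles past statistical patterns in the data, not by anything individual to the person before the court.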

IV. Shortcomings of Human Adjudication

There presently exist AI programs that claim to accurately determine the likelihood of recidivism. We cannot, though, forget the high-profile case from 2017 in which Eric Loomis was sentenced to six years in prison based in part on recommendations from the COMPAS algorithm, which had predicted “a high risk of violence, high risk of recidivism, and high pretrial risk.” [15] The case raised many important questions. Does the human judge bear any reduced responsibility? Should a defendant have a choice whether to have their case heard by a human judge or an AI? Should a judge rely more on numbers than on their own judgment? Is there any space left for compassion? What are the chances that the AI erred by not taking relevant information into account? Do we even need a judge who is, after all, just a human being: fallible, full of prejudice, and likely to make mistakes?

During a recent TEDx presentation in Zurich, Elliott Ash, Assistant Professor of Law, Economics, and Data Science at ETH Zurich, presented research on the American immigration adjudication system and just how much the result depends upon the adjudicator. For instance, one judge in San Francisco granted 90.6% of asylum requests, while another granted just 2.9%. He referred to this form of justice as, in fact, “the luck of the draw.” [16] Even worse are the numbers on jailing decisions before and after lunch breaks, which show that the same judge, depending on their mood, might deliver wildly different sentences. [17] Perhaps most concerning is data on the lifetime likelihood of imprisonment for US residents born in 2001, broken down by race. The shocking nature of these findings suggests that removing the human element from sentencing would go a long way toward ensuring fairness, or at least consistency. [18]

A closer and deeper examination, unfortunately, reveals an even worse picture. Judges have a tendency to hide connections to litigants and to their lawyers, which leads to conflicts of interest. Oversight bodies claimed to find wrongdoing in nearly half of the complaints about judicial conflicts of interest; that said, over 90% of the investigated complaints were dismissed by state court authorities without any substantive inquiry. This sort of information naturally leads one to question the real independence and fairness of our legal systems.

Sadly, conflicts of interest are just a small part of the larger problem of human bias that plagues the system. Judges are prone to racial biases, explicit and implicit. In these cases, dispassionate arbiters could be seen in a new light – they could bring fairness and consistency in decisions. In this way, AI may be more impartial than humans. “Humans can be swayed by emotion. Humans can be convinced. Humans get tired or have a bad day,” says Tracy Greenwood, an expert in e-discovery whose company uses machines to perform legal discovery work faster and more accurately than humans. “In a high crime city, a judge might start to hand out harsher sentences towards the upper end of the sentencing guidelines. In court, if a judge does not like one of the lawyers, that can affect the judge’s opinion,” says Greenwood. Machines could be a dispassionate solution to all of this, without human bias, irrationality, or mistakes.

V. Is AI Actually Objective?

Critics of AI-powered jurisprudence would resist any framing of the issue that idealizes the supposed neutrality and objectivity of algorithms. In recent years, genuine concerns have arisen that the way AI operates can lead to discriminatory outcomes. Moreover, because these systems are complex and built upon proprietary programs, there is little transparency in terms of how precisely decisions are reached. There are no open inspections, nor are there explanations of what specifically the AI relied upon to generate a decision. Thus, we are forced to answer a complicated question that moves us one step upstream in the process: are we certain that the algorithms themselves are not biased?

In May 2016, the investigative journalism organization ProPublica ran an investigation into machine bias within the COMPAS algorithm. [19] According to ProPublica, COMPAS was prone to overestimate the likelihood of recidivism by black defendants and underestimate that of white defendants. They used the example of the algorithm’s assessment of two defendants: one a 41-year-old man of European heritage and a seasoned criminal, the other a teenage African-American girl who had never been arrested before. Both had stolen items of the same value, but the machine failed to contextualize the fact that the girl stole a bicycle and had no serious criminal record; instead, it appeared to weigh race. COMPAS rated the girl as high risk and the man as low risk. [20]
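ProPublica’s critique rested on comparing error rates across racial groups: among people who did not go on to reoffend, black defendants were flagged high-risk far more often than white defendants. A minimal sketch of that kind of audit, on invented toy data (the records, group labels, and function below are mine, not ProPublica’s actual dataset or code):

```python
def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` whom the tool flagged high-risk.
    Each record is a tuple: (group, flagged_high_risk, actually_reoffended)."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives) if negatives else 0.0

# Invented toy data illustrating a disparity of the kind ProPublica reported:
# group A's non-reoffenders are flagged high-risk more often than group B's.
audit = [
    ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", False, False), ("B", False, False), ("B", True, False), ("B", True, True),
]
```

An audit like this makes the disparity measurable even when the tool’s internals stay secret, which is why error-rate comparisons have become a standard way to probe opaque risk-assessment systems.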

Another area where machines fall short in the judicial system is privacy cases, where arguably only human beings can perceive the subtle difference between the positive and negative effects of protecting a victim’s privacy. This thin line is hard to see even for an experienced judge, and is thus nearly impossible for AI. [21]

VI. Conclusion

Taking all of these factors into consideration, our goal should be to use AI for what it does best. It is excellent at predicting the biases of individual judges and correcting for them. AI could also be used to detect systemic bias, to understand it, and to provide that data to policymakers and the public so that we can find ways to reduce such biases. We also need to focus on creating a comprehensive legal framework that protects data and our right to privacy. And last but not least, we should aim to make AI decisions accountable and transparent. [22] To these ends, Prof. Ryan Calo, whose research focuses on cyber law and robotics, made an important point when he wrote, “Ultimately, judges and their audiences will need to grapple with the increasing capability of robots to exercise discretion and act in unpredictable ways, updating both the ways judges invoke robots in judicial reasoning and the common law that attends legal conflicts involving real robots.” [23]

AI has the capacity to improve our legal system in myriad and important ways. However, by subjecting ourselves purely to the decisions of opaque algorithms, we might end up with a non-human approach to justice that is ultimately suboptimal. The moral compass of our society cannot be placed into the hands of machines. At this point in our evolution and their development, we must not forget that judging requires not only knowledge of the law and the case evidence, but also the empathetic ability to understand the emotions and motivations underlying human behaviour. If we can find a way to use robots to bring greater consistency and clarity to legal proceedings without risking fairness, we will have arrived at the ideal balance.

[1] Niiler, Eric. (2019) Can AI Be a Fair Judge in Court? Estonia Thinks So. WIRED [online]. Available at: https://www.wired.com/story/can-ai-be-fair-judge-court-estonia-thinks-so/ [Accessed 2019-06-26].

[2] Kehl, Danielle, Guo, Priscilla, Kessler, Samuel. (2017). Algorithms in the Criminal Justice System: Assessing the Use of Risk Assessments in Sentencing. Responsive Communities. Available at: https://cyber.harvard.edu/ publications/2017/07/Algorithms. [Accessed 2019-06-26].

[3] Niiler, Eric. (2019). op. cit.

[4] KLEROS: The Blockchain Dispute Resolution Layer [online]. Available at: https://kleros.io [Accessed 2019-06-26].

[5] ROSS Intelligence [online]. Available at: https://rossintelligence.com [Accessed 2019-06-26].

[6] Sourdin, T., Zariski, A. (2018) The Responsive Judge: International Perspectives. Springer. Page 88. And Chin 2012, 1581; see also Sinai and Alberstein 2016, esp. 225; Colby 2012, esp. 1946.

[7] Sourdin, T., Cornes, R. (2018) Do Judges Need to Be Human? The implications of Technology for Responsive Judging. [online]. Available at: https://www.researchgate.net/publication/326244385_Do_Judges_Need_to_Be_Human_The_Implications_of_Technology_for_Responsive_Judging [Accessed 2019-06-26].

[8] Laub, B. (1969). The Judge’s Role in a Changing Society. Judicature. Vol. 53, number 4. p. 140. [online]. Available at: https://heinonline.org/HOL/LandingPage?handle=hein.journals/judica53&div=44&id=&page=&t=1561551698 [Accessed 2019-06-26].

[9] Mills, M (2016) Artificial Intelligence in Law: The State of Play 2016 (Part 1). Legal Executive Institute, 23 February. [online]. Available at: http://www.legalexecutiveinstitute.com/artificial-intelligence-in-law-the-state-of-play-2016-part-1/. Accessed 21 June 2019.

[10] ASH, Elliott. Robot judges: TEDxZurichSalon [online]. Available at: https://www.youtube.com/watch?v=6qIj7xSZKd0. [Accessed 2019-06-26].

[11] ASH, Elliott. ibid.; and EDWARDS, Barry C. Why Appeals Courts Rarely Reverse Lower Courts: An Experimental Study to Explore Affirmation Bias [online]. 2017. Available at: http://law.emory.edu/elj/elj-online/volume-68/essays/appeals-courts-reverse-lower-courts-study-explore-affirmation-bias.html [Accessed 2019-06-26].

[12] LIPTAK, Adam. (2017). Sent to Prison by a Software Program’s Secret Algorithms [online]. Available at: https://www.nytimes.com/2017/05/01/us/politics/sent-to-prison-by-a-software-programs-secret-algorithms.html. [Accessed 2019-06-26].

[13] KUGLER, Logan. (2018). AI Judges and Juries. Communications of the ACM, Vol. 61, No. 12 [online]. Available at: https://cacm.acm.org/magazines/2018/12/232890-ai-judges-and-juries/fulltext [Accessed 2019-06-26].

[14] SNOWDON, Wallis. (2017) Robot judges? Edmonton research crafting artificial intelligence for courts [online]. Available at: https://www.cbc.ca/news/canada/edmonton/legal-artificial-intelligence-alberta-japan-1.4296763 [Accessed 2019-06-26].

[15] DRESSEL, Julia and Hany FARID. (2018) The accuracy, fairness, and limits of predicting recidivism. Science Advances [online]. Available at: https://advances.sciencemag.org/content/4/1/eaao5580 [Accessed 2019-06-26]; and LARSON, Jeff, Surya MATTU, Lauren KIRCHNER and Julia ANGWIN. (2016) How We Analyzed the COMPAS Recidivism Algorithm. ProPublica [online]. Available at: https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm [Accessed 2019-06-26].

[16] ASH, Elliott. Ibid.

[17] SOURDIN, Tania. (2018) Judge v Robot? Artificial Intelligence and Judicial Decision-Making. UNSW Law Journal. Vol. 41. [online]. Available at: http://www.unswlawjournal.unsw.edu.au/wp-content/uploads/2018/12/Sourdin.pdf [Accessed 2019-06-26].

[18] ASH, Elliott. Ibid.

[19] LARSON, Jeff, Surya MATTU, Lauren KIRCHNER and Julia ANGWIN. (2016) How We Analyzed the COMPAS Recidivism Algorithm. ProPublica [online]. Available at: https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm [Accessed 2019-06-26].

[20] LARSON, Jeff, Surya MATTU, Lauren KIRCHNER and Julia ANGWIN. ibid.; and WASHINGTON, Anne. (2019) How to Argue with an Algorithm: Lessons from the COMPAS ProPublica Debate. The Colorado Technology Law Journal, Volume 17, Issue 1. Page 22. [online]. Available at SSRN: https://ssrn.com/abstract=3357874 [Accessed 2019-06-26].

[21] KUGLER, Logan. op. cit.

[22] IBM (2018) Bias in AI: How we Build Fair AI Systems and Less-Biased Humans [online]. Available at: https://www.ibm.com/blogs/policy/bias-in-ai/ [Accessed 2019-06-26].

[23] Calo, Ryan. (2017) Robots as Legal Metaphors. Harvard Journal of Law and Technology, Vol. 30, No. 1, 2016; University of Washington School of Law Research Paper No. 2017-04. [online]. Available at SSRN: https://ssrn.com/abstract=2913746 [Accessed 2019-06-26].