Towards Intelligent Regulation of AI

By Prajakta Pradhan on March 8, 2021.

Introduction

The term Artificial Intelligence (“AI”) was first coined by John McCarthy,[1] who defined it as the science and engineering of making intelligent machines, especially intelligent computer programs.[2] In layman’s terms, AI is about making machines more intelligent and capable of performing increasingly complex tasks; at its core, it is about extracting patterns from big data to make sense of it.

AI benefits various areas such as governance, decision making, and development, and it is changing our lives through innovations like self-driving vehicles[3] and voice assistants such as Siri and Alexa.[4] AI experiments have produced favorable outcomes on some occasions and disastrous ones on others. For instance, the first fatal accident involving AI occurred when an Uber self-driving car killed a pedestrian in Arizona.[5] Moreover, there is always a high risk of AI technology being misused in egregious ways, such as privacy violations. Therefore, the government must provide smart regulations for AI that encompass flexible, innovative, imaginative, and generally new forms of social control.[6]

I. Why AI Has to be Regulated

AI is one component of the more general concept of intelligent machines, and the rise of smart machines presents both a challenge and an opportunity for the foreseeable future. AI is not a single technology but a broad discipline: the fuel that powers algorithms. Regulation comes into play when those algorithms are manifested in specific applications. While there is substantial investment in AI technology, investment in AI safety is lacking.[7] As such, there should be a government committee to oversee anything related to AI, just as there are authorities for managing food and drug safety,[8] automotive safety,[9] and aircraft safety.[10] Such a committee is vital because the government should oversee things that pose a risk to the public; AI unequivocally has the potential to endanger the public and thus should be regulated like other dangerous industries. The main obstacle to creating a regulatory agency is that government procedures take time. The government and other corporate organizations usually take action only after some disaster takes place, followed by public protests.[11]

On the other hand, the technology industry’s approach is to work within the boundaries of AI ethics and principles. For instance, Google has pledged not to use AI algorithms for weapons and other technologies that cause “overall harm,” but can we expect every organization to practice similar restraint?[12] No, we cannot. Researchers and application developers are often pressured to finish projects under time constraints, and the danger of pressured development is that even a relatively small mistake can have disastrous consequences.[13] Thus, it is necessary to incorporate AI ethics into law so that the government can better regulate AI.

Tech giant executives like Tesla Chief Executive Officer Elon Musk and Google Chief Executive Officer Sundar Pichai have already raised concerns regarding AI regulation.[14] Musk has noted that, as a generation, we are already far behind in passing regulations for AI because the technology has advanced tremendously and continues to expand every day.[15] Pichai has said that AI is too important not to regulate and has called for a “sensible approach.”[16] Microsoft President Brad Smith is another distinguished figure who supports regulating AI.[17]

II. Why Arguments Against AI Regulation are Unpersuasive

Many researchers believe that while AI used in the public domain should be regulated, the development of new applications in the private sector should remain unregulated. These researchers believe science should not be regulated or restricted because doing so will hamper innovations that can be life-changing.[18] For example, AI is currently used to detect earthquakes,[19] and in the future, it may help doctors better predict patients’ risks of cancer.[20] However, this argument is inherently flawed because it assumes that AI regulation will amount to a complete restriction of AI. In reality, AI regulation will allow the continued use of the technology, but in a more secure, transparent, and responsible manner. More importantly, this argument also assumes that AI innovation is always beneficial and that any regulation would stifle it. In fact, there have been many cases in which the application of AI technology has been extremely dangerous and controversial, such as law enforcement’s use of Clearview AI’s facial recognition products[21] and Cambridge Analytica’s use of personal data to target voters.[22]

The second argument asserted by researchers is that limitations on technological advancement will inevitably be ineffective because people do not diligently follow regulations. For instance, despite the medical community’s agreement not to experiment on human embryos, a Chinese scientist, He Jiankui, ignored Chinese regulations on genome editing and created gene-edited babies.[23] Although some people might choose to ignore AI regulations, that does not mean AI should not be regulated at all. If this logic were generally accepted, no law would ever be passed simply because some individuals will not abide by it. Yet society is, and remains, governed by laws.

Another argument against AI regulation is that regulators and policymakers are unfit to draft regulations because they are uneducated about AI.[24] This argument fails to recognize that effective AI regulation does not depend on regulators’ technical mastery; no human can fully comprehend advanced technology whose risks extend beyond the scope of government oversight.[25] Moreover, governments regulating new and unfamiliar technology is not a new concept. The U.S. government successfully passed regulations for railways[26] and automobiles,[27] both examples of technologies that were once revolutionary.

Furthermore, AI regulation is needed to address three main questions: who can use AI, on whom will it be used, and for what purpose?[28] These three areas can be regulated by the government without any deep technical expertise. For example, the U.S. government did not need to understand the underlying technology of steam engines to regulate railway fares.[29] AI, likewise, can be regulated with existing resources rather than left completely unregulated.

III. Considerations to Make Prior to Regulating

When drafting regulations for AI, the first thing to keep in mind is that the speed of regulation should match the rate at which the technology is evolving. Second, before moving forward with new laws and regulations, the government must balance the risks and benefits of AI. Our minds naturally notice the risks first, but AI is a technology that allows us to understand things far beyond the comprehension of our minds: it surpasses our natural intelligence. For example, DeepMind’s AlphaGo is not constrained by the limits of human knowledge because it combines its own neural network with a powerful search algorithm to play board games.[30] Google now employs this technology in its data center cooling systems, making them forty percent more energy efficient than traditional cooling methods.[31] Furthermore, with AI’s energy efficiency, we can cut carbon emissions.[32] In the medical field, algorithms can support better clinical judgments and save the healthcare industry billions of dollars.[33] In agriculture, we can better utilize each acre of land by detecting plant diseases, controlling pests, and automating equipment.[34] Everywhere we look, applying this technology yields incredible gains.

If we focus only on the negative consequences of AI, the resulting regulations will stifle innovation. For example, flying today is much safer than it was decades ago.[35] How did we get there? Every single accident was thoroughly investigated. Rather than abandoning planes altogether, we asked how and why things went wrong. AI can be compared to the rise of statistics in the nineteenth century, which allowed us to understand the world in an entirely new way. We used the Apollo computers and statistics to take people to the moon and back;[36] we are doing something similar with AI, but on a whole new level.

IV. The Emerging State of AI Laws

It is not the technology itself that needs to be regulated, but its intended use. The solution is not to control and regulate the entire sphere of AI, but its actual applications, be it self-driving cars, medical systems, or recreation. But here also lies the problem: there are potential outcomes and applications of AI technology that are still unknown. To solve this, we need to work in stages. The first stage is to follow a comprehensive set of principles, and several organizations have already provided ethical principles for AI.[37] An excellent example is the Asilomar AI Principles, a set of twenty-three principles released in 2017.[38] The second stage is to understand the different levels of AI technology in need of regulation: the first level is data and ensuring it is not misused, the second is the need for built-in privacy frameworks, and the final level is understanding where the technology has gone wrong.

The European Union has been the most vigorous in proposing new AI rules and regulations.[39] With the advent of autonomous vehicles, many European countries, including Belgium, Estonia, Germany, Finland, and Hungary, have enacted laws that allow the testing of autonomous vehicles on their roads.[40]

In contrast, in the United States, the White House has maintained a “light touch” regulatory approach to AI.[41] Recently, the White House released “Guidance for the Regulation of Artificial Intelligence Applications.”[42] These guidelines follow an approach in which sectoral regulators formulate rules within their separate jurisdictions.[43] This approach allows the central government to regulate some aspects of autonomous vehicles and state authorities to control others.

Data is another aspect of AI that needs regulatory attention. Laws concerning data are significant for AI since those laws can impact the use and growth of AI systems. In 2018, the European Union introduced the General Data Protection Regulation (“GDPR”),[44] which requires its member states to maintain a reasonably prohibitive regulatory approach for data privacy and usage.[45] The Catholic Church has also expressed the need for stricter ethical and moral standards on the development of AI.[46] The Rome Call for AI Ethics set forth six basic principles for AI: transparency, responsibility, impartiality, inclusion, reliability, and security.[47]

Countries such as the United States, Brazil, and the United Kingdom have already enacted data privacy laws.[48] Singapore, Australia, and Germany are actively considering such regulations and are in advanced discussions on the topic.[49] Many countries are also concerned about the potential use of AI to power autonomous weapon systems. Belgium, for example, has passed legislation to thwart the use or development of lethal autonomous weapons systems.[50]

Conclusion

AI is set to transform society through innovations across all spheres of human endeavor. However, there must be regulations to control AI and to hold its creators accountable when mishaps occur. By ensuring that AI is developed responsibly, we can not only make future generations believe in the power of technology, but more importantly, improve society through the power of AI technology.



          *     B.A. LL.B. (Hons.)- 2024, National Law University (RMLNLU), Lucknow, India.

         [1].     V Rajaraman, John McCarthy – Father of Artificial Intelligence, 19 Resonance: J. Sci. Edu. 198, 198 (2014) (India), https://www.ias.ac.in/article/fulltext/reso/019/03/0198-0207 [https://perma.cc/6HEG-UQQY].

         [2].     John McCarthy, What Is AI? / Basic Questions, Stan. Univ.: Professor John McCarthy, http://jmc.stanford.edu/artificial-intelligence/what-is-ai/index.html [https://perma.cc/79WQ-KGVL]. 

         [3].     Rilind Elezaj, How AI Is Paving the Way for Autonomous Cars, Mach. Design (Oct. 17, 2019), https://www.machinedesign.com/mechanical-motion-systems/article/21838234/how-ai-is-paving-the-way-for-autonomous-cars [https://perma.cc/Y9Q9-NMUD].

         [4].     Rozita Dara, The Dark Side of Alexa, Siri and Other Personal Digital Assistants, Conversation (Dec. 15, 2019, 8:34 AM), https://theconversation.com/the-dark-side-of-alexa-siri-and-other-personal-digital-assistants-126277 [https://perma.cc/R455-LEWZ].

         [5].     Troy Griggs & Daisuke Wakabayashi, How a Self-Driving Uber Killed a Pedestrian in Arizona, N.Y. Times (Mar. 21, 2018), https://www.nytimes.com/interactive/2018/03/20/us/self-driving-uber-pedestrian-killed.html#:~:text=The%20self%2Ddriving%20Uber%20was%20traveling%20north%20at%20about%2040,around%2010%20p.m.%20on%20Sunday [https://perma.cc/HZ8M-N4YR].

         [6].     Smart Regulation, Eurofound: EurWORK (May 4, 2011), https://www.eurofound.europa.eu/observatories/eurwork/industrial-relations-dictionary/smart-regulation [https://perma.cc/4E2D-MG68].

         [7].     Wyatt Berlinic, Why AI Safety Is Important, Wyaber (July 7, 2019), https://wyaber.com/why-ai-safety-is-important/ [https://perma.cc/UH7N-3EKL].

         [8].     What We Do, U.S. Food & Drug Admin. (Mar. 28, 2018), https://www.fda.gov/about-fda/what-we-do [https://perma.cc/H9LY-54FV].

         [9].     Nat’l Highway Traffic Safety Admin., https://www.nhtsa.gov/ [https://perma.cc/M2M6-UX9S].

      [10].     Safety: The Foundation of Everything We Do, Fed. Aviation Admin. (Nov. 6, 2019, 3:01 PM), https://www.faa.gov/about/safety_efficiency/ [https://perma.cc/7MGQ-SMN4].

      [11].     See Jillian D’Onfro, Google Scraps Its AI Ethics Board Less than Two Weeks After Launch in the Wake of Employee Protest, Forbes (Apr. 4, 2019, 7:52 PM), https://www.forbes.com/sites/jilliandonfro/2019/04/04/google-cancels-its-ai-ethics-board-less-than-two-weeks-after-launch-in-the-wake-of-employee-protest/?sh=64d50f056e28 [https://perma.cc/F2EJ-MYRP].

      [12].     Mark Bergen, Google Renounces AI for Weapons; Will Still Work with Military, Bloomberg (June 7, 2018, 3:40 PM), https://www.bloomberg.com/news/articles/2018-06-07/google-renounces-ai-for-weapons-but-will-still-sell-to-military#:~:text=Google%20pledged%20not%20to%20use,pursue%20future%20lucrative%20government%20deals [https://perma.cc/A8TU-RVBN].

      [13].     See generally Geoff White, Use of Facial Recognition Tech ‘Dangerously Irresponsible’, BBC News (May 13, 2019), https://www.bbc.com/news/technology-48222017 [https://perma.cc/MN55-7BLW] (describing how slight mistakes in the use of AI by law enforcement could lead to a bias against racial minorities).

      [14].     AI Needs to Be Regulated, Says Alphabet CEO Sundar Pichai, HINDU (India) (Jan. 21, 2020, 12:03 PM), https://www.thehindu.com/sci-tech/technology/ai-needs-to-be-regulated-says-alphabet-ceo-sundar-pichai/article30613469.ece [https://perma.cc/SW8K-LCHE].

      [15].     James Vincent, Elon Musk Says We Need to Regulate AI Before It Becomes a Danger to Humanity, Verge (July 17, 2017, 4:43 AM), https://www.theverge.com/2017/7/17/15980954/elon-musk-ai-regulation-existential-threat [https://perma.cc/LZ9T-ET9F].

      [16].     Google Boss Sundar Pichai Calls for AI Regulation, BBC News (Jan. 20, 2020), https://www.bbc.com/news/technology-51178198#:~:text=The%20head%20of%20Google%20and,(AI)%20to%20be%20regulated.&text=He%20said%20that%20individual%20areas,health%20tech%2C%20required%20tailored%20rules [https://perma.cc/4KXE-A7JY].

      [17].     Monica Nickelsburg, Microsoft President Brad Smith Calls for AI regulation at Davos, GeekWire (Jan. 21, 2020, 10:13 AM), https://www.geekwire.com/2020/microsoft-president-brad-smith-calls-ai-regulation-davos/ [https://perma.cc/ZD2F-9T95].

      [18].     Andrea O’Sullivan, Don’t Let Regulators Ruin AI, MIT Tech. Rev. (Oct. 24, 2017), https://www.technologyreview.com/2017/10/24/3937/dont-let-regulators-ruin-ai/ [https://perma.cc/RN44-7MR7].

      [19].     Josie Garthwaite, AI Detects Hidden Earthquakes, Stan. News (Oct. 21, 2020), https://news.stanford.edu/2020/10/21/ai-detects-hidden-earthquakes/ [https://perma.cc/EVG3-ACSK].

      [20].     Artificial Intelligence Helps Better Predict Mouth Cancer Risk, Hindu (India) (Nov. 5, 2020, 11:58), https://www.thehindu.com/sci-tech/health/artificial-intelligence-helps-better-predict-mouth-cancer-risk/article33028342.ece [https://perma.cc/47L7-G7SY].

      [21].     Kashmir Hill, The Secretive Company that Might End Privacy as We Know It, N.Y. Times (Feb. 10, 2020), https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html [https://perma.cc/2ZQG-JM2Y].

      [22].     Nicholas Confessore, Cambridge Analytica and Facebook: The Scandal and the Fallout So Far, N.Y. Times (Apr. 4, 2018), https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html [https://perma.cc/8MFY-YTNB].

      [23].     Sharon Begley, Amid Uproar, Chinese Scientist Defends Creating Gene-Edited Babies, STAT (Nov. 28, 2018), https://www.statnews.com/2018/11/28/chinese-scientist-defends-creating-gene-edited-babies/ [https://perma.cc/KE3U-YEN4].

      [24].     Tristan Greene, US Government Is Clueless About AI and Shouldn’t Be Allowed to Regulate It, Next Web (Oct. 24, 2017), https://thenextweb.com/artificial-intelligence/2017/10/24/us-government-is-clueless-about-ai-and-shouldnt-be-allowed-to-regulate-it/ [https://perma.cc/9SSM-5UWY].

      [25].     Michael Spencer, Artificial Intelligence Regulation May Be Impossible, Forbes (Mar. 2, 2019, 9:34 PM), https://www.forbes.com/sites/cognitiveworld/2019/03/02/artificial-intelligence-regulation-will-be-impossible/?sh=6670ff2e11ed [https://perma.cc/MJ83-JWV7].

      [26].     See generally The Interstate Commerce Act Is Passed, U.S. Senate, https://www.senate.gov/artandhistory/history/minute/Interstate_Commerce_Act_Is_Passed.htm [https://perma.cc/R4PV-RK7V].

      [27].     Federal Legislation Makes Airbags Mandatory, History (July 28, 2019), https://www.history.com/this-day-in-history/federal-legislation-makes-airbags-mandatory [https://perma.cc/2QY3-NTMW].

      [28].     We Can’t Regulate AI, AI Myths, https://www.aimyths.org/we-cant-regulate-ai [https://perma.cc/5Y65-RHKT].

      [29].     See generally The Interstate Commerce Act Is Passed, supra note 26.

      [30].     David Silver & Demis Hassabis, AlphaGo Zero: Starting from Scratch, DeepMind (Oct. 18, 2017), https://deepmind.com/blog/article/alphago-zero-starting-scratch [https://perma.cc/ZRP2-SZMU].

      [31].     Will Knight, Google Just Gave Control Over Data Center Cooling to an AI, MIT Tech. Rev. (Aug. 17, 2018), https://www.technologyreview.com/2018/08/17/140987/google-just-gave-control-over-data-center-cooling-to-an-ai/#:~:text=Google%20revealed%20today%20that%20it,centers%20to%20an%20AI%20algorithm.&text=This%20system%20previously%20made%20recommendations,percent%20in%20those%20cooling%20systems [https://perma.cc/3G7R-RRX2].

      [32].     James Vincent, Here’s How AI Can Help Fight Climate Change According to the Field’s Top Thinkers, Verge (June 25, 2019, 8:02 AM), https://www.theverge.com/2019/6/25/18744034/ai-artificial-intelligence-ml-climate-change-fight-tackle [https://perma.cc/B93A-JE9Z].

      [33].     See Bernard Marr, How Is AI Used in Healthcare – 5 Powerful Real-World Examples that Show the Latest Advances, Forbes (July 27, 2018, 12:41 AM), https://www.forbes.com/sites/bernardmarr/2018/07/27/how-is-ai-used-in-healthcare-5-powerful-real-world-examples-that-show-the-latest-advances/?sh=ac7c3f05dfbe [https://perma.cc/R4PV-RK7V].

      [34].     The Future of AI in Agriculture: Intel-Powered AI Helps Optimize Crop Yields, Intel, https://www.intel.in/content/www/in/en/big-data/article/agriculture-harvests-big-data.html [https://perma.cc/CZ8P-52LV].

      [35].     Mark Ellwood, What Flying Was Like 30 Years Ago, Condé Nast Traveler (Aug. 28, 2017), https://www.cntraveler.com/story/what-flying-was-like-30-years-ago [https://perma.cc/4KMP-URTH].

      [36].     See Charles Fishman, The Amazing Handmade Tech that Powered Apollo 11’s Moon Voyage, History (July 17, 2019), https://www.history.com/news/moon-landing-technology-inventions-computers-heat-shield-rovers [https://perma.cc/KQA8-RATP].

      [37].     See Thilo Hagendorff, The Ethics of AI Ethics: An Evaluation of Guidelines, 30 Minds & Machs. 99 (2020).

      [38].     TechTarget Contributor, Asilomar AI Principles, TechTarget (Feb. 2019), https://whatis.techtarget.com/definition/Asilomar-AI-Principles [https://perma.cc/3CVV-SLHT].

      [39].     See Shaping Europe’s Digital Future: Artificial Intelligence, European Comm’n (Jan. 8, 2021), https://ec.europa.eu/digital-single-market/en/artificial-intelligence [https://perma.cc/JUE8-6LF8].

      [40].     Kathleen Walch, AI Laws Are Coming, Forbes (Feb. 20, 2020, 11:00 PM), https://www.forbes.com/sites/cognitiveworld/2020/02/20/ai-laws-are-coming/?sh=38f5ecfa2b48 [https://perma.cc/GW9Q-DGRP].

      [41].     Brandi Vincent, White House Proposes ‘Light Touch Regulatory Approach’ for Artificial Intelligence, Nextgov (Jan. 7, 2020), https://www.nextgov.com/emerging-tech/2020/01/white-house-proposes-light-touch-regulatory-approach-artificial-intelligence/162276/ [https://perma.cc/9YPK-7EFJ].

      [42].     Clyde Wayne Crews Jr., How the White House “Guidance for Regulation of Artificial Intelligence” Invites Overregulation, Forbes (Apr. 15, 2020, 11:35 AM), https://www.forbes.com/sites/waynecrews/2020/04/15/how-the-white-house-guidance-for-regulation-of-artificial-intelligence-invites-overregulation/#31fedaf53a2c [https://perma.cc/BQZ9-V83Q].

      [43].     Lee Tiedrich, AI Update: White House Issues 10 Principles for Artificial Intelligence Regulation, Inside Tech Media (Jan. 14, 2020), https://www.insidetechmedia.com/2020/01/14/ai-update-white-house-issues-10-principles-for-artificial-intelligence-regulation/ [https://perma.cc/V6BK-GFJA].

      [44].     Data Protection in the EU, European Comm’n, https://ec.europa.eu/info/law/law-topic/data-protection/data-protection-eu_en [https://perma.cc/2G5Y-K8TN].

      [45].     See generally Juliana De Groot, What Is the General Data Protection Regulation? Understanding & Complying with GDPR Requirements in 2019, Digit. Guardian (Sept. 30, 2020), https://digitalguardian.com/blog/what-gdpr-general-data-protection-regulation-understanding-and-complying-gdpr-data-protection [https://perma.cc/3JEF-TERJ] (describing GDPR regulations).

      [46].     Taylor Lyles, The Catholic Church Proposes AI Regulations that ‘Protect People’, Verge (Feb. 28, 2020, 4:06 PM), https://www.theverge.com/2020/2/28/21157667/catholic-church-ai-regulations-protect-people-ibm-microsoft-sign [https://perma.cc/8RAH-CTR6].

      [47].     Lance Eliot, Pope Francis Offers ‘Rome Call for AI Ethics’ to Step-Up AI Wokefulness, Which Is a Wake-Up Call for AI Self-Driving Cars too, Forbes (Mar. 10, 2020, 11:31 AM), https://www.forbes.com/sites/lanceeliot/2020/03/10/pope-francis-offers-rome-call-for-ai-ethics-to-step-up-ai-wokefulness-which-is-a-wake-up-call-for-ai-self-driving-cars-too/?sh=2d15ec567bae [https://perma.cc/LXE7-ABXY].

      [48].     A Practical Guide to Data Privacy Laws by Country, i-Sight Software (Nov. 5, 2018), https://i-sight.com/resources/a-practical-guide-to-data-privacy-laws-by-country/ [https://perma.cc/4ZCU-F63K].

      [49].     Walch, supra note 40.

      [50].     Mary Wareham, The Killer Robots Ban Is Coming. What Will Belgium Do?, Hum. Rts. Watch (May 30, 2018, 12:00 AM), https://www.hrw.org/news/2018/05/30/killer-robots-ban-coming-what-will-belgium-do [https://perma.cc/U7EG-24VR].