
“For You”: AI Recommendations for Better or Worse?

Written by: Korina Buford

 

AI, or Artificial Intelligence, refers to the intelligence demonstrated by machines, allowing them to perform tasks that usually require human discernment, such as perceiving, synthesizing, and inferring information.[1] Although this technology's ability to learn at a rapid pace is impressive, its seemingly boundless capabilities carry human consequences and legal liability implications worth addressing sooner rather than later. In particular, algorithmic recommendation systems analyze people's preferences, previous decisions, and characteristics based on data gathered about their interactions, allowing platforms to predict consumer interests and desires on a personal level and deliver a more tailored user experience.[2]
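To make the idea concrete, the matching step at the heart of such systems can be sketched in a few lines: score each piece of content by how similar its features are to a profile built from the user's past interactions, then surface the top matches. This is a minimal, illustrative sketch only; the item names and feature axes are invented, and production systems use learned embeddings over vastly larger data.

```python
# Toy content-based recommender: rank items by cosine similarity between
# a user's preference vector and each item's feature vector.
# Feature axes (hypothetical): [news, sports, music].
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user_profile, items, top_k=2):
    """Return the top_k item names most similar to the user's profile."""
    ranked = sorted(items.items(),
                    key=lambda kv: cosine(user_profile, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

items = {
    "headline_video": [0.9, 0.1, 0.0],
    "game_recap":     [0.2, 0.9, 0.1],
    "concert_clip":   [0.0, 0.1, 0.9],
}
user = [0.8, 0.3, 0.0]  # a user whose history skews heavily toward news
print(recommend(user, items, top_k=1))  # → ['headline_video']
```

The legal questions discussed below arise precisely because this ranking step is an active editorial-like choice by the platform, not a neutral passthrough of user uploads.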

However, the pending case of Gonzalez v. Google raises questions about tech companies' legal liability for content recommended by their algorithms. The case involves the family of a man killed in an ISIS attack suing Google for promoting harmful content through its recommendation algorithms. Google argues that algorithmic recommendations are protected under Section 230 of the Communications Decency Act, a federal law granting internet platforms immunity from liability for third-party content. A ruling in favor of the Gonzalez family would likely lead to more restrictive content regulation and new algorithms to detect and block harmful content, potentially limiting First Amendment freedoms of speech and expression.[4] Conversely, a ruling in favor of Google could cement a broad interpretation of Section 230 and ensure tech companies' continued protection from liability. Some experts question whether the US Supreme Court is technologically literate enough to make such a ruling; Justice Elena Kagan remarked during oral argument that the justices are not the "nine greatest experts on the Internet."[5]

One of the main concerns before the court is how to hold tech companies accountable for the content promoted by their algorithms while safeguarding innocuous content. Google argues that it cannot be held liable for what its algorithm promotes, as it is not responsible for the interests and engagement choices of its users; it merely provides a platform that hosts users' thoughts, ideas, and opinions. The Gonzalez family, by contrast, argues that Section 230's protections should not shield a defendant from claims based on actively making third-party content easier to find and recommending it to users.

Holding tech companies accountable for the content promoted by their algorithms may chill free speech and expression on social media. Furthermore, requiring tech companies to moderate content more aggressively could frustrate the internet's purpose as a marketplace of ideas, information, and communication.[6]

Nonetheless, tech giants’ increasing focus on maximizing engagement by recommending sensational, polarizing, or controversial content raises questions about the cost of such actions.

 

[1] B.J. Copeland, Artificial Intelligence, Britannica (Feb. 16, 2023), https://www.britannica.com/technology/artificial-intelligence.

[2] Clarissa Eyu, Podsplainer: What’s a Recommender System? NVIDIA’s Even Oldridge Breaks It Down, NVIDIA (Mar. 2, 2022), https://blogs.nvidia.com/blog/2022/03/02/whats-a-recommender-system-2/.

[3] Id.


[4] Matt G. Southern, Tech Giants On Trial: Why Gonzalez v. Google Matters, Search Engine Journal (Feb. 21, 2023), https://www.searchenginejournal.com/tech-giants-on-trial-why-gonzalez-v-google-matters/480500/.

[5] Id.

[6] Id.

