The next two videos from the University of San Francisco Center for Applied Data Ethics Tech Policy Workshop are available! Read more below, or watch them now:
(Dis)Information & Regulation
Renée DiResta shares a framework for evaluating disinformation campaigns, explains the dynamics of why and how disinformation and propaganda spread, and surveys proposed regulatory approaches to these issues, including proposals around ads, antitrust, and privacy, and how each would shift the balance between privacy, security, and free expression. Disinformation is an ecosystem-level problem, not a software-feature-level problem, so policymaking needs to be agile and to address the broader ecosystem.
Renée DiResta is the technical research manager at the Stanford Internet Observatory. She investigates the spread of malign narratives across social networks and assists policymakers in devising responses. Renée has studied influence operations and computational propaganda in the context of pseudoscience conspiracies, terrorist activity, and state-sponsored information warfare, and has advised Congress, the State Department, and other academic, civil society, and business organizations on the topic. At the behest of the Senate Select Committee on Intelligence, she led one of the two research teams that produced comprehensive assessments of the Internet Research Agency’s and GRU’s influence operations targeting the U.S. from 2014-2018.
Watch her talk here, or read some of her related writing:
- Free Speech Is Not the Same As Free Reach
- She Warned of ‘Peer-to-Peer Misinformation.’ Congress Listened.
- The Facebook hearings remind us: information warfare is here to stay
- The Digital Maginot Line
The Toxic Potential of YouTube’s Feedback Loop
Systemic factors contribute to the proliferation and amplification of conspiracy theories on platforms such as YouTube. The emphasis on engagement metrics, the cheap cost of experimentation, and the potential rewards all incentivize propagandists to game the recommendation system. The process of flagging and removing harmful content is far slower than the virality with which videos spread. The situation is even worse for languages other than English, in which tech platforms tend to invest fewer resources. For example, major concerns were raised in France about YouTube promoting pedophilia in 2016 and 2017, yet YouTube failed to take action until 2019, when a high-profile New York Times article and ad boycotts by major American companies made it a news topic in the USA.
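To make the feedback-loop dynamic concrete, here is a minimal toy simulation; this is our own illustration under simplified assumptions, not YouTube's actual algorithm. It models a recommender that ranks two hypothetical videos purely by accumulated engagement, where the sensational video is only slightly "stickier" per view:

```python
import random

# Toy model (illustrative only, NOT YouTube's real system): a recommender
# that surfaces videos in proportion to their past engagement. Because
# recommendations drive views and views drive engagement, early wins
# compound -- the feedback loop described above.
random.seed(0)

# Hypothetical per-view probability that a viewer clicks/likes the video.
videos = {
    "measured_report": 0.10,
    "sensational_conspiracy": 0.12,  # a small "stickiness" edge
}
engagement = {name: 1 for name in videos}  # smoothed engagement counts
views = {name: 0 for name in videos}

for _ in range(10_000):
    # Recommend proportionally to accumulated engagement.
    names = list(videos)
    pick = random.choices(names, weights=[engagement[n] for n in names])[0]
    views[pick] += 1
    if random.random() < videos[pick]:
        engagement[pick] += 1

print(views)
# Typical outcome: the slightly stickier video captures a lopsided share
# of recommendations, far beyond its 2-point engagement advantage.
```

Even this crude sketch shows why a small engagement edge, whether earned or gamed, is worth a propagandist's investment in an engagement-optimized system.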
Guillaume Chaslot earned his PhD in AI working on computer players for the game of Go, worked at Google on YouTube’s recommendation system several years ago, and has since run the non-profit AlgoTransparency, which quantitatively tracks how YouTube recommends conspiracy theories. His work has been covered in the Washington Post, The Guardian, the Wall Street Journal, and more. Watch his talk here, or read some related coverage:
- ‘Fiction is outperforming reality’: how YouTube’s algorithm distorts truth
- YouTube recommended a Russian media site thousands of times for analysis of Mueller’s report, a watchdog group says
- The Toxic Potential of YouTube’s Feedback Loop
- How Algorithms Can Learn to Discredit the Media
Learn More About the CADE Tech Policy Workshop
- 4 Principles for Responsible Government Use of Technology
- Reflection on Tech Policy Workshop at the Center for Applied Data Ethics at USF by Hongsup Shin
- Tech Ethics Crisis: The Big Picture, and How We Got Here
Special thanks to Nalini Bharatula for her help with this post.