We are excited to announce the line-up for our spring data ethics seminar series at the University of San Francisco Center for Applied Data Ethics, exploring the risks, challenges, and harms of how algorithmic systems impact people today. All talks will be held virtually; see the links below for more details and to sign up. All times are listed in Pacific Time.
Transparency, Equity, and Community: Challenges for Student Assignment Algorithms
The Child-Welfare System: An Interaction of Policy, Practice, and Algorithms
The term “Artificial Intelligence” is a broad umbrella, referring to a variety of techniques applied to a range of tasks. This breadth can breed confusion. Success in using AI to identify tumors on lung x-rays, for instance, may offer no indication of whether AI can be used to accurately predict who will commit another crime or which employees will succeed, or whether these latter tasks are even appropriate candidates for the use of AI. Misleading marketing hype often clouds distinctions between different types of tasks and suggests that breakthroughs on narrow research problems are more broadly applicable than is the case. Furthermore, the nature of the risks posed by different categories of AI tasks varies, and it is crucial that we understand the distinctions.
The USF Center for Applied Data Ethics is home to a talented team of ethics fellows who often engage with current news events that intersect with their research and expertise. Timely topics have included the risks posed as governments rush to adopt AI solutions during the pandemic and the issues raised by Google’s explanation of why it fired a leading AI ethics researcher. Here are a few recent articles by or quoting CADE researchers:
The University of San Francisco is welcoming three Data Ethics research fellows (one started in January, and the other two are beginning this month) for year-long, full-time fellowships. We are so excited to have them join our community. They bring expertise in an interdisciplinary range of fields, including bioethics, public policy, anthropology, computer science, data privacy, and political philosophy. We had many fantastic applicants for the program, and we wish we had been able to offer a larger number of fellowships. We hope to be able to expand this program in the future. Without further ado, here is our first cohort of data ethics research fellows: Ali Alkhatib, Razvan Amironesei, and Nana Young.
The remaining 6 videos from the University of San Francisco Center for Applied Data Ethics Tech Policy Workshop are now available. This workshop was held in November 2019, which seems like a lifetime ago, yet the themes of tech ethics and responsible government use of technology remain incredibly relevant, particularly as governments are considering controversial new uses of technology for tracking or addressing the pandemic.
You can go straight to the videos here, or read more below:
The next two videos from the University of San Francisco Center for Applied Data Ethics Tech Policy Workshop are available! Read more below, or watch them now:
In November, a group of tech industry employees, concerned citizens, non-profit workers, activists, graduate students, and others gathered at the University of San Francisco for the Center for Applied Data Ethics Tech Policy Workshop to discuss disinformation, the criminal justice system, surveillance technologies, mass atrocities, and other issues of data misuse. People traveled from as far as Texas, Michigan, Pennsylvania, and even France to participate, and employees from several tech companies and local government joined us as well.
I’m excited to release the first two videos from the workshop today; please stay tuned as more will be released in the coming weeks. These talks by Y-Vonne Hutchinson, a former human rights lawyer and CEO of ReadySet, and Catherine Bracy, CEO of Tech Equity Collaborative, help paint the big picture of how we arrived at our current tech ethics crisis, as well as offer a path forward.
As governments consider new uses of technology, whether that be sensors on taxi cabs, police body cameras, or gunshot detectors in public places, these deployments raise issues around surveillance of vulnerable populations, unintended consequences, and potential misuse. There are several principles to keep in mind so that these decisions can be made in a healthier and more responsible manner. It can be tempting to reduce debates about government adoption of technology to binary for/against narratives, but that framing fails to capture many crucial and nuanced aspects of these decisions.
We recently hosted the Tech Policy Workshop at the USF Center for Applied Data Ethics. One of the themes was how governments can promote the responsible use of technology. Here I will share some key recommendations that came out of these discussions.
Listen to local communities
Beware how NDAs obscure public sector process and law
Security is not the same as safety
Policy decisions should not be outsourced as design decisions
Update: The first year of the USF Center for Applied Data Ethics will be funded with a generous gift from Craig Newmark Philanthropies, the organization of craigslist founder Craig Newmark. Read the official press release for more details.
While the widespread adoption of data science and machine learning techniques has led to many positive discoveries, it also poses risks and is causing harm. Facial recognition technology sold by Amazon, IBM, and other companies has been found to have significantly higher error rates on Black women, yet these same companies are already selling facial recognition and predictive policing technology to police, with no oversight, regulation, or accountability. Millions of people’s photos have been compiled into databases, often without their knowledge, and shared with foreign governments, military operations, and police departments. Major tech platforms (such as Google’s YouTube, which auto-plays videos selected by an algorithm) have been shown to disproportionately promote conspiracy theories and disinformation, helping radicalize people into toxic views such as white supremacy.
In response to these risks and harms, I am helping to launch a new Center for Applied Data Ethics (CADE), housed within the University of San Francisco’s Data Institute, to address issues surrounding the misuse of data through education, research, public policy, and civil advocacy. The first year will include a tech policy workshop, a data ethics seminar series, and data ethics courses, all of which will be open to the community at large.
Misuses of data and AI include the encoding & magnification of unjust bias, increasing surveillance & erosion of privacy, spread of disinformation & amplification of conspiracy theories, lack of transparency or oversight in how predictive policing is being deployed, and lack of accountability for tech companies. These problems are alarming, difficult, urgent, and systemic, and it will take the efforts of a broad and diverse range of people to address them. Many individuals, organizations, institutes, and entire fields are already hard at work tackling these problems. We will not reinvent the wheel, but instead will leverage existing tools and will amplify experts from a range of backgrounds. Diversity is a crucial component in addressing tech ethics issues, and we are committed to including a diverse range of speakers and supporting students and researchers from underrepresented groups.