by Rachel Thomas

Update: The first year of the USF Center for Applied Data Ethics will be funded with a generous gift from Craig Newmark Philanthropies, the organization of craigslist founder Craig Newmark. Read the official press release for more details.

While the widespread adoption of data science and machine learning techniques has led to many positive discoveries, it also poses risks and is causing harm. Facial recognition technology sold by Amazon, IBM, and other companies has been found to have significantly higher error rates on Black women, yet these same companies are already selling facial recognition and predictive policing technology to police, with no oversight, regulations, or accountability. Millions of people’s photos have been compiled into databases, often without their knowledge, and shared with foreign governments, military operations, and police departments. Major tech platforms (such as Google’s YouTube, which auto-plays videos selected by an algorithm) have been shown to disproportionately promote conspiracy theories and disinformation, helping radicalize people into toxic views such as white supremacy.

USF Data Institute in downtown SF, Image Credit: By Eric in SF – Own work, CC BY-SA 4.0

In response to these risks and harms, I am helping to launch a new Center for Applied Data Ethics (CADE), housed within the University of San Francisco’s Data Institute, to address issues surrounding the misuse of data through education, research, public policy, and civil advocacy. The first year will include a tech policy workshop, a data ethics seminar series, and data ethics courses, all of which will be open to the community at large.

Misuses of data and AI include the encoding & magnification of unjust bias, increasing surveillance & erosion of privacy, spread of disinformation & amplification of conspiracy theories, lack of transparency or oversight in how predictive policing is being deployed, and lack of accountability for tech companies. These problems are alarming, difficult, urgent, and systemic, and it will take the efforts of a broad and diverse range of people to address them. Many individuals, organizations, institutes, and entire fields are already hard at work tackling these problems. We will not reinvent the wheel, but instead will leverage existing tools and will amplify experts from a range of backgrounds. Diversity is a crucial component in addressing tech ethics issues, and we are committed to including a diverse range of speakers and supporting students and researchers from underrepresented groups.
