by Rachel Thomas

The USF Center for Applied Data Ethics is home to a talented team of ethics fellows who often engage with current news events that intersect with their research and expertise.  Timely topics have included the risks posed as governments rush to adopt AI solutions during the pandemic, and the questions raised by Google’s account of why it fired a leading AI ethics researcher.  Here are a few recent articles by CADE researchers:

The Risks of Using AI for Government Work

Covid-19 is speeding government adoption of AI, yet public-sector use of AI differs from private use, bringing distinct requirements and heightened risks.  Dr. Rumman Chowdhury, CEO of Parity and entrepreneurial ethics fellow at CADE, wrote about the risks we must address when governments use AI:

AI systems in particular pose new risks to society as they encroach into public use. Our democratic processes risk being subverted as private companies increasingly take over our digital public infrastructure, potentially leading to unprecedented political capture under the guise of “modernizing” public services. We urgently need appropriate levels of oversight and review and the empowerment of elected officials, policymakers and citizens…

Read the full article at Brink News.

On Google and Intellectual Freedom

CADE research fellow Ali Alkhatib ponders the tensions and contradictions in how the broader research community should now engage with scholarship from those at Google:

I do critical work and sometimes engage with or build on the work of scholars who are affiliated with Google. Up until now, I’ve assumed that there are cultural and institutional biases that motivate people to be in certain places – for instance, I wouldn’t be surprised to find that people who want to build systems to address problems will join CS programs, whereas people who want to draft legislation to address problems might join a more policy-minded program – but I can contextualize someone having a worldview that would make them amenable to working at Google, see how that would motivate a softer critique of AI, and go from there.

That kind of generic caveat can’t possibly suffice for Google now…

He was quoted in recent news coverage on this topic:

Google workers reject company’s account of AI researcher’s exit as anger grows (The Guardian)

What future for ethical AI after Google scientist firing? (Thomson Reuters)

Read Alkhatib’s full essay here.

The Far-Reaching Impact of Dr. Timnit Gebru

In the wake of Google firing Dr. Timnit Gebru, a global leader in AI ethics research, I wrote about her most important contributions to the machine learning community and beyond:

Few researchers make breakthrough contributions to even a single field. Fewer still can claim to have made breakthrough contributions to multiple fields. Dr. Timnit Gebru is one of those few. She has worked on computer vision problems in fine-grained object recognition; used large-scale image sets to gain sociological insight; conducted audits of biased facial recognition systems which have influenced real-world regulation; designed standards and processes to mitigate ethical issues with datasets and models; developed a framework of algorithmic audits for AI accountability; and more. Many of her papers have been cited hundreds of times.

Read the rest at The Gradient.