Month: January 2021

Getting Specific About AI Risks

by Rachel Thomas

The term “Artificial Intelligence” is a broad umbrella, referring to a variety of techniques applied to a range of tasks. This breadth can breed confusion. Success in using AI to identify tumors on lung X-rays, for instance, may offer no indication of whether AI can accurately predict who will commit another crime or which employees will succeed, or whether these latter tasks are even appropriate candidates for the use of AI. Misleading marketing hype often clouds distinctions between different types of tasks and suggests that breakthroughs on narrow research problems are more broadly applicable than is the case. Furthermore, the nature of the risks posed by different categories of AI tasks varies, and it is crucial that we understand the distinctions.

Continue reading

CADE Link Round-Up: Medicine’s Machine Learning Problem, A Genealogy of ImageNet, How Google’s Meltdown Could Shape Policy

by Rachel Thomas

The USF Center for Applied Data Ethics is home to a talented team of ethics fellows who often engage with current news events that intersect with their research and expertise. Here are a few recent articles by or quoting CADE researchers:

Medicine's Machine Learning Problem
From Whistleblower Laws to Unions
Between Philosophy and Experience
Lines of Sight

Continue reading

