Truth and Responsibility in Research – A Workshop on Academic Freedom and Corporate Norms in AI Research

The wall between academic AI research and the tech industry seems thin, fuzzy, or perhaps non-existent. Computer science academics circulate between start-ups, key research positions in large tech firms, academic positions, and corporate board seats. Given this symbiotic relationship between academia and large tech companies, our proposed workshop explores the relation between contemporary practices of academic freedom and evolving corporate norms for conducting AI research. In other words, within this specific historical and sociological context, what role, if any, should academic freedom play in corporate AI research?

From its inception as an ideal in medieval European universities, academic freedom was shaped by two cardinal values: truth and freedom. In the American context, the 1940 Statement of Principles on Academic Freedom and Tenure puts forward a concept of academic freedom as a species of the constitutionally guaranteed right to free speech, defined as “full freedom in research and in the publication of the results […]”. While the concept of academic freedom has been shaped by the cardinal value of freedom in the service of truth pursued for its own sake, academic freedom as a practice has always been embedded in its own set of norms, pressures, and politics. In the past few decades, substantial funding cuts to higher education, the adjunctification of the university, and other economic pressures have contributed to the growing role of industry funding and industry-led research.

We take this distinction between the ideal and the actual practice of academic freedom as a starting point to interrogate its affordances and limits as a mode of public truth-telling. To make this claim concrete, we point to two existing types of discourse that shape the NeurIPS community when it comes to AI research. On one end, we have researchers working on theoretical issues in machine learning who pursue truth for its own sake, thus channeling the basic ideals of academic freedom, while minimizing the importance of potentially harmful social impacts and the systemic risks associated with ML research. On the other end, we have a growing number of researchers who are calling for the creation and development of social responsibility frameworks and practices within the NeurIPS community in the name of the same concept of academic freedom. At the same time, we are cognizant that researchers in corporate settings may face a distinctive set of pressures, motivations, and constraints (both implicit and explicit) that we hope to examine in our workshop.

In what follows, we propose a few topics for discussion during our workshop: 

I. Reputational costs. As initially formulated in the 1940 Statement cited above, full academic freedom as a matter of policy is not unlimited. Two types of constraints shape the concept: the first is the relevance of the topic of inquiry to the act of teaching, and the second is associated with the “special obligations” of the teacher. This second aspect is of importance to us because it raises, we claim, the issue of reputational costs associated with the principle that research be made public, in both universities and corporations. The Statement justifies this concept of reputational cost by distinguishing two roles for the teacher: a) the teacher as a member of a political community, and thus “free from institutional censorship or discipline”, and b) the teacher as a member of a professional community, who should “exercise appropriate restraint” because “the public may judge their institution and their profession by their utterances” and who consequently should, if necessary, “indicate that they are not speaking for their institution”. This prompts the following questions:

  • What are the criteria for diagnosing reputational costs in both academia and industry?
  • What are the values and norms underlying reputational costs in both of these settings? Are they distinct? If so, for what reasons?
  • Under which conditions can we think of individual and institutional reputational costs as a) a practice of public truth-telling and b) a practice of social responsibility?

II. Consequences of the symbiotic relation between academia and industry in AI. Given this sociological reality, we pose the following questions:

  • What are the limits, both explicit and implicit, on what subjects of research or types of results may be published by industry researchers, particularly if the research is found to conflict with other company aims?
  • What career obstacles do academics who refuse corporate funding face?
  • Does research produced or sponsored by industry need to be evaluated differently? Do readers need an understanding of the constraints or pressures that have shaped a research paper?
  • How does having large tech companies as sponsors impact or change the nature of major conferences such as NeurIPS?
  • How do funding agencies promote connections between academia and industry, for instance by weighing industrial impact in their evaluation criteria?

Our workshop will bring together academic researchers for a day of discussions around the above-mentioned topics as follows: 1) we will distribute a call for papers for research contributions on the role of academic freedom in corporate AI research, and 2) we will convene a round table with our participants, which will form the basis of a white paper summarizing the findings of our discussion.

*In alphabetical order, equally shared contribution

Razvan Amironesei (USF)

Fernando Diaz (Google Research)

Rachel Thomas (Queensland University of Technology)

Andrew Smart (Google Research)

Logistics

Format

This one-day workshop will combine contributed talks with several discussion panels involving faculty and researchers with diverse perspectives on the topic, drawn from computer science, sociology, philosophy, political science, anthropology, and STS.

Efforts to Ensure Diversity

This virtual workshop will bring together roughly twenty faculty and researchers at NeurIPS to discuss topics related to the role of academic freedom in corporate research. Our program committee members and invited guests come from academic backgrounds that are not traditionally included in computer science-oriented discussions of these topics. In addition, we will invite participants from Queer in AI, WiML, Black in AI, and LXAI, as well as accessibility experts from both industry and academia.