Can artificial intelligence (AI) develop moral attention? Moral attention refers to the ability to focus on the ethical dimension of situations and consider the consequences of one’s actions on others and society as a whole. It involves being mindful of the moral implications of one’s choices and taking responsibility for the ethical consequences of those decisions.

AI appears to understand moral attention well. The prior definition came from asking ChatGPT, an artificial intelligence chatbot, to define moral attention, and its definition is as thorough as any I could write. The philosopher-mystic Simone Weil characterizes the cultivated habit of moral attention more insightfully as the ability in an encounter to truly see someone who is suffering: “The love of our neighbor in all its fullness simply means being able to say, ‘What are you going through?’” She captures a deeper, more human and theological dimension of empathy and compassion when she says, “The soul empties itself of its own content in order to receive into itself the being it is looking at.”1

One way to integrate moral attention into AI is through the processes surrounding its development and use. Given that current AI systems rely heavily on the data used to train them, attention to the technical design decisions made in building AI applications is critical to ensuring ethical outcomes. For example, when designing and using AI for healthcare, learning to attend to how technical decisions impact a patient’s ability to make free choices about their health can be vital.2

Moral attention extends Jesuit discernment about who one is called to become into the concrete, daily technical decisions made in service of a larger good. One learns to look through the technical minutiae to see the suffering of others and grasp what is ethically relevant in a particular situation. Many people find moral attention challenging, particularly when working on complex technical tasks.

There is a mental gap between performing a task and reflecting upon its ethical consequences. Moral attention requires cultivation and practice. The question then becomes whether some of these ethical tasks and skills can be integrated with AI systems. I believe they can. One of my collaborative projects involves building an ethics-based auditing tool for medical AI, and I hope to use what we learn from that to develop an AI system that can attend to and monitor the ethical dimension of healthcare AI systems.3

A major innovation in the current generation of AI involves an AI system learning to attend to its own data.4 So, a basic mechanism for attending to visual and linguistic cues exists. The challenge becomes how to incorporate ethically relevant data and additional training to improve AI’s attention to the ethical dimension of a situation. Although AI may lack the affective empathy that people possess, we can still build AI systems that exhibit compassionate behavior and support human freedom and dignity. This requires a thoughtful and intentional approach to AI development, with an emphasis on incorporating moral attention and ethical values into the design, implementation, and use of AI systems.
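For readers curious about the mechanism behind this kind of "attention," the self-attention operation introduced by Vaswani et al. (note 4) can be sketched in a few lines. This is a minimal, illustrative toy, not a production model: the projection matrices here are random stand-ins for what a real system would learn from data, and it shows only how each element of an input weighs every other element of that same input.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention (Vaswani et al., 2017).

    X: (sequence_length, d_model) input embeddings.
    Wq, Wk, Wv: projection matrices (learned in practice; random here).
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # Each position "attends" to every position of its own input:
    weights = softmax(Q @ K.T / np.sqrt(d_k))   # (seq_len, seq_len)
    return weights @ V

# Toy example: 3 tokens, 4-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (3, 4): one context-weighted vector per token
```

The ethical question raised above is, in effect, what data and training would let such a weighting mechanism also pick out the morally relevant features of a situation.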

1. Weil, S. (1951). Reflections on the Right Use of School Studies with a View to the Love of God. In Waiting for God (E. Craufurd, Trans.). Harper.

2. Ratti, E., and M. Graves. (2021). Cultivating Moral Attention: A Virtue-Oriented Approach to Responsible Data Science in Healthcare. Philosophy & Technology, 34(4), 1819–1846. https://doi.org/10.1007/s13347-021-00490-3

3. I’m undertaking the project through AI & Faith, a community of experts bringing faith-based values and wisdom of the world’s religions to the ethical AI conversation, with initial funding for the project from the University of Notre Dame-IBM Technology Ethics Lab.

4. Vaswani, A., et al. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30. https://papers.nips.cc/paper_files/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html

MARK GRAVES is a research fellow at AI and Faith. With over 25 years of experience in researching and modeling cognitive, biological, and religious dimensions of the person, he has published 40 technical and scholarly works in those areas, including Mind, Brain, and the Elusive Soul (2008).