Knowing this, one question comes to mind: what is morality’s role when engaging with AI?
Nick Bostrom and Eliezer Yudkowsky discuss Frances Kamm’s definition of moral status in ‘The Ethics of Artificial Intelligence.’ They note that “X has moral status…because X counts morally in its own right, it is permissible/impermissible to do things to it for its own sake.” To illustrate this, they compare a rock and a human:
Permissible: The rock has no moral status, so we may “crush it, pulverize it, or subject it to any treatment we like without any concern for the rock itself.”
Impermissible: On the contrary, a human “must not be treated only as a means but also as an end. It involves taking [their] legitimate interests into account – giving weight to [their] well-being – and may [accept] strict moral side-constraints in [one’s] dealings with [them]…doing a variety of…things to [them] or [their] property without consent. Moreover, it is because a human person counts in [their] own right…a person has moral status.”
In other words, a rock is a mere object, while a human must be cared for and recognized in their entirety because of their moral status.
Engaging with scenarios about morality’s impact on a larger system can be overwhelming. But examined on a smaller, more interpersonal level, we can recognize why a person of a certain identity behaves a certain way, or why someone from a certain background believes what they believe. Consider populations living in impoverished or rural areas of the United States: people from these places often lack the means, or the same access, to travel, education, or more advanced technologies. These conditions developed as a result of systemic practices and beliefs, like racism and sexism, that have been passed down throughout history.
Moving forward, how can this mindset be integrated into AI applications?
It seems simple enough to engage in the act of thinking through doing and working through iterative processes when it comes to AI. But when one’s sexuality, racial identity, or gender identity is put in danger or discriminated against, the risk is not worth taking. Before AI applications are even developed, Bostrom and Yudkowsky address this point:
“It is widely agreed that current AI systems have no moral status. We may change, copy, terminate, delete, or use computer programs as we please; at least as far as the programs themselves are concerned. The moral constraints to which we are subject in our dealings with contemporary AI systems are all grounded in our responsibilities to other beings, such as our fellow humans, not in any duties to the systems themselves.”
Before working with AI or any other emerging technology, professionals need to build relationships with each other, their friends, their families, and the people around them. Advocacy for less privileged populations is often left to those in social work, education, and nonprofits. But those in business, communication, and technology have an equal, if not greater, responsibility to call out immoral acts and prevent them from being built into technologies.
Practices such as active listening and courageous conversations are a few simple ways industry professionals can promote morality and advocacy before developing a product or service. Considering project development structures such as design thinking is beneficial as well.
AI has the potential to help address, or even eradicate, many of the world’s problems. But without morality at the foundation of the technology, we’re left with more problems than solutions.