The Right Thing to Do: How Morality Impacts the Development of Artificial Intelligence
By Agnes Morelos
When working with emerging technologies like artificial intelligence (AI), many components are considered: privacy, security, and accessibility, to name a few. But one part that is often overlooked and not thoroughly planned out is ethics and transparency. Although professionals working with these technologies are hopeful about how such innovations can advance humanity, they cannot do so effectively if long-term consequences are ignored, or if critical thinking about systemic implications and impacts on marginalized populations has not occurred.

Knowing this, a question comes to mind: What is morality’s role when engaging with AI?

Nick Bostrom and Eliezer Yudkowsky discuss Francis Kamm’s definition of moral status in ‘The Ethics of Artificial Intelligence.’ They note that “X has moral status…because X counts morally in its own right, it is permissible/impermissible to do things to it for its own sake.” To illustrate this, they compare a rock and a human:

Permissible: The rock has no moral status, so we may “crush it, pulverize it, or subject it to any treatment we like without any concern for the rock itself.”

Impermissible: On the contrary, a human “must not be treated only as a means but also as an end. It involves taking [their] legitimate interests into account – giving weight to [their] well-being – and may [accept] strict moral side-constraints in [one’s] dealings with [them]…doing a variety of…things to [them] or [their] property without consent. Moreover, it is because a human person counts in [their] own right…a person has moral status.”

In other words, a rock is merely an object, while a human should be cared for and recognized in their entirety because of their moral status.

Engaging with scenarios about morality’s impact on a larger system can be overwhelming. But when examined on a smaller, more interpersonal level, we can begin to recognize why people of a certain identity behave a certain way, or why people from a certain background believe what they believe. An example is populations living in impoverished or rural areas of the United States. Oftentimes, people from these places do not have the same means or access to travel, education, or advanced technologies. These conditions have developed as a result of systemic practices and beliefs, like racism and sexism, that have been passed down throughout history.


Moving forward, how can this mindset be integrated into AI applications? 

It seems simple enough to engage in the act of thinking through doing, working through iterative processes, when it comes to AI. But when one’s sexuality or racial or gender identity is put in danger or discriminated against, it’s not worth the risk. Even before developing AI applications, Bostrom and Yudkowsky address this call to action:

“It is widely agreed that current AI systems have no moral status. We may change, copy, terminate, delete, or use computer programs as we please; at least as far as the programs themselves are concerned. The moral constraints to which we are subject in our dealings with contemporary AI systems are all grounded in our responsibilities to other beings, such as our fellow humans, not in any duties to the systems themselves.”

Before working with AI or any other emerging technology, professionals need to build relationships with each other, their friends, families, and the people around them. Advocacy for less privileged populations is often left to those in social work, education, and nonprofits. But those in business, communication, and technology have an equal, if not greater, responsibility to call out immoral acts and prevent them from being built into technologies.

Practices such as active listening and courageous conversations are a few simple ways industry professionals can promote morality and advocacy before developing a product or service. Considering project development frameworks such as design thinking is beneficial as well.

AI has the potential to alleviate or even eradicate many of the world’s problems. But without morality at the foundation of the technology, we’re left with more problems than solutions.
