The Case Against A.I. Controlling Our Moral Compass

Earlier this month, at the Federal Reserve Bank of New York, I saw something, or someone, that would, on any other day, be out of place: a philosopher. Damon Horowitz—who teaches philosophy at Columbia University, has a history of serial entrepreneurship, and was once In-House Philosopher and Director of Engineering at Google—was there to lend his wisdom to a conference on ethical culture in the corporate world, called “Building Cultural Capital in the Financial Services Industry: Emerging Practices, Risks and Opportunities.” His subject was the way that efficiency—aided by computer power, massive data collection, and machine-learning algorithms—is beginning, or threatening, to creep into the moral sphere. That is, businesses are being confronted with the temptation to outsource the responsibility of ethical decision-making to A.I., a temptation humans shouldn’t give in to, he explained, because algorithms operate on incomplete information, their decision-making process is a black box, and it’d be undignified for us to use machines as ethical proxies out of convenience.

A fitting talk, no doubt, for the session heading that afternoon: “Impact of New Technologies and the 21st Century Workforce on Culture.” And it was no surprise that Horowitz would take aim at the possibility of A.I. annexing human moral agency. He was worried about this sort of eventuality as early as 2011, in a TEDxSiliconValley talk, where he pointed out that there’s no obvious universal moral formula we can apply to figure out what’s right in any given situation. “Ethics is hard,” he said. “Ethics requires thinking, and that’s uncomfortable. I know. I spent a lot of my career in artificial intelligence trying to build machines that could do some of this thinking for us.”

Perhaps it is a good thing, then, that people already seem to be averse to machines making moral decisions. A 2018 Cognition study found that people regard moral decisions made by machines as less acceptable than ones made by humans, even when the decisions have positive outcomes. The research came out of the Mind Perception and Morality Lab, directed by psychologist Kurt Gray at the University of North Carolina, Chapel Hill. In one way, the study, led by postdoctoral fellow Yochanan Bigman, is surprising, since we readily and happily allow algorithms to control decision-making in other areas—like risk management, supply-chain distribution, flight-path determination, complex inventory management, and navigation on the road. “The success of machine decision-making across these domains may lead people to happily cede moral decisions to them as well,” Bigman and Gray wrote. “But there are reasons to believe otherwise.”

Morality, the researchers found, isn’t like any other decision space. People were averse to machines having the power to choose what to do in life-and-death situations—specifically in driving, legal, medical, and military contexts. This aversion hinged on their perception of machine minds as incomplete: lacking in agency (the capacity to reason, plan, and communicate effectively) and in subjective experience (the possession of a human-like consciousness, with the ability to empathize and to feel pain and other emotions).

For example, when the researchers presented subjects with hypothetical medical and military scenarios—where either a human or a machine decided on a surgery or a missile strike, and the surgery or strike succeeded—subjects still found the machine’s decision less permissible, owing to its lack of agency and subjective experience relative to the human. Not having the appropriate sort of mind, it seems, disqualifies machines, in these subjects’ judgment, from making moral decisions, even when those decisions are the same ones a human made. Having a machine sound human, with an emotional and expressive voice, and having it claim to experience emotion, doesn’t help—people found a compassionate-sounding machine just as unqualified for moral choice as one that spoke robotically.

Only in certain circumstances would a machine’s moral choice trump a human’s. People preferred an expert machine’s decision over an average doctor’s, for instance, but just barely. Bigman and Gray also found that some people are willing to have machines support human moral decision-making as advisors. A substantial portion of subjects, 32 percent, were against even that, though, “demonstrating the tenacious aversion to machine moral decision-making,” the researchers wrote. The results “suggest that reducing the aversion to machine moral decision-making is not easy, and depends upon making very salient the expertise of machines and the overriding authority of humans—and even then, it still lingers.”

There is potential for A.I. capable of moral discussion to improve human ethical decision-making. In The Atlantic, Yale scientist Nicholas Christakis wrote about how his experiments with hybrid systems—where people and robots interact socially—show that “the right kind of AI can improve the way humans relate to one another.” Other researchers, like the political scientist Kevin Munger, have reinforced this finding. Munger “directed specific kinds of bots to intervene after people sent racist invective to other people online,” Christakis explained. “He showed that, under certain circumstances, a bot that simply reminded the perpetrators that their target was a human being, one whose feelings might get hurt, could cause that person’s use of racist speech to decline for more than a month.”

That’s not so bad. Horowitz, like many people, may be averse to machines taking on some of our moral responsibility, but perhaps letting A.I. have a seat at the table could help optimize our ethical reasoning and behavior.

Brian Gallagher is Ethical Systems’ Communications Director. Follow him on Twitter @brianga11agher.

Lead image is courtesy of maximilianschiffer via Flickr.