Moral Minds, Intelligent Machines


JUNE 13, 2019

Consider a nurse charged with administering medication to an unwilling patient. Suppose that the nurse is experienced, trustworthy, and responsible. Suppose further that he or she is under the charge of a chief physician, who has explicitly directed that the patient be given the medication. The nurse refuses out of concern for the patient’s personal autonomy, with the knowledge that the medication is not indispensable for the patient’s wellbeing. Is this the right choice? Would it matter if the nurse were a robot?

This dilemma, the subject of a recent preprint by Michael Laakasuo et al., highlights a problem distinct from the questions of AI ethics and bias we’ve raised in prior newsletters. Much of that work has focused on how to ensure that AI makes the right choices, both procedurally and consequentially. But human moral psychology is messy: our judgments are often influenced by factors beyond intentions and outcomes. Those factors appear to include whether the actor is a machine: as Laakasuo’s paper suggests, we may find it more acceptable for a robot to disobey in the name of patient autonomy than for a human to do the same.

If this finding is reliable, it complicates the picture painted by the research to date. Humans are, in general, averse to machines making moral decisions, preferring instead that they defer to human judgment. As Bigman & Gray (2018) found, “this aversion is mediated by the perception that machines can neither fully think nor feel” -- that is, the perception that they lack “mind.” Our faculty of mind perception, which breaks down into agency and experience, is central to our judgments about who or what can be moral. (A hurricane may cause tremendous harm, but we don’t tend to view it as a moral actor.) What’s fascinating about Laakasuo’s finding is that, in certain circumstances, our desire for machines to comply may be outweighed by our desire that they respect our autonomy.

As Iyad Rahwan noted in our interview with him, even strict utilitarians have reason to take notice of these quirks in our moral psychology. Because the rollout of AI systems will depend on public trust and support, even a minor affront to our moral intuitions may delay the adoption of these technologies. In the case of something like autonomous cars, which have the potential to save thousands of lives, such a delay would be tragic -- and morally worse, by the utilitarian’s own standards. Rahwan and his colleagues Jean-Francois Bonnefon and Azim Shariff have called this “the social dilemma of autonomous vehicles,” and it may extend to many other AI applications.

Of course, our moral intuitions did not evolve to accommodate autonomous machines, but neither did they evolve for many other aspects of modern life. Philosophical reflection and cultural change have moderated these intuitions and, in some cases, allowed us to transcend them. As AI systems gain more autonomy, and with it more moral responsibility, we’ll have the opportunity to observe our own reactions as these intuitions are tested. Perhaps this will serve as a kind of collective thought experiment, prodding us to new moral insight.

Nathanael Fast