Robotics and Autonomous Systems Series

Answer Set Programming Rightful Machines: Solving the Trolley Problem

22nd May 2019, 13:00
Ava Thomas Wright
Philosophy Department, University of Georgia

Abstract

In a recent massive experiment conducted online, millions of subjects were asked what a self-driving car whose brakes have failed should do when its only choices are to swerve or stay on course under various accident conditions (Awad et al., 2018). Should the car swerve and kill one person in order to avoid killing five people on the road ahead? Most subjects agreed that it should. Most subjects also agreed, however, that the car should generally spare younger people (especially children) over older people, females over males, those of higher status over those of lower status, and the fit over the overweight, with some variation in preferences correlated with subjects' cultural backgrounds. But while such results may be interesting, I contend that they are largely irrelevant to the question of what a self-driving car faced with such a dilemma should do.

In this paper, I set out a new approach to resolving conflicts between strict obligations for autonomous machine agents such as self-driving cars. First, I argue that efforts to build explicitly moral machine agents should focus on duties of right, or justice, which are in principle legitimately enforceable, rather than on duties of virtue, or ethics, which are not. While dilemmas such as the (in)famous "trolley problem," which inspired the experiment above, have received enormous attention in machine ethics, there will likely never be an ethical consensus as to their correct resolution, and even if one could be achieved, it would be largely irrelevant. What matters normatively is whether machine agents charged with making decisions that affect human beings act rightfully, that is, in ways that respect real persons' equal rights of freedom and the rule of law. Whatever private ethical resolution one prefers for dilemmas such as the trolley problem, it is public law that should determine when makers or users of semi-autonomous machines such as self-driving cars are liable or culpable for the machine's decisions, and law must conform to principles of justice, not the partial ethical preferences of one group or another. The first goal of machine ethics, therefore, should be to build rightful machines, machines that respect public law. Machine agents that act "ethically" but do not respect public law seem to me to pose a threat to civil society.

I then evaluate some deontic logical approaches to handling conflicts between strict legal obligations against the normative requirements of justice. How should a deontic logic handle conflicts between strict legal obligations? I argue that the proper role of a deontic legal logic is not to quarantine such conflicts but to identify and expose them, so that civil institutions can authoritatively qualify the rights or duties that generate inconsistencies in the system of public laws. I suggest that a non-monotonic deontic logical approach to conflicts can meet these requirements of justice, though other approaches may be possible.

Finally, I implement such an approach to deontic legal logic by setting out how to encode and evaluate conflicts of strict legal obligations using "answer set programming," an efficient machine implementation of non-monotonic forms of reasoning. I exploit only a part of the answer set semantics for non-monotonic reasoning systems in order simply to enumerate all reasonable rulings in cases of conflict. This allows the answer set program to admit conflicts at the descriptive level while normatively requiring their resolution at the prescriptive level of enforceable law. It does so by exposing the conflict so that its resolution must be made explicitly and authoritatively by qualifying the legal rules in conflict.
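To make the encoding concrete, the sketch below gives a minimal, hypothetical rendering of the brakes-failed dilemma in clingo-style answer set programming; the predicate names (do/1, kills/2, forbidden/1, violated/1, requires_qualification/0) are illustrative assumptions, not the program presented in the talk.

    % Minimal, hypothetical sketch of the brakes-failed dilemma in
    % clingo-style answer set programming; all names are illustrative.

    % The car must choose exactly one action.
    action(swerve; stay).
    1 { do(A) : action(A) } 1.

    % Descriptive facts: the consequences of each action in this scenario.
    kills(swerve, one_bystander).
    kills(stay, five_pedestrians).

    % Strict legal obligations, encoded as prohibitions on killing either party.
    forbidden(kill(one_bystander)).
    forbidden(kill(five_pedestrians)).

    % A prohibition is violated when the chosen action kills a protected party.
    violated(kill(P)) :- do(A), kills(A, P), forbidden(kill(P)).

    % Expose the conflict rather than quarantine it: flag any ruling that
    % violates a strict obligation as requiring authoritative qualification
    % of the rules in conflict.
    requires_qualification :- violated(_).

    #show do/1.
    #show violated/1.
    #show requires_qualification/0.

Asking the solver for all models (e.g., clingo program.lp 0) then enumerates one answer set per candidate ruling, do(swerve) or do(stay), each flagged with the obligation it violates; the conflict is admitted descriptively while its resolution is left to an explicit, authoritative qualification of the rules in conflict.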