Category Archives: Related Work

Discussions and links to related work both by members of the Verifiable Autonomy project and by others.

Robot ethics as an emerging field: some data

Recently, I analyzed some bibliometric data to demonstrate the emergence of robot ethics as a field. The data and the full blog post can be found at:

http://bitsofbats.weebly.com/blog/robot-ethics-as-an-emerging-field-some-data

The data presented in the blog post provide some evidence that the field of robot ethics is on the rise, both in academia and beyond. In all, the data add to the intuition that the ethical aspects of robotics are becoming more of an issue. For example, the 1st International Workshop on AI and Ethics was held in 2015. Influential news sites around the net (including Nature News: here and here) are also picking up on stories related to robot ethics. Finally, various initiatives to regulate the ethics of robotics have emerged. This increased attention to ethical issues seems to be reflected in the publication record. With robots able to fire without human intervention becoming a (scary) reality (see video in the original post), the interest in robot ethics might not have come too soon.

Eastercon Slides and References

Slides

Eastercon Talk Slides (PDF)

References and Links

Model Checking Rational Agents

The rational model of agency is generally attributed to Rao and Georgeff.

  • Anand S. Rao and Michael P. Georgeff. Modeling Rational Agents within a BDI-Architecture. In James Allen, Richard Fikes, and Eric Sandewall, editors, Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning (KR-91), pages 473-484. Morgan Kaufmann, April 1991. Available from citeseer

The verification of rational agents as discussed in the talk is described in detail in

and more generally in

The software is part of the MCAPL Project on Sourceforge.

Verifying the Rules of the Air

Ethical Robots

Alan Winfield also discusses the implementation of ethical robots on his blog.

Verifying Convoys

The work on convoys is part of the Verifiable Autonomy project (a joint project between the Universities of Liverpool, Sheffield and the West of England) and is currently unpublished. It is primarily being conducted by Owen McAree and Sandor Veres at the University of Sheffield and Maryam Kamali at the University of Liverpool.

Trustworthy Robotic Assistants

Trustworthy Robotic Assistants is a joint project between the Universities of Bristol, Hertfordshire and Liverpool.

The work on the interaction between model-checking, simulation and testing is in Dejanira Araiza-Illan, Clare Dixon, Kerstin Eder, Michael Fisher, Matt Webster and David Western, An Assurance-based Approach to Verification and Validation of Human–Robot Teams which has been submitted to IROS 2015.

Lego Robots

The Lego robot demonstrations were based on an activity taken into schools.

The Lego Rovers website discusses the activity including instructions for downloading and installing the code from github.

The code actually used in the demonstrations can be found (undocumented) on the EV3 branch of the MCAPL project.

1st International Workshop on AI and Ethics

On the 25th of January 2015 I presented our work, described in Towards Verifiably Ethical Robot Behaviour, at the First International Workshop on Artificial Intelligence and Ethics, which took place as part of the AAAI conference in Austin, Texas.

The workshop was an attempt to bring together researchers interested in all aspects of Artificial Intelligence and Ethics, from those concerned with how Artificial Intelligence can be deployed ethically by society to those concerned with how to make an Artificial Intelligence behave in an ethical fashion, which is where our research sits.

I felt there were two interrelated strands in this theme. Firstly, there is the question of how an artificial intelligence should handle ethical dilemmas. The classic dilemma, on which nearly all the talks focused, was first proposed by Foot in 1967 [1]. In this dilemma a person stands by a lever that will switch the points on a railway. The track divides, and five people are tied to one side and one person is tied to the other. If the person does nothing, the train will kill the five people, whereas if the person switches the points the train will kill only one person. In ethics as applied to people, the questions surrounding this thought experiment focus on whether there is an ethical difference between action and inaction, and whether the number of people that will be killed through inaction makes a difference.

Researchers at the workshop were using this case study to ask whether it makes a difference if a computer, rather than a human, is making the decision. What if a human still has to make the switch but a computer is advising them on the course of action? Would it be acceptable for the computer to lie about the outcomes of an action (e.g., to say there was no one tied to the second track)? Would it be ethical for the computer to attempt to manipulate the human to achieve what it considers the ethical outcome (e.g., to suggest that they will become a hero if they switch the points) [2]? And if a human can override the computer when it makes such a decision, what makes the human trust the computer enough that they stop applying their own judgment to its decisions and simply accept them [3]?

The second theme focused on how a computer can be designed to take ethical considerations into account. Anderson and Anderson [4] combined these themes while looking at ethical dilemmas that are likely to arise in health-care and home-help situations (scenarios in which Liverpool’s Centre for Autonomous Systems Technology has a lot of interest). In particular, how serious does a patient’s failure to take their medication have to be before it is ethical to inform a health-care professional that they are not taking it? The Andersons proposed a learning mechanism combined with an “ethical Turing test” in which the computer’s decisions over a number of scenarios are compared with the choices of a board of ethicists, and the ethicists’ explanations for their decisions are used by the computer to refine its own understanding of what constitutes an ethical decision in such a situation.
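A minimal sketch of the “ethical Turing test” idea as I understood it (the scenario names, actions and agreement threshold below are illustrative assumptions, not the Andersons’ actual system):

```python
# Hypothetical sketch: compare a system's decisions against a panel of
# ethicists over a set of scenarios; "pass" if agreement is high enough.
def ethical_turing_test(system_decisions, panel_decisions, threshold=0.9):
    """Both arguments map scenario -> chosen action.
    The 0.9 threshold is an illustrative assumption."""
    agreed = sum(1 for scenario, action in system_decisions.items()
                 if action == panel_decisions[scenario])
    return agreed / len(system_decisions) >= threshold

# Invented scenarios inspired by the health-care setting discussed above
system = {"missed_dose_minor": "wait", "missed_dose_serious": "notify"}
panel  = {"missed_dose_minor": "wait", "missed_dose_serious": "notify"}
print(ethical_turing_test(system, panel))  # True: full agreement
```

In the Andersons’ proposal it is not just the panel’s verdicts that matter: the ethicists’ explanations would feed back into refining the system’s decision principles.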

Our own research [5] is not yet at a stage where we consider ethical dilemmas. We have been focusing on how we can apply an ethical layer as a filter over an artificial intelligence’s decisions and, more importantly, how we can prove that this layer then guarantees the intelligence acts in accordance with those ethics. This is based on Alan Winfield’s previous work in which a robot can decide to collide with a human if doing so will prevent the human falling into a hole in the ground. This is achieved by filtering out all of the robot’s options which involve avoiding the human, because all of those options result in the human falling in the hole. In Liverpool we refined this into a simple interface for describing ethical priorities and then used a technique called model checking to prove that this interface filtered choices as the programmer intended. There was a lot of discussion after my talk about whether treating ethical decision making as a separate layer on top of the robot’s other decision-making processes (about how it is going to achieve its goals, for instance) could actually work in more complex situations.
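To make the filtering idea concrete, here is a rough sketch (the outcome table, outcome names and severity ordering are invented for the example and are not our actual interface): the ethical layer keeps only those options whose worst predicted outcome is the least severe.

```python
# Illustrative ethical filter over a robot's candidate actions.
# Lower number = more severe outcome to avoid.
SEVERITY = {"human_harmed": 0, "robot_harmed": 1, "goal_missed": 2}

# Stand-in for a consequence engine: predicted outcomes of each action
# in the hole-in-the-ground scenario (hypothetical data).
PREDICTED = {
    "avoid_human":   {"human_harmed"},                 # human falls in the hole
    "block_human":   {"robot_harmed"},                 # collide, human saved
    "continue_task": {"human_harmed", "goal_missed"},
}

def worst(action):
    """Severity of the worst outcome predicted for this action."""
    return min(SEVERITY[o] for o in PREDICTED[action])

def ethical_filter(actions):
    """Keep only the actions whose worst predicted outcome is least severe."""
    best = max(worst(a) for a in actions)
    return [a for a in actions if worst(a) == best]

print(ethical_filter(["avoid_human", "block_human", "continue_task"]))
# → ['block_human']: every option that lets the human fall is filtered out
```

Model checking then plays the role described above: proving that a filter of this general shape removes choices exactly as the programmer’s stated priorities intend.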

Many of the researchers at the workshop were in favour of systems in which an artificial intelligence calculates a single utility function that assigns a value to each choice it has, and then selects the choice with the highest value. Obviously, in a system designed this way, explicitly ethical reasoning and other sorts of reasoning have to be intertwined. There are advantages to this approach; for example, the learning algorithms proposed by the Andersons work well with this sort of implementation. But it does raise other issues. For instance, during training phases, when the AI attempts to learn the best course of action, the utility function might cause the AI to avoid situations where a human trainer would force it to make different (apparently lower-scoring) choices; this issue was also being investigated by researchers at the workshop [6].
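For contrast, a minimal sketch of the single-utility-function style (the weight and the numbers are arbitrary assumptions for illustration): ethical cost and task reward are folded into one score, and the highest-scoring choice wins.

```python
# Illustrative single utility function mixing task reward and ethical cost.
def utility(choice):
    # The weight of 10.0 on ethical cost is an arbitrary assumption.
    return choice["task_reward"] - 10.0 * choice["ethical_cost"]

choices = [
    {"name": "avoid_human", "task_reward": 5.0, "ethical_cost": 1.0},
    {"name": "block_human", "task_reward": 0.0, "ethical_cost": 0.0},
]
best = max(choices, key=utility)
print(best["name"])  # block_human (utility 0.0 beats -5.0)
```

Unlike the layered filter, nothing here separates the ethical term from the task term: they trade off against each other inside one number, which is exactly the intertwining discussed above.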

People did like the fact that our work highlighted that it wasn’t enough for an artificial intelligence simply to behave ethically but that it was important that it be known to behave ethically.

The workshop also prompted talks and discussions on a range of topics from how to teach Computer Science students about ethical issues, to the economic possibilities of developments in Computer Science, and the proposed ban on Autonomous Weapons being put forward by Human Rights Watch. It was a really interesting workshop to attend both to understand the breadth of the field and the progress being made towards addressing the ethical issues surrounding Artificial Intelligence.


[1] Philippa Foot, The Problem of Abortion and the Doctrine of the Double Effect in Virtues and Vices (Oxford: Basil Blackwell, 1978)(originally appeared in the Oxford Review, Number 5, 1967.)

[2] Marco Guerini, Fabio Pianesi and Oliviero Stock. Is it morally acceptable for a system to lie to persuade me?. 1st International Workshop on AI and Ethics

[3] Julien Collart, Thibault Gateau, Eve Fabre and Catherine Tessier. Human-robot Systems facing ethical conflicts: A Preliminary Experimental Protocol. 1st International Workshop on AI and Ethics

[4] Michael Anderson and Susan Anderson. Toward Ensuring Ethical Behavior from Autonomous Systems: A Case-Supported Principle-Based Paradigm. 1st International Workshop on AI and Ethics

[5] Louise Dennis, Michael Fisher and Alan Winfield. Towards Verifiably Ethical Robot Behaviour. 1st International Workshop on AI and Ethics

[6] Nate Soares, Benja Fallenstein, Eliezer Yudkowsky and Stuart Armstrong. Corrigibility. 1st International Workshop on AI and Ethics

Alan Winfield on Verifiable Autonomy

Alan Winfield, Principal Investigator on Verifiable Autonomy at the Bristol Robotics Lab, has written several posts about Verifiable Autonomy on his own blog.

These posts discuss the design and implementation of his Ethical Robot, which we used as the basis of Towards Verifiably Ethical Robot Behaviour, and they are well worth reading if you are interested in the topic of constraining a robot’s behaviour using an Ethical Framework.