Publications – Verifiable Autonomy
http://wordpress.csc.liv.ac.uk/va
An EPSRC-funded collaboration between the universities of Liverpool, Sheffield and the West of England

Agent-Based Autonomous Systems and Abstraction Engines: Theory Meets Practice
http://wordpress.csc.liv.ac.uk/va/2016/07/20/agent-based-autonomous-systems-and-abstraction-engines-theory-meets-practice/
Wed, 20 Jul 2016

Our recently published paper Agent-Based Autonomous Systems and Abstraction Engines: Theory Meets Practice discusses some of the work done on vehicle convoying as part of the Verifiable Autonomy project.

Louise A. Dennis, Jonathan M. Aitken, Joe Collenette, Elisa Cucco, Maryam Kamali, Owen McAree, Affan Shaukat, Katie Atkinson, Yang Gao, Sandor Veres, and Michael Fisher. Agent-based Autonomous Systems and Abstraction Engines: Theory meets Practice. Towards Autonomous Robotic Systems, 17th Annual Conference (TAROS 2016), 2016. Springer LNCS 9716, pages 75-86.

Eastercon Slides and References
http://wordpress.csc.liv.ac.uk/va/2015/04/02/eastercon-slides-and-references/
Thu, 02 Apr 2015

Slides

Eastercon Talk Slides (PDF)

References and Links

Model Checking Rational Agents

The rational model of agency is generally attributed to Rao and Georgeff.

  • Anand S. Rao and Michael P. Georgeff. Modeling Rational Agents within a BDI-Architecture. In James Allen, Richard Fikes, and Eric Sandewall, editors, Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning (KR-91), pages 473-484. Morgan Kaufmann, April 1991. Available from citeseer

The verification of rational agents as discussed in the talk is described in detail in

and more generally in

The software is part of the MCAPL Project on Sourceforge.

Verifying the Rules of the Air

Ethical Robots

Alan Winfield also discusses the implementation of ethical robots on his blog.

Verifying Convoys

The work on convoys is part of the Verifiable Autonomy project (a joint project between the Universities of Liverpool, Sheffield and the West of England) and is currently unpublished. It is primarily being conducted by Owen McAree and Sandor Veres at the University of Sheffield and Maryam Kamali at the University of Liverpool.

Trustworthy Robotic Assistants

Trustworthy Robotic Assistants is a joint project between the Universities of Bristol, Hertfordshire and Liverpool.

The work on the interaction between model-checking, simulation and testing is in Dejanira Araiza-Illan, Clare Dixon, Kerstin Eder, Michael Fisher, Matt Webster and David Western, An Assurance-based Approach to Verification and Validation of Human–Robot Teams which has been submitted to IROS 2015.

Lego Robots

The Lego robot demonstrations were based on an activity taken into schools.

The Lego Rovers website discusses the activity, including instructions for downloading and installing the code from GitHub.

The code actually used in the demonstrations can be found (undocumented) on the EV3 branch of the MCAPL project.

1st International Conference on AI and Ethics
http://wordpress.csc.liv.ac.uk/va/2015/02/16/1st-international-conference-on-ai-and-ethics/
Mon, 16 Feb 2015

On the 25th of January I presented our work, described in Towards Verifiably Ethical Robot Behaviour, at the First International Workshop on Artificial Intelligence and Ethics, which took place as part of the AAAI conference in Austin, Texas.

The workshop was an attempt to bring together researchers interested in all aspects of Artificial Intelligence and Ethics, from those concerned with how Artificial Intelligence can be deployed ethically by society, to those concerned with how to make an Artificial Intelligence behave in an ethical fashion, which is where our research sits.

I felt there were two interrelated strands to this theme. Firstly, there is the question of how an artificial intelligence should handle ethical dilemmas. The classic dilemma, on which nearly all the talks focused, was first proposed by Foot in 1967 [1]. In this dilemma a person stands by a lever that will switch the points on a railway. The track divides, and five people are tied to one side and one person is tied to the other. If the person does nothing then the train will kill the five people, whereas if the person switches the points the train will kill only one person. In ethics, as applied to people, the questions surrounding this thought experiment focus on whether there is an ethical difference between action and inaction, and whether the number of people that will be killed through inaction makes a difference.

Researchers at the workshop were using this case study to ask whether it made a difference if a computer, rather than a human, was making the decision. What if a human still has to make the switch but a computer is advising them on the course of action – would it be acceptable for the computer to lie about the outcomes of an action (e.g., to say there was no one tied to the second track)? Would it be ethical for the computer to attempt to manipulate the human to achieve what it considers the ethical outcome (e.g., to suggest that they will become a hero if they switch the points) [2]? What if a human can override the computer when it makes such a decision – what makes a human trust a computer enough that they stop applying their own judgment to the computer's decisions and just accept them [3]?

The second theme focused on how a computer can be designed to take ethical considerations into account. Anderson and Anderson [4] combined these themes while looking at ethical dilemmas that are likely to arise in health-care and home help situations (scenarios in which Liverpool's Centre for Autonomous Systems Technology has a lot of interest). In particular, how serious does a patient's failure to take their medication have to be before it is ethical to inform a health-care professional that they are not taking it? The Andersons proposed a learning mechanism combined with an "ethical Turing test", in which the computer's decisions over a number of scenarios are compared with the choices of a board of ethicists, and the ethicists' explanations for their decisions are used by the computer to refine its own understanding of what constitutes an ethical decision in such a situation.

Our own research [5] is not yet at a stage where we consider ethical dilemmas. We have been focusing on how we can apply an ethical layer as a filter over an artificial intelligence's decisions and, more importantly, how we can prove that this layer then guarantees the intelligence acts in accordance with those ethics. This is based on Alan Winfield's previous work in which a robot can decide to collide with a human if it will prevent the human falling into a hole in the ground. This is achieved by filtering out all the robot's options which involve avoiding the human, because all those options result in the human falling in the hole. In Liverpool we refined this into a simple interface for describing ethical priorities and then used a technique called model-checking to prove that this interface filtered choices as the programmer intended. There was a lot of discussion after my talk about whether treating ethical decision making as a separate layer on top of the robot's other decision making processes (about how it is going to achieve its goals, for instance) could actually work in more complex situations.
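The filtering idea above can be illustrated with a minimal sketch. This is not the project's actual code or interface; the names (`Option`, `human_falls_in_hole`, `ethical_filter`) are invented for illustration, assuming the robot has some way of predicting each option's consequences:

```python
# A hypothetical sketch of an ethical layer that filters a robot's options:
# any option whose predicted outcome harms the human is removed before the
# robot's ordinary decision making chooses among what remains.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    human_falls_in_hole: bool  # predicted consequence of taking this option

def ethical_filter(options):
    """Discard every option whose predicted outcome harms the human."""
    safe = [o for o in options if not o.human_falls_in_hole]
    # If no safe option exists, fall back to the full set rather than none.
    return safe if safe else options

options = [
    Option("avoid_human", human_falls_in_hole=True),
    Option("intercept_human", human_falls_in_hole=False),
]
remaining = ethical_filter(options)
print([o.name for o in remaining])  # only "intercept_human" survives
```

Because the filter is a separate, small piece of logic sitting above the rest of the decision making, it is the kind of component that model-checking can exhaustively verify: one can check that, for every combination of predicted consequences, no harmful option ever passes through.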

Many of the researchers at the workshop favoured systems in which an artificial intelligence calculates a single utility function that assigns a value to each available choice, and then selects the choice with the highest value. Obviously, in a system designed this way, explicitly ethical reasoning and other sorts of reasoning have to be intertwined. There are advantages to this approach: the learning algorithms proposed by the Andersons, for example, work well with this sort of implementation. But it does raise other issues. For instance, during training phases, when the AI attempts to learn the best course of action, the utility function might cause the AI to avoid situations where a human trainer would force it to make different (apparently lower-scoring) choices; this issue was also being investigated by researchers at the workshop [6].
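For contrast with the layered filter, here is a minimal sketch of the single-utility-function approach: ethical and non-ethical considerations are folded into one score, and the highest-scoring choice wins. The features and weight below are invented for illustration, not drawn from any system discussed at the workshop:

```python
# Hypothetical single-utility-function decision making: task reward and an
# ethical penalty are combined into one number, and the choice with the
# highest combined score is selected.

def utility(choice):
    # The weight 10.0 encodes how heavily harm is penalised relative to
    # task reward; choosing it is itself an ethical design decision.
    return choice["task_reward"] - 10.0 * choice["humans_harmed"]

def select(choices):
    return max(choices, key=utility)

choices = [
    {"name": "do_nothing",    "task_reward": 0.0, "humans_harmed": 5},
    {"name": "switch_points", "task_reward": 0.0, "humans_harmed": 1},
]
best = select(choices)
print(best["name"])  # "switch_points": fewer humans harmed, higher utility
```

Note how, unlike the filter sketch, there is no separate ethical component here that could be verified in isolation: the ethics lives inside the weights of the scoring function.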

People did like the fact that our work highlighted that it wasn’t enough for an artificial intelligence simply to behave ethically but that it was important that it be known to behave ethically.

The workshop also prompted talks and discussions on a range of topics from how to teach Computer Science students about ethical issues, to the economic possibilities of developments in Computer Science, and the proposed ban on Autonomous Weapons being put forward by Human Rights Watch. It was a really interesting workshop to attend both to understand the breadth of the field and the progress being made towards addressing the ethical issues surrounding Artificial Intelligence.


[1] Philippa Foot. The Problem of Abortion and the Doctrine of the Double Effect. In Virtues and Vices (Oxford: Basil Blackwell, 1978); originally appeared in the Oxford Review, Number 5, 1967.

[2] Marco Guerini, Fabio Pianesi and Oliviero Stock. Is it morally acceptable for a system to lie to persuade me?. 1st International Workshop on AI and Ethics

[3] Julien Collart, Thibault Gateau, Eve Fabre and Catherine Tessier. Human-robot Systems facing ethical conflicts: A Preliminary Experimental Protocol. 1st International Workshop on AI and Ethics

[4] Michael Anderson and Susan Anderson. Toward Ensuring Ethical Behavior from Autonomous Systems: A Case-Supported Principle-Based Paradigm. 1st International Workshop on AI and Ethics

[5] Louise Dennis, Michael Fisher and Alan Winfield. Towards Verifiably Ethical Robot Behaviour. 1st International Workshop on AI and Ethics

[6] Nate Soares, Benja Fallenstein, Eliezer Yudkowsky and Stuart Armstrong. Corrigibility. 1st International Workshop on AI and Ethics

Towards Verifiably Ethical Robot Behaviour
http://wordpress.csc.liv.ac.uk/va/2014/11/15/towards-verifiably-ethical-robot-behaviour/
Sat, 15 Nov 2014

We've just heard that our paper, based on the work described in Developing an Ethical Consequence Engine, has been accepted into the AAAI workshop on AI & Ethics.

In this paper we integrated the work in Winfield et al.'s Towards an Ethical Robot: Internal Models, Consequences and Ethical Action Selection (which Professor Winfield has discussed on his blog) into the verification framework Liverpool has been developing for the past eight years. The work was only preliminary, but we were able to prove that the agent made the correct selection of actions in a very abstract scenario, and to perform a more detailed probabilistic analysis of the outcomes in a more concrete scenario.
