va – Verifiable Autonomy
http://wordpress.csc.liv.ac.uk/va
An EPSRC-funded collaboration between the universities of Liverpool, Sheffield and the West of England

Agent-Based Autonomous Systems and Abstraction Engines: Theory Meets Practice
http://wordpress.csc.liv.ac.uk/va/2016/07/20/agent-based-autonomous-systems-and-abstraction-engines-theory-meets-practice/
Wed, 20 Jul 2016 13:16:28 +0000

Our recently published paper, Agent-Based Autonomous Systems and Abstraction Engines: Theory Meets Practice, discusses some of the work done on vehicle convoying as part of the Verifiable Autonomy project.

Louise A. Dennis, Jonathan M. Aitken, Joe Collenette, Elisa Cucco, Maryam Kamali, Owen McAree, Affan Shaukat, Katie Atkinson, Yang Gao, Sandor Veres, and Michael Fisher. Agent-based Autonomous Systems and Abstraction Engines: Theory meets Practice. Towards Autonomous Robotic Systems, 17th Annual Conference (TAROS 2016), 2016. Springer LNCS 9716, pages 75-86.

Blog Posts by Alan Winfield
http://wordpress.csc.liv.ac.uk/va/2016/04/25/blog-posts-by-alan-winfield/
Mon, 25 Apr 2016 10:54:05 +0000

Alan Winfield has written a number of new posts on his blog that are relevant to the Verifiable Autonomy project. These include:

Engineering Moral Agents

Could we Make a Moral Machine?

How Ethical is your Robot?

Towards Ethical Robots: An Update

Like Doing Brain Surgery on Robots

Ethical Robotics discussed in a Nature News Article
http://wordpress.csc.liv.ac.uk/va/2015/07/03/ethical-robotics-are-discussed-in-a-nature-news-article/
Fri, 03 Jul 2015 10:16:15 +0000

The question of ethical robots was featured in a Nature news article on 1st July, which included input from Alan Winfield and Michael Fisher discussing the work being done as part of the Verifiable Autonomy project.

The article can be found at Machine Ethics: The Robot’s Dilemma (HTML) and The Robot’s Dilemma (PDF).

Lego Robot Dinosaurs at Cheltenham Science Festival
http://wordpress.csc.liv.ac.uk/va/2015/06/25/lego-robot-dinosaurs-at-cheltenham-science-festival/
Thu, 25 Jun 2015 14:34:06 +0000

[Picture: three researchers holding a Lego robot triceratops under two dinosaur skeletons.]

Lego Rovers Evolution is a public understanding activity funded by the Science and Technology Facilities Council (STFC). It is based on an activity Louise Dennis takes into schools in the North West, where she uses Lego robots to introduce children to ideas from autonomous systems, robotics and artificial intelligence.

The STFC, together with Manchester University, sponsored a marquee at Cheltenham Science Festival and we got the opportunity to take a version of the activity along. The marquee was called the DinoZone and was centred around two casts of dinosaur skeletons borrowed from the universities of Oxford and Manchester. Normally, when working in a school, Louise presents the Lego robots as versions of Mars rovers. Clearly, for the DinoZone, they needed to be Lego robot dinosaurs!

While we already had the robots, we needed to find funds to pay for travel and subsistence for the people working on the stand. Cheltenham Science Festival lasts for six days, so in the end nine people were involved: Louise, Michael Fisher and Maryam Kamali from the Verifiable Autonomy project; Elisa Cucco from the Reconfigurable Autonomy project; Ipek Calliskanelli from the Virtual Engineering Centre at Daresbury labs; and four PhD students from the Department of Computer Science at Liverpool University. The Centre for Autonomous Systems Technology at Liverpool put up some of the money and the rest came from sources including the Verifiable Autonomy project itself.

At the start of the week we had two robots with legs as well as several with wheels but, unfortunately, we had to abandon the legged ones. The way they were engineered to walk on four legs using only two motors was fascinating, but sadly they simply weren’t strong enough to take the strain of hundreds of children racing them up and down the table.

Over the space of six days, the team on the stand spoke to over 2,000 people. The vast majority of these were children, who had great fun driving the Lego robots around the custom-made table and experimenting with a simple line-following algorithm from robotics (sketched below). For people with a little more time and interest, we were able to discuss the functioning of sensors and give them a taste of rule-based artificial intelligence programming. Several people wanted to discuss issues of safety and reliability, which allowed us to talk about the work this project is doing.
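For the curious, the line follower is a classic piece of bang-bang control: a single downward-facing light sensor steers the robot along the edge of the line. Here is a minimal sketch of the idea in Python; the sensor and motor interfaces are assumptions for illustration, not the actual Lego Rovers code.

```python
# A minimal bang-bang line follower. The robot follows the *edge* of the
# line: it veers one way while the sensor sees the dark tape and the
# other way while it sees the light table, zig-zagging along the
# boundary. The sensor and motor interfaces are illustrative assumptions.

def line_follow_step(read_reflected_light, set_motor_speeds, threshold=50):
    """One control step of the follower; call repeatedly in a loop."""
    if read_reflected_light() < threshold:   # dark reading: on the tape
        set_motor_speeds(left=60, right=20)  # veer off, toward the edge
    else:                                    # light reading: off the tape
        set_motor_speeds(left=20, right=60)  # veer back onto the tape
```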

We had a lot of fun and the feedback we got was very positive. While we may not do something as big as Cheltenham Science Festival again, we do hope to take the Lego robots to museums and other local events in future, in order to demonstrate the basics of autonomy and, hopefully, spark off discussions about current research.

Eastercon Slides and References
http://wordpress.csc.liv.ac.uk/va/2015/04/02/eastercon-slides-and-references/
Thu, 02 Apr 2015 10:50:56 +0000

Slides

Eastercon Talk Slides (PDF)

References and Links

Model Checking Rational Agents

The rational model of agency is generally attributed to Rao and Georgeff.

  • Anand S. Rao and Michael P. Georgeff. Modeling Rational Agents within a BDI-Architecture. In James Allen, Richard Fikes, and Eric Sandewall, editors, Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning (KR-91), pages 473-484. Morgan Kaufmann, April 1991. Available from CiteSeer.

The verification of rational agents as discussed in the talk is described in detail in

and more generally in

The software is part of the MCAPL Project on Sourceforge.

Verifying the Rules of the Air

Ethical Robots

Alan Winfield also discusses the implementation of ethical robots on his blog.

Verifying Convoys

The work on convoys is part of the Verifiable Autonomy project (a joint project between the Universities of Liverpool, Sheffield and the West of England) and is currently unpublished. It is primarily being conducted by Owen McAree and Sandor Veres at the University of Sheffield and Maryam Kamali at the University of Liverpool.

Trustworthy Robotic Assistants

Trustworthy Robotic Assistants is a joint project between the Universities of Bristol, Hertfordshire and Liverpool.

The work on the interaction between model-checking, simulation and testing is in Dejanira Araiza-Illan, Clare Dixon, Kerstin Eder, Michael Fisher, Matt Webster and David Western, An Assurance-based Approach to Verification and Validation of Human–Robot Teams, which has been submitted to IROS 2015.

Lego Robots

The Lego robot demonstrations were based on an activity taken into schools.

The Lego Rovers website discusses the activity, including instructions for downloading and installing the code from GitHub.

The code actually used in the demonstrations can be found (undocumented) on the EV3 branch of the MCAPL project.

1st International Conference on AI and Ethics
http://wordpress.csc.liv.ac.uk/va/2015/02/16/1st-international-conference-on-ai-and-ethics/
Mon, 16 Feb 2015 10:12:38 +0000

On 25th January I presented our work, described in Towards Verifiably Ethical Robot Behaviour, at the First International Workshop on Artificial Intelligence and Ethics, which took place as part of the AAAI conference in Austin, Texas.

The workshop was an attempt to bring together researchers interested in all aspects of Artificial Intelligence and Ethics, from those concerned with how Artificial Intelligence can be deployed ethically by society to those concerned with how to make an Artificial Intelligence behave in an ethical fashion, which is where our research sits.

I felt there were two interrelated strands to this theme. Firstly, there is the question of how an artificial intelligence should handle ethical dilemmas. The classic dilemma, on which nearly all the talks focused, was first proposed by Foot in 1967 [1]. In this dilemma a person stands by a lever that will switch the points on a railway. The track divides, and five people are tied to one side and one person is tied to the other. If the person does nothing then the train will kill the five people, whereas if the person switches the points the train will kill only one person. In ethics, as applied to people, the questions surrounding this thought experiment focus on whether there is an ethical difference between action and inaction, and whether the number of people that will be killed through inaction makes a difference.

Researchers at the workshop were using this case study to ask whether it makes a difference when it is a computer, rather than a human, making the decision. What if a human still has to make the switch but a computer is advising them on the course of action: would it be acceptable for the computer to lie about the outcomes of an action (e.g., to say there was no one tied to the second track)? Would it be ethical for the computer to attempt to manipulate the human into what it considers the ethical outcome (e.g., to suggest that they will become a hero if they switch the points) [2]? And what if a human can override the computer when it makes such a decision: what makes a human trust a computer enough that they stop applying their own judgment to the computer’s decisions and just accept them [3]?

The second theme focused on how a computer can be designed to take ethical considerations into account. Anderson and Anderson [4] combined these themes while looking at ethical dilemmas that are likely to arise in health-care and home-help situations (scenarios in which Liverpool’s Centre for Autonomous Systems Technology has a lot of interest): in particular, how serious does a patient’s failure to take their medication have to be before it is ethical to inform a health-care professional that they are not taking it? The Andersons proposed a learning mechanism combined with an “ethical Turing test”, in which the computer’s decisions over a number of scenarios are compared with the choices of a board of ethicists, and the ethicists’ explanations for their decisions are used by the computer to refine its own understanding of what constitutes an ethical decision in such a situation.
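As a rough illustration, such an “ethical Turing test” might be scored along the following lines; the encoding of scenarios and decisions as plain labels, and the pass mark, are our assumptions rather than the Andersons’ actual method.

```python
# A sketch of scoring an "ethical Turing test": the machine's decision in
# each scenario is compared with the majority verdict of a panel of
# ethicists. The data encoding and pass mark are illustrative assumptions.
from collections import Counter

def panel_verdict(choices):
    """The panel's majority choice for one scenario."""
    return Counter(choices).most_common(1)[0][0]

def ethical_turing_test(machine_decide, scenarios, panel, pass_rate=0.9):
    """True if the machine agrees with the panel majority on at least
    pass_rate of the scenarios."""
    agreements = sum(
        machine_decide(s) == panel_verdict([ethicist(s) for ethicist in panel])
        for s in scenarios
    )
    return agreements / len(scenarios) >= pass_rate
```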

Our own research [5] is not yet at a stage where we consider ethical dilemmas. We have been focusing on how we can apply an ethical layer as a filter over an artificial intelligence’s decisions and, more importantly, how we can prove that this layer then guarantees the intelligence acts in accordance with those ethics. This is based on Alan Winfield’s previous work in which a robot can decide to collide with a human if doing so will prevent the human falling into a hole in the ground: this is achieved by filtering out all the robot’s options that involve avoiding the human, because all those options result in the human falling in the hole. In Liverpool we refined this into a simple interface for describing ethical priorities and then used a technique called model-checking to prove that this interface filtered choices as the programmer intended. There was a lot of discussion after my talk about whether treating ethical decision making as a separate layer on top of the robot’s other decision-making processes (about how it is going to achieve its goals, for instance) could actually work in more complex situations.
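In outline, the filtering idea can be pictured as follows: predict the outcome of each available action using an internal model, rank each outcome by the most serious ethical concern it triggers, and discard every action except the least bad. The Python sketch below is a simplified illustration of the idea only; the names and the priority encoding are assumptions, and the real system is a verified rational agent rather than this code.

```python
# A simplified sketch of the "ethical layer as a filter" idea. Concerns
# are ordered from most to least serious; an outcome's severity is the
# index of the most serious concern it triggers (0 = worst). All names
# and the priority encoding are illustrative assumptions.
ETHICAL_PRIORITY = ["human_harmed", "robot_damaged", "goal_missed"]

def severity(outcome):
    """Index of the most serious concern triggered, or one past the end
    of the list if the outcome triggers no concern at all."""
    for rank, concern in enumerate(ETHICAL_PRIORITY):
        if concern in outcome:
            return rank
    return len(ETHICAL_PRIORITY)

def ethical_filter(actions, predict_outcome):
    """Discard every action except those whose predicted outcome is
    least severe; ordinary decision making chooses among the survivors."""
    ranked = {action: severity(predict_outcome(action)) for action in actions}
    least_bad = max(ranked.values())  # highest index = least serious
    return [action for action, rank in ranked.items() if rank == least_bad]

# In the hole scenario: both avoidance actions predict the human being
# harmed, so only the interception action survives the filter.
predicted = {"avoid_left": {"human_harmed"},
             "avoid_right": {"human_harmed"},
             "intercept": {"robot_damaged"}}
print(ethical_filter(predicted, predicted.get))  # ['intercept']
```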

Many of the researchers at the workshop were in favour of systems in which an artificial intelligence calculates a single utility function that assigns a value to each choice available to it, and then selects the choice with the highest value. Obviously, in a system designed this way, explicitly ethical reasoning and other sorts of reasoning have to be intertwined. There are advantages to this approach (the learning algorithms proposed by the Andersons, for example, work well with this sort of implementation), but it does raise other issues: for instance, during training phases, when the AI attempts to learn the best course of action, the utility function might cause the AI to avoid situations where a human trainer would force it to make different, apparently lower-scoring, choices. This issue was also being investigated by researchers at the workshop [6].
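In caricature, that approach reduces to folding every consideration, ethical or otherwise, into one number per choice and maximising it; in the sketch below the weights and feature names are invented for illustration.

```python
# A toy sketch of single-utility-function action selection: ethical and
# task-related considerations are intertwined in one score. The weights
# and feature names are illustrative assumptions.
WEIGHTS = {"human_harmed": -100.0, "robot_damaged": -10.0, "goal_achieved": 1.0}

def utility(outcome):
    """Fold all considerations into a single number."""
    return sum(w for feature, w in WEIGHTS.items() if feature in outcome)

def choose(actions, predict_outcome):
    """Select the action whose predicted outcome scores highest."""
    return max(actions, key=lambda action: utility(predict_outcome(action)))
```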

People did like the fact that our work highlighted that it is not enough for an artificial intelligence simply to behave ethically; it must also be known to behave ethically.

The workshop also prompted talks and discussions on a range of topics, from how to teach Computer Science students about ethical issues, to the economic possibilities of developments in Computer Science, and the proposed ban on autonomous weapons put forward by Human Rights Watch. It was a really interesting workshop to attend, both for understanding the breadth of the field and for seeing the progress being made towards addressing the ethical issues surrounding Artificial Intelligence.


[1] Philippa Foot. The Problem of Abortion and the Doctrine of the Double Effect. In Virtues and Vices (Oxford: Basil Blackwell, 1978); originally appeared in the Oxford Review, Number 5, 1967.

[2] Marco Guerini, Fabio Pianesi and Oliviero Stock. Is it morally acceptable for a system to lie to persuade me? 1st International Workshop on AI and Ethics

[3] Julien Collart, Thibault Gateau, Eve Fabre and Catherine Tessier. Human-robot Systems facing ethical conflicts: A Preliminary Experimental Protocol. 1st International Workshop on AI and Ethics

[4] Michael Anderson and Susan Anderson. Toward Ensuring Ethical Behavior from Autonomous Systems: A Case-Supported Principle-Based Paradigm. 1st International Workshop on AI and Ethics

[5] Louise Dennis, Michael Fisher and Alan Winfield. Towards Verifiably Ethical Robot Behaviour. 1st International Workshop on AI and Ethics

[6] Nate Soares, Benja Fallenstein, Eliezer Yudkowsky and Stuart Armstrong. Corrigibility. 1st International Workshop on AI and Ethics

Press Surrounding the Launch of the Verifiable Autonomy Project
http://wordpress.csc.liv.ac.uk/va/2015/02/16/press-surrounding-the-launch-of-the-verifiable-autonomy-project/
Mon, 16 Feb 2015 10:04:11 +0000

National

Financial Times (FT Weekend Magazine): The rise of the (more ethical) machines (Print/online) 06/12/14

Mail Online: How to prevent robot world domination: Project is launched to ensure AI can follow rules and make ethical decisions 10/12/14

International

World Industrial Reporter: UK researchers join hands to make safer, more ethical, autonomous robots 09/12/14

Business Week: New research will help robots know their limits 08/12/14

Yahoo (Argentina): Británicos trabajan para que futuros robots puedan pensar y actuar (“Britons work to ensure future robots can think and act”) 8/12/14

El Universal: Británicos desarrollarán robots que puedan pensar y actuar (“Britons to develop robots that can think and act”) 8/12/14

Trade & Specialist

Design Products and Applications: New research will help robots know their limits 08/12/14

My Science: New research will help robots know their limits 9/12/14

Phys.org: New research will help robots know their limits 8/12/14

Automation: New research will help robots know their limits 8/12/14

A-Z of Robotics: Bristol Robotics Laboratory to Research, Develop and Demonstrate Verifiably 'Ethical' Robots 8/12/14

ECN Magazine: New research will help robots know their limits 8/12/14

Robotics Tomorrow: Bristol Robotics Laboratory to Research, Develop and Demonstrate Verifiably 'Ethical' Robots 08/12/14

Process and Control Today: New research will help robots know their limits 10/12/14

Controlled Environments Magazine: Helping Robots Know Their Limits 8/12/14

The Conversation: We must be sure that robot AI will make the right decisions, at least as often as humans do 9/12/14

IET Partner News: New research will help robots know their limits Spring 2015.

Alan Winfield on Verifiable Autonomy
http://wordpress.csc.liv.ac.uk/va/2014/12/08/alan-winfield-on-verifiable-autonomy/
Mon, 08 Dec 2014 10:12:14 +0000

Alan Winfield, Principal Investigator on Verifiable Autonomy at the Bristol Robotics Laboratory, has written several posts about Verifiable Autonomy on his own blog.

These posts discuss the design and implementation of his ethical robot, which we used as the basis of Towards Verifiably Ethical Robot Behaviour, and they are well worth reading if you are interested in the topic of constraining a robot’s behaviour using an ethical framework.
Verifiable Autonomy: First All Hands Meeting
http://wordpress.csc.liv.ac.uk/va/2014/12/05/verifiable-autonomy-first-all-hands-meeting/
Fri, 05 Dec 2014 15:18:20 +0000

The first meeting of the Verifiable Autonomy project took place today at Sheffield’s Department of Automatic Control and Systems Engineering. Discussions ranged over many topics, such as Open Science, Ethical Robotics, Autonomous Control of Vehicle Convoys and Abstracting Sensor Data for Reasoning.

We agreed to host a number of workshops including, hopefully, a workshop on Agent Verification in September 2015 in association with the TAROS (Towards Autonomous Robotic Systems) conference in Liverpool. In future we hope to host workshops on Ethical and Legal Aspects of Autonomous Systems, and on Verifiable Learning.

We are all very excited about our plans for the future of this project.

Towards Verifiably Ethical Robot Behaviour
http://wordpress.csc.liv.ac.uk/va/2014/11/15/towards-verifiably-ethical-robot-behaviour/
Sat, 15 Nov 2014 15:25:47 +0000

We’ve just heard that our paper, based on the work described in Developing an Ethical Consequence Engine, has been accepted into the AAAI workshop on AI and Ethics.

In this paper we integrated the work in Winfield et al.’s Towards an Ethical Robot: Internal Models, Consequences and Ethical Action Selection (which Professor Winfield has discussed on his blog) into the verification framework Liverpool has been developing for the past eight years. The work was only preliminary, but we were able to prove that the agent made the correct selection of actions in a very abstract scenario, and to make a more detailed probabilistic analysis of the outcomes in a more concrete scenario.
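To give a flavour of what proving the correct selection of actions involves: a model checker exhaustively explores every configuration the abstract scenario can reach and checks the agent’s choice in each one. The toy Python sketch below illustrates that exhaustive style; the scenario encoding is an assumption for illustration, and the real proofs used Liverpool’s agent verification framework (the MCAPL project) rather than this code.

```python
# A toy illustration of the exhaustive, model-checking style of proof:
# enumerate every configuration of the abstract scenario and check that
# the agent always selects an acceptable action. The scenario encoding
# is an illustrative assumption.
from itertools import product

def verify_selection(agent_choice, human_positions, hole_positions, acceptable):
    """Return (True, None) if the selection property holds everywhere,
    or (False, counterexample) for the first violating configuration."""
    for human, hole in product(human_positions, hole_positions):
        action = agent_choice(human, hole)
        if not acceptable(human, hole, action):
            return False, (human, hole, action)
    return True, None
```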
