Verifiable Autonomy

Overview

The Verifiable Autonomy project was an EPSRC-funded collaboration between computer science researchers at the University of Liverpool, roboticists at the University of the West of England, and control scientists at the University of Sheffield.

Aims

Autonomy is a core technology theme for the 21st century.  Within 20 years, we expect to see fully autonomous vehicles, robots and software, all of which will (and must) be able to make their own decisions without direct human intervention.  This potential is both exciting, in that it opens up applications beyond current possibilities, and frightening, in that control of these systems is taken away from us.  How do we know that they will work? How do we know that they are safe? And how can we trust them?  None of these questions can currently be answered, yet without such guarantees increased autonomy will not be accepted by engineers, allowed by regulators or trusted by the public.

The main objectives of the project were:

  • to provide viable verification routes beyond the agent program model-checking approach we have already developed;
  • to extend the class of logical properties that can be verified for such agents, beyond standard BDI-like properties, to at least include probabilistic and real-time aspects (illustrated in the sketch after this list);
  • to investigate what high-level ethical or legal requirements we actually want to verify, and how the agent verification approach can be extended to handle these;
  • to develop reliable autonomous techniques for learning skills, plans and, potentially, goals, and to investigate how both our requirements and our verification techniques can cope with such learning;
  • to implement demonstrators that tackle the above dimensions, and to assess and evaluate both the effectiveness of formal verification and the practical viability of the approach.
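
To give a flavour of the kinds of properties involved (the notation below is illustrative only, drawn from standard temporal, probabilistic and real-time logics rather than from the project's own specification language), a BDI-style property, a probabilistic extension and a real-time extension might read:

  □ ( B(hazard) → ◇ I(stop) )      — whenever the agent believes there is a hazard, it eventually intends to stop
  P≥0.99 [ ◇ handled ]             — with probability at least 0.99, the hazard is eventually handled
  □ ( B(hazard) → ◇≤5 stop )       — whenever the agent believes there is a hazard, it stops within 5 time units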