Developing an Ethical Consequence Engine
http://wordpress.csc.liv.ac.uk/va/2014/10/07/developing-an-ethical-consequence-engine/
Tue, 07 Oct 2014

The Verifiable Autonomy project (http://wordpress.csc.liv.ac.uk/va), an EPSRC-funded collaboration between the universities of Liverpool, Sheffield and the West of England, is interested in pursuing an Open Science methodology and, in particular, open notebook science.  The following post is constructed from the commit logs and daily summaries produced while developing an ethical consequence engine.

Because we are still learning how to apply open notebook science in computer science, the daily summaries only start a couple of days into the development.

Git commit logs are for the ethical_governor branch of the MCAPL source code repository found at mcapl.sourceforge.net.

NB.  At the time of writing, development is still ongoing and this post will be expanded as it progresses.

Wednesday 24th September, 2014

commit ffeb1b58a9378c2db442e2bc8e5057b8ca0a2d77
Author: Louise Dennis <louise@dennis-sellers.com>
Date:   Wed Sep 24 13:37:10 2014 +0100

    Initial work on ethical governor.  Defining basic shape of system.

commit 3098abc438fc8bb389af0c654a7512457da18001
Author: Louise Dennis <louise@dennis-sellers.com>
Date:   Wed Sep 24 13:45:17 2014 +0100

    Copying definition of capability across from sticky wheel.

commit 507ea83da2ecfd72acef4cc928913f0a9f433c97
Author: Louise Dennis <louise@dennis-sellers.com>
Date:   Wed Sep 24 16:42:01 2014 +0100

    Building Action Only language.

commit 911266f5d8e0b57ae89b9aec325e3523a089a726
Author: Louise Dennis <louise@dennis-sellers.com>
Date:   Wed Sep 24 16:55:19 2014 +0100

    Beginning to write a parser.

Thursday 25th September

commit b3aa7a54abc33d14282b55850cf0ab484c29a493
Author: Louise Dennis <louise@dennis-sellers.com>
Date:   Thu Sep 25 11:56:40 2014 +0100

    Finished writing parser and implementing language.  Starting an example and tests.

commit a6bf134efb2a0610ad2701a8fbfff537dd09b77c
Author: Louise Dennis <louise@dennis-sellers.com>
Date:   Thu Sep 25 12:00:37 2014 +0100

    Tweaking some paths (I think)

commit d3ee6c65f16d013567c7b5e99b9d3d6e645f9fa6
Author: Louise Dennis <louise@dennis-sellers.com>
Date:   Thu Sep 25 12:45:24 2014 +0100

    Interim commit while transferring up-to-date Capability code from sticky_wheel.

commit 4e4250acba42d5346bef13b8c2b90c35ec24e286
Merge: d3ee6c6 a6bf134
Author: Louise Dennis <louise@dennis-sellers.com>
Date:   Thu Sep 25 12:46:01 2014 +0100

    Merge branch 'master' into ethical_governor

commit 4e70cbada72b5eda821cdd7bbe433bd74dd9a13d
Author: Louise Dennis <louise@dennis-sellers.com>
Date:   Thu Sep 25 16:45:07 2014 +0100

    Finished implementation.  Now onto some proper testing.

Friday 26th September

commit 03a43193278857cbc57ba4d5b141c38f430bef83
Author: Louise Dennis <louise@dennis-sellers.com>
Date:   Fri Sep 26 11:01:44 2014 +0100

    Simple Action Only Program now works.

commit 5495860ab18c629e6c97a5bcbebd7dd0ec1ecda0
Author: Louise Dennis <louise@dennis-sellers.com>
Date:   Fri Sep 26 17:16:42 2014 +0100

    Implemented an example and Reasoning Cycle for an Ethical Governor but have not yet implemented a parser.

Monday 29th September

Commit Logs
commit fed3e186137d66f9f16d796f06dd4440e74adb2e
Author: Louise Dennis <louise@dennis-sellers.com>
Date:   Mon Sep 29 11:54:50 2014 +0100

    Finished implementation of Ethical Governor.  Now to start testing!

commit 645ce03c6746e2a86178684502eb73cca05d83ad
Author: Louise Dennis <louise@dennis-sellers.com>
Date:   Mon Sep 29 14:44:35 2014 +0100

    Implementation now passes basic tests.

commit 8f81a7c9b888175f21957f5efdbc5be9a90521db
Author: Louise Dennis <louise@dennis-sellers.com>
Date:   Mon Sep 29 16:20:13 2014 +0100

    Implemented goal achievement for Action Only agents.

commit 8e13387f76a7a86435dc9099fdebbeda487d3c7e
Author: Louise Dennis <louise@dennis-sellers.com>
Date:   Mon Sep 29 17:10:14 2014 +0100

    Work in progress to create a richer environment in which the ethical governor will work.

The first half of the day involved completing the initial implementation of the Ethical Governor, where it selected actions for a simple agent (as described in section 3.1 of the TAROS paper) in a simple environment containing two humans, both walking towards the hole.

Once this was completed, attention switched to implementing something closer to the agent described in section 4 of the TAROS paper, cross-referenced against the Python source code.

  • ActionOnly agents were enhanced to include a goal and to stop executing once the goal was achieved.
  • A new example of an ActionOnly agent was introduced which selected actions based on how close they brought the agent to the goal (these were actions moving on a grid; a sketch of this selection rule appears after this list).
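
To make the second point concrete, here is a minimal sketch, in Python rather than the Java/AIL code actually used, of how such an agent might pick the grid move that brings it closest to its goal.  All the names (manhattan, choose_action, the compass moves) are hypothetical and only illustrate the selection rule.

# Illustrative sketch only: the real agent is written in the MCAPL/AIL framework.

def manhattan(a, b):
    """Manhattan distance between two grid cells (x, y)."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def choose_action(position, goal, actions):
    """Pick the action whose resulting cell is closest to the goal.

    `actions` maps an action name to the cell it would move the agent to.
    The agent stops (returns None) once the goal has been achieved.
    """
    if position == goal:
        return None  # goal achieved: stop executing
    return min(actions, key=lambda name: manhattan(actions[name], goal))

# Example: four compass moves from (0, 0) towards a goal at (4, 0).
pos, goal = (0, 0), (4, 0)
moves = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}
actions = {name: (pos[0] + dx, pos[1] + dy) for name, (dx, dy) in moves.items()}
print(choose_action(pos, goal, actions))  # -> east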

Attention then switched to implementing an Ethical Governor to control this agent.  Work started on a new HumanHole Environment, which would model the movement of the robot and humans on a 5×5 grid.

  • Humans would move randomly towards the hole unless the robot collides with them: at each time step there is a 50-50 chance that each human moves towards the hole.
  • Once the humans have started moving, the algorithm can model their actions.
  • The Hole is between the agent and its target.
  • An extension of the Bresenham Algorithm (http://lifc.univ-fcomte.fr/home/~ededu/projects/bresenham/) is to be used to work out the squares that will be covered by the agent when travelling between two points on the grid (a simplified sketch of this, together with the human movement model, appears after this list).
  • The assumption is that the agent can move any distance in a single time step.
  • The implementation is partially complete and untested.
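
Below is a minimal Python sketch of the two mechanisms described above: the 50-50 random movement of the humans and a line walk used to approximate the squares the robot covers when moving between two cells.  It is illustrative only; the actual environment is implemented within the MCAPL framework, and the cited extension of Bresenham's algorithm is a more careful "supercover" traversal that also includes cells the line merely clips.

import random

def bresenham_line(x0, y0, x1, y1):
    """Cells visited by a standard Bresenham line from (x0, y0) to (x1, y1)."""
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        cells.append((x, y))
        if (x, y) == (x1, y1):
            return cells
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy

def step_human(human, hole):
    """With probability 0.5 the human takes one step towards the hole."""
    if random.random() < 0.5:
        (hx, hy), (gx, gy) = human, hole
        if hx != gx:
            hx += 1 if gx > hx else -1
        elif hy != gy:
            hy += 1 if gy > hy else -1
        return (hx, hy)
    return human

# The robot may move any distance in one time step, so the squares it covers
# are approximated by the cells on the line between its start and destination.
print(bresenham_line(0, 0, 4, 2))  # [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)]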

Tuesday 30th September

Commit Logs

commit 5cf9fb456828399d1182b9c30d622cca4886e2fd
Author: Louise Dennis <louise@dennis-sellers.com>
Date:   Tue Sep 30 16:49:45 2014 +0100


    This is a version of the ethical governor example suitable for exporting a model to Prism.
    
    Note that the end states in the AJPF->Prism model get annotated with the properties from the
    previous state even though they have no properties.  This caused some problems in the translation
    which were worked around by adding the options to Choice in a particular order.  At some point
    the bug needs to be fixed properly.


commit e566382db29e685cae7a89ead12ac50319f25051
Author: Louise Dennis <louise@dennis-sellers.com>
Date:   Tue Sep 30 15:26:20 2014 +0100


    More complex Ethical Governor example implemented and verified.  This involved:
    
    - Some extensions to AJPF's scheduler interface to allow for agents which should be ignored by the scheduler.
      The ethical governors are not scheduled but are invoked by other agents taking actions.
    - Some bug fixes to the implementation of ethical governors.  In particular they should attempt to find
      the best action for all actors even if some have priority.  So the outcome evaluation now moves down
      the actor priority list and for each actor rejects some actions.


commit 92d38dfcea999d1c19928eedd64b62188fd7a6bc
Author: Louise Dennis <louise@dennis-sellers.com>
Date:   Tue Sep 30 12:14:50 2014 +0100


    Got side-tracked into removing a redundant flag from the constructors for MCAPLcontroller.

Successfully adapted the Ethical Governor to the new, more detailed scenario.  This involved a fair amount of debugging of the outcome evaluation.  At present, if the second human moves towards the hole in the first step, the agent has put itself in a position where the hole is between it and that human.  It might be an idea to adapt the scenario so this doesn’t happen.
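
A minimal sketch, under assumptions about the data involved, of the prioritised outcome evaluation described in the second commit above: the evaluation walks down the actor priority list and, for each actor, rejects the actions whose modelled outcome is worst for that actor, so that lower-priority actors only break ties among the surviving actions.  The function and variable names here are hypothetical, not those used in the implementation.

def evaluate_outcomes(actions, actors, score):
    """Filter `actions` by walking down `actors` in descending priority order.

    `score(action, actor)` gives the badness of the modelled outcome of
    `action` for `actor` (lower is better).
    """
    candidates = list(actions)
    for actor in actors:
        best = min(score(a, actor) for a in candidates)
        # Reject actions that are strictly worse for this actor; keep the rest
        # so that lower-priority actors can still influence the final choice.
        candidates = [a for a in candidates if score(a, actor) == best]
    return candidates

# Toy example: two humans (in priority order) and then the robot itself.
outcomes = {
    ("move_to_h1", "human1"): 0, ("move_to_h1", "human2"): 1, ("move_to_h1", "robot"): 1,
    ("move_to_h2", "human1"): 1, ("move_to_h2", "human2"): 0, ("move_to_h2", "robot"): 1,
    ("to_goal",    "human1"): 1, ("to_goal",    "human2"): 1, ("to_goal",    "robot"): 0,
}
score = lambda a, actor: outcomes[(a, actor)]
print(evaluate_outcomes(["move_to_h1", "move_to_h2", "to_goal"],
                        ["human1", "human2", "robot"], score))
# -> ['move_to_h1']: protecting the highest-priority human wins.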

Exported the JPF model to Prism and, after some difficulty, got some probabilities out of it.  Note that there are some oddities with the way AJPF annotates end states with properties; these need to be investigated.  I also need to adapt the way the translation handles numbers (to remove decimal points).  The current PRISM property file is:
P=? [F (brobotat42)]
P=? [F (brobothuman1hole)]
P=? [G (!brobothuman2hole)]
P=? [F (brobothuman2hole & brobothuman1hole)]

The output is:

Louises-MacBook-Pro-2:DEG louiseadennis$ ~/Systems/prism-4.1.beta2-osx64/bin/prism ethical_governor.pm reaches_goal.pctl
PRISM
=====


Version: 4.1.beta2
Date: Tue Sep 30 17:01:43 BST 2014
Hostname: Louises-MacBook-Pro-2.local
Command line: prism ethical_governor.pm reaches_goal.pctl


Parsing model file "ethical_governor.pm"...


Parsing properties file "reaches_goal.pctl"...


4 properties:
(1) P=? [ F (brobotat42) ]
(2) P=? [ F (brobothuman1hole) ]
(3) P=? [ G (!brobothuman2hole) ]
(4) P=? [ F (brobothuman2hole&brobothuman1hole) ]


Type:        DTMC
Modules:     jpfModel 
Variables:   state brobotat42 brobothuman1hole brobothuman2hole 


---------------------------------------------------------------------


Model checking: P=? [ F (brobotat42) ]


Building model...


Computing reachable states...


Reachability (BFS): 11 iterations in 0.00 seconds (average 0.000091, setup 0.00)


Time for model construction: 0.041 seconds.


Warning: Deadlocks detected and fixed in 1 states


Type:        DTMC
States:      62 (1 initial)
Transitions: 78


Transition matrix: 238 nodes (3 terminal), 78 minterms, vars: 9r/9c


Prob0: 8 iterations in 0.00 seconds (average 0.000000, setup 0.00)


Prob1: 1 iterations in 0.00 seconds (average 0.000000, setup 0.00)


yes = 62, no = 0, maybe = 0


Value in the initial state: 1.0


Time for model checking: 0.004 seconds.


Result: 1.0 (value in the initial state)


---------------------------------------------------------------------


Model checking: P=? [ F (brobothuman1hole) ]


Prob0: 8 iterations in 0.00 seconds (average 0.000000, setup 0.00)


Prob1: 3 iterations in 0.00 seconds (average 0.000000, setup 0.00)


yes = 5, no = 52, maybe = 5


Computing remaining probabilities...
Engine: Hybrid


Building hybrid MTBDD matrix... [levels=9, nodes=70] [3.3 KB]
Adding explicit sparse matrices... [levels=9, num=1, compact] [0.1 KB]
Creating vector for diagonals... [dist=1, compact] [0.1 KB]
Creating vector for RHS... [dist=2, compact] [0.1 KB]
Allocating iteration vectors... [2 x 0.5 KB]
TOTAL: [4.6 KB]


Starting iterations...


Jacobi: 6 iterations in 0.00 seconds (average 0.000000, setup 0.00)


Value in the initial state: 0.125


Time for model checking: 0.002 seconds.


Result: 0.125 (value in the initial state)


---------------------------------------------------------------------


Model checking: P=? [ G (!brobothuman2hole) ]


Prob0: 8 iterations in 0.00 seconds (average 0.000000, setup 0.00)


Prob1: 4 iterations in 0.00 seconds (average 0.000000, setup 0.00)


yes = 10, no = 34, maybe = 18


Computing remaining probabilities...
Engine: Hybrid


Building hybrid MTBDD matrix... [levels=9, nodes=135] [6.3 KB]
Adding explicit sparse matrices... [levels=9, num=1, compact] [0.2 KB]
Creating vector for diagonals... [dist=1, compact] [0.1 KB]
Creating vector for RHS... [dist=2, compact] [0.1 KB]
Allocating iteration vectors... [2 x 0.5 KB]
TOTAL: [7.8 KB]


Starting iterations...


Jacobi: 10 iterations in 0.00 seconds (average 0.000000, setup 0.00)


Value in the initial state: 0.71875


Time for model checking: 0.002 seconds.


Result: 0.71875 (value in the initial state)


---------------------------------------------------------------------


Model checking: P=? [ F (brobothuman2hole&brobothuman1hole) ]


Prob0: 8 iterations in 0.00 seconds (average 0.000000, setup 0.00)


Prob1: 3 iterations in 0.00 seconds (average 0.000000, setup 0.00)


yes = 2, no = 54, maybe = 6


Computing remaining probabilities...
Engine: Hybrid


Building hybrid MTBDD matrix... [levels=9, nodes=81] [3.8 KB]
Adding explicit sparse matrices... [levels=9, num=1, compact] [0.1 KB]
Creating vector for diagonals... [dist=1, compact] [0.1 KB]
Creating vector for RHS... [dist=2, compact] [0.1 KB]
Allocating iteration vectors... [2 x 0.5 KB]
TOTAL: [5.1 KB]


Starting iterations...


Jacobi: 7 iterations in 0.00 seconds (average 0.000143, setup 0.00)


Value in the initial state: 0.0625


Time for model checking: 0.001 seconds.


Result: 0.0625 (value in the initial state)


---------------------------------------------------------------------


Note: There was 1 warning during computation.

Wednesday 1st October

commit dc0be2c5cc8a54dde41e24c9a3c133750042d57a
Author: Louise Dennis <louise@dennis-sellers.com>
Date:   Wed Oct 1 16:22:13 2014 +0100

    I attempted to enrich the environment for a wider class of behaviours.  In particular
      - HumanHoleGoalEnv.model now adds percepts for whether a human is in danger and whether there
        is a path to the human.  This allows JPF to model check whether the robot acts when it can.
      - I changed EvaluateOutcomes to sum outcomes for one class of actor (rather than pick the worst).
        This means that when the robot can save one human but not both it at least chooses the option
        where one is saved.

I will need to rerun the PRISM checking to see how often the robots now fall in the holes.
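
The effect of the change to EvaluateOutcomes in the commit above can be illustrated with a small sketch using hypothetical outcome scores (higher is worse): aggregating by the worst outcome cannot distinguish an action that saves one human from one that saves neither, whereas summing can.

# Hypothetical scores for two humans under two candidate actions; illustration only.
outcomes = {
    "rescue_human1": {"human1": 0, "human2": 4},  # human1 saved, human2 falls
    "do_nothing":    {"human1": 4, "human2": 4},  # both humans fall
}

def worst(scores):
    return max(scores.values())

def summed(scores):
    return sum(scores.values())

print({a: worst(s) for a, s in outcomes.items()})   # both actions score 4: indistinguishable
print({a: summed(s) for a, s in outcomes.items()})  # 4 vs 8: rescue_human1 is preferred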

Work has also started on the paper at https://www.writelatex.com/read/vqrsjvpzhbqd.  At the moment this consists of a lot of copying and pasting from previous papers.  EDIT: writelatex was subsequently abandoned as a collaboration platform.  The current version is tagged 3pmOct1st.

Thursday 2nd October

Concentrated upon writing the paper further (version tagged Oct2nd).  I’ve decided that we should emphasise both the development of a declarative ethical consequence engine component and the fact that it is verifiable.  It would be really nice to verify the behaviour of this consequence engine independently of the environment and other agents, but this will involve some extra work (giving the consequence engine mental states, for instance).  It might be worth looking into next week if the paper writing is going well.

I also annotated the Python Consequence Engine code supplied by Alan Winfield to help gain a high level understanding of its operation.

Monday 6th October

Revised PRISM results for the changed implementation.

Louises-MacBook-Pro-2:DEG louiseadennis$ ~/Systems/prism-4.1.beta2-osx64/bin/prism  ethical_governor.pm reaches_goal.pctl 
PRISM
=====


Version: 4.1.beta2
Date: Mon Oct 06 12:20:45 BST 2014
Hostname: Louises-MacBook-Pro-2.local
Command line: prism ethical_governor.pm reaches_goal.pctl


Parsing model file "ethical_governor.pm"...


Parsing properties file "reaches_goal.pctl"...


4 properties:
(1) P=? [ F (brobotat42) ]
(2) P=? [ F (brobothuman1hole) ]
(3) P=? [ G (!brobothuman2hole) ]
(4) P=? [ F (brobothuman2hole&brobothuman1hole) ]


Type:        DTMC
Modules:     jpfModel 
Variables:   state brobotat42 brobothuman1hole brobothuman2hole 


---------------------------------------------------------------------


Model checking: P=? [ F (brobotat42) ]


Building model...


Computing reachable states...


Reachability (BFS): 11 iterations in 0.00 seconds (average 0.000000, setup 0.00)


Time for model construction: 0.037 seconds.


Warning: Deadlocks detected and fixed in 1 states


Type:        DTMC
States:      58 (1 initial)
Transitions: 72


Transition matrix: 231 nodes (3 terminal), 72 minterms, vars: 9r/9c


Prob0: 8 iterations in 0.00 seconds (average 0.000000, setup 0.00)


Prob1: 1 iterations in 0.00 seconds (average 0.000000, setup 0.00)


yes = 58, no = 0, maybe = 0


Value in the initial state: 1.0


Time for model checking: 0.004 seconds.


Result: 1.0 (value in the initial state)


---------------------------------------------------------------------


Model checking: P=? [ F (brobothuman1hole) ]


yes = 0, no = 58, maybe = 0


Value in the initial state: 0.0


Time for model checking: 0.0 seconds.


Result: 0.0 (value in the initial state)


---------------------------------------------------------------------


Model checking: P=? [ G (!brobothuman2hole) ]


Prob0: 8 iterations in 0.00 seconds (average 0.000125, setup 0.00)


Prob1: 4 iterations in 0.00 seconds (average 0.000000, setup 0.00)


yes = 10, no = 32, maybe = 16


Computing remaining probabilities...
Engine: Hybrid


Building hybrid MTBDD matrix... [levels=9, nodes=121] [5.7 KB]
Adding explicit sparse matrices... [levels=9, num=1, compact] [0.2 KB]
Creating vector for diagonals... [dist=1, compact] [0.1 KB]
Creating vector for RHS... [dist=2, compact] [0.1 KB]
Allocating iteration vectors... [2 x 0.5 KB]
TOTAL: [7.0 KB]


Starting iterations...


Jacobi: 10 iterations in 0.00 seconds (average 0.000000, setup 0.00)


Value in the initial state: 0.71875


Time for model checking: 0.003 seconds.


Result: 0.71875 (value in the initial state)


---------------------------------------------------------------------


Model checking: P=? [ F (brobothuman2hole&brobothuman1hole) ]


yes = 0, no = 58, maybe = 0


Value in the initial state: 0.0


Time for model checking: 0.001 seconds.


Result: 0.0 (value in the initial state)


---------------------------------------------------------------------


Note: There was 1 warning during computation.

Subsequent Commit Log

commit 187c2cd58bf3984bc6900f041111bf09ad95407a
Author: Louise Dennis <louise@dennis-sellers.com>
Date:   Mon Oct 6 17:11:12 2014 +0100

    Adapted the Ethical Governor framework so that governors (or consequence engines as they
    are being called in the paper) can be verified on their own.  This has involved
      * Adding beliefs to the Ethical Governors about which actions are selected and
        about the outcomes of all actions returned from modelling.
      * Developing a suitable verification environment that runs the ethical governor
        and asserts action outcomes at random.

This has been verified on one property (property 4 in human_hole.psl), taking almost 2 hours.  It needs to be verified on properties 5, 6 and 7, and possibly on equivalent properties for actions 2-4.
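
A minimal sketch of the idea behind that verification environment, with hypothetical names and outcome labels: each time the governor models the world, the environment asserts an arbitrary combination of action outcomes, so the governor's own decision logic can be checked against every possible modelling result.  In the real set-up this is a Java environment and AJPF branches over the choices during model checking; the sketch below simply enumerates them.

import itertools

ACTIONS = ["move_to_h1", "move_to_h2", "to_goal"]
OUTCOMES = ["safe", "human1_falls", "human2_falls"]  # hypothetical labels

def all_modelling_results():
    """Every assignment of a modelled outcome to each available action."""
    for combo in itertools.product(OUTCOMES, repeat=len(ACTIONS)):
        yield dict(zip(ACTIONS, combo))

def governor_choice(model):
    """Stand-in for the governor: prefer an action whose modelled outcome is safe."""
    safe = [a for a in ACTIONS if model[a] == "safe"]
    return safe[0] if safe else ACTIONS[0]

# A property in the spirit of those in human_hole.psl: whenever some action is
# modelled as safe, the governor never selects an unsafe one.
assert all(
    governor_choice(model) in [a for a in ACTIONS if model[a] == "safe"]
    for model in all_modelling_results()
    if any(outcome == "safe" for outcome in model.values())
)
print("property holds over all", len(OUTCOMES) ** len(ACTIONS), "modelling results")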

Tuesday 7th October

Checked theorems 5 & 6.
