BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//University of Liverpool Computer Science Seminar System//v2//EN
BEGIN:VEVENT
DTSTAMP:20260408T090729Z
UID:Seminar-dept-1310@lxserverA.csc.liv.ac.uk
ORGANIZER;CN=Lutz Oettershagen:MAILTO:Lutz.Oettershagen@liverpool.ac.uk
DTSTART:20251212T130000
DTEND:20251212T140000
SUMMARY:School Seminar Series
DESCRIPTION:Taylor T. Johnson: Neural Network Verification for Formally Verifying Neuro-Symbolic Artificial Intelligence (AI)\n\nThe ongoing renaissance in artificial intelligence (AI) has led to the advent of data-driven machine learning (ML) methods deployed within components for sensing, perception, actuation, and control in safety-critical cyber-physical systems (CPS). While such learning-enabled components (LECs) are enabling autonomy in systems such as autonomous vehicles and robots, ensuring that LECs operate reliably in all scenarios is extraordinarily challenging, as demonstrated in part by recent accidents in semi-autonomous/autonomous CPS and by adversarial ML attacks. We will discuss formal methods for assuring specifications---mostly robustness and safety---in autonomous CPS and their subcomponents using our NNV software tool (https://github.com/verivital/nnv), developed partly in the DARPA Assured Autonomy and Assured Neuro Symbolic Learning and Reasoning (ANSR) projects. These methods have been evaluated in CPS development with multiple industry partners in the automotive, aerospace, and robotics domains, and allow for formally analyzing neural networks and their usage in closed-loop systems. These methods are also enabling verification for the third wave of AI systems, namely neuro-symbolic systems. We will describe classes of neuro-symbolic models we have been developing, such as neuro-symbolic automata and behavior trees. Neuro-symbolic behavior trees (NSBTs) are behavior trees (a form of hierarchical state machine becoming widely used in robotics) that execute neural networks, and for which we have been developing verification methods implemented in a tool called BehaVerify (https://github.com/verivital/behaverify).\n\n
 We will cover ongoing community activities we help organize, such as the Verification of Neural Networks Competition (VNN-COMP), held with the International Conference on Computer-Aided Verification (CAV) and the Symposium on AI Verification (SAIV) over the past few years, as well as the AI and Neural Network Control Systems (AINNCS) category of the hybrid systems verification competition (ARCH-COMP), also held over the past few years. We will conclude with a discussion of future directions in the broader safe and trustworthy AI domain, such as ongoing projects verifying neural networks used in medical imaging analysis and malware classifiers.\n\nhttps://www.csc.liv.ac.uk/research/seminars/abstract.php?id=1310
LOCATION:Ashton Lecture Theatre
END:VEVENT
END:VCALENDAR
