BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//University of Liverpool Computer Science Seminar System//v2//EN
BEGIN:VEVENT
DTSTAMP:20260410T090218Z
UID:Seminar-DMML-545@lxserverA.csc.liv.ac.uk
ORGANIZER;CN=Danushka Bollegala:MAILTO:Danushka.Bollegala@liverpool.ac.uk
DTSTART:20190603T100000
DTEND:20190603T110000
SUMMARY:Data Mining and Machine Learning Series
DESCRIPTION:Zhengyao Jiang and Shan Luo: Neural Logic Reinforcement Learning\n\nDeep reinforcement learning (DRL) has achieved significant breakthroughs in various tasks. However, most DRL algorithms generalise poorly: learning performance can degrade substantially even under minor modifications of the training environment. In addition, the use of deep neural networks makes the learned policies hard to interpret. To address these two challenges, we propose a novel algorithm named Neural Logic Reinforcement Learning (NLRL), which represents reinforcement learning policies in first-order logic. NLRL builds on policy gradient methods and differentiable inductive logic programming, which have demonstrated significant advantages in interpretability and generalisability on supervised tasks. Extensive experiments on cliff-walking and blocks-manipulation tasks demonstrate that NLRL can induce interpretable policies that achieve near-optimal performance while generalising well to environments with different initial states and problem sizes.\n\nThe talk will be presented at ICML 2019.\n\nhttps://www.csc.liv.ac.uk/research/seminars/abstract.php?id=545
LOCATION:
END:VEVENT
END:VCALENDAR
