BCTCS 2021

Speaker

Jay Morgan

I am a 3rd-year PhD candidate at Swansea University. Throughout my candidature, I have been researching methods to make ML more 'trustworthy' for its use in present-day society, in which much of our lives is increasingly automated by this technology. Throughout this research I have been working in interdisciplinary settings with colleagues from domains ranging from Quantum Chemistry and Corpus Linguistics to Astrophysics. In these ventures, we have outlined key principles on how one may integrate prior expert knowledge into DL models, both to improve their performance and to verify that their output matches what the expert expects. While my work covers many facets of ML, one method that may appeal to the Theoretical community is verifying the existence of so-called adversarial examples: very small, perhaps imperceptible, changes to the input of DL models that result in large changes to the output space and cause unexpected misclassifications.

Overview