BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//University of Liverpool Computer Science Seminar System//v2//EN
BEGIN:VEVENT
DTSTAMP:20260408T090815Z
UID:Seminar-dept-1309@lxserverA.csc.liv.ac.uk
ORGANIZER;CN=Lutz Oettershagen:MAILTO:Lutz.Oettershagen@liverpool.ac.uk
DTSTART:20251210T130000
DTEND:20251210T140000
SUMMARY:School Seminar Series
DESCRIPTION:Youcheng Sun: What LLMs Reveal and What They Believe\n\nHow do outputs leak inputs\, and how does RAG get misled? Modern language models do not just generate text\; under common settings they can also reveal it. The talk first explains how exposing model outputs enables exact reconstruction of the original input. This can aid debugging\, for example by helping identify hidden backdoor triggers\, yet it can also recover sensitive personal information (such as passwords and ID numbers) using only what the model returns. Turning to the input side\, the talk examines what models “believe” in retrieval‑augmented generation (RAG): how a single adversarially phrased document can hijack a pipeline\, and how a fast graph‑based reranker restores consensus by rewarding mutually consistent sources and down‑weighting query‑echo outliers. Taken together\, the talk aims to enable more informed discussions about when and how to trust LLMs.\n\nhttps://www.csc.liv.ac.uk/research/seminars/abstract.php?id=1309
LOCATION:ELEC201\, 2nd Floor Lecture Theatre EEE
END:VEVENT
END:VCALENDAR
