Department Seminar Series

Towards a Science of Security Games: Key Algorithmic Principles, Deployed Applications and Research Challenges

6th November 2015, 11:00, E4 (EEE, 2nd floor)
Prof Milind Tambe
Computer Science & Industrial and Systems Engineering Departments
University of Southern California
USA

Abstract

Security is a critical concern around the world, whether it is the challenge of protecting ports, airports and other critical infrastructure, interdicting the illegal flow of drugs, weapons and money, protecting endangered wildlife, forests and fisheries, suppressing urban crime or ensuring security in cyberspace. Unfortunately, limited security resources prevent full coverage at all times; instead, we must optimize the use of these limited resources. To that end, we founded the "security games" framework, which has led to the building of decision aids for security agencies. Security games is a novel area of research based on computational and behavioral game theory that also incorporates elements of AI planning under uncertainty and machine learning. Today, security-games-based decision aids for infrastructure security are deployed in the US and internationally; examples include deployments for port and ferry security with the US Coast Guard, for air traffic security with the US Federal Air Marshals, and for the security of university campuses, airports and metro trains with police agencies in the US and other countries. Moreover, recent work on "green security games" has led to our decision aids being deployed to assist NGOs in the protection of wildlife, and "opportunistic crime security games" have focused on suppressing urban crime. Recently, our security-game-based startup, ARMORWAY, has been further enabling the deployment of game-theoretic security resource optimization. I will discuss our use-inspired research in security games that is leading to a new science of security games, including algorithms for scaling up security games, handling significant adversarial uncertainty and learning models of human adversary behaviors.
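For readers unfamiliar with the model, the basic setting is a Stackelberg security game: the defender commits to a randomized coverage of a set of targets, and an attacker who observes that strategy best-responds. The sketch below is only illustrative; it uses invented payoff numbers and the standard multiple-LP formulation from the game theory literature, not the speaker's deployed systems.

# Minimal Stackelberg security game sketch (illustrative payoffs, not the
# deployed decision aids). Defender spreads limited coverage over targets;
# the attacker picks the target with the highest expected utility.
import numpy as np
from scipy.optimize import linprog

# Rd/Pd: defender reward/penalty if the attacked target is covered/uncovered.
# Ra/Pa: attacker reward/penalty likewise. Values are made up for the example.
Rd = np.array([ 1.0,  2.0,  1.5,  1.0])
Pd = np.array([-5.0, -4.0, -3.0, -2.0])
Ra = np.array([ 5.0,  4.0,  3.0,  2.0])
Pa = np.array([-1.0, -1.0, -0.5, -0.5])
resources = 1.5          # total coverage probability the defender can allocate
n = len(Rd)

best = (-np.inf, None, None)
for t_star in range(n):  # enumerate which target the attacker attacks
    # Maximize defender utility at t_star: Pd + c*(Rd-Pd) -> minimize -c*(Rd-Pd)
    obj = np.zeros(n)
    obj[t_star] = -(Rd[t_star] - Pd[t_star])

    # Attacker must weakly prefer t_star over every other target t
    A_ub, b_ub = [], []
    for t in range(n):
        if t == t_star:
            continue
        row = np.zeros(n)
        row[t] = Pa[t] - Ra[t]
        row[t_star] = -(Pa[t_star] - Ra[t_star])
        A_ub.append(row)
        b_ub.append(Ra[t_star] - Ra[t])
    # Coverage budget: total coverage probability cannot exceed resources
    A_ub.append(np.ones(n))
    b_ub.append(resources)

    res = linprog(obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    if res.success:
        c = res.x
        def_util = Pd[t_star] + c[t_star] * (Rd[t_star] - Pd[t_star])
        if def_util > best[0]:
            best = (def_util, t_star, c)

util, attacked, coverage = best
print("optimal coverage:", np.round(coverage, 3))
print("attacked target:", attacked, "defender utility:", round(util, 3))

This enumeration of one linear program per attacked target scales poorly in the size of the game, which is exactly why the scale-up algorithms mentioned in the abstract are needed in real deployments.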

Joint work with a number of current and former PhD students and postdocs, all listed at teamcore.usc.edu/security