My work in the field of Artificial Intelligence (AI) is motivated by real-world societal problems, particularly in the areas of Public Safety and Security (e.g., urban crime prevention and counterterrorism), Cybersecurity (e.g., the protection of network data from stealthy botnets), Sustainability (e.g., wildlife and fish protection), and Public Health (e.g., diabetes prevention). I aim to bridge the gap between theory and application in AI by providing practical, computational, AI-based solutions to these problems. The solutions I have developed employ techniques drawn not only from AI's various subfields, including Multi-Agent Systems, Game Theory, Machine Learning, and Optimization, but also from fields outside of AI, such as Psychology and Conservation Biology. My work is being used successfully in the real world: my models and algorithms have been incorporated into the wildlife-protection application PAWS (Protection Assistant for Wildlife Security), which has been used extensively by NGOs such as Panthera and the Wildlife Conservation Society in conservation areas in Malaysia and Uganda.
Our project is designed to raise health-risk awareness in underrepresented communities. We aim to: (i) build new behavioral models of human health-risk behavior and apply machine learning techniques to learn these models from data; and (ii) develop new reinforcement learning algorithms to generate effective intervention plans that can help improve people's health.
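As a minimal sketch of point (ii), assuming a toy setting, the example below frames intervention planning as a small Markov decision process and applies tabular Q-learning. The risk states, intervention actions, transition probabilities, and rewards are hypothetical placeholders, not the project's actual behavioral models.

```python
import numpy as np

# Hypothetical toy MDP: health-risk states and intervention actions.
N_STATES = 3    # 0 = low risk, 1 = elevated risk, 2 = high risk
N_ACTIONS = 2   # 0 = no intervention, 1 = deliver an intervention

rng = np.random.default_rng(0)

def step(state, action):
    """Simulated dynamics (illustrative only): intervening raises the chance
    of moving to a lower-risk state; doing nothing drifts toward higher risk."""
    p_improve = 0.6 if action == 1 else 0.2
    if rng.random() < p_improve:
        next_state = max(state - 1, 0)
    else:
        next_state = min(state + 1, N_STATES - 1)
    # Healthier states yield higher reward; interventions carry a small cost.
    reward = -next_state - 0.1 * action
    return next_state, reward

# Tabular Q-learning with an epsilon-greedy exploration policy.
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, epsilon = 0.1, 0.95, 0.1
state = 2
for _ in range(20000):
    if rng.random() < epsilon:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

# The greedy policy is the learned per-state intervention plan.
print("intervention plan by risk state:", np.argmax(Q, axis=1))
```

In the full project, the hand-coded simulator above would be replaced by the behavioral models learned in point (i), so that planning is driven by data rather than assumed dynamics.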
Real-world security domains are often characterized by partial information: uncertainty (particularly on the defender’s part) about actions taken or underlying characteristics of the opposing agent. Experience observed through repeated interaction in such domains provides an opportunity for the defender to learn about the behaviors and characteristics of attacker(s).
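One concrete form such learning can take, sketched here under simplifying assumptions rather than as the project's exact method, is fitting a quantal-response model of attacker behavior: given per-target attacker utilities, the attacker's rationality parameter can be estimated by maximum likelihood from observed attack frequencies.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical per-target attacker utilities and attack counts observed
# over repeated interactions (illustrative numbers only).
utilities = np.array([4.0, 2.5, 1.0, 3.0])
attack_counts = np.array([60, 15, 5, 20])

def neg_log_likelihood(lam):
    """Quantal response: attack probability on target i is proportional to
    exp(lam * utility_i); lam = 0 is uniform play, large lam is best response."""
    logits = lam * utilities
    log_probs = logits - np.log(np.exp(logits).sum())
    return -(attack_counts * log_probs).sum()

res = minimize_scalar(neg_log_likelihood, bounds=(0.0, 10.0), method="bounded")
print(f"estimated rationality parameter lambda = {res.x:.3f}")
```

The estimated parameter summarizes how close the observed attacker is to a perfectly rational best responder, and can then inform the defender's strategy computation.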
To the extent that the defender relies on data, however, the attacker may choose to modify its behavior to mislead the defender. That is, in a particular interaction the attacker may select an action that does not actually yield the best immediate reward, to avoid revealing sensitive private information. Such deceptive behavior could manipulate the outcome of learning to the long-term benefit of the attacker. This project investigates strategic deception on the part of an attacker with private information.
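The toy two-target example below, with hypothetical payoffs and round counts, illustrates why such deception can pay: by feigning a preference for the less-valued target while being observed, the attacker induces the defender to misallocate coverage, then exploits the unprotected preferred target.

```python
import numpy as np

# Attacker value for each target when it is uncovered / covered
# (all numbers hypothetical; the attacker truly prefers target 0).
value_uncovered = np.array([5.0, 3.0])
value_covered = np.array([1.0, 0.5])

def attacker_payoff(target, covered_target):
    return value_covered[target] if target == covered_target else value_uncovered[target]

LEARN, EXPLOIT = 10, 40  # rounds the defender observes, then commits

def total_payoff(learning_target):
    """Attacker plays `learning_target` while observed (coverage during the
    learning phase is ignored for simplicity), then best-responds to the
    coverage the defender infers from those observations."""
    observed = attacker_payoff(learning_target, covered_target=None) * LEARN
    covered = learning_target  # defender covers the target it saw attacked
    best_later = max(attacker_payoff(t, covered) for t in (0, 1))
    return observed + best_later * EXPLOIT

honest = total_payoff(0)      # myopically attack the preferred target
deceptive = total_payoff(1)   # feign a preference for target 1
print(f"honest play:    {honest:.1f}")
print(f"deceptive play: {deceptive:.1f} (deception pays: {deceptive > honest})")
```

Here deception sacrifices 2 units of reward in each of the 10 observed rounds to gain 2 units in each of the 40 exploitation rounds: exactly the long-term manipulation of learning outcomes the project studies.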
Many real-world problems require the creation of Artificial Intelligence (AI) models that include both learning (i.e., training a predictive model from data) and planning (i.e., producing high-quality decisions based on the learned model). However, such AI models face increasing threats from attacks on the learning component (via the exploitation of vulnerabilities in machine learning algorithms), which can ultimately result in ineffective decisions. This project will study the security of machine learning in a decision-focused multi-agent environment in which agents' goals are to make effective action plans given the outcomes of learning.
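A minimal sketch of this threat, on assumed toy data: poisoning the training set of a least-squares risk predictor flips the downstream allocation decision, even though the planning component itself is never touched.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: predict per-site risk from one feature, then the
# planner protects the single site with the highest predicted risk.
X = rng.normal(size=(50, 1))
y = X @ np.array([2.0]) + rng.normal(scale=0.1, size=50)  # true weight = 2

def fit_and_decide(X, y, site_features):
    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # learning component
    predicted_risk = site_features @ w          # input to planning
    return int(np.argmax(predicted_risk))       # planning component

sites = np.array([[1.0], [0.8]])  # site 0 is genuinely riskier

# Poisoning attack on the learning component: inject a few crafted points
# whose labels contradict the true feature-risk relationship.
X_poisoned = np.vstack([X, np.ones((5, 1))])
y_poisoned = np.concatenate([y, -100.0 * np.ones(5)])

print("decision on clean data:    protect site", fit_and_decide(X, y, sites))
print("decision on poisoned data: protect site",
      fit_and_decide(X_poisoned, y_poisoned, sites))
```

Five injected points among fifty clean ones suffice to reverse the sign of the learned weight and hence the plan, illustrating why security here must be assessed with respect to the decisions the learner feeds, not prediction accuracy alone.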
This project focuses on the development of practical game-theoretic solutions for complex, large-scale, real-world cybersecurity domains. Cybersecurity problems often involve dynamic stochastic interactions between network security administrators and cybercriminals on a computer network, with exponentially large action spaces, imperfect information about players' knowledge and actions, and various unforeseen uncertainties. We are interested in applying simulation-based methodologies, particularly empirical game-theoretic analysis, and exploring a variety of parameterized heuristic game-theoretic solutions. This approach allows us to model and analyze complex cybersecurity scenarios.
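A minimal sketch of that workflow, with a placeholder payoff simulator and made-up heuristic strategies standing in for a real network-security simulation: estimate an empirical payoff matrix over a small strategy set by repeated simulation, then solve the empirical game (treated as zero-sum for simplicity) with multiplicative-weights dynamics.

```python
import numpy as np

rng = np.random.default_rng(2)

DEF_STRATS = ["patch-critical", "monitor-all", "honeypot"]   # hypothetical
ATT_STRATS = ["spread", "target-critical", "stealth"]        # hypothetical

def simulate(d, a, n=500):
    """Placeholder simulator: mean defender payoff over n noisy episodes for
    strategy profile (d, a); a real study would run the network simulation here."""
    base = np.array([[ 1.0, -0.5,  0.2],
                     [ 0.4,  0.6, -0.3],
                     [-0.2,  0.8,  0.5]])  # illustrative ground-truth means
    return (base[d, a] + rng.normal(scale=0.5, size=n)).mean()

# Step 1: build the empirical game by simulating every strategy profile.
payoffs = np.array([[simulate(d, a) for a in range(3)] for d in range(3)])

# Step 2: solve it with multiplicative weights; for zero-sum games the
# time-averaged mixtures converge to an approximate equilibrium.
p, q = np.ones(3) / 3, np.ones(3) / 3
p_avg, q_avg, T, eta = np.zeros(3), np.zeros(3), 2000, 0.05
for _ in range(T):
    p_avg, q_avg = p_avg + p, q_avg + q
    p = p * np.exp(eta * (payoffs @ q)); p /= p.sum()
    q = q * np.exp(eta * (-payoffs.T @ p)); q /= q.sum()

for name, prob in zip(DEF_STRATS, p_avg / T):
    print(f"defender plays {name}: {prob:.2f}")
for name, prob in zip(ATT_STRATS, q_avg / T):
    print(f"attacker plays {name}: {prob:.2f}")
```

In practice this loop is often iterative: equilibrium mixtures of the empirical game suggest where the heuristic strategy space should be extended, and newly added strategies trigger further rounds of simulation.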