Current Research Projects

  • [AI/ML] Reinforcement Learning with Human Feedback

    In this project, we focus on developing new reinforcement learning (RL) algorithms guided by human feedback. We aim to address several key research questions: (i) How can human feedback be effectively leveraged to improve RL algorithms across different learning paradigms, including offline RL, imitation learning, and multi-agent RL? (ii) How can we handle heterogeneous human feedback data that vary in quality and reliability? (iii) How can large language models (LLMs) be used to generate additional feedback, helping to overcome the scarcity of human evaluations?
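    As a concrete illustration of question (i), the sketch below fits a Bradley-Terry reward model to pairwise human preference labels, a standard first step in RLHF-style pipelines. The feature representation, data, and learning rate are hypothetical placeholders rather than the project's actual setup.

```python
# Minimal sketch (not the project's actual pipeline): learning a linear
# Bradley-Terry reward model from pairwise human preference labels.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each trajectory is summarized by a 5-dim feature vector,
# and humans compare pairs of trajectories (label 1 means "A preferred over B").
true_w = np.array([1.0, -0.5, 0.8, 0.0, 0.3])   # unknown "ground-truth" reward weights
feats_a = rng.normal(size=(500, 5))             # features of trajectory A in each pair
feats_b = rng.normal(size=(500, 5))             # features of trajectory B in each pair
p_prefer_a = 1.0 / (1.0 + np.exp(-(feats_a - feats_b) @ true_w))
labels = rng.binomial(1, p_prefer_a)            # simulated (noisy) human choices

# Fit reward weights by maximizing the Bradley-Terry log-likelihood with gradient ascent.
w = np.zeros(5)
lr = 0.1
for _ in range(2000):
    diff = feats_a - feats_b                    # reward-difference features
    p = 1.0 / (1.0 + np.exp(-diff @ w))         # predicted P(A preferred over B)
    grad = diff.T @ (labels - p) / len(labels)  # gradient of the average log-likelihood
    w += lr * grad

print("recovered reward weights:", np.round(w, 2))
```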

  • [Physics-AI] Physics-Informed AI for Ptychography

    Ptychography, a computational imaging technique, has enabled record-breaking resolution when combined with electron microscopy (EM). Despite recent advances in electron ptychography, conventional reconstruction methods remain computationally intensive, often requiring several hours or even days to converge. In this project, we aim to develop a predictive electron ptychography framework that integrates generative AI models with the forward model of conventional electron ptychography. By combining the strengths of AI and physics-based modeling, our approach seeks to produce high-resolution phase images of both thin and thick specimens in a single, real-time forward pass, addressing the high computational cost of existing methods while substantially improving reconstruction quality.
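    For readers unfamiliar with the physics component, the sketch below implements a simplified far-field ptychographic forward model and the kind of data-fidelity term a physics-informed network could be trained against. The probe, scan positions, and test object are invented for illustration and are not the project's actual framework.

```python
# Minimal sketch (illustrative, not the project's actual framework): the far-field
# ptychographic forward model and a data-fidelity loss a physics-informed
# reconstruction could be checked against.
import numpy as np

def forward_model(obj, probe, positions, patch=32):
    """Simulate far-field diffraction intensities for each probe position."""
    intensities = []
    for (r, c) in positions:
        exit_wave = probe * obj[r:r + patch, c:c + patch]    # probe-object interaction
        far_field = np.fft.fftshift(np.fft.fft2(exit_wave))  # propagate to the detector plane
        intensities.append(np.abs(far_field) ** 2)           # detector records intensity only
    return np.stack(intensities)

def data_fidelity(pred_obj, probe, positions, measured):
    """Physics-consistency loss: mismatch between simulated and measured amplitudes."""
    simulated = forward_model(pred_obj, probe, positions)
    return np.mean((np.sqrt(simulated) - np.sqrt(measured)) ** 2)

# Hypothetical test scene: a weak-phase object and a Gaussian probe.
rng = np.random.default_rng(1)
phase = rng.normal(scale=0.1, size=(64, 64))
obj = np.exp(1j * phase)                                     # pure-phase specimen
yy, xx = np.mgrid[-16:16, -16:16]
probe = np.exp(-(xx**2 + yy**2) / (2 * 6.0**2)).astype(complex)
positions = [(r, c) for r in range(0, 32, 8) for c in range(0, 32, 8)]

measured = forward_model(obj, probe, positions)
print("loss at ground truth:", data_fidelity(obj, probe, positions, measured))
print("loss for a flat guess:", data_fidelity(np.ones_like(obj), probe, positions, measured))
```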

  • [Healthcare-AI] Tumor Microenvironment Modeling & Analysis

Past Research Projects

  • AI for Public Health

    Our project is designed to raise health-risk awareness in underrepresented communities. We aim to: (i) build new behavioral models of human health-risk behavior and apply machine learning techniques to learn these models; and (ii) develop new reinforcement learning algorithms to generate effective intervention plans that help improve people's health.
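    As a toy illustration of aim (ii), the sketch below runs value iteration on a small Markov decision process whose states are health-risk levels and whose actions are interventions; all transition probabilities and rewards are invented and do not come from the project.

```python
# Toy sketch (hypothetical numbers, not a model from the project): value iteration
# on a small MDP whose states are health-risk levels and whose actions are
# "no intervention" vs. "intervene" (intervening costs effort but lowers risk).
import numpy as np

states = ["low-risk", "medium-risk", "high-risk"]
actions = ["no-op", "intervene"]

# transition[a][s, s'] = P(next state s' | state s, action a)  (made-up values)
transition = {
    "no-op":     np.array([[0.8, 0.2, 0.0],
                           [0.1, 0.6, 0.3],
                           [0.0, 0.2, 0.8]]),
    "intervene": np.array([[0.95, 0.05, 0.0],
                           [0.5,  0.45, 0.05],
                           [0.1,  0.5,  0.4]]),
}
# reward[a][s]: healthier states are better, interventions carry a small cost.
reward = {"no-op": np.array([2.0, 0.0, -2.0]),
          "intervene": np.array([1.5, -0.5, -2.5])}

gamma = 0.95
V = np.zeros(len(states))
for _ in range(500):   # value iteration to (near) convergence
    Q = np.stack([reward[a] + gamma * transition[a] @ V for a in actions])
    V = Q.max(axis=0)

policy = [actions[i] for i in Q.argmax(axis=0)]
print(dict(zip(states, policy)))
```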

  • Deception in Security Games

    Real-world security domains are often characterized by partial information: uncertainty (particularly on the defender’s part) about actions taken or underlying characteristics of the opposing agent. Experience observed through repeated interaction in such domains provides an opportunity for the defender to learn about the behaviors and characteristics of attacker(s).

    To the extent that the defender relies on data, however, the attacker may choose to modify its behavior to mislead the defender. That is, in a particular interaction the attacker may select an action that does not actually yield the best immediate reward, to avoid revealing sensitive private information. Such deceptive behavior could manipulate the outcome of learning to the long-term benefit of the attacker. This project investigates strategic deception on the part of an attacker with private information.
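    The following toy calculation (with invented payoffs) illustrates the core incentive: an attacker who sacrifices immediate reward during the defender's learning phase can steer the defender's coverage and gain more overall.

```python
# Tiny numerical sketch (invented payoffs, not from the project): why an attacker
# with private valuations may play suboptimally early to mislead a learning defender.

attacker_value = {"A": 5.0, "B": 4.0}   # attacker's private payoff if a target is hit uncovered
T_LEARN, T_EXPLOIT = 5, 20              # rounds of learning vs. exploitation

def total_payoff(learning_target):
    """Attacker's total payoff if it attacks `learning_target` during the learning phase."""
    # Learning phase: assume the defender covers each target with probability 1/2,
    # so an attack succeeds half the time on average.
    learn = T_LEARN * 0.5 * attacker_value[learning_target]
    # Exploitation phase: the defender covers whichever target it observed being attacked,
    # so the attacker switches to the *other* target and always succeeds.
    other = "B" if learning_target == "A" else "A"
    exploit = T_EXPLOIT * attacker_value[other]
    return learn + exploit

print("truthful (attack preferred target A early):", total_payoff("A"))
print("deceptive (feign preference for B early):  ", total_payoff("B"))
```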

  • Security in Data-based Decision Making

    Many real-world problems require Artificial Intelligence (AI) models that include both learning (i.e., training a predictive model from data) and planning (i.e., producing high-quality decisions based on the predictive model). However, such AI models face increasing threats of attacks on the learning component (via exploitation of vulnerabilities in machine learning algorithms), which can ultimately lead to ineffective decisions. This project will study the security of machine learning in a decision-focused multi-agent environment in which agents' goals are to make effective action plans given some learning outcomes.
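    A minimal learn-then-plan sketch is given below, with invented numbers: a linear model is fit to historical data (learning) and its predictions drive an asset-protection choice (planning); poisoning a handful of training labels is enough to change the final decision.

```python
# Toy sketch (hypothetical numbers, not the project's model): a learn-then-plan pipeline
# in which poisoning part of the training labels flips the final decision.
import numpy as np

# Learning step: 1-D linear regression (no intercept) predicting loss from a threat feature.
x = np.arange(1.0, 11.0)   # threat feature of 10 historical incidents
y_clean = 2.0 * x          # clean labels: loss grows with the threat feature

def decide(y):
    """Fit w = argmin ||x*w - y||, then plan: protect the asset with the largest predicted loss."""
    w = np.sum(x * y) / np.sum(x * x)
    asset_features = np.array([1.0, 1.5, -2.0])   # threat features of three candidate assets
    predicted_loss = w * asset_features
    return int(np.argmax(predicted_loss)), round(w, 2)

# Attack on the learning component: flip the labels of the five largest incidents.
y_poisoned = y_clean.copy()
y_poisoned[5:] = -y_clean[5:]

print("clean data    -> protect asset", decide(y_clean))     # picks the truly riskiest asset
print("poisoned data -> protect asset", decide(y_poisoned))  # learning is skewed, plan degrades
```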

  • Information Leakage and Exploration

    In this project, we investigate the strategic behavior of players in exploiting and revealing their private information to influence other players' decision making to their own benefit. Such behavior often arises in games with asymmetric information. We are particularly interested in establishing complexity results (such as NP-hardness) as well as designing efficient game-theoretic algorithms for optimizing players' information-revealing strategic decisions in various classes of games, including security games.
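    As a small, textbook-style example of optimizing an information-revealing strategy (not the project's specific model), the sketch below searches for the signaling scheme that maximizes a sender's utility when the receiver acts only if the posterior probability of a high-risk state crosses a threshold.

```python
# Classic textbook-style sketch (not a model from the project): a sender with private
# information chooses how much to reveal so the receiver's best response favors the sender.
import numpy as np

PRIOR = 0.3        # prior probability the state is "high-risk"
THRESHOLD = 0.5    # receiver acts (e.g., allocates resources) iff posterior >= threshold

def sender_utility(p_reveal_when_low):
    """Sender commits to: always send 'alert' in the high-risk state, and send 'alert'
    with probability p in the low-risk state. The sender gains 1 whenever the receiver acts."""
    p = p_reveal_when_low
    prob_alert = PRIOR + (1 - PRIOR) * p
    posterior_given_alert = PRIOR / prob_alert if prob_alert > 0 else 0.0
    acts_on_alert = posterior_given_alert >= THRESHOLD
    # 'no alert' carries posterior 0 under this scheme, so the receiver never acts on it.
    return prob_alert if acts_on_alert else 0.0

# Grid search over the sender's mixing probability (a stand-in for the harder
# optimization problems studied in the project).
grid = np.linspace(0.0, 1.0, 10001)
best_p = grid[np.argmax([sender_utility(p) for p in grid])]
print("optimal noise level p:", round(best_p, 3))           # close to 3/7
print("sender's value:", round(sender_utility(best_p), 3))  # ~0.6 vs. 0.3 under full disclosure
```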

  • Game Theory for Cybersecurity

    This project focuses on the development of practical game-theoretic solutions for complex, large-scale real-world cybersecurity domains. Cybersecurity problems often involve dynamic stochastic interactions between network security administrators and cybercriminals on a computer network, with exponential action spaces, imperfect information about players' knowledge and actions, and various unforeseen sources of uncertainty. We are interested in applying simulation-based methodologies, particularly empirical game-theoretic analysis, and exploring a variety of parameterized heuristic game-theoretic solutions. This approach allows us to model and analyze complex cybersecurity scenarios.
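    The sketch below walks through the two basic steps of empirical game-theoretic analysis on an invented toy domain: estimate a payoff matrix over heuristic strategies by repeated simulation, then analyze the resulting empirical game. The heuristics, host values, and payoffs are illustrative only.

```python
# Minimal sketch of empirical game-theoretic analysis (EGTA) on an invented toy domain
# (the heuristics, host values, and payoffs are illustrative, not the project's models).
import itertools
import numpy as np

rng = np.random.default_rng(3)
HOST_VALUES = np.array([6.0, 3.0, 2.0, 1.0])   # value of each host to the attacker

# Parameterized heuristic strategies, treated as black boxes by EGTA.
def defender_cover(strategy):
    if strategy == "cover-top-2":
        return {0, 1}
    if strategy == "cover-random-1":
        return {int(rng.integers(0, 4))}
    if strategy == "cover-top-plus-random":
        return {0, int(rng.integers(1, 4))}

def attacker_target(strategy):
    if strategy == "attack-top":
        return 0
    if strategy == "attack-random":
        return int(rng.integers(0, 4))
    if strategy == "avoid-top":
        return int(rng.integers(1, 4))

DEFENDER = ["cover-top-2", "cover-random-1", "cover-top-plus-random"]
ATTACKER = ["attack-top", "attack-random", "avoid-top"]

def simulate(d, a):
    """One noisy episode: the attacker scores the host's value if it is left uncovered."""
    target = attacker_target(a)
    return 0.0 if target in defender_cover(d) else HOST_VALUES[target]

# Step 1: estimate the empirical payoff matrix by repeated simulation.
N = 5000
U = np.array([[np.mean([simulate(d, a) for _ in range(N)]) for a in ATTACKER]
              for d in DEFENDER])   # attacker's expected payoff (zero-sum game)

# Step 2: analyze the empirical game, e.g., look for pure-strategy equilibria.
for i, j in itertools.product(range(len(DEFENDER)), range(len(ATTACKER))):
    attacker_br = U[i, j] >= U[i].max() - 1e-9     # attacker cannot gain by deviating
    defender_br = U[i, j] <= U[:, j].min() + 1e-9  # defender (who pays U) cannot either
    if attacker_br and defender_br:
        print("empirical pure equilibrium:", DEFENDER[i], "vs", ATTACKER[j],
              "value", round(U[i, j], 2))
```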