About Me

I am a Postdoctoral Research Associate at the Lucy Family Institute for Data & Society at the University of Notre Dame. I specialize in Responsible AI, with a focus on integrating interpretability, safety, fairness, and robustness into both traditional machine learning and modern large language models. I aim to build transparent and accountable AI systems that advance trustworthy and responsible AI for real-world applications.

Research Interests

My research focuses on the following key areas:

  • Explainable Artificial Intelligence (XAI): Developing principled methods to make AI models more interpretable and transparent
  • Trustworthy AI: Advancing explainability, safety, fairness, and robustness in AI systems
  • Large Language Models and Generative AI: Investigating the interpretability and reasoning capabilities of modern LLMs
  • AI Agents for Healthcare and Society: Building AI systems that promote accessible and equitable healthcare delivery

Background

I earned my Ph.D. and M.S. in Computer Science from Wayne State University (2015-2022), where my dissertation focused on “Interpretable Machine Learning and Applications” under the supervision of Prof. Dongxiao Zhu. I received my B.S. in Financial Mathematics from Southern University of Science and Technology (SUSTech) in Shenzhen, China (2011-2015).

Following my doctorate, I worked as an Applied Scientist at Ant Group in Shanghai (2022-2023), where I contributed to developing the Ant Model Risk Evaluation system for assessing the robustness, fairness, and explainability of machine learning models. I also designed large-scale security solutions for the Alipay ecosystem, including anti-cheating algorithms and risk detection systems serving hundreds of millions of users.

Current Research Projects

I am currently engaged in several ongoing research initiatives:

Principled LLM Reasoning Alignment with Explainable AI Techniques: Utilizing principled and theory-grounded explainable AI methods to align the verbal reasoning of large language models (LLMs), enhancing the fidelity of explanations and mitigating hallucinations.

AI for Accessible and Equitable Healthcare: Collaborating with Hospital Infantil de México Federico Gómez (HIMFG) to digitize medical records and integrate Social Determinants of Health (SDoH) data, leveraging agentic AI to promote accessible and equitable healthcare delivery.

Additionally, I collaborate with the IBM Technology Ethics Lab on responsible AI initiatives centered on explainability, fairness, and transparency in modern LLMs.

Recent Updates

  • IJCAI-25 Paper Accepted: Our paper “Fast Explanations via Policy Gradient-Optimized Explainer” has been accepted to the 34th International Joint Conference on Artificial Intelligence (IJCAI-25)!
  • New Preprint: Our paper “Context Attribution with Multi-Armed Bandit Optimization” is available on arXiv:2506.19977 and is under review at ICLR 2026.
  • Teaching Excellence: Received the Striving for Excellence in College and University Teaching Certificate from the University of Notre Dame in 2025.