Yao Rong
Rice University
yao.rong@rice.edu
Bio
Yao Rong is a Postdoctoral Fellow in the Computer Science Department at Rice University. Previously, she obtained her Ph.D. in Computer Science from the Technical University of Munich in 2024. Her research focuses on building actionable Explainable AI (XAI) that helps users understand model behavior, audit AI systems, and determine how to improve models. By integrating principles from human factors and psychology, she aims to advance the way we interact with complex models. Her work has been published in prestigious international machine learning venues such as ICML, IEEE TPAMI, and AAAI. She is a Junior Fellow of the Rice Academy and a recipient of a two-year Rice Fellowship supporting her postdoctoral research. Recently, she was selected as a Future Faculty Fellow at Rice University’s George R. Brown School of Engineering.
Areas of Research
- Artificial Intelligence
Actionable XAI for Understanding, Auditing, and Improving Models
Explainable AI (XAI) holds promise for helping users understand and trust model behavior. In practice, however, explanations alone do not meet users’ expectations: users want not only to understand what a model did, but also to identify issues and explore how the system can be improved. The expectation for XAI is therefore shifting: how can it be systematically used to help humans take informed and effective actions? My research addresses this challenge by developing actionable XAI to support effective human-AI collaboration. One thrust of my work focuses on explanation-based validation, investigating how explanations can be communicated in ways that align with human cognitive processes so that users can better understand and verify model behavior. Another thrust explores explanation-informed audits, examining how explanations can be operationalized to support tasks such as auditing model failures at scale. A third thrust develops explanation-driven improvement, leveraging human feedback on explanations to enhance model performance. Together, these efforts advance XAI from passive interpretation toward active support, empowering users to evaluate, trust, and improve intelligent systems.