Sunnie S. Y. Kim

Princeton University

Position: Ph.D. Candidate
Rising Stars year of participation: 2024
Bio

Sunnie S. Y. Kim is a computer science Ph.D. student at Princeton University advised by Olga Russakovsky. She works on responsible AI: specifically, on improving the transparency, explainability, and fairness of AI systems and helping people develop appropriate understanding of and trust in them. Her research has been published in both AI and HCI venues (e.g., CVPR, ECCV, CHI, FAccT), and she has organized multiple workshops connecting the two communities. She is supported by the NSF Graduate Research Fellowship and has interned at Microsoft Research with the FATE (Fairness, Accountability, Transparency, and Ethics in AI) group. Prior to graduate school, she received a BSc from Yale University and spent a gap year at TTI-Chicago.

Areas of Research
  • Human-Computer Interaction

AI + HCI for Trustworthy and Appropriately Trusted AI

As AI systems increasingly transform our society, it is critical to build systems that are trustworthy and appropriately trusted by users. My research tackles both the technical and human sides of this problem by integrating knowledge and methodologies from AI and HCI.

First, I worked on providing explainability for AI systems, a key pillar of trustworthiness. I developed novel AI explanation methods to help users understand and make informed decisions about AI outputs. Then, to bridge the gap between research and real-world implementation, I conducted an in-depth analysis of current methods and identified factors that limit their practical usefulness. I also interviewed users of a widely deployed AI application and uncovered their explainability needs and their perceptions of current methods, connecting technical research with real users.

Equally important to building trustworthy AI systems is ensuring that these systems are appropriately trusted by users. To this end, I worked on deepening the field's understanding of trust in AI. In a case study with real AI users, I examined multiple aspects of trust in a real-world context and identified human, AI, and context-related factors that influence trust in AI. In parallel, I implemented and evaluated various trust calibration strategies, such as providing explanations and uncertainty information, identifying the conditions under which these strategies do and do not lead to appropriate trust.

Together, my research addresses both technical and human factors in responsible AI development and contributes to both the AI and HCI fields.