Andrea Bajcsy
UC Berkeley
abajcsy@berkeley.edu
Bio
Andrea Bajcsy is a Ph.D. candidate in the Electrical Engineering and Computer Science Department at UC Berkeley, advised by Professors Anca Dragan and Claire Tomlin. Her research focuses on bridging safety and machine learning in human-robot interaction. Specifically, she aims to develop introspective robots: robots capable of self-assessing when their learned human models can be trusted, of effective decision-making despite imperfect models and data, and of continually improving their human models. Prior to her Ph.D., she earned her B.S. in Computer Science at the University of Maryland, College Park in 2016. She is a recipient of the NSF Graduate Research Fellowship and the UC Berkeley Chancellor’s Fellowship, and has worked at NVIDIA Research and the Max Planck Institute for Intelligent Systems.
Bridging Safety and Learning in Human-Robot Interaction
From autonomous cars in cities to mobile manipulators at home, I aim to design robots that interact with people. These robots increasingly rely on machine learning, both throughout the design process and during deployment, to build and refine models of humans. However, the widespread use of machine learning in human-robot interaction (HRI) has unearthed a breadth of challenging new safety questions about the accuracy and generalizability of learned human models and the interplay between these models and robot decision-making. Reconciling the need for machine learning with safety concerns in HRI centers on the issue that no model is perfect. By blindly trusting their learned human models, today’s robots can confidently plan unsafe behaviors around people, resulting in anything from miscoordination to dangerous accidents. My research vision is to develop introspective robots: robots capable of self-assessing when their learned human models can be trusted, of effective decision-making despite imperfect models and data, and of continually improving their human models. My approach towards this vision unites traditionally disparate tools from control theory and machine learning with structured human decision-making models to develop theoretically rigorous yet practical robot algorithms. Importantly, core to my approach is consistently grounding and evaluating my methods in robotic hardware experiments with human participants. In my thesis work, I developed online model-confidence monitors that enable even misspecified human models to be safely deployed on robots in multi-agent scenarios, and I bridged robust control theory and online human intent learning to enable novel safety analysis of adaptive human models. Long-term, I aim to formalize notions of robot safety around people that go beyond collision avoidance and to tackle the rising need to thoroughly understand, monitor, and correct the limitations of the large data-driven human models that modern robots rely on.