Henny Admoni

Yale University

Position: PhD Candidate
Rising Stars year of participation: 2015
Bio

Henny Admoni is a PhD candidate at the Social Robotics Laboratory in the Department of Computer Science at Yale University, where she works with Professor Brian Scassellati. This winter, Henny will begin a position as a Postdoctoral Fellow at the Robotics Institute at Carnegie Mellon University, working with Siddhartha Srinivasa. Henny creates and studies intelligent, autonomous robots that improve people’s lives by providing assistance in social environments like homes and offices. Her dissertation research investigates how robots can recognize and produce nonverbal behaviors, such as eye gaze and pointing, to make human-robot interactions more natural and effective for people. Her interdisciplinary work spans artificial intelligence, robotics, and cognitive psychology. Henny holds an MS in Computer Science from Yale University and a joint BA/MA degree in Computer Science from Wesleyan University. Her scholarship has been recognized with awards including the NSF Graduate Research Fellowship, the Google Anita Borg Memorial Scholarship, and the Palantir Women in Technology Scholarship.

Nonverbal Communication in Human-Robot Interaction

Robotics has already improved lives by taking over dull, dirty, and dangerous jobs, freeing people for safer, more skillful pursuits. For instance, autonomous mechanical arms weld cars in factories, and autonomous vacuum cleaners keep floors clean in millions of homes. However, most currently deployed robots operate largely without human interaction and are incapable of understanding natural human communication. My research focuses on enabling human-robot communication in order to develop social robots that interact with people in natural, effective ways. Application areas include social robots that help elderly users with tasks like preparing meals or getting dressed; manufacturing robots that act as intelligent third hands, improving efficiency and safety for workers; and robot tutors that provide students with personalized lessons to augment their classroom time.

Nonverbal communication, such as gesture and eye gaze, is an integral part of typical human communication. Nonverbal communication happens bidirectionally in an interaction, so social robots must be able to both recognize and generate nonverbal behaviors. These behaviors are highly context-dependent, with different types of behaviors accomplishing different communicative goals, such as directing attention or managing conversational turn-taking. To be effective in the real world, nonverbal behaviors must be produced in real time during dynamic, unstructured interactions.

My research focuses on developing bidirectional, context-aware, real-time nonverbal behaviors for personally assistive robots. Effective nonverbal communication for robots engages a number of disciplines, including autonomous control, machine learning, computer vision, design, and cognitive psychology. My approach to this research is threefold. First, I conduct well-controlled human-robot interaction studies to understand people’s perceptions of robots. Second, I build computational models of nonverbal behavior using data from human-human interactions. Third, I develop robot-agnostic behavior controllers for collaborative human-robot interactions based on my models of human behavior, and test these controllers in real-world human-robot interactions, as sketched below.
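To illustrate what a robot-agnostic behavior controller might look like, here is a minimal Python sketch. Everything in it is a simplifying assumption rather than a description of any published system: the RobotInterface primitives (look_at, point_at), the Target type, and the shared-attention rule that the robot follows a person's gaze and adds a pointing gesture for distant objects are all hypothetical, intended only to show how behavior logic can be decoupled from a particular robot platform.

```python
# Hypothetical sketch: a robot-agnostic shared-attention controller.
# The interface names and the behavior rule are illustrative assumptions.

from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional


@dataclass
class Target:
    """A 3D point of interest in the robot's reference frame."""
    x: float
    y: float
    z: float
    label: str = ""


class RobotInterface(ABC):
    """Hardware-agnostic behavior primitives; each platform implements these."""

    @abstractmethod
    def look_at(self, target: Target) -> None: ...

    @abstractmethod
    def point_at(self, target: Target) -> None: ...


class SharedAttentionController:
    """Responds to a human's gaze by establishing joint attention:
    look where the person looks, and point if the object is far away."""

    POINT_DISTANCE = 1.5  # meters; beyond this, add a pointing gesture

    def __init__(self, robot: RobotInterface):
        self.robot = robot

    def update(self, human_gaze_target: Optional[Target]) -> None:
        """Called once per perception cycle (e.g., 10-30 Hz)."""
        if human_gaze_target is None:
            return  # no reliable gaze estimate this cycle
        self.robot.look_at(human_gaze_target)
        distance = (human_gaze_target.x ** 2 +
                    human_gaze_target.y ** 2 +
                    human_gaze_target.z ** 2) ** 0.5
        if distance > self.POINT_DISTANCE:
            self.robot.point_at(human_gaze_target)


class ConsoleRobot(RobotInterface):
    """Toy implementation that logs actions, for exercising the controller."""

    def look_at(self, target: Target) -> None:
        print(f"[robot] looking at {target.label or (target.x, target.y, target.z)}")

    def point_at(self, target: Target) -> None:
        print(f"[robot] pointing at {target.label or (target.x, target.y, target.z)}")


if __name__ == "__main__":
    controller = SharedAttentionController(ConsoleRobot())
    controller.update(Target(2.0, 0.5, 1.0, label="coffee mug"))
```

Because the controller depends only on the abstract interface, the same shared-attention logic could drive any robot that implements look_at and point_at, which is the essence of keeping behavior controllers robot-agnostic.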