Min Lee
Carnegie Mellon University
mklee@cs.cmu.edu
Bio
Min Kyung Lee is a research scientist in human-computer interaction at the Center for Machine Learning and Health at Carnegie Mellon University. Her research examines the social and decision-making implications of intelligent systems and supports the development of more human-centered machine learning applications. Dr. Lee is a Siebel Scholar and has received several best paper awards, as well as an Allen Newell Award for Research Excellence. Her work has been featured in media outlets such as the New York Times, New Scientist, and CBS. She received a PhD in HCI in 2013 and an MDes in Interaction Design from Carnegie Mellon, and a BS summa cum laude in Industrial Design from KAIST.
Designing human-centered algorithmic technologies
Algorithms are everywhere, acting as intelligent mediators between people and the world around them. Facebook algorithms decide what people see on their news feeds; Uber algorithms assign customers to drivers; robots drive cars on our behalf. Algorithmic intelligence offers opportunities to transform the ways people live and work for the better. Yet the opacity of these systems can introduce bias into the worlds that people access through them, inadvertently present unfair choices, blur accountability, or make the technology seem incomprehensible or untrustworthy.
My research examines the social and decision-making implications of intelligent technologies and facilitates more human-centered design. I study how intelligent technologies change work practices, and devise design principles and interaction techniques that give people appropriate control over intelligent technologies. In the process, I create novel intelligent products that address critical problems in the areas of on-demand work and robotic services.
In the first line of my research, I studied Uber and Lyft ridesharing drivers to understand the impact of algorithms used to manage human workers in on-demand work. The results suggested that workers do not always cooperate with algorithmic management because of the algorithms’ limited assumptions about worker behaviors and the opacity of algorithmic mechanisms. I further examined people’s perceptions of algorithmic decisions through an online experiment, and created design principles around how we can use transparency, anthropomorphization, and visualization to foster trust in algorithmic decisions and help people make better use of them.
In the second line of my research, I studied three service robots deployed in the field over long periods of time: a receptionist robot, a telepresence robot for distributed teams, and an office delivery robot that I helped build from scratch using human-centered design methods. The studies revealed individual and social factors that robots can personalize for in order to be adopted more successfully into the workplace.