Trisha Mittal
University of Maryland, College Park
trisha@umd.edu
Bio
Trisha Mittal is a fourth-year Ph.D. candidate at the University of Maryland, College Park, advised by Prof. Dinesh Manocha. She received a master’s degree in computer science from the University of Maryland (2020) and a bachelor’s and master’s degree in information technology from the International Institute of Information Technology, Bangalore (2018). Her research focuses on Affective Computing, which involves building systems that can understand, interpret, and respond to human emotions. Her work has resulted in better perception models for human emotion and has been applied in various AI domains. She has interned at research labs including Adobe Research, San Jose (Summers ’20, ’21), the Max Planck Institute (Spring ’18), and the Indian Institute of Science, Bangalore (Summers ’16, ’17). She enjoys teaching and mentoring students through initiatives like AI4ALL and Girls Who Code. She actively participates in diversity and inclusion initiatives in computing and has received funding to attend the Grace Hopper Celebration and the CRA Grad Cohort workshop.
Towards Holistic Emotion Perception with Applications in Social Media Analysis, Accessibility, and Multimedia Recommendation
My research focus is Affective Computing, where the goal is to develop Artificial Intelligence (AI) systems that can understand, interpret, and respond to human emotions and behavior. Humans are socially aware creatures who can understand and perceive the emotions of other humans. This ability to understand each other’s emotions enriches our daily interactions with people. The broad goal is to make human-machine interactions feel as natural as human-human interactions by endowing machines with this same emotional understanding. This is becoming increasingly important given the growing number of ways in which humans interact with machines, via phones, computers, and smart appliances. The core thesis of my research is that emotion perception and recognition depend on a wide range of causal factors, not just facial expressions, as much of the current literature assumes. My research therefore focuses on holistic emotion perception and recognition, using not only visual cues like facial expressions but also integrating concepts from psychology such as context and multi-modality. I tackle this research problem in two parts. In the first part, I propose new deep learning-based algorithms for improved emotion perception using multimodal cues and contextual information (AAAI ’20, CVPR ’20). In the second part, I explore the application of these algorithms to enrich different AI application domains, such as detecting manipulations in videos (ACM MM ’20), better analyzing multimedia content (CVPR ’21), creating intervention systems to address emotion contagion on social media, and developing accessible video conferencing applications.