Yifan Gong

Northeastern University

Position: Ph.D. Candidate
Rising Stars year of participation: 2024
Bio

Yifan (Evelyn) Gong is a Research Scientist/Engineer at Adobe Research. She received her Ph.D. from the Department of Electrical and Computer Engineering at Northeastern University. Her research vision centers on general artificial intelligence systems that facilitate deep learning deployment on various edge devices and bridge the gap between algorithmic innovations and hardware performance optimizations. Yifan is a recipient of the ML and Systems Rising Star Award, the College of Engineering Outstanding Graduate Student in Teaching Award, and the Dean's Fellowship Award from Northeastern University. She also received the DAC Young Fellow and ICCAD Student Scholar awards. Her research won first place at the DAC Ph.D. Forum, and her work on a sparse training framework for the edge received a Spotlight Paper Award at NeurIPS 2021. Her work on reverse engineering of deceptions was featured in a tutorial session at CVPR 2023.

Areas of Research
  • Computer Systems
Towards Energy-Efficient Deep Learning for Sustainable AI

The rapid advancements in deep learning (DL) and artificial intelligence (AI) have led to transformative applications across various domains, such as community/shared virtual reality experiences and autonomous systems. Edge devices, including mobile and embedded systems, play a vital role in carrying these applications and facilitate the widespread adoption of machine intelligence. However, executing deep neural networks (DNNs) on resource-limited edge devices is difficult due to their demanding computational and storage requirements, particularly when both high energy efficiency and high accuracy are required. Moreover, the emergence of large-scale models for AI-Generated Content (AIGC) intensifies the urgency of addressing these efficiency challenges. To tackle them, my research vision is a general AI system that facilitates DL on various edge devices and bridges the gap between algorithmic innovations and hardware performance optimizations through hardware-software co-design. This work spans energy-efficient deep learning and AI systems as well as the acceleration of deep neural networks, including large-scale models for AIGC.