Liyue Shen
Stanford University
liyues@stanford.edu
Bio
Liyue Shen is a PhD student in the Electrical Engineering Department at Stanford University, co-advised by Professor John Pauly and Professor Lei Xing. Her research focuses on Medical AI, which spans the interdisciplinary research areas of machine learning, computer vision, and medical imaging, to improve image-guided clinical care and deepen our understanding of human health, with applications in areas such as cancer patient treatment and radiation therapy. Her work has been published in both computer vision conferences (ICCV, CVPR) and medical journals (Nature Biomedical Engineering, IEEE TMI). Prior to her PhD, Liyue received her bachelor's degree in Electronic Engineering from Tsinghua University. She is the recipient of the Stanford Bio-X Bowes Graduate Student Fellowship (2019-2021).
Exploiting Prior Knowledge in Physical World Incorporated with Machine Learning for Solving Medical Imaging Problems
Medical imaging is crucial for clinical patient care. Recently, machine learning has made great progress in natural image processing. However, when it comes to medical imaging problems, how can deep networks cope with the unique challenges of medical images, such as high dimensionality and multiple modalities? To tackle these problems, I develop efficient machine learning algorithms for medical imaging by exploiting prior knowledge from the physical world (exploit what you know) and incorporating it into machine learning models. I present two directions of my research.

First, purely data-driven machine learning methods often suffer from limited generalizability, reliability, and interpretability. By exploiting geometry and physics priors from the imaging system, I proposed physics-aware and geometry-informed deep learning frameworks for radiation-reduced sparse-view CT and accelerated MR imaging. By incorporating geometry and physics priors, the trained deep networks generalize more robustly across patients and offer better interpretability through their intermediate results.

Second, motivated by a unique characteristic of medical images, namely that patients are often scanned serially over time during clinical treatment, so that earlier images provide abundant prior knowledge of a patient's anatomy, I proposed a prior embedding method that encodes the internal information of image priors through coordinate-based neural representation learning. Since this method requires no training data from external subjects, it eases the burden of collecting large-scale datasets in medical AI and can be readily generalized across different imaging modalities and anatomies. Building on this, I developed a novel algorithm of temporal neural representation learning for longitudinal studies.

Combining both physics priors and image priors, I showed that the proposed algorithm can successfully capture subtle yet significant structural changes, such as tumor progression, in sparse-sampling image reconstruction, which can be applied to real-world applications such as cancer patient treatment and radiation therapy.
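The core idea behind coordinate-based neural representation, fitting a continuous function from spatial coordinates to image values using only one subject's own data, can be illustrated with a toy sketch. This is not the author's actual method: as an assumption for illustration, a linear model over Fourier features of a 1D coordinate stands in for the neural network, and a synthetic 1D signal stands in for a medical image.

```python
import numpy as np

# Toy stand-in for a "prior" scan: a 1D signal sampled on a dense
# coordinate grid (frequencies chosen arbitrarily for illustration).
coords = np.linspace(0.0, 1.0, 200)
signal = np.sin(2 * np.pi * 3 * coords) + 0.5 * np.cos(2 * np.pi * 7 * coords)

def fourier_features(x, n_freqs=16):
    # Encode each scalar coordinate with sines/cosines at several
    # frequencies, plus a constant term (a common coordinate encoding).
    freqs = np.arange(1, n_freqs + 1)
    return np.concatenate(
        [np.sin(2 * np.pi * freqs * x[:, None]),
         np.cos(2 * np.pi * freqs * x[:, None]),
         np.ones((x.shape[0], 1))], axis=1)

# Fit the representation by least squares: it "memorizes" this one
# subject's signal, with no training data from external subjects.
A = fourier_features(coords)
w, *_ = np.linalg.lstsq(A, signal, rcond=None)

# Query the continuous representation at new, unseen coordinates.
rng = np.random.default_rng(0)
new_coords = rng.uniform(0.0, 1.0, 50)
pred = fourier_features(new_coords) @ w
truth = (np.sin(2 * np.pi * 3 * new_coords)
         + 0.5 * np.cos(2 * np.pi * 7 * new_coords))
print(float(np.max(np.abs(pred - truth))))  # reconstruction error at new points
```

Once fitted, the representation can be queried at arbitrary coordinates, which is what makes such priors useful for sparse-sampling reconstruction, where measurements cover only part of the image domain.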