Maya Varma
Stanford University
mvarma2@stanford.edu
Bio
Maya Varma is a PhD student in computer science at Stanford University advised by Prof. Curtis Langlotz and Prof. Akshay Chaudhari. Her research focuses on the development of artificial intelligence methods for addressing healthcare challenges, with a particular emphasis on medical imaging applications. Her work has led to one patent as well as publications in ICLR, Nature Machine Intelligence, ICCV, EMNLP, ACL, and Radiology. She is a recipient of the Knight-Hennessy Fellowship, NDSEG Fellowship, and Quad Fellowship. Previously, Maya obtained a BS in computer science and a minor in electrical engineering from Stanford, graduating with honors and distinction. She was awarded the Kennedy Prize for the best undergraduate thesis in the Stanford School of Engineering.
Areas of Research
- AI for Healthcare and Life Sciences
Towards accurate and reliable artificial intelligence methods for medical image interpretation
Artificial intelligence (AI) holds potential for improving the timely delivery of healthcare services and expanding access to diagnostics on a global scale. My research seeks to develop AI methods capable of (1) providing accurate disease diagnostics and (2) operating reliably across clinically important data subgroups.

First, I will present ViLLA, which improves automated disease diagnosis from chest X-rays (CXRs) by leveraging unstructured, free-form text in radiology reports rather than relying on structured labels. Medical images are often complex, with a large number of diagnostically relevant features that can be difficult for models to learn in the absence of dense labels. Given input images and paired reports, we address this challenge by introducing a novel two-stage training framework that (i) decomposes image-text samples into region-attribute pairs using a self-supervised model and then (ii) trains a contrastive vision-language model on the generated data. Our approach improves zero-shot performance across a range of fine-grained tasks, including CXR disease detection.

I will then present RaVL, which aims to improve the clinical reliability of vision-language models by addressing model failures that result from learned spurious correlations. RaVL (i) identifies the precise image features contributing to prediction errors using a region-level clustering approach and then (ii) mitigates the identified spurious correlation with a novel region-aware loss function. Our results show that RaVL can effectively surface and address spurious correlations; for instance, RaVL discovered that a popular off-the-shelf medical vision-language model likely learned a spurious correlation between the presence of cardiomegaly in CXRs and an unrelated feature: metal clips in clothing. Overall, our methods aim to pave the way toward accurate and reliable AI methods for medical image interpretation.
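To make the contrastive training step concrete, the sketch below shows a generic CLIP-style symmetric InfoNCE objective applied to paired region and attribute embeddings, as in ViLLA's second stage. This is a minimal illustration, not the authors' implementation: it assumes embeddings are already computed, and the function names (`contrastive_loss`, `cosine`) and the temperature value are illustrative.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(region_embs, attr_embs, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of region-attribute pairs.

    The i-th region and i-th attribute are a positive pair; every other
    pairing in the batch serves as a negative. Both directions
    (region-to-attribute and attribute-to-region) are averaged.
    """
    n = len(region_embs)
    # Temperature-scaled similarity matrix: rows are regions, columns attributes.
    sims = [[cosine(r, a) / temperature for a in attr_embs] for r in region_embs]

    def nll(logits, target):
        # Negative log-softmax of the target entry, computed stably.
        m = max(logits)
        log_denom = m + math.log(sum(math.exp(s - m) for s in logits))
        return log_denom - logits[target]

    loss_r2a = sum(nll(sims[i], i) for i in range(n)) / n
    cols = [[sims[i][j] for i in range(n)] for j in range(n)]
    loss_a2r = sum(nll(cols[j], j) for j in range(n)) / n
    return 0.5 * (loss_r2a + loss_a2r)
```

In practice this objective is minimized over mini-batches of the generated region-attribute pairs; when matched pairs are more similar than mismatched ones, the loss is low, which is what pushes region features toward their paired textual attributes.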