Vaidehi Srinivas
Northwestern University
vaidehi@u.northwestern.edu
Bio
Vaidehi Srinivas is a fifth-year Ph.D. student working in theoretical computer science and the foundations of machine learning at Northwestern University, advised by Aravindan Vijayaraghavan. She is interested in providing provable guarantees for machine learning tasks, particularly in beyond worst-case settings like smoothed analysis, algorithms with prediction, and conformal prediction. Before Northwestern, she was a Fulbright visiting student at the University of Vienna and earned her B.S. in Computer Science at Carnegie Mellon University. Her Ph.D. work was supported by the Northwestern Presidential Fellowship.
Areas of Research
- Theoretical Computer Science
New Paradigms for the Theory of Machine Learning
My research designs new algorithms and provides rigorous theoretical guarantees for machine learning methods, with the goal of developing a principled understanding of when these methods work well and why they sometimes fail. There is a large gap between the sophisticated machine learning methods that practitioners develop and the methods that we can analyze in theory. I work toward closing this gap in two ways.
The long-term goal of machine learning theory is to understand why existing heuristics work so well on problems that are intractable in the worst case. My research develops new theoretical insights that allow us to reason about these methods in non-worst-case settings: I develop new techniques for analyzing algorithms for challenging high-dimensional problems on typical instances through smoothed analysis, and apply these techniques to iterative optimization methods.
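Concretely, smoothed analysis measures an algorithm's expected cost on a small random perturbation of a worst-case input, rather than on the worst-case input itself. The display below is the standard Spielman–Teng definition of smoothed complexity, included for orientation; it is textbook notation, not a formula specific to my papers:

```latex
% Smoothed complexity of an algorithm A with running time T_A:
% worst case over (normalized) inputs, expectation over a Gaussian perturbation.
\[
  \mathrm{smoothed}_A(n, \sigma)
    \;=\; \max_{\lVert \bar{x} \rVert \le 1}\;
          \mathbb{E}_{g \sim \mathcal{N}(0,\, \sigma^2 I)}
          \bigl[\, T_A(\bar{x} + g) \,\bigr].
\]
% As \sigma \to 0 this recovers worst-case analysis; for large \sigma it
% approaches average-case analysis, interpolating between the two.
```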
In the meantime, new paradigms treat learned models as inherently unreliable and design principled methods that use them as black boxes. My work develops new methods with provable guarantees in these paradigms. For example, my work on algorithms with predictions and conformal prediction designs new ways to use machine-learned predictions that degrade gracefully with the quality of the model, rather than failing catastrophically (see the sketch below).
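As a minimal illustration of this kind of guarantee, here is a sketch of split conformal prediction for regression. This is the standard textbook recipe, not the specific methods from my papers; the black-box predictor `predict` and the calibration data are placeholders:

```python
import numpy as np

def split_conformal_interval(predict, X_cal, y_cal, X_test, alpha=0.1):
    """Split conformal prediction for regression (standard recipe).

    Wraps any black-box point predictor `predict` to produce intervals
    that contain the true label with probability >= 1 - alpha, assuming
    the calibration and test points are exchangeable.
    """
    # Nonconformity scores: absolute residuals on held-out calibration data.
    scores = np.abs(y_cal - predict(X_cal))
    # Finite-sample-corrected (1 - alpha) quantile of the scores.
    n = len(scores)
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    # The width 2q tracks the model's calibration error: a better model
    # yields tighter intervals, but coverage holds for any model.
    preds = predict(X_test)
    return preds - q, preds + q
```

The coverage guarantee here is distribution-free: an accurate model shrinks the intervals, while a poor one only widens them, which is one concrete sense in which such methods degrade gracefully rather than fail catastrophically.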
