Shalmali Joshi
Harvard University
shalmali@seas.harvard.edu
Bio
I am a Postdoctoral Fellow at the Center for Research on Computation and Society at Harvard University. Previously, I was a Postdoctoral Fellow at the Vector Institute. I received my Ph.D. from the University of Texas at Austin (UT Austin). My research expertise is in developing reliable Machine Learning (ML) methods for healthcare, where I leverage techniques from probabilistic modeling and causal inference for safe, reliable, and robust clinical decision-making. I contribute to the areas of interpretable machine learning, algorithmic fairness, and sequential decision-making within Machine Learning. I also work on characterizing the ethical challenges of deploying Machine Learning and explainability tools in clinical healthcare.
Towards Safe, Robust and Reliable Machine Learning for Healthcare
To build deployment-ready Machine Learning (ML) for high-stakes domains, several technical challenges need to be addressed. These include building models that are robust to distribution shifts, addressing the lack of interpretability, setting better evaluation standards, and ensuring safety from unintended harms. My long-term goal is to address these issues, motivated by healthcare applications, using probabilistic modeling, reinforcement learning, time-series modeling, and causal inference.

ML models are vulnerable to learning spurious correlations, leading to a lack of robustness to distribution shifts. To address this, I have worked on methods that use principles of causal inference to improve robustness to such spurious correlations. Second, while robustness is necessary, it is not sufficient in practice: ensuring that model predictions are appropriately interpreted by end-users is equally critical. I have developed interpretable ML methods for high-dimensional time-series modeling, a setting that is ubiquitous in healthcare but has received less attention than the i.i.d. setting.

There is also a need to reduce the unintended harms of deploying imperfect ML models. Interventions using ML-based policies may result in unintended side effects for out-of-distribution patients that only manifest in the longer term, and over-reliance on ML can make it challenging to recover from such harms. Training ML models to defer to a human expert can reduce these risks. I have developed a learning-to-defer framework that accounts for long-term outcomes using model-based reinforcement learning in non-stationary settings.

Retaining human autonomy is another aspect of the safe deployment of ML methods. In many practical situations, users face unfavorable decisions from algorithms, such as being denied expensive treatments. In such cases, users are entitled to recourse to retain agency, and I have worked on algorithmic methods to generate recourses that help end-users improve their outcomes. Finally, I have proposed that the ethics driving healthcare can help improve our current evaluation and safety standards for interpretable ML-based medical decision-making.