A. Feder Cooper

Cornell University

Position: PhD Candidate
Rising Stars year of participation: 2021
Bio

A. Feder Cooper is a PhD student in the Department of Computer Science at Cornell University and is very fortunate to be advised by Chris De Sa. Her work focuses on building accountable, scalable machine learning systems with theoretical guarantees. She approaches this topic from three directions: developing tools for reasoning about the correctness and speed of ML algorithms; using those tools to inform the building of deployable ML systems; and collaborating with social scientists to clarify appropriate abstractions for enabling policymakers to hold ML systems accountable in practice. She has published work at top computing conferences, including NeurIPS, and at interdisciplinary venues such as AIES and EAAMO. She is also a member of Cornell's MacArthur-Foundation-funded Artificial Intelligence Policy and Practice initiative and a Digital Life Doctoral Fellow at Cornell Tech in New York City.

(MC)^3: An Empirical Argument for Massively Capable MCMC

Bayesian inference is a popular technique for probabilistic modeling. Computing the posterior distribution directly is usually intractable in practice, so we instead approximate it using methods like Markov chain Monte Carlo (MCMC). While MCMC is guaranteed to converge asymptotically to the true posterior, it is only effective on small tasks; it does not scale to large datasets. MCMC has therefore had limited utility in practice, as it is not well-suited to the scale of modern data science problems, such as modeling gene expression and geophysical phenomena. Researchers have attempted to meet the demands of modern inference problems by developing scalable alternatives to standard MCMC. These methods use various approximation techniques, such as minibatching. However, these methods often do not maintain the guarantee of convergence to the true distribution: they sacrifice exactness. In other words, inexact MCMC methods trade off reliability in favor of scalability. In recent work, we showed that this trade-off is problematic: inexact methods can exhibit arbitrarily large errors in inference. This finding suggests that inexact methods are unsafe to use in high-stakes domains; we should be using exact MCMC methods for problems that require fidelity. In the past, such exact methods have had limited practical utility, often suffering from the same scalability issues that limit traditional MCMC. But this is no longer the case. In prior work, we put forth algorithms for exact, scalable MCMC, both minibatch-based and stochastic-gradient-based techniques, and extended the frontier of what has generally been thought possible for scaling exact MCMC. In this work, we complement these theoretical findings with an empirical argument for practically useful, massively capable MCMC, or (MC)^3. Using a variety of real-world tasks, from transportation to computational biology, we make the case that now is the time for a renaissance for MCMC in Bayesian inference.
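
To make the exactness trade-off concrete, below is a minimal Python sketch (an illustration under assumed toy settings, not the method from the work described above): a standard random-walk Metropolis-Hastings sampler whose acceptance test evaluates the full-data likelihood, alongside a naive minibatch variant that rescales a subsampled log-likelihood by N/b. The minibatch version is cheaper per step but no longer leaves the true posterior invariant, which is the kind of inexactness the abstract warns about. The model, names, and parameters here are illustrative.

import numpy as np

rng = np.random.default_rng(0)

# Toy problem (illustrative assumption): infer the mean of a Gaussian with
# known unit variance under an N(0, 1) prior. The exact posterior is
# Gaussian, so bias from the minibatch variant is easy to spot.
data = rng.normal(loc=2.0, scale=1.0, size=1_000)
N = len(data)

def log_prior(theta):
    return -0.5 * theta ** 2

def log_lik(theta, x):
    return -0.5 * np.sum((x - theta) ** 2)

def mh(n_iters=5_000, batch_size=None, step=0.05):
    """Random-walk Metropolis-Hastings.

    batch_size=None -> exact MH: the acceptance ratio uses the full data.
    batch_size=b    -> naive minibatch MH: the log-likelihood is estimated
                       on b points and rescaled by N/b, which destroys the
                       exact stationary distribution (inexact MCMC).
    """
    theta, samples = 0.0, []
    for _ in range(n_iters):
        x = data if batch_size is None else rng.choice(data, size=batch_size)
        scale = 1.0 if batch_size is None else N / batch_size
        prop = theta + step * rng.normal()  # symmetric proposal
        log_alpha = (log_prior(prop) + scale * log_lik(prop, x)
                     - log_prior(theta) - scale * log_lik(theta, x))
        if np.log(rng.uniform()) < log_alpha:
            theta = prop
        samples.append(theta)
    return np.array(samples)

exact = mh()                  # targets the true posterior
inexact = mh(batch_size=50)   # cheaper per step, but biased
print(exact[1000:].mean(), exact[1000:].std())      # ~ true posterior
print(inexact[1000:].mean(), inexact[1000:].std())  # visibly perturbed

Running both chains and comparing the post-burn-in moments shows the point in miniature: the full-data sampler concentrates on the analytic posterior, while the naive minibatch sampler drifts from it even as its per-iteration cost drops, which is why exact yet scalable alternatives are the goal.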