Anqi Wu

Princeton University

Position: PhD Student
Rising Stars year of participation: 2018
Bio

Anqi Wu is a PhD student at the Princeton Neuroscience Institute at Princeton University, advised by Professor Jonathan W. Pillow. She received a master's degree in computer science from the University of Southern California and a bachelor's degree in electrical engineering from Harbin Institute of Technology in China. Her research interest is developing Bayesian statistical models to characterize structure in neural data, working at the interdisciplinary intersection of machine learning and computational neuroscience. She has worked as a summer research associate at the University of Texas at Austin and as an intern at Microsoft Research in Cambridge, U.K. She has published several papers at top machine learning conferences. She was nominated as one of Princeton's two candidates for a Google PhD Fellowship in 2018, and she received a Chevron Fellowship while at USC.

Gaussian Process Based Nonlinear Latent Structure Discovery in Multivariate Spike Train Data

A large body of recent work in computational neuroscience focuses on methods for extracting low-dimensional latent structure from multi-neuron spike train data. Most of these methods have used linear dimensionality reduction of firing rates, linear models of latent dynamics, or both. Here, we propose a nonlinear latent variable model, which we call the Poisson Gaussian-Process Latent Variable Model (P-GPLVM), that can identify low-dimensional, highly nonlinear structure underlying high-dimensional spike train data. This model specifies the joint distribution over multi-neuron spike trains in terms of conditionally Poisson spiking with firing rates driven by the composition of two underlying Gaussian processes (GPs): one governing the trajectory of a low-dimensional temporal latent variable, and another governing a set of tuning curves that map the latent variable to high-dimensional firing rates. The use of nonlinear tuning curves allows the model to discover low-dimensional latent structure even when spike responses are themselves high-dimensional (e.g., as in hippocampal place cell or entorhinal grid cell codes). To infer the model parameters and GP hyperparameters from data, we introduce the decoupled Laplace approximation, a fast approximate inference method that allows us to efficiently optimize the latent path while marginalizing over latent tuning curves. We show that this method outperforms previous Laplace-based inference methods in both speed and accuracy. We apply the model to spike trains recorded from hippocampal place cells and show that it outperforms a variety of previous methods for latent structure discovery, including variational auto-encoder (VAE) based methods that parametrize the nonlinear mapping from latent space to spike rates with a deep neural network.
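
To make the generative structure concrete, here is a minimal Python sketch of sampling from a model of the kind described above: a latent trajectory drawn from a temporal GP, nonlinear tuning curves drawn from a GP over latent space, and conditionally Poisson spike counts. The RBF kernel, exponential nonlinearity, dimensions, and hyperparameters are illustrative assumptions, not the settings used in the paper.

import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between point sets a (M, D) and b (N, D)."""
    sq_dists = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * sq_dists / length_scale ** 2)

rng = np.random.default_rng(0)
T, N, D = 200, 20, 1          # time bins, neurons, latent dimensionality (assumed)
dt = 0.02                     # bin width in seconds (assumed)

# 1) Latent trajectory x(t): one GP draw over time per latent dimension.
t_grid = np.arange(T)[:, None] * dt
K_t = rbf_kernel(t_grid, t_grid, length_scale=0.5) + 1e-6 * np.eye(T)
x = rng.multivariate_normal(np.zeros(T), K_t, size=D).T          # (T, D)

# 2) Tuning curves: a GP over latent space, evaluated at the trajectory,
#    gives each neuron's log firing rate as a nonlinear function of x.
K_x = rbf_kernel(x, x, length_scale=1.0) + 1e-6 * np.eye(T)
log_rates = rng.multivariate_normal(np.zeros(T), K_x, size=N).T  # (T, N)

# 3) Conditionally Poisson spiking given the (nonnegative) firing rates.
rates = np.exp(log_rates)
spikes = rng.poisson(rates * dt)                                 # (T, N) spike counts

Inference in the paper goes in the opposite direction: given only the spike counts, the latent path and tuning curves are recovered via the decoupled Laplace approximation, which this sampling sketch does not attempt to illustrate.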