Beidi Chen
Stanford University
beidic@stanford.edu
Bio
Beidi Chen is a postdoctoral scholar in the Department of Computer Science at Stanford University, working with Dr. Christopher Ré. Prior to joining Stanford, she received her Ph.D. from the Department of Computer Science at Rice University, advised by Dr. Anshumali Shrivastava. Beidi received a BS in EECS from UC Berkeley in 2015. Her research interests are in large-scale machine learning and deep learning. Her work has won Best Paper awards at LISA and IISA. She received the Ken Kennedy Institute for Information Technology Fellowship, and has held internships at Microsoft Research Redmond, NVIDIA Research, and Amazon AI. She was a recipient of EECS Rising Stars 2019 at UIUC.
Algorithms and Frameworks for Efficient Neural Network Training
With the exponential growth in data and model size, it is necessary to reduce the computational and memory bottlenecks in neural network (NN) training. My research goal is to develop a deeper understanding of these bottlenecks, design algorithms, and build systems that address the model and data efficiency problems. We first present the framework SLIDE (Sub-LInear Deep learning Engine), which uniquely blends smart randomized algorithms, namely Locality Sensitive Hashing (LSH), with multi-core parallelism and workload optimization for efficient NN training. Our evaluations on industry-scale recommendation datasets with large fully connected architectures show that training with SLIDE on a 44-core CPU is more than 3.5 times faster (1 hour vs. 3.5 hours) than training the same network with TensorFlow on a Tesla V100. Then we present MONGOOSE, a learnable LSH framework equipped with a scheduler and low-cost learnable LSH hash functions. We show that it achieves further improvements across a wide range of applications: up to 8% better accuracy with a 6.5x speed-up and a 6x reduction in memory usage.
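To make the LSH-based training idea concrete, below is a minimal sketch (not the SLIDE implementation itself) of how LSH can select a small active set of neurons for a sparse forward pass: SimHash-style random hyperplanes bucket neuron weight vectors, and for each input only the neurons that collide with it in some table are evaluated. All names (sparse_forward, simhash), the toy layer sizes, and the table/hash-count choices are illustrative assumptions.

```python
# Illustrative sketch of LSH-driven sparse neuron selection, assuming
# SimHash (random-hyperplane) hashing; not the authors' actual code.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

D, N = 64, 4096          # input dim, number of neurons in a wide layer (toy sizes)
K, L = 8, 4              # hash bits per table, number of hash tables (assumed)

W = rng.standard_normal((N, D))            # neuron weight vectors
b = np.zeros(N)                            # biases
planes = rng.standard_normal((L, K, D))    # random hyperplanes for SimHash

def simhash(x, table):
    """K-bit SimHash signature of vector x under one table's hyperplanes."""
    bits = (planes[table] @ x > 0).astype(np.uint8)
    return int(bits @ (1 << np.arange(K)))  # pack the sign bits into an int key

# Preprocessing: hash every neuron's weight vector into L tables
# (bucket id -> list of neuron indices).
tables = [defaultdict(list) for _ in range(L)]
for t in range(L):
    for n in range(N):
        tables[t][simhash(W[n], t)].append(n)

def sparse_forward(x):
    """Forward pass that touches only neurons colliding with x in any table."""
    active = set()
    for t in range(L):
        active.update(tables[t].get(simhash(x, t), ()))
    idx = np.fromiter(active, dtype=int)
    out = np.zeros(N)
    out[idx] = np.maximum(W[idx] @ x + b[idx], 0.0)  # ReLU on active neurons only
    return out, idx

x = rng.standard_normal(D)
out, idx = sparse_forward(x)
print(f"activated {len(idx)} of {N} neurons")
```

In this sketch the hyperplanes are fixed and random; MONGOOSE's contribution, roughly, is to make such hash functions learnable at low cost and to use a scheduler to decide when the tables need to be updated as the weights drift during training.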