Lili Su

Massachusetts Institute of Technology

Position: Postdoctoral Associate
Rising Stars year of participation: 2018
Bio

Lili Su is a postdoctoral researcher in the Computer Science and Artificial Intelligence Laboratory at MIT, hosted by Professor Nancy Lynch. She received her PhD from the University of Illinois at Urbana-Champaign in 2017, supervised by Professor Nitin H. Vaidya. Her master's work, conducted in the CSL Communication Group from 2012 to 2014, focused on ordinal data processing. She received her BS degree from Nankai University in China in 2011. Her research intersects distributed systems, brain computing, security, optimization, and learning. She was among three nominees for the Best Student Paper Award at the 2016 International Symposium on Distributed Computing, and she received the 2015 Best Student Paper Award at the International Symposium on Stabilization, Safety, and Security of Distributed Systems. She received UIUC's Sundaram Seshu International Student Fellowship for 2016. She has also served on the program committees of several conferences.

Defending Distributed Learning Against Arbitrarily Malicious Attacks

A distributed system consists of networked components that interact with each other to achieve a common goal. Given the ubiquity of distributed systems and their vulnerability to adversarial attacks, it is crucial to design systems that are provably secure. I have been exploring the design of robust distributed learning algorithms that are provably resilient to Byzantine attacks. Two topics that I have been focusing on are:

(1) Distributed statistical learning in the presence of arbitrarily malicious workers. Here we focus on a map-reduce style architecture and algorithm: we capture the distributed learning system using a server-client model in which the clients are prone to Byzantine attacks, and the adversary can adaptively choose which clients to attack. Moreover, each worker keeps only a small local sample, so training a model over this system requires close interaction among the network components. To the best of our knowledge, we are among the first to study this problem. We proposed methods that are provably resilient to Byzantine attacks even in the high-dimensional setting.

(2) Byzantine-resilient distributed inference over multi-agent networks. We studied (a) consensus-based multi-agent optimization and (b) consensus-based distributed hypothesis testing. For the former, we characterized the performance degradation caused by Byzantine attacks and designed efficient algorithms that achieve the optimal fault-tolerance performance. For the latter, we proposed, to the best of our knowledge, the first learning algorithm under which the good agents can collaboratively identify the underlying truth.
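To illustrate the server-client setting in topic (1), the sketch below shows a toy gradient-descent loop in which the server aggregates worker gradients with a coordinate-wise median, a standard Byzantine-resilient aggregation rule. This is a minimal illustrative example, not the specific algorithm from the work described above; the toy least-squares loss and names such as NUM_BYZANTINE are assumptions made for the example.

```python
# Illustrative sketch (assumed setup, not the authors' exact method):
# a server aggregates gradients from workers, some of which are Byzantine,
# using a coordinate-wise median instead of a plain average.
import numpy as np

rng = np.random.default_rng(0)

DIM = 5             # model dimension
NUM_WORKERS = 10    # total clients
NUM_BYZANTINE = 3   # clients controlled by the adversary (assumed for the toy example)
TRUE_MODEL = rng.normal(size=DIM)


def honest_gradient(model, n_samples=50):
    """Noisy gradient of a least-squares loss computed from a small local sample."""
    X = rng.normal(size=(n_samples, DIM))
    y = X @ TRUE_MODEL + 0.1 * rng.normal(size=n_samples)
    return X.T @ (X @ model - y) / n_samples


def byzantine_gradient(model):
    """An arbitrarily malicious report: here, a large random vector."""
    return 100.0 * rng.normal(size=DIM)


def coordinate_wise_median(gradients):
    """Robust aggregation: the median of each coordinate across all reports."""
    return np.median(np.stack(gradients), axis=0)


model = np.zeros(DIM)
for step in range(200):
    reports = []
    for w in range(NUM_WORKERS):
        # The adversary may choose which clients to corrupt at each round.
        if w < NUM_BYZANTINE:
            reports.append(byzantine_gradient(model))
        else:
            reports.append(honest_gradient(model))
    model -= 0.1 * coordinate_wise_median(reports)

print("estimation error:", np.linalg.norm(model - TRUE_MODEL))
```

With fewer than half of the workers Byzantine, the median-based update stays close to the honest descent direction, whereas a plain average would be dominated by the malicious reports; robust aggregation rules of this flavor are one common way to obtain Byzantine resilience in the server-client model.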