Karla Kvaternik
Princeton University
karlak@princeton.edu
Bio
Karla Kvaternik obtained her B.Sc. in Electrical and Computer Engineering at the University of Manitoba, her M.Sc. specializing in control theory at the University of Alberta, and her Ph.D. in control theory at the University of Toronto. She received the prestigious Vanier Canada Graduate Scholarship in 2010 and the Best Student Paper award at the 2009 Multiconference on Systems and Control in St. Petersburg, Russia. Her research interests span nonlinear systems and control theory, Lyapunov methods, nonlinear programming, and extremum-seeking control, but her main interest is the development and application of decentralized coordination control strategies for dynamic multiagent systems. She is currently a Postdoctoral Research Associate at Princeton University, where her research focuses on the development of optimal social foraging models.
Consensus Optimization Based Coordination Control Strategies
Consensus-decentralized optimization (CDO) methods, originally studied by Tsitsiklis et al., have undergone significant theoretical development over the last decade. Much of this attention is motivated by the recognized utility of CDO in large-scale machine learning and sensor network applications.
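To fix ideas, the sketch below illustrates one flavor of CDO in the spirit of this early work: each agent alternates a consensus (neighbor-averaging) step with a local gradient step on its private objective. The function name cdo_step, the mixing matrix W, and the quadratic example are illustrative assumptions for exposition only, not the method discussed in the talk.

```python
import numpy as np

def cdo_step(X, W, grads, alpha):
    """One consensus-decentralized optimization update (illustrative sketch).

    X     : (n_agents, dim) array; row i is agent i's current estimate
    W     : (n_agents, n_agents) doubly stochastic mixing matrix
    grads : list of callables; grads[i](x) returns agent i's local gradient at x
    alpha : step size
    """
    mixed = W @ X                                        # consensus (averaging) step
    local_grads = np.array([g(x) for g, x in zip(grads, X)])
    return mixed - alpha * local_grads                   # local gradient correction

# Hypothetical example: three agents jointly minimizing sum_i (x - c_i)^2,
# whose global minimizer is the average of the c_i.
if __name__ == "__main__":
    c = np.array([[1.0], [3.0], [8.0]])
    grads = [lambda x, ci=ci: 2.0 * (x - ci) for ci in c]
    W = np.array([[0.50, 0.25, 0.25],
                  [0.25, 0.50, 0.25],
                  [0.25, 0.25, 0.50]])
    X = np.zeros((3, 1))
    for _ in range(500):
        X = cdo_step(X, W, grads, alpha=0.05)
    print(X.ravel())  # each agent's estimate settles near the minimizer, 4.0
```

With a constant step size the agents converge only to a neighborhood of the optimizer; this simple setting is what the talk contrasts with coordination problems in which agents have their own dynamics and the desired configuration need not be a consensus.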
In contrast, we are interested in a distinct class of decentralized coordination control problems (DCCPs), and we aim to investigate the utility and limitations of CDO-based coordination control strategies. Unlike prototypical machine learning and sensor network problems, DCCPs may involve networked agents with heterogeneous dynamics that couple to those of a CDO-based coordination control strategy, thereby affecting its performance. We find that existing analytic techniques cannot easily accommodate such a problem setting. Moreover, the final desired agent configuration in general DCCPs does not necessarily involve consensus. This observation requires a re-interpretation of the variables updated in a standard CDO scheme, and it exposes a limitation of CDO-based coordination control strategies.
Starting from this re-interpretation, we address this limitation by proposing the Reduced Consensus Optimization (RCO) method, a streamlined variant of CDO that is particularly well suited to the DCCP context. More importantly, we introduce a novel framework for the analysis of general CDO methods, based on interconnected-systems techniques, small-gain arguments, and the concept of semiglobal practical asymptotic stability. This framework allows us to study the performance of RCO seamlessly, as well as problem settings involving dynamic agents. In addition, when applied to a general class of CDO methods themselves, this analytic viewpoint allows us to relax several standard assumptions.