Yao Qin
yaoqin@google.com
Bio
Yao Qin is a research scientist at Google Research. She received her Ph.D. from the University of California, San Diego, under the supervision of Professor Garrison Cottrell. Her main research interest is in understanding and improving robustness in machine learning models. During her Ph.D., Yao interned with the Google Brain Toronto team, advised by Geoffrey Hinton, in 2019, and with the Google Brain Red Team, advised by Ian Goodfellow, in 2018. She also interned at Microsoft Research Cambridge, UK, in 2017 and at NEC Labs in 2016.
Understanding and Improving Robustness in Machine Learning Models
The robustness of machine learning algorithms is becoming increasingly important as machine learning systems are deployed in higher-stakes applications. However, "robustness" issues arise in a variety of forms and are studied through multiple lenses in the machine learning literature. One line of research shows that neural networks suffer from distributional shift: accuracy drops significantly when models are asked to make predictions on data drawn from a distribution different from the training distribution. Further, neural networks are vulnerable to adversarial examples: small perturbations to the input can successfully fool classifiers into making incorrect predictions. Taken together, these issues pose a significant challenge for trusting model predictions. To make AI more robust for the good of society, my research focuses on understanding and improving robustness in machine learning models. We have made progress on model robustness through three main prongs: (1) building connections between different robustness issues and proposing general techniques that address multiple robustness problems at the same time; (2) bridging the gap in robustness techniques among different domains, e.g., image, audio, and text; and (3) developing a better understanding of robustness issues in different network architectures, e.g., Capsule Networks, CNNs, and Transformers. Through this series of research works, we have successfully improved the robustness of machine learning models, serving as a step toward addressing a central obstacle to the deployment of neural networks.
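As a concrete illustration of the adversarial examples mentioned above, the sketch below implements the classic Fast Gradient Sign Method (Goodfellow et al., 2015) in PyTorch: a single gradient step in the direction that increases the classifier's loss, which is often enough to flip its prediction. This is a generic, minimal sketch; the names model, x, y, and epsilon are hypothetical placeholders, and it is not the specific method described in this talk.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=8 / 255):
        # Craft an adversarial example with the Fast Gradient Sign Method:
        # take one step of size epsilon along the sign of the input gradient
        # of the loss, then clip back to the valid pixel range [0, 1].
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

Even with a small epsilon (e.g., 8/255 for images in [0, 1]), such perturbations are typically imperceptible to humans yet can cause a standard classifier to misclassify.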