Xinyi Zhou

University of Washington

Position: Postdoctoral Scholar
Rising Stars year of participation: 2024
Bio

Xinyi Zhou is a Postdoctoral Scholar at the Paul G. Allen School of Computer Science and Engineering and a Data Science Postdoctoral Fellow at the eScience Institute, University of Washington. She develops methods in data mining, machine learning, and natural language processing, leveraging multimodal and behavioral data and often drawing on social science insights to address societal challenges ranging from mis/disinformation to well-being. Her research has been published in leading journals, such as ACM Computing Surveys, and conferences, including WWW, CIKM, ICWSM, and PAKDD. She has co-led tutorials at KDD and WSDM to promote interdisciplinary mis/disinformation research. Her work has been featured in the ACM Showcase and media outlets such as Psychology Today.

Areas of Research
  • Artificial Intelligence
Harnessing AI to Combat Online Misinformation

Misinformation, identified as a top short- and long-term risk, significantly undermines democracy, the economy, public health, and other societal pillars. Manual fact-checking cannot keep pace with massive volumes of online content, necessitating AI for automation. However, online misinformation is largely multimodal (textual and visual) and can be partially or even entirely factual yet misleading through tactics like cherry-picking and conflating correlation with causation. Addressing it requires access to constantly growing and changing knowledge across domains. Previous solutions predict textual falsehoods but (1) struggle with multimodal misinformation. While these predictors are increasingly accurate, they often (2) fail to explain their results to the public, and (3) their accuracy reflects an ability to verify simple rather than complex claims. Moreover, (4) no research has computationally assessed the intent of misinformation spreaders, which could inform tailored interventions.

My research addresses these key bottlenecks by integrating social science insights, creating benchmark datasets, building pioneering models, and designing comprehensive evaluation frameworks to answer novel questions. We developed the first neural network model capable of jointly learning from multimodal content and cross-modal consistency to predict misinformation. Additionally, we built MUSE, a large language model (LLM) augmented with credibility-aware multimodal retrieval to correct misinformation. MUSE generates high-quality responses to potential misinformation across modalities, domains, tactics, and political leanings within minutes of its appearance online. It excels in all 13 dimensions of response quality, from the accuracy of identifications and the factuality of explanations to the relevance and credibility of references. Overall, it outperforms GPT-4 by 37% and even high-quality human-written responses by 29%.
Furthermore, we proposed an AI-powered solution grounded in cognitive biases to differentiate between individuals’ unintentional spread of misinformation, where people are unaware of its falsehood, and intentional spread. My work demonstrates AI’s potential to effectively, promptly, and responsibly combat online misinformation at scale.
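The idea of credibility-aware retrieval described above can be illustrated with a minimal sketch. This is not the actual MUSE implementation: the corpus, the credibility scores, the lexical-overlap relevance function, and the relevance-times-credibility ranking are all simplifying assumptions for illustration (a real system would use learned embeddings, multimodal evidence, and an LLM to generate the correction).

```python
# Illustrative sketch only, NOT the MUSE system: rank candidate evidence
# for a claim by combining relevance with a (hypothetical) source
# credibility score, so low-credibility sources are down-weighted.
from dataclasses import dataclass


@dataclass
class Evidence:
    text: str
    source: str
    credibility: float  # hypothetical score in [0, 1]


def relevance(claim: str, text: str) -> float:
    """Toy word-overlap relevance; a real system would use embeddings."""
    claim_words = set(claim.lower().split())
    text_words = set(text.lower().split())
    return len(claim_words & text_words) / max(len(claim_words), 1)


def retrieve(claim: str, corpus: list[Evidence], k: int = 2) -> list[Evidence]:
    """Return the top-k evidence items by relevance x credibility."""
    return sorted(
        corpus,
        key=lambda e: relevance(claim, e.text) * e.credibility,
        reverse=True,
    )[:k]


# Hypothetical corpus with made-up sources and scores.
corpus = [
    Evidence("Vaccines reduce severe illness according to clinical trials.",
             "health-agency.example", 0.9),
    Evidence("Vaccines cause more harm than benefit overall.",
             "anonymous-blog.example", 0.1),
]
top = retrieve("Do vaccines reduce severe illness?", corpus, k=1)
```

The retrieved, credibility-ranked evidence would then be placed into the LLM's prompt so the generated correction can cite credible references, which is the role retrieval augmentation plays in the pipeline described above.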