Saadia Gabriel
University of Washington
skgabrie@cs.washington.edu
Bio
Saadia Gabriel is a PhD student in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where she is advised by Prof. Yejin Choi. Her research revolves around natural language understanding and generation, with a particular focus on machine learning techniques and deep learning models for understanding social commonsense and logical reasoning in text. She has worked on toxic language detection and coherent text generation.
Probing Implications of News Headlines
Readers project meaning onto news headlines based not only on the surface-level content of the news but also on their perception of its reliability. These implications can influence readers' reactions to news and their likelihood of spreading misinformation through social networks. However, most prior work focuses on fact-checking the veracity of news or on stylometry, rather than on measuring implications and perceived impact. We propose Misinfo Belief Frames (MBF), a pragmatic formalism for understanding how readers perceive the reliability of news and the impact of misinformation. We capture these hidden aspects of news through dimensions such as potential reader reactions and the likelihood that the news will be spread through sharing. Misinfo Belief Frames use commonsense reasoning to uncover implications of real and fake news headlines. We collect the Misinfo Belief Frames corpus, a dataset of headlines focused on global crises: the Covid-19 pandemic and climate change. Our results on predicting Misinfo Belief Frames with large-scale language models show that machine-generated inferences can increase reader trust in real news and decrease reader trust in misinformation. We further investigate the effectiveness of Misinfo Belief Frames for detecting misinformation and understanding reader behavior.
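The sketch below is a minimal illustration of the general idea of generating belief-frame-style inferences for a headline with a language model; it does not use the authors' released models or data. The dimension names (reader reaction, likelihood of spread), the prompts, and the example headline are assumptions drawn from the abstract, and the off-the-shelf GPT-2 generator stands in for the large-scale models trained on the MBF corpus.

```python
# Illustrative sketch only: prompting an off-the-shelf language model to produce
# free-text inferences along belief-frame-style dimensions for a single headline.
# Dimension names, prompts, and the headline are assumptions, not the paper's code.
from transformers import pipeline

# Any causal LM works for this sketch; "gpt2" is chosen purely for availability.
generator = pipeline("text-generation", model="gpt2")

HEADLINE = "New study claims household vinegar kills the coronavirus on contact."

# Belief-frame-style dimensions suggested by the abstract (assumed names).
PROMPTS = {
    "reader_reaction": f"Headline: {HEADLINE}\nA likely reader reaction is that",
    "likelihood_of_spread": f"Headline: {HEADLINE}\nReaders may share this because",
}

for dimension, prompt in PROMPTS.items():
    out = generator(prompt, max_new_tokens=30, do_sample=True, top_p=0.9,
                    num_return_sequences=1)
    # Keep only the newly generated continuation, dropping the prompt prefix.
    inference = out[0]["generated_text"][len(prompt):].strip()
    print(f"{dimension}: {inference}")
```

In the paper's setting, such generated inferences are what readers see alongside headlines, which is how machine-generated inferences can shift trust in real news versus misinformation.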