Award Abstract # 2027713
RAPID: Countering COVID-19 Misinformation via Situation-Aware Visually Informed Treatment

NSF Org: IIS
Div Of Information & Intelligent Systems
Recipient: UNIVERSITY OF PITTSBURGH - OF THE COMMONWEALTH SYSTEM OF HIGHER EDUCATION
Initial Amendment Date: April 24, 2020
Latest Amendment Date: April 24, 2020
Award Number: 2027713
Award Instrument: Standard Grant
Program Manager: Tatiana Korelsky
tkorelsk@nsf.gov
 (703)292-0000
IIS
 Div Of Information & Intelligent Systems
CSE
 Direct For Computer & Info Scie & Enginr
Start Date: May 1, 2020
End Date: April 30, 2022 (Estimated)
Total Intended Award Amount: $104,491.00
Total Awarded Amount to Date: $104,491.00
Funds Obligated to Date: FY 2020 = $104,491.00
History of Investigator:
  • Yu-Ru Lin (Principal Investigator)
    yurulin@pitt.edu
  • Wen-Ting Chung (Co-Principal Investigator)
  • Adriana Kovashka (Co-Principal Investigator)
Recipient Sponsored Research Office: University of Pittsburgh
4200 FIFTH AVENUE
PITTSBURGH
PA  US  15260-0001
(412)624-7400
Sponsor Congressional District: 12
Primary Place of Performance: University of Pittsburgh
4200 Fifth Avenue
Pittsburgh
PA  US  15260-0001
Primary Place of Performance Congressional District: 12
Unique Entity Identifier (UEI): MKAGLD59JRL1
Parent UEI:
NSF Program(s): COVID-19 Research
Primary Program Source: 010N2021DB R&RA CARES Act DEFC N
Program Reference Code(s): 096Z, 7495, 7914
Program Element Code(s): 158Y00
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
Note: This Award includes Coronavirus Aid, Relief, and Economic Security (CARES) Act funding.

ABSTRACT

As the COVID-19 pandemic spreads, countries and cities around the globe have taken stringent measures including quarantine and regional lockdown. The increasing isolation, along with the panic and anxiety, creates challenges for countering misinformation--people are increasingly tapping into online information sources already familiar to them with declining chances of accessing alternative stories. This project will develop mechanisms based on text and image analysis, social psychology, and crowd-sourcing that can be used in a timely manner to counter misinformation during the ongoing COVID-19 crisis and beyond. One of the novel features of the approach is to deal with a specific instance of misinformation by crowd-sourcing authentic images that counter this misinformation. This research will contribute to the scientific understanding of misinformation and of persuasive narrative construction, to the assessment of risk for the spread of misinformation, and to the development of mechanisms to counter misinformation.

The technical aims of this project are divided into three thrusts. The first thrust will investigate what information content and which specific part of a multimodal social media post (e.g., a piece of text, text with an image, image with an embedded slogan) will receive stronger responses and hence increase the likelihood of the post being shared. The second thrust will create metrics to assess the likelihood of the spread of misinformation based on predictors learned from the content to which users are exposed. The third thrust will focus on the development of a system to counter misinformation based on citizen journalists' inputs from field investigations and on machine learning techniques. Finally, the system will be evaluated by survey studies and interviews to examine the system's usability, usefulness, and effectiveness in reducing the spread and impact of misinformation.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH


Unal, Mesut Erhan and Kovashka, Adriana and Chung, Wen-Ting and Lin, Yu-Ru "Visual Persuasion in COVID-19 Social Media Content: A Multi-Modal Characterization" The 1st International Workshop on Multimodal Understanding for the Web and Social Media (MUWS 2022), the Web Conference 2022, 2022
Yan, Muheng and Chung, Wen-Ting and Lin, Yu-Ru "Are Mutated Misinformation More Contagious? A Case Study of COVID-19 Misinformation on Twitter" Proceedings of Web Science 2022 (WebSci 2022), 2022, https://doi.org/10.1145/3501247.3531562
Yan, Muheng and Lin, Yu-Ru and Litman, Diane "Argumentatively Phony? Detecting Misinformation via Argument Mining" 1st KDD Workshop on AI-enabled Cybersecurity Analytics, 2021
Teng, Xian and Pei, Sen and Lin, Yu-Ru "StoCast: Stochastic Disease Forecasting with Progression Uncertainty" IEEE Journal of Biomedical and Health Informatics, 2020, https://doi.org/10.1109/JBHI.2020.3006719
Lee, JooYoung and Wu, Siqi and Ertugrul, Ali Mert and Lin, Yu-Ru and Xie, Lexing "Whose Advantage? Measuring Attention Dynamics across YouTube and Twitter on Controversial Topics" Proceedings of the International AAAI Conference on Weblogs and Social Media, 2022
Teng, Xian and Lin, Yu-Ru and Chung, Wen-Ting and Li, Ang and Kovashka, Adriana "Characterizing User Susceptibility to COVID-19 Misinformation on Twitter" Proceedings of the International AAAI Conference on Weblogs and Social Media, v.16, 2022

PROJECT OUTCOMES REPORT

Disclaimer

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

In the global pandemic era of COVID-19, a time filled with high risk and uncertainty, people have become more vulnerable than ever to being influenced by problematic information, such as messages provoking strong emotions, discrimination, violence, and distrust. How do such problematic signals accumulate and translate into a toxic influence over the online space? This project studies the online ecosystem that fertilizes the "infodemic" in the COVID-19 outbreak, focusing on the characteristics of misinformation content and consumers. By leveraging computational social science methodology and multi-modal learning, this interdisciplinary research makes several contributions to developing methods for understanding the proliferation and persuasive narrative construction of misinformation.

First, we developed a computational approach to analyze the patterns of persuasive social media content, in terms of popularity and source credibility. Our multi-modal approach, combining image and text, is not only predictive of information popularity and credibility, but also able to uncover how unreliable sources integrate visual elements with textual content in a distorted, biased fashion. This result provides insights into the enhancement of social media literacy and engagement.

Second, through analyzing a large, geopolitically diverse sample of Twitter users and their information consumption over a six-month period, we identified "who" constitutes the population vulnerable to online misinformation in the pandemic, and what robust features and short-term behavioral signals distinguish susceptible users from others. We discovered that (1) contrary to prior studies on bot influence, social bots' contribution to misinformation sharing was surprisingly low, and human-like users' misinformation behaviors exhibited heterogeneity; and (2) susceptible users appeared to be politically sensitive, active, and responsive to emotionally charged content. To predict users' susceptibility, we developed an interpretable deep learning model that efficiently forecasts users' transient susceptibility solely from their short-term news consumption and the exposure they receive from their networks. Our results contribute to designing effective intervention mechanisms to mitigate misinformation dissemination.
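The idea of scoring transient susceptibility from short-term signals can be illustrated with a toy sketch. The feature names and weights below are illustrative assumptions, not the features or the interpretable deep learning model from this project:

```python
from dataclasses import dataclass

# Hypothetical short-term signals for one user in a recent time window;
# these names are illustrative, not the project's actual features.
@dataclass
class UserWindow:
    misinfo_exposure: float    # fraction of low-credibility items seen from the network
    activity: float            # normalized posting/retweeting rate
    emotional_response: float  # rate of engaging with emotionally charged content

def susceptibility_score(w: UserWindow) -> float:
    """Toy linear scorer: transient susceptibility rises with recent
    exposure, activity, and responsiveness to emotional content.
    The weights are assumed for illustration only."""
    weights = (0.5, 0.2, 0.3)
    score = (weights[0] * w.misinfo_exposure
             + weights[1] * w.activity
             + weights[2] * w.emotional_response)
    return min(max(score, 0.0), 1.0)  # clamp to [0, 1]

low_risk = UserWindow(misinfo_exposure=0.05, activity=0.2, emotional_response=0.1)
high_risk = UserWindow(misinfo_exposure=0.70, activity=0.8, emotional_response=0.9)
assert susceptibility_score(low_risk) < susceptibility_score(high_risk)
```

A real model would learn these relationships from longitudinal consumption and exposure data rather than use fixed weights, but the sketch shows why short-term windowed signals alone can be enough to rank users by risk.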

Third, through an analysis of over 240 thousand tweets capturing how users shared COVID-19 pandemic-related misinformation news on social media over a five-month period, we studied the spread of information and users' level of interaction with the original information source, from low (copy-and-paste link) to medium (Like) to high (comments). The higher the user's interaction with the original source, the more the information mutated as it spread. Our results indicate a positive relationship between information mutation and spreading outcomes. This study provides the first quantitative evidence of how misinformation propagation may be exacerbated by users' commentary, which has implications for countering misinformation.
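One simple way to quantify such textual mutation is string similarity between the source message and each reshared version. The sketch below uses the standard library's `difflib` and made-up example tweets; it is an illustration of the general idea, not the measure used in the study:

```python
import difflib

def mutation_score(original: str, shared: str) -> float:
    """How far a reshared message deviates from its source text.
    0.0 means a verbatim copy; values approach 1.0 as user
    commentary rewrites the original."""
    ratio = difflib.SequenceMatcher(None, original.lower(), shared.lower()).ratio()
    return round(1.0 - ratio, 3)

# Hypothetical source headline and three reshares at increasing
# levels of interaction (copy-paste, light comment, heavy rewrite).
source = "Study claims drug X cures COVID-19"
reshares = [
    "Study claims drug X cures COVID-19",
    "Wow! Study claims drug X cures COVID-19!!!",
    "They are hiding the cure! Drug X works, spread the word",
]
scores = [mutation_score(source, s) for s in reshares]
assert scores[0] == 0.0                 # verbatim share: no mutation
assert scores[0] < scores[1] < scores[2]  # mutation grows with interaction
```

Tracking such a score across a cascade makes it possible to test whether more heavily mutated copies also achieve wider spread, which is the relationship the study reports.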

Finally, this project has two broader impacts. First, our study yields promising results by characterizing the information ecosystem, which enriches the social solutions in countering misinformation proliferation beyond the fact-checking paradigm. Second, the longitudinal data and techniques we developed enable researchers, platform managers, and policy analysts to discover new insights into information consumption behaviors and test new methods to counter misinformation.

Last Modified: 06/30/2022
Modified by: Yu-Ru Lin
