NSF Org: IIS Div Of Information & Intelligent Systems
Initial Amendment Date: April 24, 2020
Latest Amendment Date: April 24, 2020
Award Number: 2027713
Award Instrument: Standard Grant
Program Manager: Tatiana Korelsky, tkorelsk@nsf.gov, (703) 292-0000, IIS Div Of Information & Intelligent Systems, CSE Direct For Computer & Info Scie & Enginr
Start Date: May 1, 2020
End Date: April 30, 2022 (Estimated)
Total Intended Award Amount: $104,491.00
Total Awarded Amount to Date: $104,491.00
Recipient Sponsored Research Office: 4200 Fifth Avenue, Pittsburgh, PA, US 15260-0001, (412) 624-7400
Primary Place of Performance: 4200 Fifth Avenue, Pittsburgh, PA, US 15260-0001
NSF Program(s): COVID-19 Research
Award Agency Code: 4900
Fund Agency Code: 4900
Assistance Listing Number(s): 47.070
ABSTRACT
As the COVID-19 pandemic spreads, countries and cities around the globe have taken stringent measures, including quarantine and regional lockdowns. The increasing isolation, along with panic and anxiety, creates challenges for countering misinformation: people are increasingly tapping into online information sources already familiar to them, with declining chances of encountering alternative accounts. This project will develop mechanisms based on text and image analysis, social psychology, and crowdsourcing that can be used in a timely manner to counter misinformation during the ongoing COVID-19 crisis and beyond. One novel feature of the approach is to address a specific instance of misinformation by crowdsourcing authentic images that counter it. This research will contribute to the scientific understanding of misinformation and of persuasive narrative construction, to the assessment of the risk that misinformation will spread, and to the development of mechanisms to counter misinformation.
The technical aims of this project are divided into three thrusts. The first thrust will investigate what information content and which specific part of a multimodal social media post (e.g., a piece of text, text with an image, an image with an embedded slogan) will receive stronger responses and hence increase the likelihood of the post being shared. The second thrust will create metrics to assess the likelihood of the spread of misinformation based on predictors learned from the content to which users are exposed. The third thrust will focus on the development of a system to counter misinformation based on citizen journalists' inputs from field investigations and on machine learning techniques. Finally, the system will be evaluated by survey studies and interviews to examine the system's usability, usefulness, and effectiveness in reducing the spread and impact of misinformation.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
PROJECT OUTCOMES REPORT
Disclaimer
This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.
In the global pandemic era of COVID-19, a time filled with high risk and uncertainty, people become more vulnerable than ever to being influenced by problematic information, such as messages provoking strong emotions, discrimination, violence, and distrust. How do such problematic signals accumulate and translate into a toxic influence over online spaces? This project aims to study the online ecosystem that fertilizes the "infodemic" in the COVID-19 outbreak, focusing on the characteristics of misinformation content and consumers. By leveraging computational social science methodology and multi-modal learning, this interdisciplinary research makes several contributions to developing methods to understand the proliferation and persuasive narrative construction of misinformation.
First, we developed a computational approach to analyze the patterns of persuasive social media content, in terms of popularity and source credibility. Our multi-modal approach, combining image and text, is not only predictive of information popularity and credibility, but also able to uncover how unreliable sources integrate visual elements with textual content in a distorted, biased fashion. This result provides insights into the enhancement of social media literacy and engagement.
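The combination of image and text features described above can be illustrated, under heavy simplification, as a late-fusion model: feature vectors from each modality are concatenated and scored together. The feature names, weights, and the linear form below are illustrative assumptions for exposition, not the project's actual learned model.

```python
# Hypothetical sketch of late-fusion multi-modal prediction: text and
# image feature vectors are concatenated and scored by a linear model.
# All feature names and weights here are made up for illustration.

def fuse_and_score(text_feats, image_feats, weights):
    """Concatenate modality features and return a linear popularity score."""
    fused = text_feats + image_feats
    assert len(fused) == len(weights), "one weight per fused feature"
    return sum(f * w for f, w in zip(fused, weights))

text_feats = [0.9, 0.2]    # e.g., sentiment intensity, scaled hashtag count
image_feats = [1.0, 0.0]   # e.g., has embedded slogan, face present
weights = [0.5, 0.1, 0.7, 0.3]
print(fuse_and_score(text_feats, image_feats, weights))
```

In practice the modality features would come from learned text and image encoders rather than hand-picked counts; the sketch only shows the fusion step.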
Second, through analyzing a large, geopolitically diverse sample of Twitter users and their information consumption over a six-month period, we identified "who" constitutes the population vulnerable to online misinformation in the pandemic, and what robust features and short-term behavioral signals distinguish susceptible users from others. We discovered that (1) contrary to prior studies on bot influence, social bots' contribution to misinformation sharing was surprisingly low, and human-like users' misinformation behaviors exhibited heterogeneity; and (2) susceptible users appeared to be politically sensitive, active, and responsive to emotionally charged content. For predicting users' susceptibility, we developed an interpretable deep learning model that efficiently forecasts users' transient susceptibility solely from their short-term news consumption and the exposure they receive from their networks. Our results contribute to designing effective intervention mechanisms to mitigate misinformation dissemination.
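The idea of forecasting transient susceptibility from short-term exposure can be sketched as a simple logistic score over recent exposure counts. The feature names, weights, and logistic form below are illustrative assumptions; the project's actual model is an interpretable deep network, not this toy.

```python
# Hypothetical sketch: scoring a user's transient susceptibility from
# short-term exposure features. Feature names and weights are invented
# for illustration only.
import math

def susceptibility_score(features, weights, bias=0.0):
    """Logistic score in (0, 1) from a dict of exposure features."""
    z = bias + sum(weights[k] * features.get(k, 0.0) for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

# Example: counts of content types a user was exposed to in the past week.
weights = {
    "low_credibility_exposures": 0.8,    # posts from unreliable sources
    "emotional_content_exposures": 0.5,  # emotionally charged posts
    "factcheck_exposures": -0.6,         # corrective content seen
}
user = {"low_credibility_exposures": 3,
        "emotional_content_exposures": 2,
        "factcheck_exposures": 1}
print(round(susceptibility_score(user, weights, bias=-2.0), 3))
```

The negative weight on fact-check exposure encodes the intuition that corrective content lowers susceptibility; a deep model would learn such relationships from data instead.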
Third, through an analysis of over 240 thousand tweets capturing how users shared COVID-19 pandemic-related misinformation news on social media over a five-month period, we studied the spread of information and users' level of interaction with the original information source, from low (copy-and-paste link) to medium (Like) to high (comment). The higher the level of interaction with the original source, the more the information mutated as it spread. Our results indicate a positive relationship between information mutation and spreading outcomes. This study provides the first quantitative evidence of how misinformation propagation may be exacerbated by users' commentary, which has implications for countering misinformation.
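One simple way to operationalize "mutation" of a shared post relative to its source is a text-similarity distance; the sketch below uses Python's standard `difflib.SequenceMatcher` for this purpose. The example posts and the choice of metric are illustrative assumptions, not the measure used in the study.

```python
# Hypothetical sketch: measuring how much a shared post "mutates" the
# original source text. 0.0 means a verbatim copy; values approaching
# 1.0 mean the text was largely rewritten.
import difflib

def mutation_rate(original, shared):
    """1 minus the similarity ratio between original and shared text."""
    return 1.0 - difflib.SequenceMatcher(None, original, shared).ratio()

original = "Vitamin X cures the virus, doctors say."
copy_paste = "Vitamin X cures the virus, doctors say."     # low interaction
with_comment = "Unbelievable! Vitamin X cures the virus."  # high interaction

print(mutation_rate(original, copy_paste))   # 0.0 for a verbatim share
print(mutation_rate(original, with_comment) > 0.1)
```

Under this toy measure, a copy-and-paste share scores zero mutation while a commented re-share scores higher, mirroring the low-to-high interaction spectrum described above.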
Finally, this project has two broader impacts. First, our study yields promising results by characterizing the information ecosystem, which enriches the set of social solutions for countering misinformation proliferation beyond the fact-checking paradigm. Second, the longitudinal data and techniques we developed enable researchers, platform managers, and policy analysts to discover new insights into information consumption behaviors and to test new methods for countering misinformation.
Last Modified: 06/30/2022
Modified by: Yu-Ru Lin
Please report errors in award information by writing to: awardsearch@nsf.gov.