Photo credit: Rob Dobi, The Globe and Mail

Disinformation Research for Good Systems


The Internet, with its ubiquitous presence, viral pull and anonymity, can be a central tool for disinformation.  Unfortunately, the economic models of media platforms such as Facebook, Twitter, Instagram and many others abet the spread and power of disinformation (Gillespie, 2018; Zuboff, 2019).  Those ecosystem features in turn intersect with other vulnerabilities of contemporary society, including weakened and polarized social and political systems (Swedish Civil Contingencies Agency, 2018).  

Several approaches can enhance the resilience of our information systems in order to counter disinformation. Our research uses a sample of disinformation examples to answer two questions: (1) What is the role of emotion in AI-driven message appeals on social media? and (2) How can we characterize the role of AI in the circulation, display and community processes through which disinformation influences those who encounter it? 

This research is part of our work with UT’s Good Systems effort. We are collaborating with Talia Stroud on the studies described below, and Co-PI Mary Neuburger and her colleagues at CREEES are undertaking other work.  Our approach defines “good systems” as not only technological but also social, organizational and political. This notion of the “system” invokes an information environment with many moving parts.  Artificial intelligence and machine-driven content creation and circulation are significant components of contemporary disinformation efforts. While the most recent scandals in the U.S. point to Russian interference with social media (amid other communication efforts including email break-ins, hacking, deepfakes, and Potemkin villages), disinformation efforts are in fact long-standing; they operate in various ways and implicate many countries and parties. Communication platforms such as Facebook and Twitter have been slow to remediate the many negative outcomes associated with their power and reach.  

2019-2020 Studies

Study 1

What symbols are invoked in the Facebook ads?  How might they mobilize group identities? Our design uses the 3,500 Facebook ads acknowledged to be Russian in origin and codes their content using a cloud-based analytic program (Dedoose). Coding image representations as well as text embedded in the images will enable us to examine the emotional resonance of the messages and their sources.  
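For illustration, the kind of content coding described above can be sketched as a simple keyword pass over an ad's text. The minimal Python sketch below applies a hypothetical codebook of emotion and identity-group keywords; the categories, keywords, and sample ad are invented for this example and are not the study's actual coding scheme, which relies on human coders working in Dedoose.

```python
import re

# Hypothetical codebook: these emotion and identity-group categories
# and keywords are illustrative only, not the study's actual codes.
CODEBOOK = {
    "anger":    {"outrage", "betrayed", "furious"},
    "fear":     {"threat", "danger", "invasion"},
    "pride":    {"heritage", "patriot", "proud"},
    "identity": {"veterans", "texans", "christians"},
}

def code_ad_text(text):
    """Return the codebook categories whose keywords appear in the ad text."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return {cat for cat, keywords in CODEBOOK.items() if words & keywords}

# Invented sample ad: "proud"/"heritage" trigger pride, "texans" identity,
# "threat" fear.
print(sorted(code_ad_text("Proud Texans, our heritage is under threat")))
# → ['fear', 'identity', 'pride']
```

A real pipeline would also need OCR to extract the text embedded in ad images before any such coding could be applied.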

Study 2

How did message circulation and associated display properties mobilize communities?  The “Report On The Investigation Into Russian Interference In The 2016 Presidential Election” by Special Counsel Robert S. Mueller, III speaks of “dozens of U.S. rallies organized by the IRA” (Mueller, 2019, p. 29), with a particular focus on pro-Trump and anti-Clinton rallies.  Russian-organized events in Twin Falls, Idaho, in St. Paul, and in Houston resulted from some of the Facebook messages. Through fieldwork interviews with participants and the local reporting communities, we will examine how event pages manufactured protests, how the local press responded to and investigated event announcements, and how effectively the event pages connected with people. The events’ ability to connect with people and communities such that they prompted actual protests suggests highly persuasive mechanisms (O’Sullivan, 2018). Our empirical work will triangulate the factors that influenced those messages’ success. 

To read the IRB consent form to take part in this study, click here.

Study 3

Twitter operates differently from Facebook as a medium, with a distinct user base and impact.  This study builds on 2018-19 work with Good Systems and on Linvill and Warren’s findings (2018) by examining political trolls and the narrative features related to sentiment and divisiveness on Twitter. How does content on Twitter compare to content on Facebook in terms of message appeals and sentiment? A sample of tweets from a one-year period spanning 2016-2017 will be examined and compared with the Facebook analyses we have undertaken. Combining qualitative and quantitative methods, this study investigates questions of strategy and semantic triggers.  
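As a rough illustration of the quantitative side of such a comparison, the sketch below scores messages against a toy sentiment lexicon and averages the scores per platform. The lexicon, sample messages, and function names are all hypothetical; the study's actual analysis pairs quantitative measures with qualitative reading rather than relying on word counts alone.

```python
import re

# Toy lexicon: invented positive (+1) and negative (-1) words,
# not a validated sentiment dictionary.
LEXICON = {"great": 1, "love": 1, "win": 1,
           "corrupt": -1, "disaster": -1, "hate": -1}

def sentiment(text):
    """Mean lexicon score of the words in text (0.0 if no words match)."""
    words = re.findall(r"[a-z]+", text.lower())
    scores = [LEXICON[w] for w in words if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

def platform_mean(messages):
    """Average sentiment over one platform's sample of messages."""
    return sum(sentiment(m) for m in messages) / len(messages)

# Invented samples standing in for tweet and Facebook-ad corpora.
tweets = ["What a disaster, they hate us", "Love this, great win"]
fb_ads = ["Corrupt politicians again", "We love our great country"]
print(platform_mean(tweets), platform_mean(fb_ads))
```

Comparing such per-platform averages (and the spread around them) is one simple way to ask whether message appeals on Twitter skew more or less negative than those on Facebook.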

The studies, co-directed by Sharon Strover and Talia Stroud at UT, will unfold simultaneously across 2019-2020. 

Expected Outputs

Our expected outcomes include a nuanced, informed understanding of message construction, including its emotional and mimetic qualities; a clearer sense of how AI-aided circulation and reception in these influence activities might be countered; and critical strategies for information consumption. Additional studies will be forthcoming from our partners at CREEES.