By Emily Brown, guest writer and UT Austin journalism undergraduate

On Friday, Sept. 20, Samuel Woolley, a leading researcher at the intersection of politics and technology, gave a lunchtime lecture on the dangers of computational propaganda.

“We need to challenge ourselves to become less digitally ignorant,” said Woolley.

Samuel Woolley, Ph.D., is an assistant professor at the University of Texas School of Journalism and a former Oxford research associate. During his talk, “Addressing the Next Wave of Computational Propaganda,” Woolley explained the current dangers of computational propaganda and what to expect in the future.

While propaganda has existed for centuries, this type is unique to the digital era. Computational propaganda refers to the use of automation and algorithms to manipulate public opinion.

“We often have a Western perspective in the US and in the UK when we talk about the Internet,” said Woolley. Since the Internet’s origin, countries around the world have used it, and now social media, for manipulation.

Creators of computational propaganda use automation, or bots, to create fake profiles on social media. These bots manipulate the algorithm, not the conversation, Woolley explained, with the aim of manufacturing consensus.

“The business platform of social media is fundamentally reliant on fake accounts,” Woolley said. “Companies are incentivized to share fake content,” he added, in order to drive up engagement, and thus profits.

While social media companies can play a large role in reducing computational propaganda by deleting fake accounts, Woolley believes additional legislation is necessary to combat it. However, he noted that the US is “unlikely to pass anything substantive in the next couple years.”

The role of research at the intersection of social media and politics is complicated. Firms are often hesitant to release information to academics, Woolley said, but “releasing public data to academics can be very helpful.” The more transparent social media firms are, the more academics can help them fight back against computational propaganda.

After Woolley finished presenting his research, he opened the floor to questions. The audience, made up mainly of graduate students and a few professors, eagerly asked about the future of computational propaganda and pressed Woolley to expand on the role of social media.

With the emergence of new technology, the scope of computational propaganda will change, Woolley said. “I think we’ll see a combination of some of the same and new AI-enabled methods,” he explained. He believes these new methods will likely include artificial intelligence, headless browsing technology, and deepfakes.

Woolley explained that while artificial intelligence has already begun to make bots behave more like humans, headless browsing technology will make identifying bots even harder by allowing them to access social media sites through their main pages, just as human users do. Further, Woolley believes that deepfakes, or AI-manipulated videos, will be used increasingly in the realm of politics.

However, because these technologies are still being developed, researchers have time to build tools that identify new bot behaviors and respond accordingly, Woolley concluded.

Following the question and answer portion of the talk, many stayed to discuss their thoughts. Tamar Wilner, a Ph.D. student who focuses on misinformation, described why the talk interested her. “It makes me think about the extent to which misinformation is driven by cooperative efforts,” she said, “which is different from a lot of what I deal with.”

Trish Morrison, the event’s coordinator, said she believes these events are important because they “give students perspective outside of the classroom.”

Morrison also said this event was unique because it involved collaboration between different departments and research institutes, which she believes is important because it “gets people out of their silos.” Wilner echoed a similar sentiment: “it’s good to know what faculty are doing in the department.”

Whether legislation will be passed or social media firms will release data to academics remains to be seen. According to Woolley, however, both actions are necessary to increase public awareness of computational propaganda and its impact.