Computational Science and the Spread of Harmful Conspiracy Theories in Online Social Networks

Nowadays, since almost anyone can post on social media, the strict distinction between sources and consumers of information is no longer evident. As a consequence, we are exposed to an exponentially growing flood of misleading information produced by an uncontrollable crowd of often clueless creators. Although it is well understood that the spread of misinformation can have severe consequences, manually sorting dangerous content from the sheer volume of data published on a daily basis seems impossible.

Natural language processing is widely used to automatically classify suspicious content. The typical strategy is to create manually labeled training sets and train classifiers to detect the content of interest.


However, even though these approaches significantly reduce the amount of manual labor required, machine learning models lack an understanding of context. Features such as humor or irony may therefore be missed, leading to misclassifications. These shortcomings motivate the exploration of other, more general detection methods. We aim for a more generic approach that exploits not only the content but also the underlying interactions within online social networks, to gain knowledge about the properties and dynamics of the spread of misinformation with harmful consequences on a societal scale. Specifically, we investigate the evolution of temporal networks induced by interactions between Twitter users during misinformation events.
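The temporal networks mentioned above can be thought of as timestamped interaction events between users, whose induced graph grows as the event unfolds. A minimal sketch, using made-up interaction tuples (the real data would come from Twitter retweets, replies, or mentions):

```python
from collections import defaultdict

# (source user, target user, timestamp) interaction events; toy data
events = [
    ("alice", "bob", 1), ("bob", "carol", 2),
    ("carol", "alice", 3), ("dave", "bob", 4),
]

def snapshot(events, t):
    """Directed adjacency of all interactions observed up to time t."""
    adj = defaultdict(set)
    for u, v, ts in events:
        if ts <= t:
            adj[u].add(v)
    return adj

def edge_count(adj):
    return sum(len(targets) for targets in adj.values())

early = snapshot(events, 2)
late = snapshot(events, 4)
print(edge_count(early))  # -> 2 edges at t=2
print(edge_count(late))   # -> 4 edges at t=4
```

Comparing such snapshots over time is one simple way to quantify how an interaction network evolves during a misinformation event, independently of what the posts actually say.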

There will be pizza. Welcome!

Daniel Schroeder (Simula and OsloMet) and Kaspara Skovli Gåsvær (former Master of Science student in the CS program)

Published Sep. 30, 2022 5:25 PM - Last modified Sep. 30, 2022 5:28 PM