The impact of bots on opinions in social networks. Social networks make it easy to spread messages and influence large populations. Malicious actors can exploit this reach to manipulate opinions using artificial accounts, or bots. The 2016 U.S. presidential election is suspected to have been the target of such social network interference, potentially by foreign actors, and foreign influence bots are also suspected of having attacked European elections. Multiple research studies confirm that the bots' main activity was sharing politically polarized content in an effort to shift opinions. The potential threat to election security from social networks has become a concern for governments around the world.
In the U.S., members of Congress have not been satisfied with the response of the major social networks and have asked them to take action to prevent future interference in the U.S. democratic process by foreign actors. In response, major social media companies have taken significant steps: Facebook has identified several pages and accounts tied to foreign actors, and Twitter has suspended over 70 million bot accounts.
Despite all of these efforts to counter the threat posed by bots, one important question remains unanswered: how many people were affected by these influence campaigns? More generally, how can we quantify the effect of bots on the opinions of users in a social network? Answering this question would make it possible to assess the potential threat of an influence campaign and to test the efficacy of different responses to it. Studies have looked at the volume of content produced by bots and their reach within the social network during the 2016 election, but this data alone does not indicate how effective the bots were at shifting opinions.
The challenge is that we do not know what would have happened if the bots had not been there. Such a counterfactual analysis is only possible with a model that can predict the opinions of users both in the presence and in the absence of bots. For a model to be useful in assessing the impact of bots, it must be validated on real social network data. Once validated, the opinion model can be used to assess the impact of different groups of bots.
A recent research report by the Massachusetts Institute of Technology (MIT) presented a method to quantify the impact of bots on the opinions of users in a social network. The analysis focused on a network of Twitter users discussing the 2016 presidential election between Hillary Clinton and Donald Trump. The key strategy was to build a model of opinion dynamics in the social network. First, MIT validated the model by showing that the user opinions it predicted aligned with those users' opinions as expressed in their social media posts. Second, MIT identified bots in the network using a custom bot-detection algorithm. Third, MIT used the opinion model to calculate how opinions would shift if the bots were removed from the network.
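The counterfactual step can be illustrated with a toy computation. The report's actual opinion model is not reproduced here; the sketch below assumes a simple DeGroot-style averaging model on a hypothetical four-user network (three humans and one stubborn bot), and measures the shift in the average human opinion caused by deleting the bot.

```python
import numpy as np

def degroot_opinions(W, initial, n_steps=200):
    """Iterate DeGroot averaging: each user's opinion becomes a
    weighted average of the opinions it listens to (rows of W sum to 1)."""
    x = initial.astype(float)
    for _ in range(n_steps):
        x = W @ x
    return x

# Hypothetical 4-node network: nodes 0-2 are humans, node 3 is a bot
# pushing a polarized opinion (fixed near 1.0).
A = np.array([
    [1, 1, 0, 1],   # user 0 listens to itself, user 1, and the bot
    [1, 1, 1, 1],   # user 1 listens to everyone
    [0, 1, 1, 0],   # user 2 listens to user 1 and itself
    [0, 0, 0, 1],   # the bot listens to no one: its opinion never moves
], dtype=float)
W = A / A.sum(axis=1, keepdims=True)   # row-normalize into averaging weights

x0 = np.array([0.2, 0.5, 0.4, 1.0])    # initial opinions in [0, 1]
with_bot = degroot_opinions(W, x0)

# Counterfactual: delete the bot (node 3) and re-normalize the weights.
A_no = A[:3, :3]
W_no = A_no / A_no.sum(axis=1, keepdims=True)
without_bot = degroot_opinions(W_no, x0[:3])

shift = with_bot[:3].mean() - without_bot.mean()
print(f"mean human opinion with bot:    {with_bot[:3].mean():.3f}")
print(f"mean human opinion without bot: {without_bot.mean():.3f}")
print(f"shift attributable to the bot:  {shift:.3f}")
```

Because every human can reach the stubborn bot through the network, opinions with the bot present are dragged toward the bot's fixed value, while the bot-free network settles at a consensus of the humans' initial opinions; the difference between the two averages is the bot's estimated impact.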
MIT discovered that a small number of bots had a disproportionate impact on opinions in the network, and that this impact was primarily due to their elevated activity levels. In the dataset, the bots supporting Clinton caused a larger shift in opinions than the bots supporting Trump, even though there were more Trump bots in the network.