Story
Science Radar
24 May 2022

Social bots: In the battle for interpretative sovereignty in the information war

Tags: Research, Heilbronn, Digitalization

They are now so lifelike that non-experts can hardly distinguish them from human actors: software robots on social media platforms – so-called social bots – serve one central function above all. They disseminate content at breakneck speed and simulate human behavior through likes, comments, retweets, and their own posts – often to manipulate opinions, spread framed content and false news, and even influence societal debates and elections. But how dangerous are they really? And above all, how should we deal with them? Michaela Lindenmayr, a TUM doctoral student at the Professorship for Innovation & Digitalization at the Heilbronn Campus, has made these questions the focus of her research in a joint project with Prof. Dr. Jens Förderer.


It all started with her master’s thesis on hate speech on social media. “During my research, I came to realize that a lot of what falls into the category of hate speech doesn’t come from human users at all, but is generated by artificial intelligence,” she says. On social platforms, bots are often found in the political sphere, “when it comes to influencing people’s opinions,” Lindenmayr says. In elections, for example, this can lead to opinion manipulation, but the problem is broader, the doctoral student says: “Anywhere there is fake news, bots can generate great reach.” In addition, they can distort the picture of public opinion and, for example, make supporters of conspiracy theories appear more numerous than they actually are. Of course, there is always a human agenda behind it: someone has an interest in spreading certain content because it benefits them or others. Identifying the creators behind the algorithm, however, is extremely difficult. Whether an entire troll factory or individual programmers are behind a bot is virtually impossible to prove – let alone who commissioned them.


The role of bots in (information) warfare

In the digital age, social bots also play a role in warfare that should not be underestimated – for example, to simulate public approval or to spread a one-sided assessment of the situation. At this point, it is difficult for researchers like Michaela Lindenmayr to assess the extent to which social bots are being used in the current war in Ukraine. In any case, it is always worth taking a critical look behind the scenes when dealing with an unknown, unauthenticated source, says the young academic. “Basically, it’s important to question information you find on social media and to validate it through other, independent sources before believing it. The prerequisite, of course, is that such sources are accessible.” In the current war, this is precisely where the problem lies: where independent media and information channels are restricted, as is happening in Russia right now, social bots have an easier time. Detecting them is by no means always easy. But there are ways and means to quickly unmask at least the simpler variants – for example, by taking a closer look at the user profile and the activities of the suspected bot.

Debunking through research: Questioning and verifying information is paramount

If a user is active non-stop, posts around the clock, or stands out with clumsy sentence constructions, for example, there is a high probability that you are dealing with a bot. It is also worth taking a look at the friends list: non-human users often have hardly any personal contacts – or only ones that don’t seem ‘real’ themselves. Such social intruders are comparatively easy to identify. Unfortunately, however, the developers have learned, too. “By now there are bots that are programmed to appear more human: their linguistic style seems more genuine, they argue in a more measured way, they take supposed rest and meal breaks, and even their profiles look like what you are used to seeing from real people,” says Michaela Lindenmayr. This makes identification increasingly difficult, now and in the future. The only remedy that then remains is fact-checking with the help of reputable media outside the social networks.
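To make these rules of thumb concrete, here is a minimal sketch in Python of how such signals could be combined into a rough bot-likelihood score. The Profile fields and all thresholds are illustrative assumptions for this article, not part of Lindenmayr’s research or any platform’s actual API; real detectors (for example, the Botometer research tool) use far richer features and trained models.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Profile:
    """Hypothetical stand-in for data a platform might expose about an account."""
    posts_per_day: float     # average posting frequency
    active_hours: List[int]  # distinct hours of day (0-23) with activity
    follower_count: int
    following_count: int

def bot_likelihood(p: Profile) -> float:
    """Combine simple rule-of-thumb signals into a score between 0 and 1.

    Thresholds are invented for illustration only.
    """
    score = 0.0
    if p.posts_per_day > 50:          # posts non-stop
        score += 0.4
    if len(p.active_hours) >= 20:     # active around the clock, no rest breaks
        score += 0.3
    if p.follower_count < 10:         # hardly any personal contacts
        score += 0.2
    if p.following_count > 20 * max(p.follower_count, 1):  # follows far more accounts than follow back
        score += 0.1
    return min(score, 1.0)

# Example: an account posting ~80 times a day across 22 different hours,
# with only 3 followers, scores high on these crude heuristics.
suspect = Profile(posts_per_day=80, active_hours=list(range(22)),
                  follower_count=3, following_count=900)
print(f"bot likelihood: {bot_likelihood(suspect):.2f}")  # -> 1.00
```

As the paragraph above notes, newer bots are built to evade exactly these signals – simulated rest breaks defeat the around-the-clock check, for instance – so such heuristics can only ever serve as a first filter, not a verdict.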


How to turn the tables – and leverage technology for good

All of this sounds largely dystopian. Yet the technology that bot developers use is not inherently bad. On the contrary: used for the cross-channel dissemination of important information, social bots can, for example, summarize news in compact form or even save lives in the event of a disaster. That is why Michaela Lindenmayr advocates not categorically deleting or demonizing them. All the more important, then, is educating the general public, initiating discussions on the topic of social bots, and sharpening people’s ability to question and contextualize information.


Current studies estimate that between nine and 15 percent of participants in controversial discussions on social networks are automated accounts. This figure makes clear how much attention the topic actually deserves. And while educators work on teaching materials, platform operators now face the task of developing new ways and means of monitoring. Because in the end, one danger looms above all, as Lindenmayr explains: “If the proportion of deliberately spread misinformation continues to increase, more and more users could withdraw from social media – simply because they can no longer believe anything served to them in their newsfeed every day. Fortunately, that is exactly what is prompting networks like Twitter to tackle the problem now.”