Recognising fake news
Prof. Dr.-Ing. Hendrik Heuer / Design of trustworthy artificial intelligence
Photo: UniService Third Mission

Recognising fake news - a task for society as a whole

Computer scientist Hendrik Heuer calls for understanding, control and co-design in dealing with artificial intelligence

A press release about a special analysis of the PISA 2022 study recently alarmed internet users across Germany: more than half of pupils in Germany say they have problems recognising false information on the internet. Hendrik Heuer, Professor of Trustworthy Artificial Intelligence Design at the University of Wuppertal, is researching whether and how users can build trust in AI systems, with the aim of developing new social media by and for users.

Understanding, control, co-design

"We need a basic understanding," says Heuer, "we need to roughly understand how a ChatGPT (ChatGPT is a chatbot from the US software company OpenAI, with which users can communicate via text-based messages and images, editor's note) calculates things so that certain errors can be recognised earlier. We also need ways of monitoring the output, i.e. what comes out of this AI system, in a very systematic way and, most importantly, the question of co-design. I come from the field of participatory software development and therefore I believe that people should always be a part of software development." This last idea has been known since the 1970s and we know that people who use software should be involved in development as much as possible, as they are experts in their work. "If we bring this together better, then we will have AI that we can trust to automate processes. I trust this AI."

The fight against misinformation is a task for society as a whole

Heuer's research also includes the fight against misinformation and disinformation, a fight that is almost impossible to win given the daily flood of news. "It's a tough nut to crack," laughs the expert, who recently gave a presentation entitled "From Augustus to Trump" with his team at a congress of the Chaos Computer Club, showing that fake news has been used since antiquity to gain political or personal advantage. Unlike back then, however, today there are AI systems that can generate false information much more quickly, and social media through which many more people can be reached. "Even if it feels like tilting at windmills, we have to keep at it. It's a task for society as a whole!"

Technical support through suitable tools

"For example, we are looking at what can be done at a technical level," explains the scientist, "what tools can be used to support people." Research shows that this also works. There are certain things like crowdsourcing (crowdsourcing is made up of the terms "outsourcing" and "crowd". This means that certain tasks and work processes are outsourced to the mass of internet users, the crowd, editor's note), checklists or news reliability criteria. If you explain certain facts to people at the media education level, then you can really get them to recognise misinformation better. "If I've seen certain things before, I'll recognise them more quickly the next time."

How can social media be improved?

A particular focus of Heuer's work is social media and the question of how it can be improved. He has therefore been studying platforms such as X, formerly Twitter, and Facebook for a long time. However, he also believes that new, independent platforms can be developed. "If we take out of the equation that we have to make a lot of money with advertising, I think we could build a platform here in Wuppertal with the computer science students alone that most people could use for their family, their hobbies, their World of Warcraft clan or whatever group is important to them, to share pictures and have a good time. I would start small." Heuer is therefore looking for existing groups that he can support with his expertise and for which he can develop customised platforms. He is already trialling various options in cooperation with a vocational school in Wuppertal. "Once a program has been written, it can then be used for many applications." In a project with climate activists who work a lot with Instagram and want to defend themselves against sexualised violence and hate online, Heuer and his team are investigating whether filtering options in the browser can also help.

Making the Linux idea socially acceptable again

"I'm a big fan of Linux (Linux refers to free multi-user operating systems, editor's note), this great Linux idea of giving the programming code to everyone so that everyone can change it," says Heuer enthusiastically. "That's a great idea! Linux runs on every Android mobile phone, on almost all Internet servers - it's a huge success story. But this idea they came up with of being able to change the programming code is not being realised. I'm a computer science professor, I've never changed it, my parents have never changed it, there's still something missing. The question is, how can we get it back and make it usable?" At this point, he explains, AI possibilities come into play again, whereby specifying terms is the be-all and end-all. Heuer explains it like this: "Imagine that the terms bicycle, car and spaceship don't exist. They are all called means of transport. And now people come into a shop who actually need a bicycle, but are sold a spaceship, or a car with a spaceship price. That's actually the standard in AI today. AI is everything and nothing and we need more precise terms for users." Large language models, which would also be used in ChatGPT, could be co-designed by users.

It is often difficult to check the truthfulness of news online
Photo: UniService Third Mission

Assessing information - ask someone around you

Many users of social media are often unable to recognise false information. But the expert has advice on this too. "The classic case is news that sounds too good to be true, which is then usually false," explains Heuer. "The same applies to things that are very emotional. It's also always a social process, i.e. you should ask the people around you." Various studies have shown that there are criteria for assessing information. "One is the content. If it is particularly unusual, you can check whether alternative media or newspapers also report on it. The second point is the political orientation of a source. For example, if a website is aligned with a certain party and carries the party logo on the page, then that's not what we expect from journalism. Or you can find out more on Wikipedia by looking at the criticism section in the article about a news source. The important thing is to leave the website you want to check and gather information from other websites you trust."

Text manipulation through altered writing style

The federal elections are just around the corner, voters are more uncertain than ever, and the opportunities for spreading misinformation, especially in this very short election campaign, are particularly diverse. Together with a colleague from Harvard University, Heuer has analysed the credibility of news websites and interviewed journalists and politicians. He says: "The criteria we used included certain content such as conspiracy theories, whether there is strong cooperation with certain political parties, who works for a site as an author, whether journalistic standards are adhered to and who owns a website."
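Criteria like these lend themselves to a simple checklist score. The sketch below is a hypothetical illustration of that idea: the criteria names are taken from the quote above, but the weights and the scoring scheme are invented for this example and are not the method used in Heuer's study.

```python
# Hypothetical checklist score over the criteria named in the interview.
# The weights and the scoring scheme are invented for illustration only.
CRITERIA = {
    "publishes_conspiracy_theories": -2,  # certain content such as conspiracy theories
    "strong_party_cooperation":      -2,  # strong cooperation with certain political parties
    "authors_identifiable":           1,  # who works for a site as an author
    "journalistic_standards_met":     2,  # whether journalistic standards are adhered to
    "ownership_transparent":          1,  # who owns a website
}

def credibility_score(site: dict[str, bool]) -> int:
    """Sum the weights of all checklist criteria that apply to a news website."""
    return sum(weight for name, weight in CRITERIA.items() if site.get(name))

# Example: named authors and transparent ownership, but close ties to one party.
example_site = {
    "authors_identifiable": True,
    "ownership_transparent": True,
    "strong_party_cooperation": True,
}
print(credibility_score(example_site))  # 0 -> treat with caution
```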

AI systems are very susceptible to attention

Up to 70% of the videos viewed on YouTube are selected by an AI system. A system that demonstrably influences friendships and acquaintances may also be able to exert political influence, and that is where it gets dangerous. "Especially since the advent of TikTok, even more is being selected by AI," explains the expert. "How far AI can influence politics, however, is also very difficult to research. A few AI-generated items may not change my voting behaviour on their own, but in total they do of course have an impact on people. The news I see, taken as a whole, certainly influences how I perceive the world. And that's why, as researchers, we should also make sure that the news we see is really news and not made-up material pursuing a political goal."

AI systems are very susceptible to attention, and this can develop into a problem. Heuer explains: "If there is a piece of news that is being talked about a lot, and this can also be false information, the algorithm makes no distinction between importance and attention. If something gets a lot of attention, the algorithm is happy, shows it to even more people, and it gets even more attention. We also know that strongly emotionalising content in particular attracts a lot of attention. And that is exactly what we have to tackle. If we say we have a platform here that works more like a news programme, something I watch for a quarter of an hour in the evening, that's a different approach and doesn't fit at all with the business model of the current platforms, which earn a lot of money from advertising." However, Heuer is already seeing changes. Many journalists, for example, have left the platform "X" and signed up to "Bluesky". "So nothing is set in stone."
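The feedback loop Heuer describes, attention earning more attention, can be made visible in a few lines. Below is a minimal, purely hypothetical simulation; the two posts, the probabilities and the 0.2 baseline engagement rate are all invented numbers, not data from any real platform.

```python
# Minimal, purely hypothetical simulation of the attention feedback loop:
# the ranking rewards past attention, never importance or truth.
import random

random.seed(42)  # reproducible run

# Two posts competing for reach; the only difference is their emotional pull.
posts = [
    {"name": "sober report",     "emotional_pull": 0.1, "views": 1},
    {"name": "emotional rumour", "emotional_pull": 0.6, "views": 1},
]

for _ in range(1000):
    # Attention-based ranking: the more views a post already has,
    # the more likely it is to be shown again.
    shown = random.choices(posts, weights=[p["views"] for p in posts])[0]
    # Emotional content is engaged with more often (invented 0.2 baseline),
    # and every engagement earns it even more future reach.
    if random.random() < 0.2 + shown["emotional_pull"]:
        shown["views"] += 1

for post in posts:
    print(post["name"], post["views"])
# Typical outcome: the emotional rumour accumulates far more views,
# although the loop never asked whether it was important or even true.
```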

Nothing has to be as it seems

"We need to talk to each other more again," says Heuer firmly, "because even if a piece of information has 40,000 likes, it doesn't necessarily have to be correct. It's all very easy to manipulate. I don't think you should trust these figures, we have to collect our own figures." So what could user-friendly social media of the future look like? The users' answers are surprisingly simple. "If you ask how you imagine the social media of the future, the things that people want are contact with friends and family or sharing information about hobbies," explains Heuer. "None of this is rocket science, but supporting this is our research work, we're just finding out. We work with prototypes because it's always easier to show something and then find out whether it works or what needs to be changed." The computer scientist sees the success of his work primarily in dialogue with users. "People actually know what they need. We need communication, openness and trust."

Uwe Blass

Hendrik Heuer studied Digital Media at the University of Bremen and then completed a Master's degree at the Royal Institute of Technology in Stockholm and Aalto University in Helsinki. In 2020, he completed his doctorate on Human-Computer Interaction & Machine Learning at the University of Bremen. He then worked there as a postdoc and substitute professor. He was also a postdoc at Harvard University in the USA. Since 2024, he has been researching and teaching on the topic of designing trustworthy artificial intelligence at the Centre for Advanced Internet Studies (CAIS) in Bochum and the School of Mathematics and Natural Sciences at the University of Wuppertal.