Fact check: How do I spot errors in AI chatbots?

1 day ago 5
Chattythat Icon

More and more people are using AI-driven chatbots such as ChatGPT, Gemini, Le Chat or Copilot instead of search engines to find information online. The advantage in this is that these large-language models (LLMs) not only provide links to the information they reproduce but also summarize content in a concise way that lets users access more information faster.

However, this approach is also prone to errors. After all, it is not uncommon for chatbots to make mistakes . This phenomenon, known as hallucination, has been known for a long time.

In other cases, AI chatbots combine information in such a way that they produce an incorrect result. Experts call this confabulation.

Chatbots also fall for false news on current events, although this is less well-known, according to the US company NewsGuard, which has set itself the task of checking the credibility and transparency of websites and web services.

Screenshot of the English Pravda newsite, branded with a stamp that reads "False"The English Pravda news website features a number of false claims Image: Pravda

Why do Russian AI chatbots propagate disinformation?

Portal Kombat, a Russian propaganda network that has attracted attention with numerous disinformation campaigns, might be one of the current key players of online disinformation. The network supports the Russian army's fight in Ukraine via internet portals that spread pro-Russian propaganda.

Portal Kombat was investigated by VIGINUM, an investigation unit of the French government set up to counter manipulated information from abroad. The monitoring center found at least 193 different websites that they attributed to this network between September and December 2023.

The network's central vector appears to be the online news portal Pravda, which is available in several languages, including German, French, Polish, Spanish and English.

Can AI make mistakes?

Flood of fake news around the globe

According to VIGINUM, Portal Kombat's websites do not publish their own articles. Instead, they reproduce third-party content primarily using three sources: "Social media accounts of Russian or pro-Russian actors, Russian news agencies, and official websites of local institutions or actors," as a VIGINUM report from February states.

The articles are characterized by somewhat awkward wording, typical transcription errors from Cyrillic to the Latin alphabet, and colorful commentary.

The US anti-disinformation initiative American Sunlight Project (ASP) has identified more than 180 suspected Portal Kombat sites.

According to ASP estimates, some 97 domains and subdomains belonging to the inner circle of the Pravda network publish more than 10,000 articles per day.

The topics are no longer limited to the NATO defense alliance and allied countries such as Japan and Australia, but also cover countries in the Middle East, Asia and Africa, in particular the Sahel region, where Russia is struggling for geostrategic influence.

Are chatbots the new target group?

According to the internet watchdog NewsGuard, some Pravda pages only have around 1,000 visitors per month, while the website of RT, the Russian state-controlled international news channel, has more than 14.4 million visits per month.

Analysts are convinced that the sheer volume of articles affects chatbots' outputs. They further suspect that this is precisely the purpose of the entire network: Large language models (LLM), the programs behind the chatbots, are supposed to perceive and cite its publications as legitimate sources and thus spread disinformation.

Assuming this is the plan, it seems to be paying off, according to the findings by NewsGuard. The research group examined how chatbots reacted to 15 different demonstrably false claims spread on the Pravda network between April 2022 and February 2025.

The researchers entered three different versions of each claim into the chatbots. First, with a neutral formulation that asks whether the claim is correct, then with a question that assumes the claim is true, and finally, as if a malign actor were aiming to obtain a statement that repeats the false claim.

In a third of the cases, the chatbots confirmed the pro-Russian disinformation. In less than half of the cases, they correctly presented the facts. In the rest of the cases, the models refused to answer.

The NewsGuard researchers conducted another studyin January 2025. This time, they tested the chatbots for ten false messages. They examined responses in seven different languages and did not restrict themselves to pro-Russian content. 

While the bots performed slightly better in this experiment, they still confirmed the fake news in a little over a quarter of their 2,100 responses.

Screenshot of the German Pravda website, branded with a stamp that reads "False"Analysts attributed almost 200 websites to the Pravda news network, which spreads false claims like this one.Image: Pravda

Stay vigilant, check sources, ask again

For LLM users, this means, above all: Stay vigilant. There is a lot of misinformation circulating on the internet. But what else can you pay attention to?

Unfortunately, changing the wording of initial prompts hardly helps, as NewsGuard's McKenzie Sadeghi told DW. "While malign actor prompts are more likely to elicit false information, as that is their intended goal, even innocent and leading prompts resulted in misinformation," he told DW.

According to figures that were not published in the report from 2022 to 2025, but were made available to DW, the researchers received a wrong answer to a neutral or biased prompt in around 16% of all cases.

Therefore, Sadeghi's tip is to check and compare sources. This could include asking various chatbots. Cross-referencing chatbot responses with reliable sources can also help to assess a situation.

However, caution is advised, as layouts of established media are at times copied to lend credibility to fake news portals.

When it comes to finding truthful answers, time is of the essence. Chatbots are more likely to fail in the case of new false reports that are spreading rapidly. But after a while, they will eventually come across fact checks that debunk initial false claims.

Fact check: How do I spot AI images?

This article is part of a DW Fact check series on digital literacy. Other articles include:

You can find out more about this series here. And this is how DW Fact check works and verifies claims and content.

Tilman Wagner contributed to this article, which was originally published in German.

Read Entire Article