The JRC explains: AI: friend or foe of disinformation?

24 September 2025 | Joint Research Centre

Just as AI can enable the rapid production and dissemination of disinformation and foreign information manipulation and interference, it can also help to detect and analyse them.

Enabling disinformation

AI models have revolutionised content creation by making it remarkably easy to produce highly convincing content rapidly and at scale, and to analyse huge amounts of data. These capabilities can be exploited for good or for ill. The potential for disinformation is vast: AI-generated content can be used to mislead the public, erode trust in the media, and distort the information we have access to. Such activity can have profound effects, including altering the outcomes of elections, influencing political processes, and shifting public support on issues like climate change, public health, armed conflicts or migration.

How can AI help?

Just as AI can be used to spread disinformation, it can also be used to detect it. Large language models (LLMs) can analyse patterns in the dissemination of messages and narratives to spot signs of coordinated intent, which can provide evidence of manipulation. For example, texts from websites can be grouped into clusters to identify underlying similarities indicative of a planned campaign or incident. Because the clustering is multilingual, stories and narratives can be identified across languages and countries, revealing how disinformation campaigns are constructed and disseminated.

How else can we fight disinformation?

As was the case before AI, it is much harder to detect and refute falsehoods than to generate misleading and manipulative content. Trustworthy, transparent communication from public bodies and stakeholders across the whole of society (civil society, academia, journalists and others) is key to promoting reliable information.
Encouraging media and AI literacy among citizens is also crucial. Science and policy will continue to investigate how to harness the benefits of AI while mitigating its risks, so that powerful AI tools serve to enhance and empower humanity rather than control or deceive it.

What are JRC scientists doing on this?

Researchers at the JRC are developing advanced models to enhance situational awareness in the context of foreign information manipulation and interference (FIMI) and disinformation. To achieve this, they are analysing a range of media and disinformation networks. According to this analysis, the number of Russian media articles published in non-Russian languages has grown since 2013, increasing ahead of the invasion of Crimea in 2014 and again before the full-scale invasion of Ukraine, reaching readers outside Russia with pro-Kremlin narratives at key moments. LLMs are crucial in making sense of these data: the researchers are training them to detect manipulative framings designed to influence readers' opinions. Human-AI collaboration, for example through manual annotation, plays a key role in this.

How is the EU fighting disinformation?

The EU has taken several steps to counter disinformation and misinformation. The European Democracy Action Plan, launched in 2020, aims to build more resilient democracies, protect the integrity of elections, and combat FIMI and disinformation. The Commission is now taking further action in this direction with the European Democracy Shield initiative, planned for the end of 2025. JRC scientists are supporting one of the initiative's pillars: building up situational awareness. Their work informs policymakers, helping to shape policy actions and communications.
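The human-AI collaboration mentioned above, where manual annotation supplies training labels for models that detect manipulative framing, can be illustrated with a minimal supervised-learning sketch. This is not the JRC's method: their models are LLM-based, and the annotated headlines below are invented; a simple scikit-learn classifier stands in for the idea.

```python
# Toy sketch of the annotate-then-train loop: humans label examples of
# manipulative vs neutral framing, and a model learns from those labels.
# All texts are invented; a real system would fine-tune an LLM instead.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human annotations (label 1 = manipulative framing)
annotated = [
    ("Traitorous elites are poisoning our children", 1),
    ("The corrupt regime hides the shocking truth", 1),
    ("Puppet government betrays its own citizens again", 1),
    ("Parliament passed the budget after a long debate", 0),
    ("The ministry published updated health statistics", 0),
    ("Officials announced the schedule for the election", 0),
]
texts, labels = zip(*annotated)

# Train a simple classifier on the manually annotated examples
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Apply it to unseen headlines; in practice, uncertain cases would be
# sent back to human annotators, closing the human-AI loop.
preds = model.predict([
    "The corrupt regime hides the truth from our children",
    "The ministry announced updated statistics",
])
print(preds)
```

The loop matters more than the model: each round of human annotation expands the labelled set, and the retrained model in turn surfaces new candidate texts for annotators to review.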
The 2025 State of the Union speech by Commission President Ursula von der Leyen highlighted the establishment of a new European Centre for Democratic Resilience to gather expertise from Member States and partner countries.

Read more about the Commission's initiatives on countering foreign information manipulation and interference.