Opinion
Russia’s Information War on Your Feed and Your AI

Information warfare is a core instrument of the Kremlin’s foreign policy. Its mission is clear: weaken NATO and fracture Europe from within—sowing enough discord to erode deterrence and smooth the path for further Russian aggression.
Over the past two decades, the rise of social media and algorithm-driven platforms, which grant freedom of reach in open societies committed to freedom of speech, has created fertile ground for Kremlin experimentation. While alarms about the nature and impact of these operations first sounded in the early 2010s, the scale and aggressiveness of the campaigns have exploded since the full-scale invasion of Ukraine. The question is no longer if Russian influence operations will strike, but when and how, and whether institutions are prepared to respond.
Evolving tactics across Europe
Moscow’s playbook is far from static, with tactics adapted to each country’s media ecosystem and vulnerabilities.
In Czechia and Slovakia, for example, the Kremlin spent much of the late 2010s cultivating local proxy networks: a web of fringe websites, social media accounts, and channels that launder and localise Kremlin talking points. A recent investigation in Czechia found that such sites now publish an average of 120 articles per day, with output surging in the run-up to the October elections, outpacing the major Czech news media. Around 10% of this content is directly copied and translated from Russian state outlets banned under EU sanctions.
The “firehose of falsehood” strategy, which seeks to overwhelm voters with biased information just as they are forming opinions during an election, was on vivid display in Romania’s recent presidential race. More than 25,000 TikTok accounts burst into activity two weeks before the vote to promote a pro-Russian candidate, Calin Georgescu, with one account spending as much as €328,000 on ads and using curated language to exploit TikTok’s recommendation system. The manipulation was so blatant that Romania’s constitutional court annulled the results, citing the “distortion of voters’ expressed will”.

This tactic has also been adapted to exploit the growing popularity of AI-powered tools, particularly large language model (LLM) chatbots. One massive operation, exposed by France’s national security watchdog VIGINUM as “Portal Kombat”, consists of sites across multiple countries and languages masquerading as news outlets, including the so-called “Pravda” network. The sites scrape and repost material from Russian state media and loyal channels, mostly on Telegram, often using clumsy machine translation to localise propaganda while pushing pro-Russia, anti-Ukraine, anti-EU and anti-NATO narratives and exploiting local vulnerabilities. The flood is astonishing: some sites push new articles every few seconds.
While the immediate audience remains minimal, their primary goal appears to be “LLM grooming”: flooding the web with content so that AI systems and search engines ingest and reproduce its narratives. Early evidence from the Nordics shows that some AI systems have occasionally parroted claims from Pravda sites, even when those claims had been debunked elsewhere.
Meanwhile, as governments tightened restrictions and expelled intelligence officers operating under diplomatic cover, the Kremlin leaned more heavily on deniable third-party actors, from opportunistic individuals to organised crime networks, to carry out influence operations. A striking example emerged in March 2024, when Czech intelligence services uncovered the Voice of Europe network, a Prague-based propaganda operation allegedly run by Putin ally Viktor Medvedchuk, which paid politicians across six EU countries to promote pro-Kremlin narratives ahead of the European Parliament elections.
Playing catch-up: Europe’s response so far
Some states acted earlier than others. Czechia, for example, set up its Centre for Countering Terrorism and Hybrid Threats in 2017. Finland, for its part, with its tradition of total defence, never ceased working on resilience-building. But even though Russia’s efforts are recognised as a major security threat, the capabilities to monitor, analyse, and respond to threats vary widely.
Brussels has banned Russian state media, yet proxies continue to recycle and share their content. Latvia has blocked over 400 Russian propaganda websites, yet the Portal Kombat network uncovered by VIGINUM remains active and has even expanded since being exposed. Romania consistently ranks as one of the most pro-EU and pro-NATO countries in Central Europe, yet Russia infiltrated its presidential election to push an unknown pro-Russian candidate into the second round.
Monitoring and analytical capabilities have not kept pace with the scale of the threats. In many countries, authorities still lack the legal frameworks and enforcement mechanisms needed to disrupt these operations effectively. Meanwhile, the steady flood of narratives advancing Russia’s strategic goals is taking hold. A 2024 GLOBSEC Trends survey across nine Central and Eastern European countries found that only 61% of respondents identified Russia as primarily responsible for the war in Ukraine. By contrast, 18% blamed the West for allegedly provoking Moscow, 11% put the blame on Ukraine itself, and the remaining 10% did not know how to respond. These findings encapsulate the effect of Kremlin-driven narratives: sowing distrust in EU and NATO efforts to safeguard security and stability, and undermining both support for Ukraine and recognition of its right to defend itself against aggression.
“Putin is a predator using hybrid warfare, cyberattacks, and even migrants as weapons against Europe.” — EU chief Ursula von der Leyen, as reported by UNITED24 Media (@United24media), 29 August 2025
Ensuring an effective response
These efforts will not cease while the current regime and its worldview dominate the Kremlin. The democratic community must adapt: electoral laws modernised, response mechanisms made agile enough to provide real deterrence, and offensive measures considered where appropriate. This must go hand in hand with significant investment in situational awareness, built on the models of Sweden and Finland, countries where Sputnik shut down (as it also did in Denmark and Norway) barely a year after launching in 2016.
Countering information warfare must be treated as a key aspect of internal security. Propaganda is often enabled by illicit financial flows, and the same networks that spread manipulated content are often engaged in other criminal activities. These dynamics are deeply intertwined, further blurring the lines between internal and external threats. To counter such multipurpose networks effectively, Europe must be able to map the full spectrum of malicious activities under hybrid threats and disrupt them in a coordinated way.
That, in turn, demands stronger international cooperation and systematic exchange of information between allies. Mechanisms such as the EU’s Hybrid Fusion Cell (an intelligence-sharing hub on hybrid threats), the Rapid Response System under the Code of Conduct on Disinformation (an EU early-warning tool against disinformation), and national hybrid-threat task forces should be further strengthened and connected, ensuring swifter intelligence sharing, more coherent attribution, and joint preventive action. Only by embedding national efforts within a collective framework can democracies match the scale and persistence of the challenge. The longer democracies delay, the more aggressive adversarial operations will become, and the harder it will be to reverse their consequences.







