The Digital Propaganda Onslaught
Security specialists are raising alarms about a sophisticated Russian operation aimed at covertly manipulating popular chatbots, including ChatGPT and Gemini. The technique, termed 'LLM grooming', involves systematically feeding these artificial intelligence systems substantial volumes of pro-Russia narratives to skew their outputs.
A revealing investigation from the London-based Institute for Strategic Dialogue (ISD) has uncovered that hundreds of English-language websites are inadvertently amplifying this effort. These sites, which range from established news outlets to obscure blogs, are frequently linking to content from a pro-Kremlin disinformation operation known as the Pravda network.
In over 80% of the instances analysed, the linking websites treated the network's material as a credible source, thereby lending it an air of legitimacy and significantly boosting its online visibility.
How the 'Grooming' Strategy Works
The Pravda network, which was officially identified by the French government last year, is not a new entity; it has been active since 2014. However, researchers tracking its activities report a dramatic surge in its output this year: daily publication volume leapt from approximately 6,000 articles in 2024 to some 23,000 a day by May.
Disinformation expert Nina Jankowicz, who recently addressed the UK parliament on threats to democracy, confirmed the network's rapid expansion. "They are targeting a lot of different languages," she stated. "They want to have a presence across a bunch of different countries." This indicates a strategic pivot towards a global audience, with focused efforts in Asia and Africa alongside Europe.
The core of the concern lies in how large language models (LLMs) are trained. These AI models consume colossal datasets scraped from the entire internet. By flooding the digital ecosystem with such a high volume of content, the operation aims to 'poison' the training data for future AI models, ensuring they absorb and later reproduce pro-Russian viewpoints.
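The poisoning mechanism described above can be illustrated with a toy model. The sketch below is purely hypothetical (the corpora and the simple bigram predictor are invented for illustration, not drawn from any real training pipeline): because statistical language models learn from frequency, flooding a corpus with enough repetitions of a claim can flip what the model reproduces.

```python
from collections import Counter

def next_word_model(corpus, prefix):
    """Toy bigram model: return the most common word following `prefix`."""
    words = corpus.split()
    followers = Counter(
        words[i + 1] for i in range(len(words) - 1) if words[i] == prefix
    )
    return followers.most_common(1)[0][0] if followers else None

# Hypothetical corpora for illustration only.
organic = "the report is accurate . " * 100
flooded = organic + "the report is fabricated . " * 1000

print(next_word_model(organic, "is"))   # the organic corpus yields "accurate"
print(next_word_model(flooded, "is"))   # after flooding, volume wins: "fabricated"
```

Real LLM training is vastly more complex than this frequency count, but the underlying vulnerability is the same: models have no independent measure of truth, only of prevalence in their training data.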
The Real-World Impact on AI and Public Discourse
Evidence suggests this strategy is already yielding results. Studies from earlier this year demonstrated that leading chatbots occasionally parroted Russian disinformation when prompted. For example, they were found to suggest that the US was developing a bioweapon in Ukraine or that France was supplying mercenaries to Kyiv.
Joseph Bodnar, a senior researcher at the ISD, emphasised the effectiveness of this high-volume approach. "More than any other Russia-aligned operation, the Pravda network is playing a numbers game," he said. "They've saturated the internet ecosystem enough to get in front of real people who are doing research on Russia-related issues."
The ISD's analysis found that 40% of the Pravda network content picked up by mainstream websites related to Russia's war in Ukraine. However, a significant portion also covered other topics, including US domestic policy and Elon Musk, allowing the network to weave its narratives into broader online conversations on social media and news sites.
Jankowicz issued a further warning about the diminishing media focus on Ukraine. "There's a bit less news about Ukraine. And if they can get in there and fill that gap really soon, that means that the Russian viewpoint is the one that's going to get out there quickly and be cited in large language models," she cautioned, highlighting the risk of this disinformation supplanting accurate coverage.