
Russian Disinformation Campaigns Dismantled by International Law Enforcement

Written by Mark Bruno

 

On July 9th, the U.S. Department of Justice, aided by Dutch and Canadian intelligence, disrupted an AI-driven Russian disinformation campaign. The operation involved seizing domains and social media accounts used to create fake personas and spread propaganda. The campaign, linked to Russian state media and built on a proprietary AI software package called "Meliorator," aimed to influence U.S. elections and audiences in other countries. The Dutch general and military intelligence services (AIVD and MIVD) also identified the misuse of Dutch infrastructure for these cyber activities. The seizure significantly hindered the campaign's capabilities, but it's only one step in combating the massive narrative webs being laid out by Kremlin-backed actors.


‘Troll Farms’, ‘Deepfakes’ and the Threat to Digital Truth

Russia has long been recognized as a significant threat in hybrid warfare, particularly through its information operations. This capability was most notably demonstrated during the 2014 annexation of Crimea and the support for separatist factions in Donbas.


Russian disinformation efforts aimed to justify the annexation and undermine the Ukrainian government, using state-sponsored groups, bots, and trolls on social media to amplify pro-Russian narratives and spread false information about the Ukrainian government and military actions. The confusion throughout the earliest days of the conflict empowered more kinetic, "irregular" operations on the ground, often involving Russian soldiers posing as separatist militants as they moved to occupy vast areas of Eastern Ukraine. Russia's dominance of the narrative went largely unchallenged in popular culture until the full-scale invasion began in 2022.

The Internet Research Agency in St. Petersburg (above) is a known “troll farm”

This capacity to seemingly alter reality has only grown in the decade since, as Russian disinformation efforts spread across numerous platforms with fake or re-contextualized news, video, and written content. The rise of Generative AI models (such as ChatGPT) has made both creating and spreading this content far easier, while potentially blunting any consequences from the international community. Much of this effort has concentrated on influencing elections in countries of strategic importance to Ukraine.


The Recent Investigations and Takedowns

Disruption of AI-Enhanced Bot Farms

In July 2024, the U.S. Department of Justice, in collaboration with international partners, disrupted a sophisticated AI-enhanced social media bot farm operated jointly by Russian state entities, including the RT (Russia Today) media network and the FSB (Russia's Federal Security Service). The operation involved seizing domain names and identifying 968 social media accounts used to disseminate disinformation. The bot farm created realistic fake personas to promote pro-Russian narratives and sow discord, targeting audiences in the United States and other Western countries.


Other Recent Instances of Countering Disinformation

In 2022, a widely circulated RAND Corporation report examined Russia's persistent social media efforts and election interference as an additional avenue for its strategic gains in Ukraine. The report's conclusion emphasized a dire need for international cooperation and projected that NATO- and EU-aligned countries would need to address these concerns as 2024 approached, a year in which, famously, more than 50 countries would go to the polls.


In the time since, international partnerships have also exposed Chinese disinformation tactics in other theaters. In 2023, pro-Chinese narratives featuring AI-generated content circulated across YouTube and Facebook in an operation dubbed "Shadow Play," targeting sensitive topics in the U.S. and Australia. The operation eventually resulted in the takedown of thousands of accounts across several of Google's and Meta's products.


Earlier this year, Moldova's upcoming elections came into the spotlight. A joint statement in June by the U.S., Canada, and the UK condemned Russia's alleged electoral interference in Moldova. In response to the individual governments' findings, representatives of the Five Eyes intelligence alliance promised greater cooperation and offered assistance to Moldova's government. Moldova, despite not being a part of NATO, has its own long-running issues with Russian interference, particularly in the separatist region of Transnistria, which is backed by Moscow and hosts an estimated 1,500 Russian soldiers. Moldova has become a target of interest for the Kremlin due to its strategic position relative to Ukraine and its recent ascension to EU candidate status.


The Threat Actors and Campaigns Involved in Current Russian Operations

Russia's network for generating and spreading disinformation involves a web of threat actors, deniable assets, and campaigns variously tied to more "traditional" state-run media (such as Sputnik and RT) as well as to its military and foreign intelligence services. While there is no confirmation that any of the following entities have been neutralized through the recent operations, they are suspected to have been seriously impacted.


CopyCop

CopyCop is a network of disinformation creators that utilizes AI to manipulate and spread politically charged content, targeting divisive issues in the US and Europe. The operation creates fake news by scraping and rewriting articles from conservative-leaning and state-affiliated media, amplifying pro-Russian narratives with the aim of influencing election outcomes and public opinion. The infrastructure supporting CopyCop has ties to disinformation outlets like DCWeekly and The Boston Times.


Doppelganger

The Doppelganger operation clones legitimate media and government websites to distribute pro-Russian disinformation. Outlets it has attempted to clone include Le Monde, The Guardian, Der Spiegel, and Fox News. By creating fake articles and videos, Doppelganger targets various countries, including the US and EU member states, spreading narratives that depict Ukraine negatively and undermine support for sanctions against Russia.
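
Defenders often spot Doppelganger-style infrastructure by comparing newly observed domain registrations against the names of the legitimate outlets being impersonated. The sketch below is a minimal illustration of that idea in Python; the domain lists are hypothetical examples rather than real campaign infrastructure, and production pipelines would layer in homoglyph handling, WHOIS records, and TLS certificate data.

```python
# Illustrative sketch: flag lookalike ("doppelganger") domains by edit
# distance against known legitimate outlets. All domains below are
# hypothetical examples, not observed campaign infrastructure.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,                # deletion
                curr[j - 1] + 1,            # insertion
                prev[j - 1] + (ca != cb),   # substitution
            ))
        prev = curr
    return prev[-1]

LEGITIMATE = ["theguardian.com", "lemonde.fr", "spiegel.de", "foxnews.com"]

def flag_lookalikes(candidates, max_distance=3):
    """Return (candidate, legitimate, distance) triples for near-matches.

    A small but nonzero edit distance is a crude typosquatting signal:
    an exact match is the real site, and distant names are unrelated."""
    hits = []
    for cand in candidates:
        for legit in LEGITIMATE:
            d = levenshtein(cand, legit)
            if 0 < d <= max_distance:
                hits.append((cand, legit, d))
    return hits

if __name__ == "__main__":
    # Hypothetical suspicious registrations, for illustration only.
    suspects = ["theguard1an.com", "spiegel.ltd", "example.org"]
    for cand, legit, d in flag_lookalikes(suspects):
        print(f"{cand} resembles {legit} (edit distance {d})")
```

Edit distance alone is cheap enough to run over bulk feeds of new registrations, which is why it tends to be a first-pass filter rather than the last word; cloned sites hosted on entirely unrelated domains require content-level comparison instead.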


Recent Reliable News (RRN)

Recent Reliable News (RRN) is part of the Doppelganger operation, creating cloned versions of legitimate media websites to disseminate pro-Russian narratives. By mimicking trusted news sources, RRN spreads misinformation about the Ukraine conflict and undermines Western support, focusing on manipulating public opinion through realistic fake news.


Project Kylo

Project Kylo, managed by Russia’s SVR (Foreign Intelligence Service), focuses on spreading fear and uncertainty to destabilize Western governments and diminish support for Ukraine. This operation uses fake NGOs to organize anti-establishment demonstrations and leverages advanced technologies to bypass traditional media channels, thereby directly influencing Western audiences with disinformation campaigns. The link to the Russian SVR has been established through an SVR officer named Mikhail Kolesov.


John Mark Dougan

John Mark Dougan, a former U.S. Marine and police officer now living in Russia, is accused of operating a disinformation network that produces and distributes fake news in association with DCWeekly and RRN. His network generates content that appears to come from credible sources, leveraging advanced AI technologies to spread pro-Russian propaganda and significantly shaping public perception.


(above) Pages from The Boston Times, a Russian disinformation conduit. Note that it shows relatively normal US conservative-leaning headlines in its recent posts next to obvious fake stories about Ukraine.



The Role of Generative AI and Large Language Models

Russian disinformation campaigns have increasingly harnessed Generative AI and Large Language Models (LLMs) to enhance their operations. Generative AI facilitates the creation of deepfakes—realistic yet fabricated images, audio, and video—that blur the lines between reality and falsehood. These technologies enable the production of highly persuasive and coherent text, mimicking human writing to generate misleading articles, social media posts, and comments that seamlessly integrate with legitimate content.


Meliorator - The Kremlin's AI-Enhanced Bot Farm Software

Proprietary AI tooling is another critical component of these campaigns. Deepfakes can fabricate speeches by public figures or depict events that never happened, making it challenging for audiences to discern truth from falsehood, while advanced language models generate persuasive, human-sounding articles, social media posts, and comments at scale.


Meliorator is an AI-enhanced software package developed under the direction of the Russian state news outlet RT. It was designed to create and manage a social media bot farm, generating fictitious profiles to disseminate pro-Russian narratives and influence public opinion, particularly targeting the United States and other Western countries.


LLMs and AI-generated content have advanced social media manipulation, allowing bots and trolls to generate personalized, contextually relevant responses, making interactions appear genuine. This sophistication enhances the spread of disinformation, with AI tools adapting to real-time events and conversations, providing disinformation actors agility in steering public discourse. The strategic use of these technologies allows for precision targeting of specific demographics, exploiting biases and deepening social divisions.


The global reach of these technologies is suspected of having significantly impacted elections and public opinion across multiple countries. The international community has responded with countermeasures like the EU's East StratCom Task Force and NATO's StratCom Centre of Excellence to combat AI-driven disinformation. However, the sophistication of Generative AI and LLMs poses significant challenges for detection and verification, necessitating new technologies and methodologies to effectively counter these threats.
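
One practical countermeasure follows from a simple observation: bot networks tend to push near-identical phrasing through many accounts at once. Below is a minimal, self-contained Python sketch of that idea using word-shingle overlap; the account names and posts are invented for illustration, and real systems combine text similarity with timing, follower-graph, and metadata signals rather than relying on wording alone.

```python
# Illustrative sketch: flag coordinated, near-duplicate posting across
# accounts. Accounts and posts below are invented examples.

from itertools import combinations

def shingles(text: str, n: int = 3) -> set:
    """Word n-grams -- a simple fingerprint of a post's phrasing."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Overlap between two shingle sets (0.0 disjoint, 1.0 identical)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_coordination(posts: dict, threshold: float = 0.4):
    """Yield account pairs whose posts are suspiciously similar."""
    fingerprints = {acct: shingles(text) for acct, text in posts.items()}
    for (a1, f1), (a2, f2) in combinations(fingerprints.items(), 2):
        score = jaccard(f1, f2)
        if score >= threshold:
            yield a1, a2, score

if __name__ == "__main__":
    posts = {  # hypothetical accounts and posts
        "acct_001": "sanctions only hurt ordinary citizens and must end now",
        "acct_002": "sanctions only hurt ordinary citizens and should end now",
        "acct_003": "looking forward to the weekend hiking trip with friends",
    }
    for a1, a2, score in flag_coordination(posts):
        print(f"{a1} and {a2} overlap {score:.0%} -- possible coordination")
```

In practice this kind of fingerprinting is scaled up with locality-sensitive hashing, and LLM-generated paraphrases blunt it considerably, which is one reason detection increasingly depends on behavioral signals as well.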


Broader Impacts and Strategic Importance

These operations will likely escalate, given the ongoing and evolving nature of the threat posed by Russian information warfare. By leveraging AI and other advanced technologies, disinformation campaigns have become more sophisticated, necessitating equally advanced countermeasures. The success of these takedowns highlights the importance of international cooperation, advanced technological capabilities, and proactive measures in protecting democratic processes and public opinion from foreign interference.



 


 



