
In the digital age, the information war is no longer fought only in newspapers or on television. It takes place on social media, where the line between communication, manipulation, and propaganda is becoming increasingly blurred. Many states exploit these platforms to weigh in on political debates, shape public opinion, and even destabilize rival regimes.

An overview of these influence strategies, as discreet as they are effective.

1- Astroturfing: creating false popular support

Disinformation campaigns often rely on a well-established phenomenon: astroturfing, the artificial fabrication of a grassroots movement through fake, often automated, accounts. The goal: to make it appear as though a large number of people support an idea, when in reality that support is nothing more than a digital illusion.

By altering the perception of collective support, these campaigns exploit a well-known psychological bias: social proof. The average user is more likely to believe or share an idea if they think it is widely accepted. Deployed in large numbers, these fake accounts can also game recommendation algorithms, which then prioritize the most widely shared content, even when it is misleading or biased.
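
To make the mechanism concrete, here is a minimal sketch in Python of how a purely engagement-ranked feed can be gamed: a few thousand coordinated fake accounts are enough to push a misleading post above genuinely popular content. The post names and all figures are invented for illustration; no real platform's ranking system is this simple.

```python
import random

# Toy simulation of an engagement-ranked feed. This is a deliberately
# simplified sketch, not any real platform's ranking system; the post
# names and all numbers are invented for illustration.

random.seed(42)

share_counts = {"balanced_reporting": 0, "misleading_claim": 0}

# 10,000 organic users: the accurate post is genuinely more popular.
for _ in range(10_000):
    if random.random() < 0.05:    # 5% share the accurate post
        share_counts["balanced_reporting"] += 1
    elif random.random() < 0.01:  # ~1% share the misleading one
        share_counts["misleading_claim"] += 1

# A coordinated cluster of fake accounts each shares the misleading
# post once: cheap to automate and invisible in aggregate statistics.
FAKE_ACCOUNTS = 2_000
share_counts["misleading_claim"] += FAKE_ACCOUNTS

# Rank purely by share count, as a virality-weighted recommender would:
# the astroturfed post now outranks organically popular content, so the
# feed amplifies it to real users, compounding the social-proof effect.
for title, shares in sorted(share_counts.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{shares:>5} shares  {title}")
```

Real ranking systems weigh many more signals than raw shares, but as long as engagement dominates the score, the same inflation attack applies.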

2- Influencer marketing for political purposes

We often think that influencers only sell sneakers or mascara. But some seemingly harmless content can have a far more strategic purpose: spreading an ideology, swaying a vote, or weakening an opponent. In short: doing politics, without always saying so.

Some governments or political parties orchestrate coordinated campaigns with popular influencers, paid to share ideological messages, “react” to current events, or push certain narratives. The content is presented as spontaneous, but in reality follows a precise editorial line, often marked by recurring talking points, a strong emotional charge, and a lack of transparency about intentions or funding.

To better reach Gen Z, these campaigns borrow from pop culture, humor, or memes, hijacking entertainment tools to serve soft — but effective — propaganda.

3- Russia, pioneer of information warfare

Since 2013, Russia has been a major player in digital warfare. Drawing on Sun Tzu’s military precept of weakening the enemy in peacetime, it has built a full-fledged disinformation industry. In Saint Petersburg, the Internet Research Agency (IRA) employed hundreds of people tasked with producing and disseminating propaganda until its reported dissolution in 2023.

These “troll farms” have been particularly active during several events: the 2016 U.S. presidential election, Brexit, the Yellow Vests movement in France, and the conflict in Ukraine. In 2022, Ukraine identified 86 Russian bot farms, responsible for over 3 million fake accounts, which reached an audience of 12 million users.

At the same time, European influencers have been directly targeted. A 2024 investigation revealed that more than 2,000 influencers had been approached to relay pro-Kremlin content. Around twenty reportedly accepted, praising Russia’s military strength or playing on the fear of a world war to sway public opinion.

Examples elsewhere in Europe

🇷🇸 Serbia: amplification of pro-Kremlin narratives

In Serbia, influencers close to the government have been accused of organizing coordinated campaigns to support those in power and discredit the opposition. These actions include the dissemination of political messages under the guise of “spontaneous” content.

🇲🇰 North Macedonia: influence on electoral campaigns

During election campaigns, several influencers relayed political messages without transparency regarding partnerships, particularly around the issue of national identity and relations with the European Union.

🇫🇷 France: AI and digital populism

During the 2024 legislative elections, the French far right used AI-generated videos to spread its ideas at low cost, without traditional campaign teams. The goal: to flood the digital space with short, engaging, and targeted content (Le Monde).

4- TikTok: a tool of influence in the service of Beijing?

With its short videos and light tone, TikTok may seem like nothing more than a simple entertainment tool. Yet behind the dances and trends lies a powerful lever of geopolitical influence, at the heart of tensions between China and Western democracies.

Owned by ByteDance, a company based in Beijing, TikTok is subject to China’s National Intelligence Law (2017), which requires companies to cooperate with state intelligence work. In practice, users’ personal data, even outside China, can be handed over to the authorities without transparency. Although TikTok says it stores European data in Ireland and Norway, several instances of internal access from China have been documented.

But it is above all TikTok’s algorithm that raises questions. Far from being neutral, it filters and directs content: topics sensitive to Beijing (Tibet, Hong Kong, Uyghurs, dissent) are often censored, while others, favorable to China or polarizing for democracies, can be artificially promoted.

In 2019, The Guardian revealed that TikTok had issued internal moderation guidelines to suppress content mentioning Tiananmen Square, the demands of Muslim minorities, or the Tibetan independence movement. These directives leaked via European moderators.

Several experts describe this as a “cognitive war”: a form of soft, insidious, yet massive influence that acts on representations, emotions, and opinions, particularly among the youngest. With over 1.7 billion users worldwide, TikTok has become a formidable channel for shaping worldviews, polishing China’s image, and undermining trust in rival democratic institutions.

In this conflictual landscape, TikTok is not an isolated case. In the United States, the government pressured Facebook to remove certain content during the pandemic. In France, Telegram’s founder, the Russian-born Pavel Durov, has been placed under formal investigation, notably in connection with money laundering on the platform. The information war is no longer just about content: it also involves control, direct or indirect, of the platforms themselves.

5- AI: a power multiplier

With the emergence of artificial intelligence, there is no longer any need for human troll farms: AI agents can now generate texts, videos, images, comments, and memes in bulk, with striking realism. This automated content, distributed by bots, can pass for authentic opinion, deepening the illusion of popular support. The resulting information fog makes detecting and regulating disinformation even more complex.

6- Personal data: the oil of the 21st century

At the heart of this invisible war lies a precious resource: our personal data. What we read, like, share, or buy constitutes a goldmine for those seeking to influence or manipulate us.

By combining this data with analysis and targeting tools, governments can identify opponents, monitor entire populations, and conduct disinformation or intimidation operations.

“If major platforms continue to use economic criteria as the standard for governing the circulation of information, our democracies risk being swept away by populist movements.”
— David Chavalarias, mathematician, Toxic Data

Social networks are no longer mere spaces for entertainment or expression. They have become tools of soft warfare, used to manipulate opinion, polarize societies, and weaken democracies. In the face of this challenge, however, initiatives are emerging to strengthen platform regulation, promote media literacy, and preserve trust in democratic institutions. These efforts can provide an essential counterbalance to protect our societies against this invisible war.