Information Warfare

The Battlespace of Perception and Narrative

Wars have always been fought on two fronts: the physical and the cognitive. Armies destroy bodies; information warfare shapes minds. What has changed in the contemporary period is not the existence of propaganda and psychological operations — these are as old as organised conflict — but the scale, speed, and sophistication with which modern states and non-state actors can manipulate information environments, and the degree to which digital communications infrastructure has made every citizen in every democracy a potential target of adversary influence operations. Hybrid warfare theorists often treat information operations as merely one component among several, but a case can be made that the information domain has become primary: in an era of nuclear deterrence and constrained conventional war, what states most actively contest is not territory but perception.

Definition and Scope

Information warfare is a broader concept than propaganda or cyber operations. It encompasses the full spectrum of efforts to shape the information environment in ways that advantage the actor conducting them: propaganda and counter-propaganda, disinformation (deliberately false information designed to deceive), misinformation (false information spread without deliberate intent to deceive, though the distinction matters less operationally than it sounds), psychological operations (PSYOP), strategic communications, narrative warfare, media manipulation, and the weaponisation of social media platforms.

What distinguishes modern information warfare from historical propaganda is primarily the delivery infrastructure. Digital social media platforms enable the rapid, cheap, targeted distribution of information — and disinformation — at a scale that twentieth-century propagandists could not have imagined. A message that might have reached thousands through pamphlets or radio broadcasts can now reach millions through algorithmically amplified social media posts, at marginal cost, without the broadcasting infrastructure that governments once needed to control.

Information warfare also differs from conventional warfare in its ambiguity and deniability. When tanks cross a border, attribution is straightforward. When a network of social media accounts amplifies divisive narratives within a target country’s politics, attribution is difficult, slow, and contested. This ambiguity is not incidental but strategic: the most effective information operations operate below the threshold of attribution, exploiting open societies’ commitment to free speech and free information flow as structural vulnerabilities.

Historical Roots: From Creel to Cold War Radio

Modern states discovered the systematic power of information warfare in the First World War. Woodrow Wilson’s Committee on Public Information (the Creel Committee), established in 1917, coordinated American propaganda efforts on a scale never before attempted in a democratic society: posters, films, press releases, and speakers’ bureaux mobilised public opinion behind a war over which Americans had been deeply divided before US entry. British propaganda targeting American opinion — including the deliberate exaggeration and in some cases fabrication of German atrocity stories — had been quietly running for years before that. Both sides in the Great War invested heavily in propaganda targeting neutral countries, their own populations, and enemy soldiers.

The Second World War systematised these efforts further. The US Office of War Information coordinated domestic and foreign information operations. The Office of Strategic Services (OSS), predecessor to the CIA, ran “black propaganda” — false information designed to appear to originate from German sources. Both sides used radio broadcasts to reach enemy populations: Radio Berlin’s “Axis Sally” programmes and Radio Tokyo’s “Zero Hour,” both aimed at demoralising Allied troops, exemplified this psychological warfare front.

The Cold War transformed information warfare into a sustained, institutionalised competition spanning four decades. Radio Free Europe and Radio Liberty, covertly funded by the CIA until the early 1970s, broadcast uncensored news and programming to populations behind the Iron Curtain who were otherwise confined to state media. Voice of America represented the overt US information effort. The Soviet Union ran its own extensive information operations: the KGB’s “active measures” programme planted disinformation in foreign media, funded sympathetic publications, operated front organisations, and conducted forgeries designed to discredit Western governments and institutions. Among the most notorious was Operation INFEKTION, a campaign alleging that the AIDS virus had been engineered in a US military laboratory — a lie that circulated widely in the Global South.

Russian Doctrine: Reflexive Control and the Gerasimov Article

Contemporary Russian information warfare doctrine draws on a Soviet intellectual tradition that was never fully abandoned and has been systematically developed since the 1990s. The concept of “reflexive control” — developed by Soviet military theorists, particularly Vladimir Lefebvre and Mikhail Ionov — refers to the process of conveying information to an adversary that causes the adversary to voluntarily make decisions beneficial to the party conducting the operation. Rather than simply deceiving the enemy, reflexive control aims to shape the entire cognitive context within which the enemy makes decisions.

In 2013, Russian Chief of the General Staff Valery Gerasimov published an article in the Russian Military-Industrial Courier that Western analysts quickly — and somewhat misleadingly — labelled the “Gerasimov Doctrine.” Gerasimov was describing a shift he observed in the nature of contemporary warfare, arguing that “the very ‘rules of war’ have changed” and that non-military means — informational, technological, economic, diplomatic — had come to play a role “often far exceeding the power of force of weapons in their effectiveness.” The article described information operations as equal in importance to kinetic military operations, and argued that the distinction between war and peace had become blurred.

Russia’s information warfare approach in the 2010s and 2020s has been characterised by what the RAND Corporation labelled the “firehose of falsehood” — a high-volume, multi-channel approach that does not attempt to construct a single coherent alternative narrative but instead floods the information environment with multiple, often mutually contradictory claims. The goal is not persuasion in the conventional sense but epistemic disruption: the creation of confusion, doubt, and cynicism that makes it difficult for target audiences to distinguish reliable from unreliable information, and that erodes trust in institutions and media generally.

Chinese Information Operations

China’s approach to information warfare is conceptually different from Russia’s: more patient, more institutional, and more closely integrated with long-term strategic objectives. The Chinese Communist Party’s United Front Work Department (UFWD) coordinates influence operations targeting overseas Chinese communities, foreign political parties, academic institutions, think tanks, and media organisations. Unlike Russia’s often provocative disinformation operations, China’s information strategy typically involves the gradual cultivation of relationships, the placement of sympathetic voices in influential positions, and the suppression of unwelcome narratives — particularly about Taiwan, Hong Kong, Xinjiang, and Tibet — through economic leverage and diplomatic pressure.

Xinhua News Agency and China Global Television Network (CGTN) have been systematically expanded to provide global English-language news coverage that foregrounds Chinese perspectives on international events. These outlets reach substantial audiences, particularly in Africa and parts of Asia where Western media presence is limited. The goal is not primarily to convince Western audiences but to provide an alternative information ecosystem in the Global South.

TikTok — owned by ByteDance, a Chinese company — has become the most discussed flashpoint in the technology dimension of Chinese information operations. The platform’s algorithm, opaque to outside scrutiny, has raised concerns that it could be used to shape information flows in ways that serve Chinese strategic interests, whether by suppressing certain content or promoting it. The evidence for deliberate political manipulation is contested, but the structural concern — that a platform operating under Chinese jurisdiction and subject to Chinese law cannot credibly claim independence from Chinese state influence — reflects a genuine asymmetry in information architecture that has no clean resolution.

Social Media as Battlespace: Internet Research Agency and 2016

The Russian Internet Research Agency (IRA), a St. Petersburg-based organisation funded by oligarch Yevgeny Prigozhin, became the most studied example of social media influence operations following revelations about its activities during the 2016 US presidential election. The IRA operated networks of fake social media accounts designed to appear as authentic American users, amplifying divisive content around race, immigration, gun rights, and Hillary Clinton’s candidacy. The operation was not primarily designed to elect Trump — that was at most a secondary objective — but to inflame social divisions and undermine confidence in American democratic institutions.

The Mueller Report and subsequent Senate Intelligence Committee investigations documented the IRA’s operations in detail: hundreds of fake accounts reaching tens of millions of Americans; targeted advertising on Facebook; infiltration of genuine grassroots activist networks; organisation of real-world political events by fake online personas. The scale was significant, though the causal impact on electoral outcomes remains genuinely contested among researchers.

What the 2016 episode demonstrated was that social media platforms’ business models — designed to maximise engagement by surfacing emotionally arousing content — were structurally compatible with, and in some ways designed for, the amplification of divisive disinformation. The same algorithmic dynamics that push users toward increasingly extreme content to maintain engagement also make platforms effective amplifiers of adversary disinformation operations. This is not exclusively or primarily a Russian problem: domestic political actors in every democracy have learned to exploit the same dynamics for their own purposes.

Ukraine 2022: Competing Narratives at Scale

The 2022 Russian invasion of Ukraine became the most intensively contested information warfare environment in history. Russia deployed its established disinformation apparatus: claims that Ukraine was governed by Nazis, that civilians in Donbas were being persecuted, that Ukrainian biological weapons programmes posed a threat — narratives calibrated to justify the invasion domestically and to confuse international audiences. Russian state media, particularly RT and Sputnik, were banned in the European Union, limiting but not eliminating Russian narrative distribution in Europe.

Ukraine’s information strategy was remarkably sophisticated compared with that of any previous smaller state resisting a great power. President Zelensky’s decision to remain in Kyiv and communicate directly through social media — “I need ammunition, not a ride” — generated enormous international sympathy and support, effectively winning the global information battle within days of the invasion. The Ukrainian government became adept at releasing battlefield footage, documenting Russian atrocities, and maintaining a consistent narrative of democratic resistance to authoritarian aggression that proved compelling in Western media environments.

The global information environment split largely along pre-existing lines: Western audiences, with access to open media, received broadly consistent pro-Ukrainian coverage; Chinese audiences received state media coverage that was broadly pro-Russian or deliberately ambiguous; Global South audiences encountered a more contested information environment in which Russian narratives found more purchase than in Western Europe or North America.

Deep Fakes, AI, and the Emerging Threat

The integration of artificial intelligence into information warfare represents a qualitative escalation in the threat environment. Generative AI systems can now produce photorealistic video, convincing audio recordings, and highly plausible text at minimal cost and with diminishing technical barriers. The capacity for synthetic media — “deep fakes” — to impersonate political leaders, fabricate events, and manufacture evidence of events that never occurred has grown dramatically in the years since 2020.

The geopolitical implications are substantial. An AI-generated video of a head of state announcing a policy, ordering military action, or making damaging admissions could, if distributed at speed through social media, produce real political consequences before the fabrication can be authenticated and debunked. That window — hours or days — may be sufficient to shape immediate political reactions in a crisis environment. Authentication technologies (digital watermarking, cryptographic signing) are being developed, but the offensive-defensive balance currently favours the attacker.
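To make the cryptographic-signing idea concrete, here is a minimal sketch in Python using the widely used cryptography library and Ed25519 keys: a publisher signs a hash of a media file at the point of capture, and anyone holding the publisher’s public key can later detect tampering. The file name, helper functions, and workflow are illustrative assumptions rather than any specific provenance standard (real schemes such as C2PA embed signed metadata in the file itself).

```python
# Sketch: cryptographic signing for media provenance (illustrative only).
# Requires the `cryptography` package: pip install cryptography
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_media(path: str, private_key: Ed25519PrivateKey) -> bytes:
    """Hash the media file and sign the digest with the publisher's key."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    return private_key.sign(digest)


def verify_media(path: str, signature: bytes, public_key: Ed25519PublicKey) -> bool:
    """Recompute the digest and check it against the published signature."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    try:
        public_key.verify(signature, digest)  # raises InvalidSignature if tampered
        return True
    except InvalidSignature:
        return False


# Hypothetical usage: a newsroom signs footage at capture; any later edit,
# even a single altered frame, changes the hash and breaks verification.
with open("footage.mp4", "wb") as f:
    f.write(b"demo video bytes")  # stand-in for real footage

private_key = Ed25519PrivateKey.generate()
sig = sign_media("footage.mp4", private_key)
print(verify_media("footage.mp4", sig, private_key.public_key()))  # True
```

The design point worth noting is that signing establishes provenance, not truth: it can prove a file is unaltered since a known publisher signed it, but it cannot prove the signed content is authentic footage, which is one reason the offensive-defensive balance remains unfavourable.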

AI also enables the personalisation of disinformation at scale: rather than broadcasting a single message to a mass audience, AI systems can generate individually tailored disinformation calibrated to the psychological profile, political beliefs, and information environment of specific target individuals. This micro-targeted approach is significantly harder to detect and counter than broadcast propaganda.

Defensive Measures and Their Limits

Democracies have developed several categories of response to information warfare, each with significant limitations.

Media literacy programmes aim to equip citizens with the critical tools to evaluate information sources and identify disinformation. The evidence for their effectiveness is positive but modest: people who receive media literacy training are somewhat better at identifying disinformation, but the effect sizes are limited and do not persist long in the face of ongoing exposure to sophisticated operations.

Fact-checking organisations — Snopes, PolitiFact, Full Fact, and dozens of equivalents — perform valuable work in documenting specific false claims, but their reach is typically concentrated among audiences already sceptical of disinformation, and corrections generally require repeated exposure before they dislodge false beliefs.

Government counter-disinformation units operate in most NATO states. The EU’s East StratCom Task Force specifically tracks and counters Russian disinformation operations in Europe. These organisations provide valuable situational awareness and attribution, but they face a fundamental asymmetry: offensive information operations are cheap and fast; defensive attribution and debunking are expensive and slow.

Platform moderation — the removal of disinformation content and the deamplification of misleading narratives — has been adopted by major social media companies under varying degrees of political pressure. Its effectiveness is contested, and it generates its own political controversies about censorship, viewpoint discrimination, and the appropriate role of private platforms in policing public discourse.

The deepest challenge posed by information warfare is not technical but definitional. Who determines what constitutes “disinformation”? The boundary between disinformation (false information deployed to deceive) and vigorous political advocacy (strongly framed arguments that opponents find misleading) is genuinely blurry. Governments given the authority to label and remove “disinformation” have powerful incentives to use that authority against political opposition.

This problem is not symmetrical across political systems: authoritarian states have no meaningful constraint on using counter-disinformation authorities against opposition voices, while liberal democracies are institutionally constrained from doing so. But the constraint is not absolute, and the history of counter-disinformation programmes includes documented cases of overreach — governments and platforms removing content that was politically inconvenient rather than genuinely false.

The deeper question is whether open societies can sustain the epistemological common ground that democratic deliberation requires in an information environment that adversaries can deliberately corrupt. Democracy depends, at minimum, on a shared factual baseline — agreement on what happened, who did what, what the evidence shows — without which political argument becomes merely tribal signalling. Information warfare aims precisely at that baseline. Whether liberal institutions have the resilience to maintain it under sustained adversarial pressure is one of the central questions of twenty-first-century geopolitics.

Sources & Further Reading

  • Thomas Rid, Active Measures: The Secret History of Disinformation and Political Warfare (2020) — A comprehensive history of Soviet and Russian disinformation operations from the 1920s to the present, drawing on declassified intelligence documents and original research to document the continuities in Russian information warfare practice.

  • P.W. Singer and Emerson Brooking, LikeWar: The Weaponization of Social Media (2018) — An accessible and thorough account of how social media platforms have become battlespaces, covering ISIS, Russia, Trump, and the broader militarisation of information networks.

  • Nina Jankowicz, How to Lose the Information War: Russia, Fake News, and the Future of Conflict (2020) — Drawing on fieldwork in Eastern Europe, Jankowicz examines how Russian information operations have played out in the countries most exposed to them and what effective responses look like.

  • Kathleen Hall Jamieson, Cyberwar: How Russian Hackers and Trolls Helped Elect a President (2018) — A systematic analysis of the 2016 US election interference campaign, examining the evidence for Russian information operations’ effects on American political attitudes and behaviour.

  • Christopher Paul and Miriam Matthews, The Russian “Firehose of Falsehood” Propaganda Model (RAND, 2016) — A concise and influential RAND analysis of Russian disinformation strategy, describing the high-volume, multi-channel approach and its psychological effects on target audiences.