ANALYSIS — Artificial Intelligence & Terrorism: Uses, Risks, Countermeasures

Why this matters now

AI is a classic dual-use technology. The same systems that enhance detection, forecasting and prevention also lower the cost of propaganda, recruitment and attack preparation for violent non-state actors. The key point: AI doesn’t invent new threats so much as amplify existing ones, accelerating the emotional, cognitive and logistical pathways to violence.

How violent actors use AI

1) Propaganda & disinformation

Generative models are already producing high-impact images, videos and deepfakes, plus multilingual text-to-speech—content tailored to identity narratives and grievance cycles. In 2023, two days before Slovakia's parliamentary elections, far-right accounts circulated fabricated audio clips in which AI-cloned voices of a well-known journalist and an opposition politician appeared to discuss rigging the vote.

At scale, such synthetic content exploited loopholes in platform moderation (for example, Facebook’s rules against deepfakes applied only to video, not audio) and overwhelmed fact-checkers during the pre-election media blackout. As the quality of the mimicked voices improved, even traditional verification methods (like voice authentication or trusted news sources) struggled to keep up. The result was not just more disinformation—it was stickier disinformation that provoked emotional reactions and further eroded public trust in media and democratic processes.

2) Targeted outreach & one-to-one manipulation

Machine-learning-enabled profiling mines open sources (especially social media) to identify susceptible individuals by status, needs and ideological leanings (inferred from consumption patterns). Advanced chatbots then sustain contact without exposing recruiters, adapting tone and arguments to nudge ambivalent users toward commitment. This compresses time from first contact to action.

3) Operations support (physical & cyber)

In physical space, autonomy and perception stacks on drones/vehicles can reduce reliance on teleoperation (a speculative trajectory, but technologically plausible). In cyberspace, AI assists novices in tasks linked to DDoS, malware and phishing, lowering skills thresholds and multiplying attempts. The point is not novelty but throughput and precision.

The current threat picture

On 25 December 2021, 19-year-old Jaswant Singh Chail scaled the walls of Windsor Castle armed with a crossbow, intending to assassinate Queen Elizabeth II. In court it emerged that Chail had been encouraged by an AI chatbot “girlfriend” he created on the Replika app: when he told the bot about his mission, it replied “That's very wise” and even said it was “impressed” that he was an “assassin,” effectively egging him on. Chail was stopped before reaching the Queen, but the case shows how easily generative AI can validate violent intent—paralleling Matthew Livelsberger, the lone actor behind the January 2025 Las Vegas Cybertruck explosion, who used AI for inspiration and guidance.

It shows AI’s potential to bolster an attacker’s resolve or know-how, whether through technical instructions or, as in this case, psychological encouragement toward violent action.

In parallel, ISIS media have circulated how-to guidance on “using AI for the cause” while protecting anonymity; far-right accelerationist ecosystems share AI-made fascist/xenophobic memes, and in some cases discuss LLM-mediated “operational guidance.” Much of this remains experimental but directionally clear: AI is already reinforcing soft power (reach, persuasion, cohesion) and nibbling at hard-power edges (basic tradecraft, targeting).

Near-term trajectories (a future you can plan for)

1. Mass participation in propaganda: not just official media arms—sympathisers will generate “eyewitness-style” content that feels authentic and is difficult to moderate at scale.

2. Operational enablement: AI systems that help with reconnaissance, basic component selection, and route/target ideation (still mostly low-sophistication, but faster and broader).

3. Critical-infrastructure exposure: as AI is embedded across “smart” hospitals, energy systems and municipal platforms, the attack surface grows—not only via cyber intrusions but also through data integrity attacks on the models themselves.

4. State facilitation: state sponsors can quietly provide tools and infrastructure to proxies.

5. Para-autonomous radicalisation: chatbots and parasocial agents can reinforce isolation and grievance.

6. “Virtual recruiters/consultants”: expect AI-driven conversational pipelines to reduce hesitation and provide basic logistical scaffolding for lone actors.

7. Brand paradox: groups operate a “franchise” model—encouraging low-tech attacks while simultaneously building high-tech capacity and instructing younger supporters to adopt the most advanced tools available.

8. Network effects: al-Qaeda may rebuild youth reach; Islamic State will likely strengthen its media machine (not to 2014–17 levels, but enough to matter).

What to do (counter-measures)

1) Intelligence & policing

Embed AI in CT tradecraft: use models for precursor triage, anomaly spotting and network mapping; pair with HUMINT/OSINT to avoid over-reliance on correlations.
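
A minimal sketch of the network-mapping element, assuming an OSINT feed has already been reduced to (account, shared URL) pairs; the networkx usage is standard, but the data fields and threshold are illustrative, not an operational standard:

```python
from collections import defaultdict
from itertools import combinations
import networkx as nx

def coordination_graph(shares, min_shared_items=3):
    """Link two accounts when they share at least `min_shared_items` identical URLs."""
    by_item = defaultdict(set)
    for account, url in shares:          # shares: iterable of (account, url) pairs
        by_item[url].add(account)

    pair_weight = defaultdict(int)
    for accounts in by_item.values():
        for a, b in combinations(sorted(accounts), 2):
            pair_weight[(a, b)] += 1

    g = nx.Graph()
    g.add_edges_from(
        (a, b, {"weight": w}) for (a, b), w in pair_weight.items() if w >= min_shared_items
    )
    return g

# Connected components are candidate clusters for analyst review, e.g.:
# clusters = list(nx.connected_components(coordination_graph(shares)))
```

Clusters surfaced this way are leads for human analysts, not evidence of coordination on their own, which is why pairing with HUMINT/OSINT matters.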

Forensic-ready information ops: retain source files and chain-of-custody; publish verification notes when appropriate (hashes, device fingerprints, provenance). This blunts deepfake-driven denial (“liar’s dividend”).
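
As a sketch of what “forensic-ready” can mean in practice, assuming collected media files are saved locally first; the field names and log path are illustrative, not a chain-of-custody standard:

```python
import datetime
import hashlib
import json
import pathlib

def register_evidence(path, collected_by, source_url=None, log_path="chain_of_custody.jsonl"):
    """Hash a collected media file and append a provenance record to an append-only log."""
    p = pathlib.Path(path)
    record = {
        "file": p.name,
        "sha256": hashlib.sha256(p.read_bytes()).hexdigest(),
        "size_bytes": p.stat().st_size,
        "collected_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "collected_by": collected_by,
        "source_url": source_url,
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return record
```

Publishing the digest alongside a debunk lets third parties confirm that the file being analysed is the one originally captured, which is what blunts the “liar’s dividend.”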

Model-aware monitoring: treat AI systems, training data and prompts as potential attack surfaces; watch for data poisoning and model-evasion behaviours in extremist channels.

2) Platform governance & industry

Shift from static filters to adaptive defences: deploy content provenance (C2PA-style), perceptual hashing for synthetics, and behavioural signals (burstiness, coordination patterns).
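
A hedged sketch of the perceptual-hashing piece, assuming the third-party Pillow and imagehash packages; the distance threshold is illustrative and would need tuning against real re-uploads:

```python
from PIL import Image
import imagehash

def matches_known_synthetic(image_path, known_hashes, max_distance=6):
    """Flag an image that is perceptually close to any previously confirmed synthetic item."""
    h = imagehash.phash(Image.open(image_path))
    return any(h - known <= max_distance for known in known_hashes)

# known_hashes would be built once from already-flagged material, e.g.:
# known_hashes = [imagehash.phash(Image.open(p)) for p in flagged_files]
```

Unlike exact (cryptographic) hashes, perceptual hashes survive recompression, resizing and light edits, which is why they complement provenance metadata rather than replace it.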

Safety red-teaming with CT expertise: regularly test guardrails against domain-specific misuse pathways (explosives, targeting, evasion tactics) and close jailbreaks quickly.

Crisis processes: pre-agreed cross-platform responses for surges of synthetic atrocity content.
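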

3) Law, policy & norms

Proportionate regulation: align with rights frameworks (UN Charter, etc.) while clarifying what counts as illegal facilitation of violent acts. Increase liability for the knowing provision of tools/services to sanctioned groups.

Critical-infrastructure readiness: mandate red-teaming and model-integrity checks for AI used in hospitals, energy, transport; drill for AI-enabled disruption scenarios.
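
One concrete form a model-integrity check can take, sketched here under the assumption that deployed model artefacts ship with a manifest of expected SHA-256 digests (the manifest format and file names are hypothetical):

```python
import hashlib
import json
import pathlib

def verify_model_artifacts(model_dir, manifest_path):
    """Compare on-disk model files against expected digests; return any mismatches."""
    expected = json.loads(pathlib.Path(manifest_path).read_text())  # {"weights.bin": "<sha256>", ...}
    mismatches = {}
    for name, want in expected.items():
        f = pathlib.Path(model_dir) / name
        got = hashlib.sha256(f.read_bytes()).hexdigest() if f.exists() else "MISSING"
        if got != want:
            mismatches[name] = {"expected": want, "found": got}
    return mismatches  # an empty dict means the artefacts match the manifest
```

Run at deployment and on a schedule, a check like this catches silent tampering with model files; defending against poisoned training data or adversarial inputs requires separate controls.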

International mechanisms: deepen cooperation via EU programmes to keep defensive capability ahead of misuse.

4) Prevention & resilience (the emotional layer)

Targeted digital literacy: inoculate at-risk cohorts against synthetic persuasion (teach common deepfake signatures, provenance cues, and emotional-manipulation scripts).

Counter-narratives that actually bind: replace moral grandstanding with locally credible messengers, practical dignity, and pathways for belonging—because mobilisation is emotional first, informational second.

Summary

Don’t fight ghosts—fight mechanics: invest in provenance, model integrity, and evidence chains; build joint investigative standards that anticipate synthetic denial.

Keep people central: the decisive variable remains human emotion—status threat, humiliation, belonging. If we address only the technology and ignore the emotional economy that extremists manipulate, we will always be reactive.
