ISIS Magazine Instructs Supporters on 'Responsible' AI Use for Jihadist Activities
The Afghanistan branch of the terrorist organization ISIS is now instructing its recruits on how to use artificial intelligence in what it terms a 'responsible' manner. This alarming development was uncovered in the latest edition of Voice of Khorasan, the English-language magazine produced by ISIS operatives based in Afghanistan. According to a report by Politico, the publication explicitly teaches supporters to harness AI technology in the service of the 'mujahid', a term for an individual engaged in jihad or the defense of Islam.
AI Described as a Double-Edged Sword in Terrorist Literature
The magazine draws a stark analogy, stating: 'AI is like fire. You can use it to light a home, or to burn it down.' It further emphasizes the pervasive nature of artificial intelligence, advising readers to 'Learn it before it learns too much about you. Raise children and students to be cyber-conscious and spiritually grounded.' The article highlights specific applications, noting that AI is valuable for 'anonymous private research' and helps recruits avoid 'unnecessary exposure' during their activities.
UK Terrorism Expert Warns of Impending Chatbot Radicalization
This revelation follows a dire warning from Jonathan Hall, the British government's independent reviewer of terrorism legislation. In an interview with Politico, Hall expressed grave concern about the rapid acceleration of AI and agentic AI capabilities, which are increasingly available to terrorist groups off the shelf. He cautioned: 'I would not be surprised if chatbot radicalisation starts to take off: If you can create a terrorist website, why would you refrain from creating a terrorist chatbot?' Hall urged government officials to monitor AI developments closely, saying they 'really should be' watching a ticker-tape of advancements in the field.
Former Google CEO Highlights Extreme Risks of AI Misuse
Last year, Eric Schmidt, former chief executive of Google, warned of the 'extreme risk' posed by terrorists or rogue states exploiting artificial intelligence. In a BBC interview, Schmidt called for government oversight of private tech companies, citing fears that malicious actors could use the technology for 'evil goals'. He specifically named North Korea, Iran, and Russia as potential threats.
Schmidt, who held senior positions at Google from 2001 to 2017, elaborated on the dangers, suggesting AI could be leveraged to create biological weapons. He expressed particular concern about an 'Osama Bin Laden' scenario, in which a truly evil individual seizes control of some aspect of modern life to harm innocent people. With private companies at the forefront of AI development, Schmidt emphasized the need for careful monitoring and regulation by governments, asserting: 'It’s really important that governments understand what we’re doing and keep their eye on us.'
The convergence of terrorist propaganda with advanced technology underscores a growing and urgent security challenge. As ISIS adapts to the digital age, the call for robust governmental oversight and international cooperation to mitigate these risks has never been more pressing.



