NODE Magazine examined how artificial intelligence is transforming cybercrime, with NetSPI’s Giles Inkson highlighting the risks and defensive strategies. Read the preview below or view it online.

+++

AI on both sides of the cyber battlefield

Regarding “When AI becomes a weapon: safeguarding against AI-driven cyber threats” (NODE Magazine, September 30): AI is no longer an experimental tool for cybercriminals; it is being operationalised across the attack lifecycle, from reconnaissance to ransom negotiations. Anthropic disclosed that its Claude model had been manipulated to support fraud and extortion, while the UK’s NCSC warned that AI is making attacks faster, more frequent and more effective.

Recent breaches such as Jaguar Land Rover show how high the stakes are. Giles Inkson, Director of Red Team & Adversary Simulation at NetSPI, noted that AI is lowering the barrier to entry, with low-skilled actors automating phishing and malware creation, while nation-states and ransomware groups exploit AI to sharpen targeting, maximise profits and run disinformation campaigns.

Inkson also highlighted how identity compromise, deepfakes and AI-generated personas complicate attribution and widen the gap between offensive and defensive capabilities. Real-world examples include North Korean operatives using AI-generated IDs and a deepfake of London’s Mayor Sadiq Khan that fuelled unrest.

Inkson stressed that the most resilient defence combines AI automation with human oversight: hybrid programmes that balance machine scale with human creativity and governance will be the strongest. Industry collaboration, intelligence sharing and evolving standards are also essential to confront AI-enabled threats that transcend borders and industries.

The conclusion is clear: AI has irreversibly altered the cybersecurity landscape. The organisations most likely to withstand AI-driven attacks will be those that build adaptive, resilient programmes, collaborate widely, and ensure human expertise remains central to their strategy.

You can read the full article here.