As artificial intelligence continues to transform the world as we know it, experts are examining how large language models accelerate cyber threats by simplifying exploit creation and phishing, helping low-skill hackers execute sophisticated attacks. In a recent article, TechChannel featured insights from NetSPI Field CISO Nabil Hannan that address these risks, as well as proactive steps companies can take to protect themselves. Read the preview below or view it online.

+ + + 

Nabil Hannan, field chief information security officer at NetSPI, also tests clients’ systems as part of his work for the cybersecurity firm. He says one major advantage that ChatGPT and other large language models (LLMs) give hackers is speed.

“One of the fun statistics is most successful ransomware attacks start on a Friday evening after business hours,” Hannan says. “And typically they have until Monday, start of business hours, to try and be successful in navigating multiple layers of exploits to be able to then actually go and deploy their ransomware.” 

LLMs help attackers get out with the goods before business hours resume. “I can actually ask the model to build it for me, and I can say, ‘Hey, I’m trying to write this type of a privilege escalation and it’s going to run on a machine that’s running this version of Linux and it’s got this many users, or it has this permission,’” Hannan says. “It gives me the code that I should execute, and then the model builds it for you and you can just run it, versus having to now learn how to code it yourself. Could you have done it yourself? Probably, given enough time, but now you’re just accelerating your way through the roadblocks.”

You can read the full story here.