On February 9, NetSPI’s Nick Landers, Nabil Hannan, and Cody Chamberlain were featured in The CyberWire: Water armies across the Taiwan Strait. Pakistan blocks access to Wikipedia. Normalizing an illegal occupation. AI chatbots. Read the preview below or view it online.
Artificially intelligent chatbots and allied technologies have attracted enthusiasm, competition, and concern reminiscent, on a smaller scale, of the dot-com mania at the turn of this century. Right now the two big competitors are OpenAI's ChatGPT (backed by Microsoft), ahead by a neck, and Google's more recently released Bard. They're both pretty plausible, but both have stumbled a bit, too. ChatGPT seems, the Wall Street Journal reports, to need some help with math problems (maybe get it a calculator). And Bard embarrassed Google in its own ad: according to Reuters, a question about the James Webb Space Telescope, intended to show off the AI chatbot as a knowing savant, instead showed that Bard wasn't up to the task either (maybe Bard could've Googled the answer). But the potential for deception remains a concern. BlackBerry speculates that nation-state services are already working on attacks based on the new AI capabilities.
Nabil Hannan, Managing Director at NetSPI, commented on the use and abuse of AI:
“With the likes of ChatGPT, organizations have gotten extremely excited about what’s possible when leveraging AI to identify and understand security issues, but there are still limitations. Even though AI can help identify and triage common security bugs faster, which will benefit security teams immensely, the need for human, manual testing will be more critical than ever, as AI-based penetration testing can give organizations a false sense of security.”
Nick Landers, NetSPI’s VP of Research, addressed the commercial potential of this kind of artificial intelligence:
“The news from Google and Microsoft is strong evidence of the larger shift toward commercialized AI. Machine learning (ML) and AI have been heavily used across technical disciplines for the better part of 10 years, and I don’t predict that the adoption of advanced language models will significantly change the AI/ML threat landscape in the short term, any more than it already is. Rather, the popularization of AI/ML as both a casual conversation topic and an accessible tool will prompt some threat actors to ask, ‘how can I use this for malicious purposes?’, if they haven’t already.”
Cody Chamberlain, NetSPI’s Head of Product, distinguishes adversarial from offensive AI:
“When considering the security gaps these new tools from Google and Microsoft present to the threat landscape, it’s best to consider security approaches based on two implications of AI in cyber: Adversarial AI and Offensive AI. When looking at Adversarial AI, a model is only as good as its training data, which opens up attack scenarios for poisoning models, introducing bias, and more. Organizations must perform extensive threat modeling against their implementations to combat these gaps, thinking like the hacker. By extensively testing the data supply chain, organizations can better determine who can access it and how to validate its integrity.”
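Chamberlain's point about validating the integrity of the data supply chain can be illustrated with a minimal sketch: record trusted cryptographic digests of training files, then re-check them before each training run so that tampered (potentially poisoned) data is caught early. The function names below are illustrative, not from NetSPI or any particular tool.

```python
import hashlib


def fingerprint(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def build_manifest(paths: list[str]) -> dict[str, str]:
    """Record a trusted digest for each training-data file."""
    return {p: fingerprint(p) for p in paths}


def verify_manifest(manifest: dict[str, str]) -> list[str]:
    """Return the files whose contents no longer match the trusted manifest."""
    return [p for p, digest in manifest.items() if fingerprint(p) != digest]
```

A hash manifest like this only detects tampering after the trusted snapshot was taken; it does not judge whether the original data was clean, which is why Chamberlain pairs it with threat modeling of the whole pipeline.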
Read the full commentary on The CyberWire!