SecurityWeek: Cyber Insights 2023 | Artificial Intelligence
On January 31, NetSPI Director of Research Nick Landers was featured in the SecurityWeek article Cyber Insights 2023 | Artificial Intelligence. Read the preview below or view it online.
+++
SecurityWeek Cyber Insights 2023 | Artificial Intelligence – The pace of artificial intelligence (AI) adoption is increasing throughout industry and society, because governments, civil organizations, and industry all recognize the greater efficiency and lower costs that AI-driven automation makes available. The process is irreversible.
What is still unknown is the degree of danger that may be introduced when adversaries start to use AI as an effective weapon of attack rather than as a tool for beneficial improvement. That day is coming, and it will begin to emerge in 2023.
The changing nature of AI (from anomaly detection to automated response)
Over the last decade, security teams have largely used AI for anomaly detection; that is, to detect indications of compromise, the presence of malware, or active adversarial activity within the systems they are charged to defend. This has primarily been passive detection, with responsibility for response left in the hands of human threat analysts and responders. This is changing. Limited resources, which will worsen in the expected economic downturn and possible recession of 2023, are driving a need for more automated responses. For now, this is largely limited to the simple automatic isolation of compromised devices; but more widespread automated AI-triggered responses are inevitable.
Failure in AI is generally caused by an inadequate data lake from which to learn. The obvious solution is to increase the size of the data lake. But when the subject is human behavior, that effectively means a larger lake of personal data; for AI, something closer to an ocean of personal data. On most legitimate occasions this data will be anonymized, but, as we know, it is very difficult to fully anonymize personal information.
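The difficulty of anonymization can be illustrated with a classic linkage attack: records stripped of names can often be re-identified by joining them with a public dataset on quasi-identifiers such as ZIP code, birthdate, and sex. A minimal sketch (all records, names, and field choices below are invented for illustration, not drawn from the article):

```python
# "Anonymized" medical records: names removed, but quasi-identifiers remain.
anonymized_records = [
    {"zip": "02138", "birthdate": "1945-07-21", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birthdate": "1972-03-04", "sex": "M", "diagnosis": "asthma"},
]

# A separate public dataset that still carries names alongside the same
# quasi-identifiers (e.g. a voter registration list).
public_roll = [
    {"name": "J. Smith", "zip": "02138", "birthdate": "1945-07-21", "sex": "F"},
    {"name": "A. Jones", "zip": "02144", "birthdate": "1980-11-30", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birthdate", "sex")

def reidentify(records, roll):
    """Join the two datasets on quasi-identifiers to recover identities."""
    matches = []
    for rec in records:
        key = tuple(rec[q] for q in QUASI_IDENTIFIERS)
        for person in roll:
            if tuple(person[q] for q in QUASI_IDENTIFIERS) == key:
                matches.append((person["name"], rec["diagnosis"]))
    return matches

print(reidentify(anonymized_records, public_roll))
# -> [('J. Smith', 'hypertension')]
```

Dropping the name field was not enough: one "anonymized" record is re-identified, diagnosis and all, by a simple join.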
“Privacy is often overlooked when thinking about model training,” comments Nick Landers, director of research at NetSPI, “but data cannot be completely anonymized without destroying its value to machine learning (ML). In other words, models already contain broad swaths of private data that might be extracted as part of an attack.” As the use of AI grows in 2023, so will the threats against it.
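The extraction risk Landers describes can be sketched with a toy model that memorizes its training text: an attacker who guesses a plausible prefix can coax memorized private data back out. This is a deliberately simplified illustration (the trigram model, the "secret" training string, and the probing prefix are all invented here), not the attack described in the article:

```python
class ToyLanguageModel:
    """Overfit trigram model that memorizes its training text verbatim."""

    def __init__(self, text):
        self.trigrams = {}
        words = text.split()
        # Map each pair of consecutive words to the word that followed it.
        for a, b, c in zip(words, words[1:], words[2:]):
            self.trigrams[(a, b)] = c

    def complete(self, a, b):
        """Return the memorized continuation of a two-word prefix, if any."""
        return self.trigrams.get((a, b))

# Private data ends up in the training set...
secret = "patient John Doe has diagnosis diabetes"
model = ToyLanguageModel(secret)

# ...and an attacker probing with a likely prefix extracts it from the model.
print(model.complete("has", "diagnosis"))
# -> diabetes
```

Real extraction attacks against large models are far more involved, but the underlying point is the same: whatever the model memorized during training is, in principle, queryable.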
Read the full article at SecurityWeek!