On January 31, NetSPI Director of Research Nick Landers was featured in the SecurityWeek article Cyber Insights 2023 | Artificial Intelligence. Read the preview below or view it online.

+++

SecurityWeek Cyber Insights 2023 | Artificial Intelligence – The pace of artificial intelligence (AI) adoption is increasing throughout industry and society. This is because governments, civil organizations, and industry all recognize the greater efficiency and lower costs that AI-driven automation can deliver. The process is irreversible.

What is still unknown is the degree of danger that may be introduced when adversaries start to use AI as an effective weapon of attack rather than a tool for beneficial improvement. That day is coming, and it will begin to emerge in 2023.

The changing nature of AI (from anomaly detection to automated response) 

Over the last decade, security teams have largely used AI for anomaly detection; that is, to detect indications of compromise, the presence of malware, or active adversarial activity within the systems they are charged to defend. This has primarily been passive detection, with responsibility for response left in the hands of human threat analysts and responders. This is changing. Limited resources, which will only tighten in the expected economic downturn and possible recession of 2023, are driving a need for more automated responses. For now, this is largely limited to the simple automatic isolation of compromised devices, but more widespread automated AI-triggered responses are inevitable.
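To make that detect-then-respond loop concrete, here is a minimal sketch in Python. The IsolationForest model, the toy telemetry features, and the isolate_host/notify_analysts hooks are all assumptions for illustration, not any particular vendor's implementation.

```python
# A minimal sketch of anomaly detection triggering an automated response.
# IsolationForest stands in for whatever anomaly model a team uses;
# isolate_host() is a hypothetical hook into an EDR or firewall API.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Toy telemetry per host: [bytes_out_per_min, failed_logins, new_processes].
baseline = rng.normal(loc=[500, 1, 10], scale=[100, 1, 3], size=(500, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

def isolate_host(host_id: str) -> None:
    print(f"[response] quarantining {host_id}")  # hypothetical EDR call

def notify_analysts(host_id: str) -> None:
    print(f"[alert] {host_id} flagged for analyst review")

def triage(host_id: str, features: np.ndarray) -> None:
    """Passive detection, with an automated response for the clearest cases."""
    if model.predict(features.reshape(1, -1))[0] == -1:  # -1 == anomaly
        isolate_host(host_id)
        notify_analysts(host_id)  # humans still review the decision

triage("workstation-42", np.array([25_000.0, 40.0, 120.0]))  # clearly anomalous
```

The design point the trend implies: the model only gates a narrow, reversible action (quarantine), while anything more consequential still routes to a human responder.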

Failure in AI is generally caused by an inadequate data lake from which to learn, and the obvious solution is to increase the size of the data lake. But when the subject is human behavior, that effectively means a massively increased lake, more like an ocean, of personal data. In most legitimate cases this data will be anonymized; but, as we know, it is very difficult to fully anonymize personal information.
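As a small illustration of why full anonymization is hard, the sketch below, using entirely invented records, links a "de-identified" dataset back to individuals through quasi-identifiers (ZIP code, birth date, sex): the classic linkage attack.

```python
# A toy linkage attack: neither dataset pairs names with the sensitive
# field, yet joining on quasi-identifiers re-identifies individuals.
# All records are invented for illustration.
deidentified_health = [
    {"zip": "55401", "dob": "1987-03-14", "sex": "F", "diagnosis": "asthma"},
    {"zip": "55104", "dob": "1990-11-02", "sex": "M", "diagnosis": "diabetes"},
]
public_voter_roll = [
    {"name": "A. Smith", "zip": "55401", "dob": "1987-03-14", "sex": "F"},
    {"name": "B. Jones", "zip": "55104", "dob": "1990-11-02", "sex": "M"},
]

for record in deidentified_health:
    for voter in public_voter_roll:
        if all(record[k] == voter[k] for k in ("zip", "dob", "sex")):
            print(f"{voter['name']} -> {record['diagnosis']}")
```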

“Privacy is often overlooked when thinking about model training,” comments Nick Landers, director of research at NetSPI, “but data cannot be completely anonymized without destroying its value to machine learning (ML). In other words, models already contain broad swaths of private data that might be extracted as part of an attack.” As the use of AI grows in 2023, so will the threats against it.
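Landers' extraction warning can be illustrated with the simplest membership-inference heuristic, sketched below on synthetic data: an overfit model assigns markedly lower loss to records it was trained on, so per-example loss alone leaks whether a record was a "member" of the training set. This is an illustrative sketch, not NetSPI's methodology.

```python
# Loss-threshold membership inference: an overfit model gives near-zero
# loss to its training records, so low loss is evidence a record was in
# the training set. The data here is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = (X[:, 0] + rng.normal(scale=0.5, size=300) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)  # overfits

def loss(X, y):
    """Per-example negative log-likelihood under the model."""
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, None))

print("mean loss on members (train):", loss(X_tr, y_tr).mean())
print("mean loss on non-members (held out):", loss(X_te, y_te).mean())
# An attacker picks a threshold between the two: below it -> "was trained on".
```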

Read the full article at SecurityWeek!