NetSPI Field CISO, Nabil Hannan, shares his 2026 cyber prediction with Digital IT News about AI and social engineering. He says AI advancements are dramatically increasing social engineering attacks by enabling convincing deepfakes and misinformation that can cause significant financial losses and erode organizational trust. The security community must focus on managing AI-generated misleading content and protecting employees from falling victim to these sophisticated deceptions. Read the preview below or view it online.

+++

A Rise in Social Engineering Attacks

Social engineering has long been, and remains, a leading cause of modern cyberattacks. While we may like to think otherwise, people are the weakest link in an organization's overall security posture. And with advancements in AI, attacking people has become far too easy.

I recently spoke to over seventy CISOs at the Paris KKR CISO Summit, and a key issue on everyone's mind was the blurring line between what is real and what is fake. When someone believes an AI-generated video of bunnies jumping on a trampoline is real, the resulting "harm," while not ideal, is minimal. But what about when a deepfake of the CEO video-calls an employee and instructs them to move large sums of money in real time? Or consider the AI-generated images of an explosion at the Pentagon that went viral in 2023 and caused an actual dip in the stock market. When AI-powered misinformation and disinformation play out in higher-stakes environments, internal and external trust can erode quickly, and financial losses can become frequent and significant, impacting revenue and potentially even company valuations.

In the coming year, we can expect a rise in social engineering attacks. For the security community at large, there will be a greater focus on managing the volume of misleading AI-generated content, including how to protect employees so they don't fall for it and, more importantly, don't act on it.

You can read the full article here.