Episode Details:

Listen to Suryaprakash Nalluri, an accomplished application security leader, discuss the shifting landscape of application security, challenges with open-source software, and the critical role of DevSecOps in modern development.

Tune in to NetSPI’s latest Agent of Influence episode where host and Field CISO Nabil Hannan speaks with Suryaprakash Nalluri, an accomplished application security leader, about his career evolution from Java developer to a trailblazer in cybersecurity.

The two share insights on the shifting landscape of application security, challenges with open-source software, and the critical role of DevSecOps in modern development. Hear fresh perspectives on how AI is reshaping cybersecurity, tips for managing open-source risks, and the importance of embedding security early in development.

Show Notes: 

Transcript between Surya and Nabil

Topics covered: Open-source AI, Java security, software composition analysis, AI in security, DevSecOps, shift left, malicious code, software development life cycle

This transcript has been edited for clarity and readability. 

Nabil Hannan: Hi everyone, I’m Nabil Hannan, Field CISO at NetSPI, and this is Agent of Influence. Hey, Surya, welcome to the podcast. Really excited to have you! Before we get started, do you want to tell us a little bit about yourself and where you are today professionally?

Suryaprakash Nalluri: Thank you, Nabil, for having me. Before I start, I want to give a little bit of disclaimer. The views and opinions that are expressed in this podcast are my own and not related to my employer. These are based on my professional experience and my independent research. My name is Suryaprakash Nalluri. I’ve been in the IT industry and cybersecurity industry for the past 20 years.  

I started my career as a Java developer; I love coding. I used to write game programs during my college days, and I have a strong passion for development. I joined one of the companies as a consultant, and my first project happened to be an open-source project, a Java and Struts-based project. Slowly, I took up the role of fixing security issues, and that made me curious: what is security?

Back in the 2008-2009 timeframe, I was not much aware of security, either in the industry or in the news. While fixing those security issues, I got a little curious about security, and I started digging into what Wireshark is and what a “.cap” file is. Then I even tried capturing my roommates’ network traffic, and that made me curious enough to jump from Java development to Java security.

01:35: What was your journey like from Java developer to cybersecurity leader?

Nabil: If I go back to the time when Java was originally released, I remember there was all this talk about how Java is a much more secure programming language compared to legacy languages like C/C++, which didn’t have as many built-in protections as Java does.

I’m curious, in your early days when you started getting your hands dirty with Java, did you have that same impression? And now that you’ve spent all these years in the cybersecurity and application security space, how has that view changed? Because obviously Java did not suddenly result in all of our applications being secure; there are still vulnerabilities in everything we deal with. So, as a practitioner who grew into a cybersecurity leader, how has your view and perspective on that topic changed?

Surya: It changed a lot. My first programming was in C/C++, where I used to handle a lot of boilerplate code for memory management and everything else we needed to write code and call functions. Java really helped: it has a big open-source landscape and a lot of libraries that help you develop code fast, and obviously I didn’t need to deal with memory management issues when doing Java programming. Open source was trending at that point, with Struts and Apache playing a major role in providing open-source libraries, and we utilized those frameworks. Web 2.0 was also evolving at that time, and that helped developers a lot.

We always had the mindset that open source must be secure because the source code is out in the industry, publicly available; anybody can look at the source code and see what’s happening inside. Back in those days, we thought open source was 100% secure, considering it goes through a lot of reviews.

But now, with the geopolitical situations and the wars happening around the world, some of the maintainers or contributors of open source are using it in a different way: they’re injecting malicious code into open-source packages, and they’re protesting by deleting some of the libraries from the repositories, which actually breaks the code that depends on them.

Now, looking back from when I started in 2008 to 2025, we feel that open source is becoming insecure. Can we trust it? That has become a big question.

04:16: What are some trends that excite you in security today?

Nabil: Well, it’s also fun to see the journey of how people’s thoughts and perspectives on application security have changed. When Java came out, obviously the memory management side, as you explained, was a key focus, because in Java you didn’t have to manually manage your memory, right? The JVM is supposed to automatically take care of memory management. But that in itself resulted in other design and architectural issues that came along with not having control over memory from an application perspective.

And then it’s interesting to see, too, that Java was a programming language developed with the intention of having software leverage this new technology back then called the internet. It was a small thing back then that just exploded.

But the open-source community became a huge thing because object-oriented programming encourages reuse: building software in bite-sized pieces that can be reused across the board. Open-source usage just blew up in everyone’s face, right? Everyone was stitching together components to make something work rather than building things from scratch. And it’s interesting for me to see now how, in a very good way, we’re moving away from just assuming something is secure because it has many eyes on it. We’re actually doing things like software composition analysis to make sure there are no known issues in the open-source software, and also checking it from a license compliance perspective.

That’s been a focus for many organizations, because they understand the risks of license conflicts as well. So, with that in mind: you’ve worked across various disciplines in security, and you also come from an engineering and development background. Is there a specific aspect of cybersecurity today that really excites you, and why?

Surya: With the current trend of open source being used as a form of protest, it’s becoming a silent code war. The war is coming from different directions; some folks are using code as an instrument of war. Protestware is becoming very common nowadays in these war situations. With artificial intelligence, how can we identify those malicious open-source libraries, and how can we detect the threats with behavioral analysis using artificial intelligence?

The convergence of AI and software supply chain security seems very interesting, and I’m excited to see how it’s going to evolve, from both the attacker’s perspective and the defender’s perspective. It’s going to be fascinating. Going forward, I would like to see improvements on both sides with the use of AI.

07:12: What challenges are you seeing with using AI to detect malicious software?

Nabil: Are you seeing any challenges to using AI to detect some of these malicious types of behavior in software? 

Surya: Typically, organizations struggle somewhat in adopting AI. There are rules for organizations, but there are no rules for hackers and attackers, so the adoption rate is higher for attackers. It’s challenging for organizations to adopt at the same speed as the attackers, who will use the most emerging and novel techniques. That’s the main challenge I see.

We’ve always had this challenge where ethical hackers have to follow a set of playbooks, and hackers, they don’t have a playbook. They can use whatever they want to achieve their goal, compromising the target. 

Nabil: And then, would you say there are certain areas where AI truly excels when it comes to enabling security practitioners and organizations today? What are those areas? Because they probably have some sort of rigor or pattern-based component to them, so we’d love to understand in which areas technologies that leverage AI are really effective.

Surya: There are multiple aspects to it. For example, in the context of open-source security, we can leverage AI to check whether the contributors of an open-source project have the right reputation, and whether the maintainers have a long history of contribution to the project. That is one aspect.

AI can also be used to check the behavior of the open-source code, whether it’s doing something it’s not supposed to do, like remote command execution. It can perform analytics and then provide a score for the open-source components that can be leveraged within the organization.

With the approach of AI agents, I think agents can act in different roles within the DevSecOps pipeline. One agent can act as a software engineer, another as a compliance checker that continuously performs source code scans, and another can continuously analyze open-source libraries and detect whether a library has been tampered with, or whether developers are getting libraries from trusted sources, either internally or externally, like a Maven repository.

I think AI can automate a lot of these manual, labor-intensive tasks and provide an overall score of how secure the code is, and it can use publicly available industry sources as well to check whether a source code package comes from a golden repository or not.
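The kind of automated package scoring Surya describes could be sketched roughly as follows. This is a minimal illustration, not any real tool: all of the signal names, the weights, and the 0-100 scale are hypothetical, chosen only to show how reputation and behavioral signals might be combined into one overall score.

```python
from dataclasses import dataclass

@dataclass
class PackageSignals:
    """Hypothetical signals an analysis pipeline might collect for one library."""
    maintainer_tenure_years: float   # how long the top maintainers have contributed
    new_maintainer_recently: bool    # ownership changed hands around the last release
    release_diff_reviewed: bool      # the version-to-version diff was inspected
    from_trusted_registry: bool      # pulled from an approved internal/external source
    makes_network_calls: bool        # behavioral finding: unexpected outbound traffic

def risk_score(p: PackageSignals) -> int:
    """Toy 0-100 score: higher means riskier. Weights are purely illustrative."""
    score = 0
    if p.maintainer_tenure_years < 1:
        score += 25   # short maintainer history is a reputation red flag
    if p.new_maintainer_recently:
        score += 25   # sudden ownership change precedes many hijack incidents
    if not p.release_diff_reviewed:
        score += 15   # unreviewed diffs can hide injected code
    if not p.from_trusted_registry:
        score += 20   # untrusted sources bypass provenance checks
    if p.makes_network_calls:
        score += 15   # behavior the package has no stated reason to exhibit
    return min(score, 100)
```

In practice such signals would be gathered by scanners and repository analytics rather than filled in by hand; the point is only that each check Surya lists (contributor reputation, tampering, trusted sourcing, behavior) can feed a single comparable number.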

10:03: What does it take to truly unlock the power of shifting left in software development?

Nabil: When it comes to doing a lot of these analyses, there’s obviously an opportunity to do analysis maybe earlier in the life cycle than later. Often, we see that many organizations focus on using security as more of a gate that happens later in the life cycle. But with the culture of DevSecOps and software being changed and implemented at a much more rapid pace, I think it’s important for us to understand how we shift earlier into the life cycle for certain types of checks to make sure that we enable application teams to build secure code at the same rate at which they’re trying to develop code and produce new feature functionality.  

So, the concept of shift left is critical in this case. Can you share with us a little bit about what it means to truly shift left, and then, are there certain things you’ve seen that work effectively, if done earlier in the software development life cycle?  

Surya: Yeah, that’s a good question. When I started my career in cybersecurity, back in 2010, organizations relied on compliance testing, which used to happen once or twice a year. Most of the compliance testing was done by isolated teams, not the development teams, right? The security teams or the security vendors. That worked well for the compliance portion, but it still resulted in a lot of issues being propagated from the development environment to the production environment.

With that, the cycle was always recurring: the compliance testing produced issues, and then teams fixed them in a waterfall model. With the advent of DevOps and DevSecOps, that culture has changed. The tools are also evolving; they used to focus on a given application and how to scan it, and they provided results with security test analysts as the audience.

Now the perspective has changed: the tools are becoming developer friendly and integrating into the pipeline as well. As developers contribute code, the tools perform the scan within the pipeline itself and provide feedback, whether it’s a SQL injection found by the source code analysis scan, or open-source related issues, like known CVEs along with the open-source libraries that need to be upgraded to fix them. Providing that feedback earlier in the life cycle reduces a lot of issues, and it also saves a lot of cost for developers, because they can identify those issues within the developer IDE itself. They don’t even need to commit the code to the repository, because the tool itself prompts them, ‘hey, here is the code you’ve written, and it leads to a SQL injection,’ and they can fix the issue right there without ever committing the code.

Beyond that, developers can build a framework where they integrate functional testing and security testing into their local IDE or into the pipeline, so they can perform recurring security testing as often as they want, on demand, right? Then they don’t need to rely on external teams or vendors to perform the testing.

The low-hanging fruit can always be handled by the development team, and the sophisticated, complex attack vectors can be handled by the security teams at a later phase. So the approach would be: first, shift left in the sense of shifting into the requirements phase, where you capture the security requirements, implement security test cases, build them into your functional testing, and automate them. Then, as you write code in the IDE, use tools like Copilot or the Snyk IDE plugins, take the feedback, and fix the issues before even committing the code.

Integrating a similar set of tools into the pipeline will enable us to have some sort of gating mechanism, defining thresholds and stopping certain types of issues from propagating to the higher environments: from development to SIT and UAT environments, and even production.
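A gating mechanism like the one Surya describes can be sketched in a few lines. The severity names and threshold numbers below are illustrative assumptions, not values from any particular scanner; real pipelines would read them from CI configuration.

```python
from collections import Counter

# Hypothetical per-severity limits for promoting a build to the next environment.
THRESHOLDS = {"critical": 0, "high": 2, "medium": 10}

def gate_passes(findings: list[str], thresholds: dict[str, int] = THRESHOLDS) -> bool:
    """Return True when scan findings stay within the allowed count per severity.

    `findings` is a flat list of severity labels, one per reported issue,
    as a scanner integration might emit them.
    """
    counts = Counter(findings)
    return all(counts.get(sev, 0) <= limit for sev, limit in thresholds.items())
```

For example, a build with one high and one medium finding would pass under these thresholds, while a single critical finding would block promotion to SIT/UAT.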

14:28: How can teams integrate simple testing and empower developers with security knowledge?

Nabil: When it comes to developers actually understanding the security concepts, I know that in the past it’s always been a challenge, because developers or app teams often get vulnerabilities reported by security teams, and they don’t have the full context to understand the impact that particular issue might have at a broader scale from a security perspective. That education has been really challenging to scale, to really help developers understand why the security issues being raised are important.

And you mentioned that maybe we can have development teams integrate some basic low-hanging-fruit checks into the security testing frameworks they’re using. Maybe they’re using the same framework they’d use for QA testing or regression testing, etc. So, my question for you is: first of all, do you have any guidance for application teams on how to actually integrate some low-hanging-fruit testing into their software testing capabilities?

And also, how have you seen some techniques maybe that are effective in making sure that developers are properly educated on the security concepts that they might be doing testing on by themselves before the security group gets involved? 

Surya: Yep, that’s a good question. There are frameworks called Cucumber and Selenium that developers typically use for functional test automation. During the requirements phase, the functional requirements are converted into test cases and a test plan, and then developers typically automate those using the Cucumber and Selenium frameworks, which allow you to write test cases in plain English; someone from a technical background then implements the test case.

So what developers could do, with the help of a security analyst or security engineer within the team, is take some of the business use cases and define security use cases, and then write them in plain English: okay, if I enter the wrong username and password multiple times, the account is supposed to lock, right? Someone can write that in plain English, and the technical teams can implement it along with the functional test automation.

By doing this, we avoid too much dependency and too much required learning about what each security issue is. Not everybody needs to know the security-related technical terms in detail. The core technical team can take those security requirements written in plain English and convert them into code. Security as code is the way to go for automating some of these test cases; that’s great.
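As a sketch of this idea, here is the account-lockout example written first as a plain-English (Gherkin-style) requirement and then as a hypothetical implementation a technical team member might produce. The lockout threshold, class, and function names are invented for illustration; a real team would use its Cucumber/Selenium step definitions instead of a bare test function.

```python
# Plain-English security requirement, as a security analyst might phrase it:
#   Given a registered user
#   When they enter a wrong password 5 times
#   Then the account is locked, and even the correct password is rejected

MAX_ATTEMPTS = 5  # hypothetical lockout policy value

class Account:
    """Minimal stand-in for the application under test."""
    def __init__(self) -> None:
        self.failed_attempts = 0
        self.locked = False

    def login(self, password_correct: bool) -> bool:
        if self.locked:
            return False
        if password_correct:
            self.failed_attempts = 0
            return True
        self.failed_attempts += 1
        if self.failed_attempts >= MAX_ATTEMPTS:
            self.locked = True
        return False

def test_lockout_after_repeated_failures() -> None:
    """The plain-English requirement above, translated into an automated check."""
    acct = Account()
    for _ in range(MAX_ATTEMPTS):
        acct.login(password_correct=False)
    assert acct.locked
    # Once locked, even the correct password must be rejected.
    assert acct.login(password_correct=True) is False
```

The point is the translation step: the analyst writes the Given/When/Then text, and the implementer turns it into a repeatable test that runs alongside the functional suite.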

17:24: How can teams address the false security of automated tools and uncover hidden flaws in software design? 

Nabil: Now that’s really helpful too. And then if I take a step back, I do want to talk a little bit about the open-source concept we spoke about earlier, as well, how people often get that false sense of security. And I want to talk about how often I think people who run automated tools or automated tests also end up having a false sense of security, because they think if a tool couldn’t find it, then my code and my software must be perfect, and often they forget about maybe the fact that tools are only looking for a specific set of things, but it doesn’t mean that your code is free of any security issues.  

Also, tools are not very good at finding design-level flaws or architectural flaws in how systems are created. Do you have any guidance on how to approach this with application teams so that they don’t think just because a scan passed and maybe didn’t have any critical issues, it doesn’t necessarily mean that the software is architected securely or couldn’t have other flaws that tools typically cannot find?  

Surya: That’s been one of the challenges for many organizations. Typically, there are various tools to identify open-source security issues. I would classify the issues into two categories. One is vulnerabilities, where the open-source contributors have unintentionally introduced vulnerabilities by writing insecure code. The other is malicious code, like the protestware we talked about, where malicious actors who are contributors compromise the package repository and introduce malicious code into it, or where attackers hijack a legitimate account and then introduce malicious code. And that happens in a split second.

If a developer automatically upgrades to the latest version thinking it is secure, that may not be the case. The upgrade may introduce new vulnerabilities, and the new package might contain malicious code that is not yet widely used across the industry, so that gives a false sense of security.

So, there are a couple of best practices I would recommend here, which I have also covered in my blogs on DZone and ISACA. First of all, developers should not download open-source libraries and directly add them to the code repository. They should use a package manager, declare all the packages in the package manager configuration, and let the tools scan all the packages declared in that manifest, right? If developers instead add a library straight into the repository, the security tools and the overall DevSecOps ecosystem will not have visibility into those standalone libraries; we have no control over proper scanning, no control over what counts as a trusted source, and no way to know whether developers have modified those libraries.

So there are a lot of muddy waters if we don’t follow these practices, and no single tool is perfect. Some tools are good at identifying vulnerabilities, and some tools are recently evolving to identify malicious code behaviors as well. Having multiple tools may solve some aspects of the problem, along with continuous scans. Even if a package passes all the tools, we still need to monitor the libraries continuously, so that if any new attack vectors or malicious code are identified after a component has moved to production, they can be corrected on an as-needed basis.

21:11: What do you do for fun outside of cybersecurity?

Nabil: Awesome. Surya, before I let you go, I always like to talk to our guests about what they like to do for fun, outside of work or outside of cybersecurity. Can you share with our audience a little bit about some things that you enjoy doing when you’re not working in the security space? 

Surya: When I’m not working, I spend most of my time writing open-source software. I developed a test bed called DVTA, the Damn Vulnerable Thick Client Application, which is very popular for testing desktop applications. I developed it a few years ago, and I’m proud of how it has helped the community; a lot of folks have used it for their testing purposes, and there are a lot of blogs about it as well.

Apart from that, I mentor a lot of fresh grads on the weekends, helping them execute projects and building cybersecurity awareness. I also present cybersecurity-related topics at colleges so that students can consider cybersecurity as a career. And beyond the cybersecurity and technology aspects, I have a five-year-old kid; I play with him every day, indoor soccer and hide-and-seek. That’s become the other part of my life.

Nabil: At five years old, he must have some favorite soccer player if he’s into soccer. Who’s his favorite soccer player? 

Surya: Luckily, he’s not into that yet. He just started watching TV, but he recognizes some country flags and colors and wants to watch those specific videos. He is so passionate about it. Every day after dinner, he wants to play at least 10 points, and he always wants to win over me, so I intentionally lose just to make him win so that I have a peaceful night’s sleep.

Nabil: That’s fantastic. Well, Surya, thank you so much for your time and joining us today and sharing all your thoughts and your insights. It’s very much appreciated. Hopefully we get to do that again sometime soon. 

Surya: Thank you very much for providing this opportunity, Nabil.

Find more episodes on YouTube or wherever you listen to podcasts, as well as at netspi.com/agentofinfluence. If you want to be a guest or want to recommend someone, please fill out this short form to submit your interest.