Episode details:
NetSPI Field CISO and host of the Agent of Influence podcast, Nabil Hannan, sat down with Ryan Hays, Global Head of Red Team at Citi, to talk about common misconceptions around the buzzword “automated red teaming” and how to foster effective red and blue team collaboration.
Show notes:
- 00:25 – Misconceptions of Automated Red Teaming
- 01:32 – Balance between Manual and Automated Red Teaming
- 02:50 – How Much Automation is Too Much?
- 04:35 – Beware of a False Sense of Security with Automated Tools
- 06:14 – Looking at Red Teaming through a Crystal Ball
- 07:18 – Overcoming Challenges in Purple Teaming
- 09:34 – Unique Ways of Fostering Blue and Red Team Collaboration
- 11:38 – Resolving Conflicts Between Offensive and Defensive Security Teams
Transcript between Nabil and Ryan
Automated red teaming, effective purple teaming, and better collaboration between offensive and defensive security teams
This transcript has been edited for clarity and readability.
Nabil Hannan: Hi everyone, I’m Nabil Hannan, Field CISO at NetSPI, and this is Agent of Influence. Today Ryan Hays joins us to talk about automated red teaming and effective purple teaming. Ryan, so happy to have you here.
Ryan Hays: Appreciate it. Thanks, Nabil.
Nabil: Ryan, tell us a little bit about where you are today.
Ryan: Right now, I’m the Global Head of Red Team at Citi.
00:25: What are some common misconceptions with automated red teaming?
Nabil: Help me understand, what is your take on the buzzwords, “automated red teaming”?
Ryan: I think it’s a marketing term at the moment, right? You see automated red teaming, or automated red teaming with AI, you know, trademark pending.
I think a lot of the industry never thought they were an interesting target. But I think now we see that everything is an interesting target.
But I think the confusion is that you’re never going to fully automate red teaming.
There’s just too much human thought that needs to go into it, and too many decision points that need to happen, which AI or machine learning, or any of these other technologies, are not going to be able to simulate appropriately without introducing risk.
So, I think it’s a huge marketing term. And the real problem is that small and medium-sized businesses just don’t know any better and are buying these things assuming they’re getting automated pentests and automated red teaming, and they’re really not. They’re not getting what they think they are.
When you start looking at the larger organizations, they’ve gotten smarter. They have internal teams. So, I definitely feel like automation is very helpful, but automated red teaming is just not true; it’s not there.
01:32: How do you strike a balance between manual and automated red teaming?
Nabil: I think when it comes to red teaming engagements in general, there are certain nuances of cleverness, creativity, and other things that come into play. That being said, there is also a lot of minutiae and administrative work that goes on behind the scenes that doesn’t necessarily require that creativity from a human being, and which can be automated.
Can you talk a little more about how you find balance between a manual red teaming effort that truly needs a human being in the mix, versus what are the types of things that you believe can be automated out?
Ryan: A lot of the repetitive processes can be automated. Across all the aspects of a red team, there are pieces that can be automated. Reconnaissance is the prime example that probably everyone is going to use: I can feed a scope into some automated tool, or some AI, that starts chewing up that data and paring it down so that I can then feed it to a human at some point. That’s the point where they can think intelligently about it: here’s the target we need to go after, or here’s the thing we need to do.
I think in every phase of a red team engagement there are bits and pieces we can automate, and that’s just going to make your teams more efficient; it’s going to be a force multiplier for you.
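As an illustration of the recon triage Ryan describes, here is a minimal sketch of automation paring a raw scope file down to a shortlist for a human operator. The file name, worker count, and structure are hypothetical; a real pipeline would use dedicated recon tooling, but the hand-off point to a human is the part that matters.

```python
# Minimal sketch (illustrative only, not any team's actual tooling):
# automation chews through a raw scope list and pares it down to a
# shortlist that a human operator reviews before choosing targets.
import socket
from concurrent.futures import ThreadPoolExecutor

def resolve(host):
    """Return (host, ip) if the name resolves, else None."""
    try:
        return host, socket.gethostbyname(host)
    except socket.gaierror:
        return None

def triage_scope(scope_file):
    """Deduplicate the scope and keep only hosts that resolve."""
    with open(scope_file) as f:
        hosts = sorted({line.strip().lower() for line in f if line.strip()})
    with ThreadPoolExecutor(max_workers=20) as pool:
        return [r for r in pool.map(resolve, hosts) if r is not None]

if __name__ == "__main__":
    # The automation stops here; a human decides what is worth pursuing.
    for host, ip in triage_scope("scope.txt"):
        print(f"{host} -> {ip}")
```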
02:50: How much automation is too much when it comes to security tools?
Nabil: Now this may be counterintuitive, but with too much automation, or too much availability of tools and technology out there, I often find there’s an inflection point where it becomes more of a hindrance during an engagement than a help.
I would love to understand your perspective on how red teaming engagements are being impacted today by automation, and are there certain negative things that are happening because there’s just too much available out there?
Ryan: I don’t know if this is red team in particular, but you definitely need to be careful and make sure that the operators you have running pentests or red team engagements understand the tools that they’re using. All of these tools are great. They improve the overall human aspect of the engagement. They make operators more efficient, faster, and more accurate at their jobs. But they need to understand what they’re doing. When a tool breaks, how do you fix it?
If you don’t understand the underlying functions that are going on, you’re going to get lost in things.
So yes, there is probably a point where there’s too much automation and not enough human thought going on.
You definitely need to make sure, as you’re planning things out, that you know where the break points are in each segment of the engagement. During reconnaissance, at what point do I stop, look at my data, and decide: do I use this tool or that tool instead, because of the data output that I have?
I don’t know if there’s a set definition of when I need to stop and inject human intelligence into a process, but there’s definitely something to it. That’s why we have leadership and management that can help guide operators along their path as they’re doing these engagements: stop here, think, look at your data, ask for help, ask for guidance, and then move on and pick your next tool, or your next thing to do.
04:35: How can companies safeguard against a potential false sense of security that automated tools bring?
Nabil: What advice would you have for an organization, or for someone, who has fallen into the trap of a false sense of security because they’re using a lot of automated tools that they think are doing what’s called red teaming, when they’re actually not focusing enough on the human and manual side?
How would you ask them to look at it through a different lens, so they think about the problem differently?
Ryan: That’s a hard one. I think they need to understand what they’re buying and what they should be getting, and that’s a customer education piece, or a business education piece if you’re an internal employee looking at it. You have to educate the person buying this, whether it’s your CISO or an executive buying these automated services. You have to educate them on what they’re getting, and what they should be getting at the end of the day.
It boils down to education at the end of the day. There are so many sales buzzwords going around that people don’t even understand that they’re buying the wrong thing.
So you have the conversation: here’s what you’re getting, here are the results we’re getting, and yes, everything is green, it’s all pretty, and it checks that box for us. But if some hacker comes along two weeks from now and finds a vulnerability, where’s your green piece of paper then?
Showing them that there’s a human thought process going into this, that humans understand attack vectors, risks, vulnerabilities, and operational impact, those are the things that get missed when you inject all the automation into it.
06:14: If you could look at red teaming through a crystal ball, what would you see?
Nabil: If I gave you a crystal ball that could look into the future, what would your take be on where you see red teaming as a method of security activity in the next five to 10 years?
Ryan: I guess this wouldn’t be a crystal ball; it’s more of the traditional thought process of red teaming. Right now, red teaming is traditionally looked at as a cybersecurity function, right? But red teaming should be at every single layer of decision-making within a business.
If the business is thinking about a merger or acquisition, what are the adverse reactions that could happen? What are the things someone could take advantage of if we do that? Taking red teaming outside of cyber, I think, is really where larger organizations especially should be shifting.
It’s something I push for when I build out red teams, and I’ve done it for many organizations: I always push to get it outside of cyber. It always starts in cyber, and then we bring it up to the board level and have those conversations there, but definitely with executives and everybody else as well.
07:18: What are some common challenges when purple teaming, and how have you overcome them?
Nabil: I know detecting problems is always fun, but I think the bigger challenge is actually remediating them and protecting an organization from attacks and attack scenarios, which leads us to the conversation around purple teaming and having red and blue teams work together.
What do you see as some of the main challenges of having a proper purple team engagement, and how do you overcome those hurdles?
Ryan: I mean, there’s lots of challenges there. The big, obvious challenge is funding and making sure that you have the appropriate personnel in place to do that. I think one of the other hindrances that you see quite a bit is that when you have a red team and a blue team, especially when they’re internal, there’s a lot of animosity between the two teams, right?
How do you bring those folks together to understand that they’re both working together to solve the same problem? And that’s what purple teams are here to do.
We’re all here to solve the same problem. Red does it one way. Blue does it another. But when you do it together, it’s a faster process.
Nabil: So, having done this for so long, are there any highlights or lowlights that you’ve experienced that you’d be comfortable sharing?
Ryan: I mean, the lowlights I probably wouldn’t want to share. There are definitely a lot of conflicts and, like I said, animosity between the teams, where you have to build bridges and mend those relationships: either things that have happened because of what you’ve done personally while working as a red team, or relationships you have to repair from previous folks who were in similar roles before you came into them.
I guess those would be the lowlights. The highlights are definitely being the first one to discover problems before the attackers come in. I’ve been in scenarios where we were attacked, and it was literally just the week prior that we had seen something.
It was a zero-day vulnerability, and we fixed not only the vulnerability but also our detection and visibility capabilities. None of that would have happened as quickly as it did if we were doing it as a red team, because those engagements are longer, months at a time. Whereas when it’s a purple team, I can spin that up and run it in a matter of hours, maybe days, and then we push fixes into place, and things are good to go.
So, some of the highlights are just seeing those things reflected so quickly, and being able to attribute it back to the activities that we were specifically doing.
09:34: Can you share any unique ways of fostering blue and red team collaboration?
Nabil: Is there something that you’ve done in your career that’s a little more unique? We know that there are normal team-building activities and exercises to get teams to collaborate. But do you have something you’ve done that might be a little unique or different that helped you foster that sense of collaboration across teams? And why do you think it was so effective?
Ryan: One of the ways that I’ve built the red teams at the last few organizations I’ve been at is what I refer to as a defensive red team. And that’s abnormal.
Normally, it’s an offensive security function, but I want to make sure that the red team, as they’re going in and performing all their actions, can tie those actions back to intelligence they gained, emulating exactly what an attacker would do, right? An attacker is not going to be an insider threat. Well, it could be an insider threat, but it’s not going to be somebody who’s been doing offensive security against that environment for months or years and has inherent knowledge of it.
I want to make sure that the red teams are basing all of their actions on intelligence they’ve gathered during an engagement, not on the inherent knowledge they already have.
It’s like grade school, where you always had to show your work on your math problems. I do the same thing with my red team: every time they type a command and hit enter, they have to show me their work.
How’d you get there? Why’d you run that command? What are you doing? What’s that based on? What’s your train of thought? And write that out.
We use all that data when we end the engagement and go back and sync up with the blue team. We walk through our entire process, and it sheds a bit of light on things the blue team doesn’t actually get to see when it’s a real adversary, right?
They don’t get to question a real adversary at the end of it, and they can’t question us on all of our activities unless we have them documented, because people forget what they did. So I force the team to sit down and write down all their actions, and their thinking behind each one, so we can talk through it at the end. It’s really opened some eyes, and it’s helped.
Again, I don’t know if it’s better than how anybody else does it anywhere else, but it has helped me, and it has helped open the eyes of a lot of the blue and red team folks I’ve managed over the years.
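Ryan’s “show your work” requirement can be captured with lightweight tooling. Here is a minimal sketch, purely illustrative and not any team’s actual tooling, of an engagement log that pairs every command with the operator’s rationale, so the full attack path can be walked through with the blue team at the debrief. All class and field names are hypothetical.

```python
# Minimal sketch of a "show your work" log: every command an operator
# runs is recorded with the reasoning behind it, then exported for the
# blue team walkthrough at the end of the engagement.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class ActionRecord:
    command: str    # what the operator typed
    rationale: str  # why they ran it / what intel it was based on
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class EngagementLog:
    def __init__(self):
        self.records = []

    def record(self, command, rationale):
        self.records.append(ActionRecord(command, rationale))

    def export(self, path):
        """Dump the full action history for the blue team debrief."""
        with open(path, "w") as f:
            json.dump([asdict(r) for r in self.records], f, indent=2)

log = EngagementLog()
log.record(
    command="nslookup mail.example.com",
    rationale="Recon showed an MX record; confirming it resolves first.",
)
log.export("engagement_log.json")
```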
11:38: How can we work together as an industry to resolve conflict between offensive and defensive teams?
Nabil: I’ve been in the cybersecurity industry for almost 20 years myself, and you’ve been around for a long time as well. I know we’ve seen leaps and bounds in security, with organizations and leaders understanding the need for activities such as red teaming. But I still find there’s friction between the offensive and defensive sides, especially when it comes to purple teaming. There are the red teams and the blue teams, that animosity happens, and it’s hard to get everyone on the same page to understand that they’re both working toward a common goal.
I’m curious to get your perspective on why that’s the case. Why haven’t we learned enough to understand that a team that’s really a partner and a peer of mine isn’t doing this to make my life harder, but to protect the organization and the business as a whole?
What’s it going to take to get security education and awareness to a point where, when other teams come to people with a security problem, they don’t see it as someone calling their baby ugly, but as an opportunity to improve?
Ryan: I mean, you took the words right out of my mouth. That’s where I was going to go with it. Nobody wants to have their baby called ugly. One of the ways I’ve helped foster the relationship is to feed on some of that animosity, but in a fun way: you gamify the system, right?
If the blue team can catch the red team during the first three steps of the attack path we’ve laid out, the red team has to take the blue team out to dinner. Or vice versa, right?
We’ll gamify a lot of the engagements, and that helps foster some of that collaboration. Is it going to fix everything? No, but it does help. It makes it a little more fun for everybody, and it makes it more of a contest. So while there’s still that ‘I want to be better than you’ element, it comes from a fun place and not from anger. Normally, if it’s ‘oh, I’m better than you, I just did all this cool stuff, and you guys caught nothing, you saw nothing,’ that’s more like calling the baby ugly. And nobody wants that.
Nabil: Ryan, thank you. This was fun; it’s always great chatting with you and hanging out. Hopefully we get to do this again. I have one more question to wrap this all up: if there were one key takeaway from our conversation today that you want people to walk away with, what would it be?
Ryan: I think it’s fostering that red and blue relationship. That’s a big thing for me, because I’ve been in environments where there’s so much hate and discontent that nothing gets fixed and nothing gets solved. It’s really about making sure you’re fostering a good relationship there, because that’s the only way anything is actually going to get done.
Find more episodes on YouTube or wherever you listen to podcasts, as well as at netspi.com/agentofinfluence. If you want to be a guest or want to recommend someone, please reach out to us at podcast@netspi.com.