
Part 2: Ready for Red Teaming? Crafting Realistic Scenarios Reflecting Real-World Threats
TL;DR
Read this first: Part 1: Ready for Red Teaming? Intelligence-Driven Planning for Effective Scenarios
Good red teams require the right scenarios to test an organisation’s mettle and allow it to grow. If the bar is set too high, the outcomes are not actionable; set it too low, and it fosters a false sense of security and damages the relationship between client and supplier. Why? Because to test effectively, red teams must align with an organisation’s strengths and weaknesses. Stunt hacking (performing complex feats for no good reason) benefits no one, especially when expectations of outcomes need to be met. Effective scenarios balance sophisticated techniques with practical, observed attack methods to provide actionable insights for impactful red team testing. In this post, learn how red teams and offensive security experts execute what they do. Explore real techniques, realistic setups, the trade-offs, and the ethics that separate theory from practice.
Good Practices:
- Emulate tactics that:
- Align with threat actors relevant to your industry;
- Reflect motivations driving unknown groups and their likely capabilities.
- Blend technical and non-technical attack vectors, balancing familiar approaches with untested ones to ensure you can adapt effectively to both.
- Adjust difficulty, complexity, and time spent based on the organisation’s current security maturity.
- Discuss your scenario ideas with vendors in advance—we’re here to help with our experience and filter out any impractical ideas.
- Purple teaming can help reduce the TTPs that are viable in your organisation, but it is different from red teaming and collaborative red teaming. Run each separately, or risk not meeting outcome expectations.
- Include social engineering elements that align with current trends, as well as those frequently used against your organisation. Tried and true methods often remain effective.
- Formulate a plan for your goals for the exercise that goes beyond “to prove our control effectiveness” or “we want to validate our purchase of X security tool”, and make sure you work with vendors to align on these goals. If they know them, they can help.
- Analyse the organisational impact of delivering a scenario, who will support it, and how you can run it while maintaining normal security operations.
- Communicate securely and bounce ideas off both teams, the control group and the red team, throughout. If you roll a boulder down a hill, it does not go the way you want it to 100% of the time, so steer it lightly where you can.
Poor Practices:
- Focusing solely on exotic, advanced persistent threat (APT) techniques.
- Neglecting to test incident response processes alongside prevention and eradication.
- Creating unrealistic scenarios that don’t match real-world constraints or exploited pathways.
- Overlooking common, low-tech and low complexity attack methods that are perennially effective.
- Failing to adapt scenarios based on initial findings during the exercise.
- Not communicating with the red team throughout about the good, the bad, and the in-between. Communication ensures that as the process evolves, it works for mutual benefit.
Introduction
If we’re conducting a red team engagement in an environment that doesn’t mirror your real infrastructure—with all its technical debt, layered controls, and gaps—the exercise will never truly reflect your actual risk.
When we test an inaccurate reflection of your people, process, and technology, at least one element will be improperly tested, leaving control gaps unexamined. The result is a situation where we haven’t done our job of challenging your assumptions, and you’ve tested only part of what is useful to you.
The last paragraph may be controversial to some, but if a flattering reflection is what you want, you will continue to spend money and get no meaningful result. Red teams exist to prove or disprove your assumptions and controls in a real-world context, to make sure the time and money you spend on security is effective. This allows you to practice what to do when a real breach occurs.
As a red teamer, it’s easy to spot the battle-hardened CISO or security leader who has trodden these paths many times before and been through the pain a breach can bring to them and their teams. Their focus has shifted away from simply producing reports or determining criticality, and instead centres on how that criticality can drive security change. We are enablers in what we do, and if we find an issue, it’s absolutely a way to prove you identified and fixed it before the real event occurred. This should help you drive organisational change and get the funding and attention from the business you so often need.
Red teamers are often idolised in the world of proactive or offensive security, but we are as human and fallible as the blue team. Approaching the engagement without being adversarial is key, and that applies to us as much as it does to the person requesting the service. Why? If we don’t, the opportunity to collaborate and mutually learn is lost. To improve as red team professionals, we need exposure to realistic environments; to improve as a blue team, you need to evaluate each other’s methods and capabilities to produce better outcomes next time. That’s how we build up our understanding of modern defences and how they work in practice. The back-and-forth with the blue team is essential. Without that pushback, the red team won’t improve, and the reverse is true, too. It’s a mutual process. I often say to clients that red teaming is like a counselling session: we all go in with our strengths and weaknesses on display, because that’s the only pathway to acceptance and growth that works. Clam up, and you never get to hear the other side of the story.
The reality is that a good red team engagement is a complex, involved process. It takes time, planning, and shared understanding. Managing expectations is key—and we try to do that clearly and transparently, even through a blog series like this one.
Why Are You Testing and What Are Your Goals?
How does a red team execute a realistic scenario? It can be external, internal, assumed breach, malicious insider—anything. It could follow frameworks like CBEST, TLPT, TIBER, or be more bespoke, but some things are always core.
For one, most exercises require obtaining some kind of initial access, most often via social engineering of users in one way or another. Less common methods are external technical breaches or physical means.
The difficult part is that this is dynamic and depends on your business and what we can identify as exposed data. Social engineering has many areas of consideration to get right. Do we phish people directly? Use social media platforms? Go via third-party or fourth-party relationships? Set up watering holes, or use physical infiltration, like dropping USBs or blending in at a local bar? These are all techniques that get discussed, and they come in and out of popularity.
Right now, physical access testing is making a comeback in some locales, and it falls in and out of fashion globally in the red team space. This is often a sign of economic pressure, or of a perceived lack of successful initial access outcomes in the past. Another factor in its fluctuating popularity is the capability of a company’s EDR solutions versus attacker tooling at that point in time. We normally recommend not blending physical testing into a red team unless there is a specific threat with viable evidence worth following up on. Why? Because we offer a dedicated physical security and social engineering service, and most companies only need a pentest in this space, which is a more cost-effective use of capital. Coupled with that, ask yourself whether you’re sure there are people trying to break into buildings and drop devices into your company, leading to an advanced breach in the real world. The risk exists, but it applies to a more specific audience. If you’re combining it for cost and audit reasons, you will find diminished returns and uncertain outcomes. It’s important to have evidence that something like this has a valid threat behind it, and that applies to all types of breach methodology.
Part of the reason alternative methods of initial access are on the rise is that defenders are getting better, and EDR tools are not only more advanced but also more affordable. Security stacks are becoming more integrated, which makes initial access harder; sometimes we spend four to six weeks just trying to get a foothold. There is often a need to do more work to align social engineering to your organisation and its technical complexity to make access possible at all, so expect to be asked for more guiding information on what types of attacks can or cannot be conducted, or are likely to be effective. This is not cheating; it’s the red team trying not to waste your money on failed ventures and to work through all the attack chain elements, so you get solid testing on every area that needs it, as opposed to spending everything on just one part.
If you only want to prove one part of your business is strong against a particular threat vector to validate that you can’t be breached, you’re absolutely doing this for the wrong reasons.
Threats often have more time and resources than you or your red team, and they may already be working for you or have been compromised. If proving or disproving this relies solely on initial access, it’s a sign you have more to learn before engaging a red team.
Red teams are not cheap, for you or for those who conduct them. We are condensing what may be months of work into a few weeks and doing this at scale with highly skilled and experienced technologists. If an organisation’s focus is purely on “can you breach us,” it might take weeks of effort. In the real world, threats are persistent. We’re not—we’re a scoped and costed engagement, so there needs to be balance between realism, outcome, and budget.
Assumed Breach: Doing It Realistically, Not Lazily
One of the common engagement models is assumed breach—start from the point of compromise. How we set that up matters. Historically, people just created a test account, gave it access to a clean machine, and said, “Go,” but that’s not good enough. That’s what farmers would call “fallow earth.” A new user account with no activity history, no realistic group memberships, and none of the messiness of a real corporate identity falls short of proving much. Security products relying on abnormality and heuristic analysis treat such accounts differently from real users, pulling us further away from reality. In contrast, a real compromised user has baggage—aged accounts, old permissions, logon patterns—all of which matter when testing whether a threat actor would be caught or not, as well as the impact of their compromise.
Sometimes, we’re caught not because of what we did, but because the account we were using stuck out for being too new and clean. Heuristic detection flags that because it’s not reflective of real attacker behaviour, and it gives defenders a false sense of security.
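As a rough illustration of the “fallow earth” problem, the sketch below checks a candidate assumed-breach account against a few “lived-in” heuristics before the exercise starts. The attribute names and thresholds here are purely illustrative assumptions, not tied to any specific directory service or EDR product; in practice you would pull the equivalent fields from your own directory export.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CandidateAccount:
    """Attributes you might pull from a directory export (names are illustrative)."""
    created: datetime        # when the account was provisioned
    last_logon: datetime     # most recent interactive logon
    group_count: int         # number of group memberships
    logon_hosts: int         # distinct workstations with historical logons
    mailbox_items: int       # rough proxy for day-to-day activity

def realism_flags(acct: CandidateAccount, now: datetime) -> list[str]:
    """Return reasons an assumed-breach account may look 'too clean' to heuristic detection.

    Thresholds are hypothetical starting points; tune them to your environment.
    """
    flags = []
    if now - acct.created < timedelta(days=90):
        flags.append("account younger than 90 days")
    if now - acct.last_logon > timedelta(days=30):
        flags.append("no recent interactive logons")
    if acct.group_count < 3:
        flags.append("too few group memberships for a real user")
    if acct.logon_hosts == 0:
        flags.append("no workstation logon history")
    if acct.mailbox_items < 50:
        flags.append("near-empty mailbox")
    return flags
```

A freshly minted test account trips most of these flags, while an aged account with normal permissions and logon patterns passes cleanly, which is exactly the difference heuristic detection keys on.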
On top of that, we must think about the impact on the user. If you use a real person’s account in an assumed breach and they end up disciplined or even fired, that’s wrong; it’s not what red teaming should do.
Instead of punishing them, use the event as a teachable moment to help them learn, grow, and improve. Take time to ask why they fell for it and build it into learning for the rest of the team. Most people who go through a compromise become champions of security awareness. It’s like taking a speed awareness course after a speeding ticket: when done right, it’s transformational. People who’ve been phished and educated become internal voices reminding their colleagues to be cautious, so give them a chance and enable them. They also make great assumed-breach users; they have walked the walk, and it turns them into turnaround stories that others will emulate.
Don’t Overfocus on Initial Access
In some regions, especially parts of the Middle East and the Indian subcontinent, there’s a strong belief that if you can’t breach the perimeter, everything else is irrelevant. This is a limited view, because threats don’t have six weeks or scope limitations; they have patience and resources that raise their capability. The idea that nothing else matters if red teamers don’t gain access externally is short-sighted. Breach is not a binary outcome; it’s a spectrum. Threats are constant. They may come from insiders, supply chain actors, or credentials exposed elsewhere. Placing all your resources and controls at the perimeter means that if one thing slips through, everything inside is fair game. You need defence in depth: visibility, detection, and containment capabilities inside your organisation too.
The Advanced Techniques of Lateral Movement and Organisational Mobility
Let’s talk about advanced attack chains. Phishing to gain access is common, but phishing again internally to move laterally is something we don’t see often enough in testing, even though it happens in real attacks. It’s a great way to test trust boundaries within the organisation. People forget that lateral movement isn’t just about exploiting AD misconfigurations or abusing federation. Person-to-person movement and exploiting relationships are very effective. Once inside, all the fancy perimeter filtering is irrelevant. Internal phishing is often less detectable, and much more revealing.
Detection versus Stealth
There’s a school of thought that says, “Turn the noise up until the SOC spots you.” I challenge that view. Sometimes we get to a stage in the engagement where we’re clearly undetected, and that’s the moment we should trigger a simulated breach notification, such as a journalist speaking to a ransomware actor, or an anonymous tipoff. Let’s see if the organisation has the processes in place to handle a threat it didn’t know about, and can do actual threat hunting and eradication to prove it can remove the threat. One small caveat: if incident response is a paid retainer service, billing may be a concern, so consider whether those providers need to be read in beforehand. Plenty of real breaches are discovered because someone saw credentials being sold online, a journalist tipped them off, or a breach site posted something.
Often, the first sign isn’t a detection event; it’s an artifact, a whisper. Can the organisation respond well in that moment? We want to know: can you find all the touch points? Can you contain the threat? Can you fully eradicate it, with certainty and efficiency?
There’s always a risk in these moments. You don’t want the SOC to panic and start isolating everything, so it must be planned, but it’s far more valuable to test response than to test whether you spot every beacon. Often the most damaging question a SOC team can ask a senior leader mid-incident is, “Is this a red team?” If your SOC can predict that you run a red team at the same time every year, the game is already broken, and it’s difficult to learn the right lessons from the start.
Communication, Failures, and Transparency
One of the underappreciated parts of a red team engagement is communication. You need to give the client enough to maintain situational awareness without tipping your hand. They need to know progress is being made, what the implications might be, and when urgent issues arise.
We also need to do a better job of talking about failures. We rarely talk about the times we didn’t get in, or when we were blocked by something circumstantial, like a firewall policy change that kicked our beacon offline. That’s happened to me more than once. It’s unfortunate, but it’s important. Your organisation needs to understand that sometimes success or failure comes down to luck, not defence. The post-engagement review should be brutally honest: not just, “Here’s what we got,” but “Here’s what we tried, here’s what almost worked, here’s what failed.”
Again, we’re human, and humans can fail, but we always aim to do the right thing for our clients, even at our own cost. Your average red teamer spends far more time than you ever get to see trying to do the right thing to help you learn and grow. Just as a personal trainer needs to hit the weights too, we need to exercise alongside you to make sure we can get you through those reps. And we can get tired, take non-optimal courses of action, or do things that work elsewhere but not for your organisation, no matter how much we practice our technique.
Ethics, Risk, and Business Context
Finally, everything we do as red teamers must be seen through an ethical lens. Just because we can exploit something doesn’t mean we should. Maybe we find a critical RCE in building management software. Do we exploit it? What if that software controls the HVAC system for a high-heat facility? We could create a safety risk. We need to simulate real threats, but we cannot become the threat ourselves.
There’s also the business side. Are we considering the industry? Regulations? Recent high-profile incidents? Are we aligning our test to what’s keeping that organisation’s leaders and teams up at night? That includes supply chain scenarios. What happens if we compromise a supplier and gain access to the main organisation unintentionally? Are we prepared to manage that responsibly? Scope, risk, and communication are key.
At the end of the day, it’s all about balance. Balancing realism with operational safety. Balancing stealth with learning outcomes. Balancing red team pride with client maturity. If your team isn’t ready for elite tactics, we don’t throw you in the ring with a world champ—we teach you how to keep your guard up first.
Ready for Red Teaming? Contact NetSPI
Red teaming is an involved testing type that brings highly beneficial insights into your company’s ability to detect and respond to the most realistic attack scenarios. Taking the time for proper planning and evaluation ahead of red team engagements will result in the most valuable outcomes and a strong working partnership between you and the red team testers.
Whether you’re ready for the next challenge, or you’re working on compliance with industry regulations, NetSPI is ready to guide the most impactful next step for your security. Contact us for a consultation with our security experts.