TL;DR 

If you haven’t already, start learning about red teaming by reading part one and part two first:

The true value of red teaming lies in fostering a collaborative security environment where we can speak truth to each other without resorting to being offensive or defensive, like ‘red team therapy.’ This is the only way to improve culture and drive concrete improvements across both the red team and your organisation. To do this, the analysis and improvement stage must serve as a genuine learning exercise for all parties involved. For that to happen, results and attack paths cannot be seen as failures, but as a means of building a pathway to success.

Good Practices:

  • Conducting joint red and blue team debriefings, letting the blue team lead with their view of what transpired so that perspectives are shared starting from the defensive side.
  • Keeping detailed logs of red team activity, and retaining the corresponding system logs, so the two can be correlated in joint analysis.
  • Mirroring technical findings in business-risk language for executives, and sharing reporting outputs with each stakeholder group separately.
  • Establishing a blameless post-mortem process internally to encourage openness, and extending it externally when dealing with regulators, auditors, and red team providers.
  • Setting concrete improvement goals based on exercise results. These are not the same as your strategic or tactical objectives, or the technical controls you implement – those are a subset of the improvement goals.

Bad Practices:

  • Treating the exercise as a pass/fail test of controls, people, process, or technology, rather than an opportunity to learn how to improve them by assessing their performance.
  • Withholding information between red and blue teams post-exercise. Both sides must learn and grow with each test; if information does not flow both ways, the red team cannot get better and neither can you.
  • Focusing solely on successful breaches while ignoring partial detections, response processes, and other events in the full attack chain. Each is part of a whole.
  • Failing to track and validate security improvements over time and to align them from one report to the next. Don’t force yourself to relearn the same lessons over and over again – and pay for it each time.
  • Neglecting to update threat models and security awareness training based on findings, and to retire those that are no longer relevant or valid. The goal is not to grow ever more fearful of the world and its threats.

Reframing the Post-Engagement Phase

This third phase of red teaming is arguably the most challenging and impactful part of the engagement for organisations. At its core, this stage must serve as a genuine learning exercise for all parties involved. A fundamental challenge lies in moving past adversarial thinking – the entrenched “us versus them” mindset – that often characterises red and blue team dynamics when each party has to defend its workmanship rather than working together toward shared goals.

While the engagement itself may have felt combative, because it was likely run in a way that had the defensive teams treating the red team as a genuine threat, we must now transition away from that mentality.

This is a critical step in how organisations train their blue team, and it indirectly demonstrates how threatened they feel by the exercise. This stage should no longer resemble a fight. Instead, it should resemble a therapy session in which all parties wear their strengths and weaknesses on their sleeves to make sure the result is honest, grounded, and true.

It is extremely important that both red and blue teams, along with other stakeholders, discard defensiveness, but this is difficult. The engagement might have exposed security weaknesses, operational blind spots, or limitations in tooling, but none of these findings should provoke shame, retribution or doubt in the organisation being assessed.

There is a common practice, and mode of thought, that says poorly performing blue teams should be disciplined when an ostensibly poor result comes out of a red team report. To be clear, there is no bad report, only one that creates more opportunity to improve.

Every organisation, regardless of maturity, has its challenges, both internal and external. What matters now is how we respond to finding out that our hard work didn’t withstand the test as we had hoped. We can draw comfort from the fact that it wasn’t the real thing this time round.

From Conflict to Collaboration

One reason regulated frameworks often succeed is that the presence of an external party such as a regulator acts as a form of moderator, much like the counsellor in our therapy analogy, ideally helping both sides engage constructively. This creates a safer environment for open discussion, reducing the instinct to hide flaws or frame results as failure.

In less formal settings, the red team needs to adopt a collaborative mindset: how can we help the organisation reduce risk, secure funding for remediation, or isolate risky configurations?

All too often, poor red teams carry the adversarial mindset forward, largely because much of the wider industry, and controversially the client base, continues to perpetuate the fight mentality through a lack of post-operation collaboration.

This mindset shift should be matched by the blue team. Findings are opportunities to secure investment and staffing, and to fix the issues they’ve probably wanted to address for some time – or at least to say “I told you so” to colleagues. The red team is not a fire meant to burn you, but a fire under you, meant to motivate and to drive urgency and improvement. And while analogies of heat and pressure may be colourful, the essential goal is clarity, mutual respect, and growth, with everyone’s hair intact.

Fostering a Culture of Honesty

In the lead-up to, and during, testing, ego and fear must be set aside. The fear that leadership may discover a weakness in people, process, or technology, and treat it as solely the blue team’s to fix, causes significant damage to the red team process, as well as to staff morale and capability between engagements. That fear shouldn’t prevent us from having honest conversations, including about surprising design decisions or business constraints, and especially those that led to risk acceptance, technical debt, or lower standards being accepted at a macro level.

The red team should help translate technical findings into actionable improvements, but this is often missed due to a lack of understanding of the constraints under which blue teams operate. The blue team should provide context on its actions before and during the test to help assess how to improve in the real world. This context, such as why certain decisions were made, how responses were prioritised, or what constraints limited action, is what allows us to provide advice you can actually use.

Unfortunately, the somewhat tricky reality is that many blue teams can deliberately or inadvertently ‘clam up’ during this phase. Whether due to fear, lack of internal co-ordination, or a lingering adversarial mindset, this lack of transparency hampers the value of the engagement.

Without insight into the organisation’s decision-making processes or incident response rationale, red teams cannot accurately assess the effectiveness of detection, alerting, or response measures. Worse still, this inhibits the blue team’s ability to advocate for improvements within the organisation.

Bridging the Gap Through Shared Process

This communication gap cuts both ways. Red teams often fail to explain their methodology, the rationale behind specific actions, or the decision points that shaped the attack narrative. Blue teams, in turn, may be hesitant to share detection and response workflows, assumptions, or weaknesses. Bridging this divide requires mutual openness and a willingness to share process, not just outcomes, and that means baring our weaknesses in a way that is difficult without a healthy dose of exposure therapy. Both sides must do this, every time.

For some teams, gold teaming can help, as it places more emphasis on process than a pure red or blue team engagement does. Tabletop exercises are great, but they need to include practical elements. Keep in mind, though, that gold teaming is still fairly new in the commercial world; it can be inconsistent in its implementation and often lacks the technical depth of red team delivery.

Nonetheless, crisis-response models from public safety, such as law enforcement or emergency services, offer a well-documented framework for structured post-incident analysis, and they can serve as inspiration for building more effective and mature collaborative review practices in cybersecurity.

Practical Improvement: Workshops and Collaborative Reviews

At NetSPI, one of the most effective methods we’ve used is to run separate sessions: a post-operation scenario narrative debrief, a detection and response workshop, and a senior leadership debriefing. Why? Because senior leadership’s output requirements differ radically from those of technical leadership, which in turn differ from what the defensive teams need from the engagement. Each of the three sessions is tailored to its audience’s needs and post-engagement actions, and together they help everyone digest and act upon the reporting output.

Bringing together security vendors, platform owners, the organisation’s security capability, leadership (such as the CISO), and the red team fosters rich discussion, but only after each group has been triaged separately. These workshops allow teams to walk through the engagement together, explore alternative response strategies, identify tooling gaps, and simulate more effective reaction scenarios, with each party’s position on the engagement already established so they can bring their insights and needs to the final workshop more effectively. A lesser-discussed element, and one that can be equally powerful, is bringing your security vendor to the table if they’re open to it. Involving vendors directly allows live refinement of tooling and a shared understanding of how to configure outputs for reporting. This can enable tighter detection rules, improved configuration baselines, and lessons that are actioned immediately or identified for longer-term fixes.

This collaborative model begins to resemble a true “rainbow team” in which all participants, regardless of their original role, contribute to mutual learning and operational improvement.

Measuring What Matters

A key objective is to assess what has and hasn’t been effective. This requires a combination of quantitative and qualitative analysis. Atomic findings are useful, but so are broader patterns and systemic risks. Red teams must connect technical issues to wider organisational impacts; this remains the single biggest challenge for many organisations. It means analysing not just people and technology but also process, an area where many assessments fall short because judging it accurately relies on input from all parties.

Understanding whether alerts were triggered (or whether activity simply never met alerting thresholds), why certain actions were or weren’t taken, and what influenced business decisions is absolutely essential. Without insight into these processes, we miss opportunities to recommend the funding, tools, or training that could significantly reduce future risk.
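To show the kind of joint analysis this enables, here is a minimal sketch that correlates a red team operator log with exported SIEM alerts by timestamp. It is illustrative only: the file names, column layout (including a “technique” column), and the five-minute correlation window are assumptions for the example, not a prescribed format.

```python
import csv
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)  # assumed correlation window; tune to your environment

def load_events(path, time_field):
    """Load a CSV export and parse its timestamp column (ISO 8601 assumed)."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    for row in rows:
        row["_ts"] = datetime.fromisoformat(row[time_field])
    return rows

# Hypothetical exports: operator log from the red team, alert export from the SIEM.
red_actions = load_events("redteam_operator_log.csv", "timestamp")
siem_alerts = load_events("siem_alerts.csv", "alert_time")

# For each red team action, check whether any alert fired within the window.
for action in red_actions:
    matched = [a for a in siem_alerts if abs(a["_ts"] - action["_ts"]) <= WINDOW]
    status = "detected" if matched else "no alert"
    print(f'{action["_ts"].isoformat()}  {action["technique"]:<25}  {status}')
```

Even a crude correlation like this gives both teams a shared, factual starting point for discussing which actions met thresholds and which passed unnoticed.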

What Doesn’t Work

The inverse of good practice is, unsurprisingly, bad practice. Withholding information in a red team engagement, whether intentionally or by omission, serves no one. Whether you’re testing to validate controls, identify gaps, meet compliance, or assess operational readiness, the process must be approached openly. Without transparency, the engagement fails to deliver value for either side.

Mutual growth depends on honest, collaborative interaction. Information sharing is vital. If we’re too guarded, whether out of fear, pride, or misunderstanding, we miss the opportunity to genuinely improve.

Improving Beyond the Test

In regulated testing scenarios, opportunities for direct improvement can be limited. Many frameworks prohibit providers from using the engagement to cross-sell services. This restriction exists for good reason: the goal is not sales, but learning and security improvement across the industry.

That said, one of the most effective ways to improve beyond the test is purple teaming. The best purple teaming mirrors what was experienced during the red team engagement, as part of the spectrum of unit testing it undertakes.

This often means the same red team is needed to truly replicate the work, which, it should be no surprise, is paradoxical and runs against the up-sell and cross-sell mantra. Whoever you use, these sessions focus on the tactics, techniques, and procedures (TTPs) of real threats and, by extension, those used during the red team engagement, allowing organisations to test and refine their detection capabilities across different scenarios. This extends technical learning into tangible growth and allows self-reassessment without paying the full red team fee again next year.
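To make the unit-testing analogy concrete, a purple team session can be structured as a simple replay-and-check loop over the TTPs exercised during the engagement. The sketch below is a hypothetical outline only: the TTP list, the emulate_ttp helper, and the detection check are placeholders you would replace with your own emulation tooling and SIEM or EDR queries.

```python
from dataclasses import dataclass

@dataclass
class TtpResult:
    ttp_id: str
    description: str
    detected: bool

# Hypothetical subset of TTPs exercised during the red team engagement.
ENGAGEMENT_TTPS = [
    ("T1059.001", "PowerShell execution"),
    ("T1021.002", "Lateral movement via admin shares"),
    ("T1003.001", "LSASS memory credential dumping"),
]

def emulate_ttp(ttp_id: str) -> None:
    """Placeholder: invoke your emulation tooling for this technique in the agreed scope."""
    print(f"[purple] replaying {ttp_id}")

def detection_fired(ttp_id: str) -> bool:
    """Placeholder: query your SIEM/EDR for an alert tagged with this technique."""
    return False  # replace with a real lookup

def run_purple_session() -> list[TtpResult]:
    results = []
    for ttp_id, description in ENGAGEMENT_TTPS:
        emulate_ttp(ttp_id)
        results.append(TtpResult(ttp_id, description, detection_fired(ttp_id)))
    return results

if __name__ == "__main__":
    for r in run_purple_session():
        status = "DETECTED" if r.detected else "MISSED"
        print(f"{r.ttp_id:<10} {r.description:<40} {status}")
```

The value lies less in the code than in the discipline: each TTP becomes a repeatable test case the organisation can re-run itself between engagements.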

Similarly, detection and response workshops are invaluable. They are essentially feedback sessions on the capability to act on threats at both a technical and a procedural level. They allow organisations to walk through what happened, understand the impact, and identify areas for future development against active threats. These workshops also support continued staff training, helping teams better interpret technical findings and map them to potential organisational impacts.

Evolving Support and Aftercare

One challenge in cybersecurity is that many providers hesitate to offer direct support for fixing issues, often due to liability or separation-of-duties concerns. As a result, we’ve started exploring how we can better support clients in applying what they’ve learned. This includes disseminating red team insights more broadly, and helping clients understand how to engineer more adaptive and behaviour-based detections.

For instance, organisations sometimes expect a single YARA rule or regex string to cover an entire attack class, but that is neither realistic nor effective. Defence-in-depth requires layered, resilient controls, including behaviour-based detections that identify attacker behaviour patterns. Understanding the actions taken in the lead-up allows us to fix the underlying issues and stop repeat behaviour, or catch events that are variations of the same root issue. To help clients recognise this shift in emphasis, we offer aftercare services including remediation workshops and validation testing of what we deliver.
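To illustrate the difference in approach, the sketch below contrasts a brittle single-string signature with a simple behavioural heuristic over process-creation events. The event data, field names, and suspicious parent/child pairs are hypothetical and deliberately simplified; this is not production detection logic.

```python
# Hypothetical process-creation events (host telemetry), simplified for illustration.
events = [
    {"host": "ws01", "parent": "winword.exe", "child": "powershell.exe",
     "cmdline": "powershell -enc SQBFAFgA..."},
    {"host": "ws01", "parent": "powershell.exe", "child": "rundll32.exe",
     "cmdline": "rundll32.exe payload.dll,Start"},
]

# Brittle approach: one exact string. A trivial rename or re-encoding evades it.
def signature_match(event: dict) -> bool:
    return "payload.dll" in event["cmdline"].lower()

# Behavioural approach: flag suspicious parent/child chains regardless of payload names.
SUSPICIOUS_CHAINS = {
    ("winword.exe", "powershell.exe"),   # Office application spawning a shell
    ("powershell.exe", "rundll32.exe"),  # shell proxying execution through rundll32
}

def behavioural_match(event: dict) -> bool:
    return (event["parent"].lower(), event["child"].lower()) in SUSPICIOUS_CHAINS

for e in events:
    print(e["child"], "signature:", signature_match(e), "behavioural:", behavioural_match(e))
```

The point is not the specific heuristic but the layering: behavioural patterns survive superficial changes in tooling that defeat a single static signature.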

In red team engagements, however, we need to think even bigger. Fixes must not only work in isolation but be resilient across a wider organisational context. Collaborating with clients to consider that broader impact is a core part of our role.

Long-Term Planning and Iteration

Red teaming shouldn’t be a one-off engagement. Building three- to five-year plans with your organisation allows for iterative maturity and enables both parties to develop a deep understanding of evolving threats and defences. Over multi-year relationships, test scenarios often become more complex or creative. However, we must ensure they remain realistic and aligned with actual risks.

There are two approaches we typically recommend. One is to roadmap the long-term testing strategy alongside the organisation’s security maturity model. The other is to iterate based on previous findings, retesting known weaknesses where necessary while adjusting for areas that may now pose greater resistance. The key is balance: don’t avoid high-resistance areas just for ease, but don’t retread ground that’s already secured unless it’s part of a validation cycle.

Insights from the Defence Side

An area we’re actively developing is capturing feedback from defensive teams: SOC managers, L2/L3 analysts, and defensive leads. Understanding how they interpret and respond to red team engagements adds vital perspective. We want to highlight their strengths, explore their challenges, and share best practices across the industry.

In future work, our team here at NetSPI plans to incorporate these perspectives more directly, weaving in real-world case studies and insights from red teamers and defensive leaders alike. We’re particularly interested in how specific sectors like finance, crypto, or regulated industries respond to and adapt based on red team findings.

If you’re interested in sharing insights from the defensive side, please do contact us; we’d love to hear from you and to build better results for our clients.

Ready for Red Teaming? Contact NetSPI

Let’s talk about what comes next. Emerging technologies, especially AI, will play a significant role in red team evolution. These tools will influence scenario design, detection techniques, and attack simulation strategies. As we continue refining our services, we’ll integrate these trends into both technical execution and strategic planning, helping you and your organization stay ahead of tomorrow’s threats. Whether you’re ready for the next challenge, or you’re working on compliance with industry regulations, NetSPI is ready to guide the most impactful next step for your security.