Measuring Security Risks Consistently

Let’s start with a little exercise. Rate the risk for the following events.

  1. Going 15 mph over the speed limit.
  2. Using a public wireless internet connection at the airport.
  3. Using a third party for payment services.

If you were to ask your neighbor how they would rate them, would it be the same?  Go ahead and ask them; I’ll wait.  For those not asking: do you think they would be the same?  Probably not.  Assigning a risk label to an event is highly subjective.  It’s based upon a person’s experience, profession, and situational awareness, so how one person labels a risk will most likely differ from how another does.  This is mostly due to the lack of comparable impacts.  Assigning impact consistently, however, is manageable with guidance.  That guidance may include factors such as:

  • Fiscal cost to replace or fix.
  • Employee hours needed (will you have to outsource?).
  • Damage to reputation (usually greater for service providers).
  • Harm to individuals (employees and/or patients).

Each of these factors, and the threshold from one level to the next, is organization specific.  $10,000 in replacement systems may be fairly significant for one company, while for another it may be the budget for the annual holiday party.  Establishing the thresholds for each of your risk levels will make this a repeatable process.  It’s easier than most think; just walk through the possibilities for each factor: if this would cost our organization $__________ it would be bad, $____________ is really bad, and $_______________ is “I’m packing up my office right now.”  Repeat that for all of your impact decision factors.  Creating a matrix will help you quickly assign impacts and also ensure that the right people are involved in the process.

That’s right: assigning risks, impact, and likelihood shouldn’t be a one-person job; there are too many factors for one person to know.  Healthcare is a great example.  IT can determine how much it would cost to replace or fix a server, but IT most likely cannot properly gauge the damage to the organization’s reputation or the potential harm to patients.  Involving people in different roles also brings more situational awareness (i.e., threat likelihood) to the risk-assignment process.  They may be aware of additional controls that could lessen the chance of the risk being realized.  Raising situational awareness allows your company to assess risks with greater understanding and accuracy.  For example, would the risks you assigned to the examples above change with the following?

  1. Going 15 mph over the speed limit in a school zone.
  2. Using a public wireless internet connection at the airport after Defcon.
  3. Using a third party for payment services that continues to suffer data breaches.
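One way to make the threshold exercise above repeatable is to encode the dollar thresholds and a simple likelihood-by-impact matrix. Here is a minimal sketch in Python; the dollar values, level names, and matrix cells are illustrative assumptions that each organization would set for itself, not a prescribed standard:

```python
# Hypothetical sketch of a repeatable risk-assignment matrix.
# Threshold dollar values, level names, and matrix cells are
# illustrative assumptions -- each organization defines its own.

# Fiscal-cost thresholds for one impact factor (replacement/fix cost).
COST_THRESHOLDS = [
    (1_000_000, 3),  # "I'm packing up my office right now"
    (100_000, 2),    # "really bad"
    (10_000, 1),     # "bad"
]

def impact_level(cost):
    """Map a dollar cost onto a 0-3 impact level."""
    for threshold, level in COST_THRESHOLDS:
        if cost >= threshold:
            return level
    return 0  # below the lowest threshold: negligible

# Risk matrix: rows are likelihood (0=rare .. 3=likely),
# columns are impact (0=negligible .. 3=severe).
RISK_MATRIX = [
    ["low",    "low",    "medium", "medium"],
    ["low",    "medium", "medium", "high"],
    ["medium", "medium", "high",   "high"],
    ["medium", "high",   "high",   "critical"],
]

def risk_rating(likelihood, cost):
    """Combine a likelihood level with a fiscal impact into one rating."""
    return RISK_MATRIX[likelihood][impact_level(cost)]

# Same fiscal impact, but added situational awareness (e.g., a school
# zone) raises the likelihood -- and with it the overall rating.
print(risk_rating(1, 250_000))  # -> medium
print(risk_rating(3, 250_000))  # -> high
```

Because the thresholds and matrix live in one documented place, two people assessing the same event land on the same rating, which is the consistency the exercise above is after.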

All of the aspects above increase the maturity of the risk assignments used in Risk Management programs, audits, and everyday operations.  They help everyone within the organization speak the same language and ensure that we compare apples to apples.  When everyone is on the same page and knows how risks are being assigned, there also tends to be less resistance to risk-reducing initiatives.  This level of organizational “buy-in” is crucial for projects that have a large impact radius and cross many departmental boundaries.

So how does this all start?  The easiest way is to integrate this process into your Risk Management program and use it during each Risk Assessment.  Use the same process for your internal audits, and have external companies either use your process or provide enough information to allow your group to re-rate findings internally.  Document the process and the various factors, and make sure everyone involved knows what they are.  This will lead to some interesting conversations, but stick with it!  Having an established and consistent process turns the arbitrary into the meaningful.


Pentesting the Cloud

Several months ago, I attended an industry conference where there was much buzz about “The Cloud.”  A couple of the talks purportedly addressed penetration testing in the Cloud and the difficulties that could be encountered in this unique environment; I attended enthusiastically, hoping to glean some insight that I could bring back to NetSPI and help to improve our pentesting services.  As it turns out, I was sorely disappointed.

In these talks, most time was spent noting that Cloud environments are shared and, in executing a pentest against such an environment, there is a substantially higher risk of impacting other (non-target) environments.  For example, if testing a web application hosted by a software-as-a-service (SaaS) provider, one could run the risk of knocking over the application and/or the shared infrastructure and causing a denial of service condition for other customers of the provider in addition to the target application instance.  This is certainly a fair concern, but it is hardly a revelation.  In fact, if your pentesting company doesn’t have a comprehensive risk management plan in place that aims to minimize this sort of event, I recommend looking elsewhere.

The speakers also noted that getting permission from the Cloud provider to execute such a test can be extremely difficult.  This is no doubt due to the previously mentioned risks, as well as the fact that service providers are typically rather hesitant to reveal their true security posture to their customers.  (It should be noted that some Cloud providers, such as Amazon, have very reasonable policies on the use of security assessment tools and services.)  In any case, what I really wanted to know was this: is there anything fundamentally different about testing against a Cloud-based environment as compared with testing against a more traditional environment?  After much discussion with others in the industry, I have concluded that there really isn’t.
Regardless of the scope of testing (e.g., application, system, network), the underlying technology is basically the same in either situation.  In a Cloud environment, some of the components may be virtualized or shared but, from a security standpoint, the same controls still apply.  A set of servers and networking devices virtualized and hosted in the Cloud can be tested in the same manner as a physical infrastructure.  Sure, there may be a desire to also test the underlying virtualization technology but, with regard to the assets (e.g., databases, web servers, domain controllers), there is no difference.  Testing the virtualization and infrastructure platforms (e.g., Amazon Web Services, vBlock, OpenStack) is also no different; these are simply servers, devices, and applications with network-facing services and interfaces.  All of these systems and devices, whether virtual or not, require patching, strong configuration, and secure code.

In the end, it seems that penetration testing against Cloud environments is not fundamentally different from testing more conventional environments.  The same controls need to exist, and these controls can be omitted or misapplied, thereby creating vulnerabilities.  Without a doubt, there are additional components that may need to be considered and tested.  Yet, at the end of the day, the same tried and true application, system, and network testing methodologies can be used to test in the Cloud.


New MasterCard Level 2 Validation Requirements Effective June 30, 2012

Gettin’ Your Internal Security Assessor on…

Friendly reminder: after June 30 of this year, all Level 2 MasterCard merchants performing their annual self-assessment must ensure that their internal resource has attended ISA (Internal Security Assessor) training.  Alternately, Level 2 merchants can hire a Qualified Security Assessor (QSA) to perform the assessment and sign off on their Level 2 self-assessment Attestation of Compliance.  This is a change from the current requirements, which allow any internal staff to perform the Level 2 assessment.

The ISA program is maintained by the PCI Security Standards Council; training consists of four one-hour online courses followed by two days of onsite instructor-led training.  At the end of the course you even get a certificate that you can use to win friends and influence people!  Based on feedback from current ISAs working for my clients, the training is valuable even to those with a deep PCI background.  As ISAs receive (essentially) the same training as a Qualified Security Assessor, there are multiple benefits to keeping an ISA on staff:

  • By attending SSC-approved training, the ISA is getting the most current and relevant interpretations of the DSS.
  • An ISA is an “internal QSA” and also an employee; therefore the ISA may have the advantage of a deeper familiarity with the organization’s people, environment, and processes compared to an external consultant/auditor.
  • For a variety of reasons, most organizations still choose to use an external QSA firm for audits; however, ISAs tend to be an excellent interface to an external QSA, and can be useful as a second opinion if the QSA firm sends Cousin Eddie to do your audit.
  • An ISA can provide an enhanced understanding of the Data Security Standard (DSS) requirements as they relate specifically to your organization, and can keep you apprised of current and emerging trends in the payment card sphere.
  • Having an ISA on staff is the modern version of having a Royal Wizard in your court.  Though I am not supposed to speak of this, part of the advanced QSA/ISA training involves learning all manner of arcane magic.  The ISA may be able to teach you some tricks or perform at your company holiday party.

If the changes to the MasterCard Level 2 merchant requirements affect your organization there is still time to sign up for training (ISA training schedule is here).  You’ll want to become an ISA yourself when you see the locations – London in April, anyone?
