Managing Security in a Regulated Industry
As is the case with any regulated industry, the insurance industry has long been driven by regulatory pressures that compel it to undertake security activities. Over the past 5-10 years, the level of regulatory pressure has grown dramatically, forcing regulated industries to take proactive action to address risk within their environments. At the same time, customers have become savvier and have raised their expectations of what responsible corporate stewardship looks like with respect to security and privacy.
The combination of this increased regulatory pressure and higher customer expectations has finally, and relatively swiftly, generated positive movement to address security and privacy risks. The result is active involvement of executives. This contrasts with the snail-like pace of change in the preceding 50 years, when the insurance industry was slow to adopt and embrace technology, and slower still to adopt appropriate security and privacy controls.
Most of the regulations are quite explicit about the need to exercise due diligence over your vendors, and that's another difference in the insurance industry, as in most other regulated industries: unless you are at the very top of the supply chain, you are constantly audited by your customers in addition to the traditional audits conducted by industry regulators and independent audit firms. Those audit requirements, in turn, naturally flow down to the other providers in your supply chain.
Advice for Communicating and Working with Organizational Leadership
It’s important to note that I came up, especially during my time in the military, defense, and aerospace industries, in a profession that attempted to drive security improvements using FUD (fear, uncertainty, and doubt), where you’re essentially scaring people into doing the right thing. I’ve found that rarely works well and is not a sustainable approach.
In my more recent roles in the insurance industry, my team and I function as risk managers, and we really do view one of our primary roles as illuminating risk and communicating it to the appropriate levels of the organization. Most of the companies I've worked at are mid-sized, which makes it easier to implement a senior executive oversight or steering committee, and I leverage them a lot with great success. That approach may not work quite as well at very large companies, but the concept is sound.
By getting this steering committee to agree to certain security- and privacy-related objectives, standards, or metrics that are important to them, you can then use their involvement to help accomplish your goals. It also shifts responsibility for executing those changes down the operational business path rather than attempting to force it because “the security team said we must do it.”
I have also always tended to follow an 80/20 rule for addressing risks. It's not that I don't care about the 20 percent, but I've seen too many cases where my peers set out to boil the ocean, and that just doesn't work. Worse yet, you can seriously damage your credibility with business executives if they perceive your goals as unrealistic. It's much better to pick off chunks incrementally, then use metrics to show success over time.
Application Security: Three Critical Components to Ensuring Security
1. Treat the Platform as a Key Asset and Report on its Security Posture to Executives
I've seen most organizations struggle with the application security side, including such basic activities as regular security testing. I'm having less trouble these days, and one of the reasons is that I get business and IT leaders to identify the key platforms used to run the business and then I report on their security posture.
If you have not done this before, I would suggest starting with the top 5 or 10 platforms – or whatever the right number is based on organizational appetite (again so you are not trying to boil the ocean).
In my current organization, business and IT leaders selected and designated approximately 30 platforms as “key platforms” for the operation of our business. This is important because we can then start reporting metrics on the posture of each of those platforms. You can report a wide array of relevant metrics, such as: whether the platform has been tested dynamically or statically; whether it has logging capabilities; whether logs are being shipped to the SIEM; the number of medium- or high-severity vulnerabilities; malware protection; DLP; resiliency; or any other security capabilities that are of interest to your organization. While most of us could easily pick 15-20 metrics we think are important, most organizations cannot process that many in a meaningful way, so I suggest sticking to the 3-6 most important metrics that support your objectives.
This report can be as simple as red, yellow, or green, which is typically what I do, because I think there's a danger in trying to be too precise with things. Sometimes we throw out figures that give the illusion of precision where none exists. For most organizations, red, yellow, and green are good enough as long as the criteria for assigning them are clearly defined and well understood. In my experience, when a metric goes from green to yellow or yellow to red and you're talking to an EVP, they're going to want to know why it changed. More importantly, they're going to want to solve the problem. Even if they decide to accept the risk, the conversation provides important context so that they make informed risk decisions. This approach can also help prepare the organization for financial forecasts for the following year. For example, if something is going to be a big lift, we can plan to do it next year. But just maintaining that communication and reporting is important.
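To make the idea concrete, here is a minimal sketch of how a red/yellow/green posture rating could be derived from a handful of platform metrics. The metric names, thresholds, and platform names are all illustrative assumptions, not a standard; the point is that the criteria are explicit and documented so the ratings are well understood.

```python
# Hypothetical sketch: assign a red/yellow/green posture rating to each
# key platform from a few clearly defined metrics. Metric names and
# cutoff criteria below are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class PlatformMetrics:
    name: str
    tested_last_12_months: bool  # dynamic or static security test performed
    logs_shipped_to_siem: bool
    high_severity_vulns: int


def posture(m: PlatformMetrics) -> str:
    """Return 'red', 'yellow', or 'green' using explicit, documented criteria."""
    # Untested platforms or open high-severity findings are an immediate red.
    if m.high_severity_vulns > 0 or not m.tested_last_12_months:
        return "red"
    # Tested and clean, but logging gaps remain: yellow.
    if not m.logs_shipped_to_siem:
        return "yellow"
    return "green"


# Example report across a few hypothetical key platforms.
platforms = [
    PlatformMetrics("Policy Admin", True, True, 0),
    PlatformMetrics("Claims Portal", True, False, 0),
    PlatformMetrics("Billing", False, True, 2),
]
for p in platforms:
    print(f"{p.name}: {posture(p)}")
# → Policy Admin: green / Claims Portal: yellow / Billing: red
```

In practice, the thresholds would be the ones your steering committee agreed to, and a change in any platform's color is what prompts the conversation described above.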
Regardless of how you express your metrics, it's important that the audience you're communicating this information to has a hand in crafting what those metrics measure. Each executive has their own level of understanding and their own hot-button issues that you should be measuring. For example, about half the things I measure or report on are things the executives explicitly indicated they wanted to see. This generates a lot of buy-in and adds value to what you're reporting.
2. Perform Rigorous Testing
Over the past 10-15 years, I’ve been advocating the use of both dynamic and static application security testing. Widespread adoption has been slow, but I have seen a marked increase in recent years and that’s beginning to bear fruit. For example, I have an application about to go live that the appdev team knew would go through both static and dynamic testing and the results were some of the best I’ve seen. It was clear that the appdev team really made a conscious effort to minimize any identified flaws and the result was a much more secure, stable platform.
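One common way to make static and dynamic testing routine, rather than optional, is to gate releases on scan results. The sketch below assumes scan findings have been exported to a simple list of records with a "severity" field; that format, the `gate` function, and the zero-high-findings policy are all assumptions for illustration, not a specific tool's output.

```python
# Illustrative release gate: block a build when static/dynamic scan
# findings include more high-severity issues than policy allows.
# The findings format (list of dicts with a "severity" key) is assumed.

def gate(findings: list[dict], max_high: int = 0) -> bool:
    """Return True if the release may proceed under the policy."""
    high = sum(1 for f in findings if f.get("severity") == "high")
    return high <= max_high


# Example: two findings, one of them high severity, so the gate fails.
sample = [
    {"id": "A1", "severity": "high"},
    {"id": "A2", "severity": "low"},
]
print("release allowed" if gate(sample) else "release blocked")
# → release blocked
```

When development teams know in advance that this gate exists, as in the go-live example above, the incentive shifts toward fixing flaws before they ever reach the scan.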
3. Educate Developers About Secure Coding Practices
Another component of appsec is making sure developers are at least aware of secure coding practices. Certainly, if they're web developers and they don't know OWASP, then you need to look for another developer. It's that simple. However, my team has an important role in identifying appropriate training, and leadership has a role in providing opportunities for developers to participate in it. Incidentally, metrics around developer training are another useful tool for driving improvements, and they can generally be correlated with reductions in coding flaws and vulnerabilities that make it into production systems.