
Common Compliance Hurdles Part 2: Non-compliant Applications

In this second installment in a series on common PCI compliance challenges, I address non-compliant payment applications. Such applications are nearly ubiquitous in the cardholder data environments of smaller merchants (and even some larger ones). However, merchants that store cardholder data are rarely able to attain a compliant state when using an application that has not been validated against PCI standards (either the older Payment Application Best Practices, or PABP, or the newer Payment Application Data Security Standard, or PA-DSS). In particular, compliance with much of PCI DSS Requirement 3, which deals with protection of stored cardholder data, is difficult or even impossible for these businesses, in many cases because of their payment application(s). Typically, such merchants have three options: migrate to a validated solution, encourage the vendor of the current application to have it validated, or implement the required controls themselves. Because these applications pose such a high risk to cardholder data, Visa has mandated that all merchants use validated payment applications, with a deadline of July 2010 in the U.S. and Canada and July 2012 for other regions. In the meantime, there are several things a merchant can do to meet DSS requirements, the Visa mandate notwithstanding.

In most situations, the best solution is to replace the payment application with one that has already been validated. While this can be a daunting task, especially for larger or distributed environments, it is typically the solution with the most immediate compliance payoff, especially considering the impending Visa requirement. Chances are good that, if the current application has not been validated, a similar application exists that has been.
By migrating to an application that has already been validated, and configuring systems to the standards outlined in the application's implementation guide, merchants will find compliance with DSS requirements much easier.

If moving to a different solution is simply not feasible, merchants should press their payment application vendor to attain PA-DSS compliance by having the application validated by a PA-QSA firm (see https://www.pcisecuritystandards.org/pdfs/pci_qsa_list.pdf). This process can take considerable time, which would delay the merchant's ability to validate PCI DSS compliance, and modifying the application can be expensive for the vendor. However, vendors of these applications should keep in mind that their customer base needs to attain and maintain compliance, and it will become increasingly difficult to market and sell non-compliant payment solutions (see https://usa.visa.com/merchants/risk_management/cisp_payment_applications.html).

If a payment application vendor cannot meet PA-DSS requirements, merchants may be able to implement a number of controls themselves to meet the PCI DSS requirements. Exactly which controls are needed will vary by application, but key areas include eliminating storage of sensitive authentication data; obfuscating stored primary account numbers through encryption, hashing, or truncation; performing cardholder data discovery on payment systems and servers; adding access controls and logging outside the application; and implementing key management processes and technologies. Essentially, the payment application is treated as an internally developed application, and the merchant is responsible for ensuring all controls are in place to protect cardholder data. In many cases, the cost of implementing these controls will outweigh the cost of changing payment applications.
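Two of the obfuscation techniques mentioned above, truncation and one-way hashing, can be illustrated in a few lines. The sketch below is mine, not an implementation from any vendor: the function names are invented, and in a real deployment the salt/key would have to be generated, stored, and rotated under a formal key-management process, not created inline as shown here.

```python
import hashlib
import hmac
import os

def truncate_pan(pan: str) -> str:
    """Truncate a PAN to first six / last four digits (the most the
    DSS allows to be displayed), masking the digits in between."""
    digits = pan.replace(" ", "").replace("-", "")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

def hash_pan(pan: str, salt: bytes) -> str:
    """One-way, salted (keyed) hash of the full PAN. The salt must be
    protected as carefully as a cryptographic key."""
    digits = pan.replace(" ", "").replace("-", "")
    return hmac.new(salt, digits.encode(), hashlib.sha256).hexdigest()

# Illustrative only: a production salt comes from a key-management system.
salt = os.urandom(32)
print(truncate_pan("4111 1111 1111 1111"))  # 411111******1111
print(hash_pan("4111 1111 1111 1111", salt))
```

A hashed PAN still allows matching transactions to a card (hash the incoming PAN with the same salt and compare) without ever storing the number in a recoverable form.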
When considering these options, merchants should ask an additional question: "Do I actually need to store cardholder data?" In many cases, smaller merchants are forced to complete the lengthy Self-Assessment Questionnaire D because a single application or point-of-sale system stores credit card data. These merchants should note that slightly altering their business processes and replacing such a system with one that does not store cardholder data can pay dividends, as their compliance requirements would be drastically reduced.


Is PCI driving the development of information security within healthcare?

I like to watch industries evolve in how they deal with information security. It was interesting to watch retail evolve as PCI became more organized. The PCI Council put together the DSS, with dates and penalties for breaches and non-compliance, and that drove significant change. It appears that a similar major change is starting to take place within healthcare. We have begun to see a proactive shift that incorporates compliance with HIPAA, an understanding of risk, and the development of security programs. As I've discussed in the past, the healthcare industry is significantly behind in dealing with IT-related risk.

For an industry to change its approach to information security and risk, its culture needs to evolve. In my opinion, risk is the most effective driver of this change. If the risk is great enough, industries develop a mature understanding of risk management (of which security is a subset). The military and banking have tangible risks tied directly to their IT assets; therefore, they understand risk. The problem is that this mature understanding of risk doesn't exist in most other industries. Without risk driving a security program, industries must rely on other drivers, usually compliance (itself a subset of risk management).

What we're seeing within healthcare is that PCI is driving the maturation of risk management. For example, one key issue that keeps coming up, especially in hospitals, is the belief that PHI is more important than credit card information. Yet it is PCI compliance that has forced organizations to think systematically about risk. How do you reconcile the budget for PCI compliance with the lack of budget for PHI-related security? In addition, PCI has forced multiple groups (including IT, security, audit, and finance) to work together to deal with compliance and, ultimately, information security issues. Many of these same groups are now being asked to deal with HITECH / ARRA / updated HIPAA.
With the new interpretations of HIPAA, the new regulations, and these new sets of eyes, these groups are beginning to understand that they are not compliant with HIPAA, that they have significant risk exposure, and that they need to develop programs to deal with that exposure. From what we are seeing with many of our healthcare clients, the combination of more pervasive PCI awareness and new healthcare-specific regulations is creating a more mature understanding of risk and driving a new focus on developing successful information security programs. Let's hope this trend continues.


The Systems That Time Forgot

Do you know about ALL of the systems on your network? If so, you're in the minority. Identifying and actively managing all the systems on a network is not an easy task. Environments are constantly changing, asset owners come and go, and without a good asset management process, systems get lost in the shuffle. Unfortunately, many organizations don't take asset management into account when developing their vulnerability management programs. As a result, systems are left unmanaged, misconfigured, and unpatched. As you might guess, malware, attackers, and penetration testers are often able to use these systems as entry points and leverage their trust relationships to gain administrative control over the network. It's hard to protect systems you don't know are there, but there are some controls that can be put into place to reduce the risk they present to your environment.

What systems are typically affected?

Almost any system can drop off the radar, but usually it isn't the high-profile systems that demand constant attention, like domain controllers or critical application servers. In my experience, the systems most likely to fall through the cracks are test servers and legacy application servers.

How can I get started doing asset management?

Good asset management starts with knowing what systems are on your network in the first place, so choosing an asset management solution and performing periodic device discovery on known network segments are good first steps. Once systems are identified, they can be entered into the inventory and assigned an owner, an asset value, and other data that can be used to prioritize IT tasks like vulnerability remediation. Below are some basic steps that can help network administrators get started:

  1. Choose an asset management solution for your environment. There are quite a few commercial and open-source options available. Many of them can perform dynamic device identification, but some can only find certain types of systems. Regardless, I recommend doing some independent research to find the one that works best for your environment and budget. Like most things, you get what you pay for, but below are a few tools that I see being used on a fairly regular basis:
     – Freeware tool: OpenNMS
     – Freeware tool: Network Asset Manager
     – Commercial tool: Orion IP Address Manager
     – Commercial tool: Altiris Asset Management Suite
  2. Perform periodic asset discovery on known network segments. The goal of asset discovery is to identify every device on the network with an IP address. That includes everything from Windows mail servers to loading dock hand scanners: if it's on the network, you want to know about it. As I mentioned before, many of the commercial asset management tools have this functionality built in. However, not everyone has pockets lined with gold, so for the biggest bang for your buck I recommend Nmap. It's incredibly flexible and comes at a great price (free). Remember, though, that ping scans by themselves aren't enough. Many systems and networks are configured to drop ICMP requests, so it's a good idea to perform some TCP and UDP scans as well.
  3. Assign asset ownership, and transfer it as necessary. An asset owner's role is very similar to that of a parent: they are responsible for the care and protection of their systems. As a result, they perform such nurturing tasks as applying missing patches, implementing secure configurations, and maintaining up-to-date virus definitions. If no asset owner is assigned to perform these duties for a system, it will eventually fall to an emerging vulnerability.
  4. Assign an asset value to each system based on the potential business impact. Typically this value is loosely based on the amount of revenue generated by the system, the type of information stored on it, and its logical placement on the network. Keep in mind, however, that the asset value doesn't have to be monetary; it could simply be a number from 1 through 5. The point is to have a method for prioritizing IT efforts.
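Steps 2 through 4 above can be sketched as a small Python inventory script. This is a minimal illustration, not a substitute for Nmap or a commercial asset manager: the Asset record, the discover function, and the port list are all my own invented examples, and a TCP connect probe of a handful of ports will miss devices that only answer on other services.

```python
import socket
from dataclasses import dataclass, field

@dataclass
class Asset:
    """One inventory record: address plus the owner and value
    fields described in steps 3 and 4."""
    address: str
    owner: str = "unassigned"
    value: int = 1  # 1 (low impact) through 5 (critical)
    open_ports: list = field(default_factory=list)

def tcp_probe(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds.
    Used instead of ping, since many hosts drop ICMP."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def discover(hosts, ports=(22, 80, 135, 443, 445)):
    """Probe a few common service ports on each candidate address;
    any response means a live device that belongs in the inventory."""
    inventory = []
    for host in hosts:
        open_ports = [p for p in ports if tcp_probe(host, p)]
        if open_ports:
            inventory.append(Asset(address=host, open_ports=open_ports))
    return inventory
```

Run against a known segment (for example, `discover(["192.168.1.%d" % i for i in range(1, 255)])`), the result is a list of live devices ready to be assigned owners and asset values.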

How can I reduce the risk associated with unmanaged systems on my network?

This is a common question, and below are some common answers. Keep in mind that not all of them are appropriate for every environment.

  1. Develop secure configuration baselines for each device type and operating system. This helps ensure that if a system does drop off the radar, it is at least less likely to be susceptible to common attacks. Enforcing password complexity and account lockouts to help prevent brute-force attacks is a basic example.
  2. Conduct periodic audits on a sample of system configurations to verify compliance. This will help identify procedural gaps that contribute to insecure configurations.
  3. Perform periodic vulnerability scanning of systems. This will help identify common vulnerabilities on known systems so they can be remediated before they are exploited.
  4. Perform periodic patch scans and updates (OS and third-party applications). Keeping software current closes many of the holes that attackers target.
  5. Implement Network Access Control (NAC). Choose a NAC solution that is capable of dynamic device detection and can quarantine systems that do not meet minimum security requirements.
  6. Implement port security. Ensure vendors and employees can't easily connect to the network by disabling unused network ports.
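The baseline and audit ideas in items 1 and 2 can be automated with even a trivial script. The sketch below is illustrative only: BASELINE and audit_config are names I've made up, the settings and values are examples rather than recommendations, and the exact-match comparison keeps things simple where a real audit would treat numeric settings as minimums or maximums.

```python
# Hypothetical baseline: setting name -> required value (example values only).
BASELINE = {
    "min_password_length": 12,
    "account_lockout_threshold": 5,
    "password_complexity_enabled": True,
}

def audit_config(actual: dict) -> list:
    """Compare a system's reported settings against the baseline and
    return a list of findings; an empty list means the sample passed."""
    findings = []
    for setting, required in BASELINE.items():
        value = actual.get(setting)
        if value != required:
            findings.append(f"{setting}: expected {required!r}, found {value!r}")
    return findings
```

Feeding this a sample of collected configurations each quarter turns item 2 into a repeatable report, and a growing findings list is exactly the procedural-gap signal the audit is meant to surface.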

Final Thought

It is very common for penetration testers to leverage unmanaged systems as entry points that lead to complete network takeovers. The reality is that there are unmanaged systems on every large network. So the bad news is that your cardholder data, protected health information, Personally Identifiable Information (PII), and the rest of your sensitive data are constantly being put at risk by these forgotten systems. The good news is that you have options, and hopefully the information in this post gives you a place to start.
