
What’s New in PCI DSS 2.0 – No Surprise That There Are No Surprises

Some of the team from NetSPI spent the week in sunny Orlando at the 2010 PCI North American Community Meeting. As most are aware, this year’s meeting was particularly significant because a new version of the Data Security Standard, 2.0, has been released and becomes effective in January 2011. The new standard is so advanced that it jumped from 1.2.1 straight to 2.0, incorporating 0.7.9’s worth of changes in a single revision(!). The last few days’ sessions were a great opportunity to review the changes with the SSC and card brand representatives, catch up with others in the industry, and dispel rumors about the new DSS version (there will be no Requirement 13 mandating the use of ninjas to protect cardholder data, and Ouija boards cannot be used for wireless access point discovery). It should also be noted (if my wife is reading this) that there was absolutely no beer consumed at Kook’s Sports Bar and all discussions were reasoned, civil discourses that ended promptly at 9:00 PM to allow for a full night’s sleep.

As far as the changes to the DSS go, it should come as no surprise that there were not many surprises to be found. As was pointed out several times throughout the sessions, the DSS is a mature framework with a rising adoption rate throughout the world; major changes could have serious financial and operational repercussions for merchants and service providers who have already incorporated the DSS into their environments. Keeping that in mind, the intent of v2.0 is to provide additional guidance and clarification based on the (apparently) thousands of comments the SSC received in response to its request for feedback, and my first impression is that it succeeds in that respect. Below are some of the highlights I picked up from the meeting and the SSC-supplied documents, in no particular order:

  • Clarification that the PAN is the primary element determining whether the DSS applies to a merchant/service provider environment, and that the PAN is what defines ‘cardholder data’
  • Sampling requirements will be more detailed, and will require more justification as to why the sampling methodology used for an assessment is considered sufficient
  • There are clarifications for issuers that have a business need to store sensitive authentication data (SAD), which should provide more specific guidelines for retention and protection of SAD
  • Additional requirements to secure PAN if hashing and truncation are used in the same environment, to reduce the possibility of malicious users reconstructing full account numbers (a quick sketch of why this matters follows this list)
  • At this point, an automated or DLP-type solution is NOT required for data discovery and scoping of the cardholder data environment, though tools of this nature can/should be used where appropriate
  • The definition of “system components” now explicitly includes virtual systems
  • Requirement 6 has been overhauled to merge internal and web application requirements, and “industry best practices” no longer means just “OWASP”; it now also points to SANS, CERT, CWE, etc.
  • News flash- two passwords are not considered “2-factor”. Glad we got that one clarified.
  • Requirement 11 allows physical site inspection for rogue AP discovery where appropriate. I can’t see this working well in a large physical environment, but it may work for mom-and-pop retailers who can see every wire on their router. I can’t wait for my first opportunity to write a comment for 11.1 that includes “Bob, the IT guy, climbs through air ducts and drop ceilings on a quarterly basis to identify potential rogue APs”
  • IDS/IPS must be included at the perimeter of the cardholder data environment and ‘key points’, as opposed to monitoring all traffic in an environment
  • There was some discussion around a new SAQ C that would be applicable to ‘virtual terminal’ environments. This is a work in progress, and I didn’t hear an official release date
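
On the hashing/truncation item above, here is a minimal sketch (Python, with made-up values) of why storing an unsalted hash alongside a truncated PAN is risky: the truncated value gives away the first six and last four digits, leaving only six digits to brute-force against the hash.

```python
import hashlib
from itertools import product

# Hypothetical values: the truncated PAN exposes the first six (BIN) and
# last four digits, so only the six digits in the middle are unknown.
first_six, last_four = "411111", "1111"
leaked_hash = hashlib.sha256(b"4111111234561111").hexdigest()  # unsalted hash of the full PAN

def recover_pan(first_six, last_four, target_hash):
    # At most 10**6 candidates, which is trivial to exhaust on commodity hardware.
    for middle in product("0123456789", repeat=6):
        candidate = first_six + "".join(middle) + last_four
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

print(recover_pan(first_six, last_four, leaked_hash))  # prints the reconstructed 16-digit PAN
```

Presumably that is exactly the scenario the additional controls in v2.0 are meant to address when hashed and truncated versions of the same PAN live in the same environment.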

There are many other tweaks not included above, but no real game-changers in my opinion. I know not everyone will be happy with all of the revisions, but the DSS is by its nature a compromise between global applicability for all types of environments and nuts-and-bolts implementation. There will still be requirements that have QSAs and clients scratching their heads, but my impression is that many of the clarifications are long overdue and should make many of the requirements easier to interpret, test, and enforce. Ninjas will just have to wait for version 3; be sure to get your feedback in early.


Performing Code Reviews to PCI Requirements

We were asked by a customer about performing code review based on the PCI requirements. The questions they asked were:

  • Is there a checklist that exists that covers all of the PCI requirements?
  • Are there requirements such as not storing PAN unencrypted?
  • What about not storing full track data or other restricted data?
  • Are there considerations outside of OWASP?
  • Can you recommend a simple resource for all PCI-related requirements?

At this point in time, there does not seem to be a single reference that answers all of these questions. Since there is no definitive source, this post covers some of the PCI requirements as they relate to code reviews. A code review should include reviewing all of the code against the OWASP Top 10 Web Application Security Risks for 2010; the OWASP Top 10 covers most of the code-review-related PCI requirements and answers most, if not all, of the questions above. The OWASP Top 10 Web Application Security Risks for 2010 are:

  1. Injection flaws – such as SQL, OS, and LDAP injection (a short code-review example follows this list)
  2. Cross-Site Scripting (XSS)
  3. Broken Authentication and Session Management
  4. Insecure Direct Object References – exposing a file, directory or database key
  5. Cross-Site Request Forgery (CSRF)
  6. Security Misconfiguration
  7. Insecure Cryptographic Storage
  8. Failure to Restrict URL Access
  9. Insufficient Transport Layer Protection
  10. Unvalidated Redirects and Forwards
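
Since this post is about code review, here is a short, hypothetical example of what a reviewer flags for risk number 1 and how it is typically fixed, sketched in Python with the standard sqlite3 module (the table and query are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, card_last4 TEXT)")
conn.execute("INSERT INTO orders VALUES (42, '1111')")

def lookup_vulnerable(order_id: str):
    # Finding: user-supplied input concatenated into the SQL string (risk #1).
    return conn.execute(
        "SELECT card_last4 FROM orders WHERE id = " + order_id
    ).fetchall()

def lookup_fixed(order_id: str):
    # Fix: parameterized query; the driver handles quoting and type handling.
    return conn.execute(
        "SELECT card_last4 FROM orders WHERE id = ?", (order_id,)
    ).fetchall()

print(lookup_fixed("42"))             # [('1111',)]
print(lookup_vulnerable("0 OR 1=1"))  # returns every row: the injection in action
```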

Risk number 7 specifically covers the question about storing PANs unencrypted. Storing track data is partially covered by risk number 7 as well: track data is sensitive and must be encrypted while stored, but it may only be stored pre-authorization. Once the transaction has been authorized, the data must be securely deleted, and that part is not covered by the OWASP Top 10. Here is a complete list of the PCI requirements as they relate to the OWASP Top 10 (see the list above):

Requirement 1: Install and maintain a firewall configuration to protect cardholder data – this requirement is not typically covered in a code review

Requirement 2: Do not use vendor-supplied defaults for system passwords and other security parameters – this requirement is covered under risk number 6

Requirement 3: Protect stored cardholder data – this requirement is covered under risk number 7 (a short sketch of the expected PAN handling appears at the end of this post)

Requirement 4: Encrypt transmission of cardholder data across open, public networks – this requirement is covered under risk number 9

Requirement 5: Use and regularly update anti-virus software or programs – this requirement is not typically covered in a code review

Requirement 6: Develop and maintain secure systems and applications – this is fully encompassed in the OWASP Top 10 (both 2007 and 2010 versions)

Requirement 7: Restrict access to cardholder data by business need to know – this requirement is partially covered by risk number 4

Requirement 8: Assign a unique ID to each person with computer access – this requirement is not typically covered in a code review

Requirement 9: Restrict physical access to cardholder data – this requirement is not typically covered in a code review

Requirement 10: Track and monitor all access to network resources and cardholder data – this requirement is not covered in the OWASP Top 10

Requirement 11: Regularly test security systems and processes – this requirement is not typically covered in a code review

Requirement 12: Maintain a policy that addresses information security for employees and contractors – this requirement is not typically covered in a code review

Start your code review checklist with the OWASP Code Review Guide and add to it for those requirements that are not covered by that guide. This includes securely deleting sensitive data (PANs, track data, keys, etc.) and application logging. Another place to start, or to append to your checklist if you develop .NET applications, is Microsoft’s Index of Checklists.
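
For the application logging item above, one approach a reviewer can look for (or suggest) is scrubbing anything that resembles a PAN before it reaches the logs. Below is a rough Python sketch of that idea using the standard logging module; the logger name and regex are illustrative, not prescriptive:

```python
import logging
import re

# Redact anything that looks like a 13-16 digit card number before it is written out.
PAN_PATTERN = re.compile(r"\b\d{13,16}\b")

class PanMaskingFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = PAN_PATTERN.sub("[PAN REDACTED]", str(record.msg))
        return True

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("payments")
logger.addFilter(PanMaskingFilter())

logger.info("authorization declined for 4111111111111111")
# logs: authorization declined for [PAN REDACTED]
```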
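
And for Requirement 3 / risk number 7 (the sketch promised above), this is roughly the pattern a reviewer hopes to find for stored PAN: encrypted at rest and masked for display. It is a minimal sketch using the third-party cryptography package; real key management (rotation, split knowledge, storage outside the codebase) is deliberately out of scope here:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, pulled from a key-management system
cipher = Fernet(key)

def store_pan(pan: str) -> bytes:
    # Encrypt before the value ever hits the database or a file.
    return cipher.encrypt(pan.encode())

def display_pan(pan: str) -> str:
    # Show at most first six / last four; mask everything in between.
    return pan[:6] + "*" * (len(pan) - 10) + pan[-4:]

stored = store_pan("4111111111111111")
print(display_pan("4111111111111111"))   # 411111******1111
print(cipher.decrypt(stored).decode())   # decrypt only where a documented business need exists
```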


Fuzzing Parameters in CSRF Resistant Applications with Burp Proxy

Since its formal recognition by the security community in 2007 on the OWASP Top Ten list, Cross Site Request Forgery (CSRF) has stepped out of the shadows and joined the ranks of vulnerability all-stars like Cross Site Scripting (XSS) and SQL injection. As a result, there has been a big push to better understand how CSRF works, how to prevent it, and how to perform traditional attacks against web applications that attempt to protect against it. Below I’ll provide a high-level overview of the first two topics and a step-by-step walkthrough of the third using Burp Proxy.

What is CSRF and how does it work?

In short, CSRF attacks allow the execution of existing web application functionality without the user’s knowledge if the user has already signed into the web application when the attack is initiated. The end result can vary greatly depending on the nature of the application. However, common examples include changing the user’s password, obtaining access to sensitive data, and transferring money out of the user’s bank account. For some real-world examples, there is a nice DEF CON 17 presentation available at https://www.defcon.org/images/defcon-17/dc-17-presentations/defcon-17-bailey-mcree-csrf.pdf.
An important thing to be aware of is that CSRF attacks can be initiated from a variety of vectors. Common attack vectors include image tags, links in phishing emails, and JavaScript code embedded in legitimate websites using pre-existing XSS flaws. All of these attack vectors may seem different from each other on the surface, but in reality they all end up sending a simple HTTP POST or GET request that takes actions in the target web application. For some good generic examples I suggest visiting https://www.cgisecurity.com/csrf-faq.html. The site provides some nice CSRF code examples that use JavaScript, image tags, iframes, URL parameters, and AJAX POST requests. The article also makes the good point that CSRF attacks are not limited to browsers; all kinds of web-based technologies are affected, such as MS Office software, Flash files, and web services. Another good thing to note is that each CSRF HTTP request used in an attack has to be custom-made for the application (or group of applications) being targeted. Common targets include online banking applications and social networking websites, because they often have large user bases that are almost always signed in while doing other things on the internet.

How can we “fix” CSRF Issues?

The blog title includes the word “resistant” instead of “protected” for a reason: there is no 100% perfect fix for CSRF (similar to session hijacking). However, submitting non-predictable challenge tokens with each HTTP request can prevent most CSRF attacks, as long as the token is associated with the user’s session (and no XSS or other serious issues exist within the application). These tokens are also commonly referred to as “anti-CSRF” or “page” tokens. The basic process works as follows (a minimal server-side sketch follows the list):

  1. The user signs into the web application via their web browser.
  2. A random non-predictable anti-CSRF token is generated on the server side and stored as a session variable.
  3. The anti-CSRF token is then sent to the user’s browser in the server’s response as a hidden form field, HTTP GET parameter or cookie.
  4. The next time the user submits a request to the server the anti-CSRF token is sent with it.
  5. On the server, the anti-CSRF token sent from the user’s browser is compared to the server side anti-CSRF token.
  6. If the tokens match, a new token is generated and the process is repeated for each request.
  7. If the tokens don’t match, then the request fails. In many applications if there is a page token failure the user’s session is automatically terminated and the user is forced to sign back in.
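
Here is the minimal sketch mentioned above: a framework-agnostic Python version of steps 2 through 7. The session dictionary is just a stand-in for whatever server-side session store the application actually uses.

```python
import hmac
import secrets

def issue_token(session: dict) -> str:
    token = secrets.token_urlsafe(32)   # random, non-predictable (step 2)
    session["csrf_token"] = token       # stored server side, tied to the session
    return token                        # sent to the browser, e.g. as a hidden form field (step 3)

def validate_token(session: dict, submitted: str) -> bool:
    expected = session.get("csrf_token")
    if expected is None or not hmac.compare_digest(expected, submitted):
        return False                    # mismatch: fail the request, or kill the session (step 7)
    issue_token(session)                # rotate the token for the next request (step 6)
    return True

session = {}                            # stand-in for a real server-side session store
form_value = issue_token(session)       # token rendered into the page
print(validate_token(session, form_value))  # True
print(validate_token(session, "forged"))    # False
```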

Other methods that are considered less user friendly for thwarting CSRF attacks include re-authenticating with every request or using a CAPTCHA before providing access to sensitive data or functionality. For a more detailed overview of how to implement anti-CSRF tokens, visit either of the following sites:

  1. https://shiflett.org/articles/foiling-cross-site-attacks
  2. https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet

How can fuzzing be conducted against sites that use anti-CSRF tokens?

It’s usually considered a good thing when applications use anti-CSRF tokens. However, the tokens can sometimes make performing security assessments a little more time consuming. Not all commercial web application security scanners support this functionality, and the ones that do can get expensive. I know not every security enthusiast has an unlimited budget, so for those who don’t want to spend a ton of money I suggest using Burp. Burp has a lot of native fuzzing payloads, and supports custom payloads that can be used to check for things like SQL injection, XSS, and weak passwords. More importantly, Burp supports recursive grepping and fuzzing more than one parameter at once, which makes it an ideal candidate for testing sites with anti-CSRF tokens. Below I’ve outlined a basic process for doing that.

Setup a Burp local HTTP proxy

  1. Download and install the Burp Suite. Fuzzing with the commercial version is much faster than with the free version, but either will do.
  2. Run Burp, but configure it so the “intercept” proxy setting is set to off.
  3. Configure Internet Explorer or Mozilla Firefox to use the local Burp proxy.

Target anti-CSRF tokens and other parameters

  1. Log into the web application through the browser.
  2. Navigate to any page in the application.
  3. In the Burp proxy history send the last request to the intruder tab.
  4. Navigate to the “Positions” sub tab.
  5. Choose “cluster bomb” from the attack type drop down list.
  6. Click the “Clear” button, then identify the anti-CSRF parameter (it could be a cookie, a GET parameter, or a POST parameter/hidden input field value).
  7. Once the anti-CSRF token has been identified, click at the beginning of the anti-CSRF token value and click the “Add” button. This will mark where to start the fuzzing.
  8. Click at the end of the anti-CSRF token value and click the “Add” button again. This will mark where to stop the fuzzing.
  9. Mark any additional parameters you wish to fuzz by following the same process.

Setup recursive grep payload (to facilitate valid HTTP requests with anti-CSRF tokens)

  1. Navigate to the “Options” sub tab of the “Intruder” tab and scroll down to the “grep” section.
  2. In the “grep” section, select the “extract” tab and clear all other values from the list by pressing the “delete” button as many times as needed.
  3. Select “simple pattern match” radio button.
  4. Enter the anti-CSRF parameter name into the text box and click the “add” button, making sure to include the “=” character if it’s used in the request. For example: “mytoken=”.
  5. In the “stop capturing at” text field, enter the character that appears immediately after the anti-CSRF token value.
  6. Next, manually copy the value of the anti-CSRF token from the last server response to the clipboard. It can be found in the “history” sub tab of the “proxy” tab.
  7. Navigate to the “Payloads” sub tab of the “Intruder” tab and choose the payload set for the anti-CSRF token. The payloads are numbered sequentially in the order the positions appear in the “Positions” tab.
  8. Choose the “recursive grep” payload.
  9. Select the anti-CSRF token from the list on the left by clicking it.
  10. Paste the current token from the clipboard into the “first payload” text box (a small script illustrating the same extract-and-resubmit idea follows this list).
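
For reference, the script below shows the same extract-and-resubmit loop that Burp’s recursive grep payload automates, which can help make sense of what the settings above are doing. It is a sketch using the third-party requests library; the URL, parameter names, token pattern, and payloads are placeholders.

```python
import re
import requests

# Placeholders: point these at the application and token field you are actually testing.
TARGET = "https://example.com/app/search"
TOKEN_RE = re.compile(r'name="mytoken" value="([^"]+)"')

session = requests.Session()
payloads = ["test", "' OR '1'='1", "<script>alert(1)</script>"]

# Prime the loop with whatever token the application handed out last.
token = TOKEN_RE.search(session.get(TARGET).text).group(1)

for payload in payloads:
    resp = session.post(TARGET, data={"q": payload, "mytoken": token})
    print(payload, resp.status_code, len(resp.text))
    token = TOKEN_RE.search(resp.text).group(1)   # grep the fresh token for the next request
```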

Setup additional fuzzing payloads

  1. Choose the next payload set.
  2. Choose the desired payload from the drop down.
  3. Configure the appropriate options.
  4. Start fuzzing by opening the “Intruder” menu in the menu bar and selecting “start”.
  5. Ninja-fy.

Conclusion

As time goes on, the IT community is getting a better understanding of what CSRF is, how the attacks can be delivered, and how to protect against them. However, with these new protections come new challenges for application security assessors that require a more advanced skill set. Eventually the tools will catch up, but in the meantime make sure that your application testers have a strong understanding of how to assess sites that protect against CSRF attacks.

Reference Links

  1. https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
  2. https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet
  3. https://www.cgisecurity.com/csrf-faq.html
  4. https://shiflett.org/articles/foiling-cross-site-attacks
  5. https://www.defcon.org/images/defcon-17/dc-17-presentations/defcon-17-bailey-mcree-csrf.pdf
  6. https://portswigger.net/

Does DLP Help Solve HIPAA Concerns?

Data Loss Prevention (DLP) technology is one of the most promising technologies for automatically enforcing compliance with sensitive data handling practices, and it is quickly gaining popularity and adoption across many industries. Does this mean that DLP is the answer to all sensitive information handling concerns? In short, I am sorry to say that while DLP offers excellent solutions for a limited range of data, such as payment cards, social security numbers, and other easily identifiable data, it does not offer great solutions for HIPAA compliance.

Most recently, the case of an employee being fired from Oakwood Hospital in Michigan has once again highlighted the utter impossibility of automatically enforcing HIPAA compliance. In this case, Cheryl James made some comments on Facebook which were interpreted as a violation of HIPAA requirements. This was not a case of medical records being leaked, but rather a comment made by a medical professional. More information about the incident is available here: https://www.fiercehealthcare.com/story/hospital-worker-fired-over-facebook-comments-about-patient/2010-08-01

More and more people are using websites such as Facebook as part of their everyday conversations with friends and family. However, a comment made to a spouse in the privacy of one’s home is clearly not the same as posting that comment on Facebook. Since this is not the first time a comment made on a social networking website has landed a hospital employee in trouble, it’s clear that it will take some time before everyone fully realizes the risks of communicating sensitive data on social networking websites. Naturally, the question that comes to mind is whether there is anything hospitals can do to prevent such incidents in the future.

The advantage of DLP technology is that if you can define a pattern or structure by which the data can be automatically identified as sensitive, DLP will be able to prevent most inappropriate transfers of that data, including posting it on social websites. With regard to healthcare, however, the data that falls within the definition of PHI is very diverse and does not lend itself to automated identification. Therefore, techniques for reducing the risk of inappropriate disclosure must fall back on low-tech controls such as training and blocking high-risk websites like Facebook for all employees.
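
To make the contrast concrete, here is a rough Python sketch of the kind of pattern matching DLP products rely on for card numbers (a digit pattern plus a Luhn check). There is no comparable pattern that reliably flags a free-text comment about a patient as PHI.

```python
import re

# Candidate card numbers: 13-16 digits, optionally separated by spaces or hyphens.
CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    # Standard Luhn checksum: double every second digit from the right.
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def looks_like_pan(text: str) -> bool:
    for match in CANDIDATE.finditer(text):
        digits = re.sub(r"\D", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            return True
    return False

print(looks_like_pan("card 4111 1111 1111 1111 was declined"))        # True
print(looks_like_pan("the patient in room 12 asked about her labs"))  # False
```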


Security in the Cloud

Much fuss has been made over security concerns relating to cloud computing. Just as cloud computing proponents tout the efficiency, scalability, and ease of use that come from leveraging the capabilities of the cloud, detractors highlight the dangers inherent in corporate data being stored in an unknown location, on an unknown number of systems, and protected by unvalidated controls. At the end of the day both groups have fair points, but it is important to recognize that cloud computing is here to stay and, despite the unknowns, many organizations will look to the cloud as a way to increase efficiency and reduce costs. How, then, can organizations ensure that critical data and processes are protected while still realizing the benefits of cloud computing?

It is critical that companies determine an appropriate approach to, and use for, the cloud. In some cases, organizations may have data that is considered so confidential or critical that cost savings are not worth the risk of data compromise or loss. In order to identify such circumstances, a risk analysis that enumerates threats, vulnerabilities, and potential impacts should be performed. A key criterion for a proper assessment of risk is the accurate classification of data; ensure that data is classified appropriately so that particularly sensitive or critical information is not accidentally put in the cloud. Additionally, compliance requirements should be examined to ensure that any changes do not negatively impact compliance status.

Once the risk analysis has been completed, certain mitigating controls may need to be implemented to account for unknowns in the cloud infrastructure. For example, controls that would typically reside in lower tiers may need to be implemented in applications. After implementing and assessing these modifications, an initial migration to the cloud can begin. Keep in mind, though, that it is also important to develop a process for assessing new applications and data before they are moved to the cloud, as well as for periodically reassessing systems and information that were previously deemed cloud-appropriate; this is fundamental to ensuring that cloud-related risks are considered on an ongoing basis.

While there are certainly challenges facing organizations looking to leverage cloud computing technologies, these challenges are not insurmountable. With a well-devised approach, including assessment and mitigation of cloud-specific risks, organizations can realize the benefits of cloud computing while still protecting critical data assets.
