
IP Traceback: Has Its Time Arrived?

In simple terms, IP traceback allows for the reliable identification of the source of IP traffic, despite techniques such as IP spoofing. While there are numerous methods for achieving this goal, they all have one thing in common: not one of them has actually been implemented in commercial networking equipment. Maybe its time has finally come.

The advantage of such a capability lies in determining the sources of one-way attacks, that is, attacks that can be conducted using spoofed source IP addresses. Unlike Reverse Path Forwarding, which can prevent address spoofing only in limited environments, IP traceback essentially allows packets to be postmarked with the true source IP address. Denial-of-service (DoS) attacks are the most common type of malicious traffic that falls into this category. Although they don’t get the visibility that they used to, DoS attacks still occur with astonishing frequency. While there are other methods for determining the source of spoofed traffic, they are typically time-consuming and require the involvement of numerous upstream parties. IP traceback could allow a network administrator to determine the source of such malicious traffic directly.

In a grad school paper I wrote a few years ago, I argued that “without the support of major networking equipment vendors or ISPs, and barring a major attack with far-reaching consequences, there is little hope for IP traceback in the near future.” Today, the question is when we reach the point at which the ability to reliably track the source of malicious IP traffic is deemed important enough to demand a feature such as IP traceback. Such an ability is more important than ever.

At the same time, there is a question of how effective such a solution would be if it were only partially implemented. Getting ISPs in North America and Europe to implement such a feature is a big enough step, but what practical value would IP traceback have if it were not implemented at the sources of much of the world’s malicious traffic: places like eastern Europe, Russia, China, and North Korea? Despite this potential limitation, I believe that there is still a place for IP traceback in our networks. A software-based solution, requiring only firmware or driver updates, would be relatively inexpensive and simple to deploy. At the same time, it would assist network administrators and law enforcement in investigating attacks that use IP spoofing, thereby creating an effective deterrent against such attacks.
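For a flavor of how such postmarking could work, here is a toy sketch of probabilistic packet marking, one of the better-known traceback proposals from the research literature. Everything in it (the Packet class, the forward routine, the marking probability) is a hypothetical illustration rather than a real router API: each router stamps a small fraction of passing packets with its own address, and a victim that collects enough attack packets can reconstruct the true path regardless of what the spoofed source field claims.

```python
import random

MARK_PROBABILITY = 0.04  # on the order of values suggested in the PPM literature

class Packet:
    """Hypothetical stand-in for an IP packet with room for a router mark."""
    def __init__(self, src, dst):
        self.src = src        # claimed (possibly spoofed) source address
        self.dst = dst
        self.mark = None      # last router that marked the packet
        self.distance = 0     # hops traversed since the last mark

def forward(router_addr, pkt):
    """Each router marks a passing packet with small probability."""
    if random.random() < MARK_PROBABILITY:
        pkt.mark = router_addr   # overwrite any earlier mark
        pkt.distance = 0
    else:
        pkt.distance += 1
    return pkt

# Toy example: a packet with a spoofed source crosses three routers.
# Over many packets, the victim sees each router's address often enough
# to rebuild the attack path, no matter what pkt.src claims.
pkt = Packet(src="1.2.3.4", dst="victim.example.net")
for hop in ["10.0.0.1", "10.0.1.1", "10.0.2.1"]:
    pkt = forward(hop, pkt)
print(pkt.mark, pkt.distance)
```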


How Good Are Your Application Security Assessments?

Let’s talk about application vulnerability assessments, penetration testing, and code reviews. How effective they are depends on a number of factors: the education and experience of the testers, the tools used, the restrictions put on the testers, and even the environment in which the testing is done. This post focuses on the education and experience of the testers.

Consider the well-known recent case of the Heartland breach. Robert O. Carr, Chairman and CEO of Heartland Payment Systems, was quoted as saying the following: “In early 2008 we hired a QSA to perform a penetration test which found nothing. On April 30, 2008, we were deemed PCI-compliant” (https://www.infosecurity-us.com/view/4562/qsa-system-is-broken-says-heartland-ceo/).

I wonder if Heartland Payment Systems queried the QSA company about the background of the pen tester. Yes, the company was QSA-certified, but did the person or persons actually doing the penetration test have the education and experience needed to perform it well? Not everyone does. The same goes for application vulnerability assessments and code reviews. Just because you hire a company that sells itself as having experts on staff does not mean you get the top dog, or even the middle dog. You might be getting a puppy. On the other hand, if the company performing the testing uses a team approach, the team’s collective knowledge might be as good as or better than that of the top dog.

Find out who will be performing your tests and get their resumes, or at least ask them about their background. What kind of training and experience do they have in this area? Are they right out of school or do they have at least a couple of years of experience? Does the firm employ a team of specialists? Is their work process mature and well defined?

These are not hard questions to ask or answer. Making this small effort could make a big difference in the effectiveness of your application security assessments, and your organization’s overall information security.


Brand Reciprocity Revoked by Visa and MasterCard: What It Means for Merchants

Brand reciprocity refers to how the card brands acknowledge the different merchant levels of the other card brands. For example, if an organization is a Level 2 Visa merchant but a Level 4 MasterCard merchant (both designations based upon transaction volume), brand reciprocity means that the merchant would be classified as a Level 2 merchant by both brands.

The classification level determines the type of validation required (SAQ or ROC). Of the other participating card brands, only Discover acknowledges brand reciprocity; AMEX and JCB do not.

Brand reciprocity gained increased importance this past summer, when MasterCard announced that Level 2 merchants would have to validate compliance through an onsite audit and a ROC performed by a QSA, and that they would have to do so by the end of 2010. Under brand reciprocity, this requirement meant that if a merchant was, say, a Level 2 Visa merchant (previously validating compliance through a SAQ) and only a Level 3 MasterCard merchant by transaction volume, the merchant would be considered a Level 2 MasterCard merchant and would thus be required to validate compliance through a ROC performed by an outside QSA firm.

With brand reciprocity revoked, we need to look at a merchant’s transaction volumes by individual card brand. By examining those volumes, we can help the merchant determine its merchant level, and the corresponding type of validation required, for each brand. Also, remember that brand reciprocity is still in effect for Visa Canada, which continues to recognize reciprocity within merchant levels.


Internal Penetration Testing: Attacking Systems That Matter

When you are conducting internal penetration tests in large environments, prioritizing attacks can be challenging because of the sheer number of systems and vulnerabilities. Attacks performed during testing are commonly prioritized based on the nature and severity of the vulnerabilities identified, but the effectiveness of that approach can be greatly increased by focusing on the right systems. The goal of this blog entry is to share my thoughts on a few ways to identify those systems.

Penetration tests typically have two primary objectives: find sensitive information and obtain Domain Admin privileges on the network. When you are trying to locate sensitive information on the network, it’s important to put some thought into where data is commonly stored. Some of the most common locations include email servers, TFTP servers, FTP servers, network file shares, and the almighty database server. All of those are good places to look for sensitive information, but I prefer to start with the database servers and work backwards. I also recommend targeting data stores after obtaining Domain Admin privileges, because Domain Admins usually have inherent access to the data. However, if the data stores are configured with weak or default passwords, it may make more sense to attack them first to ensure there is enough time to review the associated data. That approach is especially relevant to penetration tests conducted within the context of PCI or HIPAA.

Now that we’ve talked a little bit about finding sensitive data on the network, let’s talk about getting Domain Admin privileges. Obtaining Domain Admin access typically requires a little more effort, but there are some things that can be done to reduce the number of steps involved. Two of my favorites are attacking the domain controllers directly and attacking systems with active Domain Admin sessions. Domain controllers can be attacked using traditional penetration testing methodologies, and attack vectors vary greatly based on configuration, so I won’t be dedicating too much time to that specific area. If you’re interested in learning more, please refer to my previous blog: Windows Privilege Escalation Part 2: Domain Admin Privileges.

Active Domain Admin sessions, however, were not covered in any of my previous posts, so they warrant some introduction. An active Domain Admin session is a live interactive session created by a Domain Admin from one system to another. A simple example would be a Domain Admin logging into a member server from their workstation via remote desktop. When the Domain Admin logs in, a unique token is created and stored locally on the member server. If a penetration tester can gain access to the member server, they can potentially use that token to impersonate the Domain Admin on the domain. So, if you can find an active session, you can potentially impersonate a Domain Admin. The question is, “What systems are the sessions active on?” Most penetration testers use a relatively straightforward approach that involves three steps:

  1. Identify the domain controllers for the target domain.
  2. Identify the domain users who are members of the Domain Admins group. This can be accomplished by querying the Domain Controllers for user information via RPC, LDAP, and in some cases SNMP.
  3. Enumerate active Domain Admin sessions and the associated systems. This can be accomplished by querying the Domain Controllers for a list of active sessions via RPC or LDAP. Alternatively, it can be done by querying every system on the network individually for active Domain Admin sessions, but that takes a little bit longer.

There are a number of native Windows tools available for accomplishing steps 1 through 3. Most of them come with standard Windows distributions, and the rest can be found in the resource kits for Windows 2000 and 2003. I suggest leveraging your favorite scripting language to streamline the process (a rough sketch follows the list below), but if you prefer to do it manually, feel free. Some people find the extra typing therapeutic. Whichever way you decide to do it, the following information should have been enumerated by the end of the process:

  1. IP address of the system hosting the active session
  2. Username of the Domain Admin who started the active session
  3. Start date/time of the active session
  4. Idle time of the active session
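As promised, here is a minimal Python sketch of the kind of script I have in mind for steps 1 through 3. It is a sketch under stated assumptions, not a definitive implementation: it assumes a domain-joined Windows host with the pywin32 package installed, the domain name is a placeholder, the nltest output parsing is deliberately rough, and the win32net calls stand in for the RPC queries described above.

```python
import subprocess
import win32net  # from the pywin32 package

DOMAIN = "CORP"  # placeholder: replace with the target domain

# Step 1: identify domain controllers (nltest parsing is approximate).
out = subprocess.run(["nltest", f"/dclist:{DOMAIN}"],
                     capture_output=True, text=True).stdout
dcs = [line.split()[0] for line in out.splitlines()
       if line.strip() and "." in line.split()[0]]

# Step 2: pull the Domain Admins membership from the first DC.
admins, _, _ = win32net.NetGroupGetUsers(f"\\\\{dcs[0]}", "Domain Admins", 0)
admin_names = {a["name"].lower() for a in admins}

# Step 3: enumerate active sessions on each DC and flag Domain Admin ones.
for dc in dcs:
    sessions, _, _ = win32net.NetSessionEnum(10, f"\\\\{dc}")
    for s in sessions:
        if s["user_name"].lower() in admin_names:
            # client_name is the host the session came from;
            # active_time and idle_time are reported in seconds.
            print(dc, s["client_name"], s["user_name"],
                  s["active_time"], s["idle_time"])
```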

Using this information you should be able to target systems that have the potential to provide Domain Admin privileges with very little effort. Whether your internal penetration test is driven by a client request, PCI/HIPAA requirements, or just an effort to better understand the threats in your environment, don’t forget to save yourself some time by focusing your attacks on the systems that matter. Until next time, keep shining light into dark places. – SPS



60 Minutes on Cyber Security Risks

On November 8, CBS’s “60 Minutes” ran a segment on information security weaknesses called “Sabotaging the System.” The piece highlighted security vulnerabilities in segments of our nation’s critical infrastructure, including banking, power, and national defense. Former and current government officials confirmed that the threats are real: not only are probes and attacks occurring with alarming frequency, but there have been numerous documented instances of successful penetrations into all three of these sectors. The potential impact of such attacks ranges from the theft of a few million dollars to large-scale power outages or the compromise of military secrets. In short, our nation faces a significant set of risks, and I feel that “60 Minutes” did justice to the severity of the problem. It is clear that the United States has benefited greatly from the interconnection of computer systems but, at the same time, we place ourselves at great risk by leaving these systems unprotected.

Where the program fell short was solutions. There is nothing about these vulnerabilities that prevents them from being mitigated; IT security professionals solve similar problems every day. In this case, it is simply the scale of the problem that is most daunting. President Obama recently raised the issue and classified our nation’s critical digital infrastructure as a strategic asset. This is the first step along the lengthy road toward a more secure infrastructure, but it is important in that it allows the power of the federal government to be brought to bear. As it stands today, many of the requirements for both private industry and government are inconsistent, vague, and toothless. In the future, though, we will likely see increased regulation of these (and other) critical sectors.

Regulation, though, is only part of the solution, and constriction of industry by over-regulation is a very real concern. By taking the initiative to combat vulnerabilities in their own environments, companies in these sectors can not only reduce the burden that eventual regulation will bring, but also demonstrate to regulators and lawmakers that they are taking the risk seriously. While that may be a novel approach for some, there will undoubtedly be benefits to swift action. Rather than waiting for government to force them to do something undesirable, businesses should revisit and re-architect their current approach to information security and risk management: examine the security framework in use, rethink how security is organized at the company, identify critical assets, analyze current controls, and finally mitigate vulnerabilities by implementing additional policies, processes, and technologies. There is no question that this sort of initiative will cost money but, in the long run, it will be money well spent.


Do Not Use the Back Door!

In system development, a backdoor is a way of bypassing normal authentication to gain access to a system. Secret backdoor credentials often lie buried deep in the thousands or millions of lines of code that make up a system. This is just one reason why building your own user management/authorization/authentication scheme into a system is a bad idea, but that is a topic for another time.
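To make the pattern concrete, here is a hypothetical sketch of the hardcoded-credential backdoor that code reviews look for; the UserStore class and the credentials are invented for illustration.

```python
class UserStore:
    """Stand-in for a real authentication back end."""
    def verify(self, username, password):
        ...  # look up the user and check a salted password hash
        return False

user_store = UserStore()

def authenticate(username, password):
    # Legitimate path: verify the credentials against the user store.
    if user_store.verify(username, password):
        return True
    # The backdoor: a hidden support credential that bypasses the user
    # store entirely. Anyone who reads the source (or the shipped
    # binary) now owns every installation. Never do this.
    if username == "support" and password == "s3cret!":
        return True
    return False
```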

I was recently on an elevator with some people who apparently worked for a software company, and I overheard something about how their support people use a backdoor to access the application. I thought that the practice of installing backdoors in applications was well known to be a very bad idea, and that it had gone the way of the NeXT machine. Perhaps I was wrong.

This bit of overheard conversation opens a Pandora’s box of questions: Which applications have backdoors? Should my software vendor be required to tell me if the application has a backdoor? How many applications are out there with backdoors, created by well-intentioned or malicious developers, that we don’t know about?

Imagine if a popular home finance package, for example, had a backdoor. Even if it had been put there with the best of intentions, all it would take is one malicious individual with knowledge of the backdoor to destroy not only the software vendor but also the personal finances of millions of individuals.

NetSPI’s application code review assessments include checking for backdoors. Given the small amount of application code that ever gets reviewed, let alone reviewed by a third-party security assessor like NetSPI, it is safe to assume that there are a lot of scary things buried in the trillions of lines of source code out there.

My advice:

Developers: Don’t implement backdoors. It is a very bad practice.

QA & Development managers: Include checks for backdoors in your SDLC.

Consumers: Ask your application vendors if there are any backdoors in their products, and get their answers in writing.


Questions on PA-DSS from Software Companies and Straight Answers

This post is the result of many, many conversations with software companies regarding the PCI Payment Application Data Security Standard (PA-DSS). What’s really interesting about all those conversations is that they tend to fit into two categories: the first involves software companies that know they need to go through PA-DSS validation and are looking for real guidance; the second typically involves organizations that are looking for someone to tell them that the standard doesn’t apply to them for some reason.

The questions below are the most common ones that we receive from that second group. Hopefully, the answers will help those non-POS companies that are wondering whether PA-DSS applies to them.

1.    We are a healthcare (or some other type of) software company, not a POS vendor. Our software isn’t even implemented in a retail setting. PA-DSS doesn’t really apply to us, right?

No, this is not correct. If your situation matches the criteria for the standard (listed just below), then PA-DSS applies. That doesn’t mean that you are forced to go through the validation process; it just means that you have to go through it if you want to keep selling your software to companies that accept Visa transactions. Keep reading and I’ll explain.

General Criteria:

  • Is the application being sold, distributed, or licensed to a third party?
  • Does the application store, process, or transmit cardholder data (credit card information)?  Note – ‘process’ doesn’t necessarily mean financial transaction. For example, taking and hashing a card number would be considered ‘processing’ cardholder data.

2.    Our company is really a Service Provider – 80% of our business is hosted, and we only sell our software to a subset of our clients for implementation at their location. We don’t have to bother going through this, do we?

According to the PCI SSC, if your software matches the criteria (see #1), then you need to go through the process. In fact, if you haven’t already, you most likely need to go through the full PCI DSS process as a Service Provider as well. There are exemptions for one-off, customized solutions, but if you are selling the same base application to two or more clients (and, again, you meet the criteria) you need to address PA-DSS.

3.    Isn’t there some way of doing this more cheaply?

Not really. The process of validating an application under PA-DSS is actually quite involved. It includes documentation review, lab testing, interviewing, process and controls review, documentation, documentation, documentation, and some more documentation. This is not PABP (if you are familiar with the original Visa software security standard). We have seen significant issues with companies that thought they could treat PA-DSS with the same level of attention and effort as they gave to the older standard.

4.    Are the PCI Council and the card brands trying to put my company out of business?

No – I honestly don’t think so. They are simply trying to ensure that the applications being sold in the marketplace to handle credit card information and transactions are capable of being implemented in a PCI-compliant manner. Their interest is primarily in keeping card data secure and in making sure that merchants choose applications that will support that goal.

5.    Are they really going to enforce this?

Yes, I think they are. Visa has given every indication that they fully intend to enforce PA-DSS validation and, given their willingness to do so on the broader PCI DSS, I would be willing to bet that they won’t back down.

6.    Do my customers need to have our product validated for them to be compliant with the PCI DSS?

No, your customers are not required by the PCI DSS to utilize PA-DSS validated applications today. In practice (a point stressed at the recent PCI Community Meeting), Visa will be requiring merchants and service providers to use PA-DSS validated applications (unless the applications are developed internally). That Visa announcement makes PA-DSS a practical requirement (even if not technically a PCI requirement) for companies that want to continue accepting Visa as a form of payment.

7.    How long does the PA-DSS process take?

It actually depends heavily on the software vendor. If they are highly motivated, committed, and well prepared (and timing with the PA-QSA matches up), the process can be fairly quick. This isn’t necessarily typical, however. At NetSPI we work with our PA-DSS clients in a very interactive manner. We want to identify gaps and allow you to address them prior to testing and review being complete. Closing identified gaps requires the software vendor to take real action; without that action and commitment, the process can really drag. There is little the PA-QSA can do to move the process along without that commitment.

8.    Why are there fewer PA-QSAs than PCI QSAs?

In my opinion, it is due to a few things:

  • Application expertise and app security expertise are NECESSARY for PA-DSS – it’s not quite so easy to throw outside resources at PA-DSS and try to survive on a rigid audit process; you actually have to know what you are talking about.
  • The rigorous QA program that the PCI SSC has implemented on the PCI DSS side was actually ‘turned on’ from day one on the PA-DSS side and required a level of documentation and commitment on the part of the PA-QSA that many in the PCI community simply didn’t want to (or couldn’t) handle.
  • In PA-DSS, the relationship between the firm validating the software and the vendor going through validation is extremely important. This isn’t a matter of “do an audit and we’re done.” If a PA-QSA firm is doing its job correctly, the QSAs are having discussions with their client about future updates, version control issues, architecture issues, how to operationalize future PA-DSS validation efforts, etc. That’s a consulting/partnership relationship, and many in the PCI community aren’t focused on that type of relationship.

9.    Implementing our solution in our PA-DSS partner’s lab is a real pain. Do we have to?

Not necessarily – the council prefers that your application be tested in a PA-QSA’s lab, but it’s not required. If it is impractical for the application to be installed in the testing lab (and your PA-QSA feels comfortable defending that opinion), you can bring the PA-QSA to your location.

10.    We think this whole thing is ridiculous. What if we refuse?

You certainly have that right, but the PCI SSC has been very straightforward: the PA-DSS list is to be the de facto standard for PCI-acceptable applications. If your application is not on the list, you are taking a risk, and the potential impact would most likely go far beyond the expense of going through the process.

11.    We’ve gone through CCHIT (or some other standard that includes security). Aren’t we covered?

No. PA-DSS is not a suggestion from Visa; it’s a requirement. It is also the only standard that is entirely focused on cardholder data (the data that the card brands really care about). If your solution has been successfully audited or validated under another security-focused standard, you may have a head start on getting it to a PA-DSS-compliant state, and that effort may help prepare you for PA-DSS validation.


PCI in Europe Today

I attended the 2009 PCI Community meeting in Europe last week. Since this was a feedback year, there wasn’t a significant amount of new content; however, there were some interesting points regarding PCI adoption in Europe.

It has been discussed quite frequently that Europe is behind North America in implementing PCI, especially at the merchant level. In my experience, and based on the discussions at the conference, I’d say this is true. The consensus at this year’s conference, however, was that the situation is beginning to change.

The traditional arguments against adopting the PCI DSS, such as those surrounding increased security due to Chip and PIN, elicited a fair amount of eye rolling even from other Europeans in the audience. One of the other core reasons for slower adoption is that country-by-country legislation already covers much of what PCI does (France and Germany were the two most cited examples). Interestingly, U.S. state-based legislation was cited as a similar and perhaps more stringent (and therefore more effective) means of securing credit card data. In fact, one of the attendees cited my home state’s legislation, the Minnesota Plastic Card Security Act, which, in my opinion, has had very little impact on organizations that do business in the state.

I think that there are three key items that will drive PCI’s adoption in Europe. First, the Europeans will need to understand that, while very effective for face-to-face transactions, Chip and PIN does not protect card-not-present (CNP) transactions. As more business is done online, organizations are going to need to deal with the issues that PCI addresses and that Chip and PIN does not. Second, and perhaps most important, acquiring banks will need to enforce the PCI standard. This was a key topic of discussion at the conference and one that appears to still be open. Finally, and highly related, the card brands in Europe are going to need to support the PCI standard. The commentary that I heard at this meeting was that this appears to be happening. If that is the case, it should only be a matter of time before the acquiring banks, and therefore merchants, take PCI as seriously in Europe as they do in North America.


European PCI Community Meeting: Some Impressions

The trip back to the U.S. from the European PCI Community Meeting in Prague took about 12 hours. For someone who lives and breathes PCI, that equals one hour for each of the 12 requirements of the Data Security Standard (DSS). Here are my impressions of the conference.

First, the PCI Security Standards Council did another great job of bringing the payments community together to discuss current topics and provide feedback regarding the DSS. Second, I met a lot of interesting people and made numerous contacts during the networking sessions.

Third, the meetings were well attended and provided valuable information. I was able to discuss the current state of compliance with European representatives from acquirers, card brands, merchants, service providers, and fellow QSAs. One thing that stands out from these conversations is that the U.S. remains in the forefront of payments security.

Fourth, from a QSA or practitioner point of view, two topics of special concern emerged during the open-microphone sessions: issuers and logging. These two areas were also brought up at the North American Community Meeting in September. So the feedback from the community on both sides of the Atlantic indicates a need for more clarification and guidance on how organizations that are classified as issuers need to comply, and for more guidance on how to review logs.

Fifth, if you ever have an opportunity to visit Prague, make sure to do so. The city is amazing, and the Czech people are very hospitable to visitors. It was a perfect venue for the European PCI Community Meeting.


Vulnerability Scanning with Multiple Products

Should you rely on just one solution to identify all of your vulnerabilities? Most of us rely on just one anti-virus scanner, right? Every vulnerability scanner claims to be better than its competitors, but how can that be? Where is the Consumer Reports on this subject? I think there is a mix of reasons why this subject has not been picked up by the likes of Gartner or Forrester: it’s quite technical and hard to understand, and the audience may be too small. I recently asked two independent security test labs whether vulnerability scanning products had ever been tested and compared against one another, with the results then published. The short answer is no. Products are often benchmarked against standard criteria, and results are privately reported according to whether or not they meet the minimum criteria. There have been some rogue studies on the subject, and I have conducted extensive testing myself. I can confirm that certain products are better than their competitors, but not in all areas.

Because there are no well-defined standards or readily available test results, security practitioners are left using a vulnerability scanner that performs like a piano with many keys out of tune. In our own testing we have seen variations of up to 60% among leading products. In addition, their comprehensiveness and accuracy depend on what operating systems, applications, and configuration settings you have, and on whether or not your scanner vendor agrees that a particular vulnerability is important enough to test for. In a decade-old product space, we have not seen complete maturity of either the space or the products themselves. During this time there have been a number of acquisitions of product vendors, and some of those acquired products no longer exist. At the same time, new and exciting products and vendors continue to emerge. The requirements of a scanner have evolved from OS-level service checks to include web application vulnerabilities, authenticated configuration testing, and zero-day attacks. Within the typical server environment, so many vulnerabilities are identified time and time again that many organizations find it difficult to embrace the idea that there may actually be more vulnerabilities out there going undetected.

If your security team is a capable one, I encourage you to incorporate both commercial and open source tools, and even to consider introducing more than one commercial product. If you outsource this service, ask your vendor what products it tests with and whether or not it can consolidate all findings from all products into one comprehensive report. In lieu of product comparison benchmarks, this approach may be your best option to ensure you are not leaving large areas of vulnerabilities undiscovered. Keep in mind that if you hire a product vendor to perform your assessment, its professional services team may not be able to use a different vendor’s product within its own solution. For those of you concerned by the thought of too many vulnerabilities, check back in a couple of weeks, as I plan to discuss some techniques for vulnerability prioritization and remediation.
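To show what consolidation can look like, here is a minimal Python sketch that merges findings from two scanners and flags where they overlap. The record format and the sample findings are invented for illustration; real scanner exports would need per-vendor parsing and smarter matching than a simple host/CVE key.

```python
from collections import defaultdict

# Hypothetical, pre-parsed findings from two different products.
scanner_a = [
    {"host": "10.0.0.5", "cve": "CVE-2009-3103", "severity": "high"},
    {"host": "10.0.0.5", "cve": "CVE-2008-4250", "severity": "high"},
]
scanner_b = [
    {"host": "10.0.0.5", "cve": "CVE-2008-4250", "severity": "critical"},
    {"host": "10.0.0.9", "cve": "CVE-2009-1535", "severity": "medium"},
]

# Merge on a (host, CVE) key, tracking which products reported each finding.
merged = defaultdict(lambda: {"sources": set(), "severities": set()})
for name, findings in (("A", scanner_a), ("B", scanner_b)):
    for f in findings:
        key = (f["host"], f["cve"])
        merged[key]["sources"].add(name)
        merged[key]["severities"].add(f["severity"])

# Findings reported by only one product are exactly the coverage gaps
# a single-scanner program never sees.
for (host, cve), info in sorted(merged.items()):
    print(host, cve, sorted(info["sources"]), sorted(info["severities"]))
```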

