
When Databases Attack: SQL Server Express Privilege Inheritance Issue

SQL Server Express is commonly used by database hobbyists, application developers, and small application vendors to manage their application data. By default, it supports a lot of great options that make it a very practical solution to many business problems. However, it also ships with a not-so-great default setting that could allow domain users to gain unauthorized access to SQL Server Express instances. In this blog I’ll cover what the issue is, how to attack it, and how to fix it.

How it works

Through privilege inheritance, all domain users have access to default SQL Server Express instances that have remote listeners enabled. This appears to be possible because the local Windows “BUILTIN\Users” group is assigned “connect” privileges during the default installation. Below is a summary of how this configuration allows users to gain unauthorized access to databases.

  1. By default, the “NT AUTHORITY\Authenticated Users” built-in security principal includes all users that have been “authenticated locally by a trusted domain controller.” That includes all domain user and machine accounts.
  2. By default, the “NT AUTHORITY\Authenticated Users” built-in security principal is added as a member of the local “Users” group in Windows. This can be verified by issuing the following command from a Windows console:
    C:\>net localgroup users
    Alias name     users
    Comment        Users are prevented from making accidental or intentional system-
    wide changes and can run most applications
    Members
    -------------------------------------------------------------------------------
    NT AUTHORITY\Authenticated Users
    NT AUTHORITY\INTERACTIVE
    The command completed successfully.
  3. By default, SQL Server Express versions 2005 through 2014 create a login for the local “BUILTIN\Users” group that provides users with connect privileges. This can be verified by issuing the following query in any default SQL Server Express instance:
    C:\>SQLCMD -S "%COMPUTERNAME%\SQLEXPRESS" -E -Q "SELECT * FROM sys.server_principals WHERE name = 'BUILTIN\Users';"
    name
    ------------------------------------------------
    BUILTIN\Users
    (1 rows affected)
    ...[snip]...
  4. As a result, all user and machine accounts on the same domain as the SQL Server Express instance also inherently have connect permissions to the SQL Server Express instance if a TCP listener has been enabled. Below is a basic example of how to issue a query to one of the affected SQL Servers from a Windows console:
    SQLCMD -E -S "AffectedServer1\SQLEXPRESS" -Q "SELECT @@Version"
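    
The inheritance chain in the steps above can be modeled as a small sketch. This is a toy model in Python, not anything SQL Server actually runs; the account names (“CORP\alice”, “CORP\WORKSTATION01$”) are hypothetical, and the group nesting mirrors the defaults described in steps 1–3:

```python
# Toy model of the privilege-inheritance chain described above.
# No SQL Server involved; this just walks nested group membership
# to show why any domain account ends up with connect rights.

# Steps 1-2: default Windows group nesting (hypothetical domain accounts).
GROUP_MEMBERS = {
    "BUILTIN\\Users": {"NT AUTHORITY\\Authenticated Users"},
    "NT AUTHORITY\\Authenticated Users": {"CORP\\alice", "CORP\\WORKSTATION01$"},
}

# Step 3: logins granted connect on a default SQL Server Express install.
CONNECT_GRANTS = {"BUILTIN\\Users"}

def expand(group):
    """Recursively expand a group into the leaf accounts it contains."""
    accounts = set()
    for member in GROUP_MEMBERS.get(group, set()):
        if member in GROUP_MEMBERS:      # nested group: recurse
            accounts |= expand(member)
        else:                            # leaf account
            accounts.add(member)
    return accounts

def can_connect(account):
    """Step 4: an account can connect if any granted login contains it."""
    return any(account in expand(grant) for grant in CONNECT_GRANTS)

print(can_connect("CORP\\alice"))           # domain user account
print(can_connect("CORP\\WORKSTATION01$"))  # domain machine account
```

Neither account was ever granted anything directly, yet both resolve to connect access through the nested group memberships — which is exactly the behavior described above.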

At a minimum, this default configuration provides an internal attacker with initial access to SQL Server Express instances. That “foot in the door” could potentially be leveraged to gain access to other database servers, systems, and network resources. During penetration tests, this type of issue often leads to exposure of sensitive data and system access.

How to attack it

Below I’ve outlined one method for accessing SQL Server Express instances on the current broadcast domain using a standard Active Directory domain account. Keep in mind that there are a number of ways to accomplish the same thing. For example, the enumeration could be run through the “xp_cmdshell” extended stored procedure in order to execute with the privileges of the SQL Server service account, which authenticates to the network as the domain machine account when the service runs as “NT AUTHORITY\SYSTEM”. Also, a full list of domain SQL Server targets could be obtained by any domain user via LDAP queries for MSSQL SPN information.

Note:  You may have to disable/modify your local firewall to ensure that SQLCMD can process the UDP responses from the SQL Servers on the network.

  1. Log into a Windows system with domain credentials.
  2. Install SQL Server Express.
  3. Open up a command prompt.
  4. Enumerate SQL Server instances that you have access to on the domain with the command below.
    FOR /F "tokens=*" %a IN ('SQLCMD -L') DO SQLCMD -E -S %a -Q "SELECT 'Vulnerable: '+@@SERVERNAME" | FIND /I "Vulnerable:" >> dbservers.txt
  5. Now you have a list of vulnerable SQL Servers that you can issue arbitrary queries to with SQLCMD or SQL Server Management Studio. If you’re a penetration tester, you can also start escalating privileges and gaining unauthorized access to data.
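
The discovery step above can also be scripted outside of cmd.exe. Below is a minimal Python sketch of the same idea: parse the output of “SQLCMD -L” into a target list, then test each target. The subprocess call is shown only as a comment so the parsing logic stands alone, and the server names in the sample output are hypothetical:

```python
# Sketch: turn "SQLCMD -L" broadcast-discovery output into a target list.
# In practice the output would be captured with subprocess, e.g.:
#   import subprocess
#   raw = subprocess.run(["SQLCMD", "-L"], capture_output=True, text=True).stdout

# Hypothetical captured output for illustration:
raw = """
Servers:
    AffectedServer1\\SQLEXPRESS
    AffectedServer2\\SQLEXPRESS
    DBPROD01
"""

def parse_sqlcmd_list(text):
    """Return server/instance names, skipping the 'Servers:' header and blanks."""
    targets = []
    for line in text.splitlines():
        name = line.strip()
        if name and name != "Servers:":
            targets.append(name)
    return targets

for target in parse_sqlcmd_list(raw):
    # Each target could then be tested with something like:
    #   SQLCMD -E -S "<target>" -Q "SELECT 'Vulnerable: '+@@SERVERNAME"
    print(target)
```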

At some point in the near future I’ll also release a T-SQL script that will output the list in a pretty table. If you’re interested in similar attacks, I wrote a blog called “When Databases Attack: Hacking with OSQL” that you might like.

How to fix it

Remove the “BUILTIN\Users” login from SQL Server Express instances to prevent evildoers from accessing your data. For example, the following can be run against each instance (verify first that no legitimate application depends on the login):
    SQLCMD -S "%COMPUTERNAME%\SQLEXPRESS" -E -Q "DROP LOGIN [BUILTIN\Users];"

Conclusions

From what I understand, Microsoft only made this a configuration default in Express editions to help make SQL Server easier to deal with on Windows systems with User Account Control (UAC) enabled. So if you’re running any other edition you shouldn’t have to worry about anything unless someone manually added a login for “BUILTIN\Users”. With that, I have a few words of advice. First, never trust default configurations. Second, always leverage best-practice hardening guides to help lock down new systems and applications. Third, don’t forget to configure accounts and securables with least privilege. Good hunting.


Insider Threats

We all want to believe that our co-workers will do the right thing, and that we need to focus our security efforts on the bad guys “out there.” However, an insider incident is one of the worst an organization can withstand. Carnegie Mellon’s CERT® Coordination Center has launched the CERT Insider Threat Database, a collection of approximately 700 cases of insider activity that “resulted in the disruption of an organization’s critical information technology (IT) services.” I realize that 700 cases since they started collecting data in 2001 seems like a drop in the bucket, but it’s important to remember that these are cases involving critical IT services that were reported to CERT. Many incidents are not reported because the organization doesn’t want the negative publicity or, in even worse cases, because the perpetrator hasn’t been caught (yet).

In many discussions about insider threats I’ve referred to the San Francisco IT administrator charged with holding the city’s network hostage. In that case he didn’t give the administrative credentials back to his employer, but he kept the systems operational. It was a good example, but it’s now a bit dated (2008), and it was only a matter of time before another one emerged. With a roar, it did. An IT administrator has recently pleaded guilty to crippling his former employer’s network. Some have dubbed this a “hacking spree,” but I would differentiate it as not a hack, but an individual with elevated privileges who became so disgruntled that he lashed out. When he did so, he didn’t use specialized hacking tools or techniques; instead he used a common administrative tool to delete critical IT systems, causing in excess of $800,000 in damages according to court documents.

What makes this example worse is that this individual resigned before the attack, but the organization kept him on as a consultant “due to his extensive knowledge of the company’s network.” He performed his attacks with valid user credentials and common support tools. Why am I drawing such a distinction about whether this is hacking or not? When discussing risks as part of your normal risk assessments, Risk Management Program, etc., I think it is important to draw the distinction as it relates to policies and implementable controls. There is usually a lot of effort put into protecting against malicious and unauthorized attacks (i.e., hacking) compared to disgruntled individuals with elevated privileges. Malicious? Yes. Unauthorized? No. That’s the scary part, and the one that needs to be addressed by each and every organization. The takeaway here is to ensure that segregation of duties is followed so no one person has the keys to the kingdom, and that disgruntled employees are not retained where they can cause extensive damage to the organization.


PCI 2.0 scoring matrix released to the public (now your kids can play “PCI Auditor” at home!)

The PCI Security Standards Council (SSC) has recently released the latest version of the 2.0 Report on Compliance (ROC) Reporting Instructions (formerly called the “scorecard”). This document had previously been for use by QSA auditors only; it is the secret sauce used to perform a Level 1 PCI audit. For those of you lucky enough to have gone through an L1 audit, the “scorecard” is the super secret document that the QSA kept stored on the triple-encrypted drive in the TEMPEST-approved tamperproof tungsten-lined briefcase handcuffed to her wrist. QSAs were not allowed to share the criteria on which the company was being audited (scored); the reporting instructions require the QSA to perform one or more of the following validation steps for every requirement:

  • Observation of system settings, configurations
  • Documentation review
  • Interview with personnel
  • Observation of process, action, state
  • Identify sample

Well, good news everyone! The document is now available to the general public. Hopefully, this will eliminate some of those awkward moments that seem to always come up during an audit:

QSA: “You need a documented policy that says you use network address translation. That’s not written down anywhere.”
Customer: “Can you show me where it says I need to do that in the DSS?”
QSA: “You won’t find it there, but I promise it says it somewhere. I’m not allowed to show you, just trust me, you need it.”
Customer: “Can you just let me peek over your shoulder?”
QSA: “If you saw it, I would have to have your memory wiped. Have you ever seen ‘Men in Black’?”
Customer: “I’m calling Security.”

It’s pretty hard to follow the rules when you’re not allowed to know what they are. With this document’s public release, a company can actually evaluate their controls and compliance program against the same standards that a QSA will use; no more guessing how to meet a requirement, no more conversations where the auditor gives a seemingly arbitrary failing finding with a “because I said so” for the explanation. This should also allow organizations to get a much better picture of the intent and expected implementation of a requirement by understanding how the controls will be assessed. Well done, SSC.


Metrics: Your Security Yardstick – Part 2 – Defining Metrics

After a number of questions on the topic, I have decided to follow up on my earlier security metrics blog with a bit more information regarding metrics development. The diagram below outlines the metrics development process.

Security Metrics Development Process

1. Identify Controls to Measure – This is pretty self-explanatory: which controls do you want to evaluate? In very mature security programs, metrics may be gathered on numerous controls across multiple control areas. However, if you’re just starting out, you likely would not realize significant value from such detailed metrics at this time and would benefit more from monitoring key indicators of security health such as security spending, vulnerability management status, and patch compliance. In general, controls to be evaluated should be mapped from external and internal requirements. In this fashion, the impact of controls on compliance can be determined once metrics become available. Conversely, metrics can be designed to measure controls that target key compliance requirements. For this blog, I will focus on metrics related to vulnerability management.

2. Identify Available Sources of Data – This step is established in order to identify all viable sources of data which may be presented singularly or combined with others to create more comprehensive security metrics. Sources of data for metrics will vary based on what sort of controls are being measured. However, it is important that data sources be reliable and objective. Some examples of metrics that can be gathered from a single source (in this case, a vulnerability management tool) are listed in the table below.

Name                                                            Type
--------------------------------------------------------------  -----------
Number of systems scanned within a time period                  Effort
Number of new vulnerabilities discovered within a time period   Effort
Number of new vulnerabilities remediated within a time period   Result
Number of new systems discovered within a time period           Environment
List of current vulnerabilities w/ ages (days)                  Result
List of current exploitable vulnerabilities w/ ages (days)      Result
Number of OS vulnerabilities                                    Environment
Number of third-party vulnerabilities                           Environment
List of configured networks                                     Effort
Total number of systems discovered / configured                 Effort
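
As a concrete illustration of pulling several of the metrics above from one data source, here is a minimal Python sketch that computes two of them from a scanner export. The CSV layout and column names (host, vuln_id, discovered) are invented for the example; real column names vary by product:

```python
import csv
import io
from datetime import date

# Hypothetical vulnerability-scanner export; real exports differ by vendor.
EXPORT = """host,vuln_id,discovered
10.0.0.5,CVE-2011-1234,2011-08-01
10.0.0.5,CVE-2011-5678,2011-09-01
10.0.0.9,CVE-2011-1234,2011-09-05
"""

rows = list(csv.DictReader(io.StringIO(EXPORT)))
period_start = date(2011, 9, 1)  # start of the reporting period

# "Number of systems scanned within a time period" (Effort):
# count distinct hosts that appear in the export.
systems_scanned = len({row["host"] for row in rows})

# "Number of new vulnerabilities discovered within a time period" (Effort):
# count findings first seen on or after the period start.
new_vulns = sum(
    1 for row in rows if date.fromisoformat(row["discovered"]) >= period_start
)

print(systems_scanned, new_vulns)
```

The same export could feed most of the other single-source metrics in the table (vulnerability ages, OS vs. third-party counts, and so on) by grouping on different columns.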

3. Define Security Metrics – Decide which metrics accurately represent a measurement of controls implemented by your organization. Begin by developing low-level metrics and then combine to create high level-metrics that provide deeper insight.

a. Low-Level Metrics
Low-level metrics are measurements of aspects of information security within a single area. Each metric may not be sufficient in conveying a complete picture but may be used in context with different metric types. Each metric should attempt to adhere to the following criteria:

  • Consistently measured
  • Inexpensive to gather
  • Expressed as a cardinal number or a percentage
  • Expressed as a unit of measure

Low-level metrics should be identified to focus on key aspects of the information security program. The goal should be to identify as many measurements as possible without concern for how comprehensive each measurement may be. The following are examples of low-level metrics:

  • Hosts not patched (Result)
  • Hosts fully patched (Result)
  • Number of patches applied (Effort)
  • Unapplied patches (Environment)
  • Time to apply critical patch (Result)
  • Time to apply non-critical patch (Result)
  • New patches available (Environment)
  • Hours spent patching (Effort)
  • Hosts scanned (Effort)

b. High-Level Metrics
High-level metrics should be composed of multiple low-level metrics in order to provide a comprehensive measure of effectiveness. The following are examples of such metrics:

  • Unapplied patch ratio
  • Unapplied critical patch trend
  • Unapplied non-critical patch trend
  • Applied / new patch ratio
  • Hosts patched / not patched ratio
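
The high-level metrics above are just simple combinations of the low-level counts. A minimal sketch of three of them in Python (the sample values are invented for illustration):

```python
# High-level metrics derived from low-level measurements.
# Sample values are invented purely for illustration.
low = {
    "hosts_patched": 180,       # Result
    "hosts_not_patched": 20,    # Result
    "patches_applied": 950,     # Effort
    "new_patches": 1000,        # Environment
    "unapplied_patches": 50,    # Environment
}

# Unapplied patch ratio: share of available patches still outstanding.
unapplied_ratio = low["unapplied_patches"] / low["new_patches"]

# Applied / new patch ratio: how well patching keeps up with releases.
applied_new_ratio = low["patches_applied"] / low["new_patches"]

# Hosts patched / not patched ratio.
host_ratio = low["hosts_patched"] / low["hosts_not_patched"]

print(f"{unapplied_ratio:.0%} {applied_new_ratio:.0%} {host_ratio:.1f}")
```

The trend versions (unapplied critical / non-critical patch trends) come from recomputing the same ratios each collection period and plotting them over time.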

4. Collect Baseline Metric Data – A timeframe should be established that is sufficient for creating an initial snapshot, as well as basic trending. It is important that you allow enough time to collect a good baseline sample, as data can easily be skewed as you’re working out little bugs in the collection process.

5. Review Effectiveness of Metrics – Review the baseline data collected and determine whether it is effective in representing the success factors of specific controls. If some metrics fall short of the overall goal, another iteration of the metric development process will be necessary.

6. Publish Security Metrics – Begin publishing security metrics in accordance with pre-defined criteria.

As noted above, well-designed metrics must be objective and be based upon reliable data. However, data sources are not always fully understood when selected and, as a result, metrics may end up being less effective than when they were initially designed.

After metrics have been implemented, a suitable timeframe for collecting baseline data should be permitted. Once this has been done, metrics should be reevaluated in order to determine whether or not they provide the requisite information. Some metrics may fall short in this regard; if this is the case, another iteration of the metric development process will be necessary. Ultimately, metrics are intended to provide insight into the performance of a security program and its controls. If the chosen metrics do not do this effectively, or answer the questions that your organization is asking, then the metrics must be redesigned prior to being published and used for the purposes of decision making.


Hacking Twitter for Fun (and Profit?)

Just last week, on the eve of the tenth anniversary of the 9/11 attacks, NBC News’ Twitter account was hacked by a group calling itself The Script Kiddies. Posing as NBC News, The Script Kiddies falsely tweeted that an airliner had been hijacked and flown into the Ground Zero site in New York City. This is the second such attack perpetrated by The Script Kiddies, the first being a July 4 hack of the Fox News Twitter account claiming that President Obama had been assassinated. In both cases, the spurious posts were quickly removed by Twitter and the news agencies.

Traditionally, hackers have chosen their targets in order to either profit financially or make a political statement (never mind the advanced persistent threats represented by nation states and powerful criminal organizations); recent publicized attacks demonstrate this. Fame and reputation have always been motivators for hackers but, in recent years, business-savvy blackhats seem to have outnumbered the jesters of the digital underground by a wide margin. Twitter hacks are hardly uncommon and generally seem to be done more for amusement than for any truly nefarious purpose, but they mostly slip by unnoticed aside from a handful of celebrity victims and entertainment reporters. As far as I can tell, the NBC and Fox attacks are no different in terms of motivation, but the side effects are much more serious. Cyber terrorism has been a buzz topic for some time now and, while false news reports may rank relatively low on the impact scale, it is probably only a matter of time before this sort of event occurs specifically in order to incite panic in the general population. That would be a real paradigm shift, but I don’t know that we’re there yet. These attacks appear to serve no obvious purpose beyond self-promotion.


Mayo Clinic's Solution for Social Media Challenges

The Mayo Clinic recently launched the Mayo Clinic Center for Social Media (https://socialmedia.mayoclinic.org/), intended to help train medical practitioners and patients on the use of social media to improve patient care. While it’s easy to see how greater access to healthcare-related information can be very valuable, problems with doctors and nurses posting PHI inappropriately have made news headlines more than a handful of times. Therefore, this new development comes at a great time, just as more and more organizations are beginning to appreciate the value of a comprehensive social media strategy.

With the goal of delivering better quality care to patients, many healthcare systems are sharing EMR applications and medical data repositories and setting up interfaces between different systems. This increases exposure of medical records to a larger group of healthcare practitioners by allowing better, faster, and easier collaboration between doctors. With increased collaborative efforts, it’s become more likely that doctors will choose social media as the catalyst of collaborative efforts and patient information sharing. Therefore, organizations that act as custodians of PHI, such as hospitals, clinics, and research labs, must take active steps in educating their workforce about the dangers of social media, and how these tools can be used effectively without violating patient confidentiality or current healthcare compliance requirements.

Through the Center for Social Media, Mayo Clinic seems to approach the problem from multiple angles. While the portal is still very young, the articles already posted address creating well-designed social media policies and appropriate training materials, as well as provide analysis of documented cases of misuse of PHI. Overall, I view this as a very positive development and will continue to monitor this website for insightful information about the best use of social media in healthcare. After all, this technology is here to stay, and draconian policies of simply blocking access to Facebook from the workplace have proven to be ineffective. The answer to these challenges clearly points to better guidance and training for healthcare practitioners, as well as developing tools for responsible, effective, and secure collaboration.

