Reflections on Black Hat 2011

There were a number of very good presentations this year and the after-hours parties were great, but from a security industry standpoint, Black Hat 2011 seemed to have less energy than in years past. Some of that might have been because it got so much airplay on commercial media and NPR before and during the event, but even with many, many more people, there just wasn’t as much excitement as before. It’s long been clear that the US Government is interested in the space and is spending massive amounts of money on information security and new security technology. It’s also apparent that many organizations are waking up to the fact that they need to develop effective information security programs. Recent discussions with clients are generally about how much more budget they will have in 2012 than this year. These are good things, and you’d think they’d lead to significant private investment and more innovation that might show up at Black Hat. However, while Black Hat (and DEF CON for that matter) is supposed to be vendor neutral, you would still expect organizations to emerge as industry leaders or, at a minimum, to show overall industry thought leadership. Other than the US Government and its speakers (in particular Mudge), there wasn’t much commentary on the state of the industry and bigger-picture issues. I realize that some of the lack of corporate thought leadership (and momentum) is intentional – Jeff Moss referenced getting back to vendor neutrality in one of the keynote intros, and I do understand that Black Hat is more about security research and technology. Nevertheless, in past years there was at least some industry excitement surrounding new concepts and industry-related acquisitions, such as IBM buying Ounce and AppScan, or HP buying WebInspect and Fortify. Even the spinoff (and eventual Dell acquisition) of SecureWorks created buzz at Black Hat in the past.
There was really no “buzz” and no real unifying industry vision at this year’s event – which ultimately is important as we mature as a vertical. As has happened before in the security industry, roll-ups and investment seem to be bungled. Like the first major round of roll-ups (where Symantec, McAfee, and VeriSign were the acquirers), the latest generation of security roll-ups appears to be flailing. IBM has struggled to consume ISS and its other recently acquired security product lines. HP appears to be in a similar boat. RSA looked like it might be starting something, but, well, they won a Pwnie this year… Don’t get me wrong, I enjoyed many of the presentations – Moxie Marlinspike was great, Nelson Elhage’s presentation on breaking KVM was interesting, and I always enjoy the Securosis crew. Additionally, the overall focus on mobile security, iOS, and Android was good. And the open discussion about the advanced persistent threat (APT) and what is actually going on with foreign governments (like China) was refreshing – Alex Stamos gave a good 10-minute overview of APT within his presentation comparing Windows and Apple security. However, you know the industry is having issues when one of the main industry-related discussions is about Trustwave trying to go public (which we’ve been hearing about for 18 months) and the biggest booth at the show is occupied by a Pwnie award winner, RSA (one of the reasons for increased budgets next year). I’m not sure that this will change soon; in fact, not having large major players benefits boutique firms like NetSPI. However, with all of the government money and the increased information security budgets, it’s inevitable that we’ll see more investment, new ideas, and new leaders emerge – maybe next year.


Metrics: Your Information Security Yardstick

Mention metrics to most anyone in the information security industry and eyes will immediately glaze over. While there are a few adventurous souls out there, most security professionals balk at the prospect of trying to measure their security program. However, the ability to do just that is an essential part of a security program at any maturity level and a way to move your program forward. Metrics allow security stakeholders to answer important questions about the performance of the security program and, ultimately, to make educated decisions regarding the program. Being able to communicate the effectiveness of your program to business leaders is an important part of a program’s identity and maturity within an organization. Generally speaking, metrics can be categorized into three types based on what they signify:

  • Effort metrics measure the amount of effort expended on security. For example:
    • training hours,
    • time spent patching systems, and
    • number of systems scanned for vulnerabilities.
  • Result metrics attempt to measure the results of security efforts. Examples of result metrics include:
    • the number of days since the last data breach,
    • the number of unpatched vulnerabilities, and
    • the number of adverse audit or assessment findings.
  • Environment metrics measure the environment in which security efforts take place. These metrics provide context for the other two metrics. For example:
    • the number of known vulnerabilities, and
    • the number of systems.

By compiling metrics, it is possible to measure the effect of your organization’s investment in security. For example, you may track the number of vulnerabilities that have been known to exist in your environment for longer than 30 days. After making improvements to your patch and configuration management processes, you should see a positive impact represented by a decrease in the vulnerability count. Similarly, budget cuts could negatively impact your security program; by demonstrating this negative impact through metrics, you will (hopefully) have a better chance at increasing your security budget in the next budgeting cycle. Remember that every organization’s security program is different and, as such, a metrics package is not one-size-fits-all. In particular, more mature programs can supply more detailed and advanced metrics; however, less mature programs can still benefit from simple metrics. No matter where your security program is on the maturity curve, metrics can give you better insight into your program’s strengths and weaknesses and, as such, will allow you to make better management decisions.
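As a rough illustration of the 30-day example above, here is a minimal Python sketch of a result metric. The record format, dates, and identifiers are all hypothetical; a real implementation would pull from your vulnerability scanner or ticketing system.

```python
from datetime import date, timedelta

# Hypothetical vulnerability records: (identifier, date first detected, patched?)
vulns = [
    ("CVE-2011-0001", date(2011, 6, 1), False),
    ("CVE-2011-0002", date(2011, 7, 20), False),
    ("CVE-2011-0003", date(2011, 5, 15), True),
]

def aged_open_vulns(records, as_of, max_age_days=30):
    """Result metric: unpatched vulnerabilities known longer than max_age_days."""
    cutoff = as_of - timedelta(days=max_age_days)
    return [v for v in records if not v[2] and v[1] <= cutoff]

aged = aged_open_vulns(vulns, as_of=date(2011, 8, 1))
print(len(aged))  # prints 1 -- only CVE-2011-0001 is both unpatched and >30 days old
```

Tracked month over month, a falling count suggests your patch and configuration management improvements are working; a rising count is the kind of early warning this metric exists to provide.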


Security and Privacy Considerations in "Meaningful Use"

One of the common and consistent themes at HIMSS (Healthcare Information and Management Systems Society) this year was achieving “Meaningful Use” requirements so that healthcare providers can apply for EHR (Electronic Health Record) stimulus money. The “Meaningful Use” requirements focus on:

  • improving quality, safety, and efficiency, and reducing health disparities,
  • engaging patients and families,
  • improving care coordination,
  • improving population and public health, and
  • ensuring adequate privacy and security protections for personal health information.

Naturally, my interest is in the last item on the list, and within this post I hope to bring more clarity to a small subset of what is clearly becoming the newest “hot item” of the healthcare industry. Based on the “Meaningful Use” matrix created by the HIT (Health IT) Policy Committee, here are the security and privacy goals that need to be reached within the next year and a half:

2011 Objectives:

  • Compliance with HIPAA Privacy and Security Rules and state laws
  • Compliance with fair data sharing practices set forth in the Nationwide Privacy and Security Framework

2011 Measures:

  • Full compliance with HIPAA Privacy and Security Rules
  • An entity under investigation for a HIPAA privacy or security violation cannot achieve meaningful use until the entity is cleared by the investigating authority
  • Conduct or update a security risk assessment and implement security updates as necessary

What the above means is that healthcare companies need to conduct (or update an existing) security risk assessment, and implement the appropriate controls to meet HIPAA requirements. However, since conducting risk assessments is technically a part of HIPAA / HITECH compliance, the requirements could be further simplified to say that by the end of 2011, companies need to be HIPAA compliant. One thing that companies really need to address is making sure that HIPAA compliance goes beyond EMR (Electronic Medical Record) applications, and includes the litany of small applications and medical devices that process, store, or transmit PHI. In order to ensure and demonstrate a comprehensive and complete state of compliance, healthcare providers need to make sure that risk assessments take into account all applications and medical devices, and provide clear supporting documentation of implemented controls and regulatory compliance. For additional information, I have provided future 2013 and 2015 objectives below:

2013 Objectives:

  • Use summarized or de-identified data when reporting data for population health purposes (e.g. public health, quality reporting, and research) where appropriate, so that important information is available with minimal privacy risk

2013 Measures:

  • Provide summarized or de-identified data, when sufficient, to satisfy a data request for population health purposes

2015 Objectives:

  • Provide patients, on request, with an accounting of treatment, payment, and health care operations disclosures
  • Protect sensitive health information to minimize reluctance of patients to seek care because of privacy concerns

2015 Measures:

  • Provide patients, on request, with a timely accounting of disclosures for treatment, payment, and health care operations, in compliance with applicable law
  • Incorporate and utilize technology to segment sensitive data

PCI and the "other wireless"

From the “never been asked that question before” files, I recently had a client who wanted to know about wireless keyboards and whether they are in scope for PCI. There are no PCI requirements that address keyboards or other wireless peripherals (though you could make a case that some keyboards transmit unencrypted cardholder data over ‘open, public networks’). Just to double-check, I reread the Security Standards Council’s Wireless Special Interest Group publication on wireless best practices and PCI; the guidelines are geared towards 802.11 WLANs and specifically exclude Bluetooth. Wireless keyboards are ubiquitous; there is a reasonable chance your organization is using them as the interface to a POS application or virtual terminal. The input could include customer name, expiration, PAN, and CVV. As we typically wouldn’t pay much attention to the peripherals that we type this data on, the question got me thinking about how much we take technology (and its security through obscurity) for granted. I did some exhaustive research on the subject (at least 5 minutes searching Google) and easily found some real-world examples of wireless keyboard sniffing techniques; though not currently a prevalent attack, it is quite feasible to intercept the output from a wireless keyboard without leaving fingerprints behind. Unlike traditional keystroke loggers and screen scrapers, which can often be detected by antimalware applications, wireless attacks are transparent and do not require physical or logical access to target machines. One of the more advanced tools out there is on Remote Exploit’s site, called KeyKeriki. This is a combination of hardware/software that targets the wireless signals from 27 MHz keyboards (there’s a 2.4 GHz version on the way, too) and can capture or output the keystrokes.
The hardware looks simple to build and includes an SDCard for logging; additionally, the software can decrypt some weak XOR-based encryption on the fly (it takes about 40 keystrokes to get enough data to decipher the stream in real time). I don’t want to go too far down the rabbit hole here as you can’t defend against every attack vector (PCI doesn’t address TEMPEST or Van Eck phreaking either), but there are some simple steps that can be taken to reduce the risk of compromise:

  • Include standards for input devices in your list of approved hardware; pick keyboards that use strong cryptography to transmit data.
  • It looks like many of the exploits are written to take advantage of certain vendors’ keyboards (I’m looking at you, Logitech and Microsoft…). Do some research when purchasing wireless keyboards to see if their communications security has already been compromised.
  • If you do have a need for wireless input devices, consider using Bluetooth, which offers some protection through the use of a PIN and a custom SAFER+ block cipher implementation. Check the footnote for a good publication on Bluetooth and security from NIST.
  • Drink plenty of coffee and/or adult beverages of your choice before typing credit card numbers. The resultant twitching/lack of coordination will make it more difficult for a malicious user to extract useful information from your typing. Bonus: it’s fun.
  • Consider using wired keyboards for virtual terminals and POS workstations. Remember those things?
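To see why a weak XOR scheme like the one mentioned above falls over so quickly, consider this toy Python sketch. The plaintext and key are invented for illustration (real keyboards use per-device keys and different framing), but the core idea holds: with a single repeating key byte, one good guess about the plaintext recovers the key for the entire stream, with no access to the target machine.

```python
from collections import Counter

def xor_bytes(data: bytes, key: int) -> bytes:
    """'Encrypt' or 'decrypt' by XORing every byte with one repeating key byte."""
    return bytes(b ^ key for b in data)

# Invented example: a card number typed on a keyboard "protected" by a
# fixed single-byte XOR key.
plaintext = b"4111 1111 1111 1111"
key = 0x5A
ciphertext = xor_bytes(plaintext, key)

# Attacker side: the most frequent ciphertext byte almost certainly maps to
# the most frequent plaintext byte (here, the digit "1"); XORing the two
# recovers the key, and the key unlocks everything typed since.
most_common_ct = Counter(ciphertext).most_common(1)[0][0]
recovered_key = most_common_ct ^ ord("1")
recovered = xor_bytes(ciphertext, recovered_key)
print(recovered)  # prints b'4111 1111 1111 1111'
```

This is also why the “about 40 keystrokes” figure is plausible: the attacker just needs enough traffic for frequency guesses (or a known plaintext like a common word) to line up.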



EMR Security in the Cloud

I recently had the opportunity to review an article by Michael Koploy of Software Advice titled “HHS Data Tells the True Story of HIPAA Violations in the Cloud”. While the article has great data about historical breaches, I think it’s fair to say that not enough time has passed for us to know the real implications of companies moving EMRs into the cloud. HIPAA violations in an IT-centric environment like a cloud or software-as-a-service provider are harder to detect, and general awareness of the rules around HIPAA violations is lower there than in hospitals. In fact, that’s one of the basic problems with people deciding to move their data into the “cloud”; it requires a lot of blind trust. Also, it’s important to keep in mind that a lot of the physical theft reported by hospitals has nothing to do with someone actively seeking to steal PHI, and everything to do with someone losing a box of medical records in a warehouse or making their laptop easy for a thief to steal. Comparing this to electronic hacking of EMRs is like comparing apples and oranges, unless you can prove that all instances of physical theft were motivated by someone looking for the medical records. On the whole, I would suggest that we simply don’t have enough information to make a risk determination for storing EMRs in the cloud or to say whether it’s a good idea. All we have is the “Wall of Shame” from HHS, and that data can be interpreted in many ways to support a variety of conclusions. For example, since 12/15/09, there have been 292 total reported incidents, of which 58 involved a breach caused by a Business Associate (BA). The statistics also show that 50% of incidents reported by BAs involved a physical theft or loss of data, closely followed by the “Unauthorized Access / Disclosure” category at 43%.
This means that approximately 20% of all breaches involved a third party, and in reality the statistics for breaches caused by BAs are not much different from those for healthcare providers. Applying an unfair twist to this statistic, I could argue that a decision not to move data into the cloud would reduce the chances of a breach by 20%, which would be no less accurate than stating that the cloud will reduce the number of reported breaches. The truth is that there is simply not enough historical data, and companies need to exercise great due diligence when they decide to trust a third party with sensitive data.
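For the curious, the back-of-the-envelope math above is easy to check from the HHS figures quoted:

```python
total_incidents = 292   # reported incidents since 12/15/09
ba_incidents = 58       # breaches caused by a Business Associate

ba_share = ba_incidents / total_incidents
print(f"{ba_share:.1%}")  # prints 19.9% -- the "approximately 20%" cited above
```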
