Ryan Wakeham

Ryan Wakeham has been with NetSPI since 2005 and has nearly 20 years of IT and cyber security experience. He holds a graduate degree in Information Security from the Georgia Institute of Technology and has a background that includes vulnerability testing, compliance advisory consulting, and security management program assessment & development. Over his years with NetSPI, Ryan has worked with clients ranging from Fortune 10 organizations and top US financial institutions to multinational retailers and global technology companies. For several years, Ryan led NetSPI’s pentesting team. In his current role, Ryan partners with NetSPI’s clients to better understand their security challenges and develop solutions to meet their needs.
More by Ryan Wakeham

Healthcare’s Guide to Ryuk Ransomware: Advice for Prevention and Remediation

Making its debut in 2018, the Ryuk ransomware strain has wreaked havoc on hundreds of businesses and is responsible for one-third of all ransomware attacks that took place in 2020. Now it is seemingly on a mission to infect healthcare organizations across the country, having already hit five major healthcare providers, disabling access to electronic health records (EHR), disrupting services, and putting sensitive patient data at risk.

The healthcare industry is surely bracing for what the U.S. Cybersecurity & Infrastructure Security Agency (CISA) warns is “an increased and imminent cybercrime threat to U.S. hospitals and healthcare providers.” What can organizations do to preemptively protect themselves? Our recommendations:

  1. Analyze what makes the healthcare industry a key target for ransomware,
  2. Educate yourself to better understand Ryuk and TrickBot, and
  3. Implement proactive cyber security strategies to thwart ransomware attacks and minimize damage from an incident (we’ll get into this more later in this post).

We’ve pulled together this Guide to Ryuk as a resource to help organizations prevent future ransomware attacks and ultimately mitigate its impact on our nation’s healthcare systems.

Why are Healthcare Providers a Target for Ransomware?

Healthcare is widely known as an industry that has historically struggled to find a balance between the continuation of critical services and cyber security. To put this into perspective, physicians can’t stop everything and risk losing a life because their technology locked them out over a recently changed, forgotten password. So security, while critically important in a healthcare environment, is more complex due to the industry’s “always on” operational structure.

We’ve seen a definite uptick in attention paid to security at healthcare organizations, but there’s much work to be done. The task of securing a healthcare system is extremely challenging given its scale and complexity: it consists of many different systems and, with the addition of network-enabled devices, it becomes difficult for administrators to grasp the value of security relative to its costs. In addition, third parties, such as medical device manufacturers, also play a role. Historically, devices in hospitals, clinics, and home-healthcare environments had no security controls, and while there has been more of a focus on “security features” as connectivity (network, Bluetooth, etc.) has increased, most healthcare networks are still rife with devices that have minimal, if any, built-in security capabilities.

Healthcare is by no means the only target industry: any organization can fall victim to ransomware. However, healthcare is a prime target for two reasons:

  • It’s a gold mine for sensitive data, including social security numbers, payment information, birth certificates, addresses, and more. While monetizing such data may require additional effort on the part of cybercriminals, a breach of such data is a major HIPAA compliance violation that can result in heavy fines and could also have a negative impact on patients if their data is leaked.
  • The criticality of the business is as high-risk as it gets. In other words, hospitals cannot afford downtime. Add a public health pandemic to the mix and the criticality increases drastically.

This sense of urgency to get systems back up and running quickly is a central reason why Ryuk is targeting the industry now. Hospitals are more likely to pay a ransom due to the potential consequence downtime can have on the organization and its patients.

Ransomware, Ryuk, and TrickBot

To understand Ryuk, it is important to first understand ransomware attacks at a fundamental level. Ransomware gains access to a system only after a trojan or botnet has found a vulnerable target and established a foothold first. Trojans often gain access through phishing attempts (spam emails) with malicious links or attachments (the payload). If successful, the trojan installs itself on the target’s network and sends a beacon signal to a command and control server controlled by the attacker, which then delivers the ransomware package to the trojan.

In Ryuk’s case, the trojan is TrickBot: a user clicks on a link or attachment in an email, which downloads the TrickBot trojan to the user’s computer. TrickBot then sends a beacon signal to a command and control (C2) server the attacker controls, which in turn sends the Ryuk ransomware package to the victim’s computer.

Trojans can also gain access through other types of malware, unresolved vulnerabilities, and weak configurations, though phishing is the most common attack vector. Further, TrickBot is a banking trojan, so in addition to potentially locking up the network and holding it for ransom, it may also steal information before it installs the ransomware.

How does an organization know if it has fallen victim to ransomware, and more specifically to Ryuk? If Ryuk has successfully infiltrated a system, it will be obvious: it takes over the desktop screen and displays a ransom note with details on how to pay the ransom via bitcoin:

A screenshot of Ryuk’s ransom note.

An early warning sign of a ransomware attack is visible at the technical level: your detective controls, if effective, should alert on Indicators of Compromise (IoCs). Within CISA’s alert, you can find TrickBot IoCs listed along with a table of Ryuk’s MITRE ATT&CK techniques.
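
As a simple illustration of what alerting on file-based IoCs can look like, here is a minimal, hypothetical sketch in Python. The hash value shown is a placeholder, not an actual TrickBot or Ryuk indicator; real values should be taken from the current CISA alert, and a mature EDR or SIEM platform would do this work continuously rather than as a one-off sweep.

import hashlib
from pathlib import Path

# Placeholder value only; real indicators should come from the CISA alert.
KNOWN_BAD_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_for_iocs(root: str) -> list:
    """Return files under 'root' whose SHA-256 matches a known-bad indicator."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            try:
                if sha256_of(path) in KNOWN_BAD_SHA256:
                    hits.append(path)
            except OSError:
                continue  # unreadable file; skip it rather than stop the sweep
    return hits

if __name__ == "__main__":
    for match in scan_for_iocs(r"C:\Users"):  # example starting point on a Windows endpoint
        print(f"IOC match: {match}")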

A threat to the increasing remote workforce: In order to move laterally throughout the network undetected, Ryuk relies heavily on native tools, such as Windows Remote Management and Remote Desktop Protocol (RDP). Read: COVID-19: Evaluating Security Implications of the Decisions Made to Enable a Remote Workforce

Implementing Proactive Cyber Security Strategies to Thwart Ransomware Attacks

We mentioned at the start of this post that one of the things organizations can do preemptively to protect themselves is to put proactive security strategies in place. Security awareness, while important, only goes so far, as humans continue to be the greatest cyber security vulnerability. Consider this: in past NetSPI engagements involving employee phishing simulations, our click-rates, or fail-rates, were down to 8 percent. That is considered a success, but it still leaves an opening for bad actors; it only takes one person interacting with a malicious attachment or link for a ransomware attack to succeed.

Therefore, we support defense-in-depth as the most comprehensive strategy to prevent or contain a malware outbreak. Here are four realistic defense-in-depth tactics to implement in the near- and long-term to prevent and mitigate ransomware threats, such as Ryuk:

  1. Revisit your disaster recovery and business continuity plan. Ensure you have routine and complete backups of all business-critical data at all times and that you have stand-by, or ‘hot,’ business-critical systems and applications (this is usually done via virtual computing). Perform table-top or live disaster recovery drills and validate that ransomware wouldn’t impact the integrity of backups.
  2. Separate critical data from desktops, avoid siloes: Ryuk, like many ransomware strains, attempts to delete backup files. Critical patient care data and systems should be on an entirely separate network from the desktop network. This way, if ransomware targets the desktop network (the most likely scenario), it cannot spread to critical hospital systems. This is a long-term, and challenging, strategy, yet well worth the time and budgetary investment as the risk of critical data loss will always exist.
  3. Take inventory of the controls you have readily available – optimize endpoint controls: Assess your existing controls, notably email filtering and endpoint controls. Boost email filtering processes to ensure spam emails never make it to employee inboxes, mark incoming emails with a banner that notifies the user if the email comes from an external source, and give people the capability to easily report suspected emails. Endpoint controls are essential in identifying and preventing malware. Here are six recommendations for optimizing endpoint controls:
    1. Confirm Local Administrator accounts are strictly locked down and the passwords are complex.
    2. Ensure Domain Administrator and other privileged accounts are not used for routine work, but only for those tasks that require admin access.
    3. Enable endpoint detection and response (EDR) capabilities on all laptops and desktops.
    4. Ensure that every asset that can accommodate anti-malware has it installed, including servers.
    5. Apply all security patches for all software on all devices.
    6. Disable all RDP protocol access from the Internet to any perimeter or internal network asset (no exceptions); a quick reachability check is sketched after this list.
  4. Test your detective controls, network, and workstations:
    1. Detective control testing with adversarial simulation: Engage in a purple team exercise to determine if your detective controls are working as designed. Are you able to detect and respond to malicious activity on your network?
    2. Host-based penetration testing: Audit the build of your workstations to validate that the user does have least privilege and can only perform business tasks that are appropriate for that individual’s position.
    3. Internal network penetration testing: Identify high impact vulnerabilities found in systems, web applications, Active Directory configurations, network protocol configurations, and password management policies. Internal network penetration tests also often include network segmentation testing to determine if the controls isolating your crown jewels are sufficient.
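
The sketch below is a minimal, hypothetical illustration of the last endpoint recommendation above: it simply attempts a TCP connection to the RDP port on a list of perimeter addresses you supply. The addresses are placeholders, and an external network scan or penetration test remains the authoritative way to verify exposure.

import socket

# Placeholder addresses; replace with your own perimeter ranges.
PERIMETER_HOSTS = ["203.0.113.10", "203.0.113.11"]
RDP_PORT = 3389

def rdp_reachable(host: str, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the RDP port succeeds."""
    try:
        with socket.create_connection((host, RDP_PORT), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in PERIMETER_HOSTS:
        status = "EXPOSED" if rdp_reachable(host) else "closed/filtered"
        print(f"{host}:{RDP_PORT} -> {status}")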

Finally, organizations that fall victim to ransomware have three options to regain control of their systems and data.

  • Best option: Put disaster recovery and business continuity plans in motion to restore systems. Also, perform an analysis to determine the successful attack vector and remediate associated vulnerabilities.
  • Not advised: Pay the ransom. It is a quick way to get your systems back up and running, but it is not advised. There is no guarantee that your systems will be unlocked (in fact, the “decryptor” you are offered may itself be malware), you are in effect funding adversarial activities, and it’s likely they will target your organization again.
  • Rare cases: Crack the encryption. While this is sometimes possible with immature ransomware groups, it is rarely successful: encryption schemes have become more advanced, and finding a weakness takes valuable time.

For those that have yet to experience a ransomware attack, we encourage you to use the recent Ryuk news as a jumping-off point to future-proof your security processes and prepare for the inevitability of a breach. And for those that have, now is the time to reevaluate your security protocols.

The Value of Detective Controls

For as long as I can remember, security professionals have spent the majority of their time focusing on preventative controls. Things like patching processes, configuration management, and vulnerability testing all fall into this category. The attention is sensible, of course; what better way to mitigate risk than to prevent successful attacks in the first place?

However, this attention has been somewhat to the detriment of detective controls (I’m intentionally overlooking corrective controls). With budget and effort concentrated on the preventative, there is little left over for the detective. In recent years, though, we have seen a bit of a paradigm shift; as organizations have begun to accept that they cannot prevent every threat agent, they have also begun to realize the value of detective controls.

Some may argue that most organizations have had detective controls implemented for years and, technically speaking, this is probably true. Intrusion detection and prevention systems (IDS/IPS), log aggregation and review, and managed security services responsible for monitoring and correlating events are nothing new. However, in my experience, these processes and technologies are rarely as effective as advertised (IDS/IPS can easily be made ineffective by the noise of today’s networks, logs are only worth reviewing if you’re collecting the right data points, and correlation and alerting only works if it’s properly configured) and far too many companies expect plug-and-play ease of use.

Detective controls should be designed and implemented to identify malicious activity on both the network and endpoints. Just like preventative controls, detective controls should be layered to the extent possible. A good way to design detective controls is to look at the steps in a typical attack and then implement controls in such a way that the key steps are identified and trigger alerts.

Below is a simplified example of such an approach:

Attack Step | Key Detective Control
Gain access to restricted network (bypass network access control) | Network access control alerts for unauthorized devices
Discover active systems and services | IDS / IPS / WAF / HIPS; activity on canary systems that should never be accessed or logged into
Enumerate vulnerabilities | IDS / IPS / WAF / HIPS; activity on canary systems that should never be accessed or logged into
Test for common and weak passwords | Correlation of endpoint logs (e.g., failed login attempts, account lockouts); login activity on canary accounts that should never be used
Execute local system exploit | Anti-malware; monitoring of anti-malware service state; FIM monitoring security-related GPO and similar
Create accounts in sensitive groups | Audit and alert on changes to membership in local administrator group, domain admin group, and other sensitive local and domain groups
Access sensitive data | Logging all access to sensitive data such as SharePoint, databases, and other data repositories
Exfiltrate sensitive data | Data leakage prevention solution; monitor network traffic for anomalies including failed outbound TCP and UDP connections

This example is not intended to be exhaustive but, rather, is meant to illustrate the diversity of detective controls and the various levels and points at which they can be applied.
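
To make one of the rows above concrete, here is a minimal sketch of the kind of endpoint-log correlation that can detect password guessing. The event format and thresholds are assumptions for illustration (for example, tuples parsed from Windows failed-logon events), not the interface of any particular SIEM product.

from collections import defaultdict
from datetime import datetime, timedelta

FAILED_LOGIN = "failed_login"
THRESHOLD = 10                  # alert after this many failures...
WINDOW = timedelta(minutes=5)   # ...from one source within this window

def detect_password_guessing(events):
    """Yield (source_host, count) whenever failed logins from one host exceed the threshold."""
    failures = defaultdict(list)
    for timestamp, _user, source, event in sorted(events):
        if event != FAILED_LOGIN:
            continue
        recent = [t for t in failures[source] if timestamp - t <= WINDOW]
        recent.append(timestamp)
        failures[source] = recent
        if len(recent) >= THRESHOLD:
            yield source, len(recent)

if __name__ == "__main__":
    now = datetime.now()
    sample = [(now + timedelta(seconds=i), f"user{i % 4}", "10.0.0.5", FAILED_LOGIN)
              for i in range(12)]
    for source, count in detect_password_guessing(sample):
        print(f"ALERT: {count} failed logins from {source} within {WINDOW}")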

While every environment is slightly different, the general rules remain the same: implementing controls to detect attacks at common points will greatly increase the efficacy of detective controls while still sticking within a reasonable budget. The one big caveat in all of this is that, in order to be truly effective, detective controls need to be tuned to the environment; no solution will perform optimally right out of the box. At the end of the day, proper application of detective controls will still cost money and require resources. However, the impact of an attack will be reduced substantially through strong detective controls.

2013 Cyber Threat Forecast Released

The Georgia Tech Information Security Center and Georgia Tech Research Institute recently released their 2013 report on emerging cyber threats. Some of these threats are fairly predictable, such as cloud-based botnets, vulnerabilities in mobile browsers and mobile wallets, and obfuscation of malware in order to avoid detection. However, some areas of focus were a bit more surprising, less in a revelatory sense and more simply because the report specifically called them out.

One of these areas is supply chain insecurity. It is hardly news that counterfeit equipment can make its way into corporate and even government supply chains but, in an effort to combat the threat, the United States has redoubled efforts to warn of foreign-produced technology hardware (in particular, Chinese-made networking equipment). However, the report notes that detecting counterfeit and compromised hardware is a difficult undertaking, particularly for companies that are already under the gun to minimize costs in a down economy. Despite the expense, though, the danger of compromise of intellectual property or even critical infrastructure is very real and should not be ignored.

Another interesting focus of the report is healthcare security. The HITECH Act, which was enacted in 2009, provided large incentives for healthcare organizations to move to electronic systems of medical records management. While the intent of this push was to improve interoperability and the level of patient care across the industry, a side effect is a risk to patient data. The report notes what anyone who has dealt with information security in the healthcare world already knows: that healthcare is a challenging industry to secure. The fact that the report calls out threats to health care data emphasizes the significance of the challenges in implementing strong controls without impacting efficiency.

Addressing the threats of information manipulation, insecurity of the supply chain, mobile security, cloud security, malware, and healthcare security, the report is a recommended read for anyone in the information security field.

The full report can be found at: https://www.gtsecuritysummit.com/pdf/2013ThreatsReport.pdf

Thoughts on Web Application Firewalls

I recently attended a talk given by an engineer from a top security product company and, while the talk was quite interesting, something that the engineer said has been bugging me a bit. He basically stated that, as a control, deploying a web application firewall was preferable to actually fixing vulnerable code.

Web application firewalls (WAFs) are great in that they provide an additional layer of defense at the application layer. By filtering requests sent to applications, they are able to block certain types of malicious traffic such as cross-site scripting and SQL injection. WAFs rarely produce false positives, meaning that they won’t accidentally block legitimate traffic, and they can be tuned fairly precisely to particular applications. Additionally, WAFs can filter outbound traffic to act as a sort of data leak prevention solution.

But is installing a WAF preferable to writing secure code? Or, put differently, is having a WAF in place reason enough to disregard secure coding standards and remediation processes? I don’t think so. WAFs, like other security controls, are imperfect and can be bypassed. They require tuning to work properly. They fall victim to the same issues that any other software does: poor design and poor implementation. While a WAF may catch the majority of injection attacks, for example, a skilled attacker can usually craft a request that bypasses application filters (particularly in the common situation where the WAF hasn’t been completely tuned for the application, which can be an extremely time-consuming activity). We have seen this occur quite often in our penetration tests: the WAF filters automated SQL injection attempts executed by our tools but fails to block manually crafted injections.

I’m not saying that organizations shouldn’t deploy web application firewalls. However, rather than using a WAF in place of good secure application development and testing practices, they should consider WAFs as an additional layer in their strategy of defense-in-depth and continue to address application security flaws with code changes and security validation testing.
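
To illustrate what fixing the code actually means in the SQL injection case, here is a minimal sketch using Python's standard sqlite3 module; the table and column names are hypothetical. The point is that a parameterized query removes the flaw itself, while a WAF only tries to filter attempts to exploit it.

import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # Vulnerable: attacker-controlled input is concatenated into the SQL text,
    # which is exactly the kind of request a WAF tries (imperfectly) to filter.
    query = "SELECT id, username FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_fixed(conn: sqlite3.Connection, username: str):
    # Fixed: the input is passed as a bound parameter, so it is treated as data,
    # not as SQL, regardless of what a WAF does or does not catch.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.execute("INSERT INTO users (username) VALUES ('alice'), ('bob')")
    malicious = "x' OR '1'='1"
    print(find_user_vulnerable(conn, malicious))  # injection returns every row
    print(find_user_fixed(conn, malicious))       # parameterized query returns nothing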

Web Application Testing: What is the right amount?

It is becoming more common these days (though still not common enough) for organizations to have regular vulnerability scans conducted against Internet-facing, and sometimes internal, systems and devices. This is certainly a step in the right direction, as monthly scans against the network and service layer are an important control that can be used to detect missing patches or weak configurations, thereby prompting vulnerability remediation. Perhaps unsurprisingly, some application security vendors are applying this same principle to web application testing, insisting that scanning a single application numerous times throughout the year is the best way to ensure the security of the application and related components. Does this approach make sense?

In a handful of cases, where ongoing development is taking place and the production version of the application codebase is updated on a frequent basis, it may make sense to scan the application prior to releasing changes (i.e., as part of a pre-deployment security check). Additionally, if an organization is constantly deploying simple websites, such as marketing “brochureware” sites, a simple scan for vulnerabilities may hit the sweet spot in the budget without negatively impacting the enterprise’s risk profile. However, in most cases, repeated scanning of complex applications is a waste of time and money that offers little value beyond identifying the more basic application weaknesses.

Large modern web applications are intricate pieces of software. Such applications are typically updated based on a defined release cycle rather than on a continual basis and, when they are updated, functionality changes can be substantial. Even in cases where updates are relatively small, the impact of these changes on the application’s security posture can still be significant. For these reasons, repeated scans for low-level vulnerabilities simply do not make sense. Rather, comprehensive testing to identify application-specific weaknesses, such as errors related to business logic, is necessary to truly protect against real-world threats.

Your doctor might tell you to check your blood pressure every few weeks, but he would never lead you to believe that doing so is a sufficient way to monitor your health. Rather, less frequent but still regular comprehensive checkups are recommended. So why would you trust an application security vendor that tells you that quantity can make up for a lack of quality? There may be a place in the world for these types of vendors, but you shouldn’t be entrusting the security of your critical applications to mere testing for low-hanging fruit. A comprehensive approach that combines multiple automated tools with expert manual testing is the best way to ensure that your web applications are truly secure.

Enterprise Vulnerability Management

Earlier this month, at the Secure360 conference in St. Paul, Seth Peter (NetSPI’s CTO) and I gave a presentation on enterprise vulnerability management. This talk came out of a number of discussions about formal vulnerability management programs that we have had both internally at NetSPI and with outside individuals and organizations. While many companies have large and relatively mature security programs, it would not be an exaggeration to say that very few have formalized the process of actively managing the vulnerabilities in their environments.

To accompany our presentation, I created a short white paper addressing the subject. In it, I briefly address the need for such a formal program, summarize a four phase approach, and offer some tips and suggestions on making vulnerability management work in your organization. When reading it, keep in mind that the approach that I outline is by no means the only way of successfully taking on the challenge of managing your security weaknesses. However, due to our unique vantage point as both technical testers and trusted program advisors for many organizations across various industries, we have been able to pull together an approach that incorporates the key elements that will allow this sort of program to be successful.

Download Ryan's white paper: An Approach to Enterprise Vulnerability Management

Pentesting the Cloud

Several months ago, I attended an industry conference where there was much buzz about “The Cloud.” A couple of the talks purportedly addressed penetration testing in the Cloud and the difficulties that could be encountered in this unique environment; I attended enthusiastically, hoping to glean some insight that I could bring back to NetSPI and help to improve our pentesting services. As it turns out, I was sorely disappointed.

In these talks, most time was spent noting that Cloud environments are shared and, in executing a pentest against such an environment, there was a substantially higher risk of impacting other (non-target) environments.
For example, if testing a web application hosted by a software-as-a-service (SaaS) provider, one could run the risk of knocking over the application and/or the shared infrastructure and causing a denial of service condition for other customers of the provider in addition to the target application instance. This is certainly a fair concern but it is hardly a revelation. In fact, if your pentesting company doesn’t have a comprehensive risk management plan in place that aims to minimize this sort of event, I recommend looking elsewhere. Also, the speakers noted that getting permission from the Cloud provider to execute such a test can be extremely difficult. This is no doubt due to the previously mentioned risks, as well as the fact that service providers are typically rather hesitant to reveal their true security posture to their customers. (It should be noted that some Cloud providers, such as Amazon, have very reasonable policies on the use of security assessment tools and services.)

In any case, what I really wanted to know was this: is there anything fundamentally different about testing against a Cloud-based environment as compared with testing against a more traditional environment? After much discussion with others in the industry, I have concluded that there really isn’t. Regardless of the scope of testing (e.g., application, system, network), the underlying technology is basically the same in either situation. In a Cloud environment, some of the components may be virtualized or shared but, from a security standpoint, the same controls still apply. A set of servers and networking devices virtualized and hosted in the Cloud can be tested in the same manner as a physical infrastructure. Sure, there may be a desire to also test the underlying virtualization technology but, with regard to the assets (e.g., databases, web servers, domain controllers), there is no difference. Testing the virtualization and infrastructure platforms (e.g., Amazon Web Services, vBlock, OpenStack) is also no different; these are simply servers, devices, and applications with network-facing services and interfaces. All of these systems and devices, whether virtual or not, require patching, strong configuration, and secure code.

In the end, it seems that penetration testing against Cloud environments is not fundamentally different from testing more conventional environments. The same controls need to exist and these controls can be omitted or misapplied, thereby creating vulnerabilities. Without a doubt, there are additional components that may need to be considered and tested. Yet, at the end of the day, the same tried and true application, system, and network testing methodologies can be used to test in the Cloud.

The Annual Struggle with Assessing Risk

In my experience, one of the security management processes that causes the most confusion among security stakeholders is the periodic risk assessment.

Most major information security frameworks, such as ISO/IEC 27002:2005, the PCI Data Security Standard, and HIPAA, include annual or periodic risk assessments, and yet a surprising number of organizations struggle with putting together a risk assessment process. Fundamentally, the concept of a risk assessment is straightforward: identify the risks to your organization (within some defined scope) and determine how to treat those risks. The devil, of course, is in the details. There are a number of formal risk assessment methodologies that can be followed, such as NIST SP 800-30, OCTAVE, and the risk management framework defined in ISO/IEC 27005, and it makes sense for mature organizations to implement one of these methodologies. Additionally, risk assessments at larger companies will often feed into an Audit Plan. If you’re responsible for conducting a risk assessment for a smaller or less mature company, though, the thought of performing and documenting a risk assessment may leave you scratching your head.

The first step in any risk assessment is to identify the scope of the assessment, be it departments, business processes, systems and applications, or devices. For example, a risk assessment at a financial services company may focus on a particular business unit and the regulated data and systems used by that group. Next, the threats to these workflows, systems, or assets should be identified; threats can include both intentional and unintentional acts and may be electronic or physical. Hackers, power outages, and hurricanes are all possible threats to consider. In some cases, controls for addressing the vulnerabilities associated with these threats may already exist, so they should be taken into account. Quantifying the impact to the organization should one of these threats be realized is the next step in the risk assessment process. In many cases, impact is measured in financial terms because dollars are pretty tangible to most people, but financial impact is not always the only concern. Finally, this potential impact should be combined with the likelihood that such an event will occur in order to quantify the overall risk. Some organizations will be satisfied with quantifying risk as high, medium, or low, but a more granular approach can certainly be taken.

When it comes to treating risks, the options are fairly well understood. An organization can apply appropriate controls to reduce the risk, avoid the risk by altering business processes or technology such that the risk no longer applies, share the risk with a third party through contracts (including insurance), or knowingly and objectively determine to accept the risk.

At the conclusion of all of the risk assessment and treatment activities, some sort of documentation needs to be created. This doesn’t need to be a lengthy formal report but, whatever the form, it should summarize the scope of the assessment, the identified threats and risks, and the risk treatment decisions. Results from the Audit Plan can also assist in this documentation process.

Most organizations already assess and treat risks operationally, and wrapping a formal process around the analysis and decision-making involved should not be overwhelming. Of course, different organizations may need more rigor in their risk assessment process based on internal or external requirements, and this is not meant to be a one-size-fits-all guide to risk assessment.
Rather, the approach outlined above should provide some guidance, and hopefully inspire some confidence in security stakeholders who are just starting down the road of formal risk management.

Why I Hate The Cloud

The Cloud is one of the "new big things" in IT and security and I hate it. To be clear, I don't actually hate the concept of The Cloud (I'll get to that in a minute) but, rather, I hate the term.

According to Wikipedia, cloud computing is "the delivery of computing as a service rather than a product, whereby shared resources, software, and information are provided to computers and other devices as a utility (like the electricity grid) over a network (typically the Internet)." What this pretty much amounts to is outsourcing. There are a lot of reasons that people "move to The Cloud" and I'm not really going to dive into them all; suffice it to say that it comes down to cost, and the efficiencies that Cloud providers are able to leverage typically allow them to operate at lower cost than most organizations would spend accomplishing the same task. Who doesn't like better efficiency and cost savings?

But what is cloud computing really? Some people use the term to refer to infrastructure as a service (IaaS), or an environment that is sitting on someone else's servers; typically, the environment is virtualized and dynamically scalable (remember that whole efficiency / cost savings thing). A good example of an IaaS provider is Amazon Web Services. Software as a service (SaaS) is also a common and not particularly new concept that leverages the concept of The Cloud. There are literally thousands of SaaS providers, but some of the better known ones are Salesforce.com and Google Apps. Platform as a service (PaaS) is a less well-known term, but the concept is familiar: PaaS provides the building blocks for hosted custom applications. Often, PaaS and IaaS solutions are integrated. An example of a PaaS provider is Force.com. The Private Cloud is also generating some buzz with packages such as Vblock and OpenStack; really, these are just virtualized infrastructures.

I'm currently at the Hacker Halted 2011 conference in Miami (a fledgling but well-organized event) and one of the presentation tracks is dedicated to The Cloud. There have been some good presentations, but both presenters and audience members have struggled a bit with defining what they mean by The Cloud. One presenter stated that "if virtualization is involved, it is usually considered to be a cloud." If we're already calling it virtualization, why do we also need to call it The Cloud?

To be fair, The Cloud is an appropriate term in some ways because it represents the nebulous boundaries of modern IT environments.
No longer is an organization's IT infrastructure bound by company-owned walls; it is an amalgamation of company-managed and third-party-managed services, networks, and applications. Even so, The Cloud is too much of a vague marketing term for my taste. Rather than lumping every Internet-based service together in a generic bucket, we should say what we really mean. Achieving good security and compliance is already difficult within traditional corporate environments. Let's at least all agree to speak the same language.

Mobile Devices in Corporate Environments

Mobile computing technology is hardly a recent phenomenon but, with the influx of mobile devices such as smartphones and tablet computers into the workplace, the specter of malicious activity being initiated by or through these devices looms large. However, generally speaking, an information security toolkit that includes appropriate controls for addressing threats presented by corporate laptops should also be able to deal with company-owned smartphones. My recommendations for mitigating the risk of mobile devices in your environment include the following:
  • Establish a Strong Policy
  • Educate Users
  • Implement Local Access Controls
  • Minimize the Mobile Footprint
  • Restrict Connectivity
  • Restrict Web Application Functionality
  • Assess Mobile Applications
  • Encrypt, Encrypt, Encrypt
  • Enable Remote Wipe Functionality
  • Implement a Mobile Device Management System
  • Provide Support for Employee-Owned Devices 

For more detailed information, take a look at the white paper that I just put together on the subject: Dealing with Mobile Devices in a Corporate Environment.

Do You Know Where Your Data Is?

When it comes to the application of security controls, many organizations have gotten pretty good at selecting and implementing technologies that create defense-in-depth. Network segmentation, authorization and access control, and vulnerability management are all fairly well understood and generally practiced by companies these days. However, many organizations are still at risk because they can't answer a simple question: where is sensitive data? It should go without saying, but if a company can't identify the locations where sensitive data is stored, processed, or transmitted, it will have a pretty hard time implementing controls that will effectively protect that data.

Two effective methods for identifying sensitive data repositories and transmission channels are data flow mapping and automated data discovery. A comprehensive and accurate approach will include both. Note, of course, that both methods assume that you have already defined what types of data are considered sensitive; if this is not the case, you will need to go through a data classification exercise and create a data classification policy.

Data flow mapping is exactly what it sounds like: a table-top exercise to identify how sensitive data enters the organization and where it goes once inside. Data flow mapping is typically pretty interview-centric, as you will need to really dig into the business processes that manipulate, move, and store sensitive data. Depending on the size and complexity of your organization, data flow mapping could either be very straightforward or extremely complicated. However, it is the only reliable way to determine the actual path that sensitive data takes through your organization. As you conduct your interviews, remember that you want to identify all the ways that sensitive data is input into a business process, where it is stored and processed, who handles it and how, and what the outputs are. Make sure that you get multiple perspectives on individual business processes as validation, and also match up the outputs of one process with the inputs of another. It is not uncommon for employees in one business unit or area to have misunderstandings about other processes; your goal is to piece together the entire puzzle.

Automated data discovery does a poor job of shedding light on the mechanisms that move sensitive data around an organization, but it can be very valuable for validating assumptions, identifying exceptions, and helping to reveal the true size of certain data repositories.
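
To make the idea concrete, here is a minimal, hypothetical sketch of the kind of pattern-based scan that discovery tools perform. The two patterns are deliberately simplified (a U.S. Social Security number format and a rough card-number shape) and will produce false positives that a real tool would tune out.

import re
from pathlib import Path

# Simplified example patterns; real discovery tools ship broader, tuned signatures.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def discover_sensitive_files(root: str, extensions=(".txt", ".csv", ".log")):
    """Return {file: [pattern names]} for files whose contents match a sensitive-data pattern."""
    findings = {}
    for path in Path(root).rglob("*"):
        if not (path.is_file() and path.suffix.lower() in extensions):
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip it
        hits = [name for name, pattern in PATTERNS.items() if pattern.search(text)]
        if hits:
            findings[path] = hits
    return findings

if __name__ == "__main__":
    for path, hits in discover_sensitive_files(".").items():  # start small, e.g. a single share
        print(f"{path}: {', '.join(hits)}")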

There are a number of free and commercial tools that can be used for data discovery (one of the most popular free tools is Cornell University's Spider tool), but they all aim to accomplish the same objective: provide you with a list of files and repositories that contain data that you have defined as sensitive. Good places to start your discovery include network shares, databases, portal applications, home drives on both servers and workstations, and email inboxes. Be aware that most discovery tools will require that you provide or select a regular expression that matches the format of particular data fields. However, some more advanced commercial tools also provide signature learning features.

Ultimately, your data discovery exercise should result in a much improved understanding of how sensitive data passes through your organization and where it is stored. The next step is to determine how to apply controls based on where data is stored, processed, and transmitted. Also, where necessary, business processes may need to be adjusted in order to consolidate data and meet data protection requirements. While identification of sensitive data is only the first phase in a process that will result in better data security and reduced risk, it is an absolutely critical step if application of security controls is to be effective.

Hacking Twitter for Fun (and Profit?)

Just last week, on the eve of the tenth anniversary of the 9/11 attacks, NBC News' Twitter account was hacked by a group calling itself The Script Kiddies. Posing as NBC News, The Script Kiddies falsely tweeted that an airliner had been hijacked and flown into the Ground Zero site in New York City. This is the second such attack perpetrated by The Script Kiddies, the first being a July 4 hack of the Fox News Twitter account claiming that President Obama had been assassinated. In both cases, the spurious posts were quickly removed by Twitter and the news agencies.

Traditionally, hackers have chosen their targets in order to either profit financially or make a political statement (never mind the advanced persistent threats represented by nation states and powerful criminal organizations); recent publicized attacks demonstrate this. Fame and reputation have always been motivators for hackers but, in recent years, business-savvy blackhats seem to have outnumbered the jesters of the digital underground by a wide margin. Twitter hacks are hardly uncommon and generally seem to be done more for amusement than for any truly nefarious purpose, but they mostly slip by unnoticed aside from a handful of celebrity victims and entertainment reporters. As far as I can tell, the NBC and Fox attacks are no different in terms of motivation, but the side effects are much more serious.

Cyber terrorism has been a buzz topic for some time now and, while false news reports may rank relatively low on the impact scale, it is probably only a matter of time before this sort of event occurs specifically in order to incite panic in the general population. That would be a real paradigm shift but I don't know that we're there yet. These attacks appear to serve no obvious purpose beyond self-promotion.

After a number of questions on the topic, I have decided to follow up on my earlier security metrics blog with a bit more information regarding metrics development. The diagram below outlines the metrics development process.

[Diagram: Security Metrics Development Process]

1. Identify Controls to Measure – This is pretty self-explanatory: which controls do you want to evaluate? In very mature security programs, metrics may be gathered on numerous controls across multiple control areas. However, if you’re just starting out, you likely would not realize significant value from such detailed metrics at this time and would benefit more from monitoring key indicators of security health such as security spending, vulnerability management status, and patch compliance. In general, controls to be evaluated should be mapped from external and internal requirements. In this fashion, the impact of controls on compliance can be determined once metrics become available. Conversely, metrics can be designed to measure controls that target key compliance requirements. For this blog, I will focus on metrics related to vulnerability management.

2. Identify Available Sources of Data – The goal of this step is to identify all viable sources of data, each of which may be used on its own or combined with others to create more comprehensive security metrics. Sources of data will vary based on what sort of controls are being measured; however, it is important that they be reliable and objective. Some examples of metrics that can be gathered from a single source (in this case, a vulnerability management tool) are listed in the table below, followed by a short sketch of how such counts might be pulled from a scanner export.

Name | Type
Number of systems scanned within a time period | Effort
Number of new vulnerabilities discovered within a time period | Effort
Number of new vulnerabilities remediated within a time period | Result
Number of new systems discovered within a time period | Environment
List of current vulnerabilities w/ ages (days) | Result
List of current exploitable vulnerabilities w/ ages (days) | Result
Number of OS vulnerabilities | Environment
Number of third-party vulnerabilities | Environment
List of configured networks | Effort
Total number of systems discovered / configured | Effort
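
As a rough illustration, the sketch below derives a few of the counts above from a hypothetical CSV export of a vulnerability scanner. The file name and column names (host, first_seen, remediated) are assumptions, since every tool exports in its own format.

```python
# Minimal sketch: deriving a few of the metrics above from a vulnerability
# scanner's CSV export. The file name and column names are hypothetical;
# adjust them to match whatever your scanning tool actually produces.
import csv
from datetime import datetime, timedelta

PERIOD_DAYS = 30
cutoff = datetime.now() - timedelta(days=PERIOD_DAYS)

hosts_seen = set()
new_vulns = 0
remediated_in_period = 0
open_vuln_ages = []  # ages (days) of currently open vulnerabilities

with open("scan_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        hosts_seen.add(row["host"])
        first_seen = datetime.strptime(row["first_seen"], "%Y-%m-%d")
        if first_seen >= cutoff:
            new_vulns += 1
        if row["remediated"]:  # non-empty remediation date
            remediated = datetime.strptime(row["remediated"], "%Y-%m-%d")
            if remediated >= cutoff:
                remediated_in_period += 1
        else:
            open_vuln_ages.append((datetime.now() - first_seen).days)

print(f"Systems in latest export (Effort): {len(hosts_seen)}")
print(f"New vulnerabilities discovered in period (Effort): {new_vulns}")
print(f"Vulnerabilities remediated in period (Result): {remediated_in_period}")
print(f"Current vulnerability ages in days (Result): {sorted(open_vuln_ages, reverse=True)}")
```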

3. Define Security Metrics – Decide which metrics accurately represent a measurement of the controls implemented by your organization. Begin by developing low-level metrics, then combine them into high-level metrics that provide deeper insight.

a. Low-Level Metrics
Low-level metrics are measurements of aspects of information security within a single area. Each metric may not be sufficient to convey a complete picture on its own but can be used in context with other metric types. Each metric should attempt to adhere to the following criteria:

  • Consistently measured
  • Inexpensive to gather
  • Expressed as a cardinal number or a percentage
  • Expressed as a unit of measure

Low-level metrics should be identified to focus on key aspects of the information security program. The goal should be to identify as many measurements as possible without concern for how comprehensive each measurement may be. The following are examples of low-level metrics:

  • Hosts not patched (Result)
  • Hosts fully patched (Result)
  • Number of patches applied (Effort)
  • Unapplied patches (Environment)
  • Time to apply critical patch (Result)
  • Time to apply non-critical patch (Result)
  • New patches available (Environment)
  • Hours spent patching (Effort)
  • Hosts scanned (Effort)

b. High-Level Metrics
High-level metrics should be composed of multiple low-level metrics in order to provide a comprehensive measure of effectiveness. The following are examples of such metrics, and a brief sketch of how a few of them can be calculated appears after the list:

  • Unapplied patch ratio
  • Unapplied critical patch trend
  • Unapplied non-critical patch trend
  • Applied / new patch ratio
  • Hosts patched / not patched ratio
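
To make the relationship concrete, here is a minimal sketch of how a few of these high-level ratios might be computed from low-level patch metrics. The input numbers are placeholders rather than output from any particular tool.

```python
# Minimal sketch of combining low-level patch metrics into high-level ratios.
# In practice these counts would come from your patch management tooling.
low_level = {
    "hosts_total": 1200,
    "hosts_fully_patched": 950,     # Result
    "patches_applied": 4300,        # Effort
    "new_patches_available": 310,   # Environment
    "unapplied_patches": 180,       # Environment
}

# Unapplied patch ratio: share of known patches still waiting to be applied.
unapplied_patch_ratio = low_level["unapplied_patches"] / (
    low_level["unapplied_patches"] + low_level["patches_applied"]
)

# Applied / new patch ratio: are we keeping up with the vendor release rate?
applied_new_ratio = low_level["patches_applied"] / low_level["new_patches_available"]

# Hosts patched / not patched ratio.
hosts_not_patched = low_level["hosts_total"] - low_level["hosts_fully_patched"]
patched_ratio = low_level["hosts_fully_patched"] / max(hosts_not_patched, 1)

print(f"Unapplied patch ratio: {unapplied_patch_ratio:.2%}")
print(f"Applied / new patch ratio: {applied_new_ratio:.2f}")
print(f"Hosts patched / not patched ratio: {patched_ratio:.2f}")
```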

4. Collect Baseline Metric Data – A timeframe should be established that is sufficient for creating an initial snapshot, as well as basic trending. It is important that you allow enough time to collect a good baseline sample, as data can easily be skewed as you’re working out little bugs in the collection process.

5. Review Effectiveness of Metrics – Review the baseline data collected and determine whether it is effective in representing the success factors of specific controls. If some metrics fall short of the overall goal, another iteration of the metric development process will be necessary.

6. Publish Security Metrics – Begin publishing security metrics in accordance with pre-defined criteria.

As noted above, well-designed metrics must be objective and based upon reliable data. However, data sources are not always fully understood when they are selected and, as a result, metrics may end up being less effective than originally intended.

After metrics have been implemented, a suitable timeframe for collecting baseline data should be allowed. Once this has been done, metrics should be reevaluated to determine whether they provide the requisite information. Some metrics may fall short in this regard; if so, another iteration of the metric development process will be necessary. Ultimately, metrics are intended to provide insight into the performance of a security program and its controls. If the chosen metrics do not do this effectively, or do not answer the questions that your organization is asking, then they must be redesigned before being published and used for decision making.

[post_title] => Metrics: Your Security Yardstick - Part 2 - Defining Metrics [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => metrics-your-security-yardstick-part-2-defining-metrics [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:06:09 [post_modified_gmt] => 2021-04-13 00:06:09 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1233 [menu_order] => 831 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [13] => WP_Post Object ( [ID] => 1234 [post_author] => 16 [post_date] => 2011-08-30 07:00:25 [post_date_gmt] => 2011-08-30 07:00:25 [post_content] => Mention metrics to most anyone in the information security industries and eyes will immediately glaze over. While there are a few adventurous souls out there, most security professionals balk at the prospect of trying to measure their security program. However, the ability to do just that is an essential part of a security program at any maturity level and a way to move your program forward. Metrics allow security stakeholders to answer important questions about the performance of the security program and, ultimately, make educated decisions regarding the program. Being able to communicate the effectiveness of your program to business leaders is an important part of a program's identity and maturity within an organization. Generally speaking, metrics can be categorized into three types based on what they signify:
  • Effort metrics measure the amount of effort expended on security. For example:
    • training hours,
    • time spent patching systems, and
    • number of systems scanned for vulnerabilities
  • Result metrics attempt to measure the results of security efforts. Examples of result metrics include:
    • the number of days since the last data breach,
    • the number of unpatched vulnerabilities, and
    • the number of adverse audit or assessment findings.
  • Environment metrics measure the environment in which security efforts take place. These metrics provide context for the other two metrics. For example:
    • the number of known vulnerabilities
    • the number of systems
By compiling metrics, it is possible to measure the effect of your organization's investment in security . For example, you may track the number of vulnerabilities that have been known to exist in your environment for longer than 30 days. After making improvements to your patch and configuration management processes, you should see a positive impact represented by a decrease in the vulnerability count. Similarly, budget cuts could negatively impact your security program; by demonstrating this negative impact through metrics, you will (hopefully) have a better chance at increasing your security budget in the next budgeting cycle. Remember that every organization's security program is different and, as such, a metrics package is not one-size-fits-all. In particular, more mature programs can supply more detailed and advanced metrics; however, less mature programs can still benefit from simple metrics. No matter where your security program is on the maturity curve, metrics can give you better insight into your program's strengths and weaknesses and, as such, will allow you to make better management decisions. [post_title] => Metrics: Your Information Security Yardstick [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => metrics-your-information-security-yardstick [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:06:08 [post_modified_gmt] => 2021-04-13 00:06:08 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1234 [menu_order] => 835 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [14] => WP_Post Object ( [ID] => 1242 [post_author] => 16 [post_date] => 2011-06-22 07:00:25 [post_date_gmt] => 2011-06-22 07:00:25 [post_content] => A recent spate of data breaches has highlighted the fact that, even in 2011, many organizations still have not managed to grasp the concepts of vulnerability management.  It is certainly the case that no organization, no matter what sort of controls are in place, can eliminate risk completely.  However, the apparent ease with which some of these attacks have been carried out makes one wonder what sort of vulnerability management practices, if any, these organizations had in place.  At the most basic, vulnerability management should be a cyclical process that involves the identification, analysis, reporting, and remediation of weaknesses in system configurations and software.  Before vulnerability identification can begin, though, it is necessary for an organization to have an understanding of what assets (i.e., systems and data) exist and where they live.  This, of course, is itself an ongoing process, as few IT environments are very static. Vulnerability identification is, believe it or not, often one of the easier steps in the vulnerability management process.  There are numerous options available to assist organizations in identifying weaknesses in their environments, including automated scanners, patch and configuration management utilities, and assessment service providers.  Regardless of the sources of this information, it should provide insight into vulnerabilities in system and software configuration, patch levels, and application code. Analysis of the identified vulnerabilities should include prioritization based on risk to the organization's assets.  Information such as CVSS score can be helpful in this regard.  
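
As a simple illustration of that kind of prioritization, the sketch below sorts a set of findings by CVSS base score weighted by a rough asset-criticality factor. The findings, scores, and criticality values are invented for the example rather than drawn from any particular assessment.

```python
# Minimal sketch: prioritizing identified vulnerabilities by CVSS base score
# weighted by a simple asset-criticality factor. All values are placeholders.
findings = [
    {"host": "web01", "vuln": "CVE-2011-XXXX", "cvss": 9.3, "criticality": 3},
    {"host": "hr-db", "vuln": "CVE-2011-YYYY", "cvss": 6.8, "criticality": 5},
    {"host": "kiosk7", "vuln": "CVE-2011-ZZZZ", "cvss": 7.5, "criticality": 1},
]

def risk_score(finding):
    # Higher CVSS and higher asset criticality both raise the priority.
    return finding["cvss"] * finding["criticality"]

for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{risk_score(f):5.1f}  {f['host']:8}  {f['vuln']}  (CVSS {f['cvss']})")
```
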
Also, categorization based on vulnerability type, infrastructure area, and business unit may also be beneficial, particularly in larger or decentralized organizations. Internal reporting of vulnerabilities sounds simple but, in larger IT groups, can actually be surprisingly complex.  A process needs to exist to ensure that vulnerability information is distributed to the proper individuals in a timely and organized fashion.  Ticketing systems can be a useful tool in this regard, particularly if your vulnerability assessment, analysis, or correlation tool can integrate directly with the system. Finally, identified vulnerabilities should be remediated.  In the ideal scenario, this simply means applying a patch or changing a configuration setting.  However, it is not uncommon for this to be a sticking point.  Whether due to patch incompatibility or a business need for particular features or configurations to exist, it is all too common for identified vulnerabilities to go unmitigated for long periods of time.  This is somewhat understandable and sometimes risks do need to be accepted but, often, the business impact of a vulnerability is not fully understood by the risk owners.  Ultimately, risk owners need to realize the implications on the entire environment if they choose to allow vulnerabilities to persist (see Scott Sutherland and Antti Rantasaari's "When Databases Attack" presentation for some great examples). Ultimately, vulnerability management does have some complexities but it is an absolutely critical component of a comprehensive and effective information security program.  Security can never be a sure thing and threats will always exist.  However, in this day and age, there is really no excuse for organizations to lack a formal approach to identifying and treating vulnerabilities, particularly in their Internet-facing infrastructure. [post_title] => What Happened to Vulnerability Management? [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => what-happened-to-vulnerability-management [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:06:17 [post_modified_gmt] => 2021-04-13 00:06:17 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1242 [menu_order] => 843 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [15] => WP_Post Object ( [ID] => 1247 [post_author] => 16 [post_date] => 2011-04-15 07:00:25 [post_date_gmt] => 2011-04-15 07:00:25 [post_content] => In a March 25 front page article in the Minneapolis / St. Paul Business Journal, it was revealed that sensitive records including employee Social Security Numbers, payroll information, and medical records,  from a long-defunct tech company were inadvertently auctioned off along with the filing cabinets they were stored in.  In this instance, the story ends happily; the founder and former CEO of the company was able to purchase the records from the buyer and secure them.  However, it's not hard to see how this could have turned out for the worse. So what should be done?  In this case, the CEO was advised by his lawyers to retain certain files and so he simply held on to all of them.  In all likelihood, he didn't know exactly what needed to be kept and so he kept everything.  While that may have seemed like a good idea at the time, not destroying  all but the key documents ended up coming back to bite the CEO a full decade after the company shut its doors.  
Due to the fact that the data was outside the CEO's control for a number of weeks, he is required by certain state laws to notify individuals that the security of their personal data had been breached. While it may seem unnecessary on the surface, especially in this age of ever cheaper digital storage, a good data classification, retention, and destruction policy is of paramount importance to every organization.  While your organization hopefully won't go out of business any time soon, such a policy also helps to secure sensitive information during the course of regular business operations.  The cost of a data breach is ever-increasing, both in terms of reputation and dollars, and no organization profits from losing sensitive personal data on its customers or employees.  By properly classifying your sensitive data, you can apply controls more appropriately and efficiently.  Also, always remember the rule of thumb for storage of sensitive data: if you no longer need it, get rid of it! [post_title] => The Danger of Retaining Data [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => the-danger-of-retaining-data [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:06:15 [post_modified_gmt] => 2021-04-13 00:06:15 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1247 [menu_order] => 848 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [16] => WP_Post Object ( [ID] => 1249 [post_author] => 16 [post_date] => 2011-03-24 07:00:25 [post_date_gmt] => 2011-03-24 07:00:25 [post_content] => A couple of months ago, I attended the Nuclear Energy Institute's Cyber Security Implementation Workshop in Baltimore.  The keynote speaker was Brian Snow, who is a well-known security expert with substantial experience at the National Security Agency.  Early in his talk, Snow highlighted the fact that security practitioners do not operate in a benign environment, where threats are static, but, rather must work to continually counter malice. A good analogy that Snow provided deals with transportation.  When you need a vehicle for use in a benign environment, you use a car; when you need a vehicle for use in a malicious environment, you use a tank, which is purpose-built for such an environment. A security program needs to provide the defensive capabilities of a tank.  However, few security practitioners have the luxury of building the program from scratch and, instead, must attempt to retrofit tank-level security into an IT environment that was designed to be less complex, less expensive, and simpler to maintain, much like a car is.  Due to this fact, security practitioners tend to run into numerous roadblocks when adding layers of controls.  While it may not be feasible to build a complete approach to information security from the ground up, it is important for IT management to recognize that a proactive strategy of incorporating defensive controls will lead to the most robust and effective information security program possible.  Additionally, security practitioners may encounter resistance to applying particular controls.  In this case, a risk-based approach is advised.  Will forgoing this control leave the tank substantially weakened or is the additional protection afforded by the control something that can truly be done without? 
Ultimately, a team implementing a corporate security program likely has more obstacles to overcome than the builder of a tank due to the fact that there is far more room for different interpretations of risk in the boardroom than on the battlefield.  Even so, it is important to put each and every decision about controls in context; as the reliance on information systems expands even further into industries such as healthcare, energy, and defense, lives truly may depend on it. [post_title] => Building Tanks [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => building-tanks [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:06:02 [post_modified_gmt] => 2021-04-13 00:06:02 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1249 [menu_order] => 850 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [17] => WP_Post Object ( [ID] => 1260 [post_author] => 16 [post_date] => 2010-10-18 07:00:25 [post_date_gmt] => 2010-10-18 07:00:25 [post_content] => In continuing with my series addressing common compliance hurdles relating to Payment Card Industry (PCI) requirements, I would like to turn to the topic of data retention.  Surprisingly, I have found that many organizations struggle with data retention - not just managing and archiving credit card data but even defining appropriate data retention policies.  There seems to be a lot of misinformation or at least misunderstanding out there so hopefully this will help clear things up a bit. Requirement 3.1 of the PCI Data Security Standard (DSS) states that cardholder data storage must be minimized and a policy defining the appropriate retention length must be defined.  There may be legal or regulatory requirements relating to data retention that must be adhered to.  However, in most circumstances, documents containing full primary account numbers (PANs) need not be retained past 90 days, which is typically when chargebacks or disputes occur.  If there is no business need to store cardholder data (for instance, so that a third party can supply access to transaction data and provide a mechanism for disputes and chargebacks), merchants should consider purging or redacting PANs stored both electronically and in hardcopy.  Simply put: if you don't need it, get rid of it.  Also, keep in mind that masked or truncated PANs are not considered cardholder data and, as such, are not subject to the PCI DSS and can be stored indefinitely.  Therefore, redacting all but the first six and last four digits of the PAN is a common method that organizations use to reduce or eliminate cardholder data while still maintaining the ability to reference a particular card or transaction, if necessary. One common misconception deals with retaining financial records for audits.  Many organizations end up keeping paid invoices that include complete credit card numbers for several years.  However, this is not usually necessary.  Financial auditors are interested in seeing records of revenue and expenses, not individual credit card numbers.  In light of this fact, a simple change in business process can often significantly reduce burden associated with attaining and maintaining compliance. When it comes to a remediation strategy, the first step in complying with Requirement 3.1 is to define an appropriate data retention policy that is based on business, legal, and regulatory requirements.  
This will, of course, vary from organization to organization.  However, keep in mind that most business models do not require the retention of full PANs for very long after a transaction.  Once an appropriate retention period has been determined, an official policy should be documented.  Be sure to include any specific legal or business reasons for the selected retention period, as well as a formal method for disposing of both electronic and hardcopy cardholder data once that period has been reached.  Finally, implement the policy and purge historical data that exceeds the newly defined maximum retention period.  Remember that, in most cases, redacting PANs by blacking them out with a marker, cutting off the credit card section of a form and shredding it, or truncating data in a database is an effective way to reduce the cardholder data that is retained, reduce the risk of a credit card data breach, and meet the intent of PCI requirements. [post_title] => Common Compliance Hurdles Part 3: Data Retention [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => common-compliance-hurdles-part-3-data-retention [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:06:03 [post_modified_gmt] => 2021-04-13 00:06:03 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1260 [menu_order] => 860 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [18] => WP_Post Object ( [ID] => 1266 [post_author] => 16 [post_date] => 2010-09-02 07:00:25 [post_date_gmt] => 2010-09-02 07:00:25 [post_content] => Much fuss has been made over security concerns relating to cloud computing.  Just as cloud computing proponents tout the efficiency, scalability, and ease-of-use that come from leveraging the capabilities of the cloud, detractors highlight the dangers inherent with corporate data being stored in an unknown location, on an unknown number of systems, and protected by unvalidated controls.  At the end of the day both groups have fair points, but it is important to recognize that cloud computing is here to stay and despite the unknowns, many organizations will look to the cloud as a way to increase efficiency and reduce costs.  How, then, can organizations ensure that critical data and processes are protected while still realizing the benefits of cloud computing? It is critical that companies determine an appropriate approach to, and use for, the cloud.  In some cases, certain organizations may have data that is considered so confidential or critical that cost savings are not worth the risk of data compromise or loss.  In order to identify such circumstances, a risk analysis that enumerates threats, vulnerabilities, and potential impacts should be performed.  A key criterion for proper assessment of risk is the accurate classification of data; ensure that data is classified appropriately so that particularly sensitive or critical information is not accidently put in the cloud.  Additionally, compliance requirements should be examined to ensure that any changes do not negatively impact compliance status.  Once the risk analysis has been completed, certain mitigating controls may need to be implemented to account for unknowns in the cloud infrastructure.  For example, controls that would typically reside in lower tiers may need to be implemented in applications.  After implementing and assessing these modifications, an initial migration to the cloud can begin.  
Keep in mind, though, that it is also important to develop a process for assessing new applications and data before they are moved to the cloud, as well as periodically reassessing systems and information that were previously deemed cloud-appropriate; this is fundamental to ensuring that cloud-related risks are considered on an on-going basis. While there are certainly challenges facing organizations looking to leverage cloud computing technologies, these challenges are not insurmountable.  With a well-devised approach, including assessment and mitigation of cloud-specific risks, organizations can realize the benefits of cloud computing while still protecting critical data assets. [post_title] => Security in the Cloud [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => security-in-the-cloud [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:05:28 [post_modified_gmt] => 2021-04-13 00:05:28 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1266 [menu_order] => 865 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [19] => WP_Post Object ( [ID] => 1272 [post_author] => 16 [post_date] => 2010-06-23 07:00:25 [post_date_gmt] => 2010-06-23 07:00:25 [post_content] => In this, the second installment in a series discussing common PCI compliance challenges, I address non-compliant payment applications.  Such applications are nearly ubiquitous in the cardholder data environments of smaller merchants (and even some of the larger ones).  However, merchants that store cardholder data are rarely able to attain a compliant state when using an application that has not been validated as compliant with PCI standards (either the older Payment Application Best Practices, or PABP, and the newer Payment Application Data Security Standard, or PA-DSS).  In particular, compliance with much of PCI DSS Requirement 3, which deals with protection of stored cardholder data, is difficult or even impossible for these businesses, in many cases due to their payment application(s).  Typically, such merchants have three options: migrate to a validated solution, work with the vendor of the current application and encourage them to have the application validated, or implement the required controls themselves.  Because these applications pose such a high risk to cardholder data, Visa has mandated that all merchants will be required to use validated payment applications, with a deadline established for July 2010 in the U.S. and Canada and July 2012 for other regions.  In the meantime, there are several things a merchant can do to meet DSS requirements, the Visa mandate notwithstanding. In most situations, the best solution is to change the payment application and implement a solution that has already been validated.  While this can be a daunting task, especially for larger or distributed environments, it is typically the solution with the most immediate payoff in terms of compliance, especially considering the impending Visa requirement.  Chances are good that, if the current application has not been validated, there is a similar application that has been.  By migrating to an application that has already been validated, and configuring systems to the standards outlined in the application's implementation guide, merchants find compliance with DSS requirements much easier. 
If moving to a different solution is simply not feasible, though, merchants should pressure payment application vendors to attain PA-DSS compliance by having the application validated by a PA-QSA firm (see https://www.pcisecuritystandards.org/pdfs/pci_qsa_list.pdf).  Such a process can take quite a bit of time, which would delay the merchant's ability to validate PCI DSS compliance.  Also, modifying the application can also be expensive for the vendor.  However, vendors of these applications should keep in mind that their customer base needs to attain and maintain compliance and it will become increasingly difficult to market and sell non-compliant payment solutions (see https://usa.visa.com/merchants/risk_management/cisp_payment_applications.html). If a payment application vendor is unable to meet compliance with PA-DSS requirements, merchants may be able to implement a number of controls to meet the PCI DSS requirements.  Exactly which controls need to be applied will vary depending on the application, but some key areas can include eliminating storage of sensitive authentication data, obfuscating stored primary account numbers using encryption, hashing, truncation, etc., PCI data discovery on payment systems and servers, increased access controls and logging outside of the application, and implementing key management processes and technologies.  Essentially, the payment application would be treated as an internally-developed application, and the merchant would be responsible for ensuring all controls are in place to protect cardholder data.  In many cases, the cost of implementing these controls will outweigh the cost of changing payment applications.  When considering these options, merchants should ask an additional question: "Do I actually need to store cardholder data?"  In many cases, smaller merchants can be forced to complete the lengthy Self-Assessment Questionnaire D due to the fact that a single application or point-of-sale stores credit card data.  These merchants should note that slightly altering their business processes and replacing such a system with one that does not store cardholder data can pay dividends, as compliance requirements would be drastically reduced. [post_title] => Common Compliance Hurdles Part 2: Non-compliant Applications [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => common-compliance-hurdles-part-2-non-compliant-applications [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:06:03 [post_modified_gmt] => 2021-04-13 00:06:03 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1272 [menu_order] => 870 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [20] => WP_Post Object ( [ID] => 1283 [post_author] => 16 [post_date] => 2010-03-30 07:00:25 [post_date_gmt] => 2010-03-30 07:00:25 [post_content] => Looking over the findings of the last few dozen PCI gap assessments that NetSPI has performed, I am struck by the fact that today, well into version 1.2 of the Payment Card Industry Data Security Standard (PCI DSS, or just DSS), one of our most common findings remains increased scope due to lack of network segmentation.  For example, we have seen numerous merchants with relatively simple payment processing environments that have a very large and complicated PCI scope and must bear the associated costs (e.g., develop and apply hardened system configurations, pay for external scanning services, etc.).  
In some cases, the merchant may not even have a real business need to store cardholder data (i.e. they could simplify their business processes and complete a Self Assessment Questionnaire C rather than the much longer SAQ D) but, even if they do, the scope of compliance is often far larger than necessary.  Limiting the scope of the systems that are required to meet PCI DSS compliance gives merchants and service providers the best “bang for their buck” in terms of reaching their compliance goals, yet it seems that many merchants struggle with defining and implementing the controls necessary to do just this.  The first step in reducing the PCI scope through segmentation is to determine exactly which systems store, process, or transmit cardholder data.  While this may be very straightforward for some organizations, it may be helpful to create a cardholder data flow diagram for more complex environments.  Once cardholder data systems have been identified, a process of isolation and segmentation can begin.  Ideally, cardholder data systems should be segregated off in a “PCI island” by a stateful firewall; Internet-facing systems should be placed in a separate DMZ segment.  Once these major changes have occurred, locking down and documenting the firewall ruleset, implementing the necessary management processes, and other items detailed in Requirement 1 are much easier to address. Though this process may look simple on paper, it can often involve the rearchitecture of not just the network but also individual systems, as PCI-related applications and functions should be isolated from other business functions (e.g., a database containing a parts inventory along with invoicing and payment information should be separated into individual databases in isolated network zones).  However, through proper segmentation, merchants and service providers can significantly reduce the cost and scope of compliance and need only apply the DSS to systems and devices that store, process, or transmit PCI data. [post_title] => Common Compliance Hurdles Part 1: Increased PCI Scope [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => common-compliance-hurdles-part-1-increased-pci-scope [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:06:02 [post_modified_gmt] => 2021-04-13 00:06:02 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1283 [menu_order] => 878 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [21] => WP_Post Object ( [ID] => 1293 [post_author] => 16 [post_date] => 2009-11-18 07:00:25 [post_date_gmt] => 2009-11-18 07:00:25 [post_content] => In simple terms, IP traceback allows for the reliable identification of the source of IP traffic, despite techniques such as IP spoofing. While there are numerous methods for achieving this goal, they all have one thing in common: not one of these methods has actually been implemented in commercial networking equipment. Maybe its time has finally come. The advantage of such a capability lies in determining the sources of one-way attacks, or attacks that can be conducted using spoofed source IP addresses. Unlike Reverse Path Forwarding, which can prevent address spoofing in limited environments, IP traceback essentially allows packets to be postmarked with the true source IP address. Denial-of-service (DoS) attacks are the most common type of malicious traffic that falls into this category. 
Although they don’t usually get the sort of visibility that they used to, DoS attacks still occur with astonishing frequency. While there are other methods for determining the source of spoofed traffic, they are typically time-consuming and require the involvement of numerous upstream parties. IP traceback could allow a network administrator to determine the source of such malicious traffic. In a grad school paper I wrote a few years ago, I argued that “without the support of major networking equipment vendors or ISPs, and barring a major attack with far-reaching consequences, there is little hope for IP traceback in the near future.” Today, the question is when do we reach the point that the ability to reliably track the source of malicious IP traffic is deemed important enough to demand a feature such as IP traceback? Such an ability is more important than ever. At the same time, there is a question of how effective such a solution would be if it were only partially implemented. Getting ISPs in North America and Europe to implement such a feature is a big enough step, but what practical value would IP traceback have if it were not implemented at the sources of much of the world’s malicious traffic: places like eastern Europe, Russia, China, and North Korea? Despite such a potential limitation, I believe that there is a still a place for IP traceback in our networks. A software-based solution, which would require only firmware or driver updates, would be relatively inexpensive and simple to deploy. At the same time, it would assist network administrators and law enforcement in investigating attacks that use IP spoofing techniques, thereby creating an effective deterrent against such attacks. [post_title] => IP Traceback: Has Its Time Arrived? [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => ip-traceback-has-its-time-arrived [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:05:48 [post_modified_gmt] => 2021-04-13 00:05:48 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1293 [menu_order] => 888 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [22] => WP_Post Object ( [ID] => 1298 [post_author] => 16 [post_date] => 2009-11-09 07:00:25 [post_date_gmt] => 2009-11-09 07:00:25 [post_content] => On November 8, CBS’s “60 Minutes” ran a segment on information security weaknesses called “Sabotaging The System.” This piece highlighted security vulnerabilities in segments of our nation’s critical infrastructure, including banking, power, and national defense. In addition, former and current government officials confirmed that the threats exist; not only are probes and attacks occurring with alarming frequency, but there have been numerous documented instances of successful penetrations into all three of these sectors. The potential impact of such attacks ranges from the theft of a few million dollars to large-scale power outages or compromise of military secrets. In short, our nation is faced with a significant set of risks, and I feel that “60 Minutes” did justice to the severity of the problem. It is clear that the United States has benefited greatly from the interconnection of computer systems but, at the same time, we place ourselves at great risk by leaving these systems unprotected. At the same time, the program was lacking with regard to solutions. 
There is nothing about these vulnerabilities that prevents them from being mitigated; IT security professionals solve similar problems every day. In this case, it is simply the scale of the problem that is most daunting. President Obama recently raised the issue and classified our nation’s critical digital infrastructure as a strategic asset. This is the first step along the lengthy road toward a more secure infrastructure, but it is important in that it allows the power of the federal government to be brought to bear. As it stands today, many of the requirements for both private industry and government are inconsistent, vague, and toothless. In the future, though, we will likely see increased regulation of these (and other) critical sectors. Regulation, though, is only part of the solution, and constriction of industry by over-regulation is a very real concern. By taking the initiative to combat vulnerabilities in their own environments, companies in these sectors can not only reduce the burden that eventual regulation will bring, but they can also demonstrate to regulators and lawmakers that they are taking the risk seriously. While that may be a novel approach for some, there will undoubtedly be benefits to swift action. Rather than waiting for government to force them to do something undesirable, businesses should revisit and re-architect their current approach to information security and risk management: examine the security framework that is used, alter how security is organized at the company, identify critical assets, analyze current controls, and finally mitigate vulnerabilities by implementing additional policies, processes, and technologies. There is no question that this sort of initiative will cost money but, in the long run, it will be money well spent. [post_title] => 60 Minutes on Cyber Security Risks [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => 60-minutes-on-cyber-security-risks [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:06:00 [post_modified_gmt] => 2021-04-13 00:06:00 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1298 [menu_order] => 892 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) ) [post_count] => 23 [current_post] => -1 [before_loop] => 1 [in_the_loop] => [post] => WP_Post Object ( [ID] => 20466 [post_author] => 71 [post_date] => 2020-11-24 07:00:05 [post_date_gmt] => 2020-11-24 07:00:05 [post_content] =>

Making its debut in 2018, the Ryuk ransomware strain has wreaked havoc on hundreds of businesses and is responsible for one-third of all ransomware attacks that took place in 2020. Now it is seemingly on a mission to infect healthcare organizations across the country, having already hit five major healthcare providers, disabling access to electronic health records (EHR), disrupting services, and putting sensitive patient data at risk.

The healthcare industry is surely bracing for what the U.S. Cybersecurity & Infrastructure Security Agency (CISA) warns is “an increased and imminent cybercrime threat to U.S. hospitals and healthcare providers.” What can organizations do to preemptively protect themselves? Our recommendation:

  1. Analyze what makes the healthcare industry a key target for ransomware,
  2. Educate yourself to better understand Ryuk and TrickBot, and
  3. Implement proactive cyber security strategies to thwart ransomware attacks and minimize damage from an incident (we’ll get into this more later in this post).

We’ve pulled together this Guide to Ryuk as a resource to help organizations prevent future ransomware attacks and ultimately mitigate their impact on our nation’s healthcare systems.

Why are Healthcare Providers a Target for Ransomware?

Healthcare is widely known as an industry that has historically struggled to find a balance between the continuation of critical services and cyber security. To put this into perspective, physicians can’t stop everything and risk losing a life because their technology locked them out over a recently changed, forgotten password. So security, while critically important in a healthcare environment, is more complex due to the industry’s “always on” operational structure.

We’ve seen a definite uptick in attention paid to security at healthcare organizations, but there’s much work to be done. The task of securing a healthcare system is extremely challenging given its scale and complexity: it consists of many different systems and, with the addition of network-enabled devices, it becomes difficult for administrators to grasp the value of security relative to its costs. In addition, third parties, such as medical device manufacturers, also play a role. Historically, devices in hospitals, clinics, and home-healthcare environments had no security controls, and while there has been more of a focus on “security features” as connectivity (network, Bluetooth, etc.) has increased, most healthcare networks are still rife with these sorts of devices, which have minimal, if any, built-in security capabilities.

Healthcare is by no means the only target industry: any organization can fall victim to ransomware. Healthcare is, however, a prime target for two reasons:

  • It’s a gold mine for sensitive data, including social security numbers, payment information, birth certificates, addresses, and more. While monetizing such data may require additional effort on the part of cybercriminals, breaches of such data are major HIPAA compliance violations that can result in heavy fines and could also have a negative impact on patients if their data is leaked.
  • The criticality of the business is as high as it gets. In other words, hospitals cannot afford downtime. Add a public health pandemic to the mix and the criticality increases drastically.

This sense of urgency to get systems back up and running quickly is a central reason why Ryuk is targeting the industry now. Hospitals are more likely to pay a ransom due to the potential consequence downtime can have on the organization and its patients.

Ransomware, Ryuk, and TrickBot:

To understand Ryuk, it is important to first understand ransomware attacks at a fundamental level. Ransomware reaches a system only after a trojan or ‘botnet’ has found a vulnerable target and gained access first. Trojans often gain that access through phishing attempts (spam emails) with malicious links or attachments (the payload). If successful, the trojan installs itself on the target’s network and sends a beacon signal to a command and control server controlled by the attacker, which then sends the ransomware package to the trojan.

In Ryuk’s case, the trojan is TrickBot: a user clicks on a link or attachment in an email, which downloads the TrickBot trojan to the user’s computer. TrickBot then sends a beacon signal to a command and control (C2) server the attacker controls, which in turn sends the Ryuk ransomware package to the victim’s computer.

Trojans can also gain access through other types of malware, unresolved vulnerabilities, and weak configurations, though phishing is the most common attack vector. Further, TrickBot is a banking trojan, so in addition to potentially locking up the network and holding it for ransom, it may also steal information before it installs the ransomware.
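
Because phishing is the dominant delivery vector, even a lightweight review of inbound attachments can help. The sketch below uses Python’s standard email module to flag attachment types commonly used to deliver trojan loaders in a saved message; the message file name and the extension list are illustrative assumptions, not a complete filter.

```python
# Minimal sketch: flagging the attachment types most often used to deliver
# trojan loaders in a saved email message (.eml). The file name and the
# extension blocklist are illustrative assumptions only.
import email
from email import policy

RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".docm", ".xlsm", ".zip"}

with open("suspect_message.eml", "rb") as f:
    msg = email.message_from_binary_file(f, policy=policy.default)

for part in msg.walk():
    filename = part.get_filename()
    if filename and any(filename.lower().endswith(ext) for ext in RISKY_EXTENSIONS):
        print(f"Risky attachment: {filename} ({part.get_content_type()})")
```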

How does an organization know if it has fallen victim to ransomware, and more specifically to Ryuk? A successful Ryuk infection is obvious: the ransomware takes over the desktop screen and displays a ransom note with details on how to pay the ransom via bitcoin:

[Image: A screenshot of Ryuk’s ransom note.]

An early warning sign of a ransomware attack, at the technical level, is an alert from your detective controls on Indicators of Compromise (IoCs), assuming those controls are effective. Within CISA’s alert, you can find TrickBot IoCs listed along with a table of Ryuk’s MITRE ATT&CK techniques.
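
One simple way to put those published indicators to work is to sweep existing logs for matches. The sketch below compares a DNS or proxy log against a local IoC list; both file names and their line-oriented formats are assumptions, and the IoC file would be populated from the indicators in CISA’s alert.

```python
# Minimal sketch: checking a proxy or DNS log against a list of network
# indicators of compromise (IoCs). File names and formats are assumptions.
with open("iocs.txt") as f:
    iocs = {line.strip().lower() for line in f if line.strip()}

hits = []
with open("dns_queries.log") as f:
    for line in f:
        # Assume queried domains / destination IPs appear as whitespace-separated fields.
        for field in line.lower().split():
            if field.strip(".,") in iocs:
                hits.append(line.rstrip())
                break

print(f"{len(hits)} log entries matched known IoCs")
for h in hits[:20]:
    print(h)
```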

A threat to the increasing remote workforce: In order to move laterally throughout the network undetected, Ryuk relies heavily on native tools, such as Windows Remote Management and Remote Desktop Protocol (RDP). Read: COVID-19: Evaluating Security Implications of the Decisions Made to Enable a Remote Workforce
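
As a quick self-check on RDP exposure, the sketch below probes a list of your own perimeter addresses for a listening TCP 3389 service. The host list is a placeholder (documentation addresses), and this should only be run against systems you are authorized to test.

```python
# Minimal sketch: checking whether hosts on your own perimeter expose RDP
# (TCP 3389) to the network you are scanning from.
import socket

HOSTS = ["203.0.113.10", "203.0.113.11"]  # placeholder documentation addresses
RDP_PORT = 3389

for host in HOSTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        exposed = s.connect_ex((host, RDP_PORT)) == 0
    print(f"{host}: RDP {'EXPOSED' if exposed else 'not reachable'}")
```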

Implementing Proactive Cyber Security Strategies to Thwart Ransomware Attacks

We mentioned at the start of this post that one of the things organizations can do preemptively to protect themselves is to put proactive security strategies in place. While important, security awareness only goes so far, as humans continue to be the greatest cyber security vulnerability. Consider this: in past NetSPI employee phishing simulation engagements, our click-rates, or fail-rates, were down to 8 percent. That is considered a success, but it still leaves an opening for bad actors; it only takes one person interacting with a malicious attachment or link for a ransomware attack to succeed.

Therefore, we support defense-in-depth as the most comprehensive strategy to prevent or contain a malware outbreak. Here are four realistic defense-in-depth tactics to implement in the near- and long-term to prevent and mitigate ransomware threats, such as Ryuk:

  1. Revisit your disaster recovery and business continuity plan. Ensure you have routine and complete backups of all business-critical data at all times and that you have stand-by, or ‘hot,’ business-critical systems and applications (this is usually done via virtual computing). Perform table-top or live disaster recovery drills and validate that ransomware wouldn’t impact the integrity of backups (a minimal integrity-check sketch follows this list).
  2. Separate critical data from desktops, avoid siloes: Ryuk, like many ransomware strains, attempts to delete backup files. Critical patient care data and systems should be on an entirely separate network from the desktops. This way, if ransomware targets the desktop network (the most likely scenario), it cannot spread to critical hospital systems. This is a long-term, and challenging, strategy, yet well worth the time and budgetary investment, as the risk of critical data loss will always exist.
  3. Take inventory of the controls you have readily available – optimize endpoint controls: Assess your existing controls, notably email filtering and endpoint controls. Boost email filtering processes to ensure spam emails never make it to employee inboxes, mark incoming emails with a banner that notifies the user when a message comes from an external source, and give people the capability to easily report suspected phishing emails. Endpoint controls are essential in identifying and preventing malware. Here are several recommendations for optimizing endpoint controls:
    1. Confirm Local Administrator accounts are strictly locked down and the passwords are complex. Ensure Domain Administrator and other privileged accounts are not used for routine work, but only for those tasks that require admin access.
    2. Enable endpoint detection and response (EDR) capabilities on all laptops and desktops.
    3. Ensure that every asset that can accommodate anti-malware has it installed, including servers.
    4. Apply all security patches for all software on all devices.
    5. Disable all RDP protocol access from the Internet to any perimeter or internal network asset (no exceptions).
  4. Test your detective controls, network, and workstations:
    1. Detective control testing with adversarial simulation: Engage in a purple team exercise to determine if your detective controls are working as designed. Are you able to detect and respond to malicious activity on your network?
    2. Host-based penetration testing: Audit the build of your workstations to validate that the user does have least privilege and can only perform business tasks that are appropriate for that individual’s position.
    3. Internal network penetration testing: Identify high impact vulnerabilities found in systems, web applications, Active Directory configurations, network protocol configurations, and password management policies. Internal network penetration tests also often include network segmentation testing to determine if the controls isolating your crown jewels are sufficient.
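
Referenced from tactic 1 above, here is a minimal sketch of one way to validate backup integrity: build a SHA-256 manifest after each trusted backup job and re-verify it from a system the backups cannot write to. The paths are placeholders, and this illustrates the idea rather than replacing your backup product’s own verification.

```python
# Minimal sketch: building and re-checking a SHA-256 manifest for a backup
# directory so that silent modification or encryption of backup files shows
# up as a hash mismatch. Paths are placeholders.
import hashlib
import json
from pathlib import Path

BACKUP_DIR = Path("/mnt/backups/critical")
MANIFEST = Path("/var/secure/backup_manifest.json")

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest() -> None:
    # Record a hash for every file currently in the backup directory.
    manifest = {str(p): sha256(p) for p in BACKUP_DIR.rglob("*") if p.is_file()}
    MANIFEST.write_text(json.dumps(manifest, indent=2))

def verify_manifest() -> None:
    # Re-hash each recorded file and report anything missing or altered.
    manifest = json.loads(MANIFEST.read_text())
    for path, expected in manifest.items():
        actual = sha256(Path(path)) if Path(path).is_file() else "MISSING"
        if actual != expected:
            print(f"INTEGRITY FAILURE: {path}")

if __name__ == "__main__":
    verify_manifest()  # run build_manifest() after each trusted backup job
```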

Finally, organizations that do fall victim to ransomware have three options to regain control of their systems and data.

  • Best option: Put disaster recovery and business continuity plans in motion to restore systems. Also, perform an analysis to determine the successful attack vector and remediate associated vulnerabilities.
  • Not advised: Pay the ransom. It may seem like the quickest way to get your systems back up and running, but there is no guarantee that your systems will actually be unlocked (the offer itself may be part of the scam), you are in effect funding adversarial activities, and it is likely they will target your organization again.
  • Rare cases: Crack the encryption key. While occasionally possible with immature ransomware groups, this is rarely successful; encryption implementations have become more robust, and attempting to break them consumes valuable recovery time.

For those that have yet to experience a ransomware attack, we encourage you to use the recent Ryuk news as a jumping-off point to future-proof your security processes and prepare for the inevitability of a breach. And for those that have, now is the time to reevaluate your security protocols.

[post_title] => Healthcare’s Guide to Ryuk Ransomware: Advice for Prevention and Remediation [post_excerpt] => Making its debut in 2018, the Ryuk ransomware strand has wreaked havoc on hundreds of businesses and is responsible for one-third of all ransomware attacks that took place in 2020. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => healthcares-guide-to-ryuk-ransomware-advice-for-prevention-and-remediation [to_ping] => [pinged] => [post_modified] => 2021-05-04 17:06:19 [post_modified_gmt] => 2021-05-04 17:06:19 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=20466 [menu_order] => 454 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [comment_count] => 0 [current_comment] => -1 [found_posts] => 23 [max_num_pages] => 0 [max_num_comment_pages] => 0 [is_single] => [is_preview] => [is_page] => [is_archive] => [is_date] => [is_year] => [is_month] => [is_day] => [is_time] => [is_author] => [is_category] => [is_tag] => [is_tax] => [is_search] => [is_feed] => [is_comment_feed] => [is_trackback] => [is_home] => 1 [is_privacy_policy] => [is_404] => [is_embed] => [is_paged] => [is_admin] => [is_attachment] => [is_singular] => [is_robots] => [is_favicon] => [is_posts_page] => [is_post_type_archive] => [query_vars_hash:WP_Query:private] => 8ae4b0e26fddae1f41316d7f9adc3754 [query_vars_changed:WP_Query:private] => [thumbnails_cached] => [allow_query_attachment_by_filename:protected] => [stopwords:WP_Query:private] => [compat_fields:WP_Query:private] => Array ( [0] => query_vars_hash [1] => query_vars_changed ) [compat_methods:WP_Query:private] => Array ( [0] => init_query_flags [1] => parse_tax_query ) )
