Yan Kravchenko

Yan has performed and managed numerous security assessment and IT audit projects across the education, government, healthcare, manufacturing, and agriculture sectors, with a particular recent focus on information security in healthcare. He has over 14 years of consulting experience in information technology and information security, specializing in security program development and management, security assessments, and IT audits.
More by Yan Kravchenko
An Introduction to the Open Software Assurance Maturity Model (OpenSAMM)
December 22, 2014

Information security used to be all about networks and protecting the network perimeter. Today, however, applications are the new battleground for the protection of digital assets. While the concept of software security has been around for a long time, the evolution of mobile technologies and the universal accessibility of applications are forcing organizations to improve the maturity of their application security practices. To help with this initiative, many organizations have developed their own methodologies. Some of these are too complex and difficult to implement; others are proprietary and expensive. One framework, however, stands out from the crowd.

The Open Software Assurance Maturity Model (OpenSAMM) was developed by OWASP back in 2009 and is currently undergoing many updates. Like all OWASP projects, it is free and has been adopted by many organizations around the world. OpenSAMM is comprehensive, covers all aspects of application security, and still allows each application to be evaluated in under one hour. On the whole, I would recommend that any organization that develops its own software visit the OpenSAMM website and try it out.

To get started, you need to compile a list of applications and, for each application, the subject-matter experts who can answer questions about how it was developed. Once that list is compiled, review the "Assessment Interview Template" and the "Assessment Worksheet," both available for download from the OpenSAMM website. These documents break OpenSAMM down into a series of interview questions, making it easier to collect and normalize information. Once answers have been gathered, OpenSAMM maturity scores can be calculated.

The model breaks application security maturity down into four "business functions":
  • Governance: The way application security is managed in the organization
  • Construction: The way applications are built
  • Verification: The way security of the application is tested
  • Deployment: How the application is deployed and supported in production
Additionally, each business function breaks down into three "security practices". For example, "Verification" splits up into:
  • Design Review: Activities such as attack surface analysis
  • Code Review: Activities such as peer and 3rd party code reviews
  • Security Testing: Activities such as penetration testing
Once all the questions have been answered, each "security practice" receives a score ranging from 0 to 3. This score represents the maturity of that practice and provides a snapshot of the security practices around each application. Understanding what each metric means to the organization requires additional analysis. Applications developed in certain high-security industries or environments may require the maximum maturity level in each "security practice," but this is not necessary for most organizations. OpenSAMM provides sample "security profiles" that show suggested maturity ratings for organizations in different industries, but these should be viewed as recommendations only. Each organization should perform its own risk analysis and determine its own acceptable maturity levels for different "security practices" and different applications. To better understand the maturity ratings and objectives, OpenSAMM provides additional guidance in each area, including suggested security metrics. All of this information should be evaluated collectively in order to establish the right maturity level objectives for different applications.

The OpenSAMM framework is quick to deploy, provides actionable recommendations for improving application security, and does not bury you in risk management terminology. I recommend that everyone responsible for managing application security check it out.
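To make the scoring mechanics concrete, here is a minimal scorecard sketch. The business functions and practice names follow the SAMM 1.0 model, but the ratings dictionary holds hypothetical interview results, and the per-function averaging is just one convenient way to summarize them, not something the model prescribes.

```python
# Minimal OpenSAMM scorecard sketch. The ratings below are hypothetical
# interview results, not real assessment data.

MODEL = {
    "Governance":   ["Strategy & Metrics", "Policy & Compliance", "Education & Guidance"],
    "Construction": ["Threat Assessment", "Security Requirements", "Secure Architecture"],
    "Verification": ["Design Review", "Code Review", "Security Testing"],
    "Deployment":   ["Vulnerability Management", "Environment Hardening", "Operational Enablement"],
}

# Hypothetical per-practice maturity ratings (0-3) gathered from interviews.
ratings = {
    "Design Review": 1,
    "Code Review": 2,
    "Security Testing": 3,
    # ...remaining practices would be filled in from the assessment worksheet
}

def scorecard(model: dict, ratings: dict) -> None:
    """Print each practice's 0-3 maturity score and a per-function average."""
    for function, practices in model.items():
        scores = [ratings.get(p, 0) for p in practices]
        print(f"{function}: average {sum(scores) / len(scores):.1f}")
        for practice, score in zip(practices, scores):
            print(f"  {practice}: {score}/3")

scorecard(MODEL, ratings)
```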

Mobile Application Threat Modeling
November 24, 2014

Mobile Application Landscape

As mobile applications have become an integral part of everyday life, it's hard to believe just how young the platform really is. Considering that seven years ago nobody had ever heard of the iPhone or Android, and that the first iPad came on the market a little over four years ago, it's truly remarkable how quickly the technology was adopted by individuals and enterprises alike. People trust the mobile platform with their most intimate information, such as credit cards and passwords, and enterprises have followed closely, allowing the use of mobile devices for every business function where the mobile form factor is practical.

The Value of Understanding the Overall Threat Posture

Looking over the results of several of our penetration tests of mobile applications, I realized that while many people understand and focus on the limitations associated with mobile, few take the time to think about the similarities and security implications of the technology. While limitations such as the virtual keyboard, limited screen size, and operating system constraints take center stage, security is often an afterthought, rife with weaknesses that would be plainly visible during a threat assessment. Threat assessments are often-ignored and misunderstood exercises that should be performed during the initial design phases of the development lifecycle. However, if you (like so many) were not given time for a threat assessment when you were designing the application, it's not too late now. To help guide the process, I have provided a framework with which an application can be evaluated. This should help facilitate further discussion and remediation of design flaws that could make the application exploitable by malicious users.

Mobile Infrastructure

While there are many examples of stand-alone applications, I will focus here on mobile applications that communicate with one or more servers. Where a server component is not a factor, you can simply focus on the risks associated with the mobile application itself, which presents a much simpler problem. For most, however, mobile applications require communication with a back-end system, in most instances over an untrusted network. Therefore, when considering threats associated with a mobile application, it's essential to remember the server components as well. While mobile applications by themselves present a threat landscape similar to "typical" thick applications, the web services providing the data feeds to the applications add a threat profile similar to a web application. Additionally, since the two components of the mobile platform are interdependent, it's also important to consider any business logic flaws (such as segregation of duties, regulatory compliance, and compliance with corporate policies) that may impact one or both components. With that understanding, let's look at each one in more detail. [Figure: mobile application threat model]

Mobile Application Threats

When evaluating the security model for a mobile application, consider that at its core it's a thick application, with the addition of limitations unique to the mobile platform:

Platform-specific vulnerabilities

While BlackBerry and Microsoft mobile operating systems remain in use, for the purpose of illustrating the threat model I will focus on Android and iOS, as they are more common and represent the significant majority of mobile applications. When assessing mobile threats, it's important to recognize the strengths and weaknesses of the different platforms. For example, the ease with which Android applications can be reverse engineered means the source code should not contain any information that may be considered sensitive, and stronger methods should be used for protecting any certificates, passwords, or connection strings. At the same time, the ease with which the iOS Keychain can be cracked to extract sensitive information may necessitate avoiding reliance on the device for secure storage of anything that could give a malicious user access. Before beginning to develop code for a platform, review the common weaknesses of its operating system and factor them into the design of mitigating controls.

Single-user Platform

Yes, it's obvious and self-explanatory, but when designing mobile software it's important to remember that these are most commonly intended to be single-user devices. While the same user is "intended" to be in possession of the device at all times, the portable nature of the platform makes this an unreasonable expectation, requiring some form of user-acceptable re-authentication each time the device is turned on. For applications that are intended for multiple users, such as kiosks or other shared uses, additional precautions must be taken to ensure the system performs the necessary user identification, authentication, and authorization, and that it completely clears the session once the user is done (regardless of whether the user logged out or the session timed out).

Always Untrusted

There is really no other way to say it: assume anything typed into the application, as well as the OS services providing security mechanisms, can be compromised. While each mobile OS tries to provide application developers with controls that offer several levels of security, jailbreaking or rooting the OS makes these controls unreliable. Even the use of Mobile Device Management (MDM) technologies may not provide 100% assurance, though excellent products keep hitting the marketplace. And while the use of fuzzers against a mobile platform may seem far-fetched, remember that these applications can be loaded into virtual mobile device environments, where automation can identify any available input validation flaw very quickly.
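As a rough illustration of how cheaply that automation comes, here is a minimal fuzzing loop that throws random strings at a back-end endpoint and flags server errors. The endpoint URL and parameter name are hypothetical, and a real campaign would use a proper fuzzer with a mutation corpus rather than purely random payloads.

```python
# Minimal fuzzing sketch: send random strings to a (hypothetical) back-end
# endpoint and flag responses that suggest an unhandled input-validation error.
import random
import string
import urllib.error
import urllib.parse
import urllib.request

ENDPOINT = "https://api.example.com/lookup"  # hypothetical service URL

def random_payload(max_len: int = 200) -> str:
    return "".join(random.choice(string.printable)
                   for _ in range(random.randint(1, max_len)))

for i in range(100):
    payload = random_payload()
    data = urllib.parse.urlencode({"q": payload}).encode()
    try:
        with urllib.request.urlopen(ENDPOINT, data=data, timeout=5) as resp:
            pass  # 2xx/3xx: input was handled (or silently swallowed)
    except urllib.error.HTTPError as e:
        if e.code >= 500:  # server error: likely validation gap worth a look
            print(f"[!] payload #{i} triggered HTTP {e.code}: {payload!r}")
    except urllib.error.URLError:
        pass  # network noise; a real fuzzer would log and retry
```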

Limited UI & Screen

For controls to be effective, they must be easy to use. While the best way to authenticate the user is to prompt for a username and password each time the application is opened, this may not be desirable if the application is frequently opened and closed.

Background Processing

Different operating systems handle this a little differently, but most allow applications to continue running even after they are closed by the user or the device is turned off. Clearing sensitive data from memory and storage between uses, and ensuring the data is not accessible to any other application, may require the development of additional mitigating controls.

Storage Encryption Challenges

Ideally, devices should never store any sensitive information. While onboard encryption and protection of encryption keys get better with each operating system release, key management challenges on a mobile platform become significant when you consider the possibility of a jailbroken or rooted device.
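One common mitigation is to keep the data-encryption key out of persistent storage entirely, holding it only in memory for the session. The sketch below illustrates the idea with the `cryptography` package's Fernet recipe; it is a minimal illustration, not a complete key-management design, and where the session key comes from (a server, a hardware-backed keystore) is left as an assumption.

```python
# Sketch: encrypt sensitive data before writing it to device storage, keeping
# the key out of persistent storage. Requires: pip install cryptography
from cryptography.fernet import Fernet

# In a real design the key would come from a server after authentication or
# from a hardware-backed keystore -- never hard-coded or stored with the data.
session_key = Fernet.generate_key()   # held in memory for this session only
f = Fernet(session_key)

token = f.encrypt(b"account=12345;card=4111111111111111")  # safe to persist
with open("app_cache.bin", "wb") as out:
    out.write(token)

# Later in the same session:
with open("app_cache.bin", "rb") as src:
    plaintext = f.decrypt(src.read())
```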

Web Services Threats

Web services are among the most common communication channels for mobile applications and are a pivotal part of the overall mobile application security design. While you may hope that the web service will remain largely invisible, its discovery by a malicious user is inevitable, so you should consider the following threats commonly associated with web applications.

Trusted Security Perimeter

If you consider the mobile application threats above, the web services provide a trust barrier between untrusted and trusted computing environments. Therefore, when considering threats associated with web services, it's important to view the service layer as a critical component of the overall security strategy. In many instances this necessitates hardening the server, tightening the firewall configuration in front of the interface engine, and ensuring that it runs in the DMZ without direct access to the internal network.

Limited Authentication Options

Ideally, each new session should be fully authenticated, including appropriate identification, authentication, and authorization of each device and user. However, considering the usability and cryptographic storage challenges of the mobile device platform, it's important to design a comprehensive authentication and session management scheme that resists man-in-the-middle and other session hijacking attacks. Considerations such as mutual authentication may be necessary, depending on the sensitivity of the data managed by the application.
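As one illustration, here is a minimal sketch of an expiring, HMAC-signed session token the service could issue after full authentication, so that a captured token is only useful for a short window. The secret, lifetime, and token layout are hypothetical, and a production scheme would add device binding, rotation, and transport protections.

```python
# Minimal expiring-session-token sketch: HMAC-signed, constant-time verify.
import hashlib
import hmac
import secrets
import time

SERVER_SECRET = secrets.token_bytes(32)   # hypothetical server-side secret
LIFETIME = 900                            # 15-minute sessions, for illustration

def issue_token(user_id: str) -> str:
    expires = str(int(time.time()) + LIFETIME)
    msg = f"{user_id}|{expires}".encode()
    sig = hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()
    return f"{user_id}|{expires}|{sig}"

def verify_token(token: str) -> bool:
    try:
        user_id, expires, sig = token.rsplit("|", 2)
        deadline = int(expires)
    except ValueError:
        return False                      # malformed token
    msg = f"{user_id}|{expires}".encode()
    expected = hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and time.time() < deadline

t = issue_token("device42:user7")
print(verify_token(t))                    # True until the token expires
```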

Input Validation

All input arriving at the web service must be considered potentially harmful and evaluated accordingly. Failure to properly sanitize input remains one of the most common application vulnerabilities, and web services are no exception.
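A minimal allow-list validator makes the point: accept only fields you expect, in only the shapes you expect. The field names and rules below are hypothetical examples, not a complete schema.

```python
# Sketch: allow-list validation of incoming web-service parameters.
# The field names and rules are illustrative only.
import re

RULES = {
    "account_id": re.compile(r"\d{1,10}"),          # digits only
    "state":      re.compile(r"[A-Z]{2}"),          # two-letter code
    "note":       re.compile(r"[\w .,\-]{0,280}"),  # tightly bounded free text
}

def validate(params: dict) -> dict:
    """Keep only fields that fully match their rule; reject anything else."""
    clean = {}
    for name, value in params.items():
        rule = RULES.get(name)
        if rule is None or not rule.fullmatch(str(value)):
            raise ValueError(f"rejected field: {name}")
        clean[name] = value
    return clean

print(validate({"account_id": "12345", "state": "MN"}))  # passes
# validate({"account_id": "1 OR 1=1"})                   # raises ValueError
```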

Error Handling

When building a service to be used by mobile applications, developers often assume that error message handling is unnecessary, since the service is intended to be used exclusively by one application. Improper error handling can result in sensitive information disclosure and can assist an attacker by supplying information about the database and other inner workings of the system.
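The usual fix is to log the detail server-side and hand the client nothing but a generic message and a correlation id. A minimal sketch, with a hypothetical `process()` standing in for the real business logic:

```python
# Sketch: generic errors to the client, full detail only in server logs.
import logging
import uuid

log = logging.getLogger("webservice")

def process(req: dict) -> dict:
    raise RuntimeError("simulated database failure")  # stand-in for real logic

def handle_request(req: dict) -> dict:
    try:
        return process(req)
    except Exception:
        incident = uuid.uuid4().hex[:8]   # correlation id for support staff
        log.exception("request failed, incident %s", incident)
        # The client learns nothing about the database, stack, or internals:
        return {"status": 500, "error": f"Internal error (ref {incident})"}

print(handle_request({"q": "x"}))
```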

Auditing & Logging

Most statutory and regulatory requirements, as well as security best practices, require appropriate audit logging to be implemented as part of the application. Therefore, it's essential that the web service work together with the application to ensure all access to sensitive data and to the application itself is logged.
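A minimal sketch of what such an audit trail might capture per access; the field set is a hypothetical starting point, and your regulatory requirements dictate the real one.

```python
# Sketch: structured audit-log entry for each access to sensitive data.
import json
import logging
from datetime import datetime, timezone

audit = logging.getLogger("audit")
audit.addHandler(logging.FileHandler("audit.log"))
audit.setLevel(logging.INFO)

def log_access(user_id: str, record_id: str, action: str, source_ip: str) -> None:
    """Record who touched which record, when, how, and from where."""
    audit.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "record": record_id,
        "action": action,        # e.g. "read", "update", "export"
        "source_ip": source_ip,
    }))

log_access("dr_smith", "MRN-0042", "read", "10.1.2.3")
```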

Summary

Whether you are planning to develop a mobile application or reviewing the security of an existing one, it's never too late to perform a threat assessment. While the specific threats will differ for each application, the basic concepts described above should apply to the majority of them and help guide the process of understanding mobile application threats.

Things not to overlook in the new PCI DSS 3.0
November 14, 2013

November 7th is tricky. Some years it rings of election news (at least in the US), while in others it has brought devastating earthquakes to places like Guatemala. Considerably less dramatic, this year it brought us the final version of PCI DSS: the long-awaited 3.0. For a complete list of all changes in the new DSS, I recommend downloading and reviewing the "PCI DSS 3.0 Summary of Changes," which will take you through everything you need to know. I will limit my post to highlighting the changes that are most significant, noteworthy, and understated.

Having reviewed all the changes in detail, I am fairly convinced that the majority of the pain in adopting 3.0 will come from items marked as a "Clarification," rather than from any new or "Evolving" requirements. I anticipate none more than the added clarification on scoping.

PCI Scope Determination

The updated "Scope of PCI DSS Requirements" section now explicitly states that the scope is defined not only by the segmented network but also extends to "all system components included in or connected to the cardholder data environment". In addition, the new DSS instructs that the scope includes any system that may "impact the security of (for example, name resolution or web redirection servers) the CDE". Considering the inherent risks of all systems connected to one another, one might interpret this in a way that would include the entire Internet in scope. A more sensible approach, however, is a risk-based analysis of the systems connected to the CDE to determine whether the scope is limited to those connected systems or should be extended to the logical network. This is a topic worthy of a whole separate post… or a white paper… or a book. In the meantime, be sure to fully document your scope and challenge your assumptions to identify systems that may impact the security of the CDE. Remember that these may include systems you previously left off the PCI compliance radar because they were on different logical and physical networks.

Relationship Between PCI DSS and PA-DSS

The clarification offered in the new DSS will mostly impact smaller organizations, and those that felt they had "outsourced" PCI compliance by using an application that had gone through PA-DSS certification. All systems that process, store, or transmit CHD are in scope, even if PA-DSS certified. The certification means the application can be implemented in a PCI DSS compliant configuration if installed in accordance with the Implementation Guide provided by the software vendor. This should be validated during the assessment and not taken for granted.

Best Practices for Implementing PCI DSS into Business-as-Usual Processes

This section is new in PCI DSS 3.0 and helps reaffirm the need for a viable security program. While nothing new to security professionals, it should be reviewed by all organizations, especially those without a well-defined information security team. The recommendations clearly set the expectation that information security is a concern throughout the year, not only during the PCI assessment.

Sampling

The new "Sampling of Business Facilities/System Components" section covers something all QSAs have already worked with for years. While the new DSS calls it out as something for assessors, I recommend everyone read it and note the things you can do to reduce the number of systems that need to be reviewed during the assessment. It's simple: the more consistently standards are followed during deployment and maintenance, the smaller the sample size. In addition to significantly reducing the amount of work performed during the assessment, this has the added benefit of actually improving system management and security.

Understanding the Intent of Each Requirement

The PCI SSC has published several guidelines to help assessors, merchants, and service providers better understand the intent behind each requirement. These guidelines remained best-kept secrets and were seldom used, in spite of containing essential information for interpreting requirements where they didn't apply verbatim. In the new DSS, these guidelines have been incorporated into the standard itself, which will help reduce misunderstandings between assessing and assessed parties. Of perhaps even higher value, it will help reduce differences of opinion between assessors, which generally drive merchant and service provider companies crazy with frustration.

Documented and Implemented…

Several changes in the new DSS reinforce the concept that controls must be documented and implemented in accordance with that documentation. While common sense to most, this may be a good wake-up call for organizations whose policies are several years ahead of the controls actually implemented.

Requirement 1.1.3 - Current diagram that shows all cardholder data flows across systems and networks

In my experience as an assessor, I seldom find companies that do this well. Most diagrams are either too high-level (bordering on conceptual) or go to a level of detail that could be used to troubleshoot routing tables. This requirement now calls for diagrams that show where CHD is processed, transmitted, and stored. A side benefit: doing this well will also help you critically evaluate the current scope of the CDE and identify opportunities for reducing it.

Requirement 2.4 - Maintain an inventory of system components that are in scope for PCI DSS

I don't know why, but this has been a pain point in most of the assessments I have performed over the past years. System inventories either don't identify whether a system is in scope for PCI or simply don't exist. This has always created problems when attempting to select an appropriate sample size, and I am happy to see it made into a requirement in the new DSS.

Requirement 3.5.2 and 3.5.3 - Key Storage

Over the years, I have seen many creative solutions for secure storage of data-encryption keys and key-encryption keys. Many met requirements but just as many were too big on creativity and somewhat light on security. Clarifications added in these two requirements help provide better guidance on how keys are to be stored and should be reviewed to ensure that key storage conforms to the new information.

Requirement 5 - Protect all systems against malware and regularly update anti-virus software or programs

No more picking on Windows; now all operating systems are in scope for anti-virus software. The new and updated requirements still offer some flexibility in which systems get anti-virus software installed, but the process needs to clearly show that anti-malware was a consideration for all platforms.

Requirement 6.6 - For public-facing web applications, address new threats and vulnerabilities on an ongoing basis and ensure these applications are protected against known attacks

I am not too happy about one of the changes in this requirement. Specifically, you no longer have to run your Web Application Firewall (WAF) in 'deny' mode, as 'detect-only' is sufficient. The requirement states that a solution will qualify if it is "configured to either block web-based attacks, or generate an alert". I hold out some hope that this is a typo, because within the same requirement the Guidance section offers: "Web-application firewalls filter and block non-essential traffic at the application layer". Otherwise, this change would allow a weaker security configuration than what was required in 2.0.

Requirement 8.5.1 - Additional requirement for service providers: Service providers with remote access to customer premises (for example, for support of POS systems or servers) must use a unique authentication credential (such as a password/phrase) for each customer

This is one of my favorite additions to the PCI DSS, even though I know how much pain it will cause the majority of service providers. The time-honored practice of using the same password for all customers has been a silent danger for a long time, and I am happy it's now officially prohibited by the DSS. The small silver lining for service providers is that it's not mandatory until June 30, 2015, so you have a little over a year and a half to propagate this change throughout your organization.

Requirement 9.9 - Protect devices that capture payment card data via direct physical interaction with the card from tampering and substitution

Applicable to anyone who uses physical devices for capturing CHD, this new requirement introduces a number of items to consider. First, it requires that the company maintain a full inventory of all devices deployed in the field. Considering the number of retail locations, or the number of CHD capture devices deployed in many hospital systems, this alone could pose a challenge for some organizations. The inventory must contain the relevant device information as well as the location of each device. The other two sub-requirements deal with providing periodic training on spotting device tampering, and with actually performing periodic audits of all devices. While seemingly simple, I anticipate this requirement will create a lot of headaches. At least, as with all other new requirements, it is not mandatory until June 30, 2015.

Requirement 10.2.5 - Use of and changes to identification and authentication mechanisms…

This requirement enhances mandatory logging settings to include each use of authentication mechanisms and all changes to user account permissions and settings. While this is supported by most enterprise-level operating systems, it's rarely the default configuration and will require changes across a large number of systems. Custom-written applications may require additional, enhanced logging triggers to be developed. The same applies to custom software that does not support logging 'pausing' events, as required by 10.2.6.

Requirement 11.2 - Run internal and external network vulnerability scans at least quarterly and after any significant change in the network…

This requirement is certainly not new. However, the council has added a lot of guidance, specifying that a report showing "high" or "critical" vulnerabilities is not acceptable as a quarterly report. This means that in order to meet the quarterly scanning obligation, organizations need to scan, remediate, rescan, and then save the reports. The requirement now explicitly addresses combining multiple reports to demonstrate a single quarter's compliance.
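In effect, the quarterly evidence becomes a simple acceptance test: after the final rescan, no high or critical findings may remain. A minimal sketch, assuming a hypothetical list of findings exported from your scanner:

```python
# Sketch: check whether a quarter's final (post-remediation) scan report is
# acceptable as quarterly evidence. The findings list below is hypothetical.
FAILING_SEVERITIES = {"critical", "high"}

def quarter_is_clean(findings: list) -> bool:
    blockers = [f for f in findings
                if f["severity"].lower() in FAILING_SEVERITIES]
    for f in blockers:
        print(f"remediate and rescan: {f['host']} - {f['title']} ({f['severity']})")
    return not blockers

final_rescan = [
    {"host": "10.0.1.5", "title": "OpenSSL out of date", "severity": "medium"},
]
print("acceptable evidence:", quarter_is_clean(final_rescan))   # True
```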

Requirement 11.3 - Implement a methodology for penetration testing…

In the past, the penetration testing requirement loosely stated that the methodology should be documented to reflect application- and network-level testing. Now the DSS adds a relatively detailed description of what an acceptable penetration testing methodology entails. Companies that do their own penetration testing will not be able to simply rely on a pen-testing tool to meet this requirement; they will need to show that the penetration testing process is a considerable undertaking, not limited to running automated tools on the network. For companies that perform penetration testing for PCI, this means updating or improving their reporting to demonstrate that their methodology meets everything stated in 11.3. Related is the new requirement 11.3.4, which requires that penetration testing include verifying the effectiveness of any network segmentation used to safeguard CDEs. Depending on the number of non-CDE networks, this may mean significant changes in scope for penetration testing efforts, particularly for large, global enterprises.

Requirement 12.2 - Implement a risk-assessment process

Love risk assessments? So does the PCI council! The standard de facto risk-assessment requirement has been updated to require repeating the assessment after significant changes, in addition to performing it annually. At least the definition of a significant change has been expanded from adding a new web server to things like relocation or mergers and acquisitions.

Requirement 12.8.5 - Maintain information about which PCI DSS requirements are managed by each service provider, and which are managed by the entity

Few things cause merchants and service providers to hit the brakes in the middle of a PCI assessment like the discovery that a PCI-compliant service provider does less for you than you originally anticipated. Statements like "we fully outsource all server-related things to so-and-so" turn out to mean the vendor does nothing more than provide space, power, and data. To help level the playing field, this new requirement calls for maintaining a detailed list of all PCI requirements the vendors cover as part of their service and attestation. This will certainly help everyone get on the same page about how much of the PCI attestation may depend on the PCI AOC provided by a vendor or service provider.

As reinforced through 12.9, contracts should be updated to include information about the scope of the PCI attestation covered under the contract, but, like all other new requirements, this is not mandatory until June 30, 2015. During the last North America community meeting, someone asked the council whether this would necessitate renegotiating all contracts immediately. The council replied that that won't be necessary, as long as contracts get updated as they expire or as part of new contract negotiations.

Summary

Once again, this is not a complete recap of everything new and updated, just those items I feel people need to pay close attention to. The new requirements are probably not significant for a large number of organizations. However, I can definitely see how some companies with smaller and less rigorous information security teams may struggle to meet some of the new and updated requirements.

DEA Electronic Prescription of Controlled Substances – Certification Clarification
December 5, 2011

On October 16th, 2011, the DEA released a series of clarifications regarding the requirements for Electronic Prescriptions of Controlled Substances (EPCS). While overall this clarification was very helpful and confirmed the comprehensive nature of the certification process, it did introduce (or revive) a concept that triggered several calls and inquiries. Specifically, the DEA listed a company that has been certified to conduct DEA EPCS certifications, which raised excellent questions:

  • Why is NetSPI not listed on their website?
    (Answer: We don’t need to be; we meet other requirements that make us qualified certifiers)
  • Is NetSPI allowed to certify our application before you are listed on DEA’s website?
    (Answer: Yes)

According to 21 CFR 1311.300(a), there are two alternative processes for achieving the necessary qualifications:

  1. A third-party audit conducted by a person qualified to conduct a SysTrust, WebTrust, or SAS 70 audit, or by a Certified Information Systems Auditor, as stated in 21 CFR 1311.300(b), which comports with the requirements of paragraphs (c) and (d) of 21 CFR 1311.300; or
  2. A certification by a certifying organization whose certification process has been approved by DEA

Therefore, the certification process emphasized in the clarification is simply one of the alternatives, and is in no way required or mandatory. Since the principal consultant involved with the EPCS certification is a Certified Information Systems Auditor (CISA) in good standing, there should not be any issues with qualifications. Experience with SysTrust, WebTrust, or the (in my opinion) slightly outdated SAS 70 is more a derivative of the training provided by ISACA as part of the CISA.

The bigger question is whether having appropriate qualifications is the only measure by which you should select your certifying agent. This is where experience certifying applications against other standards, experience in healthcare, and understanding of the software development lifecycle can be significant differentiating factors. Certainly, as with any other regulatory standard, there will be (perhaps already are) many low-cost, rubber-stamp firms that might get you the certification letter you are seeking. They may let you replace application controls with policies and documentation, conduct the whole assessment by phone, and turn the whole certification process around in 24 hours.

However, obtaining the certification is only the first step in the long journey of maintaining DEA EPCS compliance. If a client decides that your application does not meet requirements or is in violation of EPCS, you will have to investigate all such claims and, if they are confirmed, announce to all of your customers that they can no longer use your application to prescribe or accept electronic prescriptions of controlled substances (21 CFR 1311.302). While it may seem appealing to take a run at getting through the certification fast, trust me: this shortcut is not a good idea, and any perceived savings of time and money will likely come back to haunt you. Going with the low-cost auditor may actually be the most expensive option.

Mayo Clinic's Solution for Social Media Challenges
September 9, 2011

The Mayo Clinic recently launched the Mayo Clinic Center for Social Media (https://socialmedia.mayoclinic.org/), intended to help train medical practitioners and patients on the use of social media to improve patient care. While it's easy to see how greater access to healthcare-related information can be very valuable, problems with doctors and nurses posting PHI inappropriately have made news headlines more than a handful of times. This new development therefore comes at a great time, just as more and more organizations are beginning to appreciate the value of a comprehensive social media strategy.

With the goal of delivering better quality care to patients, many healthcare systems are sharing EMR applications and medical data repositories and setting up interfaces between different systems. This increases the exposure of medical records to a larger group of healthcare practitioners by allowing better, faster, and easier collaboration between doctors. With increased collaborative efforts, it has become more likely that doctors will choose social media as the catalyst for collaboration and patient information sharing. Therefore, organizations that act as custodians of PHI, such as hospitals, clinics, and research labs, must take active steps to educate their workforce about the dangers of social media and how these tools can be used effectively without violating patient confidentiality or current healthcare compliance requirements.

Through the Center for Social Media, Mayo Clinic seems to approach the problem from multiple angles. While the portal is still very young, the articles already posted address creating well-designed social media policies and appropriate training materials, and provide analysis of documented cases of PHI misuse. Overall, I view this as a very positive development and will continue to monitor the website for insightful information about the best use of social media in healthcare. After all, this technology is here to stay, and draconian policies of simply blocking access to Facebook from the workplace have proven ineffective. The answer to these challenges clearly points to better guidance and training for healthcare practitioners, as well as developing tools for responsible, effective, and secure collaboration.
Security and Privacy Considerations in "Meaningful Use"
August 9, 2011

One of the common and consistent themes at HIMSS (Healthcare Information and Management Systems Society) this year was achieving the "Meaningful Use" requirements so that healthcare providers can apply for EHR (Electronic Health Record) stimulus money. The "Meaningful Use" requirements focus on:
  • Improving quality, safety, and efficiency, and reducing health disparities
  • Engaging patients and families
  • Improving care coordination
  • Improving population and public health
  • Ensuring adequate privacy and security protections for personal health information
Naturally, my interest is in the last item on the list, and in this post I hope to bring more clarity to a small subset of what is clearly becoming the newest "hot item" of the healthcare industry. Based on the "Meaningful Use" matrix created by the HIT (Health IT) Policy Committee, here are the security and privacy goals that need to be reached within the next year and a half:

2011 Objectives:
  • Compliance with HIPAA Privacy and Security Rules and state laws
  • Compliance with fair data sharing practices set forth in the Nationwide Privacy and Security Framework
2011 Measures:
  • Full compliance with HIPAA Privacy and Security Rules
  • An entity under investigation for a HIPAA privacy or security violation cannot achieve meaningful use until the entity is cleared by the investigating authority
  • Conduct or update a security risk assessment and implement security updates as necessary
What the above means is that healthcare companies need to conduct (or update an existing) security risk assessment and implement the appropriate controls to meet HIPAA requirements. However, since conducting risk assessments is technically a part of HIPAA / HITECH compliance, the requirements could be further simplified to say that by the end of 2011, companies need to be HIPAA compliant.

One thing that companies really need to address is making sure that HIPAA compliance goes beyond EMR (Electronic Medical Record) applications and includes the litany of small applications and medical devices that process, store, or transmit PHI. In order to ensure and demonstrate a comprehensive and complete state of compliance, healthcare providers need to make sure that risk assessments take into account all applications and medical devices, and provide clear supporting documentation of implemented controls and regulatory compliance. For additional information, I have provided the future 2013 and 2015 objectives below:

2013 Objectives:
  • Use summarized or de-identified data when reporting data for population health purposes (e.g. public health, quality reporting, and research) where appropriate, so that important information is available with minimal privacy risk
2013 Measures:
  • Provide summarized or de-identified data, when sufficient, to satisfy a data request for population health purposes (a minimal de-identification sketch follows these lists)
2015 Objectives:
  • Provide patients, on request, with an accounting of treatment, payment, and health care operations disclosures
  • Protect sensitive health information to minimize reluctance of patients to seek care because of privacy concerns
2015 Measures:
  • Provide patients, on request, with a timely accounting of disclosures for treatment, payment, and health care operations, in compliance with applicable law
  • Incorporate and utilize technology to segment sensitive data
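To ground the de-identification objective called out in the 2013 measures, here is a minimal sketch that strips direct identifiers from a record before it is released for population-health reporting. The identifier list is a small hypothetical subset, not the full set of HIPAA identifiers, and real de-identification requires a defensible method, not just a field filter.

```python
# Sketch: crude de-identification of a patient record before population-health
# reporting. The identifier list is a hypothetical subset, for illustration.
DIRECT_IDENTIFIERS = {"name", "ssn", "mrn", "address", "phone", "email", "dob"}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers; keep aggregate-friendly clinical fields."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

patient = {
    "name": "Jane Doe", "ssn": "111-22-3333", "dob": "1970-04-01",
    "zip3": "554", "diagnosis": "J45.40", "visit_year": 2011,
}
print(deidentify(patient))
# {'zip3': '554', 'diagnosis': 'J45.40', 'visit_year': 2011}
```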
EMR Security in the Cloud
August 2, 2011

I recently had the opportunity to review an article by Michael Koploy of Software Advice titled "HHS Data Tells the True Story of HIPAA Violations in the Cloud." While the article has great data about historical breaches, I think it's fair to say that not enough time has passed for us to know the real implications of companies moving EMRs into the cloud. HIPAA violations in an IT-centric environment like a cloud or software-as-a-service provider are harder to detect, and general awareness of the rules around HIPAA violations is lower there than in hospitals. In fact, that's one of the basic problems with deciding to move data into the "cloud": it requires a lot of blind trust.

Also, it's important to keep in mind that a lot of the physical theft reported by hospitals has nothing to do with someone actively seeking to steal PHI, and everything to do with someone losing a box of medical records in a warehouse or making their laptop easy for a thief to steal. Comparing this to electronic hacking of EMRs is simply comparing apples and oranges, unless you can prove that all instances of physical theft were motivated by someone looking for the medical records. On the whole, I would suggest that we simply don't have enough information to make a risk determination about storing EMRs in the cloud, or about whether it's a good idea.

All we have is the "Wall of Shame" from HHS, and that data can be interpreted in many ways to support a variety of conclusions. For example, since 12/15/09 there have been 292 total reported incidents, of which 58 involved a breach caused by a Business Associate (BA). The statistics also show that 50% of incidents reported by BAs involved physical theft or loss of data, closely followed by the "Unauthorized Access / Disclosure" category at 43%. This means that approximately 20% of all breaches involved a third party, and in reality the statistics for breaches caused by BAs are not much different from those for healthcare providers. Applying an unfair twist to this statistic, I could argue that a decision not to move data into the cloud would reduce the chances of a breach by 20%, which would be no less accurate than stating that the cloud will reduce the number of reported breaches. The truth is that there is simply not enough historical data, and companies need to exercise great due diligence when they decide to trust a third party with sensitive data.
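The 20% figure follows directly from the reported counts; a quick check of the arithmetic:

```python
# Quick check of the breach statistics cited above (HHS data since 12/15/09).
total_incidents = 292
ba_incidents = 58

print(f"BA share of all breaches: {ba_incidents / total_incidents:.1%}")
# -> BA share of all breaches: 19.9%  (i.e., roughly 20%)
```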
HIPAA May not Protect Compulsive Liars
March 30, 2011

At a recent networking event, I heard a manager express frustration over managing an employee who got caught up in her own fairy tales, resulting in a very embarrassing termination. She told her co-workers that she had been diagnosed with cancer and needed time off for surgery and treatment. The company responded with genuine concern and care, assuring her that she would have all the support and time off she needed. However, after an attempt to send her flowers at the hospital, they discovered that she was not there, and a little more probing confirmed that she never had cancer in the first place.

Once I got over the ridiculousness of this lie, I started thinking about the implications of being able to determine whether someone is at the hospital... Is letting someone know that a patient is not at a particular hospital at a specific time considered Protected Health Information (PHI)? What about calling the hospital, asking for the room where Mr. Kravchenko is located, and promptly being routed to my room? Isn't the simple act of agreeing to route the call already a disclosure of PHI? I realize that this may not be the biggest or most prominent HIPAA concern for most hospitals, since it requires some familiarity with the patient to make the inquiry effective. However, it does seem to allow targeted inquiries into an individual's health status, all without consent. I can also see how interested but unauthorized parties could check for attendance at substance abuse or psychological treatment simply by calling at the time the patient is suspected to be there.

Obviously, HIPAA was not created to protect compulsive liars from being able to deceive their employers, and it is hard to feel bad for a person who would lie about being sick with a terminal disease. However, this example does highlight the need for staff at hospitals and outpatient facilities to be trained on handling incoming inquiries, including deliveries of balloons and flowers. It also means that hospitals may need to come up with a different, better way of handling incoming calls to patient rooms, and may even need to start using "passwords" before routing a call. While many such incidents are anecdotal and often do not create a lot of sympathy for the "patient," this does highlight just how easy it is for unauthorized disclosure of PHI to happen.
Does DLP Help Solve HIPAA Concerns?
September 3, 2010

One of the most promising technologies for automatically enforcing compliance with sensitive data handling practices is Data Loss Prevention (DLP), and it is quickly gaining popularity and adoption across many industries. Does this mean that DLP is the answer to all sensitive information handling concerns?

In short, I am sorry to say that while DLP offers excellent solutions within a limited range of data, such as payment cards, social security numbers, and other easily identifiable data, it does not offer great solutions for HIPAA compliance. Most recently, the case of an employee fired from Oakwood Hospital in Michigan has once again highlighted the utter impossibility of automatically enforcing HIPAA compliance. In this case, Cheryl James made comments on Facebook that were interpreted as a violation of HIPAA requirements. This was not a case of medical records being leaked, but rather of a comment made by a medical professional. More information about the incident can be found here: https://www.fiercehealthcare.com/story/hospital-worker-fired-over-facebook-comments-about-patient/2010-08-01

More and more people are using websites such as Facebook as part of their everyday conversations with friends and family. However, a comment made to a spouse in the privacy of one's home is clearly not the same as posting that comment on Facebook. Since this is not the first time a comment made on a social networking website has landed a hospital employee in trouble, it's clear that it will take some time before everyone fully realizes the risks of communicating sensitive data on social networking websites. Naturally, the question that comes to mind is whether there is anything hospitals can do to prevent such incidents in the future.

The advantage of DLP technology is that if you can define a pattern or structure by which the data can be automatically identified as sensitive, DLP will be able to prevent most inappropriate transfers of that data, including posting on social websites. With regard to healthcare, however, data that qualifies as PHI is very diverse and does not lend itself to automated identification. Therefore, techniques for reducing the risk of inappropriate disclosure must fall back on low-tech controls such as training and blocking high-risk websites like Facebook for all employees.
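The "pattern or structure" point is the crux, and a toy example shows both sides of it. The regexes below are simplified illustrations (a real DLP engine adds Luhn checks, context analysis, and many more formats); note how the structured card number is caught while the PHI-laden free-text comment sails through.

```python
# Sketch: the kind of pattern matching a DLP engine can automate. These
# regexes are simplified illustrations, not production detection rules.
import re

PATTERNS = {
    "ssn":  re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # crude PAN shape
}

def scan(text: str) -> list:
    """Return the names of the sensitive-data patterns found in outbound text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

print(scan("my card is 4111 1111 1111 1111"))    # ['card'] -> can be blocked
print(scan("the patient in room 12 was rude"))   # []      -> PHI, but no pattern to hit
```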
Business Associates Need to Understand HIPAA & HITECH Requirements

Even though the full extent of the HIPAA and HITECH requirements will not apply to Business Associates until 2011, my experience helping organizations reach compliance with the relevant security requirements suggests that compliance efforts should begin right away. Proposed changes to the rules can be viewed at regulations.gov (https://www.regulations.gov/search/Regs/home.html#documentDetail?R=HHS-OCR-2010-0016-0001). The deadline for submitting comments passed on August 13th; however, I would be surprised to see significant changes from those that have been proposed.

Now that Business Associates must comply with the same requirements as Covered Entities, there are many important requirements for handling ePHI. Companies should quickly become familiar with the following:
  1. Performing periodic risk assessments that include ePHI - Organizations may decide to use guidance provided by HHS or use their own discretion.
  2. Responding to ePHI access inquiries - Just like Covered Entities, Business Associates need to be able to respond to requests regarding access to an individual's ePHI.
  3. Meeting the incident investigation timeframe - In accordance with the HITECH requirements, responding to security incidents and issuing appropriate breach notifications must take place within a relatively short timeframe, as illustrated in the sketch after this list. While 60 days may not seem very short, having participated in a number of incident investigations, I can assure you that this is not a lot of time.
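As a rough illustration of item 3, here is a minimal sketch computing the outer HITECH notification deadline from a discovery date. The dates and helper are hypothetical; the 60-day figure is the statutory outer bound ("without unreasonable delay and in no case later than 60 days"), not a target.

```python
from datetime import date, timedelta

# HITECH's outer bound: notification no later than 60 calendar days after discovery.
NOTIFICATION_WINDOW = timedelta(days=60)

def notification_deadline(discovered: date) -> date:
    """Return the last permissible notification date for a breach."""
    return discovered + NOTIFICATION_WINDOW

discovered = date(2010, 9, 1)  # hypothetical discovery date
print(notification_deadline(discovered))  # 2010-10-31
```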
Implementing the above requirements may warrant creating new or modifying existing policies, implementing new security controls, and providing training for IT staff and other ePHI custodians. Failure to comply with one's own policies and practices may cause the company to be viewed as negligent, triggering significantly higher fines and possible consequences for company leadership. For companies with relatively new security and privacy programs, I strongly recommend referencing the HITRUST Common Security Framework (CSF) for detailed implementation requirements for individual HIPAA and HITECH controls. While this may seem like overkill for small Business Associate organizations, the range of control implementation considerations will help organizations understand the full consequences of these healthcare regulatory requirements. Because implementing some of the more technical controls requires considerable cost and operational change, organizations should take advantage of the time remaining before the requirements are mandated and they fall within the scope of HIPAA and HITECH enforcement efforts.

HITRUST Part 4: Looking Forward

In this conclusion of the HITRUST blog series, I would like to discuss some of the opportunities and challenges that HITRUST is likely to face.

Putting together a single prescriptive framework for the healthcare industry is truly an ambitious effort. However, cross-referencing this framework with different regulatory requirements and then proposing a mechanism by which companies can be certified against it takes any such ambition to a whole new level. The good news is that many of the healthcare industry's biggest organizations have gotten on board and made significant contributions to this effort. Additionally, given the way HIPAA is written, there is a clear need for a framework like this, which can help companies better defend their interpretations of HIPAA requirements. Therefore, I think the future of HITRUST will be defined by the following considerations:

  • Quality of the Framework – For the framework to gain traction, it must be of good quality and achieve its stated objectives of being risk-based and prescriptive. Even though the framework is the product of multiple organizations collaborating, HITRUST does not necessarily govern by community and will make the final decision about CSF content. Another important aspect will be the approval process for alternative or compensating controls, and ensuring that the process of approvals and denials is transparent. Nothing will devalue the framework faster than a perception that it is driven by the agenda of a specific company rather than the industry as a whole.
  • Maturity of the Certification Process – Having gone through the assessor training, I feel this is perhaps the weakest point of HITRUST so far. In starting a certification program from scratch, mistakes are easy to make and common (just ask the PCI Council); however, PCI DSS was not a voluntary program, and mandatory compliance let it survive its early missteps, a luxury a voluntary certification does not have. Requirements such as submitting complete gap analysis reports to HITRUST (including all discovered vulnerabilities spelled out in detail) are clearly not going to last, since I can't imagine any company willingly submitting a comprehensive set of its dirty laundry (including all areas where it is not compliant with regulatory requirements) to a for-profit company for assessment and evaluation. However, I expect that once HITRUST begins to receive this kind of feedback from practitioners, it will make the necessary changes to its approach.
  • Certification Quality Assurance – Not all consulting firms are equal; in fact, they differ greatly in the quality of their work. Therefore, HITRUST needs to establish a better-defined QA program to govern the certification process. Protecting the integrity of the HITRUST certification will be essential if internal auditors are to begin accepting it in place of alternative third-party audits.
  • First Breach / Legal Challenge – Although HITRUST makes no representation that certification is synonymous with regulatory compliance, the first time a HITRUST-certified company suffers a breach or becomes the subject of a regulatory inquiry will be the first official test of the framework. One of the big selling points of the framework is that its interpretations of HIPAA are valid and substantiated by the whole healthcare industry. However, if a judge disagrees with any of those interpretations, the damage to HITRUST's acceptance could be significant.

I really want HITRUST to succeed. I think it's a great initiative with a lot of promise for the whole industry. However, it has a long way to go before it is widely accepted and the certification process is mature enough to inspire confidence on all sides. My recommendation for all healthcare providers and vendors is to begin looking at HITRUST and seeing how their security controls compare with those specified in the CSF. For companies that do not have a security program in place and are looking to undergo a HIPAA gap assessment for the first time, I recommend adopting the CSF. After all, the risks are fairly small, since the framework is based on current standards rather than anything new. As for the expense of undergoing a full certification, I recommend putting that on hold until the framework is more widely accepted or, for service providers, until your customers begin to ask for it.

HITRUST Part 3: Certification Explained

As a continuation of the HITRUST blog series, in this post I would like to explore the concept of certification and what it means.

So, by now I hope you've followed my advice and have been browsing the framework up and down. Perhaps you have generated a few reports that show just how easy it is to identify controls for each regulatory requirement and standard. You are now a CSF ninja who has mastered the framework engine, and you are ready to bring the idea of HITRUST certification to your organization. However, there is a very important concept that I must stress with regard to certification:

HITRUST certification does not mean you are compliant with ANY of the regulatory requirements or standards referenced within CSF.

Since this is somewhat counter-intuitive, please take a few minutes to absorb this information. So, what is the value of a HITRUST certification?

To answer this question, it's important to go back to my first blog post in this series and understand that one of the missions of HITRUST is to be practical and realistic in its approach to security. The current certification is therefore based on a minimum standard agreed upon by all participating organizations. As such, the certification is not meant to prove HIPAA or HITECH compliance, but rather to show that an organization meets the most basic security requirements the healthcare industry has deemed most important. To a healthcare industry outsider this may appear weak and indecisive, and perhaps there is some truth in that. However, those who deal with the daily challenge of responding to a seemingly unending stream of third-party audit checklists and audits, and those who have to manage controls for hundreds of applications and hundreds of different medical devices, will view even such a humble beginning as a monumental effort.

The overall goal is for HITRUST certification to be used as a way of showing that a particular organization is focused on security and has all the basics covered. As the industry gets better at managing security, I fully anticipate that the certification threshold will rise and become even more difficult to meet. However, while HITRUST seeks acceptance by medical practitioners and various service providers, I think it's wise to keep the certification requirements attainable, even if certification does not equal regulatory compliance. After all, each organization may choose to implement more than the certification minimum and reach a level of compliance before HITRUST requires it.

In my view, the real value of HITRUST certification is in simplifying and streamlining conversations between healthcare providers and vendors about the security of PHI data. If a vendor has a single framework to comply with, it can spend more resources on achieving that compliance rather than worrying about the various checklists and rule interpretations of different security consultants and auditors. Similarly, if a patient care provider can focus on adopting a single framework and require that all of its vendors do the same, CSF certification may be accepted in place of other third-party assessments.

Please check back for the fourth and final installment of the HITRUST blog series, in which I present some of the opportunities and challenges HITRUST will face in the future.

HITRUST Part 2: Taking a First Look at the CSF

As a continuation of the HITRUST blog series, in this post I would like to take a closer look at the Common Security Framework (CSF) and what it's all about.

The CSF is built on the ISO/IEC 27001:2005 and ISO/IEC 27002:2005 standards. Additionally, the framework currently includes:

  • NIST 800 series of standards
  • ISO/IEC 27799:2008 Health Informatics
  • COBIT
  • PCI
  • HIPAA
  • HITECH Act
  • FTC 16 Red Flags Rules

HITRUST is planning to add other regulatory requirements and standards, such as EHNAC's Healthcare Network Accreditation Program (HNAP-EHN), the Healthcare Information Technology Standards Panel (HITSP), and CMS Information Security (IS). However, the real value of the framework is not that it provides a clear cross-reference between these and future requirements, since that information is already available within a broad range of compliance management tools, but rather that the reconciliation of the different standards and the addition of controls are based on the experience and best practices of the HITRUST participants.

Each control described within the framework includes basic information such as control objectives, descriptions, and the categories it may be associated with. Additionally, each control includes the following elements, which differentiate the CSF from other frameworks:

  • Control Implementation – A prescriptive description of the control that provides detailed information about different aspects of implementation.
  • Control Audit Procedures – Detailed instructions that clearly document the steps CSF assessors should take to accurately ascertain compliance with the specific control.
  • Control Standards Mapping – All regulatory requirements and standards that apply to the particular control.
  • Alternative Controls – Compensating controls that have been approved by HITRUST and may be used in place of the controls listed. Any organization may submit its alternative or compensating controls to HITRUST for approval; once approved, they are added to the framework for the benefit of other companies.
  • Required for Certification – Some controls are marked as required for certification, while others are only recommended for compliance with specific regulatory requirements. (I know that the ability to certify without being compliant is odd, but hang in there; I will tackle this topic in the next blog post.)
  • Organizational or System – Organizational controls are implemented once for the entire organization, while system controls should be implemented and audited on each system containing ePHI.
The engine supporting the framework allows easy searching and navigation. Additionally, all regulatory requirement and standard references are presented as hyperlinks that navigate directly to the original authoritative sources. HITRUST has also created several reports that enable companies to determine gaps against any of the regulatory standards, based on the framework's control references. Overall, since the framework is available free of charge, I strongly recommend registering with HITRUST and browsing around.
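To make the control structure described above concrete, here is a minimal sketch of how a control record might be modeled and how a simple gap report against one standard could be derived from it. The field names, control IDs, and sample data are hypothetical; the real CSF engine and its reports are far richer.

```python
from dataclasses import dataclass, field

@dataclass
class CSFControl:
    """A simplified stand-in for a CSF control record (hypothetical fields)."""
    control_id: str
    objective: str
    standards: list[str] = field(default_factory=list)  # e.g. ["HIPAA", "PCI"]
    required_for_certification: bool = False
    scope: str = "Organizational"  # or "System"

controls = [
    CSFControl("01.a", "Access control policy", ["HIPAA", "ISO 27002"], True),
    CSFControl("09.m", "Network segregation", ["PCI", "NIST 800-53"], False, "System"),
]

implemented = {"01.a"}  # control IDs the organization has in place

def gap_report(standard: str) -> list[str]:
    """List controls mapped to a standard that are not yet implemented."""
    return [c.control_id for c in controls
            if standard in c.standards and c.control_id not in implemented]

print(gap_report("PCI"))  # ['09.m']
```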

In addition to providing a list of controls, HITRUST has incorporated different implementation levels for each control. These are guided by the size and nature of the organization and range from Level 1 (the most basic implementation) to Level 3 (the most advanced security). To understand which implementation level applies to a specific organization, it's important to pay attention to the organizational and system factors, which include a wide range of considerations such as the need for PCI compliance, the number of patient visits or hospital beds, the number of employees, internet connectivity, and many others. To make this determination easier and more automated, HITRUST has created spreadsheets in which an organization simply fills out some basic information; the next tab then provides the required implementation level for each control.
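Here is a minimal sketch of the kind of factor-to-level lookup those spreadsheets perform. The factors and thresholds are invented for illustration and do not reflect the actual CSF determination logic.

```python
# Hypothetical organizational factors; the real CSF factors are more numerous.
org = {
    "needs_pci_compliance": True,
    "hospital_beds": 450,
    "employees": 3000,
}

def implementation_level(factors: dict) -> int:
    """Map organizational factors to an implementation level (1-3).

    Thresholds are invented for illustration only.
    """
    level = 1
    if factors["hospital_beds"] > 200 or factors["employees"] > 1000:
        level = 2
    if factors["needs_pci_compliance"] and level == 2:
        level = 3
    return level

print(implementation_level(org))  # 3
```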

To summarize, the CSF is specific, prescriptive, and scalable, providing guidance not only for implementation but also for validation and attestation of compliance. The framework is intended to remain in continuous development through the addition of formally accepted alternative controls as well as entity-specific implementation requirements. Another important misconception to address is that the CSF is a new standard; in truth, it is an interpretation of existing standards. Therefore, even if adopting a relatively new framework seems like a risky investment of time and resources, I encourage organizations to at least become familiar with it. You never know; you just might find some great ideas applicable to your environment.

In the next post in this series, I will focus on the topic of HITRUST certification, and what it means in the context of compliance with other regulatory requirements such as HIPAA, HITECH, and PCI.

What is HITRUST? - Part 1

HITRUST is rapidly gaining popularity in the healthcare and security consulting fields, and NetSPI is investing significant resources in developing services that will assist clients in taking advantage of the new Common Security Framework (CSF), as well as in achieving all the benefits of optimizing information security programs against an industry-developed and accepted framework. As a way of introducing this new development, I will write a series of blog posts intended to familiarize anyone interested with just what HITRUST and the CSF are all about. So, let’s dive in…

Imagine that health providers, payers, and service providers got tired of constantly having to deal with different interpretations of regulatory requirements, an ongoing series of compliance and third-party audits, and inconsistencies among different regulatory standards. Also, imagine that they decided to get together and perform the tremendous task of not only correlating various regulatory requirements, but also reconciling any differences among standards. Well, that happened… sort of. More specifically, the Health Information Trust Alliance (HITRUST), a for-profit company, brought them all together and assisted them in this very ambitious effort called the Common Security Framework (CSF), available for free from their website. (https://www.hitrustalliance.net)

In addition to developing the framework, HITRUST has positioned itself as a certification body that allows companies to demonstrate their adoption of the CSF by earning a certification. It is important to note that certification does not mean the organization has implemented 100% of the controls described within the framework, but rather that it meets a specific certification threshold. Another important note is that certification does not mean compliance with all standards included in the CSF… but more on that later. The minimum certification requirement has been agreed upon by all participating members as the current minimum standard for security controls that a healthcare organization should maintain. This approach is consistent with the goal of basing the framework on practical expectations rather than often-unrealistic regulatory expectations.

The reality is that most companies are currently not compliant with even the most basic HIPAA requirements, which are now further enforced by the HITECH provisions of the 2009 American Recovery and Reinvestment Act (ARRA). Even with the level of specificity added by HITECH, there is a lot of room for interpretation of the requirements by auditors and security analysts. Recognizing this problem, the HITRUST Alliance decided to leverage the power of the healthcare community to take a step forward in defining a minimum set of requirements intended to move all providers, payers, and business partners in the right direction. Additionally, if your organization has the misfortune of having to defend its security controls and demonstrate HIPAA compliance in court, being able to show the use of controls approved by the larger healthcare community will provide a stronger legal position.

Check back for a more detailed look into the CSF, as well as information about the certification path and my humble opinions about the future opportunities and challenges for HITRUST.

You Cannot Outsource the Consequences of a Breach

Mozilla is known to most people for its open-source and free software, most notably Firefox. However, starting around August 4th, it also became known as yet another company whose merchandise store was breached. Following the announcement on the company's blog and the closure of Mozilla's store, headlines like these filled trade publications and the blogosphere: "Mozilla Store Breached" (PC Magazine), "Mozilla shuts Firefox e-store after security breach" (Computerworld), and "Mozilla Store Security Breached" (InformationWeek).

A careful reading of these articles, however, revealed that the breach did not happen through any fault of Mozilla; rather, it was caused by Gateway/CDI, a third-party e-commerce processor. Even though most news stories about the breach mentioned this critical fact, my conversations with non-techie friends proved that such details went largely unnoticed and that Mozilla was viewed as the guilty party. This only goes to prove that unless the reader has a reason to take an interest in the story, the headline is the only thing read. Unfortunately, headlines such as "Mozilla shuts down online store after third-party security breach" (SearchSecurity.com) are rare and tend to appear only in technical and security-oriented news sources.

What all this adds up to is that when considering outsourcing the storage, processing, or transmission of critical data to a third party, organizations must recognize that if that third party is breached, it will be their name in the headlines, not the vendor's. The solution is for companies to carefully evaluate whether outsourcing is really the best option for them and for their clients. Personally, I think companies are outsourcing too much, completely ignoring the risks of letting their data outside the trusted network perimeter. However, if outsourcing still makes business sense, careful attention must be paid to ensuring that the vendor takes all appropriate precautions to keep your data safe. Ideally, this should be done during the initial negotiation, when a client has the most influence and power over the vendor. Typical validation steps may include a combination of any of the following tactics:
  1. Ask whether the vendor has a SAS 70 report on file. (Make sure it explicitly covers the service you are purchasing, and request an independent review of the report to confirm it was produced by a reputable audit firm.)
  2. Involve Internal Audit in asking the vendor to fill out a questionnaire describing its information security practices. Make sure to ask for proof of some of the most critical controls.
  3. Hire a security consulting firm to perform an independent audit of the security controls the vendor has in place.
  4. Ask your internal information security team to perform a thorough review of the vendor's security controls.
The most important thing to remember is that even though your organization may be outsourcing to a third party, in the eyes of your current and future customers the overall responsibility for protecting the data will always remain with your company.

An Introduction to the Open Software Assurance Maturity Model (OpenSAMM)

Information security used to be all about networks and protecting the network perimeter. Today, however, applications are the new battleground for the protection of digital assets. While the concept of software security has been around for a long time, the evolution of mobile technologies and the universal accessibility of applications are forcing organizations to improve the maturity of their application security practices. Many organizations have developed their own methodologies for this; some are too complex and difficult to implement, while others are proprietary and expensive. However, one framework stands out from the crowd. The Open Software Assurance Maturity Model (OpenSAMM) was developed by OWASP back in 2009 and is currently undergoing many updates. Like all OWASP projects, it is free and has been adopted by many organizations around the world. OpenSAMM is comprehensive, covers all aspects of application security, and still allows each application to be evaluated in under one hour. On the whole, I would recommend that any organization that develops its own software take a look at the OpenSAMM website and try it out.

To get started, compile a list of applications and, for each application, identify subject-matter experts who can answer questions about how it was developed. Next, review the "Assessment Interview Template" and the "Assessment Worksheet," both downloadable from the OpenSAMM website. These documents break OpenSAMM down into a series of interview questions, making it easier to collect and normalize information. Once answers have been gathered, OpenSAMM maturity scores can be calculated. The model breaks application security maturity down into four "business functions":
  • Governance: The way application security is managed in the organization
  • Construction: The way applications are built
  • Verification: The way security of the application is tested
  • Deployment: How the application is deployed and supported in production
Additionally, each business function breaks down into three "security practices". For example, "Verification" splits up into:
  • Design Review: Activities such as attack surface analysis
  • Code Review: Activities such as peer and 3rd party code reviews
  • Security Testing: Activities such as penetration testing
Once all the questions have been answered, each "security practice" receives a score ranging from 0 to 3. This score represents the maturity of that practice and provides a snapshot of the security practices around each application. Understanding what each metric means to the organization requires additional analysis. Applications developed in certain high-security industries or environments may need to achieve the maximum maturity level in each "security practice," but this is not necessary for most organizations. OpenSAMM provides sample "security profiles" that show suggested maturity ratings for organizations in different industries, but these should be viewed as recommendations only. Each organization should perform its own risk analysis and determine its own acceptable maturity levels for different "security practices" and different applications. To better understand the maturity ratings and objectives, OpenSAMM provides additional guidance in each area, including suggested security metrics. All of this information should be evaluated collectively in order to establish the right maturity level objectives for different applications.

The OpenSAMM framework is quick to deploy, provides actionable recommendations for improving application security, and does not dwell on a lot of risk management terminology. I recommend that everyone responsible for managing application security check it out.
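To illustrate the shape of the resulting scorecard, here is a minimal sketch that prints hypothetical per-practice scores grouped by business function. The practice names follow the model, but the scores and the per-function average are illustrative additions, not part of the official assessment.

```python
# Hypothetical assessment results: practice -> maturity score (0-3).
scores = {
    "Governance": {"Strategy & Metrics": 1, "Policy & Compliance": 2, "Education & Guidance": 1},
    "Construction": {"Threat Assessment": 0, "Security Requirements": 1, "Secure Architecture": 1},
    "Verification": {"Design Review": 1, "Code Review": 2, "Security Testing": 3},
    "Deployment": {"Vulnerability Management": 2, "Environment Hardening": 1, "Operational Enablement": 0},
}

def scorecard(results: dict) -> None:
    """Print each practice score and an average per business function."""
    for function, practices in results.items():
        avg = sum(practices.values()) / len(practices)
        print(f"{function} (avg {avg:.1f})")
        for practice, score in practices.items():
            print(f"  {practice}: {score}/3")

scorecard(scores)
```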
