Nabil Hannan

Nabil Hannan is a Field CISO at NetSPI. He leads the company’s advisory consulting practice, focusing on helping clients solve their cyber security assessment, and threat & vulnerability management needs. His background is around building and improving effective software security initiatives, with deep expertise in the financial services sector. He has over 15 years of experience in cyber security consulting from his tenure at Cigital/Synopsys Software Integrity Group, where he has identified, scoped, and delivered on software security projects (architectural risk analysis, penetration testing, secure code review, malicious code detection, vulnerability remediation, mobile security assessments, etc.). Nabil has also worked as a Product Manager at Research In Motion/BlackBerry and has managed several flagship initiatives and projects through the full software development life cycle.
More by Nabil Hannan
WP_Query Object
(
    [query] => Array
        (
            [post_type] => Array
                (
                    [0] => post
                    [1] => webinars
                )

            [posts_per_page] => -1
            [post_status] => publish
            [meta_query] => Array
                (
                    [relation] => OR
                    [0] => Array
                        (
                            [key] => new_authors
                            [value] => "65"
                            [compare] => LIKE
                        )

                    [1] => Array
                        (
                            [key] => new_presenters
                            [value] => "65"
                            [compare] => LIKE
                        )

                )

        )

    [query_vars] => Array
        (
            [post_type] => Array
                (
                    [0] => post
                    [1] => webinars
                )

            [posts_per_page] => -1
            [post_status] => publish
            [meta_query] => Array
                (
                    [relation] => OR
                    [0] => Array
                        (
                            [key] => new_authors
                            [value] => "65"
                            [compare] => LIKE
                        )

                    [1] => Array
                        (
                            [key] => new_presenters
                            [value] => "65"
                            [compare] => LIKE
                        )

                )

            [error] => 
            [m] => 
            [p] => 0
            [post_parent] => 
            [subpost] => 
            [subpost_id] => 
            [attachment] => 
            [attachment_id] => 0
            [name] => 
            [pagename] => 
            [page_id] => 0
            [second] => 
            [minute] => 
            [hour] => 
            [day] => 0
            [monthnum] => 0
            [year] => 0
            [w] => 0
            [category_name] => 
            [tag] => 
            [cat] => 
            [tag_id] => 
            [author] => 
            [author_name] => 
            [feed] => 
            [tb] => 
            [paged] => 0
            [meta_key] => 
            [meta_value] => 
            [preview] => 
            [s] => 
            [sentence] => 
            [title] => 
            [fields] => 
            [menu_order] => 
            [embed] => 
            [category__in] => Array
                (
                )

            [category__not_in] => Array
                (
                )

            [category__and] => Array
                (
                )

            [post__in] => Array
                (
                )

            [post__not_in] => Array
                (
                )

            [post_name__in] => Array
                (
                )

            [tag__in] => Array
                (
                )

            [tag__not_in] => Array
                (
                )

            [tag__and] => Array
                (
                )

            [tag_slug__in] => Array
                (
                )

            [tag_slug__and] => Array
                (
                )

            [post_parent__in] => Array
                (
                )

            [post_parent__not_in] => Array
                (
                )

            [author__in] => Array
                (
                )

            [author__not_in] => Array
                (
                )

            [search_columns] => Array
                (
                )

            [ignore_sticky_posts] => 
            [suppress_filters] => 
            [cache_results] => 1
            [update_post_term_cache] => 1
            [update_menu_item_cache] => 
            [lazy_load_term_meta] => 1
            [update_post_meta_cache] => 1
            [nopaging] => 1
            [comments_per_page] => 50
            [no_found_rows] => 
            [order] => DESC
        )

    [tax_query] => WP_Tax_Query Object
        (
            [queries] => Array
                (
                )

            [relation] => AND
            [table_aliases:protected] => Array
                (
                )

            [queried_terms] => Array
                (
                )

            [primary_table] => wp_posts
            [primary_id_column] => ID
        )

    [meta_query] => WP_Meta_Query Object
        (
            [queries] => Array
                (
                    [0] => Array
                        (
                            [key] => new_authors
                            [value] => "65"
                            [compare] => LIKE
                        )

                    [1] => Array
                        (
                            [key] => new_presenters
                            [value] => "65"
                            [compare] => LIKE
                        )

                    [relation] => OR
                )

            [relation] => OR
            [meta_table] => wp_postmeta
            [meta_id_column] => post_id
            [primary_table] => wp_posts
            [primary_id_column] => ID
            [table_aliases:protected] => Array
                (
                    [0] => wp_postmeta
                )

            [clauses:protected] => Array
                (
                    [wp_postmeta] => Array
                        (
                            [key] => new_authors
                            [value] => "65"
                            [compare] => LIKE
                            [compare_key] => =
                            [alias] => wp_postmeta
                            [cast] => CHAR
                        )

                    [wp_postmeta-1] => Array
                        (
                            [key] => new_presenters
                            [value] => "65"
                            [compare] => LIKE
                            [compare_key] => =
                            [alias] => wp_postmeta
                            [cast] => CHAR
                        )

                )

            [has_or_relation:protected] => 1
        )

    [date_query] => 
    [request] => 
					SELECT   wp_posts.ID
					FROM wp_posts  INNER JOIN wp_postmeta ON ( wp_posts.ID = wp_postmeta.post_id )
					WHERE 1=1  AND ( 
  ( wp_postmeta.meta_key = 'new_authors' AND wp_postmeta.meta_value LIKE '{390b707b25fac1254cc2d416e9cb582fa9c09d0a924911d435541183b48db708}\"65\"{390b707b25fac1254cc2d416e9cb582fa9c09d0a924911d435541183b48db708}' ) 
  OR 
  ( wp_postmeta.meta_key = 'new_presenters' AND wp_postmeta.meta_value LIKE '{390b707b25fac1254cc2d416e9cb582fa9c09d0a924911d435541183b48db708}\"65\"{390b707b25fac1254cc2d416e9cb582fa9c09d0a924911d435541183b48db708}' )
) AND wp_posts.post_type IN ('post', 'webinars') AND ((wp_posts.post_status = 'publish'))
					GROUP BY wp_posts.ID
					ORDER BY wp_posts.post_date DESC
					
				
    [posts] => Array
        (
            [0] => WP_Post Object
                (
                    [ID] => 31564
                    [post_author] => 53
                    [post_date] => 2024-02-20 09:59:20
                    [post_date_gmt] => 2024-02-20 15:59:20
                    [post_content] => 




Watch Now

Overview

Incorporating Artificial Intelligence (AI) into your business or developing your own machine learning (ML) models can be exciting! Whether you are purchasing out-of-the-box AI solutions or developing your own Large Language Models (LLMs), ensuring a secure foundation from the start is paramount — and not for the faint of heart.  

Looking for guidance on how to safely adopt generative AI? Look no further. There’s no better guiding light than other security leaders that have already experienced the process — or are going through it as we speak.  

NetSPI Field CISO, Nabil Hannan, welcomed two AI security leaders for a discussion on what they’ve learned throughout their experiences implementing Generative AI in their companies. Chris Schneider, Senior Staff Security Engineer at Google, and Tim Schulz, Distinguished Engineer, AI Red Team at Verizon, shared their perspectives on cybersecurity considerations companies should address before integrating AI into their systems and proactive measures can organizations take to avoid some of the most common cybersecurity pitfalls teams face. 

Access the on-demand webinar to hear their discussion on:  

  • Cybersecurity questions to ask before starting your AI journey  
  • Common pitfalls and challenges you can avoid 
  • Stories from security leaders on the top lessons they’ve learned   
  • Security testing approaches for AI-based systems  
  • And more! 

Key Highlights 

03:27 - AI as a misnomer 
12:22 – What to consider before implementing AI 
17:51 – Aligning AI initiatives with cybersecurity goals 
10:41 - Perspectives on community guidance 
24:35 - Cybersecurity pitfalls with Generative AI 
34:51 - Testing AI-based systems vs traditional software 
41:50 - Security testing for AI-based systems 
47:58 – Lessons learned from implementing AI 

Artificial Intelligence can be a misnomer because it implies that there’s a form of sentience behind the technology. In most cases when talking about AI, we’re talking about technology that digests large amounts of data and gives an output quickly. Can you share your perspective on the technology and how it’s named?  

Tim: Tim explains that generative AI has influenced the discourse on the essence of artificial intelligence, sparking debates over terminology. The widespread familiarity with AI, thanks to its portrayal in Hollywood and elsewhere, has led to diverse interpretations. However, he notes the existing definition fails to accommodate the nuanced discussions necessitated by technological advancements. This discrepancy poses a significant challenge. While the term "AI" is easily recognizable to the general public, the field's rapid evolution demands a reassessment of foundational definitions. Expert opinions vary, which is why discussions like these are constructive because it’s better to have diverse perspectives rather than categorizing any particular viewpoint as unpopular. 

Chris: Chris makes a case for AI being a term more widely recognized by the public compared to machine learning. The historical marketing associated with AI makes it more familiar to people and increases its appeal. However, he cautions the influence of popular media may distort factual aspects and contribute to exaggerated claims, often made by celebrities. As a scientist, he advocates for a cautious approach, emphasizing the importance of basing discussions on demonstrated capabilities and evidence from past experiences. Differing opinions can be valid if they are not sensational, such as concerns about a robot uprising, which is divergent from the field's focus on probabilistic forecasting and observed behaviors. AI is a process involving memorization, repetition, and probabilistic synthesis rather than independent intelligence or foresight. 

What are some aspects to consider before organizations start their journey to leverage AI-based technologies? Are there common pitfalls that organizations run into? 

Tim: Tim believes it’s important to assess available resources for AI adoption. AI isn’t a simplistic, plug-and-play solution. Rather it has significant infrastructure and engineering efforts necessary for seamless integration. The complexity results in a vital need to dedicate resources and adopt a comprehensive approach. Moreover, AI literacy plays a crucial role in facilitating effective communication and decision-making.  

Tim cautions against the risk of being outmaneuvered in discussions by vendors and advocates for seeking partnerships or trusted advisors to bridge knowledge gaps. The industry needs to embrace continuous learning and adaptation in response to evolving regulations and the dynamic nature of AI technology. Outsourcing can be a viable option to streamline operations for those reluctant to commit to ongoing maintenance and operational efforts. 

Are there ways organizations can ensure their AI initiatives align with their cybersecurity goals and protocols? 

Chris: Speaking as a prospective employee at Google, but not officially on behalf of Google, Chris explains one of the ways he approaches this is to use the Android AppSec Knowledgebase within Android Studio. This tool provides developers with real-time alerts regarding common errors or security risks, often accompanied by quick fixes. It’s updated with ongoing efforts to expand its functionality to encompass machine learning implementations, aligning with Google's Secure AI Framework (SAIF). The framework offers guidelines and controls to address security concerns associated with ML technologies, although it may not cover all emerging issues, prompting ongoing research and development. Chris emphasizes the adaptability of these controls to suit different organizational needs and highlights their open-source nature, allowing individuals to apply custom logic. He mentions drawing inspiration from existing literature and industry feedback, aiming to contribute positively to the community, while acknowledging the learning curve and the complexity involved. 

Do you have any perspectives on the community guidance that’s being generated? Anything you’re hoping to see in the future?  

Tim: Tim notes a significant challenge in the AI domain is the gap between widespread knowledge and expert-driven understanding. Despite the rapid advancements in AI, Tim observes a lack of comprehensive knowledge across organizations due to the sheer volume of developments.  

Community efforts have had a positive impact on sharing knowledge so far, but challenges remain in discerning quality information amidst the abundance of resources. Major tech companies like Google, Meta, and Microsoft have contributed by releasing tools and addressing AI security concerns, facilitated by recent executive orders. However, the absence of a common toolset for testing models remains a challenge. Tim commends the efforts of large players in the industry to democratize expertise but acknowledges the ongoing barrier posed by the need for specialized knowledge. Broadening discussions beyond model deployment is important to address emerging complexities in AI. 

What have you seen as some of the most common cybersecurity pitfalls that organizations have encountered when they implement AI technologies? Do you have any recommendations to avoid those? 

Tim: Tim says it’s inevitable that Generative AI will permeate organizations in various capacities, requiring heightened security measures. AI literacy is essential in understanding and safeguarding AI systems, which differs significantly from conventional web application protection.  

Notably, crafting incident response plans for AI incidents poses unique challenges, given the distinct log sources and visibility gaps inherent in AI systems. While efforts to detect issues like data poisoning are underway, they remain primarily in the research phase. Explainable AI and AI transparency is incredibly important in enhancing visibility for security teams.  

Distinguishing between regular incident response and AI incident response processes is crucial, potentially involving different teams and protocols. Dynamics are shifting within data science teams, now grappling with newfound attention and security concerns due to Generative AI. Bridging the gap between data science and cybersecurity teams requires fostering collaboration and adapting to evolving processes. Legal considerations also come into play, as compliance requirements for AI systems necessitate legal counsel involvement in decision-making processes.  

These ongoing discussions reflect the dynamic nature of AI security and underscore the need for continual adaptation and collaboration among stakeholders. The field is developing rapidly with new advancements emerging often on a daily, weekly, or even hourly basis. Drawing from personal experience, Tim emphasizes the unprecedented speed at which research transitions into practical applications and proof-of-concepts (POCs), ultimately integrating into products. This remarkable acceleration from research to productization represents an unparalleled advancement in technology maturity timelines. 

Chris: The concept of "adopt and adapt" is helpful here, noting both traditional and emerging issues with code execution. Machine learning introduces unintentional variants in input and output, posing challenges for software developers. A modified approach for machine learning has multiple stages, including pre-training and post-deployment sets. While traditional infrastructure controls may suffice, addressing non-infrastructure controls, particularly on devices, proves more challenging due to physical possession advantages. Hybrid models, such as those seen in the gaming industry, offer a viable approach, particularly for mitigating risks like piracy. He highlights the need for robust assurances in machine learning usage, especially concerning compliance and ethical considerations. 

Traditional software testing paradigms may not apply to AI-based systems that are non-deterministic. What makes testing AI-based systems unique compared to traditional software?  

Chris: Considering security aspects, the focus is on achieving security parity with current controls. However, addressing emerging threats or new capabilities in machine learning poses challenges. If existing controls prove inadequate for these scenarios, alternative approaches must be explored. For instance, the synthesis of identity presents significant concerns, as advancements in technology allow for sophisticated audio synthesis with minimal sample data requirements. This underscores the need for proactive measures to address evolving security risks. 

In security, the focus is on achieving security parity with current controls while addressing emerging threats or new capabilities in machine learning. For instance, the synthesis of identity presents significant concerns, as advancements in technology enable sophisticated audio and video synthesis, allowing for impersonation and potentially fraudulent activities. Preventing such misuse is a pressing concern, with efforts aimed at developing semantic and provable solutions to combat these challenges.  

Additionally, there's a distinction between stochastic and non-stochastic software, with an increasing emphasis on the collection of vast amounts of data without strict domain and range boundaries. This shift challenges traditional security principles, particularly the importance of authenticating data before processing it, as emphasized by Moxie Marlinspike's "Doom principle."  

Despite the widespread acceptance of indiscriminate data ingestion, there's growing recognition of the risks associated with it, such as prompt injection and astroturfing. Testing the security of systems against inconsistent behaviors and untrusted data sources has always been challenging, with approaches like utility functions proposed to address these complexities. Finding the right balance between control and innovation remains a central dilemma, with both excessive control and insufficient oversight posing risks to the integrity and reliability of systems. 

From a Red Teaming perspective, what measures should organizations take to ensure comprehensive security testing for AI-based systems? What tips or tricks have been effective in your experience that you wish you had known earlier? 

Tim: Tim explains that one of the aspects organizations need to consider is the testing phase, especially during deployment of AI-based systems like web applications integrated with language models. Understanding the intended behavior is crucial, and simulating user interactions helps in documenting various use cases accurately. Cost is another significant aspect to evaluate, as API usage can incur charges based on request/response rates. Red teaming or penetration testing should explore context length expansion tactics to avoid unforeseen financial burdens, especially when manipulating parameters to change response lengths.  

Efficient resource utilization is paramount, considering that most organizations won't deploy or train massive models due to cost constraints. Therefore, managing expenses and implementing guardrails for API usage becomes imperative. Additionally, safeguarding brand reputation is crucial, particularly for public-facing platforms, where Generative AI content could potentially lead to negative publicity if misused. Thus, a comprehensive approach to security and Red Teaming in AI systems involves addressing not only technical controls but also considering broader implications and partnering with responsible AI teams to mitigate risks effectively. 

If you could go back in time and share one lesson with your younger self that would have helped on your AI journey, what would it be? 

Chris: Synthesizing content can offer benefits, yet it entails inherent trade-offs. The ability to produce unique interactions correlates with the tolerance for risk that the business is willing to accept. This aspect is quantified by a term known as "temperature" in business jargon. Conversely, if the generated content pertains to sensitive information like payment details, it can present challenges that need careful consideration before implementation. Miguel Rodriguez's suggestion regarding pre- and post-training, as well as pre- and post-deployment phases, serves as an excellent starting point. Additionally, augmenting these phases with considerations for networking, hardware, operating systems, and application context helps fortify the threat model review process. 

Tim: Similar to what Chris mentioned, sending specific resources on honing in on lessons about neural networks could be beneficial. Overall, the key is to continue using these systems. Besides understanding the theory, interacting with the systems and trying different prompts is crucial. Experimenting with advertised hacks and cheats found online can provide insights into their effectiveness. Diversity of thought is important as it offers various approaches to exploring these systems. Therefore, focusing on experimentation and continual learning is essential for gaining knowledge in this field. 

Hear the full discussion between Nabil, Chris, and Tim by requesting the on-demand webinar using the form above or continue your AI security learning by accessing our eBook, “The CISO’s Guide to Securing AI/ML Models.” 

[wonderplugin_video iframe="https://youtu.be/LC9E44mDJEY" lightbox=0 lightboxsize=1 lightboxwidth=1200 lightboxheight=674.999999999999916 autoopen=0 autoopendelay=0 autoclose=0 lightboxtitle="" lightboxgroup="" lightboxshownavigation=0 showimage="" lightboxoptions="" videowidth=1200 videoheight=674.999999999999916 keepaspectratio=1 autoplay=0 loop=0 videocss="position:relative;display:block;background-color:#000;overflow:hidden;max-width:100%;margin:0 auto;" playbutton="https://www.netspi.com/wp-content/plugins/wonderplugin-video-embed/engine/playvideo-64-64-0.png"]

[post_title] => Hindsight’s 20/20: What Security Leaders Wish They Knew Before Implementing Generative AI  [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => what-to-know-before-implementing-generative-ai [to_ping] => [pinged] => [post_modified] => 2024-03-27 13:40:01 [post_modified_gmt] => 2024-03-27 18:40:01 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=31564 [menu_order] => 3 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [1] => WP_Post Object ( [ID] => 31424 [post_author] => 53 [post_date] => 2023-11-06 16:30:58 [post_date_gmt] => 2023-11-06 22:30:58 [post_content] =>
Watch Now

Morphing your IT defenses to bounce back better. Driving innovation quickly while maintaining IT resiliency and cybersecurity is no small challenge. Cloud outages, wobbly supply chains, ransomware attacks, and other steep challenges may limit your staff and budget. This session will cover some new innovations that will help IT clear hurdles and explain how to keep innovation and resilience afloat at the same time.

In this webinar, you will learn:

  • How cloud migration can impact security and ROI
  • What tools can help identify vulnerabilities and integrate into development workflows
  • How companies should prioritize deployment styles
  • Why the integration of security practices will impact the decision

[wonderplugin_video iframe="https://youtu.be/C9nz4N-uPak" lightbox=0 lightboxsize=1 lightboxwidth=1200 lightboxheight=674.999999999999916 autoopen=0 autoopendelay=0 autoclose=0 lightboxtitle="" lightboxgroup="" lightboxshownavigation=0 showimage="" lightboxoptions="" videowidth=1200 videoheight=674.999999999999916 keepaspectratio=1 autoplay=0 loop=0 videocss="position:relative;display:block;background-color:#000;overflow:hidden;max-width:100%;margin:0 auto;" playbutton="https://www.netspi.com/wp-content/plugins/wonderplugin-video-embed/engine/playvideo-64-64-0.png"]

[post_title] => Innovation & Cyber Resiliency [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => innovation-cyber-resiliency [to_ping] => [pinged] => [post_modified] => 2023-11-30 16:20:35 [post_modified_gmt] => 2023-11-30 22:20:35 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=31424 [menu_order] => 8 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [2] => WP_Post Object ( [ID] => 31485 [post_author] => 53 [post_date] => 2023-11-01 11:02:25 [post_date_gmt] => 2023-11-01 16:02:25 [post_content] =>
Watch Now

A novel 0-day vulnerability referred to as, “HTTP/2 Rapid Reset,” was reported, which abuses certain features of HTTP/2 protocol and allows for Distributed Denial of Service (DDoS) attacks at an unprecedented scale.

Hear perspectives from NetSPI Field CISO Nabil Hannan, and Security Research Engineer Isaac Clayton to get their take on the CVE and learn more about NetSPI's quick response to help security leaders with identification and remediation.

In this webinar they'll discuss:

  • What is CVE-2023-44487 and who is impacted
  • How to determine if you are vulnerable
  • Best practices for remediation
  • ASM’s role in CVE management

Read more about NetSPI's analysis of HTTP/2 Rapid Reset in this article.

[wonderplugin_video iframe="https://youtu.be/iLedcMoHmU4" lightbox=0 lightboxsize=1 lightboxwidth=1200 lightboxheight=674.999999999999916 autoopen=0 autoopendelay=0 autoclose=0 lightboxtitle="" lightboxgroup="" lightboxshownavigation=0 showimage="" lightboxoptions="" videowidth=1200 videoheight=674.999999999999916 keepaspectratio=1 autoplay=0 loop=0 videocss="position:relative;display:block;background-color:#000;overflow:hidden;max-width:100%;margin:0 auto;" playbutton="https://www.netspi.com/wp-content/plugins/wonderplugin-video-embed/engine/playvideo-64-64-0.png"]

[post_title] => NetSPI LinkedIn Live: HTTP/2 Rapid Reset [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => http-2-rapid-reset [to_ping] => [pinged] => [post_modified] => 2023-11-16 11:03:32 [post_modified_gmt] => 2023-11-16 17:03:32 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=31485 [menu_order] => 11 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [3] => WP_Post Object ( [ID] => 30779 [post_author] => 53 [post_date] => 2023-08-07 11:27:54 [post_date_gmt] => 2023-08-07 16:27:54 [post_content] =>

See Pentesting in Action and Learn Best Practices from the CISOs at Nuspire and NetSPI

Organizations spend millions of dollars on security controls, yet they still get breached. And often, the breach resulted from something basic like a default password or unlocked door. This is where pentesting is valuable because organizations can learn where their gaps are and remedy them before a real, costly cyberattack occurs.

In this webinar, you’ll hear from cybersecurity veterans J.R. Cunningham, CSO at Nuspire, and Nabil Hannan, Field CISO at NetSPI, who will share their pentesting stories – from the perspective of both the pentester and the organization being pentested.

They’ll cover:

  • Why pentesting is essential for any size organization
  • Pentesting examples, including social engineering, application-level issues and SQL injection
  • Overseeing pentesting from the client’s perspective, including how to manage expectations with leadership
  • How to best leverage pentesting results to fix security gaps
  • And more!

[wonderplugin_video iframe="https://youtu.be/WK8lNyjC1pA" lightbox=0 lightboxsize=1 lightboxwidth=1200 lightboxheight=674.999999999999916 autoopen=0 autoopendelay=0 autoclose=0 lightboxtitle="" lightboxgroup="" lightboxshownavigation=0 showimage="" lightboxoptions="" videowidth=1200 videoheight=674.999999999999916 keepaspectratio=1 autoplay=0 loop=0 videocss="position:relative;display:block;background-color:#000;overflow:hidden;max-width:100%;margin:0 auto;" playbutton="https://www.netspi.com/wp-content/plugins/wonderplugin-video-embed/engine/playvideo-64-64-0.png"]

[post_title] => Offensive vs. Defensive Security: Cyber Stories from the Field [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => offensive-vs-defensive-security [to_ping] => [pinged] => [post_modified] => 2023-09-14 16:23:39 [post_modified_gmt] => 2023-09-14 21:23:39 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=30779 [menu_order] => 15 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [4] => WP_Post Object ( [ID] => 30693 [post_author] => 53 [post_date] => 2023-07-31 09:06:42 [post_date_gmt] => 2023-07-31 14:06:42 [post_content] =>

Entering a high stakes sports competition without a game plan is a reliable way to lose a matchup. The same applies to cybersecurity: running your business without a proactive security plan is an effective way to get breached.

But what exactly does a winning cybersecurity playbook look like? In this panel, NetSPI’s Nabil Hannan sits down with Hudl’s Robert LaMagna-Reiter and PGA Tour’s J Oliva to learn how they get proactive with security planning and share pointers on how to create a cybersecurity playbook with winning potential.

Tune in for a discussion on:

  • Mastering the Game | Incident response planning and best practices for improving detections
  • Uniting the Team | Building a collaborative environment around offense and defense
  • Playing to Win | Asset and vulnerability management trials and triumphs
  • Scoring Success | Identifying the most effective KPIs for measuring program accomplishments

Whether you’re familiar with the world of sports or not, you’ll leave this cybersecurity timeout with actionable advice from your peers, equipping you to get the lead over adversaries and secure your business like a pro.

Be sure to check out the additional resources below, which may be useful as well.

[wonderplugin_video iframe="https://youtu.be/iQqvh39-cA0" lightbox=0 lightboxsize=1 lightboxwidth=1200 lightboxheight=674.999999999999916 autoopen=0 autoopendelay=0 autoclose=0 lightboxtitle="" lightboxgroup="" lightboxshownavigation=0 showimage="" lightboxoptions="" videowidth=1200 videoheight=674.999999999999916 keepaspectratio=1 autoplay=0 loop=0 videocss="position:relative;display:block;background-color:#000;overflow:hidden;max-width:100%;margin:0 auto;" playbutton="https://www.netspi.com/wp-content/plugins/wonderplugin-video-embed/engine/playvideo-64-64-0.png"]

[post_title] => Get Your Head in the Game: How to Create a Winning Cybersecurity Playbook [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => get-your-head-in-the-game-cybersecurity-playbook [to_ping] => [pinged] => [post_modified] => 2023-08-29 15:23:07 [post_modified_gmt] => 2023-08-29 20:23:07 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=30693 [menu_order] => 17 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [5] => WP_Post Object ( [ID] => 30324 [post_author] => 65 [post_date] => 2023-06-13 09:00:00 [post_date_gmt] => 2023-06-13 14:00:00 [post_content] =>

In simple terms, an API (application programming interface) is a piece of software used to talk to other pieces of software. The use of APIs continues to spike with no signs of slowing down. This presents more pathways that have the potential to be exploited, especially if API security isn’t prioritized through activities such as application penetration testing. Oftentimes security for APIs isn’t part of the development phase, but rather addressed after a launch if at all. 

The growing need for securing APIs over the last five years inspired Open Web Application Security Project (OWASP) to create the API Security Top 10, a list of the top API vulnerabilities facing developers and DevSecOps today. The 2023 list was just released and concluded API1:2023 – Broken Object Level Authorization and API2:2023 - Broken Authentication have remained in the top places for security concerns since 2019, showing us more work is needed to address these core vulnerabilities. 

Knowing that more and more APIs are being used to build software, security implications need to be top of mind for all IT leaders.

API Security is the Underdog We’re All Rooting For 

Organizations require clarity on the fact that API security needs to be prioritized alongside other security domains. Traditionally, software goes through security testing as a whole, instead of testing the APIs individually. This form of testing leads to missed information and possible vulnerabilities for adversaries to take advantage of.

Typically software includes many APIs, and automated scanning tools aren’t able to provide comprehensive results. Manual testing is needed to fully understand the breadth of security implications — which is a challenge for many organizations due to time, resource, and budget constraints.

API Security versus Application Security 

API security is a subset of application security that is more challenging because APIs are harder to remember to secure, given their development process and lack of use case foreshadowing.  

When a developer is building small bits of software, like APIs, they may not be able to foreshadow how it will ultimately be used, so security can fall to the wayside. Rather, when developers build a larger software application (general applications), security professionals often automatically think of adding security controls such as authentication, input validation, or output coding. The shift that needs to happen when working with APIs is that those automatic security responses are built into the requirements to become an inherent property of the APIs.

What's the difference between Web Application Penetration Testing and API Penetration Testing? Take a look!

API Security Best Practices 

The traditional pillars of AppSec apply to making APIs more secure, such as input validation, output coding, authentication, error handling, and encryption to name a few. IT security leaders need to think of these pillars and all the different ways in which APIs can be used to build out comprehensive security controls.  

In short, organizations need to build secure development frameworks with APIs that take the security considerations out of the developers’ hands – since they often don’t possess a security-first mindset – and build security directly into the APIs themselves. 

Go back to the basics. Every CISO can benefit from this practice. Just like with general software security, if you don’t go back to the basics first, you won’t be able to mature the program. Right now, the basics are where organizations are struggling. NetSPI’s 2023 Offensive Security Vision Report had similar findings. These foundational security flaws are ever-present, and we're still challenged by the basics across attack surfaces.

Questions to Consider Before API Pentesting 

API penetration testing is conducted in a similar manner to traditional web application testing. However, there are several nuances to API pentesting that must be considered during the scoping phase. Overall consultants require engagement from API developers to ensure that testing is done thoroughly. These questions explore what is specifically needed to maximize API pentesting success – from the very beginning.

1. Production vs Staging: Is it possible to provide testers with an API staging environment? 

NetSPI recommends providing penetration testers with a staging API environment. If testing is done in staging, the testers can use more thorough and invasive/comprehensive attacks. If testing is done in production, then testers will be forced to resort to more conservative attacks to avoid negatively affecting the system and disrupting the end-users.  

2. Rate Limiting: How is rate limiting implemented on the target API? Is rate limit testing in scope for this engagement? 

By leveraging rate limiting flaws, attackers can exploit race condition bugs or rack up costly service hosting bills.  

3. WAF Disabled: Is it possible to disable the API’s WAF or allow list the penetration tester’s IP range during the testing window? 

If possible, we recommend API WAFs are disabled when testing occurs. If testing is done in production, consider allow listing your testing team’s IP range. Read more on how it adds value to API pentesting here

4. New Features: Are there any new features in scope that we should focus on? 

New features that haven’t been reviewed for security issues are more likely to be vulnerable than hardened code.  

5. Denial of Service (DoS) Testing: During the test, will DoS testing be in scope? 

Denial of Service vulnerabilities of APIs can have a catastrophic impact on software systems.  

6. Source Code Assisted Testing: Will source code be provided to consultants during the test? 

By providing source code, consultants are enabled to test applications more thoroughly without additional cost. For additional information on source code assisted penetration tests, check out our article on “Why You Should Consider a Source Code Assisted Penetration Test.” 

Due to their programmatic nature, APIs provide additional customer interaction during the scoping process. By providing testers with the information listed above, testers are able to provide maximum value during an API penetration test and maximize the return on investment. 

Prioritize API Security with NetSPI's API Penetration Testing. Get Started.

Predictions for the Future of Security API 

Going forward, we’ll likely see a software development paradigm shift over the next five years that combines features from REST and SOAP security. There is likely to be a software development paradigm where some features from each method are used to create a combined superior method – something we’re already starting to see with Adobe and Google. This combination will take security out of the hands of the developers and allow for better “secure by design” adoption. We must enable developers to innovate with confidence.

Additionally, the concept of identity and authentication is changing — we need to move away from the traditional use of usernames and passwords and two-factor authentication, which relies on humans not making any errors. The authentication workflow will shift to what companies like Apple are doing around identity management with innovations like the iOS16 passkeys, and could even impact the OWASP API Security Top 10. This will be developed through APIs. 

APIs provide incredible value with connectivity between systems. They are here to stay, making API security a much-needed focus. NetSPI’s Application Penetration Testing gives your team a proactive advantage with identifying, prioritizing, and remediating vulnerabilities in a single platform. Bring proactivity to your API security by requesting a quote today.

[post_title] => Getting Started with API Security Best Practices  [post_excerpt] => API security has become a top priority and NetSPI’s API pentesting can help you get started with API security best practices. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => get-started-with-api-security-best-practices [to_ping] => [pinged] => [post_modified] => 2023-08-29 17:56:26 [post_modified_gmt] => 2023-08-29 22:56:26 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=30324 [menu_order] => 102 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [6] => WP_Post Object ( [ID] => 29416 [post_author] => 77 [post_date] => 2023-02-09 09:00:00 [post_date_gmt] => 2023-02-09 15:00:00 [post_content] =>

On February 9, NetSPI's Nick Landers and Nabil Hannan were featured in the Digital Journal article called What Cybersecurity Risk to AI Chatbots Pose?. Read the preview below or view it online.

+++

ChatGPT is a tool from OpenAI that enables a person to type natural-language prompts. To this, ChatGPT offers conversational, if somewhat stilted, responses. The potential of this form of ‘artificial intelligence’ is, nonetheless, considerable.

Google is launching Bard A.I. in response to ChatGPT and Microsoft is following closely with an application called Redmond.

What do these tools mean for the expanding threat landscape? To find out, Digital Journal sought the opinions of two NetSPI representatives.

First is Nabil Hannan, Managing Director at NetSPI. According to Hannan businesses seeking to adopted the technology need to stand back and consider the implications: “With the likes of ChatGPT, organizations have gotten extremely excited about what’s possible when leveraging AI for identifying and understanding security issues—but there are still limitations. Even though AI can help identify and triage common security bugs faster – which will benefit security teams immensely – the need for human/manual testing will be more critical than ever as AI-based penetration testing can give organizations a false sense of security.”

Hannan adds that things can still go wrong, and that AI is not perfect. This could, if unplanned, impact on a firm’s reputation. Hannan adds: “In many cases, it may not produce the desired response or action because it is only as good as its training model, or the data used to train it. As more AI-based tools emerge, such as Google’s Bard, attackers will also start leveraging AI (more than they already do) to target organizations. Organizations need to build systems with this in mind and have an AI-based “immune system” (or something similar) in place sooner rather than later, that will take AI-based attacks and automatically learn how to protect against them through AI in real-time.”

The second commentator is Nick Landers, VP of Research at NetSPI.

Landers looks at wider developments, noting: “The news from Google and Microsoft is strong evidence of the larger shift toward commercialized AI. Machine learning (ML) and AI have been heavily used across technical disciplines for the better part of 10 years, and I don’t predict that the adoption of advanced language models will significantly change the AI/ML threat landscape in the short term – any more than it already is. Rather, the popularization of AI/ML as both a casual conversation topic and an accessible tool will prompt some threat actors to ask, “how can I use this for malicious purposes?” – if they haven’t already.”

What does this mean for cybersecurity? Landers’ view is: “The larger security concern has less to do with people using AI/ML for malicious reasons and more to do with people implementing this technology without knowing how to secure it properly.”

He adds: “In many instances, the engineers deploying these models are disregarding years of security best practices in their race to the top. Every adoption of new technology comes with a fresh attack surface and risk. In the vein of leveraging models for malicious content, we’re already starting to see tools to detect generated content – and I‘m sure similar features will be implemented by security vendors throughout the year.”

Landers concludes, offering: “In short, AI/ML will become a tool leveraged by both offensive and defensive actors, but defenders have a huge head start at present. A fresh cat-and-mouse game has already begun with models detecting other models, and I’m sure this will continue. I would urge people to focus on defense-in-depth with ML as opposed to the “malicious actors with ChatGPT AI” narrative.”

Read the article at Digital Journal!

[post_title] => Digital Journal: What Cybersecurity Risk do AI Chatbots Pose? [post_excerpt] => NetSPI's Nick Landers and Nabil Hannan shared insights on what AI tools ChatGPT and Bard A.I. and mean for the expanding threat landscape. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => digital-journal-cybersecurity-risks-ai [to_ping] => [pinged] => [post_modified] => 2023-03-10 09:02:24 [post_modified_gmt] => 2023-03-10 15:02:24 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=29416 [menu_order] => 145 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [7] => WP_Post Object ( [ID] => 28903 [post_author] => 65 [post_date] => 2022-11-23 15:48:12 [post_date_gmt] => 2022-11-23 21:48:12 [post_content] =>

On November 23, NetSPI Managing Director, Nabil Hannan, was featured in the VentureBeat article called Why API Security is a Fast-growing Threat to Data-driven Enterprises. Read the preview below or view it online.

+++

As data-driven enterprises rely heavily on their software application architecture, application programming interfaces (APIs) occupy a significant position. APIs have revolutionized the way web applications are used, as they aid communication pipelines between multiple services. Developers can integrate any modern technology with their architecture by using APIs, which is highly useful for adding features that a customer needs.  

By nature, APIs are vulnerable to exposing application logic and sensitive data such as personally identifiable information (PII), which makes them an easy target for attackers. Often available over public networks (accessible from anywhere), APIs are typically well-documented and can be quickly reverse-engineered by malicious actors. They are also susceptible to denial of service (DDoS) incidents. 

The most significant data leaks are due to faulty, vulnerable or hacked APIs, which can reveal medical, financial and personal data to the general public. In addition, various attacks can occur if an API is not secured correctly, making API security a vital aspect for data-driven businesses today.

The Future of API Security

“We’re most likely going to see a different software paradigm shift in the next five years that combines features from REST and SOAP security. I believe there will be a software development paradigm where features from each method are used to create a combined superior method,” Nabil Hannan, managing director at NetSPI, told VentureBeat. “This combination will take security out of the hands of the developers and allow for better ‘secure by design’ adoption.”

Hannan said that the concept of identity and authentication is changing, and we need to move away from usernames and passwords and two-factor authentication, which relies on humans not making any errors. 

“The authentication workflow will shift to what companies like Apple are doing around identity management with innovations like the iOS16 keychain. This will be developed through APIs in the near future,” he said.

You can read the full article at VentureBeat!

[post_title] => VentureBeat: Why API Security is a Fast-growing Threat to Data-driven Enterprises [post_excerpt] => NetSPI Managing Director, Nabil Hannan, was featured in the VentureBeat article called Why API Security is a Fast-growing Threat to Data-driven Enterprises. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => venturebeat-why-api-security-is-a-fast-growing-threat-to-data-driven-enterprises [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:02 [post_modified_gmt] => 2023-01-23 21:10:02 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=28903 [menu_order] => 179 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [8] => WP_Post Object ( [ID] => 28873 [post_author] => 65 [post_date] => 2022-11-20 14:40:00 [post_date_gmt] => 2022-11-20 20:40:00 [post_content] =>

On November 20, NetSPI Managing Director, Nabil Hannan, was featured in the Datamation article called 5 Top Penetration Testing Trends in 2022. Read the preview below or view it online.

+++

Penetration testing is based on the premise that one of the best ways to safeguard the enterprise is to pretend to be a hacker and find the number of ways you can break into a business. 

The FBI uses this strategy. It often recruits criminals such as forgers and thieves who proved especially effective at crime and in thwarting the efforts of law enforcement. These former criminals become consultants who are highly skilled at spotting scams. Frank Abagnale is one of the most famous, the subject of the movie, “Catch Me If You Can”.  

Penetration testing is a formalization of this approach. A series of tools have been developed that are designed to automatically probe the network and systems for different weaknesses. 

1. Understand The External Attack Surface 

Nabil Hannan, managing director, NetSPI, has noted a greater focus on testing and understanding the external attack surface of organizations. 

Over the last two years, with the shift to working from home, businesses had to make drastic and rapid transformations in the way they operate. As a result, not only did the threat model of their business change, but the external facing attack surface of their organization evolved.

Enterprises now have assets that are exposed to the internet and are regularly changing — and these changes are occurring more rapidly with cloud-hosted systems. That’s one of the drivers behind attack surface management solutions, such as NetSPI’s ASM. They are being leveraged by organizations to continuously monitor attack surfaces and proactively identify any areas of risk in a timely manner.

“Creating and managing an accurate inventory of internet-facing assets and being able to identify potential exposures and vulnerabilities have become key focuses for many organizations,” Hannan said. 

You can read the full article at Datamation!

[post_title] => Datamation: 5 Top Penetration Testing Trends in 2022 [post_excerpt] => NetSPI Managing Director, Nabil Hannan, was featured in the Datamation article called 5 Top Penetration Testing Trends in 2022. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => datamation-5-top-penetration-testing-trends-in-2022 [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:03 [post_modified_gmt] => 2023-01-23 21:10:03 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=28873 [menu_order] => 181 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [9] => WP_Post Object ( [ID] => 28375 [post_author] => 65 [post_date] => 2022-09-08 17:21:00 [post_date_gmt] => 2022-09-08 22:21:00 [post_content] =>

On September 8, NetSPI Managing Director Nabil Hannan was featured in Security Magazine's article on National Insider Threat Awareness Month 2022. Read the preview below or view it online.

+++

September is National Insider Threat Awareness Month, which emphasizes the importance of safeguarding enterprise security, national security and more by detecting, deterring and mitigating insider risk.

The risks of espionage, violence, unauthorized disclosure and unknowing insider threat actions are higher than ever; therefore, maintaining effective insider threat programs is critical to reducing any security risks and increasing operational resilience.

National Insider Threat Awareness Month is an opportunity for enterprise security, national security and all security leaders to reflect on the risks posed by insider threats and ensure that an insider threat prevention program is in place and updated continuously to reflect the evolving threat landscape.

Below, in honor of National Insider Threat Awareness Month, security leaders offer advice on how to reduce insider threat risks effectively.

Nabil Hannan, Managing Director, NetSPI:

To account for internal threats, there must be a mindset shift in what constitutes an organization’s threat landscape. Most companies focus exclusively on external threats and view their own people as trustworthy. As a result, insider threats are often under-addressed cybersecurity threats within organizations. We learned with SolarWinds that detecting such a threat is vastly different from traditional pen testing, code review or other vulnerability detection techniques. 

Security teams need to move from only looking for vulnerabilities to also looking for suspicious or malicious code. With a vulnerability, the threat actor interacts with the attack surface in a way that exploits a weakness. With malicious code, the threat actor is either choosing or creating the attack surface and functionality because they have control over the system internally. 

So, instead of the threat actor exploiting vulnerabilities in the attack surface, now the threat actor creates the attack surface and exercises the functionality that they implement. Failing to implement threat modeling that studies potential threats to both vulnerabilities and malicious code can set your organization up with a false sense of security.

You can read the full article at Security Magazine!

[post_title] => Security Magazine: National Insider Threat Awareness Month 2022 [post_excerpt] => On September 8, NetSPI Managing Director Nabil Hannan was featured in Security Magazine's article on National Insider Threat Awareness Month 2022. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => security-magazine-national-insider-threat-awareness-month [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:16 [post_modified_gmt] => 2023-01-23 21:10:16 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=28375 [menu_order] => 215 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [10] => WP_Post Object ( [ID] => 28374 [post_author] => 65 [post_date] => 2022-09-06 14:01:00 [post_date_gmt] => 2022-09-06 19:01:00 [post_content] =>

On September 6, NetSPI Managing Director Nabil Hannan was featured in VMblog's article on September is National Insider Threat Awareness Month - Experts Weigh In. Read the preview below or view it online.

+++

September marks National Insider Threat Awareness Month, a time dedicated to emphasize the importance of detecting, deterring and reporting insider threats. This began as a collaborative effort by U.S. government agencies, three years ago and has now grown to both the public and private sector. 

In honor of the month, industry experts have shared their thoughts on different strategies organizations can use to protect themselves from these threats.

Nabil Hannan, Managing Director, NetSPI 

"To account for internal threats there must be a mindset shift in what constitutes an organization's threat landscape. Most companies focus exclusively on external threats and view their own people as trustworthy. As a result, insider threats are often under addressed cybersecurity threats within organizations. We learned with SolarWinds that detecting such a threat is vastly different from traditional pen testing, code review or other vulnerability detection techniques. Security teams need to move from only looking for vulnerabilities to also looking for suspicious or malicious code. With a vulnerability, the threat actor interacts with the attack surface in a way that exploits a weakness. With malicious code, the threat actor is either choosing or creating the attack surface and functionality because they have control over the system internally. So, instead of the threat actor exploiting vulnerabilities in the attack surface, now the threat actor creates the attack surface and exercises the functionality that they implement. Failing to implement threat modeling that studies potential threats to both vulnerabilities and malicious code can set your organization up with a false sense of security."

You can read the full article at VMblog!

[post_title] => VMblog: September is National Insider Threat Awareness Month - Experts Weigh In [post_excerpt] => On September 6, NetSPI Managing Director Nabil Hannan was featured in VMblog's article on September is National Insider Threat Awareness Month - Experts Weigh In. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => vmblog-national-insider-threat-awareness-month-experts-weigh-in [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:17 [post_modified_gmt] => 2023-01-23 21:10:17 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=28374 [menu_order] => 218 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [11] => WP_Post Object ( [ID] => 28022 [post_author] => 65 [post_date] => 2022-07-05 08:00:00 [post_date_gmt] => 2022-07-05 13:00:00 [post_content] =>

Bolstering over 350 exhibitors and more than 190 expert sessions, Infosecurity Europe is one of the largest gatherings of cybersecurity professionals in Europe. This year, the NetSPI team made an appearance in the exhibitor hall.  

During Infosecurity Europe, NetSPI officially announced its expansion into the EMEA region. We’ve experienced growing demand from EMEA organizations, and we feel that NetSPI is well-positioned to deliver in this region. 

Aside from the hustle and bustle of the conference itself, we devoted much of our time to the exhibitor hall – where we noticed a few interesting themes. Continue reading for our three key observations from Infosecurity Europe and our conversations with the EMEA cybersecurity community. 

Automate Where Necessary 

Walking the floor, the automation message was prevalent among vendor solutions. However, in conversations with end users, the underlying message was that automation needs to serve a purpose, linked to, for example, improving cybersecurity workflows and processes. As Lalit Ahluwalia, writes in this Forbes article, the top drivers for automation include the lack of skilled workers, lack of standardization, and the expanded attack surface

It is also important to understand that technology alone should not be viewed as a “silver bullet.” There is a fundamental need to ensure that skilled humans can triage the data to ensure accurate results and that the information delivered is valuable and actionable.  

Automation should enable humans to do their job better and spend more time on the tasks that matter most (e.g., in penetration testing, looking for critical vulnerabilities that tools cannot find). For more on the importance of people in cybersecurity, read Technology Cannot Solve Our Greatest Cybersecurity Challenges, People Can

Tightening of Venture Capital Funding and Cybersecurity Budgets 

Another heavily discussed topic at Infosecurity Europe centered around funding, budgets, and priorities. 

With the onset of COVID-19, we noticed an over-expansion of cybersecurity vendors – this was evident in the exhibitor space. We attribute this partly to the rise in remote work, increased ransomware attacks in the past year, and companies’ expanding attack surfaces.  

The cause for concern? 

With the current global economic downturn, many vendor solutions are now seen as a “nice to have”, budgets are being squeezed, and end users are prioritizing their investments based on risk.  

We also had conversations with end users who felt that the whole market is becoming a “Noah’s ark” of solutions – i.e., there are a lot of solutions that have been built in the hope end users see value. We foresee not just a consolidation of the vendors in the market, but also a consolidation of the actual solutions that end users view as critical to their needs. 

The reality is that financial winds of change are blowing, whether it is customers focusing on maximising the return on their budget, or investment dollars looking for a home, there is a tightening coming. While our industry is relatively well-placed to withstand these financial pressures, the ability to build those trusted relationships with our customers and help them achieve tangible positive outcomes will be a key differentiator. 

Emphasis on Business Enablement  

It was refreshing to see many vendors focus less on fear, uncertainty, and doubt and more on business enablement and benefits to the customer.  

Understanding how technology supports initiatives that enable a company to grow is a win-win tactic in our book. This is a positive change and one that will help customers understand which products and services are vital as they mature their security programs.  

The Future of Information Security in EMEA 

There is no doubt that cybersecurity is a vital component of every business, and that was evident at the conference. We’re excited to be a part of the momentum in the EMEA region and support the global cybersecurity community through our platform driven, human delivered methodology and our focus on business enablement. 

Infosecurity Europe may be over, but that doesn’t mean our conversation has to end. Connect with NetSPI today to learn how we can meet your global offensive security needs.

[post_title] => Infosecurity Europe 2022: Observations from the ExCel [post_excerpt] => Learn about three top key observations from Infosecurity Europe you need to know and what they mean. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => infosecurity-europe-oberservations [to_ping] => [pinged] => [post_modified] => 2023-05-23 08:58:05 [post_modified_gmt] => 2023-05-23 13:58:05 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=28022 [menu_order] => 244 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [12] => WP_Post Object ( [ID] => 28005 [post_author] => 65 [post_date] => 2022-06-28 08:00:00 [post_date_gmt] => 2022-06-28 13:00:00 [post_content] =>

In recent years, more organizations have adopted the “shift left” mentality. This concept moves application security testing earlier in the software development life cycle (SDLC) versus the traditional method of implementing security testing after deployment of the application.  

By shifting left, an organization can detect application vulnerabilities early and remediate them, saving time and money, and ultimately not delaying the release of the application.  

But not everything comes wrapped in a beautiful bow. In application security, I witnessed that shifting left comes with its fair share of trouble – two in fact: 

  • Overworked and understaffed teams
  • Friction between application security engineers and development teams 

During his time at Microsoft, Idan Plotnik, co-founder and CEO at Apiiro experienced these two roadblocks and created an application security testing tool that addressed both. I recently had the opportunity to sit down with him to discuss the concept of shift left and other application security challenges.  

Continue reading for highlights from our conversation including contextual pentesting, open-source security, and tips on how a business can better prepare for remote code execution vulnerabilities like Log4Shell. For more, listen to the full episode on the Agent of Influence podcast.  

Why is it important to get more context on how software has changed and apply that to pentesting? 

Idan Plotnik: One of the biggest challenges we are hearing is that organizations want to run penetests more than once throughout the development life cycle but are unsure of what and when to test. You don't want to spend valuable time on the pentester, the development team, and the application security engineer to run priority or scoping calls in every release. You want to identify the crown jewels that introduce risk to the application. You want to identify these features as early as possible and then alert your pentesting partner so they can start pentesting early on and with the right focus. 

It's a win-win situation.  

On one hand, you reduce the cost of engineers because you're not bombarding them with questions about what you've changed in the current release, when and where it is in the code, and what are the URLs for these APIs, etc.  

On the other hand, you’re reducing the costs of the pentest team because you’re allowing them to focus on the most critical assets in every release.  

Nabil Hannan: The traditional way of pentesting includes a full deep dive test on an application. Typically, the cadence we've been seeing is annual testing or annual requirements that are driven by some sort of compliance pressure or regulatory need.  

I think everybody understands why it would be valuable to test an application multiple times, and not just once a year, especially if it's going through changes multiple times in a year. 

Now, the challenge is doing these tests can often be expensive because of the human element. I think that's why I want to highlight that contextual testing allows the pentester to hone and focus only on the areas where change has occurred.  

Idan: When you move to agile, you have changes daily. You need to differentiate between changes that are not risky to the organization or to the business, versus the ones that introduce a potential risk to the business. 

It can be an API that exposes PII (Personally Identifiable Information). It can be authorization logic change. It can be a module that is responsible for transferring money in a trading system.  

These are the changes that you need to automatically identify. This is part of the technology that we developed at Apiiro to help the pentester become much more contextual and focused on the risky areas of the code. With the same budget that you have today, you can much more efficiently reduce the risks.  

Learn more about the partnership between NetSPI and Apiiro. 

Why is open-source software risk so important, and how do people need to think about it? 

Idan: You can’t look at open source as one dimension in application security. You must take into consideration the application code, the infrastructure code, the open-source code, and the cloud infrastructure that the application will eventually run on.  

We recently built the Dependency Combobulator. Dependency confusion is one of the most dangerous attack vectors today. Dependency confusion is where you’re using an internal dependency without a proper naming convention and then an attacker goes into a public package manager and uses the same name.  

When you can't reach your internal artifact repository or package manager, it will automatically fall back and access the package manager on the internet. Then, your computer will fetch or download the malicious dependency with the malicious code, which is a huge problem for organizations.  

The person who founded the dependency confusion attack suddenly receive HTTP requests from within Microsoft, Apple, Google, and other enterprises because he found some internal packages while browsing a few websites. He just wanted to play with the concept of editing the same packages with the same name to the public repository. 

This is why we need to help the community and provide them with an open-source framework that they can extend, so that they can run it from their CLI or CI/CD pipeline for every internal dependency. Contributing to the open-source community is an important initiative.  

What can organizations do to be better prepared for similar vulnerabilities to Log4Shell? 

In December 2021, Log4Shell sent security teams into a frenzy ahead of the holiday season. Idan and I discussed four simple steps organizations can take on to mitigate the next remote code execution (RCE) vulnerability, including: 

  1. Inventory. Inventory and identify where the vulnerable components are.
  2. Protection. Protect yourself or your software from being attacked and exploited by attackers from the outside.
  3. Prevention. Prevent developers from doing something or getting access to the affected software to make additional changes until you know how to deal with the critical issue.
  4. Remediation. If you do not have that initial inventory that is automated and happening systemically across your organization and all the different software that is being developed, you cannot get to this step.  

For the full conversation and additional insights on application security, listen to episode 39 of the Agent of Influence podcast.

Listen to Agent of Influence, Episode 39 with Idan Plotnik now
[post_title] => Addressing Application Security Challenges in the SDLC [post_excerpt] => Learn how Idan Plotnik, CEO of Apiiro, addresses challenges in application security and tips to help businesses protect against Log4Shell. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => application-security-challenges-sdlc [to_ping] => [pinged] => [post_modified] => 2023-04-28 13:35:01 [post_modified_gmt] => 2023-04-28 18:35:01 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=28005 [menu_order] => 247 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [13] => WP_Post Object ( [ID] => 28010 [post_author] => 65 [post_date] => 2022-06-24 12:35:00 [post_date_gmt] => 2022-06-24 17:35:00 [post_content] =>

On June 24, 2022, NetSPI Managing Director Nabil Hannan published an article in Solutions Review called Four Ways to Elevate Your Penetration Testing Program. Read the preview below or view it online.

+++

Let’s set the scene. For years, organizations have undergone compliance-based penetration testing (pentesting), meaning they only audit their systems for security vulnerabilities when mandated to do so by regulatory bodies. However, this “check-the-box” mindset that’s centered around point-in-time testing is leaving organizations at risk for potential exploitation.

From August-October 2021 alone, a total of 7,064 new Common Vulnerabilities and Exposures (CVE) numbers were registered – all of which could go undetected if a business does not have an established proactive security posture.

With malicious actors continuously evolving and maturing their attack techniques, organizations must leave this outdated mindset behind and take the necessary steps to develop a comprehensive, always-on penetration testing program. Here’s a look at how this can be accomplished.

Adopt an ‘as-a-Service’ Model

Traditional pentesting programs operate under a guiding principle: organizations only need to test their assets a few times a year to protect their business from potential vulnerabilities properly. During this engagement, a pentester performs an assessment over a specified period and then provides a static report outlining all of the found vulnerabilities. While once deemed the status quo, there are many areas for inefficiencies in this traditional model.

With threats increasing, organizations must take a proactive approach to their security posture. Technology-enabled as-a-Service models overhaul traditional pentesting programs by creating always-on visibility into corporate systems. For an as-a-Service model to succeed, the engagement should allow organizations to view their testing results in real-time, orchestrate faster remediation, and perform always-on continuous testing.

This hyperfocus on transparency from both parties will drive clear communication, with the pentesters available to address any questions or concerns in real-time – instead of just providing an inactionable static report. Additionally, it allows teams to truly understand the vulnerabilities within their systems so they can begin remediation before the end of the pentesting engagement.

Lastly, when working in an as-a-Service model, pentesters can help organizations become more efficient with their security processes, as they work as an extension of the internal team and can lend their industry expertise to help strengthen their clients’ security posture.

Read the full article online here.

[post_title] => Solutions Review: Four Ways to Elevate Your Penetration Testing Program [post_excerpt] => On June 24, 2022, NetSPI Managing Director Nabil Hannan published an article in Solutions Review called Four Ways to Elevate Your Penetration Testing Program. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => endpoint-security-elevate-penetration-testing-program [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:30 [post_modified_gmt] => 2023-01-23 21:10:30 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=28010 [menu_order] => 249 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [14] => WP_Post Object ( [ID] => 27989 [post_author] => 65 [post_date] => 2022-06-21 11:44:00 [post_date_gmt] => 2022-06-21 16:44:00 [post_content] =>

On June 21, 2022, NetSPI Managing Director Nabil Hannan published this article on TechTarget called How to Address Security Risks in GPS-enabled Devices. Read the preview below or view it online.

+++

Trendy consumer gadgets are reaching the market at an expedited rate in today's world, and the next new viral product is right around the corner. While these innovations aim to make consumers' lives easier and more efficient, the rapid development of these products often creates security risks for users -- especially as hackers and malicious actors get more creative.

When commercial drones were brought to market as recreational tools in 2013, for example, consumers jumped at the chance to use them for a wide range of personal purposes, from photography to flying practice. Many security risks emerged, however, and it became clear that drones can be used maliciously to do anything from tracking and monitoring to causing physical harm and societal disruption.

GPS-enabled devices are now experiencing the same growing pains.

The Current Threat Environment

GPS-enabled devices have been on the market for a while, but consumer use has boomed in recent years. The newest device making waves is Apple's AirTag -- a small device that tracks personal items such as keys, wallets and backpacks.

With an affordable price tag, consumers have jumped at the opportunity to keep track of their belongings more easily. As adoption has grown, however, so have security and privacy concerns. Malicious actors can easily slip these devices into peoples' belongings and track them.

While the risk to consumers is clear, businesses and influential figures can also be targeted. GPS-enabled devices can be used to track day-to-day business movements and identify exploitable weak points.

Apple has remediated some of these risks by releasing a personal safety guide outlining the steps users should take if they find an unknown AirTag or suspect someone has gained access to their product. Yet these risks highlight a broader problem with GPS-enabled devices. Threat modeling in the design phase of tech development must evolve to uncover emerging security risks -- before consumers get their hands on the devices.

Read the full article online.

[post_title] => TechTarget: How to Address Security Risks in GPS-enabled Devices [post_excerpt] => On June 21, 2022, NetSPI Managing Director Nabil Hannan was featured in this TechTarget interview called How to Address Security Risks in GPS-enabled Devices. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => techtarget-security-risks-gps-enabled-devices [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:31 [post_modified_gmt] => 2023-01-23 21:10:31 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=27989 [menu_order] => 252 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [15] => WP_Post Object ( [ID] => 27749 [post_author] => 65 [post_date] => 2022-05-10 07:00:00 [post_date_gmt] => 2022-05-10 12:00:00 [post_content] =>

What is the typical authentication setup for personal online accounts? The username and password. 

For too long, we have depended on this legacy form of authentication to protect our personal data. As more people rely on the internet to manage their most important tasks — online banking, applying for loans, running their businesses, communicating with family, you name it — many companies and services still opt for the typical username and password authentication method, often with multi-factor authentication as an option, but not a requirement.  

To combat the sophisticated attacks of hackers today, multi-factor authentication methods must be considered the bare minimum. [For those unfamiliar with the concept, multi-factor authentication, or MFA, requires the user to validate their identity in two or more ways to gain access to an account, resource, application, etc.] Then, starting on that foundation, security leaders must consider what other identity and access management practices can they implement to better protect their customers? 

For more insights on this global challenge, we spoke with authentication expert Jason Soroko, CTO-PKI at Sectigo, during episode 40 of the Agent of Influence podcast to learn more about the future of multi-factor authentication, symmetric and asymmetric secrets, digital certificates, and more. Continue reading for highlights from our discussion or listen to the full episode, The State of Authentication and Best Practices for Digital Certificate Management

Symmetric Secrets vs. Asymmetric Secrets  

The legacy username and password authentication method no longer offers enough protection. Let’s take a deep dive into symmetric secrets and asymmetric secrets to better understand where we can improve our processes. 

Symmetric secrets are an encryption method that use one key for both encrypting and decrypting a piece of data or file. Here’s a fun anecdote that Jason shared during the podcast: “Let’s say you and I want to do business. We agree that I could show up at your door tomorrow and if I knock three times, you will know it's me. Well, somebody could have overheard us having that conversation to agree to knock three times. It’s the same thing with a username and password. That's a shared symmetric secret.” 

According to Jason, the issue with this method is that the secret had to be provisioned out to someone or, in today’s context, keyed into memory on a computer. This could be a compromised endpoint on your attack surface. Shared secrets have all kinds of issues, and you only want to utilize them in a network where the number of resources is extremely small. And we should no longer use them for human authentication methods. 

Instead, we need to shift towards asymmetric secrets.   

Asymmetric secrets, which are used to securely send data today, have two keys: private and public. The public key is used for encryption purposes only and cannot be used to decrypt the data or file. Only the private key can do that. 

The private key is never shared; it never leaves a secured place (e.g., Windows 10, Windows 11, trusted platform module (TMP), etc.) and it’s what allows the authentication to occur securely. Not only that, but asymmetric secrets don’t require the 123 steps of authentication, improving the user experience overall. The ability for a hacker to guess or steal the asymmetric secret is much more difficult because it is in a secure element, Jason explains. 

Of course, some organizations have no choice but to stick with ancient legacy systems due to financial reasons. But the opportunity here is to complement that legacy authentication method with other controls so you can enhance your authentication system. 

Pitfalls of SMS Authentication 

If you’re considering SMS authentication, I hate to be the breaker of bad news, but that doesn’t offer comprehensive protection. SMS authentication was never built to be secure, and it was never intended to be used the way it is used popularly today. Now, not only do we have the issue of people using a protocol that’s inherently insecure by design, but hackers can easily intercept authentication messages sent via SMS. 

As Jason shared on the podcast, the shocking truth is that SMS redirection is commercially available. It only costs around $16 to persuade the telecommunications company to redirect SMS messages to wherever you want them to go, which shows how easily hackers can obtain messages and data. 

Learn more about telecommunications security, read: Why the Telecoms Industry Should Retire Outdated Security Protocols. 

Three Best Practices for Managing Digital Certificates 

Even with the implementation of multi-factor authentication, how do you know if a person or a device is trustworthy to allow inside your network? 

You achieve that with digital certificates also known as public key certificates. They’re used to share public keys and verify the ownership of a public key to the person or device that owns it. 

With so many people moving to remote work, this only amplifies the number of digital certificates to authenticate each day. It’s important to manage your digital certificates effectively to mitigate the risk of adversaries trying to access your organization’s network. 

For additional reading on the security implications of remote work, check out these articles: 

To get you started toward better digital certificate management, Jason shared these three best practices: 

  1. Take inventory: Perform a proper discovery of all the certificates that you have (TLS, SSL, etc.) to gain visibility into how many you have.
  2. Investigate your certificate profiles: Take into consideration your DevOps certificates, your IoT certificates, etc., and delve into how the certificates were set up, who set them up, how long the bit-length is, and whether is it a proper non-deprecated cryptographic algorithm.
  3. Adapt to new use cases: Look towards the future to determine if you can adapt to new use cases (e.g., can this be used to authenticate BYOD devices or anything outside the Microsoft stack, how will the current cryptographic algorithms today differ in the future, what about hybrid quantum resistance, etc.). 

The Future of Multi-Factor Authentication 

As mentioned at the beginning for this article, multi-factor authentication should be considered the bare minimum, or foundation, for organizations today. For organizations still on the fence about implementing this authentication method, here are three reasons to start requiring it: 

  • A remote workforce requires advanced multi-factor authentication to verify the entities coming into your network.
  • Most cyberattacks stem from hackers stealing people’s username and password. Multi-factor authentication adds additional layers of security to prevent hackers from accessing an organization’s network.
  • Depending on which method your organization utilizes, multi-factor authentication provides a seamless login experience for employees — sometimes without the need for a username or password if using biometrics or single-use code. 

More organizations are choosing to adopt multi-factor authentication and we can only expect to see more enhancements in this area.  

According to Jason, artificial intelligence (AI) will play an important role. Take convolutional neural networks for example. This is a type of artificial neural network (AAN) used to analyze images. If we were to apply convolutional neural networks to cybersecurity, we could train it to identify malicious known binaries or patterns quickly and accurately. Of course, this is something to look forward to in the foreseeable future. 

An area we’ve certainly made much progress on, though, is the ability to use machine learning to determine malicious activity in the credit card fraud detection space. 

Multi-Factor Authentication is Only the First Step 

At a bare minimum, every organization should start with multi-factor authentication and build from there. One-time passwords, email verification codes, or verification links are user-friendly and go a long way in effective authentication.  

Cyberwarfare coupled with a remote workforce and government scrutiny should prompt companies everywhere to bolster their cybersecurity defenses. The authentication methods and best practices Jason Soroko shared with me on the Agent of Influence podcast are a step in the right direction toward protecting your organization, employees, and — most importantly — your customers. 

Put your IAM and authentication processes to the test against real attacker techniques. Explore NetSPI’s red team operations.
[post_title] => Multi-Factor Authentication: The Bare Minimum of IAM [post_excerpt] => Learn how protecting your organization, employees, and customers starts with multi-factor authentication. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => multi-factor-authentication-the-bare-minimum-of-iam [to_ping] => [pinged] => [post_modified] => 2023-06-12 13:40:44 [post_modified_gmt] => 2023-06-12 18:40:44 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=27749 [menu_order] => 271 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [16] => WP_Post Object ( [ID] => 27746 [post_author] => 65 [post_date] => 2022-05-05 14:56:11 [post_date_gmt] => 2022-05-05 19:56:11 [post_content] =>

On May 5, 2022, Nabil Hannan was featured in the VMblog Get Expert Advice During World Password Day 2022. Preview the article below, or read the full article online.

+++

Did you know, today, May 5th, is World Password Day! The Registrar of National Day Calendar has designated the first Thursday of May of each year as World Password Day, and it is meant to promote better password habits - something we could all use, I'm sure. Passwords are critical gatekeepers to our digital identities, allowing us to access online shopping, banking, social media, private work, and life communications. 

We use a lot of online services in our daily lives. And we're constantly having to deal with the possibility of so many different types of attacks, making digital protection more and more important. So let World Password Day be a reminder and encourage people to protect themselves with a series of strong passwords.

To help get a handle on things, a number of industry security experts have chimed in to share their perspectives and opinions with VMblog readers.


--

Nabil Hannan, Managing Director, NetSPI

“World Password Day serves as a moment in time for organizations to re-evaluate password security best practices. Today, a strong authentication strategy must include policies for safe password storage, the most important aspect of password security. Additionally, at a bare minimum, every organization should start with multi-factor authentication and build from there. One-time passwords, email verification codes, or verification links are user-friendly and go a long way in effective authentication.

From a user perspective, all staff working within or alongside the organization should be required to use strong, complex passwords that follow NIST’s latest guidelines. Security leaders may also practice the principle of least privilege, where only those who need access to certain information have it. With these best practices, organizations can better bolster protection and set themselves up for success on World Password Day and beyond.”

[post_title] => VMblog: Get Expert Advice During World Password Day 2022 [post_excerpt] => On May 5, 2022, Nabil Hannan was featured in the VMblog Get Expert Advice During World Password Day 2022. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => vmblog-expert-advice-world-password-day-2022 [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:39 [post_modified_gmt] => 2023-01-23 21:10:39 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=27746 [menu_order] => 273 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [17] => WP_Post Object ( [ID] => 27699 [post_author] => 65 [post_date] => 2022-04-25 09:02:00 [post_date_gmt] => 2022-04-25 14:02:00 [post_content] =>

On April 25, 2022, Nabil Hannan was featured in the CSO article, SolarWinds breach lawsuits: 6 takeaways for CISOs. Preview the article below, or read the full article online.

+ + +

The SolarWinds compromise of 2020 had a global impact and garnered the resources of both public and private sectors in an all-hands-on-deck remediation effort. The event also had a deleterious effect on the SolarWinds stock price. These two events, were, predictably, followed by a bevy of civil lawsuits. Fast forward to late March 2022 and we have a federal court saying the suit that named SolarWinds; its vice president of security and CISO, Tim Brown; as well as two prime investor groups Silver Lake and Thoma Bravo may go forward.

As Violet Sullivan, cybersecurity and privacy attorney of client engagement at Redpoint Cybersecurity, observes, the judge finds that the plaintiffs “may have a claim, so the judge is going to hear it.” She explains, “It’s not what is being said in the order that is interesting. It’s what will be shown during the discovery process that is interesting. There will be questions in this suit including: Will the forensic reports be available during the discovery or covered by attorney-client privilege?”

Resource Cybersecurity According to Risk

CISOs are uniquely positioned to provide insight on the threat landscape to business operations and together create the appropriate risk management plan. I recently mentioned how cybersecurity is often something companies get around to. The SolarWinds cyberattack and the resultant civil lawsuits are demonstrating the need for the well-documented investment in cybersecurity must be at the forefront.

The managing director of NetSPI, Nabil Hannan, says, “Internal threats are still a lingering and often under-addressed cybersecurity threat within organizations, especially when compared to the resources applied toward external threats. But, with buy-in from an organization's leadership team, CISOs can have the resources needed to develop a proactive and ongoing threat detection governance program.”

Those who hesitate may find themselves playing catch up as they are spurred along by the new U.S. Securities and Exchange Commission initiative on the need for publicly sharing information security breach information within four days of discovery that the breach is material will affect direct change. Similarly, the SEC’s desire to have companies describe how they address cybersecurity will drive greater transparency within many companies. This SEC effort will pull infosec out of the back room and to the forefront, like policies, procedures, resourcing, and expertise will be on full display via the required SEC filings.

[post_title] => CSO: SolarWinds breach lawsuits: 6 takeaways for CISOs [post_excerpt] => On April 25, 2022, Nabil Hannan was featured in the CSO article, SolarWinds breach lawsuits: 6 takeaways for CISOs. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => cso-solarwinds-breach-lawsuits-6-takeaways-for-cisos [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:41 [post_modified_gmt] => 2023-01-23 21:10:41 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=27699 [menu_order] => 279 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [18] => WP_Post Object ( [ID] => 27600 [post_author] => 65 [post_date] => 2022-04-12 07:00:00 [post_date_gmt] => 2022-04-12 12:00:00 [post_content] =>

In application security, DevOps, and DevSecOps, “shift left” is a guiding principle for how organizations should implement security practices into the development process. For this reason, today’s application security testing tools and technologies are built to facilitate a shift left approach, but the term has taken on a new meaning compared to when it first entered the scene years ago.

Over the past decade, software development has drastically changed with the proliferation of impactful technology, such as APIs and open-source code. However, shift left has remained a North Star for organizations seeking to improve application security. Its meaning has become more nuanced for those attempting to achieve a mature application security framework.

I recently sat down with Maty Siman, Founder and CTO at Checkmarx on our Agent of Influence podcast to discuss application security and the concept of shift left. You can listen to the full episode here. Let’s explore four highlights from the discussion:

The “Lego-ization” of Software 

In the past, developers would build their solutions from the ground up, developing unique libraries to carry out any desired functionality within an application. Today, developers leverage a wide range of tools and technologies, such as web services, open-source code, third party solutions and more, creating software that is ultimately composed of a variety of different components.

As Maty alluded to during the Agent of Influence podcast, many in the industry have referred to this practice as the “lego-ization” of software, piecing together different premade, standardized Lego blocks to form a unique, sound structure.

While both traditional and modern, lego-ized methods are forms of software development, they demand a different set of expertise. This is where mature application security frameworks become invaluable. Maty explains that today’s developers are often working around the clock to keep up with the pace of digital transformation; they cannot just focus on code for vulnerabilities. They must also look at how the different components are connected and how they communicate with one another.

Each connection point between these components represents a potential attack surface that must be secured – but addressing this can also become a source of friction and perceived inconvenience for developers.  

The Impact of Today’s Open Source and API Proliferation 

The recent proliferation of software supply chain security threats has made the situation even more complex and dire for software developers, as malicious actors look to sneak malicious code into software as it’s being built.

As Siman explains during our podcast conversation, open source code makes up anywhere from 80 to 90 percent of modern applications. Still, developers are pulling these resources from a site like GitHub often without checking to see if the developer who created the package is trustworthy. This further exacerbates the security risk posed by the lego-ized development practices we see today, Maty warns.

Additionally, in recent years, there has been an explosive growth in the usage of APIs in software development. Organizations now leverage thousands of APIs to manage both internal and external processes but have not paid enough attention to the challenge of securing these deployments, according to Maty.

However, efforts have been made to set organizations on the right path in securing APIs, such as the OWASP API Security Project – but there is still a lot of work to be done. Check out the OWASP API Top 10 list, co-written by Checkmarx’s Vice President of Security Research, Erez Yalon.

Read: AppSec Experts React to the OWASP Top 10 2021

Many organizations are not aware of which or how many APIs their services take advantage of, which presents an obstacle towards securing them. As a result, Maty explains that the concept of a “software bill of materials,” or SBOM, is beginning to take shape as organizations seek to better understand the task at hand.

With APIs quickly becoming a favored attack vector for cybercriminals, the importance of developers getting a handle on API security cannot be overstated, which is especially crucial for application penetration testing. Simultaneously, the task is an immense one that many developers see as a headache or hindrance to their main goal, which is to deliver new software as quickly as possible. 

Shifting Left in an Evolving Application Development Landscape  

While the trends outlined above certainly present significant challenges when it comes to application security, they are not insurmountable. Maty advises that organizations can and should implement certain changes in their approach to application security to better support developers with appropriate application security testing tools and other resources.

One of the main issues organizations face in modern application security testing, including application penetration testing or secure code review, lies in the effort to shift left. Shift left is sometimes seen as a source of friction in the developer community. It is about finding and managing vulnerabilities as early as possible, which has only become more difficult and complex as development has evolved.

Read: Shifting Left to Move Forward: Five Steps for Building an Effective Secure Code Review Program

The amount of innovation in software development and implementation means that shifting as far left as possible is not always feasible or even the best approach. While detecting vulnerabilities in code as early as possible is a priority in application security, attempting to force developers to do so too early in the development process can exhaust developers and slow software delivery, as Maty advises.

For example, the use of integrated development environment (IDE) plugins can often make developers feel hindered and nagged by security rather than empowered by it. While they represent a shift to the extreme left in terms of security, they are not always a good idea to impose on developers.

No Right Way to Shift Left in Application Security 

Ultimately, the proper way to shift left is going to vary across organizations, depending on the software they are building and what is going into it. It is paramount to take a tailored approach that balances the security responsibilities placed on developers with the need to maintain agility and deliver software quickly.

Application development has changed significantly, and we can expect it to continue to change in the coming years. Creating and maintaining a mature application security framework will depend on maintaining a proper understanding of the tools and technologies developers are using and adjusting the organizational approach to application security accordingly.

For more, listen to episode 32 of Agent of Influence with Maty of Checkmarx:

For more, listen to episode 32 of Agent of Influence with Maty of Checkmarx: “Shift Left, But Not Too Left”: A Conversation on AppSec and Development Trends.
[post_title] => Application Security: Shifting Left to the Right Degree [post_excerpt] => Read application security best practices from our cybersecurity podcast discussion with Maty Siman, CTO at Checkmarx. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => shift-left-secure-software [to_ping] => [pinged] => [post_modified] => 2023-04-07 09:17:27 [post_modified_gmt] => 2023-04-07 14:17:27 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=27600 [menu_order] => 283 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [19] => WP_Post Object ( [ID] => 27514 [post_author] => 65 [post_date] => 2022-03-11 13:44:00 [post_date_gmt] => 2022-03-11 19:44:00 [post_content] =>

On March 11, 2022, Nabil Hannan guest authored a TechTarget article titled, How to Build a Security Champions Program. Preview the article below, or read the full article online here.

+ + +

Application security is more important than ever, as apps remain one of the most common attack vectors for external breaches. Forrester's latest "State of Application Security" report stated organizations are starting to recognize the importance of application security, and many have started embedding security practices more tightly into their development stages — a big step in the right direction.

It's important to understand, however, that building a world-class application security program can't happen overnight. A great deal of foundational work must be done before an organization can achieve results, including sharpening security processes around the software development lifecycle (SDLC) to identify, track and remediate vulnerabilities more efficiently. These efforts will eventually bring organizations to a high level of maturity.

Adoption of security in the SDLC is often lacking in many organizations. The answer to this problem lies within an organization's employee population. Companies should establish a security champions program, where certain employees are elected as security advocates and drivers of change.

To create a strong cybersecurity culture, security champions should be embedded throughout an entire organization. These individuals should have an above-average level of security interest or skill, with the goal of ultimately evangelizing and accelerating the adoption of a security-first culture — not only through software and application development, but throughout the organization.

Developing a security champions program doesn't need to be complicated. This four-step process helps organizations establish their program with ease.

Continue reading How to Build a Security Champions Program on TechTarget.

[post_title] => TechTarget: How to Build a Security Champions Program [post_excerpt] => On March 11, 2022, Nabil Hannan guest authored a TechTarget article titled, How to Build a Security Champions Program. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => techtarget-build-security-champions-program [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:49 [post_modified_gmt] => 2023-01-23 21:10:49 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=27514 [menu_order] => 296 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [20] => WP_Post Object ( [ID] => 27447 [post_author] => 65 [post_date] => 2022-03-08 07:00:00 [post_date_gmt] => 2022-03-08 13:00:00 [post_content] =>

The Federal Communications Commission (FCC) recently announced its proposal to update data breach laws for telecom carriers.

A key change in the proposal? Eliminating the seven-business-day waiting period required of businesses before notifying customers of a breach.  

Although the proposed FCC change would allow companies to address and mitigate breaches more quickly, it does not solve the greater issue at hand: The sensitive data collected by the telecoms industry is constantly at risk of being exploited by malicious actors.  

The Telecoms Threat Environment 

Protecting data within the telecoms industry is instrumental in ensuring customer privacy and safety.  

When telecom companies experience a data breach, hackers often target customer proprietary network information (CPNI) – “some of the most sensitive personal information that carriers and providers have about their customers,” according to the FCC. This includes call logs, billing account information, as well as the customer’s name, phone number, and more.  

In August 2021, T-Mobile suffered the largest carrier breach on record, with over 50 million current and former customers affected.  

To protect customers from further breaches, the telecoms industry must deploy configurations securely, enable end-to-end encryption, and return to security basics by enabling automation in vulnerability discovery and penetration testing.  

Misconfiguration Risk 

Networks, specifically telecommunications channels, continue to increase in complexity, causing an increased risk for misconfigured interfaces within organizations.  

From these misconfigurations, attackers can stitch together multiple weaknesses and pivot from one system to another across multiple organizations.  

In October 2021, LightBasin, a hacking group with ties to China, compromised telecom companies around the world. LightBasin used multiple tools and malware to exploit weaknesses in systems that were configured with overly permissive settings and neglected to follow the principle of least privilege.  

These hacking tactics are not unique. Had the telecoms industry instituted the proper channels for alerting and blocking on common attack patterns and known tactics, techniques, and procedures (TTPs) that attackers use widely they may have been able to prevent the LightBasin attack.  

Additionally, to protect against future attacks and data breaches, industries should build proper standards and automation to ensure that configurations are deployed securely and consistently monitored.  

The Need for End-to-End Encryption 

Enabling end-to-end encryption within mobile communication networks could help to combat some of the lateral movement strategies used by LightBasin and similar hacker groups.  

This lateral movement within telecommunications networks can be challenging for the industry to address for multiple reasons. The overarching issue? Telecommunications systems were not originally developed with security in mind and are not secure by design.  

The telecoms systems have flaws that cannot be fixed without major architectural changes and these systems have evolved to be utilized in a way that’s outside of the original creators’ intent.  

In particular, these mobile communications networks were not built with a quality of service guarantee or any type of end-to-end encryption to ensure that users’ data is not exposed while in transit.  

WhatsApp, for example, uses the Signal protocol to asymmetrically encrypt messages between users. The encrypted messages then get transmitted the via a WhatsApp facilitated server.  

This ensures that only the intended recipient can decrypt the message and others who attempt to do so will fail. Legacy telecoms players should adopt a similar approach for added protection to users’ communications.  

While end-to-end encryption can protect against lateral movement strategies, this does not mean the security is infallible. Just because the communication channel is secure doesn’t ensure application security. Users are still vulnerable to social engineering attacks, malware, and, as in WhatsApp’s case, the app itself may be vulnerable.  

To truly secure user data, the telecoms industry security must invest in holistic security strategies including application security testing.  

For more on end-to-end encryption, read Why Do People Confuse “End-to-End Encryption” with “Security”? 

Collaboration and Coordination 

As the telecoms industry begins to prioritize security, organizations harnessing the networks must also prioritize security.  

This includes ensuring multi-factor authentication between users and systems, the principle of least privilege, or even proper input validation and output encoding.  

In tandem, the telecoms industry should strive to build automated vulnerability management processes where possible. This ensure continuous checks and balances are in place to secure all deployed systems – both at the software and infrastructure levels.  

Where hackers have only become more sophisticated in the technology and methods used to acquire data, the telecoms industry has neglected to keep up.  

Currently, messages and calls can be spoofed, data is not encrypted while in transit, and the quality of service and protection is not guaranteed. We have adopted a network with inherent flaws in its design from a security perspective, and these systems are used by billions of people across the globe.  

The change in FCC guidelines mark significant progress. Given the current threat environment, security efforts in the telecoms industry must be prioritized to ensure billions of people and their data are protected. 

Learn more about the benefits of vulnerability management for the telecoms industry in our case study with a fast-growing provider.
[post_title] => Why the Telecoms Industry Should Retire Outdated Security Protocols [post_excerpt] => Learn how the telecommunications industry can invest in end-to-end encryption to secure user data and prevent breaches. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => why-telecoms-should-retire-outdated-security-protocols [to_ping] => [pinged] => [post_modified] => 2023-06-12 13:46:44 [post_modified_gmt] => 2023-06-12 18:46:44 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=27447 [menu_order] => 298 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [21] => WP_Post Object ( [ID] => 27392 [post_author] => 65 [post_date] => 2022-02-18 18:00:00 [post_date_gmt] => 2022-02-19 00:00:00 [post_content] =>

On February 18, 2022, Nabil Hannan was featured in a LifeWire article titled, Why Unwanted Tracking Is on the Rise. Preview the article below, or read the full article online here.

+ + +

It's never been easier to track your possessions thanks to gadgets like Apple AirTags, but they also contribute to a growing privacy problem.

Apple recently said it would improve AirTag safeguards after reports of people being tracked surreptitiously using AirTags. However, some experts say Apple's efforts won't be sufficient to protect users.

"Even with the personal safety guide released by Apple, consumers are still subject to increased risks, as it only gives consumers some tools to use if they suspect their device has been compromised," Nabil Hannan, managing director at cybersecurity firm NetSPI, told Lifewire in an email interview.

AirTags or CreepTags?

AirTags send out Bluetooth signals that nearby Apple devices can detect. Many people have claimed they've been tracked by people using AirTags without their knowledge.

[post_title] => Lifewire: Why Unwanted Tracking Is on the Rise [post_excerpt] => On February 18, 2022, Nabil Hannan was featured in a LifeWire article titled, Why Unwanted Tracking Is on the Rise. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => lifewire-why-unwanted-tracking-is-on-the-rise [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:53 [post_modified_gmt] => 2023-01-23 21:10:53 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=27392 [menu_order] => 306 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [22] => WP_Post Object ( [ID] => 27381 [post_author] => 65 [post_date] => 2022-02-15 04:00:00 [post_date_gmt] => 2022-02-15 10:00:00 [post_content] =>

On February 15, 2022, Nabil Hannan was featured in a TechNewsWorld article titled, 49ers Blitzed by Ransomware. Preview the article below, or read the full article online here.

+ + +

While their downstate rivals the Los Angeles Rams were busy winning Super Bowl LVI, the San Francisco 49ers were being clipped in a ransomware attack.

News of the attack was reported by the Associated Press after cybercriminals posted documents to the dark web that they claimed were stolen from the NFL franchise.

In a public statement obtained by TechNewsWorld, the team noted: “We recently became aware of a network security incident that resulted in temporary disruption to certain systems on our corporate IT network.”

Looking for Street Cred

Nabil Hannan, managing director at NetSPI, a penetration testing company in Minneapolis, maintained that it’s unusual for a ransomware gang to post exfiltrated data on the web without making any ransom demands.

“I would assume this is due to the fact that they weren’t able to hold any critical systems hostage,” he told TechNewsWorld.

“The gang may have been able to encrypt/steal some files or systems that were categorized as non-critical, but they likely knew that they wouldn’t be able to receive any ransom payout for such information,” he surmised.

“Most likely this was an act to get ‘street creds’ and pose that they were able to steal information from such a high profile organization to show their reach and ability to break into any system,” he said.

[post_title] => TechNewsWorld: 49ers Blitzed by Ransomware [post_excerpt] => On February 15, 2022, Nabil Hannan was featured in a TechNewsWorld article titled, 49ers Blitzed by Ransomware. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => technewsworld-49ers-blitzed-by-ransomware [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:53 [post_modified_gmt] => 2023-01-23 21:10:53 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=27381 [menu_order] => 308 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [23] => WP_Post Object ( [ID] => 27276 [post_author] => 65 [post_date] => 2022-02-01 16:12:33 [post_date_gmt] => 2022-02-01 22:12:33 [post_content] =>

While in the Kingdom of Saudi Arabia for the @Hack cybersecurity conference, we noticed a disconnect in the understanding of penetration testing. Many of the people we spoke with assumed pentesting and bug bounty programs were one and the same.

Spoiler alert: that assumption is incorrect. While they share a similar goal, pentesting services and bug bounties vary in impact and value.

In an effort to demystify the two vulnerability discovery activities, in this blog we will cover how each are used in practice, key differences, and explain the risks associated with solely relying on bug bounties.

What is a Bug Bounty Program?

Simply put, a bug bounty program consists of ethical hackers exchanging critical vulnerabilities, or bugs, for recognition and compensation. 

The parameters of a bug bounty program may vary from organization to organization. Some may scope out specific applications or networks to test and some may opt for a “free-for-all" approach. Regardless of the parameters, the process remains the same. A hacker finds a vulnerability, shares it with the organization, then, once validated, the organization pays out a bounty to the hacker. 

For a critical vulnerability finding, the average payout rose to $3,000 in 2021. Bounty payments have come a long way since 2013’s ‘t-shirt gate,’ where Yahoo offered hackers a $12.50 company store credit for finding a number of XSS (cross-site scripting) vulnerabilities – yikes.

What is Penetration Testing?

Penetration testing is an offensive security activity in which a team of pentesters, or ethical hackers, are hired to discover and verify vulnerabilities. Pentesters simulate the actions of a skilled adversary to gain privileged access to an IT system or application, such as cloud platforms, IoT devices, mobile applications, and everything in between. 

Pentesting also helps organizations meet security testing requirements set by regulatory bodies and industry standards such as PCI and HIPAA.

Pentesters use a combination of automated vulnerability discovery and manual penetration testing techniques. They work collaboratively to discover and report all vulnerability findings and help organizations with remediation prioritization. Pentesting partners like NetSPI work collaboratively with in-house security teams and are often viewed and treated as an extension of that team.

Penetration testing has evolved dramatically over the past five years with the emergence of penetration testing as a service (PTaaS). PTaaS enables more frequent, transparent, and collaborative testing. It streamlines vulnerability management and introduces interactive, real-time reporting. 

As an industry, we’ve shifted away from traditional pentesting where testers operate behind-the-curtain, then deliver a long PDF list of vulnerabilities for security teams to tackle on their own.

What is Penetration Testing?
For a more detailed definition, how it works, and criteria for selecting your penetration testing partner, read our guide.

6 Core Differences Between Pentesting and Bug Bounties

So, what are the greatest differences between pentesting and bug bounties? Let’s break it down into six components: personnel, payment, vulnerabilities, methodology, time, and strategy.

Personnel

Pentesters are typically full-time employees that have been vetted and onboarded to provide consistent results. They often work collaboratively as a team, rather than relying on a single tester. 

Bug bounty hackers operate as independent contractors and are typically crowdsourced from across the globe. Working with crowdsourced hackers can open the door to risk, given you cannot be 100% confident in their intentions and motives. 

Will they sell the intel they gather to a malicious party for additional compensation? Will they insert malicious code during a test? With full-time employees, there are additional guardrails and accountability to ensure the hacking is performed ethically.

Payment

With penetration testing vendors, the payment model can vary. Cost is often influenced by the size of the organization, the complexity of the system or application, vendor experience, the scope, depth, and breadth of the test, among other factors. 

With a bug bounty program, the more severe the vulnerability, the more money a bug bounty hunter makes. Keep in mind that negotiation of the bounty payment is very common with bug bounty programs, so it is important to factor in the time and resources to manage those discussions.

Additionally, one cause for concern with bug bounty payments is that instead of reporting vulnerabilities as they are found, it’s common for hackers to hold on to the most severe vulnerabilities for greater payout and recognition during a bug bounty tournament. 

Vulnerabilities

Because of the pay-per-vulnerability model bug bounty programs follow, it’s no surprise that many are focused solely on finding the highest severity vulnerabilities over the medium and low criticality ones. However, when chained together, lower severity vulnerabilities can expose an organization to significant risk.

This is a gap that penetration testing fills. Penetration testers chain together seemingly low-risk events to verify which vulnerabilities enable unauthorized access. Pentesters do prioritize critical vulnerabilities, but they also examine all vulnerabilities with a business context lens and communicate the risk each could pose to operations if exploited.

Vulnerability findings aside, there are also key differences in how the results are delivered. With bug bounties, it’s up to the person who found the vulnerability to decide when to disclose the flaw to the program – or save it for a tournament as mentioned above, or even disclose it publicly without consent.

Modern penetration testing companies like NetSPI operate transparently and report findings in real time as they are discovered. Plus, pentesters validate and retest to confirm the vulnerability exists, evaluate the risk it poses, and determine if it was fixed effectively.

Methodology

The greatest difference in the testing methodology of bug bounty programs and penetration testing services is consistency.

From our discussions with security leaders, the biggest challenge they face with bug bounty programs is that service, quality, project management, and other key methodology factors often lack consistency. Notably, the pool of independent contractors varies across experience and expertise. And the level of effort diminishes as rewarding, critical vulnerabilities are found and researchers move on to opportunities with greater opportunity for compensation.

Penetration testing is more methodical in nature. Testers follow robust checklists to ensure consistency in the testing process and make certain that they are not missing any notable gaps in coverage. They also hold each other accountable by working on teams. At NetSPI, our pentesters use the workbench in our Resolve PTaaS technology platform to collaborate and maintain consistency.

For any organization that has legal, regulatory, or contractual obligations for a robust security testing bug bounties simply cannot meet those requirements. Bug bounty programs are opportunistic. There is no assurance of full coverage testing as they do not adhere to defined methodology or checklists to ensure consistency from assessor to assessor, or assessment to assessment. Some bug bounties can use checklists upon request – for a hefty added cost.

Time

While bug bounty programs are evergreen and always-on, traditional penetration testing has been limited by time-boxed assessments.

To address this, first and foremost we recommend organizations provide their pentesting team with access to source code or perform a threat modeling assessment to equip their team with information a malicious hacker could gain access to in the wild. This allows pentesters to accurately emulate real attackers and spend more time finding business critical vulnerabilities.

The pentesting industry is rapidly evolving and is becoming more continuous, thanks to the PTaaS delivery model and attack surface management. Gone are the days of annual pentests that check a compliance box. We see a huge opportunity for integration with attack surface management capabilities to truly offer continuous testing of external assets.

Strategy

Penetration testing is a strategic security activity. On the other hand, bug bounty programs are very tactical and transactional: find a vulnerability, report it, get paid for it, then move on to the next hunt.

As noted earlier, penetration testing is often viewed as an extension of an internal security team and collaborates closely with defensive teams. You can also find pentesting partners that offer strategic program maturity advisory services. Because of this, pentesters deeply understand the systems, networks, applications, etc. and can assess them holistically. This is particularly beneficial for complex systems and large organizations with massive technology ecosystems.

Furthermore, strategic partnerships between penetration testing vendors and their partners lead to a greater level of trust, institutional knowledge, and free information exchange. In other words, when you work with a team of penetration testers on an ongoing basis, their ability to understand the mechanics of your company and its technologies lends itself to discovering both a greater number and higher quality of vulnerabilities.

Final Thoughts

The way penetration testing has and continues to evolve fills many of the gaps left by bug bounty programs. There is certainly room for both bug bounty programs and penetration testing in the security sector – in many cases the services complement one another. However, it is important to understand the implications and risks associated when deciding where to focus your efforts and budget. 

[post_title] => Penetration Testing Services vs. Bug Bounty Programs [post_excerpt] => What are the greatest differences between pentesting and bug bounties? We break it down into six components: personnel, payment, vulnerabilities, methodology, time, and strategy. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => penetration-testing-services-versus-bug-bounty [to_ping] => [pinged] => [post_modified] => 2023-08-22 09:52:04 [post_modified_gmt] => 2023-08-22 14:52:04 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=27276 [menu_order] => 310 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [24] => WP_Post Object ( [ID] => 27280 [post_author] => 65 [post_date] => 2022-02-01 07:00:00 [post_date_gmt] => 2022-02-01 13:00:00 [post_content] =>

On February 1, 2022, Nabil Hannan was featured in SHRM's article on the UKG ransomware attack. Preview the article below, or read the full article online here

+ + +

Along ordeal for customers of Ultimate Kronos Group (UKG) is nearing an end. The vendor has restored its time-keeping and payroll services after a ransomware attack disrupted the lives of thousands of HR professionals and employees alike.

But experts say fallout from the attack will continue, given that some customer data was stolen, companies will have to transition manual records back into UKG systems and shaken clients are questioning their future with the vendor.

In a public update on Jan. 22, UKG said it had restored core time, scheduling and payroll capabilities to all customers impacted by the ransomware attack on its Kronos Private Cloud system. The statement said UKG is now focused on the "restoration of supplemental features and nonproduction environments" and is offering video-based recovery guides to help customers reconcile their data.

The outage—which lasted more than a month for many UKG clients—forced thousands of organizations to scramble to create manual workarounds. It happened during a particularly challenging time of year; employers had to find ways to pay workers holiday pay and overtime as employees worked extra shifts to cover staff shortages caused by the omicron variant of the coronavirus and ongoing resignations.

UKG and companies using its services may be facing legal action. "Unfortunately, some customer data was stolen in the attacks and that creates a secondary concern for UKG and its clients," said Allie Mellen, a security and risk analyst with research and advisory firm Forrester. UKG confirmed in its latest public statement that the personal data of at least two of its customers had been "exfiltrated" or breached.

.....

Cautionary Tale for HR Tech Vendors

HR technology analysts say vendors and their clients should brace themselves for similar attacks as more hackers train their sights on sensitive employee data rather than customer data.

"The reality is we're going to see more of these attacks," said Trevor White, a research manager specializing in HCM technologies with Nucleus Research in Boston. "The question for HR vendors is how they'll limit disruption to their customers as they go about solving problems related to ransomware and other cyberattacks. Unless you pay the ransom, these things can take weeks to solve."

Nabil Hannan, managing director for NetSPI, an enterprise security testing and vulnerability management firm in Minneapolis, said too many organizations still focus on protecting customer data at the expense of securing employee data.

"Hackers are getting more creative and focusing more of their efforts on finding ways to lock up systems that on their face may not seem as critical but that have far-reaching impacts, like HR data," Hannan said.

[post_title] => SHRM: Concerns Linger Following UKG Ransomware Attack [post_excerpt] => On February 1, 2022, Nabil Hannan was featured in SHRM's article on the UKG ransomware attack. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => shrm-concerns-linger-following-ukg-ransomware-attack [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:54 [post_modified_gmt] => 2023-01-23 21:10:54 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=27280 [menu_order] => 309 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [25] => WP_Post Object ( [ID] => 27239 [post_author] => 91 [post_date] => 2022-01-25 12:09:02 [post_date_gmt] => 2022-01-25 18:09:02 [post_content] =>

On January 25, 2022, Travis Hoyt, Florindo Gallicchio, Charles Horton, and Nabil Hannan were featured in TechRound's 2022 Cybersecurity Predictions round up. Preview the article below, or read the full article online here.

  • Explore industry expert predictions on what’s in store for cybersecurity in 2022.
  • Cyber-attacks have remained a key concern throughout the COVID-19 pandemic. With 2021 now over, what does the new year have in store for cybersecurity?
  • We’ve collected predictions from industry experts, including HelpSystems’s Joe Vest, Gemserv’s Andy Green and more.

With many businesses continuing to work from home where possible and settling into a more hybrid style of work, cybersecurity has been a key concern across a range of industries.

Here, we’ve collected opinions from industry experts on what they predict 2022 has in store for cybersecurity.

Travis Hoyt, CTO at NetSPI

Attack surface management: “As organisations continue to become more reliant on SaaS technologies to enable digital transformation efforts, the security perimeter has expanded. Organisations now face a new source of cybersecurity risk as cybercriminals look to exploit misconfigurations or vulnerabilities in these SaaS technologies to wage costly attacks. In 2022, we can expect that organisations will become more focused on SaaS posture management and ensuring that their SaaS footprint is not left open as a vector for cyberattacks. This trend will be further accelerated by the insistence of insurance providers that organisations have a detailed understanding of their SaaS deployments and configurations, or face higher premiums or even a refusal of insurance altogether.”

Next generation architectures open new doors for security teams: “Interest in distributed ledger technology, or blockchain, are beginning to evolve beyond the cryptocurrency space. In 2022, we’ll begin to see the conversation shift from bitcoin to discuss the power blockchain can have within the security industry. Companies have already started using this next generation architecture, to better communicate in a secure environment within their organisations and among peers and partners. And I expect we’ll continue to see this strategy unfold as the industry grows.”

CFOs will make or break ransomware mitigation: “For too long, companies have taken a reactionary approach to ransomware attacks – opting to pay, or not pay, after the damage has already been caused. I expect to see CFOs prioritising conversations surrounding ransomware and cyber insurance within 2022 planning and budgetary meetings to develop a playbook that overalls all potential ransomware situations and a corresponding strategy to mitigate both damage and corporate spend. If they don’t lead with proactivity and continue to take a laggard approach to ransomware and cyber insurance, they are leaving their companies at risk for both a serious attack and lost corporate funds.”

Florindo Gallicchio, Managing Director and Head of Strategic Solutions at NetSPI

Cybersecurity budgets will rebound significantly from lower spend levels during the pandemic: “As we look to 2022, cybersecurity budgets will rebound significantly after a stark decrease in spending spurred by the pandemic. Ironically, while COVID-19 drove budget cuts initially, it also accelerated digital transformation efforts across industries – including automation and work-from-home infrastructure, which have both opened companies up to new security risks, leading to higher cybersecurity budget allocation in the new year. Decisions are being made in Fortune 500+ companies with CFOs on the ground, as these risk-focused enterprises understand the need for larger budgets, as well as thorough budgeted risk and compliance strategies. Smaller corporations that do not currently operate under this mindset should follow the lead of larger industry leaders to stay ahead of potential threats that emerge throughout the year.”

Charles Horton, Chief Operations Officer at NetSPI

Company culture could solve the cybersecurity hiring crisis: “It’s no secret that cybersecurity, like many industries, is facing a hiring crisis. The Great Resignation we’re seeing across the country has underscored a growing trend spurred by the COVID-19 pandemic: employees will leave their company if it cannot effectively meet their needs or fit into their lifestyle. From a retention perspective, I expect to see department heads fostering a culture that’s built on principles like performance, accountability, caring, communication, and collaboration. Once this team-based viewpoint is established, employees will take greater pride in their work, producing positive results for their teams, the company and themselves – ultimately driving positive retention rates across the organisation.”

Nabil Hannan, Managing Director at NetSPI

2022 is the year for API security: “In 2022, we will see organisations turn their attention to API security risks, deploying security solutions and conducting internal audits aimed at understanding and reducing the level of risk their current API configurations and deployments create. Over the past few years, APIs have become the cornerstone of modern software development. Organisations often leverage hundreds, and even thousands, of APIs, and ensuring they are properly configured and secured is a significant and growing challenge. Compounding this issue, cyberattackers have increasingly turned to APIs as their preferred attack vector when seeking to breach an organisation, looking for vulnerable connection points within API deployments where they can gain access to an application or network. For these reasons, securing APIs will be a top priority throughout 2022.”

The Skills Shortage Will Continue Until Hiring Practices Change: “In 2022 the cybersecurity skills gap will persist, but organisations that take a realistic approach to cybersecurity hiring and make a commitment to building cybersecurity talent from the ground up will find the most success in addressing it. The focus in closing the skills gap often relies on educating a new generation of cybersecurity professionals through universities and trade programs, and generally encouraging more interest in young professionals joining the field. In reality, though, these programs will only have limited success. The real culprit behind the skills gap is that organisations often maintain unrealistic hiring practices, with cybersecurity degrees and certification holders often finding untenable job requirements such as 3+ years of experience for an entry level job.”

[post_title] => TechRound: Cybersecurity Predictions for 2022 [post_excerpt] => On January 25, 2022, Travis Hoyt, Florindo Gallicchio, Charles Horton, and Nabil Hannan were featured in TechRound's 2022 Cybersecurity Predictions round up. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => techround-cybersecurity-predictions-for-2022 [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:10:56 [post_modified_gmt] => 2023-01-23 21:10:56 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=27239 [menu_order] => 314 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [26] => WP_Post Object ( [ID] => 27220 [post_author] => 53 [post_date] => 2022-01-24 16:47:10 [post_date_gmt] => 2022-01-24 22:47:10 [post_content] =>
Watch Now

Overview 

Today's approaches to defense in depth for application security are siloed and lack context, thus results have fallen short. But a layered approach is the key to building a world-class AppSec program that spans the entire Software Development Lifecycle (SDLC). So, how does our approach need to change? 

In this webinar, you’ll hear from three experts at each of the core security touchpoints within the Software Development Life Cycle (SDLC): at the code level, pre-deployment, and post-deployment.

Speakers include Nabil Hannan, managing director at NetSPI, Moshe Zioni, VP of strategy research at Apiiro, and Samir Sherif, CISO at Imperva. 

During this webinar, speakers will discuss:

  • Key timeframes to implement security testing – and why 
  • How to incorporate risk context across the SDLC 
  • Best practices for application penetration testing and secure code review 
  • Proper implementation of application security tools for continuous monitoring 
  • Plus, more tips to achieve a layered application security strategy 

Key highlights:

  • 1:21 – The state of AppSec testing 
  • 3:55 – Contextual AppSec testing 
  • 14:45 – Best practices for application pentesting and secure code review  
  • 30:40 – The implementation journey 
  • 42:00 – Q&A 

The State of AppSec Testing 

To get started, it’s important to have an understanding of the current state of today’s AppSec programs and application security in general.  

Key challenges with application security include:

  1. Siloed: Application security programs are siloed in most organizations. AppSec-related activities often happen without being in sync with the rest of the organization, but effective application security requires collaboration across the board.
  2. Lacks context: A lot of testing happens in different phases of the software development lifecycle (SDLC), but oftentimes it tends to lack context. Testing may be driven by customer needs or regulatory and compliance requirements, but often there’s not enough testing being done based on an organization’s software context and understanding when and why you need to test systems, other than specific requirements from external pressures. 
  3. Results fall short: When application security testing is siloed, lacks context, and doesn’t have proper strategy, the results are more likely to fall short.   

A layered testing approach is the key to building a world-class AppSec testing program that spans the entire SDLC, including code level, pre-deployment, and post-deployment.   

Contextual AppSec Testing 

For AppSec testing to be effective, context from across the SDLC is required to understand risk.  

Some of the benefits of context in each stage across the SDLC include:

  • Design 
    • Prioritize and trigger threat model sessions
    • Trigger contextual compliance reviews 
  • Branch 
    • Trigger contextual security code reviews and enrich data from SAST/SCA/GWs 
    • Trigger contextual compliance reviews 
    • Automate manual risk questionnaires 
    • Automate code governance 
  • Repository 
    • Gain complete visibility into AppSec infrastructure and CSS 
    • Actionable remediation work plan 
    • Trigger incremental plan testing 
    • Reduce SAST & SCA FP and prioritize results 
    • Detect compromised results  
  • CI/CD 
    • Prevent build-time code injection attacks (SolarWinds)  

Best Practices for Application Pentesting and Secure Code Review  

Understanding best practices for application pentesting and secure code review can help ensure your approach is as effective as possible.

Here are some ways optimize your application pentesting: 

1. Risk-based pentesting is key 

  • Understand how your business makes money 
  • Prioritize remediation of vulnerabilities that pose the greatest risk to the organization 
  • Loop in finance and risk leadership 
  • Contextual pentesting 

2. Strategy is the future 

  • Informed pentesting is more valuable, as hackers aren’t bound by time 
  • Threat modeling and secure design reviews 
  • Pair point-in-time testing with always-on monitoring 
  • Bug bounty vs. pentesting 

3. Enable manual testing  

  • Enable your testing team to find vulnerabilities that tools miss 
  • According to NetSPI testing data, 63% of critical vulnerabilities are found through manual testing 
  • External network pentesting finds 10x more critical vulnerabilities than a single network vulnerability scanning tool 

4. Take a holistic approach 

  • Validation of security controls 
  • Understanding how everything works together  

Another important aspect is building an effective secure code review program. Some step to do this include:

  1. Establish a security culture and listen to your developers 
  2. Create simple and effective methodologies and processes 
  3. Plan application onboarding and scan frequency 
  4. Understand that remediation matters most 
  5. Measure and improve over time  

As you formalize your company’s AppSec program, following a maturity checklist can help set the program up for success.

Make sure to include the following steps your application security program maturity checklist:

  • Formalize your roadmap 
  • Governance in the SDLC 
  • Establish metrics that matter 
  • Be an AppSec ambassador

The Journey to Implement AppSec 

When it comes to how an organization looks at and approaches application security in general, breadth is an important framework to redefine and conceptualize application security.

This framework includes: 

  • Shift-left to dev training and code analysis 
  • Heavy focus on in-app and perimeter protections 
  • Shift-right to advanced, proactive, and managed services  

Left-to-right application security features the following solutions: 

  • Awareness and education
    • Learning, training, threat modeling 
  • Code analysis
    • SAST, DAST, IAST, SCA, code risk 
  • In-app protection 
    • RASP, CWPP (EW) 
  • Perimeter protection 
    • WAAP, CWPP (NS), DDoS, Zero Trust 
  • Advanced solutions 
    • Bot, insights, fraud, 3rd party, TI, CDR, DLP 
  • Proactive solutions 
    • VM, CSPM, CIEM, BAS, EASM, MDR 

Partner with NetSPI to Improve Application Security  

NetSPI’s Application Security as a Service helps organizations manage multiple areas of their application security program.

Our AppSec as a service capabilities combine the power of technology through our vulnerability management and orchestration platform with our leading cybersecurity consulting services featuring expert human pentesters to ensure you can build and manage a world-class application security program.

Learn more about our AppSec as a service offerings and partner with NetSPI to drive your application security program forward to meet your security objectives. Schedule a demo today

[wonderplugin_video iframe="https://youtu.be/bml1NTFqxFA" lightbox=0 lightboxsize=1 lightboxwidth=1200 lightboxheight=674.999999999999916 autoopen=0 autoopendelay=0 autoclose=0 lightboxtitle="" lightboxgroup="" lightboxshownavigation=0 showimage="" lightboxoptions="" videowidth=1200 videoheight=674.999999999999916 keepaspectratio=1 autoplay=0 loop=0 videocss="position:relative;display:block;background-color:#000;overflow:hidden;max-width:100%;margin:0 auto;" playbutton="https://www.netspi.com/wp-content/plugins/wonderplugin-video-embed/engine/playvideo-64-64-0.png"]

[post_title] => Application Security In Depth: Understanding The Three Layers Of AppSec Testing [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => application-security-in-depth-understanding-the-three-layers-of-appsec-testing [to_ping] => [pinged] => [post_modified] => 2023-07-12 12:43:09 [post_modified_gmt] => 2023-07-12 17:43:09 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=27220 [menu_order] => 49 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [27] => WP_Post Object ( [ID] => 27166 [post_author] => 65 [post_date] => 2022-01-18 07:00:00 [post_date_gmt] => 2022-01-18 13:00:00 [post_content] =>

Today’s business environment extends far beyond traditional brick and mortar organizations. Due to an increased reliance on digital operations, the frequency and complexity of supply chain cyber attacks — also known as vendor risk management or third-party security — are growing exponentially. It’s apparent that business leaders can no longer ignore supply chain security.

Not only did we see an increase in supply chain attacks in 2021, but the entire anatomy of an organization’s attack surface has evolved significantly. With more organizations shifting to a remote or hybrid workforce, we’ve seen a spike in cloud adoption and a heavy reliance on digital collaboration with third-parties.

Over the past few years we’ve introduced many new risks into our software supply chains. So, how do we ensure we don’t become the next SolarWinds or Accellion? In this blog, we reveal four supply chain security best practices to get you started on solid footing.

First, understand where the threats are coming from. 

With so many facets of the supply chain connected through digital products, organizations and security leaders need to understand which sectors are most vulnerable and where hackers can find holes — both internally and externally.

A recent study found that 70% of all breaches are caused by an outside force, and 17% were specifically from malware. This is to be expected. As software developers have been outsourced more frequently, the doors have opened to traditional malware attacks and breaches. Businesses need to understand how and where their resources can be accessed, and whether these threats can be exploited. However, malicious code detection is known to be very difficult. Standard code reviews won’t always identify these risks, as they can be inserted into internally-built software and mimic the look and feel of regular code. This is one of the biggest trends leaders must be aware of and fully understand which threats could impact their organization.

In addition to malware, hackers have begun attacking multiple business assets outside of an organization's supply chain through “island hopping.'' We’re seeing 50% of today’s cyber attacks use this technique. Security leaders need to identify and monitor island hopping attacks frequently to stay ahead of the vulnerability. Gone are the days where hackers target an organization itself — instead adversaries are going after an organization's partners to gain access to the initial organization's network.

Supply Chain Security Best Practices

How do organizations ensure they don’t become the weakest link in the supply chain? First and foremost, be proactive! Businesses must look at internal and external factors impacting their security protocol and implement these four best practices.

1. Enforce security awareness training.

Ensure you are training your staff not only when they enter the organization, but also on a continuous basis and as new business emerges. Every staff member, regardless of level or job description, should understand the organization's view and focus on security, including how to respond to phishing attempts and how to protect data in a remote environment. For example, in a retail environment, all internal employees and third-party partners should understand PCI compliance, while healthcare professionals need a working knowledge of HIPPA. The idea is to get everyone on the same page so they understand the importance of sensitive information within an organization and can help mediate a threat when it is presented.

2. Enact policy and standards adherence.

Adherence to policies and standards is how a business keeps progressing. But, relying on a well-written standard that matches policy is not enough. Organizations need to adhere to that policy and standards, otherwise they are meaningless. This is true when working with outside vendors as well. Generally, it’s best to set up a policy that meets an organization where it is and maps back to its business processes – a standard coherence within an organization. Once that’s understood, as a business matures, the policy must mature with it. This will create a higher level of security for your supply chain with less gaps.

In the past, we’ve spent a lot of time focusing on policies and recommendations for brick and mortar types of servers. With the new remote work and outsourcing increasing, it’s important to understand how policies transfer over when working with vendors in the new remote setting. 

3. Implement a vendor risk management program.

How we exchange information with people outside of our organization is critical in today’s environment. Cyber attacks through vendor networks are becoming more common, and organizations need to be more selective when choosing their partners.

Once partners are chosen, security teams and business leaders need to ensure all new vendors are assessed with a risk-based vendor management program. The program should address re-testing vendors according to their identified risk level. A well-established, risk-based vendor management program involves vendor training — follow this three-tiered approach to get started: 

  • Tier one: Organizations need to analyze and tier their vendors based on business risk so they can hone in on different security resources and ensure they’ve done their due diligence where it matters most. 
  • Tier two: Risk-based assessments. The higher the vendor risk, the more their security program should be accessed to understand where an organization’s supply chain could be vulnerable – organizations need to pay close attention here. Those categorized as lower risk vendors can be assessed through automated scoring, whereas medium risk vendors require a more extensive questionnaire, and high-risk vendors should showcase the level of their security program through penetration testing results. 
  • Tier three: Arguably most important for long term vendor security. Re-testing vendor assessments should be conducted at the start of a partnership, and as that partnership grows, to make sure they’re adhering to protocol. This helps confirm nothing is slipping through the cracks and that the safety policies and standards in place are constantly being met. 

4. Look at the secondary precautions. 

Once security awareness training, policy, and standards are in place, and organizations have established a successful vendor risk management program, they can look at secondary proactive measures to keep supply chain security top of mind. Tactics include, but are not limited, to attack surface management, penetration testing services, and red team exercises. These strategic offensive security activities can help identify where the security gaps exist in your software supply chain.

Now that so many organizations are working with outside vendors, third-party security is more important than ever. No company wants to fall vulnerable due to an attack that starts externally. The best way to prepare and decrease vulnerability is to have a robust security plan that the whole company understands. By implementing these four simple best practices early on, businesses can go into the new year with assurance that they won’t be the weakest link in the supply chain — and that they’re safeguarded from external supplier threats.

Want to learn more about how to strengthen your software supply chain security? Watch the on-demand webinar: "How NOT To Be The Weakest Link In The Supply Chain"
[post_title] => Best Practices for Software Supply Chain Security [post_excerpt] => Take these four steps to improve your software supply chain security. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => best-practices-software-supply-chain-security [to_ping] => [pinged] => [post_modified] => 2023-08-22 09:53:35 [post_modified_gmt] => 2023-08-22 14:53:35 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=27166 [menu_order] => 317 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [28] => WP_Post Object ( [ID] => 27147 [post_author] => 65 [post_date] => 2022-01-11 11:49:08 [post_date_gmt] => 2022-01-11 17:49:08 [post_content] =>

Cybersecurity is a moving target. As adversaries evolve, the methodologies we use to protect our businesses must evolve in tandem.  

Penetration testing is a great example of a category that must continuously innovate to keep pace with attackers. After all, the goal of penetration testing services is to emulate real-world attack tactics, techniques, and procedures (TTPs) as accurately as possible. 

Traditional penetration cannot keep pace with the realities of business agility and hacker ambitions. Without innovation and evolution, it remains slow, stodgy, inconsistent, and simply checks a compliance box. So, how do we drive the industry forward? Strategy is key, according to Manish Khera, CISO at a national utilities company. 

I recently invited Manish on the Agent of Influence podcast, a place to share best practices and trends in the world of cybersecurity and vulnerability management. We dove deep into the future of penetration testing, among other topics. When discussing the evolution of pentesting, he believes strategy is key – and I couldn’t agree more. Taking a strategic approach to security testing is vital. Continue reading to learn why, and for highlights from our conversation. 

Do you believe the security mindset has migrated to a more proactive approach today? Or do you think there's more work that needs to happen? 

Manish: I think we have become more proactive. Is it working? Hard to say. We have created proactive programs like AppSec and the concept of shifting left for example. We talk about security assessments and consulting, and security is getting involved earlier on projects. We’re making sure that it's not a “stage gate” pentest that occurred to assess a project. We've obviously grown and matured in that regard.  

So, what is the right approach? If we're too proactive, we may miss some of the needs for last-minute reviews. A pentest before a go-live for an external facing application, for example, is a good best practice. Ideally, we have good application security processes in place early on – SAST, DAST, whatever scans, plugins, et cetera, to get a better feel of low hanging fruit. It is a tough hill to climb balancing proactive and reactive security, but we are getting better. 

Nabil: You mentioned something that resonates well with how I talk about pentesting. Ultimately, people tend to start their security practices with penetration testing as a way to discover vulnerabilities. But I think as you mature, you have to change that mindset to view pentesting as a way to determine how effective our other controls are. Keeping that in mind... 

How does penetration testing need to evolve based on the trends you're seeing in the industry? 

Manish: I think you and I are on of one mind in this space. I do agree with you 100%, that pentesting has to evolve. The idea of it being a report card or simply finding vulnerabilities, when it should be the sum total of great activities up to that point. For future pentesting, we must do a couple different things.

Listen Now: Application Security and Penetration Testing Insights from a Utilities Sector CISO. Episode 29 of Agent of Influence with Manish Khera

Organizations should be more thoughtful about their approach. We should be willing to spend the money to threat model down to give proper avenues for pentesting vendors or your internal pentesting team. Organizations are often afraid to engage a pentesting vendor over a long period of time, and we feel we’ve spent too much money pentesting. However, we need to threat model, work with that vendor, and spend time with them to make sure they have enough time and resources to not just find vulnerabilities that are lucky to find, but also business context vulnerabilities. 

If I say “you have two weeks to get this done” that is not really a good pentest. Get that vendor in, spend a day with them, have them understand what the actual threat vectors are, understand the important parts of that application and data sets are, what the target would be from an informed, authenticated user, and so on. Then give them time to figure it out. The vendor should be smart about it too. It’s on both sides to be smart about it. It can't be a time box, very slim budget event. It's got to be thoughtful and threat-focused versus, “I have 50k to spend on a pentest.” 

I also think that the “shift left” marketing schemes have to come into play. We've got to get better integrated in using scans, using ID plugins, and teaching developers how to code better. We call this a security champions program. Have somebody from the development team join the appsec team and work with them to better understand appsec processes. Then, they go back to the development team and become champions that speak the same language as across teams. 

All of a sudden, pentesting becomes an event that clears the scorecard. If you practice good security up to that point, the vulnerabilities you find are more likely to be small efforts versus huge efforts that delay projects from going live. I hope that pentesting matures in that regard – but only time will tell.

Threat modeling can be time consuming, but valuable. Can you share a scenario where you found that threat modeling something, and then using that to drive a pentest or a security activity, was more valuable? 

Manish: The first time you do threat modeling is always the heaviest lift. Determining what framework to follow and how to create the process so that it is repeatable is most time consuming. But it does get easier over time if you follow a consistent framework. Especially if you have the same teams involved or a threat modeling champion engaged when a vendor comes in to do the threat modeling engagement.  

In terms of a key win or scenario, every time we do it, we find a better way to approach a pentest or improve our security activities. Every threat modeling assessment produces something that is shocking or surprising. I think you should always do it, because there’s always an opportunity to gain a better understanding of your applications and enable better tests. 

Essentially, coming in “blind” to do a pentest is rarely as valuable as having more details and information about how the system is architected. Taking an approach where you're enabling your pentesters with as much detail as possible only allows you to get better results. I'm not a big fan of “black box” testing or unauthenticated testing. We should assume that an adversary has deep inside knowledge of the environment, because they likely do. They can buy it or coerce somebody to give it to them – they can get it one way or another. We have to open our eyes to that scenario. We want informed testing and we want detailed reviews. That's how we drive value. 

For more on the future of penetration testing – plus, insights on cybersecurity challenges in the utilities sector, consultancy vs. in-house security leadership roles, how to build a security champions program, and more – listen to episode 29 of Agent of Influence, featuring Manish Khera.

[post_title] => Strategic Penetration Testing is the Future [post_excerpt] => Learn why taking a strategic penetration testing approach is vital. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => future-of-penetration-testing-strategy [to_ping] => [pinged] => [post_modified] => 2023-08-22 09:54:15 [post_modified_gmt] => 2023-08-22 14:54:15 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=27147 [menu_order] => 319 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [29] => WP_Post Object ( [ID] => 26881 [post_author] => 65 [post_date] => 2021-12-07 16:28:08 [post_date_gmt] => 2021-12-07 22:28:08 [post_content] =>

On December 7, 2021, NetSPI Managing Director Nabil Hannan was a featured guest on ITSPmagazine’s Redefining Security Podcast, where they discuss the new OWASP Top 10 2021. Listen below or view online here.

Episode Summary

Every few years, a group of individuals work together to deliver what has become a staple in application security practices: The Open Web Application Security Project (OWASP) Top 10. In the 2021 edition, the team took a fresh look at the data and what it means. Everything changed while staying the same.

Episode Notes

Every few years, a group of individuals work together to deliver what has become a staple in application security practices: The Open Web Application Security Project (OWASP) Top 10. In the 2021 edition, the team took a fresh look at the data and what it means. Everything changed while somehow stayed the same.

The real changes are in how organizations should look at this information and how to use it to make a difference in their application development and information security programs. While data analytics played a huge role in changing the game for the OWASP Top 10 for 2021, it's the humans that will see the outcomes come to fruition. Or, at least we hope.

____________________________

Guests

Diana Kelley
On ITSPmagazine https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/diana-kelley

Andrew van der Stock
On LinkedIn | https://www.linkedin.com/in/vanderaj/
On Twitter | https://twitter.com/vanderaj

Nabil Hannan
On LinkedIn | https://www.linkedin.com/in/nhannan/
On Twitter | https://twitter.com/nabilhannan

[post_title] => ITSP Magazine: The OWASP Top 10 2021 Edition: What Changed And What Must You Change In Application Development Given The Updated Top List Of Broken (AKA Weak Or Vulnerable) Things? [post_excerpt] => On December 7, 2021, NetSPI Managing Director Nabil Hannan was a featured guest on ITSPmagazine’s Redefining Security Podcast, where they discuss the new OWASP Top 10 2021. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => itsp-magazine-redefining-security-owasp-top-10-2021 [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:11:04 [post_modified_gmt] => 2023-01-23 21:11:04 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26881 [menu_order] => 335 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [30] => WP_Post Object ( [ID] => 26810 [post_author] => 65 [post_date] => 2021-12-07 07:00:00 [post_date_gmt] => 2021-12-07 13:00:00 [post_content] =>

Shortly after Thanksgiving, we packed our bags and ventured off to Riyadh, Saudi Arabia for the inaugural @Hack cybersecurity event. We were invited to exhibit at the SecureLink booth, who we recently partnered with to expand NetSPI’s services to the Middle East and Africa (MEA).

Over the past two years, the Kingdom of Saudi Arabia has gone through accelerated digital transformation, driven heavily by its Vision 2030 reform plan. And with this digital transformation, comes expanded attack surfaces and more exposure to cyber threats. This was a key theme and concern during the event – and a large part of why the event was organized in the first place.

It was exciting to see the energy and enthusiasm around technology and cybersecurity (almost as exciting as when we realized that @Hack was synonymous with “attack”). @Hack organizers estimated that more than 14,000 people from 70 countries were in attendance, many of which we spoke to at the NetSPI stand about the state of security in Saudi Arabia, penetration testing, cybersecurity education, cybersecurity jobs, and more.

As we packed up to head to our next destinations, we took time to reflect on our conversations, the people we met, and the key themes we observed on the show floor.

Cybersecurity Maturity in the Kingdom of Saudi Arabia

The Kingdom of Saudi Arabia has only recently focused on transforming their technological infrastructure and has invested in becoming a technological powerhouse in the region. At the conference itself, we saw the use of QR codes, mobile payments, digital sharing of contact information, and more. Although their technology adoption is very high, there is an opportunity for the region to mature its understanding of and focus on cybersecurity challenges.

One of the younger attendees came from Egypt and participated in the “bug bounty” challenge. He came in 2nd place and mentioned how the challenge to him was simple compared to what he was used to in his home country. To us, this indicates that security is not necessarily at the forefront of Saudi Arabia’s considerations when acquiring or deploying technology, and there is some catching up it needs to do to ensure security keeps pace with its technological developments.

We also recognized that most of the cybersecurity work performed is based on what is mandated by the Kingdom of Saudi Arabia government. Penetration testing services are not a large part of that discussion today, but we anticipate security testing activities – pentesting, secure code review, threat modeling, red team, design reviews – will be part of the requirements very soon.

The State of Penetration Testing

At the event, we were surprised to hear that the concept of penetration testing is new to most people and organizations in the region. In many of our conversations, we heard that they were interested in purchasing products and software solutions that could take care of all security concerns. But, as we know, even the largest technology companies can make security mistakes (see: Microsoft Azure CVE-2021-4306).

There were a number of misconceptions about penetration testing that we helped to address at the show. Notably, the difference between penetration testing and simply running an automated scanner tool or a monitoring solution.

The explosion in technology adoption over the last few years has caused many companies to rapidly seek new and innovative security solutions, however, the adoption of pentesting services in the Middle East will be largely driven by regulation.

Youth and Women in Cybersecurity

@Hack brought a diverse group of people together. Students as young as 11 stopped by our booth and were eager to learn from us. It was incredible to see the younger generation’s interest in cybersecurity careers and education. Questions we were asked include, “how can we learn more?”, “where can I find more resources?”, “what resources should I look at to become a pentester?”, and “can you hire me and train me?”

A large portion of those coming into the industry are students who have learned from global online communities, including bug bounties, capture the flag, and online forums. For continued reading, this Arab News article highlights some of the young attendees involved at the event.

Across the globe, there are initiatives to get women more involved in cybersecurity. Cybersecurity Ventures and WiCys predict that women will hold 25 percent of cybersecurity jobs globally by the end of 2021, up from 20 percent in 2019. This was evident @Hack.

Women were equally, if not more, involved at the conference than their male counterparts in terms of communication, interest, types of questions they were asking, etc. The transition to more progressive ideologies in the region has clearly resulted in a large influx of highly educated and motivated women wanting to break into the space.

Overall, the event was a great opportunity to connect and share information with security peers across the globe and we hope they will put on @Hack next year. With our new SecureLink partnership, we’re excited to continue educating the region on the benefits of penetration testing and the value it brings when done well. Want to connect with us at the next big cybersecurity event? We’re heading to RSA Conference in San Francisco, February 7-10, 2022. Schedule a meeting with us!

Explore our penetration testing, adversary simulation, and attack surface management services.
[post_title] => @Hack: Cybersecurity Transformation in Saudi Arabia [post_excerpt] => Read highlights and lessons learned from the 2021 @Hack cybersecurity conference in the Kingdom of Saudi Arabia. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => athack-cybersecurity-transformation-saudi-arabia [to_ping] => [pinged] => [post_modified] => 2023-01-23 15:11:04 [post_modified_gmt] => 2023-01-23 21:11:04 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26810 [menu_order] => 336 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [31] => WP_Post Object ( [ID] => 26670 [post_author] => 65 [post_date] => 2021-11-16 07:00:00 [post_date_gmt] => 2021-11-16 13:00:00 [post_content] =>

The number of security controls and activities any given cybersecurity leader manages is continuously evolving. For instance, this year, the Global InfoSec Awards features 212 different categories – from penetration testing to insider threat detection to breach and attack simulation. That’s 212 different technologies or security activities CISOs could be implementing dependent on their needs.

This highlights the importance of taking a step back to recognize the activities making the greatest difference in your security program. AF Group CISO Seth Edgar does just that during our Agent of Influence podcast interview. For Seth, asset management, vulnerability management, and authentication topped his list of cybersecurity best practices.

Continue reading to learn how these security activities are changing the game for Seth and for highlights from our discussion around his unconventional career path, lessons learned from reverse engineering, and cyberattack trends in the insurance space. For more, listen to the full episode on the NetSPI website, or wherever you listen to podcasts.

What have you learned from your experience as a middle school teacher that you apply to your role as a CISO? 

Middle school is an area of your life that's memorable for a lot of reasons. As a teacher, delivering materials to students in a manner they can consume is an ever-changing battle. Not only is there the struggle to remain interesting and relevant to a roomful of 12-year-olds, but also understanding how to communicate a complex subject or introduce a new, complex theme. I couldn't stop at relaying the information. “I've put it out there, now it's on you to consume,” was not an effective strategy. I had to make sure that the concepts were reinforced and delivered in a manner they could grasp because my goal was for them to be successful. 

As CISO, I'm doing the exact same thing. Most of my job is education and a portion of my job is security, budget, and people management. I'm teaching people why security is important. What is higher or lower risk. I'm talking to executives and communicating the ROI of security. In doing so, I have to gauge on the fly whether they're tracking with what I'm saying, or whether we're going to need to revisit the topic from a different angle.

I get exposure to upper-tier leadership within my organization, but those interactions are limited. They have to be because they're scheduled. It's not like a classroom where I'm with the students every single day. If we missed it today, we'll get it tomorrow – same time, same place. With business leaders I've got to get it right the first time. 

Just like in a classroom, you have to get to know the students before you can truly teach them. And at the end of the day, you must make your material relevant and usable for them, make it understandable and draw upon their background. It must also be presented in a manner that makes sense, without oversimplification. All those techniques are exactly what I'm doing right now as a CISO, just with a different body of knowledge. Finding that the balance between simplification and understanding is a challenge, but it's something that I can draw upon from my prior experience and from my undergraduate education to help me communicate complex security topics clearly to my leadership team.

Are there components of what you learned from reverse engineering that you apply in practice today?

I am a big fan of learning by doing. It's way different completing a sample problem than it is touching real software. It is helpful to have a deep technical background to be able to have conversations with technical folks and establish credibility. However, the more important lesson that I pulled from those early days doing reverse engineering is that it's okay to have trial and error. It's okay to make mistakes and learn from them. 

As a leader, I don't punish mistakes, we learn from them. If that mistake is repetitive, I'm likely going to move you away from doing that that role within our organization. But mistakes are part of the learning process. They're important. We too often think, ‘I'm not going to do anything because if I screw it up, I'm going to get in trouble.’ That's not how we learn, that's not the way that systems are developed, and it's certainly not the way you have a breakthrough. So, the most important lesson I learned from reverse engineering was learning how to make a mistake, recover, and use it to inform the next steps going forward.

There's been a lot of news recently about the insurance space, such as the CNA Financial ransomware attack. Are there certain attack trends that you're paying close attention to today?

We are watching a lot right now. Ransomware is a high risk and a top-of-mind issue, not only from a perspective of ransomware prevention, but also from an insurance perspective. We don't touch this area much in my role, but cybersecurity insurers are starting to realize that their model may need to change because the risk profile has ramped up significantly. If you look at how ransomware has grown, we have seen this crazy upward trend in financial impact and sophistication – nobody wants to get hit. At the same time, if it is not going to happen to you, it is very likely will happen to a third- or a fourth- or a fifth-party provider. That’s where supply chain security issues come into play.

With COVID-19, many went from a fully on premise workforce to a fully remote workforce almost overnight. There are inherent network risks and security models that bank on a perimeter-centric security. There's a large knowledge exchange that had to happen with groups of people who always report into a building that have maybe never used a VPN or had to do any kind of multifactor authentication before, whereas other people have been doing it for a decade or more. There's going to be that subset of your users that this is the first rodeo they've ever had with remote workforce security protocols

We've seen interesting scams arise out of this. Wire fraud transfer scams have always been existent but are taking advantage of companies that have changed business models. Attackers try and monetize whatever it is they get their hands on. If I'm an attacker, if I compromise an email account, I want to turn that into some sort of monetization as quickly as possible. 

There is one clever attack that I’ve heard described among my peers. Let's say I compromise a mailbox and immediately search for the word “invoice” looking for unpaid invoices. I find out who the sender of that invoice is and create a look alike domain for that sender. Now I spoof that exact user that sent the invoice in the first place and say, “This invoice is overdue and needs to be paid.” It creates that sense of urgency just like a normal attack would and then, you get them to change the wire transfer number. Now they're stuck in a position where they're trying to describe a decently complex attack to probably an under resourced small- to medium-sized business. 

Many organizations don't have the capability to view and understand how the user got into their environment and it becomes a game of finger pointing. It's an awkward and difficult situation to be in. This brings up the importance of validating senders and sources. A positive business best practice in this situation would be to reach out and validate the information with a verified contact.

What are the most effective security activities you're implementing today?

The most effective security activities that are changing the game for me have revolved around strong asset management, patching, and vulnerability management practices. 

Beyond that, having strong authentication is equally critical. Not only multifactor, but checking system state, user agent strings, consistent source IP, and similar practices. I can know, with relatively simple rule set, whether a log in is attempted with a new IP, device, or if it is a new source for this user’s authentication, and act accordingly. We've seen some incredible progress, just not only in our own development or tooling, but leveraging products we already have available. They're not perfect, none of them are airtight. But it gives us a certain probability or a reasonable level of assurance that this user is who they claim to be – or not.

As mentioned earlier, moving your workforce to remote is a hard problem to solve in areas like vulnerability remediation and patch management. Getting software updated, especially if it was historically on-premise, is a major shift. If you're working with an incomplete asset inventory right out of the gate, you have no indication what your success ratio is. This is an age-old problem that organizations still struggle with today. Whether you have good asset management can tell you whether your security program is successful.

The bright side? Vulnerability management and asset management are areas that can be improved and understanding your attack surface is a good first step. The Print Nightmare vulnerability is a great example of this. Once alerted, having a good understanding of the devices that need to get print drivers locked down on and what devices you need to make changes to rapidly reduce your exposure proved vital in that situation. 

Want to hear more from Seth Edgar? Listen to episode 35 of Agent of Influence!
[post_title] => Q&A: Asset Management, Vulnerability Management, and Authentication are Changing the Game for this CISO [post_excerpt] => AF Group CISO Seth Edgar shares which security activities are making the greatest difference in his security program via the Agent of Influence cybersecurity podcast. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => vulnerability-management-authentication-ciso-priorities [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:46 [post_modified_gmt] => 2022-12-16 16:51:46 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26670 [menu_order] => 350 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [32] => WP_Post Object ( [ID] => 26665 [post_author] => 53 [post_date] => 2021-11-12 12:30:08 [post_date_gmt] => 2021-11-12 18:30:08 [post_content] =>
Watch Now

What’s next for enterprise security professionals? No one can know for certain, but NetSPI’s expert bench of security pros – pulling from their decades of cybersecurity leadership and daily conversations with some of the world’s most prominent organizations – have a few ideas as to where the industry is headed.

Watch our 2022 cybersecurity predictions webinar, where our panel will tackle some of the most debated topics of the past 365 days and predict how each will evolve in the new year and beyond. Topics include: 

  • The cybersecurity hiring crisis
  • Application security program maturity
  • Attack surface management
  • The evolution of ransomware
  • Cybersecurity budget allocation
  • And next generation architectures (see: blockchain)

[wonderplugin_video iframe="https://youtu.be/rLcwnJAO5Qo" lightbox=0 lightboxsize=1 lightboxwidth=1200 lightboxheight=674.999999999999916 autoopen=0 autoopendelay=0 autoclose=0 lightboxtitle="" lightboxgroup="" lightboxshownavigation=0 showimage="" lightboxoptions="" videowidth=1200 videoheight=674.999999999999916 keepaspectratio=1 autoplay=0 loop=0 videocss="position:relative;display:block;background-color:#000;overflow:hidden;max-width:100%;margin:0 auto;" playbutton="https://www.netspi.com/wp-content/plugins/wonderplugin-video-embed/engine/playvideo-64-64-0.png"]

[post_title] => 2022 Cybersecurity Predictions:
What to Expect in the New Year [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => 2022-cybersecurity-predictions-what-to-expect-in-the-new-year [to_ping] => [pinged] => [post_modified] => 2023-06-22 20:39:36 [post_modified_gmt] => 2023-06-23 01:39:36 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=26665 [menu_order] => 50 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [33] => WP_Post Object ( [ID] => 26628 [post_author] => 65 [post_date] => 2021-11-04 09:56:02 [post_date_gmt] => 2021-11-04 14:56:02 [post_content] =>

On November 4, 2021, NetSPI Managing Director Nabil Hannan was featured in an article by TechRepublic:

In the latest effort to combat cybercrime and ransomware, federal agencies have been told to patch hundreds of known security vulnerabilities with due dates ranging from November 2021 to May 2022. In a directive issued on Wednesday, the Cybersecurity and Infrastructure Security Agency (CISA) ordered all federal and executive branch departments and agencies to patch a series of known exploited vulnerabilities as cataloged in a public website managed by CISA.

The directive applies to all software and hardware located on the premises of federal agencies or hosted by third parties on behalf of an agency. The only products that seem to be exempt are those defined as national security systems as well as certain systems operated by the Department of Defense or the Intelligence Community.

All agencies are being asked to work with CISA's catalog, which currently lists almost 300 known security vulnerabilities with links to information on how to patch them and due dates by when they should be patched.

...

Within 60 days, agencies must review and update their vulnerability management policies and procedures and provide copies of them if requested. Agencies must set up a process by which it can patch the security flaws identified by CISA, which means assigning roles and responsibilities, establishing internal tracking and reporting and validating when the vulnerabilities have been patched.

However, patch management can still be a tricky process, requiring the proper time and people to test and deploy each patch. To help in that area, the federal government needs to provide further guidance beyond the new directive.

"This directive focuses on patching systems to meet the upgrades provided by vendors, and while this may seem like a simple task, many government organizations struggle to develop the necessary patch management programs that will keep their software and infrastructure fully supported and patched on an ongoing basis," said Nabil Hannan, managing director of vulnerability management firm NetSPI.

"To remediate this, the Biden administration should develop specific guidelines on how to build and manage these systems, as well as directives on how to properly test for security issues on an ongoing basis," Hannan added. "This additional support will create a stronger security posture across government networks that will protect against evolving adversary threats, instead of just providing an immediate, temporary fix to the problem at hand."

Read the full TechRepublic article here: https://www.techrepublic.com/article/us-government-orders-federal-agencies-to-patch-100s-of-vulnerabilities/

[post_title] => TechRepublic: US government orders federal agencies to patch 100s of vulnerabilities [post_excerpt] => The Cybersecurity and Infrastructure Security Agency is maintaining a database of known security flaws with details on how and when federal agencies and departments should patch them. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => techrepublic-us-government-orders-federal-agencies-to-patch-100s-of-vulnerabilities [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:47 [post_modified_gmt] => 2022-12-16 16:51:47 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26628 [menu_order] => 353 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [34] => WP_Post Object ( [ID] => 26557 [post_author] => 65 [post_date] => 2021-10-19 07:00:00 [post_date_gmt] => 2021-10-19 12:00:00 [post_content] =>

Being a cybersecurity leader is not for the faint of heart. The increasing sophistication of adversaries and number of successful breaches puts significant pressure on security teams today. For advice, I invited Pacific Northwest infosec leader David Quisenberry to join me on the Agent of Influence podcast, a series of interviews with industry leaders and security gurus where we share best practices and trends in the world of cybersecurity and vulnerability management

During our conversation, David shared four ways he’s approaching cybersecurity leadership today by:

  1. Tapping into his wealth management experience.
  2. Collaborating across the organization.
  3. Working closely with his local security community as the president of OWASP Portland, Oregon.
  4. Creating a solid network of mentors.

Continue reading for highlights from our conversation around wealth management, collaboration mentorship, OWASP Portland, and more. You can listen to the full episode on the NetSPI website, or wherever you listen to podcasts.

Can you tell me about your career transition from wealth management to cybersecurity?

David Quisenberry: The decisions you make, the careers you go down, the things you do – everything's interrelated and connected. There are some differences, but there are also a lot of similarities. In wealth management, you deal a lot with risk tolerance. Like companies, when someone is just starting off as an investor, they don't have a lot of money and they're much more willing to take on risk and do things that a more established person, family, or trust might not do with their money because they have a fiduciary standard to make sure that they invested wisely for all the beneficiaries. 

Again, like companies, when they're building out their foundation of revenue, security may not be as front and center to a lot of the decisions that get make. But as companies grow, a lot of enterprise corporations view security and risk tolerance much differently. They want to understand all the risks that go into each decision they make as a business. Risk tolerance is a similar theme. 

Another thing that's very similar between the world of wealth management and the world of cybersecurity is you’re always reading, always studying. As a wealth manager, you're constantly keeping up to speed on all the trends with global investment flows, global economics, and mutual funds. That obviously translates into the security world where you're reading and learning things all the time. 

Lastly, convincing others to take action. As a wealth manager, there's this tension, especially if you're working with families where you want them to save as much as possible so that they can have a lot of money in retirement for their kids, for their charities. They care about their future self, but they also want to live life. There are these tensions, and you have to convince someone to spend their dollars one way versus the other. This is very similar to security work with developers, product owners, product managers, et cetera. It's a constant game of understanding all the various priorities and working together to identify that sweet spot of paying security debt, staying on top of future security debt, as well as getting other features built to drive your business forward. 

You are taking some unique approaches to better collaborate across different teams within your organization. Can you share with us the approach you're taking? Are there things that worked well or not?

David Quisenberry: One of the philosophies I have about most things is “relationships first.” As an information security manager, I've tried to take the approach of being available and approachable. If somebody sends me a question or an email, most of the time I will drop what I'm doing so that I can answer them in that moment. Even when people get frustrated with me, I take the opportunity to take a step back and think, “We have a tension right now. But they're thinking about security. What's not clear? How do I communicate the why?” If I can accurately explain the why, it's going to help so much. I try to take that relationship first approach, identify those early wins, and set clear expectations of what is a milestone and then celebrate those when we hit them.

It’s important to have regular monthly meetings with the scrum masters with the various leadership teams for the different products, engineering managers, project managers. I encourage them to ask questions and know that we're going to have an opportunity to dig into things. To prepare for those meetings, we have a working agenda that both parties can add to, and I also try to give visibility into data and analytics. 

As the president of the OWASP chapter in Portland, how did you get involved with the community? And what are some interesting things that you're doing that might be different from other chapters? 

David Quisenberry: David Merrick introduced me to the chapter. I started going to the chapter meetings whenever I could. Around late 2018/2019, I started being mentored by the previous chapter President, Ian Melvin, who's been an amazing mentor and really helped me along in my progress. He got me more involved on the on the leadership side. 

What I found is most OWASP chapters have leadership that have been laboring hard for a long time to keep the chapter going. If you're willing to help bring in speakers, engage in membership, promote social media activity, or think of topics to present, they'll open up. Especially if you prove yourself and that you can deliver on it. What I found with the Portland chapter was that, as I started getting involved, we needed to meet developers where they're at. 

We did a lot early on when I took over as president of the Portland OWASP chapter. We built out a mentorship program where we had around 24 people with varying skill levels meeting regularly. We really ramped up our social media presence, specifically Twitter and LinkedIn. We used meetup.com which helped us solidify returning visitors and provided an easy mechanism for people to RSVP to our monthly meetings. By the end of 2019, we were close to 50-60 people per meeting. And we brought in a lot of great speakers. 

And then then COVID-19 hits, and suddenly you can't meet in person anymore. We had to do everything virtually, but we were able to continue our path of monthly, or bi-monthly meetings. We also have another thing that we do as a local chapter, which is study sessions. More hands-on, shorter sessions or labs and then 40-minutes hands on keyboard. Working with Burp Suite or Wireshark. You name it. We started a podcast in late 2019 and that's been super successful. We had 6,500 listeners or so over this last year and some interesting guests. We're also exploring some other opportunities for cybersecurity training with other chapters. We’ve been trying to collaborate more with the chapters around us and that's been going quite well. 

NetSPI’s Portland, OR office is growing. Check out our open cybersecurity careers in PDX!

I wouldn't be where I am today without my involvement with OWASP. If you're interested in truly excelling and expanding your horizons in the security space, these community meetings and chapters really pay dividends in the long run. I'd be curious to get your perspective on any guidance you have on how to choose a mentor that's right for you?

David Quisenberry: The first thing I would say is don't have one mentor. And I think of it almost as a personal board of directors. For myself what I want is people from across the spectrum. So, business leaders, engineering leadership, security leadership, different types of security leadership. I want 4-10 people that I talk with quarterly, some people more often. I want to be able to have multiple perspectives to bounce ideas off when I'm having a hard time with something at work or a moral decision I need to make or just trying to understand what is normal or what is acceptable. 

One of the big things that I always try to push with my mentors is what are you learning? What are you reading? What are the things that you go back to time and time again? There is a saying that I always think about: “If you dig, you get diamonds.” Where can you dig to get diamonds? With mentors, I try to have a lot of people, I try to be real with them, and make it clear that this is only between us. And I also try to pay it forward. I want to help people and lots of people have helped me. 

There is a book by Keith Ferrazi I read a long time ago, it's called Never Eat Alone. It's all about how people like to help people. We're all hesitant to ask for advice or ask for help. But the most successful people ask for help all the time. As humans, we like to help each other. His whole thing is to find out what you want to do and where you want to get, and then build a relationship action plan to move your way there. He's also big on building your network before you really need it. If you're in a job hunt, and you're trying to build your mentorship, or mentor platform at that point in time, that's going to be hard to do. But if you're in career, and you start building that network, and you don't need to use it for a couple years, by the time you do need to use it those people will know that you're genuine and know who you really are. They'll be more than willing to help you. 

For more, listen to episode 23 of Agent of Influence. Or, connect with David on LinkedIn or listen to the OWASP Portland podcast.

Agent of Influence Episode 23 with David Quisenberry

[post_title] => Q&A: David Quisenberry Discusses Cybersecurity Careers, Collaboration, Mentorship, and OWASP Portland [post_excerpt] => Read cybersecurity career and leadership advice from David Quisenberry, OWASP Portland President. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => david-quisenberry-cybersecurity-careers-mentorship-owasp-portland [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:48 [post_modified_gmt] => 2022-12-16 16:51:48 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26557 [menu_order] => 357 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [35] => WP_Post Object ( [ID] => 26525 [post_author] => 53 [post_date] => 2021-10-08 15:09:54 [post_date_gmt] => 2021-10-08 20:09:54 [post_content] =>
Watch Now

Overview 

Supply chain security, vendor risk management, third-party security. Each of these synonymous cybersecurity terms has become widely used, thanks to the increase in the exploitation of threat vectors from outside of an organization. 

So, what can software vendors and third-party technology partners do to ensure they don’t become the weak link in the supply chain? 

In this webinar you’ll get two different viewpoints on supply chain security from two NetSPI team members, Field CISO, Nabil Hannan, who will explore the topic from the software development perspective, and Managing Director, Chad Peterson, who will approach it from a business risk perspective. 

  • Their differing views on supply chain security  
  • The anatomy of a supply chain attack  
  • Considerations and best practices for securing the supply chain   
  • How vendors can get proactive to show potential partners that they are NOT the weakest link  
  • The future of supply chain security 

Key highlights: 

Defining the supply chain 

When it comes to supply chain security, it’s important to look at it from two sides – business risk and insider threat.

Business risk includes: 

  • Critical assets and intellectual property 
  • Internal risk programs 
  • Business partners  

Insider threat includes:

  • Internal software development 
  • Unique capabilities of the adversary  

Supply chain and risk 

From a business risk perspective, the supply chain landscape has changed substantially over the years. 

Here are some of the key motivators of change:

  • Perimeter transparency: Today’s environments extend well beyond the traditional brick and mortar business, with cloud and software as a service and remote work now being the norm.  
  • Reliance on business partners: Organizations today are relying on partners to support essential pieces of their business, including business processes, infrastructure, and application development.  
  • Increased attack surface: Outsourcing and the transparency of the perimeter have resulted in a loss of control for internal security teams. Additionally, external and internal environments have become blurred and there’s now an increased emphasis on privileged access.  

As a result of this changing landscape, the anatomy of attacks has evolved for many organizations.  

Some of the ways in which attacks are changing include:

  • Island hopping: Because companies are doing a better job of protecting their own environments, attacks are no longer exclusively focused directly at the organization, but rather within the supply chain. Emerging attack methods include network-based, reverse email, and watering hole attacks.  
  • External motivations: Organizations are increasingly outsourcing their software development for cost savings and to have additional resources to expedite and accelerate software development. To support this, more software developers are being hired from outside the U.S., which can pose challenges with managing insider threats in the supply chain. 
  • Internal motivations: It can be challenging for organizations to know for certain that when they hire developers, they’re not malicious and that they’ll truly perform the work they’ve been hired to do. Another related concern is when U.S.-based employees outsource their own software development jobs to developers in China or elsewhere, which can give individuals outside the company access to an organization’s code or other sensitive data. Many organizations don’t have a full picture of what’s happening within their company, which can pose supply chain security risks in the long run.  

Traditional malware vs. malicious code 

A key piece of effective supply chain security is understanding the differences between traditional malware and malicious code.

  • Traditional malware is installed on systems from external sources, usually downloaded through different attack vectors like phishing, and is a result of outside attackers trying to compromise systems at a larger scale, such as sending a phishing email to thousands of people at once, hoping at least someone will click on it.  
  • Malicious code is code is much more targeted and inserted into software that’s built internally, usually inserted by an internal employee, and looks and feels like regular, non-malicious code. Internal adversaries include different types of employees, such as software developers, administrators or operations team members, and change management team members, all of whom have access to internal systems.  

Proactive supply chain security measures  

While the supply chain threats that businesses face today are significant, there are some proactive measures organizations can put in place to ensure supply chain security is effective.

Consider the following proactive measures at your organization: 

  • Security awareness training: Ensure you’re training your staff on security best practices to follow. Have a process in place for the training to be provided to all new employees, as well as an annual refresher training with all employees. 
  • Policy and standards adherence: Implement organizational policies and standards that are a reflection not only of best practices, but are followed and in line with business processes.  
  • Vendor management: Assess all new vendors using a risk-based vendor management program. The program should also address retesting vendors in accordance with their identified risk level.    

The three proactive measures outlined above are some of the foundational steps your organization can take to elevate your supply chain security. Some of the other critical components to consider bringing in to improve supply chain security include attack surface management, penetration testing, and red team exercises.  

What’s next in supply chain security?  

When it comes to internal software development and associated risks from a supply chain perspective, the next steps to take after identifying malicious risk are not as simple as some may think. The reason it’s not straightforward is because the typical vulnerability escalation process now includes the adversary, because internal resources are seen as potential threats. As a result, “just fix the vulnerability” isn’t a viable mitigation strategy and organizations need to instead define governance the process and controls around managing malicious code.  

Malicious code risk mitigation steps can range from rather benign to very serious and may include: 

  1. Suspicious, but not malicious 
  2. Circle of trust invitation 
  3. Passive monitoring 
  4. Active suppression 
  5. Executive-level event 

NetSPI’s supply chain security capabilities  

Leading businesses trust NetSPI for continuous threat and exposure management, leveraging our team, technology, and comprehensive methodology to detect and remediate vulnerabilities.

Learn more about how our Attack Surface Management, penetration testing, and red team testing capabilities can help identify where security gaps exist in your software supply chain. Connect with an expert team member by scheduling a demo today.

[wonderplugin_video iframe="https://youtu.be/xBYMzqZd4eA" lightbox=0 lightboxsize=1 lightboxwidth=1200 lightboxheight=674.999999999999916 autoopen=0 autoopendelay=0 autoclose=0 lightboxtitle="" lightboxgroup="" lightboxshownavigation=0 showimage="" lightboxoptions="" videowidth=1200 videoheight=674.999999999999916 keepaspectratio=1 autoplay=0 loop=0 videocss="position:relative;display:block;background-color:#000;overflow:hidden;max-width:100%;margin:0 auto;" playbutton="https://www.netspi.com/wp-content/plugins/wonderplugin-video-embed/engine/playvideo-64-64-0.png"]

[post_title] => How NOT to be the Weakest Link in the Supply Chain [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => how-not-to-be-the-weakest-link-in-the-supply-chain [to_ping] => [pinged] => [post_modified] => 2023-09-01 07:05:14 [post_modified_gmt] => 2023-09-01 12:05:14 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=26525 [menu_order] => 51 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [36] => WP_Post Object ( [ID] => 26522 [post_author] => 65 [post_date] => 2021-10-04 14:11:00 [post_date_gmt] => 2021-10-04 19:11:00 [post_content] =>

On October 4, 2021, NetSPI Managing Director Nabil Hannan was featured as a guest contributor for TechTarget:

Software and applications are present in everything from consumer goods to medical devices to submarines. Many organizations are evaluating their application security, or AppSec, to ensure their strategies are mature and not vulnerable to cyber attacks.

According to Forrester Research, applications remain a top cause of external breaches. The prevalence of open source, APIs and containers only adds to the complexity of the problem.

Most organizations struggle to understand how to approach AppSec program maturity. Given many organizations have switched from Waterfall to Agile in their software development lifecycle (SDLC), practitioners are asking, "How do we continue to evolve our AppSec programs?"

Roadmaps can help navigate these issues. Organizations looking to develop mature programs need to be mindful of inherent team biases. For example, if the AppSec team comes from a pen testing background, the program may lean toward a bias. If the team is experienced in code review, then bias may shine through, too. While both disciplines are important and should be a part of an AppSec program, the experiences may cause bias when a more objective approach is needed.

Many mature AppSec frameworks exist, but a one-size-fits-all approach is not going to work. Every organization has unique needs and objectives around thresholds, risk appetite and budgets. This is largely why prescriptive frameworks, such as Microsoft Security Development Lifecycle, Software Security Touchpoints or Open Software Assurance Maturity Model, are not the answer. It's best to tailor roadmaps on the specific needs and objectives of a particular organization.

5 principles for implementing an AppSec program

These five tenets can serve as a guide for implementing a mature AppSec program.

  1. Make sure people and culture drive success
  2. Insist on governance in the SDLC
  3. Strive for frictionless processes
  4. Employ risk-based pen testing
  5. Determine when to use automation in vulnerability discovery

Read Nabil's 5 principles for AppSec program maturity on TechTarget's SearchSecurity: https://searchsecurity.techtarget.com/post/5-principles-for-AppSec-program-maturity

[post_title] => TechTarget: 5 principles for AppSec program maturity [post_excerpt] => Applications remain a top cause of external data breaches. Follow these five principles to achieve application security program maturity. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => techtarget-5-principles-for-appsec-program-maturity [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:50 [post_modified_gmt] => 2022-12-16 16:51:50 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26522 [menu_order] => 362 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [37] => WP_Post Object ( [ID] => 26519 [post_author] => 65 [post_date] => 2021-10-04 05:00:00 [post_date_gmt] => 2021-10-04 10:00:00 [post_content] =>

NetSPI Managing Director Nabil Hannan sat down with BBC World News anchor Lewis Vaughan Jones on October 4, 2021 to talk about a global social media outage. Nabil discusses the current state of the world's software ecosystem, what it means for modern businesses, and the potential of a passwordless world.

Watch the clip below:

https://youtu.be/iOU0wLDt444
[post_title] => NetSPI Software Security Expert Nabil Hannan Featured on BBC World News [post_excerpt] => NetSPI Managing Director Nabil Hannan sat down with BBC World News anchor Lewis Vaughan Jones on October 4, 2021 to talk about a global social media outage. Nabil discusses the current state of the world's software ecosystem, what it means for modern businesses, and the potential of a passwordless world. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => bbc-world-news-global-social-media-outage [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:50 [post_modified_gmt] => 2022-12-16 16:51:50 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26519 [menu_order] => 361 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [38] => WP_Post Object ( [ID] => 26360 [post_author] => 65 [post_date] => 2021-09-07 07:00:00 [post_date_gmt] => 2021-09-07 12:00:00 [post_content] =>

Building an application security (AppSec) program that stays current is no easy feat. Add to that the ubiquity of software and applications in everything from consumer goods to medical devices to submarines.There is an increasingly urgent need for organizations to take another look at their AppSec strategies to ensure they are not left vulnerable to cyberattacks and continuously measure and improve their program maturity.

Heads up: Building a world-class, mature AppSec security program is something that needs to be accomplished in phases. It will not happen overnight. A great deal of foundational work needs to be in place before an organization can achieve positive results. 

When analyzing AppSec programs, we often find a number of sizable gaps in how vulnerabilities are managed as well as opportunities for improvement, especially related to security processes around the software development lifecycle (SDLC). Addressing these issues and harmonizing the various security processes will help give organizations the capability and vision to identify, track, and remediate vulnerabilities more efficiently, eventually elevating the organization to the level of maturity it seeks.

Following is a checklist to help organizations think through the issues around AppSec maturity to build a program that produces valuable security results.

  Ensure Your Security Practices are Current

Given how rapidly application development techniques and methodologies are transforming – and the rate at which software is developed today – companies need to ensure that their security practices are staying current with the ever-changing pressures around compliance/governance, software deployment, DevOps, SDLC, and training. Understanding the current level of maturity and developing a data-driven plan to evolve your AppSec program is key to the success of an organization’s security efforts.

  Leverage Real World Data to Benchmark Your AppSec Program

Put a stake in the ground and objectively determine the status of your AppSec program. Comparing your organization’s program with real world data across multiple business verticals will help augment your efforts and determine areas that require focus. Base your security decision on your specific business needs andlessons learned from other mature programs in your industry.

  Put Roadmaps in Place to Prioritize and Allocate Resources

The AppSec and software engineering teams within an organization should constantly partner to evolve and improve the AppSec posture for all software assets. This collaboration will help determine how to improve upon current efforts while uncovering additional activities that should be adopted to meet business goals. Putting in place a formalized roadmap for this collaboration allows an organization to better prioritize its business initiatives, budgets, and resource allocation while reducing the overall AppSec risk faced by the organization.

Roadmap stipulation: Use caution and watch for bias. Organizations that are serious about developing a mature program need to be mindful that there may be inherent team biases based on familiarity. For example, if the AppSec team comes from a penetration testing background, the program may lean toward a pentesting bias. Is the team’s experience in code review? Then that bias may shine through. While both disciplines are important and should be a part of an AppSec program, my point is that there may be bias when a more objective approach is needed. 

Also understand that there are many frameworks to mature application security. A one-size-fits-all approach is not going to work because every organization has unique needs and objectives around thresholds, risk appetite, and budget availability. 

  Insist on Governance in the SDLC

Setting up governance within the SDLC is critical. Why? If security teams don’t define what they are trying to accomplish or what security looks like within the SDLC process, it leaves too much ambiguity for who is accountable. Creating governance around SDLC will also help define where an organization needs to build in testing, both manual and automated, from a vulnerability discovery perspective.

  Track Your Progress; Benchmark Your Efforts Against Your Peers

Benchmarking your AppSec program by leveraging industry standard frameworks allows you to measure AppSec program maturity consistently and objectively, and make informed decisions based on your business objectives.

Benchmarking scorecards, supported with visuals, enable high-bandwidth conversations with your organization’s leadership team and provides an opportunity to showcase the positive influence that your AppSec program is having on the organization’s business goals. Additionally, you can leverage data from your benchmarking efforts to compare your efforts to others within your peer vertical group, and other business verticals that are also leveraging the same industry standard AppSec framework. 

  Employ Risk-Based Application Penetration Testing

When looking to mature an AppSec program, organizations should view application penetration testing as a gate validating that everything implemented in the SDLC is working, not just a discovery of vulnerabilities. Pentesting services should be the method used to determine the effectiveness of your secure SDLC and all the automated and manual processes implemented. Oftentimes, organizations will approach this concept in the reverse by starting with penetration testing

Additionally, having a dynamic pentesting platform that offers data points and risk scores aids in objectively identifying where AppSec is immature and what needs to be prioritized to remediate vulnerabilities that present the greatest risk.  

  Determine When to Use Automation in Vulnerability Discovery

To build an optimum, mature AppSec program, it is important to determine when it is best to use automation in vulnerability discovery and when to employ manual penetration testing. In short, an effective AppSec program includes the ability to manage and employ threat modelingmanual penetration testing, and secure code review, augmented with automated vulnerability discovery tools that are deployed at various phases of the SDLC process. 

For example, automatic testing like dynamic scanning, static analysis, and interactive security testing may be sufficient day to day, but manual penetration testing is warranted when significant architectural changes or technology upgrades to software systems are made. Finding balance in vulnerability discovery is important. It isn’t an either/or.

Vulnerabilities found in production cost roughly $7,600 to fix – 9,500% more than the $80 it would cost to fix those same vulnerabilities when they are detected early in the development process.

– WhiteSource reporting on a joint study by IBM and Ponemon Institute

  Insist on Metrics for Proper Data and Analysis

Consistent, timely, and accurate DevSecOps data measurements are important feedback for any organization to capture and analyze as it looks to govern development operations. Quality metrics (numbers with analysis and meaning in context) can ensure visibility, accountability, and management of software security initiatives. Proper application security program metrics allow you to articulate the AppSec program’s value to your organization’s leadership. The benefit? Being able to properly evangelize the value of your AppSec effortsmakes it easier to procure funding and improve the security risk posture of your organization. Additionally, understanding the data at hand to be able to answer contextualized business questions allow for better strategic decision making.

  Maturity Attained: Be an Ambassador

What does an organization do once it determines its AppSec program is mature? First, decide if a mature program is a long-term goal. Obviously, security always needs to be a priority, but ongoing maturity programming can be expensive and time consuming. Secondly, there will undoubtedly always be areas that require more attention. While addressing them, I encourage organizations to share their program successes with the broader market. Become a leader and use AppSec maturity as a differentiator that can drive customer and team member goodwill, brand differentiation, and market leadership.

[post_title] => A Checklist for Application Security Program Maturity [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => checklist-application-security-program-maturity [to_ping] => [pinged] => [post_modified] => 2023-04-07 09:19:32 [post_modified_gmt] => 2023-04-07 14:19:32 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26360 [menu_order] => 367 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [39] => WP_Post Object ( [ID] => 26333 [post_author] => 65 [post_date] => 2021-08-31 07:00:00 [post_date_gmt] => 2021-08-31 12:00:00 [post_content] =>

When the term “reality check” is used, it’s intended to get someone to recognize the truth about a situation. In a fast-moving industry like cybersecurity, reality checks from its leaders are necessary. Thinking pragmatically about the solutions to our biggest challenges helps drive the industry forward. 

I recently sat down with Splunk global advisory CISO Doug Brush on the Agent of Influence podcast, a series of interviews with industry leaders and security gurus where we share best practices and trends in the world of cybersecurity and vulnerability management. During our conversation he shared three major cybersecurity reality checks: 

  • Most tools out there are going to be part of the picture, but they're not going to solve everything. The slow progression of incident response today is not a technology problem.
  • Chinese-based organizations, such as DJI Drones and TikTok, have much in common with the Bay Area tech community. We have a lot to learn from them.
  • A top-down mentality must be applied to mental health in cybersecurity. Prioritizing mental health should be adopted at the C-Suite level.

Read on for highlights from our conversation around the evolution of incident response, security practices at Chinese-based organizations, mental health in cybersecurity, and more. You can listen to the full episode online here, or wherever you listen to podcasts.

Nabil: You've done many different incident response investigations. How that has that evolved over time – or over the course of your career?

Doug: I wish incident response has evolved more. I would say it's a slow evolution. Early on, it was very manual process to parse things. Say if you're doing dead box forensics, or even memory forensics to a large degree, there weren’t tools that could automate some of those processes. 

By no means is a tool an answer to all problems, but it's going to help build efficiencies if you understand the process. We had to deconstruct things in hex editors, it was a very manual process and took a very, very long time. Now, you can script automate a lot of those and these tools can build databases – so that’s gotten better.

When I see things like the SolarWinds incident, we focus on the TTPs around how somebody gets in. And once they get in, they move laterally, privilege escalation, build backdoors, get domain, get other accounts, and build this persistence mechanism. We've been tracking this since 2006/2007. There's nothing new about it. And that's the frustrating part to me. While we think some of the technologies evolved to allow us to be more efficient, some of the root things that we should be looking for, we are not. I think there needs to be a greater focus on detection and response and building our response capabilities, as opposed to an afterthought past defense.

Nabil: Is there a reason why that hasn't happened yet, or why it's taking so long?

Doug: It's hard. And it's not a technology problem. I work for a technology vendor, I would like to say. “we're the best in the world and we can stop everything, detect anything,” but that's not the reality. Most of the tools out there are going to be part of the picture, but they're not going to solve everything. 

When you look at the entire security operations, it's going to be people, process, then technology. Technology is only a small percentage, it's not your entire program. We get really excited about cool, new shiny objects. We all go to Black Hat and RSA, and we all pat ourselves on the back that all these new things are coming out. The reality is that we're solving the same problems we saw 30 years ago. We don't have good asset inventory. We don't have visibility of our environments.

Nabil: Let's shift gears to talk about a topic that I quite enjoy. I would love to learn about your work with various Chinese based organizations – DJI Drones and TikTok. In particular, what do you think about the privacy and security concerns that people bring up about using their technology?

Doug: It gets overly politicized at times. Inevitably, the Chinese government has their agenda, and I would add the blanket statement that there are also a lot of Chinese companies that don't necessarily align with how the Chinese government operates. Some of these companies I've talked to have said, “You folks in the U.S. think we're the enemy and think we're stealing all this data. But we're just a startup.”

The thing that surprised me most in Shenzhen was that the tech center reminded me of the Bay Area. It was very westernized and had a startup vibe with many young professionals. That's the fallacy that we have: they’re against us. We don’t realize how much we have in common. They have a distrust of their government, just as we have a distrust of our own government. They have a mentality of “trust but verify” more than we appreciate. They have some built out documented and thoughtful programs when it comes to governance and organization.

In reality these companies are trying to create cool products just as we are. The reason DJI Drones became so popular is because they work really well. They built a vertically integrated manufacturing process where they weren’t using third parties – they had control over their supply chain. They manage third party risk well in advance. There are a lot of things that these organizations do that allow them to be competitive in the capitalistic and development space that we need to learn from. 

We have to change this mindset that, because you’re in a specific country, you have to share the viewpoints of whatever the loudest political party is at that time. We need to try to look at things in a more pragmatic and realistic way.

Nabil: You're a big advocate for mental health. It's a huge issue and an area of focus today in the security industry, especially due to things like staffing shortages and burnout. What advice do you have for security leaders when addressing mental health?

Doug: Yeah, it's a tough one, there's no doubt about it. The last few years have been particularly tough, but it’s an issue that's been coming up for a long time that we don't talk about enough. First of all, we need to have honest and frank discussions about it. There was a nominated study in 2019 that looked at global cybersecurity professionals. 91% of the CISOs surveyed said that their stress levels were suffering and 60% felt really disconnected from their work role. In the U.S., almost 90% of CISOs have never taken a two-week break from their job. And a lot of them feel that a breach is inevitable in their environment. 

We talk about top-down security and top-down leadership, which should go for mental health too. It has to be something that is adopted at the board and C-Suite level. Leaders should recognize that they’re only as good as the people that are working for them… when they're at their best. Humans aren’t batteries, you can't just revolve through them. The cost of acquiring the good cybersecurity professional right now is very high and CISOs are even harder to find and you don't want to be churning through these people. Continuously hiring people, training them, and getting them onboarded, increases the cost and reduces efficiencies. We need to change this idea of how we hire. 

I would say it's changed since I started in consulting. It was very easy to continue this idea that you had to work 80-90 hours a week. More of the folks that I've hired in the past decade or so have focused on balancing mental health. We shouldn’t expect someone to work overtime each week if we want the best from them. Happier staff results in better work, more efficiencies, higher employee retention – which, in turn, results in happier customers and more top line revenue.

When people feel the best, they perform at their best. This idea that it's mental health versus business is a zero-sum game. If we construct that from the leadership level down and appreciate the fact that you can do more to retain your employees by giving them a better self-care environment, they're going to be better employees for you. Investing in employee health, mental health, and wellbeing is non-negotiable. 

Nabil: Can you also share a little bit about the neurodiversity initiatives you're supporting at Splunk?

Doug: The mental health aspect is just one part of the neurodiversity journey. When we talk about diversity in the workplace it should also include neurological differences like autism, ADHD, mood, and other functions. These have historically been viewed with a negative perception, but they’re just natural variation in the human genome. These folks have exceptional abilities alongside what is traditionally been viewed as a “disability.” Recognizing that it’s not something that needs to be fixed is a shift that needs to be adopted and supported. 

Instead of saying, “thou shalt think like we do,” it's this idea that a diverse mental environment is going to give you more candidates, and probably a better output. When I've had a diverse staff and we all get in the room, I don't get affinity bias. My greatest fear is that I'm going to build my own echo chamber of people telling me what I want to hear. We need diversity in thought to increase better output for our customers. You’ll find that you get a better outcome overall when you bring a lot of different people to the table.

For more, listen to episode 33 of Agent of Influence. Or, connect with Doug on LinkedInTwitter, or listen to his podcast, Cyber Security Interviews.

Listen Now to Episode 33 of Agent of Influence with Doug Brush - The Evolution of Incident Response, Lessons Learned from Chinese-Based Tech Companies, Mental Health, and More
[post_title] => Q&A: Doug Brush Talks Incident Response, DJI Drones, and Mental Health in Cybersecurity [post_excerpt] => Read highlights from our Agent of Influence podcast episode with security leader Doug Brush. He discusses incident response, DJI Drones, Mental Health, and more. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => doug-brush-incident-response-dji-drones-mental-health-cybersecurity [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:53 [post_modified_gmt] => 2022-12-16 16:51:53 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=26333 [menu_order] => 368 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [40] => WP_Post Object ( [ID] => 25984 [post_author] => 65 [post_date] => 2021-07-20 07:00:00 [post_date_gmt] => 2021-07-20 12:00:00 [post_content] =>

Cybersecurity leaders hold one of the most difficult positions today, as they’re often tasked with protecting an entire organization from sophisticated threats with limited resources. I recently sat down with founding partner and CTO at Security Curve Diana Kelley on the Agent of Influence podcast, a series of interviews with industry leaders and security gurus where we share best practices and trends in the world of cybersecurity and vulnerability management, to discuss key challenges and opportunities security leaders face today. Read on for highlights from our conversation around communicating cybersecurity ROI, building an application security program, inclusivity in the cybersecurity industry, and more. 

Nabil: Connecting and conveying a particular message to the C-suite is a common challenge across the security industry. What has worked well for you when communicating ROI or asking for budget from leadership? 

Diana: Cybersecurity ROI can be tough to communicate. First, remember, if you're going to the executives or presenting to the C-suite, you have to look at the world through their lens. We tend to, as technical people, look at it through our lens – which is okay for our understanding, but it is the fiduciary responsibility of the stakeholders of the company to make it profitable. It is important to always think about that, think about how security translates to profitability. Do not go into a leadership or board meeting with technical detail, go in there with “this is what it means” or “this is how it impacts our bottom line.” 

Second, do not dismiss the fact that their lens is different, as if it is somehow denigrated. The craziest thing I’ve experienced was a technical person in front of a board of directors say, “I'm the risk expert here.” They may have been the technical risk expert, but they didn't understand that the job of the board is risk assessment. It's a different lens of risk assessment, focused on business and profit, but it's still risk. 

People always say to speak in the language of business. The way to do this in practice is to remember their lens of profitability, remember that risk is about business risk, and then tie your technical risk in a business way that isn't deeply technical, but is very strong and powerful. You can also share examples, such as, “Did a similar customer lose money due to a competitor having the same problem?” or “Is there new legislation coming down the pipeline that's going to change our implementation and strategy?”

Finally, do not forget to engage leadership in the decision-making process. You want to avoid being demanding, which often happens after a breach or audit. Early on, engage with leadership and communicate the security issues, what it could mean to your profitability, and explain how the security team can help improve or protect the business in the future. Most importantly, ask if they agree that the investment is a good way to spend the organization’s money and ensure you have a consensus. 

 For more on how to showcase ROI of cybersecurity read NetSPI’s Five Metrics to Showcase the ROI of Pentesting

Nabil: Let's talk about application security. What insight would you give people as they try to decide what frameworks they should use and how to navigate the different options out there?

Diana: Organizations must get an application security program in place – a secure software development lifecycle (SSDLC). This is the most critical part. As far as frameworks go, BSIMM is a good option to understand what other companies that look like you are doing in terms of application security. It allows organizations to have a maturity model to build towards. 

Have a framework in place to start implementing an application security program, create standards for your developers, and start application security testing early on. Identify your application security requirements and understand the threat model so that you can start to build and think about the test harness as soon as possible. It's more important to start implementing rather than focusing on which framework you choose.

It concerns me that now we're getting into this big shift in the enterprise where we're no longer writing code from the ground up, we're doing a lot of low-code no-code. This is fantastic in terms of what we're able to build and how quickly we're able to build it. But companies that are now creating low-code no-code solutions are using a lot of functions and libraries and they are not thinking about it as custom-built code. 

I've heard many times, “we don't actually build any applications.” Then, you start talking to the company and you find out that they have many scripts that are pulling in functions from the cloud, they're using cool tools like Zappy or Airtable, but they're giving access into parts of their data sets, and they don't realize those scripts are code. I'm hopeful that companies don’t solely have an application security program in place, but also an understanding that they need to extend this program to the low-code no-code serverless world that we are moving towards.

Nabil: A lot of the work that you do is focused on inclusivity in the security industry. What advice do you have for security leaders looking for new talent?

Diana: With Women in Cybersecurity (WiCyS) specifically, we’re very focused on bringing women into cybersecurity, but there are many different non-profits out there that are looking at cohorts and sectors that have not been involved in cybersecurity in the past. I think security leaders could benefit from getting involved with these organizations to look for internships for externships.

It's very common for leaders to say, we can't find any diverse talent and we had to hire somebody who looks like everybody else because there were no other candidates. Often, it's not that you didn't look far enough or hard enough. And that may be because they're not in your network. If your network doesn't extend out broadly to different groups of people, then work to expand it. 

Be open to people that may not have college degrees, as every job in cybersecurity doesn't necessarily need a four-year liberal arts degree. Maybe there is somebody who has recently graduated from high school that's completed the right training. Rethink what you know, how you're hiring, who you're hiring, open that aperture wider, and work with those communities that are encouraging inclusivity. 

Another tip is to think critically about how you’re writing job descriptions. There is research that shows that women will not apply for a job unless they match about 90% of the criteria or higher, whereas men will apply if they only match 50%. If you write a job description that includes every experience and skill under the sun because you want to get great resumes, what you’re actually doing is turning off the candidates who are reading that job description and believe that, if they don't have 90 percent or 100 percent of the criteria, they're not going to be eligible for the job. Rethink your job descriptions: do not gender the job descriptions and make sure that they're not overstuffed. Write it for what are you looking for and focus on what is important. You’ll be surprised at the resumes it brings in.

Listen to Agent of Influence Episode 30 featuring Diana Kelley
[post_title] => Q&A: Diana Kelley Discusses ROI, Application Security, and Inclusivity [post_excerpt] => Read this blog to learn security expert Diana Kelley’s insights on communicating cybersecurity ROI, how to build an appsec program, and hiring for inclusivity. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => diana-kelley-roi-application-security-inclusivity [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:57 [post_modified_gmt] => 2022-12-16 16:51:57 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=25984 [menu_order] => 381 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [41] => WP_Post Object ( [ID] => 25942 [post_author] => 65 [post_date] => 2021-07-16 17:06:17 [post_date_gmt] => 2021-07-16 22:06:17 [post_content] =>

On July 16, 2021, NetSPI Managing Director Nabil Hannan was featured as a guest contributor for TechTarget:

At the end of the day, for those of us on DevSecOps teams, it is all about managing risk, even in the highly regulated healthcare industry. Compliance around medical records and privacy concerns is a driver, so development and security professionals must take aggressive steps to prioritize risk management as the healthcare industry continues to be a frequent target of bad actors. According to Gartner, the worldwide end-user spending on public cloud services is forecasted to grow 18.4% in 2021 to a total of $304.9 billion, up from $275.5 billion in 2020. "The pandemic validated the cloud's value proposition," Gartner Research Vice President Sid Nag said.

The monetary loss from cybercrime goes beyond just affecting healthcare with an estimated $945 billion cost in 2020, according to McAfee. For those working in the healthcare industry, realize that a 2020 breach analysis report by IBM and Ponemon Institute found that healthcare breaches were the costliest. In other words, not managing risk is expensive.

Gartner also reported COVID-19 forced organizations to preserve cash and optimize IT costs, support and secure a remote workforce, and ensure resiliency. And the cloud became a convenient means to address all three. If this scenario sounds familiar to your organization, the following are four insights to consider that will help to protect data in the cloud.

Read Nabil's 4 tips for secure cloud migration on TechTarget's SearchSecurity: https://searchsecurity.techtarget.com/post/4-healthcare-risk-management-tips-for-secure-cloud-migration

[post_title] => Tips for a secure cloud migration for Healthcare [post_excerpt] => From improving the security posture and updating threat modeling to securing cloud data, learn about four risk management tips for healthcare organizations migrating to cloud. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => techtarget-4-healthcare-risk-management-tips-for-secure-cloud-migration [to_ping] => [pinged] => [post_modified] => 2023-08-22 09:16:41 [post_modified_gmt] => 2023-08-22 14:16:41 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=25942 [menu_order] => 382 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [42] => WP_Post Object ( [ID] => 25569 [post_author] => 65 [post_date] => 2021-06-15 07:00:00 [post_date_gmt] => 2021-06-15 07:00:00 [post_content] =>

It is amazing how much the cybersecurity industry has grown and evolved over the years. If you just look back even just a couple of years, the strategic conversations we were having have certainly changed. The space is evolving, and each of us in the industry are having to evolve with it to stay current. One area that has evolved greatly over time is risk management, specifically the role of the Chief Risk Officer.

To dig deeper on its evolution, I sat down with CEO and founder of Risk Neutral Jeff Sauntry on the Agent of Influence podcast, a series of interviews with industry leaders and security gurus where we share best practices and trends in the world of cybersecurity and vulnerability management. Read on for highlights from our conversation around risk management, the role of the Chief Risk Officer, and more.

Nabil: Let's start by talking about risk management. How did you make that transition from cybersecurity to risk management?

Jeff: For me, it was a natural evolution to upskill my vocabulary as I started interacting with more senior business leaders and board members. When the board members and the C-suite have normal discussions, they're still discussing challenges and opportunities, but they're speaking in terms of risk, cost, and outcomes. In cybersecurity, we're often discussing threats and consequences. Something was getting lost in translation, so I decided to build on my strong technical and cybersecurity background and dig into risk management and the ability to become a more effective communicator.

Nabil: What would you say are some of the key characteristics that make someone great at risk management?

Jeff: My whole career had risk management components to it, but I did not yet understand risk as an empirical domain. That’s one of the reasons I chose to make this pivot. I also made the investment of time and resources to go to Carnegie Mellon and get my Chief Risk Officer certification. What was great about that is I went from being very myopic, maybe talking about technology, operational, or compliance risk, and then opening my eyes to the fact that there are five major risk categories that every business has to worry about: Strategic risk – which is by far the most important if you don't get that one right, nothing else matters – then operational, finance, compliance, and then reputational risk comes into play if any of the first four fail.

Nabil: Tell me more about your experience at Carnegie Mellon.

Jeff: Most of us in cybersecurity are very familiar with some of the great work Carnegie Mellon has done with the maturity model of Capability Maturity Model Integration (CMMI), the insider threat program, and they have been a great partner with the government in terms of coming up with funded cybersecurity programs. I was familiar with the quality of the Carnegie Mellon products and insights, and when I read the curriculum, I thought to myself, “this is going to be really awesome.” One thing I wanted to avoid was that I didn't want the course to be comprised of completely fintech leaders. For a lot of people fintech and financial services firms lead the way in terms of Chief Risk Officers and managing risk from a quantifiable perspective. But I knew the risk domain was much bigger and I wanted to be a well-rounded risk professional. Having a very broad group of peers in my cohort really helped me as well as the caliber of instructors that they brought in that could talk about the different ways to look at risk. I feel that I can now talk about enterprise risk management programs and not have such a myopic view around cybersecurity-, technology-, or compliance-related risk.

Nabil: Do you think the way organizations approach cybersecurity risk today needs to evolve?

Jeff: One hundred percent! It's one of those things that you're embarrassed about because you've been part of the problem for so long. We have to take a hard look in the mirror. I've looked back at some of the conversations I’ve had and they're almost cringeworthy. Given the knowledge I have gained in the last two years about risk management, I wish I could go back and redo conversations with certain clients. 

Nabil: From your experience, how has the role of the Chief Risk Officer evolved?

Jeff: A big part of this evolution is the cybersecurity profession. In general, cybersecurity is very focused on technical skills. That's naturally how a lot of us come up through our profession and education. But, it's even more important to understand that if you can't explain the outcome of your results or your findings, it's not going to resonate with clients. It's as if you never did a security engagement if you can't get the message or the impact across. That's where I think the risk management professional is evolving. Improving soft skills that so that cybersecurity risk can have a seat at the table rather than someone coming in to tell them that the sky is falling. The Chief Risk Officer has to be a true peer to the rest of the C-suite, they should even have a solid line into the board of directors. Most companies should think about having a dedicated Risk Management Committee at the management level that's complemented by one at the board level so that risk gets the right amount of time and attention. Then, you’ll have people with the right skill set in the room having the right discussions. 

One of the important things that came out the financial services industry is that they found if you embed risk managers structurally within each business unit there to please their boss and rubber stamp high risk decisions, it can end badly. This is part of what got us into the problem of the big financial meltdown in 2008/2009. It should have been a canary in the coal mine moment for risk management as a profession to say, “you have to be very careful about allowing the Chief Risk Officer to operate independently.” They need the right reporting structures and shouldn’t be allowed to be fired on a whim because they raised their hand and said, “I think this is a little too risky for us.” So, I think the evolution of the chief risk officer is at a very exciting point in time right now.

Nabil: Let's talk a little bit about your advisory board work. Do you have any advice for others who are looking to work in that capacity?

Jeff: You need to be very pragmatic, just like you would plan your secondary education and your master's degree in your career. From a board journey perspective, it's very much the same thing. You should start with an organization that you’re passionate about in order to understand: What are the procedures? What are the roles that are played? What are the different committees? Then, as you decide that you want to pursue service on a private board or a public board, think about the additional skill sets that you may need related to your fiduciary responsibilities and insurance and what are some of the personal and professional liabilities. Set a game plan for yourself, make some investments of time and money, and really figure out what it takes to be a board professional. I think it's very worthwhile. People with a strong technical and cybersecurity background definitely have something to contribute to advisory boards from a cognitive diversity perspective as organizations face digital transformations and threats from a wider range of actors each year.

Nabil: You are a scuba instructor and a captain of the US Merchant Marines. What parallels do you draw between being a scuba instructor or captain and risk management?

Jeff: All of us have something to learn from an environmental, social, and governance perspective. One of the reasons I'm a merchant marine captain is that people covet what they know. I thought it was extremely important to get people under the water and really understand things like what plastics are doing to our oceans to understand that, yes, the stuff you throw out your car actually does make it into environments that we care about.

Everything related to instructing scuba is about risk management. The standards they have for teaching, how many students you can have per instructor, the burden being on the instructor to determine whether it's safe to do certain things, the insurance I have to carry – all that stuff is designed to minimize risk to the students and staff. It's incredible how they handle violations of policy. There's a professional journal and if somebody does something wrong, they put it out there for everyone to learn from. 

The reality is when you take those people out onto the ocean and you're responsible for them, you need to bring them back healthy and safe. This comes down to a couple thing: What experiences do I bring to those situations based on the training I've had on the water? What is the quality of the vessel and the equipment that I'm relying on to help me deal with those situations? How prepared am I for this situation? And those are the three things as a captain that you can control.  

Those core concepts resonate with cybersecurity well. How prepared are you to do the job you've been asked to do as part of a team? How well have you prepared your organization to deal with a specific threat? That prudent mindset of being a good steward for the people you're responsible for resonates with people in cybersecurity as well.

Agent of Influence - Episode 026 - The Evolution of Risk Management and the Chief Risk Officer - Jeff Sauntry
[post_title] => The Evolution of the Chief Risk Officer [post_excerpt] => Read highlights from a cybersecurity podcast featuring Jeff Sauntry. We discuss risk management, the role of the Chief Risk Officer, and more. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => evolution-chief-risk-officer [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:50:48 [post_modified_gmt] => 2022-12-16 16:50:48 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=25569 [menu_order] => 396 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [43] => WP_Post Object ( [ID] => 25326 [post_author] => 65 [post_date] => 2021-05-11 07:00:00 [post_date_gmt] => 2021-05-11 07:00:00 [post_content] =>

The scope of healthcare data is remarkable. It’s no wonder healthcare cybersecurity is a growing concern as security professionals are challenged by managing and protecting the immense amount of personally identifiable information (PII) and protected health information (PHI) housed in their systems. 

Introduce a public health pandemic to the threat landscape, and the healthcare data management and security challenge grows exponentially. In 2020, more than 29 million healthcare records were breached due to the 25 percent year-over-year increase in healthcare data breaches, according to HIPAA Journal’s analysis of the U.S. Department of Health and Human Services healthcare breach data figures. 

During the 2021 Cyber Security Hub Healthcare Summit, NetSPI managing director Nabil Hannan and RxMx senior director of engineering Jesse Parente sat down to discuss the world of healthcare data management – notably, how to manage sensitive data securely. They explore the healthcare industry’s regulatory pressures and share insights on how to collect, store, and manage healthcare data securely and look at your data security program holistically using threat modeling and design review initiatives. Additionally, with the pandemic as a catalyst for digital transformation in the healthcare industry, cloud adoption has soared. Nabil and Jesse discuss the benefits of the cloud for data management, along with its security considerations. 

Continue reading for highlights from the discussion or watch the full session online here

In a compliance driven industry, such as healthcare, why is risk-based security so critical?

Jesse Parente: Risk-based security in general, regardless of the industry, is critical. At the end of the day, security is about managing risk. The easiest, and the most obvious answer in healthcare is, it can cost you if you're not focusing on risk. I was looking into the 2020 breach analysis report by IBM and Ponemon Institute and healthcare breaches were the costliest. That's mostly due to the fact that it's a very regulated market. You've got laws like HIPAA, which was formed and assigned in 1996, so it's rather old now. And it's actually a fairly low bar if you think about it. For example, encryption was considered an optional item back then. But in 2009, the HITECH Act was signed into law, and that gave HIPAA some teeth: Breach notification requirements and additional fines for non-compliance. There were almost 730 reported breaches in the last two years. If you do some simple math, that's about a breach a day… reported breaches. Now, the average cost per record is about $150-$200 and the average number of records exposed or lost was over 3,000. It's costly to not focus on risk. 

Nabil Hannan: Speaking of breaches and the data involved, ultimately securing personal data is important. People understand why their personal, non-public data should be kept private. If someone else has that information, they could easily impersonate you and ruin things like your credit, or your records, or even steal your identity. And that's a problem. But we also want to think about healthcare data and the complexities that surround it. For example, there are a lot of children whose data go into medical systems because they go see the doctor. But for a lot of non-adults, when that happens, their parent’s information is also associated with that record. Now you have multiple people whose information is available, their insurance information, home address, financial information in many cases. The importance of securing personal data, especially in the medical field, becomes exponentially more important because of the complexities of your family and relatives whose data may also be associated with your personal records. And the challenge there too, is personal information is something that you can't easily change. If you, for example, were part of a breach where an attacker accessed your credit card number, you can call up the company and immediately change that number and a new card sent to you. If your social security number gets breached, you can't change that. Or if your home address is breached, you're not going to move in order to change that. There are certain data types that are permanent and cannot be changed, data that presents a higher risk if breached – data that is often found in healthcare systems. 

How can healthcare IT and security leaders securely collect, store, and manage data?

Jesse: Before you even collect data or store it or manage it, it's really important to understand what the data is. Also, there's a concept of minimum necessary. Do you need this data? You have to do an analysis to understand what the data is and if it is sensitive. Classifying data is a really important piece when you're going to collect and store it. Additionally, pay attention to where that data goes downstream. This is the management aspect of it. Do the vendors that you work with need or have access to some of this data? In 2013, there was a final Omnibus ruling, which was an addition to HIPAA, and this essentially held business associates or vendors that you work with, accountable for non-compliance as well. So, you also have an obligation to make sure that your vendors are doing the right thing, when it comes to collecting, storing, and securing healthcare data.

Nabil Hannan: There is the actual safe way to store and manage data and then there's the part of making sure you have the data that's relevant, and you're only storing and managing the data that you truly need to maintain your business functionality. A significant amount of breaches lately, over the last five years or so, happened because of simple misconfigurations of data storage. So often we see that you may have data stores, such as Amazon S3 buckets, that are meant to be private and internal, but because of the misconfiguration, they're publicly available to the internet. Understanding what you're collecting and how it needs to be stored and, then, have automation and processes regularly checking to ensure that the attack surface that your data is exposed to is managed correctly is really important. That's ultimately the first step: Making sure you have processes in place to ensure that you're not inadvertently making a configuration change that leaves you exposed. 

What can healthcare organizations do now to evaluate their current security posture?

Nabil: There are a lot of common security tactics that healthcare organizations are using today. They are performing regular security scans using automated tools, making sure that their external attack surface is not easily reachable by script kiddies that are also running similar tools on the internet, performing penetration tests with manual humans testing and breaching systems to identify exploitable areas. To take these initiatives to the next level, start looking at things that tools and automation cannot identify, which is design flaws. To describe this, I typically use home inspection as a parallel. If you've bought a house, you've probably completed a home inspection. A person shows up and they inspect the house at a basic level, checking the locks, windows, insulation, furnace, roof, etc. to see if they work. But looking at a home from the outside in, you cannot truly determine if the house was designed properly. To understand if the load bearing wall has enough support or not, or to understand if the studs are spaced correctly or not, they have to look at the blueprint and look at the internals of how the house was designed. Similarly, for any system, you have to look at the threat model and how the different components of a system interact with each other. Threat modeling is so important because it is a manual process. Tools are not able to tell you what the greatest risks are. It requires a human to think critically and be clever. With threat modeling, you’re identifying what the assets are in your systems and the threat actors that you should care about. Based on that, you define the threat vectors that the attackers would use to try and get to your assets. With this information, you can start assigning trust zones within your systems and determine how those interactions occur and review whether you have the right controls in place, like authentication, authorization, encryption, error handling and logging and things of that nature. I think threat modeling is the next step we need to take as an industry because there is a whole different classification of vulnerabilities and issues that come from the design side. Empirically, we see 50 percent of security issues are at the design level and 50 percent are what we call bugs. We have to start doing threat modeling to uncover the inner workings of how our systems are working and interacting together and whether they pose a threat.

Jesse: I think what organizations can do to evaluate their posture is get a baseline. There are tons of ways to do this with frameworks and certifications. One of my favorites is the Cloud Security Alliance, an organization that's purpose built to support the transition to the cloud. They have something called the Cloud Controls Matrix that helps organizations align to various frameworks, whether that’s NIST, ISO27001, or HIPAA. When it comes to data, oftentimes the software world is pushing these activities to the left, or the idea of shifting left, and that means doing these security-based activities earlier on in a software development lifecycle. A great example is threat modeling. In the design phase, understand what your threats are and figure out ways to mitigate them. In the cloud, we’re shifting, too. The four walls and the castle approach of securing a perimeter, those days are gone. There is this shift in the landscape changes as well, as we now see a lot of organizations operating partially in the cloud. Because the data is potentially publicly available, we have to find ways to identify where the data is, where the data is going to go, and how to secure it. There are many cloud providers out there, and with that there are many services to help you manage the data and have visibility into the cloud. And for me, that visibility is one of the key things that has helped my organization manage healthcare data securely. Organizations not leveraging the cloud do not necessarily have that visibility. The last thing to remember is that we need to hold our vendors accountable, understand their security posture and what activities are they doing to help secure the data we share.

How has the pandemic triggered the increased adoption in the cloud?

Jesse: Almost overnight, many organizations were forced to have data and resources available remotely and externally accessible. VPNs were overloaded and people scrambled to find a physical space to work outside of the office. The cloud was – and is – an opportunity to make things available. As we saw in our viewer poll, 42 percent of participants are operating partially in the cloud. It’s clear people are experimenting with the cloud and this comes with its own challenges, as organizations haven't had the opportunity to fully vet and evaluate the cloud. Remember, we should consider cloud providers vendors and need to evaluate them as such – and that requires time. That's the challenge that's missing from this rush to make things available and it can create serious problems.

Nabil: There is another gap in our knowledge as employees don't necessarily know how their organization manages its data. It may be completely invisible to us on whether an organization increased adoption of the cloud. And that's how it should be. The whole purpose of cloud-based systems is the ability to scale as needed and have elasticity. Teleconferencing systems are a good example of this. The reason Zoom could support the huge demand of users as the pandemic started was because of the cloud. If they were not using cloud infrastructure for their systems, they would not have been able to support the large number of users because it was not expected or planned. And then there are security considerations to think about too. Just because you're in the cloud and the cloud providers are providing you with certain baseline of security controls and protection, that doesn't mean that you don't have to think about security anymore. Ensure you understand the implications of your transition from a traditional data center deployment to the cloud, and ensure you're maintaining regular best practice initiatives around things like configuration reviews, design reviews, and threat modeling. Be sure to understand the risk implications of the decisions you're making. 

healthcare data protection in a pandemic driven world
[post_title] => Q&A: How to Securely Manage Healthcare Data [post_excerpt] => Explore the world of healthcare data management – notably, how to manage sensitive data securely with threat modeling and cloud security. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => how-to-securely-manage-healthcare-data [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:50:52 [post_modified_gmt] => 2022-12-16 16:50:52 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=25326 [menu_order] => 408 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [44] => WP_Post Object ( [ID] => 25314 [post_author] => 53 [post_date] => 2021-04-13 07:01:55 [post_date_gmt] => 2021-04-13 07:01:55 [post_content] =>
Watch Now

This session was originally shown at Cyber Security Digital Summit's online event for Healthcare and Life Sciences.

In this session, NetSPI’s Nabil Hannan and RxMx’s Jesse Parente will explore the world of healthcare data management – notably, how to manage sensitive data securely. 

Delve into the healthcare industry’s regulatory pressures and the biggest cyber threats it faces today, then hear insights on how to:

  • collect, store, and manage your data securely
  • look at your data security program holistically (threat modeling and secure design review)

Lastly, with the pandemic as a catalyst for digital transformation in the healthcare industry, cloud adoption has soared. Nabil and Jesse will discuss the benefits of the cloud for data management and review its security considerations.

[wonderplugin_video iframe="https://youtu.be/2Pcub41I-Qc" lightbox=0 lightboxsize=1 lightboxwidth=1200 lightboxheight=674.999999999999916 autoopen=0 autoopendelay=0 autoclose=0 lightboxtitle="" lightboxgroup="" lightboxshownavigation=0 showimage="" lightboxoptions="" videowidth=1200 videoheight=674.999999999999916 keepaspectratio=1 autoplay=0 loop=0 videocss="position:relative;display:block;background-color:#000;overflow:hidden;max-width:100%;margin:0 auto;" playbutton="https://www.netspi.com/wp-content/plugins/wonderplugin-video-embed/engine/playvideo-64-64-0.png"]

[post_title] => Healthcare Data Protection in a Pandemic-Driven World [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => healthcare-data-protection-in-a-pandemic-driven-world [to_ping] => [pinged] => [post_modified] => 2023-09-01 07:06:21 [post_modified_gmt] => 2023-09-01 12:06:21 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=25314 [menu_order] => 60 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [45] => WP_Post Object ( [ID] => 24626 [post_author] => 53 [post_date] => 2021-04-07 22:07:45 [post_date_gmt] => 2021-04-07 22:07:45 [post_content] =>
Watch Now

In this presentation, NetSPI COO Charles Horton and Managing Director Nabil Hannan explore the evolution of “as a Service” offerings, and how these offerings are being applied successfully in application security programs. If you are working with the right partner, “as a Service” should go far beyond the traditional automated or cloud-based delivery models for both technology and expertise. When applied correctly, it can dramatically influence how internal resources and capital are directed and deployed and can provide the needed support to continue to improve and evolve your application security program and collapse timeframes for remediation. Unlike a traditional “as a Service” technology solution, AppSec as a Service combines both technology and human talent that is packaged for quick and easy consumption. 

Through this discussion, learn:

  • the core criteria that define an “as a Service” partnership
  • the different options in an AppSec as a Service offering
  • how AppSec as a Service can help you improve and evolve your application security program

As these offerings continue to increase and more vendors jump on the “as a Service” bandwagon, this webinar should serve as a guide to help organizations evaluate potential providers and ensure they are getting the most out of their relationship.

[wonderplugin_video iframe="https://youtu.be/W6JoXUFHlR8" lightbox=0 lightboxsize=1 lightboxwidth=1200 lightboxheight=674.999999999999916 autoopen=0 autoopendelay=0 autoclose=0 lightboxtitle="" lightboxgroup="" lightboxshownavigation=0 showimage="" lightboxoptions="" videowidth=1200 videoheight=674.999999999999916 keepaspectratio=1 autoplay=0 loop=0 videocss="position:relative;display:block;background-color:#000;overflow:hidden;max-width:100%;margin:0 auto;" playbutton="https://www.netspi.com/wp-content/plugins/wonderplugin-video-embed/engine/playvideo-64-64-0.png"]

[post_title] => A Key Ingredient in a World Class Application Security Program: AppSec as a Service [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => a-key-ingredient-in-a-world-class-application-security-program-appsec-as-a-service [to_ping] => [pinged] => [post_modified] => 2023-06-22 19:51:57 [post_modified_gmt] => 2023-06-23 00:51:57 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=24626 [menu_order] => 59 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [46] => WP_Post Object ( [ID] => 24961 [post_author] => 65 [post_date] => 2021-04-06 07:00:18 [post_date_gmt] => 2021-04-06 07:00:18 [post_content] =>

On April 6, 2021, NetSPI Managing Director Nabil Hannan was featured in TechTarget:

Chief information security officers, or CISOs, around the world have come to learn from the SolarWinds manual supply chain attack that insider threats are a real issue, one that must be prioritized in 2021. The breach also brings to light an underdiscussed application security challenge: developers writing malicious code that can later be exploited.

The frequency and financial impacts of insider threats have grown dramatically in the past two years. In a recent Ponemon Institute report, the overall average cost of insider threats per incident increased by 31% from $8.76 million in 2018 to $11.45 million in 2020. In addition, the number of incidents has increased by a staggering 47% in just two years, from 3,200 in 2018 to 4,716 in 2020.

Building off the lessons learned from the SolarWinds breach, here are six steps CISOs can take to prevent insider threats.

  1. Change your mindset around your threat landscape

  2. Employ threat modeling

  3. Map out potential insider threat exposure

  4. Enact a proactive and ongoing insider threat detection governance program

  5. Define risk scenarios and escalation steps

  6. Push for holistic solutions for long-term protection

Read the full article here: https://searchsecurity.techtarget.com/post/6-ways-to-prevent-insider-threats-every-CISO-should-know

[post_title] => TechTarget: 6 ways to prevent insider threats every CISO should know [post_excerpt] => On April 6, 2021, NetSPI Managing Director Nabil Hannan was featured in TechTarget. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => techtarget-6-ways-to-prevent-insider-threats-every-ciso-should-know [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:50:55 [post_modified_gmt] => 2022-12-16 16:50:55 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=24961 [menu_order] => 413 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [47] => WP_Post Object ( [ID] => 23762 [post_author] => 65 [post_date] => 2021-03-30 07:00:25 [post_date_gmt] => 2021-03-30 07:00:25 [post_content] =>

NetSPI Managing Director Nabil Hannan was featured on the Open Web Application Security Project (OWASP) Portland, Oregon Chapter podcast. During the interview, Nabil and the hosts, David Quisenberry and John L. Whiteman, discuss mentorship, advice for entry-level pentesters, security hiring amid the cyber security skills shortage, advice for companies building a security program, cyber security policy, and more. Listen to the full episode or continue reading for highlights from the conversation.

John: A lot of people in our chapter want to be pentesters. What advice do you have for them, especially coming from your direction as a consultant?
Nabil: When I built a pentesting practice, I was tasked with hiring and training a team of pentesters. The saying I picked up from that experience is, “I can teach someone to be smart, but I can’t teach someone to be clever.” So, if you want to be a pentester, truly a pentester that’s finding interesting and unique things, it requires you to think creatively and think outside the box. The technical part of pentesting can be learned or acquired, or you can get help, but it ultimately is someone that is clever who succeeds at pentesting.

John: Are there certain security domains that we simply don't have enough skilled people for?
Nabil: Today, there is demand for security professionals in general across every domain. It is evident that we have a shortage in security expertise across the board. Security is still in its infancy. If you want to get in, regardless of which area, whether you want to test autonomous cars, or mobile applications, or medical devices, there’s a need for security in all of those things. What I would recommend to people is, figure out what you are truly interested in and figure out if there is an area or a domain that really excites you. Find something that you understand and are passionate about and decide if the security aspect is a fit for you.

John: Has there ever been a time where you have been challenged with communicating results or recommendations to clients that may have differing levels of security understanding?
Nabil: It’s a common situation that we find ourselves in. You have to speak the right language for your audience. And if you are not doing that, it can be a challenge. It’s even more challenging when you have multiple levels of people in your audience that have varying degrees of technical or security understanding.

An example that comes to mind is a secure code review assessment I completed where we found cross site request forgery (CSRF). Nobody seemed to pay attention to it because we rated it medium severity given you had to be authenticated to really do any harm. The leadership team came to us, and said, let us know if you find anything critical, then we will decide if we need to push the production date. We replied, the vulnerability may not be critical, but it can still cause a lot of damage. To communicate the severity of the damage effectively we decided to create a proof of concept to show the impact and we were able to effectively show how easy it would be to exploit that vulnerability. As a result, they pushed their deployment to focus on remediation and better secure the application, based on our recommendation.

John: Its exploits that speak louder than words, if you just give two-dimensional bug numbers or risk rating, it doesn’t mean anything until you bring it to life as what you did here.
Nabil: As a consultant, your job is to help people understand what the true impact is based on the business that is being supported. Make sure you’re speaking the right language, the right message, and the impact defined from the business perspective and the technical perspective.

David (aka: Quiz): We often get asked by the young people in our chapter, do you need to have some time as a developer before going into something like pentesting?
Nabil: There are two ways to think about it. I come from a software development background and when I look at vulnerabilities, I can dissect them by really understanding the inner workings of the software and where it failed. If you don’t have software development experience, you can still be a tester. You can still run scripts, you can probably still run tools, and you can learn basic scripting to build automation and identify vulnerabilities. If you want to be an application pentester, chances are if you have a better understanding of how software systems are built, it will give you an advantage in coming up with creative ways to make those systems break. Is it a requirement? I don’t think so. But some of the best pentesters I know do come from a software development background.

John: What advice do you have for companies building a security program?
Nabil: Being in the security space, people naturally think security is the most important thing. That being said, when trying to figure out what’s the right security strategy for your organization, you first have to learn how the business makes money. That’s the first thing you need to learn as a security professional.

Then, align your security practices and efforts to enable the business to be better versus thinking of security as something separate. Organizations that are more immature or just getting started with security often view it as a roadblock or cost center, something that is going to only slow them down. But more mature practices adopt security culture over time and incorporate it into their processes. They learn do it in a way where it enables the business. This allows you to have a program that is mature, with security integrated. Understand the appetite for the organization and what threshold of risk you are willing to take when designing and defining the program. Try as hard as possible to make security a part of the process without it becoming a friction point for the business to function. For example, trigger out-of-band activities for security reviews in an automated fashion that won’t block your business flow and understand your risk appetite and have the ability to stop a business process from going forward if it is too risky. Being able to build that level of culture, communication, buy-in, and metric alignment is key.

John: …Should this process start with policy?
Nabil: Policy comes from somewhere even more important. It comes from your customers. Ask what security expectations your customers have. Then, depending on the business, there’s also regulation and compliance. Based on these two components, you need the right structures of leadership and culture to get buy-in across the organization to make security a part of your regular workflow versus it being a separate function.

Quiz: A challenge I have had this past year, is ensuring our security conversations are communicated correctly to others… product, customers, engineering, leadership, etc.
Nabil: Human behavior is something that I am fascinated by – how people can react to the same message but deliver it differently.

At NetSPI, our Resolve™ threat and vulnerability management platform is used by many of our customers internally to track and communicate their program metrics and dashboarding. If you start showing metrics like number of open vulnerabilities by business unit, it creates a very different effect than if you were to name the open vulnerabilities with the leader of that business unit. It builds a sense of competition to be better. When we work with customers to build threat and vulnerability management programs, security champions, or training curriculum, we try to focus on the human element of it to get people excited to improve their security posture rather than see it as a hinderance.

Quiz: What were your favorite Agent of Influence podcast episodes to date?
Nabil: My favorite was the first podcast episode I did with Ming Chow, a professor at Tufts University. We talked about computer science and education around security and we even touched around interesting topics such as, how he feels about teaching someone who could potentially do bad things.

During the episode with the former CISO of the CIA Bob Bigman, he provided really great insights around the life of the CISO, what they do, and what they have to live through. He helped define and change the focus of the CISO career.

Jeff Williams, the CTO of Contrast Security was a good one, too. Him and I recently did a joint webinar, How to Streamline AppSec with Interactive Pentesting.

And Quiz, I’m not saying this because you're on this interview, but your interview was great too. Especially the book recommendations near the end. I had friends reach out the day it posted telling me how much they enjoyed the interview.

[post_title] => Lessons Learned Building a Penetration Testing Program: OWASP Portland, OR Podcast with NetSPI’s Nabil Hannan [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => owasp-application-security-podcast [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:50:56 [post_modified_gmt] => 2022-12-16 16:50:56 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=23762 [menu_order] => 416 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [48] => WP_Post Object ( [ID] => 25007 [post_author] => 53 [post_date] => 2021-03-19 16:12:47 [post_date_gmt] => 2021-03-19 16:12:47 [post_content] =>
Watch Now

This session was originally shown at InfoSec Finance Connect 2021.

Keeping pace with the threat landscape has become a constant challenge for financial services organizations. Tapping into their years of financial security leadership, Navy Federal Credit Union’s Larry Larsen, BMO Harris’ Yi Li, and NetSPI’s Nabil Hannan assess the current threat landscape and share invaluable advice on how to protect your organization.  

Watch this on-demand presentation from the InfoSec Finance Connect virtual conference to hear expert insights on:

  • Which security threats are on the rise – and why 
  • Recent financial breaches, such as Equifax and Capital One, and what other financial companies can learn from the incidents
  • The importance of threat intelligence and how financial institutions can stay informed of the current environment 
  • How companies can defend against the ever-changing threat landscape with a stable and systematic approach 
  • How to determine the right security activities for your organization 
  • The persistence and prevalence of phishing and business email compromise (BEC) 

[wonderplugin_video iframe="https://youtu.be/vPY4kMuFZsU" lightbox=0 lightboxsize=1 lightboxwidth=1200 lightboxheight=674.999999999999916 autoopen=0 autoopendelay=0 autoclose=0 lightboxtitle="" lightboxgroup="" lightboxshownavigation=0 showimage="" lightboxoptions="" videowidth=1200 videoheight=674.999999999999916 keepaspectratio=1 autoplay=0 loop=0 videocss="position:relative;display:block;background-color:#000;overflow:hidden;max-width:100%;margin:0 auto;" playbutton="https://www.netspi.com/wp-content/plugins/wonderplugin-video-embed/engine/playvideo-64-64-0.png"]

[post_title] => Assessing The Threat Landscape And How To Protect Your Organization in 2021 [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => assessing-the-threat-landscape-and-how-to-protect-your-organization-in-2021 [to_ping] => [pinged] => [post_modified] => 2023-09-01 07:07:24 [post_modified_gmt] => 2023-09-01 12:07:24 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=25007 [menu_order] => 61 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [49] => WP_Post Object ( [ID] => 22992 [post_author] => 53 [post_date] => 2021-03-11 07:00:38 [post_date_gmt] => 2021-03-11 07:00:38 [post_content] =>
Watch Now

There simply isn’t enough time and resources to perform pentesting on everything developed in the worlds of Agile and DevOps where release cycles occur daily – or even faster.

Discover what next-generation pentesting looks like when combined with interactive application security testing (IAST). Attendees will learn:

  • Why pentesting shouldn’t compete with other AppSec testing tools and waste time with things already thoroughly tested
  • How pentesters should partner with development teams to gain deeper insights into individual applications
  • How pentesting can be adapted to modern application complexities such as APIs, microservices, etc.
  • How pentesting should be combined with security instrumentation for tracking data flows, control flows, backend connections, etc.
  • And more!

[wonderplugin_video iframe="https://youtu.be/Ajqgqwo1Plw" lightbox=0 lightboxsize=1 lightboxwidth=1200 lightboxheight=674.999999999999916 autoopen=0 autoopendelay=0 autoclose=0 lightboxtitle="" lightboxgroup="" lightboxshownavigation=0 showimage="" lightboxoptions="" videowidth=1200 videoheight=674.999999999999916 keepaspectratio=1 autoplay=0 loop=0 videocss="position:relative;display:block;background-color:#000;overflow:hidden;max-width:100%;margin:0 auto;" playbutton="https://www.netspi.com/wp-content/plugins/wonderplugin-video-embed/engine/playvideo-64-64-0.png"]

[post_title] => How to Streamline AppSec with Interactive Pentesting [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => how-to-streamline-appsec-with-interactive-pentesting [to_ping] => [pinged] => [post_modified] => 2023-06-22 19:39:42 [post_modified_gmt] => 2023-06-23 00:39:42 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=22992 [menu_order] => 62 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [50] => WP_Post Object ( [ID] => 23129 [post_author] => 65 [post_date] => 2021-03-09 14:02:00 [post_date_gmt] => 2021-03-09 14:02:00 [post_content] =>

Now that the dust has settled on the recent Oldsmar, Florida water treatment facility breach, let’s take a deeper look at some of the lessons we can learn from the incident.

For those unfamiliar with the breach, on February 5, hackers accessed a Florida water facility that treats water for around 15,000 people near the Tampa area. The hackers were able to increase the amount of sodium hydroxide, or lye, distributed into the water supply, which is dangerous to consume at high levels. Luckily, there was an attendant that noticed the suspicious behavior and reported it, mitigating the breach without consequence.

They gained access to the computer system through TeamViewer, a popular remote access software application commonly used for remote desktop control, screen sharing, online meetings, and file transfers. Third party IT support is a common use case for TeamViewer and, according to its website, it has been installed on over 2.5 billion devices today. There has not been confirmation on how the attacker got ahold of the remote access system credentials, but we can speculate that an employee of the water facility fell victim to a social engineering attack, such as phishing.

Given the breach itself was not sophisticated and its impact was minimal, many in the cyber security community are surprised that this is making national headlines. But it is the potential of what could have happened that is causing a panic – and rightfully so.

Investigative journalist Brian Krebs interviewed a number of industrial control systems security experts, and discovered that there are approximately 54,000 distinct drinking water systems in the U.S. Of which, nearly all of them rely on some type of remote access to monitor and/or administer these facilities. Additionally, many of these facilities are unattended, underfunded, and do not have 24/7 monitoring of their IT operations. In other words, this type of breach is likely to happen again and, if we don’t take the necessary security considerations into account, the consequences could be devastating.

The industrial control systems and utilities notoriously prioritize operational efficiencies over security. This is a wakeup call for the industry to start looking at their systems from a security and safety perspective. To get started, here are the key lessons I learned from the incident.

Lessons Learned from the Florida Water Facility Breach

Many of the reports written about the breach are centered around remote access. That is not surprising as the security concerns of remote access and host-based security have escalated amid COVID-19. Host-based security represents a large attack surface that is rapidly evolving as employees continue to work disparately.

Think back to March 2020. Organizations needed to get people online fast and began enabling Remote Desktop Protocol (RDP) which is known to be vulnerable. Cyber security firm Kapersky found that the number of brute force attacks targeting RDP rose sharply after the onset of the coronavirus pandemic. Further, internet indexing service Shodan reported a 41 percent increase in RDP endpoints available on the internet as the virus began to spread. When determining the type of remote access to give systems the decision should be based on the level of security desired and which type of remote access is deemed appropriate.

That being said, in my opinion there is more to learn from this incident beyond the remote access system vulnerabilities.

It is critical to analyze your security program holistically

These systems are complex and require a design-level review to understand what could go wrong rather than completing ad hoc security assessments that look at the technology separately.

For example, say you performed an assessment of your desktop images and are notified that you have TeamViewer installed as a potential risk. This is something that is likely to get written off as a valid use case because it is how the IT team accesses the computer to troubleshoot operational issues remotely. Unless you assess all the systems involved in the environment and how they work together, it can be difficult to understand the risk your organization faces.

This is where threat modeling and design reviews prove vital. According to software security expert Gary McGraw, 50 percent of application security risks come in the form of software design flaws that cannot be identified by automated means. Threat modeling and design reviews leverage human security experts to evaluate your program in its entirety and provide you with an understanding of the current level of security in your software and its infrastructure components. Threat modeling in particular analyzes attack scenarios, identifies system vulnerabilities, and compares your current security activities with industry best practices. And with a design review, you gain clarity on where security controls exist and make strategic decisions on absent or ineffective controls.

Defense in depth is non-negotiable

The software the facility uses to increase the amount of sodium hydroxide should have never been able to reach dangerous levels in the first place. When software is developed, it should be built with security and safety in mind. For example, the maximum threshold should be an amount of sodium hydroxide that is safe, not one that is potentially life-threatening.

What if it was a disgruntled employee that decided to change the amount of sodium hydroxide? Or if the technology attendant had been bribed? The outcome of the situation would have looked much different.

It’s a best practice in security to create as much segregation in your operational technology (OT), or technology that physically moves things, and information technology (IT), the technology that stores and manages data, to avoid incidents that could result in physical harm. To achieve this, defense in depth is essential.

Defense in depth is a cyber security approach that layers defensive controls to ensure that, if one control fails, there will be another to prevent a breach. Authentication and access management are protections at the front line of a defense in depth strategy and a critical security pillar for industrial control systems and utilities. For systems or tasks that can have a detrimental impact if breached, add multiple layers of authentication so that not one computer or one individual can carry out the task. Additionally, adopting the concept of Least Privilege, or only allowing employees access to the minimum number of resources needed to accomplish their tasks, would be a good practice to implement industry wide.

We are not prepared for disaster scenarios

We are reliant on the use of outdated systems that are not prepared for certain disaster scenarios. For an industrial control system to experience downtime, it does not require an adversary to compromise a system. Look at what happened with the Texas winter storm. No one expected the weather to get that bad, but we could have better prepped our systems for it.

That is the challenge with utilities and industrial control systems. If you are not preparing for adversaries in tandem with natural disasters and other unforeseen circumstances, you could have major issues to deal with in the long run.

Another key factor to consider is time. When something goes wrong, coming up with the easiest, least expensive, and most feasible solution isn’t possible because of time constraints. And with water, heat, electricity, energy, or gas companies the pressure of time is mounting because they are critical part of our lives. Say your furnace in your home breaks when it is below freezing out. You typically have two options: have someone come out and evaluate the situation, wait weeks for the part, and fix the existing furnace or buy a new one and have it installed in days. To avoid frozen pipes and infrastructure issues, most would choose the fastest option. In a recent study, those who did not test their disaster recovery plan cited time and resources as the biggest barriers.

At utility facilities, there remains a lack of awareness around cyber security. Regular tabletop exercises that simulate a crisis scenario are necessary when working with systems this complex.

The three key learnings discussed in this blog should work in concert with one another. Use the findings from your holistic security assessment and dust off your disaster recovery and incident response plans to remediate your biggest security and safety gaps – and, in turn, strengthen your defense in depth.

[post_title] => Key Takeaways from the Florida Water Facility Hack [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => florida-cyber-security-water-utilities-breach [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:50:58 [post_modified_gmt] => 2022-12-16 16:50:58 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=23129 [menu_order] => 421 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [51] => WP_Post Object ( [ID] => 21951 [post_author] => 65 [post_date] => 2021-02-26 07:00:53 [post_date_gmt] => 2021-02-26 07:00:53 [post_content] => On February 26, 2021, NetSPI Managing Director Nabil Hannan was featured in TechTarget: We're in the midst of a cybersecurity staffing crisis. Many major news outlets, such as The New York Times, have reported that unfilled jobs in the industry are expected to reach up to 3.5 million this year – leaving existing security teams stretched thin and burnt out. To make matters worse, attackers have increased their activity since the beginning of the pandemic and continue to take advantage of the prolonged crisis. In this new year, CISOs everywhere will need to shift their talent management practices in order to attract new candidates to the field and prevent employee burnout. How? Here are a few ideas.
  1. Invest in training for new employees
  2. Match people to the job, set goals and mentor
  3. View your project managers through a new lens
  4. Be careful with incentives
  5. Enable automation
  6. Encourage more people to enter cybersecurity
Read the full article here: https://searchsecurity.techtarget.com/post/6-ways-to-prevent-cybersecurity-burnout [post_title] => TechTarget: 6 ways to prevent cybersecurity burnout [post_excerpt] => On February 26, 2021, NetSPI Managing Director Nabil Hannan was featured in TechTarget. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => techtarget-6-ways-to-prevent-cybersecurity-burnout [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:50:59 [post_modified_gmt] => 2022-12-16 16:50:59 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=21360 [menu_order] => 423 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [52] => WP_Post Object ( [ID] => 21246 [post_author] => 65 [post_date] => 2021-02-09 07:00:53 [post_date_gmt] => 2021-02-09 07:00:53 [post_content] =>

Media throughout the world have reported on the SolarWinds manual supply chain attack which has created concern about cyber security and software vulnerabilities among businesses and government entities alike. What hasn’t been in the headlines outside of the cyber security world is the need to not only plan and test for external threats, but CISOs must also start prioritizing efforts around abating insider threats. In the case of the SolarWinds attack, malicious code was inserted somewhere within the supply chain as part of a software update, which then was made available to all SolarWinds customers. This insider attack has to-date impacted hundreds of private companies and government agencies. CISOs must lead their organizations in preventing both external and internal cyber security threats.

Thwarting External and Internal Threats – A Two-Pronged Approach

Protecting against internal threats should be the first prong in a threat detection program. The SolarWinds breach brings to light this under-discussed application security challenge: developers writing malicious code which can later be exploited. And while this isn’t the only means that inside threat actors can wreak havoc on an organization, the frequency and financial impacts of insider threats—defined as a careless or negligent employee or contractor; a criminal or malicious insider; or a credential thief—has grown dramatically in just the past two years. In a recent Ponemon Institute study, the overall average cost of insider threats per incident increased by 31% from $8.76 million in 2018 to $11.45 million in 2020. In addition, the number of incidents has increased by a staggering 47% in just two years, from 3,200 in 2018 to 4,716 in 2020. This data shows that insider threats are still a lingering and often under-addressed cyber security threat within organizations, compared to external threats.

Thwarting external threats is the second prong of a threat detection program. As explored in-depth in our whitepaper, How to Build the Best Penetration Testing and Vulnerability Management Program, the reality is that cyber security breaches today from outside the organization are inevitable and put companies at grave risk. One of the key cyber security weaknesses is the lack of continuous penetration testing and patching. This can turn into the “Achilles heel” of any organization’s security posture if not addressed and implemented properly. Organizations should think of pentesting as the final security gate. It ensures all other security controls and applications are working as designed from a security standpoint, an approach that is not often adopted by organizations with young or immature application security programs.

Further, organizations with a mature security program understand that point-in-time pentesting is not the best option for securing their applications and networks. You cannot test yourself to be more secure. New code and configurations are released every day; a continuous penetration testing approach can help test an entire system in totality and delivers results to customers around the clock, enabling them to manage their vulnerabilities easier and more efficiently.

Now, let’s focus on the steps to take to prevent insider threats. To do so, I believe that CISOs need a shift in vision. Most companies prioritize external threats but give a blanket of trust to the people within the organization. Now, in large part because of SolarWinds, it is apparent that organizations have to change this mindset.

Changing Your Mindset Around Your Threat Landscape

Threat modeling needs to first be adopted more widely to understand the organization’s threat landscape. It is essential to identify who would want to attack a system, and where the assets are, in order to understand the appropriate attack vectors, and to best enable the appropriate security controls. In my opinion, this involves a mindset shift. The biggest change is in moving from only looking for vulnerabilities to also looking for suspicious or malicious code. Let’s define the two threats. With a vulnerability, the threat actor interacts with the attack surface in a way that exploits a weakness. With malicious code, the threat actor is either choosing or creating the attack surface and functionality because they have control over the system internally. So, instead of the threat actor exploiting vulnerabilities in the attack surface, now the threat actor creates the attack surface and exercises the functionality that he or she implements. Given that, threat modeling should study potential threat to both vulnerabilities and malicious code, as the harm from both could cost an organization millions. Doing one type of threat modeling without the other can set your organization up with a false sense of security.

Potential Insider Threat Exposure

Job ResponsibilityPossible exposure area for threat activity
Administration or OperationsLocal area network, high access credentials, production systems
DevelopersDesign and source code; Application configurations; Third-party libraries and deployment descriptors
Control ManagementBinaries (susceptible to repackaging); Code promotion from QA to production; Encryption keys

Additionally, how you go about detecting a threat like the SolarWinds supply chain attack is vastly different from traditional pentesting, code review, or other vulnerability detection techniques. It not only requires a different type of lens on how you look at software to identify these issues, but it also requires a complete change in your organization’s internal threat detection governance process. Altogether, dealing with a threat issue once it’s identified is not as simple as going back to the developers asking them to fix them. Unfortunately, your developers could be the adversary.

Putting in Place a Proactive and Ongoing Internal Threat Detection Governance Program

To put in place a proactive and ongoing threat detection governance program you’ll first have to get buy-in from the leadership team. After all, at its core, malicious code review is a process where you theoretically treat those within your operations who have privileged access as threats. And secondly, you’ll need to educate the leadership team regularly on the scope of your malicious code review engagements. While finding malicious code is difficult and the probability is small, the risk of an insider threat is on the rise. In fact, Forrester research predicts that this year, 33% of data breaches will be caused by insider incidents.

Importantly, all of your malicious code review efforts have to be done in secrecy, involving only small teams you trust completely. It has to be a covert operation where you don’t notify or give knowledge to stakeholders in the software supply chain. They should never know that you are implementing a process to look through their code with a lens of trying to identify code that looks suspicious and potentially malicious.

Risk Scenarios and Escalation Steps to Take

Once your malicious code review regimen is in process and suspicious activity is detected, there are escalation steps that can be put in place to mitigate risk. Consider the following:

  1. Suspicious, But Not Malicious: You find something that looks suspicious or malicious but that can’t be exploited, and it may have even be left my mistake. Escalation Step: In this case, you may do nothing.
  2. Circle of Trust Invitation: You find something that looks suspicious, but you can’t get confirmation on whether it is malicious or not. Escalation Step: This is where you have to build a relationship with a trusted developer or someone from a developer organization who you can trust and can bring into your circle of trust to verify that suspicion.
  3. Passive Monitoring: You’ve found suspicious code but choose a monitoring stance. Escalation Step:This is where you do additional logging in production or potentially add some type of data layer protection that you can trigger so you can passively monitor if there’s a point in time when someone is trying to exploit the suspicious line of code.
  4. Active Suppression: You find suspicious or malicious code and work to suppress it. Escalation Step: This is where you actively either write a rule within your firewall, build a compensating function or do some type of dependency injection or weaving to actively stop that suspicious code from ever being executed.
  5. Commencement of an Executive Event: You find malicious code and have identified its source, whether it be a sole insider threat, a whole team of suspicious actors, or find threats that involve a particular department, country or line of business. Escalation Step: This step has nothing to do with software, but it has everything to do with safeguarding your organization. You will need to involve your organization’s leadership and execute some sort of severe executive level event which could include terminations of implicated employee(s) or contractors and may even involve law enforcement.

A Caveat: Another challenge with supply chain attacks: they may never happen at the code level—they may happen in the process of a piece of code being elevated from development to production. Therefore, analysis both at the code level and also at the binary level is warranted to get information in artifacts from operations themselves.

Looking holistically at supply chain attacks, the security industry does not yet have a complete solution. Long term, we need to examine how the industry approaches the evaluation and risk acceptance of third-party solutions, which could come in the form of changes to compliance requirements around least privilege, auditing, and integrity checks.

However, with many studies and news reports pointing to a continuing rise in both external and insider threats—in number of incidences, time to contain, and cost implications – it’s essential for us to begin taking immediate steps as a part of the holistic solution. It’s imperative that CISOs advance leadership support in the development and implementation of a two-pronged threat detection and governance program that involves both malicious code review and vulnerability management initiatives. With breaches often costing organizations millions of dollars, there’s no time to waste.

[post_title] => The Need to Prevent Insider Threats, as Revealed by the SolarWinds Cyber Security Breach [post_excerpt] => CISOs must prioritize efforts around preventing insider threats in the supply chain. Read this article to learn how to detect and prevent insider attacks. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => whats-next-and-new-with-netspi-resolve-2 [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:01 [post_modified_gmt] => 2022-12-16 16:51:01 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=21246 [menu_order] => 427 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [53] => WP_Post Object ( [ID] => 21947 [post_author] => 65 [post_date] => 2021-02-08 07:00:48 [post_date_gmt] => 2021-02-08 07:00:48 [post_content] =>

On February 8, 2021, NetSPI Managing Director Nabil Hannan was featured in TechTarget:

Ransomware attack simulations, accessing enterprise logs and pen testing software code are among the best practices cybersecurity pros suggest following the SolarWinds breach.

Forensics teams are still investigating how hackers were able to exploit SolarWinds' patching system to attack numerous high-profile commercial and governmental organizations, including Microsoft and the U.S. Department of Justice, as well as other customers of the security monitoring software vendor. At the same time, experts from a range of security service providers -- including those offering penetration testing, vulnerability scanning and software code reviews -- advise businesses to act now to shore up their own enterprise security.

The SolarWinds breach was first revealed in late 2020 -- although the attacks may have begun in 2019 -- and now includes the discovery of two backdoors created by malware. The first, named Sunburst, has been linked to numerous supply chain infections and nation-state attacks, and the second, named Supernova, is not a supply chain attack, but rather malware that required the exploitation of a vulnerability in the Orion software program recently patched by SolarWinds. U.S. government and cybersecurity experts are still uncovering the damage caused by the two infections.

Security service providers suggest the following list of five lessons learned to help organizations ward off or detect a SolarWinds-type hack. These best practices also lessen the "threat noise" across the enterprise, enabling a company to quickly identify and handle suspicious behavior.

Don't rely on internal developers to test internally developed software

Developers should not have the final say on how secure their code is. They are not security experts, and they might be the ones who inserted malicious code, intentionally or not, according to Nabil Hannan, managing director at pen testing provider NetSPI. "To uncover a SolarWinds type of issue, you have to think differently than a developer would about what you are looking for, including who has access to your systems," he said. "How can a developer determine another developer's true intent for putting code in the system and how it will behave? He can't." Hannan recommended forming a group of trusted executives and senior managers to work with an external testing firm. When developers are done with their reviews or completed updates, the group sends it to the testers to look for malicious code and insider threats. "We examine the source code and binaries, looking at executables and comparing what is published versus what is in the source code," he said. Testers search for backdoors, time bombs, Trojan horses and signature patterns. "If there are differences, we will report back to the group in a discreet way and work with them to mitigate the issues." Hannan said having this practice and these controls in place are helpful when there is a management shakeup, a disgruntled developer leaves or a merger or acquisition is about to take place.

Read the full article here: https://searchsecurity.techtarget.com/feature/5-cybersecurity-lessons-from-the-SolarWinds-breach

[post_title] => TechTarget: 5 cybersecurity lessons from the SolarWinds breach [post_excerpt] => On February 8, 2021, NetSPI Managing Director Nabil Hannan was featured in TechTarget. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => techtarget-5-cybersecurity-lessons-from-the-solarwinds-breach [to_ping] => [pinged] => [post_modified] => 2022-12-16 10:51:02 [post_modified_gmt] => 2022-12-16 16:51:02 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=21280 [menu_order] => 428 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [54] => WP_Post Object ( [ID] => 21946 [post_author] => 65 [post_date] => 2021-01-22 07:00:00 [post_date_gmt] => 2021-01-22 07:00:00 [post_content] => On January 22, 2021, NetSPI Managing Director Nabil Hannan was featured in TechTarget: When you hear the term "pen testing," what do you envision? A web app test done with a dynamic scanning tool? A test done by a human being who's digging deep to replicate what an attacker would do in the real world? What about the term "network pen testing?" An automated discovery of your network infrastructure resulting in a pages-long report on what assets you have? A real-life person examining how your network is architected in order to flesh out vulnerabilities? Depending on who you ask, each of the responses above could be right. And therein lies the conundrum. There's no standardized lexicon in the cybersecurity world and it's causing confusion among independent and organizational security professionals alike. For organizations, the challenge is using the right terminology so they can seek out and price comparable services to meet their security needs, as well as understand exactly what they're consuming from the security professionals they engage. For cybersecurity professionals, the hurdle lies in understanding just what an organization needs and expects to accomplish its security goals. And, if your industry is compliance-focused, regulatory drivers will also determine what type of assessments your company must perform, making it critical that you get your terminology right. Read the full article here: https://searchsecurity.techtarget.com/post/Standardize-cybersecurity-terms-to-get-everyone-correct-service [post_title] => TechTarget: Standardize cybersecurity terms to get everyone correct service [post_excerpt] => On January 22, 2021, NetSPI Managing Director Nabil Hannan was featured in TechTarget. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => techtarget-standardize-cybersecurity-terms-to-get-everyone-correct-service [to_ping] => [pinged] => [post_modified] => 2021-04-14 05:28:11 [post_modified_gmt] => 2021-04-14 05:28:11 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=21127 [menu_order] => 433 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [55] => WP_Post Object ( [ID] => 20831 [post_author] => 65 [post_date] => 2021-01-12 07:00:07 [post_date_gmt] => 2021-01-12 07:00:07 [post_content] =>

Starting, or even refining, a cyber security program can be daunting. Because a security program is as individual as an organization and must be built around business objectives and unique security aspirations, there’s no one-size-fits all solution and the number of tools and services available can overwhelm. The good news is that, if you’re about to embark on a security journey, the following activities will set you on the right path.

Define Your S-SDLC Governance

No matter what security techniques you end up using, you must start by defining your Secure Software Development Lifecycle (S-SDLC) governance security gates and incorporate them into your SDLC. For each gate definition, make sure you collect information needed to determine whether a component passes or fails before the software can advance to the next phase of development. For example, before promoting your application from the coding phase, you might want to do a static analysis scan. If that scan reveals a critical vulnerability, you’ll want to prevent your application from being promoted to the testing phase. Instead, report that vulnerability to the development team, who then must resolve the problem to the degree that the piece of software passes the next static analysis scan without revealing any more critical vulnerabilities. Only then do you advance your application to the testing phase.

For governance rules to be effective, you have to build a collaborative culture within your development organization and communicate and evangelize about these processes. Make sure everyone involved is aware of, and understands, the expectations to which they’re being held.

Secure Design Review

Secure Design Review (SDR) is a broad term with many different definitions. It can refer to high level, pen and paper exercises to see if there are common issues with the application being developed. It can also mean a deep analysis complete with full blown threat models. Or anything in between. Regardless of your approach, SDR allows your organization to catch vulnerabilities at the design level to adopt better security controls. SDR allows organizations to start adopting a culture of security by focusing on developing secure by design frameworks or libraries that create opportunities to efficiently implement re-usable security features as appropriate. A positive outcome? Peace of mind.

Penetration Testing and Security Testing as Part of QA

Penetration testing to assess internal and external infrastructures, often driven (but not exclusively) by governance or compliance regulations, is one of most common activities involved in cyber security programs. Note: It often requires expertise that you might not have inhouse as you get your security efforts underway. Fortunately, there are plenty of firms out there that are really good at it, and outsourcing may be your best option – especially for assets that meet mission-critical risk thresholds. Ultimately, penetration testing’s biggest value for your new security program is that it will reveal just how secure your SDLC is, which you defined in the previous steps.

Security testing is also typically performed by outside experts. However, if you have a group internally who’s already doing some sort of testing – like functional testing or QA testing – it’s easy to introduce basic concepts that allow them to test for vulnerabilities. For example, when your QA testers are building test cases, encourage them to adopt techniques like constantly building edge and boundary test cases. At a bare minimum, this will assess your application from an input validation, and output encoding perspective.

If you’re doing pentesting, look at the results and build test cases based on them into your QA workflow as well. As an example, verbose error messages should be examined. How many times have you tried to log into an app, mistyped the password and received an error message along the lines of: “Your user ID is right, but your password is wrong.” A message like that can give an attacker information they can use to brute force all possible passwords to effectively determine which are valid and which aren’t.

Interactive Application Security Testing (IAST) is gaining popularity quickly and is a rising star amongst application security testing and discovery techniques. Because it is instrumented into running the application on the server side, it can report issues that are truly exploitable, which results in the IAST tool reporting little to no false positives.

Create a Threat and Vulnerability Management Process

To improve your risk posture, it is advisable that organizations create a threat and vulnerability management process. In other words, a process to measure the rate at which you’re identifying vulnerabilities and the rate at which they’re being addressed. Next, create a centralized system to manage the vulnerabilities themselves – and build metrics to be sure you’re getting the right business insights into your program.

Before we go further, let’s clarify what metrics and measurements are, as there can be a lot of confusion around what each term means. Measurement is a fact or number used to quantify something. A metric is usually a combination of measurements, frequently a ratio, that provides business intelligence. For example, “I had three cups of coffee today” is just a measurement. My blood/caffeine ratio, however, would be a metric. The fact that I had three cups of coffee today doesn’t tell me much. But the amount of caffeine in my blood tells me something that might be important.

Read about how CISOs can work with CFOs to identify metrics that are meaningful to leadership.

To take the example a step further, people sometimes will take raw data, such as the number of vulnerabilities found, and use that to measure their success. Wrong! You have to build key performance indicators (KPIs) and key risk indicators (KRIs) that are based on your business risks. Use your KPIs and KRIs to develop metrics that will guide you in your application security journey.

Initially, you might be able to only build metrics on coverage, such as the percentage of your applications portfolio that is currently being tested. Over time you can build more mature metrics to determine things like holistic policy compliance and later, look at effectiveness metrics for things like penetration testing and secure code review. Lastly, when you are heavily focused on remediation and reducing

Develop Application Security Standards

Chances are, you have security policies that you need to adhere to, whether established internally, by regulatory bodies or even customers. It is important to unify them to build application security standards applicable to your business and SDLC practices. Then, enforce them with automation whenever possible. For example, you might want to customize static analysis or dynamic analysis tools so they understand what your standards are. These tools will trigger an alert when a certain security standard isn’t being met.

Various automation tools and techniques are available that can improve the quality and security of the software that you’re implementing, including:

  1. SAST – Static analysis security testing
  2. DAST – Dynamic application security testing
  3. IAST – Interactive application security testing
  4. RASP – Real-time application self protection
  5. SCA – Software composition analysis

For a deeper dive into these tools, check out this Cyber Defense Magazine article, starting on pg. 65.

Identify and Inventory Open Source Risk

Open source code is everywhere. It’s convenient, replicatable and efficient to use. Many developers employ it. With open source code, however, you need to maintain a heightened awareness of possible security risks.

Maintain an inventory of all open code that you’re using throughout your organization. While those components might not currently pose a risk today, or be known to contain vulnerabilities, some type of zero day vulnerability could be discovered on a particular component. The moment that happens, you need to identify: 1) whether you’re using the component that’s vulnerable, and 2) know where you’re using it and whether your software is now exploitable. You’ll also want to track any possible licensing conflicts as early as possible to avoid legal headaches.

The Longest Journey...

… begins with the first step. If the application security journey you’re about to embark on feels like the epic trek of a lifetime, don’t worry. These six security activities will start you on solid footing and help you navigate along the way.

[post_title] => Six Activities to Jump Start Your Application Security Journey [post_excerpt] => Start or refine your application security and pentesting journey with these six best practices from the cybersecurity experts at NetSPI. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => six-cyber-security-activities-jumpstart-security-journey [to_ping] => [pinged] => [post_modified] => 2021-04-14 06:37:15 [post_modified_gmt] => 2021-04-14 06:37:15 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=20831 [menu_order] => 435 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [56] => WP_Post Object ( [ID] => 20678 [post_author] => 65 [post_date] => 2021-01-05 07:00:09 [post_date_gmt] => 2021-01-05 07:00:09 [post_content] =>

Application Security is a crucial component to all software development today. At least, it should be as cyber security concerns continue to grow at the same furious pace as the number of apps out there. However, here at NetSPI, we talk with a lot of software development teams who haven’t yet adopted a security mindset, thereby placing not only their programs at risk of cyber-attacks, but their entire organizations as well.

If you’re fighting resistance within your organization to incorporate security measures into the software development life cycle (SDLC), this blog post is for you. We’re going to set straight four of the most common myths and misconceptions we hear among those who don’t have robust application security processes in place.

Myth #1 – An application security team is optional

On the contrary – an application security team today is a must. Someone within your organization should own the function. The good news is that you don’t need a big team to manage it. In fact, we’ve seen programs that work really well with small teams – even teams composed of just one person, in some cases.

Another must: enable an application security culture and nurture that culture across the entire organization, paying special attention to key stakeholders who contribute to your application development lifecycle. Some companies foster an application security philosophy with a security champions program, where leaders in the software applications organization are nominated to advocate on behalf of the application security team. The beauty of this approach is that you have team members within your software engineering organization who can accelerate and fix vulnerabilities quickly. In many cases, they can help reduce the number of vulnerabilities your applications have in the first place. The best side-effect of this approach is that you start organically evangelizing a culture of application security within your organization.

Myth #2 – My organization is too small to have an application security team

This belief is especially common among startups. As intimated above, no organization is too small to focus on application security, mainly because it isn’t just about finding vulnerabilities. You can start by creating governance processes that define security measures and that guide implementation of a secure SDLC, such as:

  • Introduce technologies at different points during your SDLC to ensure you capture vulnerabilities early, before a hacker or attacker can exploit your software.
  • Integrate security concepts into your software by building application security-specific requirements that become part of your software before a single line of code is even written.
  • Create security use cases (also known as misuse and abuse cases) and build functional requirements that focus on security concepts. Then, make sure that your developers have access to those requirements and implement the software against them.
  • Educate developers on defensive programming techniques to be able to build software that is naturally resilient to attacks.

Myth #3 – Because we love DevOps and we’re an Agile organization, we can’t have an application security team

Organizations that feel this way usually believe that security teams slow things down. However, security doesn’t have to slow you down when you use the right tools and processes at the right times; and a relatively new concept known as DevSecOps can help. DevSecOps is a culture in which security is integrated between the development and operations functions to close the gap between the development, security, and operations teams, three roles which are historically siloed. If these three roles are required to work more collaboratively, a shared responsibility for application security is created, which enables a DevOps and/or an Agile organization to introduce security as a frictionless component of all processes. Ultimately, the objective is to make security-driven decisions and execute security actions at the same scale and speed as development and operations decisions and actions. To succeed with this approach, an organization must adopt a DevSecOps culture.

Myth #4 – Application security teams will slow us down

As mentioned above, application security doesn’t have to be a hinderance. If you’re using best practices and building good quality software, security is an inherent part of that. Most software performs better and is more efficient when it’s developed securely in the first place. When you adopt a security mindset, your SDLC will flow smoothly, enable you to build better software and can even save you money in the long run.

Concerned about your application’s security? Learn more about our application security penetration testing.

Getting started with application security:

Best practice dictates the introduction of appropriate touchpoints throughout each phase of your process.

Education, for example, is a good first step:

  • Educate your product managers and business analyst(s) on common security vulnerabilities and real-world scenarios of how these security vulnerabilities had a severe impact on an organization, so they can help guide security requirements for your software and always be security conscious.
  • Educate developers on defensive programming to make sure they implement software that is naturally resilient against vulnerabilities.
  • Educate your teams who are involved with testing and deployment to detect vulnerabilities using various techniques like manual penetration testing, adversarial simulations or red teaming activities.

Learn more about secure code review and building application security into your software development lifecycle.

Second, during the planning phase, create security requirements, or benchmark your program, so that you can understand how mature your organization’s SDLC is, from a security perspective, and so that you can take educated steps to evolve and elevate it over time.

Third, in the design phase, construct your software so that it is naturally resilient to attacks. When you’re building use cases, be sure to add misuse and abuse cases. An example of a misuse/abuse case would be when an attacker tries to “brute force” all possible usernames and passwords into those fields in a login page. You can address such a case by making the software automatically lock an account after multiple wrong tries. You should also create a velocity or anti-automation check to prevent an automated tool and scripts from brute-forcing its way into compromising your application.

During the coding phase, you can not only educate your coders on writing secure code, you also can employ techniques like static analysis, manual code review, and composition analysis to identify vulnerabilities early in your SDLC.

In the testing phase, you have the opportunity to leverage manual penetration testing, dynamic scanning, and build risk-based test cases based on the misuse and abuse cases defined earlier.

Lastly, in the deployment phase, test your detection controls, perform adversarial simulations and red teaming activities. Consider manual penetration testing or implement technologies like RASP to offer continuous protection of an application even if a perimeter is breached.

Because in today’s world software is everywhere – from refrigerators and coffeemakers to medical equipment and data farms – application security is becoming ever-more complex and increasingly critical. Every software development organization, no matter how large or small, must focus on application security to protect its products, the end users and, ultimately its own organization.

For more information, watch my presentation at the recent Cyber Security Summit or contact us to learn how you can get started on your own application security journey.

[post_title] => Four Application Security Myths – Debunked [post_excerpt] => Application Security is a crucial component to all software development today. At least, it should be as cyber security concerns continue to grow [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => our-application-security-myths-debunked [to_ping] => [pinged] => [post_modified] => 2021-04-14 12:53:06 [post_modified_gmt] => 2021-04-14 12:53:06 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=20678 [menu_order] => 437 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [57] => WP_Post Object ( [ID] => 20687 [post_author] => 65 [post_date] => 2020-12-11 07:00:09 [post_date_gmt] => 2020-12-11 07:00:09 [post_content] => On December 11, NetSPI Managing Director Nabil Hannan was featured in TechTarget: At the end of the day, cybersecurity is a financial issue. Breaches can result in significant financial loss and reputational damage. Consider these statistics:
  • The global average cost of a data breach is $3.86 million, according to the "Cost of a Data Breach Report 2020," with the U.S. having the highest average at $8.64 million.
  • Another report found that insider threats are the most expensive category of attack to resolve, costing an average of $243,101. And this number is increasing.
  • Lastly, in just the first six months of 2020, 3.2 million records were exposed in the 10 biggest breaches – eight of the breaches occurred at medical or healthcare organizations. Healthcare was deemed the costliest industry by the "Cost of a Data Breach Report" with the average cost of a breach reaching $7.13 million.
Now forget those statistics; push them aside. While it's important to understand the financial aftermath of a breach, security teams need to uncover more proactive methods for communicating the value of their investments with organizational leadership to get buy-in (and funding) upfront. However, communicating the return on investment (ROI) of a security program, in which the results are not always tangible, has proven to be a challenge for security leadership. The shift to a more proactive security program assessment can only occur if the chief information security officer (CISO) first has a greater voice at the table in the boardroom. As the individual most responsible for ensuring information assets and technologies are adequately protected, the CISO can serve as a bridge between the highly technical voices in infosec and other C-suite executives who are more financially, operationally or innovation focused. And who among the C-suite can make this shift a reality? The chief financial officer (CFO). CISOs need to establish a stronger relationship with their CFO and financial team to better communicate the value of existing, and future, security investments. Here are three ways – and reasons why – the CISO and CFO should work more closely together. Read the full article here: https://searchsecurity.techtarget.com/post/3-reasons-why-CISOs-should-collaborate-more-with-CFOs [post_title] => TechTarget: 3 reasons why CISOs should collaborate more with CFOs [post_excerpt] => On December 11, NetSPI Managing Director Nabil Hannan was featured in TechTarget. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => techtarget-3-reasons-why-cisos-should-collaborate-more-with-cfos [to_ping] => [pinged] => [post_modified] => 2021-04-14 05:29:27 [post_modified_gmt] => 2021-04-14 05:29:27 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=20687 [menu_order] => 442 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [58] => WP_Post Object ( [ID] => 20601 [post_author] => 53 [post_date] => 2020-12-02 10:31:59 [post_date_gmt] => 2020-12-02 16:31:59 [post_content] =>
Watch Now

This session was originally for the Fall 2020 Cyber Security Digital Summit.

Overview 

Has your organization considered Interactive Application Security Testing (IAST), Runtime Application Self Protection (RASP), and related solutions as part of your security program,? What has your experience been so far? 

Understanding the value provided by different types of vulnerability detection and exploit prevention technologies that are available today is critical to every security organization. This discussion featuring Nabil Hannan, Field CISO at NetSPI and Travis Hoyt, Head of Cybersecurity Technology at TIAA, will focus on IAST and RASP. 

  • What is IAST, and how does it complement Pentesting, Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST)? 
  • What is RASP, and why is it challenging to deploy at scale? 

Tune into the discussion to learn:  

  • The capabilities of new emerging technologies that detect security vulnerabilities in software 
  • The strengths and weaknesses of some of the new techniques 
  • How organizations are using these techniques at scale 
  • Challenges around adding yet another piece of technology to the ecosystem 

Key highlights:

  • 1:40 – IAST
    • 1:56 – What makes IAST so appealing?
    • 6:30 – Security testing as part of the QA testing process
  • 16:35 – RASP
    • 18:44 – How does RASP compare to application firewalls?
    • 24:01 – Why is it challenging to deploy RASP at scale?  
    • 27:01 – How is RASP different from IAST?
    • 27:42 – What are the challenges in adopting new technologies (adding yet another tool into the ecosystem)?

Interactive Application Security Testing (IAST)  

To get started, it’s important to first understand IAST and why it’s important.  

What makes IAST so appealing? 

IAST has powerful technological capabilities to emulate a pentester in a way that isn’t possible with dynamic security testing. This is particularly appealing for organizations that may not necessarily have the resources, from a human capital perspective, to have senior pentesting talent or other individuals with the right skills on staff.

While some individuals in the security space may say that organizations are overly reliant on tools, when capabilities aren’t available internally, then the right tools and automation are necessary to get to where an organization needs to be from a security standpoint. It’s a powerful capability, especially for smaller teams. 

Another way to look at the appeal of IAST is by taking a step back and thinking of IAST as an instrumented solution that has an agent that sits in your application and is providing what can be referred to as security telemetry. One important consideration to remember is, this isn’t necessarily different from technologies that engineering teams are already using for performance telemetry. Rather, it's a similar concept that involves embedding an agent into your running application to understand and determine the inner workings of your system. This provides security telemetry, instead of performance telemetry that other solutions are providing.

Once engineering teams look at IAST this way, they embrace it, because it doesn't necessarily take them out of their normal workflow. Additional overhead or requirements aren’t required from a security perspective because security gets ingrained into the engineering teams’ normal processes in a seamless fashion. As a result, this makes IAST appealing and helps with adoption.  

Security Testing as Part of the QA Testing Process 

Security testing is important to QA testing both for organizations that don’t have robust teams and resources, as well as an augmentation for companies with larger teams and pentesting capabilities. Oftentimes companies with bigger teams also have much larger scopes of applications. While a smaller organization may have 10 or 15 apps, larger firms may have thousands of them.

Leveraging IAST, just like any other type of automation, can free up pentesters and other team members to focus on other more complex issues. For example, at larger companies or those with complex application ecosystems, a lot of times when testing an application, it leads to a siloed view. This approach is taken across all applications. By integrating an interactive application security testing tool as well as other DevOps tooling, then team members can focus on the points in-between to break down silos. For example, understanding the handoffs of how an integration works and identifying weaknesses. As a result, you start to look at your threat model, and you can spend more time focusing on how it works.

In the pentesting space, incorporating IAST early on is a powerful component of the development lifecycle. A lot of effort has been put into embedding security into normal QA processes in the past, such as leveraging dynamic scanning technologies and static analysis technologies. However, a lot of these out-of-the-box tools tend to be noisy and can produce false positives. Asking a QA team to manage and interpret results from these tools becomes challenging because it requires additional education and experience that most QA teams don't have.

A QA team can work well when they’re assigned misuse or abuse cases, or if the organization is building specific security requirements and gives the QA team a specific security test case to execute. But on their own, trying to interpret results from different security tooling and scaling this approach across a QA team is challenging.  

This is why IAST is beneficial to QA teams.  

The false positive rate is significantly lower compared to traditional static and dynamic analysis tools—and because of this—QA teams don’t necessarily need to be security experts. With IAST, the tool can run in the QA process seamlessly. The QA team does their work and the IAST tool reports on security issues.

Runtime Application Self Protection (RASP)  

Runtime Application Self-Protection (RASP) can be challenging to deploy at scale and has distinct differences from IAST.  

Overall, opinions are divided on RASP and the level of adoption varies greatly by organization. One factor that plays into adoption is the size of a company. RASP is typically embedded and needs to be instrumented appropriately, with the right rules in place, during production.

Some RASP models can learn and mature over time, but many organizations are hesitant to adopt RASP because they don’t want technology in place that can make a decision about whether or not something is accurate. For example, if a team makes a code change and the RASP model isn’t updated effectively, it may block functionality and can hinder applications. In general, vendors have addressed this over the years, but the risk of blocked functionality isn’t zero. 

How Does RASP Compare to Web Application Firewalls?  

The primary difference between RASP and a web application firewall is that a web application firewall is sitting outside of an application, not inside the application trying to analyze traffic to determine and block possible attacks.

When it comes to web application firewalls, understanding the two distinct types is important. One type is signature-based, which requires building signatures and patterns that you're looking for in traffic and trying to block them. The other type is behavioral-based, which involves teaching the web application firewall what good behavior looks like in an application in order for the firewall to know how to block malicious behavior. 

While a web application firewall is an effective defense in depth and almost a necessity today, with different types of DDoS and related attacks, it can end up blocking a lot of context with the inner workings of the application.  

When this is the case, RASP is the more beneficial solution, because it can look at fields and values and understand how values are used. For example, if you get a value that has a payload for cross-site scripting, but it's going to be used only for a database query, chances are that's not going to be exploitable.  

On the other hand, it can determine, if you have a payload for SQL injection, and it is going to be used in the database query and immediately block the behavior and also notify you of that behavior. In these instances, RASP is the more effective solution, as it allows defense in depth—and no organization can trust only one layer of defense today. 

Challenges to Deploying RASP at Scale 

Early on, when RASP products weren’t as mature, overall user experiences weren’t positive, and RASP impacted application performance. In addition, while RASP has improved and will continue to evolve over time, the efforts it takes to train RASP models are complicated, especially on mission-critical platforms. As a result, justifying RASP tools and encouraging adoption can be difficult for some teams.  

 
RASP technology is improving and making the barrier to entry or barrier to usage much less significant. RASP vendors are getting better at making it easy for the technology to feel more like plug-and-play, rather than needing significant customization. As RASP technology improves, what needs to change now is the mindset of teams who are worried about an unknown entity as part of their application that could potentially modify the behavior of the application, especially for mission-critical applications.

Many large organizations, such as financial services businesses are concerned because every moment of downtime leads to lost money. If the RASP technology blocks app functionality or other regular behavior that it shouldn't be, this can have a drastic impact on the business. While RASP technologies may not see rapid deployment at scale because of valid concerns, more companies will start using RASP tools in small areas of the business. 

IAST versus RASP 

At a high level, IAST focuses on early lifecycle instrumentation and testing of the application during the development cycle for the most part. On the other hand, RASP offers runtime, in-production defense.  

Challenges of Adopting New Technologies 

One of the challenges with adopting new technologies is history. When organizations have had a negative experience with security tools in the past, they are hesitant to adopt new tools because of the assumed negative experiences. Developers, engineering managers, and organizations are frustrated with having to manage many security tools for different purposes. 

Initially, there was a push over the past decade to shift security efforts to earlier in the software development lifecycle (SDLC). The only way to shift earlier in the SDLC with technology was using static analysis tools because static tools can be run from the moment you're writing your first piece of code. However, static tools end up being more noisy, disruptive, and prone to false positives, versus newer technologies. Static tools and other early technologies that teams tried to implement left a negative impression for a long time, which continues to make them skeptical of new technologies. 

Another challenge is that as organizations invest in more security testing tools, they think they’re improving from a security perspective, but don’t focus on remediation. What many organizations are missing is an effective threat and vulnerability management solution and a strategy to take results from testing tools and consolidate and resolve them.  

Improve Your Security Program with NetSPI 

Whether your organization uses IAST, RASP, or a combination of any other technology tools to identify vulnerabilities in your applications, having a strategy in place to remediate vulnerabilities is critical.  

NetSPI’s proactive security platform, can help your team improve vulnerability management, achieve penetration testing efficiencies, leverage security automation, understand your risk, scale your security program, and manage your ever-expanding attack surface.  

Schedule a demo to learn more about how NetSPI can help you effectively improve and scale your security program.

[wonderplugin_video iframe="https://youtu.be/EsVGocu5YDA" lightbox=0 lightboxsize=1 lightboxwidth=1200 lightboxheight=674.999999999999916 autoopen=0 autoopendelay=0 autoclose=0 lightboxtitle="" lightboxgroup="" lightboxshownavigation=0 showimage="" lightboxoptions="" videowidth=1200 videoheight=674.999999999999916 keepaspectratio=1 autoplay=0 loop=0 videocss="position:relative;display:block;background-color:#000;overflow:hidden;max-width:100%;margin:0 auto;" playbutton="https://www.netspi.com/wp-content/plugins/wonderplugin-video-embed/engine/playvideo-64-64-0.png"]

[post_title] => The Adoption of Emerging AppSec Technology [post_excerpt] => Watch our on-demand session from Cyber Security Digital Summit, "The Adoption of Emerging AppSec Technology: A Possible Shift to the Right." [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => the-adoption-of-emerging-appsec-technology-cyber-security-digital-summit [to_ping] => [pinged] => [post_modified] => 2024-01-15 14:00:06 [post_modified_gmt] => 2024-01-15 20:00:06 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=20601 [menu_order] => 65 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [59] => WP_Post Object ( [ID] => 20355 [post_author] => 65 [post_date] => 2020-11-17 07:00:01 [post_date_gmt] => 2020-11-17 07:00:01 [post_content] =>

In a recent episode of Agent of Influence, I talked with Jeff Williams, a celebrity in the cyber security space. Jeff is the co-founder and Chief Technology Officer of Contrast Security where he developed IAST. He was also previously the co-founder and CEO of Aspects Security and was a founder and major contributor to OWASP where he created the OWASP Top 10 – among other things.

I wanted to share some of his insights in a blog post, but you can also listen to our interview here, on Spotify, Apple Music or wherever you listen to podcasts.

Critical Responsibilities of Cyber Security Consultants

Jeff believes that critical to a career in cyber security is the knowledge of security defenses and security vulnerabilities. You actually have to learn how defenses are supposed to work. Your job as a security consultant in a lot of respects is to make sure those defenses are in place and that they're working the way they're supposed to. Therefore, understanding how they're supposed to work is critical. Jeff sees a lot of consultants who read a document about how a security defense is supposed to work, and they assume that that's how it does work. But that’s often not the case.

The other piece you really have to understand is how vulnerabilities work. And not just in theory – you actually have to work through them, exploit them, and learn how they work. Jeff believes that if you know about these vulnerabilities only in theory, you know nothing about them. You really have to dig in and make sure they work.

You can use things like WebGoat, which Jeff created, to start to understand them, but you should go back and recreate them. And it's not going to work the first time, you're going to have to experiment around and figure out how to make it work – which is part of the job.

Effectively Communicating Vulnerability Findings

Learning how to write up vulnerabilities is almost as important as being able to find them. It’s really important to be able to communicate your findings and get people to take action. Jeff said he’s read a number of vulnerability write ups that are so bad because they're too technical and don't describe the risk, especially the risk from a business context.

Ultimately your work goes to waste if you can't effectively communicate with others what you found and the importance of what you found.

You can read more in this article Jeff wrote on LinkedIn.

The Necessity and Benefits of IAST (Interactive Application Security Testing)

Jeff was having trouble getting their customers to succeed in their application security programs. They were getting some results and fixing some vulnerabilities, but it was a lot of work to get there. He wrote a paper a number of years ago called Enterprise Java Rootkits, and the question was: what could a malicious developer do inside a major financial enterprise? Everything in that paper is still valid today – and it's terrifying. One of the techniques that he looked at was instrumentation, and dynamically instrumenting an app from within that same app.

This paper got Jeff thinking about instrumentation and if it could be used for good. It struck him that this was a way of getting inside the running application and watching it run. He realized he could watch a SQL injection vulnerability from soup to nuts. He could see the data come in, track that data through the application, see it get combined into a SQL query, see that query then get sent to the database, and check back on that path to see if the data went through the right defenses.

If you see that path, if you see data come in, and go into a query without being escaped or parameterized, then that's pretty good evidence that you've got a SQL injection vulnerability, so he started playing around with it and tested it on WebGoat.

He shared about the first time he found the SQL injection in WebGoat without doing anything other than adding the agent and using the application normally. He watched the log spitting up, and then saw this thing that said: SQL injection detected. That magic has stayed with him to this day.

It's amazing to watch instrumentation work. It's like finding all this fantastic information out of your application without any extra work.

IAST is also seamless from a development perspective. It happens in the background and in real time. You don't have to have a security background or security awareness to be able to do this.

Noisy Cricket: Strengths and Weaknesses of Static Analysis and IAST

Jeff shared that OWASP decided they didn't really know what static tools were good at and bad at, so they wanted to measure it. To do that, they created this huge test suite, almost 3,000 test cases, half of which are false positives and half of which are true positives. Then they run a static tool against it, get the report from the static tool, feed it into the benchmark, and it will score the report and create charts to show the strengths and weaknesses.

It's a low bar; none of these tests are particularly difficult. But what is surprising is how poorly the static tools do on things like data flow problems and all the injections (including command injection, SQL injection, XSS, LDAP, etc.). In response to that, the static vendors started changing their products to do better against the benchmark, which was one of the intentions of the benchmark project: set a bar so that products could get better.

Jeff noted, however, that the strategy the static tools chose was to not miss any true vulnerabilities, but basically not care about false positives. As a result, the static tools increased their identification of true positives, but at the same time, added false positives.

In response to this, Jeff wrote a tool called Noisy Cricket that finds all the true positives without caring about false positives. Basically, it says any place you use SQL, that's SQL injection, any place you use encryption, that's a weak encryption. It reports all the results. And when you look at the results of Noisy Cricket, they're not that different from what the static vendors are producing. It was kind of a joke, but also demonstrates that finding all the true positives without caring about false positives provides zero value. The only value happens when you find true positives with low false positives. That's how you measure the value and that’s how the benchmark project scores tools.

Jeff believes there has to be a balance and static tools have never been able to improve in that direction. They can only bias their findings towards finding true positives or towards only reporting true vulnerabilities.

IAST is a great solution because of the nature of the analysis and the fact that it produces results that are not very noisy with low rates of false positives. Meanwhile, static analysis by nature, out of the box is extremely noisy and shows a lot of false positives, but there is the opportunity to fine tune and customize your static analysis capability. It can be a lot of work to get it to a point where you reduce the false positives to an acceptable rate, but there is value in both. The low false positive rate is one of the reasons though that IAST really shines against static analysis techniques.

However, for certain use cases, static analysis is exactly the right tool. For example, if you're a security researcher, and you're tasked with finding new and interesting kinds of vulnerabilities, static can be a real powerful tool. In addition, if you get good at writing custom static rules, you can search your code for things that are custom to your code.


To listen to the full podcast, click here, or you can find Agent of Influence on Spotify, Apple Music, or wherever you listen to podcasts.

[post_title] => The Power of Instrumentation to Automate Components of Vulnerability Testing – from the Creator of IAST [post_excerpt] => In a recent episode of Agent of Influence, I talked with Jeff Williams, a celebrity in the cyber security space. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => the-power-of-instrumentation-to-automate-components-of-vulnerability-testing-from-the-creator-of-iast [to_ping] => [pinged] => [post_modified] => 2023-06-22 18:33:37 [post_modified_gmt] => 2023-06-22 23:33:37 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=20355 [menu_order] => 453 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [60] => WP_Post Object ( [ID] => 20195 [post_author] => 53 [post_date] => 2020-10-26 10:38:20 [post_date_gmt] => 2020-10-26 15:38:20 [post_content] =>

This session was originally for the Cyber Security Summit.

In order for an organization to have a successful Application Security Program, there needs to be a centralized governing Application Security team that’s responsible for Application Security efforts. In practice, we hear many reasons why organizations struggle with application security, and here are four of the most common myths that need to be dispelled:

  1. An Application Security Team is Optional
  2. My Organization is Too Small to Have an Application Security Team
  3. I Cannot Have an Application Security Team Because We Are a DevOps/Agile/Special Snowflake Shop
  4. An Application Security Team will Hinder Our Ability to Deliver/Conduct Business

This session will cover taking a strategic approach to application security.

[post_title] => Getting Started on Application Security [post_excerpt] => Watch our webinar from Cyber Security Summit, "Getting Started on Application Security," by NetSPI's Managing Director, Nabil Hannan, on-demand now. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => getting-started-on-application-security-cyber-security-summit [to_ping] => [pinged] => [post_modified] => 2023-09-20 11:40:09 [post_modified_gmt] => 2023-09-20 16:40:09 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=20195 [menu_order] => 68 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [61] => WP_Post Object ( [ID] => 19486 [post_author] => 65 [post_date] => 2020-10-13 07:00:51 [post_date_gmt] => 2020-10-13 07:00:51 [post_content] =>

In a recent episode of Agent of Influence, I talked with John Markh of the PCI Council. John has over 15 years of experience in information security, encompassing compliance, threat and risk management, security assessments, digital forensics, application security, and emerging technologies such as AI, machine learning, IoT, and blockchain. John currently works for the PCI Council and his role includes developing and evolving standards for the emerging mobile payments technologies, along with technical contributions and effort surrounding penetration testing, secure application, secure application lifecycle, and emerging technologies such as mobile payment technologies, cloud, and IoT.

I wanted to share some of his insights in a blog post, but you can also listen to our interview here, on Spotify, Apple Music, or wherever you listen to podcasts.

About the PCI (Payment Card Industry) Council

The PCI Council was established in 2009 by several payment brands to create a single framework that would be acceptable to all those payment brands to secure payment or account data of the merchants and service providers in that ecosystem. Since then, the PCI Council has created many additional standards that not only cover the operational environment, but also device security standards such as the PCI PTS standard and security standards that cover hardware security modules and point to point encryption solutions. The Council is in the process of developing security standards for various emerging payment technologies. The mission of the council is to allow secure payment processing by all stakeholders.

Over the years, a number of the security requirements created by the Council have been enhanced to ensure the standard does not become obsolete but keeps up with the current threats to the payment card industry as a whole. For example, PCI DSS, which was the very first standard created and published by the Council, has evolved and had numerous iterations since its publication to account for evolving threats.

The standards built by the PCI Council are built in a way to address threats that directly impact the payment ecosystem. They are not all-encompassing standards. For example, organizations that operate national infrastructure or electricity grids will find some security requirements that will be applicable to them, but the standards will not address all the risks that are applicable to them. The PCI Council standard is focused on the payment ecosystem.

The Evolution of the Payment Card Industry

John shared how people want convenience – not just in payment, but in every aspect of their life. They want convenience and security. So, payments will evolve to accommodate that.

Even today, there are stores where you put items you want to purchase in your shopping bag and you walk out. Automation, artificial intelligence, machine vision, and biometric systems that are installed in that store will identify the products you have put in your bag and deduct the money from your pre-registered account completely seamlessly.

There are also pilot stores in Asia where you still have to check out at the grocery store, but to pay, you just look at a scanner, which identifies you through iris scan to verify your identity, and then payment is process from a pre-registered account.

Many appliances are also becoming connected to the internet, so it is possible that in the future, a refrigerator will identify that you run out of milk, purchase the milk to be delivered to you, and perform the payment on your behalf. You could soon wake up with a fresh gallon of milk on your doorstep that was ordered by your refrigerator.

And of course, mobile is everywhere. More and more people have smartphones – and smartwatches – and with that comes the convenience of paying using your device. Paying by smart device is way simpler and in these times of COVID-19, it’s also contactless. I think we will see more and more technologies that allow this type of payment. It will still be backed by a credit card behind the scenes, but the form factor of your rectangular plastic will shift to other form factors that are more convenient and seamless.

There are also “smart rings” that can perform biometric authentication of the wearer of the ring. You can load payment cards and transit system cards into the ring, for example. So, when you want to pay or take the train, you just tap your ring to the NFC-enabled reading device, and you're done.

Convenience will drive innovation. Innovation will have to adapt to meet security standards and it will also drive new security standards to ensure that the emerging technologies are secure.

Innovation and Privacy

In order to have seamless payments, the system still needs some way to validate who you are. If you use a chip and pin enabled card, you authenticate yourself by entering a pin, which is a manual process. But John noted, it's far more seamless to use iris scans, but to do that, you need to surrender something of yours to the system so the system can identify that you are you.

Right now, the standards are focusing on protecting account data, but maybe in the future, there will be a merge between standards that focus on protecting account data and standards that protect biometric data.

People have several characteristics that identify us for the duration of our lifetime since they don't change much, including fingerprints and iris scans. It's difficult to say whether a choice of fingerprint or iris scan is the right choice for consumer authentication or not. At the end of the day, the payment system needs to authenticate you. If the system is using characteristics that cannot be changed, then it also needs to have additional inputs into making sure that it's not a fraudulent transaction.

For example, payment authentication could be a combination of your fingerprint and the mobile device you're using. If it is a known mobile device that belongs to you, the system could accept the transaction that was authenticated by your fingerprint plus additional information collected from your device, such as the fact that it belongs to you and there is no known malware on the device. If you were using your fingerprint on a new device, the system could identify that the fingerprints match, but recognize it's a new device or the device might have some suspicious software on it, in which case the system will ask you to enter your PIN or to provide additional authentication. It will be a more elaborate system that takes numerous characteristics of the transaction and its environment into account before the transaction is processed.

Challenges of Making the Phone a Point of Sale (POS)

One area of focus for the PCI Council are mobile payment platforms. As John said, business owners want to be able to install an app on mobile devices and be able to take payments through that – creating an instant point of sale. However, the fact that the phone is not controlled by an enterprise, and people can install a variety of applications on their phones (some of which might be malware) puts tremendous risk on the entire payment processing system.

While this enables business owners to sell to more people, especially those who don’t have cash and only have credit cards or smart devices, it also creates an additional system for potential fraud.

John said the PCI Council is focused on a way to make mobile payment platforms more secure. As such, the Council has already published two standards.

  • The Software-based PIN Entry on COTS (SPoC) standard enables solution providers to develop an app along with a small hardware dongle. The purpose of the hardware dongle is to read card information while the phone becomes a point of sale and device for consumers to enter their pin to authenticate consumers.
  • The second standard the PCI Council has released is Contactless Payments on COTS (CPoC™). In this case, it’s just an application that the merchant can download to their phone that would make sure the phone is reasonably secure by performing various attestations of the phone and application, and allow merchants to instantly transform their phone into a point of sale. In some emerging markets, there is no payment infrastructure that exists where you can walk into a bank and get a merchant account, or it may take a very long time. With the mobile payment technologies, you can basically become a merchant immediately.

As I have personally seen, having the ability to make financial transactions in parts of the world that don't have a lot of infrastructure through mobile devices has dramatically changed people's livelihood. And we need to make sure that it’s being done securely.

To listen to the full podcast, click here, or you can find Agent of Influence on Spotify, Apple Music, or wherever you listen to podcasts.

[post_title] => The Payment Card Industry: Innovation, Security Challenges, and Explosive Growth [post_excerpt] => In a recent episode of Agent of Influence, Nabil Hannan talked with John Markh of the PCI Council. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => payment-card-industry-innovation-security-challenges-explosive-growth [to_ping] => [pinged] => [post_modified] => 2021-11-15 11:19:52 [post_modified_gmt] => 2021-11-15 17:19:52 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=19486 [menu_order] => 462 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [62] => WP_Post Object ( [ID] => 19891 [post_author] => 65 [post_date] => 2020-10-07 07:00:50 [post_date_gmt] => 2020-10-07 07:00:50 [post_content] => On October 7, NetSPI Managing Director Nabil Hannan and Product Manager Jake Reynolds were featured in Cyber Defense Magazine: With Continuous Integration/Continuous Deployment (CI/CD) increasingly becoming the backbone of the modern DevOps environment, it's more important than ever for engineering and security teams to detect and address vulnerabilities early in the fast-paced software development life cycle (SDLC) process. This is particularly true at a time where the rate of deployment for telehealth software is growing exponentially, the usage of cloud-based software and solutions is high due to the shift to remote work, contact tracing software programs bring up privacy and security concerns, and software and applications are being used in nearly everything we do today. As such, there is an ever-increasing need for organizations to take another look at their application security (AppSec) strategies to ensure applications are not left vulnerable to cyberattacks. Read the full article for three steps to get started – starting on page 65 of the digital magazine here. [post_title] => Cyber Defense Magazine: 3 Steps to Reimagine Your AppSec Program [post_excerpt] => On October 7, NetSPI Managing Director Nabil Hannan and Product Manager Jake Reynolds were featured in Cyber Defense Magazine. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => cyber-defense-magazine-3-steps-to-reimagine-your-appsec-program [to_ping] => [pinged] => [post_modified] => 2021-05-04 17:08:43 [post_modified_gmt] => 2021-05-04 17:08:43 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=19891 [menu_order] => 463 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [63] => WP_Post Object ( [ID] => 18888 [post_author] => 65 [post_date] => 2020-10-06 07:00:26 [post_date_gmt] => 2020-10-06 07:00:26 [post_content] =>

In a recent episode of Agent of Influence, I talked with Cassio Goldschmidt, Head of Information Security at ServiceTitan about the evolution of security frameworks used to develop software, including key factors that may affect one company’s approach to building software versus another. Cassio is an internationally recognized information security leader with a strong background in both product and program level security.

I wanted to share some of his insights in a blog post, but you can also listen to our interview here, on Spotify, Apple Music or wherever you listen to podcasts.

Key Considerations When Developing Software

As Goldschmidt noted, one of the first security frameworks that was highly publicized was Microsoft STL, and a lot of security practitioners thought it was the way to develop software, and it was a one size fits all type of environment. But that is definitely not the case.

Goldschmidt said that when SAFECode (Software Assurance Forum for Excellence and Code), a not for profit was created, it was a place to discuss how to develop secure code and what the development lifecycle should be among those companies and at large. But – different types of software and environments require different approaches, and will be affected by a variety of factors at each business, including:

  1. Type of Application: Developing an application that is internet facing or just internet connected, or software for ATM machines, will influence the kind of defense mechanisms you need and how you should actually think about the code you’re developing.
  2. Compliance Rules: If your organization has to abide by specific compliance obligations, such as PCI, for example, they will in some ways dictate what you have to do. Like it or not, you will need to follow some steps, including looking at the OWASP Top 10 and make sure you are free of any cross-site scripting or SQL injections.
  3. The Platform: The architecture for phones and the security controls you have for memory management are very different from a PC, or what you have in the data center, where people will not be able to actually reverse engineer things. It's something you have to take into consideration when you are deciding how you are going to review your code and the risk that it represents.
  4. Programming Language: Still today a lot of software is developed using C++. Depending on the language you use, you may not have the proper support for cross site scripting, so you have to actually make sure that you're doing something to compensate for the flaws of the language.
  5. Risk Profile: Each business has its own risk profile and the kind of attacks they are willing to endure. For example, DDoS could be a big problem for some companies versus others, and for some companies, even if they have a data breach, it might not matter as much as for other companies depending on the type of business. For example, if you’re in the TV streaming business and a single episode of Game of Thrones leaks, it likely won’t have a big impact, but if you’re in the movie business and one of your movies leaks, then that will likely affect revenue for that movie.
  6. Budget: Microsoft, Google, and other companies with large budgets have employee positions that don't exist anywhere else. For example, when Goldschmidt was at Symantec, they had a threat research lab, which is a luxury. Start-ups and many other companies might not have this and might need to use augmented security options.
  7. Company Culture: The maturity of the culture of the company also matters quite a bit as well. Security is not just a one stop activity that you can do at a given time, but something that ends up becoming part of your culture.

Today, there are a lot of tools and resources in the market such as Agile Security by O’Reilly that will tell you how to do things in a way that really fit the new models that people are using for developing code.

Security Software Versus Software Security

Security software is the software used to defend your computer, such as antivirus, firewalls, IDS, and IPS. These are really important, but that doesn’t mean they are necessarily secure software or that they were actually developed with defense programming in mind. Meanwhile, secure software is software developed to resist malicious attacks.

Goldschmidt said he often hears that people who make security software don't necessarily make secure software. In his experience though, security software is so heavily scrutinized that it eventually becomes secure software. For example, antivirus software is a great target for hackers because if an attacker can get in and disable that antivirus, they can ultimately control the system. So, from his experience, security software does tend to become more secure, although it’s not necessarily true all the time.

One inherent benefit I’ve noticed for companies developing security software is that they’re in the business of security, so the engineers and developers they’re hiring are already very savvy when it comes to understanding security implications. Thus, they tend to focus on making sure at least some of the most common and basic issues are covered by default, and they're not going to fall prey to basic issues.

If an individual doesn’t have this experience when they join a company developing security software, it becomes part of their exposure and experience since they are spending so much time learning about viruses, malware, vulnerabilities, and more. They inherently learn this as part of their day to day – it’s almost osmosis from being around other developers who are constantly thinking about it.

One of my mentors described the difference between security software and secure software to me this way: Security software is software that's going to protect you as the end user from getting breached. Software security is making sure that your developers are developing the software in a manner that the software is going to behave when an attacker is trying to make it misbehave.

Goldschmidt and I also spent time discussing the cyber security of the Brazilian elections. You can listen to the podcast here to learn more.

[post_title] => The Evolution of Security Frameworks and Key Factors that Affect Software Development [post_excerpt] => In a recent episode of Agent of Influence, I talked with Cassio Goldschmidt, Head of Information Security at ServiceTitan about the evolution of security frameworks [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => the-evolution-of-security-frameworks-and-key-factors-that-affect-software-development [to_ping] => [pinged] => [post_modified] => 2021-11-15 11:20:26 [post_modified_gmt] => 2021-11-15 17:20:26 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=18888 [menu_order] => 464 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [64] => WP_Post Object ( [ID] => 19884 [post_author] => 65 [post_date] => 2020-10-01 07:00:57 [post_date_gmt] => 2020-10-01 07:00:57 [post_content] => On October 1, NetSPI Managing Director Nabil Hannan was featured in TechTarget: During the Black Hat 2020 virtual conference, keynote speaker Matt Blaze analyzed the security weaknesses in our current voting process and urged the infosec community – namely pentesters – and election commissions to work together. His point: Testers can play an invaluable role in securing the voting process as their methodology of exploring and identifying every possible option for exploitation and simulating crisis scenarios is the perfect complement to shore up possible vulnerabilities and security gaps. Read the full article here. [post_title] => TechTarget: 3 common election security vulnerabilities pros should know [post_excerpt] => On October 1, NetSPI Managing Director Nabil Hannan was featured in TechTarget. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => techtarget-3-common-election-security-vulnerabilities-pros-should-know [to_ping] => [pinged] => [post_modified] => 2021-04-14 05:30:28 [post_modified_gmt] => 2021-04-14 05:30:28 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=19884 [menu_order] => 465 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [65] => WP_Post Object ( [ID] => 19349 [post_author] => 65 [post_date] => 2020-09-29 07:00:08 [post_date_gmt] => 2020-09-29 07:00:08 [post_content] =>

In a recent episode of Agent of Influence, I talked with Miles Edmundson, a 30-year veteran in the IT and Information Security space. Miles started as a security consultant, was Carlson Company's first global Information Security Manager, worked for the largest crop insurance company in the world, and served as both the CISO for Ceridian as well as the US CISO for Equinity. His last 12 to 14 years have been in the financial services industry. I wanted to share some of his insights in a blog post, but you can also listen to our interview here, on Spotify, Apple Music, or wherever you listen to podcasts.

“Exploring” the Network Neighborhood

To start, Miles shared an interesting story about how he first stumbled into and became interested in cyber security.

He was curious about how networks worked and saw an icon on his desktop that said, “network neighborhood.” He clicked on that and it took a while to populate, but he started to see over 2500 different systems. As he was looking at them, he realized he was seeing the entire client server system for all of Weyerhaeuser, his employer at the time. It became clear to him that there was a consistent naming convention by location, job title, etc., and so, within about 30 minutes, he was able to find the CFO’s machine and access sensitive information, including executive salaries. He reported the finding to their IT team, but this was the beginning of his career in cyber security.

Miles shared this as a lesson to security teams everywhere that exposing sensitive information doesn't always require having a very high degree of skill. There's a misconception that you have to be super skilled to break into systems, but in many cases, there are simple misconfigurations that can cause a lot of these problems and don’t require a lot of skill for someone to break.

Where to Focus When Starting a New Senior Level Position

In the early 2000s, Miles made the transition from consulting to being a practitioner, first joining Carlson Company as the Global Information Security Manager. He was the only person on this team in a brand new role and his budget the first year was $100k, which was already earmarked for a specific project. He was at Carlson for three years and by the time he left, the department budget had increased to $3.5M.

I’m always curious to ask CISOs and senior cyber security leaders about how they start in a role and prioritize areas of focus. Miles has two key areas of focus when he starts new senior level positions, which are obviously dependent on audit findings, regulatory issues, number of employees, budget, and more:

  1. He always wants to see org charts to know who’s who and how to reach out to different people so he can start trying to build relationships with people.
  2. He also wants to see any audit reports or regulatory reports to understand the underlying issues the organization needed to focus on.

Keys to Relationship Building

Relationship building is extremely important, not only for your personal success, but also the success of your team and entire company.

Miles shared a story from the book, Good to Great by Jim Collins about people who are excellent in their field. One of the people highlighted was a hotel housekeeper, who when interviewed, didn't say she was a housekeeping person at a hotel chain, but rather that she was a representative of her company, and she wanted to ensure that people were having a wonderful time at her facility – and she was doing all she could to make that happen.

When Miles was asked what he did at Carlson Company, he would often say that he helped promote world understanding, because Carlson was a leading player in international travel and he thought it was critically important for people to know that the world is much bigger than his local area.

Miles also cultivated relationships by asking questions – and listening to the answers. He didn't tell.

He was very conscious to be a good representative of his organization, his company, his state, and his country.

Biggest Challenge Facing CISOs Today

Keeping up.

Miles believes the biggest challenge facing CISOs is simply keeping up with all the requirements. In many respects, the role is responsible for juggling a number of different items all at the same time, and receiving constant requests from regulators, compliance teams, auditors, and customers. And CISOs have to meet these requests all while being constrained by budgets, personnel, talent, and more.

In addition, CISOs are effectively on call 24/7/365.

Advice for CISOs

Over the years, Miles has subscribed to a couple quotes that he shared that could be good advice for many things.

The first was from Teddy Roosevelt, President of the United States from 1901 to 1909, and he said, “Do what you can where you are with what you have.” Miles noted that you can only do so much with what you have – and so, do that.

The next quote is from Winston Churchill during World War II, and the paraphrased quote is, “Never, never, never give up.” This served Miles well in his career and he passed it along as advise to senior leaders.

To listen to the full podcast, click here, or you can find Agent of Influence on Spotify, Apple Music, or wherever you listen to podcasts.

[post_title] => The Biggest Challenge Facing CISOs Today – and the Key to Winning [post_excerpt] => In a recent episode of Agent of Influence, Nabil Hannan talked with Miles Edmundson, a 30-year veteran in the IT and Information Security space. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => biggest-challenge-facing-cisos-today-key-to-winning [to_ping] => [pinged] => [post_modified] => 2021-04-14 10:06:39 [post_modified_gmt] => 2021-04-14 10:06:39 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=19349 [menu_order] => 466 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [66] => WP_Post Object ( [ID] => 19786 [post_author] => 53 [post_date] => 2020-09-10 14:52:02 [post_date_gmt] => 2020-09-10 19:52:02 [post_content] =>
Watch Now

This session was originally shown at Black Hat USA 2020.

Overview 

A successful Application Security Program requires a happy marriage between people, processes, and technology. 

In this on-demand webinar, NetSPI Field CISO Nabil Hannan and Head of Emerging Technology Jake Reynolds explore:  

  • How leading organizations use different discovery techniques as part of their AppSec program 
  • Strengths and weaknesses of common AppSec vulnerability discovery technologies 
  • Techniques that make security frictionless for your developers as they embrace a DevSecOps culture 
  • How functional your application security program can be with a “makeover” to: 
    • Enhance your reporting to empower leadership to optimize your AppSec program 
    • Improve your vulnerability ingestion, correlation, and enrichment  
    • Increase your speed to remediation 

Key highlights: 

  • 0:35 – Pre-renovation 
  • 1:28 – Application vulnerability discovery techniques  
  • 7:30 – Post-renovation 
  • 10:50 – NetSPI’s platform demo 

Pre-Renovation  

If you’re considering giving your application security program an extreme makeover, you’ll likely notice some telltale signs that your AppSec program is in need of renovation.

Some to signs include:

  • New and immature AppSec programs are reactive 
  • Security testing is performed ad-hoc 
  • Vulnerabilities and remediation efforts aren’t managed centrally 
  • Organizations face challenges conveying the value of AppSec efforts and investment  

Application Vulnerability Discovery Techniques  

When it comes to application vulnerability discovery techniques, a few traditional techniques are more commonly used while emerging ones are gaining adoption and popularity. Traditional techniques include: 

  • Static application security testing (SAST) and manual code review 
  • Dynamic application security testing (DAST) and manual pentesting 
  • Manual inventory of OSS usage  

Emerging techniques include:  

  • Interactive application security testing (IAST) 
  • Real-time application self-protection (RASP) 
  • Software composition analysis (SCA) 

Common Discovery Tool Types 

As you decide how you want to renovate your AppSec program, there are many different options to consider, including the following:

  • SAST and DAST
    • Challenging to deploy and manage in large organizations 
    • Noisy (high false positive rates out of the box)  
    • Long scan times 
    • Quality of results varies significantly between SAST and DAST products 
    • Security expertise required to interpret results and remove false positives 
  • Interactive application security testing (IAST)
    • Most popular IAST products are passive 
    • Quality of results driven by test automation and QA test coverage 
    • Easy to integrate into CI/CD pipelines 
    • Seamless to the development organization 
    • Low false positive rates
  • Real-time self-protection (RASP) 
    • Challenging to deploy and manage in large organizations 
    • The level of effort to deploy is almost the same as fixing vulnerabilities  
    • Provides protection from common vulnerabilities getting exploited
  • Software composition analysis (SCA)  
    • Identify known security vulnerabilities in components being used 
    • Doesn’t identify new vulnerabilities in source code 
    • Challenging to deploy at scale at large organizations
    • Create a bill of materials (BOM) of Open Source components 

Post-Renovation 

Once you’ve determined what’s working with your application security program and which parts need a makeover, it’s important to take the following into consideration:

  • Build a centralized system of record to manage all AppSec activities 
  • Strategize an effective approach to AppSec with multiple touchpoints 
  • Integrate technology into processes as appropriate 
  • Enable automation to assign people to strategic tasks/activities  

Next-Gen AppSec Infrastructure  

Your next-generation application security infrastructure should be built around all your testing initiatives, including SAST, DAST, IAST, RASP, and SCA. Under each type of testing activity, the infrastructure includes project management, testing, ticketing, and reporting, and remediation.  

In the middle of the infrastructure is a rock-solid threat and vulnerability management platform. NetSPI’s Resolve™ platform is built to be the warehouse of all your data and is capable of managing all of your S-SDLC in the product.  

NetSPI Can Help Make Over Your Application Security Program 

As attack surfaces continue to expand and evolve, and threat actors become more sophisticated, your AppSec program has room for improvement. Read our in-depth whitepaper, Getting Started on Your Application Security Program, to begin your journey to mature your application security program and reduce risk.

With NetSPI’s offensive security platform, your organization can improve vulnerability management, achieve penetration testing efficiencies, leverage security automation, understand your risk, scale your security program, and manage your attack surface. Learn more – schedule a demo today.

[wonderplugin_video iframe="https://youtu.be/aojAelxBXDc" lightbox=0 lightboxsize=1 lightboxwidth=1200 lightboxheight=674.999999999999916 autoopen=0 autoopendelay=0 autoclose=0 lightboxtitle="" lightboxgroup="" lightboxshownavigation=0 showimage="" lightboxoptions="" videowidth=1200 videoheight=674.999999999999916 keepaspectratio=1 autoplay=0 loop=0 videocss="position:relative;display:block;background-color:#000;overflow:hidden;max-width:100%;margin:0 auto;" playbutton="https://www.netspi.com/wp-content/plugins/wonderplugin-video-embed/engine/playvideo-64-64-0.png"]

[post_title] => Extreme Makeover AppSec Edition [post_excerpt] => Did you miss Black Hat USA 2020? Watch our webinar, "Extreme Makeover: AppSec Edition," by NetSPI's Managing Director, Nabil Hannan, and Product Manager, Jake Reynolds, on-demand now. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => extreme-makeover-appsec-edition-black-hat-2020 [to_ping] => [pinged] => [post_modified] => 2023-07-12 13:05:46 [post_modified_gmt] => 2023-07-12 18:05:46 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=19786 [menu_order] => 70 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [67] => WP_Post Object ( [ID] => 19658 [post_author] => 65 [post_date] => 2020-08-12 07:00:58 [post_date_gmt] => 2020-08-12 07:00:58 [post_content] => On August 12, NetSPI Managing Director Nabil Hannan was featured in TechTarget: There's a reason why a computer virus is called a "virus," as they have many similarities to medical viruses. Notably, as medical viruses can have a severe impact on your personal health, a computer virus can severely impact the health of your business. In today's digital world, a computer virus, a "wormable" remote code execution vulnerability designed to persistently replicate and spread to infect programs and files, can begin causing damage in minutes. Sound familiar? According to the CDC, the virus that causes COVID-19 spreads very easily and sustainably, meaning it spreads from person-to-person without stopping. With COVID-19 top of mind and making headlines across the globe, CISOs should now take the time to make observations about viruses outside of the technology industry and see how they apply to cybersecurity strategies. So, what exactly can security teams learn from studying medical viruses to ensure the health of a business' systems and applications? Here are three key considerations. Read the full article here. [post_title] => TechTarget: What cybersecurity teams can learn from COVID-19 [post_excerpt] => On August 12, NetSPI Managing Director Nabil Hannan was featured in TechTarget. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => techtarget-what-cybersecurity-teams-can-learn-from-covid-19 [to_ping] => [pinged] => [post_modified] => 2021-04-14 05:30:32 [post_modified_gmt] => 2021-04-14 05:30:32 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=19658 [menu_order] => 475 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [68] => WP_Post Object ( [ID] => 19612 [post_author] => 65 [post_date] => 2020-08-11 07:00:51 [post_date_gmt] => 2020-08-11 07:00:51 [post_content] =>

Black Hat looked different this year as the security community gathered on the virtual stage, due to COVID-19 concerns. “Different” doesn’t necessarily carry a negative connotation: the shift not only addressed public safety concerns, but also enabled the security community to critically think about the way we do work in our digital-centric world, particularly at a time where we are increasingly reliant on technology to stay connected.

When scrolling through the countless briefings available, it was clear that politics and COVID-19 remain top-of-mind. So, let’s start with the biggest topic of the week: election security.

Takeaway #1: Securing the Vote Relies on Collaboration… and Testing

Matt Blaze, a Georgetown University security researcher, kicked off the conference with a keynote titled, Stress-Testing Democracy: Election Integrity During a Global Pandemic. In the past, the industry has had conversations about securing voting machines themselves, but this year, the discussions were centered on online and mail-in voting mechanisms and the hacking of the process. Matt shared, “our confidence in the [election] outcome increasingly depends on the mechanisms that we use to vote.” And this year, we are tasked with scaling up mail-in voting mechanisms.

Blaze looked at software as the core of the election security framework, noting that “software is generally hard to secure, even under the best circumstances.” Though we expect a majority of votes to be made via paper ballot, software will still be used in every facet of the election system, from pre-election (ballot definition, machine provisioning) to post-election (tallying results, reporting, audits, and recounts). So, what is the industry to do?

He suggested that election committees prepare for a wide range of scenarios and threats and work towards software independence, though most don’t have the appropriate budgets to do so – a problem all too familiar to the security industry. Because of this, he encouraged the IT community to volunteer their time and become more involved with their local election efforts, specifically, testing the software and machines for vulnerabilities. In a way, he opened the door for ethical hackers, like our team at NetSPI, to get involved. An encouraging call to action that proved realistic during Black Hat occurred when voting machine maker ES&S and cybersecurity firm Synack announced a program to vet ES&S’ electronic poll book and new technologies - a call for “election technology vendors to work with researchers in a more open fashion and recognize that security researchers at large can add a lot of value to the process of finding vulnerabilities that could be exploited by our adversaries,” according to WIRED.

Continuing the narrative of election security, the day two keynote from Renee DiResta, research manager at the Stanford Internet Observatory informed Black Hat attendees of how to use information security to prevent disinformation in mass media (social media, broadcast, online publications). She explained how influence campaigns can skew not only voting results, but also perceptions of companies and, larger-scale, entire countries and governments. She reiterated that disinformation is indeed a cybersecurity problem that CISOs can’t ignore. In another humbling call to action for the security testing community, DiResta suggested, “we need to do more red teaming around social [media] and think of it as a system and [understand] how attacks can impact operations.” Read more about the keynote on ThreatPost.

Takeaway #2: The Importance of Application Security Has Heightened in 2020

Let’s start with healthcare. Amid the current public health pandemic, healthcare systems continue to be a top target for adversaries due to the sensitive and confidential patient records they hold. During Black Hat, the security industry shined a light on some of the various areas of weakness that can be exploited by an attacker. A big one? Healthcare application security.

One conversation that stuck out to me was from the Dark Reading news desk: HealthScare: Prioritizing Medical AppSec Research. In the interview, Seth Fogie, information security director at Penn Medicine, explains why healthcare application vulnerabilities matter in the day-to-day business of providing patient care. He recommends that the security and healthcare communities should have a better line of communication around AppSec research and testing efforts. He would like to see more security professionals asking healthcare administrators which other applications, including third-party vendors, they can assess for vulnerabilities. I agree with his recommendation to raise awareness for application testing in healthcare security as it would add value to the assessments already in effect and ultimately the overall security posture for the organization.

Then, there are web applications, such as virtual meeting and event platforms, that have seen a surge in popularity. Released at Black Hat, researchers found critical flaws in Meetup.com that showcased common gaps in AppSec. Researchers explained how common AppSec flaws cross site scripting and request forgery (both tied to the platform’s API) could have resulted in threat actors redirecting payments and other malicious actions. This is just one example showcased at Black Hat that showed the heightened AppSec risks amid COVID-19, as we continue to shift in-person activities to online platforms.

With NetSPI a Black Hat sponsor, myself and my colleague Jake Reynolds hosted a 20-minute session on revamping application security (AppSec) programs: Extreme Makeover: AppSec Edition. During the session, we explored the various options for testing [SAST, IAST, SCA, manual] and the challenges that exist in current AppSec testing programs and how to “renovate” an AppSec program to ultimately increase time to remediation. Watch the session to learn, through one centralized platform, how to remodel your AppSec program to achieve faster remediation, add context to each vulnerability, enable trends data and reporting functions to track and predict vulnerabilities over time, and reduce false positives.

Takeaway #3: Our Connected Infrastructure Is Vulnerable

As in years past, the Internet of Things (IoT) again took over Black Hat conversations. This year, the research around IoT vulnerabilities proved fascinating. Showcasing the potential impact of IoT infiltration was at the core of the research. Here are some examples:

  • Security researchers at the Sky-Go Team, found more than a dozen vulnerabilities in a Mercedes-Benz E-Class car that allowed them to remotely open its doors and start the engine.
  • Researchers with the Georgia Institute of Technology described how certain high-wattage Internet-connected devices such as smart air-conditioners and electric-vehicle (EV) chargers could be used to manipulate energy markets.
  • And perhaps the most interesting, and alarming: James Pavur, an academic researcher and doctoral candidate at Oxford University, used $300 worth of off-the-shelf equipment to hack satellite internet communications to eavesdrop and intercept signals across the globe.

All these examples highlight how much complexity goes into building systems today. As we continue to increase complexity and inter-connectivity, it becomes more challenging to properly protect these systems from being compromised. At NetSPI, we are constantly working with our clients to help them build well-rounded cyber security initiatives. It’s well understood today that just performing penetration testing near the end of a product’s lifecycle before going to production isn’t adequate from the perspective of security. It’s important to understand various business objectives and implement proper security touchpoints throughout a product’s lifecycle. Vulnerability detection tools have come a long way in the past decade or so. With significant advances in products like SAST, DAST, RASP, IAST, SCA, etc., integrating these tools into the SDLC in earlier phases have been a common approach for many organizations. The true challenge however is determining how to make security as frictionless as possible with the overall product development lifecycle. NetSPI works continually with clients to help them build and implement strategy around their security program based on their business objectives and risk thresholds.

Takeaway #4: We’re Learning More About Securing the Remote Workforce

Lastly, many cloud, container, and remote connection-related sessions were held during the conference. Many of them highlighted the need to reinforce security practices pertaining to remote work, or telecommuting – not surprising, given the state of today’s workforce amid the pandemic.

Black Hat research from Orange Cyber Defense demonstrated that VPN technologies ordinarily used by businesses to facilitate remote access to their networks are “poorly understood, improperly configured and don't provide the full level of protection typically expected of them.” The researchers attribute the vulnerabilities to a common scenario where the remote worker is connected to Wi-Fi that is untrusted, insecure or compromised. Watch this video interview with the researchers via Security Weekly.

It's an ever-evolving issue that has warranted additional focus this year and the industry is continuing to learn best practices to achieve a secure remote connection. I would consider this topic a silver lining to the pandemic. It has forced the security industry to learn, better understand, and serve as counsel to organizational leaders on the security considerations that come with scaling up remote workers. A great starting place for remote connection security? Read my recent blog post: Keeping Your Organization Secure While Sending Your Employees to Work from Home.

While we certainly missed the face-to-face connections and networking opportunities, the virtual conference was an invaluable opportunity to hold urgent security conversations around election mechanisms, healthcare systems during the pandemic, application security, the growing remote workforce, and connected devices and infrastructures.

While these were my key takeaways, there were many more discussions that took place – and DefCon continues today with prerecorded presentations and live streamed Q&As and panels on Twitch. Want to explore more Black Hat 2020 news? Check out this Black Hat webpage. We hope to see you next year, hopefully in-person!

[post_title] => Black Hat 2020: Highlights from the Virtual Conference; Calls to Action for the Industry [post_excerpt] => Black Hat looked different this year as the security community gathered on the virtual stage, due to COVID-19 concerns. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => black-hat-2020-highlights-from-virtual-conference-calls-to-action-for-industry [to_ping] => [pinged] => [post_modified] => 2021-04-14 00:52:14 [post_modified_gmt] => 2021-04-14 00:52:14 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=19612 [menu_order] => 477 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [69] => WP_Post Object ( [ID] => 19648 [post_author] => 65 [post_date] => 2020-08-11 07:00:34 [post_date_gmt] => 2020-08-11 07:00:34 [post_content] => On August 11, NetSPI Managing Director Nabil Hannan was featured in Security Boulevard: Offensive security measures like penetration testing can help enterprises discover the common vulnerabilities and exploitable weaknesses that could put an them at risk of costly cybersecurity incidents. By pitting white hat hackers against an organization’s deployed infrastructure, organizations can gain a better understanding of the flaws they should fix first—namely the ones most likely to be targeted by an everyday criminal. However, over the years penetration testing services have evolved to be extremely automated and limited in scope. Armed with scanning tools and limited rules of engagements, pen testers tend to focus purely on the technical vulnerabilities within a given system, platform, or segment of the network. Pen tests are usually conducted over short durations of time and their resultant reports offer up recommendations on fixes that architects or developers can make to code and configuration. Read the full article here. [post_title] => Security Boulevard: 12 Hot Takes on How Red Teaming Takes Pen Testing to the Next Level [post_excerpt] => On August 11, NetSPI Managing Director Nabil Hannan was featured in Security Boulevard. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => security-boulevard-12-hot-takes-how-red-teaming-takes-pen-testing-next-level [to_ping] => [pinged] => [post_modified] => 2021-04-14 05:30:39 [post_modified_gmt] => 2021-04-14 05:30:39 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=19648 [menu_order] => 476 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [70] => WP_Post Object ( [ID] => 19525 [post_author] => 65 [post_date] => 2020-08-04 07:00:23 [post_date_gmt] => 2020-08-04 07:00:23 [post_content] =>

Distributed Denial of Service (DDoS) attacks have gained celebrity status during COVID-19. In the first quarter of 2020, DDoS attacks in the U.S. rose more than 278% compared to Q1 2019 and more than 542% compared to Q4 2019, according to Nexusguard’s 2020 Threat Report. This increase in attacks is correlated with the increased dependency on remote internet access and online services as many organizations’ workforces continue to work from home amid COVID-19 concerns. With the dependency on remote internet access comes an increased need for Internet Service Providers (ISPs) to monitor and mitigate irregular activity on their networks before it results in server outages or loss of critical resources, data, or money. But ISPs aren’t the only ones that need to be proactive as DDoS attacks continue to rise – their customers will face the same problems if proactive security measures are not in place.

Learning from others: Amazon Web Services (AWS) successfully thwarted the largest DDoS attack ever recorded on its infrastructure, internet infrastructure firms Akamai and Cloudflare fended off record-breaking DDoS attacks in June, and online gaming platforms are being targeted as attackers figure out how to further monetize DDoS attacks (see: GGPoker). These recent attacks underscore how similar vulnerabilities and weaknesses can easily propagate across many organizations since there’s a tendency of reusing similar technologies to support business functions, such as widespread use of open source code or network hardware. Additionally, it’s also common that simple misconfigurations issues can result in breaches that have significant business impact.

It’s important to understand that there are two common forms of DDoS attacks:

  1. Application layer attacks where attackers try to overload a server by sending an overwhelming number of requests that end up overtaking much of the processing power.
  2. Network layer attacks where attackers try to overwhelm network bandwidth and deny service to legitimate network traffic.

The ultimate goal of both techniques is to overwhelm a particular business, service, web app, mobile app, etc. and keep them from being accessible to legitimate access requests from the intended users/customers. This is extremely challenging to manage since the attacks come from compromised machines or ‘bots’ in a very distributed fashion, which makes blocking those requests using simple filtering techniques unrealistic.

Many web application firewall vendors have DDoS mitigation solutions available for customers to buy, but that shouldn’t be the only step that organizations should rely on. Defense in depth, or an approach to cyber security in which defensive tactics are layered to ensure back up measures in the case that another security control fails, is key for all security concepts. Here are five techniques organizations can layer on to stop DDoS attacks:

  1. Penetration Testing – Although it’s difficult to properly simulate full-scale DDoS attacks during a penetration test, it’s important to do regular third-party testing that simulates real-world attacks against your infrastructure and applications. A proactive penetration testing approach will allow organizations to be prepared for when the time comes that they’re actually under attack. Tip: Implement Penetration Testing as a Service (PTaaS) to enable continuous, always-on vulnerability testing.
  2. Vulnerability Management and Patching – Ensure that all your systems have been properly updated to the latest version and any relevant security and/or performance patches have been applied. A proper patching and vulnerability management process will ensure this is happening within a reasonable timeframe and within acceptable risk thresholds for the business.
  3. Incident Response Planning – Build a team whose focus is on responding in an expedited fashion with the appropriate response. This team’s focus needs to be on ensuring they can minimize the impact of the attack and ensure they can trigger the appropriate processes to ensure that communications with customers and internal teams are happening effectively. More on incident response planning here.
  4. Traffic Anomaly Monitoring – Make sure there’s proper monitoring taking place across all network traffic to set off alerts if any abnormal behavior is detected from suspicious sources, especially if they are from geographies that don’t make normal business sense.
  5. Threat Intelligence and Social Media – Keep an eye on threat intel feeds and social media for any relevant information that may help predict attacks before they happen, allowing organizations to plan accordingly.

DDoS is just one of many cyberattack methods that have increased due to COVID-19 remote working dependency. As networks continue to expand, we are opening new entry points to attackers to secure footholds and cause critical damage – pointing to the need for continuous evaluation of security strategies.

My overarching advice? Go beyond the baseline security measures, such as a firewall, and implement a proactive security strategy to identify and remediate vulnerabilities, monitor network activity, plan for a breach as they become more inevitable, and connect with the security community to stay on top of the latest threat intel.

[post_title] => The Rise of DDoS Attacks and How to Stop Them [post_excerpt] => Distributed Denial of Service (DDoS) attacks have gained celebrity status during COVID-19. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => the-rise-of-ddos-attacks-and-how-to-stop-them [to_ping] => [pinged] => [post_modified] => 2021-04-14 00:52:20 [post_modified_gmt] => 2021-04-14 00:52:20 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=19525 [menu_order] => 479 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [71] => WP_Post Object ( [ID] => 19329 [post_author] => 65 [post_date] => 2020-07-21 07:00:05 [post_date_gmt] => 2020-07-21 07:00:05 [post_content] =>

In a recent episode of Agent of Influence, I talked with Mike Rothman, President of DisruptOps. Mike is a 25-year security veteran, specializing in the sexy aspects of security, such as protecting networks, protecting endpoints, security management, compliance, and helping clients navigate a secure evolution in their path to full cloud adoption. In addition to his role at DisruptOps, Mike is Analyst & President of Securosis. I wanted to share some of his insights in a blog post, but you can also listen to our interview here, on Spotify, Apple Music, or wherever you listen to podcasts.

The Evolving Perception of the Cyber Security Industry

Mike shared the evolution of the cyber security industry from his mom’s perspective – twenty years ago, his mom had no idea what he did – “something with computers.” Today, though, as cyber security and data breaches have made headline news, he can point to that as being what he does – helping companies prevent similar breaches.

Cyber security has become much more visible and has entered the common vernacular. A lot of people used to complain that nobody takes the industry seriously, nobody cares about what we're doing, and they marginalize everything that we're talking about. But that has really flipped, because now nobody's marginalizing anything about security. We have to show up in front of the board and talk about why we're not keeping pace with the attackers and why we're not protecting customer data to the degree that we need to. Security has become extremely visible in recent years.

To show this evolution of the industry, Mike noted he’s been to 23 out of last 24 RSA conferences, but when he first started going to the show, it was in a hotel on top of Nob Hill in San Francisco, and there were about 500 people in attendance, most of whom were very technical. Now the conference has become a huge staple of the industry with 35,000-40,000 people attending each year. (Read our key takeaways from this year’s RSA Conference.)

As many guests on the Agent of Influence podcast have noted, the security industry is always evolving; there's always a new challenge or a new type of methodology that’s being adopted. However, at the same time, there are also a lot of parallels of things that don’t change. For example, a lot of the new vulnerabilities and things that are being identified today are ultimately still the same type of vulnerabilities we've been finding for the longest time – there's still injection attacks, they just might be a different type of injection attack. I personally enjoy looking at things that are recurring and are the same, but just look and feel different in the security space, which makes it interesting.

What Does Cloud Security Really Mean?

Mike started to specialize in cloud security because he says he just got lucky. A friend of his, Jim Reavis founded the Cloud Security Alliance and wanted to offer a certification in cloud security, but he had no way to train people so they could obtain the certification. Jim approached Mike and Rich Mogull to see if they could build the training curriculum for him. As Mike and Rich considered this offer, they realized they A) knew nothing about cloud and B) knew nothing about training!

That was 10 years ago, and as they say… the rest is history. Mike and Rich have been teaching cloud security for the past 10 years, including at the Black Hat Conference for the past five years and advising large customers about how to move their traditional data center operations into the cloud while protecting customer data and taking advantage of a number of the unique characteristics of the cloud. They’ve also founded a company called DisruptOps, which originated from research Mike did with Securosis that they spun out into a separate company to do cloud security automation and cloud security operations.

As Mike says, 10 years ago, nobody really knew what the cloud was, but over time, people started to realize that with the cloud, you get a lot more agility and a lot more flexibility in terms of how you can provision, and both scale up and contract your infrastructure, giving you the ability to do things that you could never do in your own data center. But as with most things that have tremendous upside, there's also a downside. When you start to program your infrastructure, you end up having a lot of application code that's representative of your infrastructure, and as we all know – defects happen.

One of the core essential characteristics of the cloud is broad network access, which means you need to be able to access these resources from wherever you are. But, if you screw up an access control policy, everybody can get to your resources, and that's how a lot of cloud breaches happen today – somebody screws up an access control policy to a storage bucket that is somewhere within a cloud provider.

[embed]https://youtu.be/yOwxk8YajoE[/embed]

Data Security and the Cloud

DisruptOps’ aim is to get cyber security leaders and organizations to think about how they can start using architecture as the security control as we move forward. By that he means, you can build an application stack that totally isolates your data layer from your compute layer from your presentation.

These are things you can't do in your data center because of lateral movement. Once you compromise one thing in the data center, in a lot of cases, you've compromised everything in the data center. However, in the cloud, if you do the right thing from an isolation standpoint and an account boundary standpoint, you don't have those same issues.

Mike encourages people to think more expansively about what things like a programmable infrastructure, isolation by definition, and default deny on all of your access policies for things that you put into the cloud would allow you to do. A lot of these constructs are kind of foreign to people who grew up in data center land. You really must think differently if you want to set things up optimally for the cloud, as opposed to just retrofitting what you’ve been doing for many years to fit the cloud.

Driving Forces Behind Moving from Traditional Data Centers to the Cloud

  1. Speed – Back in the day, it would take three to four weeks to get a new server ordered, shipped, set up in the rack, installed with an operating system, etc. Today, if you have your AWS free tier application, you can have a new server using almost any operating system in one minute. So, in one minute, you have unbounded compute, unbounded storage, and could set up a Class B IP network with one API call. This is just not possible in the data center. So there's obviously a huge speed aspect of being able to do things and provision new things in the cloud quickly.
  2. Cost – Depending on how you do it, you can actually save a lot of money because you're not using the resources that you had to build out in order to satisfy your peak usage; you can just expand your infrastructure as you need to and contract it when you're not using those resources. If you're able to auto scale and scale up and scale down and you build things using microservices and a lot of platform services that you don't have to build and run all the time in your environment, you can really build a much more cost effective environment in order to run a lot of your technology operations.
    However, Mike said, if you do it wrong, which is taking stuff you already paid for and depreciated in your data center and move it into the cloud, that becomes a fiasco. If you're not ready to move to the cloud, you end up paying by the minute for resources that you've already paid for and depreciated.
  3. Agility – If you have an attack in one of your technology stacks, you just move it out, quarantine it, build a new one, and move your sessions over there. Unless you want to have totally replicable data centers, you can't do this in a data center.

There are a lot of architectural, agility, cost, global capabilities, elasticity to scale up and down, and other reasons to take advantage of the capabilities of the cloud.

Resources to Get Started in the Cloud

Mike recommended the below resources and tools for people looking to learn more about the cloud:

  1. Read The Phoenix Project by Gene Kim, which Mike considers the manifesto of DevOps. Regardless of whether your organization is in the cloud or moving to the cloud, we're undergoing a cultural transformation on the part of IT that looks a lot like DevOps. Some organizations will embrace the cloud in some ways, and other organizations will embrace it in others. The Phoenix Project will give you an idea in the form of a parable about what is possible. For example, what is a broken environment and how can you embrace some of these concepts and fix your environment? This gives you context for where things are going and what the optimal state looks like over time.
  2. Go to aws.amazon.com and sign up for an account in their free tier for a year and start playing around with it by setting up servers and networks, peering between things, sending data, accessing things via the API, logging into the console, and doing things like setting up identity access management policies on those resources. Playing around like this will allow you to get a feel for the granularity of what you can do in the cloud and how it's different from how you manage your on-prem resources. Without having a basic understanding of how the most fundamental things work in the cloud, moving to the cloud will be really challenging. It is hard to understand how you need to change your security practice to embrace the cloud when you don't know what the cloud is.
  3. Mike also plugged their basic cloud training courses which give both hands on capabilities, as well as background to be able to pass the Certificate of Cloud Security Knowledge certification. You’ll be able to both talk the language of cloud and play around with Cloud.

To listen to the full podcast, click here, or you can find Agent of Influence on Spotify, Apple Music, or wherever you listen to podcasts.

[post_title] => Cloud Security: What is it Really, Driving Forces Behind the Transition, and How to Get Started [post_excerpt] => In a recent episode of Agent of Influence, Nabil Hannan talked with Mike Rothman, President of DisruptOps. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => cloud-security-driving-forces-behind-transition-how-to-get-started [to_ping] => [pinged] => [post_modified] => 2021-11-15 11:21:25 [post_modified_gmt] => 2021-11-15 17:21:25 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=19329 [menu_order] => 484 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [72] => WP_Post Object ( [ID] => 19262 [post_author] => 65 [post_date] => 2020-07-09 07:00:47 [post_date_gmt] => 2020-07-09 07:00:47 [post_content] => On July 9, 2020, NetSPI Managing Director Nabil Hannan was featured in Dark Reading. Google "pen testing return on investment (ROI)" and you will find a lot of repetitive advice on how to best communicate the value of a pen-testing engagement. Evaluate the costs of noncompliance penalties, measure the impact of a breach against the cost of a pentest engagement, reduce time to remediation, to name a few. While all of these measurements are important, pen testing provides value beyond compliance and breach prevention, even through a financial lens. Let's explore the critical steps to successfully define and communicate ROI for security testing. Read the full article here. [post_title] => Dark Reading: Pen Testing ROI: How to Communicate the Value of Security Testing [post_excerpt] => On July 9, 2020, NetSPI Managing Director Nabil Hannan was featured in Dark Reading. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => dark-reading-pen-testing-roi-how-to-communicate-the-value-of-security-testing [to_ping] => [pinged] => [post_modified] => 2021-04-13 00:06:51 [post_modified_gmt] => 2021-04-13 00:06:51 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=19262 [menu_order] => 487 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [73] => WP_Post Object ( [ID] => 19208 [post_author] => 65 [post_date] => 2020-06-23 07:00:11 [post_date_gmt] => 2020-06-23 07:00:11 [post_content] =>

I was recently a guest on the CU 2.0 podcast, where I talked with host, Robert McGarvey about enabling a remote workforce while staying secure during the COVID-19 pandemic and how the pandemic has actually inspired digital transformation in some organizations.

I wanted to share some highlights in a blog post, but you can also listen to the full interview here.

Security Implications of Decisions Made to Enable a Remote Workforce

In spring 2020, when the COVID-19 pandemic first started, a lot of organizations weren't quite ready to enable their employees to work from home. It became a race to get people to be functional while working remote, and security was an afterthought – if that. During this process, a lot of companies started to realize they had limitations in terms of how much VPN licensing they had or that they had a lot of employees who were required to work in the office so didn’t have laptops they could take home.

For many companies, when they started working remote in early spring, they thought it would be for a relatively short time. As such, many are just now starting to realize they haven't truly assessed the risk they're exposing themselves to. Only now, two to three months later, are many starting to focus their efforts on understanding the security implications of the decisions they made this past spring to enable a remote workforce.

For example, companies have limited licenses for virtual desktops, virtual images or even operating systems and versions of operating systems that they’re using. To quickly get employees access to their work systems and assets from home, a lot of companies ended up re-enabling operating systems that they had previously disabled, including bringing back Windows XP, Windows 7, and Windows 8 machines that they had stopped using. There are challenges to this, including not having proper patches and updates for issues that are discovered. If you have operating systems that are outdated, they have certain known vulnerabilities that could be exploited. Many companies made certain decisions from a business perspective, but are now asking what type of security risk they’ve exposed ourselves to.

It’s important to understand the basic principles of security, and make sure you're thinking through those as you enable your workforce to be more effective while working from home. Organizations have to strike the balance between their business objectives, the function they’re trying to accomplish, and the security risks they’re exposing themselves to.

There’s also a lack of education around remote access technologies. For example, when you're working from home, you need to ensure you're always connected to your organization's VPN when you're browsing the internet or doing work-related activities, because that traffic then can’t be intercepted and viewed by anyone else on the internet.

It’s critical to use the right technologies correctly and enable things like multi-factor authentication. With multi-factor authentication, if a hacker has your password, at least they don't have that second factor that comes to you via email, text message, phone call, or an authenticator application.

I strongly believe that today we must have multi-factor authentication enabled on everything. It's almost negligent of an organization to not enable multi-factor authentication, especially given how much prevalence we've seen with passwords being breached or organizations with database breaches where their employees’ or their clients’ username and passwords have been exposed.

The Weakest Link in Today’s Technology Ecosystem: People

The weakest link in today's technology ecosystem is the human element.

As soon as the COVID-19 pandemic started, we noticed there was a significant increase in phishing emails and scams that attackers started deploying and that they were very specifically geared towards the COVID-19 pandemic itself. Some examples include:

  • Emails pretending to be from your doctor's office with attachments that have certain steps that you need to take to prevent yourself from getting the virus or supporting your immune system.
  • Emails supposedly from your business partners with FAQ attachments containing details around what they were doing to protect their business from a business continuity perspective during the pandemic.
  • Emails from fake employees claiming they had contracted the virus and the attachment contained lists of people they had come in contact with.
  • Emails pretending to be from HR, letting people know that their employment had been terminated, and they needed to click on a link to claim their severance check.

Spam filters are only upgraded once they see the new techniques attackers are using. Much of the language in these phishing emails was around COVID-19 language that they hadn’t seen before, so they weren’t being caught in spam filters – and many people fell victim to a lot of these attacks. Once this happens, you’re exposed to potential ransomware. Once one employee downloads a file onto their machine containing malware, that can eventually propagate across your whole network to other machines that are connected. And, like the real virus, this can propagate very fast and hide its symptoms until a certain time or a particular event that triggers a payload.

There are a lot of similarities between a medical virus and a computer virus, but the biggest difference in the digital world is that the spreading of the virus can happen exponentially faster, because everything moves faster on the internet.

COVID-19: Inspiring Digital Transformation

I believe there needs to be an increased focus on education around the importance of cyber security going beyond the typical targeted groups. Everyone within an organization is responsible for cyber security – and getting that broad understanding and education to all employees is key.

We’re seeing a lot of transformation today in terms of how we work and how the norm is going to change given this situation. As such, there needs to be increased awareness around security across the board, given the pandemic. People have to be the first step to making sure that they're making good and sensible decisions before they take any specific action online. There are a lot of very simple hygiene related things that are missing today that needs to be done better – and people just need better education around these items.

Additionally, I believe organizations are going to take steps to make sure that they're doing things like enabling multi-factor authentication for their employees to connect remotely and making sure that VPN access is required for you to work on your machine if you’re remote.

Cloud-based software is also going to be key and in fact, I can't imagine organizations being very successful at sending people to work from home if they weren't leveraging the cloud to quickly scale their ability to serve their employees and their customers in a different format.

In many ways, COVID-19 has inspired a lot of transformation and innovation in how we approach the work culture. People are also becoming more aware of what actions they're taking online and thinking about security implications of the actions that they're taking.

[post_title] => COVID-19: Evaluating Security Implications of the Decisions Made to Enable a Remote Workforce [post_excerpt] => Nabil Hannan was featured on the CU 2.0 podcast with host, Robert McGarvey, and talked about enabling a secure, remote workforce during COVID-19 [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => covid-19-evaluating-security-implications-of-the-decisions-made-to-enable-a-remote-workforce [to_ping] => [pinged] => [post_modified] => 2021-04-14 00:52:52 [post_modified_gmt] => 2021-04-14 00:52:52 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=19208 [menu_order] => 492 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [74] => WP_Post Object ( [ID] => 19198 [post_author] => 65 [post_date] => 2020-06-16 07:00:42 [post_date_gmt] => 2020-06-16 07:00:42 [post_content] => On June 16, 2020, NetSPI Managing Director Nabil Hannan was featured in TechTarget. What do NVIDIA's Jensen Huang, Salesforce's Marc Benioff and Microsoft's Satya Nadella have in common? They were all deemed the greatest business leaders of 2019, according to Harvard Business Review's "The CEO 100" list. But another commonality they share is that each have had mentors to help guide them through their careers in technology and get them to where they are today. Mentorship is critical in every industry but given the immense opportunity for career growth in the cybersecurity industry today, having the right guidance is a must. The industry faces many challenges from a staffing perspective -- from the skills shortage to employee burnout -- making the role of a mentor that much more important as others navigate these challenges. While mentorship is often considered subjective, there are a few best practices to follow to ensure you're establishing a solid foundation in the mutually beneficial relationship, not only to help new talent navigate the industry, but also to help strengthen the industry as a whole. First, let's explore what to look for when hiring new cybersecurity talent. Read the full article here. [post_title] => TechTarget: Invest in new security talent with cybersecurity mentorships [post_excerpt] => On June 16, 2020, NetSPI Managing Director Nabil Hannan was featured in TechTarget. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => techtarget-invest-in-new-security-talent-with-cybersecurity-mentorships [to_ping] => [pinged] => [post_modified] => 2021-04-14 05:31:49 [post_modified_gmt] => 2021-04-14 05:31:49 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=19198 [menu_order] => 494 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [75] => WP_Post Object ( [ID] => 19184 [post_author] => 65 [post_date] => 2020-06-16 07:00:02 [post_date_gmt] => 2020-06-16 07:00:02 [post_content] =>

Common Myths Around Application Security Programs

In order for an organization to have a successful Application Security Program, there needs to be a centralized governing Application Security team that’s responsible for Application Security efforts. In practice, we hear many reasons why organizations struggle with application security, and here are four of the most common myths that need to be dispelled:

1. An Application Security Team is Optional

Just like everything else, there needs to be dedicated effort and responsibility assigned for Application Security in order for an Application Security Program to be successful. Based on our experience and evidence of successful Application Security Programs, all of them have a dedicated Application Security team focused on managing Application Security efforts based on the organization’s business needs.

2. My Organization is Too Small to Have an Application Security Team

A small organization is no excuse to avoid doing Application Security activities. Application security cannot be an after-thought or something that’s bolted on when needed. It needs to be an inherent property of your software and having focus and responsibility for Application Security in the organization will help prevent and remediate security vulnerabilities.

3. I Cannot Have an Application Security Team Because We Are a DevOps/Agile/Special Snowflake Shop

Just because your business or your development processes are different from others, doesn’t mean that you don’t have a need for Application Security, nor does it mean that you cannot adopt certain application security practices. There are many opportunities in any type of an SDLC to inject application security touchpoints to ensure that business objectives or development efforts are not hindered by security, but rather are enhanced by security practices.

4. An Application Security Team will Hinder Our Ability to Deliver/Conduct Business

In our experience, we have seen that more secure applications are typically better in all perspectives – performance, quality, scalability, etc. Application Security activities, if adopted correctly will not hinder your organization or team’s ability to conduct business but will in fact provide a competitive advantage within your business vertical.

Why Do You Need an Application Security Program?

Defect Discovery – Organizations typically start their application security journey in defect discovery efforts. The two most common discovery techniques used are Penetration Testing and Secure Code Review to get started discovering security vulnerabilities and remediating them appropriately.

Defect Prevention – An Application Security Program’s goal is not only to help proactively identify and remediate security issues, but also to avoid security issues from being introduced.

Understanding Risk – In order to identify an organization’s risk posture, it’s necessary to identify what defects exist, and then determine the likelihood of these defects being exploited and the resulting business impact from successful exploitation. Organizations need to understand how the defects identified actually work and determine what components of the organization and business are affected by the identified defects.

Getting Started with Defect Discovery

There are many different techniques of defect discovery, and each has its own set of strengths, weaknesses, and limitations in what they can identify. Certain techniques are also prone to higher levels of false positives than others. There’s also factors such as speed at which these techniques can be implemented and how quickly results can be made available to the appropriate stakeholders which need to be considered when implementing a particular defect discovery technique in an organization. Ultimately, all of the techniques do have certain areas of overlap in terms of the types of defects that they can identify, and all the techniques do complement each other.

Discovery Technique #1 - Penetration Testing

Penetration Testing is the most popular defect discovery technique used by organizations and is a great way to get started if you have had no focus towards Application Security in the past. Pentesting allows an organization to get a baseline of the types of vulnerabilities that their applications are most likely to contain. There’s a plethora of published materials on known attacks that work and it’s easy to determine what to try. When performing penetration testing, the type of testing varies significantly based on the attributes of the system being tested (web application, thick client, mobile application, embedded application, etc.).

Execution Methods

Technology/Tool Driven
  • Multiple commercial and open source tools available
  • DAST tools are widely available while IAST tools are maturing and gaining adoption
  • Cost, tool capability, customizability, deployment options, features, etc. are factors to consider
Outsourced Manual Penetration Testing (Third-Party Vendor)
  • Many options available
  • NetSPI provides a wide range of Penetration Testing services at varying levels of depth
  • Available on-demand and easy to scale
  • Driving factors to consider – cost, scalability, quality, scheduling logistics, trust, delivery model maturity, etc.
In-House Manual Penetration Testing
  • Hard to find good talent
  • Harder to retain good talent long-term
  • Impossible to scale

Discovery Technique #2 - Secure Code Review

Secure Code Review is often mistaken for Code Review that development teams typically do in a peer review process. Secure Code Review is an activity where source code is reviewed in an effort to identify security defects that may be exploitable. There are plenty of checklists on common patterns to look for or certain coding practices to avoid (hardcoded passwords, usage of dangerous APIs, buffer overflow, etc.). There are also various development frameworks that publish secure coding guidelines that are readily available. Some organizations with more mature Secure Code Review practices have implemented secure by design frameworks or adopted hardened libraries to ensure that their developers are able to avoid common security defects by enforcing the usage of the organization’s pre-approved frameworks and libraries in their development efforts.

Execution Methods

Technology/Tool Driven
  • Multiple commercial and open source SAST tools available
  • Cost, tool capability, customizability, false positive rates, deployment options, features, etc. are factors to consider
  • Triaging scan results can be costly and time consuming given the nature of SAST scanning and the high false positive rates
Outsourced Manual Secure Code Review (Third-Party Vendor)
  • Many options available
  • NetSPI provides a wide range of Secure Code Review services at varying levels of depth
  • Available on-demand and easy to scale
  • Driving factors to consider – cost, scalability, quality, scheduling logistics, trust, delivery model maturity, etc.
In-House Manual Secure Code Review
  • Hard to find good talent
  • Harder to retain good talent long-term
  • Impossible to scale
  • Inconsistent results – even if it's the same person on a different day
  • Checklists help, but results vary significantly based on the reviewer's capabilities

Defect Discovery is Just the Beginning

It’s important to remember that defect discovery is more than just the two techniques discussed here. In the scheme of your Application Security Program, the effort towards defect discovery is just a part of your application security program. In addition to defect discovery, you need to consider the following (and much more):

  • What does it mean for your organization to have a Secure SDLC from a governance perspective?
  • How are you going to create awareness and outreach for your SDLC to ensure the appropriate stakeholders know what their roles and responsibilities are towards application security?
  • What key processes and technology do you need to put in place to ensure everyone is capable of performing the application security activity that they’re responsible for?
  • How are you going to manage software that’s developed (and/or managed) by a third party (augmenting vendor management to reduce risk)?

Application Security Governance and Strategy

Application security governance is a blueprint that is comprised of standards and policies layered on processes that an organization can leverage in their decision-making processes in their application security journey.

Organizations have started adopting a Secure SDLC (S-SDLC) process as part of their software development efforts, and this tends to vary greatly between organizations. Ultimately, the focus of the S-SDLC is to ensure that vulnerabilities are detected and remediated (or prevented) as early as possible.

Many organizations unfortunately have not defined their application security governance model, and as a result, lack a proper S-SDLC. Without the proper processes in place, it’s challenging, if not impossible to have oversight of the application security risks posed to all the applications in an organization’s application inventory.

Ultimately, we’ve observed that regardless of where the governance function is implemented (software engineering, centralized application security team, or somewhere else), there needs to be dedicated focus on application security to get started on the journey to reducing risk faced from an application security perspective.

The Trifecta of People, Process and Technology

1. Application Security Team (People)

Organizations need to assign responsibility for application security. In order to do this, it’s important to establish an application security team that is a dedicated group of people who are focused on making constant improvements to an organization’s overall application security posture and as a result, protect against any potential attacks. Organizations that have a dedicated application security team are known to have a better application security posture overall.

2. Secure SDLC/Governance (Process)

Clear definition of standards, policies, and business processes are key to having a successful application security strategy. The S-SDLC ensures that applications aren’t created with vulnerabilities or risk areas that are unacceptable to the organization’s business objectives.

3. Application Security Tools and Technology

There’s a plethora of open source and commercial technologies available today that all leverage different defect discovery techniques to identify vulnerabilities in applications. DAST, SAST, IAST, SCA, and RASP are some of the more common types of technologies available today. Based on the business goals, objectives, and the software development culture, the appropriate tool (or combination of tools) needs to be implemented to automate and expedite detection of vulnerabilities as accurately and early as possible in the SDLC.

Taking a Strategic Approach to Application Security

In order to grow and improve, organizations need to have an objective way to measure their current state, and then work on defining a path forward. Leveraging the appropriate application security framework to benchmark the current state of the application security program allows organizations to use real data and drive their application security efforts more strategically towards realistic application security goals.

Standard frameworks also allow for re-measurements over time to objectively measure progress of the application security program and determine how effective the time, effort, and budget being put towards the application security program are.

As the application security capabilities mature, so does the amount and quality of data that is at the organization’s disposal. It’s important to ensure that the data collection is automated and proper application security metrics are captured to determine the effectiveness of different application security efforts, and also measure progress while being able to intelligently answer the appropriate questions from executive leadership and board members.

NetSPI’s Strategic Advisory Services

NetSPI offers a range of Strategic Advisory Services to help organizations in their application security journey.

Regardless of where you are in your application security goals and aspirations, NetSPI provides:

  • Application Security Benchmarking – Measure the current state of your application security program and understand how your organization compares to other similar organizations within the same business vertical.
  • Application Security Roadmap – Understand the organization’s application security goals and build a realistic roadmap with key timely milestones.
  • Application Security Metrics – Based on the organization’s application security program, understand what data is available for collection and automation, allowing for definition of metrics that allow the application security team to answer the appropriate questions to help drive their application security efforts.
[post_title] => Getting Started on Your Application Security Journey [post_excerpt] => In order for an organization to have a successful Application Security Program, there needs to be a centralized governing team that’s responsible for all efforts. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => getting-started-on-your-application-security-journey [to_ping] => [pinged] => [post_modified] => 2021-04-14 06:41:41 [post_modified_gmt] => 2021-04-14 06:41:41 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=19184 [menu_order] => 495 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [76] => WP_Post Object ( [ID] => 19230 [post_author] => 53 [post_date] => 2020-06-02 13:49:41 [post_date_gmt] => 2020-06-02 13:49:41 [post_content] =>
Watch Now

Your employees were probably working from home more and more anyway, but the COVID-19 situation has taken work from home to a whole new level for many companies. Have you really considered all the security implications of moving to a remote workforce model?

Chances are you and others are more focused on just making sure people can work effectively and are less focused on security. But at times of crisis – hackers are known to increase their efforts to take advantage of any weak links they can find in an organization’s infrastructure.

Host-based security represents a large surface of attack that continues to grow as employees become increasingly mobile and work from home more often. Join our webinar to make sure your vulnerability management program is covering the right bases to help mitigate some of the implicit risks associated with a remote workforce.

[wonderplugin_video iframe="https://youtu.be/YMmK74ilyew" lightbox=0 lightboxsize=1 lightboxwidth=1200 lightboxheight=674.999999999999916 autoopen=0 autoopendelay=0 autoclose=0 lightboxtitle="" lightboxgroup="" lightboxshownavigation=0 showimage="" lightboxoptions="" videowidth=1200 videoheight=674.999999999999916 keepaspectratio=1 autoplay=0 loop=0 videocss="position:relative;display:block;background-color:#000;overflow:hidden;max-width:100%;margin:0 auto;" playbutton="https://www.netspi.com/wp-content/plugins/wonderplugin-video-embed/engine/playvideo-64-64-0.png"]

[post_title] => Host-Based Security: Staying Secure While Your Employees Work from Home [post_excerpt] => Watch this on-demand webinar to make sure you are vulnerability management program is covering the right bases to help mitigate some of the implicit risks associated with a remote workforce. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => host-based-security-staying-secure-while-your-employees-work-from-home-2 [to_ping] => [pinged] => [post_modified] => 2023-09-01 07:12:05 [post_modified_gmt] => 2023-09-01 12:12:05 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=19230 [menu_order] => 73 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [77] => WP_Post Object ( [ID] => 19027 [post_author] => 65 [post_date] => 2020-06-02 07:00:12 [post_date_gmt] => 2020-06-02 07:00:12 [post_content] =>

Online Shopping Behavior and E-Commerce Transformation

By this point it’s clear that organizations and every individual has to make changes and adapt their day-to-day activities based on the weeks of lock-down that everyone has faced due to the Coronavirus.

All of a sudden, there’s a surge of people using mobile applications and online payments to order food, medicine, groceries, and other essential items through delivery. Due to social distancing requirements that are in place, the interactions between retailers and consumers have shifted drastically. We all know that people are heavily dependent on their mobile devices, even more so today than ever before. New studies show that “72.1% of consumers use mobile devices to help do their shopping” and there’s been a “34.9% increase year over year in share of consumers reporting online retail purchases.”

Businesses have been drastically impacted during the pandemic, and they’re also adapting to be able to conduct business online more than ever before. In these efforts, there’s been an increase in using online payments through the use of credit card transactions because they are preferred since they’re contact-less compared to cash transactions (which add a higher likelihood of the Coronavirus spreading through touch).

Given the increased use of online credit card payments, organizations need to ensure that they’re compliant with the Payment Card Industry (PCI) Data Security Standard – referred to as PCI DSS. The current version of the PCI-DSS is version 3.2.1 which was released in May 2018.

The Spirit of PCI DSS

PCI DSS is the global data security standard which all payment card brands have adopted for anyone that is processing payments using credit cards. As stated on their website, “the mission is to enhance global payment account data security by developing standards and supporting services that drive education, awareness, and effective implementation by stakeholders.”

Payment card data is highly sensitive, and in the case cardholder data is stolen or compromised, it’s quite a hassle to deal with that compromise. Attackers are constantly scanning the internet to find weaknesses in systems that store and process credit card information.

One well-known credit card breach was in late 2013 when a major retailer, had a major breach. This attack brought to the foreground the importance of ensuring proper segregation of systems that process credit card payments from other systems that may be on the same network.

PCI DSS wants to ensure that any entity that is storing or processing credit card information are following the minimum bar when it comes to protecting and segregating credit card information.

I am not going to go into detail about each and every PCI DSS requirement, but focus this post around how we at NetSPI help our clients with their journey towards PCI compliance.

NetSPI’s Role in Our Clients’ PCI Compliance Journey

First of all, it’s important to note that at the moment, NetSPI has made the business decision not be a PCI “Approved Scanning Vendor” (ASV) or a PCI “Qualified Security Assessor” (QSA). You can find the PCI definitions for these at the PCI Glossary. Given NetSPI’s business objectives, our focus is on deep dive, high quality technical offerings instead of having an audit focus that is required in order to be an ASV or a QSA.

That being said, we are extremely familiar with PCI DSS requirements and work with some of the largest banks and financial institutions to enable them to become PCI compliant (and meet other regulatory pressures) by helping them in their efforts to develop and maintain secure systems and applications, and regularly testing their security systems and processes.

We’ll go deeper into some of the specific PCI requirements that we work closely with our customers on, and describe our service offerings that are leveraged in order to satisfy the PCI requirements. In almost all cases, NetSPI’s service offerings go above and beyond the minimum PCI requirements in terms of technical depth, scope, and thoroughness.

PCI Requirements and Services Mapping

PCI at a high level provides a Security Standard that’s broken down as shown in the table below that’s been published in their PCI DSS version 3.2.1.

PCI DSS Requirement 6.3

Incorporate information security throughout the software-development lifecycle.

NetSPI Offerings Leveraged by Clients

NetSPI is an industry-recognized leader in providing high quality penetration testing offerings. NetSPI’s offerings around Web Application, Mobile, and Thick Client Penetration Testing services are leveraged by clients to not only satisfy PCI DSS Requirement 6.3, but also go beyond the PCI requirements as elaborated on in their requirement 6.5.

PCI DSS Requirement 6.5

Verify that processes are in place to protect applications from, at a minimum, the following vulnerabilities:

  • Injection Flaws
  • Buffer Overflows
  • Insecure Cryptographic Storage
  • Insecure Communications
  • Improper Error Handling
  • Cross-site Scripting
  • Cross-site Request Forgery
  • Broken Authentication and Session Management

NetSPI Offerings Leveraged by Clients

NetSPI’s Web Application, Mobile, and Thick Client Penetration Testing services go above and beyond looking for vulnerabilities defined in the list above. The list of vulnerabilities in this particular requirement is heavily driven by the OWASP Top Ten list of vulnerabilities. NetSPI’s service offerings around Application Security provides at a minimum OWASP Top Ten issues, but typically goes above and beyond the OWASP Top Ten list of vulnerabilities.

PCI DSS Requirement 6.6

For public-facing web applications, address new threats and vulnerabilities on an ongoing basis and ensure these applications are protected against known attacks by either of the following methods:

  • Reviewing public-facing web applications via manual or automated application vulnerability security assessment tools or methods, at least annually and after any changes
  • Installing an automated technical solution that detects and prevents web-based attacks (for example, a web application firewall) in front of public-facing web applications, to continually check all traffic.

NetSPI Offerings Leveraged by Clients

NetSPI’s Web Application Penetration Testing offerings are highly sought after by our clients. In particular, at NetSPI we work closely with seven of the top 10 banks in the U.S., and are actively delivering various levels of Web Application Penetration Testing engagements for them. Customers continue to work with us because of the higher level of quality that they see in the output of the work we deliver, which is typically credited to the use of our Penetration Testing as a Service delivery model which is enabled by our Resolve ™ platform.

PCI DSS Requirement 11.3

Implement a methodology for penetration testing:

  • A penetration test must be done every 12 months.
  • The penetration testing verifies that segmentation controls/methods are operational and effective, and isolate all out-of-scope systems from systems in the CDE.
  • Methodology includes network, server, and application layer testing.
  • Includes coverage for the entire CDE perimeter and critical systems. Includes testing from both inside and outside the network.

NetSPI Offerings Leveraged by Clients

NetSPI provides both Internal and External Network Penetration Testing services. When customers request an assessment that they want to leverage the results to satisfy PCI requirements, NetSPI provides customers with deliverables that are tailored to be provided to our client’s QSA to satisfy PCI requirements. In cases where the Cardholder Data Environment (CDE) is located in the Cloud, then NetSPI Cloud Penetration Testing goes above and beyond the minimum PCI Stanrdard’s testing requirements.

[post_title] => E-Commerce Trends During COVID-19 and Achieving PCI Compliance [post_excerpt] => By this point it’s clear that organizations and every individual has to make changes and adapt their day-to-day activities based on the weeks of lock-down [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => e-commerce-trends-during-covid-19-and-achieving-pci-compliance [to_ping] => [pinged] => [post_modified] => 2021-04-14 00:53:44 [post_modified_gmt] => 2021-04-14 00:53:44 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=19027 [menu_order] => 498 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [78] => WP_Post Object ( [ID] => 19011 [post_author] => 53 [post_date] => 2020-05-27 12:00:20 [post_date_gmt] => 2020-05-27 17:00:20 [post_content] =>

Each year, thousands of merger and acquisition (M&A) applications are approved. While M&As are growing in popularity, they aren’t risk free. According to West Monroe Partners, 40 percent of acquiring businesses discovered a high-risk security problem after an M&A was completed.

During this on-demand webinar, NetSPI Managing Director, Nabil Hannan will dive into critical vulnerability management considerations for your M&A activity, including:

  • The added layer of concern that comes with digital communications channels
  • Knowing which assets the acquirer is responsible for
  • Common M&A security red flags to look out for
  • How to obtain complete visibility into true risk exposure during an M&A
[post_title] => Vulnerability Management Best Practices for Mergers & Acquisitions [post_excerpt] => During this on-demand webinar, NetSPI Managing Director, Nabil Hannan, will dive into critical vulnerability management considerations for your M&A activity. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => webinars-vulnerability-management-best-practices-for-mergers-acquisitions-on-demand [to_ping] => [pinged] => [post_modified] => 2023-09-20 11:51:31 [post_modified_gmt] => 2023-09-20 16:51:31 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=19011 [menu_order] => 66 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [79] => WP_Post Object ( [ID] => 18835 [post_author] => 65 [post_date] => 2020-05-26 07:00:02 [post_date_gmt] => 2020-05-26 07:00:02 [post_content] =>

In a recent episode of Agent of Influence, I talked with Sean Curran, Senior Director in West Monroe Partners’ Technology Practice in Chicago. Curran specializes in cybersecurity and has over 20 years of business consulting and large-scale infrastructure experience across a range of industries and IT domains. He has been in the consulting space since 2004 and has provided risk management and strategic advice to many top-tier clients. Prior to consulting, Curran held multiple roles with National Australia Bank.

I wanted to share some of his insights in a blog post, but you can also listen to our interview here, on Spotify, Apple Music or wherever you listen to podcasts.

Cybersecurity Challenges of COVID-19

From Curran’s perspective, the COVID-19 pandemic has created a lot of challenges for organizations, many of which weren’t prepared for this situation. For example, some organizations primarily used desktop computers and now their employees are being asked to work from home without laptops, which is particularly hard at a time when hardware is difficult to source.

In addition, many companies had processes in place that they never tested – or their processes were too localized. While many companies are prepared to withstand a disaster in one location – for example, Florida in case of a hurricane – COVID-19 has affected the entire world, and organizations weren’t prepared to withstand that. The widespread global impact is why most companies’ disaster recovery and business continuity plans are failing.

The same thing goes for cyberattacks – they aren’t localized to a particular building or region, which is a challenge when most companies are only set up to lose a single building or a single data center.

As during other similar situations, we have seen an increase in cyberattacks during the COVID-19 pandemic, meaning organizations are not only having to implement their business continuity plans on a very broad scale, but also ensure cybersecurity during a heightened period of attacks.

What Makes an Organization Prone to a Security Breach?

People. Budget. And more. Sometimes it’s just that the organization is focused on the wrong things. Or they still believe that security is the security team’s responsibility – but it’s everyone's responsibility.

Curran has seen organizations with a small number of employees and low budgets do some really amazing things, showing it comes down to the capability of the individuals involved and how interested they are in security.

Organizations also need to strike a balance of protecting themselves from old attack methods while thinking about what the next attack method might be. Attackers are very good at figuring out what security teams are looking at, ignoring it, and moving on to the next delivery mechanism. At the same time, ignoring an old attack method isn’t necessarily the right approach either because we do see attackers re-using old schemes when people have moved on and forgotten about it – or combining several old attack methods into a new one.

Key Steps After a Breach

It’s critical to first understand the point at which your employee fell victim to the virus. The day the antivirus program alerts you that you have a virus isn't necessarily the day you got the virus.

Then you need to understand what the virus did when someone clicked on a link. Was it credential stealing or malware dropping?

To understand this, you can use toolboxes, which allow you to drop an email, an application or point to a website, and the toolbox will tell you what the virus did. Curran uses a tool called Joe’s Sandbox.

Once you understand what the virus did, you can determine next steps. For example, if it was credential stealing, you need to think about what those user credentials have access to. It’s critical to think holistically here – if the user gave away internal credentials, are they re-using those for personal banking platforms or a Human Resources Information System (HRIS)? People tend to think myopically around active directory, but Curran argues that we need to start thinking beyond that, especially as we start using cloud services.

Curran pointed out that social communication is happening on almost every platform, including Salesforce, Slack, and more. Everything has a social component to it, meaning also that there's a new delivery mechanism that attackers could start to use.

It’s critical for organizations to start thinking more holistically about how they prepare for and respond to a security breach. For many organizations, the COVID-19 pandemic has created a perfect storm of trying to implement business continuity plans that weren’t tested or up to the task, while also ensuring security during a heightened time of cyberattacks.

To listen to the full podcast, click here, or you can find Agent of Influence on Spotify, Apple Music, or wherever you listen to podcasts.

[post_title] => Why Organizations Should Think More Holistically About Preparing for and Responding to a Security Breach [post_excerpt] => In a recent episode of Agent of Influence, Nabil Hannan talked with Sean Curran, Senior Director in West Monroe Partners’ Technology Practice in Chicago [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => why-organizations-should-think-more-holistically-about-preparing-for-and-responding-to-a-security-breach [to_ping] => [pinged] => [post_modified] => 2021-11-15 11:22:43 [post_modified_gmt] => 2021-11-15 17:22:43 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=18835 [menu_order] => 500 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [80] => WP_Post Object ( [ID] => 18949 [post_author] => 65 [post_date] => 2020-05-13 07:00:25 [post_date_gmt] => 2020-05-13 07:00:25 [post_content] => On May 13, 2020, NetSPI Managing Director Nabil Hannan was featured in Credit Union Journal. As COVID-19 stay-at-home orders begin to lift, people who have the capability to do business from home are being encouraged to do so – and credit unions are no exception. Throughout the pandemic, organizations have had to put business disaster recovery (BDR) and business continuity plans (BCP) to the test – and in tandem, we’ve seen an increased emphasis on cybersecurity resiliency. Cybersecurity concerns have risen over the past couple of months as attackers continue to take advantage of the situation. Notably, the Zeus Sphinx banking trojan has returned, phishing attacks are up 350%, and the growing remote workforce has increased the use of potentially vulnerable technologies. Read the full article here. [post_title] => Credit Union Journal: Credit unions must step up cybersecurity during coronavirus [post_excerpt] => On May 13, 2020, NetSPI Managing Director Nabil Hannan was featured in Credit Union Journal. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => credit-union-journal-credit-unions-must-step-up-cybersecurity-during-coronavirus [to_ping] => [pinged] => [post_modified] => 2021-04-14 05:32:00 [post_modified_gmt] => 2021-04-14 05:32:00 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=18949 [menu_order] => 505 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [81] => WP_Post Object ( [ID] => 18644 [post_author] => 65 [post_date] => 2020-05-05 07:00:53 [post_date_gmt] => 2020-05-05 07:00:53 [post_content] =>

In a recent episode of Agent of Influence, I talked with Anubhav Kaul, Chief Medical Officer at Mattapan Community Health Center near Boston about not only some of the medical challenges they are facing during COVID-19, but also some of the software and security challenges. I wanted to share some of his insights in a blog post, but you can also listen to our interview here, on Spotify, Apple Music, or wherever you listen to podcasts.

COVID-19 Impacts on Telemedicine

Telemedicine has been available and used for multiple years and takes many different forms. For example, your doctor calling you on the phone and updating you on your results is telemedicine, receiving results through an electronic portal is telemedicine, or receiving feedback from your provider over a text message platform is telemedicine.

However, COVID-19 has drastically changed many doctors’ reliance on telemedicine to be the primary platform for how they provide care to their patients. According to Kaul, 90 percent of care being delivered by Mattapan is currently being delivered via telemedicine, including treatment of chronic conditions and urgent concerns. This has been made possible largely because the payers, both public and private, recognized the essential need of working in the current climate and have been able to help Mattapan receive reimbursement for providing telemedicine-based care.

The challenges Mattapan is currently experiencing are mostly around adoption of video and phone technology enabling remote treatment, since many clinicians have never had training on how to conduct effective telemedicine appointments.

In addition, while there is a tremendous amount of care that can be provided to patients without physically seeing them, the ability to be in the presence of patients and evaluate them in person is sometimes irreplaceable. In part to combat this challenge, Mattapan is leveraging medical devices to help manage certain conditions by patients from home, many of which automatically send data directly to doctors as it’s collected, including devices to measure blood pressure, glucose, weight, and more.

Kaul has also noticed that doctor-patient relationships, like so many relationships, are struggling with the lack of social connection, one of the most gratifying parts of providing care in person. With new technological developments, people are in general more distracted by their technology from the person right in front of them, including doctors when seeing patients. This may even be exacerbated as doctors leverage telemedicine to provide treatment and try to connect with patients over video and phone.

Staying Secure While Providing Remote Treatment

Providers have always had to focus on ensuring their communications with patients are secure and HIPAA compliant. Many clinicians want to provide the best care to their patients, which may sometimes mean giving out their cell phone numbers to patients or texting their patients to allow for accessibility of care. While they have every intention of doing the right thing for the patient, these are not necessarily considered safe modes of communicating with patients. They may be easy and accessible, but there is a level of risk when it comes to using unofficial platforms.

Using encrypted emails and online patient portals to send text messages are more secure options, even if they may not be as convenient for clinicians and patients.

Even outside of a pandemic situation, doctors and clinics will always face this security challenge that sometimes stands in conflict: trying to protect the patient's information and trying to protect the patient’s health by providing accessible care. And at the same time, not putting themselves or their clinic at risk when using unsecure modes of communication.

Mattapan uses Epic, an Electronic Medical Record (EMR) system that is integrated with Zoom technology to provide telemedicine via video and which allows patients to send pictures that are then uploaded into their patient portal and medical record. However, most visits will continue to be phone-based, primarily because of accessibility. While getting people to adopt new technology is always a challenge, Mattapan is working to increase video adoption to give all their patients the full functionality that that medium provides.

As Mattapan and clinics around the world leverage new technology and medical devices to treat patients remotely, they don’t necessarily know the security threats these technology solutions pose because they’ve never used them before, especially to this extent. While hospital IT and security teams are working to quickly test and set up these systems, there are risks associated.

[embed]https://youtu.be/qxLObXu9OCI[/embed]

As a clinician, Kaul is not necessarily constantly thinking about security risks, but more about the most accessible way to provide care to Mattapan’s patients. He sees this time as presenting an opportunity in the market for telemedicine software solutions and medical devices, so that doctors can continue to treat patients remotely – and even offer broader and improved treatments.

I’ve completed a fair number of security assessments for electronic medical devices and organizations that build hardware leveraged by doctors, and in my experience, doctors hate security because it interferes with their ability to conduct the job at hand. And in certain cases, the job at hand takes significantly higher priority than the potential security risks. For example, I don't think any doctor wants to have to enter a password before they can use a surgical device, because sometimes every second matters when it comes to the life of a patient.

Increasing Challenges of Patient Authentication

Another challenge when it comes to treating patients remotely is that of patient authentication. For example, you may be trying to monitor the blood pressure of your patient and you send them home with a device that’s continually sending data back, but how do you know that data is for your patient and not their child, sibling or someone else? Kaul acknowledges that there’s no easy way to authenticate this and it’s very easy for patients to cheat the system if they want to. These are challenges that need more focus and attention, of which they’re probably not getting right now because usability is taking a much higher priority than security.

Mattapan is focused on making sure any patient interactions they’re having are as reliable as possible, especially during this time. However, there are unique challenges. For example, sometimes they rely on talking with family members of people who can’t speak English, but maybe that family member doesn’t have full jurisdiction about their health care information and making decisions about their health care. These types of scenarios are opportunities for software and medical device companies to fill, but they may not be given the highest priority at this time.

Prescribing Prescriptions Virtually

Doctors have long been able to electronically prescribe most medications, but during the COVID-19 pandemic, they are also allowed to prescribe other medications that previously required a paper prescription, including controlled substance pain medications, certain psychiatric medications, and medications meant to treat addictions.

Being able to prescribe controlled substances electronically has made the process more accessible, especially in these current times, but it has also added security challenges. These challenges include making sure that the patient is properly identified, and they are receiving the prescriptions in a secure manner from the pharmacy. This level of accessibility is great for the patient and for the provider, but certain guidelines have been adopted to make sure this is done in a standardized fashion and to make sure that doctors are still connecting with these patients over the phone or video to see how their care is going, whether it's for pain management or treating them for addiction-based disorders.

During these uncertain times, doctors and hospitals are working to increase accessibility of care, but with accessibility comes the responsibility of making sure that parameters of appropriately treating patients are in place – along with the appropriate security measures.

To listen to the full podcast, click here, or you can find Agent of Influence on Spotify, Apple Music, or wherever you listen to podcasts.

[post_title] => Overcoming Challenges of COVID-19 with Telemedicine and New Technology Solutions [post_excerpt] => In a recent episode of Agent of Influence, Nabil Hannan talked with Anubhav Kaul, Chief Medical Officer at Mattapan Community Health Center near Boston [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => overcoming-challenges-of-covid-19-with-telemedicine-and-new-technology-solutions [to_ping] => [pinged] => [post_modified] => 2021-11-15 11:23:54 [post_modified_gmt] => 2021-11-15 17:23:54 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=18644 [menu_order] => 508 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [82] => WP_Post Object ( [ID] => 18559 [post_author] => 65 [post_date] => 2020-04-28 07:00:33 [post_date_gmt] => 2020-04-28 07:00:33 [post_content] =>

Physical Distancing, Yet Connecting Virtually

We find ourselves living the COVID-19 pandemic, abruptly switching to a work from home model with virtual meetings becoming the norm. By now, unless you’re living under a rock, you’ve heard about people using the Zoom videoconferencing service.

Given that everyone is trying to shelter-in-place/social distance to stop the spread of the virus, the popularity of using Zoom for video calls with groups of people has become extremely popular. Given Zoom’s popularity, there’s been a spike in the usage of Zoom’s video conferencing capabilities for both professional and personal meetings as an avenue for multiple people to have face-to-face video conference calls.

Unfortunately for Zoom, given the spike in usage, there’s also been a rise in the number of security vulnerabilities that keep getting reported with Zoom’s software. This has resulted in Zoom focusing all their development efforts to sort out security and privacy issues with their software.

Let’s explore some of the most popular vulnerabilities that are being discussed and see if we can make sense of them and the impact that they’re going to have if exploited.

Zoom Security Concerns

Zoombombing

This term has gained a lot of popularity. Derived from the term “photo-bombing,” zoombombing refers to when a person or multiple people join a zoom meeting that they’re not invited to and interrupt the discussion in some sort of vulgar manner (e.g. sharing obscene videos or photos in the meeting).

There are a few reasons why this is possible:

  • Meetings with Personal IDs – Zoom gives each user a personal ID, and you can use those IDs to quickly start a Zoom meeting at any given time. Because this ID is static and doesn’t change, zoombombers will keep iterating through all possible personal IDs until they get one that has an active meeting going on and they can join.
  • Meetings not requiring passwords – Users can set up meetings that don’t require a password, so if a zoombomber figures out the meeting ID, and there’s no password for that meeting, then they can join the meeting.
  • Lack of rate limiting – Zoom didn’t seem to have any type of rate limiting that would limit a machine from trying to access meetings.

There are steps a user can take to prevent their meetings from getting zoombombed, including:

  • Generate a new meeting ID for every meeting instead of using your static Personal ID
  • Make sure your meeting has a password (this is now enabled by default)
  • Enable the waiting room, so you have to give users permission to join before they can join the meeting

A Lack of True End-to-End (E2E) Encryption

Zoom does do E2E encryption, but it’s not doing the necessary encryption on the video conference piece. If you’re just using Zoom for social interactions and non-business meetings, and there’s nothing sensitive being shared, you probably don’t care about this too much.

There is encryption happening on the transport layer, but the encryption isn’t true E2E because Zoom can still decrypt your video information. What you basically have is the same level of protection as you would from having interactions with any website that you’re interacting with over HTTPS (with TLS).

When Zoom refers to E2E encryption, they mean all of their chat functionality is protected with true E2E encryption.

The reason you don’t see true E2E encryption for other platforms either is because it is really challenging to do. If you look at other services available like WhatsApp and FaceTime that allow group video calls, they limit how many participants you can have on a call at a time and don’t scale like Zoom does, where currently in the Zoom Gallery View you can see video from up to 49 participants at the same time.

Details around the use of encryption in Zoom can be found here.

China Being Able to Eavesdrop on Zoom Meetings

There’s been a lot of discussion around the issue that the Chinese government can force Zoom to hand over keys and as a result, they would be able to decrypt and view Zoom conversations because a few of the key servers that were used to generate encryption keys were located in China.

It needs to be noted that Zoom does have employees in China and runs development and research operations from there. That being said, most of Zoom’s key servers are based in the U.S. and if there are subpoenas from the FBI or other agencies, then Zoom would be required to hand over the keys (FISA warrant).

To summarize, if you’re just having regular video calls with your family and friends and there’s nothing that’s sensitive in nature being discussed on these calls, you probably shouldn’t worry too much about this issue.

Your Private Chat Conversation Isn’t Really Private

During a zoom meeting, when you send a private chat to someone in that meeting through Zoom, even though it cannot be exposed to anyone else immediately, but after the meeting if the host decides to download the transcript for that meeting, they will have access to both the chats that occurred in public and was sent to everyone in that meeting, along with any private chat message that may have been exchanged between two people privately.

This isn’t necessarily surprising that a host/admin would have access to all chat transcripts, but to sum this one up, if you’re chatting about something privately with someone during the meeting, don’t talk about something or say something that you wouldn’t want others to see or find out about.

Zoom Mimics the OS X Interface to Gain Additional Privileges

When Zoom is being installed, the app requires some additional privileges to complete the process, and so the app installer prompts the user for their OS X password. The message presented to the user is very deceiving since the messages says “System needs your privilege to change” while asking for the administrator credentials to be entered.

It’s challenging to determine whether this is truly malicious or not, because we can’t get into the heads of the developers to determine the true intent – but this trick is commonly used by malware to gain additional privileges.

This in itself isn’t as big of an issue since it happens to take place while a user is intentionally installing the software on their own machine. There are scenarios where attackers could somehow convince their target to install Zoom, and then leverage any vulnerabilities in Zoom itself to cause damage. While not impossible, it’s a little far-fetched.

Zoom Can Escalate Privileges to ‘Root’ on Mac OS

I’m not going to spend too much time going over the technical details of how this can be done, but you can find all the details you want on Patrick Wardle’s blog.

What I want to emphasize here is that this requires someone to already have access to your system to be able to exploit it. If an attacker has physical access to your computer, you have other things to worry about because they basically “own” your machine at this point – they can do whatever they want – time and skill permitting of course. And if there’s some malware that’s exploiting this Zoom issue on your computer, guess what, the malware is already on your machine, and probably has root (or close to root) access anyway because you somehow inadvertently gave it additional privileges to install itself on your machine.

Attackers Can Steal Your Windows Credentials Through the Windows Zoom Application

This is a vulnerability where an attacker can send you a chat message with a UNC link. The Windows app was converting the UNC links into clickable links just like they would with web links. So a link like “ \\ComputerName\Shared Folder\mysecretfile.txt ” would get converted to a web link like “www.netspi.com.” By clicking the link, a user would have their Windows credentials (the username and the password hash) sent to the attacker.

I want to re-iterate the importance of not clicking on links that you don’t trust, or don’t know where it’s really going. It’s important for everyone to be vigilant against clicking on untrusted links just like they would with email phishing. This is no different.

This issue has reportedly been fixed by Zoom on April 1, 2020, as long as you update to the latest version of the Zoom application on Windows.

Source: https://blog.zoom.us/wordpress/2020/04/01/a-message-to-our-users/

What Does This Mean for You?

There’s a lot to digest here, and given Zoom’s popularity in recent times, it’s not surprising that more and more issues are getting reported because researchers are focusing on these issues more, and attackers are trying to take advantage of any little issue that can be exploited on an app that a majority of the population may be using.

The bottom line on how you should use Zoom really depends on your use case. If you’re using it for informal, personal, social purposes and there’s nothing of sensitive nature that you’re worried about, Zoom will serve you just fine. On the other hand, if you need to have sensitive business-related discussions or need to use a communication channel to discuss something that’s top secret, then it’s probably best to avoid Zoom, and use known secure methods of communication that have been approved and vetted by your business/organization.

[post_title] => Zoom Vulnerabilities: Making Sense of it All [post_excerpt] => We find ourselves abruptly switching to a work from home model with virtual meetings becoming the norm on videoconferencing services, like Zoom [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => zoom-vulnerabilities-making-sense-of-it-all [to_ping] => [pinged] => [post_modified] => 2021-04-14 00:54:06 [post_modified_gmt] => 2021-04-14 00:54:06 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=18559 [menu_order] => 511 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [83] => WP_Post Object ( [ID] => 18374 [post_author] => 65 [post_date] => 2020-04-15 07:00:12 [post_date_gmt] => 2020-04-15 07:00:12 [post_content] => On April 15, 2020, NetSPI Managing Director Nabil Hannan was featured in BAI Banking Strategies. The mass relocation of financial services employees from the office to their couch, dining table or spare room to stop the spread of the deadly novel coronavirus is a significant data security concern, several industry experts tell BAI. But they add that it is a challenge that can be managed with the right tools, the right training and enduring vigilance. Read the full article here. [post_title] => BAI Banking Strategies: Work from home presents a data security challenge for banks [post_excerpt] => On April 15, 2020, NetSPI Managing Director Nabil Hannan was featured in BAI Banking Strategies. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => bai-banking-strategies-work-from-home-presents-a-data-security-challenge-for-banks [to_ping] => [pinged] => [post_modified] => 2021-04-14 05:32:10 [post_modified_gmt] => 2021-04-14 05:32:10 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=18374 [menu_order] => 515 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [84] => WP_Post Object ( [ID] => 18287 [post_author] => 65 [post_date] => 2020-04-14 07:00:40 [post_date_gmt] => 2020-04-14 07:00:40 [post_content] =>

In the inaugural episode of NetSPI’s podcast, Agent of Influence, Managing Director and podcast host, Nabil Hannan talked with Ming Chow, a professor of Cyber Security and Computer Science at Tufts University about the evolution of cyber security education and how to get started in the industry.

Below is an excerpt of their conversation. To listen to the full podcast, click here, or you can find Agent of Influence on Spotify, Apple Music, or wherever you listen to podcasts.

Nabil Hannan

What are your views and thoughts on how actual education in cyber security and computer science has evolved over the last couple of decades?

Ming Chow

I think one thing that is nice, which we didn't have, is this: ten or twenty years ago, if we wanted to learn Java, for example, or about databases, or SQL, you had to go buy a book from your local tech bookstore or we had to go to the library. That doesn't have to happen now. There's just so much information out there on the web.

I think it’s both a good and a bad thing. Now, with all this information readily available, it feels like that content and information is much more accessible. I don't care if you're rich or poor, it really leveled the playing field in terms of the accessibility and the availability of information.

At the same time, there is also the problem of information overload. I'll give you two good examples. Number one: I've had co-workers ask me, “What's the best book to use for python?” That question, back in the day when we had physical books was a lot easier to answer. Making a recommendation now is a lot harder. Do you want a physical book? Are you looking for a publisher? Are you looking for an indie publisher? Are you looking for a website? Are you looking for an electronic form? Now, there are just way too many options.

Now it's even worse when it comes to cyber security and information security. There are a lot of people trying to get into cyber security and a common question is how to get started. If you ask 10 experts that question, you’ll get 10 different answers. This is one of the reasons why, especially for newcomers, that it’s hard to understand where to get started. There are way too many options and too many avenues.

Nabil Hannan

Right, so people get confused by what's trustworthy and what's not, or what's useful versus what isn't.

Ming Chow

And, what makes this worse is social media because a lot of people in cyber security are on Twitter and there’s also a community on Facebook. This has both pros and cons, of course. You have community, which is great, but at the same time, there is just more information and more information overload.

But, there is one thing that hasn't changed in cyber security education – or lack thereof – and computer science curricula since 2014. I don't see much changed in computer science curricula at all. I still see a lot of students walking out of four years of computer science classes who don't know anything about basic security, not to mention about cross site scripting and SQL injection. Here we are in 2020 and there are still many senior developers who don’t know about these topics.

Nabil Hannan

Let's say you have a student who wants to become a cyber security professional or get into a career in cyber security. What's your view on making sure they have a strong foundation or strong basics of understanding of computer science? What do you tell them? And how do you emphasize the importance of knowing the basics correctly?

Ming Chow

Get the fundamentals right. Learn basic computer programming and understand the basics. It makes absolutely no sense to talk about cyber security if you don't have the fundamentals or technical underpinnings right. You must have the basic technical underpinnings first in order to understand cyber security. Because you see a lot of people talkabout cyber security – they talk and talk and talk – but half of the stuff they say makes no sense because they don't have the basic underpinnings.

That's why I tell brand new people, number one, get the fundamentals right. You must get those because you're going to look like a fool if you talk about cyber security, but you don't actually have any knowledge of the basic technical underpinning.

Nabil Hannan

The way I tell people, that is, it's important for you to know how software is actually built in order for you to learn or figure out how you're going to break that piece of software. So that's how I iterate the same thing. But yes, continue please.

Ming Chow

Number two is to educate yourself broadly. Let me explain why that's important. You want to have the technical underpinnings, but you also want to educate yourself broadly – take courses in calligraphy, psychology, political science, information warfare, nuclear proliferation, and others.

Educate yourself broadly, because cyber security is a very broad field. I think that's something that many people fail to understand. A lot of people, especially in business, think that cyber security is just targeted toward technology. A lot of people think cyber security is IT’s responsibility. But of course, that's not true, because things like legal and HR have huge implications for cyber security. You have to educate yourself broadly because sometimes the answer is not technical at all.

Nabil Hannan

I think some of the most successful people that I've seen in this space are usually very adaptable – they learn to adapt to different situations, different scenarios, different cultures, different environments. And, technology is always evolving and so are the actual security implications of the evolving technology. Some of the basics and foundations may still be similar, but the way to approach certain problems ends up being different. And the people who are most adaptable to those type of changing and evolving scenarios tend to be the most successful in cyber security, from what I've seen.

Ming Chow

I think it's a huge misnomer for any young person who is studying and trying to get into security. Cyber security is not about the 400-pound hacker in the basement. It's also not hunting down adversaries or just locking yourself in a room, isolating yourself in a cubicle, writing code that would actually launch attacks.

Nabil Hannan

So, you're saying it's not as glamorous as Hollywood makes it seem in their movies like Hackers and Swordfish?

Ming Chow

I think the most legit show is Mr. Robot because they actually vet out real security professionals for that show.

Now, I want to go back into something you said about the software engineering role. Probably one of the best ways to get into cyber security is to follow one of these avenues: software development, software engineering, help desk, network administration, or system administration. And the reason is because when you're in one of those positions, you will actually be on the front lines and see how things really work.

Nabil Hannan

Things in practice are so different than things in theory, right? So, that's what you really got to learn hands on.

To listen to the full podcast, click here, or you can find Agent of Influence on Spotify, Apple Music, or wherever you listen to podcasts.

[post_title] => The Evolution of Cyber Security Education and How to Break into the Industry [post_excerpt] => In the inaugural episode of NetSPI’s podcast, Agent of Influence, Managing Director and podcast host, Nabil Hannan talked with Ming Chow [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => the-evolution-of-cyber-security-education-and-how-to-break-into-the-industry [to_ping] => [pinged] => [post_modified] => 2021-11-15 11:25:59 [post_modified_gmt] => 2021-11-15 17:25:59 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=18287 [menu_order] => 516 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [85] => WP_Post Object ( [ID] => 17711 [post_author] => 65 [post_date] => 2020-04-08 07:00:51 [post_date_gmt] => 2020-04-08 07:00:51 [post_content] => On April 8, 2020, NetSPI Managing Director Nabil Hannan was featured in Credit Union Times. Given the hundreds of merger and acquisition applications approved each year by the NCUA, M&As remain an appealing strategy for growth. However, in today’s cyberworld, merging with another company also means adopting another company’s network infrastructure, software assets and all the security vulnerabilities that come with it. In fact, consulting firm West Monroe Partners reported that 40% of acquiring businesses discovered a high-risk security problem after an M&A was completed. A case in point: In the early 2000s, I was part of a team heavily involved during and after the merger of two large financial institutions. We quickly came to the realization that the entities had two completely different approaches to cybersecurity. One had a robust testing program revolving around penetration testing (or pentesting) and leveraged an industry standard framework to benchmark its software security initiative annually. The other did not do as much penetration testing but focused more on architecture and design level reviews as its security benchmarking activity. Trying to unify these divergent approaches quickly brought to the surface myriad vulnerabilities that required immediate remediation. However, the acquired entity didn’t have the business cycle or funding needed for the task, which created a backlog of several hundred thousand issues needing to be addressed. This caused delays in the M&A timing because terms and conditions had to be created. Both parties also had to agree to timelines within which the organizations would address identified vulnerabilities and the approach they would take to prioritize remediation activities accordingly. Read the full article here. [post_title] => Credit Union Times: Vulnerability Management Considerations for Credit Union M&As [post_excerpt] => On April 8, 2020, NetSPI Managing Director Nabil Hannan was featured in Credit Union Times. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => credit-union-times-vulnerability-management-considerations-for-credit-union-mas [to_ping] => [pinged] => [post_modified] => 2021-04-14 05:32:11 [post_modified_gmt] => 2021-04-14 05:32:11 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=17711 [menu_order] => 517 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [86] => WP_Post Object ( [ID] => 18106 [post_author] => 65 [post_date] => 2020-04-07 07:00:03 [post_date_gmt] => 2020-04-07 07:00:03 [post_content] =>

The Internet is a hacker’s playground. When a hacker is looking for targets to attack, they typically start with the weakest link they can find on the perimeter of a network – something they can easily exploit. Usually when they find a target that they can try to breach, if the level of effort becomes too high or the target is sufficiently protected, they simply move on to the next target.

The most common type of attackers are called “Script Kiddies.” These are typically inexperienced actors that simply use code/scripts posted online to replicate a hack or download and use software like Metasploit to try to run scans against systems to find something that breaks. This underscores the need for making sure that the perimeter of your network and what’s visible to the outside world is properly protected so that even inadvertent scanning by Script Kiddies or tools being ran against the network don’t end up causing issues.

Testing the External Network

In most cases, it seems like all the focus and energy goes to the network perimeter that is externally facing from an organization. Almost all the organizations we work with have a focus on, or automated scanners that are regularly testing the external facing network. This is an important starting point because usually vulnerabilities within the network are easily detected and there are many tools out there at the hackers’ disposal allowing them to easily discover vulnerabilities in the network. It’s very common for attackers to regularly scan the Internet network space to try to find vulnerabilities and determine what assets they are going to try to exploit first.

Organizations have processes or automations that regularly scan the external network looking for vulnerabilities. Depending on the industry an organization falls into, there are regulatory pressures that also require a regular cadence of security testing.

The Challenge of Getting an Inventory of All Web Assets

Large organizations that are rapidly creating and deploying software commonly struggle to have an up-to-date inventory of all web applications that are exposed to the Internet at any given time. This is due to the dynamic nature in which organizations work and have business drivers that require web applications to be regularly deployed and updated. One common example is organizations that heavily use web applications to support their business needs, particularly for marketing purposes where they deploy new micro-sites on a regular basis. Not only are new pages deployed regularly, but with today’s adoption of the DevOps culture and Continuous Integration (CI) / Continuous Deployment (CD) methodology being adopted by so many software engineering teams, almost all applications are regularly being updated with code changes.

With changes happening on the perimeter where web applications are exposed and updated all the time, organizations need to regularly scan their perimeter to discover what applications are truly exposed, as well as if they have any vulnerabilities that are easily visible on the outside to attackers that may be running scanning tools.

Typically, organizations do have governance and processes to perform regular testing of applications in non-production or production-like UAT environments, but many times, testing applications in production doesn’t happen. Although doing authenticated security testing in production may not always be feasible depending on the nature and business functionality of a web application, performing unauthenticated security scanning of applications using Dynamic Application Security Testing (DAST) tools can be done easily – after all, the hackers are going to be doing it anyways, so might as well proactively perform these scans and figure out ahead of time what will be visible to the hackers.

The Need for Unauthenticated DAST Scanning Against Web Applications in Production

Given these challenges, it’s important for organizations to seriously consider whether it makes sense to start performing unauthenticated DAST scanning against all their web applications in production on a periodic basis to ensure that vulnerabilities don’t make it through the SDLC to production.

At NetSPI, for our assessments we typically use multiple DAST scanning tools to perform assessments, leveraging both open source DAST scanners and commercial DAST scanning tools.

What Does NetSPI’s Assessment Data Tell Us?

Given our experience in web application assessments, we looked at data from the last 10,000 vulnerabilities identified from web application assessments, and the most common issues are Security Misconfiguration (28%) and Sensitive Data Exposure (23%). Full analysis of the data is in the graphs below.

Given how Security Misconfiguration is the most common vulnerability that we tend to find, it highlights how these misconfigurations could accidentally also be moved into production. This shows why it’s so important to periodically test all web applications in production for common vulnerabilities.

Key Takeaways

  1. Testing your external network is a good start and a common practice in most organizations.
  2. Keeping an up-to-date inventory of all web applications and Internet facing assets is challenging.
  3. Companies should periodically perform DAST assessments unauthenticated against all web applications in production.
  4. Data from thousands of web application assessments shows that Security Misconfiguration and Sensitive Data Exposure are the most common types of vulnerabilities found in web applications.
[post_title] => Through the Attacker's Lens: What Is Visible on Your Perimeter? [post_excerpt] => The Internet is a hacker’s playground. When a hacker is looking for targets to attack, they typically start with the weakest link they can find on the perimeter of a network [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => through-the-attackers-lens-what-is-visible-on-your-perimeter [to_ping] => [pinged] => [post_modified] => 2021-04-14 00:54:24 [post_modified_gmt] => 2021-04-14 00:54:24 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=18106 [menu_order] => 518 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [87] => WP_Post Object ( [ID] => 18001 [post_author] => 65 [post_date] => 2020-03-30 07:00:50 [post_date_gmt] => 2020-03-30 07:00:50 [post_content] =>

Pandemics Happen: You Can’t Predict a Crisis

A worldwide pandemic broke out, and your employer is asking you to work from home instead of coming into the office. Well, you’re not alone. This is the situation that many people have found themselves in during this Covid-19 pandemic.

Although it may seem like no big deal at first, working from home daily for an extended period of time is vastly different than going into the office every day. For some, the line of work-life balance gets even more blurred than before.

The hurdles of working from home tend to amplify in our current situation when people are trying to work from home and have their children at home (instead of at school) all day too. People try to make light of the situation they’re in by posting some really funny posts on social media like the one from Jason White:

This is truly the new normal for all of us, and there’s no certainty as to how much longer this pandemic is going to force people to work from home.

Luckily for us, today we are extremely well connected via the Internet and leveraging cloud-based software solutions/Software-as-a-Service (SaaS), Virtual Private Networks (VPN), Virtual Desktops Infrastructure (VDI), etc., makes it easy for some organizations to enable their workforce to work effectively from home.

This pandemic is also the first time many of these organizations are actually executing their Business Disaster Recovery (BDR) and Business Continuity Plans (BCP). Businesses are quickly learning from the challenges since these plans are very different when they’re being documented theoretically versus when they’re being executed in a real-time crisis.

Getting Comfortable With “The New Norm”

Let’s face it, as humans since the beginning of time, we’ve always had to adapt to different challenges. This is no different. This might be the new normal for a while, until we figure out how to get this pandemic under control.

I’ve been very fortunate at various parts of my career to have the experience of working from home or starting a consulting practice from scratch in a new geographic region – where at the beginning, there’s no office to work out of.

Here are a few things that have worked well for me and allowed me to work effectively from home, and during a time of crisis like a pandemic, how not to feel isolated.

1. Create a Dedicated Space for Work

It’s important to create a separate area for you to dedicate to work. Ideally you want a space where you can close the door and seclude yourself for taking phone calls, conference calls, video conferences, or just shutting out any distractions when you need to get some work done.

This is important as you can create a virtual boundary for when you’re working and when you’re not. Force yourself to leave this space when you take breaks (whether it be to get some coffee, go for a walk, grab lunch, etc.). This allows you to mimic some of the social norms that you’d have while at the office – like taking a bathroom break, walking to the kitchen to grab a coffee or going out for lunch with your coworkers – where you end up leaving your actual workspace multiple times during the day to let your mind take a break from work.

2. Get Your Technology Set Up Properly

Ergonomics is important, but so is getting actual equipment and connectivity that will allow you to be most effective while working from home. There are plenty of resources online discussing how to set up your workspace with proper ergonomics that fit your needs. I would like to focus on the technology side of things, where certain equipment can make your life significantly less stressful when working from home.

First, invest in a strong and reliable Internet connection. High speed Internet has become really affordable, and most organizations that require their workforce to work from home will usually subsidize some (if not all) of your Internet bill. I recommend getting a reliable and fast connection – this will pay dividends in the long run as you have more and more video conference calls and can start using your VOIP setup if your organization has one.

Second, for making phone calls, if I’m at my desk, I typically use my VOIP setup that NetSPI provides all their employees through Microsoft Teams. I have a dedicated work number where people can reach me, and I use it to make calls from my desk (and even sometimes from my smartphone if I’m somewhere with a spotty cell network but I have a strong Wi-Fi connection).

Third, make sure you get a big monitor/display if you can. You’re going to be hunkered down in a small space working, forcing yourself to work on a small laptop screen ends up being very stressful, especially today when all of us are multi-tasking, having an extra monitor is extremely helpful in reducing the amount of back-and-forth between applications. If an extra monitor isn’t viable, for Mac users, you may be able to use your iPad as an extra screen with Sidecar and have an application or window that you use most regularly on there. This will make it so you don’t have to constantly be switching windows. If you cannot have a multiple screen setup, you can still leverage your operating systems features like “Spaces” on a Mac or “Virtual Desktops” on a Windows machine to have multiple screens set up for different purposes (e.g. one screen for things you’re actively working on and a second screen for all communications, like Instant Messaging and email).

Here’s a view of my work-setup at home:

Screens and their usage from left to right:

  • I use my iPad Pro as an extra screen with Sidecar to always have my email on display. I like using a stand (Lamicall tablet stand) for the iPad to help raise it a little closer to the height of the other screens.
  • The main monitor (Dell U3818DW) I use for things I’m actively working on – usually things like document creation, web browsing, news feeds, taking notes, etc. – this is basically my active workspace.
  • My MacBook Pro is on a stand to bring it to an eye-level height for me, and I am usually running my virtual machines to perform scanning work, or security testing as I research new things and try to learn and keep up with new technologies as they evolve in the security space.

You’ll also notice that I have a gel-pad for my wrist that spans my keyboard and my mouse. This is because I did in the past start experiencing aches in my wrists and was worried about getting Carpal Tunnel Syndrome – this has helped tremendously to relieve a lot of stress in my hands and shoulders as well.

I also invested in getting myself a nice webcam (Logitech C920S HD Pro), with a privacy shutter. Currently I work remotely from home – even outside of Covid-19 – so I try to make sure that on all conference calls I have my video turned on. I find that it encourages others to turn on their video too making the virtual meetings feel more intimate and also makes you feel more connected with others on your team. Make sure to try and place the camera close to eye-level and at an angle where it’s facing you directly, if possible. Here are some tips on how to kick your video conferencing game up a notch and look more professional during your video calls. As we get more connected globally, and business today happens across all borders and oceans, video conferencing is going to start being more and more prominent. It’s time we start mastering video conferencing.

Here are some home-office setups from some of our other NetSPI colleagues:

3. Embrace Your New “Co-Workers”

All of a sudden, you’re co-habituating and working with some “creatures” that you would normally be away from while at the office. This may be your children, parents, significant other, cat, dog, duck, gecko, etc.

You need to accept that you’ll be “co-working” together and potentially sharing and intruding on each other’s space from time to time. The sooner you accept it, the less friction you’ll have, and you can plan to share the space peacefully. Be grateful for the extra time that you might have with your family, children or your pets – they are definitely excited to have more time with you.

With family members, make sure you have some way to signal them if you’re in the middle of working on something or if on a conference call and need to avoid distractions. For me, when the door to my home office is closed, my pets and my family members know not to bother me. When the door is open, they are welcome to share space as long as they’re not being overbearing or too distracting.

Pets can also be very therapeutic, especially at a time when you’re physical distancing from everyone and may start feeling isolated. Accept them into your space. Let them sleep at your feet (or on your lap for that lap dog or lap cat). Pet them from time to time and let them know that you appreciate the way they naturally relieve your stress and give you a sense of companionship and support that all humans crave.

At NetSPI we have created a Slack channel called #pets_of_netspi where we all share pictures and videos of our new fuzzy (and some non-fuzzy) “co-workers” that help us get through our day. Here’s just a preview of some of our #pets_of_netspi rockstars:

4. Virtual Lunch and Coffee Video Conferences

Just like you don’t always talk to your coworkers in the office about work, you need to continue harboring both a professional and personal relationship with your colleagues. We discussed how video conferences have become more prominent – not only that, but Microsoft is making Teams available for everyone to help in the face of the Covid-19 pandemic. With technical solutions being at our disposal today, take advantage of this, and schedule virtual lunch meetings or coffee meetings with colleagues. Take a break from work and discuss non-work related topics like you normally would during lunch or coffee.

5. Maintain a Routine

Even though none of your colleagues or boss would know if you didn’t  brush your teeth, stay in your pajamas all day, or even shower for days, it doesn’t mean you should start getting lazy about your regular day to day activities. Make sure you still maintain a regular routine. Things like going to bed and waking up at a consistent time, making  your bed, making yourself a healthy breakfast, taking your dog for a morning walk, exercising, meditating, etc. are all important factors that will make you more effective at your work.

Taking some breaks and setting aside some personal time is always healthy. Pick up meditation or take a quick walk around the neighborhood, text or call your loved ones and check in on how they are doing in this moment of crisis.

Another thing you may want to consider is picking up a new skill or hobby. Now that you have all that extra time from not commuting back and forth from the office, you have no excuse. Always wanted to be able to pick up a guitar and play some sick tunes? Well, now is your chance to start learning and practicing. Want to complete your New Years’ resolution of losing those 15 extra pounds you gained over the holidays? Well, maybe now it’s time to start some workout programs that you can do at home. Maybe you always wanted to better yourself with more education? I’ve actually been spending time taking some free Ivy League courses online on topics that I’ve always been interested in delving into deeper.

6. Organize Virtual Social Events with Your Company or Team

Little things can make a big difference in a team’s morale and also help build camaraderie and a sense of togetherness. Organizing a virtual happy hour or just a video conference call to check-in with everyone and hang out helps reduce the feeling of isolation that everyone is facing from physical distancing.

Last Friday evening right at the end of business hours, we organized a virtual video happy hour event at NetSPI. It was wonderful to see everyone join in, with their favorite beverages in hand, and enthusiasm to see and connect with rest of the team. Some did the video conference from their deck in their backyard, some took it from their home office setup, and one even joined from their kid’s bedroom where he was assembling furniture for his kids. The most amount of excitement actually came when pet owners started showing off their pets to each other, and the pets got to greet their new friends during the video conference. There were various topics that were discussed (completely non-work related) as everyone was facing similar circumstances. People even shared ideas they had for activities they were going to attempt over the weekend while trying to practice social distancing.

7. Over-Communicate

You’re not going to get the opportunity to run into your boss or coworker in the hallway and mention all the cool things you’re working on or the amazing meeting you had with a client or the really amazing discovery you made while doing an assessment – so make sure you’re over communicating and keeping everyone looped in. Send regular status updates to your managers and your teams. As a manager make sure you communicate regularly with your team members to make sure they’re all on track and try to understand if they’re facing any challenges early and try to help sooner rather than later. Keeping your team and your management updated regularly is key to making sure everyone’s on the same page. If you have customers that you interface with regularly, at times like this, the need for regular communication with customers is even more important since your business probably depends heavily on the customers’ current state of business.

Putting It All Together

Remember, you’re not in this situation alone. This working from home situation is turning out to be the new normal. Create a separate workspace dedicated for working. Make sure you get the right technology or accessories to be efficient and effective at your job. Embrace the fact that you’re going to be sharing space and spending more time with your family and pets at home while you’re working. Maintain a routine and stay active both mentally and physically. Set aside time for virtual social activities over video conference. Lastly, make sure you over-communicate and keep everyone looped in on necessary updates.

Hopefully you find these tips helpful as you try to adjust and get acclimated to working from home. If you have comments or other tips that have worked well for you, we would love to hear from you. Share them with us via Twitter by tweeting to @NetSPI with #WorkFromHome.

[post_title] => #WFH – Embracing the New Norm of Working From Home [post_excerpt] => A worldwide pandemic broke out, and your employer is asking you to work from home instead of coming into the office. Well, you’re not alone. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => wfh-embracing-the-new-norm-of-working-from-home [to_ping] => [pinged] => [post_modified] => 2021-04-14 00:55:25 [post_modified_gmt] => 2021-04-14 00:55:25 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=18001 [menu_order] => 521 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [88] => WP_Post Object ( [ID] => 17927 [post_author] => 65 [post_date] => 2020-03-25 07:00:58 [post_date_gmt] => 2020-03-25 07:00:58 [post_content] =>

Enabling Employees to Work from Home

All of a sudden, the world is facing a pandemic, and you are asking all your team members to work from home. Have you really considered all the security implications of moving to a remote workforce model? Chances are you and others are more focused on just making sure people can work effectively and are less focused on security. But at times of crisis – hackers are known to increase their efforts to take advantage of any weak links they can find in an organization’s infrastructure.

I travel significantly for work and have always been fortunate to have a good setup to be able to effectively work from anywhere with a reliable Internet connection. Not everyone is this fortunate, nor do many people have the experience of working remotely until now.

Managing Host-Based Security

Host-based security represents a large attack surface that is rapidly evolving as employees continue to become more mobile. Let’s discuss some key things organizations need to keep in mind as they migrate their teams to be effective while working from home.

1. Education/Employee Training

Before we start talking about technical controls that are important to consider, it’s necessary to start with the people factor. All the technical controls can easily be rendered useless if your team members are not properly trained on security. People need to be trained on how to securely access and manage the organization’s IT assets. With a rise in phishing attacks, it’s important that training not only cover secure ways to access different systems, but also how to avoid potential scams. Education is paramount in making sure that the organization is safe, and people in the organization are not making decisions that can have adverse effects from a security and privacy perspective.

2. Workstation Image Security

Most organizations deploy laptops using a standard set of system images and configurations. The problem with using standard images and configurations is that it becomes challenging to secure a workstation in the event that the laptop is lost, stolen, and/or compromised by a threat actor.

Here are some things to consider while trying to secure laptops and mobile devices:

  • Ensure all workstation images are configured based on a secure baseline.
  • Make sure the secure baselines are managed and updated based on business needs.
  • Track critical operating system and application patches, and ensure that they are applied.
  • Review application and management scripts for vulnerabilities and common attack patterns.
  • Enable full-disk encryption.
  • Perform regular security testing for each workstation image – typically organizations have multiple images that are in use – e.g. Windows 7, Windows 10, MacOS, etc.

3. Virtual Desktop Infrastructure (VDI) Security

Many organizations are moving away from physical laptops and are having their employees access applications and desktops through solutions leveraging VDIs. A common solution that is used widely is provided by Citrix. This allows employees to connect to an organization’s systems by remotely connecting to a virtual desktop server (from their personal computer or mobile device like a tablet or a smartphone) working directly from where the virtual desktop is hosted.

The following are some things that are important to consider in this type of a scenario:

  • Enforce multi-factor authentication (MFA) for all VDI portals and VPN access.
  • Ensure that the VDI is configured so that users cannot exfiltrate data through shared drives, the clipboard, email, websites, printer access, or any other common egress point.
  • Proper access control so users cannot easily pivot to critical internal resources like databases, application servers and domain controllers.
  • Lock down applications to prevent unauthorized access to the operating system resources and ensure that they have the least amount of privileges enabled to function properly.

4. Windows and Linux Sever Security

Unlike laptops/workstations and VDI portals which are directly exposed to the Internet, once an attacker can pivot into the environment, they usually find it trivial to identify Windows and Linux servers on the network to target. Server Operating Systems need to be configured, reviewed and hardened to reduce the attack surface. Vulnerability scanning by itself is usually not enough since it won’t expose vulnerabilities that could be used by authenticated attackers.

5. z/OS Mainframe Security

Windows and Linux servers are typically deployed using standard images, but z/OS mainframe tend to be more unique. In most environments, the mainframe configurations are not centrally managed as effectively as their Windows and Linux counterparts, which is why there are many inconsistencies in how mainframes are configured, leading to vulnerabilities that are often accessible to domain users.

It’s important to consider the following:

  • Check for missing critical application and operating system patches on a regular cadence.
  • Centrally manage and implement z/OS mainframe configurations based on a secure baseline.
  • Check if Active Directory domain users can log into z/OS mainframe applications or have direct access through SSH or other protocols.
  • Periodically perform penetration testing and security reviews of your deployed z/OS mainframes.
[post_title] => Keeping Your Organization Secure While Sending Your Employees to Work from Home [post_excerpt] => All of a sudden, the world is facing a pandemic, and you are asking all your team members to work from home. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => keeping-your-organization-secure-while-sending-your-employees-to-work-from-home [to_ping] => [pinged] => [post_modified] => 2021-04-14 00:55:37 [post_modified_gmt] => 2021-04-14 00:55:37 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=17927 [menu_order] => 524 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [89] => WP_Post Object ( [ID] => 17864 [post_author] => 65 [post_date] => 2020-03-24 07:00:47 [post_date_gmt] => 2020-03-24 07:00:47 [post_content] =>

Similarities Between Computer Viruses and Medical Viruses

There’s a reason why a computer virus is called a “virus” – they have many similarities with medical viruses (like COVID-19) that have a severe impact on your personal health. Just like Coronavirus can hide its symptoms and be contagious for long periods of time before causing any visible damage, a computer virus operates no different.

With how interconnected we are in today’s digital world, a computer virus (a “wormable” remote code execution vulnerability like EternalDarkness that affects Microsoft Server Message Block SMBv3) can start infecting and spreading in a matter of minutes. Typically, these types of virally distributing malware can also keep symptoms hidden, like a real virus, until the exploit payload is executed causing damage to computer systems.

Plenty of Phish in the Sea – Hackers Taking Advantage at a Time of Fear and Uncertainty

It seems like during any time of a disaster, phishing emails increase as well. Hackers take advantage of the human element, especially at a time of fear and uncertainty – like during the major pandemic that we are currently facing. Naturally, due to people’s fears and the seriousness of the pandemic, people are actively seeking as much information as they can to keep themselves, their families, and their loved ones safe. Preying on the human element, hackers are actively sending various types of phishing emails related to the Coronavirus. The volume of these phishing emails have reportedly increased significantly over the last couple of weeks. Some of the most common examples of these phishing emails are fake emails:

  • From a doctor with attachments that claim to have certain steps to avoid Coronavirus and encourages the recipient to share the attachment with family and friends.
  • From business partners with attachments that supposedly contain FAQs regarding the Coronavirus.
  • From company management, a link to a meeting recording discussing Coronavirus and how it’s being handled by the organization – with a malicious link embedded in the email instead of a recording.
  • From a fake employee claiming that an employee in the company has contracted the Coronavirus and attached is an advisory that all employees are encouraged to read.
  • From an organization that is giving away free equipment and protective gear (like masks) and needs the recipient to click on a link to confirm the delivery address.
  • From HR talking about how they are giving extra money to their employees available only during the next few hours.
  • From the IT service desk asking employees to follow a link and take a survey.
  • From the CDC with a malicious link about new confirmed cases in the recipient’s city.

An Ounce of Prevention Goes a Long Way

Taking a little bit of precaution, especially when it comes to getting infected by malware or having your personal data stolen, goes a long way. The headache and hassle of having to deal with a personal data breach or ransomware attack can easily be avoided, if people are vigilant and well informed about determining whether an email is a phishing email or not.

Common Symptoms of a Phishing Email

1. Requesting Private and Personal Information

Just like you don’t expect the prince of some African country to need your banking information to help them move money around, if you’re receiving an email about a pandemic or issue related to a topic focused on the public health, there’s absolutely no reason why they would need to ask you to click on a link to log in with your user credentials or personal details. Just by using some common sense, you should be able to determine that there’s something very phishy about that email. This should be a clear sign that the email is malicious.

2. Unnecessary Sense of Urgency or Fear Mongering

When it comes to sharing information about a pandemic or any crisis, any given agency or legitimate source of information would most likely use language that’s calm and credible. The subject of the email or the body will typically not be something that sounds extra alarming. In the case that the email is actually necessary to convey an urgent message, it won’t require the recipient to click on a link or require the recipient to open an attachment to get the information. Instead a legitimate email would contain the relevant information in the email body itself.

3. Sender’s Email Address is Unfamiliar or Suspicious

Many phishing emails claim to come from organizations that work in an official capacity during the time of the crisis (e.g. World Health Organization or Center for Disease Control). Emails claiming to be from these organizations with multiple attachments or links to additional resources and information regarding the crisis at hand but coming from email addresses ending in @hotmail.com or @aol.com makes absolutely no sense. Hopefully these will be caught by your email spam filter. Unfortunately, some do slip by those filters, and it should be very clear to you that these emails are clearly phishing attempts.

4. Companies Will Usually Use Your Name to Greet You in Emails

Most companies or organizations where you might be a customer, or your doctor’s office for example, will typically have access to some basic information like your name. When they send out communications to you, they will address you with your name instead of a generic salutation like “Dear Client” or “Dear Subscriber.” There are also many cases where hackers will just avoid salutations, especially if they are sending emails offering special deals or requesting the recipient to click on links to go somewhere to potentially get something for free or win something.

5. Poor Spelling and Grammar

Criminals on the internet or fake royal family members from different continents don’t necessarily have the best education, and in many cases, the language in which they are sending out phishing emails may not be their primary language. Therefore, it’s very common that phishing emails will be riddled with spelling errors or poor grammar. Finding oddly structured sentences, weird capitalizations, or just the usage of a completely wrong word or phrase are clear signs of phishing emails.

6. Low Resolution Graphics in Emails

Cybercriminals will often copy and paste graphics for logos in emails from different parts of the Internet. An email claiming to be from the CDC with information about the Coronavirus, but the logo looks a little fuzzy, or tiny, should be a clear red flag that the email is malicious or fake – it’s a clear sign that the sender of the email doesn’t work for the organization they are claiming to be from.

[post_title] => Staying Safe Online During the COVID-19 Pandemic [post_excerpt] => There’s a reason why a computer virus is called a “virus” – they have many similarities with medical viruses (like COVID-19) that have a severe impact on your personal health. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => staying-safe-online-during-the-covid-19-pandemic [to_ping] => [pinged] => [post_modified] => 2021-04-14 00:55:45 [post_modified_gmt] => 2021-04-14 00:55:45 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=17864 [menu_order] => 527 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [90] => WP_Post Object ( [ID] => 16611 [post_author] => 65 [post_date] => 2020-02-18 07:00:17 [post_date_gmt] => 2020-02-18 07:00:17 [post_content] =>

It is very common to hear people make blanket statements like “WhatsApp is secure,” but they rarely truly understand the actual security controls that WhatsApp is providing. In fact, this notion of being “secure” is one of the main reasons why WhatsApp gained so much popularity and built such a big user base.

In today’s world where everything is on the Internet, people tend to crave some privacy, especially when they are communicating with other people and sharing personal conversations, and the fact that WhatsApp offers a secure communication channel where the messages between users are fully encrypted to the point where the company/app that is providing the service cannot see the messages between their users makes people feel a false sense of security when using WhatsApp.

What “Security” is WhatsApp Really Providing?

Let’s first make sure we understand what security control WhatsApp is claiming it provides. WhatsApp uses the Signal protocol. The encryption scheme is simply asymmetric encryption of messages between the users, and the transmission of the encrypted messages are facilitated by a server provided by WhatsApp.

So, the way the message is protected while in transit from the sender to the intended recipient is secure.

What Other Aspects of Security Do People Need to be Mindful Of?

When it comes to security, there’s a lot more involved than just securing the data while it’s in transit. If securing applications were as simple as securing the communication channel, then websites wouldn’t have any vulnerabilities in them once they had implemented SSL, but we know that is not the case. So why would it be any different for WhatsApp, or any other mobile app for that matter?

Just because the communication channel is secure, doesn’t mean that the rest of the application is secure too. What people tend to forget is that the content of the messages that they’re receiving may still be malicious and have a security impact based on the user’s behavior.

Phishing Attacks

Let’s say a user is sent a phishing link, and the user clicks on it to see where it takes them – they will fall victim to the attack just like they would have if they had received the same link via email or any other method. Just like people are told never to click on a link from an email – especially if it’s from someone they don’t know or trust – the same rule applies here.

Malware

Malware is everywhere on the internet, and being able to identify and avoid opening infected files is a common challenge. Just like malware can be downloaded from web-browsing or from opening email attachments, similarly, opening files that may be infected that were received by a messaging app has the same consequences. There are many stories on the news today about how people are affected because they opened a video clip, audio file, etc. and were infected with malware.

The App Itself

The app that you are using, may itself be vulnerable too and allow attackers to remotely execute code on a user’s device. WhatsApp had a buffer overflow vulnerability that allowed attackers to easily execute code on WhatsApp users’ devices. Details of the vulnerability itself can be found on the CVE-2019-11931 page. Almost all users of WhatsApp on Android, iOS, and Windows were affected. This wasn’t the only vulnerability found on WhatsApp, but attackers were able to inject spyware on to phones by exploiting a zero-day vulnerability. The most damaging part of this attack was that it did not require any action to be taken by the user that was being infected. Read more in this article by the Financial Times.

Other than WhatsApp, there are also cases where the app itself was created for secure communications but was designed incorrectly and ended up all over the news. The most recent example that comes to mind is when the French government launched a new message app for their state employees only, but the account sign-up process was flawed, and allowed anyone to sign up and message using the system. Details of the issue can be found here.

Why Should You Care?

People need to understand the consequences of using apps for communication purposes, especially when they may be using these apps for business. Organizations will typically have contracts with service providers like Slack, Microsoft Teams, etc. to have official channels of communication. This allows the organization to securely manage their employee’s communications, and ensure that sensitive information stays secured correctly, both in transit and at rest. In addition, in the event of lost devices, these services allow organizations to remotely delete any sensitive data that may have been stored on the devices themselves.

An example of where there’s serious concern around public officials using WhatsApp for official communications was raised when it was discovered that Jared Kushner may have been using WhatsApp for his official communications. Read more about the concerns here.

Using proper communication channels is very critical when conducting business, given the sensitive nature of almost all communication and data that enables running a successful business.

[post_title] => Why Do People Confuse “End-to-End Encryption” with “Security”? [post_excerpt] => It is very common to hear people make blanket statements like “WhatsApp is secure,” but they rarely understand the actual security controls [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => why-do-people-confuse-end-to-end-encryption-with-security [to_ping] => [pinged] => [post_modified] => 2021-04-14 00:56:08 [post_modified_gmt] => 2021-04-14 00:56:08 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?p=16611 [menu_order] => 537 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) ) [post_count] => 91 [current_post] => -1 [before_loop] => 1 [in_the_loop] => [post] => WP_Post Object ( [ID] => 31564 [post_author] => 53 [post_date] => 2024-02-20 09:59:20 [post_date_gmt] => 2024-02-20 15:59:20 [post_content] =>
Watch Now

Overview

Incorporating Artificial Intelligence (AI) into your business or developing your own machine learning (ML) models can be exciting! Whether you are purchasing out-of-the-box AI solutions or developing your own Large Language Models (LLMs), ensuring a secure foundation from the start is paramount — and not for the faint of heart.  

Looking for guidance on how to safely adopt generative AI? Look no further. There’s no better guiding light than other security leaders that have already experienced the process — or are going through it as we speak.  

NetSPI Field CISO, Nabil Hannan, welcomed two AI security leaders for a discussion on what they’ve learned throughout their experiences implementing Generative AI in their companies. Chris Schneider, Senior Staff Security Engineer at Google, and Tim Schulz, Distinguished Engineer, AI Red Team at Verizon, shared their perspectives on cybersecurity considerations companies should address before integrating AI into their systems and proactive measures can organizations take to avoid some of the most common cybersecurity pitfalls teams face. 

Access the on-demand webinar to hear their discussion on:  

  • Cybersecurity questions to ask before starting your AI journey  
  • Common pitfalls and challenges you can avoid 
  • Stories from security leaders on the top lessons they’ve learned   
  • Security testing approaches for AI-based systems  
  • And more! 

Key Highlights 

03:27 - AI as a misnomer 
12:22 – What to consider before implementing AI 
17:51 – Aligning AI initiatives with cybersecurity goals 
10:41 - Perspectives on community guidance 
24:35 - Cybersecurity pitfalls with Generative AI 
34:51 - Testing AI-based systems vs traditional software 
41:50 - Security testing for AI-based systems 
47:58 – Lessons learned from implementing AI 

Artificial Intelligence can be a misnomer because it implies that there’s a form of sentience behind the technology. In most cases when talking about AI, we’re talking about technology that digests large amounts of data and gives an output quickly. Can you share your perspective on the technology and how it’s named?  

Tim: Tim explains that generative AI has influenced the discourse on the essence of artificial intelligence, sparking debates over terminology. The widespread familiarity with AI, thanks to its portrayal in Hollywood and elsewhere, has led to diverse interpretations. However, he notes the existing definition fails to accommodate the nuanced discussions necessitated by technological advancements. This discrepancy poses a significant challenge. While the term "AI" is easily recognizable to the general public, the field's rapid evolution demands a reassessment of foundational definitions. Expert opinions vary, which is why discussions like these are constructive because it’s better to have diverse perspectives rather than categorizing any particular viewpoint as unpopular. 

Chris: Chris makes a case for AI being a term more widely recognized by the public compared to machine learning. The historical marketing associated with AI makes it more familiar to people and increases its appeal. However, he cautions the influence of popular media may distort factual aspects and contribute to exaggerated claims, often made by celebrities. As a scientist, he advocates for a cautious approach, emphasizing the importance of basing discussions on demonstrated capabilities and evidence from past experiences. Differing opinions can be valid if they are not sensational, such as concerns about a robot uprising, which is divergent from the field's focus on probabilistic forecasting and observed behaviors. AI is a process involving memorization, repetition, and probabilistic synthesis rather than independent intelligence or foresight. 

What are some aspects to consider before organizations start their journey to leverage AI-based technologies? Are there common pitfalls that organizations run into? 

Tim: Tim believes it’s important to assess available resources for AI adoption. AI isn’t a simplistic, plug-and-play solution. Rather it has significant infrastructure and engineering efforts necessary for seamless integration. The complexity results in a vital need to dedicate resources and adopt a comprehensive approach. Moreover, AI literacy plays a crucial role in facilitating effective communication and decision-making.  

Tim cautions against the risk of being outmaneuvered in discussions by vendors and advocates for seeking partnerships or trusted advisors to bridge knowledge gaps. The industry needs to embrace continuous learning and adaptation in response to evolving regulations and the dynamic nature of AI technology. Outsourcing can be a viable option to streamline operations for those reluctant to commit to ongoing maintenance and operational efforts. 

Are there ways organizations can ensure their AI initiatives align with their cybersecurity goals and protocols? 

Chris: Speaking as a prospective employee at Google, but not officially on behalf of Google, Chris explains one of the ways he approaches this is to use the Android AppSec Knowledgebase within Android Studio. This tool provides developers with real-time alerts regarding common errors or security risks, often accompanied by quick fixes. It’s updated with ongoing efforts to expand its functionality to encompass machine learning implementations, aligning with Google's Secure AI Framework (SAIF). The framework offers guidelines and controls to address security concerns associated with ML technologies, although it may not cover all emerging issues, prompting ongoing research and development. Chris emphasizes the adaptability of these controls to suit different organizational needs and highlights their open-source nature, allowing individuals to apply custom logic. He mentions drawing inspiration from existing literature and industry feedback, aiming to contribute positively to the community, while acknowledging the learning curve and the complexity involved. 

Do you have any perspectives on the community guidance that’s being generated? Anything you’re hoping to see in the future?  

Tim: Tim notes a significant challenge in the AI domain is the gap between widespread knowledge and expert-driven understanding. Despite the rapid advancements in AI, Tim observes a lack of comprehensive knowledge across organizations due to the sheer volume of developments.  

Community efforts have had a positive impact on sharing knowledge so far, but challenges remain in discerning quality information amidst the abundance of resources. Major tech companies like Google, Meta, and Microsoft have contributed by releasing tools and addressing AI security concerns, facilitated by recent executive orders. However, the absence of a common toolset for testing models remains a challenge. Tim commends the efforts of large players in the industry to democratize expertise but acknowledges the ongoing barrier posed by the need for specialized knowledge. Broadening discussions beyond model deployment is important to address emerging complexities in AI. 

What have you seen as some of the most common cybersecurity pitfalls that organizations have encountered when they implement AI technologies? Do you have any recommendations to avoid those? 

Tim: Tim says it’s inevitable that Generative AI will permeate organizations in various capacities, requiring heightened security measures. AI literacy is essential in understanding and safeguarding AI systems, which differs significantly from conventional web application protection.  

Notably, crafting incident response plans for AI incidents poses unique challenges, given the distinct log sources and visibility gaps inherent in AI systems. While efforts to detect issues like data poisoning are underway, they remain primarily in the research phase. Explainable AI and AI transparency is incredibly important in enhancing visibility for security teams.  

Distinguishing between regular incident response and AI incident response processes is crucial, potentially involving different teams and protocols. Dynamics are shifting within data science teams, now grappling with newfound attention and security concerns due to Generative AI. Bridging the gap between data science and cybersecurity teams requires fostering collaboration and adapting to evolving processes. Legal considerations also come into play, as compliance requirements for AI systems necessitate legal counsel involvement in decision-making processes.  

These ongoing discussions reflect the dynamic nature of AI security and underscore the need for continual adaptation and collaboration among stakeholders. The field is developing rapidly with new advancements emerging often on a daily, weekly, or even hourly basis. Drawing from personal experience, Tim emphasizes the unprecedented speed at which research transitions into practical applications and proof-of-concepts (POCs), ultimately integrating into products. This remarkable acceleration from research to productization represents an unparalleled advancement in technology maturity timelines. 

Chris: The concept of "adopt and adapt" is helpful here, noting both traditional and emerging issues with code execution. Machine learning introduces unintentional variants in input and output, posing challenges for software developers. A modified approach for machine learning has multiple stages, including pre-training and post-deployment sets. While traditional infrastructure controls may suffice, addressing non-infrastructure controls, particularly on devices, proves more challenging due to physical possession advantages. Hybrid models, such as those seen in the gaming industry, offer a viable approach, particularly for mitigating risks like piracy. He highlights the need for robust assurances in machine learning usage, especially concerning compliance and ethical considerations. 

Traditional software testing paradigms may not apply to AI-based systems that are non-deterministic. What makes testing AI-based systems unique compared to traditional software?  

Chris: Considering security aspects, the focus is on achieving security parity with current controls. However, addressing emerging threats or new capabilities in machine learning poses challenges. If existing controls prove inadequate for these scenarios, alternative approaches must be explored. For instance, the synthesis of identity presents significant concerns, as advancements in technology allow for sophisticated audio synthesis with minimal sample data requirements. This underscores the need for proactive measures to address evolving security risks. 

In security, the focus is on achieving security parity with current controls while addressing emerging threats or new capabilities in machine learning. For instance, the synthesis of identity presents significant concerns, as advancements in technology enable sophisticated audio and video synthesis, allowing for impersonation and potentially fraudulent activities. Preventing such misuse is a pressing concern, with efforts aimed at developing semantic and provable solutions to combat these challenges.  

Additionally, there's a distinction between stochastic and non-stochastic software, with an increasing emphasis on the collection of vast amounts of data without strict domain and range boundaries. This shift challenges traditional security principles, particularly the importance of authenticating data before processing it, as emphasized by Moxie Marlinspike's "Doom principle."  

Despite the widespread acceptance of indiscriminate data ingestion, there's growing recognition of the risks associated with it, such as prompt injection and astroturfing. Testing the security of systems against inconsistent behaviors and untrusted data sources has always been challenging, with approaches like utility functions proposed to address these complexities. Finding the right balance between control and innovation remains a central dilemma, with both excessive control and insufficient oversight posing risks to the integrity and reliability of systems. 

From a Red Teaming perspective, what measures should organizations take to ensure comprehensive security testing for AI-based systems? What tips or tricks have been effective in your experience that you wish you had known earlier? 

Tim: Tim explains that one of the aspects organizations need to consider is the testing phase, especially during deployment of AI-based systems like web applications integrated with language models. Understanding the intended behavior is crucial, and simulating user interactions helps in documenting various use cases accurately. Cost is another significant aspect to evaluate, as API usage can incur charges based on request/response rates. Red teaming or penetration testing should explore context length expansion tactics to avoid unforeseen financial burdens, especially when manipulating parameters to change response lengths.  

Efficient resource utilization is paramount, considering that most organizations won't deploy or train massive models due to cost constraints. Therefore, managing expenses and implementing guardrails for API usage becomes imperative. Additionally, safeguarding brand reputation is crucial, particularly for public-facing platforms, where Generative AI content could potentially lead to negative publicity if misused. Thus, a comprehensive approach to security and Red Teaming in AI systems involves addressing not only technical controls but also considering broader implications and partnering with responsible AI teams to mitigate risks effectively. 

If you could go back in time and share one lesson with your younger self that would have helped on your AI journey, what would it be? 

Chris: Synthesizing content can offer benefits, yet it entails inherent trade-offs. The ability to produce unique interactions correlates with the tolerance for risk that the business is willing to accept. This aspect is quantified by a term known as "temperature" in business jargon. Conversely, if the generated content pertains to sensitive information like payment details, it can present challenges that need careful consideration before implementation. Miguel Rodriguez's suggestion regarding pre- and post-training, as well as pre- and post-deployment phases, serves as an excellent starting point. Additionally, augmenting these phases with considerations for networking, hardware, operating systems, and application context helps fortify the threat model review process. 

Tim: Similar to what Chris mentioned, sending specific resources on honing in on lessons about neural networks could be beneficial. Overall, the key is to continue using these systems. Besides understanding the theory, interacting with the systems and trying different prompts is crucial. Experimenting with advertised hacks and cheats found online can provide insights into their effectiveness. Diversity of thought is important as it offers various approaches to exploring these systems. Therefore, focusing on experimentation and continual learning is essential for gaining knowledge in this field. 

Hear the full discussion between Nabil, Chris, and Tim by requesting the on-demand webinar using the form above or continue your AI security learning by accessing our eBook, “The CISO’s Guide to Securing AI/ML Models.” 

[wonderplugin_video iframe="https://youtu.be/LC9E44mDJEY" lightbox=0 lightboxsize=1 lightboxwidth=1200 lightboxheight=674.999999999999916 autoopen=0 autoopendelay=0 autoclose=0 lightboxtitle="" lightboxgroup="" lightboxshownavigation=0 showimage="" lightboxoptions="" videowidth=1200 videoheight=674.999999999999916 keepaspectratio=1 autoplay=0 loop=0 videocss="position:relative;display:block;background-color:#000;overflow:hidden;max-width:100%;margin:0 auto;" playbutton="https://www.netspi.com/wp-content/plugins/wonderplugin-video-embed/engine/playvideo-64-64-0.png"]

[post_title] => Hindsight’s 20/20: What Security Leaders Wish They Knew Before Implementing Generative AI  [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => what-to-know-before-implementing-generative-ai [to_ping] => [pinged] => [post_modified] => 2024-03-27 13:40:01 [post_modified_gmt] => 2024-03-27 18:40:01 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=31564 [menu_order] => 3 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [comment_count] => 0 [current_comment] => -1 [found_posts] => 91 [max_num_pages] => 0 [max_num_comment_pages] => 0 [is_single] => [is_preview] => [is_page] => [is_archive] => [is_date] => [is_year] => [is_month] => [is_day] => [is_time] => [is_author] => [is_category] => [is_tag] => [is_tax] => [is_search] => [is_feed] => [is_comment_feed] => [is_trackback] => [is_home] => 1 [is_privacy_policy] => [is_404] => [is_embed] => [is_paged] => [is_admin] => [is_attachment] => [is_singular] => [is_robots] => [is_favicon] => [is_posts_page] => [is_post_type_archive] => [query_vars_hash:WP_Query:private] => 7271a63527e93476d9e4443ab69037d1 [query_vars_changed:WP_Query:private] => [thumbnails_cached] => [allow_query_attachment_by_filename:protected] => [stopwords:WP_Query:private] => [compat_fields:WP_Query:private] => Array ( [0] => query_vars_hash [1] => query_vars_changed ) [compat_methods:WP_Query:private] => Array ( [0] => init_query_flags [1] => parse_tax_query ) )
Innovation & Cyber Resiliency
Nabil Hannan
NetSPI LinkedIn Live: HTTP/2 Rapid Reset
Nabil Hannan
The Adoption of Emerging AppSec Technology
Nabil Hannan
Getting Started on Application Security
Nabil Hannan
Extreme Makeover AppSec Edition
Nabil Hannan

Discover how the NetSPI BAS solution helps organizations validate the efficacy of existing security controls and understand their Security Posture and Readiness.

X