Kurtis Shelton

Kurtis is an AI/ML Security Engineer who helps clients identify and mitigate security threats in their machine learning infrastructure. As the architect of NetSPI's AI/ML service line, he now leads the division and manages its operations. His work includes performing security assessments and penetration tests, analyzing system vulnerabilities, and recommending remediation strategies. He is also an active researcher in adversarial machine learning.
More by Kurtis Shelton
Ask These 5 AI Cybersecurity Questions for a More Secure Approach to Adversarial Machine Learning (February 21, 2024)

Artificial Intelligence (AI) and Machine Learning (ML) present limitless possibilities for enhancing business processes, but they also expand the potential for malicious actors to exploit security risks. Like many technologies that came before it, AI is advancing faster than security standards can keep up with. That’s why we guide security leaders to go a step further by taking an adversarial lens to their company’s AI and ML implementations. 

These five questions will kickstart any AI journey with security in mind from the start. For a comprehensive view of security in ML models, access our white paper, “The CISO's Guide to Securing AI/ML Models.”

5 Questions to Ask for Better AI Security

  1. What is the business use-case of the model?
    Clearly defining the model's intended purpose helps in identifying potential threat vectors. Will it be deployed in sensitive environments, such as healthcare or finance? Understanding the use-case allows for tailored defensive strategies against adversarial attacks. 
  2. What is the target function or objective of the model?
    Understanding what the model aims to achieve, whether it's classification, regression, or another task, can help in identifying possible adversarial manipulations. For instance, will the model be vulnerable to attacks that attempt to shift its predictions just slightly or those that aim for more drastic misclassifications? 
  3. What is the nature of the training data, and are there potential blind spots?
    Consider potential biases or imbalances in the training data that adversaries might exploit. Do you have a comprehensive dataset, or are there underrepresented classes or features that could be manipulated by attackers?
  4. How transparent is the model architecture?
    Will the architecture details be publicly available or proprietary? Fully transparent models might be more susceptible to white-box adversarial attacks where the attacker has full knowledge of the model. On the other hand, keeping it a secret could lead to security through obscurity, which might not be a sustainable defense. 
  5. How will the model be evaluated for robustness?
    Before deployment, it's crucial to have an evaluation plan in place. Will the model be tested against known adversarial attack techniques? What tools or benchmarks will be used to measure the model's resilience? Having a clear evaluation plan ensures that defenses are systematically checked and optimized. A minimal code sketch of one such check follows this list.
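
As a concrete starting point for that kind of evaluation, the sketch below estimates how much a classifier's accuracy drops under the Fast Gradient Sign Method (FGSM), one of the simplest and most widely known adversarial attack techniques. It is a minimal, hypothetical example rather than a description of NetSPI tooling: the PyTorch `model`, the `test_loader` (assumed to yield inputs scaled to [0, 1]), and the perturbation budget `epsilon` are placeholders you would swap for your own.

```python
# Hypothetical sketch: probing a trained PyTorch classifier with FGSM
# perturbations to compare clean accuracy against adversarial accuracy.
# `model` and `test_loader` are assumed to exist; inputs are assumed in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_accuracy(model, test_loader, epsilon=0.03, device="cpu"):
    """Return (clean accuracy, adversarial accuracy) under an FGSM attack."""
    model.eval()
    clean_correct, adv_correct, total = 0, 0, 0
    for images, labels in test_loader:
        images, labels = images.to(device), labels.to(device)
        images.requires_grad_(True)

        # Forward pass on clean inputs.
        logits = model(images)
        loss = F.cross_entropy(logits, labels)
        clean_correct += (logits.argmax(dim=1) == labels).sum().item()

        # Craft adversarial examples: one signed-gradient step on the input,
        # then clamp back to the valid pixel range.
        model.zero_grad()
        loss.backward()
        adv_images = (images + epsilon * images.grad.sign()).clamp(0, 1)

        # Re-evaluate on the perturbed inputs.
        with torch.no_grad():
            adv_logits = model(adv_images)
        adv_correct += (adv_logits.argmax(dim=1) == labels).sum().item()
        total += labels.size(0)

    return clean_correct / total, adv_correct / total
```

In practice, robustness evaluations layer on stronger attacks (for example, iterative methods such as PGD) and report accuracy across a range of epsilon values rather than a single point.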

The most successful technology innovations start with security from the ground up. AI is new and exciting, but it leaves room for critical flaws if security isn’t considered from the beginning. At NetSPI, our security experts help customers innovate with confidence by proactively planning for security through an adversarial lens.

If your team is exploring the applications of AI, ML, or LLMs in your company, NetSPI can help define a secure path forward. Learn about our AI/ML Penetration Testing or contact us for a consultation.  

Common Terminology in Adversarial Machine Learning (November 21, 2023)

Artificial Intelligence (AI) and Machine Learning (ML) have vast applications in the cyber space. With their quick adoption and limitless possibilities, the industry needs authorities who can provide expertise and perspective to help guide other professionals in their exploration of Large Language Models (LLMs). One of the best ways to start learning a new area is by studying the common terminology practitioners use. We created this glossary of terms to help anyone researching AI and ML understand discussions around Adversarial Machine Learning.

Artificial Intelligence (AI) versus Machine Learning (ML) 

Before we dive in, let’s level set on the differences between AI and ML, or perhaps the lack thereof.  

Artificial Intelligence 

Artificial Intelligence is a broader field that focuses on creating machines that can perform tasks that typically require human intelligence. It aims to build systems that can reason, learn, perceive, and understand natural language, among other capabilities. AI encompasses various techniques, and machine learning is one of its subfields. 

Machine Learning 

Machine Learning is a subset of AI that deals with designing algorithms and models that enable computers to learn from data without explicit programming. Instead of being programmed with specific rules, ML models use patterns and examples to improve their performance on a given task. ML can be further divided into different categories, such as supervised learning, unsupervised learning, and reinforcement learning, each suited for different types of learning tasks. 

While they are closely related areas, they do have nuanced differences. To put it concisely, AI is a broader field that encompasses various techniques and methods to create intelligent systems, while ML is a specific approach within AI that focuses on learning from data to improve task performance. 
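
To make the "learning from data rather than explicit rules" idea concrete, here is a minimal, self-contained toy example in which a small PyTorch network is shown labeled points and learns a decision rule for them on its own. The synthetic data and tiny architecture are illustrative assumptions, not a recommendation.

```python
# Hypothetical sketch of supervised learning: a tiny PyTorch classifier learns a
# decision boundary from labeled examples rather than hand-written rules.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Labeled training data: 2-D points, label 1 if x + y > 0, else 0.
X = torch.randn(200, 2)
y = (X.sum(dim=1) > 0).long()

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

# The model improves by minimizing its loss on the labeled examples.
for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

accuracy = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"training accuracy: {accuracy:.2f}")
```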

Definitions within the realm of Adversarial Machine Learning (AML) currently lack standardization. We recognize the significance of clear and robust definitions in shaping the future of AML, which is why our team is actively involved in refining and solidifying them to help establish industry standards. By leveraging NetSPI’s expertise and in-house knowledge, we strive to present definitions that are not only comprehensive but also accurate and relevant to the current state of AML.

Key Terminology in AI Cybersecurity

Adversarial Attacks: Techniques employed to create adversarial examples and exploit the vulnerabilities of machine learning models.
Adversarial Example Detection: Methods designed to distinguish adversarial examples from regular clean examples and prevent their misclassification.
Adversarial Examples: AML hinges on the idea that machine learning models can be deceived and manipulated by subtle modifications to input data, known as adversarial examples. These adversarial examples are carefully crafted to cause the model to misclassify or make incorrect predictions, leading to potentially harmful consequences. Adversarial attacks can have significant implications, ranging from evading spam filters and malware detection systems to fooling autonomous vehicles' object recognition systems.
Adversarial Learning/Training: A learning approach that involves training models to be robust against adversarial examples or actively generating adversarial examples to evaluate the model's vulnerability.
Adversarial Machine Learning (AML): A field that focuses on studying the vulnerabilities of machine learning models to adversarial attacks and developing strategies to enhance their security and robustness.
Adversarial Perturbations: Small, carefully crafted changes to the input data that are imperceptible to humans but can cause significant misclassification by the machine learning model.
Adversarial Robustness Evaluation: The process of assessing the robustness of a machine learning model against adversarial attacks, often involving stress testing the model with various adversarial examples.
Adversarial Training: A defense technique involving the augmentation of the training set with adversarial examples to improve the model's robustness (a minimal code sketch follows this glossary).
Autoencoders: Neural network models trained to reconstruct the input data from a compressed representation, useful for unsupervised learning and dimensionality reduction tasks.
Batch Normalization: A technique used to improve the training stability and speed of neural networks by normalizing the inputs of each layer.
Bias-Variance Tradeoff: The tradeoff between the model's ability to fit the training data well (low bias) and its ability to generalize to new data (low variance).
Black-Box Attacks: Adversarial attacks where the attacker has limited knowledge about the target model, usually through input-output interactions.
Certified Defenses: Defense methods that provide a "certificate" guaranteeing the robustness of a trained model against perturbations within a specified bound.
Cross-Entropy Loss: A loss function commonly used in classification tasks that measures the dissimilarity between the predicted probabilities and the true class labels.
Data Augmentation: A technique used to increase the diversity and size of the training dataset by generating new samples through transformations of existing data.
Decision Boundaries: The dividing lines or surfaces that separate different classes or categories in a classification problem. They define the regions in the input space where the model assigns different class labels to the data points. Decision boundaries can be linear or nonlinear, depending on the complexity of the classification problem and the algorithm used. The goal of training a machine learning model is to learn the optimal decision boundaries that accurately separate the different classes in the data.
Defense Mechanisms: Techniques and strategies employed to protect machine learning models against adversarial attacks.
DefenseGAN: A defense technique that uses a Generative Adversarial Network (GAN) to project adversarially perturbed images into clean images before classification.
Deep Learning: A subfield of machine learning that utilizes artificial neural networks with multiple layers to learn hierarchical representations of data.
Discriminative Models: Models that learn the boundary between different classes or categories in the data and make predictions based on this learned decision boundary.
Dropout: A regularization technique where random units in a neural network are temporarily dropped out during training to prevent over-reliance on specific neurons.
Ensemble Methods: Machine learning techniques that combine the predictions of multiple individual models to make more accurate and robust predictions or decisions. Instead of relying on a single model, ensemble methods leverage the diversity and complementary strengths of multiple models to improve overall performance.
Evasion Attacks: Adversarial attacks aimed at perturbing input data to cause misclassification or evasion of detection systems.
Feature Engineering: The process of selecting, transforming, and creating new features from the available data to improve the performance of a machine learning model.
Generative Models: Models that learn the underlying distribution of the training data and generate new samples that resemble the original data distribution.
Gradient Descent: An optimization algorithm that iteratively updates the model's parameters in the direction of steepest descent of the loss function to minimize the loss.
Gradient Masking/Obfuscation: Defense methods that intentionally hide or obfuscate the gradient information of the model to make adversarial attacks less successful.
Gray-Box Attacks: Adversarial attacks where the attacker has partial knowledge about the target model, such as access to some internal information or limited query access.
Hyperparameters: Parameters that are not learned from data during the training process but are set by the user before training begins. These parameters control the behavior and performance of the machine learning model. Unlike the internal parameters of the model, which are learned through optimization algorithms, hyperparameters are predefined and chosen by the user or the machine learning engineer.
L1 and L2 Regularization: Techniques used to prevent overfitting by adding a penalty term to the model's objective function, encouraging simplicity or smoothness.
Mean Squared Error (MSE): A commonly used loss function that measures the average squared difference between the predicted and true values.
Neural Networks: Computational models inspired by the structure and functioning of the human brain, consisting of interconnected nodes (neurons) organized in layers.
Offensive Machine Learning (OML): The practice of leveraging machine learning techniques to design and develop attacks against machine learning systems or to exploit vulnerabilities in these systems. Offensive machine learning aims to manipulate or deceive the target models, compromising their integrity, confidentiality, or availability.
Overfitting: A phenomenon where a machine learning model becomes too specialized to the training data and fails to generalize well to new, unseen data.
Poisoning Attacks: Adversarial attacks involving the injection of malicious data into the training set to manipulate the behavior of the model.
Precision and Recall: Evaluation metrics used in binary classification tasks to measure the model's ability to correctly identify positive samples (precision) and the model's ability to find all positive samples (recall).
Regularization Methods: Techniques that penalize large values of model parameters or gradients during training to prevent large changes in model output with small changes in input data.
Reinforcement Learning: A machine learning paradigm in which an agent interacts with an environment, receiving rewards or penalties based on its actions, and learns policies that maximize a cumulative reward signal.
Robust Optimization: Defense techniques that modify the model's learning process to minimize misclassification of adversarial examples and improve overall robustness.
Security-Accuracy Trade-off: The trade-off between the model's accuracy on clean data and its robustness against adversarial attacks. Enhancing one aspect often comes at the expense of the other.
Semi-Supervised Learning: A learning paradigm that combines labeled and unlabeled data to improve the performance of a model by leveraging the unlabeled data to learn better representations or decision boundaries.
Supervised Learning: A machine learning approach where the model learns from labeled training data, with inputs and corresponding desired outputs provided during training.
Transfer Attacks: Adversarial attacks that exploit the transferability of adversarial examples to deceive target models with limited or no direct access.
Transfer Learning: A technique that leverages knowledge learned from one task to improve performance on a different but related task.
Transferability: The ability of adversarial examples generated for one model to deceive other similar models.
Underfitting: A phenomenon where a machine learning model fails to capture the underlying patterns in the training data, resulting in poor performance on both the training and test data.
Unsupervised Learning: A machine learning approach where the model learns patterns and structures from unlabeled data without explicit output labels.
White-Box Attacks: Adversarial attacks where the attacker has complete knowledge of the target model, including its architecture, parameters, and internal gradients.
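
Several of the defensive entries above, particularly Adversarial Training, lend themselves to a short code illustration. The sketch below is a minimal, hypothetical PyTorch training loop in which each batch is augmented with FGSM-perturbed copies of itself so the model learns from both clean and perturbed inputs. The names `model`, `train_loader`, and `optimizer` are assumed placeholders, inputs are assumed to lie in [0, 1], and production-grade adversarial training typically uses stronger attacks such as PGD.

```python
# Hypothetical sketch of adversarial training: augment each training batch with
# FGSM-perturbed copies so the model learns to classify clean and perturbed
# inputs alike. Assumes a PyTorch classifier `model`, a `train_loader` with
# inputs in [0, 1], and an `optimizer`; epsilon is the perturbation budget.
import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, train_loader, optimizer,
                               epsilon=0.03, device="cpu"):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)

        # Generate adversarial counterparts of the current batch with FGSM.
        images.requires_grad_(True)
        attack_loss = F.cross_entropy(model(images), labels)
        model.zero_grad()
        attack_loss.backward()
        adv_images = (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

        # Train on a mix of clean and adversarial examples.
        optimizer.zero_grad()
        mixed_inputs = torch.cat([images.detach(), adv_images])
        mixed_labels = torch.cat([labels, labels])
        F.cross_entropy(model(mixed_inputs), mixed_labels).backward()
        optimizer.step()
```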

Want to continue your education in Adversarial Machine Learning? Learn about NetSPI’s AI/ML Penetration Testing.
