
NetblockTool: The Easy Way to Find IP Addresses Owned by a Company

TL;DR

Use NetblockTool to easily dump a unique list of IP addresses belonging to a company and its subsidiaries.

Download the tool here: https://github.com/NetSPI/NetblockTool

The Problem

A problem I frequently ran into in both offensive and defensive roles was determining which IP addresses a company owns and uses. Traditionally, gathering that list is a long and very manual process: various sources need to be consulted, like Google, ARIN, WHOIS, IPinfo, Censys, and Shodan. The list goes on.

Thankfully, some automated tools exist that make this process a bit easier. Recon-ng is one of them, but it isn't perfect; while it does a lot of things well, easily gathering a complete list of netblocks for a company is not one of those things. This is where NetblockTool comes in.

The Solution: NetblockTool

Written as a standalone Python script, NetblockTool is designed to fill in this tooling gap.

For blue team users, simply provide the name of your company and receive a list of unique netblocks, ranked by the likelihood that the returned netblock belongs to your company.

For red team users, use NetblockTool to gather IP ranges, points of contact, and even netblocks belonging to your target’s subsidiaries.

[Screenshot: NetblockTool usage]

Getting Started

Getting started is easy. Simply clone the repository, install the requirements, and you’re ready to start using NetblockTool.

git clone https://github.com/NetSPI/NetblockTool.git
cd NetblockTool && pip3 install -r requirements.txt
python3 NetblockTool.py -v Company

How does it work?

NetblockTool gathers netblocks a company may own from several data sources: Google dorking, the ARIN database, the ARIN website's API, and IPinfo. Since public websites are being scraped, no API key is needed for any site when using NetblockTool.

First, the user provides a target company. NetblockTool then scrapes Google using a Google dork to retrieve networks that IPinfo knows about.

[Screenshot: Google dork results for IPinfo netblocks]
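
To make the idea concrete, here's a minimal sketch of building that kind of dork in Python. The query shown is a hypothetical example; the exact dork NetblockTool constructs may differ.

import urllib.parse

# Hypothetical example of an IPinfo-focused Google dork; the query
# NetblockTool actually builds may differ.
company = "Acme Corporation"
dork = f'site:ipinfo.io "{company}"'
print("https://www.google.com/search?q=" + urllib.parse.quote(dork))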

Next, the ARIN database is queried by sending the same requests a normal user would generate by visiting the ARIN website and manually searching for a company. The results are scraped for ARIN objects (like networks and company contacts), and those objects are then visited and scraped in turn. The advantage of this method is that it returns more results than directly querying the database through its API.

[Screenshot: ARIN search results]
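
For comparison, here's a minimal sketch of querying ARIN's public Whois-RWS API directly with Python's requests library. This is not NetblockTool's code, and the JSON layout shown is an assumption that may vary.

import requests

# Search ARIN's public Whois-RWS API for organizations whose names
# start with "Acme". NetblockTool scrapes the website instead, which
# surfaces more objects than this API returns.
resp = requests.get(
    "https://whois.arin.net/rest/orgs;name=Acme*",
    headers={"Accept": "application/json"},
    timeout=30,
)
orgs = resp.json().get("orgs", {}).get("orgRef", [])
if isinstance(orgs, dict):  # single results may come back as a dict
    orgs = [orgs]
for org in orgs:
    print(org.get("@handle"), org.get("@name"))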

After all sources have been scraped, the discovered netblocks are deduplicated, and each is assigned a confidence score reflecting how likely it is to belong to the company. The score is largely based on the netblock's name, the type of ARIN object it is (an ASN, a network, or a leased range known as a customer), and the address linked to the netblock.

[Screenshot: deduplicated netblocks with confidence scores]
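
As a toy illustration of the deduplicate-and-score step: the weights below are invented for the example and are not the tool's actual scoring.

import ipaddress

def score_netblock(name, org_type, company):
    # Invented weights for illustration; the real tool also factors
    # in the address registered to the netblock.
    score = 0
    if company.lower() in name.lower():
        score += 50
    score += {"network": 30, "asn": 20, "customer": 10}.get(org_type, 0)
    return score

found = [
    ("192.0.2.0/24", "ACME-CORP", "network"),
    ("192.0.2.0/24", "ACME-CORP", "network"),  # duplicate, dropped below
    ("198.51.100.0/24", "EXAMPLE-ISP", "customer"),
]

unique = {}
for cidr, name, org_type in found:
    unique[ipaddress.ip_network(cidr)] = (name, org_type)

for net, (name, org_type) in unique.items():
    print(net, name, score_netblock(name, org_type, "Acme"))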

From here, further operations are performed based on the user's arguments, such as retrieving geolocation data for each IP.
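
For example, a single address can be geolocated through IPinfo's public endpoint. This is a hypothetical stand-in for however the tool performs its lookups.

import requests

# Look up geolocation data for one IP; no API key is required for
# light usage, though rate limits apply.
info = requests.get("https://ipinfo.io/8.8.8.8/json", timeout=30).json()
print(info.get("ip"), info.get("city"), info.get("region"), info.get("country"))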

Finally, the total number of addresses is printed and the results are written to a CSV. The first 15 rows of results for an example scan of Google are shown below.

[Screenshot: NetblockTool CSV results]

Subsidiaries

What if a company has netblocks registered to its subsidiaries? NetblockTool has you covered. It can automatically query the Securities and Exchange Commission's public database to retrieve a list of likely subsidiaries and then enumerate the subsidiaries' netblocks.

[Screenshot: subsidiary enumeration results]
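
Under the hood, subsidiaries are discoverable through SEC filings (they are listed in Exhibit 21 of a company's 10-K). Here's a hedged sketch of the kind of EDGAR lookup involved, not the tool's actual code.

import requests

# Find a company's 10-K filings via EDGAR's company search;
# Exhibit 21 of a 10-K lists the registrant's subsidiaries.
# The SEC asks automated clients to send a descriptive User-Agent.
resp = requests.get(
    "https://www.sec.gov/cgi-bin/browse-edgar",
    params={"action": "getcompany", "company": "Acme",
            "type": "10-K", "output": "atom"},
    headers={"User-Agent": "example research script admin@example.com"},
    timeout=30,
)
print(resp.text[:500])  # Atom feed of matching filings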

Common Use Cases

There are many different ways of getting the data you desire from NetblockTool, but the easiest way of running the tool is simply:

python3 NetblockTool.py -v Company

If you want to extract netblocks owned by your target company's subsidiaries, specify the subsidiaries flag:

python3 NetblockTool.py -v Company -s

Extracting point of contact information can also be helpful:

python3 NetblockTool.py -v Company -p

Or, if you want to get as much information as possible, including netblocks found using wildcard queries, points of contact, geolocation data, and physical addresses:

python3 NetblockTool.py -wpgav Company -so

Conclusion

Whether you need to find the netblocks your employer owns or find the netblocks for your next red team engagement, NetblockTool is your quick and easy solution. Give it a shot and see if you find it useful.

https://github.com/NetSPI/NetblockTool


A Beginner's Guide to Gathering Azure Passwords

It has been a while since the initial release (August 2018) of the Get-AzurePasswords module within MicroBurst, so I figured it was time to do an overview post that explains how to use each option within the tool. Since each targeted service in the script has a different way of getting credentials, I want users of the tool to really understand how things are working.


Additionally, we’ve renamed the function to Get-AzPasswords to match the PowerShell modules that we’re using in the function.

AzureRM Versus Az

As of March 19, 2020, we pushed some changes to MicroBurst to switch the cmdlets over to the Az PowerShell modules (from AzureRM). This was a much-needed switch, as the AzureRM modules are being replaced by the Az modules.

Along with these updates, I also wanted to make some additions to the Get-AzurePasswords functionality. Since we reorganized all of the code in MicroBurst to match the related supporting modules (Az, AzureAD, MSOL, etc.), we thought it was important to separate function names based on the modules each script uses.

[Screenshot: MicroBurst module folders]

Going forward, Get-AzurePasswords will still live in the AzureRM folder in MicroBurst, but it will not be updated with any new functionality. I highly recommend that everyone switch over to the newly renamed version, "Get-AzPasswords", in the Az folder in MicroBurst.

Important Script Usage Note – Some of these functions can make minor temporary changes to an Azure subscription (see Automation Accounts). If you Ctrl+C during the script execution, you may end up with unintended files or changes in your subscription.

I’ll cover each of these concerns in the sections below, but have patience when running these functions. I know Azure can have its slow moments, so (in most cases) just give the script a moment to catch up and everything should be good. I haven’t been keeping track, but I believe I’ve lost several months of time waiting for automation runbook jobs to complete.

Function Usage

For each service section, I’ve noted the script flag that you can use to toggle (“-Keys Y” versus “-Keys N”) the collection of that specific service. Running the script with no flags will gather credentials from all services.

Step 1. Import the MicroBurst Module

PS C:\MicroBurst> Import-Module .\MicroBurst.psm1
Imported Az MicroBurst functions
Imported AzureAD MicroBurst functions
Imported MSOnline MicroBurst functions
Imported Misc MicroBurst functions
Imported Azure REST API MicroBurst functions

Step 2. Authenticate to the Azure Tenant

PS C:\MicroBurst> Login-AzAccount

Account           SubscriptionName  TenantId                             Environment
-------           ----------------  --------                             -----------
test@example.com  TestSubscription  4712345a-6543-b5s4-a2b2-e01234567895 AzureCloud

Step 3. Gather Passwords

PS C:\MicroBurst> Get-AzPasswords -Verbose
VERBOSE: Logged In as test@example.com
VERBOSE: Getting List of Key Vaults...
VERBOSE:  Exporting items from testingKeys
VERBOSE:   Getting Key value for the testKey Key
VERBOSE:   Getting Secret value for the TestKey Secret
VERBOSE: Getting List of Azure App Services...
VERBOSE:  Profile available for microburst-application
VERBOSE: Getting List of Azure Container Registries...
VERBOSE: Getting List of Storage Accounts...
VERBOSE:  Getting the Storage Account keys for the teststorage account
VERBOSE: Getting List of Azure Automation Accounts...
VERBOSE:  Getting the RunAs certificate for autoadmin using the XZvOYzsuBiGbfqe.ps1 Runbook
VERBOSE:   Waiting for the automation job to complete
VERBOSE:    Run AuthenticateAs-autoadmin-AzureRunAsConnection.ps1 (as a local admin) to import the cert and login as the Automation Connection account
VERBOSE:   Removing XZvOYzsuBiGbfqe runbook from autoadmin Automation Account
VERBOSE:  Getting cleartext credentials for the autoadmin Automation Account
VERBOSE:   Getting cleartext credentials for test using the dOFtWgEXIQLlRfv.ps1 Runbook
VERBOSE:    Waiting for the automation job to complete
VERBOSE:    Removing dOFtWgEXIQLlRfv runbook from autoadmin Automation Account
VERBOSE: Password Dumping Activities Have Completed

*Running this will open an "Out-GridView" prompt asking you to select the subscription(s) to gather credentials from.

For easier output management, I’d recommend piping the output to either “Out-GridView” or “Export-CSV”.

With that housekeeping out of the way, let’s dive into the credentials that we’re able to gather with Get-AzPasswords.

Key Vaults (-Keys Y)

Azure Key Vaults are Microsoft’s solution for storing sensitive data (Keys, Passwords/Secrets, Certs) in the Azure cloud. Inherently, Key Vaults are great sources for finding credential data. If you have a user with the correct rights, you should be able to read data out of the key stores.

Vault access is controlled by the Access Policies in each vault, but any users with Contributor rights are able to give themselves access to a Key Vault. Get-AzPasswords will not modify any Key Vault Access Policies, but you could give your account read permissions on the vault if you really needed to read a key.

An example Key Vault Secret:

[Screenshot: example Key Vault secret]

Get-AzPasswords will export all of the secrets in cleartext, along with any certificates. You also have the option to save the certificate files locally with the “-ExportCerts Y” flag.
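
For a sense of what this collection boils down to, here's a minimal sketch using the Azure SDK for Python (Get-AzPasswords itself uses the Az PowerShell modules). The vault URL below is a placeholder.

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Enumerate and read every secret in a vault the current identity can
# access. Requires the azure-identity and azure-keyvault-secrets
# packages; the vault name is made up.
client = SecretClient(
    vault_url="https://notarealvault.vault.azure.net",
    credential=DefaultAzureCredential(),
)
for prop in client.list_properties_of_secrets():
    print(prop.name, client.get_secret(prop.name).value)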

Sample Output:

Type : Key
Name : DiskKey
Username : N/A
Value : {"kid":"https://notArealVault.vault.azure.net/keys/DiskKey/63abcdefghijklmnop39","kty ":"RSA","key_ops":["sign","verify","wrapKey","unwrapKey","encrypt","decrypt"],"n":"v[REDACTED]w","e":"AQAB"}
PublishURL : N/A
Created : 5/19/2020 5:20:12 PM
Updated : 5/19/2020 5:20:12 PM
Enabled : True
Content Type : N/A
Vault : notArealVault
Subscription : NotARealSubscription

Type : Secret
Name : TestKey
Username : N/A
Value : Karl'sPassword
PublishURL : N/A
Created : 3/7/2019 9:28:37 PM
Updated : 3/7/2019 9:28:37 PM
Enabled : True
Content Type : Password
Vault : notArealVault
Subscription : NotARealSubscription

Finally, access to the Key Vaults may be restricted by network, so you may need to run this from an Azure VM on the subscription, or from an IP in the approved “Private endpoint and selected networks” list. These settings can be found under the Networking tab in the Azure portal.

Alternatively, you may need to use an automation account “Run As” account to request the keys. Steps to complete that process are outlined here.

App Services (-AppServices Y)

Azure App Services are Microsoft’s option for rapid application deployment. Applications can be spun up quickly using app services and the configurations (passwords) are pushed to the applications via the App Services profiles.

In the portal, the App Services deployment passwords are typically found in the “Publish Profile” link that can be found in the top navigation bar within the App Services section. Any user with contributor rights to the application should be able to access this profile.

[Screenshot: App Services publish profile]

These publish profiles contain Web and FTP credentials that can be used to access the App Service's files. In addition, any stored connection strings should also be available in the file. All available profile credentials are parsed by Get-AzPasswords, so it's easy to gather credentials for multiple App Services applications at once.
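
As an illustration of what the parsing involves, here's a toy example of pulling credentials out of a downloaded .PublishSettings file. The XML is a trimmed, made-up sample; Get-AzPasswords handles the retrieval and parsing for you.

import xml.etree.ElementTree as ET

# Publish profiles are XML with one publishProfile element per
# deployment method (Web Deploy, FTP, etc.).
sample = """<publishData>
  <publishProfile profileName="app - Web Deploy" publishMethod="MSDeploy"
                  publishUrl="app.scm.azurewebsites.net:443"
                  userName="$app" userPWD="[REDACTED]" />
</publishData>"""

for profile in ET.fromstring(sample).iter("publishProfile"):
    print(profile.get("publishMethod"), profile.get("userName"),
          profile.get("userPWD"), profile.get("publishUrl"))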

Sample Output:

Type         : AppServiceConfig
Name         : appServicesApplication - Web Deploy
Username     : $appServicesApplication
Value        : dS[REDACTED]jM
PublishURL   : appServicesApplication.scm.azurewebsites.net:443
Created      : N/A
Updated      : N/A
Enabled      : N/A
Content Type : Password
Vault        : N/A
Subscription : NotARealSubscription

Type         : AppServiceConfig
Name         : appServicesApplication - FTP
Username     : appServicesApplication$appServicesApplication
Value        : dS[REDACTED]jM
PublishURL   : ftp://appServicesApplication.ftp.azurewebsites.windows.net/site/wwwroot
Created      : N/A
Updated      : N/A
Enabled      : N/A
Content Type : Password
Vault        : N/A
Subscription : NotARealSubscription

Type         : AppServiceConfig
Name         : appServicesApplication-Custom-ConnectionString
Username     : N/A
Value        : metadata=res://*/Models.appServicesApplication.csdl|res://*/Models.appServicesApplication.ssdl|res://*/Models.appServicesApplication.msl;provider=System.Data.SqlClient;provider connection string="Data Source=abcde.database.windows.net;Initial Catalog=app_Enterprise_Prod;Persist Security Info=True;User ID=psqladmin;Password=somepassword9"
PublishURL   : N/A
Created      : N/A
Updated      : N/A
Enabled      : N/A
Content Type : ConnectionString
Vault        : N/A
Subscription : NotARealSubscription

Potential next steps for App Services have been outlined in another NetSPI blog post here.

Automation Accounts (-AutomationAccounts Y)

Automation Accounts are one of the ways that you can automate jobs and routine tasks within Azure. These tasks (Runbooks) are frequently run with stored credentials, or with the service account (Run As “Connections”) tied to the Automation Account.

[Screenshot: Automation Account runbooks]

Both of these credential types can be returned with Get-AzPasswords and can potentially allow for privilege escalation. In order to gather these credentials from the Automation Accounts, we need to create new runbooks that will cast the credentials out to variables that are then printed to the runbook job output.

To protect these credentials in the output, we’ve implemented an encryption scheme in Get-AzPasswords that encrypts the job output.

[Screenshot: encrypted credentials in runbook job output]

The Run As certificates (covered in another blog) can then be used to authenticate (run the AuthenticateAs PS1 script) from your testing system as the stored Run As connection.

[Screenshot: authenticating with the AuthenticateAs script]

Any stored credentials may have a variety of uses, but I’ve frequently seen domain accounts being stored here, so that can lead to some interesting lateral movement options.

Sample Output:

Type : Azure Automation Account
Name : kfosaaen
Username : test
Value : testPassword
PublishURL : N/A
Created : N/A
Updated : N/A
Enabled : N/A
Content Type : Password
Vault : N/A
Subscription : NotARealSubscription

As a secondary note here, you can also request bearer tokens for the Run As automation accounts from a custom runbook. I cover the process in this blog post, but I think it’s worth noting here, since it’s not included in Get-AzPasswords, but it is an additional way to get a credential from an Automation Account.

And one final note on gathering credentials from Automation Accounts. It was noted above, but sometimes Azure Automation Accounts can be slow to respond. If you’re having issues getting a runbook to run and cancel the function execution before it completes, you will need to manually go in and clean up the runbooks that were created as part of the function execution.

[Screenshot: leftover runbook in the Automation Account]

These will always be named with a 15-character random string of letters (e.g., lwVSNvWYpPXCcDd). You will also have local files in your execution directory to clean up; they will have the same names as the runbooks that were uploaded for execution.

Storage Account Keys (-StorageAccounts Y)

Storage Accounts are the multipurpose (public/private files, tables, queues) service for storing data in an Azure subscription. This section is pretty simple compared to the previous ones, but gathering the account keys is an easy way to maintain persistence in a sensitive data store. These keys can be used with the Azure Storage Explorer application to remotely mount storage accounts.

[Screenshot: Storage Account access keys]

These access keys can easily be cycled, but if you’re looking for persistence in a Storage Account, these would be your best bet. Additionally, if you’re modifying Cloud Shell files for escalation/persistence, I’d recommend holding on to a copy of these keys for any Cloud Shell storage accounts.
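
To show why these keys matter, here's a minimal sketch of using a recovered key with the Azure SDK for Python; the account name and key are placeholders.

from azure.storage.blob import BlobServiceClient

# An account key grants full access to the storage account,
# independent of any Azure AD role assignments.
client = BlobServiceClient(
    account_url="https://notarealstorageaccount.blob.core.windows.net",
    credential="W84[REDACTED]==",
)
for container in client.list_containers():
    print(container.name)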

Sample Output:

Type         : Storage Account
Name         : notArealStorageAccount
Username     : key1
Value        : W84[REDACTED]==
PublishURL   : N/A
Created      : N/A
Updated      : N/A
Enabled      : N/A
Content Type : Key
Vault        : N/A
Subscription : NotARealSubscription

Azure Container Registries (-ACR Y)

Azure Container Registries are used in Azure to manage Docker images for use within Azure Kubernetes Service (AKS) or as container instances (either in Azure or elsewhere). More often than not, we will find keys and credentials stored in these container images.

To authenticate to the repositories, you can either use the Az CLI with Azure AD credentials, or use the registry's "Admin user" account if one is enabled. If you have Azure AD rights to pull the admin password for a container registry, you should already have rights to authenticate to the repositories with the Az CLI; but if an admin user is enabled for the registry, that account can be used for persistence once you lose access to the initial Azure AD user.

[Screenshot: Container Registry admin user credentials]

Fun Fact – Any AzureAD user with “Reader” permissions on the Container Registry is able to connect to the repositories and pull images down with Docker.

For more information on using these credentials, here’s another NetSPI blog.

Conclusion

There was a lot of ground to cover here, but hopefully this does a good job of explaining all of the functionality available in Get-AzPasswords. It’s worth noting that most of this functionality relies on your AzureAD user having Contributor IAM rights on the applicable service. While some may argue that Contributor access is equivalent to admin in a subscription, I would argue that most subscriptions primarily consist of Contributor users.

If there are any sources for Azure passwords that you would like to see added to Get-AzPasswords, feel free to make a pull request on the MicroBurst GitHub repository.


Introducing PTaaS+: Decreasing Your Organization’s Time to Remediation

The PTaaS+ features listed within this blog post are now offered with any NetSPI service that leverages PTaaS. This excludes ticketing integrations, which are available for an additional cost. Contact us to learn more.

NetSPI is focused on creating the next generation of security testing. Our Penetration Testing as a Service (PTaaS) delivers higher-quality vulnerability findings in less time than any other provider, and we are now expanding these benefits into your remediation lifecycle.

This month we're expanding your options with our PTaaS+ plan, which focuses on vulnerability management and remediation. With our base PTaaS plan, we deliver vulnerabilities the same day they are found; with PTaaS+, you and your team are empowered to act on them and begin remediating immediately, decreasing your time to remediation by up to one month for high-severity issues. A couple of key features contribute to this new functionality:

Ticketing Integrations

On average, we report over 50 vulnerabilities on a typical web application test, and that number jumps above 700 when we perform external network testing. When receiving so many vulnerabilities, making sense of them all can be a full-time job before you even get to remediation. With PTaaS+, we offer free integration with Jira or ServiceNow to easily get the vulnerabilities into your tools and into the remediators' hands on day zero.

Remediation Assignments & SLAs

After receiving a large number of vulnerabilities, the first step is assigning a due date for remediation based on vulnerability severity. PTaaS+ allows each severity to be assigned a timeframe, counted from the delivery date, within which it must be remediated. NetSPI's standard recommendation is:

  • Critical – 30 days
  • High – 60 days
  • Medium – 90 days
  • Low – 365 days

However, these can be customized to fit your organization’s policies. Additionally, with PTaaS+, you can assign vulnerabilities to specific users, letting you track and delegate vulnerabilities throughout the remediation lifecycle.

Vulnerability Customization

After we deliver vulnerabilities, one common point of discussion is NetSPI's severity rating versus an organization's internal vulnerability rating. Every organization rates vulnerabilities differently, so PTaaS+ allows you to assign your own severity to each vulnerability, from which your remediation due dates can be calculated. Both NetSPI's severities and yours are maintained for auditing and future reporting.

Data Analytics

After you have a handle on your remediation processes, you can start looking for trends to ensure fewer vulnerabilities next year. PTaaS+ grants you access to NetSPI's Data Lab, which allows you to analyze and trend vulnerabilities across all your assessments with NetSPI. Popular Data Lab queries include:

  • Riskiest asset in your environment
  • Most common vulnerabilities across your company
  • Top OWASP categories



NetSPI Adds to Leadership Team to Support Continued Focus on Customer Success

Florindo Gallicchio and Robert Richardson bring a combined 50 years of cyber security experience to NetSPI.

Minneapolis, Minnesota – NetSPI, the leader in enterprise security testing and vulnerability management, today announced Florindo Gallicchio has joined as Managing Director and Robert Richardson has been promoted to Vice President of Customer Success. Expanding the leadership team is a principal component of NetSPI's strategy to drive customer growth, program success, and return on investment (ROI) of penetration testing.

“Finding vulnerabilities that other pentesters miss, making reporting easier to digest and act upon, and streamlining our customer engagements through the Resolve™ vulnerability management platform are key areas of focus for our team,” said Aaron Shilts, President at NetSPI. “The growth of our leadership team gives us the opportunity to evolve and expand our services, providing customers peace-of-mind that they’re working with the best security testing and vulnerability management team on the market today.”

Cumulatively, Gallicchio and Richardson bring half a century of cyber security excellence to NetSPI, where they will help customers align security strategies to business goals.

  • Gallicchio is a senior risk management and information security practitioner with over 30 years of experience building and running cyber security programs that securely manage the business while achieving and maintaining compliance with regulatory and industry requirements. As Managing Director at NetSPI, he will be a strategic advisor to executives, boards of directors, and technology staff, helping them understand the role of security as a business strategy. Prior to joining NetSPI, Gallicchio was the CISO at a global advisory investment firm in New York City. He began his career with the National Security Agency (NSA) while serving in the U.S. Navy, where in 10 years of service he worked in signals and communications intelligence collection and systems exploitation.
  • Richardson has more than 20 years of experience as a builder of people, processes, and sales enablement that support and drive sales growth. Richardson is being promoted to Vice President of Customer Success at NetSPI, and will focus on people leadership, personnel development, and operational efficiency. Prior to NetSPI, Richardson built a professional services process and delivery capability that resulted in 150% growth over two years as Director of Strategic Staffing and the Program Management Office (PMO) at Optiv Security. Prior to the merger that formed Optiv, Richardson managed projects at FishNet Security.

“Gallicchio and Richardson bring new perspectives to the table,” added Deke George, Founder and CEO of NetSPI. “Notably, Gallicchio’s experience on the client side as a financial services CISO and his time serving in the U.S. Navy coupled with Richardson’s personnel development track record and ability to scale operations will allow NetSPI to further improve our customers’ vulnerability management programs. Having two of the industry’s best minds on our roster is a crucial part of our mission to provide invaluable pentesting services and counsel to our clients – and continue to stay one step ahead of adversaries.”

To learn more about NetSPI's efforts to drive customer success, visit the company website to hear first-hand customer success stories, or connect with the NetSPI team at info@netspi.com or by calling (612) 465-8880.

About NetSPI

NetSPI is the leader in enterprise security testing and vulnerability management. We are proud to partner with seven of the top 10 U.S. banks, three of the world’s five largest health care companies, the largest global cloud providers, and many of the Fortune® 500. Our experts perform deep dive manual penetration testing of application, network, and cloud attack surfaces. We uniquely deliver Penetration Testing as a Service (PTaaS) through our Resolve platform. Clients love PTaaS for the simplicity of scoping new engagements, viewing their testing results in real-time, orchestrating remediation, and the ability to perform always-on continuous testing. We find vulnerabilities that others miss and deliver clear, actionable recommendations allowing our customers to find, track and fix their vulnerabilities faster. Follow us on Facebook, Twitter, and LinkedIn.

Media Contact:
Tori Norris, Maccabee PR for NetSPI
tori@maccabee.com
612-294-3100


TechTarget: How to hold Three Amigos meetings in Agile development

On October 22, NetSPI's Shyam Jha, SVP of Engineering, was featured in TechTarget:

Agile software development teams often adopt what’s called a Three Amigos approach, which combines the perspectives of business, development and quality assurance for sprint planning and evaluation.

Typically, a Three Amigos meeting includes a business analyst, quality specialist and developer. The product owner or other businessperson clarifies the requirements before development; the development expert lays out the issues that arise in development processes; and the software-quality expert or tester ensures that testing considerations are designed into requirements and development.

Read the full article here.


Cyber Security: How to Provide a Safe, Secure Space for People to Work

Creating a Culture of Education Around Cyber Security

When it comes to cyber security training, less is more. Determine what is necessary and make it mandatory compliance training. The balance can be subtle and served through a variety of media. Ideally, it will not even feel like training. For example, record a brief video, narrate over a few PowerPoint slides, or host Q&A sessions for any cyber security questions.

It is no longer realistic to base cyber security standards around employees keeping their personal and professional activities separate. If you educate employees about digital security in their personal lives, that awareness will extend into their business activities. Additionally, employees will appreciate the trusted guidance.

Another of my training philosophies is open communication without shaming. When someone is the victim of a cyber security scam, they feel shame. We need to move past this. If an employee reports being tricked by a suspicious email, link, call, etc., thank them and encourage them to share their experience with other employees. This helps others protect themselves and your business.

Lessons Learned from the Unlikeliest of Places

In 2016, I joined the cyber security team after two years of international travel, during which my husband and I bicycled from Seattle to Singapore. The lessons I learned along the way were surprisingly relevant to my work in cyber security.

  1. Learning to communicate when you don’t share a common language was key for me both in work and in life. Professionally, I was able to translate cyber security or any tech speak to something employees could understand and relate to their daily responsibilities. More broadly, as our environment grows more diverse, we will continue to find ourselves interacting with people from different geographies, cultural backgrounds, and native languages. It is increasingly important for us to effectively communicate with our global citizens whether personally or professionally.
  2. Change has become the new normal. Being in a different location and needing to find food, water, and shelter each day forced me to live with change and uncertainty. Within a few weeks, it became normal and much less stressful. I became comfortable trusting that things would work out.

[Podcast embed: Agent of Influence, Episode 12, featuring Kristin Walsh]

Areas of Cyber Security Focus in the Biotech Sector

Great cyber security is boring, and some media have done the industry a disservice by making it flashy and scary. Cyber security is about doing the preparation to provide a safe, secure space for people to work. The three main areas I would focus on are:

  1. Equipment Maintenance: Everything runs off software. Ensure the software is current with security patches applied. It can be a difficult balance between business and security when you need to take a money-making instrument offline to do a security upgrade. Having excellent cross-functional relationships so you can have those tough conversations is critical.
  2. Data Privacy: Ensure your systems are secured from the outside and that you have alerting and monitoring mechanisms in place should the worst happen. Prevention only goes so far; you need to be prepared and practiced for what to do in the event of a breach. The speed of recognition and recovery is what matters most.
  3. Audit Trails: Ensure that the right information and the right discoveries are attributed to the right people. Audit trails are also key in cyber security investigations. When you are trying to determine whose PC or which server or what part of the network was infiltrated, that audit trail and an environment with open communication allows you to conduct a successful investigation.

The views presented by Kristin are those of her own and do not necessarily represent the views of her employer or its subsidiaries.


The Payment Card Industry: Innovation, Security Challenges, and Explosive Growth

In a recent episode of Agent of Influence, I talked with John Markh of the PCI Council. John has over 15 years of experience in information security, encompassing compliance, threat and risk management, security assessments, digital forensics, application security, and emerging technologies such as AI, machine learning, IoT, and blockchain. At the PCI Council, his role includes developing and evolving standards for emerging mobile payment technologies, along with technical contributions around penetration testing, secure applications, the secure application lifecycle, and emerging technologies such as cloud and IoT.

I wanted to share some of his insights in a blog post, but you can also listen to our interview here, on Spotify, Apple Music, or wherever you listen to podcasts.

About the PCI (Payment Card Industry) Council

The PCI Council was established in 2006 by several payment brands to create a single framework, acceptable to all of those brands, for securing the payment and account data of merchants and service providers in the ecosystem. Since then, the PCI Council has created many additional standards that cover not only the operational environment but also device security (such as the PCI PTS standard), hardware security modules, and point-to-point encryption solutions. The Council is in the process of developing security standards for various emerging payment technologies. Its mission is to enable secure payment processing for all stakeholders.

Over the years, a number of the security requirements created by the Council have been enhanced to ensure the standards do not become obsolete and keep up with current threats to the payment card industry as a whole. For example, PCI DSS, the very first standard created and published by the Council, has evolved through numerous iterations to account for evolving threats.

The standards built by the PCI Council are designed to address threats that directly impact the payment ecosystem; they are not all-encompassing. For example, organizations that operate national infrastructure or electricity grids will find some of the requirements applicable, but the standards will not address all of the risks those organizations face. The PCI Council's standards are focused on the payment ecosystem.

The Evolution of the Payment Card Industry

John shared how people want convenience – not just in payment, but in every aspect of their life. They want convenience and security. So, payments will evolve to accommodate that.

Even today, there are stores where you put items you want to purchase in your shopping bag and you walk out. Automation, artificial intelligence, machine vision, and biometric systems that are installed in that store will identify the products you have put in your bag and deduct the money from your pre-registered account completely seamlessly.

There are also pilot stores in Asia where you still have to check out at the grocery store, but to pay, you just look at a scanner, which verifies your identity through an iris scan, and payment is processed from a pre-registered account.

Many appliances are also becoming connected to the internet, so it is possible that in the future, a refrigerator will detect that you have run out of milk, order milk to be delivered to you, and make the payment on your behalf. You could soon wake up with a fresh gallon of milk on your doorstep, ordered by your refrigerator.

And of course, mobile is everywhere. More and more people have smartphones – and smartwatches – and with that comes the convenience of paying using your device. Paying by smart device is way simpler and in these times of COVID-19, it’s also contactless. I think we will see more and more technologies that allow this type of payment. It will still be backed by a credit card behind the scenes, but the form factor of your rectangular plastic will shift to other form factors that are more convenient and seamless.

There are also “smart rings” that can perform biometric authentication of the wearer of the ring. You can load payment cards and transit system cards into the ring, for example. So, when you want to pay or take the train, you just tap your ring to the NFC-enabled reading device, and you’re done.

Convenience will drive innovation. Innovation will have to adapt to meet security standards and it will also drive new security standards to ensure that the emerging technologies are secure.

Innovation and Privacy

In order to have seamless payments, the system still needs some way to validate who you are. If you use a chip-and-PIN card, you authenticate yourself by entering a PIN, which is a manual process. As John noted, it's far more seamless to use an iris scan, but to do that, you need to surrender something of yours to the system so it can verify that you are you.

Right now, the standards are focusing on protecting account data, but maybe in the future, there will be a merge between standards that focus on protecting account data and standards that protect biometric data.

We have several characteristics that identify us for the duration of our lifetimes, such as fingerprints and irises, since they don't change much. It's difficult to say whether a fingerprint or an iris scan is the right choice for consumer authentication. At the end of the day, the payment system needs to authenticate you, and if it is using characteristics that cannot be changed, it also needs additional inputs to make sure a transaction is not fraudulent.

For example, payment authentication could be a combination of your fingerprint and the mobile device you’re using. If it is a known mobile device that belongs to you, the system could accept the transaction that was authenticated by your fingerprint plus additional information collected from your device, such as the fact that it belongs to you and there is no known malware on the device. If you were using your fingerprint on a new device, the system could identify that the fingerprints match, but recognize it’s a new device or the device might have some suspicious software on it, in which case the system will ask you to enter your PIN or to provide additional authentication. It will be a more elaborate system that takes numerous characteristics of the transaction and its environment into account before the transaction is processed.

Challenges of Making the Phone a Point of Sale (POS)

One area of focus for the PCI Council is mobile payment platforms. As John said, business owners want to be able to install an app on a mobile device and take payments through it, creating an instant point of sale. However, the fact that the phone is not controlled by an enterprise, and that people can install a variety of applications on their phones (some of which might be malware), puts tremendous risk on the entire payment processing system.

While this enables business owners to sell to more people, especially those who don’t have cash and only have credit cards or smart devices, it also creates an additional system for potential fraud.

John said the PCI Council is focused on ways to make mobile payment platforms more secure. To that end, the Council has already published two standards.

  • The Software-based PIN Entry on COTS (SPoC) standard enables solution providers to develop an app along with a small hardware dongle. The dongle reads the card information, while the phone becomes the point of sale and the device on which consumers enter their PIN to authenticate.
  • The second standard the PCI Council has released is Contactless Payments on COTS (CPoC™). In this case, it's just an application that the merchant downloads to their phone; it verifies the phone is reasonably secure by performing various attestations of the phone and the application, and allows merchants to instantly transform their phones into a point of sale. In some emerging markets, there is no payment infrastructure where you can walk into a bank and get a merchant account, or doing so may take a very long time. With mobile payment technologies, you can become a merchant almost immediately.

As I have personally seen, having the ability to make financial transactions in parts of the world that don’t have a lot of infrastructure through mobile devices has dramatically changed people’s livelihood. And we need to make sure that it’s being done securely.

To listen to the full podcast, click here, or you can find Agent of Influence on Spotify, Apple Music, or wherever you listen to podcasts.


Cyber Defense Magazine: 3 Steps to Reimagine Your AppSec Program

On October 7, NetSPI Managing Director Nabil Hannan and Product Manager Jake Reynolds were featured in Cyber Defense Magazine:

With Continuous Integration/Continuous Deployment (CI/CD) increasingly becoming the backbone of the modern DevOps environment, it's more important than ever for engineering and security teams to detect and address vulnerabilities early in the fast-paced software development life cycle (SDLC). This is particularly true at a time when the rate of deployment for telehealth software is growing exponentially, the usage of cloud-based software and solutions is high due to the shift to remote work, contact tracing software programs raise privacy and security concerns, and software and applications are used in nearly everything we do. As such, there is an ever-increasing need for organizations to take another look at their application security (AppSec) strategies to ensure applications are not left vulnerable to cyberattacks.

Read the full article for three steps to get started – starting on page 65 of the digital magazine here.


The Evolution of Security Frameworks and Key Factors that Affect Software Development

In a recent episode of Agent of Influence, I talked with Cassio Goldschmidt, Head of Information Security at ServiceTitan about the evolution of security frameworks used to develop software, including key factors that may affect one company’s approach to building software versus another. Cassio is an internationally recognized information security leader with a strong background in both product and program level security.

I wanted to share some of his insights in a blog post, but you can also listen to our interview here, on Spotify, Apple Music or wherever you listen to podcasts.

Key Considerations When Developing Software

As Goldschmidt noted, one of the first security frameworks to be highly publicized was Microsoft's SDL (Security Development Lifecycle), and a lot of security practitioners treated it as the way to develop software, as if one size fits all. That is definitely not the case.

Goldschmidt said that when SAFECode (the Software Assurance Forum for Excellence in Code), a not-for-profit, was created, it became a place to discuss how to develop secure code and what the development lifecycle should be, both among member companies and at large. But different types of software and environments require different approaches, and each business's approach will be affected by a variety of factors, including:

  1. Type of Application: Developing an application that is internet-facing, versus one that is merely internet-connected, or software for ATMs, will influence the kind of defense mechanisms you need and how you should think about the code you're developing.
  2. Compliance Rules: If your organization has to abide by specific compliance obligations, such as PCI, they will in some ways dictate what you have to do. Like it or not, you will need to follow certain steps, including looking at the OWASP Top 10 and making sure you are free of issues like cross-site scripting and SQL injection.
  3. The Platform: The architecture for phones and the security controls you have for memory management are very different from those on a PC, or in the data center, where people will not be able to physically access and reverse engineer things. It's something you have to take into consideration when you are deciding how you are going to review your code and the risk that it represents.
  4. Programming Language: A lot of software is still developed in C++. Depending on the language you use, you may not have proper built-in protections against certain classes of vulnerabilities, so you have to make sure you are doing something to compensate for the flaws of the language.
  5. Risk Profile: Each business has its own risk profile and the kind of attacks they are willing to endure. For example, DDoS could be a big problem for some companies versus others, and for some companies, even if they have a data breach, it might not matter as much as for other companies depending on the type of business. For example, if you’re in the TV streaming business and a single episode of Game of Thrones leaks, it likely won’t have a big impact, but if you’re in the movie business and one of your movies leaks, then that will likely affect revenue for that movie.
  6. Budget: Microsoft, Google, and other companies with large budgets have employee positions that don’t exist anywhere else. For example, when Goldschmidt was at Symantec, they had a threat research lab, which is a luxury. Start-ups and many other companies might not have this and might need to use augmented security options.
  7. Company Culture: The maturity of the company's culture also matters quite a bit. Security is not a one-time activity that you can do at a given point, but something that ends up becoming part of your culture.

Today, there are a lot of tools and resources on the market, such as Agile Security by O'Reilly, that will tell you how to do things in a way that really fits the newer models people are using for developing code.

Security Software Versus Software Security

Security software is software used to defend your computer, such as antivirus, firewalls, IDS, and IPS. These are really important, but that doesn't mean they are necessarily secure software or that they were developed with defensive programming in mind. Secure software, meanwhile, is software developed to resist malicious attacks.

Goldschmidt said he often hears that people who make security software don’t necessarily make secure software. In his experience though, security software is so heavily scrutinized that it eventually becomes secure software. For example, antivirus software is a great target for hackers because if an attacker can get in and disable that antivirus, they can ultimately control the system. So, from his experience, security software does tend to become more secure, although it’s not necessarily true all the time.

One inherent benefit I’ve noticed for companies developing security software is that they’re in the business of security, so the engineers and developers they’re hiring are already very savvy when it comes to understanding security implications. Thus, they tend to focus on making sure at least some of the most common and basic issues are covered by default, and they’re not going to fall prey to basic issues.

If an individual doesn’t have this experience when they join a company developing security software, it becomes part of their exposure and experience since they are spending so much time learning about viruses, malware, vulnerabilities, and more. They inherently learn this as part of their day to day – it’s almost osmosis from being around other developers who are constantly thinking about it.

One of my mentors described the difference between security software and secure software to me this way: Security software is software that’s going to protect you as the end user from getting breached. Software security is making sure that your developers are developing the software in a manner that the software is going to behave when an attacker is trying to make it misbehave.

Goldschmidt and I also spent time discussing the cyber security of the Brazilian elections. You can listen to the podcast here to learn more.


TechTarget: 3 common election security vulnerabilities pros should know

On October 1, NetSPI Managing Director Nabil Hannan was featured in TechTarget:

During the Black Hat 2020 virtual conference, keynote speaker Matt Blaze analyzed the security weaknesses in our current voting process and urged the infosec community – namely pentesters – and election commissions to work together. His point: Testers can play an invaluable role in securing the voting process as their methodology of exploring and identifying every possible option for exploitation and simulating crisis scenarios is the perfect complement to shore up possible vulnerabilities and security gaps.

Read the full article here.

