Ethereum Virtual Machine Internals – Part 1

The Ethereum Virtual Machine (EVM) functions as the sandboxed compatibility layer used by the thousands of nodes that constitute the Ethereum distributed state machine, ensuring that smart contracts are executed deterministically in a platform-independent environment.  

The EVM operates on the execution layer of the Ethereum protocol, and all operations performed within the virtual machine are recorded on the blockchain and can be verified by any node in the network. This allows for full transparency and immutability of the data processed by the EVM, ensuring the integrity of the network. In this post, we will look at the inner workings of the EVM in detail. 

In this three-part series, we will first provide an overview of volatile memory management, function selection, and potential vulnerabilities that may arise from insecure bytecode execution within the EVM. In Part 2, we will dive deeper into persistent EVM storage, exploring how data is stored and retrieved by smart contracts. Finally, Part 3 will focus on message calls in Solidity and the EVM, and vulnerabilities that stem from insecure message calls and storage mismanagement. 

Ethereum Network & Smart Contract Recap 

Upon receiving incoming transactions, Ethereum network validators first verify that they are well-formed and properly signed. Validators then execute the smart contract code contained in transactions to verify the correctness and consistency of the results. If the output is valid, validators propagate the transaction output to additional validators to reach consensus.  

Pending transactions are stored in lists called mempools, where they sit until they are added to a block. Once consensus is reached, valid transactions from the mempools are added to the next block to be added to the blockchain. Once the block is added to the blockchain, the transactions it contains are considered final and other nodes in the network can rely on the state of the blockchain to perform further transactions or execute smart contracts1. 

Smart contract execution occurs in the EVM implementations running on Ethereum network validators. The EVM is not an actual virtual machine that you would launch via VMware or similar. It is more akin to the Java Virtual Machine in that it primarily functions to translate high-level smart contract code into bytecode for the purposes of portable execution. Common EVM implementations include the Golang-based geth2 client and the Python-based py-evm client3. EVM smart contracts are usually written in high-level domain-specific languages such as Solidity and Vyper4. 

Below is a Solidity smart contract representing a short CTF challenge that we will follow through its execution in the EVM. It has at least two significant vulnerabilities, so please do not use this contract for anything other than educational purposes. The goal of this challenge is to take ownership of the contract via exploiting the two issues:


pragma solidity >=0.7.0 <0.9.0; 

contract DodgyProxy { 
    address public owner; 

    constructor() { 
        owner = msg.sender; 
    } 

    modifier onlyOwner { 
        require(msg.sender == owner, "not owner!"); 
        _; 
    } 

    function deleg() private onlyOwner { 
    } 

    struct Pointer { function () internal fwd; } 

    function hitMe(uint offset) public { 
        Pointer memory p; 
        p.fwd = deleg; 
        assembly { mstore(p, add(mload(p), offset)) } 
        p.fwd(); 
    } 

    function inc(uint _num) public pure returns (uint) { 
        return _num++; 
    } 
} 
These issues have been introduced because they are a great way of getting to grips with EVM memory quirks. The rest of this article will center around this contract as it is executed through the EVM. While one of the issues is well-documented, the original inspiration for the second issue can be traced back to a CTF challenge deployed to the Ropsten testnet in June 20185 by Reddit user u/wadeAlexC6.  


At its core, the EVM is a stack-based virtual machine that operates on bytecode derived from higher level languages used to write EVM-compatible smart contracts. Below is the bytecode for the example contract, compiled with version 0.8.17 of the Solc Solidity compiler with default optimizations7:


Let’s see how this bytecode ends up looking as it starts moving through EVM’s architecture, by first looking at some key EVM data structures. 

EVM Architecture 

Smart contract execution occurs inside the EVM instances of Ethereum network validators. When a smart contract starts execution, the EVM creates an execution context that includes various data structures and state variables that are described below. After execution has finished, the execution context is discarded, ready for the next contract. Here is a high-level overview of an EVM execution context: 

Ethereum Virtual Machine (EVM) Execution Context


Stack 

The EVM’s stack serves as a Last-In-First-Out (LIFO) data structure for storing temporary values during the execution of a smart contract. As of writing, the stack operates with a maximum of 1024 32-byte elements8. These elements may include control flow information, storage addresses, and the results and parameters for smart contract instructions. 

The main instructions that operate on the stack are the PUSH opcode variants, the POP opcode, as well as the DUP and SWAP opcode variants. These allow elements to be added, removed, duplicated, and swapped on the stack respectively. 
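A minimal Python model of these operations, including the 1024-element depth limit, may make the behavior concrete. This is an illustrative sketch only, not how any real client implements the stack:

```python
# Minimal model of the EVM stack; illustrative only, not a real client.
class EvmStack:
    MAX_DEPTH = 1024  # pushing beyond this causes a stack overflow error

    def __init__(self):
        self.items = []

    def push(self, value):
        if len(self.items) >= self.MAX_DEPTH:
            raise OverflowError("stack limit of 1024 elements exceeded")
        self.items.append(value & (2**256 - 1))  # elements are 32 bytes wide

    def pop(self):
        return self.items.pop()

    def dup(self, n=1):
        # DUPn copies the n-th element from the top onto the top of the stack
        self.push(self.items[-n])

    def swap(self, n=1):
        # SWAPn exchanges the top element with the (n+1)-th element
        self.items[-1], self.items[-1 - n] = self.items[-1 - n], self.items[-1]

s = EvmStack()
s.push(0x73c768d7)
s.dup()            # stack now holds two copies of the selector
print(hex(s.pop()))  # -> 0x73c768d7
```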


Code 

The code region stores a contract’s bytecode in its entirety. This region is read-only. 


Storage 

The Storage region is a persistent key-value store. Keys and values are both 32-byte slots where permanent but mutable data that forms part of the contract’s state is stored. Data in Storage is persistent in that it is retained between calls. This includes state variables, structs and local variables of structs. Possible uses for the Storage area include storing and providing access to public data such as token balances, and giving libraries access to storage variables. However, contracts cannot arbitrarily access each other’s Storage locations. The relevant opcodes for operating on Storage are the SSTORE and SLOAD opcodes, which allow writing and reading 32 bytes to and from Storage respectively. 
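Conceptually, Storage behaves like a mapping of 32-byte keys to 32-byte values in which unset slots read as zero. A stdlib-only Python sketch of the SSTORE/SLOAD semantics:

```python
# Storage modeled as a mapping of 32-byte keys to 32-byte values.
# Unset slots read as zero, mirroring SLOAD semantics.
WORD_MASK = 2**256 - 1

class Storage:
    def __init__(self):
        self.slots = {}

    def sstore(self, key, value):
        self.slots[key & WORD_MASK] = value & WORD_MASK

    def sload(self, key):
        # reading an untouched slot yields zero, not an error
        return self.slots.get(key & WORD_MASK, 0)

st = Storage()
# e.g. slot 0 might hold the `owner` state variable
st.sstore(0, 0x5B38Da6a701c568545dCfcB03FcB875f56beddC4)
print(hex(st.sload(0)))
print(st.sload(1))  # untouched slot reads as zero
```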


Memory 

The Memory region is a volatile, dynamically sized byte array used for storing intermediate data during the execution of a contract’s functions. It is akin to allocated virtual memory in classic execution contexts. More specifically, the Memory section holds temporary yet mutable data necessary for the execution of logic within a Solidity function. At a low level, the MLOAD and MSTORE opcode variants are responsible for reading and writing to Memory respectively. Like Storage, data in the Memory section can be stored and read in 32-byte chunks. However, the MSTORE8 opcode can be used to write the least significant byte of a 32-byte word9. 


Calldata 

The Calldata region is like the Memory region in that it is also a volatile data store; however, it instead stores immutable data. It is intended to hold data sent as part of a smart contract transaction10. Therefore, data stored here can include function arguments and the constructor code when creating new contracts within existing contracts. The CALLDATALOAD, CALLDATASIZE and CALLDATACOPY opcodes can be used to read Calldata at various offsets. Calldata is formatted in a specific way so that individual function arguments can be isolated from it. We will go into this format in more detail later. 

As data stored here is immutable, function arguments that are of simple data types such as unsigned integers are automatically copied over to Memory within a function, so that they can be modified. This does not apply to strings, arrays and maps, which need to be explicitly marked with memory or storage in function arguments, depending on whether they are to be modified during a function’s execution11. 

Program Counter 

The Program Counter (PC) is similar to the RIP register in x86-64 assembly, in that it points to the next instruction to be executed by the EVM. The PC usually advances by the size of the instruction just executed: one byte for most opcodes, more for the PUSH opcode variants, which carry immediate data. Exceptions to this include the JUMP opcode variants, which relocate the PC to positions specified by data at the top of the stack. 
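A minimal sketch of this advancing behavior in Python, assuming only the PUSH range needs special handling (real clients cover every opcode):

```python
# Sketch: advancing the program counter over EVM bytecode.
# PUSH1..PUSH32 (opcodes 0x60..0x7F) carry 1..32 bytes of immediate
# data that the PC must skip; most other opcodes are a single byte.
def next_pc(bytecode: bytes, pc: int) -> int:
    op = bytecode[pc]
    if 0x60 <= op <= 0x7F:           # PUSH1..PUSH32
        return pc + 1 + (op - 0x5F)  # opcode byte + immediate bytes
    return pc + 1                    # JUMP/JUMPI would instead set pc from the stack

code = bytes.fromhex("6000")  # PUSH1 0x00
print(next_pc(code, 0))       # -> 2 (opcode plus one immediate byte)
```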

Global Variables 

The EVM also keeps track of special variables in the global namespace. These are used to provide information about the blockchain and current contract context. There are quite a few global variables12, but you might recognize some of the following global variables from our smart contract example: 

  • msg.sender – the address of the sender of the current call. 
  • msg.value – the value in Wei sent with the current call. 
  • msg.data – the current calldata. 
  • tx.origin – the original external account that started a transaction chain. 

Return Data 

The Return Data section stores the return value of a smart contract call. It is read and written to by the RETURNDATASIZE/RETURNDATACOPY and RETURN/REVERT opcodes respectively. 

Gas & Beating the Solidity Compiler 

Every opcode in the EVM has an opportunity cost associated with its execution. This is measured in “gas”. It is important to note that gas is not the same as Ether, in that it cannot be directly bought and sold as a native cryptocurrency. However, it is paid in Gwei (1 Gwei = 10⁻⁹ Ether). It is simply a unit of measurement for the work the EVM must do to execute a particular instruction.  

Gas also exists to incentivize efficient smart contract code. To save on gas, Solidity developers sometimes write EVM assembly themselves mid-contract, by dropping into an intermediate language called Yul13. Yul is similar to literal EVM assembly; however, it allows for additional control flow (loops, conditional statements, etc.) so developers do not have to PUSH and POP their way up and down the stack. This is quite common, because Solidity is not yet a very well-established language and therefore Solidity compiler implementations are not as efficient as they could be. 

The EVM execution context takes into account gas limits set for the current transaction and the gas costs for executing each opcode. Gas and gas management are complex topics in EVM development, as smart contract end users are the ones who bear the brunt of gas costs during execution. Cheaper gas costs tend to incentivize users to choose one contract over another.  

For further information on gas, Section 5 of the Ethereum Yellowpaper14 is highly recommended.  
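As one concrete example of how such costs are specified, the Yellow Paper defines the memory expansion cost as C_mem(a) = 3a + ⌊a²/512⌋, where a is the memory size in 32-byte words; expanding memory costs the difference between the new and old totals, so each additional word grows progressively more expensive. A small Python sketch:

```python
# Yellow Paper memory cost: C_mem(a) = 3*a + a*a // 512, where a is
# the memory size in 32-byte words. Expanding memory costs the
# difference between the new and old totals, so each additional word
# becomes progressively more expensive.
def mem_cost(words: int) -> int:
    return 3 * words + words * words // 512

def expansion_cost(old_words: int, new_words: int) -> int:
    return mem_cost(new_words) - mem_cost(old_words)

print(expansion_cost(0, 1))   # first 32-byte word: 3 gas
print(expansion_cost(0, 32))  # first KiB: 98 gas
```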

In any case, while it is often necessary to drop into assembly/Yul for the sake of gas efficiency, it comes as no surprise that doing so the wrong way can have some interesting security implications. Specifically, writing contract logic in Yul/assembly can bypass some significant access control mechanisms only implemented in the higher-level Solidity, which we will demonstrate toward the end of this article.  

Application Binary Interfaces 

To map high-level Solidity to bytecode, the Solidity compiler generates an intermediate data structure known as an Application Binary Interface (ABI) from the contract. ABIs serve a similar role as APIs do for exposing methods and structures necessary to interact with back-end application services. Below is the ABI for our example contract:

      "inputs": [ 
                    "internalType": "uint256", 
                    "name": "offset", 
                    "type": "uint256" 
      "name": "hitMe", 
      "outputs": [], 
      "stateMutability": "nonpayable", 
      "type": "function" 
      "inputs": [], 
      "stateMutability": "nonpayable", 
      "type": "constructor" 
      "inputs": [ 
                    "internalType": "uint256", 
                    "name": "_num", 
                    "type": "uint256" 
      "name": "inc", 
      "outputs": [ 
      "internalType": "uint256", 
      "name": "", 
      "type": "uint256" 
      "stateMutability": "pure", 
      "type": "function" 
      "inputs": [], 
      "name": "owner", 
      "outputs": [ 
                    "internalType": "address", 
                    "name": "", 
                    "type": "address" 
      "stateMutability": "view", 
      "type": "function" 

ABIs are comprised of the following elements: 

  • name: defines function names. 
  • type: defines the type of function. This is necessary to differentiate between regular functions, constructors, and specialized function types such as receive and fallback. 
  • inputs: an array of objects that themselves define argument names and types.  
    • Note that both types and internalTypes are defined. This is because there are subtle differences in the way that certain data types are referenced in Solidity versus ABIs. For example, an input of type struct in Solidity would have an internalType as tuple15.   
  • outputs: similar to inputs, the outputs array includes objects that denote function return value names and types. 
  • stateMutability: denotes any function mutability attributes such as pure and view. These are needed to ascertain whether the function is intended to modify on-chain data, or is simply a getter function that returns existing values. payable and nonpayable modifiers are also denoted here. 
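Since an ABI is plain JSON, these elements are easy to inspect programmatically. Below is a stdlib-only Python sketch that rebuilds canonical function signatures (name plus argument types, no argument names) from an abridged copy of the ABI above:

```python
import json

# Abridged copy of the DodgyProxy ABI shown above.
abi_json = '''[
  {"inputs": [{"internalType": "uint256", "name": "offset", "type": "uint256"}],
   "name": "hitMe", "outputs": [], "stateMutability": "nonpayable", "type": "function"},
  {"inputs": [], "stateMutability": "nonpayable", "type": "constructor"},
  {"inputs": [], "name": "owner",
   "outputs": [{"internalType": "address", "name": "", "type": "address"}],
   "stateMutability": "view", "type": "function"}
]'''

def signatures(abi: str):
    # Build canonical signatures: function name + comma-separated
    # argument types, skipping non-function entries like constructors.
    sigs = []
    for entry in json.loads(abi):
        if entry["type"] != "function":
            continue
        args = ",".join(i["type"] for i in entry["inputs"])
        sigs.append("%s(%s)" % (entry["name"], args))
    return sigs

print(signatures(abi_json))  # -> ['hitMe(uint256)', 'owner()']
```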

Data derived from an ABI is necessary to encode function calls in a low-level format that can be parsed by the EVM; this encoded data is what is referenced in the Calldata region of a contract’s execution context. 

EVM Function Selection & Calldata 

Let’s say we have compiled and deployed our bytecode to a testnet, and now we want to call a function exposed by the contract’s ABI. To do this, the EVM formats function arguments into calldata. Calldata is a standard way of representing function calls, and it is referenced in a transaction via the msg.data global variable. 

Calldata is comprised of the following: 

  • A function selector 
  • Optional function arguments 

Contracts expose and identify public functions by means of function selectors. Function selectors are (mostly) unique 4-byte identifiers that allow the EVM to locate and call function logic as it is represented in bytecode. 

By “public functions,” we mean functions that are denoted with the public function visibility keyword in Solidity. To recap, below are the available visibility keywords for functions and state variables: 

  • public: function is visible to contract itself, derived contracts, external contracts, and external addresses. 
  • private: function is only visible to contract itself. 
  • internal: function is visible to contract itself and derived contracts. 
  • external: function is visible to external contracts and external addresses. Not available for state variables. 

Function arguments are encoded alongside the function selector to form the complete calldata. Most data types are encoded in discrete 32-byte chunks. Note that 32 bytes is the minimum; simple types like uint and bool arguments will result in 32 bytes each, whereas string, byte and array types are encoded according to their length and whether they are fixed or dynamically sized. 

Consider the following function definition, called like foo(16,7): 

function foo(uint64 a, uint32 b) public view returns (bool) {}

To derive calldata, the EVM does the following: 

  1. Take the first 4 bytes of the Keccak256 hash of the ASCII representation of the function, ignoring the argument variable names. This representation of a function is called a function signature. 
  •  e.g. keccak256(“foo(uint64,uint32)”) → 0xb9821716 
  2. Pad each function argument to 32 bytes and append it after the function selector. E.g. 
  • uint64 16 → 0x00…10 
  • uint32 7 → 0x00…07 
  3. Altogether, the foo function selector and calldata would be:

We can confirm this with the following short contract, which uses the abi.encodeWithSignature method to produce the function selector given the foo() function signature and its arguments. It will then emit the result (enc) in an event called Encoded. Events are a means to include additional details in transaction logs, and are useful for granularly committing specific occurrences on-chain:

pragma solidity >=0.7.0 <0.9.0; 

contract AbiEncodeTest { 

    event Encoded(bytes); 

    function GetCallData() public { 
        bytes memory enc = abi.encodeWithSignature("foo(uint64,uint32)", 16, 7); 
        emit Encoded(enc);
    } 
} 
We will compile and deploy this contract, and then call the GetCallData method with the Brownie development environment16: 

Events In This Transaction 
└── AbiEncodeTest (0x9E4c14403d7d9A8A782044E86a93CAE09D7B2ac9) 
    └── Encoded 
        └── : 0xb982171600000000000000000000000000000000000000
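The same calldata can be assembled offline with a short, stdlib-only Python sketch. Note that Python’s hashlib provides SHA3-256 but not the keccak-256 variant the EVM uses, so the selector 0xb9821716 derived above is taken as given rather than recomputed:

```python
# Build calldata by hand: 4-byte selector + each uint argument
# left-padded to a full 32-byte word. The selector 0xb9821716 for
# foo(uint64,uint32) is taken as given; Python's hashlib has
# SHA3-256 but not the keccak-256 the EVM uses.
def encode_call(selector: bytes, *uint_args: int) -> bytes:
    out = selector
    for arg in uint_args:
        out += arg.to_bytes(32, "big")  # pad simple types to 32 bytes
    return out

calldata = encode_call(bytes.fromhex("b9821716"), 16, 7)
print("0x" + calldata.hex())
print(len(calldata))  # 4 + 32 + 32 = 68 bytes
```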

It should also be noted that public state variables are given their own selectors and are treated as getters by the compiler17. For instance, a public state variable named owner will have a selector derived from keccak256(“owner()”), i.e. 0x8da5cb5b. It can be accessed from other contracts that import DodgyProxy as DodgyProxy.owner(). 

If the bytecode of the AbiEncodeTest and DodgyProxy contracts is compared, a common section of bytecode is present, shown below. The 4-byte function selectors for the GetCallData() and hitMe() functions immediately follow this common section of bytecode: 

AbiEncodeTest: 60003560e01c806301cc20f114602d57

DodgyProxy: 60003560e01c806373c768d71461004657

This bytecode represents the function selection logic that the EVM uses to identify public functions. The following opcodes are responsible for function selection in DodgyProxy:

026→ 60→ PUSH1 0x00
029→ 60→ PUSH1 0xe0 
031→ 1C→ SHR 
032→ 80→ DUP1 
033→ 63→ PUSH4 0x73c768d7 // hitMe(uint256) public function selector 
038→ 14→ EQ 
039→ 61→ PUSH2 0x0046 
042→ 57→ *JUMPI 
043→ 80→ DUP1 
044→ 63→ PUSH4 0x812600df // inc(uint256) public function selector 
049→ 14→ EQ 
050→ 61→ PUSH2 0x005b 
053→ 57→ *JUMPI 
054→ 80→ DUP1 
055→ 63→ PUSH4 0x8da5cb5b // owner getter function selector 
060→ 14→ EQ 
061→ 61→ PUSH2 0x0081 
064→ 57→ *JUMPI 

Let’s follow execution of this bytecode snippet, given calldata for a call to inc() with a uint256 argument of 1: 


Running this function call through the GetCallData function in the AbiEncodeTest contract will result in the following calldata. This can also be derived manually following the same method we went through earlier: 


If you want to follow along, smlXL18 has a brilliant EVM playground, where bytecode execution can be modeled for learning and debugging purposes. You may follow along with the function selection logic bytecode here, given calldata for a call to inc(uint256) with an argument of 1.  

First, the PUSH1 opcode pushes a value of 0x00 to the stack. This value functions as an offset for the next opcode, CALLDATALOAD:


CALLDATALOAD loads the first 32 bytes of the msg.data global variable onto the stack. Here we can see that the function selector for inc(uint256) is included in these first 32 bytes: 

The function selector for inc(uint256) is included in these first 32 bytes.

However, the EVM now needs to parse the 4-byte function selector out of the rest of the calldata chunk. It does this by bitshifting the calldata chunk to the right until only the 4-byte function selector remains. Since the calldata chunk is 256 bits (32 bytes) and the selector is 32 bits, it needs to be shifted right by 224 bits (0xE0 in hex). 

This is done by first pushing 0xE0 to the stack with PUSH1, and shifting right with SHR. After the shift, the offset will be popped, leaving the clean function selector as the sole element on the stack:

Trimming calldata to obtain the first 4 bytes of the function selector.
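This extraction can be mirrored in plain Python by treating the first 32 bytes of calldata as a 256-bit integer and shifting it right by 0xE0 (224) bits, a stdlib-only illustration:

```python
# Extracting the 4-byte selector: load the first 32 bytes of calldata
# as a 256-bit word, then shift right by 224 bits (0xE0).
calldata = bytes.fromhex(
    "812600df" + "01".rjust(64, "0"))   # inc(uint256) selector + argument 1
word = int.from_bytes(calldata[:32], "big")  # CALLDATALOAD at offset 0
selector = word >> 0xE0                      # SHR by 224 bits
print(hex(selector))  # -> 0x812600df
```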

Selecting a Function 

To understand the significance of the next few opcodes, it helps to know that the EVM performs function selection as a kind of switch statement: each candidate function selector is pushed onto the stack and compared against the calldata-derived selector beneath it. In doing so, the EVM sequentially checks whether the incoming calldata targets a valid function. 

This is why the DUP1 opcode is used to duplicate the function selector on the stack before the next function selector is pushed to the stack with the PUSH4 opcode. Note that this function selector is not derived from any calldata; it is the first function selector to be placed on the stack by the EVM itself. Here, the function selector for the hitMe(uint256) (0x73c768d7) function will eventually be pushed:

SHR has right-shifted the calldata, removing the previous offset.

The EQ opcode is then used to compare the last two items on the stack. If a match is found, the last two items are popped off the stack and a value of 1 replaces them as a positive comparison result. If not, then a value of 0 replaces them instead.  

Here, a value of 0 will be pushed to the stack as 0x812600df and 0x73c768d7 do not match. Note that the original function selector derived from our calldata still remains below the comparison result, ready to eventually be compared to the next function selector:

Duplicating calldata function selector on the stack.
Pushing the next function selector to the stack to compare against the calldata's function selector.

Now that a result has been found, a decision needs to be made as to whether to execute the function represented by the function selector. In this case, function execution will not occur yet because the calldata’s function selector does not match the first one encountered by the EVM.

Function execution will not occur yet because the calldata’s function selector does not match the first one encountered by the EVM.

However, the EVM still needs to make another comparison to decide whether to jump to the function selector’s corresponding function logic before it can start comparing the next function selector. This comparison is done by first pushing an offset to the start of the function logic to the stack with the PUSH2 opcode.

The jump will not be taken here as the previous comparison resulted in 0.

The JUMPI opcode is then used. Note that JUMPI represents a conditional jump, whereas the JUMP instruction only requires an offset to jump to and is used when an unconditional jump is necessary. The JUMPI instruction first pops the offset and result of the EQ comparison off the stack. If the comparison result is 1, then the program counter will be changed to the offset, which denotes the start of the function logic to be executed. Here, the comparison result is 0, so execution will continue without changing the program counter. 

The previous bytecode is then more or less repeated from the DUP1 opcode: duplicating the calldata-derived function selector, pushing the next function selector to be compared on the stack, comparing the two function selectors, placing an offset to the next section of function logic, and then deciding whether to jump to said function logic depending on the outcome of the comparison. 
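The repeated compare-and-jump pattern can be sketched as a loop in Python, using the selectors and jump destinations from the DodgyProxy bytecode above. This is illustrative only; the real EVM performs it with the DUP1/PUSH4/EQ/PUSH2/JUMPI sequence:

```python
# Sketch of the dispatch "switch": compare the calldata selector
# against each known selector in turn; a match yields the jump
# destination for that function's logic. Offsets mirror the
# DodgyProxy bytecode shown earlier.
DISPATCH = [
    (0x73C768D7, 0x46),  # hitMe(uint256)
    (0x812600DF, 0x5B),  # inc(uint256)
    (0x8DA5CB5B, 0x81),  # owner() getter
]

def select(selector):
    for known, dest in DISPATCH:
        # one DUP1 / PUSH4 / EQ / PUSH2 / JUMPI round per candidate
        if selector == known:
            return dest   # JUMPI taken: pc moves to the JUMPDEST
    return None           # no match: fall through to the revert path

print(hex(select(0x812600DF)))  # -> 0x5b
```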

This is shown below. This time, the inc(uint256) function selector is compared, so execution will continue at offset 0x5B (91) in the bytecode. 


If we follow execution past the function selection logic, the first opcode we land on will be the JUMPDEST opcode, found at position 91 in the bytecode.  

091→ 5B→ JUMPDEST 
092→ 61→ PUSH2 0x006e 
095→ 61→ PUSH2 0x0069 
099→ 60→ PUSH1 0x04 
101→ 61→ PUSH2 0x0178 

Think of the JUMP and JUMPDEST opcodes as opposite ends of a wormhole from one area in the bytecode to another. Every JUMP opcode must have a corresponding “landing” JUMPDEST for the jump to be valid. JUMPDESTs remove the need to dynamically assess starting points for function logic after a jump has been taken. 

It should also be noted that JUMPDESTs do not only denote the start of function logic. In fact, the placement of function logic in low-level bytecode bears little relation to its ordering in high-level Solidity.  

Aside – Function Signature Clashing 

Note: OpenZeppelin has a great post on this rare but interesting EVM quirk. 

Earlier, we referred to function selectors as “mostly” unique because it is not too uncommon for two or more distinct function signatures to share the same first four bytes of their keccak256 hash. The Ethereum Signature Database is a great example of this. The selector for the owner state variable getter function is actually the same as the function selector for ideal_warn_timed(uint256,uint128): 0x8da5cb5b19. 

The Solidity compiler is sophisticated enough to notice function signature clashes, so long as the relevant functions are in a single contract. In theory however, function signature clashes are possible between distinct contracts, such as between an implementation contract and a well-known pattern known as a proxy contract20. Potential security risks associated with proxy contract usage will be covered in a subsequent post. 

Private/Internal Functions 

Only public and external functions have function selectors created for them; private and internal functions do not receive function selectors. For example, a function selector for deleg() is not present in the bytecode. In fact, attempting to execute a private or internal function via a client library such as web3.py will not result in any sort of access error. Instead, an exception will be raised because the function selector does not exist21: 

import web3 
from solcx import compile_source 

# make sure you have a network provider, Ganache is good for this 
w3 = web3.Web3(web3.HTTPProvider('')) 

compiled_sol = compile_source('''
pragma solidity >=0.7.0 <0.9.0; 

contract DodgyProxy { 
    address public owner; 

    constructor() { 
        owner = msg.sender; 
    } 

    modifier onlyOwner { 
        require(msg.sender == owner, "not owner!"); 
        _; 
    } 

    function deleg() private onlyOwner { 
    } 

    struct Pointer { function () internal fwd; } 

    function hitMe(uint offset) public { 
        Pointer memory p; 
        p.fwd = deleg; 
        assembly { mstore(p, add(mload(p), offset)) } 
        p.fwd(); 
    } 

    function inc(uint _num) public pure returns (uint) { 
        return _num++; 
    } 
}''', output_values=['abi', 'bin']) 

contract_id, contract_interface = compiled_sol.popitem() 
w3.eth.default_account = w3.eth.accounts[0] 
Proxy = w3.eth.contract(abi=contract_interface['abi'], 
                        bytecode=contract_interface['bin']) 

# will succeed, as hitMe is public 
pub = Proxy.get_function_by_name("hitMe") 
print(pub) 

try: 
    # will fail, as cannot find deleg 
    priv = Proxy.get_function_by_name("deleg") 
except ValueError as v: 
    print("%s: deleg" %v)

Running the above script will result in:

❯ python3 

<Function hitMe(uint256)> 

Could not find any function with matching name: deleg 

However, a lack of function selectors for private/internal functions does not mean that private function logic is inaccessible internally, as private function logic still needs to be executable by the contract.  

At present, the following contract is valid Solidity, if somewhat redundant. A public function can make calls to private functions within the same contract and handle any results without issue:

pragma solidity >=0.7.0 <0.9.0; 

contract Pubpriv { 

    function priv() private returns (uint) { 
        return 1; 
    } 

    function pub() public returns (uint) { 
        return priv(); 
    } 
} 
Going back to our DodgyProxy, a similar pattern might be clear now, starting from hitMe()

    function deleg() private onlyOwner { 
    } 

    struct Pointer { function () internal fwd; } 

    function hitMe(uint offset) public { 
        Pointer memory p; 
        p.fwd = deleg; 
        assembly { mstore(p, add(mload(p), offset)) } 
        p.fwd(); 
    } 

When executing the hitMe() function as any account other than the contract owner, it is not possible to directly call the private deleg() function from hitMe() due to the custom onlyOwner modifier, which prevents the private function logic from being executed if the owner state variable is not set to the caller’s address (msg.sender).  

However, a not-so subtle memory corruption vulnerability of sorts has been introduced in the assembly block. If the hitMe() function is given the right input, it is possible to end up in the middle of the deleg() function and resume execution, despite the deleg() function being inaccessible due to the onlyOwner modifier, and restricted from being called externally due to the private function visibility. 

This is because function visibility, and indeed modifiers in general, only function as definitive access control in high-level Solidity: if execution is somehow redirected into the middle of a private function, there is no DEP/NX-like mechanism preventing execution by unauthorized contexts. 

The conditions necessary for this kind of flaw to be exploitable are unlikely to present themselves in most cases. Such occurrences are not outside the realm of possibility however, especially considering how often Yul is used to write low-level EVM opcodes directly into contracts to encode operations in such a way as to save on gas fees versus the compiler’s own bytecode.  

With that in mind, the general idea behind this flaw is that the Pointer struct essentially functions as a “base” of sorts in memory, to which an offset can be added. By specifying particular offsets, the struct’s fwd() function can be used as a jump pad to various JUMPDESTs, some of which belong to function logic that would otherwise be prevented from executing via conventional control flow in high-level Solidity. To understand more about how memory is referenced during bytecode execution, the Memory section of the EVM call context needs to be revisited in more detail.  

More About Memory 

As a dynamically sized byte array, the Memory area of the EVM call context can be read and written in discrete 32-byte chunks. There is also the concept of “touched” and “untouched” memory, where newly touched chunks of memory accrue increasing gas costs22. Before new memory can be utilized, a section of utility memory is reserved to keep track of subsequent memory writes. 

This utility memory is comprised of four 32-byte slots across three main sections, starting from the beginning of the Memory array23: 

  • bytes 0x00-0x3f: scratch space mainly used to store intermediate outputs from hashing operations such as keccak256(). Note that this section is allocated as two 32-byte slots. 
  • bytes 0x40-0x5f: space reserved for the 32-byte “free memory pointer.” This is used as a pointer to unallocated memory for use during contract execution.  
  • bytes 0x60-0x7f: the 32-byte “zero slot,” which is used as the initial value for dynamic memory arrays and should never be written to. 

Initially, the free memory pointer points to position 0x80 in memory, after which point additional memory can be assigned without overwriting memory that has already been allocated to other data. As such, the free memory pointer also functions as the currently allocated memory size. 

The free memory pointer also functions as the currently allocated memory size.
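The reserved layout and the pointer-bumping behavior can be modeled with a short, stdlib-only Python sketch. This is a simplification; the real allocation logic is emitted by the Solidity compiler:

```python
# Model of Solidity's reserved memory layout and free memory pointer.
# Memory is a flat byte array; the pointer stored at 0x40 starts at
# 0x80 and is bumped as new chunks are allocated.
class Memory:
    def __init__(self):
        self.data = bytearray(0x80)  # reserved utility memory (0x00-0x7f)
        self.data[0x40:0x60] = (0x80).to_bytes(32, "big")  # free memory pointer

    def free_ptr(self) -> int:
        return int.from_bytes(self.data[0x40:0x60], "big")

    def allocate(self, size: int) -> int:
        ptr = self.free_ptr()
        self.data.extend(b"\x00" * size)  # "touch" new memory
        self.data[0x40:0x60] = (ptr + size).to_bytes(32, "big")
        return ptr

m = Memory()
print(hex(m.allocate(0x20)))  # first allocation lands at 0x80
print(hex(m.free_ptr()))      # pointer bumped to 0xa0
```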

Exploiting the first vulnerability in the smart contract is contingent on manipulating the value of the free memory pointer to redirect control flow and eventually end up in the middle of the protected deleg() function. It is useful to follow along with the execution of the DodgyProxy contract in the Remix IDE to understand the next sections.  

First, compile the contract with an optimization value of 200 and a compiler version of 0.8.17. Changing optimization settings and compiler versions may increase or decrease the offsets slightly, but the general bytecode logic should remain consistent regardless of optimization. Optimization was chosen as the standalone Solc compiler enabled optimization by default24, whereas Remix does not enable optimization by default. 

Deploy the contract using any account. Then, attempt to call the hitMe() function with an argument of 0, using the same account that was used to deploy the contract (0x5B38Da6a701c568545dCfcB03FcB875f56beddC4). This will result in a successful function call:

Call succeeds because the contract owner has called hitMe().

Then attempt to do the same from any account other than the contract owner. Here, account 0xAb8483F64d9C6d1EcF9b849Ae677dD3315835cb2 was used, representing the address of a malicious actor. The transaction will revert: 

Call fails as the attacker is not yet the contract owner.

From the attacker’s account, make a call to the hitMe() function with a uint argument of 1 and debug the failed transaction: 

Debug the failed transaction.

Execution starts with the values 0x80 and 0x40 being pushed to the stack:

Execution starts with the values 0x80 and 0x40 being pushed to the stack.

The free memory pointer is then initialized by the MSTORE operation, which stores the value 0x80 at memory location 0x40.

Initializing the free memory pointer.

Then, place a breakpoint at the initialization of the Pointer struct on line 22 and continue execution: 

Examining the Pointer struct.

Execution until this breakpoint will involve many operations, including the function selection logic described earlier. Once the breakpoint is reached, the stack is seen to contain some familiar values, namely the function selector for hitMe(uint256), and an argument of 1:

Calldata represented on the stack.

Place a second breakpoint at the start of the assembly block on line 24 and continue execution. A value of 0xE2 is eventually pushed to the stack: 

Pushing function variables to the stack.

The third value on the stack, 0x01, is then duplicated to the top of the stack by the DUP3 opcode.

Duplicating the calldata argument.

The ADD opcode then adds 0x01 to 0xE2 and pushes 0xE3 onto the stack. Note that the bytecode generated by the compiler will differ slightly from the exact assembly due to compiler optimizations and translation from Yul:

Adding to the previously pushed value.

The last two arguments on the stack are then duplicated by the DUP1 and DUP3 opcodes, in preparation for the MSTORE opcode to store the value 0xE3 at memory location 0x80:

Setting up arguments to MSTORE.
Storing a new value at the free memory pointer.

This follows the logic of the assembly block on line 24. Therefore, the value that was just stored at the free memory pointer is most likely the location in memory of the Pointer struct first instantiated on line 26.  
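The opcode sequence just traced can be replayed with a toy stack machine. This is a simplified sketch: the real trace uses DUP3 because additional values (the function selector, jump return addresses) sit on the stack, but the data flow is the same:

```python
def replay(arg):
    """Toy replay of the traced sequence for hitMe(arg): push the
    constant 0xE2, duplicate the calldata argument, ADD them, then
    MSTORE the sum where the free memory pointer points (0x80)."""
    stack = [0x80, arg]      # free-memory address and calldata argument
    memory = {}
    stack.append(0xE2)       # PUSH 0xE2 (constant from the bytecode)
    stack.append(stack[-2])  # DUP2 here (DUP3 in the real trace)
    a, b = stack.pop(), stack.pop()
    stack.append(a + b)      # ADD: arg + 0xE2
    value = stack.pop()
    memory[0x80] = value     # MSTORE at 0x80
    return memory[0x80]

# hitMe(1) stores 0xE3 and hitMe(2) stores 0xE4, matching the debugger.
```

The key observation for the exploit is that the stored value is fully attacker-controlled: it is always the constant 0xE2 plus whatever argument is supplied to hitMe().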

Repeating these debugging steps by sending calldata of hitMe(2) will confirm this, as the free memory pointer will now store a value of 0xE4, one more than our previous value owing to the incremented offset argument to hitMe():

Confirming that the argument (0x2) to hitMe() is added to a constant value (0xE2).
The free memory pointer will now store a value of 0xE4.

Continuing execution with arbitrary arguments to the hitMe() function will result in a failed transaction, as a JUMP will eventually be made to an invalid location in the bytecode. However, with granular control over the value at the free memory pointer, an offset can be calculated to bypass access control modifiers protecting the deleg() function.  

To do this, the contract’s bytecode can be examined for opcodes that correspond to sections of high-level Solidity that we would want to end up in. This is simple in this case, as the correct section of bytecode will have the only DELEGATECALL opcode, since it is only called once in this contract: 

302  60  PUSH1 0x40 
307  60  PUSH1 0x00 

However, recall from earlier that for control flow to be altered by means of a jump, a corresponding JUMPDEST opcode must be present at the target location, just as with normal function selection. This is why the challenge was set up as a struct with an internal function.  

Without a means to set a function pointer, it would be much more difficult, if not practically impossible, to jump to another section of bytecode and break normal control flow. For this reason, this technique is also only likely to work if there are no opcodes between the JUMPDEST we land on and the target opcodes that interfere with execution. 

In this case, there do not appear to be any opcodes that would prevent us from eventually executing the DELEGATECALL operation. From here, the offset to take can be calculated by subtracting the Pointer struct’s location in memory from the bytecode offset of the nearest JUMPDEST preceding the DELEGATECALL opcode. 
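Locating the nearest JUMPDEST before the DELEGATECALL can also be scripted rather than done by eye. A minimal scanner sketch (an illustration, not a full disassembler) that skips PUSH immediates so data bytes are not mistaken for opcodes:

```python
def find_jump_target(bytecode):
    """Return the offset of the JUMPDEST (0x5B) nearest before the
    first DELEGATECALL (0xF4), or None if there is no DELEGATECALL.
    PUSH1..PUSH32 immediates are skipped so their data bytes are
    never misread as opcodes."""
    last_jumpdest = None
    pc = 0
    while pc < len(bytecode):
        op = bytecode[pc]
        if op == 0x5B:            # JUMPDEST
            last_jumpdest = pc
        elif op == 0xF4:          # DELEGATECALL
            return last_jumpdest
        if 0x60 <= op <= 0x7F:    # PUSH1..PUSH32 carry 1..32 data bytes
            pc += op - 0x5F
        pc += 1
    return None

# Toy bytecode: PUSH1 0xF4 (the 0xF4 data byte must be ignored),
# then JUMPDEST at offset 2, then a real DELEGATECALL at offset 3.
code = bytes.fromhex("60f45bf4")
print(find_jump_target(code))  # 2
```

Running the same scan over the DodgyProxy runtime bytecode would surface the JUMPDEST used in the next step.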

In our bytecode, the target JUMPDEST is found at offset 0x012D (301). The location of the Pointer struct is 0xE2 (226), giving a difference of 0x4B (75). This difference is added to the Pointer struct base in subsequent bytecode, before a jump is made to the JUMPDEST at 0x12D:

Calculating the correct offset to jump to the DELEGATECALL bytecode block.
The difference is added to the Pointer struct base in subsequent bytecode, before a jump is made to the JUMPDEST at 0x12D.
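The offset arithmetic above is simple enough to sanity-check in a couple of lines:

```python
jumpdest = 0x012D    # bytecode offset of the target JUMPDEST (301)
struct_base = 0xE2   # memory location of the Pointer struct (226)

# The difference is the argument the attacker passes to hitMe().
offset = jumpdest - struct_base
print(hex(offset), offset)  # 0x4b 75
```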

We can confirm that the delegatecall is then reached by placing a breakpoint at the line containing the delegatecall and calling the hitMe() function with an argument of 75. Execution will proceed to the delegatecall:

Bypassing the onlyOwner modifier.

At this point, we have successfully bypassed the onlyOwner modifier. To complete the challenge however, we need to take ownership of the contract by abusing the delegatecall to overwrite the owner state variable. If you have a bit of experience with Solidity, this should be relatively simple.

That’s all for this article. In Part 2 of this series, we will go into more detail on storage layout in the EVM. Part 3 will contain a detailed look at low-level contract calls, proxy contracts, and a few more examples of how DELEGATECALL can be abused to subvert control flow and take ownership of contracts.

Ethereum Virtual Machine Internals – Part 2 



  21. You will need a local Ethereum test net provider, along with the and solcx packages.

Sneak Peek: NetSPI 2023 Offensive Security Vision Report

In cybersecurity, the discovery of assets and vulnerabilities is table stakes. What makes offensive security valuable today is its ability to prioritize remediation of issues that matter most to a business.  

Modern security and development teams are inundated with challenges that demand their attention, leading to higher pressure in an already stressful role. What’s needed most is risk-based prioritization of vulnerabilities to help direct remediation efforts. NetSPI’s inaugural Offensive Security Vision Report delivers on this with data-backed prioritization of attack surfaces, vulnerabilities, and more.  

We worked hard to uncover an anonymous, yet impactful way to share the trends we’ve seen during more than 240,000 hours of annual pentesting — and we can’t wait to share our insights with you!


Our report is based on analysis of over 300,000 anonymized findings from thousands of 2022 pentest engagements. Here’s the approach we took:

  1. We identified the top 30 most prevalent vulnerabilities from our six core focus areas or “attack surfaces” [web, mobile, and thick applications, cloud, and internal and external networks]. Additional criteria include:
  • Only medium, high, and critical severities were reported.
  • There were multiple instances of the finding across different company environments.
  • The findings were exploitable on multiple occasions.  
  2. Then we asked our in-house offensive security experts to manually identify 3-5 findings that security teams should prioritize based on likelihood and impact.
  3. Lastly, we analyzed the data for key trends across attack surfaces and industries.

The vulnerabilities within are based on likelihood and impact – we recommend that any business with these attack surfaces test for and remediate the security concerns highlighted in our Vision Report. 

State of Remediation 

We also surveyed several cybersecurity leaders from around the world to gauge the current state of remediation. A key narrative throughout our report, and made evident in our survey results, is that a lack of resources and prioritization are the two greatest barriers to timely and effective remediation. Yet, survey data showed security teams have limited plans for hiring in the coming year, especially when it comes to entry-level cybersecurity talent.  

Even though security resources will remain tight, prioritization of efforts is one area where security leaders can take action to ease the strain of competing priorities that carry equal weight. Our report analyzed industries, attack surfaces, and vulnerabilities to distill the areas of highest potential risk for an organization to investigate and remediate. 

Let’s start with industries.

Top 3 industries with the largest percentage of high & critical vulnerabilities:

  • Government & Non-profit 
  • Healthcare 
  • Education 

Top 3 industries with the lowest percentage of high & critical vulnerabilities:

  • Energy & Utilities 
  • Financial Services 
  • Insurance 

On average, the highest volume of critical and high severity vulnerabilities was found within the government and non-profit industries. On the other hand, insurance and financial services had the lowest volume of the same type of vulnerabilities. We found it interesting that two of the most highly regulated industries landed at both ends of the spectrum in this data.  

We also asked survey respondents to share their average SLAs, or remediation due dates for the four severities. In the report, you’ll find data from your peers that can help you revise or benchmark your SLAs.

Vulnerabilities to Prioritize 

Our report analyzed six core areas: web, mobile, and thick applications, cloud, and internal and external networks. As detailed in the methodology, our expert offensive security team manually evaluated the top findings for each and identified the 3-5 vulnerabilities to prioritize for discovery and remediation.  

A complete list of all the vulnerabilities we researched, alongside detailed remediation tips from our team, is available in the full report.

During the analysis, we also examined overarching trends across the attack surfaces. Two major findings include:  

  • Web applications have a higher prevalence of high and critical vulnerabilities compared to mobile and thick applications. 
  • We also analyzed entry points, or vulnerabilities that were deemed exploitable, finding that internal networks have nearly three times more exploitable vulnerabilities than external networks. 

Dig into the Data for Yourself 

Remember, offensive security is only as valuable as its ability to help prioritize remediation of the issues that matter most to your business. Arm yourself and your team with the insights necessary to add prioritization to your remediation efforts.  

Our Vision Report covers:  

  • Impactful vulnerabilities that are most pervasive across core application, cloud, and network attack surfaces 
  • Which attack surface presents the least/most risk 
  • Industries that hold the lowest/highest risk 
  • Today’s requirements for remediation due dates 
  • The greatest barriers to timely and effective remediation 

VMblog: Industry Experts Share Hot Topics and Trends for HIMSS 2023

NetSPI was featured in VMblog’s pre-show coverage of the HIMSS conference. Read the preview below or view it online here.


HIMSS (Healthcare Information and Management Systems Society) is one of the largest conferences in the healthcare industry, bringing together industry leaders, experts, and enthusiasts from across the globe. The conference provides a platform for sharing the latest trends, technologies, and best practices in the healthcare IT sector.

The future of Healthcare IT and its impact on the healthcare workforce is going to be a hot topic discussed at HIMSS 2023. The integration of new technologies such as AI, Blockchain, and Telemedicine in healthcare will require a new set of skills and competencies among healthcare workers. HIMSS 2023 will provide a platform for industry leaders and experts to discuss the training and education programs needed to equip the healthcare workforce with the necessary skills to adapt to these changes.

HIMSS 2023 promises to be an exciting event, bringing together healthcare professionals, industry leaders, and enthusiasts to discuss the latest trends and technologies in healthcare IT. The conference will provide a platform for discussing the challenges and opportunities facing the healthcare industry and exploring how new technologies can be leveraged to improve patient outcomes, reduce costs, and enhance the overall quality of care.

Keep reading below as industry experts share their thoughts around the hot topics and trends they expect to hear more about at this year’s event.

Chad Peterson, Managing Director, NetSPI

“As ransomware attacks against the healthcare sector rise, it’s critical that organizations ensure they are remaining compliant with HIPAA. Last year, the Department of Health and Human Services’ (HHS) Office for Civil Rights (OCR) filed 22 HIPAA resolution agreements totaling over $1.12 million in settlement fines. A key issue is that HIPAA provides little guidance around the best practices to achieve compliance – leaving holes in healthcare organization’s security strategies. An often overlooked solution to this ongoing issue is penetration testing, which addresses the need to map, understand, and close gaps in an organization’s attack surface that could expose electronic protected health information (ePHI). Looking forward, healthcare security and IT teams must take a proactive mindset to HIPAA compliance. Organizations that implement comprehensive pentesting programs into their security programs will achieve better compliance and build resilience in the current threat landscape.”

Continue reading on VMblog:


Enterprise Security Tech: Hot Topics to Expect at HIMSS 2023

NetSPI was featured in Enterprise Security Tech’s pre-show coverage of the HIMSS conference. Read the preview below or view it online here.


The Healthcare Information and Management Systems Society (HIMSS) Global Health Conference and Exhibition is approaching on April 17, 2023. The event, which will take place in Las Vegas, is one of the largest health IT conferences in the world, bringing together professionals from across the healthcare industry to discuss the latest innovations and trends in healthcare technology. The conference will feature keynote speeches, educational sessions, and an exhibition hall showcasing the latest products and services from leading healthcare technology vendors. This year’s event will focus on several key themes, including cybersecurity and data privacy.

We heard from security experts from organizations attending HIMSS on what the industry should expect at the event.

Chad Peterson, Managing Director, NetSPI

“As ransomware attacks against the healthcare sector rise, it’s critical that organizations ensure they are remaining compliant with HIPAA. Last year, the Department of Health and Human Services’ (HHS) Office for Civil Rights (OCR) filed 22 HIPAA resolution agreements totaling over $1.12 million in settlement fines. A key issue is that HIPAA provides little guidance around the best practices to achieve compliance – leaving holes in healthcare organization’s security strategies. An often overlooked solution to this ongoing issue is penetration testing, which addresses the need to map, understand, and close gaps in an organization’s attack surface that could expose electronic protected health information (ePHI). Looking forward, healthcare security and IT teams must take a proactive mindset to HIPAA compliance. Organizations that implement comprehensive pentesting programs into their security programs will achieve better compliance and build resilience in the current threat landscape.”

Continue reading on Enterprise Security Tech:


NetSPI Recognized in Global InfoSec Awards at RSA for its Excellence in Pentesting and Attack Surface Management

Offensive security leader awarded “Most Comprehensive Penetration Testing” and “Next-Gen Attack Surface Management” in 11th Annual Global InfoSec Awards.

SAN FRANCISCO – NetSPI, the offensive security leader, received two Global InfoSec Awards for “Most Comprehensive Penetration Testing” and “Next-Gen Attack Surface Management” from Cyber Defense Magazine (CDM), the industry’s leading electronic information security magazine.

NetSPI maintains a thorough approach to penetration testing through the combination of automated and manual testing methods, deep and extensive expertise across the entire attack surface (e.g. application, cloud, network, IoT, blockchain), and a commitment to staying up-to-date with the latest security trends and technologies. With Resolve, NetSPI’s Penetration Testing as a Service (PTaaS) platform, NetSPI has built a reputation for delivering high-fidelity actionable reports to clients, with key insights to help prioritize the issues that pose the greatest risk to their business.

Complementary to its penetration testing capabilities, NetSPI brings to market the most comprehensive suite of offensive security solutions with breach and attack simulation and attack surface management.

The company’s Attack Surface Management (ASM) solution was honored as a “Next Generation” solution for its ability to continuously identify vulnerabilities and exposures using a combination of advanced automation and manual testing, deep understanding of emerging attack vectors and techniques, and commitment to helping organizations separate signal from noise.

“Thank you to Cyber Defense Magazine for recognizing NetSPI twice in the Global InfoSec Awards. Developing in-depth, future-focused offensive security solutions is, and will continue to be, a priority for us,” said Vinay Anand, Chief Product Officer at NetSPI. “This is a true testament to our dedicated team, who work tirelessly to help clients improve security and innovate with confidence.”

Global InfoSec Awards Winner – Cyber Defense Magazine – 2023

This is the second consecutive year NetSPI has been awarded by Cyber Defense Magazine for its comprehensive, innovative, and technology-powered, human-delivered cybersecurity tools. Last year, NetSPI was awarded “Most Innovative in Penetration Testing” for revolutionizing the Penetration Testing as a Service (PTaaS) delivery model to enable organizations to view penetration testing results in real time, scale to support innovation, orchestrate faster remediation, perform always-on continuous pentesting, and more.

“NetSPI embodies three major features we judges look for to become winners: understanding tomorrow’s threats, today, providing a cost-effective solution and innovating in unexpected ways that can help mitigate cyber risk and get one step ahead of the next breach,” said Gary S. Miliefsky, Publisher of Cyber Defense Magazine.

For more information on NetSPI, visit the company website or speak with the company’s offensive security experts at booth #5618 in the North Expo Hall at RSA Conference 2023.

About NetSPI

NetSPI is the leader in enterprise penetration testing, attack surface management, and breach and attack simulation – the most comprehensive suite of offensive security solutions. Through a combination of technology innovation and human ingenuity NetSPI helps organizations discover, prioritize, and remediate security vulnerabilities. For over 20 years, its global cybersecurity experts have been committed to securing the world’s most prominent organizations, including nine of the top 10 U.S. banks, four of the top five leading global cloud providers, four of the five largest healthcare companies, three FAANG companies, seven of the top 10 U.S. retailers & e-commerce companies, and many of the Fortune 500. NetSPI is headquartered in Minneapolis, MN, with global offices across the U.S., Canada, the UK, and India.

About Cyber Defense Magazine

Cyber Defense Magazine is the premier source of cyber security news and information for InfoSec professionals in business and government. We are managed and published by and for ethical, honest, passionate information security professionals. Our mission is to share cutting-edge knowledge, real-world stories and awards on the best ideas, products, and services in the information technology industry. We deliver electronic magazines every month online for free, and special editions exclusively for the RSA Conferences. CDM is a proud member of the Cyber Defense Media Group. Learn more about us at and visit and to see and hear some of the most informative interviews of many of these winning company executives. Join a webinar at and realize that infosec knowledge is power.

NetSPI Media Contacts:
Tori Norris, NetSPI
(630) 258-0277

Jessica Bettencourt, Inkhouse for NetSPI
(774) 451-5142 

CDM Media Inquiries:
Irene Noser, Marketing Executive
Toll Free (USA): 1-833-844-9468
International: 1-646-586-9545 


Keeping Up with Medical Device Cybersecurity

NetSPI hosted three cybersecurity professionals in the medical device industry for a roundtable discussion on their top learnings from implementing medical device security programs. I had the pleasure of moderating the session and was joined by: 

  • Matt Russo, Senior Security Director, Medtronic 
  • Dr. Matt Weir, Principal Cyber Security Researcher, MITRE 
  • Curt Blythe, Director of Product Security, Abbott 

The conversation covered core factors a medical device security program must have, the departmental structure of a security team within a medical device company, how they each approach medical device pentesting and vulnerability management, and much more.  

Security for medical devices is complex as it continually evolves alongside product innovation. The best programs bring security into the product development lifecycle from the start, with the flexibility for enhancements as new trends emerge. 

Read the highlights below or watch the webinar on-demand here. 

3 Factors of Successful Medical Device Security Programs 

Panelists agreed on these three factors to give medical device security programs the best chance of success:  

  1. Executive buy-in. This is easier said than done, but dedicating effort to educating the team that influences business decisions will pay off greatly over time.  
  2. Integration into quality assurance. When talking about baking security into the product development lifecycle, this is one tangible way to do so. The clinical process for medical devices is well-established. Steps for security must be intentional and agreed-upon to create consistent protocols in medical device design.  
  3. Internal and external partnerships. Security is a business enabler because it reduces the risk of adverse events that could affect an organization. The more security is embedded into the medical device process, the more empowered a team becomes to move faster in a safe manner. 

    On the external partnerships side, many industry organizations have collected input and developed research to help organizations embrace security in medical devices. Leaning on these associations and the educational content they publish is akin to a cheat sheet for medical device security.  

This list isn’t exhaustive, but it’s a grounding step toward creating a strong strategy for medical device security. 

“We need to share information effectively across the ecosystem to make sure we’re all using as much knowledge as we can to continue to be in a spot to secure very critical assets.” 

Matt Russo, Senior Security Director, Medtronic

Lean on External Partners for Medical Device Cybersecurity Education 

Our panelists mentioned several industry organizations and common frameworks they’ve created to help share collective knowledge across the industry. These organizations are a good place to start when designing a medical device security program: 

  1. Medical Device Innovation Consortium (MDIC)  
  2. Information Sharing and Analysis Centers (ISACs) 
  3. Health Sector Coordinating Council (HSCC) 
  4. International Medical Device Regulators Forum (IMDRF) 

Bring leadership along in this education journey! Matt Russo recommends monitoring what’s happening in your industry at the legislative level and relaying it back to the company to let your team know what’s coming. This helps show value early on to help influence team buy-in. 

Are you keeping tabs on the recently passed omnibus bill? According to a report from Health IT Security, within its 4,000 pages, you’ll find “language that would require medical device manufacturers to ensure that their devices meet select cybersecurity requirements.” Listen to the panelists discuss the package, and more on medical device security compliance, starting at 23:55.

Medical Penetration Testing

How Security Teams are Structured within Medical Device Departments 

The structure of a security team within an organization depends on the size of the company. As companies grow, the size of security teams does too, resulting in more specialized roles within the department. On the other hand, smaller medical device manufacturers may have a single cybersecurity person on the team responsible for integrating security measures into the clinical process.  

One commonality in both of these scenarios is that the security team is a centralized function that works with all individualized divisions. This avoids multiple people doing the same type of work and aids a consistent process organization-wide. 

“When you can start actually trying to solve problems and get ahead of these issues, that’s when you start being able to get that full buy-in to do more.” 

Dr. Matt Weir, Principal Cyber Security Researcher, MITRE 

If You Knew Then What You Know Now… What Would You Do Differently? 

Experience is the best teacher. Panelists shared what they would do differently if they were starting over with a medical device security program. 

  1. Dr. Matt Weir: Understand that the clinical environment has a steep learning curve for people with traditional cybersecurity backgrounds.  
  2. Matt Russo: Push harder on internal education to equip non-technical leaders with the knowledge needed for buy-in. Move faster on best practices without needing legislation to drive the changes. 
  3. Curt Blythe: Build in a strategy from the start to update medical devices in the field as they transition from a single device to connected devices through IoT. 

“As we’re looking at the devices that are out in the field, how do we get updates to those? Is it a matter of sending a clinical engineer out there to update [it] holding a USB stick? Or can we do it over the air? Especially with the speed of security today, we need to be able to move faster. I think it becomes a speed and scale issue that we’re going to have to work on.”  

Curt Blythe, Director of Product Security, Abbott 

Bookmark Now, Watch Later: Medical Device Security Webinar 

Keep growing your knowledge in med device security by watching the roundtable discussion with Dr. Weir, Matt, and Curt. Their industry expertise and perspectives on trending topics such as the omnibus bill, updatability, and IoMT give anyone learning about med device security ideas on how to move their programs forward. 

Explore NetSPI’s medical device pentesting or watch the webinar on demand.  

Medical Device Security Webinar

Healthcare IT News: Tips on Medical Device Security from the Product Leaders’ Perspective

NetSPI’s medical device security roundtable was featured in Healthcare IT News in an article recapping the virtual event. Read the preview below or read it online here.

+ + +

Medical device innovations have enhanced healthcare and improved patient care, but they present a broad attack surface for healthcare organizations.

NetSPI, a security service company, hosted medical device product security experts to talk about the business and challenges of securing connected technologies in healthcare. They addressed sharing information across teams throughout the product lifecycle, building product security teams, legislative changes governing the space and strategies to increase the pipeline of talent.

Where does product security sit within the enterprise?

Matt Russo, senior director of product security at Medtronic, Curt Blythe, director of product security at Abbott and Matt Weir, principal cybersecurity engineer at MITRE, all agreed that, regardless of where product security teams sit, they need to be partners in product development.

Where it makes sense from a scale and efficiency perspective, there’s one team dedicated to scanning devices as a centralized function with a distributed model, Blythe said.

But the key point is embedding design and security practices into what developers do every day, which ultimately enables them to move fast, “but in a safe way.”

Russo said that at Medtronic, “You can really see that across the landscape.” 

While resource restrictions make centralized product security functions more feasible, and they generally work for Medtronic and other large organizations, he said many device companies need to look at the technical aptitude of security teams.

Is product security just a part of what they do?

Weir noted that it’s hard to have a dedicated security team if you have a small product base. 

“The big thing though is that you do have that integration during your product development lifecycle,” he said. 

When medical device developers try to add cybersecurity later into the process, it makes it much harder to be successful, he added. Weir advised integrating product security as early as possible into the product life cycle, and continuing communication as products evolve. 

Product security specialists bring visibility into systems. They can then see how the devices are being used, and they are better positioned to recommend mitigations, he said. 

Continue reading at Healthcare IT News:


CRN: 10 Key Cybersecurity Acquisition Deals In Q1 2023

NetSPI’s acquisition of nVisium was featured in CRN’s review of ten key cybersecurity acquisition deals in Q1 2023. Read a preview below or read the full article online here.

+ + +

The consolidation continued in the cybersecurity market during the first three months of the year, both among top vendors in the industry and major solution providers in the channel. We’ve collected details on 10 notable acquisition deals in cybersecurity that were announced or completed during the first quarter of 2023.

NetSPI Acquires nVisium

NetSPI, a provider of penetration testing services and attack surface management capabilities, said it’s expanding its capabilities for offensive security services with the acquisition in January of nVisium. The terms of the acquisition were not disclosed, and it was mainly aimed at adding talent for NetSPI’s penetration testing services, according to NetSPI CEO Aaron Shilts (pictured). The acquisition brings two “complementary offensive security teams together who are committed to delivering the highest standard of penetration testing on the market today,” Shilts said in a news release. The acquisition follows NetSPI’s $410 million funding round in October, aimed at uses including the expansion of its channel program.

Continue reading on CRN:


What You Need to Know about Breach and Attack Simulation

As the tools, technology, and processes to launch cyberattacks become increasingly sophisticated, organizations’ security controls must be more proactive than ever to get ahead of potential breaches by identifying vulnerabilities before they become an issue.

Unfortunately, few executives are confident in their company’s security effectiveness. Research from Accenture found that only 52 percent of security executives and 38 percent of non-security executives agree that their organization is well-protected from cyber threats.

To get ahead of the latest cybersecurity threats, forward-thinking organizations are turning to breach and attack simulation (BAS). In fact, research shows the breach and attack simulation market was projected to reach $1.12 billion by the end of 2022, with a compound annual growth rate of 35.12% through 2032.

If protecting sensitive data and preventing access to critical systems is a goal for your organization, then learn more about BAS solutions, including their benefits, use cases, and what to look for in a vendor to enhance your security posture.

What is Breach and Attack Simulation?  

Breach and attack simulation (BAS) is an advanced security testing method that involves playing the role of a sophisticated real-world threat actor to assess an organization’s security controls. BAS is defined by the larger market as automated security control validation that allows for continuous simulation, in most cases focused on validating detective control coverage. Market intelligence firm IDC defines key functions of BAS, including:  

  • Attack: mimic real threats 
  • Visualize: see exposures 
  • Remediate: address gaps 

In today’s evolving threat landscape, a single click can expose an organization’s global environment to an adversary. Breach and attack simulation plays a critical role in protecting organizations’ systems and infrastructure by simulating common attack methods throughout the cyber kill chain and offering expert counsel to prioritize remediation steps. 

Advantages of Breach and Attack Simulation at Your Organization 

According to NetSPI data, 80 percent of common attack behaviors are missed by out-of-the-box solutions for endpoint detection and response (EDR), security information and event management (SIEM), and managed security service providers (MSSPs). This can leave organizations with a false sense of security. 

While 100 percent detection doesn’t exist, breach and attack simulation can improve security controls to better detect a wide range of relevant attacks.  

Key benefits of breach and attack simulation include: 

  • Test your organization’s security controls and defend against emerging cyber threats and attacks.
    To stay ahead of malicious actors and threats, organizations must focus on detecting threats before an attack. An advanced BAS solution can continuously replicate real attack behavior, measure the effectiveness of security controls and identify gaps with customizable procedures. Because BAS mimics real-world threat actors, security teams can identify common adversary behaviors and — armed with this information — more effectively prioritize detection development as well as investments.  
  • Meet the challenge of today’s cybersecurity skills gap. 
    Reliance on technology has increased the need for workforces with technical expertise. The number of open positions in cybersecurity keeps growing while the demands on existing employees expand, leaving fewer people to take on more responsibilities. Breach and attack simulation is a step in the right direction to combat today’s skills gap by directing the security team’s focus toward the most impactful actions.  
  • Help operational development and measure detective controls. 
    BAS not only educates SOC teams on their environment and common attack behaviors, but it also helps enhance security programs by validating the efficacy of detective controls. NetSPI helps define KPIs upfront so security teams can track effectiveness over time. Data is consolidated into one centralized platform with the ability to configure and run customizable procedures.  
  • Justify security spending and make the case for increased budget.  
    A common goal for any security team is demonstrating the effectiveness of security spending to executive leadership and the board of directors. Cybersecurity is increasingly becoming a top strategic business priority across organizations, with Gartner predicting that 40 percent of boards of directors will have a dedicated cybersecurity committee by 2024. This means CISOs and security teams may face more scrutiny, but it also presents opportunities for increased security support and resources. 

With comprehensive breach and attack simulation services, findings are delivered with descriptions, procedures, and recommendations based on expert human analysis. Actionable insights are also available to track and trend your security posture, benchmark against industry competitors, and measure ROI, which can help make the case for an expanded security budget. 

Examples of Breach and Attack Simulation from Gartner 

As threats rapidly evolve, breach and attack simulation vendors continue to improve and expand their technology, features, and scope. While BAS has a wide range of use cases, some common examples Gartner listed include: 

  • Complete an attack simulation procedure to better understand gaps in an organization’s security defenses and identify actionable steps to improve security controls 
  • Gain an attacker’s outside perspective of an organization’s environment and systems 
  • Work in partnership with red teams to run BAS procedures using the methods and approach of real adversaries in a controlled environment 
  • Leverage findings from the simulation to flag top risks and vulnerabilities, and identify actionable steps for remediation 

Quick Guide to Evaluating Breach and Attack Simulation Vendors 

Several breach and attack simulation services are available on the market, and selecting a partner with advanced technology and a team of proven security experts is critical to protecting against the latest threats. Consider the key criteria below when assessing different breach and attack simulation vendors: 

  • A single, centralized platform to consolidate and organize relevant data  
  • Capabilities for BAS services to be automated, consistent, and continuous 
  • White-glove service and communication available throughout the engagement from experienced, trained professionals 
  • Customizable procedures to gain an attacker’s view of your environment at scale 
  • Seamless user experience (UX) and user interface (UI) for both expert and novice users  
  • Extensive, consistently updated security plays and playbooks that enable organizations to strengthen their security posture 
  • Real-time, actionable data to identify trends and coverage gaps, benchmark security posture against competitors, measure ROI of security investments, and prioritize remediation efforts  

Test your security controls with NetSPI’s Breach and Attack Simulation 

Protecting your business effectively against security threats requires a reputable, expert partner. For more than 20 years, NetSPI’s global cybersecurity experts have been trusted partners in securing the world’s most prominent organizations.  

NetSPI’s Breach and Attack Simulation enables organizations to create and execute customized procedures utilizing purpose-built technology. Professional human pentesters simulate real-world attacker behaviors, not just indicators of compromise (IOCs), putting your detective controls to the test in a way no other BAS solution can.  

With the combination of the AttackSim cloud-native technology platform and personalized counsel from NetSPI’s manual testing teams, your organization can build resilience against ransomware, denial of service, data loss, fraud, information leaks, and more.  

Learn more about NetSPI’s Breach and Attack Simulation by downloading our data sheet.

Discover how the NetSPI BAS solution helps organizations validate the efficacy of existing security controls and understand their security posture and readiness.