
Help Net Security: Fresh perspectives needed to manage growing vulnerabilities

NetSPI’s 2023 Offensive Security Vision Report was featured in Help Net Security. Read the preview below or view the article online.

+++

In its inaugural 2023 Offensive Security Vision Report, NetSPI unveils findings that highlight vulnerability trends across applications, cloud, and networks.

Vulnerability patterns

The report offers a look back — and forward — at some of the most significant vulnerability patterns of the past year to help security and business leaders focus discovery, management, and remediation efforts on the riskiest vulnerabilities most likely to exist on their attack surface.

According to the NIST National Vulnerability Database, the vulnerability count has steadily increased year-over-year for the past five years and shows no signs of slowing down. This, coupled with the reality of burnt-out security and development teams, creates an imminent need for prioritization.

The report analyzed over 300,000 anonymized findings from thousands of pentest engagements, spanning more than 240,000 hours of testing, to identify the most prevalent vulnerabilities across various industries — which include healthcare, retail, finance, and manufacturing.

Today, offensive security is only as valuable as its ability to help you prioritize remediation of the issues that matter most to your business.

Read the full article here: https://www.helpnetsecurity.com/2023/05/26/netspi-2023-offensive-security-vision-report/


Ethereum Virtual Machine Internals – Part 2 

This is the second part of our series on Ethereum Virtual Machine (EVM) internals. Part 1, which can be found here, introduced the EVM call context and its architecture, followed by a deep dive into the non-persistent Memory section, function selection and visibility, and how contract control flow can be bypassed at the bytecode level.  

This installment will focus on the EVM’s persistent storage region and explore how various Solidity types and data structures are represented and referenced in contract Storage.

EVM Storage and Contract States

In part 1, we dove into the memory section of the call context, used to hold temporary, mutable data necessary for inter-function logic. As a decentralized state machine however, the EVM needs a dedicated mechanism to keep track of persistent data that represents a given contract’s overall state between function calls. This is the job of the EVM’s Storage region.  

EVM implementations treat the storage region as a dictionary of 2^256 32-byte keys and values. These keys are considered to be logically initialized to zero by default. However, like all data structures that constitute the EVM, it is up to specific implementations to define exactly how the storage region is realized. Most implementations, including the widely used Go-Ethereum (geth) client, stay true to the overall idea of the Storage region by implementing storage as hash maps or key-value pairs [1], as does the C++-based EVMC example implementation, which uses the C++ standard library std::map object [2].

The relevant opcodes for operating on storage are SSTORE and SLOAD, which write and read 32 bytes to and from storage, respectively. It should be noted that EVM operations on the Storage region incur some of the highest gas fees relative to other opcodes [3], so a sufficient understanding of this region is especially important for writing gas-efficient contracts.
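
To make these two opcodes concrete, the following minimal sketch (our own illustration, not code from this series) exposes raw slot reads and writes through Solidity inline assembly; the contract and function names are assumptions.

pragma solidity >=0.7.0 <0.9.0;

contract RawSlotAccess {
    // Write a full 32-byte word to an arbitrary storage slot (SSTORE).
    function writeSlot(uint256 slot, bytes32 value) public {
        assembly {
            sstore(slot, value)
        }
    }

    // Read the 32-byte word currently held at a storage slot (SLOAD).
    function readSlot(uint256 slot) public view returns (bytes32 value) {
        assembly {
            value := sload(slot)
        }
    }
}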

Storage Slots 

EVM implementations use the concept of storage slots, which are discrete sections within the storage space in which state variables are stored.


Within the wider Ethereum network, storage slots are also sometimes referenced using the keccak256 hash of the hexadecimal slot number, left padded to 32 bytes.

keccak256(0x0000…0000) = 0x290decd9548b62a8d60345a988386fc84ba6bc95484008f6362f93160ef3e563 
keccak256(0x0000…0001) = 0xb10e2d527612073b26eecdfd717e6a320cf44b4afac2b0732d9fcbe2b7fa0cf6 
keccak256(0x0000…0002) = 0x405787fa12a823e0f2b7631cc41b3ba8828b3321ca811111fa75cd3aa3bb5ace
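
These hashed identifiers can be reproduced in Solidity itself. The helper below is our own sketch (the contract and function names are assumptions); abi.encode left-pads a uint256 to 32 bytes, matching the padding described above.

pragma solidity >=0.7.0 <0.9.0;

contract SlotHash {
    // hashedSlotKey(0) == 0x290decd9548b62a8d60345a988386fc84ba6bc95484008f6362f93160ef3e563
    // hashedSlotKey(1) == 0xb10e2d527612073b26eecdfd717e6a320cf44b4afac2b0732d9fcbe2b7fa0cf6
    function hashedSlotKey(uint256 slot) public pure returns (bytes32) {
        return keccak256(abi.encode(slot)); // abi.encode left-pads to 32 bytes
    }
}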

Below is an example of a simple storage layout as shown in Remix of three 32-byte uint256 values, each in their own storage slot:

Some development environments also show storage slot indexes in addition to storage slots.

It is important to note that within the EVM execution context, slot keys are usually referenced via their literal values, as is most evident when storage slots are accessed by the commonly used SSTORE [4] and SLOAD [5] opcodes. However, the hashed slot indexes shown by Remix and similar tools play an important role in the wider Ethereum ecosystem.

Merkle Patricia Tries 

The reason why slots are sometimes also displayed via a hash of the slot numbers themselves stems from a fundamental data structure used by the wider Ethereum network for storing account data and contract storage, called Merkle Patricia Tries (MPTs) [6]. In an MPT, each key-value pair is hashed using the keccak256 hash function, and the resulting hash is used as the key for the next level of the trie.

This creates a hierarchical structure where each level of the trie corresponds to a different prefix of the key, and each leaf node in the trie corresponds to a complete key-value pair. MPTs are used in various contexts throughout the Ethereum ecosystem, including for storing account data, contract storage, and transaction receipts. 

At a high level, an MPT is a modified version of the standard trie data structure. A trie [7] is a generic tree-based data structure used for storing and retrieving key-value pairs. It allows for efficient prefix-based searches by organizing keys based on their common prefixes.

MPTs incorporate elements of Merkle and Patricia tries, which are derivative trie structures. Merkle tries, also known as hash trees, use hashed representations of keys and values to save on search and storage overhead [8]. Patricia tries are optimized for memory usage by compressing and merging multiple nodes within a single child node, further reducing storage requirements [9].

The Bitcoin network uses Merkle tries to archive and verify the integrity of transactions that have been confirmed with enough blocks [10], whereas the Ethereum network requires MPTs to facilitate smart contract and account state storage. Outside of blockchain systems, Patricia tries and the more general Radix tries are commonly used in applications that require fast prefix-based searches, such as IP routing tables and dictionary implementations.

Storage slots are referenced by their keccak256 hashes in the MPT when contract balances and other state information are read and written, largely because MPTs need a standardized method of referencing variable-length keys. The EVM itself, however, references storage slots via their explicit storage key values. These are not always simple sequential values; for more complicated data structures, additional keccak256 hashing must be done to derive slot keys, as will be demonstrated later.

MPTs and the data structures used to maintain the wider Ethereum network are out of scope for this article; the rest of this article will focus on EVM storage. The EVM has a peculiar way of storing state variables, necessitated in part by its implementation as a register-less stack machine and by the need to minimize the gas costs of storing smart contract state and data.

For the remainder of this article, we will omit hashed slot indexes when displaying key-value sets in storage, for clarity.

Static Variable Storage 

Simple Variable Storage & Packing 

Consider the following Solidity contract which demonstrates the concepts of simple variable storage and static variable packing. The modify() function assigns values to the state variables, which have been implemented as unsigned integers of varying sizes. The integers are represented in hex format for clarity. You may follow along with this contract via Remix here.

pragma solidity >=0.7.0 <0.9.0; 

contract VarPacking { 
    uint256 slot_0; 
    uint128 slot_1; 
    uint64 still_slot_1; 
    uint64 slot_1_again; 
    uint128 slot_2; 

    function modify() public { 
        slot_1 = 0xAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA; // slot 1 
        slot_0 = 0xBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB; // slot 0 
        still_slot_1 = 0xCCCCCCCCCCCCCCCC; // still in slot 1 
        slot_1_again = 0xDDDDDDDDDDDDDDDD; // still in slot 1 
        slot_2 = 0xEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE; // slot 2 
    } 
}

The order in which state variables are assigned values does not affect which slot they will occupy. Instead, storage slots are determined by the following factors: 

  • The order in which state variables are declared. 
  • The type of the state variable. 
  • The size of the state variable. 
  • Whether the variable is fixed-size (simple unsigned integers, fixed-size arrays), or dynamically sized (mappings and unbounded arrays). 

Until a state variable is assigned a value, no data is written to its storage slot, and no storage fees are incurred. Pay attention to the state variable names, which have been chosen according to the slots they will occupy.

We will now go through the execution of the modify() function and show how each storage slot is populated. 

Note that Solidity compiler optimizations have been turned off for all examples shown in this post, as compiler optimizations can affect the order in which slots are populated. These optimizations do not affect the values stored in the slots, however. As of writing, the main factor that affects which slot a value is stored in is still the order in which the variable was declared. 

Integers 

uint and uint256 types are 32 bytes wide (uint is an alias for uint256). When state variables of these types are assigned values, the next available storage slot is used in its entirety. This can be confirmed by deploying the VarPacking contract, placing a breakpoint on line 12, and debugging the call to modify(). After the first SSTORE operation, storage will resemble the following:

"key": "0x0000 . . . 0001",
   "value": "0x00000000000000000000000000000000aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa

The 128-bit value 0xAA..AA has been written to the lower-order half of slot 1. Continuing past the next SSTORE operation results in the 256-bit value 0xBB…BB being written to slot 0; slot 1 was written first because slot_1 was assigned a value before slot_0:

"key": "0x0000 . . . 0001", // slot 0x00…01 
   "value": "0x00000000000000000000000000000000aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"key": "0x0000 . . . 0000", // slot 0x00…00
   "value": "0xbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb

Stepping through until the still_slot_1 value is assigned to storage (just before line 14) results in slightly different storage behavior, however:

"key": "0x0000 . . . 0001", // slot 0x00…01 
   "value": "0x0000000000000000ccccccccccccccccaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
"key": "0x0000 . . . 0000", // slot 0x00…00 
   "value": "0xbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb" 

Variable packing:  

Unlike slot_1, the value for still_slot_1 was stored in the 8 bytes directly above slot_1's 16 bytes within storage slot 1. This is because the consecutively declared slot_1 and still_slot_1 are of types uint128 and uint64, and are 16 and 8 bytes wide respectively. If we continue execution until the end of the modify() function, we will notice that the values for slot_1, still_slot_1, slot_1_again, and slot_2 also do not all occupy their own storage slots.

To save on storage fees, Solidity attempts to “pack” these values together so that more than one value can occupy a single storage slot. This is called “variable packing”, and is subject to the following rules [11]:

  • The first item in a storage slot is stored lower-order aligned. 
  • Value types use only as many bytes as are necessary to store them. 
  • If a value type does not fit the remaining part of a storage slot, it is stored in the next storage slot. 
  • Structs and array data always start a new slot and their items are packed tightly according to these rules. 
  • Items following struct or array data always start a new storage slot. 

The storage layout will present the following values after the call has completed: 

"key": "0x0000 . . . 0001", // slot 0x00…01 
   "value": "0xddddddddddddddddccccccccccccccccaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", 
"key": "0x0000 . . . 0000", // slot 0x00…00 
   "value": "0xbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb", 
"key": "0x0000 . . . 0002", // slot 0x00…02 
   "value": "0x00000000000000000000000000000000eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee"

Take a moment to reflect on how the previously mentioned variable packing rules are at play. Stepping through the bytecode executed between lines 14 and 15 will show these rules being applied, shifting and rearranging the values as necessary until the SSTORE opcode is used to preserve them in the correct slot. Other static types are stored differently; however, the same overall goal of maximizing slot usage applies.

Compiler Optimizations: 

Because opcodes like SSTORE and SLOAD are gas intensive, enabling solc compiler optimizations generates bytecode that attempts to limit the number of storage operations. This can be observed using the previous VarPacking contract as an example, using this Remix link. Here, the “Enable optimization” flag has been selected under Advanced Configurations in the Solidity Compiler tab; many development frameworks enable optimizations by default.

With optimizations, even though the slot_0 variable was assigned after slot_1, setting a breakpoint on line 14 shows that storage slot 0 was written to first. This is then followed by a single SSTORE operation to store the values of slot_1, still_slot_1 and slot_1_again to slot 1 within a single operation.  

Due to compiler optimizations, slot 1's contents are stored via a single SSTORE operation using the 3 values to be stored in slot 1. This happens after slot 0 is fully populated.

This is because the contract’s bytecode was optimized to pre-arrange the values destined for a particular slot before any storage operations are performed. The previously described variable packing rules are thus still in effect; however, the packing is carried out on the stack and in the Memory region rather than through successive storage reads and writes.

After the single SSTORE operation on slot 1.

For simplicity, compiler optimization will be disabled for the remaining contract links in this article. 

Fixed-Size Arrays: 

Elements of fixed-size arrays are stored just as they would be if each element were declared sequentially as an individual state variable. After executing modify() and debugging the transaction for the following contract, each element of the array will occupy its own storage slot; this is only because each element is 32 bytes wide. As with individual state variables, the elements of array state variables can share a slot, subject to the same packing rules discussed before.

pragma solidity >=0.7.0 <0.9.0; 

contract FixedArray { 
    uint256[3] arr; 

    function modify() public { 
        // slots 0 and 1 assigned values of 1, 2.  
        // slot 2 will still contain zeros. 
        arr = [1,2]; 
        // slot 2 assigned value of 3 
        arr[2] = 3; 
    } 
}

When assigning values for up to n-1 elements in an array, the Solidity compiler will explicitly store a value of 0 for the unassigned values. This can be seen by placing a breakpoint at the first assignment in modify() and stepping through execution until a value of 0 is pushed to the stack, to eventually be stored at slot 2.

Before storing to slot 2.

After executing the SSTORE instruction, slot 2 contains a value of 0.

Before storing to the final slot.

This is later overwritten by the value of 3 after the final element has been assigned. These variable packing rules suffice for simple unsigned integer types. However, the EVM must perform additional storage packing logic to store more complex data types. 

String and byte types: 

Strings and byte variations are stored according to their own packing heuristics. Each ASCII character in a Solidity string takes up one byte. The following contract demonstrates string storage:

pragma solidity >=0.7.0 <0.9.0; 

contract StringStorage { 

    //same applies to byte types 
    string short_string; 
    string long_string; 

    function modify() public { 
        short_string = "ABCD";      
        long_string = "ABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDABCD"; //84 characters long 
    } 
} 

Strings and byte values that are 31 bytes or shorter are stored in the following way, where n represents the next available storage slot, for 0 ≤ n ≤ 2^256-1:

String/Byte Storage (size ≤ 31 bytes)
Slot Key | Slot Value
n | String data up to 31 bytes in the higher-order bytes, and (length of string * 2) in the lowest-order byte

Placing a breakpoint at line 11, after the assignment of short_string, and debugging the call to modify() will show the string ABCD (0x41424344) stored in slot 0, aligned to the higher-order end of the storage slot. Double the length of the string, 0x08, is stored in the lowest-order byte of the slot:

"key": "0x0000 . . . 0000", // slot 0x00…00
   "value": "0x414243440000 . . . 0008",
. . .

A different method is used to index and store strings and byte values 32 bytes or longer; however, it retains the concept of keeping the data and its length separate. The string's own slot no longer holds the data itself. Instead, the data is stored starting at the slot whose key is the keccak256 hash of the string's slot number, and that key is incremented by one for each remaining 32-byte chunk of string data. The string's own slot holds (length * 2) + 1, where the extra low bit flags the "long" format. This is summarized in the table below for a string declared at slot n, split into 32-byte chunks indexed by i, for 0 ≤ n, i ≤ 2^256-1:

String/Byte Storage (size ≥ 32 bytes)
Slot Key | Slot Value
n | (Length of string/bytes * 2) + 1 in the lower-order bytes
keccak256(n) | 0th 32-byte chunk of string data
. . . | . . .
keccak256(n)+i | ith 32-byte chunk of string data, left-aligned and padded with zeros if shorter than 32 bytes (e.g., 0x41424344 . . . 000000)

 After long_string has been assigned its value, we see several more storage keys being used:

. . .
"key": "0xb10e . . . 0cf6", // slot keccak256(0x00…00) 
   "value": "x4142434441424344414243444142434441424344414243444142434441424344",
"key": "0xb10e . . . 0cf7", // slot keccak256(0x00…00) + 0x00…01 
   "value": "0x4142434441424344414243444142434441424344414243444142434441424344",
"key": "0xb10e . . . 0cf8", // slot keccak256(0x00…00) + 0x00…02 
   "value": "0x4142434441424344414243444142434441424344000000000000000000000000", 
"key": "0x0000. . . 0001", // slot 0x00…01 
   "value": "0x0000 . . . 00a9"

In this case, the keccak256 hash of the storage identifier for slot 1 (0xb10e…0cf6) is used as the first slot key for the first 32-byte chunk of the string. This slot key is incremented by one for all remaining 32-byte chunks, with the last chunk being padded with zeros if it is less than 32 bytes in length. 

Finally, the length of the string is stored at the storage slot where the long string was declared (slot 1), encoded as (length * 2) + 1 = 0xa9. This mirrors the way shorter strings are stored, except that the data now lives in separate slots and the set low bit marks the string as using the long format.
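
As a rough illustration of this layout (our own sketch; the helper names are assumptions), the slot holding a given 32-byte chunk of a long string declared at slot n, and the encoded length word, can be computed as follows. dataChunkSlot(1, 0) reproduces the 0xb10e…0cf6 key shown in the dump above.

pragma solidity >=0.7.0 <0.9.0;

contract LongStringSlots {
    // Slot key of the ith 32-byte chunk of a long string declared at slot n.
    function dataChunkSlot(uint256 n, uint256 i) public pure returns (bytes32) {
        return bytes32(uint256(keccak256(abi.encode(n))) + i);
    }

    // Value held in the string's own slot: (byte length * 2) + 1.
    // For the 84-character long_string above: 84 * 2 + 1 = 169 = 0xa9.
    function lengthSlotValue(uint256 byteLength) public pure returns (uint256) {
        return byteLength * 2 + 1;
    }
}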

Enums 

The storage of the enum type is not explicitly covered in the Solidity documentation [12]. Testing shows that enum-typed state variables are stored as lower-order-aligned single-byte values (for enums with up to 256 members) and are packed up to 32 per slot, following the same rules as uint8 values.

This is evident in the following contract, which declares an enum E containing 35 members and 35 state variables of type E. Note that the full list of members, state variables, and their assignments in the modify() function has been abbreviated below:

pragma solidity >=0.7.0 <0.9.0; 

contract EnumStorage { 

    enum E {t1, . . . ,t35} 
    E e1; 
    . . .  
    E e35; 

    function modify() public { 
        e1 = E.t1; 
        . . .  
        e35 = E.t35; 
    } 
}

When more than 32 consecutively declared enum variables are assigned values, the next available slot n holds the first group of 32, and slot n+i holds the ith additional group, for 0 ≤ n, i ≤ 2^256-1:

Enums
Slot Key | Slot Value
. . . | . . .
n+(i-1) | (i-1)th group of 32 values, each represented as a lower-order-aligned single byte
n+i | ith group of up to 32 values, each represented as a lower-order-aligned single byte, padded with zeros if fewer than 32 values

Storage will resemble the following after executing modify() in this contract.

"key": "0x00 . . . 00", // slot 0x00…00 
  "value": "0x1f1e1d1c1b1a191817161514131211100f0e0d0c0b0a09080706050403020100",
"key": "0x00 . . . 01", // slot 0x00…01 
   "value": "0x0000000000000000000000000000000000000000000000000000000000222120"

Structs

In Solidity, structs can include elements of different data types. Like fixed-size arrays, the elements of struct state variables are allocated slots sequentially based on declaration order and type. Contract developers can save considerably on gas fees by ordering the elements of state variable structs to maximize the amount of data stored per slot [13]. This is demonstrated in the following contract:

pragma solidity >=0.7.0 <0.9.0; 

contract StructStorage { 

    struct S1 { 
        uint128 a; 
        uint256 b; 
        uint128 c;  
    } 

    struct S2 { 
        uint128 d; 
        uint128 e; 
        uint256 f; 
    } 

    S1 expensive_struct; 
    S2 cheaper_struct; 

    function modify() public { 
        expensive_struct.a = 1; 
        expensive_struct.b = 1; 
        expensive_struct.c = 1;  
    } 

    function modify2() public { 
        cheaper_struct.d = 1; 
        cheaper_struct.e = 1; 
        cheaper_struct.f = 1;   
    } 
}

According to Remix, the cost of executing modify() was reported as 66588 gas, whereas modify2() incurred only 44760 gas. This is because assigning values to all of expensive_struct’s members requires three storage slots, while cheaper_struct requires only two.

Struct | Execution Cost (gas) | Storage Changes After Execution
expensive_struct | 66588 | slot 0x00…00 = 0x00…01; slot 0x00…01 = 0x00…01; slot 0x00…02 = 0x00…01
cheaper_struct | 44760 | slot 0x00…03 = 0x00…01…00…01; slot 0x00…04 = 0x00…01

This is because the first two elements of struct S2, which are of type uint128 and thus 16 bytes each, can be packed into the same slot. The second element of struct S1, however, is a full 32 bytes, so it occupies its own slot and prevents a and c from sharing one, leaving S1 spread across three slots.

EIP-2929 and Access Sets

Writing non-zero values to “untouched” storage slots is more expensive than writing to a slot that has already been written to. This can be confirmed by deploying the StructStorage contract and executing the modify2() function. Debug the transaction and step through until the first call to SSTORE. Observe in the Step Details window that 20000 gas is required to write to slot 3.

First writing to slot 3 via SSTORE requires 20000 gas.

Stepping through to the next SSTORE operation, which operates on the same storage slot, will instead require only 100 gas. 

Subsequent storage writes to the same slot are considerably cheaper.

This is because the Berlin hard fork of the Ethereum network in April 2021 brought various changes to the network. One such change, EIP-2929 [14], increased the gas cost of accessing “cold” (not already accessed) storage slots and account addresses relative to “warm” (previously accessed) ones. To differentiate between cold slots/addresses and ones that have been “warmed up” by prior access, the concept of Access Sets [15] was introduced.

The motivation behind this was that storage-accessing opcodes were considered underpriced in terms of gas execution fees. The 2016 Shanghai DoS attacks against the Ethereum network exploited this: the low gas fee of the EXTCODESIZE opcode was abused [16] to force nodes and miners to read state information from disk, congesting the network. The specific calculations for warm and cold reads and writes are best covered in the most recent version of the Ethereum Yellow Paper [17].

Appendix G of the Yellow Paper lays out the gas costs for common operations, including warm and cold storage accesses (G_warmaccess and G_coldsload) and full storage writes (G_sset). Note that writing a zero value to an untouched slot does not incur the full G_sset cost, as all storage slots are considered to be logically zero-initialized by default [18].

The Ethereum Yellow Paper section highlighting the differences between cold and warm access sets.

Dynamic Variable Storage

The state variables demonstrated so far have all been fixed-size types. With fixed-size state variables, the compiler can determine ahead of time where and how best to store them according to the packing rules. This is not possible for dynamically sized types such as mappings and dynamic arrays. Storage of dynamic array state variables is demonstrated in the following contract:

Dynamic arrays

pragma solidity >=0.7.0 <0.9.0; 

contract DynamicArray { 

    uint256[] ints; 
    uint256[][] int_ints; 

    function modify() public { 
        ints.push(0xaa . . . aa); 
        ints.push(0xbb . . . bb); 
        int_ints.push(ints); 
        int_ints.push(ints); 
        int_ints.push(ints); 
    }
}

Dynamic arrays are stored using a hashing scheme similar to that of the string and byte types. A dynamic array is primarily referenced by a “base” slot, which is continually updated with the number of elements currently in the array. The slot corresponding to the keccak256 hash of this base slot key then holds the first element of the array, and the key is incremented by one for each subsequent element.

This process is summarized below: the number of elements in a one-dimensional dynamic array is stored at the nth storage slot, and the ith element of the array is stored at the keccak256(n)+ith storage slot, for 0 ≤ n, i ≤ 2^256-1:

One-Dimensional Dynamic Array Storage
Slot Key | Slot Value
n | The number of elements in the array so far
. . . | . . .
keccak256(n)+(i-1) | The (i-1)th element in the array
keccak256(n)+i | The ith element in the array

In this simple example, storage will resemble the following once modify() has been executed:

"key": "0x0000. . . 0000", // slot 0x00…00  
  "value": "0x0000. . . 0002", 
"key": "0x290d . . . e563", // slot keccak256(0x00…00) 
   "value": "0xaaaa . . . aaaa",
"key": "0x290d . . . e564", // slot keccak256(0x00…00) + 0x00…01 
   "value": "0xbbbb . . . bbbb",
. . .
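
The keys in this dump can be reproduced with a small helper that applies the rule from the table above (our own sketch; names are assumptions): elementSlot(0, 0) yields the keccak256 hash of slot 0 (0x290d…e563), and elementSlot(0, 1) yields that value plus one.

pragma solidity >=0.7.0 <0.9.0;

contract DynamicArraySlots {
    // Slot holding element i of a dynamic array declared at slot n.
    function elementSlot(uint256 n, uint256 i) public pure returns (bytes32) {
        return bytes32(uint256(keccak256(abi.encode(n))) + i);
    }
}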

Multidimensional dynamic arrays are stored via the same rule applied recursively. For a two-dimensional dynamic array, the nth storage slot contains the number of sub-arrays stored so far. The number of elements in the ith sub-array is stored at the keccak256(n)+ith storage slot, and the jth element of the ith sub-array is stored at the keccak256(keccak256(n)+i)+jth storage slot, for 0 ≤ n, i, j ≤ 2^256-1:

Two-Dimensional Dynamic Array Storage
Slot Key | Slot Value
n | Number of sub-arrays in the outermost array so far
. . . | . . .
keccak256(n)+(i-1) | Number of elements in the (i-1)th sub-array
. . . | . . .
keccak256(keccak256(n)+(i-1))+(j-1) | (j-1)th element in the (i-1)th sub-array
keccak256(keccak256(n)+(i-1))+j | jth element in the (i-1)th sub-array
keccak256(n)+i | Number of elements in the ith sub-array
. . . | . . .
keccak256(keccak256(n)+i)+(j-1) | (j-1)th element in the ith sub-array
keccak256(keccak256(n)+i)+j | jth element in the ith sub-array
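
Applying the rule from the table once more gives the slot for a specific element int_ints[i][j]. The helper below is our own sketch (names are assumptions):

pragma solidity >=0.7.0 <0.9.0;

contract TwoDArraySlots {
    // Slot holding int_ints[i][j] for a two-dimensional dynamic array
    // declared at slot n.
    function elementSlot(uint256 n, uint256 i, uint256 j) public pure returns (bytes32) {
        // keccak256(n) + i holds the number of elements in sub-array i...
        uint256 subArrayLengthSlot = uint256(keccak256(abi.encode(n))) + i;
        // ...and keccak256(keccak256(n) + i) + j holds element j of that sub-array.
        return bytes32(uint256(keccak256(abi.encode(subArrayLengthSlot))) + j);
    }
}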

This is repeated for the second and third arrays within int_ints, demonstrated in the storage representation below: 

. . .
"key": "0x00 . . . 01", // slot 0x00…01 
  "value": "0x00 . . . 03"
"key": "0xb10e . . . 0cf6", // slot keccak256(0x00…01) 
   "value": "0x00 . . . 02"
"key": "0xb5d9 . . . 7d22", // slot keccak256(keccak256(0x00…01)) 
   "value": "0xaa . . . aa"
"key": "0xb5d9 . . . 7d23", // slot keccak256(keccak256(0x00…01)) + 0x00…01 
   "value": "0xbb . . . bb"
"key": "0xb1e2 . . . 0cf7", // slot keccak256(0x00…01) + 0x00…01 
   "value": "0x00 . . . 02",
"key": "0xea78 . . . bd31", // slot keccak256(keccak256(0x00…01) + 0x00…01) 
   "value": "0xaa . . . aa",
"key": "0xea78 . . . bd32", // slot keccak256(keccak256(0x00…01) + 0x00…01) + 0x00…01 
   "value": "0xbb . . . bb",
"key": "0xb1e2 . . . 0cf8", // slot keccak256(0x00…01) + 0x00…02 
   "value": "0x00 . . . 02",
"key": "0xb327 . . . e02b", // slot keccak256(keccak256(0x00…01) + 0x00…02) 
   "value": "0xaa . . . aa",
"key": "0xb327 . . . e02c", // slot keccak256(keccak256(0x00…01) + 0x00…02) + 0x00…01 
   "value": "0xbb . . . bb"

Aside – Deterministic Pitfalls – Storing Private Data 

The Storage region provides a deterministic way of storing state data in the EVM. For instance, the exact storage slot of an m-dimensional array can be predicted before a contract implementing the array is even deployed. For an m-dimensional array, the storage logic can be generalized by applying the two-dimensional heuristic recursively. This is shown in the following expression for the jth element at indices i[1] through i[m] of an m-dimensional array declared starting at slot n, where 0 ≤ n, m, i, j ≤ 2^256-1:

keccak256(keccak256(...(keccak256(keccak256(n)+i[1])+i[2])...+i[m-1])+i[m])+j

For a sufficiently populated 4-dimensional dynamic array arr[][][][], the base of which is stored at slot 3, the slot key for the element at arr[1][0][8][1] can therefore be calculated as follows, and confirmed by viewing the fourth-to-last slot key written after executing the modify() function of this contract:

keccak256(keccak256(keccak256(keccak256(0x00…03) + 0x00…01) + 0x00…00) + 0x00…08) + 0x00…01 = 0xb8928d09db2f3fc6a2c8bd4dafbdf7cd5aa6c337f2c2fad8d85a5e908c8ddf49

The deterministic nature of contract storage means that the slot key of almost any data in storage can be predicted ahead of time.
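
The calculation above can be checked with a short helper that applies the generalized expression step by step (our own sketch; names are assumptions). Per the figures above, slotOf(3, 1, 0, 8, 1) should produce the 0xb892…df49 key.

pragma solidity >=0.7.0 <0.9.0;

contract FourDArraySlots {
    // Slot holding arr[a][b][c][d] for a 4-dimensional dynamic array whose
    // base is declared at slot n.
    function slotOf(uint256 n, uint256 a, uint256 b, uint256 c, uint256 d)
        public pure returns (bytes32)
    {
        uint256 s = uint256(keccak256(abi.encode(n))) + a;
        s = uint256(keccak256(abi.encode(s))) + b;
        s = uint256(keccak256(abi.encode(s))) + c;
        return bytes32(uint256(keccak256(abi.encode(s))) + d);
    }
}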

Any data in a contract’s Storage region is publicly available using block explorers or web3 clients. For example, the contents of storage slot 0 of the live Wrapped Ethereum (WETH) contract [19] can be quickly queried using the cast [20] web3 CLI client from the Foundry development suite, given free-tier Infura [21] API access or another RPC endpoint: 

$ cast storage 0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2 0 --rpc-url 'https://mainnet.infura.io/v3/<API-KEY>' 

0x577261707065642045746865720000000000000000000000000000000000001a

While this data is simply the string Wrapped Ether, encoded according to the previously described string storage logic, the same approach can read any slot of any contract. This is why storing sensitive data within a contract’s Storage region constitutes a security risk: contract storage is publicly readable by all.

Mappings

Mappings in Solidity are similar to hash tables or dictionaries in other languages; their storage is demonstrated by this contract:

pragma solidity >=0.7.0 <0.9.0; 

contract Mappings { 

    struct S { 
        uint256 a; 
        uint256 b;  
    } 

    mapping(uint256 => uint256 ) simple_map; 
    mapping(uint256 => S) struct_map; 
    mapping(uint256 => mapping(uint256 => S)) nested_map; 

    function modify() public { 
        simple_map[0] = 0x11 . . . 11; 
        simple_map[1] = 0x22 . . . 22; 
        struct_map[0] = S(0xaa . . . aa, 0xbb . . . bb); 
        struct_map[1] = S(0xcc . . . cc, 0xdd . . . dd); 
        nested_map[0][1] = S(0xee . . . ee, 0xff . . . ff); 
    } 
} 

Mapping values are also logically initialized to zero by default, so every possible key maps to a zero value until it is explicitly assigned. Mappings may only reside in storage, and the keys themselves are never stored; instead, the keccak256 hash of each key (combined with the mapping’s slot) determines where its value is kept. This is why it is not possible to directly query the length of a mapping in Solidity [22].

Mapping value slots are derived by concatenating the 32-byte padded mapping key (k) with the storage slot of the mapping itself (n), and taking the keccak256 hash of the result, for all 0 ≤ n, k ≤ 2^256-1:

Mapping Storage
Slot Key | Slot Value
keccak256(k . n) | Value stored for key k in the mapping declared at storage slot n
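
The rule above is what the commonly used expression keccak256(abi.encode(key, slot)) computes. The helper below is our own sketch (names are assumptions); mappingValueSlot(0, 0) reproduces the 0xad32…5fb5 key discussed next, and mappingValueSlot(1, 0) the 0xada5…9e7d key.

pragma solidity >=0.7.0 <0.9.0;

contract MappingSlots {
    // Slot holding the value for `key` in a mapping declared at slot n.
    // abi.encode concatenates the 32-byte padded key with the 32-byte
    // padded slot number before hashing.
    function mappingValueSlot(uint256 key, uint256 n) public pure returns (bytes32) {
        return keccak256(abi.encode(key, n));
    }
}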

This can be confirmed by debugging the call to modify() and observing the storage after simple_map[0] is assigned a value of 0x11..11. This value will be stored at storage slot keccak256(0x00…00 . 0x00…00) = 0xad32 . . . 5fb5:

"key": "0xad32 . . . 5fb5", // slot keccak256(0x00…00 . 00…00) 
  "value": "0x1111 . . . 1111", // value of simple_map[0]
"key": "0xada5 . . . 9e7d", // slot keccak256(0x00…01 . 00…00) 
  "value": "0x2222 . . . 2222", // value of simple_map[1] 
. . .

Mapping values that are structs or fixed-length arrays combine the storage heuristics for structs and mappings. Values of this kind are stored by deriving the slot key for the mapping entry as above, then incrementing it for each element of the value and storing the elements at the incremented slot keys. This can be observed by stepping through line 17, where the struct S(0xa…, 0xb…) is assigned as the first value in struct_map:

. . .
"key": "0xa6ee . . . cb49", // slot keccak256(0x00…00 . 00…01) 
  "value": "0xaaaa . . . aaaa", // value of struct_map[0].a 
"key": "0xa6ee . . . cb4a", // slot keccak256(0x00…00 . 00…01) + 0x00…01 
  "value": "0xbbbb . . . bbbb", // value of struct_map[0].b 
. . .

Like dynamic arrays, nested mappings are stored recursively. However, since mapping keys are never stored, a somewhat simpler heuristic is used: first, the outer key is concatenated with the mapping’s slot and hashed; then the inner key is concatenated with that hash and hashed again.

This is summarized below, where: 

  • i is the key in the outermost mapping. 
  • j is the key in the innermost mapping. 
  • n is the storage slot of the mapping. 
  • Since nested_map stores structs, the offset k is added to the hash to select the element within the struct. 

Nested Struct Mapping Storage
Slot Key | Slot Value
keccak256(j . keccak256(i . n)) + k | Struct element k of the value stored at inner key j under outer key i, for the nested mapping declared at slot n
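
Translating the table above into code (our own sketch; names are assumptions), the slot for struct element k of nested_map[i][j], with the nested mapping declared at slot n, is derived by hashing twice and then adding the element offset:

pragma solidity >=0.7.0 <0.9.0;

contract NestedMappingSlots {
    // Slot holding element k of the struct stored at nested_map[i][j],
    // where the nested mapping is declared at slot n.
    function nestedStructSlot(uint256 i, uint256 j, uint256 n, uint256 k)
        public pure returns (bytes32)
    {
        bytes32 inner = keccak256(abi.encode(i, n));     // keccak256(i . n)
        bytes32 outer = keccak256(abi.encode(j, inner)); // keccak256(j . keccak256(i . n))
        return bytes32(uint256(outer) + k);              // + struct element offset
    }
}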

nested_map takes up slot 2 and stores an instance of struct S(0xe…, 0xf…) at outer key 0 and inner key 1. Therefore, the first element of the struct will be stored at key keccak256(0x00…01 . keccak256(0x00…00 . 0x00…02)) = 0x79c0…e89d, and the second at this value incremented by one.

. . .
"key": "0x79c0 . . . e89d", // slot keccak256(0x0…01 . keccak256(0x00…00 .0x00…02)) 
  "value": "0xeeee . . . eeee", // value of nested_map[0][1].a 
"key": "0x79c0 . . . e89e", // slot keccak256(0x0…01 . keccak256(0x00…00 .0x00…02)) + 0x00…01 
  "value": "0xffff . . . ffff" // value of nested_map[0][1].b

Data Location Keywords and Reference Types 

Solidity supports three data location keywords: storage, memory, and calldata. These explicitly specify where, within the EVM execution context, the data referenced by a variable declared inside a function is located. In all examples thus far, variables have been declared outside of functions and are thus state variables stored in the Storage region.

Whether or not a variable must be prefixed with a data location keyword depends on whether its type is a reference type or a value type [23]. Value types are copied whenever they are assigned or passed, while reference types are handled via references to an underlying data location, so multiple variables can point to the same data.

Structs, arrays, and mappings are therefore reference types, and almost all other types are value types. Strings (and bytes) are dynamic arrays and are likewise reference types. State variable declarations are the only place where an explicit data location is omitted; reference types declared inside functions must be prefixed with a data location keyword, while value-type local variables cannot take a data location at all.

The storage keyword essentially produces a pointer to an already-declared state variable, referred to as a storage pointer. It is not possible to declare new state variables within a function; the storage keyword cannot be used unless a suitable state variable already exists. Updating data through the storage pointer also updates the original state variable. Used on function parameters (of internal or library functions), the storage keyword therefore allows state data to be passed by reference.

The memory keyword can be used to specify that reference types declared within functions are to be stored in memory. When used in a function argument, it results in an in-memory copy of the passed argument being created, which will be discarded at the end of the function call if not returned or persisted elsewhere.  

One use case for the memory keyword is to save gas by limiting the number of storage read/write operations: declare a reference type as a state variable, initialize it, and then create an in-memory copy of it inside the function. The in-memory copy can be manipulated with little to no SSTORE/SLOAD operations and, once processed, assigned back to the original state variable. This is particularly economical when the same data is processed over multiple iterations. Like “warm” storage, it is also cheaper to write to already-allocated memory.
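
A rough sketch of that pattern follows (our own illustration; the contract, array size, and arithmetic are assumptions): copy the state array into memory, do the repeated work there, and persist the result with a single pass of storage writes.

pragma solidity >=0.7.0 <0.9.0;

contract MemoryCache {
    uint256[10] totals;

    function accumulate(uint256 rounds) public {
        uint256[10] memory cache = totals;   // read each slot once
        for (uint256 r = 0; r < rounds; r++) {
            for (uint256 i = 0; i < cache.length; i++) {
                cache[i] += i + r;           // memory-only updates, no SSTORE
            }
        }
        totals = cache;                      // write each slot once
    }
}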

The calldata keyword specifies that function arguments reside in a temporary yet immutable region and can only be applied to function parameters. It is primarily used for data that is sent with the call to the contract and does not need to be changed during execution of the function. Using calldata is cheaper than both memory and storage. Check the official Solidity documentation for more information regarding data locations and how best to use them to write gas-efficient contracts.

This concludes part 2 of this series. In the third and final part of this series, we will take a detailed look at proxy contracts, inter-contract message calls, and their effects on storage. We will also look at how storage mismanagement and insecure message calls have resulted in major DeFi breaches, and how to limit the risk of such issues. 

References:

  1. https://github.com/ethereum/go-ethereum/blob/41f89ca94423fad523a4427c9af2c327fe344fe2/core/state/state_object.go#L40 
  2. https://github.com/ethereum/evmc/blob/58e16bf6a1be9a5f445b2dd443f018e55218e777/examples/example_host.cpp#L26  
  3. https://www.evm.codes/#54  
  4. https://www.evm.codes/#55  
  5. https://www.evm.codes/#54  
  6. https://ethereum.org/en/developers/docs/data-structures-and-encoding/patricia-merkle-trie/  
  7. https://bioinformatics.cvr.ac.uk/trie-data-structure/  
  8. https://www.investopedia.com/terms/m/merkle-tree.asp  
  9. https://www.allisons.org/ll/AlgDS/Tree/PATRICIA/  
  10. https://www.coinbase.com/bitcoin.pdf  
  11. https://docs.soliditylang.org/en/latest/internals/layout_in_storage.html#layout-of-state-variables-in-storage  
  12. https://docs.soliditylang.org/en/v0.8.20/internals/layout_in_storage.html#layout-of-state-variables-in-storage  
  13. https://docs.soliditylang.org/en/v0.8.20/internals/layout_in_storage.html#layout-of-state-variables-in-storage  
  14. https://eips.ethereum.org/EIPS/eip-2929  
  15. https://www.evm.codes/about#accesssets  
  16. https://blog.ethereum.org/2016/09/22/ethereum-network-currently-undergoing-dos-attack  
  17. https://ethereum.github.io/yellowpaper/paper.pdf  
  18. https://ethereum.org/en/developers/tutorials/yellow-paper-evm/#91-basics  
  19. https://etherscan.io/address/0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2#code  
  20. https://book.getfoundry.sh/cast/  
  21. https://www.infura.io/pricing  
  22. https://docs.soliditylang.org/en/latest/types.html#mapping-types  
  23. https://docs.soliditylang.org/en/v0.8.20/types.html#reference-types  

NetSPI Offensive Security Solutions Updates: Q2 2023

Continual commitment to advancing your security measures is essential in the evolving threat landscape. The need for powerful offensive security has never been greater as security teams face increasingly sophisticated adversaries. NetSPI is at the forefront of this goal with a team dedicated to developing and enhancing our Breach and Attack Simulation (BAS), Attack Surface Management (ASM), and Penetration Testing as a Service (PTaaS) platforms. 

Our commitment to innovation drives our product teams to work tirelessly, crafting new features and improving usability across our platforms. By listening to your feedback and conducting comprehensive interviews, we’ve gained insights into common challenges and developed solutions that empower internal teams to strengthen their offensive security. 

We’re excited to show the latest updates that will advance the way you approach offensive security. Explore these exciting new additions across our BAS, ASM, and PTaaS platforms below, or schedule a demo anytime to get access to a team of offensive security professionals to guide your strategy. 

Breach and Attack Simulation (BAS) 

NetSPI’s Breach and Attack Simulation (BAS) platform puts your detective controls to the test by leveraging advanced technology and skilled penetration testers to simulate real-world attack behaviors. BAS plays a crucial role in building resilience against threats like ransomware, fraud, denial of service, information leaks, data loss, and more. The latest updates to BAS add customization options, reduce alert fatigue, and improve reporting. 

Advanced Filtering 

Alert fatigue poses a significant challenge today. Amidst thousands of alerts, identifying the crucial ones can mean the difference between maintaining a secure environment or falling victim to an attack.  

However, with our new Advanced Filtering feature, sifting through alerts is a task of the past. You can effortlessly sort and filter data based on your priorities, such as specific threat actors, tools and malware, tested versus untested controls, and much more. The filtered information is presented in real-time charts within the platform and can be conveniently exported in CSV, JSON, and now PDF formats. Say goodbye to alert overload and embrace a streamlined approach to security management. 

PDF Export 

We are proud to introduce the PDF Export feature, which enables seamless communication across various teams — including engineers, analysts, executive leadership team members, and board members. It includes an executive summary alongside comprehensive details about the results of an engagement, catering to both technical and non-technical audiences, and fostering effective collaboration across divisions.

NetSPI's BAS PDF Export Feature

Attack Surface Management (ASM) 

Attack Surface Management makes continuous external network pentesting a reality by providing ongoing asset discovery coupled with manual exposure validation all within a centralized ASM platform. What sets our ASM offering apart from other vendors is our people — a dedicated attack surface operations team that meticulously reviews, validates, and organizes the results, alleviating the burden on you and your team and focusing remediation efforts with a prioritized list. The latest updates to ASM help clients visualize their entire attack surface and sift through alerts to find meaningful ones among the noise. 

Company Hierarchy Dashboard 

Visualizing your attack surface can be an undertaking, especially when dealing with complex company structures, subsidiaries, divisions, or mergers and acquisitions. NetSPI’s Company Hierarchy Dashboard simplifies this task.  

With just two clicks, gain a comprehensive view of your entire company and entity relationships on a single screen. This powerful tool helps identify IP addresses within subsidiaries, uncover new assets, and trace their origins, enabling effective asset management and improved security posture. 

Signal Dashboard 

Our team introduced the Signal Dashboard so you can see all the noise our ASM operations team digs into, and how we turn that into a few actionable, validated vulnerability findings. The dashboard gives you transparency into the work going on behind the scenes on your engagement. While we don’t share all alerts immediately, they do remain accessible just in case you want to take a look. 

NetSPI's ASM Signal Dashboard

1,000+ New Integration Capabilities 

Integrations play a pivotal role in enhancing your user experience, streamlining workflows across platforms, and broadening asset discovery. Recognizing the significance of this, the Gartner® Competitive Landscape: External Attack Surface Management indicates that vendors who prioritize expanding the scope of asset discovery through deeper integrations gain a competitive edge that’s passed along to customers. 

In line with this research and in response to your feedback, we’ve diligently listened and integrated the most crucial platforms, such as Jira, ServiceNow, Splunk, Microsoft Teams, GitHub, ZoomInfo, and over 1,000 others. 

Penetration Testing as a Service (Resolve™) 

NetSPI’s Penetration Testing as a Service (PTaaS) platform Resolve™ is proven to advance vulnerability discovery and speed up remediation. Our centralized technology offers real-time reporting, trending findings data, and manual prioritization by our expert analysts. The latest updates to Resolve help clients keep track of the status of engagements and easily drill down to relevant data points. 

Program Management Dashboard 

Obtaining a pentest is just the beginning, but keeping tabs on its status takes vulnerability insight to a new level. Our Program Management Dashboard simplifies understanding the test statuses, remediation progress, important dates, and beyond. Bonus! We took a page out of Domino’s book and built a visual tracker that brings transparency into progress throughout engagements. 

Data Lab Dashboard  

Introducing the revamped Data Lab Dashboard with an enhanced interface. Alongside the visual update, we expanded the capabilities, empowering security teams to construct and export personalized reports effortlessly. Simply specify the desired entity on the left, apply filters to the results grid, and drill down into more detailed information with a single click. Moreover, the data grids can be conveniently exported, providing greater flexibility in using the data. 

NetSPI's Data Lab Dashboard

We aim to meet you where you are by enhancing our technology to streamline your team’s work. If you have a feature request for our team, be sure to have a conversation with your Account Executive to relay the message to NetSPI’s Product Team!  

This article is a part of our Offensive Security solutions update series. Stay tuned for additional innovations and catch the latest updates in our platform release notes:  

Past solutions updates:  


NetSPI Unveils 2023 Offensive Security Vision Report, Shines Light on the Need for Improved Vulnerability Prioritization

Survey finds lack of resources and prioritization as the greatest barriers to timely vulnerability remediation.

Minneapolis, Minnesota – NetSPI, the global leader in offensive security, today announced the findings from its inaugural 2023 Offensive Security Vision Report, focusing on vulnerability trends across applications, cloud, and networks. The report offers a look back — and forward — at some of the most significant vulnerability patterns of the past year to help security and business leaders focus discovery, management, and remediation efforts on the riskiest vulnerabilities most likely to exist on their attack surface.

The report analyzed over 300,000 anonymized findings from thousands of pentest engagements, spanning more than 240,000 hours of testing, to identify the most prevalent vulnerabilities across various industries — which include healthcare, retail, finance, and manufacturing.  

Top findings include: 

  • On average, the highest volume of critical and high severity vulnerabilities was discovered within the government and nonprofit industry. In contrast, insurance had the lowest volume of critical and high severity vulnerabilities.  
  • Internal networks have 3x more exploitable vulnerabilities than external networks. 
  • Of the applications tested, web applications have a higher prevalence of high and critical vulnerabilities compared to mobile and thick applications. 
  • The two greatest barriers to timely and effective remediation today are a lack of resources (70%) and prioritization (60%). 
  • 71% of respondents shared that less than one-fourth of security roles budgeted were entry-level, with 46% of those reporting no plans for entry-level hiring in 2023. 

“One narrative made evident from our Offensive Security Vision Report is that vulnerability prioritization is critical,” said Vinay Anand, Chief Product Officer at NetSPI. “The reality is that we cannot fix every vulnerability discovered, but if prioritization and support continue to lack, the security industry will fall short. This realization, coupled with the industry experiencing rising burnout rates among developer teams, should evoke a sense of urgency. Our findings can help leaders grasp the severity of the situation to prioritize vulnerability management.” 

“This report makes it abundantly clear that there’s still a lot to be done to support and enable the industry to improve vulnerability management,” said Cody Chamberlain, Head of Product at NetSPI. “We hope the observations and actionable recommendations throughout our inaugural Offensive Security Vision Report are a great data-driven starting point for security teams to harden their security.”  

The 2023 Offensive Security Vision Report is available to download now. For more information about NetSPI, visit www.netspi.com.  

About NetSPI

NetSPI is the global leader in offensive security, delivering the most comprehensive suite of penetration testing, attack surface management, and breach and attack simulation solutions. Through a combination of technology innovation and human ingenuity NetSPI helps organizations discover, prioritize, and remediate security vulnerabilities. Its global cybersecurity experts are committed to securing the world’s most prominent organizations, including nine of the top 10 U.S. banks, four of the top five leading global cloud providers, four of the five largest healthcare companies, three FAANG companies, seven of the top 10 U.S. retailers & e-commerce companies, and many of the Fortune 500. NetSPI is headquartered in Minneapolis, MN, with global offices across the U.S., Canada, the UK, and India. 

Media Contacts:
Tori Norris, NetSPI
victoria.norris@netspi.com
(630) 258-0277

Jessica Bettencourt, Inkhouse for NetSPI
netspi@inkhouse.com
(774) 451-5142 


Celebrating a World of Opportunities with a World-Class Offensive Security Team 

There’s a lot to celebrate when you’re the global leader in offensive security. 

Technology innovation, global expansion, responsible vulnerability disclosures, special shoutouts from clients. Plus, personal accomplishments like job promotions, buying your first home, welcoming a new life into the world, traveling to new places, the list goes on. 

All of this and more was the focal point of our 2023 NetSPI Employee Kickoff event. We brought nearly 500 of our offensive security experts together to Minneapolis to connect face-to-face, celebrate our accomplishments, and align the entire organization on our vision for the future. This year’s theme was “A World of Opportunities” which explored the opportunities we can uncover to make a real impact as the global leader in pentesting, attack surface management, breach and attack simulation – and beyond! 

The claim “global leader in offensive security” is a bold one to make. The proof is in our impressive year-over-year growth, the skillset of our team, the comprehensiveness of our solutions, and our clients’ desire to choose us over our competition. 

The kickoff event presented a unique opportunity to reflect on what exactly got us to this point. Certainly, a lot of hard work and grit from the team, but these four core narratives were made evident throughout the course of the day. 

Collaboration and diverse thought 

“Individual commitment to a group effort – that is what makes a team work, a company work, a society work, a civilization work.”

Vince Lombardi

This quote articulates the importance of collaboration and the main reason we brought everyone together in one room (plus, I couldn’t let the day go by without at least one Green Bay Packers reference). Building relationships with one another and feeding that culture of collaboration was at the core of our 2023 kickoff. 

What we’ve built is special because of our people. But talented individuals alone are not enough. While it’s often an overused marketing term, collaboration is, and will always be, a core value of NetSPI. We pride ourselves on our ability to collaborate across boundaries and teams to uncover new offensive security solutions and solve seemingly impossible cybersecurity challenges.  

There is immense power in the diversity of thought that brings unique ideas and approaches to challenge the status quo. It was incredible to see members of the team spending time with people outside of their social circle or department, people with backgrounds and perspectives different to theirs. This is where the magic happens. 

Unwavering dedication to our clients 

Events like this are a great platform to reinforce the why behind everything we do. In our case the why is our clients, some of the world’s most prominent organizations. We are maniacally focused on customer delight, ensuring that everything we do brings value to programs and is not creating more work for security teams.  

At the end of the day, we are providing offensive security testing solutions so that businesses can innovate with confidence. 

Our keynote speaker, Brittany Hodak, spoke about how to create superfans, leaving us with the acronym SUPER (Start with your story, Understand your customer’s story, Personalize, Exceed expectations, Repeat). Her session was preceded by a panel of six NetSPI superfans, or soon-to-be superfans. 

NetSPI Chief Revenue Officer Alex Jones moderated the panel, which centered around the key pain points cybersecurity leaders face today. Interacting with and speaking to customers to understand the challenges they are dealing with is invaluable, and something we make a conscious effort to do day in and day out. Key takeaways? 

  • Cyber is not the #1 risk for businesses, it is just one of the risks. 
  • Perimeter security is paramount. Understanding what is on your perimeter should be a priority. 
  • Those who follow foundational security best practices are going to succeed. Get the basics done right. 
  • Generative AI (e.g. ChatGPT) is a real concern for security leaders. Many are evaluating the risk and building policies. 
  • Security now has a seat at the table! The role of the CISO is becoming less technical and more strategic. 
  • Media headlines around breaches and other security incidents are helping cyber leaders get executive buy-in. 

NetSPI delivers real solutions to these mission critical challenges. We are a key player in the arms race against a sophisticated and well-funded adversary. Offensive security must be scaled and adopted globally, or our clients will fall short.  

This year, we made a strategic investment in a product leader, Vinay Anand, to help us rise to this challenge and create a strong technology platform that is scalable, continuous, easy to use, and leverages intelligent automation. Aligning on this vision as an organization was one of my top highlights of the day. I don’t want to give too much away, but a unified offensive security platform is on its way.  

Maintaining the NetSPI culture and staying true to our Purpose 

Innovation dies without a strong culture. While there may be amazing ideas brought to bear, they won’t flourish without a culture of collaboration, respect, motivation, and challenging our peers. Our culture is the foundation of our success.  

We spend a massive amount of our lives at work, so it’s truly meaningful when we say on top of all this, we’re having fun! We’re disciplined; we’re focused; we’re competitive. But at the end of the day, we enjoy each other’s company and the strong culture we’ve built together. 

Our clients feel this in their interactions with us. They tell us this regularly and it’s one of the many reasons they continue to work with us. They feel supported by a team that takes their offensive security goals as seriously as they do. Our culture at NetSPI is the formula for creating superfans that help us unlock opportunities.  

Someone who embodies the NetSPI culture is our very own Eric Gruber, who was the recipient of the NetSPI Founders Award at this year’s kickoff event.

Another component of our culture is our passion for giving back, whether through involvement in the security community, volunteering our time to help others, or financial support for our philanthropic partner, the Masonic Children’s Hospital.

Last year, we raised $250,000 for Jersey, the newest facility dog at the University of Minnesota Masonic Institute for the Developing Brain. Jersey made her debut at the kickoff, and we heard first-hand how she is making an impact supporting the emotional and social needs of patients.

Jersey, the newest facility dog at the University of Minnesota Masonic Institute for the Developing Brain

Making an impact, globally 

In attendance were teams from across the US, UK, and Canada – then we took the show on the road to Pune, India, where our team of 60+ came prepared with unbelievable energy.

Most organizations have global aspirations, so who are we to think ours are so unique? Well, when you have something this special, you want others to experience it. In a very short time, we’ve ramped up top-notch teams in Canada and the UK and continue to build our Pune team. There’s no doubt in my mind that we have the top offensive security teams in those regions today. Many large multinational clients need NetSPI to have global operations, and there are many regions where we can advance and accelerate their testing programs.

Circling back to a point made earlier, innovation thrives on diversity of thought. Learning new cultures and ways of doing business is a once-in-a-lifetime experience for our team and an opportunity to bring increasingly diverse viewpoints and approaches to support our clients.

I mentioned this in the written recap of last year’s kickoff, but it’s worth reiterating: To other business leaders considering an all-employee in-person event, I couldn’t recommend it more. There’s no replacement for human connection. 

We are at an inflection point as an organization and we cannot become complacent. We continue to find ways to ensure we attract top talent, refine our processes, and stay agile while driving innovation. 

I am both honored and privileged to be a part of the global leader in offensive security. Keep an eye on this team. You won’t want to miss the world of opportunities we uncover next. I’ll leave you with this recap video to keep the energy high until next year – cheers!

Back

CRN’s 2023 Women of the Channel Honors NetSPI’s Lauren Gimmillaro

MINNEAPOLIS – NetSPI, the offensive security leader, announced today that CRN®, a brand of The Channel Company, has named Lauren Gimmillaro, Vice President of Business Development & Strategic Alliances, to the 2023 Women of the Channel list. Every year, CRN recognizes women from vendor, distributor, and solution provider organizations whose expertise and vision are leaving a noticeable and commendable mark on the technology industry.

“We are ecstatic to announce this year’s honorees and shine a light on these women for their significant achievements, knowing that what they’ve accomplished has paved the way for continued success within the IT channel,” said Blaine Raddon, CEO of The Channel Company. “The channel is stronger because of them, and we look forward to seeing what they do next.”

Lauren Gimmillaro has a track record of launching four successful partner programs, working with channel, referral, reseller, and technology partners. In August 2022, Gimmillaro led the launch of the NetSPI Partner Program, which empowers NetSPI’s global channel and technology partners to deliver offensive security services at a time when they’re needed most – a program that drove a 70 percent increase in YoY channel revenue. Gimmillaro now leads the NetSPI Partner Program, continuing to build strategic relationships between NetSPI and its partners.

“I’m honored to be recognized among this incredible list of female channel leaders. Partners play a vital role in NetSPI’s growth and expansion plans for the future,” said Gimmillaro. “It’s critical to provide end users with the tools, services, and skill sets they need to take an offensive approach to security. As we continue to grow our partner program, we’ll be looking at a variety of different partners, including MSPs/MSSPs, VARs, and vCISOs, to collectively help organizations across the globe improve security.”

The CRN 2023 Women of the Channel honorees bring their creativity, strategic thinking and leadership to bear in a variety of roles and responsibilities, but all are turning their unique talents toward driving success for their partners and customers. With this recognition, CRN honors these women for their unwavering dedication and commitment to furthering channel excellence. 

The 2023 Women of the Channel list will be featured in the June issue of CRN Magazine, with online coverage starting May 8 at www.CRN.com/WOTC.

About NetSPI

NetSPI is the offensive security leader, delivering the most comprehensive suite of penetration testing, attack surface management, and breach and attack simulation solutions. Through a combination of technology innovation and human ingenuity, NetSPI helps organizations discover, prioritize, and remediate security vulnerabilities. For over 20 years, its global cybersecurity experts have been committed to securing the world’s most prominent organizations, including nine of the top 10 U.S. banks, four of the top five leading global cloud providers, four of the five largest healthcare companies, three FAANG companies, seven of the top 10 U.S. retailers & e-commerce companies, and many of the Fortune 500. NetSPI is headquartered in Minneapolis, MN, with global offices across the U.S., Canada, the UK, and India.

About The Channel Company

The Channel Company enables breakthrough IT channel performance with our dominant media, engaging events, expert consulting and education, and innovative marketing services and platforms. As the channel catalyst, we connect and empower technology suppliers, solution providers, and end-users. Backed by more than 30 years of unequaled channel experience, we draw from our deep knowledge to envision innovative solutions for ever-evolving challenges in the technology marketplace. www.thechannelcompany.com

NetSPI Media Contacts:
Tori Norris, NetSPI
victoria.norris@netspi.com
(630) 258-0277

Back

Q&A with Michelle Eggers: An Inside Look at the SANS ICS Security Summit 2023

Most people rarely think about the systems that keep our world running. But every once in a while, it’s worth pausing to reflect on the critical infrastructure our society depends on. In this case, we’re talking about the security of industrial control systems (ICS).

When daily activities go as planned, everyone carries on. When things go awry, however, what would be a bad day for an IT application can mean taking an entire system offline in the ICS arena – and in some cases, that can endanger human safety or cause environmental disasters. In fact, there’s an entire conference dedicated to ICS security – and we’ve got the inside scoop.

NetSPI Security Consultant Michelle Eggers earned a scholarship from Dragos and SANS to attend the ICS Security Summit & Training 2023. The summit is a deep dive into the field of ICS security, creating space to share ideas, methods, and techniques for safeguarding critical infrastructure. We caught up with Michelle to hear about her experience and recap educational takeaways, memorable moments, and why ICS security is an important field of focus.

Q&A with Michelle Eggers on ICS Security Summit & Training 2023 

1. How would you summarize your experience at the SANS ICS Security Summit 2023? 

The SANS ICS Summit was a phenomenal opportunity to take a crash course in many foundational aspects of Operational Technology (OT) and the current trends surrounding the technologies used to support critical infrastructure worldwide.

I had the opportunity to sit in on the beta rollout of the new SANS ICS 310 course and took away many valuable insights, such as a comparative analysis of the ways in which ICS and IT security concerns are similar, and the areas in which they differ dramatically.

Each talk during the summit provided relevant, actionable information for ICS asset owners and operators with recommendations for navigating the current threat landscape. As a note, ransomware is by far the biggest concern facing the field at this time. In short, the conference presented zero filler and instead focused on rich information directly applicable to real-world scenarios.

SANS Five Critical Controls for ICS/OT

2. What did you find most interesting in terms of tools used to secure ICS? 

Tooling-wise, testing OT systems can be very similar to other types of penetration testing. The differences lie in the implementation. For example, Industrial Control Systems are often built on decades-old legacy hardware that may not be equipped to handle an active scan. In fact, something like a Nessus scan could easily knock out an entire system.

While this situation may be a bad day for IT applications, in the ICS arena, taking a system offline can be hazardous to human safety and in extreme situations could even lead to an environmental disaster or loss of life.  
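
For that reason, OT-focused testing often leans on passive techniques before (or instead of) active probing. The snippet below is a minimal, generic sketch of that idea (not a tool from the summit or part of NetSPI’s methodology); it assumes the Python scapy library and a hypothetical interface name of "eth0". It builds a host inventory purely by listening for ARP traffic, so nothing is ever sent to a potentially fragile device.

# Minimal sketch of passive host discovery for fragile OT networks.
# Assumes the scapy library; "eth0" is a hypothetical interface name.
from scapy.all import ARP, sniff

seen_hosts = set()

def record_host(pkt):
    # Record hosts that announce themselves via ARP instead of actively probing them.
    if ARP in pkt and pkt[ARP].psrc not in seen_hosts:
        seen_hosts.add(pkt[ARP].psrc)
        print(f"Observed host: {pkt[ARP].psrc} ({pkt[ARP].hwsrc})")

# Listen only, for 60 seconds; no packets are transmitted to the ICS devices themselves.
sniff(filter="arp", prn=record_host, iface="eth0", store=False, timeout=60)

From an inventory gathered this way, testers and asset owners can decide together which systems, if any, can safely tolerate more active interaction.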

Cybersecurity for IT systems is built upon the CIA Triad model: Confidentiality, Integrity, and Availability. When working with operational technology, confidentiality is not the top priority; safety and availability instead play a much more crucial role. The impact goes beyond the potential loss or compromise of data or dollars and extends to potentially catastrophic effects in real-time, physical scenarios.  

3. How did you get introduced to ICS security? 

I first encountered the topic of ICS security when I began my initial cybersecurity educational journey. It was not a large focus area but was mentioned in passing for general awareness purposes. I recall hearing at the time that air gaps protect much of operational technology. During the summit, however, I came to understand that many OT systems are in fact networked, and where an “air gap” does exist it is often a logical rather than a physical separation – which, as we know, creates opportunities for attacks that bypass security controls such as a misconfigured firewall.

4. What makes you passionate about ICS security? 

Most people across the globe rely daily upon manufactured products or foods, critical infrastructure services (like healthcare), or utilities such as water or power to survive. The only way to escape the need for what OT provides us would be to live completely off grid, growing our own food, creating our own medicine, and so on.  

While this sounds ideal to some, the reality is very few people are actually living this way in industrialized countries. Industrial Control Systems are a crucial component of our daily lives whether we acknowledge it or not, and keeping these systems secure is of the utmost importance. 

5. Do you have a vision for how you could merge your pentesting skillset with operational technology (OT)? 

Ah, the dream! I would love to merge my existing interests in OSINT, Social Engineering, Physical Pentesting, and my current work in Web Application Penetration Testing with OT Pentesting into a well-rounded Red Team role that would assist organizations in securing their most vital assets in a multi-tiered and comprehensive approach.  

As for OT testing branching out from web application testing, many industrial control systems have some form of network connectivity that incorporates human-machine interfaces, and these often present vulnerabilities very similar to those of IT systems around the authentication process. If forged remote authentication can be achieved against a workstation controlling a real-world, physical process, you’ve got a very serious problem on your hands.

6. What tips do you have for security professionals looking to learn more about ICS security?  

Resources abound. Everything from YouTube to a basic browser search can provide a solid starting point for a better understanding of Operational Technology. I would recommend reading about the Colonial Pipeline cyberattack, studying up on Stuxnet and the Ukraine power grid attacks, and investigating any other infrastructure attacks of interest to gain a general idea of the OT landscape and what’s at stake.  

While at the conference I also had the opportunity to chat with Robert M. Lee, SANS ICS Fellow, who has put together a wonderful blog providing a list of resources for those interested in growing their knowledge base on ICS Cybersecurity, “A Collection of Resources for Getting Started in ICS/SCADA Cybersecurity.” 

In addition, Dean Parsons has released several free PDF resources on the subject, entitled “ICS Cybersecurity Field Manual” Volumes 1-3, which will soon be available in a consolidated hardcopy edition as well. I managed to snag a signed first-edition copy of his book at the Summit and it’s an excellent read; I wholeheartedly recommend adding it to any cybersecurity resource collection.

If ICS security piques your interest, then you’ve got some reading to do! Connect with Michelle Eggers on LinkedIn for more OT insights and learn about NetSPI’s OT-centric offensive security services here.  

Back

Security Weekly: The Evolution of External Attack Surface Management (EASM)

Security Weekly interviewed NetSPI Chief Product Officer Vinay Anand live at RSAC 2023. The discussion centered on the evolution of the external attack surface management (EASM) market.

+++

Tune in to hear the full conversation on Security Weekly.

Back

ISMG: Empowering a Powerhouse of Offensive Security Solutions

In this video interview with Information Security Media Group at RSA Conference 2023, NetSPI Chief Product Officer Vinay Anand discusses:

  • The ever-expanding attack surface and its implications for offensive security;
  • NetSPI’s mission to be the leader in offensive security by understanding a customer’s exposure and adding knowledge of business context to detect the risk level of each asset;
  • The crucial importance of rapid response to offensive security.

Watch the video interview and read the full recap on ISMG.

Back

Protect Your Growing Attack Surface in a Modern Environment

Unmanaged attack surfaces are increasingly becoming a pathway for threat actors to gain access to systems, making effective attack surface management (ASM) more critical than ever before.  

According to research from Enterprise Strategy Group (ESG), more than half of businesses surveyed (52 percent) say that security operations are more difficult today than they were two years ago. The top reasons respondents indicated for increased challenges include an evolving threat landscape and a changing attack surface.  

Given the sophistication of threats today, a comprehensive attack surface management strategy can help proactively identify gaps and vulnerabilities while strengthening security controls.  

Let’s start by breaking down what an attack surface is. 

What is an Attack Surface? 

An attack surface is the sum of all the internet-facing points of entry that a threat actor could exploit to gain access to your external-facing assets, such as hardware, software, and cloud assets.

An enterprise attack surface may include digital attack surfaces, such as:  

  1. Application attack surface 
  2. Internet of Things (IoT) attack surface 
  3. Kubernetes attack surface 
  4. Network attack surface 
  5. Software attack surface 
  6. Cloud attack surface 

Other types of enterprise attack surfaces include human attack surfaces and physical attack surfaces. 
 
In our connected environment, a company’s total number of attack surfaces and overall digital footprint continue to expand, which puts external-facing assets at risk for exposures and vulnerabilities.
 
Cloud storage adoption and hybrid work environments that rely on cloud solutions are some of the top reasons for expanded attack surfaces. Another factor is merger and acquisition activity, which can introduce unknown or undocumented assets and result in unmanaged attack surfaces.

How Are Attack Vectors and Attack Surfaces Related?  

Attack vectors and attack surfaces are related because an attack surface comprises all of an organization’s attack vectors: any method a threat actor can use to gain unauthorized access to an environment. Examples of attack vectors include ransomware, malware, phishing, internal threats, misconfigurations, and compromised credentials, among many others; real-world attacks often combine several of these vectors.

As attack vectors become more complex, security teams need to identify and implement new, more effective solutions to secure attack surfaces and stay ahead of sophisticated threat actors.  

Monitoring and protecting against evolving attack vectors becomes more critical as an attack surface grows. For the purpose of this article, we’re focusing on how to effectively manage external attack surfaces since this is a common challenge many businesses face. The external attack surface remains a priority for remediation because it presents a higher risk due to its exposure to the internet. 

What is Attack Surface Management? 

Many businesses struggle to keep up with their ever-evolving attack surface. The good news is that ASM vendors equip internal teams with the data they need to make informed decisions and methodically tackle remediation efforts.
 
Attack surface management provides continuous observability and risk assessment of your organization’s entire attack surface. When coupled effectively with continuous penetration testing, attack surface management helps companies improve their attack surface visibility, asset inventory, and understanding of their critical exposures. 

More specifically, external attack surface management (EASM) is the process of identifying and managing your organization’s attack surface, specifically from the outside-in view. The goal is to identify external assets that attackers could potentially leverage and discover exposures before malicious actors do.
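
To make the outside-in idea concrete, here is a minimal sketch of one small piece of external asset discovery: enumerating subdomains from public certificate transparency logs. This is an illustrative example only, not NetSPI’s ASM platform or methodology; it assumes the publicly available crt.sh search service, the Python requests library, and a hypothetical domain of example.com.

# Minimal sketch of outside-in asset discovery via certificate transparency logs.
# Illustrative only; assumes the public crt.sh JSON endpoint and the requests library.
import requests

def discover_subdomains(domain: str) -> set:
    # Query certificate transparency records for certificates issued under the domain.
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%.{domain}", "output": "json"},
        timeout=30,
    )
    resp.raise_for_status()

    subdomains = set()
    for entry in resp.json():
        # A single record's name_value field may list several names, one per line.
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lstrip("*.").lower()
            if name == domain or name.endswith("." + domain):
                subdomains.add(name)
    return subdomains

if __name__ == "__main__":
    # Hypothetical domain used purely for illustration.
    for host in sorted(discover_subdomains("example.com")):
        print(host)

In practice, an ASM platform correlates signals like this with many other sources (DNS records, IP ranges, cloud provider inventories) and with continuous penetration testing, but even this small example shows how an outside-in view can surface assets a team did not know it owned.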

Attack Surface Management Use-Cases 

Through the attack surface, adversaries can identify and exploit exposures and vulnerabilities that give them access to your organization. If threat actors are successful, the outcomes will vary depending on the attack surface and other factors, but they will undoubtedly be negative.

Common outcomes include: 

  1. Deployment of malware on your network for the purposes of ransomware, or worse, killware. 
  2. Extraction of employee data such as social security numbers, healthcare data, and personal contact information. 

Effective asset management and change control processes are challenging, and even the most well-intentioned companies often see this as an area for improvement. The right attack surface management solution should include a combination of three core pillars: human expertise, continuous penetration testing, and prioritized exposures based on risk. 
 
Common reasons to invest in attack surface management include: 

  1. Continuous observability and risk management 
  2. Identification of external gaps in visibility 
  3. Discovery of known and unknown assets and Shadow IT 
  4. Risk-based vulnerability prioritization 
  5. Assessment of M&A and subsidiary risk 

Manage Growing Attack Surfaces with NetSPI 

NetSPI’s Attack Surface Management (ASM) platform helps security teams quickly discover and address vulnerabilities across growing attack surfaces before adversaries do.   
 
Four of the top five leading global cloud providers trust NetSPI for continuous threat and exposure management, leveraging our team, technology, and comprehensive methodology to detect known, unknown, and potentially vulnerable public-facing assets. 

Learn more about NetSPI’s attack surface management solutions or request a demo. Also check out our free Attack Surface Management Tool to search more than 800 million public records for potential attack surface exposures. 
