Most organizations have more vulnerabilities than they can fix at current resource levels. Halfway through 2018, the NVD is on pace to match the historic 20,000 CVEs published in 2017.
A perfect storm of circumstances can make it difficult for your threat and vulnerability management program to maintain a good security posture. Multiple scanners are required to get full coverage, which in turn piles on the work. The sheer quantity of patches, configuration changes, and code changes is daunting. Automated patch management solutions are limited by the risk of downtime, so human intervention is required for many configuration and code changes.
The growth of the cybercrime industry requires companies to accelerate the vulnerability fix cycle. Exploits come out ever faster, as malicious actors take advantage of known vulnerabilities that organizations have not yet fixed.
Organizations that prioritize vulnerabilities based on risk will maximize security resources. There’s no perfect intelligence on new exploits, and lessening the risk doesn’t mean the risk is gone. However, risk-based approaches to threat and vulnerability management offer the best path forward when vulnerabilities pile up and resources are limited.
Keeping up with a blizzard of vulnerabilities and exploits requires closing the remediation gap: the time between discovering a vulnerability and remediating it. The fundamental challenge is remediating every fix expediently. Your organization will want to move through its remediation workflows quickly to minimize effort, yet every vulnerability still requires a decision and, often, follow-up work.
Five Phases of the Vulnerability Management Process
We recommend your organization implement the following five-phase process for managing the vulnerability life cycle:
Discovery
Correlation & enrichment
Verification
Prioritization
Remediation
In addition, these five goals help document each phase of the vulnerability management lifecycle:
Identify the key stakeholders and systems involved
Determine what policies have bearing in each phase
Define the inflection points where a decision must be made
Define the junctures where communication must occur
Establish output destinations for the data flow
Move a Mountain of Vulnerabilities
Processes that look good on paper may break down in the face of real-world challenges. In your organization, different departments may own responsibility for remediation, and each may use separate systems. Uptime may be quietly prioritized over patching, with exceptions never formally requested or communicated. Code changes need to be vetted in the software development life cycle (SDLC) before being released into production. Configuration changes need to be evaluated for potential impact to running systems.
Implementation of a complete vulnerability management process is a challenge that is made easier by security orchestration tools – a topic for a future post. Defining a complete security orchestration process will help you move mountains.
This is a quick blog post covering an alternative technique for loading a .NET assembly without having to call the suspicious Assembly.LoadFile() or Assembly.Load() functions.
Not too long ago I released a tool called RunDotNetDll32 to make it easier to execute methods from .NET DLLs without going through the process of loading and executing them in PowerShell. It can be downloaded here. However, under the hood the application still called System.Reflection.Assembly.LoadFile(). Below is a C# example of this process.
While this was functional, it was not ideal if you're trying to avoid calling suspicious methods or APIs.
Enter type loading
There are a multitude of ways to use reflection, one of which is the System.Type.GetType() method. This particular method requires the DLL to be in one of two places: the same directory as the executing assembly or the GAC. With those locations as the search path, the assembly's Fully Qualified Name can be passed as a parameter and the assembly will be loaded automatically.
There is nothing particularly special about System.Reflection.AssemblyName in this context, but it does save us some effort by automatically parsing the Assembly’s Fully Qualified Name.
In PowerShell, the previous code could be equivalently executed with the following script:
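A minimal sketch of that approach, assuming a hypothetical TestAssembly.dll that exposes TestNamespace.TestClass with a static Run() method:

```powershell
# Resolve the type by its assembly-qualified name; if the assembly isn't loaded yet,
# the runtime probes the executing assembly's directory and the GAC and loads it.
$type = [Type]::GetType('TestNamespace.TestClass, TestAssembly, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null')

# Invoke the static method through reflection, without ever calling Assembly.LoadFile() or Assembly.Load().
$type.GetMethod('Run').Invoke($null, @())
```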
Again, note that the DLL being reflectively loaded needs to exist in the same directory as the executing assembly or in the GAC.
RunDotNetDll32 – Now with less Assembly.LoadFile()
Given this, an option to run a method using Type.GetType() has been added to RunDotNetDll32 along with other flags, doing away with the previously used positional parameters.
In recent years, we have seen Microsoft Azure services gathering a larger market share in the cloud space. While they’re not seeing quite the adoption that AWS has, we are running into more clients that are using Microsoft Azure services for their operations. If everything is configured correctly, this can be totally fine, but it’s pretty rare for an environment to be perfectly secured (and that’s why we do security testing). Given the increase in Azure usage, we wanted to dive deeper into automating our standard Azure testing tasks, including the enumeration of publicly available files. In this blog, we’ll go over the different types of Azure file stores and how we can potentially enumerate and access publicly available “Blob” files.
Storage Accounts
One of the issues that we’ve seen within Azure environments is publicly exposing files through storage accounts. These issues are pretty similar to the issues that have come up around public S3 buckets (A good primer here). “Storage Accounts” are Microsoft’s way of handling data storage in Azure. Each storage account has a unique subdomain assigned to it in the core.windows.net domain.
For example, if I create the netspiazure storage account, I would have netspiazure.core.windows.net assigned to the account.
This subdomain structure also extends out to the different file types within the storage accounts:
Blobs – netspiazure.blob.core.windows.net
File Services – netspiazure.file.core.windows.net
Data Tables – netspiazure.table.core.windows.net
Queues – netspiazure.queue.core.windows.net
Blobs
For the purpose of this blog, we’re just going to focus on the “Blobs”, but the other data types are also interesting. Blobs are Microsoft’s unstructured data storage objects. Most frequently, we’re seeing them used for serving static public data, but we have found blobs being used to store sensitive information (config files, database backups, credentials). Given that Google is indexing about 1.2 million PDFs from the Azure “blob.core.windows.net” subdomain, I think there’s pretty decent surface area here.
Permissions
The blobs themselves are stored within “Containers”, which are basically folders. Containers have access policies assigned to them that determine the level of public access available for the files.
If a container has a “Container” public access policy, then anonymous users can list and read any of the files that are in the container. The “Blob” public access policy still allows anonymous users to read files, but they can’t list the container files. “Blob” permissions also prevent the basic confirmation of container names via the Azure Blob Service Rest APIs.
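As a quick illustration of the difference, the anonymous list call below succeeds against a “Container” policy and fails against a “Blob” policy. The storage account and container names here are hypothetical.

```powershell
# Hypothetical storage account and container names.
$account   = 'netspiazure'
$container = 'backups'

# The Blob service list operation; this is the same request format shown in the example output later.
$uri = "https://$account.blob.core.windows.net/${container}?restype=container&comp=list"

try {
    $response = Invoke-WebRequest -Uri $uri -UseBasicParsing
    # "Container" access policy: anonymous listing works, so print the blob names.
    ([xml]($response.Content -replace '^[^<]+')).EnumerationResults.Blobs.Blob | ForEach-Object { $_.Name }
}
catch {
    # "Blob" access policy (or private): listing is denied, but known blob URLs may still be readable.
    Write-Output "Anonymous listing denied for $uri"
}
```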
Automation
Given the number of Azure environments that we’ve been running into, I wanted to automate our process for enumerating publicly available blob files. I’m partial to PowerShell, but this script could potentially be ported to other languages.
At the core of the script, we’re doing DNS lookups for blob.core.windows.net subdomains to enumerate valid storage accounts and then brute-force container names using the Azure Blob Service REST APIs. Additionally, the Bing Search API can be used within the tool to find additional storage accounts and containers that are already publicly indexed. Once a valid container name is identified, we use the Azure Blob APIs again to see if the container allows us to list files via the “Container” public access policy. To come up with valid storage account names, we start with a base name (netspi) and either prepend or postpend additional words (dev, test, qa, etc.) that come from a permutations wordlist file.
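A stripped-down sketch of that storage account discovery loop (not the actual Invoke-EnumerateAzureBlobs code; the base word and wordlist path are placeholders):

```powershell
# Placeholder base word and permutations wordlist.
$Base  = 'netspi'
$words = Get-Content .\permutations.txt

# Candidate storage account names: the base alone, plus each word prepended and appended.
$candidates = @($Base) + ($words | ForEach-Object { "$_$Base"; "$Base$_" })

foreach ($name in $candidates) {
    # If the blob subdomain resolves, the storage account exists.
    if (Resolve-DnsName "$name.blob.core.windows.net" -ErrorAction SilentlyContinue) {
        Write-Output "Found Storage Account - $name.blob.core.windows.net"
    }
}
```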
The general idea for this script along with parts of the permutations list comes from a similar tool for AWS S3 buckets – inSp3ctor
Invoke-EnumerateAzureBlobs Usage
The script has five parameters:
Base – The base name that you want to run permutations on (netspi)
Permutations – The file containing words to use with the base to find the storage accounts (netspidev, testnetspi, etc.)
Folders – The file containing potential folder names that you want to brute force (files, docs, etc.)
BingAPIKey – The Bing API Key to use for finding additional public files
OutputFile – The file to write the output to
Example Output:
PS C:\> Invoke-EnumerateAzureBlobs -Base secure -BingAPIKey 12345678901234567899876543210123
Found Storage Account - secure.blob.core.windows.net
Found Storage Account - testsecure.blob.core.windows.net
Found Storage Account - securetest.blob.core.windows.net
Found Storage Account - securedata.blob.core.windows.net
Found Storage Account - securefiles.blob.core.windows.net
Found Storage Account - securefilestorage.blob.core.windows.net
Found Storage Account - securestorageaccount.blob.core.windows.net
Found Storage Account - securesql.blob.core.windows.net
Found Storage Account - hrsecure.blob.core.windows.net
Found Storage Account - secureit.blob.core.windows.net
Found Storage Account - secureimages.blob.core.windows.net
Found Storage Account - securestorage.blob.core.windows.net
Bing Found Storage Account - notrealstorage.blob.core.windows.net
Found Container - hrsecure.blob.core.windows.net/NETSPItest
Public File Available: https://hrsecure.blob.core.windows.net/NETSPItest/SuperSecretFile.txt
Public File Available: https://hrsecure.blob.core.windows.net/NETSPItest/TaxReturn.pdf
Found Container - secureimages.blob.core.windows.net/NETSPItest123
Empty Public Container Available: https://secureimages.blob.core.windows.net/NETSPItest123?restype=container&comp=list
By default, both the “Permutations” and “Folders” parameters are set to the permutations.txt file that comes with the script. You can increase your chances for finding files by adding any client/environment specific terms to that file.
Adding in a Bing API Key will also help find additional storage accounts that contain your base word. If you don’t already have a Bing API Key set up, navigate to the “Cognitive Services” section of the Azure Portal and create a new “Bing Search v7” instance for your account. There’s a free pricing tier that will get you up to 3,000 calls per month, which will hopefully be sufficient.
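If you want to poke at the search side yourself, a rough sketch of a direct Bing Web Search v7 query for indexed blob URLs looks something like the following; the subscription key and search term are placeholders, and the endpoint reflects the v7 API:

```powershell
# Placeholder subscription key and base word.
$key   = '12345678901234567899876543210123'
$query = 'site:blob.core.windows.net secure'

# Query the Bing Web Search v7 endpoint and pull the result URLs.
$uri = "https://api.cognitive.microsoft.com/bing/v7.0/search?q=$([uri]::EscapeDataString($query))&count=50"
$results = Invoke-RestMethod -Uri $uri -Headers @{ 'Ocp-Apim-Subscription-Key' = $key }
$results.webPages.value.url
```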
If you are using the Bing API Key, you will be prompted with an Out-GridView selection window to select any storage accounts that you want to look at. There are a few publicly indexed Azure storage accounts that seem to hit on most company names. These accounts appear to index documents or data for multiple companies, so they have a tendency to show up frequently in your Bing results. Several of these accounts also have public file listing enabled, so that may give you a large list of public files that you don't care about. Either way, I've had pretty good luck with the script so far, and hopefully you do too.
If you have any issues with the script, feel free to leave a comment or pull request on the GitHub page.
Exploiting weaknesses in name resolution protocols is a common technique for performing man-in-the-middle (MITM) attacks. Two particularly vulnerable name resolution protocols are Link-Local Multicast Name Resolution (LLMNR) and NetBIOS Name Service (NBNS). Attackers leverage both of these protocols to respond to requests that fail to be answered through higher priority resolution methods, such as DNS. The default enabled status of LLMNR and NBNS within Active Directory (AD) environments allows this type of spoofing to be an extremely effective way to both gain initial access to a domain, and also elevate domain privilege during post exploitation efforts. The latter use case led to me developing Inveigh, a PowerShell based LLMNR/NBNS spoofing tool designed to run on a compromised AD host.
PS C:\users\kevin\Desktop\Inveigh> Invoke-Inveigh -ConsoleOutput Y
[*] Inveigh 1.4 Dev started at 2018-07-05T22:29:35
[+] Elevated Privilege Mode = Enabled
[+] Primary IP Address = 192.168.125.100
[+] LLMNR/NBNS/mDNS/DNS Spoofer IP Address = 192.168.125.100
[+] LLMNR Spoofer = Enabled
[+] NBNS Spoofer = Enabled
[+] SMB Capture = Enabled
[+] HTTP Capture = Enabled
[+] HTTPS Capture = Disabled
[+] HTTP/HTTPS Authentication = NTLM
[+] WPAD Authentication = NTLM
[+] WPAD Default Response = Enabled
[+] Real Time Console Output = Enabled
WARNING: [!] Run Stop-Inveigh to stop manually
[*] Press any key to stop real time console output
[+] [2018-07-05T22:29:53] LLMNR request for badrequest received from 192.168.125.102 [Response Sent]
[+] [2018-07-05T22:29:53] SMB NTLMv2 challenge/response captured from 192.168.125.102(INVEIGH-WKS2):
testuser1::INVEIGH:3E834C6F9FC3CA5B:CBD38F1537AAD7D39CE6A5BC5687373A:010100000000000071ADB439D114D401D5B48AB8C3EC8E010000000002000E0049004E00560045004900470048000100180049004E00560045004900470048002D0057004B00530031000400160069006E00760065006900670068002E006E00650074000300300049006E00760065006900670068002D0057004B00530031002E0069006E00760065006900670068002E006E00650074000500160069006E00760065006900670068002E006E00650074000700080071ADB438D114D401060004000200000008003000300000000000000000000000002000004FC481EC79C5F6BB2B29A2C828A02EC028C9FF563BE5D9597D51FD6DF29DC8BD0A0010000000000000000000000000000000000009001E0063006900660073002F006200610064007200650071007500650073007400000000000000000000000000
Throughout my time working on Inveigh, I’ve explored LLMNR/NBNS spoofing from the perspective of different levels of privilege within AD environments. Many of the updates to Inveigh along the way have actually attempted to cover additional privilege based use cases.
Recently though, some research outside of Inveigh has placed a nagging question in my head. Is LLMNR/NBNS spoofing even the best way to perform name resolution based MITM attacks if you already have unprivileged access to a domain? In an effort to obtain an answer, I kept returning to the suspiciously configured AD role which inspired the question to begin with, Active Directory-Integrated DNS (ADIDNS).
Take it from the Top
For the purpose of this write-up, I'll just recap two key areas of LLMNR/NBNS spoofing. First, without implementing some router based wizardry, LLMNR and NBNS requests are contained within a single multicast or broadcast domain respectively. This can greatly limit the scope of a spoofing attack with regards to both the affected systems and potential privilege of the impacted sessions. Second, by default, Windows systems use the following priority list while attempting to resolve name resolution requests through network based protocols (a quick way to exercise each tier individually is shown after the list):
DNS
LLMNR
NBNS
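If you want to confirm this behavior in a lab, Resolve-DnsName happens to expose switches for querying each tier on its own ('somehost' is a placeholder name):

```powershell
# Query each resolution tier individually; 'somehost' is a placeholder.
Resolve-DnsName -Name somehost -DnsOnly      # DNS only
Resolve-DnsName -Name somehost -LlmnrOnly    # LLMNR only
Resolve-DnsName -Name somehost -NetbiosOnly  # NBNS only
```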
Although not exploited directly as part of the attacks, DNS has a large impact on the effectiveness of LLMNR/NBNS spoofing due to controlling which requests fall down to LLMNR and NBNS. Basically, if a name request matches a record listed in DNS, a client won’t usually attempt to resolve the request through LLMNR and NBNS.
Do we really need to settle for anything less than the top spot in the network based name resolution protocol hierarchy when performing our attacks? Is there a simple way to leverage DNS directly? Keeping within our imposed limitation of having only unprivileged access to a domain, let’s see what we have to work with.
Domain controllers and ADIDNS zones go hand in hand. Each domain controller will usually have an accessible DNS server hosting at least the default ADIDNS zones.
The first default setting I’d like to highlight is the ADIDNS zone discretionary access control list (DACL).
As you can see, the zone has ‘Authenticated Users’ with ‘Create all child objects’ listed by default. An authenticated user is a pretty low barrier of entry for a domain and certainly covers our unprivileged access goal. But how do we put ourselves into a position to leverage this permission and what can we do with it?
Modifying ADIDNS Zones
There are two primary methods of remotely modifying an ADIDNS zone. The first involves using the RPC based management tools. These tools generally require a DNS administrator or above, so I won't bother describing their capabilities. The second method is DNS dynamic updates. Dynamic updates is a DNS-specific protocol designed for modifying DNS zones. Within the AD world, dynamic updates is primarily leveraged by machine accounts to add and update their own DNS records.
This brings us to another default ADIDNS zone setting of interest, the enabled status of secure dynamic updates.
Dynamic Updates
Last year, in order to leverage this default setting more easily during post exploitation, I developed a PowerShell DNS dynamic updates tool called Invoke-DNSUpdate.
PS C:\Users\kevin\Desktop\Powermad> Invoke-DNSupdate -DNSType A -DNSName test -DNSData 192.168.125.100 -Verbose
VERBOSE: [+] TKEY name 648-ms-7.1-4675.57409638-8579-11e7-5813-000c296694e0
VERBOSE: [+] Kerberos preauthentication successful
VERBOSE: [+] Kerberos TKEY query successful
[+] DNS update successful
The rules for using secure dynamic updates are pretty straightforward once you understand how permissions are applied to the records. If a matching DNS record name does not already exist in a zone, an authenticated user can create the record. The creator account will receive ownership/full control of the record.
If a matching record name already exists in the zone, the authenticated user will be prevented from modifying or removing the record unless the user has the required permission, such as the case where a user is an administrator. Notice that I’m using record name instead of just record. The standard DNS view can be confusing in this regard. Permissions are actually applied based on the record name rather than individual records as viewed in the DNS console.
For example, if a record named ‘test’ is created by an administrator, an unprivileged account cannot create a second record named ‘test’ as part of a DNS round robin setup. This also applies across multiple record types. If a default A record exists for the root of the zone, an unprivileged account cannot create a root MX record for the zone since both records are internally named ‘@’. Further along in this post, we will take a look at DNS records from another perspective which will offer a better view of ADIDNS records grouped by name.
Below are default records that will prevent an unprivileged account from impacting AD services such as Kerberos and LDAP.
There are few limitations for record types that can be created through dynamic updates with an unprivileged user. The permitted types are only restricted to those that are supported by the Windows server dynamic updates implementation. Most common record types are supported. Invoke-DNSUpdate itself currently supports A, AAAA, CNAME, MX, PTR, SRV, and TXT records. Overall, secure dynamic updates alone is certainly exploitable if non-existing DNS records worth adding can be identified.
Supplementing LLMNR/NBNS Spoofing with Dynamic Updates
In a quest to weaponize secure dynamic updates to function in a similar fashion to LLMNR/NBNS spoofing, I looked at injecting records into ADIDNS that matched received LLMNR/NBNS requests. In theory, a record that falls down to LLMNR/NBNS shouldn’t exist in DNS. Therefore, these records are eligible to be created by an authenticated user. This method is not practical for rare or one time only name requests. However, if you keep seeing the same requests through LLMNR/NBNS, it may be beneficial to add the record to DNS.
The upcoming version of Inveigh contains a variation of this technique. If Inveigh detects the same LLMNR/NBNS request from multiple systems, a matching record can be added to ADIDNS. This can be effective when systems are sending out LLMNR/NBNS requests for old hosts that are no longer in DNS. If multiple systems within a subnet are trying to resolve specific names, outside systems may also be trying. In that scenario, injecting into ADIDNS will help extend the attack past the subnet boundary.
PS C:\users\kevin\Desktop\Inveigh> Invoke-Inveigh -ConsoleOutput Y -DNS Y -DNSThreshold 4
[*] Inveigh 1.4 Dev started at 2018-07-05T22:32:37
[+] Elevated Privilege Mode = Enabled
[+] Primary IP Address = 192.168.125.100
[+] LLMNR/NBNS/mDNS/DNS Spoofer IP Address = 192.168.125.100
[+] LLMNR Spoofer = Enabled
[+] DNS Injection = Enabled
[+] SMB Capture = Enabled
[+] HTTP Capture = Enabled
[+] HTTPS Capture = Disabled
[+] HTTP/HTTPS Authentication = NTLM
[+] WPAD Authentication = NTLM
[+] WPAD Default Response = Enabled
[+] Real Time Console Output = Enabled
WARNING: [!] Run Stop-Inveigh to stop manually
[*] Press any key to stop real time console output
[+] [2018-07-05T22:32:52] LLMNR request for dnsinject received from 192.168.125.102 [Response Sent]
[+] [2018-07-05T22:33:00] LLMNR request for dnsinject received from 192.168.125.100 [Response Sent]
[+] [2018-07-05T22:35:00] LLMNR request for dnsinject received from 192.168.125.104 [Response Sent]
[+] [2018-07-05T22:41:00] LLMNR request for dnsinject received from 192.168.125.105 [Response Sent]
[+] [2018-07-05T22:50:00] LLMNR request for dnsinject received from 192.168.125.106 [Response Sent]
WARNING: [!] [2018-07-05T22:33:01] DNS (A) record for dnsinject added
Remembering the Way
While trying to find an ideal secure dynamic updates attack, I kept hitting roadblocks with either the protocol itself or the existence of default DNS records. Since, as mentioned, I had planned on rolling a dynamic updates attack into Inveigh, I started thinking more about how the technique would be employed during penetration tests. To help testers confirm that the attack would even work, I realized that it would be helpful to create a PowerShell function that could view ADIDNS zone permissions through the context of an unprivileged account. But how would I even remotely enumerate the DACL without access to the administrator only tools? Some part of my brain that obliviously hadn’t been taking part in this ADIDNS research immediately responded with, “the zones are stored in AD, just view the DACL through LDAP.”
LDAP…
…there’s another way into the zones that I haven’t checked.
Reviewing the topic that I likely ran into during my days as a network administrator, I found that the ADIDNS zones are currently stored in either the DomainDNSZones or ForestDNSZones partitions.
LDAP provides a method for ‘Authenticated Users’ to modify an ADIDNS zone without relying on dynamic updates. DNS records can be added to an ADIDNS zone directly through LDAP by creating an AD object of class dnsNode.
With this simple understanding, I now had a method of executing the DNS attack I had been chasing the whole time.
Wildcard Records
Wildcard records allow DNS to function in a very similar fashion to LLMNR/NBNS spoofing. Once you create a wildcard record, the DNS server will use the record to answer name requests that do not explicitly match records contained in the zone.
PS C:\Users\kevin\Desktop\Powermad> Resolve-DNSName NoDNSRecord
Name Type TTL Section IPAddress
---- ---- --- ------- ---------
NoDNSRecord.inveigh.net A 600 Answer 192.168.125.100
Unlike LLMNR/NBNS, requests for fully qualified names matching a zone are also resolved.
PS C:\Users\kevin\Desktop\Powermad> Resolve-DNSName NoDNSRecord2.inveigh.net
Name Type TTL Section IPAddress
---- ---- --- ------- ---------
NoDNSRecord2.inveigh.net A 600 Answer 192.168.125.100
With dynamic updates, my wildcard record injection efforts were prevented by limitations within dynamic updates itself. Dynamic updates, at least the Windows implementation, just doesn’t seem to process the ‘*’ character correctly. LDAP however, does not have the same problem.
Taking a step back, let’s look at how DNS nodes are used to form an ADIDNS record. The main structure of the record is stored in the dnsRecord attribute. This attribute defines elements such as the record type, target IP address or hostname, and static vs. dynamic classification.
All of the key record details outside of the name are stored in dnsRecord. If you are interested, more information for the attribute’s structure can be found in MS-DNSP.
I created a PowerShell function called New-DNSRecordArray which can create a dnsRecord array for A, AAAA, CNAME, DNAME, MX, NS, PTR, SRV, and TXT record types.
PS C:\Users\kevin\Desktop\Powermad> $dnsRecord = New-DNSRecordArray -Type A -Data 192.168.125.100
PS C:\Users\kevin\Desktop\Powermad> [System.Bitconverter]::ToString($dnsrecord)
04-00-01-00-05-F0-00-00-BA-00-00-00-00-00-02-58-00-00-00-00-79-D8-37-00-C0-A8-7D-64
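Powermad wraps the LDAP add for you, but a rough ADSI sketch of creating a wildcard dnsNode with that dnsRecord array looks something like the following; the domain controller hostname and distinguished names match the lab domain used in this post and are otherwise assumptions:

```powershell
# Bind to the inveigh.net zone object in the DomainDnsZones partition (paths are lab specific).
$zoneDN = 'DC=inveigh.net,CN=MicrosoftDNS,DC=DomainDnsZones,DC=inveigh,DC=net'
$zone   = New-Object System.DirectoryServices.DirectoryEntry("LDAP://inveigh-dc1.inveigh.net/$zoneDN")

# Create a dnsNode named '*' and attach the dnsRecord byte array built by New-DNSRecordArray above.
$node = $zone.Children.Add('DC=*', 'dnsNode')
$node.Properties['dnsRecord'].Add([byte[]]$dnsRecord) | Out-Null
$node.CommitChanges()
```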
As I previously mentioned, LDAP offers a better view of how DNS records with a matching name are grouped together. A single DNS node can have multiple lines within the dnsRecord attribute. Each line represents a separate DNS record of the same name. Below is an example of the multiple records all contained within the dnsRecord attribute of a node named ‘@’.
Lines can be added to a node’s dnsRecord attribute by appending rather than overwriting the existing attribute value. The PowerShell function I created to perform attribute edits, Set-ADIDNSNodeAttribute, has an ‘Append’ switch to perform this task.
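In raw ADSI terms, an append just adds another value to the multi-valued attribute rather than replacing it; with the $node object from the sketch above, that looks roughly like this ($anotherDnsRecord is a placeholder for a second array built with New-DNSRecordArray):

```powershell
# Append a second record value to the existing dnsRecord attribute instead of overwriting it.
$node.Properties['dnsRecord'].Add([byte[]]$anotherDnsRecord) | Out-Null
$node.CommitChanges()
```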
ADIDNS Syncing and Replication
When modifying an ADIDNS zone through LDAP, you may observe a delay between when the node is added to LDAP and when the record appears in DNS. This is due to the fact that the DNS server service is using its own in-memory copy of the ADIDNS zone. By default, the DNS server will sync the in-memory copy with AD every 180 seconds.
In large, multi-site AD infrastructures, domain controller replication time may be a factor in ADIDNS spoofing. To fully leverage the reach of added records within an enterprise, the attack time will need to extend past replication delays. By default, replication between sites can take up to three hours. To cut down on delays, start the attack by targeting the DNS server which will have the biggest impact. Although adding records to each DNS server in an environment in order to jump ahead of replication will work, keep in mind that AD will need to sort out the duplicate objects once replication does take place.
SOA Serial Number
Another consideration when working with ADIDNS zones is the potential presence of integrated DNS servers on the network. If a server is hosting a secondary zone, the serial number is used to determine if a change has occurred. Luckily, this number can be incremented when adding a DNS node through LDAP. The incremented serial number needs to be included in the node’s dnsRecord array to ensure that the record is copied to the server hosting the secondary zone. The zone’s SOA serial number will be the highest serial number listed in any node’s dnsRecord attribute. Care should be taken to only increment the SOA serial number by one so that a zone’s serial number isn’t unnecessarily increased by a large amount. I have created a PowerShell function called New-SOASerialNumberArray that simplifies the process.
The SOA serial number can also be obtained through nslookup.
PS C:\Users\kevin\Desktop\Powermad> nslookup
Default Server: UnKnown
Address: 192.168.125.10
> set type=soa
> inveigh.net
Server: UnKnown
Address: 192.168.125.10
inveigh.net
primary name server = inveigh-dc1.inveigh.net
responsible mail addr = hostmaster.inveigh.net
serial = 255
refresh = 900 (15 mins)
retry = 600 (10 mins)
expire = 86400 (1 day)
default TTL = 3600 (1 hour)
inveigh-dc1.inveigh.net internet address = 192.168.125.10
The gathered serial number can be fed directly to New-SOASerialNumberArray. In this scenario, New-SOASerialNumberArray will skip connecting to a DNS server and instead it will use the specified serial number.
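If you would rather stay in PowerShell, Resolve-DnsName can pull the same serial number; the zone and DNS server values below are from the lab above:

```powershell
# Grab the zone's current SOA serial number and increment it by one for the new node's dnsRecord array.
$serial = (Resolve-DnsName -Name 'inveigh.net' -Type SOA -Server 192.168.125.10).SerialNumber
$serial + 1
```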
Maintaining Control of Nodes
To review, once a node is created by an authenticated user, the creator account will have ownership/full control of the node. The ‘Authenticated Users’ principal itself will not be listed at all within the node’s DACL. Therefore, losing access to the creator account can create a scenario where you will not be able to remove an added record. To avoid this, the dNSTombstoned attribute can be set to ‘True’ upon node creation.
Having the creator account’s ownership and full control permission listed on a node can make things really easy on the blue team in the event they discover your record. Although changing node ownership is possible, a token with the SeRestorePrivilege is required.
Node Tombstoning
Record cleanup isn’t as simple as just removing the node from LDAP. If you do, the record will hang around within the DNS server’s in-memory zone copy until the service is restarted or the ADIDNS zone is manually reloaded. The 180 second AD sync will not remove the record from DNS.
When a record is normally deleted in an ADIDNS zone, the record is removed from the in-memory DNS zone copy and the node object remains in AD. To accomplish this, the node’s dNSTombstoned attribute is set to ‘True’ and the dnsRecord attribute is updated with a zero type entry containing the tombstone timestamp.
Generating your own valid zero type array isn’t necessarily required to remove a record. If the dnsRecord attribute is populated with an invalid value, such as just 0x00, the value will be switched to a zero type array during the next AD/DNS sync.
For cleanup, I’ve created a PowerShell function called Disable-ADIDNSNode which will update the dNSTombstoned and dnsRecord attributes.
The cleanup process is a little different for records that exist as a single dnsRecord attribute line within a multi-record DNS node. Simply remove the relevant dnsRecord line and wait for sync/replication. Set-ADIDNSNodeAttribute can be used for this task.
One note regarding tombstoned nodes in case you decide to work with existing records through either LDAP or dynamic updates. The normal record aging process will also set the dNSTombstoned attribute to ‘True’. Records in this state are considered stale, and if enabled, ready for scavenging. If scavenging is not enabled, these records can hang around in DNS for a while. In my test labs without enabled scavenging, I often find stale records that were originally registered by machine accounts. Caution should be taken when working with stale records. Although they are certainly potential targets for attack, they can also be overwritten or deleted by mistake.
Node Deletion
Fully removing the DNS records from both DNS and AD to better cover your tracks is possible. The record needs to first be tombstoned. Once the AD/DNS sync has occurred to remove the in-memory record, the node can be deleted through LDAP. Replication however makes this tricky. Simply performing these two steps quickly on a single domain controller will result in only the node deletion being replicated to other domain controllers. In this scenario, the records will remain within the in-memory zone copies on all but one domain controller. During penetration tests, tombstoning is probably sufficient for cleanup and matches how a record would normally be deleted from ADIDNS.
Defending Against ADIDNS Attacks
Unfortunately, there are no known defenses against ADIDNS attacks.
Oh alright, there are easily deployed defenses and one of them actually involves using a wildcard record to your advantage. The simplest way to disrupt potential ADIDNS spoofing is to maintain control of critical records. For example, creating a static wildcard record as an administrator will prevent unprivileged accounts from creating their own wildcard record.
The records can be pointed at your black-hole method of choice, such as 0.0.0.0.
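On a DNS server (or remotely with the DnsServer module and appropriate rights), creating that record can be as simple as the following; the zone name is illustrative:

```powershell
# Create a static, administrator-owned wildcard A record pointed at a black-hole address.
Add-DnsServerResourceRecordA -ZoneName 'inveigh.net' -Name '*' -IPv4Address '0.0.0.0'
```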
An added bonus of an administrator controlled wildcard record is that the record will also disrupt LLMNR/NBNS spoofing. The wildcard will satisfy all name requests for a zone through DNS and prevent requests from falling down to LLMNR/NBNS. I would go so far as to recommend administrator controlled wildcard records as a general defense for LLMNR/NBNS spoofing.
You can also modify an ADIDNS zone’s DACL to be more restrictive. The appropriate settings are environment specific. Fortunately, the likelihood of having an actual requirement for allowing ‘Authenticated Users’ to create records is probably pretty low. So, there certainly may be room for DACL hardening. Just keep in mind that limiting record creation to only administrators and machine accounts may still leave a lot of opportunities for attack without also maintaining control of critical records.
And the Winner Is?
The major advantage of ADIDNS spoofing over LLMNR/NBNS spoofing is the extended reach and the major disadvantage is the required AD access. Let’s face it though, we don’t necessarily need a better LLMNR/NBNS spoofer. Looking back, NBNS spoofing was a security problem long before LLMNR joined the game. LLMNR and NBNS spoofing both continue to be effective year after year. My general recommendation, having worked with ADIDNS spoofing for a little while now, would be to start with LLMNR/NBNS spoofing and add in ADIDNS spoofing as needed. The LLMNR/NBNS and ADIDNS techniques actually complement each other pretty well.
To help you make your own decision, the following table contains some general traits of ADIDNS, LLMNR, and NBNS spoofing:
| Trait | ADIDNS | LLMNR | NBNS |
| --- | --- | --- | --- |
| Can require waiting for replication/syncing | X | | |
| Easy to start and stop attacks | | X | X |
| Exploitable when default settings are present | X | X | X |
| Impacts fully qualified name requests | X | | |
| Requires constant network traffic for spoofing | | X | X |
| Requires domain credentials | X | | |
| Requires editing AD | X | | |
| Requires privileged access to launch attack from a compromised system | | | X |
| Targets limited to the same broadcast/multicast domains as the spoofer | | X | X |
Disclaimer: There are still lots of areas to explore with ADIDNS zones beyond just replicating standard LLMNR/NBNS spoofing attacks.
Tools!
I’ve released an updated version of Powermad which contains several functions for working with ADIDNS, including the functions shown in the post: