
Healthcare’s Guide to Ryuk Ransomware: Advice for Prevention and Remediation

Making its debut in 2018, the Ryuk ransomware strain has wreaked havoc on hundreds of businesses and is responsible for one-third of all ransomware attacks that took place in 2020. Now it is seemingly on a mission to infect healthcare organizations across the country, having already hit five major healthcare providers, disabling access to electronic health records (EHR), disrupting services, and putting sensitive patient data at risk.

The healthcare industry is surely bracing for what the U.S. Cybersecurity & Infrastructure Security Agency (CISA) warns is “an increased and imminent cybercrime threat to U.S. hospitals and healthcare providers.” What can organizations do to preemptively protect themselves? Our recommendations:

  1. Analyze what makes the healthcare industry a key target for ransomware,
  2. Educate yourself to better understand Ryuk and TrickBot, and
  3. Implement proactive cyber security strategies to thwart ransomware attacks and minimize damage from an incident (we’ll get into this more later in this post).

We’ve pulled together this Guide to Ryuk as a resource to help organizations prevent future ransomware attacks and ultimately mitigate their impact on our nation’s healthcare systems.

Why are Healthcare Providers a Target for Ransomware?

Healthcare is widely known as an industry that has historically struggled to find a balance between the continuation of critical services and cyber security. To put this into perspective, clinicians can’t stop everything and risk losing a life because their technology locked them out over a forgotten, recently changed password. So security, while critically important in a healthcare environment, is more complex due to the industry’s “always on” operational structure.

We’ve seen a definite uptick in attention paid to security at healthcare organizations, but there’s much work to be done. Securing a healthcare system is extremely challenging given its scale and complexity: it consists of many different systems, and with the addition of network-enabled devices, it becomes difficult for administrators to grasp the value of security relative to its costs. In addition, third parties, such as medical device manufacturers, also play a role. Historically, devices in hospitals, clinics, and home-healthcare environments had no security controls; there has been more of a focus on “security features” as connectivity (network, Bluetooth, etc.) has increased, yet most healthcare networks are still rife with devices that have minimal, if any, built-in security capabilities.

Healthcare is by no means the only target industry: any organization can fall victim to ransomware. However, healthcare is a prime target for two reasons:

  • It’s a gold mine for sensitive data, including Social Security numbers, payment information, birth certificates, addresses, and more. While monetizing such data may require additional effort on the part of cybercriminals, a breach of such data is a major HIPAA compliance violation that can result in heavy fines and can also harm patients if their data is leaked.
  • The criticality of the business is as high-risk as it gets. In other words, hospitals cannot afford downtime. Add a public health pandemic to the mix and the criticality increases drastically.

This sense of urgency to get systems back up and running quickly is a central reason why Ryuk is targeting the industry now. Hospitals are more likely to pay a ransom due to the potential consequence downtime can have on the organization and its patients.

Ransomware, Ryuk, and TrickBot

To understand Ryuk, it is important to first understand ransomware attacks at a fundamental level. Ransomware typically reaches a system only after a Trojan or botnet has found a vulnerable target and established a foothold. Trojans most often gain access through phishing attempts (spam emails) carrying malicious links or attachments (the payload).

In Ryuk’s case, the Trojan is TrickBot. A user clicks on a link or attachment in an email, which downloads TrickBot to the user’s computer. TrickBot then sends a beacon signal to a command and control (C2) server the attacker controls, and that server delivers the Ryuk ransomware package to the victim’s computer.

Trojans can also gain access through other types of malware, unresolved vulnerabilities, and weak configurations, though phishing is the most common attack vector. Further, TrickBot is a banking Trojan, so in addition to potentially locking up the network and holding it for ransom, it may also steal information before it installs the ransomware.

How does an organization know if it has fallen victim to ransomware, and to Ryuk specifically? It will be obvious if Ryuk has successfully infiltrated a system. Ryuk takes over the desktop screen and displays a ransom note with details on how to pay the ransom via Bitcoin:

[Image: A screenshot of Ryuk’s ransom note.]

At the technical level, an early warning sign of a ransomware attack comes from your detective controls, which, if effective, should alert on Indicators of Compromise (IoCs). Within CISA’s alert, you can find TrickBot IoCs listed along with a table of Ryuk’s MITRE ATT&CK techniques.
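To make that concrete, here is a minimal, hypothetical sketch of what alerting on IoCs can look like in practice: matching outbound connection logs against a list of indicator values. The indicator values and log format below are placeholders, not actual TrickBot IoCs – substitute the indicators published in CISA’s alert and whatever log source your environment produces.

```python
# Minimal IoC-matching sketch: flag outbound connections whose destination
# appears in a list of known-bad indicators. Indicators and log format are
# placeholders for illustration only.
import csv

KNOWN_BAD = {
    "198.51.100.23",      # placeholder C2 IP address
    "203.0.113.77",       # placeholder C2 IP address
    "malicious.example",  # placeholder C2 domain
}

def find_ioc_hits(log_path: str) -> list[dict]:
    """Return log rows whose destination matches a known indicator."""
    hits = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: timestamp, src, dst
            if row["dst"] in KNOWN_BAD:
                hits.append(row)
    return hits

if __name__ == "__main__":
    for hit in find_ioc_hits("outbound_connections.csv"):
        print(f"Possible IoC match: {hit['src']} -> {hit['dst']} at {hit['timestamp']}")
```

In practice this kind of matching belongs in your SIEM or EDR tooling rather than a standalone script, but the logic is the same: known indicators in, alerts out.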

A threat to the growing remote workforce: in order to move laterally throughout the network undetected, Ryuk relies heavily on native tools, such as Windows Remote Management (WinRM) and Remote Desktop Protocol (RDP). Read: COVID-19: Evaluating Security Implications of the Decisions Made to Enable a Remote Workforce
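As a quick spot-check against that exposure, the sketch below attempts TCP connections to the default RDP and WinRM ports on a handful of hosts. The host list is a placeholder – point it at your own external address ranges – and any of these ports answering from the internet deserves immediate follow-up.

```python
# Minimal exposure check: report hosts that accept connections on the
# default RDP and WinRM ports. Host list is a placeholder.
import socket

PERIMETER_HOSTS = ["192.0.2.10", "192.0.2.11"]  # placeholder external IPs
PORTS = {"RDP": 3389, "WinRM-HTTP": 5985, "WinRM-HTTPS": 5986}

def is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in PERIMETER_HOSTS:
    for name, port in PORTS.items():
        if is_open(host, port):
            print(f"{host}: {name} (TCP {port}) is reachable from this vantage point")
```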

Implementing Proactive Cyber Security Strategies to Thwart Ransomware Attacks

We mentioned at the start of this post that one thing organizations can do preemptively to protect themselves is to put proactive security strategies in place. Security awareness training, while important, only goes so far, as humans continue to be the greatest cyber security vulnerability. Consider this: in past NetSPI employee phishing simulation engagements, our click-rates, or fail-rates, were down to 8 percent. That is considered a success, but it still leaves an opening for bad actors. It only takes one person interacting with a malicious attachment or link for a ransomware attack to be successful.

Therefore, we support defense-in-depth as the most comprehensive strategy to prevent or contain a malware outbreak. Here are four realistic defense-in-depth tactics to implement in the near- and long-term to prevent and mitigate ransomware threats, such as Ryuk:

  1. Revisit your disaster recovery and business continuity plan. Ensure you have routine and complete backups of all business-critical data at all times and that you have stand-by, or ‘hot,’ business-critical systems and applications (usually achieved through virtualization). Perform table-top or live disaster recovery drills and validate that ransomware wouldn’t impact the integrity of your backups (a minimal sketch of one such integrity check follows this list).
  2. Separate critical data from desktops, avoid silos: Ryuk, like many ransomware strains, attempts to delete backup files. Critical patient care data and systems should be on an entirely separate network from the desktops. This way, if ransomware targets the desktop network (the most likely scenario), it cannot spread to critical hospital systems. This is a long-term, and challenging, strategy, yet well worth the time and budgetary investment, as the risk of critical data loss will always exist.
  3. Take inventory of the controls you have readily available – optimize endpoint controls: Assess your existing controls, notably email filtering and endpoint controls. Boost email filtering processes to ensure spam emails never make it to employee inboxes, mark incoming emails with a banner that notifies the user when the email comes from an external source, and give people the capability to easily report suspicious emails. Endpoint controls are essential in identifying and preventing malware. Here are six recommendations for optimizing endpoint controls:
    1. Confirm Local Administrator accounts are strictly locked down and their passwords are complex.
    2. Ensure Domain Administrator and other privileged accounts are not used for routine work, but only for those tasks that require admin access.
    3. Enable endpoint detection and response (EDR) capabilities on all laptops and desktops.
    4. Ensure that every asset that can accommodate anti-malware has it installed, including servers.
    5. Apply all security patches for all software on all devices.
    6. Disable all RDP access from the internet to any perimeter or internal network asset (no exceptions).
  4. Test your detective controls, network, and workstations:
    1. Detective control testing with adversarial simulation: Engage in a purple team exercise to determine if your detective controls are working as designed. Are you able to detect and respond to malicious activity on your network?
    2. Host-based penetration testing: Audit the build of your workstations to validate that users have least privilege and can only perform business tasks appropriate for their positions.
    3. Internal network penetration testing: Identify high impact vulnerabilities found in systems, web applications, Active Directory configurations, network protocol configurations, and password management policies. Internal network penetration tests also often include network segmentation testing to determine if the controls isolating your crown jewels are sufficient.
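As a companion to item 1 above – validating that ransomware wouldn’t impact the integrity of your backups – here is a minimal sketch of one such check: record SHA-256 hashes when a backup is taken, then re-verify them during disaster recovery drills. The paths and manifest format are assumptions, not a prescribed implementation.

```python
# Minimal backup integrity sketch: hash every file in a backup set into a
# manifest at backup time, then re-verify the manifest during DR drills.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(backup_dir: str, manifest_path: str) -> None:
    manifest = {str(p): sha256_of(p)
                for p in Path(backup_dir).rglob("*") if p.is_file()}
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

def verify_manifest(manifest_path: str) -> bool:
    manifest = json.loads(Path(manifest_path).read_text())
    ok = True
    for name, expected in manifest.items():
        p = Path(name)
        if not p.exists() or sha256_of(p) != expected:
            print(f"FAILED: {name} is missing or has been modified")
            ok = False
    return ok

# build_manifest("/mnt/backups/2020-11-01", "manifest.json")  # at backup time
# verify_manifest("manifest.json")                            # during DR drills
```

Store the manifest somewhere the backup network cannot write to; otherwise ransomware that reaches the backups could rewrite the hashes as well.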

Finally, organizations that fall victim to ransomware have three options to regain control of their systems and data.

  • Best option: Put disaster recovery and business continuity plans in motion to restore systems. Also, perform an analysis to determine the successful attack vector and remediate associated vulnerabilities.
  • Not advised: Pay the ransom. It’s a quick way to get your systems back up and running, but not advised. There is no guarantee that your systems will be unlocked (the ‘unlocker’ you receive may itself be malware), you are in effect funding adversarial activities, and it’s likely the attackers will target your organization again.
  • Rare cases: Crack the encryption key. While possible with immature ransomware groups, this is rarely successful. Encryption schemes have become more advanced, and finding a workable solution takes valuable time.

For those that have yet to experience a ransomware attack, we encourage you to use the recent Ryuk news as a jumping-off point to future-proof your security processes and prepare for the inevitability of a breach. And for those that have, now is the time to reevaluate your security protocols.


The Power of Instrumentation to Automate Components of Vulnerability Testing – from the Creator of IAST

In a recent episode of Agent of Influence, I talked with Jeff Williams, a celebrity in the cyber security space. Jeff is the co-founder and Chief Technology Officer of Contrast Security, where he developed IAST. He was also previously the co-founder and CEO of Aspect Security and was a founder of and major contributor to OWASP, where he created the OWASP Top 10 – among other things.

I wanted to share some of his insights in a blog post, but you can also listen to our interview here, on Spotify, Apple Music or wherever you listen to podcasts.

Critical Responsibilities of Cyber Security Consultants

Jeff believes that knowledge of security defenses and security vulnerabilities is critical to a career in cyber security. You actually have to learn how defenses are supposed to work. Your job as a security consultant, in a lot of respects, is to make sure those defenses are in place and that they’re working the way they’re supposed to; therefore, understanding how they’re supposed to work is critical. Jeff sees a lot of consultants who read a document about how a security defense is supposed to work and assume that’s how it does work. But that’s often not the case.

The other piece you really have to understand is how vulnerabilities work. And not just in theory – you actually have to work through them, exploit them, and learn how they work. Jeff believes that if you know about these vulnerabilities only in theory, you know nothing about them. You really have to dig in and make sure they work.

You can use things like WebGoat, which Jeff created, to start to understand them, but you should go back and recreate them. It’s not going to work the first time; you’re going to have to experiment and figure out how to make it work – which is part of the job.

Effectively Communicating Vulnerability Findings

Learning how to write up vulnerabilities is almost as important as being able to find them. It’s really important to be able to communicate your findings and get people to take action. Jeff said he’s read a number of vulnerability write-ups that fall flat because they’re too technical and don’t describe the risk, especially the risk in a business context.

Ultimately your work goes to waste if you can’t effectively communicate with others what you found and the importance of what you found.

You can read more in this article Jeff wrote on LinkedIn.

The Necessity and Benefits of IAST (Interactive Application Security Testing)

Jeff was having trouble getting his customers to succeed in their application security programs. They were getting some results and fixing some vulnerabilities, but it was a lot of work to get there. A number of years ago he wrote a paper called Enterprise Java Rootkits, which asked: what could a malicious developer do inside a major financial enterprise? Everything in that paper is still valid today – and it’s terrifying. One of the techniques he looked at was instrumentation, specifically dynamically instrumenting an app from within that same app.

This paper got Jeff thinking about instrumentation and if it could be used for good. It struck him that this was a way of getting inside the running application and watching it run. He realized he could watch a SQL injection vulnerability from soup to nuts. He could see the data come in, track that data through the application, see it get combined into a SQL query, see that query then get sent to the database, and check back on that path to see if the data went through the right defenses.

If you see that path – data coming in and going into a query without being escaped or parameterized – that’s pretty good evidence you’ve got a SQL injection vulnerability. So he started playing around with it and tested it on WebGoat.
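To illustrate the idea Jeff describes – not Contrast’s actual implementation, which instruments the running application itself – here is a conceptual sketch in Python: tag data from the request as tainted, let the taint propagate through string concatenation, and flag any SQL sink that receives tainted, unparameterized data.

```python
# Conceptual taint-tracking sketch: source -> propagation -> sink check.
# A real IAST agent does this by instrumenting the runtime, not with a
# string subclass; this just shows the reasoning.

class Tainted(str):
    """A string subclass that marks data as attacker-controlled."""
    def __add__(self, other):
        return Tainted(str(self) + str(other))
    def __radd__(self, other):
        return Tainted(str(other) + str(self))

def from_request(value: str) -> Tainted:
    """Source: anything read from the HTTP request is tainted."""
    return Tainted(value)

def execute_sql(query: str) -> None:
    """Sink: report queries built from tainted data."""
    if isinstance(query, Tainted):
        print("SQL injection detected: tainted data reached a SQL sink")
    else:
        print("Query looks safe (constant text, values bound separately)")

# Vulnerable path: user input concatenated straight into the query text.
user_id = from_request("1 OR 1=1")
execute_sql("SELECT * FROM users WHERE id = " + user_id)

# Safe path: constant query text; the value would be passed as a bind parameter.
execute_sql("SELECT * FROM users WHERE id = ?")
```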

He shared the story of the first time he found the SQL injection in WebGoat without doing anything other than adding the agent and using the application normally. He watched the log output scroll by, and then saw this line: SQL injection detected. That magic has stayed with him to this day.

It’s amazing to watch instrumentation work. It’s like getting all this fantastic information out of your application without any extra work.

IAST is also seamless from a development perspective. It happens in the background and in real time. You don’t have to have a security background or security awareness to be able to do this.

Noisy Cricket: Strengths and Weaknesses of Static Analysis and IAST

Jeff shared that OWASP realized they didn’t really know what static tools were good at and bad at, so they wanted to measure it. To do that, they created a huge test suite of almost 3,000 test cases, half of which are false positives and half of which are true positives. You then run a static tool against it, take the tool’s report, and feed it into the benchmark, which scores the report and creates charts to show the tool’s strengths and weaknesses.

It’s a low bar; none of these tests are particularly difficult. But what is surprising is how poorly the static tools do on things like data flow problems and all the injections (including command injection, SQL injection, XSS, LDAP, etc.). In response to that, the static vendors started changing their products to do better against the benchmark, which was one of the intentions of the benchmark project: set a bar so that products could get better.

Jeff noted, however, that the strategy the static tools chose was to not miss any true vulnerabilities, but basically not care about false positives. As a result, the static tools increased their identification of true positives, but at the same time, added false positives.

In response to this, Jeff wrote a tool called Noisy Cricket that finds all the true positives without caring about false positives. Basically, it says any place you use SQL, that’s SQL injection; any place you use encryption, that’s weak encryption. It reports all the results. And when you look at the results of Noisy Cricket, they’re not that different from what the static vendors are producing. It was kind of a joke, but it also demonstrates that finding all the true positives without caring about false positives provides zero value. The only value comes when you find true positives with low false positives. That’s how you measure the value, and that’s how the benchmark project scores tools.
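To make the scoring idea concrete, here is a small sketch: reward the true positive rate and penalize the false positive rate. The OWASP Benchmark’s scoring works essentially this way (Youden’s index), though treat the exact numbers below as illustrative.

```python
# Benchmark-style scoring sketch: value = true positive rate minus
# false positive rate. A tool that flags everything scores zero.

def benchmark_score(tp: int, fn: int, fp: int, tn: int) -> float:
    tpr = tp / (tp + fn)  # fraction of real vulnerabilities reported
    fpr = fp / (fp + tn)  # fraction of safe cases incorrectly reported
    return tpr - fpr

# A "Noisy Cricket" style tool that flags everything: TPR = 1.0, FPR = 1.0.
print(benchmark_score(tp=1500, fn=0, fp=1500, tn=0))      # 0.0 -> zero value

# A tool that finds most issues with few false alarms scores well.
print(benchmark_score(tp=1200, fn=300, fp=150, tn=1350))  # 0.7
```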

Jeff believes there has to be a balance, and static tools have never been able to improve in that direction. They can only bias their findings toward catching every true vulnerability (at the cost of more false positives) or toward reporting only the findings they are confident in (at the cost of missing real issues).

IAST is a great solution because of the nature of the analysis: it produces results that are not very noisy, with low rates of false positives. Static analysis, by contrast, is extremely noisy out of the box and reports a lot of false positives, though there is the opportunity to fine-tune and customize your static analysis capability. It can be a lot of work to get the false positives down to an acceptable rate, but there is value in both. Still, the low false positive rate is one of the reasons IAST really shines against static analysis techniques.

However, for certain use cases, static analysis is exactly the right tool. For example, if you’re a security researcher tasked with finding new and interesting kinds of vulnerabilities, static analysis can be a really powerful tool. In addition, if you get good at writing custom static rules, you can search your code for patterns that are specific to your codebase.


To listen to the full podcast, click here, or you can find Agent of Influence on Spotify, Apple Music, or wherever you listen to podcasts.


Shifting Left to Move Forward: Five Steps for Building an Effective Secure Code Review Program

Today, nearly every company is a software company, resulting in an unbelievable amount of code that’s subject to security issues. What’s more, a myriad of methods for identifying vulnerabilities focus heavily on a post-deployment approach, leaving security gaps between development and testing.

It’s time to shift left.

By shifting left and introducing secure code review (SCR) the moment the first line of code is written (or as early in the software development lifecycle as possible), any organization creating applications can identify real vulnerabilities well in advance of deployment, thereby increasing team productivity and thwarting future outside attacks, all while decreasing the cost of testing for vulnerabilities too late in the software development life cycle (SDLC). SCR coupled with pentest and threat modeling engagements provides the most in-depth application security testing coverage and is an essential component of any application security strategy. To learn more, read 3 Steps to Reimagine Your AppSec Program in Cyber Defense Magazine.

If not done right, an SCR program can cost an organization valuable time, money, and effort. It’s important to develop an SCR program that offers risk context, uses the right tools, has set processes and standards, and doesn’t overwhelm application teams with false positives. Ultimately, the challenge lies in the fact that creating and running an SCR program is not straightforward, and one strategy may not fit all organizations. It requires ongoing planning, discussion, and possibly organizational changes. To help, we’ve compiled five steps to get you started on the right path.

Step 1: Develop a Security Culture and Listen to Your Developers

SCR program planning should not be done by security people alone – be sure to include your experienced developers and application teams when discussing strategies for selecting the right platform, integrating tools in the development ecosystem, and assessing and improving the process.

A solid security culture also aligns code review activity toward assisting developers in writing secure code rather than appearing as an additional burden that delays the release within an already restricted timeline. In other words, look at secure code review as a way to empower developers to write secure code from the start.

Another important element of developing a strong security culture is to rotate code reviewers. This ensures that your source code is regularly reviewed with a fresh pair of eyes and that secure code reviewers get exposure to different development environments.

For enterprise organizations, hold awareness sessions during which various teams share common mistakes or top findings – without calling out the developers. And, when developers do write highly secure code, encourage them to continue to do so with rewards that they find meaningful.

Step 2: Create Simple and Effective Methodology and Processes

A lack of transparency and simplicity in an SCR program can lead to lost time and frustration, which is why streamlined, clearly documented processes, policies, and guidelines are critical for success. Information that should be made available to all teams includes items such as expectations, scan frequencies, and how to approach remediation (see below). Above all, keep it simple.

One key to keeping it simple is automation, which should be leveraged to run scans and to track issues and remediation progress. Customizing off-the-shelf tools is equally important. But don’t rely on tools solely – while automated tools are useful in finding vulnerabilities in less time, there will always be a certain category of issues that they won’t identify. Make an effort to contextualize your code review process by understanding the application’s use case and underlying framework issues. Conducting manual secure code reviews is essential to finding those hidden culprits, particularly in critical applications.
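As one example of that automation – a sketch under assumptions, not a prescribed toolchain – the script below runs Bandit (an open source SAST tool for Python code) against a repository in CI and fails the build on high-severity findings. The target path and severity threshold are choices you would adapt to your own pipeline and tools.

```python
# CI scan sketch: run Bandit over the source tree, parse its JSON report,
# and fail the pipeline if any high-severity issue is found.
import json
import subprocess
import sys

def run_bandit(target: str = "src") -> list[dict]:
    proc = subprocess.run(
        ["bandit", "-r", target, "-f", "json"],
        capture_output=True, text=True,
    )
    return json.loads(proc.stdout).get("results", [])

def main() -> int:
    high = [r for r in run_bandit() if r.get("issue_severity") == "HIGH"]
    for r in high:
        print(f"{r['filename']}:{r['line_number']} {r['issue_text']}")
    return 1 if high else 0  # non-zero exit fails the CI job

if __name__ == "__main__":
    sys.exit(main())
```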

Step 3: Plan Application Onboarding and Scan Frequency

When you’re implementing an SCR program, you need to consider multiple factors. Whether you have a large or small inventory of applications, a risk-based approach to scoping and static analysis is essential. In other words, prioritize your “crown jewels” – business-critical applications or those with sensitive data. Scan them more closely and frequently than internal applications that do not contain critical data.

In terms of scan frequency, many organizations adhere to a “once-a-year minimum” compliance guideline, scanning all applications annually or only at a few major releases. This approach does not work for every application in a portfolio, however. Instead, leverage static analysis automation as frequently as possible in the CI/CD pipeline.

If your organization is moving to an agile environment, where you do development work in sprints, another approach to application onboarding is to implement a separate code review sprint. We all know how much pressure sprints can place on teams to create new code within a constrained timeframe. By dedicating a security sprint that’s separate from the development sprint, you can achieve secure code without delay and within specified deadlines. If you decide to take this approach, we recommend implementing a lightweight secure code review at pre-commit, when new code is reviewed before it is introduced into the code base, to catch security issues early in the cycle (see the sketch below).
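For the pre-commit idea, a hook along the following lines can work. This is a sketch that assumes a Python codebase and Bandit again; it scans only the files staged for commit so the check stays lightweight, and it would be installed as .git/hooks/pre-commit (or wired into a hook framework).

```python
#!/usr/bin/env python3
# Lightweight pre-commit sketch: run Bandit on staged Python files only and
# block the commit on any high-severity finding.
import json
import subprocess
import sys

def staged_python_files() -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith(".py")]

def main() -> int:
    files = staged_python_files()
    if not files:
        return 0
    proc = subprocess.run(
        ["bandit", "-f", "json", *files], capture_output=True, text=True,
    )
    findings = json.loads(proc.stdout).get("results", [])
    high = [r for r in findings if r.get("issue_severity") == "HIGH"]
    for r in high:
        print(f"Blocked: {r['filename']}:{r['line_number']} {r['issue_text']}")
    return 1 if high else 0

if __name__ == "__main__":
    sys.exit(main())
```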

Step 4: Understand That Remediation Matters Most

Code reviews are great, but don’t stop there. Once you have your list of vulnerabilities, make sure you also have a plan to remediate them, and enable your developers to do so properly with remediation libraries and guidelines at their fingertips. Include:

  • Tools that identify security issues and give feedback as developers are coding
  • Readouts after every assessment, thereby giving developers time to examine identified issues and raise questions
  • Clear deadlines for fixing problems
  • Training to enable developers to write secure code

Step 5: Measure and Improve

What you don’t measure, you can’t improve. If you want your secure code reviews to improve over time, measure and track your progress (or lack thereof). Determine what your key metrics are and find opportunities to gather the appropriate data (a minimal sketch of computing two such metrics follows the list below). Ask yourself questions like:

  • What is your remediation rate?
  • How much time is remediation taking? Is it increasing or decreasing?
  • Are you discovering the same types of security issues repeatedly, and what action should be taken to avoid that?
  • Are developers learning anything from your vulnerability reports? That is, are they writing better code or are they reintroducing the same issues over and over?
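As a starting point for the first two questions, here is a minimal sketch that computes a remediation rate and a median time-to-remediate from exported findings data. The record format is an assumption – map it to whatever your tracking platform exports.

```python
# Metrics sketch: remediation rate and median days-to-remediate.
from datetime import date
from statistics import median

findings = [  # placeholder export; replace with your real data
    {"status": "remediated", "opened": date(2020, 6, 1),  "closed": date(2020, 6, 20)},
    {"status": "remediated", "opened": date(2020, 7, 3),  "closed": date(2020, 8, 1)},
    {"status": "open",       "opened": date(2020, 9, 15), "closed": None},
]

remediated = [f for f in findings if f["status"] == "remediated"]
remediation_rate = len(remediated) / len(findings)
days_to_fix = [(f["closed"] - f["opened"]).days for f in remediated]

print(f"Remediation rate: {remediation_rate:.0%}")
print(f"Median days to remediate: {median(days_to_fix)}")
```

Tracking these numbers quarter over quarter is what turns the questions above into an actual trend line.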

Examine your organization’s performance against your key metrics on a regular basis – quarterly or yearly. Additionally, look beyond code review activity within your own organization to what others are doing. Where do you stand amongst your peers? What are they doing differently?

The ultimate goal of any SCR program is to reduce the time, cost, and effort spent on identifying and fixing security issues that are captured later in the SDLC. The more effective your SCR program is, the less time and money will be spent on analyzing and remediating issues after an application has been deployed.


5 Things You Didn’t Know a Project Manager Could Do

But Once You’ve Experienced Them, You Can’t Live Without Them

When it comes to vulnerability management, the goal of the cyber security team is to identify, verify, and prioritize vulnerability remediation on internal, internet-facing, and cloud-based IT infrastructure. But without a project manager on the team, too often I see pentesters fall into responsibilities outside of that clearly defined goal – into areas like administration, logistics, and finance, which ultimately take the tester away from the job at hand. This is where the project manager becomes essential.

Project managers are a cross-functional and integral part of every vulnerability management program. They bear the responsibility of working effectively not only with pentesters but also with sales, finance, developers, and management, all aimed at driving a path to success for the client. Drilling down even further, the project management team also ensures that project tracking is timely, reports get to clients by the promised date, budgeting alignment is maintained, and, last but not least, that client tests are scheduled with the team to make maximum use of resource allocation.

In short, project managers are capable of much more than what’s written in their job description. To paint a picture of these capabilities and understand the value they can bring to your pentesting engagements, here are five things you didn’t know a project manager could do – but once you’ve experienced them, you can’t live without them.

Administration Services That Give a Concierge-like Experience

Project managers are a bit like concierges. It’s important that they are able to read clients and tailor their style of project delivery to best suit them. This ability gives the client that feeling of “white glove service.” But it’s more than style; it’s also the technical competence that a professional project manager can bring to a vulnerability testing program. Applying past experiences (successes and failures) to current or new clients who have never gone through a specific type of penetration test before is invaluable. Time, energy, and budget are saved. So, if your project manager asks, “have you ever been part of a (insert example here) type of pentest,” it’s out of a desire to clear some hurdles or roadblocks early in the engagement and set the stage for a smooth and successful project.

Documentation at Your Fingertips

Everyone is crazy busy these days, and our clients are no different. No one has the time to read the results of vulnerability testing in hundreds-of-pages-long PDFs that are not organized, deduplicated, or consolidated. Project managers help busy security professionals cut through the clutter and train them to use a quick, or ‘dashboard,’ view of the information rather than sifting through all the data. Eliminating the need to wade through those reports is a huge time saver and allows security teams to consume the data in real time and discern where things stand and whether any immediate action should be taken. With hackers attacking every 39 seconds, on average 2,244 times a day according to a University of Maryland study, time is critical.

Customization to Provide Information that Matters

Importantly, project managers work hard to provide the information that matters most to a client engagement. For example, information on the project status dashboard, which also includes information around the project budget, is customized as not all clients have the same needs. To design the dashboard view, project managers work as advisors and collaborators to help a client determine what is most important to see, taking their role into consideration. These customization sessions oftentimes result in a healthy back-and-forth dialogue which helps with envisioning future views of data as well.

In addition to the customized project status and financial view of data created by the project manager, NetSPI has a vulnerability management and orchestration platform, Resolve, that provides a dashboard view of penetration testing results and allows clients to dig deeper into the testing outcomes, delegate findings to different team members, have threaded discussions, and run reporting for different levels of the organization, all directly from the platform.

Logistics to Save Time and Budget, Eliminate Stress

In any particular penetration testing assignment, there can be as many as 15 people involved, from both the client side and the testing side. Imagine you’re a tester and now you have to coordinate ever-changing schedules, confirm scope, track project dates, and maintain them in the system, send out reminders, write up meeting minutes, join sales calls, attend and prepare information for monthly or quarterly client meetings – all on top of the actual testing. With a smile. In my view, it’s too much for a tester to handle, and ultimately takes them away from the important work of ethical hacking. This really comes down to customer service. Project managers live and breathe logistics so the project can thrive.

An Anchor Who Handles Issues Management Like a Pro

An ideal project manager is one who has passion for the job and puts the client first. Critically, the project manager may be in a situation where issues management skills are needed to analyze a particular client circumstance and provide workable solutions on how to move a project forward. The project manager should be the anchor of the vulnerability management program, who advocates for the client at every turn.

Historically, project managers have been very task oriented. They had a project plan, checked in with a team, assigned tasks, and checked back periodically to see the status of those tasks. That style of project management is waning, and we are now seeing project managers step into an actual leadership role. They’re leading the entire team, in addition to leading clients toward the best course of success.

