The Cloud is one of the “new big things” in IT and security, and I hate it. To be clear, I don’t actually hate the concept of The Cloud (I’ll get to that in a minute); rather, I hate the term. According to Wikipedia, cloud computing is “the delivery of computing as a service rather than a product, whereby shared resources, software, and information are provided to computers and other devices as a utility (like the electricity grid) over a network (typically the Internet).” What this pretty much amounts to is outsourcing.

There are a lot of reasons that people “move to The Cloud,” and I’m not going to dive into them all; suffice it to say that it comes down to cost. The efficiencies that Cloud providers are able to leverage typically allow them to operate at lower cost than most organizations would incur accomplishing the same task. Who doesn’t like better efficiency and cost savings?

But what is cloud computing really? Some people use the term to refer to infrastructure as a service (IaaS): an environment that sits on someone else’s servers and is typically virtualized and dynamically scalable (remember that whole efficiency / cost savings thing). A good example of an IaaS provider is Amazon Web Services. Software as a service (SaaS) is also a common, and not particularly new, concept that leverages The Cloud. There are literally thousands of SaaS providers, but some of the better known ones are Salesforce.com and Google Apps. Platform as a service (PaaS) is a less well-known term, but the concept is familiar: PaaS provides the building blocks for hosted custom applications. Often, PaaS and IaaS solutions are integrated; an example of a PaaS provider is Force.com. The Private Cloud is also generating some buzz with packages such as Vblock and OpenStack; really, these are just virtualized infrastructures.
I’m currently at the Hacker Halted 2011 conference in Miami (a fledgling but well-organized event), and one of the presentation tracks is dedicated to The Cloud. There have been some good presentations, but both presenters and audience members have struggled a bit with defining what they mean by The Cloud. One presenter stated that “if virtualization is involved, it is usually considered to be a cloud.” If we’re already calling it virtualization, why do we also need to call it The Cloud? To be fair, The Cloud is an appropriate term in some ways because it represents the nebulous boundaries of modern IT environments. No longer is an organization’s IT infrastructure bound by company-owned walls; it is an amalgamation of company-managed and third-party-managed services, networks, and applications. Even so, The Cloud is too much of a vague marketing term for my taste. Rather than lumping every Internet-based service together in a generic bucket, we should say what we really mean. Achieving good security and compliance is already difficult within traditional corporate environments. Let’s at least all agree to speak the same language.
Mobile computing technology is hardly a recent phenomenon but, with the influx of mobile devices such as smartphones and tablet computers into the workplace, the specter of malicious activity being initiated by or through these devices looms large. However, generally speaking, an information security toolkit that includes appropriate controls for addressing threats presented by corporate laptops should also be able to deal with company-owned smartphones. My recommendations for mitigating the risk of mobile devices in your environment include the following:
At the 2011 Black Hat conference, security researcher Jay Radcliffe demonstrated what many healthcare security professionals have been concerned about: hacking a medical device. Medical devices have developed from isolated islands into systems with embedded operating systems that communicate with other applications, and as a result, a new threat window has opened. Apart from the obvious benefits that such advancements have brought to healthcare, they also bring some responsibilities. Since Mr. Radcliffe’s presentation there has been a lot of discussion about the security of insulin pumps and what the manufacturer should do. However, I’d like to discuss the broader topic, and maybe from a slightly different angle.

Speaking generally about medical device security, there is a lot of confusion about what can be done to ensure that privacy and security are maintained on, for all intents and purposes let’s call them, “smart” devices. Many individuals will say that FDA-regulated devices cannot be altered in any way. However, the FDA itself has published articles going back a couple of years now indicating that this is incorrect. Aware of such misinterpretation, a November 2009 post clearly reminds readers that “cybersecurity for medical devices and their associated communication networks is a shared responsibility between medical device manufacturers and medical device user facilities.” That’s a powerful statement, and one that some may, upon first read, think unfair. It doesn’t say that security is solely the responsibility of the device manufacturer; it is also the responsibility of the organization that uses, distributes, and maintains the devices. If a pump or other medical device that transmits information and/or receives instructions remotely (such as a heart pump) fails, the patient will most likely go back to the covered entity for a reason. It doesn’t matter whether the pump was damaged, altered maliciously, or simply had a design flaw: both organizations will take a public relations hit.
So what does this mean for covered entities? Devices used and distributed by covered entities should have had security built into the design process and should allow for updates if necessary. For example, if a device runs a Windows operating system, how will it receive updates, and which department will be responsible for that? If you’d like to get more involved in this type of discussion, check out the HIMSS Medical Device Security Work Group or the FDA draft guidance, which is currently out for comments.
When it comes to the application of security controls, many organizations have gotten pretty good at selecting and implementing technologies that create defense-in-depth. Network segmentation, authorization and access control, and vulnerability management are all fairly well understood and generally practiced by companies these days. However, many organizations are still at risk because they can’t answer a simple question: where is our sensitive data? It should go without saying, but if a company can’t identify the locations where sensitive data is stored, processed, or transmitted, it will have a pretty hard time implementing controls that effectively protect that data.

Two effective methods for identifying sensitive data repositories and transmission channels are data flow mapping and automated data discovery. A comprehensive and accurate approach will include both. Note, of course, that both methods assume you have already defined which types of data are considered sensitive; if this is not the case, you will need to go through a data classification exercise and create a data classification policy.

Data flow mapping is exactly what it sounds like: a table-top exercise to identify how sensitive data enters the organization and where it goes once inside. Data flow mapping is typically interview-centric, as you will need to dig into the business processes that manipulate, move, and store sensitive data. Depending on the size and complexity of your organization, data flow mapping could be either very straightforward or extremely complicated. However, it is the only reliable way to determine the actual path that sensitive data takes through your organization. As you conduct your interviews, remember that you want to identify all the ways that sensitive data is input into a business process, where it is stored and processed, who handles it and how, and what the outputs are.
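To make the idea concrete, here is a minimal sketch of how the results of such interviews might be captured as structured records, with a simple consistency check that flags outputs no downstream process claims as an input. All process names and fields here are illustrative assumptions, not output from any particular tool or methodology.

```python
# Illustrative data flow map: one record per business process, capturing
# the inputs, storage locations, handlers, and outputs from interviews.
flows = [
    {
        "process": "Order intake",
        "data_types": ["cardholder data"],
        "inputs": ["web checkout form"],
        "stored_in": ["orders database"],
        "handled_by": ["sales operations"],
        "outputs": ["payment batch file"],
    },
    {
        "process": "Payment settlement",
        "data_types": ["cardholder data"],
        "inputs": ["payment batch file"],
        "stored_in": ["settlement archive"],
        "handled_by": ["finance"],
        "outputs": [],
    },
]

def dangling_outputs(flows):
    """Return (process, output) pairs where an output is not consumed by
    any process's inputs -- a hint that the map is incomplete or that
    data is leaving the organization through an untracked channel."""
    all_inputs = {i for flow in flows for i in flow["inputs"]}
    return [
        (flow["process"], out)
        for flow in flows
        for out in flow["outputs"]
        if out not in all_inputs
    ]
```

In this sketch, `dangling_outputs(flows)` returns an empty list because the payment batch file produced by order intake is consumed by settlement; a non-empty result would tell you which interview to revisit.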
Make sure that you get multiple perspectives on individual business processes as validation, and match up the outputs of one process with the inputs of another. It is not uncommon for employees in one business unit to have misunderstandings about other processes; your goal is to piece together the entire puzzle.

Automated data discovery does a poor job of shedding light on the mechanisms that move sensitive data around an organization, but it can be very valuable for validating assumptions, identifying exceptions, and revealing the true size of certain data repositories. There are a number of free and commercial tools that can be used for data discovery (one of the most popular free tools is Cornell University’s Spider tool), but they all aim to accomplish the same objective: provide you with a list of files and repositories that contain data you have defined as sensitive. Good places to start your discovery include network shares, databases, portal applications, home drives on both servers and workstations, and email inboxes. Be aware that most discovery tools will require you to provide or select a regular expression that matches the format of particular data fields; however, some more advanced commercial tools also provide signature learning features.

Ultimately, your data discovery exercise should result in a much improved understanding of how sensitive data passes through your organization and where it is stored. The next step is to determine how to apply controls based on where data is stored, processed, and transmitted. Also, where necessary, business processes may need to be adjusted in order to consolidate data and meet data protection requirements. While identification of sensitive data is only the first phase in a process that will result in better data security and reduced risk, it is an absolutely critical step if the application of security controls is to be effective.
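The core of the regular-expression approach these tools take can be sketched in a few lines. The following is a simplified illustration, not a substitute for a real discovery tool: the patterns are naive assumptions (production tools ship tuned expressions plus validation such as Luhn checks for card numbers, which dramatically reduce false positives).

```python
import os
import re

# Naive example patterns for illustration only; real discovery tools use
# much more robust expressions and post-match validation.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def discover(root):
    """Walk a directory tree and return (path, label) pairs for files
    that appear to contain data matching a sensitive-data pattern."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                # Read as text, ignoring bytes that don't decode.
                with open(path, "r", errors="ignore") as handle:
                    text = handle.read()
            except OSError:
                continue  # unreadable file; skip it
            for label, pattern in PATTERNS.items():
                if pattern.search(text):
                    findings.append((path, label))
    return findings
```

Running `discover("/path/to/share")` against a network share would yield a list such as `[("…/hr/roster.txt", "ssn")]`, which you can then reconcile against your data flow map to find the exceptions the interviews missed.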