Protecting Data in a Changing Information Security Landscape

Written by Mark Hillyard

Information Security has never been particularly easy. Beyond the technical constraints, business requirements, and general lack of understanding, Security Officers have faced constant and often radical change in the direction of IT itself. In the last five years, however, the pace of change has accelerated to a near-untenable speed. Agile methodologies, which gave way to DevOps, mean that change is fast becoming the only constant in our industry. Add to this the escalating scale and sophistication of attacks by well-funded, highly skilled hackers, and we have reached a tipping point. Can Information Security keep up with demands from our internal organization while effectively protecting our data and infrastructure from an ever-expanding malicious horde at our gates?

It does feel overwhelming at times, considering all that the enterprise expects of its security team. But it is important to start where we are: understand the vision of the business and plan appropriately. We can take advantage of the new ideas and frameworks coming into vogue, embracing those that provide the best possible protection while preserving enough functionality within the infrastructure.

Digital Transformation

Digital transformation has gained a tremendous amount of traction in the last few years, though the idea is not particularly new. Mega-companies like Amazon have shown how successful agility of vision and operational direction can be, while other companies have fallen by the wayside because they failed to see the value in constantly reinventing themselves in a digital landscape, choosing instead to stick to core values and resist the tide. The idea of continual course correction flies in the face of traditional business logic. So what does this brave new world mean for information security? Most immediately, it means increased complexity.

When Amazon was simply an online retailer (of books, mostly), its infrastructure was comparatively simple. The company had a sizable web presence, which required traditional infrastructure security around web applications, data sources, logistics, and, of course, financial and customer information. Then there was the physical infrastructure outside the data center: distribution centers, warehousing, transportation logistics. These were, and remain, well-understood constructs with time-tested security frameworks to keep companies safe. But book sales weren't enough. Failing brick-and-mortar booksellers (remember Borders?) proved that a single value chain was insufficient for success in modern business, especially for younger companies looking to disrupt traditional retail norms. So the successful businesses expanded and shifted to meet new, and sometimes untested, markets.

Embracing cloud technology meant Amazon's security footprint was about to expand immensely, and with that expansion came new challenges. How does a company secure not only its own infrastructure but also support the security of thousands of third-party companies? The phenomenon has since spread across the industry, as other, more traditional organizations have recognized that this is the only way to stay relevant. The burden on information security has grown, especially where formerly transactional companies, like Microsoft, have shifted into service-oriented businesses. That means protecting not only one's own data but also the data of countless customers, many of whom have no internal data security of their own.

As security professionals, we have had to secure infrastructures, applications, buildings, even mobile devices in recent years, but the cloud presents an entirely new set of challenges. At first blush, the cloud is a wonderful place where infrastructure security is no longer required; it is included as part of the service. In practice, however, it introduces some very complex risks as well.

The biggest challenge is the shared responsibility for security and compliance. AWS provides a handy infographic and model to define this concept, and every cloud provider has a similar ‘Shared Responsibility Model’. Regardless of how this model is laid out, it is clear that cloud providers are not bearing responsibility for everything. In fact, most of the critical portions of security remain the responsibility of internal security teams. A quick glance at AWS’s model shows us that Amazon has no intention of taking responsibility for customer data, likely the most desirable commodity to threat actors everywhere. So, although our responsibility for physical infrastructure has been transferred, application security, identity management, and customer/user privacy remain squarely in the realm of internal security.
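In practice, that means routinely verifying the controls that remain ours. As a minimal sketch (in Python, using the boto3 AWS SDK), an internal team might periodically audit its S3 buckets for public exposure and default encryption. The checks below are illustrative, not a complete audit:

```python
import boto3
from botocore.exceptions import ClientError

# Illustrative audit of the customer-side half of the shared
# responsibility model: the provider runs the storage service,
# but configuring encryption and blocking public access is on us.
s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]

    # Buckets with no default encryption configuration raise an error.
    try:
        s3.get_bucket_encryption(Bucket=name)
    except ClientError:
        print(f"{name}: no default encryption configured")

    # Likewise for buckets with no public access block at all.
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(cfg.values()):
            print(f"{name}: public access not fully blocked")
    except ClientError:
        print(f"{name}: no public access block configured")
```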

DevOps

Everyone has heard about it. Many organizations are embracing its promise of continual deployment and ridiculously fast release schedules. But can a truly continuous deployment model leave room for meaningful security testing? The answer is not a simple one. Many evangelists believe that if information security is brought into the stream of automation and continuous integration, any release can be assured of meeting most, if not all, security standards. But if we consider the very real possibility that threat actors are adopting similar models of development, we may be witnessing the beginning of a corporate arms race.
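Bringing security into that stream usually means automated gates in the pipeline itself. Below is a hedged sketch in Python of a pre-deployment step that runs a static analyzer and a dependency audit and fails the build on findings. The specific tools named (bandit, pip-audit) are stand-ins; any scanner that signals findings through its exit code fits the same pattern:

```python
import subprocess
import sys

# Hypothetical pre-deployment security gate. Tool choices are
# illustrative, not prescriptive.
CHECKS = [
    ["bandit", "-r", "src/", "-ll"],                      # static analysis, medium+ severity
    ["pip-audit", "--requirement", "requirements.txt"],   # known-vulnerable dependencies
]

def security_gate() -> int:
    """Run each check; stop the pipeline on the first failure."""
    for cmd in CHECKS:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"Security gate failed: {' '.join(cmd)}", file=sys.stderr)
            return result.returncode
    print("Security gate passed.")
    return 0

if __name__ == "__main__":
    sys.exit(security_gate())
```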

Consider that in the two weeks prior to publishing this article, two major cloud-based development platforms, GitHub and Docker, suffered significant breaches. In the case of GitHub, skimming code was discovered that is designed to attack e-commerce sites deployed from GitHub repositories. As of April 29, over 200 sites had already been injected with the code. Docker's breach was a traditional smash-and-grab, exposing nearly 200,000 user accounts. These numbers may seem significant only if you are one of the compromised users or site owners, but the reality is that this is a harbinger of a new wave of attacks. Rather than going directly after organizations, hackers are targeting code before it is ever released, rendering internal security essentially useless. Hackers know that infrastructure security has become demonstrably stronger in the wake of countless data breaches and ransomware attacks in the last few years, so the targets are now service providers, who may be more vulnerable by design, as they must allow their own customers to access data remotely over the web.

Time was, our development and code repositories were nestled safely behind our own firewalls, far from the prying eyes and scripts of threat actors. Infrastructure security made sense. We didn't have multiple cloud service providers with varying degrees of 'shared responsibility' written into our contracts, and we could control data with greater certainty.

Yet another recently exploited vector was the continuous integration application Jenkins. In mid-April it was discovered that known security flaws in Jenkins allowed attackers to hijack user credentials (SSH keys) belonging to Matrix, a network for decentralized, real-time communication over IP. There were two known vulnerabilities in the version the company was running. Once again, the need for rapid deployment was ground zero for a successful attack. The company claims there is no evidence that large quantities of data were downloaded; however, the attacker(s) had access to production databases, which included "unencrypted content (including private messages, password hashes and access tokens) ..." (Infosecurity Magazine).

Security Configuration Management

The National Institute of Standards and Technology (NIST) defines Security Configuration Management (SCM) as "[t]he management and control of configurations for an information system with the goal of enabling security and managing risk."

Traditionally, this concept related to hardware baselines and file system changes, and it still does. There is no question that maintaining control of our existing systems is crucial to successful security management. But in today's world, configurations exist everywhere, and we must rise to meet the challenge in new and innovative ways if we are to keep up. Infrastructure as many of us have known it for years has changed so significantly that it is difficult to define the term at all. It has different meanings to different organizations, and sometimes even within a single company. So how do we, in this brave new world, provide the level of security necessary to meet privacy laws, information security regulations, and internal business requirements?

The basic tenet of SCM is to maintain secure baseline configurations that can be used to monitor our environment for any changes that do not meet our security guidelines, and this extends into the world of DevOps and cloud computing. We must expand our vision and look at our environment holistically. There are four main steps to establishing a successful SCM practice: Plan, Implement, Control, and Monitor. We need not abandon the guidance of NIST SP 800-128; our traditional systems and infrastructure should still be secured. However, the advent of cloud computing and DevOps necessitates further enhancement to this model. For example, we may once have kept 'gold' copies of software and enterprise applications in a repository, meant as the secure baselines for our systems. With continuous deployment, such a model is wildly impractical: the software is the repository, and it is constantly being updated, tested, and deployed to production, sometimes with nearly no human interaction. Similarly, we once had countless 'hardened' OS images ready to be pushed to newly built servers in our data center. Now there is no data center, and our infrastructure cloud provider images each new virtual machine on our behalf. Where once we could require specific hardware specifications to ensure secure configurations, we must now insist on such specifications in our supplier contracts.
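For the systems we still control directly, the heart of baseline monitoring remains simple. The Python sketch below (paths and layout are illustrative) snapshots file hashes and reports drift from a stored baseline, the same pattern commercial file-integrity monitoring tools build on:

```python
import hashlib
import json
from pathlib import Path

def snapshot(root: str) -> dict[str, str]:
    """Map each file under root to the SHA-256 digest of its contents."""
    digests = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digests[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    return digests

def drift(baseline_file: str, root: str) -> dict[str, list[str]]:
    """Report files added, removed, or modified since the saved baseline."""
    baseline = json.loads(Path(baseline_file).read_text())
    current = snapshot(root)
    return {
        "added": sorted(set(current) - set(baseline)),
        "removed": sorted(set(baseline) - set(current)),
        "modified": sorted(p for p in baseline
                           if p in current and baseline[p] != current[p]),
    }

# Example usage (file names are hypothetical):
#   Path("baseline.json").write_text(json.dumps(snapshot("/etc")))
#   ...later...
#   print(drift("baseline.json", "/etc"))
```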

What follows are some suggestions for each stage of the SCM lifecycle as it relates to modern cloud computing and DevOps environments. These are by no means exhaustive; each environment is different. But experience has taught us that there are some key ingredients to establishing SCM under today's rather squishy definition of 'infrastructure'.

Plan

The first order of business is to define the external services our organization leverages. If we can begin to tear down the silos between the myriad services the organization employs, it becomes possible to identify best practices that deliver the right level of protection. It may seem like a simple concept, but knowing what services are in use, especially those beyond the reach of traditional infrastructure controls, is a critical step. Even within mid-sized companies, it is likely that some departments have moved services to the cloud without seeking the counsel of the security officer. And even when they have, it may not be immediately clear how sensitive the data stored with the service provider is. Transparency in our architecture must be paramount, and we must centralize control of any and all security measures in place, internally and externally.
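Even a rudimentary, centrally owned register of external services, each tagged with an owner and a data-sensitivity level, makes this planning work concrete. A minimal sketch in Python follows; the fields and example entries are illustrative, and a real inventory belongs in a CMDB or asset-management system rather than in code:

```python
from dataclasses import dataclass
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    REGULATED = 4   # e.g., PII, PHI, cardholder data

@dataclass
class ExternalService:
    name: str                    # hypothetical service identifier
    owner: str                   # accountable department or person
    data_sensitivity: Sensitivity
    security_reviewed: bool      # has the security officer assessed it?

inventory = [
    ExternalService("example-crm.cloud", "Sales", Sensitivity.REGULATED, False),
    ExternalService("team-wiki.example", "Engineering", Sensitivity.INTERNAL, True),
]

# Unreviewed services holding sensitive data are the first planning priority.
for svc in inventory:
    if not svc.security_reviewed and svc.data_sensitivity.value >= Sensitivity.CONFIDENTIAL.value:
        print(f"Needs review: {svc.name} (owner: {svc.owner})")
```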

One great way to better understand how secure your cloud provider may be is to request a SOC 2 (Type II) report. This can provide invaluable information about the operational controls in place to protect the service provider's infrastructure, and it gives us a solid baseline from which to augment those controls with our own. If a provider cannot produce such a report, it would be wise to reconsider doing business with them. While such a report is no guarantee, the lack of one should raise a serious red flag.

Implement

Once a better understanding of the services and controls currently in place has been achieved, we can begin to consolidate redundant systems, perform gap analyses for both our internal and external resources, and integrate those controls into a cohesive, centralized system. Advanced attacks take advantage of siloed security systems; without centralized understanding and control, those threats will be realized. Keep in mind that while reducing the number of vendors and tools may simplify the centralization of control, many tools excel in certain areas but are effectively useless in others. A solid mix of tools from different vendors, along with internal controls, will provide the most effective means of staving off attacks.
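At its core, the gap analysis is a set difference between the controls a service should have and the controls it actually has. A toy illustration in Python, with made-up control names and services:

```python
# Controls every service is required to have (illustrative list).
REQUIRED = {"waf", "iam", "encryption_at_rest", "logging", "file_baseline"}

# What the inventory says is actually deployed per service (hypothetical).
deployed = {
    "billing-api": {"waf", "iam", "logging"},
    "customer-portal": {"waf", "iam", "encryption_at_rest", "logging", "file_baseline"},
}

for service, controls in deployed.items():
    gaps = REQUIRED - controls
    if gaps:
        print(f"{service}: missing {', '.join(sorted(gaps))}")
```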

For cloud-based services, software firewalls, web application firewalls (WAFs), HIDS/HIPS, file system baselines, and strictly enforced identity and access management (IAM) are some key components. And, as always, data encryption is absolutely necessary, both in the cloud and internally. Encryption of data in transit is generally straightforward, utilizing TLS. Some regulations require encryption at rest as well, which may take more effort in the cloud, especially where databases are provided as a service. We must ensure service providers can meet this requirement.
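Where the provider exposes an API, that requirement can be verified rather than assumed. A hedged sketch in Python using boto3 against AWS RDS (the region is illustrative, and result pagination is omitted for brevity):

```python
import boto3

# Verify at-rest encryption for database-as-a-service instances.
rds = boto3.client("rds", region_name="us-east-1")

for db in rds.describe_db_instances()["DBInstances"]:
    # StorageEncrypted is False for instances created without encryption.
    if not db.get("StorageEncrypted", False):
        print(f"Unencrypted storage: {db['DBInstanceIdentifier']}")
```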

Control

This phase is the time to design built-in compliance controls for all development. If we are employing DevOps, we need to expand the concept to include security auditing as part of the SDLC. Because we can no longer maintain static copies of our current, approved software (continuous integration pretty much negates this possibility), we need a way to properly vet changes without stunting performance gains.
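One way to vet changes without a static 'gold' copy is policy as code: codify the rules and evaluate every change against them automatically. A minimal sketch in Python, assuming a Kubernetes-style pod manifest and the PyYAML library; both the manifest format and the rules here are illustrative:

```python
import sys
import yaml  # PyYAML

def violations(manifest: dict) -> list[str]:
    """Check each container in a pod manifest against codified policy."""
    problems = []
    for container in manifest.get("spec", {}).get("containers", []):
        name = container.get("name", "<unnamed>")
        if container.get("securityContext", {}).get("privileged"):
            problems.append(f"{name}: privileged containers are forbidden")
        if container.get("image", "").endswith(":latest"):
            problems.append(f"{name}: pin image versions; ':latest' is not auditable")
    return problems

if __name__ == "__main__":
    # Usage: python check_manifest.py pod.yaml (file name is hypothetical)
    found = violations(yaml.safe_load(open(sys.argv[1])))
    for problem in found:
        print(problem, file=sys.stderr)
    sys.exit(1 if found else 0)
```

Run as a required pipeline step, a check like this gives every deployment the same scrutiny a manual review of a gold image once provided.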

It is important that we do not undertake this step in a vacuum. Operational and development staff can be extremely valuable in determining the necessary controls, and they can offer creative solutions where we, as security professionals, may not see the larger picture. Additionally, including those most affected by the effort will go a long way toward breaking down the traditional barriers to collaborative information security solutions. Our development teams can help build automated controls that keep the need to protect data from significantly slowing the deployment cycle, while still ensuring adherence to policy and regulation.

Monitor

Now that we have cleaned house and organized ourselves, we must be able to recognize the relationships between the controls in place and the architecture that they support. Transparency begins with knowing what we have, but the true goal is to recognize how the pieces fit together, so that we can better adapt to new requirements, regulations, and most importantly, evolving attack complexity. As with any improvement effort, analytics provide key data that we can employ to provide a truly secure environment.

The metrics gathered for review are as varied as the environments we support. Volumes have been written on determining critical success factors and supporting KPIs; suffice it to say that each organization must take its unique footprint into consideration when building the monitors and metrics it will use.

Tuning monitors is one of the most difficult operational tasks we undertake. If we set thresholds too low, we risk bombarding response teams with false positives; if we set them too high, our monitors will likely miss attempted breaches. This can be more art than science. If we have not been employing SIEM systems or other monitoring tools effectively, it will take some time for this phase to settle in and become truly meaningful. But do not get discouraged. Communicate openly with response teams and stakeholders regularly, and do not be afraid to adjust monitoring systems to meet the goals of the business.
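A common starting point is a statistical baseline: alert when activity exceeds its historical mean by some number of standard deviations, then adjust that multiplier as the false-positive rate becomes clear. A minimal sketch in Python with made-up numbers:

```python
from statistics import mean, stdev

# Sample history of hourly failed-login counts (illustrative data).
hourly_failed_logins = [12, 9, 15, 11, 8, 14, 10, 13, 9, 11]

def threshold(history: list[int], k: float = 3.0) -> float:
    """Alert threshold: historical mean plus k standard deviations.

    k is the tuning knob; raise it to cut false positives,
    lower it to catch subtler anomalies.
    """
    return mean(history) + k * stdev(history)

limit = threshold(hourly_failed_logins)
current = 41  # hypothetical count from the latest hour
if current > limit:
    print(f"ALERT: {current} failed logins exceeds threshold {limit:.1f}")
```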

Conclusion

The world of IT continues to evolve with ever-increasing speed, and with that evolution comes a great deal of responsibility for information security professionals. The most recent leap forward has created complexity that many of us have never experienced. Traditional models of infrastructure security are challenged almost daily by our growing appetite for continuous integration and deployment and for cloud services, and by a proportionately rapid rise in threats, both in number and in variation. Our assets must be protected no matter where they are housed or how they are utilized, and we must rise equally to these new challenges. Security Configuration Management will continue to evolve as the needs of our organizations shift with time and technology, and we cannot stand still.


Originally published May 05 2019, updated January 01 2024
[class^="wpforms-"]
[class^="wpforms-"]
[class^="wpforms-"]
[class^="wpforms-"]
[class^="wpforms-"]
[class^="wpforms-"]
[class^="wpforms-"]
[class^="wpforms-"]
[class^="wpforms-"]
[class^="wpforms-"]