Shared Responsibility Model Automation: Automating Your Share
In Part 1 of our Shared Responsibility blog series, we provided a detailed overview to help you understand security in a public, hybrid, or multi-cloud environment. We broke down the infrastructure stack, explained the responsibilities taken by the cloud service provider, and where you retain ownership over security. We also discussed how the shared responsibility model affects members of your team and changes the way you think about security as you move your workloads to the cloud. In this installment, we’ll dive deeper into shared responsibility model automation and the important role cloud security tools play in securing your complex, modern infrastructure at scale.
Meeting the Demands of Shared Responsibility Model Automation
Let’s quickly revisit the shared responsibility model chart from Part 1. The sum total of your security ownership across your connected cloud environments is determined by your provider contracts and the services you’ve chosen to use. Your first step is to define a strategy and choose tools that can handle the unique security requirements of each of your server-based and serverless instances, along with securing your on-premises bare-metal servers and virtual environments.
Figure 1. Division of duties in a shared responsibility security model
Regardless of where your contract with your provider draws the line, your security posture in a shared responsibility model depends on your ability to standardize and maintain security orchestration, automation, and response across your entire infrastructure, including:
- Asset discovery, interrogation, and inventory monitoring
- Continuous inventory updates
- Vulnerability and exposure management, including network and privileged access configuration
- Integrity and drift monitoring
- Indication of compromise, threat detection, and security event management
- Network security configuration and management
- Compliance management and continuous compliance monitoring
Eight Key Attributes for Shared Responsibility Model Automation
Effective cloud management unifies your security responsibilities on a single platform and provides shared responsibility model automation controls and compliance across all of your servers, containers, IaaS, and PaaS in any public, private, hybrid, and multi-cloud environment. Your security solution should encompass the following eight key attributes in order to provide complete, effective, and efficient security:
1. Unified: Traditional security tools often fail to meet the varied and unique needs of a complex, shared-responsibility cloud environment. Without a unified security solution, you end up tying together several different tools, which can lead to operational complexity, unnecessary redundancy, and potential gaps in coverage. A security platform built specifically for the cloud gives you a comprehensive set of configurable tools and the flexibility you need to close your gaps, improve your security posture, and adapt as your infrastructure grows and changes.
2. Automated: As your environment grows in size and complexity, it becomes increasingly difficult to keep track of all the various, moving parts. Shared responsibility model automation provides dependable speed and consistency, and frees up staff time to focus on strategic goals rather than repetitive tasks. Your security automation platform should automate asset discovery and monitoring, and should automatically deploy sensors when a new service, environment, or application is created. You’ll also need integration with your DevOps tools to automatically fail builds when new vulnerabilities might be introduced, assign new issues automatically, and monitor the development pipeline for remediation. With comprehensive, shared responsibility model automation in place, you can centralize and simplify your security integration and operations across systems and solutions that have different security concerns. An automated security platform also enables security to shift left into the development process, and empowers the adoption of a true DevSecOps culture.
3. Portable: With the rate of change we experience in technology, it’s no longer an option to say “no” to a better solution when it comes along. Everything about your application infrastructure, from the code you write to the containers you configure to security, needs to be portable. When moving a workload or application between clouds, your share of the shared responsibilities may change. Your security solution needs to work seamlessly across any public, private, hybrid, and multi-cloud environment while requiring as few changes as possible during lift-and-shift operations.
4. Comprehensive: Your share of the shared responsibility model covers a wide range of requirements, including asset discovery, inventory, assessment, remediation, threat detection, microsegmentation, traffic discovery, and continuous compliance. If you have separate tools for each of those security domains, you’re setting yourself up for operational headaches and, worse, the very real potential for introducing blind spots and gaps. A comprehensive security tool not only covers each of these requirements, but automates as much of the security management as possible to alleviate your operational burdens.
5. Fast: Everything about the cloud boils down to speed. CI/CD pipelines delivering microservices and features to the cloud in real time increase the demand for fast, integrated security. Your solution cannot slow that process or get in the way of your development team’s ability to deliver. Instead, your solution should provide high-speed deployment, telemetry, and analytics that keep up with the speed of DevOps.
6. Integrated: The problem with legacy security solutions is that they tend to “bolt on” to cloud environments, rather than working seamlessly within your instances, applications, and workloads. These non-cloud-based solutions increase manual tasks and complicate monitoring. A built-in security solution that integrates directly with cloud infrastructures ensures consistency and compliance with no extra effort. Your security platform should also be built into your application stack, rather than added on after the fact. An API-based, embedded security foundation, integrated as part of your DevOps processes and workflows, allows you to scale your security implementation up and out as needed, in parallel with your growth, without becoming a bottleneck in the CI/CD pipeline.
7. Scalable: While nothing is truly infinite, cloud resources are about as close as you can get. Unlike a bare-metal data center, when you run up against the limits of your current cloud infrastructure, you simply ask for more, and it’s there. That means your security solution must scale automatically and instantaneously to keep up with fast-breaking, dynamic cloud changes. But you don’t always scale up. Cloud resources also provide a valuable opportunity to use resources as needed, and then release them when demand drops. This elastic scalability should be mirrored in your security platform so that you only use what you need in real time.
8. Cost-effective: Cloud architectures offer right-sized, pay-as-you go and usage-based pricing, which means you can control your costs while maximizing the value of your investment. Your cloud security solution should follow the same model. Security solutions that are built specifically for the cloud should provide pricing options that mirror those offered by the cloud provider, and that scale with the resources you use.
CloudPassage Halo is Shared Responsibility Model Automation That’s Built for the Cloud
Continuous monitoring and visibility across your cloud environments and into your data center are critical for maintaining accountability for your defined and accepted portions of the shared responsibility model. CloudPassage Halo provides a broad range of security controls that simplify shared responsibility model automation, with seamless integration between your DevOps pipeline and AWS and Azure cloud services.
Figure 2. CloudPassage Halo is shared responsibility model automation across any mix of environments
Halo provides seamless shared responsibility model automation across every environment
The ability to consolidate security onto a single platform simplifies operational processes and provides a foundation for automation. Halo provides cloud computing security and compliance in any public, private, hybrid, or multi-cloud environment. Rather than managing separate tools, Halo gives you continuous monitoring, automatic indication of compromise in the cloud, visualization of network traffic, and automated compliance management across IaaS services, virtual and bare-metal servers, containers, and Kubernetes environments.
Halo addresses the diversity and flexibility of hybrid cloud
With Halo, you can implement security controls across all your cloud applications, environments, and containers quickly and efficiently. With microagents and registry connectors that monitor and evaluate server and container infrastructure across your cloud environments and in the data center, Halo is a single, unified platform that centralizes security management while decreasing complexity. Through bi-directional REST APIs, Halo allows you to automate security settings and policies. You can also export, import, and manage security policies using version control systems, and automatically enforce security policies throughout your CI/CD workflows based on pre-established rules and standards. These controls allow you to catch and remediate potential vulnerabilities before they become security gaps.
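As a rough sketch of what policy-as-code management through a bi-directional REST API can look like, the Python example below pulls a policy document and writes it to a file that can be committed to version control. The base URL, authentication scheme, and policy identifier are illustrative placeholders, not actual Halo API details.

```python
import json
import requests

# Placeholder endpoint and credentials for illustration only; consult the
# vendor's API documentation for real routes and authentication.
API_BASE = "https://api.example-halo-host.com/v1"      # assumed base URL
HEADERS = {"Authorization": "Bearer <access-token>"}   # assumed bearer auth

def export_policy(policy_id: str, path: str) -> None:
    """Fetch a security policy and write it to disk so it can be committed
    to version control alongside application and infrastructure code."""
    resp = requests.get(f"{API_BASE}/policies/{policy_id}", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    with open(path, "w") as fh:
        json.dump(resp.json(), fh, indent=2, sort_keys=True)

if __name__ == "__main__":
    export_policy("core-os-hardening", "policies/core-os-hardening.json")
```

Once policies live in a repository, ordinary pull-request review and version history apply to security configuration just as they do to application code.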
Halo delivers effortless, automatic security scalability
Halo’s distributed architecture offloads processing and automates security configuration controls, which means you can ensure compliance without impacting performance. By design, Halo maintains security coverage through infrastructure and application scaling. Once you’ve defined your security policies, you can also introduce new assets quickly, without additional configuration. And Halo’s on-demand licensing model means your security implementation always matches your application footprint.
Halo automates security policies across DevOps processes and workflows
Shared responsibility model automation gives your DevOps teams a secure path to self-service environment creation and rapid deployment. Whether your DevOps teams are spinning up new servers or your automated workflows are running builds, tests, and deployments, Halo’s developer SDK and toolkit, plugins for Jenkins, and automatic ingest of IaaS metadata help enforce security coverage through automation across your code repositories, build processes, and DevOps toolkit. At each step in the CI/CD pipeline, Halo assesses changes and provides feedback through automated alerts so you can address potential vulnerabilities and misconfigurations before they become production security events.
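The gating pattern this enables is straightforward: query the scan results for the artifact a pipeline run just produced and fail the build if critical findings are active. The sketch below shows the idea in Python; the endpoint, query parameters, and response shape are assumptions for illustration, not the actual Halo or Jenkins plugin interfaces.

```python
import sys
import requests

# Illustrative placeholders; substitute your platform's real API and auth.
API_BASE = "https://api.example-halo-host.com/v1"
HEADERS = {"Authorization": "Bearer <access-token>"}

def critical_findings(artifact: str) -> list:
    """Return active critical findings recorded for a build artifact."""
    resp = requests.get(
        f"{API_BASE}/scans",
        params={"artifact": artifact, "severity": "critical", "state": "active"},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("findings", [])

if __name__ == "__main__":
    findings = critical_findings(sys.argv[1])  # e.g. "registry/app:build-1234"
    for finding in findings:
        print(f"CRITICAL: {finding.get('id')} - {finding.get('summary')}")
    # A non-zero exit code makes Jenkins (or any CI tool) mark the build as failed.
    sys.exit(1 if findings else 0)
```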
Halo accelerates the path to compliance
Regulatory compliance is a never-ending challenge, requiring a team of knowledgeable professionals who stay up to date with industry changes and how those changes affect your particular company. Shared responsibility model automation is key to helping your compliance team maintain control over your growing cloud environment. Halo provides over 20,000 pre-configured rules and more than 150 policy templates covering standards including PCI, CIS, HIPAA, SOC, and more, and provides automated remediation when deviations are detected. And with a single, easy-to-navigate dashboard and fully customizable reporting, complete with automated notifications, your team can keep a pulse on compliance across your dynamic cloud environment in real time. With Halo, you’ll break free from ad-hoc emails and meetings for vulnerability communications, and you’ll skip the fire drills before audits. Instead, with continuous monitoring and shared responsibility model automation, you’ll know your state of compliance in real time and will be ready when audit time comes.
Halo: The Only Unified Solution for Shared Responsibility Model Automation
Halo is the only battle-tested, unified solution built for addressing the needs of shared responsibility for AWS and Azure. With an emphasis on thorough, effective automation, Halo is especially valuable when it comes to defining, managing, and monitoring security across your multi-cloud environment. Automation reduces operational complexity and the potential for human error while reserving the time, resources, and energy of your team for ongoing product development efforts.
Ready to experience Halo for yourself? Claim your full-access 15-day free trial. If you need more help or have specific questions or testing objectives, don’t hesitate to contact us to speak with a cloud security expert.
Shared Responsibility Model Explained
Cloud service providers adhere to a shared security responsibility model, which means your security team retains responsibility for some aspects of security as you move applications, data, containers, and workloads to the cloud, while the provider takes on some of the burden, but not all of it. Defining the line between your responsibilities and those of your providers is imperative for reducing the risk of introducing vulnerabilities into your public, hybrid, and multi-cloud environments.
Shared Responsibility Varies by Provider and Service Type
In a traditional data center model, you are responsible for security across your entire operating environment, including your applications, physical servers, user controls, and even physical building security. In a cloud environment, your provider offers valuable relief to your teams by taking on a share of many operational burdens, including security. In this shared responsibility model, security ownership must be clearly defined, with each party maintaining complete control over those assets, processes, and functions they own. By working together with your cloud provider and sharing portions of the security responsibilities, you can maintain a secure environment with less operational overhead.
Defining the lines in a shared responsibility model
The key to a successful security implementation in a cloud environment is understanding where your provider’s responsibility ends, and where yours begins. The answer isn’t always clear-cut, and definitions of the shared responsibility security model can vary between service providers and can change based on whether you are using infrastructure-as-a-service (IaaS) or platform-as-a-service (PaaS):
- In the AWS Shared Security model, AWS claims responsibility for “protecting the hardware, software, networking, and facilities that run AWS Cloud services.”
- Microsoft Azure claims security ownership of “physical hosts, networks, and data centers.” Both AWS and Azure state that your retained security responsibilities depend upon which services you select.
While the wording is similar, shared responsibility agreements leave much open for discussion and interpretation. But there are always some aspects of security that are clearly owned by the provider and others that you will always retain. For the services, applications, and controls between those ownership layers, security responsibilities vary by cloud provider and service type. In a multi-cloud environment, these variations in ownership introduce complexity and risk. Each environment, application, and service requires a unique approach for security assessment and monitoring. However, your overall security posture is defined by your weakest link. If you have a gap in coverage in any one system, you increase vulnerability across the entire stack and out to any connected systems.
A vendor-agnostic look at shared responsibility
The following diagram provides a high-level, vendor-agnostic view of a shared responsibility model based on concepts, rather than service level agreements. When entering into a discussion with a cloud provider, security needs to be included upfront in the decision-making process regarding shared responsibilities. You can use this guide to inform your discussion and to understand your roles and responsibilities in securing your cloud implementation.
Figure: A vendor-agnostic view of the division of responsibilities in the shared responsibility model
Your Share of Cloud Security Responsibilities
Whether in the data center, or using a server-based IaaS instance, serverless system, or a PaaS cloud service, you are always responsible for securing what’s under your direct control, including:
- Information and Data: By retaining control over information and data, you maintain how and when your data is used. Your provider has zero visibility into your data, and all data access is yours to control by design.
- Application Logic and Code: Regardless of how you choose to spin up cloud resources, your proprietary applications are yours to secure and control throughout the entire application lifecycle. This includes securing your code repositories from malicious misuse or intrusion, application build testing throughout the development and integration process, ensuring secure production access, and maintaining security of any connected systems.
- Identity and Access: You are responsible for all facets of your identity and access management (IAM), including authentication and authorization mechanisms, single sign-on (SSO), multi-factor authentication (MFA), access keys, certificates, user creation processes, and password management.
- Platform and Resource Configuration: When you spin up cloud environments, you control the operating environment. How you maintain control over those environments varies based on whether your instances are server-based or serverless. A server-based instance requires more hands-on control over security, including OS and application hardening, maintaining OS and application patches, and so on. In essence, your server-based instances in the cloud behave similarly to your physical servers and function as an extension of your data center. For serverless resources, your provider’s control plane gives you access to the setup of your configuration, and you are responsible for knowing how to configure your instance in a secure manner.
Additionally, you maintain responsibility for securing everything in your organization that connects with the cloud, including your on-premises infrastructure stack and user devices, owned networks, and applications, and the communication layers that connect your users, both internal and external, to the cloud and to each other. You’ll also need to set up your own monitoring and alerting for security threats, incidents, and responses for those domains that remain under your control. These responsibilities are yours whether you are running on AWS, Azure, or any other public cloud provider’s systems.
Understanding the Gray Areas of the Shared Responsibility Model
Based on whether you are running an IaaS or PaaS implementation, you may retain additional security responsibilities, or your provider may take some of that burden off your team. The line between your responsibility and those of your cloud vendor is dependent upon selected services and the terms of those services.
In the case of server-based instances, you often assume full responsibility for:
- Identity and Directory Infrastructure: Whether you’re using OS-level identity directories like Microsoft Active Directory or LDAP on Linux, or you opt for a third-party identity directory solution, the security configuration and monitoring of that system is yours to control in an IaaS cloud implementation.
- Applications: Server-based cloud environments, much like on-premises hosts, are a blank slate for installing and maintaining applications and workloads. You may run PaaS applications on your cloud servers, in which case you might be relieved of some of the security burden. However, any application or workload you move from your data center to a server-based instance in the cloud is solely your responsibility to secure.
- Network Controls: Your provider only maintains the network that’s directly under their control. All networking above the virtualization layer—whether physical or infrastructure-as-code—requires your security configuration and monitoring.
- Operating System: With server-based instances, you get to choose your OS and patch levels. While this allows you greater flexibility, it also means greater responsibility when it comes to security. You’ll need to keep up with current vulnerabilities, security patches, and environment hardening exercises to keep your server-based cloud resources secured.
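As a small illustration of the operating-system burden you carry on server-based instances, the Python sketch below lists packages with pending upgrades on a Debian or Ubuntu host. It is a rough heuristic only; a real patch-management workflow would track security advisories and feed results into centralized monitoring rather than a hard-coded script.

```python
import subprocess

def pending_upgrades() -> list:
    """Return package names with upgrades available on a Debian/Ubuntu host."""
    out = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Lines look like: "openssl/jammy-security 3.0.2-0ubuntu1.10 amd64 [upgradable from: ...]"
    return [line.split("/")[0] for line in out.splitlines() if "upgradable" in line]

if __name__ == "__main__":
    packages = pending_upgrades()
    print(f"{len(packages)} packages have pending upgrades")
    for name in packages:
        print(f"  - {name}")
```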
When you choose a serverless environment or PaaS solution, you do alleviate some of the security burden. Serverless solutions provide a control plane for configuration, and you are responsible for configuring that service in a secure manner. For example, in a serverless environment, you may have the opportunity to choose an operating system (typically Microsoft or Linux), but your provider maintains responsibility for OS patching and security management in that environment. Serverless environments typically provide some management of the physical implementation of your identity and directory infrastructure, applications, and network controls as well, but you are still responsible for properly configuring access management through the control plane.
Responsibilities Always Owned by Your Cloud Service Provider
While it may seem that you retain a significant share of security responsibilities, your provider does alleviate much of your burden. Cloud vendors maintain 100% control over the security of:
- The Virtualization Layer: By controlling the provisioning of physical resources through virtualization, providers ensure segmentation and isolation of CPU, GPU, storage and memory to protect your users, applications, and data. This layer of abstraction acts as both a gateway and a fence, allowing access to provisioned resources, and protecting against potential misuse or malicious intrusion, both from the user environments, down, and the physical layer, up.
- Physical Hosts, Network and Datacenter: Cloud vendors protect their hardware through a variety of both software and physical means. Large cloud providers like AWS and Azure protect their servers from physical intrusion and tampering through a variety of protocols, and they also ensure rapid failover and high availability with comprehensive, built-in backup, restore, and disaster recovery solutions.
The Shared Responsibility Model in Practice
When speaking of “shared responsibility,” it’s important to understand that you and your cloud provider never share responsibility for a single aspect of security operations. The areas of ownership you control are yours alone, and your provider does not dictate how you secure your systems. Likewise, you have no control over how the provider secures their portions of the application and infrastructure stack. You do, however, have the ability and right to access your cloud vendor’s audit reports to verify that their systems are secure and that they are adhering to your terms of service. Cloud providers publish these reports regularly and freely, and the most current reports are accessible at all times.
How the shared responsibility model impacts your developers
Cloud services offer convenient, automated environment provisioning, allowing developers and test groups to spin up servers through self-service processes. These environments, however beneficial for innovation, are often connected to your production assets and can pose significant security risks if not properly configured. While the cloud is inherently secure from the provider’s perspective, a secure cloud requires proper configuration and diligent access management. Gartner states that misconfiguration accounts for 99% of cloud security failures. For would-be hackers, cloud development and testing environments that are set up without enforcing proper security policies can become a gateway into your production systems or proprietary code storage. This means that identity and access management and environment configuration must be closely managed, sometimes at the expense of unfettered convenience. Centralized, automated access management and policy-driven environment creation are critical for the success of your cloud security implementation.
Securing the DevOps pipeline
Cloud applications, powered by an automated CI/CD pipeline and driven by a DevOps organization, accelerate the speed at which your business delivers new applications and features. Unfortunately, that also means your DevOps pipeline can inadvertently and rapidly introduce security vulnerabilities without proper consideration and management. In a shared responsibility model, you are responsible for securing your code and the tools you use to deliver applications to the cloud. The servers and serverless assets that make up your DevOps toolchain must be protected, including code repositories, Docker image registries, Jenkins orchestration tools, etc. Beyond securing your CI/CD pipeline, you can—and should—leverage CI/CD automation processes to shift security left, by integrating security into the code and making it part of the build. This idea of “shifting left” means automated testing against clearly defined security requirements, early and often in the development process, so that new vulnerabilities are caught and remediated before being merged into the larger code tree or introduced into a production service.
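A concrete, self-contained example of a shift-left check: compare a project’s pinned dependencies against a deny-list of known-vulnerable versions and block the merge if any match. The package names and advisory IDs below are made up for illustration; a real pipeline would pull advisories from a vulnerability feed or a dedicated scanner.

```python
import sys

# Toy deny-list for illustration only; real data would come from an advisory feed.
KNOWN_VULNERABLE = {
    ("examplelib", "1.0.0"): "EXAMPLE-2020-0001",
    ("otherlib", "2.3.1"): "EXAMPLE-2020-0002",
}

def check_requirements(path: str) -> list:
    """Return descriptions of pinned dependencies that match the deny-list."""
    problems = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue
            name, version = line.split("==", 1)
            advisory = KNOWN_VULNERABLE.get((name.lower(), version))
            if advisory:
                problems.append(f"{name}=={version} matches advisory {advisory}")
    return problems

if __name__ == "__main__":
    issues = check_requirements(sys.argv[1] if len(sys.argv) > 1 else "requirements.txt")
    for issue in issues:
        print(f"BLOCKED: {issue}")
    sys.exit(1 if issues else 0)
```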
Shared responsibility and configuration management
The speed and ease of configuring software-defined infrastructure opens your company up to new levels of agility and adaptability. However, the ability to reconfigure resources on the fly can also have instantaneous and broad-reaching consequences, and a single misconfiguration can quickly become a security vulnerability. Your operations team needs to work closely with security to maintain policy-based control over how and when your cloud resources are provisioned. Your security teams are also accountable for monitoring resource management in the cloud for potential vulnerabilities. Through scripting, automation, and carefully planned self-service workflows, your configuration management and security teams can work together to give your company controlled, secure access to the cloud resources it needs without becoming a bottleneck.
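As a minimal sketch of the kind of scripted configuration check a security team might run against a single provider, the example below uses boto3 to flag AWS security group rules that allow inbound traffic from anywhere (0.0.0.0/0). It assumes AWS credentials are already configured, covers only one control in one cloud, and is meant to show the pattern rather than replace continuous, policy-driven monitoring across environments.

```python
import boto3

def open_to_world(region: str = "us-east-1") -> list:
    """Flag security group rules that allow inbound traffic from 0.0.0.0/0."""
    ec2 = boto3.client("ec2", region_name=region)
    flagged = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for perm in sg.get("IpPermissions", []):
            for ip_range in perm.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    port = perm.get("FromPort", "all")
                    flagged.append((sg["GroupId"], port))
    return flagged

if __name__ == "__main__":
    for group_id, port in open_to_world():
        print(f"{group_id}: inbound open to the world on port {port}")
```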
Compliance management, threat management, and visibility into the cloud
Regardless of where your security responsibilities end and your cloud provider’s responsibilities start, compliance with your organizational standards and required regulatory boards is your company’s responsibility. Centralized security orchestration, automation, and response allows you to collect and analyze data across your entire infrastructure, including your on-premises systems, public, hybrid and multi-cloud environments, and out to your edge and endpoints. With the right security platform in place, your teams gain deep visibility that allows you to analyze and respond to threats and maintain compliance, often without human involvement.
Shared Responsibility Model Next Steps
Any time your cloud provider takes on a portion of security responsibility, it becomes one less concern for your organization. Clearly defined shared responsibilities allow you to focus your efforts on your application delivery strategy without overburdening your teams with day-to-day operational concerns in the physical layer. A security platform that unifies and automates security controls from the data center and across each cloud simplifies security management and minimizes risk. Centralized control and configuration of the provider control plane, hosting, and orchestration for containers, applications, and workloads further improves coverage of your environment from end to end.
Read Part 2 of this two-part series, “Shared Responsibility Model Automation: Automating your Share” to learn how you can gain better control of your public, hybrid and multi-cloud environments.
If you’re not subscribed to our blog, be sure to sign up now. And as always, if you’d like to discuss the needs of your specific environment, please don’t hesitate to contact us.
New Halo Subscription Options
CloudPassage is officially announcing new packaging and pricing options for its award-winning Halo cloud security platform, effective immediately.
These new options give more companies the ability to put Halo’s proven capabilities to work in a wider range of deployment scenarios with even better economics.
If you are an existing CloudPassage customer, contact your account manager or customer success representative to learn how you can take advantage of these opportunities.
This article covers our new subscription offerings, the drivers for the changes, and the economic advantages of the Halo pricing model. Here is a summary if you want to jump ahead to a specific section.
Two Halo Platform Editions
- Halo Essentials: New for small to mid-sized cloud deployments
- Halo Enterprise: Rate reduction for enterprises needing to reduce costs and consolidate on a single, unified cloud security platform
Flexible Subscription Options
- Flex Licensing: New for companies wanting volume discounts without having to precisely model future usage
- Pay-As-You-Go (PAYG): New for companies not able to make a contractual commitment to new vendors at this time
- AWS Marketplace Private Offers: New for those who prefer purchasing through the AWS Marketplace or who participate in the AWS Enterprise Agreement program
Special Onboarding Offers
- Free Initial Onboarding: For new Halo deployments through 2020
- At-Cost Professional Services: For advanced Halo integrations through 2020
Additional Economic Advantages of Halo
- Usage-based Licensing: So you only pay for what you use, whenever and wherever you use it
- Simple, Transparent, and Predictable: To stay within budget constraints
- No Hidden Costs or Extra Fees: To prevent unexpected budget overruns
Why We Are Making These Changes
As the first mover in cloud workload security, CloudPassage has maintained a unique perspective on how the market category has evolved. Public cloud computing adoption by mainstream and mid-size enterprises has been accelerating for some time. This trajectory has surged in the wake of the COVID-19 crisis and the resulting shift to remote-working models.
The economic impacts of COVID-19 will not be short-lived. We’ve barely begun to understand how our day-to-day activities will change and how a completely new way of living and working will impact how businesses operate. And the pandemic itself has not run its course. In this fog of uncertainty, there are two tenets that have already been recognized broadly:
- Cloud-centric technology strategies are critical for pandemic preparedness. Companies across the board are aggressively accelerating cloud adoption plans, both to mitigate the current crisis and prepare for the next one.
- Spending discipline is tighter than ever and will stay that way for the foreseeable future. Technology owners are under enormous pressure to reduce costs, negotiate more flexible purchasing terms, and consolidate products to maximize investment value.
The surge in cloud adoption has driven a surge in our conversations with enterprise InfoSec leaders who are struggling. Most relate stories about being crushed by accelerated cloud security plans on top of existing security requirements—all while budgets and staff are being cut and attacks are on the rise.
The demand to get more done, faster, with less imposes an extremely intense set of constraints. Security leaders and architects see the value of a unified, automated, rapidly deployable platform in times like this. But financial lock-down policies and personnel constraints leave them unable to move on a mission-critical solution.
We’ve heard you and understand your struggles. Here’s what we’re doing to help put enterprise-class cloud security automation in your hands.
Introducing Halo Essentials
We’ve heard chilling statements like this one quite a lot recently:
“My budget is extremely limited but I am terrified that our cloud environment has glaring holes… our business might not survive a compromise right now.”
There are a lot of reasons for this fear. Workloads are being shifted from datacenters to clouds rapidly to eliminate the need for physical plant and hardware management. Workloads already in the cloud are being moved between clouds to seek lower IaaS costs. Less expensive development and operations contractors are being onboarded quickly, which introduces risk. Product release schedules are being accelerated to drive more revenue. It’s a panicky environment. The opportunity for misconfigurations and other exposure is more significant in a “do more with less” environment unless you’re properly protected.
Pricing
Halo Essentials is a new package that automates cloud workload risk awareness with a price point that is friendly to extreme budget constraints. It’s made to support teams with limited budgets who need to get the basics done quickly and easily in small to midsize cloud environments. Here are the numbers:
- Security for AWS and Azure accounts* starts at $150 monthly per account on a prepaid basis, including unlimited account services and resources
- Security for Linux and Windows servers starts at 3¢ per server-hour on a prepaid basis, regardless of server size or hosting environment (cloud instances, traditional VMs, bare-metal hosts)
- Security for Docker hosts starts at 10¢ per host-hour on a prepaid basis, including securing the host itself, unlimited container launch monitoring, and automatic scanning of instantiated images (supports generic Docker hosts, Kubernetes nodes, ECS instances, etc)
- Volume and term discounts of up to 65% are available based on subscription volume and term commitment
* Halo CSPM support for GCP is in progress and will be released in Q3 2020
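To put the listed prepaid rates in rough perspective: a single Linux or Windows server running around the clock for a 30-day month (720 hours) comes to about $21.60 at the 3¢ per server-hour rate, and a Docker host to about $72.00 at 10¢ per host-hour, before any volume or term discounts are applied.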
What is Included
Security and compliance posture assessment and monitoring for IaaS, server-based, and containerized environments in any mix of public, private, hybrid, or multi-cloud environments. Explicit remediation advice and ongoing security posture monitoring are included, and there are no hidden fees or extras to install or manage.
Learn more about Halo Essentials
Reduced Rates for Halo Enterprise
Another discussion that’s coming up often starts out with something like this:
“I am being told that I have to consolidate products and I’m getting heavy pressure to go with the cheapest thing out there. I’m already dependent on automation and need more, but the cheap products don’t have strong automation…. that’s going to backfire big-time since I don’t have the personnel to fill the automation gap.”
InfoSec teams are being asked to make security tooling cheaper and easier. DevOps teams are under pressure too, and they want less to deal with—fewer agents, fewer static reports, a smaller toolset. Large-scale environments still need enterprise-class automation, scale, and flexibility but they’re under financial pressure as well.
Pricing
We’ve reduced rates on Halo Enterprise to help large organizations get more and better cloud security automation in a single platform with fewer moving parts. Halo Enterprise has not been downscaled—it’s the same platform that top cloud enterprises have depended on for years. We’ve just found technical and operational efficiencies that enable us to offer better economics to our customers. Here are the numbers:
- Security for AWS and Azure accounts* now starts at $180 monthly per account, including unlimited account services and resources
- Security for Linux and Windows servers now starts at 4¢ per server-hour, regardless of server size or hosting environment (cloud instances, traditional VMs, bare-metal hosts)
- Security for Docker hosts starts at 14¢ per host-hour, including securing the host itself, unlimited container launch monitoring, automatic scanning of instantiated images (supports generic Docker hosts, Kubernetes nodes, ECS instances, etc), and scanning of images at-rest in registries or in-motion via CD pipelines
- Volume and term discounts of up to 65% are available based on subscription volume and term commitment
* Halo support for GCP is in progress and will be released in Q3 2020
What is Included
Halo Enterprise still offers a broad range of cloud workload security automation that works in any public, private, hybrid or multi-cloud hosting environment with a single pane of glass. Capabilities include asset discovery and inventory, security assessment, ongoing security posture monitoring, threat detection, microsegmentation, and continuous compliance for IaaS, server-based, and containerized application environments.
Learn more about Halo Enterprise
Flex Licensing Subscriptions
Many security organizations simply can’t predict how many workloads of various types they will need, or have a fixed budget that they need to deploy and redeploy with flexibility:
“We have no idea how many cloud servers or containers we have today, and we expect those numbers to continue shifting. We’d rather set a monthly budget and use it fluidly as our needs dictate.”
Flex Licensing Subscriptions have you covered. In the Flex licensing model, your unit discounts are based on the volume of your fixed monthly budget. Use your discounted Halo service units each month however you need to, and they’re charged against your monthly pool of usage dollars. You can deploy Halo to new infrastructure, decommission old infrastructure, and even move licensing at will—and you don’t even need to contact us to do it. Flex Licensing allows you to stick to a fixed monthly budget, even if you can’t predict what’s coming.
Contact us to learn more about Flex Licensing Subscriptions
Pay-as-you-Go (PAYG) Subscriptions
Some companies we’re talking with have the need for Halo right now, but absolutely cannot make a contractual commitment to a new vendor:
“I have some discretionary budget to go month-to-month but there’s no way procurement is going to let me sign a contract with a commitment to anything new.”
If you’re in this situation, you’re not alone. It is happening everywhere, and it’s frustrating to see a solution to your problem that you can’t act on. We get it.
Halo Pay-as-you-Go (PAYG) subscriptions enable you to access Halo’s capabilities without a commitment. Available from the AWS Marketplace, subscriptions under the PAYG model allow you to pay for exactly the Halo security services that you use each month with no contractual commitment—cancel anytime without penalty or effort.
Learn more about Halo PAYG
AWS Marketplace Private Offers
The AWS Marketplace private offer feature enables you to negotiate product pricing, terms, and conditions that are not listed by a vendor on the AWS Marketplace.
CloudPassage now supports procurement of Halo through private offers. CloudPassage creates a private offer based on agreed-upon pricing and terms, which appears in the AWS account that you designate. You accept the private offer and start receiving the negotiated price and terms of use.
In addition to making procurement simpler, this can be financially beneficial for enterprises that participate in the AWS Enterprise Agreement program. Under some AWS Enterprise Agreements, AWS customers receive discounts based on the total amount spent with AWS. Software purchased through the AWS Marketplace can provide credit towards achieving higher AWS spending levels and therefore higher AWS discounts.
Ask your procurement team if this opportunity exists in your enterprise and contact us to discuss a private offer for Halo through the AWS Marketplace.
Learn more about AWS Private Offers
Free Initial Onboarding and At-Cost Professional Services
Many companies we speak to recognize that once fully deployed, CloudPassage Halo requires little to no operational effort. The challenge is that they don’t even have the personnel bandwidth for initial Halo deployment:
“We need it, we want it, we know the automation will give our team relief… but we don’t even have time to set it up.”
We understand the conditions that teams are working under—reduced hours, furloughed team members, and even security teams being reassigned to non-security tasks.
Free Initial Onboarding for New Halo Deployments through 2020
Under normal circumstances, onboarding is provided for a fee—but today’s circumstances are far from normal. To help teams who need Halo up and running quickly, our customer success team will perform initial onboarding free of charge. This includes:
- Setting up your Halo account
- Structuring your hierarchical asset groups
- Selecting the right policies for your environments
- Setting up your IaaS connectors
- Providing pre-populated microagent deployment scripts
- Setting up user accounts
- Configuring reporting and alerting so that security, DevOps, and system owners get the most critical information they need at the right frequency and in the right format
- Orienting all of your Halo administrators and users to our online training resources
Professional Services Provided At Cost through 2020
Many companies want to go beyond initial onboarding to gain deeper value from Halo’s automation capabilities. The challenge is again staffing—they just don’t have the bandwidth to engage.
We’re alleviating this challenge by providing all professional services at-cost for advanced Halo integrations like automatic issue routing, workflow tool integration (e.g. Jira, ServiceNow), and SIEM/GRC/SOAR integrations.
NOTE: Professional services projects will be scheduled on a first-come, first-served basis based on resource availability.
Contact us to learn more about free initial onboarding and at-cost professional services
Additional Economic Advantages of Halo
Our new pricing and packaging options improve Halo’s value for your investment on top of the economic advantages the platform already provides. The following are some key considerations for those not familiar with Halo’s existing economic benefits.
Usage-based Subscriptions
Many products bind licenses to factors like specific users, IP addresses, or specific IaaS accounts. Redeploying licenses with these products is often not permitted, even if the originally bound asset goes away. Products that do allow redeployment of licenses often have restrictions, require manual effort to deactivate and reactivate licenses, or even require that you contact the vendor to request changes. And very few products offer automatic on-demand licensing, discounted or not. These limitations result in lost budget dollars, plain and simple.
Halo is purely usage-based and all licensing is transparently portable. You can provision, deprovision, and move licensed services automatically. It’s simple—if you turn Halo on or off for an environment, the licensing takes care of itself. You don’t need to do anything extra, and you can see your usage at any time.
Halo’s usage-based subscription structures also have built-in volume discount tiers that apply to prepaid or “reserved” service units as well as on-demand service units. This enables you to pay for exactly what you need while maximizing discount opportunities and use discounted on-demand service to address variable requirements.
Simple, Transparent, and Predictable
Some products charge based on bandwidth, data ingest and storage, scans run, or even each time a rule is evaluated. Others charge different prices based on the size of workloads or hosting environment. The list goes on. This makes it extremely hard to predict costs, leading to frequent budget overruns.
Halo’s licensing model is simple and transparent, which leads to predictability. We charge based on the number of IaaS accounts and workloads, period. Maximize your discounts by committing to the steady-state workloads you need, and use discounted on-demand services for variable needs like intermittent autoscaling or planned seasonal capacity changes.
No Hidden Costs or Extra Fees
Some products “nickel-and-dime” customers with extra fees for things that should be included, like compliance rules, API access, or single sign-on integration. Some also advertise as SaaS solutions, but actually require the deployment of additional infrastructure—often a lot of additional infrastructure—for scanning appliances, data aggregators, and similar requirements. Not only do these extras cost hard dollars for licensing and infrastructure, but they increase operational costs since they need to be operated and maintained.
Halo is a true all-inclusive SaaS solution that has no requirements that drive extra costs for you. Halo’s architecture does not need intermediate collectors to scale and API connectors are operated from the Halo cloud environment. Halo also does not charge extra for items like compliance analytics, ad-hoc scans, or API connectors.
Learn More About Halo
Our goal with these sweeping changes is to make Halo’s powerful capabilities available to more companies and to help address the unprecedented challenges our industry is facing. We’re striving to deliver a superior, enterprise-class solution that’s unified, comprehensive, automated, and portable with even better economics and flexibility.
To learn more about CloudPassage and Halo, explore the resources on our website or contact us to speak with a cloud security expert.
Scalable Cloud Workload Security: Part 4 of a Series
The “Forrester Wave: Cloud Workload Security, Q4 2019” report provides an excellent overview of the security challenges posed by cloud computing and the solutions best poised to address cloud workload protection. In this fourth blog post of our series on the Forrester Wave, we explore two more criteria for which CloudPassage Halo received the highest scores possible, “Scalability: protected cloud instances” and “Scalability: protected containers.” We will share our thoughts on what scalable cloud workload security means, why it’s important, and why we believe CloudPassage received 5 out of 5 in these two criteria.
Enterprises want the speed and scalability offered by cloud infrastructure but often cite security and compliance as primary inhibitors of adoption. To eliminate these inhibitors, cloud workload security solutions require the same automatic, transparent scalability of the cloud environments they protect. Forrester punctuated this need for scalable cloud workload security by recommending that buyers seek solutions capable of “scalable deployment of protection to a large number of workloads without interruption.”
Scalability becomes a key requirement for security operations as instantly scalable cloud infrastructure becomes the norm for application hosting. Cloud environments can scale up rapidly and dramatically, which can easily overwhelm security solutions not designed for these kinds of operations. Cloud security platforms must be able to scale in lockstep and instantaneously secure new assets, and they must do it with zero operational overhead.
Let’s take a look at what “Scalable Cloud Workload Security” means and why it’s important.
What Scalable Cloud Workload Security is and Why it is Important
Cloud computing has become the new normal for enterprises as the benefits of IaaS are realized and scaled. Higher agility, faster and easier deployment, and scalability are just a few of these benefits. As cloud computing environments rapidly scale up and down automatically, security must be equally as scalable and automated to keep up with the rate of change. This is an extreme requirement that cannot be fulfilled by legacy security tools and approaches built for a different time.
Security and compliance stakeholders must recognize two key dimensions of scalability that cloud security solutions must address as their enterprise clouds grow:
- Short-term cloud scaling operations (e.g. cloudbursting, autoscaling, microservice orchestration) require security capabilities that can scale as rapidly as the servers, containers, and IaaS resources they protect.
- Long-term cloud growth, as more enterprise workloads migrate to IaaS, requires security capabilities that can grow without encountering technical, operational, or economic limitations.
Clearly there’s a need for scalable cloud workload security solutions that can automatically adjust their scale to keep pace with the underlying cloud infrastructure.
How scalability can challenge cloud security and compliance
One of the great advantages of cloud infrastructure is the ability to size infrastructure iteratively to address current and future needs. Projects can be deployed without large upfront costs or risky predictions, instead starting at limited scale and adding “just-in-time” resources as growth dictates. As enterprises see pockets of early success with cloud infrastructure, every business unit will want to reap the benefits.
The ability to create and scale nearly instantly brings many benefits but creates challenges for security and compliance. Here are some of the most common challenges that we’ve built Halo to address.
Legacy tools can’t keep up with technical cloud scale
The technical characteristics of cloud infrastructure are markedly different from traditional datacenter hosting environments. Legacy tools were built under a different set of assumptions and premises that leaves them unable to function well, if at all, in cloud computing environments. Many of these challenges are directly related to scale.
For example, a single data center server is often redeployed as multiple smaller server instances in IaaS, which is core to the concept of cloud computing’s horizontal scalability. This means there are more individual operating systems, configurations, and related components to manage. Cloud server instances are also frequently ephemeral, recycled far more often than traditional bare-metal hosts or virtual machines, which creates more overhead for security tools. In addition, IP addresses change often in cloud environments, creating ripple effects on network-centric security tools and often breaking policies and other IP-centric control constructs. All of these changes place greater processing and compute demands on security tooling. Bottom line: there’s no place in the virtualized world of cloud computing for the hardware-based acceleration that traditional security tools depended upon to scale.
While these and other technical scalability factors cause legacy security tools to fail in cloud environments, successful cloud workload protection programs are built on cloud-purposed solutions designed to address them.
Cloud security operations cannot scale without automation
There are also significant operational differences between legacy environments and cloud computing that drive the need for scalable cloud workload security. DevOps and continuous delivery, which go hand-in-hand with cloud infrastructure, can create serious security and compliance operational disruptions.
- Cloud infrastructure is software-defined and instantly scalable, making the volume and speed of changes orders of magnitude greater than traditional environments.
- Automation toolchains that implement continuous deployment amplify this new level of operational speed and scale.
- DevOps teams are now often very autonomous and embedded within business units, meaning traditionally regimented operational processes are often eschewed.
- The rapidly expanding universe of cloud services also drives operational challenges—the sheer number of diverse technologies that a central security organization must address is staggering.
- Instrumenting cloud security components requires direct integration into infrastructure templates and build-time automation before security controls can even be deployed.
Automation is the linchpin of successful execution in these diverse, distributed, and dynamic cloud environments. Failure to adapt security operations to these new realities results in a dangerous inability to keep up, so success requires cloud workload security platforms with the deep automation capabilities needed to enable operational scale.
Traditional collaboration doesn’t scale for distributed DevOps organizations
Scalability problems can come from surprising sources—even organizational shifts. The structured, one-to-one cooperation between centralized security and operations teams is gone, and the new one-to-many model can create massive scalability strain if not handled properly.
Traditional organizations were established with a central IT organization at their core with subunits specializing in development, hosting operations, security, end-user computing, and so on. This centralized structure often resulted in well-defined, disciplined operations enforced by central IT executive management, with its rules of engagement and associated expectations well-understood.
The advent of DevOps supplanted this regimented machinery with many small DevOps teams, very often reporting into distributed business units with their own priorities. As a result, central security organizations are forced to forge a new model for communication and collaboration without the luxury of common executive authority. The independent nature of DevOps means that every team can be dramatically different, and InfoSec may need an individual approach for successfully interacting with each one of them.
Gone are the days of sending emails, PDFs, and spreadsheets to system owners. For scalable cloud workload security, DevOps teams want collaboration to happen in-line with their existing tools and processes. Slow-moving legacy approaches to collaboration impact their operational speed, something that’s tolerated at best and rejected at worst. Cloud security platforms must be designed with this reality in mind and provide methods for InfoSec to deliver automatable data to DevOps teams within their existing tools and workflows.
With the cloud security importance and challenges in mind, we’ll turn to sharing our thoughts on Forrester’s assessment of the CloudPassage Halo platform and our scalable cloud workload security.
Why We Believe CloudPassage Received 5 out of 5 in Forrester’s Criteria for Cloud Instance and Container Security Scalability
CloudPassage was purpose-built in 2010 to automate security and compliance management for servers across public and hybrid cloud environments. Since that time, CloudPassage has invested heavily in the platform’s evolution to address new cloud technologies and their security needs.
Halo now addresses security for server-based, containerized, and IaaS/PaaS services across any mix of public, private, hybrid, and multi-cloud deployments.
Halo customer deployments range in scale from a single cloud stack with a few assets to thousands of development and production stacks with millions of assets. Our largest scaling event involved 40,000 servers per hour. Halo’s transparent scalability and comprehensive capabilities give you the ability to address rapidly emerging cloud security needs and prevent security from impeding progress. Halo is blazing fast, and its architecture is designed for transparent scalability that makes temporary scale-up operations automatic and long-term growth simple.
The Halo platform’s architecture combines auto-scaling microservices, batch processing, streaming data analytics, SQL and NoSQL data stores, and cloud object storage, and is hosted 100% in public IaaS.
Security analytics and orchestration environment
The core of the Halo platform is the Halo cloud, a security analytics and orchestration environment that performs security analysis, control orchestration, and compliance monitoring for millions of cloud assets simultaneously. The Halo cloud receives continuous telemetry, state, and event data from lightweight microagents and API connectors deployed across the user’s cloud environments.
Autoscaling microservices
Telemetry and scan payloads are processed by highly efficient, purpose-built, autoscaling microservices. Based on user configurations, Halo’s security microservices take actions such as generating scan findings, analyzing cloud security events, executing REST API commands, or triggering other security automation microservices to generate and deliver intelligence, orchestrate distributed control and policy updates, perform situationally-specific interrogation of assets cloud-wide, and more.
Patented command and control model
Halo monitors millions of cloud resources simultaneously using this patented “command-and-control” model. Halo automates many recurring and ad-hoc security operational tasks.
Automated deployment and workflows are required for scalable cloud workload security
Halo’s comprehensive automation builds security into the continuous deployment pipelines and automates workflows between security and development—critical for scalable cloud workload security.
Halo microagents and API connectors are designed for quick and easy deployment using existing automation tools. Halo microagents transparently support server autoscaling (a.k.a. “cloudbursting”), cloning, and migration between environments, and thus can support scalable cloud workload security.
Integration with existing automation tools is accomplished through the Halo REST API, recognized as the industry’s most complete fully bi-directional API. Through the API, Halo is able to fully integrate with leading infrastructure automation tools for easy implementation and automated operation.
Through the API, data from Halo can be used to open tickets in tools such as Jira or ServiceNow, export findings to common SIEMs, or generate an Ansible playbook that remediates vulnerable packages.
This allows teams to implement frictionless security by enabling security, IT, and DevOps to integrate and automate security into DevOps processes and continuous deployment pipelines while fostering collaboration between InfoSec and development or DevOps.
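To make the ticketing integration described above concrete, here is a minimal Python sketch that turns a single vulnerability finding into a Jira issue through Jira's standard REST API. The structure of the `finding` dictionary is purely illustrative (it is not Halo's actual export format), and the Jira URL, project key, and credentials are placeholders.

```python
import requests

# Hypothetical finding exported from a scanning platform; the field names
# here are illustrative only, not an actual Halo payload format.
finding = {
    "asset": "web-frontend-042",
    "cve": "CVE-2020-0601",
    "severity": "critical",
    "package": "openssl-1.0.2k",
    "remediation": "Upgrade the affected package to the latest patched release.",
}

JIRA_URL = "https://jira.example.com"       # placeholder
JIRA_PROJECT = "SEC"                        # placeholder project key
JIRA_AUTH = ("svc-security", "api-token")   # placeholder credentials

def open_ticket(finding):
    """Create a Jira issue for a single vulnerability finding."""
    payload = {
        "fields": {
            "project": {"key": JIRA_PROJECT},
            "issuetype": {"name": "Bug"},
            "summary": f"[{finding['severity'].upper()}] {finding['cve']} on {finding['asset']}",
            "description": (
                f"Vulnerable package: {finding['package']}\n"
                f"Recommended fix: {finding['remediation']}"
            ),
        }
    }
    resp = requests.post(
        f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=JIRA_AUTH, timeout=30
    )
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. "SEC-1234"

if __name__ == "__main__":
    print("Opened", open_ticket(finding))
```

In practice the same pattern applies in reverse: a workflow tool can call back into the security platform's API to confirm remediation, keeping both systems in sync without manual hand-offs.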
Automatic application of policy controls
Another important aspect of scalable cloud workload security is how the security controls themselves support scalability. Halo unifies a broad range of security controls across servers, containers, and public cloud infrastructure. Halo provides more than 150 security policies with thousands of rules to cover various asset types, operating systems, common applications, and security best practices. These policies are assigned to groups, and when new assets within that group come online, they are automatically assessed based on the assigned policies, with no manual intervention.
Autoscaling and cloud-bursting support
For server cloning and autoscaling, Halo automatically deploys, configures, inventories, interrogates, assesses, and initiates monitoring of new servers and containers without user intervention. Halo:
- Handles cloud-bursting and autoscaling events by automatically detecting and instrumenting, monitoring, and protecting new cloud assets as they come online
- Retains information on ephemeral workloads as you scale back down, so data on short-lived assets is not lost
Scalable licensing model
Halo offers a subscription model that aligns with the pay-as-you-go models of cloud service providers. The Halo licensing model is designed for dynamic cloud environments so that you pay for only what you need; it is:
- Consumption-based with complete user flexibility in license allocation
- Based on cloud assets protected to make budget forecasting predictable
- On-demand when needed, with license bursting to address temporary infrastructure scale-up events transparently
Scalable Cloud Workload Security Conclusion
In summary, we believe Halo received the highest scores possible in both the "Scalability: protected cloud instances" and "Scalability: protected containers" criteria in the "Forrester Wave: Cloud Workload Security, Q4 2019" report because the capabilities described above, when combined into one unified platform, support the following common characteristics of cloud adoption.
Organic application growth
The cloud infrastructure for a new application typically starts small and grows, often very quickly. With Halo:
- DevOps teams can acquire only the infrastructure they need to get started, then grow their environment as application demand mounts.
- As asset count grows and new application functionality is developed and deployed, every asset is automatically secured.
- Security and compliance can grow along with organic application growth, easily and without disruption.
Viral cloud adoption
When larger enterprises see initial success with cloud computing, adoption goes viral as more business units want to migrate existing applications or build greenfield ones to reap the benefits of the cloud. With Halo:
- Security teams will not have to worry about the number of cloud environments or autonomous DevOps teams because Halo can automate instrumenting them for security.
- Security teams can handle dramatic environment growth and organizational shift.
Autoscaling applications
One of the core benefits of cloud-based application infrastructure is the ability for application components to autoscale, but with autoscaling, the number of infrastructure assets in an application environment can multiply many times over. With Halo:
- Autoscaling won’t require any manual effort to scale security tools in concert with the underlying environment.
- Securing all assets associated with autoscaling events is transparent and automated.
To learn more about how Halo’s transparently scalable cloud workload security can help you secure your cloud infrastructure and assets:
Read our previous blogs on criteria for which CloudPassage received the highest scores possible in the “Forrester Wave: Cloud Workload Security, Q4 2019” report.
Subscribe to our Blog in the upper right corner of this page, so you don’t miss the next one on Centralized Agent Framework Plans.
The post Scalable Cloud Workload Security: Part 4 of a Series appeared first on CloudPassage.
]]>
https://ift.tt/2KqP0JL
Wed, 22 Apr 2020 02:09:07 +0000
https://ift.tt/2EM0WWF
Thank you for the great response at BSides San Francisco 2020, where we unveiled our real-time vulnerability alerting engine. By harnessing public data and applying data analytics, we cut through the noise and get real-time alerts only for highly seismic…
The post Dozen Dirtiest CVEs Q120 (Cloud Vulnerability Exposures) appeared first on CloudPassage.
]]>
Thank you for the great response at BSides San Francisco 2020, where we unveiled our real-time vulnerability alerting engine. By harnessing public data and applying data analytics, we cut through the noise and get real-time alerts only for highly seismic cloud vulnerability exposures (CVEs)—making vulnerability fatigue a thing of the past. If you missed our BSidesSF 2020 talk, you can watch the video “Real-Time Vulnerability Alerting” on YouTube. The real-time vulnerability alerting engine has been humming and churning data since BSides, and here are the consolidated results for the dozen dirtiest CVEs Q120.
Overview of Q1 Vulnerabilities
The X-axis of this graph represents each day of Q1 2020, while the Y-axis represents the vulnerability intelligence quotient calculated by the engine (see the BSides presentation for more info). For simplicity, the Y-axis has been divided into four colors—red, orange, yellow, and green—which represent the dirtiness (or criticality) of each vulnerability. Each blue dot represents a vulnerability: its placement on the X-axis is the date on the timeline, and its placement on the Y-axis is its criticality (i.e., vulnerability intelligence quotient). The same vulnerability can appear on multiple days, especially vulnerabilities with a high Y-axis value.
#1 Dirtiest CVE Q120 – CVE-2020-0601 (CurveBall)
The title of dirtiest CVE Q120 goes to CVE-2020-0601—a vulnerability reported by the United States National Security Agency (NSA) that affects how cryptographic certificates are verified by the Windows cryptography library that makes up CryptoAPI. Dubbed "CurveBall," the vulnerability could allow an attacker to create their own cryptographic certificates (signed with Elliptic Curve Cryptography algorithms) that appear to originate from a legitimate certificate fully trusted by Windows by default. Proof-of-concept (PoC) exploits are available, and one of them can be found on GitHub here.
#2 – CVE-2020-0796 (EternalDarkness/GhostSMB)
The second dirtiest CVE Q120 is CVE-2020-0796—also known as EternalDarkness or GhostSMB. On March 10, details of this vulnerability were inadvertently shared under the assumption that Microsoft was releasing a patch; Microsoft issued the fix only after public details were available, as an out-of-band patch on March 12. The vulnerability allows an unauthenticated attacker to exploit a vulnerable SMBv3 server by sending it a specially crafted packet. Similarly, if an attacker can convince or trick a user into connecting to a malicious SMBv3 server, the user's SMBv3 client can also be exploited. Whether the server or the client is the target, successful exploitation grants the attacker the ability to execute arbitrary code. A PoC for this issue can be found on GitHub here.
#3 – CVE-2019-19781
The honor of third dirtiest CVE Q120 goes to CVE-2019-19781, which affects Citrix Gateway and Citrix Application Delivery Controller (ADC). Initially it was thought to be just a directory traversal vulnerability that would allow a remote, unauthenticated user to write a file to a location on disk, but further investigation showed that it allows full remote code execution on the host.
Top 12 Dirtiest CVEs Q120
The prioritized list of the complete dirty dozen for Q1 2020 is in the table below.
| Priority | Vulnerability | Description |
|---|---|---|
| 1 | CVE-2020-0601 | Windows Elliptic Curve Cryptography (ECC) certificate spoofing |
| 2 | CVE-2020-0796 | Windows SMBv3 Client/Server Remote Code Execution Vulnerability |
| 3 | CVE-2019-19781 | Citrix Application Delivery Controller (ADC) and Gateway RCE |
| 4 | CVE-2020-0688 | Microsoft Exchange Memory Corruption Vulnerability |
| 5 | CVE-2020-0674 | Microsoft Scripting Engine Memory Corruption Vulnerability |
| 6 | CVE-2020-0609 | Windows Remote Desktop Gateway (RD Gateway) Remote Code Execution Vulnerability |
| 7 | CVE-2020-0610 | Windows Remote Desktop Gateway (RD Gateway) Remote Code Execution Vulnerability |
| 8 | CVE-2020-1938 | Apache JServ Protocol (AJP) arbitrary file access |
| 9 | CVE-2019-11510 | Pulse Secure Pulse Connect Secure arbitrary file reading vulnerability |
| 10 | CVE-2019-17026 | Firefox and Thunderbird code execution |
| 11 | CVE-2019-0604 | Microsoft SharePoint Remote Code Execution Vulnerability |
| 12 | CVE-2019-18634 | Linux /etc/sudoers stack-based buffer overflow |
How CloudPassage Halo Can Help
CloudPassage Halo customers can use Halo's Server Secure service, our software vulnerability manager, to identify and prioritize the dozen dirtiest CVEs Q120 lurking in their environments.
CloudPassage Halo Servers Tab
Customers can also create custom reports to view details on the dozen dirtiest CVEs Q120.
CloudPassage Halo Vulnerability Report
To keep up to date on our new control policies as we release them and our quarterly reports on the Dozen Dirtiest CVEs Q120 and beyond, subscribe to the CloudPassage Blog in the upper right corner of this page.
Learn more about CloudPassage Halo Server Secure.
Get a free vulnerability assessment of your infrastructure in 30 minutes.
The post Dozen Dirtiest CVEs Q120 (Cloud Vulnerability Exposures) appeared first on CloudPassage.
]]>
https://ift.tt/3eoCvfg
Tue, 14 Apr 2020 03:01:58 +0000
https://ift.tt/2QLHcpf
As we mentioned in a previous blog, the “Forrester Wave™: Cloud Workload Security, Q4 2019” report provided an excellent overview of the security challenges posed by cloud-based environments and the cloud workload security solutions best poised to address them based…
The post Containerization and Container Orchestration Platform Protection: Cloud Workload Security Part 3 appeared first on CloudPassage.
]]>
As we mentioned in a previous blog, the “Forrester Wave: Cloud Workload Security, Q4 2019” report provided an excellent overview of the security challenges posed by cloud-based environments and the cloud workload security solutions best poised to address them based on 30 criteria. In this third blog post of our series, we explore the criterion “containerization and container orchestration platform protection.”
Public cloud computing caused a seismic shift in how application infrastructure was provisioned and managed. Soon after, another seismic shift opened up even more disruptive possibilities—workload containerization.
Today, containerized environments continue to evolve towards even greater levels of speed, flexibility, and dynamic operation. The challenge is they can be incredibly complex, especially in advanced use cases.
Solutions that provide container security must be able to address the broad set of tools typically in use, automation of change management, integration directly into continuous delivery pipelines, and scaling to handle hundreds of thousands to millions of container images and instances. The implementation of containerization and container orchestration platform protection is critical to cloud workload security.
About Containerization and Container Orchestration Platform Protection
In the key takeaways section of “The Forrester Wave: Cloud Workload Security, Q4 2019”, Forrester says “Support For Containerization And OS-Level Protection Are Key Differentiators” and describes the capability as follows:
“With Kubernetes and Docker becoming de facto container environments mainly deployed on cloud platforms, S&R professionals need to be sure that: 1) they scan container images pre-runtime and runtime; 2) there are controls for any configuration drifts at the container level; and 3) they monitor network communications and system calls among containers as well as between containers and the underlying host operating system. Other differentiators include vendor-supplied and constantly updated best practices and compliance templates as well as agentless and agent-based container architectures.”
We’ll start with our perspective on what these capabilities entail, why they’re important, and what security and compliance control objectives they can deliver.
What Is Containerization and Container Orchestration Platform Protection?
Containerization and container orchestration platform protection is a set of capabilities that integrate directly into the broad set of components that make up a containerized environment and automatically discover, inventory, assess, monitor, and control these components.
The scope of containerization and container orchestration platform protection capabilities typically includes:
- Security for container images at-rest (stored in registries) and in-motion (moving through CI/CD pipelines)
- Security for container runtimes (e.g., Docker) running on hosts, pre-configured clusters, container-as-a-service (e.g., AWS Elastic Container Service or Elastic Kubernetes Service) or runtime-as-a-service (e.g., AWS Lambda)
- Security for container instances launched into runtimes (e.g., Azure Container Instances)
- Security for underlying container orchestration platforms such as Kubernetes or Mesos
- Security for systems that support containerized environments, such as image registries, artifact servers, code repositories, and Jenkins hosts
- Security for deployments across public, hybrid, and multi-cloud models
Beyond that scope, a solution must also protect not only the containers themselves but also the images and runtimes, handle massive numbers of container images and instances, and support "shift-left" strategies.
Runtimes implemented as self-operated Docker hosts are particularly important to protect because:
- All guest containers on a Docker host share that host’s operating system kernel and often other common components
- The Docker host contains instrumentation with privileged access to the containers and other infrastructure (e.g., dockerd, kubectl), so if the host is compromised, the entire environment is compromised.
For now, let’s consider why containerization and container orchestration platform protection is important and what security and compliance use cases it can address.
Why Containerization and Container Orchestration Platform Protection are Important
Containerization has revolutionized application infrastructure. Innovations in container platforms now deliver nearly unlimited portability, dynamic capacity management, and levels of operational automation never before possible. These environments continue to evolve towards even greater levels of speed, flexibility, and dynamic operation.
Effective container security requires tools with the same level of speed, flexibility, and dynamic operation. This means automation.
Below are some of the reasons why containerization and container orchestration platform protection are essential capabilities to have in the security and compliance arsenal.
Preventing a Security Incident from Becoming Viral
By now, we all know how quickly someone infected with a virus can expose a large number of other people in a short time.
Similarly, a vulnerable or otherwise dangerous container image can result in hundreds or thousands of exposed container instances in the normal course of the image being used.
Running containers are instantiated from static images, much like virtual machines from a virtual-machine image or AWS EC2 instances from AMIs (Amazon Machine Images). But containers can be instantiated even faster, leaving little to no opportunity for security intervention if an image contains dangerous code or has been subverted.
This speed means that a single bad image can expose entire application environments to attack almost instantly. This risk is compounded when other “child” container images are built on top of a bad container image.
Given that every single configuration change to a container creates another image, this results in a massive number of images (container image sprawl) that could expose the broader container environment. A large number of potentially bad containers being accessible for instantiation combined with the speed of their instantiation can lead to an explosion in attackable application surface area.
Preventing a viral security incident is just one of the many reasons why containerization and container orchestration platform protection is important.
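One small, concrete control that helps contain this kind of viral spread is checking running containers against an approved image list. The sketch below uses the Docker SDK for Python to flag containers whose images are not on an allow-list. The hard-coded allow-list is a stand-in for whatever source of truth an organization actually maintains (a registry of signed or scanned images, for example); this illustrates the idea rather than describing any product's implementation.

```python
import docker

# Stand-in for an organization's approved image list; in practice this would
# come from a signed registry or an image-assurance policy, not a hard-coded set.
APPROVED_IMAGES = {
    "registry.example.com/payments/api:1.4.2",
    "registry.example.com/base/nginx:1.17-hardened",
}

def find_rogue_containers():
    """Return running containers whose image tags are not on the allow-list."""
    client = docker.from_env()
    rogue = []
    for container in client.containers.list():
        tags = container.image.tags or ["<untagged>"]
        if not any(tag in APPROVED_IMAGES for tag in tags):
            rogue.append((container.short_id, container.name, tags))
    return rogue

if __name__ == "__main__":
    for short_id, name, tags in find_rogue_containers():
        print(f"Unapproved container {name} ({short_id}) running image(s): {tags}")
```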
Shifting Security To The Left By Making Security Tests Part Of The Build
Tremendous gains have been made in enabling continuous software delivery by applying the DevOps practice of shifting left.
In a software testing context, shifting left means moving application testing and security earlier in the development process. This matters because shifting security to the left is key to delivering quality software at speed: it drives teams to start testing earlier in the pipeline and ultimately to build security into software rather than bolting it on at the end.
To learn more about the concept of shifting left and its value—such as reducing the cost of fixing vulnerabilities by almost 10 times—read What is “Shift Left”? Shift Left Testing Explained – BMC Blogs.
Containerization, DevOps, and continuous delivery go hand in hand.
The key goals of DevOps are to speed up production and improve product quality. These goals make containerization a critical component of DevOps and continuous delivery, since it increases the stability and reliability of applications and makes application management easier.
Containerization also directly supports continuous delivery pipeline infrastructure by easily integrating into existing processes. To learn more, read “Continuous infrastructure: The other CI.”
Continuous delivery of infrastructure depends on automated testing to ensure that the application and the infrastructure it runs on behave as expected. Within each stage, teams use quality gates to assess the quality level of an application. These are called "gates" because the tests have to run clean before the build can progress toward production—a failure is known as "breaking the build" and requires the owner of the changes (usually a DevOps engineer) to go back and fix what they broke.
This testing, or these quality gates, is why continuous delivery works well. You can focus on specific tests that give you the fastest feedback, and with the right tooling, security can be made part of the build, helping ensure that vulnerabilities never make it to production.
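As a sketch of what such a security quality gate can look like in practice, the script below reads scanner output and exits non-zero when findings exceed a severity threshold, which is what "breaks the build" in most CI tools. The JSON format is assumed purely for illustration; real scanners each define their own output schemas.

```python
import json
import sys

# Assumed scanner output format for illustration: a JSON list of findings,
# each with a "severity" field. Real scanners define their own schemas.
BLOCKING_SEVERITIES = {"critical", "high"}

def gate(report_path):
    """Return the number of findings severe enough to block the build."""
    with open(report_path) as fh:
        findings = json.load(fh)
    blocking = [f for f in findings if f.get("severity", "").lower() in BLOCKING_SEVERITIES]
    for f in blocking:
        print(f"BLOCKING: {f.get('id', 'unknown')} severity={f['severity']}")
    return len(blocking)

if __name__ == "__main__":
    # Example pipeline step: python security_gate.py scan-report.json
    count = gate(sys.argv[1])
    if count:
        print(f"{count} blocking finding(s); failing the build.")
        sys.exit(1)  # non-zero exit status breaks the build
    print("Security gate passed.")
```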
Achieving a shift-left posture by making security testing part of the build is a major reason why containerization and container orchestration platform protection capabilities are important.
Rapid DevOps Feedback
With traditional security approaches, such as vulnerability scanners, getting feedback to DevOps teams and other system owners takes a long time (days, weeks, even months).
Containerized environments often leverage continuous delivery and the related automated testing process discussed above.
Making security part of the build through container orchestration platform protection capabilities creates a real-time feedback loop for system owners that's part of their day-to-day workflow. Security flaws introduced by system owners are presented to them immediately, and they must fix them before moving to the next stage. This also results in better education and far fewer security flaws making it to production.
Reducing Exposures, Vulnerability Windows, and Security Assurance Costs
Security problems are harder to fix once they're in production. The further they get down the development pipeline, the more complicated and costly they are to fix, which requires more people and more time.
Waiting until an issue gets further into the production workflow also lengthens the time to fix it, which means there's a longer vulnerability window—that is, the exposure sits in production while it's being fixed. Additionally, once the software is in the testing phase, reproducing defects in a developer's local environment becomes yet another time-consuming task.
Implementing processes for detecting issues early and often can be extremely valuable because it cuts down on the potentially exponential costs of re-work and vulnerability remediation. With the right automation, continuous delivery methods reduce the exposures released into production, shrink vulnerability windows, and cut the overall cost of security assurance for containerized environments.
This is a reason why containerization and container orchestration platform protection capabilities are essential to the effectiveness of these environments.
Prevent Attacks On Runtime, Orchestration, and Registry Layers
The end-to-end protection of containers in production is also critical to avoiding the steep operational and reputational costs of potential data breaches.
While it's easy to focus myopically on the containers themselves, they're just part of the equation.
Comprehensive runtime container security also requires securing orchestration systems, which may have vulnerabilities creating even more attack surfaces for malicious actors. Attackers who are able to penetrate runtimes or orchestration layers own the environments they host and/or manage.
Your entire container ecosystem is only as secure as its least secure container, and the security of that container is at least in part dependent on the registry from which you pulled the original container image.
Compromise of an image registry would mean that the very source images used to build containerized applications are compromised. Imagine what an attacker could do by distributing pre-compromised software.
These components are critical to protect in addition to the running containers themselves. Well-rounded containerization and container orchestration platform protection capabilities are essential for ensuring that applications are protected from attacks and exploits throughout as much of the build-ship-run lifecycle as possible.
Use Cases for Containerization and Container Orchestration Platform Protection
The simple ability to implement containerization and container orchestration platform protection is a far cry from automating a specific operational task at scale, across an environment.
In our experience working with hundreds of companies on cloud security, the most critical question to ask may be “What can I do with it?”
The following capabilities can address many use cases, too many to list here. The most common use cases in which control objectives are achieved with containerization and container orchestration platform protection include:
- Continuous Asset Awareness – Automated, continuous discovery and inventory of IaaS services and resources. You can’t do any of the things below if you don’t know the assets exist.
- Point-in-Time Security Assessment – Assessment of public cloud account security, including the IaaS account itself and the security of the services and assets inside the account
- Continuous Security Monitoring – Ongoing IaaS environment monitoring to detect and evaluate how changes and events impact your security and compliance posture
- Compliance Auditing & Monitoring – Point-in-time evaluation of compliance posture against a range of standards (a.k.a. pre-auditing), or continuous compliance monitoring to surface issues as they arise instead of “cleaning up” right before an audit.
- Detect Indicators of Threat & Compromise – Attackers will use cloud technology to their advantage, leaving cloud “versions” of rootkits and other malicious artifacts as part of their attacks. With the right containerization and container orchestration platform protection automation, indicators of these situations are quickly detected to accelerate prevention, isolation, containment, investigation, and clean-up.
- Automated Issue Remediation – Leveraging cloud provider APIs to implement automatic remediation for exposures and compliance flaws is extremely valuable but often overlooked. Capturing metadata from provider APIs enables system owners to automate the process of zeroing in on and remediating problems, creating fully automated remediation capabilities.
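As one hedged illustration of the automated-remediation idea in the last bullet (not a description of Halo's internals), the sketch below uses boto3 to find security group rules that leave SSH open to the world and revoke just the offending entries. Whether to revoke automatically or simply alert is a policy decision each team has to make.

```python
import boto3

def revoke_world_open_ssh(region="us-east-1"):
    """Find and revoke ingress rules that allow 0.0.0.0/0 on port 22."""
    ec2 = boto3.client("ec2", region_name=region)
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for perm in sg.get("IpPermissions", []):
            world_open = perm.get("FromPort") == 22 and any(
                r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])
            )
            if world_open:
                print(f"Revoking world-open SSH on {sg['GroupId']}")
                # Revoke only the offending CIDR, leaving other ranges intact.
                ec2.revoke_security_group_ingress(
                    GroupId=sg["GroupId"],
                    IpPermissions=[{
                        "IpProtocol": perm["IpProtocol"],
                        "FromPort": perm["FromPort"],
                        "ToPort": perm["ToPort"],
                        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
                    }],
                )

if __name__ == "__main__":
    revoke_world_open_ssh()
```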
Fundamental information security control objectives are still requirements in cloud environments. What’s new is how these objectives can be achieved consistently, at scale, across distributed environments.
Well-implemented containerization and container orchestration platform protection for IaaS and PaaS environments is capable of solving these new challenges through efficient, effective, and consistent automation.
Capabilities Needed to Manage Information Security Risk in Orchestrated Container Environments
Forrester discusses and evaluates the capabilities needed to manage information security risk in orchestrated container environments.
In our experience, the capabilities should address the following.
Security for:
- Container images at-rest (stored in registries), and in-motion (moving through CI/CD pipelines)
- Container runtimes (e.g., Docker) running on hosts, pre-configured clusters (e.g., AWS Elastic Container Service or Elastic Kubernetes Service), or turnkey container- or runtime-as-a-service (e.g., Azure Container Instances or AWS Fargate)
- Container instances launched into runtimes
- Container orchestration platforms such as Kubernetes or Mesos
- Systems supporting containerized environments, such as image registry and artifact servers, code repositories, and Jenkins hosts.
Why We Believe CloudPassage Halo Achieved 5 of 5 in the Containerization and Container Orchestration Platform Protection Criterion
Our flagship solution is the CloudPassage Halo cloud security platform. Halo was purpose-built in 2010 to automate security and compliance for servers across public and hybrid cloud environments. Since that time, CloudPassage has invested heavily in the platform’s evolution to address new cloud technologies and their security needs. Halo now addresses security for server-based, containerized, and public cloud infrastructure environments including public, hybrid, and multi-cloud deployments.
Halo’s containerization and container orchestration platform protection capabilities are included in Halo Container Secure, one of the three primary services of the Halo platform. The capabilities of Halo Container Secure are our implementation of security and compliance automation for containerized environments. Halo received the highest possible score (5 out of 5) in this criterion.
Here’s how we built Halo to achieve a level of capability worthy, in our opinion, of this independent recognition.
Key Requirements That Halo Is Designed To Address
In 2010, only the earliest adopters of public cloud technologies realized just how different containerized environments are. Then and now, CloudPassage has had the privilege of working with some of the largest and most sophisticated public cloud enterprises in the world, and those experiences shaped our understanding of the key requirements for effective container security:
- Unified Capabilities – IaaS and PaaS services don’t exist in isolation. Modern application architectures now combine IaaS and PaaS services with server-based and containerization technology (some of which is delivered by the provider themselves). Looking at components in isolation limits context and slows analysis of the overall application environment. This makes the unification of data and management across various types of cloud infrastructure a critical requirement.
- Portability – The majority of successful digital enterprises use multiple IaaS and PaaS providers for availability, cost management, and prevention of vendor lock-in. Even within a single cloud provider, not all regions operate identically; federal and some international service regions are good examples. This makes portability of capabilities within and across cloud providers critical. API compatibility, data normalization, and common policy management are just a few of the portability issues that are important to a successful deployment.
- Scalability – The scale of cloud infrastructure typically changes both on a short-term basis (cloudbursting or autoscaling events) and in the long term (organic application growth, new applications, data center migration). Containerization and container orchestration platform protection capabilities must be able to quickly and automatically adapt to changes in infrastructure scale, in terms of both functional capacity and licensing.
- Automation – Changes are programmatically automated in cloud and DevOps environments. If security and compliance functions are not equally automated, they will be quickly outpaced by the infrastructure’s rate of change. Automation is needed to ensure that security instrumentation is “part of the build” and not something to be added later. Automation also ensures consistency and eliminates errors, both critical needs in highly dynamic and diverse cloud environments.
- Operational Integration – As previously discussed, aligning security and DevOps is a critical success factor that delivers mutual benefit and a stronger overall security posture. This requires that security functionality and intelligence are automatically delivered to system owner workflow tools (e.g., Jira, Slack, Jenkins). These needs are complex, especially in larger environments, making comprehensive REST APIs, data routing, and other operational integrations critical.
CloudPassage was guided by top cloud enterprises to build the Halo platform with these and similar cloud-specific requirements in mind. Below is how Halo implements containerization and container orchestration platform protection, including details on how the platform successfully addresses these vital needs.
How Halo Implements Containerization and Container Orchestration Platform Protection
From its inception, the innovations built into the Halo cloud security platform were designed to address the critical needs discussed above. These innovations are recognized by ten patents granted to CloudPassage between 2013 and 2019 that cover various aspects of the Halo technology.
Below are just a few of the design decisions and features that enable Halo’s unification, portability, scalability, automation, and operational integration for containerization and container orchestration protection:
- Docker host and Kubernetes node protection using a low-friction, low-impact microagent (2MB in memory)—the same agent used for server, Docker host, and Kubernetes node security—with all of the agent's features, such as automatic upgrade, no listening port to attack, patented agent and communication security, and an outbound-only "heartbeat" communication protocol
- Customizable "out-of-the-box" policy templates for Docker hosts and Kubernetes nodes that support common security and compliance standards such as PCI DSS, the CIS Benchmarks for Docker, HIPAA, and SOC 2 / SysTrust criteria
- Deep inspection and collection of container host metadata including raw Docker-inspect output, image source, identification of unknown or "rogue" containers, etc.
- Fast, scalable, fully automated security analytics capabilities that include tracking of initial issue appearance, automated detection of remediation, and issue regressions
- Detailed remediation advice for issues identified, presentation of raw assessment data for automation and inspection purposes, and instructions to manually verify findings if needed
- Bidirectional REST APIs and direct integration with queueing services like AWS SQS to enable operational automation and direct integration with other security and DevOps tools
- Operational features and integration tools to automate deployment, configuration, issue routing, email alerting, and bidirectional interaction with operational tools such as Jira and Slack
- RBAC and data access features to ensure system owners only interact with authorized systems
The list of capabilities above only addresses Halo Container Secure, the Halo platform service that implements containerization and container orchestration platform protection.
An exhaustive explanation of every innovation is outside the scope of this article. However, Halo’s innovations cover a much broader range of cloud-related issues, including assumed-hostile running environments, multitenancy, asset cloning, ephemeral workloads, agent security, and more.
Learn More About CloudPassage Halo
To learn more:
Come back and read our upcoming blogs on:
- Scalability of protected cloud instances and protected containers
- Centralized Agent framework plans
Related Posts
The post Containerization and Container Orchestration Platform Protection: Cloud Workload Security Part 3 appeared first on CloudPassage.
]]>
https://ift.tt/3cdyBEi
Thu, 26 Mar 2020 00:15:12 +0000
https://ift.tt/3b2q6g0
At CloudPassage we’re keenly aware of the disruption and stress being caused by the COVID-19 outbreak and related quarantine orders. We’re seeing impact across our ecosystem of customers, teams, and other stakeholders worldwide. Communication is critical in situations like this,…
The post CloudPassage Response to COVID-19 appeared first on CloudPassage.
]]>
At CloudPassage we’re keenly aware of the disruption and stress being caused by the COVID-19 outbreak and related quarantine orders. We’re seeing impact across our ecosystem of customers, teams, and other stakeholders worldwide. Communication is critical in situations like this, and this article shares how CloudPassage is responding.
CloudPassage’s COVID-19 response strategy is focused on two critical priorities:
- Ensuring the safety and well-being of our team, customers, and partners
- Ensuring our cloud security services operate without impact
CloudPassage has a standing pandemic plan that implements this strategy. Elements of that plan have been activated in response to team needs and in line with guidance from the World Health Organization and Centers for Disease Control and Prevention. These elements include eliminating all travel, postponing events, validating key partners' pandemic plans, and adopting a work-from-home strategy globally.
A significant portion of our workforce already operates remotely, so operational impact has been minimal. The CloudPassage team continues to deliver the level of support our customers need to protect their most critical cloud systems. Customers and partners should see no change in service or operations and can submit questions through the Halo Portal as always.
Our commitment to innovation and product improvement is also unimpacted. Several key container security features were released just this week, and the development of the Halo platform roadmap continues. We’re increasing our existing use of video and teleconference technologies whenever possible to continue to foster our strong culture of teamwork and collaboration.
With public gatherings limited to small groups or canceled entirely, we are developing plans to establish a series of live online events, both hosted and sponsored, to remain connected with our customers and partners.
We’re well aware that attackers aren’t stopping and business must continue in this environment (and in some cases, both are accelerating). We remain focused on supporting our customers while striving to assure team health and safety.
Communication during these times is critical. Please don’t hesitate to reach out to any CloudPassage representative or to covid-19@cloudpassage.com with questions or concerns.
Thank you, and good luck during these trying times.
Carson Sweet
Chief Executive Officer
CloudPassage
The post CloudPassage Response to COVID-19 appeared first on CloudPassage.
]]>
https://ift.tt/39rdYTZ
Thu, 05 Mar 2020 05:13:50 +0000
https://ift.tt/2ED9L5v
The “Forrester Wave™: Cloud Workload Security, Q4 2019” report published by leading global research and advisory firm Forrester, Inc. provides an excellent overview of the security challenges posed by the transition to cloud-based environments and discusses the cloud workload security…
The post API-level Connectivity and Control for IaaS and PaaS: Cloud Workload Security Part 2 appeared first on CloudPassage.
]]>
The “Forrester Wave: Cloud Workload Security, Q4 2019” report published by leading global research and advisory firm Forrester, Inc. provides an excellent overview of the security challenges posed by the transition to cloud-based environments and discusses the cloud workload security solutions best poised to address them. One important criterion is API-level Connectivity and Control for IaaS and PaaS.
Application infrastructure has always been complex. The "big bang" of cloud computing created an ever-expanding universe of new infrastructure services and resources available on-demand from IaaS and PaaS platforms like Amazon Web Services, Microsoft Azure, and Google Cloud Platform. When combined, this universe of resources represents a mind-numbing set of potential permutations. Cloud computing and DevOps also drive the speed and volume of changes to levels almost guaranteed to overwhelm traditional security approaches and technologies.
Achieving security visibility and control in these new environments are key needs discussed in the Forrester Wave and other research. Fulfilling these needs typically involves automation that leverages the cloud provider’s APIs to discover, assess, and monitor services and resources in IaaS environments. Forrester refers to this overall capability as “API-level connectivity and control for IaaS and PaaS”.
CloudPassage's solution is Halo, a cloud security platform purpose-built to automate security and compliance management across public and hybrid cloud environments. In "The Forrester Wave: Cloud Workload Security, Q4 2019" report, Halo received the highest possible score (5 out of 5) in the API-level connectivity and control for IaaS and PaaS criterion. This blog explores that criterion.
About API-level Connectivity and Control for IaaS and PaaS
In the Key Takeaways section of “The Forrester Wave: Cloud Workload Security, Q4 2019”, Forrester states the following:
“As on-premises security suites technology becomes outdated and less effective to provide comprehensive support for cloud workloads, improved broad coverage support for guest/host OS; API-level connectivity to the infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) platform; and container orchestration and runtime platforms will dictate which providers lead the pack.”
If API-level connectivity and control will be a defining trait of cloud security leaders, the implication seems clear—this capability is important to customers. The Forrester report states that cloud workload security customers should seek vendors that:
“Provide templatized API-level configuration management to IaaS and PaaS platforms. You can’t control Amazon Web Services (AWS), Azure, or Google Cloud Platform (GCP) using old school, on-premises CMDB tools. Instead, you want tight control over instance and storage creation and network connectivity. Best practices, vulnerability, and compliance templates (CIS, CVE, or HIPAA) built into and consistently updated by vendors for managing configurations are key differentiators in this area.”
Clearly this capability is important. But what exactly is “API-level connectivity and control”, why is it important, and what can I do with it?
What Is API-level Connectivity and Control for IaaS and PaaS?
API-level connectivity and control uses cloud provider APIs to automatically discover, inventory, assess, monitor, and control IaaS and PaaS environments. The scope of these features typically includes infrastructure resources and services in the IaaS/PaaS account, as well as the account itself.
This basic functionality must be able to handle the dynamic, diverse and distributed nature of cloud infrastructure. Just a few of the additional capabilities needed include customizable policy and rule templates, data normalization across IaaS/PaaS providers, easy integration with cloud provider environments, and scalability.
Many industry terms are synonymous with “API-level connectivity and control for IaaS and PaaS”. A few of these include:
- Cloud security posture management (CSPM)
- Cloud workload security assessment and monitoring
- Continuous cloud compliance monitoring
- Cloud infrastructure security
- IaaS security
Regardless of the name, the concept is deceptively simple:
- connect to a cloud provider API
- retrieve data points relevant to security and compliance
- evaluate that data against standards
But as always, the devil is in the details. Scalability issues, impact on API limits, cross-cloud portability, multi-cloud data normalization, and correlation with other security and compliance data are all problems that a successful solution must handle. Later in this blog, we’ll cover how Halo’s implementation tamed these issues well enough to achieve a 5 out of 5 score.
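Before moving on, here is a minimal sketch of that connect-retrieve-evaluate loop in Python with boto3: it connects to the AWS API, retrieves each S3 bucket's public-access-block configuration, and evaluates it against a simple standard (all four protections enabled). This illustrates the pattern only; it is not how any particular product, Halo included, implements it.

```python
import boto3
from botocore.exceptions import ClientError

def evaluate_public_access_blocks():
    """Flag S3 buckets that do not have all public-access protections enabled."""
    s3 = boto3.client("s3")
    findings = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            compliant = all(cfg.values())  # all four settings must be True
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                compliant = False          # no public access block configured at all
            else:
                raise
        if not compliant:
            findings.append(name)
    return findings

if __name__ == "__main__":
    for name in evaluate_public_access_blocks():
        print(f"Bucket without full public-access block: {name}")
```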
For now, let’s consider why these capabilities are important and what you can do with them.
Why API-level Connectivity and Control for IaaS and PaaS is Important
Application programming interfaces (APIs) have been a critical part of application stacks for decades, most often related to the software itself. Cloud computing has made APIs central to the successful adoption of DevOps, continuous delivery, and infrastructure automation. Infrastructure today is really just more code, quickly and easily iterated across huge numbers of resources.
This trend in cloud infrastructure makes API-level connectivity and control important capabilities for security and compliance. Here are some of the most important reasons why.
APIs Help Keep Up With Cloud Speed and Scale
API-driven speed and agility result in a massive increase in change velocity. Every change introduces the potential for harm, and those risks must be managed as changes occur. Without a way to keep up with the velocity of API-driven infrastructure, security and compliance practitioners are quickly overwhelmed and something will get missed.
Even the most meticulously hardened cloud environment will end up exposed by errors and oversights on the part of humans or weak automation tools. This is largely due to the sheer number of configuration settings, access vectors, and access control structures that must be constantly monitored. In fact, through 2025, 99 percent of cloud security failures will be the customer's fault, according to recent research from Gartner.
Without the right automation, the risk of making a mistake is amplified. This leaves us with a top reason that API-level connectivity and control for IaaS and PaaS is important: to extend the speed, scale, and consistency benefits of API-based automation to security and compliance.
APIs Help Security Align With DevOps To Achieve DevSecOps
DevOps is the new norm in how applications are developed, deployed, and operated. Smart security leaders are seeking ways to harmonize security with DevOps methods and processes in order to create similar scale and leverage.
API-based automation is a critical pillar at the center of any true DevOps shop. Workflows in a DevOps shop are driven by automation tools wired together with APIs, right down to the way that engineers communicate with one another. When a task is expected to be repeated, it’s automated on-the-spot. Changes are deployed when ready, typically without human intervention or review. These concepts are often foreign to security and compliance practitioners and may even seem to run counter to risk control objectives.
Collaboration with DevOps teams requires that security and compliance teams embrace “the DevOps way”, which in no small part means becoming API-driven. This is important to learning how to engage DevOps on their terms, achieving the speed and consistency benefits of DevOps-style automation, and even to ensure common situational awareness—if both teams leverage the same APIs, consistent awareness will be built-in.
Historically, security vendors have been remiss in providing users with rich APIs, making API-driven operations somewhat foreign to security teams. The emergence of purpose-built cloud security solutions is changing that by exposing API-driven capabilities to users. This is the very essence of API-level connectivity and control capabilities.
APIs Support Continuous Monitoring to Prevent The Worst
Unlike traditional data centers, cloud infrastructure environments are designed to be in a constant state of change. Compute, storage, networking, and other IaaS resources are continuously added, removed, and modified by automated tools. Resources can be copied or made into templates used to scale infrastructure during autoscaling events or simply to address general growth. These capabilities are powerful.
But such power doesn’t come without risk. Cloud resources are often cloned in-place, which means every exposure is cloned with them. Automation scripts are not always QA’ed or inspected, especially in the heat of an outage situation. One vulnerable image or poorly written update script can become “Typhoid Mary”, spreading deadly problems throughout the environment very quickly. In other words, the creation of new attackable surface areas and exposures without warning should be completely expected.
In a recently released white paper, CloudPassage shared the nastiest mistakes we’ve seen expose IaaS & PaaS environments. In summary, those exposures include:
- Easily hacked administrative credentials
- Exposed data assets
- Weak network access controls
- Unconstrained blast radius
- Poor event logging
The Gartner research mentioned above confirms our own experience—issues like these can be prevented. API-level connectivity and control for IaaS and PaaS is one of the keys to that prevention. That makes these capabilities an important part of your cloud security arsenal.
Use Cases for API-level Connectivity and Control for IaaS and PaaS
The simple ability to connect to an API and analyze the data found there is a far cry from automating a specific operational task at scale, across the environment. In our experience working with hundreds of companies on cloud security, the most critical question to ask may be “What can I do with it?”
These capabilities can address many use cases, too many to list. The most common use cases in which control objectives are achieved with API-level connectivity and control include:
- Continuous Asset Awareness – API-based discovery and inventory of IaaS services and resources—you can’t do any of the things below if you don’t know the assets exist
- Point-In-Time Security Assessment – assessment of public cloud account security, including the IaaS account itself and the security of the services and assets inside the account
- Continuous Security Monitoring – ongoing IaaS environment monitoring to detect and evaluate how changes and events impact security and compliance posture
- Compliance Auditing & Monitoring – point-in-time evaluation of compliance posture against a range of standards (a.k.a. pre-auditing) or continuous compliance monitoring to surface issues as they arise instead of “cleaning up” right before an audit
- Detect Indicators of Threat & Compromise – attackers will use cloud technology to their advantage, leaving cloud “versions” of rootkits and other malicious artifacts as part of attacks. With the right API-based automation, indicators of these situations can be quickly detected to accelerate prevention, isolation, containment, investigation and clean-up.
- Automated Issue Remediation – leveraging cloud provider APIs to implement automatic remediation for exposures and compliance flaws is extremely valuable but often overlooked. Capturing metadata from provider APIs enables system owners to automate the process of zeroing in on and remediating problems, creating fully automated remediation capabilities.
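Continuing the automated-remediation idea from the last bullet above, here is a hedged boto3 sketch that remediates the kind of exposure flagged by an assessment step like the one sketched earlier in this post: it applies a full public-access block to a bucket with a single provider API call. Whether remediation runs fully automatically or behind human approval is an organizational decision; the bucket name below is a placeholder.

```python
import boto3

def remediate_bucket(name):
    """Apply a full public-access block to a single S3 bucket."""
    s3 = boto3.client("s3")
    s3.put_public_access_block(
        Bucket=name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )

if __name__ == "__main__":
    # Example: remediate a bucket flagged by an earlier assessment step.
    remediate_bucket("example-exposed-bucket")  # placeholder bucket name
```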
Fundamental information security control objectives are still requirements in cloud environments. What’s new is how these objectives can be achieved consistently, at scale, across distributed environments. Well-implemented API-level connectivity and control for IaaS and PaaS environments is capable of solving these new challenges through efficient, effective, and consistent automation.
Why We Believe CloudPassage Halo Achieved 5 of 5 for API-level Connectivity and Control for IaaS and PaaS
CloudPassage’s solution is the Halo cloud security platform. Halo was purpose-built in 2010 to automate security and compliance management for servers across public and hybrid cloud environments. Since that time, CloudPassage has invested heavily in the platform’s evolution to address new cloud technologies and their security needs. Halo now addresses security for server-based, containerized, and public cloud infrastructure environments including public, hybrid, and multi-cloud deployments.
CloudPassage Halo received the highest score possible (5 out of 5) for seven criteria in The Forrester Wave: Cloud Workload Security report, including API-level connectivity and control for IaaS and PaaS. Halo’s public cloud infrastructure security capabilities are included in Halo Cloud Secure, one of the three major modules of the Halo platform. The capabilities of Halo Cloud Secure are our implementation of API-level connectivity and control for IaaS and PaaS.
Here’s how we built Halo to achieve, in our opinion, a level of capability worthy of this independent recognition.
Key Requirements That Halo Is Designed to Address
In 2010 only the earliest adopters of public cloud technologies realized just how different these environments really are. Then and now, CloudPassage has had the privilege of working with some of the largest and most sophisticated public cloud enterprises in the world to guide our building of the Halo platform for cloud-specific requirements. These experiences gave us a deep understanding of the key requirements for successful cloud security, including API-level connectivity and control for IaaS and PaaS. While other requirements certainly exist, some of the most critical include:
- Unified capabilities for IaaS, PaaS, servers, and containers – IaaS and PaaS services don’t exist in isolation. Modern application architectures now combine IaaS and PaaS services with server-based and containerization technology (some of which is delivered by the provider themselves). Looking at components in isolation limits context and slows analysis of the overall application environment. This makes unification of data and management across various types of cloud infrastructure a critical requirement.
- Portability across cloud providers – the majority of successful digital enterprises use multiple IaaS and PaaS providers for availability, cost management, and prevention of vendor lock-in. Even within a single cloud provider, not all regions operate identically; federal and some international service regions are good examples. This makes portability of API-based capabilities within and across cloud providers critical. API compatibility, data normalization, and common policy management are just a few of the portability issues that are important to a successful deployment.
- Scalability – the scale of cloud infrastructure typically changes both on a short-term basis (cloudbursting or autoscaling events) and in the long term (organic application growth, new applications, data center migration). API-level connectivity and control capabilities must be able to quickly and automatically adapt to changes in infrastructure scale, in terms of both functional capacity and licensing.
- Automation – changes are programmatically automated in cloud and DevOps environments. If security and compliance functions are not equally automated, they will be quickly outpaced by the infrastructure's rate of change. Automation is needed to ensure that security instrumentation is "part of the build" and not something to be added later. Automation also ensures consistency and eliminates errors, both critical needs in highly dynamic and diverse cloud environments.
- Operational Integration – as previously discussed, aligning security and DevOps is an important success factor that delivers mutual benefit and a stronger overall security posture. This requires that security functionality and intelligence are automatically delivered to system owner workflow tools (e.g., Jira, Slack, Jenkins). These needs are complex, especially in larger environments, making comprehensive REST APIs, data routing, and other operational integrations critical.
How Halo Implements API-Level Connectivity and Control
From its inception, the innovations built into the Halo cloud security platform were designed to address the critical needs discussed above. These innovations are recognized by ten patents granted to CloudPassage between 2013 and 2019 that cover various aspects of the Halo technology.
Here are just a few of the design decisions and features that enable Halo’s unification, portability, scalability, automation and operational integration for API-level connectivity and control:
- Use of existing cloud service provider API access constructs for easy, low-friction configuration, including delegated access to enable cross-account security management
- Customizable “out-of-the-box” policy templates supporting common security and compliance standards such as PCI DSS, CIS Benchmarks, HIPAA, and SOC 2 / SysTrust criteria
- Deep inspection and collection of all cloud resource metadata including raw resource-inspection output, user-defined resource tags, and platform metadata such as region, creation details, etc.
- Fast, scalable, fully automated security analytics capabilities that include tracking of initial issue appearance, automated detection of remediation, and issue regressions
- Normalized data model that presents disparate IaaS details in a common structure and format
- Detailed remediation advice for issues identified, presentation of raw assessment data for automation and inspection purposes, and instructions to manually verify findings if needed
- Bidirectional REST APIs and direct integration with queueing services like AWS SQS to enable operational automation and integration with other security and DevOps tools
- Operational features and integration tools to automate deployment, configuration, issue routing, email alerting, and bidirectional interaction with operational tools such as Jira and Slack
- RBAC and data access features to ensure system owners only interact with authorized systems
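As an example of the first bullet above, the "delegated access" construct on AWS typically means a cross-account IAM role assumed through STS. The sketch below is a minimal, hedged illustration of that pattern using boto3; the role ARN, external ID, and session name are placeholders, not CloudPassage Halo's actual configuration.

```python
# Minimal sketch of AWS delegated access: a security service in one account
# assumes a read-only audit role in a customer account via STS.
# The role ARN and external ID below are placeholders, not real Halo settings.
import boto3

AUDIT_ROLE_ARN = "arn:aws:iam::111122223333:role/SecurityAuditRole"  # placeholder
EXTERNAL_ID = "example-external-id"  # placeholder shared secret


def get_delegated_session(role_arn: str, external_id: str) -> boto3.Session:
    """Assume the cross-account role and return a session scoped to it."""
    sts = boto3.client("sts")
    credentials = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName="cspm-inventory-scan",
        ExternalId=external_id,
        DurationSeconds=3600,
    )["Credentials"]
    return boto3.Session(
        aws_access_key_id=credentials["AccessKeyId"],
        aws_secret_access_key=credentials["SecretAccessKey"],
        aws_session_token=credentials["SessionToken"],
    )


if __name__ == "__main__":
    session = get_delegated_session(AUDIT_ROLE_ARN, EXTERNAL_ID)
    ec2 = session.client("ec2", region_name="us-east-1")
    # Inventory example: list instance IDs and their tags in the target account.
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance.get("Tags", []))
```

Because only short-lived credentials are returned by the assume-role call, the security platform never needs to store long-lived customer keys, which is a large part of what keeps this configuration low-friction.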
The list of capabilities above only addresses Halo Cloud Secure, the Halo platform module that implements API-level connectivity and control.
An exhaustive explanation of every innovation is beyond the scope of this article, but Halo’s innovations cover a much broader range of cloud-related issues, including assumed-hostile running environments, multitenancy, asset cloning, ephemeral workloads, agent security, and more.
To Learn More
Download The Forrester Wave: Cloud Workload Security, Q4 2019.
Read more about CloudPassage Halo’s IaaS CSPM (Cloud Security Posture Management) capabilities.
Come back and read our upcoming blogs on other criteria for which CloudPassage received the highest scores possible in The Forrester Wave: Cloud Workload Security, Q4 2019 Report:
- Containerization and container orchestration platform protection
- Scalability: protected cloud instances and protected containers
- Centralized agent framework plans
Or subscribe to our blog by entering your email in the upper right corner of this page and don’t miss a thing.
The post API-level Connectivity and Control for IaaS and PaaS: Cloud Workload Security Part 2 appeared first on CloudPassage.
https://ift.tt/2GFcYPq
Fri, 31 Jan 2020 04:01:49 +0000
https://ift.tt/31zGUrC
An independent evaluation published by leading global research and advisory firm Forrester provides an excellent overview of the security challenges posed by the transition to cloud-based environments—and discusses the cloud workload security solutions best poised to address them. Why is…
The post Cloud Workload Security – Part 1: Introducing the Forrester Wave Report appeared first on CloudPassage.
https://ift.tt/38SrBez
Fri, 10 Jan 2020 08:00:00 +0000
https://ift.tt/3jmRHLJ
Monolithic applications are outdated. We are now solidly in a development revolution as rapid software development and deployment have become standard. Microservices and containers are key to enabling this new way of working driven by DevOps practices such as Continuous Integration…
The post Securing Kubernetes Master and Workers appeared first on CloudPassage.
Monolithic applications are outdated. We are now solidly in a development revolution as rapid software development and deployment have become standard. Microservices and containers are key to enabling this new way of working driven by DevOps practices such as Continuous Integration and Continuous Delivery. As a result, securing Kubernetes master and worker nodes has become critical.
Harnessing the Value of Microservices and Kubernetes
As we welcome 2020, we expect mass migration to microservices. By enabling you to structure an application into several modular services, microservices bring:
- Improvements to scale
- The ability to withstand high server loads
- Faster deployments
- Easy fault isolation
Microservices also offer the flexibility to use a wide mix of technologies and to organize autonomous, cross-functional teams. But as microservices grow in popularity, so does the attack surface, and they require a different approach to security.
Kubernetes is one of the fastest-growing container orchestration platforms used to implement microservices, with more than 50% market share. The idea behind the tool is to operate on containers, each of which runs a microservice: a small part of your application. Kubernetes itself ships with solid security foundations, but no one is safe from misconfiguration, which security leaders identified as one of the biggest threats in the public cloud in 2018.
For example, in 2018 hackers gained access to Tesla’s Kubernetes infrastructure and ran cryptocurrency miners on its cluster. So how do you secure a Kubernetes cluster?
CloudPassage Policy Templates Support Securing Kubernetes
Support for securing Kubernetes was released in CloudPassage Server Secure, enabling customers to evaluate the security posture of their Kubernetes infrastructure.
Users can now perform security assessment scans of Kubernetes master nodes and worker nodes using our two Kubernetes policy templates, which are based on the CIS Benchmark standard. The master node policy template has 73 security configuration assessment rules (e.g., “Ensure that the --anonymous-auth argument is set to false”), while the worker node policy template has 23 security configuration assessment rules (e.g., “Ensure that the --event-qps argument is set to 0”).
Figure 1. List of Master Rules
Kubernetes Security Scan Results
Let’s take a closer look at how our Kubernetes security support works. The scan results below are the output of a scan against a freshly installed, default Kubernetes master node.
As you can see, a default Kubernetes installation needs a lot of work to be completely secure. Many benchmark rules produce ‘fail’ results, which means the configuration needs hardening.
Figure 2. Master Fail Rules
Users can select any individual rule and review the ‘Description’ and ‘Rationale’ fields to understand the check. If required, they can perform manual tests using the steps in the ‘Audit’ section, and finally follow the guidance in the ‘Remediation’ section to secure their configuration. An example of one such rule is shown below:
Rule Details for: Ensure that the --anonymous-auth argument is set to false
Figure 3. Rule Details
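For readers who want to see what the ‘Audit’ step of this rule boils down to, here is a minimal, hedged sketch that checks a kubeadm-style kube-apiserver static pod manifest for the --anonymous-auth flag. The manifest path is the common kubeadm default and the pass/fail logic is a simplification, not CloudPassage’s actual rule engine.

```python
# Minimal sketch of a CIS-style check: verify the kube-apiserver is started
# with --anonymous-auth=false. Not CloudPassage's actual rule engine; the
# manifest path is the kubeadm default and may differ on other distributions.
from pathlib import Path

APISERVER_MANIFEST = Path("/etc/kubernetes/manifests/kube-apiserver.yaml")


def check_anonymous_auth(manifest_path: Path = APISERVER_MANIFEST) -> str:
    """Return 'pass', 'fail', or 'unknown' for the --anonymous-auth check."""
    if not manifest_path.exists():
        return "unknown"  # not a master node, or a non-kubeadm layout
    text = manifest_path.read_text()
    if "--anonymous-auth=false" in text:
        return "pass"
    # Flag absent or explicitly set to true: anonymous requests are allowed.
    return "fail"


if __name__ == "__main__":
    print(f"Ensure --anonymous-auth is set to false: {check_anonymous_auth()}")
```

Run on a master node, this prints pass, fail, or unknown for this one check; the Halo policy template evaluates 73 such rules automatically.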
Similar results are seen when scanning the worker node. The CSM scan produces many ‘fail’ results, as seen below, which implies that these settings need to be hardened to secure the configuration.
Figure 4. Worker Fail Rules
Configuration Checks of Note
Some of my favorite configuration checks for securing Kubernetes in this policy template are listed below with links to find more information:
Securing Container Runtimes
In addition, Kubernetes can be configured to use different container runtimes, with Docker being a popular choice in most cases. This means that users should harden Docker as well as Kubernetes, which can be done using the CloudPassage Docker policy template that evaluates the security posture of a Docker configuration; a minimal sketch of this kind of check appears after the list below. Users should also keep in mind that all of these applications run on an operating system, which itself has many attack vectors.
- Users managing Kubernetes master and worker nodes themselves should use the CloudPassage OS policy templates for a security evaluation.
- Users running their workers in a public cloud (like AWS or Azure) should use CloudPassage Cloud Secure to evaluate the security posture of their cloud infrastructure.
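As referenced above, here is a minimal sketch of the kind of Docker daemon check a hardening template performs, inspired by common CIS Docker Benchmark recommendations. The specific settings and the daemon.json path are illustrative assumptions, not the contents of the actual CloudPassage Docker template.

```python
# Minimal sketch of a Docker daemon hardening check inspired by the CIS
# Docker Benchmark. The settings checked and the daemon.json path are
# illustrative assumptions, not the actual CloudPassage Docker template.
import json
from pathlib import Path

DAEMON_CONFIG = Path("/etc/docker/daemon.json")

# Hardening expectations: key -> value commonly recommended for dockerd.
EXPECTED = {
    "icc": False,               # disable inter-container communication by default
    "live-restore": True,       # keep containers running if the daemon stops
    "no-new-privileges": True,  # block privilege escalation inside containers
}


def audit_daemon_config(path: Path = DAEMON_CONFIG) -> dict:
    """Compare daemon.json against the expected hardening settings."""
    config = json.loads(path.read_text()) if path.exists() else {}
    return {
        key: ("pass" if config.get(key) == expected else "fail")
        for key, expected in EXPECTED.items()
    }


if __name__ == "__main__":
    for key, result in audit_daemon_config().items():
        print(f"{key}: {result}")
```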
Conclusion
To conclude, Kubernetes and microservices are great infrastructure choices. However, when it comes to securing Kubernetes, users should assess not only Kubernetes itself but also container runtimes, operating systems, and cloud infrastructure to get a complete end-to-end view of their security posture.
Learn how the CloudPassage Halo cloud security platform for servers can help secure the servers in your Kubernetes infrastructure.
Learn more about how the CloudPassage Halo platform helps with container security.
Get a free vulnerability assessment of your infrastructure in 30 minutes.
The post Securing Kubernetes Master and Workers appeared first on CloudPassage.