
Author Archives: Amit Gupta

3 observability best practices for improved security in cloud-native applications

Why is observability important for better security?

Observability, especially in the context of cloud-native applications, is important for several reasons. First and foremost is security. By design, cloud-native applications rely on multiple, dynamic, distributed, and highly ephemeral components or microservices, with each microservice operating and scaling independently to deliver the application's functionality. In this type of microservices-based architecture, observability and metrics provide security insights that enable teams to identify and mitigate zero-day threats by detecting anomalies in microservice metrics, such as traffic flow, process calls, syscalls, and more. Using machine learning (ML) and heuristic analysis, security teams can identify abnormal behavior and issue alerts.
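
As a simple illustration of the heuristic side of this, the sketch below (Python; the metric values, service name, and threshold are hypothetical) flags a microservice whose current connection rate deviates sharply from its recent baseline. A real deployment would feed metrics from your observability pipeline and tune the threshold.

```python
from statistics import mean, stdev

def is_anomalous(history, current, z_threshold=3.0):
    """Flag a metric value that deviates sharply from its recent baseline.

    history: recent per-minute samples (e.g. connection counts) for one service
    current: the latest sample
    """
    if len(history) < 10:          # not enough data to establish a baseline
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold

# Hypothetical example: flows/minute observed for a "payments" microservice
baseline = [120, 118, 125, 130, 122, 119, 127, 124, 121, 126]
print(is_anomalous(baseline, 123))   # False: within normal range
print(is_anomalous(baseline, 950))   # True: possible scan or exfiltration
```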

Observability also enables security teams to visualize the blast radius in the event of a breach. Using this information, teams can apply mitigating controls, such as security policy updates, to isolate the breached microservice and thereby limit exposure.
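
To make that mitigation step concrete, here is a hedged sketch using the official Python Kubernetes client: it applies a deny-all NetworkPolicy to quarantine pods carrying a hypothetical app=payments label once a breach is suspected. The namespace, policy name, and labels are illustrative.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

# Deny all ingress and egress for the (hypothetical) breached workload's pods.
quarantine = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="quarantine-payments", namespace="prod"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "payments"}),
        policy_types=["Ingress", "Egress"],  # with no rules listed, both default to deny
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="prod", body=quarantine
)
```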

And finally, observability helps DevOps teams maintain the quality of service by identifying service failure and performance hotspots, and conducting a detailed investigation with capabilities such as packet capture and distributed tracing.

Observability challenges

DevOps and SRE teams today are being overwhelmed by an enormous amount of data from multiple, disparate systems that monitor infrastructure and Continue reading

What is platform engineering and when should you invest in it?

As application platforms grow larger, the DevOps model in which developers not only support the software development lifecycle but also manage infrastructure and the platform is beginning to reach the limits of what these teams can support. Rather than taking their best application developers and putting them to work on infrastructure problems, more organizations are concluding that a centralized platform team specialized in that area is a better use of their developers’ skill sets. But what exactly is a platform engineering team, and how is it different from a DevOps team? Should your organization invest in platform engineering? Let’s take a closer look.

Platform engineering: What is it and how did it come about?

Platform engineering is essentially building (selecting and standardizing on), operating, and managing the infrastructure that supports 1st- and 3rd-party applications. In the days before cloud-native application development, a central team typically provided compute infrastructure for enterprise developers to build and host their applications. At a certain point, those developers moved to a microservices-based architecture. They no longer just needed virtual machines or servers where they could run their applications; they were building those applications in a containerized form factor, Continue reading

Overcoming security gaps with active vulnerability management

Organizations can reduce security risks in containerized applications by actively managing vulnerabilities: scanning images, automating image deployment, tracking runtime risk, and deploying mitigating controls.

Kubernetes and containers have become de facto standards for cloud-native application development due to their ability to accelerate the pace of innovation and codify best practices for production deployments, but such acceleration can introduce risk if not operationalized properly.

In the architecture of containerized applications, it is important to understand that containers are highly dynamic and distributed across cloud environments. Some are ephemeral and short-lived, while others are longer-lived. Traditional approaches to securing applications do not apply to cloud-native workloads, because most container deployments happen automatically through an orchestrator that removes the manual steps from the process. This automatic deployment requires that images be continuously scanned to identify vulnerabilities at development time in order to mitigate the risk of exploitation at runtime.
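
As one way to operationalize that continuous scanning, the hedged Python sketch below shells out to an open-source scanner (Trivy is used here purely as an example) and fails the pipeline when high-severity findings exist. The image name and severity gate are illustrative assumptions.

```python
import subprocess
import sys

IMAGE = "registry.example.com/payments:1.4.2"   # hypothetical image

# Ask the scanner to exit non-zero if HIGH or CRITICAL vulnerabilities are found.
result = subprocess.run(
    ["trivy", "image", "--severity", "HIGH,CRITICAL", "--exit-code", "1", IMAGE],
    capture_output=True,
    text=True,
)

print(result.stdout)
if result.returncode != 0:
    print("High-severity vulnerabilities found; blocking deployment.", file=sys.stderr)
    sys.exit(1)
```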

In addition to these challenges, the software supply chain adds complexity to vulnerability scanning and remediation. Applications increasingly depend on containers and components from third-party vendors and projects. As a result, it can take weeks or longer to patch the affected components and release new software Continue reading


4 ways to leverage existing kernel security features to set up process monitoring

Kubernetes’ default pod provisioning has a large attack surface that leaves workloads susceptible to critical security threats, including malicious exploits and container breakouts. I believe one of the most effective workload runtime security measures to prevent such exploits is layer-by-layer process monitoring within the container.

It may sound like a daunting task that requires additional resources, but in reality it is quite the opposite. In this article, I will walk you through how to use existing Linux kernel security features to implement layer-by-layer process monitoring and prevent threats.

Threat prevention and process monitoring

Containerized workloads in Kubernetes are composed of numerous layers. An effective runtime security strategy takes each layer into consideration and monitors the processes running within each container, a practice known as process monitoring.

Threat detection in process monitoring involves integrating mechanisms that isolate workloads or control access. With these controls in place, you can effectively prevent malicious behavior, reduce your workload’s attack surface, and limit the blast radius of security incidents. Fortunately, we can use existing Kubernetes mechanisms and leverage Linux defenses to achieve this.
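
As an example of leaning on those built-in Linux defenses from within Kubernetes, the hedged sketch below (Python Kubernetes client; the pod, namespace, and image names are hypothetical) provisions a pod that drops all Linux capabilities, applies the runtime's default seccomp profile, and blocks privilege escalation.

```python
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="hardened-demo", namespace="default"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="app",
                image="registry.example.com/app:1.0",   # hypothetical image
                security_context=client.V1SecurityContext(
                    allow_privilege_escalation=False,
                    read_only_root_filesystem=True,
                    capabilities=client.V1Capabilities(drop=["ALL"]),
                    seccomp_profile=client.V1SeccompProfile(type="RuntimeDefault"),
                ),
            )
        ]
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```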

Kernel security features

By pulling Linux defenses closer to the container, we can leverage existing Kubernetes mechanisms to monitor processes and reduce Continue reading

WAF is woefully insufficient in today’s container-based applications: Here’s why

According to the Cloud Security Alliance, the average large enterprise has 946 custom applications deployed. Traditionally, organizations deployed Web Application Firewalls (WAFs) at the perimeter to protect these applications against external attacks; a WAF provides visibility into, and enforces security controls on, the external traffic that passes through it.

However, WAF-secured container-based applications have a high likelihood of being breached, as the concept of a perimeter does not exist in these architectures. A new approach is needed to address both external threats and threats from lateral movement inside the cluster. In a world where successful exploits may be inevitable, relying on a perimeter WAF for application security leaves your entire environment vulnerable unless adequate security tools and policies are implemented at the workload level.
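
For contrast with a perimeter WAF, here is a hedged sketch of what workload-level control can look like: a default-deny ingress policy for a namespace, with a single explicit allow from a hypothetical frontend to a backend on port 8080 (Python Kubernetes client; the namespace, labels, and port are illustrative).

```python
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

# 1. Default-deny: no pod in the namespace accepts ingress unless explicitly allowed.
default_deny = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "default-deny-ingress", "namespace": "shop"},
    "spec": {"podSelector": {}, "policyTypes": ["Ingress"]},
}

# 2. Explicit allow: only the frontend may reach the backend, and only on port 8080.
allow_frontend = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "allow-frontend-to-backend", "namespace": "shop"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "backend"}},
        "ingress": [{
            "from": [{"podSelector": {"matchLabels": {"app": "frontend"}}}],
            "ports": [{"protocol": "TCP", "port": 8080}],
        }],
    },
}

for policy in (default_deny, allow_frontend):
    net.create_namespaced_network_policy(namespace="shop", body=policy)
```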

WAF’s weak security

Security techniques for traditional container-based application architectures are analogous to medieval castles, where everything important to running an application is consolidated within castle walls. In this analogy, WAF played the role of the wall and gate, only letting in friendly traffic.

WAF provides additional capabilities in these traditional architectures. It actively parses traffic to distinguish valid requests from threats and provides alerts when it logs suspicious requests. These alerts keep the security team apprised of threats Continue reading

Kubernetes secrets management: 3 approaches and 9 best practices

Secrets, such as usernames, passwords, API tokens, and TLS certificates, contain confidential data that can be used to authenticate and authorize users, groups, or entities. As the name implies, secrets are not meant to be known or seen by others. So how do we keep them safe?

The key to keeping secrets safe lies within how you manage them. Where to store secrets, how to retrieve them, and how to make them available in an application as needed are all early design choices a developer must make when migrating an application or microservice to Kubernetes. Part of this design choice is to ensure the secrets can become available without compromising the application’s security posture.

In this article, I will provide approaches and recommended best practices for managing secrets in Kubernetes.

How to approach secrets management in Kubernetes

Let’s start with some approaches. Below are three approaches I recommend for Kubernetes secrets management.

etcd

etcd is a supported datastore in Kubernetes, and many developers opt to store secrets in etcd as Base64-encoded key-value pairs. Secrets stored in etcd can be made available from within Kubernetes deployment specs as an environment variable, which is stored in Continue reading
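
As a minimal sketch of that approach (Python Kubernetes client; the secret name, namespace, and values are hypothetical), this creates a Secret whose values are Base64-encoded before being written to etcd, which a Deployment can then reference as environment variables.

```python
import base64
from kubernetes import client, config

config.load_kube_config()

def b64(value: str) -> str:
    # Kubernetes stores Secret "data" values Base64-encoded (encoded, not encrypted).
    return base64.b64encode(value.encode()).decode()

secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="db-credentials", namespace="prod"),
    type="Opaque",
    data={"username": b64("app_user"), "password": b64("s3cr3t-example")},
)

client.CoreV1Api().create_namespaced_secret(namespace="prod", body=secret)

# A Deployment can then expose a key as an environment variable, e.g. via
# env[].valueFrom.secretKeyRef: {name: db-credentials, key: password}
```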

Process monitoring: How you can detect malicious behavior in your containers

The default pod provisioning mechanism in Kubernetes has a substantial attack surface, making it susceptible to malevolent exploits and container breakouts. To achieve effective runtime security, your containerized workloads in Kubernetes require multi-layer process monitoring within the container.

In this article, I will introduce you to process monitoring and guide you through a Kubernetes-native approach that will help you enforce runtime security controls and detect unauthorized access of host resources.

What is process monitoring?

When you run a containerized workload in Kubernetes, several layers should be taken into account when you begin monitoring the processes within a container. These include container process logs and artifacts, Kubernetes and cloud infrastructure artifacts, filesystem access, network connections, required system calls, and kernel permissions (for specialized workloads). Your security posture depends on how effectively your solutions can correlate disparate log sources and metadata from these various layers. Without effective workload runtime security in place, your Kubernetes workloads, which have a large attack surface, can easily be exploited by adversaries and suffer container breakouts.
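
As a toy illustration of the innermost layer (container process monitoring), the hedged Python sketch below walks /proc inside a container and flags any process whose command name is not on a hypothetical allowlist; production tooling would correlate this signal with the other layers listed above.

```python
import os

# Hypothetical allowlist: the only binaries this container is expected to run.
EXPECTED = {"python3", "gunicorn"}

def running_processes():
    """Yield (pid, command name) for every process visible in /proc."""
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/comm") as f:
                yield int(entry), f.read().strip()
        except (FileNotFoundError, PermissionError):
            continue  # process exited or is not readable

for pid, comm in running_processes():
    if comm not in EXPECTED:
        print(f"ALERT: unexpected process '{comm}' (pid {pid}) in container")
```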

Traditional monitoring systems

Before I dive into the details on how to monitor your processes and detect malicious activities within your container platform, let us first take a look at some of Continue reading

Vulnerability management: 3 best practices and tips for image building and scanning

As enterprises adopt containers, microservices, and Kubernetes for cloud-native applications, vulnerability management is crucial to improve the security posture of containerized workloads throughout build, deploy, and runtime. Securing your build artifacts and deployment pipeline, especially when it comes to images, is extremely important. By following best practices for image building and scanning throughout the application development and deployment process, you can help ensure the security of the containers and workloads in your environment.

Let’s look at some of the nuances of choosing a base image, hardening your container image, and container image scanning, including tips on choosing an appropriate scanning solution and tackling privacy concerns.

Choose an appropriate base image

It’s important to choose a base image that reduces the attack surface of your container. I recommend using a distroless or scratch image, because such images contain only the application and its runtime dependencies. Both types of images improve your security posture by reducing the attack surface and exposure to vulnerabilities.

If for some reason you can’t use a distroless or scratch image, choose a minimal distro. Modern immutable Linux distributions, such as Bottlerocket and Flatcar Container Linux, can be used as base images for containers, as can minimal versions Continue reading

Troubleshooting microservices: Challenges and best practices

When people hear ‘microservices’ they often think about Kubernetes, which is a declarative container orchestrator. Because of its declarative nature, Kubernetes treats microservices as entities, which presents some challenges when it comes to troubleshooting. Let’s take a look at why troubleshooting microservices in a Kubernetes environment can be challenging, and some best practices for getting it right.

Why is troubleshooting microservices challenging?

To understand why troubleshooting microservices can be challenging, let’s look at an example. If you have an application in Kubernetes, you can deploy it as a pod and leverage Kubernetes to scale it. The entity you monitor is a pod. With microservices, you shouldn’t monitor pods; instead, you should monitor services. So you can have a monolithic workload (a single container deployed as a pod) and monitor it, but if you have a service made up of several different pods, you need to understand the interactions between those pods to understand how the service is behaving. If you don’t do that, what you think is an event might not really be an event (i.e., it might not be material to the functioning of the service).
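
To illustrate the difference, the hedged sketch below (Python Kubernetes client; the service and namespace names are hypothetical) resolves a Service to the pods behind it via its Endpoints and aggregates their restart counts into a single service-level signal, rather than judging any one pod in isolation.

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

SERVICE, NAMESPACE = "checkout", "shop"   # hypothetical service

# 1. Find the pods currently backing the service.
endpoints = v1.read_namespaced_endpoints(SERVICE, NAMESPACE)
pod_names = {
    address.target_ref.name
    for subset in (endpoints.subsets or [])
    for address in (subset.addresses or [])
    if address.target_ref and address.target_ref.kind == "Pod"
}

# 2. Aggregate a service-level signal instead of monitoring a single pod.
total_restarts = 0
for name in pod_names:
    pod = v1.read_namespaced_pod(name, NAMESPACE)
    total_restarts += sum(cs.restart_count for cs in (pod.status.container_statuses or []))

print(f"{SERVICE}: {len(pod_names)} backing pods, {total_restarts} container restarts")
```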

When it comes to monitoring microservices, you need to monitor at Continue reading

Kubernetes observability challenges in cloud-native architecture

Kubernetes is the de facto platform for orchestrating containerized workloads and microservices, which are the building blocks of cloud-native applications. Kubernetes workloads are highly dynamic and ephemeral, and are deployed on distributed, agile infrastructure. Although the benefits of cloud-native applications managed by Kubernetes are many, Kubernetes presents a new set of observability challenges in cloud-native applications.

Let’s consider some observability challenges:

  • Data silos – Traditional monitoring tools specialize in collecting metrics at the application and infrastructure level. Given the highly dynamic, distributed, and ephemeral nature of cloud-native applications, this style of metrics collection creates data in silos that need to be stitched together in the context of a service in order to enable DevOps and SREs to debug service issues (e.g. slow response time, downtime, etc.). Further, if DevOps or service owners add new metrics for observation, data silos can cause broken cross-references and data misinterpretation, leading to data misalignment, slower communication, and incorrect analysis.
  • Data volume and granular components – Kubernetes deployments have granular components such as pods, containers, and microservices that are running on top of distributed and ephemeral infrastructure. An incredibly high volume of granular data is generated at each layer as alerts, logs, and Continue reading

Industry-First Pay-as-you-go SaaS Platform for Kubernetes Security and Observability

We are excited to introduce Calico Cloud, a pay-as-you-go SaaS platform for Kubernetes security and observability. With Calico Cloud, users only pay for services consumed and are billed monthly, getting immediate value without upfront investment.

Introduction

Calico Cloud gives DevOps, DevSecOps, and Site Reliability Engineering (SRE) teams a single pane of glass across multi-cluster and multi-cloud Kubernetes environments to deploy a standard set of egress access controls, enforce security policies, ensure compliance, get end-to-end visibility, and troubleshoot applications. Calico Cloud is Kubernetes-native and provides native extensions to enable security and observability as code for easy and consistent enforcement across Kubernetes distributions, multi-cloud and hybrid environments. It scales automatically with the managed clusters according to the user requirements to ensure uninterrupted real-time visibility at any scale.

Security and Observability Challenges

  • North-South Controls: Microservices often need to communicate with services or API endpoints running outside the Kubernetes cluster. Implementing access control from Kubernetes pods to external endpoints is hard. Most traditional and cloud-provider firewalls do not understand Kubernetes context, which forces the ops team to allow traffic from the entire cluster or a set of worker nodes.
  • East-West Controls: Even after implementing effective perimeter-based north-south controls, organizations face challenges to Continue reading

Now Available: Calico for Windows on Red Hat OpenShift Container Platform

Approximately one year ago, Kubernetes 1.14 made support of Windows containers running on Microsoft Windows Server nodes generally available. This was a declaration that Windows node support was stable, well-tested, and ready for adoption, meaning the vast ecosystem of Windows-based applications could be deployed on the platform.

Collaborating with Microsoft, Tigera leveraged the new Windows platform capabilities to create Calico for Windows, the industry’s first cross-platform Kubernetes solution to manage networking and network policy for Kubernetes deployments on Windows and Linux.

We are excited to announce that Calico for Windows now supports the latest Windows Dev Preview on the Red Hat OpenShift Container Platform (OCP). Built on Red Hat Enterprise Linux and Kubernetes, OCP v4 provides developers and IT organizations with a hybrid and multi-cloud application platform for deploying both new and existing applications on scalable resources, with minimal configuration and management overhead. OCP enables organizations to meet security, privacy, compliance, and governance requirements.

Calico for Windows is the only Kubernetes networking solution for teams using Windows on OpenShift. The combination of Calico for Windows and Red Hat OpenShift Container Platform represents a major leap forward in productivity for organizations that are deploying Windows on Kubernetes. DevOps teams Continue reading

Tigera Joins the Fortinet Fabric-Ready Program and Partners with Fortinet to Secure Kubernetes Environments

We are proud to partner with Fortinet and join their Fabric-Ready Technology Alliance Partner program. With this partnership, Fortinet customers will be able to extend their network security architecture to their Kubernetes environments.

Our partnership was driven by interest from Fortinet’s customers in protecting their Kubernetes-based infrastructure. Kubernetes adoption is spreading like wildfire, and nearly every enterprise on the planet is at some stage of its Kubernetes journey.

The Tigera and Fortinet joint solution will support all cloud-based and on-premises Kubernetes environments. With this architecture, Tigera Secure will map security policies from FortiManager into each Kubernetes cluster in the cloud or on-premises. The joint solution will enable Fortinet customers to enforce network security policies for traffic into and out of the Kubernetes cluster (North/South traffic) as well as traffic between pods within the cluster (East/West traffic).

Tigera Secure will also integrate with threat feeds from FortiGuard to detect and block any malicious activity inside the clusters. Tigera will monitor the cluster traffic and send these events to FortiSIEM, enabling the security operations team to quickly diagnose the situation.

If you are attending Microsoft Ignite, join us at our respective booths to learn more about our solution (Fortinet Booth #519 Continue reading