Author Archives: Dhiraj Sehgal

What’s new in Calico Enterprise 3.9: Live troubleshooting and resource-efficient application-level observability

We are excited to announce Calico Enterprise 3.9, which gives organizations faster and simpler live troubleshooting using Dynamic Packet Capture, while meeting regulatory and compliance requirements for access to the underlying data. The release makes application-level observability resource-efficient, less intrusive from a security standpoint, and easier to manage. It also adds pod-to-pod encryption for Microsoft AKS and for AWS EKS with the AWS CNI.
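
In open-source Calico, WireGuard-based pod-to-pod encryption is switched on through the FelixConfiguration resource, and that same knob is the likely starting point here. The snippet below is a minimal sketch rather than release-specific guidance; consult the 3.9 documentation for AKS- and EKS-specific steps.

```yaml
# Minimal sketch: enable WireGuard-based pod-to-pod encryption by setting
# wireguardEnabled on the cluster's default FelixConfiguration.
# Assumes WireGuard support on the nodes; AKS/EKS specifics may differ.
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  wireguardEnabled: true
```

The same change can be applied imperatively with `calicoctl patch felixconfiguration default --type='merge' -p '{"spec":{"wireguardEnabled":true}}'`.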


Live troubleshooting

Enterprises that want to perform live troubleshooting in their production environments face the following challenges when running packet capture at organizational scale:

  • It is difficult to limit access to packet capture by organizational role
  • Setting up packet capture takes hours to days instead of being part of the code
  • It is extremely difficult to capture just the right amount of data to keep storage and compute costs down
  • Teams spend days or weeks correlating the data collected from different Kubernetes components such as namespaces, workloads, pods, and microservices

With Dynamic Packet Capture, organizations can enable DevOps teams, SREs, and service owners to collect the data they need, when they need it. They can filter the data by protocol and port to fine-tune captures for faster debugging and subsequent analysis, shortening time to resolution. With just-in-time data collection and built-in smart correlation, Continue reading
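
To make the protocol-and-port filtering concrete, here is a minimal sketch modeled on the Calico Enterprise PacketCapture resource; the name, namespace, and selector are hypothetical placeholders.

```yaml
# Minimal sketch: capture only TCP traffic on port 80 from pods matching
# the selector, scoped to the (hypothetical) "storefront" namespace.
apiVersion: projectcalico.org/v3
kind: PacketCapture
metadata:
  name: nginx-http-capture
  namespace: storefront
spec:
  selector: k8s-app == "nginx"
  filters:
    - protocol: TCP
      ports:
        - 80
```

Creating the resource starts capturing on workload endpoints that match the selector, and deleting it stops the capture. Because PacketCapture is a namespaced resource, standard Kubernetes RBAC can govern who may create or read captures, which speaks to the access-control challenge listed above.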

Observe & Troubleshoot Your Kubernetes Environments with Dynamic Service Graph

Kubernetes workloads are highly dynamic and ephemeral, and are deployed on distributed, agile infrastructure. Application developers, DevOps teams, and site reliability engineers (SREs) often need better visibility into their microservices: what their dependencies are, how they are interconnected, and which other clients and applications access them. This makes the observability challenges in Kubernetes unique. While Kubernetes meets the needs of deploying and managing distributed applications, its observability challenges require a Kubernetes-native approach.

Traditional monitoring and observability solutions create data silos by collecting data at different levels (e.g. infrastructure, cluster, and application), or from a large number of ephemeral objects that generate data across a distributed environment. They then stitch this data together to provide a near real-time snapshot view. This approach does not scale given the high volume of granular data generated at each level and Kubernetes' distributed nature. It also becomes expensive to run traditional monitoring solutions, as they demand more resources (high-performance memory, more compute, and higher bandwidth).

In contrast, a Kubernetes-native observability solution can visualize all information with all relationship context intact and provide a high-fidelity view of the environment. This Continue reading
