In Kubernetes, the Domain Name System (DNS) plays a crucial role in service discovery, enabling pods to locate and communicate with other services within the cluster. This function is essential for managing the dynamic nature of Kubernetes environments and ensuring that applications operate seamlessly. For organizations migrating their workloads to Kubernetes, it is also important to establish connectivity with services outside the cluster; DNS is therefore also used to resolve external service names to their corresponding IP addresses. DNS functionality in Kubernetes is typically implemented by a set of CoreDNS pods exposed as a service called kube-dns. The DNS resolvers for workload pods are automatically configured to forward queries to the kube-dns service.
The output below shows the kube-dns service in a Kubernetes cluster.
kubectl get service kube-dns -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)
kube-dns   ClusterIP   10.0.0.10    <none>        53/UDP,53/TCP
The CoreDNS pods rely on external DNS servers to perform name resolution for services outside the cluster. By default, they are configured to forward such DNS queries to the DNS servers listed in the underlying host's /etc/resolv.conf file. The output below displays Continue reading
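To illustrate how that forwarding is wired up, the sketch below shows a typical CoreDNS ConfigMap with the forward plugin pointing at the node's /etc/resolv.conf. The exact Corefile in any given cluster will differ, so treat this as illustrative rather than a drop-in configuration.

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        # Answer queries for cluster-internal names (Services and Pods).
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        # Anything CoreDNS cannot answer itself is forwarded to the
        # DNS servers listed in the node's /etc/resolv.conf.
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
    }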
Today, cloud platform engineers face new challenges when running cloud native applications. These applications are designed, deployed, maintained, and monitored differently from the traditional monolithic applications they are used to working with.
Cloud native applications are designed and built to exploit the scale, elasticity, resiliency, and flexibility the cloud provides. They are composed of microservices that run in containers within a Kubernetes cluster and communicate with one another. It can quickly become overwhelming for any cloud engineer to understand and visualize such an environment.
Visualizing Kubernetes network traffic and service dependencies is challenging because of the dynamic and distributed nature of Kubernetes environments. Frequent scaling of pods, creation and deletion of services, and changes in network connections make it difficult to capture an accurate, up-to-date picture of network traffic and service dependencies.
Additionally, the complexity of Kubernetes networking, involving multiple components such as pods, services, and network policies, further complicates the visualization of network traffic flow and understanding of service dependencies. The use of microservices architecture in Kubernetes, where applications consist of interconnected services, adds to the complexity, particularly as the number of services grows.
Moreover, the scalability Continue reading
We are excited to announce the release of Calico v3.26! This latest milestone brings a range of enhancements and new features to the Calico ecosystem, delivering an optimized and secure networking solution. This release has a strong emphasis on product performance, with strengthened security measures, expanded compatibility with Windows Server 2022 and OpenStack Yoga, and notable improvements to the Calico eBPF dataplane.
As always, let’s begin by thanking our awesome community members who helped us in this release.
Big thanks to our GitHub users afshin-deriv, blue-troy, and winstonu for their valuable contributions in enhancing the Kind installation and VXLAN documentation, as well as improving the code comments.
Additionally, we would like to extend our appreciation to laibe and yankay for their efforts in updating the flannel version and improving the IPtables detection mechanism. Their contributions have been instrumental in improving the overall functionality and reliability of our project.
Finally, a huge thank-you to dilyevsky, detailyang, mayurjadhavibm, and olljanat for going above and beyond in pushing Calico beyond its original scope and for generously sharing their solutions with the rest of the community.
The primary responsibility Continue reading
Kubernetes has become the de facto standard for container orchestration, providing a powerful platform for deploying and managing containerized applications at scale. As more organizations adopt Kubernetes for their production workloads, ensuring the security and privacy of data in transit has become increasingly critical. Encrypting traffic within a Kubernetes cluster is one of the most effective layers in a multi-layered defense for protecting sensitive data from interception and unauthorized access. Here, we will explore why encrypting traffic in Kubernetes is important and how it addresses compliance needs.
Two encryption methods are commonly adopted for protecting data integrity and confidentiality: encryption at rest and encryption in transit. Encryption at rest refers to encrypting stored data, e.g., in your cloud provider’s managed disk solution, so that if the data were simply copied and extracted, the raw information would be unintelligible without the cryptographic keys needed to decrypt it.
Encrypting data in transit is an effective security mechanism and a critical requirement of organizational compliance and regulatory frameworks, as it helps protect sensitive information from unauthorized access and interception while it is being transmitted over the network. We will dive deeper into this requirement.
Encrypting data in transit Continue reading
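As a concrete illustration, assuming Calico is the cluster's CNI and WireGuard is available on the nodes, in-cluster pod-to-pod encryption can be switched on through the default FelixConfiguration. This is a minimal sketch of one approach, not the only way to encrypt traffic in transit.

apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  # Enables WireGuard encryption of inter-node pod traffic (IPv4).
  wireguardEnabled: true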
Welcome to the Calico monthly roundup: May edition! From open source news to live events, we have exciting updates to share—let’s get into it!
Customer case study: Rafay
Rafay achieved turnkey Kubernetes security using Calico on AWS. Read our new case study to find out how.
New guide: CISO’s security guide to containers and Kubernetes
This guide provides CISOs and other security decision-makers with an overview of container security, insights into securing Kubernetes landscapes and container-based applications, and why securing these technologies requires a unique approach.
Tigera Named Winner of the Esteemed Global InfoSec Awards during RSA Conference 2023
We’re excited to announce that we won the ‘Hot Company: Container Security’ category of the Global InfoSec Awards from Cyber Defense Magazine! Read the full press release for more details.
Almost all modern network systems, including stateful firewalls, make use of connection tracking (“conntrack”) because it consumes less processing power per packet and simplifies operations. However, there are use cases where connection tracking has a negative impact, as we described in Linux Conntrack: Why it breaks down and avoiding the problem. Distributed Denial of Service (DDoS) mitigation systems, which defend against volumetric network attacks, are a well-known example of such a use case, as they need to drop malicious packets as fast as possible. Beyond these attacks, connection tracking is itself a potential attack vector because it is a limited resource. There are also applications that generate huge numbers of short-lived connections per second, to the point that tracking connections adds processing overhead and defeats its intended purpose. These use cases demonstrate the need to not track connections in a firewall, also known as stateless firewalling.
In this blog post, we will explain how Project Calico uses eXpress Data Path (XDP) in its eBPF dataplane (also in its iptables dataplane but not the focus of this post) to improve the performance of its stateless firewall. XDP is an eBPF hook that allows a program to Continue reading
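To show what stateless (untracked) filtering looks like in Calico, the sketch below is a host-endpoint GlobalNetworkPolicy with doNotTrack set, which is the kind of policy Calico can enforce with XDP. The selector label and port are invented for the example.

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-udp-flood
spec:
  # Applies to host endpoints carrying this (hypothetical) label.
  selector: has(edge-node)
  order: 10
  # Untracked policies bypass conntrack entirely.
  doNotTrack: true
  applyOnForward: true
  ingress:
    - action: Deny
      protocol: UDP
      destination:
        ports: [11211]   # example port only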
Organizations are adopting Kubernetes on Amazon Web Services (AWS) to modernize their applications. But Kubernetes clusters and application lifecycles demand a considerable investment of cost and resources, especially for edge applications.
Rafay’s SaaS-based Kubernetes operations platform (KOP) helps platform teams deploy, scale, and manage their fleet without requiring anyone on the platform team to be a Kubernetes expert. Hosted on Amazon Elastic Kubernetes Service (EKS), Rafay’s unified, enterprise-grade KOP supports Kubernetes and application lifecycle management through automation and self-service, with the right level of standardization, control, and governance. Rafay empowers organizations to accelerate their digital transformation while limiting operating costs.
In partnership with AWS and Tigera, Rafay shares the story of how it leveraged Calico on AWS to secure its turnkey offering in an exclusive case study. Here are the highlights.
To secure its KOP and enable customers with little to no Kubernetes experience, Rafay required a scalable, Kubernetes-native security solution that could:
According to the recent Datadog report on real-world container usage, Redis is among the top 5 technologies used in containerized workloads running on Kubernetes.
Redis databases are commonly deployed across multi-region clusters to provide high availability (HA) to microservices applications. However, while Kubernetes dictates how networking and security policy are deployed and configured within a single cluster, it is challenging to enable pod-level inter-cluster communication, enforce security policies across cluster boundaries, and connect to services running in pods in other clusters.
Calico Clustermesh provides an elegant solution for running highly available Redis across multiple clusters without any added overhead. By default, Kubernetes pods can only see pods within their own cluster.
Using Calico Clustermesh, you can grant access to other clusters and the applications they are running. Calico Clustermesh comes with Federated Endpoint Identity and Federated Services.
Calico federated endpoint identity and federated services are implemented in Kubernetes at the network layer. To apply fine-grained network policy between multiple clusters, the pod source and destination IPs must be preserved. The prerequisite for enabling federated endpoint identity is therefore that clusters are designed with common networking across clusters (routable pod IPs) and no encapsulation.
Federated services work together with federated endpoint identity, Continue reading
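To make this concrete, here is a sketch of what a federated service can look like with Calico’s federation support. The annotation name follows the Tigera documentation at the time of writing, and the name, namespace, labels, and port are placeholders.

apiVersion: v1
kind: Service
metadata:
  name: redis-federated          # placeholder name
  namespace: redis               # placeholder namespace
  annotations:
    # Selects the backing services (in local and remote clusters) by label.
    federation.tigera.io/serviceSelector: 'app == "redis"'
spec:
  type: ClusterIP
  ports:
    - name: redis
      port: 6379
      protocol: TCP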
FortiGate firewalls are highly popular and extensively utilized for perimeter-based security in a wide range of applications, including monolithic applications developed and deployed using the traditional waterfall model. These firewalls establish a secure perimeter around applications, effectively managing inbound and outbound traffic for the organization. FortiGate relies on IP addresses for implementing “allow/deny” policies.
The use of IP addresses is effective for non-cloud native applications, where static IP addresses serve as definitive network identifiers. However, in a Kubernetes environment, workloads have dynamic IP addresses that change whenever they are restarted or scaled out to different nodes. This dynamic nature poses challenges when utilizing FortiGate with Kubernetes workloads, requiring continuous updates to firewall rules and the opening of large CIDR ranges for node-based access. This introduces security and compliance risks, as workloads running on these CIDR ranges gain unrestricted access to external or public services.
To use FortiGate firewalls with Kubernetes workloads, it becomes crucial to identify the workloads that need access to external resources and assign them fixed IP addresses that can be referenced in FortiGate firewall rules. The integration of Calico with FortiGate firewalls and FortiManager offers an elegant solution, enabling the use of FortiGate firewalls while retaining existing Continue reading
Founded in 1990, Aldagi is Georgia’s first and biggest private insurance firm. With a 32% market share in Georgia’s insurance sector, Aldagi provides a broad range of services to corporate and retail clients.
With the onset of the pandemic in 2019, Aldagi wanted to make its services available to customers online. To this end, the company adopted an Agile methodology for software development and re-architected its traditional VM-based applications into cloud-native applications. Aldagi then began using containers and Kubernetes as a part of this process. Using self-managed clusters on Rancher Kubernetes Engine (RKE), Aldagi created distributed, multi-tenant applications to serve its broad EU customer base.
In collaboration with Tigera, Aldagi details its journey using Calico to achieve EU GDPR compliance, in order to share its experience with the rest of the Kubernetes community.
Because Aldagi’s applications are distributed and multi-tenanted, the company faced three major challenges when it came to achieving EU GDPR compliance:
By deploying Calico, Aldagi solved these challenges and achieved EU GDPR compliance, Continue reading
The adoption of cloud native applications has become a necessity for organizations to run their businesses efficiently. As per Gartner, more than 85% of organizations will embrace a cloud-first principle by 2025, which will rely on adopting cloud native applications for complete execution. The massive increase in adoption of cloud native applications has given rise to more security challenges such as container image vulnerabilities, configuration errors and a larger runtime attack surface. Kubernetes has become the de facto standard platform to host and run cloud native applications. However, as Kubernetes adoption gains momentum, so does the security risk associated with business-critical applications that communicate with multiple external and internal systems. Kubernetes network security is a critical aspect of securing cloud native applications and the Kubernetes platform itself. This multi-part blog series aims to explore the network security challenges of the Kubernetes environment and provide insights into mitigating them using Calico security controls. The series will cover various aspects of network security, including security policies, application layer security, host network security, Pod traffic encryption, and the importance of a defense-in-depth security approach. Let’s dive into the world of Kubernetes network security and learn how to protect your organization’s valuable assets.
As containerized applications become increasingly complex, it can be challenging to design and execute an effective container security strategy. With the growing trend towards cloud-based applications and services, cyber criminals are also evolving their attack techniques, making container security solutions more critical than ever. Calico provides robust detection capabilities to detect known and zero-day container and network-based attacks. In this blog, we will look at Calico’s capabilities to detect network-based attacks.
Calico offers comprehensive protection against both known and zero-day network-based attacks. Using a combination of workload-based IDS/IPS, Calico can detect and block connections to known malicious IPs identified through AlienVault and custom threat intelligence feeds. Calico also uses heuristics-based learning to identify anomalous network activity and prevent zero-day attacks. To further protect against OWASP Top 10 attacks, Calico provides a web application firewall (WAF) that can intercept attacks and prevent them from reaching your applications. Additionally, Calico can block requests from malicious IPs to prevent DDoS attacks from overwhelming your system.
In this blog, we will go through a scenario where an attacker compromises a public-facing application and gains a foothold in the AWS EC2 or EKS network Continue reading
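As one concrete building block, a threat intelligence feed can be wired into Calico as a GlobalThreatFeed resource. The sketch below pulls an IP block list and labels the resulting GlobalNetworkSet so that policies can reference it; the feed name, URL, and labels are placeholders.

apiVersion: projectcalico.org/v3
kind: GlobalThreatFeed
metadata:
  name: example-ip-blocklist
spec:
  content: IPSet
  pull:
    period: 24h          # re-fetch the feed once a day
    http:
      url: https://example.com/threat-feeds/ip-blocklist.txt   # placeholder URL
  globalNetworkSet:
    labels:
      threat-feed: example-ip-blocklist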
In today’s fast-paced software development environment, developers often use common public libraries and modules to quickly build applications. However, this presents a significant challenge for DevOps teams who must ensure that these applications are safe to use. As organizations move towards dynamic models of software development that rely on Continuous Integration and Continuous Deployment, the responsibility for deploying secure applications has shifted from traditional security teams to development teams.
To address this challenge, I will provide general guidelines on how to integrate the Calico Image Scanning feature into a CI/CD pipeline, using Argo. This will help ensure that images are built safely and free from Common Vulnerabilities and Exposures (CVEs). In this blog post, we will use a Kubernetes validating webhook configuration to attach a Calico Cloud admission controller that can accept or reject certain actions on resources, such as the creation of pods. This will prevent the deployment of images that contain known CVEs, thus strengthening the overall security of your software development process.
The building blocks to use Argo as an example of this integration are below:
Before even committing changes to our application, we must set up the Calico Admission Controller within our Continue reading
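For orientation, the skeleton of such a validating webhook registration is sketched below. The webhook name, service, namespace, and path are placeholders; the actual Calico Cloud admission controller manifest will differ.

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: image-assurance-admission-controller   # placeholder name
webhooks:
  - name: image-assurance.example.com          # placeholder webhook name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
    rules:
      # Evaluate pod creation so images with known CVEs can be rejected.
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        name: admission-controller-svc   # placeholder service
        namespace: calico-cloud          # placeholder namespace
        path: /validate
      caBundle: <base64-encoded CA certificate>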
We are proud to announce that we have been named one of America’s Best Startup Employers 2023 by Forbes!
The Forbes list of America’s Best Startup Employers 2023 was compiled by evaluating 2,600 companies with at least 50 employees in the United States. All of the companies considered were founded between 2013 and 2020, from the ground up, and were not spin-offs of existing businesses. Just like other Forbes lists, businesses cannot pay to be considered. Evaluation was based on three criteria: employer reputation, employee satisfaction, and growth.
Inclusion in Forbes’ list comes after we were proudly certified as a Great Place to Work in 2022.
The Great Place to Work Certification recognizes employers that create an exceptional employee experience. The certification process involves surveying employees and the employer, and rankings are based on employee feedback and independent analysis. This means job searchers can rely on the certification to help them determine whether an organization is truly a great place to work.
Our core values are the foundation of everything we do. Tigera believes in a collaborative, flexible work environment built on mutual respect and commitment. We are delighted to hear that our Continue reading
Organizations can reduce security risk in containerized applications by actively managing vulnerabilities: scanning images, automating image deployment, tracking runtime risk, and deploying mitigating controls.
Kubernetes and containers have become de facto standards for cloud-native application development due to their ability to accelerate the pace of innovation and codify best practices for production deployments, but such acceleration can introduce risk if not operationalized properly.
Containerized application architectures involve highly dynamic containers distributed across cloud environments; some are ephemeral and short-lived, while others are longer-lived. Traditional approaches to securing applications do not apply to cloud-native workloads because most container deployments happen automatically through an orchestrator, which removes the manual step in the process. This automatic deployment requires that images be continuously scanned to identify vulnerabilities at development time in order to mitigate the risk of exploitation at runtime.
In addition to these challenges, the software supply chain adds complexity to vulnerability scanning and remediation. Applications increasingly depend on containers and components from third-party vendors and projects. As a result, it can take weeks or longer to patch the affected components and release new software Continue reading
Cloud computing revolutionized how a business can establish its digital presence. Nowadays, by leveraging cloud features such as scalability, elasticity, and convenience, businesses can deploy, grow, or test an environment in every corner of the world without worrying about building the required infrastructure.
Unlike the traditional model, which was based on notifying the service provider to set up resources for customers in advance, in an on-demand model cloud providers implement application programming interfaces (APIs) that customers can use to deploy resources on demand. This lets customers access a virtually unlimited pool of resources and pay only for the resources they use, without worrying about infrastructure setup and deployment complexities.
For example, a load balancer service resource is usually used to expose an endpoint to your clients. Since a cloud provider’s bandwidth might be far higher than what your cluster can handle, a huge spike or unplanned growth can overwhelm your cluster and render your services unresponsive.
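For reference, exposing a workload through a cloud provider’s load balancer is typically just a Kubernetes Service of type LoadBalancer; the name, labels, and ports below are placeholders.

apiVersion: v1
kind: Service
metadata:
  name: frontend          # placeholder name
spec:
  type: LoadBalancer      # the cloud provider provisions an external load balancer
  selector:
    app: frontend         # placeholder label
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP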
To solve this issue, you can utilize the power of proactive monitoring and metrics to find usage patterns and get insight into your system’s overall health and performance.
In this hands-on tutorial, I will Continue reading
In this issue of the Calico Community Spotlight series, I’ve asked Saurabh Mishra from Vodafone to share his experience with Kubernetes and Calico Open Source. Let’s take a look at how Saurabh started his Kubernetes journey and the insights he gained from Calico Open Source.
Q: Please tell us a little bit about yourself, including where you currently work and what you do there.
I am working as a DevOps Manager with Vodafone Group. I am responsible for managing Kubernetes and cloud-based environments. I’m particularly interested in all things related to cloud, automation, machine learning, and DevOps.
Q: What orchestrator(s) have you been using?
Kubernetes.
Q: What cloud infrastructure(s) has been a part of your projects?
Amazon Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE).
Q: There are many people who are just getting started with Kubernetes and might have a lot of questions. Could you please talk a little bit about your own journey?
Kubernetes is a fully open-source project. It’s purposely designed this way so it can work with other open-source tools to create continuous improvements and innovations. In our team, we are using Kubernetes in non-production and production environments to run and manage critical Continue reading
During RSA Conference 2023, Utpal Bhatt sat down with SiliconANGLE & theCUBE host, John Furrier, to talk cloud-native security. Watch the full interview below.
Here’s a sneak peek of what’s inside…
“Cloud-native applications have fundamentally changed how security gets done. There are a lot of challenges that cloud-native applications bring to the table, given their large attack surface. You have attack vectors in your coding, CI/CD pipeline, deployment, and runtime. And I think that’s what organizations are realizing, that hey, this is fundamentally a different kind of architecture and we need to look at it differently.” —Utpal Bhatt, CMO at Tigera
“Cloud-native applications have fundamentally changed how security gets done. And there are a lot of challenges that cloud-native applications bring to the table, which is what organizations are realizing. If you think about organizations moving into the cloud, the majority have traditionally done a lift and shift. But now they’re recognizing that in order to get the economics right, they need to start developing cloud-native technologies, which are highly distributed, ephemeral, and transient. So all your standard security tools just really don’t work in that environment because you have a really large Continue reading
The Kubernetes documentation clearly defines which use cases you can address using Kubernetes network policies and which you can’t. You are probably familiar with the scope of network policies and how to use them to secure your workloads from undesirable connections. Although it is possible to cover the basics with Kubernetes-native network policies, there is a list of use cases that you cannot implement using these policies alone.
You can refer to the Kubernetes documentation to review the list of “What you can’t do with network policies (at least, not yet)”.
Here are some of the use cases that you cannot implement using only the native network policy API (transcribed from the Kubernetes documentation):
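To give a flavor of what lies beyond the native API (without reproducing that list here), the sketch below uses Calico’s GlobalNetworkPolicy to express a cluster-wide, ordered egress rule with an explicit Deny action, something the namespaced NetworkPolicy resource cannot express on its own. The selector, order, and destination are illustrative.

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-egress-to-metadata-api
spec:
  # Cluster-wide: selects every workload endpoint in every namespace.
  selector: all()
  order: 100
  types: [Egress]
  egress:
    # Explicit Deny action, not expressible with the native NetworkPolicy API.
    - action: Deny
      destination:
        nets: [169.254.169.254/32]   # cloud instance metadata endpoint
    # Allow all other egress so this policy does not block normal traffic.
    - action: Allow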
As more organizations embrace containerization and adopt Kubernetes, they reap the benefits of platform scalability, application portability, and optimized infrastructure utilization. However, with this shift comes a new set of security challenges related to enabling connectivity for applications in heterogeneous environments.
In this blog post, we’ll explore a real-life scenario of security exposure resulting from egress traffic leaving the Kubernetes cluster. We’ll examine how the Calico Egress Gateway can help mitigate these issues by providing robust access control. By using Calico Egress Gateway, enterprises can secure communication from their Kubernetes workloads to the internet, 3rd party applications and networks while maintaining a high level of security.
The Calico Egress Gateway enforces security policies to regulate traffic flowing out of the Kubernetes cluster, providing granular control over egress traffic. This ensures that only authorized traffic is allowed to leave the cluster, mitigating the risks associated with unauthorized outbound traffic.
For enterprises developing cloud-native applications with containers and Kubernetes, a frequent requirement is to connect to a database server hosted either on-prem or in the cloud and safeguarded by a network-based firewall. Since Kubernetes workloads are dynamic and have no fixed IP address, enabling such connectivity from workloads Continue reading
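As a sketch of how this looks in practice, assuming Calico Enterprise or Calico Cloud egress gateways are deployed and using annotation and field names as documented by Tigera (treat them as assumptions to verify against your version), a namespace can be steered through gateways that draw their addresses from a small, fixed IP pool the external firewall can allow-list. The pool CIDR, labels, and namespace are placeholders.

apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: egress-gateway-pool        # placeholder pool reserved for gateway pods
spec:
  cidr: 10.10.10.0/30              # small, fixed range the firewall can allow-list
  blockSize: 31
  nodeSelector: "!all()"           # keep ordinary pods from drawing from this pool
---
apiVersion: v1
kind: Namespace
metadata:
  name: billing                    # placeholder namespace
  annotations:
    # Route this namespace’s outbound traffic via egress gateways matching the selector.
    egress.projectcalico.org/selector: 'egress-gateway == "billing-gw"'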