
Category Archives for "Tigera.io"

IBM’s journey to tens of thousands of production Kubernetes clusters

IBM Cloud has made a massive shift to Kubernetes. From an initial plan for a hosted Kubernetes public cloud offering, it has snowballed into tens of thousands of production Kubernetes clusters running across more than 60 data centers around the globe, hosting 90% of the PaaS and SaaS services offered by IBM Cloud.

I spoke with Dan Berg, IBM Distinguished Engineer, to find out more about their journey, what triggered such a significant shift, and what they learned along the way.

So take me back to the beginning – how did this thing get started?

It must have been 3 years ago.  At that time, IBM had been building customer-facing services in the cloud for several years implemented as traditional VM or Cloud Foundry workloads. One of the services we offered was a container as a service (CaaS) platform that was built under the covers using Docker and OpenStack.  With this platform, we were the first in the industry to provide a true CaaS experience, but in some ways it was ahead of its time, and people weren’t ready to embrace a pure container as a service platform.

To this day we are still seeing double digit …

Istio Routing Basics

When learning a new technology like Istio, it’s always a good idea to take a look at sample apps. The Istio repo has a few sample apps, but they fall short in various ways. BookInfo is covered in the docs and is a good first step. However, it is too verbose, with too many services for my taste, and the docs seem to focus on managing the BookInfo app rather than building it from the ground up. There’s a smaller helloworld sample, but it’s more about autoscaling than anything else.

In this post, I’d like to get back to basics and show you what it takes to build an Istio-enabled ‘HelloWorld’ app from the ground up. One thing to keep in mind is that Istio only manages the traffic of your app; the app lifecycle is managed by the underlying platform, Kubernetes in this case. Therefore, you need to understand containers and Kubernetes basics, and you need to know about Istio routing primitives such as Gateway, VirtualService, and DestinationRule up front. I’m assuming that most people know container and Kubernetes basics at this point, so in this post I will focus on Istio routing instead.
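To make the three routing primitives concrete, here is a minimal sketch of a Gateway, VirtualService, and DestinationRule expressed as Python dicts that mirror the YAML manifests you would apply with kubectl. The names (`helloworld`, `helloworld-gateway`) and the subsets are hypothetical, not taken from the original post.

```python
def gateway(name: str, host: str, port: int = 80) -> dict:
    """An Istio Gateway: opens a port on the ingress gateway for a host."""
    return {
        "apiVersion": "networking.istio.io/v1alpha3",
        "kind": "Gateway",
        "metadata": {"name": name},
        "spec": {
            "selector": {"istio": "ingressgateway"},
            "servers": [{
                "port": {"number": port, "name": "http", "protocol": "HTTP"},
                "hosts": [host],
            }],
        },
    }

def virtual_service(name: str, host: str, gateway_name: str, destination: str) -> dict:
    """A VirtualService: routes traffic entering the Gateway to a service."""
    return {
        "apiVersion": "networking.istio.io/v1alpha3",
        "kind": "VirtualService",
        "metadata": {"name": name},
        "spec": {
            "hosts": [host],
            "gateways": [gateway_name],
            "http": [{"route": [{"destination": {"host": destination}}]}],
        },
    }

def destination_rule(name: str, host: str, subsets: list) -> dict:
    """A DestinationRule: names subsets (versions) of a service for routing."""
    return {
        "apiVersion": "networking.istio.io/v1alpha3",
        "kind": "DestinationRule",
        "metadata": {"name": name},
        "spec": {
            "host": host,
            "subsets": [{"name": s, "labels": {"version": s}} for s in subsets],
        },
    }

gw = gateway("helloworld-gateway", "*")
vs = virtual_service("helloworld", "*", "helloworld-gateway", "helloworld")
dr = destination_rule("helloworld", "helloworld", ["v1", "v2"])
```

The division of labor is the point: the Gateway admits traffic at the edge, the VirtualService decides where it goes, and the DestinationRule names the versions it can go to.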

 

Basic Steps

These are roughly the steps …

Prevent DNS (and other) spoofing with Calico

AquaSec’s Daniel Sagi recently authored a blog post about DNS spoofing in Kubernetes. The TL;DR is that if you use default networking in Kubernetes, you might be vulnerable to ARP spoofing, which can allow pods to spoof (impersonate) the IP addresses of other pods. Since so much traffic is addressed via domain names rather than IPs, spoofing DNS can allow an attacker to redirect lots of traffic inside the cluster for nefarious purposes.

So this is bad, right? Fortunately, Calico already prevents ARP spoofing out of the box. Furthermore, Calico’s design prevents other classes of spoofing attacks. In this post we’ll discuss how Calico keeps you safe from IP address spoofing, and how to go above and beyond for extra security.
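One way to "go above and beyond" is to lock down DNS egress so that pods can only talk to the cluster DNS pods. The sketch below is a standard Kubernetes NetworkPolicy (which Calico enforces) expressed as a Python dict mirroring the YAML manifest; the namespace and the `k8s-app: kube-dns` label are common defaults, but they are assumptions here, not details from the original post.

```python
# NetworkPolicy sketch: allow DNS egress only to the kube-dns pods in
# kube-system, on port 53 over UDP and TCP.
dns_egress_policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "allow-dns-to-kube-dns-only", "namespace": "default"},
    "spec": {
        # Empty podSelector: the policy applies to all pods in the namespace.
        "podSelector": {},
        "policyTypes": ["Egress"],
        "egress": [{
            "to": [{
                "namespaceSelector": {
                    "matchLabels": {"kubernetes.io/metadata.name": "kube-system"}
                },
                "podSelector": {"matchLabels": {"k8s-app": "kube-dns"}},
            }],
            "ports": [
                {"protocol": "UDP", "port": 53},
                {"protocol": "TCP", "port": 53},
            ],
        }],
    },
}
```

With a policy like this in place, even a pod that managed to spoof DNS answers could not receive the queries in the first place, because they are only allowed to reach the real DNS pods.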

 

ARP Spoofing

ARP spoofing is an attack that allows a malicious pod or network endpoint to receive IP traffic that isn’t meant for it. Sagi’s post already describes this well, so I won’t repeat the details here. An important thing to note, however, is that ARP spoofing only works if the malicious entity and the target share the same layer 2 segment (e.g. have direct Ethernet connectivity). In Calico, the network is fully routed at layer 3, meaning that …

Tigera Secure 2.5 – Implement Kubernetes Network Security Using Your Firewall Manager

We are excited to announce the general availability of Tigera Secure 2.5. With this release, security teams can now create and enforce security controls for Kubernetes using their existing firewall manager.

Container and Kubernetes adoption is gaining momentum in enterprise organizations. Gartner estimates that 30% of organizations are running containerized applications today, and it expects that number to grow to 75% by 2022. That’s tremendous growth considering the size and complexity of enterprise IT organizations. It’s difficult to put exact metrics on the growth in Kubernetes adoption; however, KubeCon North America attendance is a good proxy. KubeCon NA registrations grew from 1,139 in 2016 to over 8,000 in 2018 and are expected to surpass 12,000 this December, and the share of corporate registrations has increased dramatically.

KubeCon Registrations

Despite this growth, Kubernetes is a tiny percentage of the overall estate the security team needs to manage; sometimes less than 1% of total workloads. Security teams are stretched thin and understaffed, so it’s no surprise that they don’t have time to learn the nuances of Kubernetes and rethink their security architecture, workflow, and tools for just a handful of applications. That leads to stalled deployments and considerable friction between the application, infrastructure, …

How to enable serverless computing in Kubernetes

In the first two articles in this series about using serverless on an open source platform, I described how to get started with serverless platforms and how to write functions in popular languages and build components using containers on Apache OpenWhisk.

Here in the third article, I’ll walk you through enabling serverless in your Kubernetes environment. Kubernetes is the most popular platform for managing serverless workloads and microservice application containers, and it uses a fine-grained deployment model to process workloads more quickly and easily.

Keep in mind that serverless not only helps you reduce infrastructure management while using a consumption model based on actual service use, but also provides many of the capabilities the cloud platform offers. There are many serverless or FaaS (Function as a Service) platforms, but Kubernetes is a first-class citizen for building a serverless platform: there are more than 13 serverless or FaaS open source projects based on Kubernetes.
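As one example of the Kubernetes-based serverless projects alluded to above, a Knative Service collapses the usual Deployment/Service/autoscaler boilerplate into a single manifest. Below is a minimal sketch of such a manifest as a Python dict mirroring the YAML; the name, image, and environment variable are hypothetical.

```python
# Minimal Knative Serving manifest sketch: one container image becomes a
# scale-to-zero HTTP service; Knative handles routing and autoscaling.
knative_service = {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Service",
    "metadata": {"name": "hello-fn"},
    "spec": {
        "template": {
            "spec": {
                "containers": [{
                    "image": "example.registry/hello-fn:latest",  # hypothetical image
                    "env": [{"name": "TARGET", "value": "world"}],
                }]
            }
        }
    },
}
```

Applying a manifest like this (e.g. via kubectl) is all it takes to get request-driven scaling, including down to zero replicas when the function is idle.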

However, Kubernetes won’t allow you to build, serve, and manage app containers for your serverless workloads in a native way. For example, if you want to build a CI/CD pipeline on Kubernetes to build, test, and deploy cloud-native apps from source code, you need to use your …

Extend CI/CD with CR for Continuous App Resilience

This is a guest post written by Govind Rangasamy, CEO and Founder, Appranix.

The radical shift towards DevOps and the continuous-everything movement has changed how organizations develop and deploy software. As continuous integration and continuous delivery (CI/CD) processes and tools are consolidated and standardized in the enterprise, a standardized DevOps model helps organizations deliver software functionality faster and at large scale. However, newer cyber threats, evolving regulatory requirements, and the need to protect brand reputation are putting tremendous pressure on IT leaders to effectively protect their customer and business-critical data.

Conceptually, the DevOps pipeline approach makes a lot of sense; in practice, however, Site Reliability Engineering (SRE) and Ops teams optimize systems for service reliability and robustness at the cost of delivering new features. The need for software reliability inherently decreases Continuous Delivery (CD) throughput. This conundrum is the biggest challenge for any organization adopting DevOps practices at large scale today. By integrating and extending CI/CD with Continuous Resilience (CR) to protect against a multitude of software reliability disruptions, DevOps teams can confidently deploy new software without affecting the resiliency of their systems. In other words, Continuous Resilience is the radical new enabler that gives confidence for …

Big Data and Kubernetes – Why Your Spark & Hadoop Workloads Should Run Containerized…(1/4)

Starting this week, we will publish a series of four blog posts on the intersection of Spark and Kubernetes. The first post will delve into the reasons why the two platforms should be integrated. The second will deep-dive into the Spark/Kubernetes integration. The third will discuss use cases for serverless and big data analytics. The last post will round off the series with insights on best practices.

Introduction

Most Cloud Native architectures are designed in response to digital business initiatives, where it is important to personalize and to track minute customer interactions. The main components of a Cloud Native platform inevitably leverage a microservices-based design. At the same time, big data architectures based on Apache Spark have been implemented at thousands of enterprises and support multiple data ingest capabilities (real-time, streaming, and interactive SQL) while performing any kind of data processing (batch, analytical, in-memory, and graph-based), at the same time providing search, messaging, and governance capabilities.

The RDBMS has been a fixture of the monolithic application architecture. Cloud Native applications, however, need to work with loosely structured data formats as well as regularly structured data. This implies the need to support data streams that are …

5 reasons to use Kubernetes

Kubernetes is the de facto open source container orchestration tool for enterprises. It provides application deployment, scaling, container management, and other capabilities, and it enables enterprises to optimize hardware resource utilization and increase production uptime through fault-tolerant functionality at speed. The project was initially developed by Google, which donated it to the Cloud Native Computing Foundation. In 2018, it became the first CNCF project to graduate.

This is all well and good, but it doesn’t explain why development and operations should invest their valuable time and effort in Kubernetes. The reason Kubernetes is so useful is that it helps dev and ops quickly solve the problems they struggle with every day.

Following are five ways Kubernetes’ capabilities help dev and ops professionals address their most common problems.

1. Vendor-agnostic

Many public cloud providers offer not only managed Kubernetes services but also many cloud products built on top of those services for on-premises application container orchestration. Being vendor-agnostic enables operators to design, build, and manage multi-cloud and hybrid cloud platforms easily and safely, without the risk of vendor lock-in. Kubernetes also eliminates the ops team’s worries about a complex multi-/hybrid-cloud strategy.

2. Service discovery

To develop microservices applications, Java developers must …
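The service discovery mentioned above works because Kubernetes exposes every Service at a predictable cluster DNS name, so one microservice can find another by name instead of a hard-coded IP. The helper below just assembles that name for illustration; it is not a Kubernetes API, and the service and namespace names are hypothetical.

```python
def cluster_dns_name(service: str, namespace: str = "default",
                     cluster_domain: str = "cluster.local") -> str:
    """Build the in-cluster DNS name Kubernetes assigns to a Service."""
    return f"{service}.{namespace}.svc.{cluster_domain}"

# A pod anywhere in the cluster can reach the "orders" Service in the
# "shop" namespace at this name:
cluster_dns_name("orders", "shop")  # "orders.shop.svc.cluster.local"
```

Inside a cluster, an application would simply resolve this name (e.g. with its normal HTTP client), and kube-dns/CoreDNS returns the Service’s cluster IP.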
