Category Archives for "Tigera.io"

How to enable serverless computing in Kubernetes

In the first two articles in this series about using serverless on an open source platform, I described how to get started with serverless platforms and how to write functions in popular languages and build components using containers on Apache OpenWhisk.

Here in the third article, I’ll walk you through enabling serverless in your Kubernetes environment. Kubernetes is the most popular platform for managing serverless workloads and microservice application containers, and it uses a fine-grained deployment model to process workloads more quickly and easily.

Keep in mind that serverless not only reduces infrastructure management and charges you on a consumption model for actual service use, but also provides many of the capabilities the cloud platform itself offers. There are many serverless or FaaS (Function as a Service) platforms, but Kubernetes is the first-class citizen for building a serverless platform: more than 13 serverless or FaaS open source projects are based on Kubernetes.

However, Kubernetes does not natively let you build, serve, and manage app containers for your serverless workloads. For example, if you want to build a CI/CD pipeline on Kubernetes to build, test, and deploy cloud-native apps from source code, you need to use your Continue reading
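One popular way to add the missing serverless layer is Knative, one of the Kubernetes-based FaaS projects mentioned above. The sketch below (the image and `TARGET` value are illustrative, and it assumes Knative Serving is already installed in the cluster) shows how a single manifest turns a container into an autoscaled, scale-to-zero service:

```yaml
# A minimal Knative Service. Knative Serving must already be installed;
# the container image is a sample hello-world app, not from this article.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: ghcr.io/knative/helloworld-go:latest
          env:
            - name: TARGET
              value: "serverless on Kubernetes"
```

Applying this with `kubectl apply -f hello.yaml` gives you a routable endpoint, and Knative scales the underlying pods down to zero when the service is idle.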

Extend CI/CD with CR for Continuous App Resilience

This is a guest post written by Govind Rangasamy, CEO and Founder, Appranix.

The radical shift toward DevOps and the continuous-everything movement has changed how organizations develop and deploy software. As continuous integration and continuous delivery (CI/CD) processes and tools are consolidated and standardized in the enterprise, a standardized DevOps model helps organizations deliver software functionality faster and at large scale. However, newer cyber threats, evolving regulatory requirements, and the need to protect brand reputation are putting tremendous pressure on IT leaders to effectively protect their customer and business-critical data.

Conceptually, the DevOps pipeline approach makes a lot of sense; in practice, however, Site Reliability Engineering (SRE) and Ops teams optimize systems for service reliability and robustness at the cost of delivering new features. The need for software reliability inherently decreases Continuous Delivery (CD) throughput. This conundrum is the biggest challenge for any organization adopting DevOps practices at large scale today. By integrating and extending CI/CD with Continuous Resilience (CR) to protect against a multitude of software reliability disruptions, DevOps teams can confidently deploy new software without affecting the resiliency of their systems. In other words, Continuous Resilience is the radical new enabler that gives confidence for Continue reading

Big Data and Kubernetes – Why Your Spark & Hadoop Workloads Should Run Containerized…(1/4)

Starting this week, we will publish a series of four blog posts on the intersection of Spark and Kubernetes. The first post delves into the reasons the two platforms should be integrated. The second deep-dives into the Spark/K8s integration. The third discusses use cases for serverless and Big Data analytics. The last rounds off with insights on best practices.

Introduction

Most Cloud Native architectures are designed in response to digital business initiatives, where it is important to personalize and track minute customer interactions. The main components of a Cloud Native platform inevitably follow a microservices-based design. At the same time, Big Data architectures based on Apache Spark have been implemented at thousands of enterprises; they support multiple data ingest capabilities (real-time, streaming, and interactive SQL) while performing any kind of data processing (batch, analytical, in-memory, and graph-based), all while providing search, messaging, and governance capabilities.

The RDBMS has been a fixture of the monolithic application architecture. Cloud Native applications, however, need to work with loosely structured data formats as well as regularly structured data. This implies the need to support data streams that are Continue reading
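The integration the series will cover lets Spark schedule its driver and executors directly as Kubernetes pods. As one sketch of what that looks like in practice (assuming the Kubernetes Operator for Apache Spark is installed in the cluster and a `spark` service account exists; the image tag and resource sizes are illustrative), a Spark job can be described declaratively, like any other Kubernetes workload:

```yaml
# A containerized Spark job, submitted as a SparkApplication custom
# resource. Requires the Kubernetes Operator for Apache Spark;
# image, versions, and sizing below are illustrative assumptions.
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: spark-pi
  namespace: default
spec:
  type: Scala
  mode: cluster
  image: "apache/spark:3.5.0"
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: "local:///opt/spark/examples/jars/spark-examples_2.12-3.5.0.jar"
  sparkVersion: "3.5.0"
  driver:
    cores: 1
    memory: "512m"
    serviceAccount: spark   # must have RBAC rights to create executor pods
  executor:
    instances: 2
    cores: 1
    memory: "512m"
```

The operator translates this resource into a driver pod, which in turn requests executor pods from the Kubernetes scheduler, so Spark jobs share the cluster's resource management with the rest of the containerized workloads.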

5 reasons to use Kubernetes

Kubernetes is the de facto open source container orchestration tool for enterprises. It provides application deployment, scaling, and container management, among other capabilities, and it enables enterprises to optimize hardware resource utilization and increase production uptime through fault-tolerant functionality at speed. The project was initially developed by Google, which donated it to the Cloud Native Computing Foundation. In 2018, it became the first CNCF project to graduate.

This is all well and good, but it doesn’t explain why development and operations should invest their valuable time and effort in Kubernetes. The reason Kubernetes is so useful is that it helps dev and ops quickly solve the problems they struggle with every day.

Following are five ways Kubernetes’ capabilities help dev and ops professionals address their most common problems.

1. Vendor-agnostic

Many public cloud providers not only offer managed Kubernetes services but also many cloud products built on top of those services for on-premises application container orchestration. Because Kubernetes is vendor-agnostic, operators can design, build, and manage multi-cloud and hybrid cloud platforms easily and safely without risk of vendor lock-in. Kubernetes also eliminates the ops team's worries about a complex multi-/hybrid-cloud strategy.

2. Service discovery

To develop microservices applications, Java developers must Continue reading
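Kubernetes provides service discovery out of the box through Services and cluster DNS, with no client-side registry required. A sketch (the names `orders`, `app=orders`, and the ports are illustrative): the Service below gives the matching pods a stable virtual IP, and any pod in the cluster can reach it by the DNS name `orders.default.svc.cluster.local`:

```yaml
# A ClusterIP Service fronting the pods labeled app=orders.
# Names and ports are illustrative, not from the article.
apiVersion: v1
kind: Service
metadata:
  name: orders
  namespace: default
spec:
  selector:
    app: orders          # traffic is load-balanced across matching pods
  ports:
    - port: 80           # the Service's stable port
      targetPort: 8080   # the port the container actually listens on
```

Because the DNS name stays constant while pods come and go, clients need no discovery logic beyond an ordinary HTTP call to `http://orders` from within the same namespace.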
