The HAProxy Kubernetes Ingress Controller 1.5 has been released. With the introduction of features around different types of authentication and configuration, and the ability to run the controller external to a Kubernetes cluster, the release marks a new release cadence for the software, according to HAProxy's director of product.
If you are going to use a service mesh to manage a set of microservices, you might as well start thinking of the service mesh as the “security kernel” for these distributed systems, suggested a Tetrate senior engineer. The guidance comes from the U.S. National Institute of Standards and Technology (NIST) and Tetrate, a purveyor of an enterprise service mesh.
We all agree that open source development methods help create better code. That insight goes back to “The Cathedral and the Bazaar,” which explained how the methodology of openness worked in the Fetchmail project. But that’s a general rule. Open source can still be abused by unscrupulous developers. So, why don’t we make sure, when a programmer attempts to merge code into a program, that they’re really who they say they are, by using two-factor authentication (2FA) or a digital signature? Good question.
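With Git, the signing half of that answer is already built in. The following is a minimal sketch, assuming a GPG key already exists; `<KEYID>` is a placeholder for your own key's ID:

```shell
# Tell Git which key to sign with (<KEYID> is a placeholder)
git config --global user.signingkey <KEYID>

# Sign every commit by default
git config --global commit.gpgsign true

# Or sign a single commit explicitly
git commit -S -m "signed change"

# Verify the signature on the most recent commit
git log --show-signature -1
```

Hosting platforms can then enforce that only commits with verified signatures are merged, which is the enforcement piece the paragraph above is asking for.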
You might not think this is a real problem. Alas, it is. For example, in 2019 the CursedGrabber malware was successfully spread through open source channels. And in the Linux Foundation’s 2020 FOSS Contributor Survey, developers were asked about security in the open source projects they contribute to.
Tetrate sponsored this post.
Petr is an IT professional with more than 20 years of international experience and a master’s degree in computer science. He is a technologist at Tetrate.
The Istio service mesh comes with its own ingress, but we see customers with requirements to use a non-Istio ingress all the time. Previously, we’ve covered Traefik ingress. With some slight adjustments to the approach we suggested previously, we at Tetrate learned how to implement Traefik as the ingress gateway to your Istio Service Mesh. This article will show you how.
The flow of traffic is shown in the diagram below. As soon as requests arrive at the service mesh from the Traefik ingress, Istio can apply security, observability and traffic steering rules to the request:
Incoming traffic bypasses the Istio sidecar and arrives directly at Traefik, so the requests terminate at the Traefik ingress.
Traefik uses the IngressRoute config to rewrite the “Host” header to match the destination and forwards the request to the targeted service, a several-step process:
Requests exiting the Traefik ingress are redirected to the Istio sidecar.
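An IngressRoute with a Host-rewriting middleware of the kind described above might look roughly like this. This is a sketch, not a verified config: the names, namespace, ports and hostnames are placeholders, and the middleware-based Host rewrite is one plausible way to do it with Traefik's headers middleware:

```yaml
# Hypothetical example; names, namespace, ports and hostnames are placeholders.
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: rewrite-host
  namespace: default
spec:
  headers:
    customRequestHeaders:
      # Rewrite the Host header to match the in-mesh destination
      Host: "productpage.default.svc.cluster.local"
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: productpage
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`productpage.example.com`)
      kind: Rule
      middlewares:
        - name: rewrite-host
      services:
        - name: productpage
          port: 9080
```

With the Host header matching the cluster-internal service name, the Istio sidecar can apply its routing and policy rules to the forwarded request.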
Hongtao is a Tetrate engineer and former Huawei Cloud expert. A PMC member of Apache SkyWalking, he participates in popular open source projects such as Apache ShardingSphere and Elastic-Job.
Want to observe a service mesh that extends to virtual machines? A new analyzer in Apache SkyWalking makes it possible. In a previous article, we talked about observability of a service mesh in a Kubernetes environment and applied it to the bookinfo application in practice. But in that scenario, in order to map IP addresses to services, SkyWalking would need access to service metadata from a Kubernetes cluster, which is not available for services deployed on VMs. In this tutorial, we’ll demonstrate how SkyWalking’s new analyzer can give you better observability of a mesh that includes virtual machines.
How It Works
What makes VMs different from Kubernetes is that, for VM services, there is no place from which we can fetch the metadata needed to map IP addresses to services.
The mechanics of the SkyWalking analyzer are the same.
Version 2.2 of the HAProxy Data Plane API offers service discovery and native support for HashiCorp’s Consul, wrote Daniel Corbett, head of product at HAProxy Technologies, in a blog post.
Through a RESTful HTTP API, HAProxy connects directly to a defined Consul server and ingests the list of services and nodes from a Consul catalog, Corbett later told The New Stack.
The API will set off a process that can “define an HAProxy backend and pool of servers to match this catalog and automatically scale up or down nodes/servers on-demand based on changes within the Consul catalog,” Corbett said.
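As a rough illustration of the kind of request body such an API call could carry, here is a sketch. The field names and values below are assumptions for illustration only, not the authoritative Data Plane API schema; consult the API reference for the real contract:

```python
import json

# Hypothetical JSON body for registering a Consul service-discovery source
# with the HAProxy Data Plane API. Field names are illustrative assumptions.
consul_source = {
    "address": "127.0.0.1",   # Consul server address to poll
    "port": 8500,             # Consul HTTP API port
    "enabled": True,          # start polling the Consul catalog
    "retry_timeout": 10,      # seconds between catalog polls
    "server_slots_base": 10,  # server slots pre-allocated per generated backend
}

payload = json.dumps(consul_source)
print(payload)
```

In practice, a body like this would be POSTed to the Data Plane API's Consul service-discovery endpoint, after which HAProxy keeps the generated backend in sync with the catalog.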
Corbett noted in the post that HAProxy Technologies has also released version 2.3 of HAProxy itself, adding features such as forwarding, prioritizing and translating of messages sent over the Syslog protocol on both UDP and TCP, an OpenTracing SPOA, stats contexts, SSL/TLS enhancements, an improved cache, and changes in the connection layer that lay the foundation for HTTP/3/QUIC support.
For more information on HAProxy’s Data Plane API, see the project’s documentation.
Twain is a guest blogger for Twistlock and a Fixate IO contributor. He began his career at Google, where, among other things, he provided technical support for the AdWords team, reviewing stack traces, resolving issues affecting both customers and the support team, and handling escalations. Today, as a technology journalist, he helps IT magazines and startups change the way teams build and ship applications.
Service meshes have been getting quite a bit of attention, and with good reason. By providing reliability, security, and observability at the platform layer, service meshes can play a mission-critical role in Kubernetes applications. But tales of adoption are mixed: some practitioners report shying away from adopting a service mesh due to its apparent complexity, while others report getting one up and running with apparent ease. So which is it? Are service meshes too complex to be worth the effort, or are they ready for adoption today?
In this article I wanted to focus on
Zack is a Tetrate engineer. He is an Istio contributor, a member of the Istio Steering Committee and co-author of “Istio: Up and Running” (O’Reilly, 2019).
In an upcoming paper co-authored with NIST’s Ramaswamy Chandramouli, we’ll present recommendations for safely and securely offloading authentication and authorization from application code to a service mesh, and discuss the advantages and disadvantages of that approach. This article presents an overview of the paper, which will be presented later this month.
About 10 to 12 years ago, the world of software experienced a shift in the architecture of enterprise applications. Architects and software builders started moving away from giant, tightly coupled, monolithic applications deployed in private data centers toward a more microservices-oriented architecture hosted on public cloud infrastructure. The inherently distributed nature of microservices poses a new security challenge in the public cloud. Over the last decade, despite the growing adoption of microservices-oriented architecture for building scalable, autonomous and robust enterprise applications, organizations have often struggled to protect against this new attack surface, which, compared to traditional data centers, includes concerns around multitenancy and a lack of visibility and control over the infrastructure and operational environment. This architectural shift makes meeting security goals harder, especially with the paramount emphasis placed on faster container-based deployments.
The purpose of this article is to explain what microsegmentation is and how it can empower software architects, DevOps engineers and IT security architects to build secure and resilient microservices. Specifically, I’ll discuss the network security challenges associated with the popular container orchestration mechanism Kubernetes, and I will illustrate the value of microsegmentation in preventing lateral movement when a workload is compromised.
Jimmy is a developer advocate at Tetrate, a CNCF Ambassador, and a co-founder of ServiceMesher and the Cloud Native Community (China). He mainly focuses on Kubernetes, Istio, and cloud native architectures.
In this article, I’ll give you an overview of
This article uses the Ballerina language to demonstrate how you can effectively use WebSocket features.
The Dynamic Web: Looking Back
Anjana is Director of Developer Relations at WSO2. His latest venture is his role in the Ballerina project, where he has been involved extensively in the design and implementation of the language and its runtime, and now primarily works on its ecosystem engineering and evangelism activities.
Nick is a software engineer at Tetrate, the enterprise service mesh company. He is a DevOps expert on Istio, public cloud architecture, and infrastructure automation.
You may have heard that DNS functionality was added in Istio 1.8, but you might not have thought about the impact it has. It solves some key issues within Istio and allows you to expand your mesh architecture to include multiple clusters and virtual machines. An excellent explanation of the features can be found in “What’s New in Istio 1.8 (DNS Proxy).”
Enabling Istio’s DNS Proxy
This feature is currently in alpha, but it can be enabled in the IstioOperator config.
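A minimal IstioOperator config enabling the DNS proxy looks like the following; this matches the `proxyMetadata` setting documented for Istio 1.8, applied mesh-wide:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        # Enable the sidecar's DNS proxy (alpha in Istio 1.8)
        ISTIO_META_DNS_CAPTURE: "true"
```

Applying this with `istioctl install -f <file>` turns on DNS capture for all proxies; it can also be set per workload via annotations if mesh-wide rollout is too aggressive.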
Yasser Ganjisaffar is the VP of engineering at Forward Networks, overseeing all the company’s engineering efforts. He joined Forward Networks in 2014 as an early employee and led the team that scaled the computation core of the Forward Enterprise product by 1,000x in five years. Prior to that, he built large-scale search infrastructure at Facebook and Microsoft. He holds a Ph.D. in computer science, specializing in information retrieval.
Developing enterprise software is far from simple. Designing a platform to serve hundreds of thousands of users, devices, or data streams (sometimes all at once) is a Herculean task. But that doesn’t mean that it’s impossible to approach the design methodology in a way that encourages scalability in the future.
Scalability is one of the most important considerations in building a new software solution. Without it, the software cannot support user growth without crippling the user experience and, in turn, inhibiting sales. Making a scalable software platform is challenging simply because it’s nearly impossible to know beforehand what factors, options and problems the vendor needs to take into consideration, requiring companies to instead iterate along the way.
That was the issue
Mizar is an open source project providing cloud networking to run virtual machines, containers, and other compute workloads. We built Mizar from the ground up with large scale and high performance in mind. Built in the same way as distributed systems in the cloud, Mizar utilizes XDP (eXpress Data Path) and Kubernetes to allow for the efficient creation of multitenant overlay networks with massive amounts of endpoints. Each of these technologies brings valuable perks that enable Mizar to achieve its goals.
With XDP, Mizar is able to:
Skip unnecessary stages of the network stack whenever possible and offload packet processing to smart NICs.
Efficiently use kernel packet processing constructs without being locked into a specific processor architecture.
Produce very small packet processing programs (<4KB).
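To make the "very small program" point concrete, here is the smallest possible XDP program, a sketch rather than Mizar's actual code; it assumes a build environment with clang and the libbpf headers installed:

```c
// Minimal XDP program: passes every packet up the stack unchanged.
// Build (assumed toolchain): clang -O2 -g -target bpf -c xdp_pass.c -o xdp_pass.o
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_pass(struct xdp_md *ctx)
{
    // A real program would parse packet headers here and return XDP_DROP,
    // XDP_TX or XDP_REDIRECT depending on the verdict.
    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```

The compiled object is tiny and runs in the kernel at the driver's earliest receive hook, which is what lets XDP skip most of the network stack.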
With Kubernetes, Mizar is able to:
Efficiently program the underlying core XDP programs.
Manage the lifecycle of its abstractions via CRDs.
Have a scalable and distributed management plane.
Deploy its core components and modules across all specified hosts.
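Managing abstractions via CRDs means network objects are declared like any other Kubernetes resource. The example below is purely illustrative; the API group, kind and fields are assumptions, not Mizar's actual schema:

```yaml
# Illustrative only: a hypothetical VPC object managed through a CRD.
apiVersion: mizar.example.com/v1
kind: Vpc
metadata:
  name: tenant-a-vpc
spec:
  ip: "10.0.0.0"
  prefix: "16"
  dividers: 2   # hypothetical knob for scaling packet-processing instances
```

A controller watching such objects would then program the corresponding XDP data plane on each host.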
Mizar’s Goals
Infoblox sponsored this post.
Sandeep is a software engineer at Infoblox, focusing on open source contributions to the Cloud Native Computing Foundation (CNCF) projects CoreDNS and Kubernetes.
There has been increasing demand from users to be able to manage the health, status, rollout, rollback and so on of CoreDNS in a Kubernetes cluster, and not just rely on CoreDNS being managed by the cluster management tools. Since the use of Operators in Kubernetes is now generally accepted, the aim of the CoreDNS Operator is to address this need.
Third-party DNS providers have seen tremendous consolidation during the past few years, resulting in dependence on a smaller pool of providers that handle lookups for the world’s largest websites. Reliance on only one of a few DNS providers also represents a heightened risk in the event of an outage. In a study from Carnegie Mellon University, 89.2% of the websites examined relied on third-party infrastructure such as the CDN MaxCDN, the researchers noted.
Veteran networking pros at Isovalent are using Extended Berkeley Packet Filter (eBPF) technology, which makes the Linux kernel programmable, to address the ephemeral challenges of Kubernetes and microservices.
“If you think about the Linux kernel, traditionally, it’s a static set of functionality that some Linux kernel developer over the course of the last 20 or 30 years decided to build and they compiled it into the Linux kernel. And it works the way that kernel developer thought about, but may not be applicable to the use case that we need to do today,” said Isovalent’s CEO.
“There’s a lot to say about each of these service meshes and how they work: their architecture, why they’re made, what they’re focused on, what they do, when they came about, and why some of them aren’t here anymore and why we’re still seeing new ones,” the founder of Layer5 explained during his talk, “Service Mesh Specifications and Why They Matter in Your Deployment.”
Service mesh is increasingly seen as a requirement for managing microservices in Kubernetes environments, offering a central control plane to manage microservices access, testing, metrics and other functionality. One-third of the respondents in a New Stack survey of our readers said their organizations already use a service mesh, and numerous options are available, including Envoy, Linkerd and others.
Every so often, however, a new buzzword or acronym comes around that really has weight behind it. Such is the case with XDP (eXpress Data Path), which uses the eBPF programming language to gain access to a lower-level kernel hook. That hook is implemented by the network device driver within the ingress traffic processing function, before a socket buffer can be allocated for the incoming packet.
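Once an eBPF program has been compiled to an object file, attaching it at that driver-level hook is a one-line operation with iproute2; the interface, object file and section names below are placeholders:

```shell
# Attach a compiled XDP program to eth0. Generic mode works without driver
# support; native mode (plain "xdp") is faster where the driver implements it.
ip link set dev eth0 xdpgeneric obj xdp_prog.o sec xdp

# Inspect the attachment
ip link show dev eth0

# Detach
ip link set dev eth0 xdpgeneric off
```

From that point on, every incoming packet on the interface passes through the program before the kernel allocates a socket buffer for it.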
Let’s look at how these two work together. This outstanding example comes from Jeremy Erickson, a senior R&D developer.
Lens, the integrated development environment (IDE) for Kubernetes, has seen rapid growth in the past year, ever since it changed its deployment model and found the backing of Mirantis, the company that in 2019 acquired Docker Enterprise. This month, the project launched an extensions API alongside several pre-built extensions from popular cloud native products.