Upbound, the company behind the open source Crossplane project, has created what it says is the first enterprise distribution of Crossplane, called Upbound Universal Crossplane (UXP), explained Bassam Tabbara, Upbound founder and CEO, in an interview. Crossplane “becomes your universal control plane that you could use, using the same style that the Kubernetes community pioneered, to manage essentially all the infrastructure that an enterprise touches from a single control plane.” UXP, then, is an open source, vendor-supported, enterprise-grade distribution of Crossplane that adds a layer of 24/7 support, priority bug fixes and consultation with a subscription. UXP is available free for individual users and by subscription for larger deployments, and is a drop-in replacement for Crossplane that installs with a single command. Tabbara noted that UXP is “vendor-supported, not community-supported,” in that Upbound will “help enterprises deploy it, support it, and give them a number of features that makes it easier for them to deploy and manage it in their environment.” As a long-term supported project, UXP also lags behind upstream Crossplane to ensure reliability, and Upbound describes UXP as “designed to help enterprises adopt a universal control plane, moving beyond infrastructure as code,” in a press statement. UXP further extends Crossplane through its integration with both Upbound Cloud and Upbound Registry, both of which became generally available at the same time as UXP. Upbound Cloud provides teams with visibility into their UXP instances and the infrastructure being managed, giving them a place to see what is running where and who provisioned it. Upbound Registry, meanwhile, provides a place to share Crossplane Configurations both publicly and privately, and for providers to share managed resources. “With UXP, with Upbound Cloud and Upbound Registry, we believe we have a set of products now that can actually take this approach of using control planes in the enterprise and turn it into essentially a new way of managing infrastructure,” Tabbara said. “We see this with existing customers today, maybe even replacing a lot of what they do today with tools like Terraform and infrastructure-as-code approaches and going more towards a control plane approach, or even GitOps on top of a control plane.” The big difference Tabbara sees in all of this is that, by taking an API-driven approach rather than relying on templates, as infrastructure as code does, Crossplane and UXP can deliver a more scalable way of managing infrastructure across large and varied environments. He explained that part of the appeal of Crossplane lies in the fact that teams can use the same Kubernetes-based tools and approaches that they already use to deploy software to provision and manage infrastructure. “If you are using Helm, or Kustomize, or if you’re using literally any of the tools that people are deploying and love and use today with Kubernetes as a container orchestrator, those tools work exactly in the same way,” said Tabbara. “When you’re using Kubernetes plus Crossplane to manage the rest of the cloud infrastructure and deployments across clouds and hybrid clouds, those tools work exactly in the same way. They are using Crossplane APIs that are extensions of the Kubernetes control plane.”
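To make the contrast with template-driven infrastructure as code concrete, here is a minimal Go sketch of the control plane idea: infrastructure is requested as a typed, Kubernetes-style API object that a control plane continuously reconciles, rather than as a template that is rendered and re-run. The PostgreSQLInstance kind, its example.org API group and its fields are illustrative stand-ins, not actual Crossplane API names.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// A hypothetical database claim, shaped like a Kubernetes object. Real
// Crossplane kinds and fields come from the providers and Configurations
// you install; the names here are illustrative only.
type ObjectMeta struct {
	Name string `json:"name"`
}

type DatabaseSpec struct {
	Engine    string `json:"engine"`    // e.g. "postgres"
	StorageGB int    `json:"storageGB"` // requested capacity
	Region    string `json:"region"`    // where the provider should place it
}

type DatabaseClaim struct {
	APIVersion string       `json:"apiVersion"`
	Kind       string       `json:"kind"`
	Metadata   ObjectMeta   `json:"metadata"`
	Spec       DatabaseSpec `json:"spec"`
}

func main() {
	claim := DatabaseClaim{
		APIVersion: "example.org/v1alpha1",
		Kind:       "PostgreSQLInstance",
		Metadata:   ObjectMeta{Name: "orders-db"},
		Spec:       DatabaseSpec{Engine: "postgres", StorageGB: 20, Region: "us-east-1"},
	}

	// The team declares desired state; a control plane reconciles it continuously.
	// There is no template to render and re-apply: the API object is the interface.
	out, _ := json.MarshalIndent(claim, "", "  ")
	fmt.Println(string(out))
}
```

Because such an object speaks the Kubernetes API, anything that can manage a Deployment (Helm, Kustomize, a GitOps controller) can manage it the same way.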
Following the most recent KubeCon+CloudNativeCon, there were some …
Stephen Reardon The one-man band that keeps the show running, Stephen Reardon is the DevOps engineer in the Entain Trading Solutions team, operating hundreds of Kubernetes nodes in the cloud using IaC tooling, chaos engineering testing tools and end-to-end monitoring. His main responsibility is operational reliability: keeping the platform resilient, available and, above all, developer-proof.
At some point, you’ve got to stop building something you think people need and start putting it out there to test in the market. You have to go get users. This is where the first engineers of the Istio service mesh at Google found themselves about four years ago. But, like many things in the still-emerging cloud native space, the first response was: Well, what is it? Who else is using it? Tetrate built Tetrate Service Bridge to act as an application connectivity platform, a technical bridge to take you from legacy environments to modern ones and to increase reliability and availability. Also called TSB, it looks to solve the issue of networking for heterogeneous workloads. Tetrate Service Bridge, built on Istio and now in general availability, presents itself as the solution to enterprise-grade challenges that can’t simply be abstracted away with a Kubernetes layer. The Tetrate team has built out a core set of functionality around controlling traffic across an entire fleet of services, from the edge to the mesh. Zack Butcher, a founding engineer at Tetrate, says TSB bridges the gap between having service mesh capabilities and actually realizing those capabilities in a way that is safe. He said, “This service mesh is great, but how do I actually use it in my enterprise? How do I change my process to take advantage of the mesh? And actually changing processes is really expensive, so how do I not change my process either?” And those enterprise processes aren’t simple either. Teams look to use the service mesh to enforce security and compliance requirements, to gain control and visibility across entire complex infrastructures, or to put security controls in place across highly heterogeneous environments. “Service mesh solves a lot of problems I have, but you are telling me I can only have it in Kubernetes? I want those things to help me get from my legacy to a modern environment, not already in that,” Butcher said. TSB helps you manage across the full breadth of compute, connecting Kubernetes and legacy infrastructure, and Butcher gives examples of how you can link workloads with Istio and Envoy and just start assembling your application network.

“Tetrate Service Bridge is a platform for applications to communicate securely and successfully without having to get into the weeds of what lives there.” — Zack Butcher, Founding Engineer, Tetrate

Butcher says then there’s the enterprise management side: teams need to be able to prove they are using the service mesh correctly and securely. He says TSB enables teams to divvy up their physical infrastructure and cloud-based environments, with multitenancy and controls, so they can use the service mesh to “do cool things at runtime.” The connectivity tool works not only with Istio but also with Apache SkyWalking, enabling observability across whole systems. Tetrate is clear that, while it builds tooling to ease the use of these open source projects and the whole Tetrate team contributes to the open source projects it depends on, it is intentionally not an open core company. “In my opinion, there’s this really big tension in open-core companies,” Butcher explained: as a developer, he doesn’t want to have to decide what goes in the project and what goes in the product people pay for. He continued, “We are building a layer on top of the open source pieces. We are assembling these open source pieces together in a coherent system.” Another part of this decision is that, since they are still essentially using open source tools, enterprises can adopt them in a relatively cheap way through Tetrate.
Butcher points to the fundamental difference between enterprise closed source products like TSB and the open source projects it serves. “Capabilities go in open source, and then how you manage those capabilities and how you use them within an organization, that’s what you put in the product,” he said. While Tetrate only went fully public with TSB in April, it built the product alongside adopters from the start. Butcher, paraphrasing Socrates, said that after the “pain of adopting Istio — we were in a cave without users,” they were determined to build hand in hand with users. One such early adopter was FICO, the organization that creates the predominant credit risk score in the U.S. One emerging use case for service mesh is encryption in transit to ensure compliance with ever-changing regulations and standards, from HIPAA to GDPR, and to automate enforcement of …
Albert Lombarte, co-founder and CEO of …
Mike Hicks Mike is a principal solutions analyst at ThousandEyes, a part of Cisco, and a recognized expert with more than 30 years of experience in network and application performance.

In the olden days, users were in offices and all apps lived in on-premises data centers. The WAN (wide area network) was what connected them all. Today, with the adoption of SaaS apps and associated dependencies such as cloud services and third-party API endpoints, the WAN is getting stretched beyond recognition. In its place, the internet is directly carrying a large — if not majority — share of all enterprise traffic flows. Enterprises are increasingly moving away from legacy WANs in favor of internet-centric, software-defined WANs, also called SD-WANs (software-defined networking in a wide area network). Architected for interconnection with cloud and external services, SD-WANs can play a critical role in making enterprise networks cloud-ready, more cost-efficient and better suited to delivering quality digital experiences to customers and employees at all locations. But the transformation brings new visibility needs, and ensuring that SD-WAN delivers on expectations requires a new approach to monitoring, one that addresses network visibility and application performance equally. WAN in the Light of …
When Avesha founder Raj Nair took a fresh look at load balancing, he found to his surprise that the industry hadn’t changed much over the past twenty years. This week, Avesha is demonstrating its new AI-based load balancing technology at KubeCon+CloudNativeCon 2021. Load balancing still mostly happens at a local level, within a particular cloud or cluster, and uses the same formulas that he helped popularize more than two decades ago. For example, a load balancer can use a “round-robin” formula, where requests go to each server in turn, and then back to the first one. A “weighted round-robin” is similar, except that some servers get more requests than others because they have more available capacity. A “sticky cookie load balancer” is one where all the requests from a particular session are sent to the same server so that, say, customers don’t get moved around in the middle of shopping sessions and lose their shopping carts. “There are a few other variations, but they’re all based on fixed settings,” said Nair. “The state of the art hasn’t moved much in this area.” A very simple change that would make load balancers immediately more effective is to automatically adjust the weights based on server performance. “It’s actually a very low-hanging fruit,” he said. “I don’t know why they aren’t all doing this.” That’s what Avesha started looking at. Then, in addition to server performance, the company also added in other factors, like travel path times. The resulting service, the Smart Application Cloud Framework, was launched Tuesday.

Deployment Structure

Avesha is deployed with an agent that sits in its own container inside a cluster or private cloud. It talks to its fellow agents and to Avesha’s back-end systems via secure virtual private networks. The back end collects information about traffic paths and server performance, then uses machine learning to determine optimal routing strategies. The specific AI technique used is reinforcement learning: the system makes a recommendation, looks at how the recommendation works in practice, then adjusts its model accordingly. “It is continuously tuning your network,” said Nair. “The network is constantly undergoing lots of changes, with traffic and congestion.” It also looks at the performance of individual servers, and if some are having problems handling requests, it automatically routes traffic elsewhere. And it works across all types of deployments — multiple public clouds, private clouds and edge computing installations. “The methods currently in use in Kubernetes are static,” he said. “You set fixed thresholds with a lower bound and an upper bound. But nobody even knows how to set those thresholds.” People wind up guessing, he said, set some basic targets, and then leave them in place. “You end up wasting resources,” he said. The Avesha technology is more like a self-driving car, he said. There are still parameters and guard rails but, within those constraints, the system continually optimizes for the desired outcome, whether that is the lowest latency, maximum cost savings or compliance-related data movement restrictions. “You want your data traffic to be managed in accordance with your policies,” he said. “For example, there might be regulations about where your data is and isn’t allowed to go.”
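As a rough illustration of the formulas Nair describes, here is a toy Go sketch of weighted round-robin in which the weights are re-derived from observed server latency, the “low-hanging fruit” of adjusting weights based on performance. Avesha’s actual service uses reinforcement learning over live traffic and path data; this sketch only shows the basic mechanism, with made-up latencies.

```go
package main

import (
	"fmt"
	"time"
)

// A toy weighted round-robin balancer whose weights are derived from
// observed response times. Faster servers receive more of the traffic.

type server struct {
	name    string
	weight  int           // requests per round this server gets
	latency time.Duration // last observed response time (simulated here)
}

// rebalance assigns each server a weight roughly inverse to its latency.
func rebalance(servers []server) {
	for i := range servers {
		w := int(100 / servers[i].latency.Milliseconds())
		if w < 1 {
			w = 1
		}
		servers[i].weight = w
	}
}

// pick walks the servers in order, honoring weights (simple demo scheme).
func pick(servers []server, counter int) string {
	total := 0
	for _, s := range servers {
		total += s.weight
	}
	n := counter % total
	for _, s := range servers {
		if n < s.weight {
			return s.name
		}
		n -= s.weight
	}
	return servers[0].name
}

func main() {
	servers := []server{
		{name: "a", latency: 20 * time.Millisecond},
		{name: "b", latency: 50 * time.Millisecond},
		{name: "c", latency: 200 * time.Millisecond},
	}
	rebalance(servers)
	counts := map[string]int{}
	for i := 0; i < 1000; i++ {
		counts[pick(servers, i)]++
	}
	fmt.Println(counts) // fast server "a" handles the largest share
}
```

In a static setup the weights would be fixed by hand; here, re-running rebalance as latencies change is what makes the balancer adaptive.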
Performance Improvements

In internal studies, Avesha has seen improvements of 20% to 30% in the number of requests that are handled within their performance targets, compared to standard weighted round-robin approaches. When some clusters have hundreds of thousands of nodes, 30% is a big number, he said. Companies will see improvements in customer experience, lower bandwidth costs, and less need for manual intervention when things go wrong in the middle of the night. And it’s not just about the business bottom line, he added. “If you translate that into wasted energy, wasted natural resources, there are lots of benefits,” he said. For some applications, like video streaming, better performance would translate to competitive advantage, he said. “It’s like the difference between getting high definition and standard definition video.” There’s no commercial product currently on the market that offers AI-powered load balancing, he said, though some companies probably have their own proprietary technology to do something similar. “Netflix is an example of a company that’s a leader in the cloud native world,” he said. “I would say there’s a fairly good chance that they’ve already incorporated AI into their load balancing.” Other large cloud native technology companies with AI expertise may have also built their own platforms, he said. “Nobody has said anything publicly,” he said. “But it’s such an obvious thing to do that I am willing to believe that they have something, but are just keeping it to themselves.” There are also some narrow use cases, like that of content delivery networks. CDNs typically deliver content, like web pages, to users. They work by distributing copies of the content across the internet and optimizing for the shortest possible distance between the end user and the source of the content. Avesha’s approach is more general, supporting connections between individual microservices. “It’s a little bigger than what a CDN is trying to do,” he said. “We are literally at the cutting edge with this.”

AI-Powered Load Balancing as a Feature

At some point, cloud vendors and third-party service providers will begin offering intelligent load balancing to their enterprise customers, either by building their own technology or by buying or partnering with Avesha or any competitors who might appear on the scene. “One way or the other, you’re going to be able to take advantage of it,” said Nair. Avesha itself is currently working with partners, he said, including some major industry players, and he is expecting to make announcements this summer. But enterprises can also work directly with Avesha and get a jump on the competition, he added. Enterprises that deploy workloads to multiple clouds would find the technology of most interest, he added. Avesha is currently working with several companies on proof-of-concept projects. These are companies that typically are at $50 million in revenues or above, in verticals such as media, manufacturing, health care and telecom. “We have also engaged with some partners who are big cloud players,” he said. More information, as well as return on investment analyses, will be released in the next few months.

Verizon and AWS Serve Doctors at the Edge

One case study that has been made public was a …
The Gateway API was discussed in detail — and in person — at KubeCon 2019 in San Diego. There were already many well-known limitations in the existing APIs, and Gateway API was intended as a redo of these APIs, built on the lessons from Services, Ingress and the service mesh community. With a group of Ingress and Service …
With the goal of bringing more productive discussions on this topic into focus and understanding which types of multicloud capabilities are worth pursuing, this series concludes with a look at multicloud through the lens of traffic portability.

Traffic Portability

Armon Dadgar Armon is co-founder and CTO of HashiCorp, where he brings his passion for distributed systems to the world of DevOps tooling and cloud infrastructure.

Multicloud traffic portability means you can shift traffic between environments dynamically. If you have geographically dispersed users, traffic portability would allow you to route traffic to the nearest cloud provider that could service them. So, if your app can run on Azure and AWS, maybe there’s an AWS data center closer to your customer than any Azure one. Or maybe one cloud vendor works better for data sovereignty in Europe, so you route to a particular vendor only for those requests. In most cases, the goal of traffic portability is to have the ability to dynamically shift traffic very quickly between multiple cloud platforms and on-premises data centers. This could also mean you’re balancing traffic 50/50 between AWS and Azure. Or maybe you’re doing maintenance in your Google Cloud environments, so you move 100% of traffic to …
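As a sketch of the traffic-shifting policy Dadgar describes, the following Go snippet models a weighted split across two cloud endpoints that can be retuned at runtime: 50/50 in steady state, 100/0 during a maintenance window. The endpoint hostnames are hypothetical; in practice this logic would live in a global load balancer, DNS service, or service mesh rather than in application code.

```go
package main

import (
	"fmt"
	"math/rand"
)

// A runtime-adjustable traffic split between cloud endpoints.
type split struct {
	endpoints []string
	weights   []float64 // must sum to 1.0
}

// route picks an endpoint at random, in proportion to its weight.
func (s *split) route() string {
	r := rand.Float64()
	for i, w := range s.weights {
		if r < w {
			return s.endpoints[i]
		}
		r -= w
	}
	return s.endpoints[len(s.endpoints)-1]
}

func main() {
	s := &split{
		endpoints: []string{"aws-us-east.example.com", "azure-westeurope.example.com"},
		weights:   []float64{0.5, 0.5}, // balance 50/50 across clouds
	}
	fmt.Println("request goes to:", s.route())

	// Maintenance window on the second cloud: shift all traffic dynamically.
	s.weights = []float64{1.0, 0.0}
	fmt.Println("request goes to:", s.route())
}
```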
Formerly called Ambassador and created by Datawire, Emissary Ingress is the open source core behind the commercial Ambassador Edge Stack, explained the Ambassador Labs founder and CEO …
Allen McNaughton Allen is the Director of Technical Sales, Public Sector at Infoblox. He has over 20 years of experience in developing security solutions for service providers, public sector and enterprise customers.

Bad actors are constantly coming up with ways to evade the defensive techniques put in place by government agencies, educational institutions, healthcare providers, companies and other organizations. To keep up, network security needs what’s known as “defense in depth” — a strategy that leverages different security solutions to provide robust and comprehensive security against unauthorized intruders. Think about securing your house: locks on your doors only protect your doors. But if you have locks on your doors and windows, a high fence, security cameras, an alarm system and two highly trained guard dogs, you have what we call defense in depth. The same goes for networks. When it comes to building a defense-in-depth strategy for your network, the first and most important element is visibility — knowing what is on your network.

Why Visibility? Because You Can’t Protect What You Can’t See

If you can’t see it, you can’t protect it — it’s obvious if you think about it. Without understanding the devices, hardware, software and traffic …
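As a token illustration of that visibility-first principle, this small Go program uses the standard library’s net.Interfaces to inventory the network interfaces and addresses of the host it runs on. Real network visibility tooling draws on far richer sources (DHCP, DNS and IPAM data, traffic analysis), but the premise is the same: the inventory comes before the protection.

```go
package main

import (
	"fmt"
	"net"
)

// Enumerate this host's own network interfaces and their addresses:
// the most basic possible first step toward knowing what is on a network.
func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, iface := range ifaces {
		addrs, _ := iface.Addrs()
		fmt.Printf("%-10s %v\n", iface.Name, addrs)
	}
}
```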
Before the beginning of the COVID-19 pandemic, massive-scale remote connections over the Internet to households largely consisted of connections to entertainment services, such as Netflix. For those types of asymmetric connections, fast download times ensure a good service. However, once the pandemic started, users working from home lacked sufficient upload capacity: upload speeds could be at least 10 times slower than download speeds. This quickly became problematic for work-related connections, such as video and even audio connections for web meetings, said …
Microservices have taken center stage in the software industry. Transitioning from a monolith to a microservices-based architecture empowers companies to deploy their applications more frequently, reliably, independently and at scale. That doesn’t mean everything is rosy in a microservices architecture; there are problems that need to be addressed, just as when designing any distributed system. This is where the “service mesh” concept has become popular. We have been breaking big monolithic applications into smaller applications for quite some time to ease software development and deployment, a shift illustrated by a chart (not reproduced here) borrowed from Burr Sutter’s talk at Devoxx. The introduction of the service mesh was mainly due to a perfect storm within the IT scene. When developers began building distributed systems using a multi-language (polyglot) approach, they needed dynamic service discovery. Operations teams were required to handle the inevitable communication failures smoothly and enforce network policies. Platform teams started adopting container orchestration systems like Kubernetes.

What Is a Service Mesh?

Pavan Belagatti Pavan Belagatti is one …
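To ground what a mesh takes off developers’ plates, here is a hedged Go sketch of the kind of resilience boilerplate (timeouts, retries, backoff) that every polyglot service had to re-implement itself before sidecar proxies; a service mesh moves this logic out of application code and into the platform. The URL below is a placeholder.

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
	"time"
)

// callWithRetry wraps an HTTP call with a per-request timeout, retries on
// failure or 5xx responses, and a simple linear backoff between attempts.
// With a sidecar proxy (e.g. Envoy) in place, this policy lives in the
// mesh configuration instead of in every service's code.
func callWithRetry(url string, attempts int) (*http.Response, error) {
	client := &http.Client{Timeout: 2 * time.Second} // per-request timeout
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(time.Duration(i+1) * 100 * time.Millisecond) // backoff
	}
	return nil, errors.New("service unavailable after retries")
}

func main() {
	if _, err := callWithRetry("http://orders.internal/api/v1/orders", 3); err != nil {
		fmt.Println("error:", err)
	}
}
```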
The open source Istio has emerged as the “dominant” service mesh to manage microservices and Kubernetes environments, Solo.io executives say. Gloo Edge 2.0, to be released in beta in the middle of the year, is the “first and the only” Istio-native API gateway with all of Istio’s native functionality, said Solo.io field CTO Christian Posta. The ingress controller will integrate …

“[Solo.io] is fully committed to Istio. We see it as the dominant service mesh — it’s the one that’s most deployed to production and the most mature,” read a tweet from the SoloCon 2021 event (March 24, 2021).

Solo.io’s proclamation also coincides with a number of new improvements for Solo.io’s Gloo Edge platform announced at the event; the new capabilities feature, among other things, an even tighter integration between …
Version 2.0 of Solo.io’s Gloo Edge will integrate Gloo Edge, an ingress controller, and the open source Istio service mesh into a single control plane, Solo.io said this week during its SoloCon conference. “Most organizations have regarded Istio as something to ‘attack’ once it’s become more approachable and easier to manage,” said Torsten Volk, an analyst for Enterprise Management Associates (EMA). “These Solo.io announcements might ring in this new age of ‘service mesh for everyone.’” In a …
Liz Rice Liz Rice is chair of the CNCF’s Technical Oversight Committee.

For lots of folks in software engineering, every now and again a technology comes along that really sparks the imagination. I’m sure that many readers of The New Stack will recall their first encounters with containers, very possibly through Docker, and the realization that this was a technology that could change everything. Containerization is arguably the lynchpin of the move to cloud native. But every step forward creates new challenges and new boundaries to push. For me, eBPF is another transformational technology, and one that I’m excited to get more deeply involved in as I join the leadership team at eBPF pioneers Isovalent. Brendan Gregg from Netflix coined the phrase “superpowers for Linux” for eBPF, and that’s no exaggeration. In my role as chair of the …
ThousandEyes sponsored this podcast. The cloud has become the new data center for how business gets done, and the internet is the new enterprise network backbone for how services are connected, said …
Buoyant has released version 2.10 of Linkerd, which moves a number of formerly default features out of the core and into optional extensions, explained William Morgan, CEO of Buoyant, in an interview. “An extension is basically a Kubernetes controller or operator. We’re relying as much as possible on Kubernetes primitives, but what we are doing is, there’s a little bit of wrapper magic that happens that makes those extensions feel like the rest of Linkerd.” Among those formerly default features now being offered as extensions are the multicluster extension, which contains cross-cluster communications tools, the …
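A minimal sketch of the controller pattern Morgan refers to may help: a control loop observes desired state, compares it with actual state, and acts to converge the two. Real Linkerd extensions are built on Kubernetes watch machinery (informers, API objects) rather than the toy in-memory state below.

```go
package main

import (
	"fmt"
	"time"
)

// The controller/operator shape in miniature: reconcile desired vs. actual.
type state struct{ replicas int }

func desired() state { return state{replicas: 3} } // e.g. read from an API object

var actual = state{replicas: 1} // e.g. observed from the cluster

func reconcile() {
	d := desired()
	if actual != d {
		fmt.Printf("converging: actual=%v desired=%v\n", actual, d)
		actual = d // a real controller would create/update resources here
	}
}

func main() {
	for i := 0; i < 3; i++ { // a real controller runs until stopped
		reconcile()
		time.Sleep(100 * time.Millisecond)
	}
}
```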
ThousandEyes sponsored this post. Mike Hicks Mike is a principal solutions analyst at ThousandEyes, a part of Cisco, and a recognized expert with more than 30 years of experience in network and application performance.

If you were to put application and network teams into a single room and ask them whether ensuring optimal application performance and availability for their end users was critical to the success of their companies, you would undoubtedly have all heads nodding yes. The question, of course, is how? Many of us have lived through war rooms urgently called in response to degraded customer experiences due to a performance or availability problem with a key application. Today’s modern applications are more distributed and modular than ever before, so not only has the number of stakeholders increased, but the lines of demarcation have also become blurred — causing confusion over responsibilities. Managing and optimizing application performance today depends on an increasingly complex underlying network and internet infrastructure, one that traditional application monitoring solutions fail to bridge, leaving visibility gaps for DevOps and NetOps to struggle with. These heterogeneous environments introduce changing conditions that are sparking new tactics to manage the application experience; and monitoring is one of …
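One concrete tactic along these lines is active probing of the stages a request actually passes through. The Go sketch below, aimed at a placeholder URL, uses the standard library’s httptrace hooks to time DNS resolution, TCP connect and time to first byte for a single request; production internet-aware monitoring measures far more (paths, per-hop loss, BGP), but the principle is visible here.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptrace"
	"time"
)

func main() {
	var start, dnsDone, connDone, firstByte time.Time

	// Hooks fire as the request progresses through each network stage.
	trace := &httptrace.ClientTrace{
		DNSDone:              func(httptrace.DNSDoneInfo) { dnsDone = time.Now() },
		ConnectDone:          func(string, string, error) { connDone = time.Now() },
		GotFirstResponseByte: func() { firstByte = time.Now() },
	}
	req, _ := http.NewRequest("GET", "https://example.com/", nil)
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

	start = time.Now()
	resp, err := http.DefaultTransport.RoundTrip(req)
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()

	fmt.Println("dns:       ", dnsDone.Sub(start))
	fmt.Println("connect:   ", connDone.Sub(start))
	fmt.Println("first byte:", firstByte.Sub(start))
}
```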
Lin Sun Lin is the Director of Open Source at Solo.io. She has worked on the Istio service mesh since 2017 and serves on the Istio Technical Oversight Committee. Previously, she served on the Istio Steering Committee for three years and was a Senior Technical Staff Member and Master Inventor at IBM for 15+ years. She is the author of the book Istio Explained and has more than 200 patents to her name.

This year’s first-ever IstioCon brought the community together around Istio, the service mesh that connects microservices. As the conference program co-chair, I had the incredible honor of working with the rest of the program committee to select conference submissions from a diverse range of world-class speakers. I wanted to share my five key takeaways from the show:

2020: A Year of Istio Innovation

I have heard repeatedly from users that Istio is much easier to use thanks to the consolidation of all control plane components into Istiod. The removal of Mixer and the introduction of WebAssembly extensibility capabilities have also been widely lauded by the community. A complete list …
Sagar Nangare Sagar Nangare is a technology blogger focusing on data center technologies (networking, telecom, cloud, storage) and emerging domains like edge computing, IoT, machine learning and AI. He is based in Pune and currently serves Calsoft Inc. as Digital Strategist.

Despite the service mesh being a fairly new technology compared to other cloud native technologies, a March 2020 Cloud Native Computing Foundation report …