Avesha Deploys Machine Learning for More Efficient Load Balancing

When Avesha’s Raj Nair looked into the state of load balancing, he was surprised to find that the industry hadn’t changed much over the past twenty years. Load balancing still mostly happens at a local level, within a particular cloud or cluster, and uses the same formulas that he helped popularize more than two decades ago. This week, Avesha is demonstrating its new AI-based load balancing technology at KubeCon+CloudNativeCon 2021.

For example, a load balancer can use a “round-robin” formula, where requests go to each server in turn and then back to the first one. A “weighted round-robin” is similar, except that some servers receive more requests than others because they have more available capacity. A “sticky cookie load balancer” sends all requests from a particular session to the same server so that, say, customers don’t get moved around in the middle of shopping sessions and lose their shopping carts. “There are a few other variations, but they’re all based on fixed settings,” said Nair. “The state of the art hasn’t moved much in this area.”

A very simple change that would make load balancers immediately more effective is to automatically adjust the weights based on server performance. “It’s actually a very low-hanging fruit,” he said. “I don’t know why they aren’t all doing this.” That’s what Avesha started looking at. Then, in addition to server performance, the company added other factors, like travel path times. The resulting service, the Smart Application Cloud Framework, was launched Tuesday.

Deployment Structure

Avesha is deployed with an agent that sits in its own container inside a cluster or private cloud. It talks to its fellow agents and to Avesha’s back-end systems via secure virtual private networks. The back end collects information about traffic paths and server performance, then uses machine learning to determine optimal routing strategies. The specific AI technique used is reinforcement learning.
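To make the formulas concrete, here is a minimal Python sketch of a weighted round-robin with the “low-hanging fruit” adjustment Nair describes: nudging a server’s weight based on observed latency. The server names, weights, and target-latency heuristic are illustrative assumptions, not Avesha’s implementation.

```python
import itertools

# Hypothetical server pool: names mapped to capacity-based weights.
servers = {"a": 3, "b": 2, "c": 1}

def weighted_round_robin(pool):
    """Yield servers in proportion to their weights, round-robin style."""
    expanded = [name for name, weight in pool.items() for _ in range(weight)]
    return itertools.cycle(expanded)

def adjust_weight(pool, name, avg_latency_ms, target_ms=100, floor=1, cap=10):
    """Lower a slow server's weight, raise a fast server's weight.
    (A real balancer would rebuild its rotation after weights change.)"""
    if avg_latency_ms > target_ms:
        pool[name] = max(floor, pool[name] - 1)
    else:
        pool[name] = min(cap, pool[name] + 1)

rr = weighted_round_robin(servers)
first_six = [next(rr) for _ in range(6)]  # "a" 3x, "b" 2x, "c" 1x
```

A plain round-robin is the special case where every weight is 1; the sticky-cookie variant would instead key the choice on a session identifier.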
The system makes a recommendation, observes how the recommendation works in practice, then adjusts its model accordingly. “It is continuously tuning your network,” said Nair. “The network is constantly undergoing lots of changes, with traffic and congestion.” It also looks at the performance of individual servers, and if some are having problems handling requests, it automatically routes traffic elsewhere. And it works across all types of deployments: multiple public clouds, private clouds, and edge computing installations.

“The methods currently in use in Kubernetes are static,” he said. “You set fixed thresholds with a lower bound and an upper bound. But nobody even knows how to set those thresholds.” People wind up guessing, he said, set some basic targets, and then leave them in place. “You end up wasting resources,” he said.

The Avesha technology is more like a self-driving car, he said. There are still parameters and guardrails but, within those constraints, the system continually optimizes for the desired outcome, whether that’s the lowest latency, maximum cost savings, or even compliance-related data movement restrictions. “You want your data traffic to be managed in accordance with your policies,” he said. “For example, there might be regulations about where your data is and isn’t allowed to go.”

Performance Improvements

In internal studies, Avesha has seen improvements of 20% to 30% in the number of requests handled within their performance targets, compared to standard weighted round-robin approaches. When some clusters have hundreds of thousands of nodes, 30% is a big number, he said.
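The recommend-observe-adjust cycle described above can be sketched as a simple feedback loop that shifts traffic toward endpoints beating the average latency. The endpoint names, traffic-split vector, learning rate, and simulated latencies are all invented for illustration; Avesha’s actual reinforcement-learning model is not public.

```python
# Toy recommend-observe-adjust loop in the spirit of reinforcement learning:
# endpoints that respond faster than average earn a larger traffic share.

def normalize(weights):
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

def observe_and_adjust(split, observed_latency_ms, lr=0.2):
    """Shift the traffic split toward endpoints with below-average latency."""
    avg = sum(observed_latency_ms.values()) / len(observed_latency_ms)
    for endpoint, latency in observed_latency_ms.items():
        # Positive reward (latency below average) increases the share.
        split[endpoint] *= 1 + lr * (avg - latency) / avg
    return normalize(split)

split = {"us-east": 1 / 3, "us-west": 1 / 3, "edge-1": 1 / 3}
split = observe_and_adjust(split, {"us-east": 80, "us-west": 120, "edge-1": 40})
# After one adjustment, the fastest endpoint ("edge-1") carries the largest share.
```

Repeating the loop as conditions change is what distinguishes this from the static thresholds Nair criticizes: the split keeps adapting instead of being set once and left in place.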
Companies will see improvements in customer experience, lower bandwidth costs, and less need for manual intervention when things go wrong in the middle of the night. And it’s not just about the business bottom line, he added. “If you translate that into wasted energy, wasted natural resources, there are lots of benefits,” he said. For some applications, like video streaming, better performance would translate into competitive advantage, he said. “It’s like the difference between getting high-definition and standard-definition video.”

There’s no commercial product currently on the market that offers AI-powered load balancing, he said, though some companies probably have their own proprietary technology that does something similar. “Netflix is an example of a company that’s a leader in the cloud native world,” he said. “I would say there’s a fairly good chance that they’ve already incorporated AI into their load balancing.” Other large cloud native technology companies with AI expertise may have built their own platforms as well, he said. “Nobody has said anything publicly,” he said. “But it’s such an obvious thing to do that I am willing to believe that they have something, but are just keeping it to themselves.”

There are also some narrow use cases, like that of content delivery networks. CDNs typically deliver content, like web pages, to users. They work by distributing copies of the content across the internet and optimizing for the shortest possible distance between the end user and the source of the content. Avesha’s approach is more general, supporting connections between individual microservices. “It’s a little bigger than what a CDN is trying to do,” he said.
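The CDN strategy contrasted above, reduced to its essence, is replica selection by proximity. Here is a toy sketch; the replica sites, coordinates, and geometric distance metric are invented for illustration (real CDNs use network distance and latency, not geography alone).

```python
# Toy CDN-style replica selection: serve each user from the nearest copy.
# Replica sites and coordinates are assumed values for illustration only.

REPLICAS = {
    "nyc": (40.7, -74.0),
    "london": (51.5, -0.1),
    "tokyo": (35.7, 139.7),
}

def nearest_replica(user_lat, user_lon):
    """Pick the replica minimizing squared Euclidean distance to the user."""
    def dist2(site):
        lat, lon = REPLICAS[site]
        return (lat - user_lat) ** 2 + (lon - user_lon) ** 2
    return min(REPLICAS, key=dist2)

print(nearest_replica(48.9, 2.4))  # a user near Paris -> "london"
```

This optimizes a single static objective (distance to a cached copy), whereas the approach the article describes optimizes routing between live microservices under changing conditions.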
“We are literally at the cutting edge with this.”

AI-Powered Load Balancing as a Feature

At some point, cloud vendors and third-party service providers will begin offering intelligent load balancing to their enterprise customers, either by building their own technology or by buying or partnering with Avesha or any competitors who might appear on the scene. “One way or the other, you’re going to be able to take advantage of it,” said Nair.

Avesha itself is currently working with partners, he said, including some major industry players, and he expects to make announcements this summer. But enterprises can also work directly with Avesha and get a jump on the competition, he added. Enterprises that deploy workloads to multiple clouds would find the technology of most interest. Avesha is currently working with several companies on proof-of-concept projects, typically companies at $50 million in revenue or above, in verticals such as media, manufacturing, health care, and telecom. “We have also engaged with some partners who are big cloud players,” he said. More information, as well as return-on-investment analyses, will be released in the next few months.

Verizon and AWS Serve Doctors at the Edge

One case study that has been made public was a

Monitoring as code with Sensu + Ansible

A comprehensive infrastructure as code (IaC) initiative should include monitoring and observability. Incorporating the active monitoring of the infrastructure under management results in a symbiotic relationship in which failures are detected automatically, enabling event-driven code changes and new deployments.

In this post, I’ll recap a webinar I hosted with Tadej Borovšak, Ansible Evangelist at XLAB Steampunk (who we collaborated with on our certified Ansible Content Collection for Sensu Go). You’ll learn how monitoring as code can serve as a feedback loop for IaC workflows, improving the overall automation solution, and how to automate your monitoring with the certified Ansible Content Collection for Sensu Go (with demos!).

Before we dive in, here’s a brief overview of Sensu.

 

About Sensu

Sensu is the turn-key observability pipeline that delivers monitoring as code on any cloud — from bare metal to cloud native. Sensu provides a flexible observability platform for DevOps and SRE teams, allowing them to reuse their existing monitoring and observability tools, and integrates with best-of-breed solutions — like Red Hat Ansible Automation Platform. 

With Sensu, you can reuse existing tooling, like Nagios plugins, as well as monitor ephemeral, cloud-based infrastructure, like Red Hat OpenShift.

How to best set up command aliases on Linux

Used frequently, bash aliases can make working on the Linux command line a lot smoother and easier, but they can also be complicated and hard to remember. This post examines how you might make your aliases work for you rather than vice versa.

In general, aliases are especially good for:

  • simplifying commands that are long and overly complex
  • remembering commands with odd or complicated names
  • saving time using commands that you use very often

What you need to keep in mind is that:

  • aliases can themselves be hard to remember
  • giving an alias the same name as a regular command can be a good thing or a bad thing (more on this shortly)

How to create an alias

Use the alias command and remember to add it to your ~/.bashrc file so that it will still be waiting for you whenever you log in.
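As a concrete illustration, lines like these could go in ~/.bashrc; the alias names and the commands behind them are arbitrary examples, not recommendations from the article.

```shell
# Simplify a long or fiddly command:
alias ll='ls -alF'

# Remember a command with an odd name ("checkdns" is an example name):
alias checkdns='dig +short'

# Save time on a command you run constantly:
alias gs='git status'

# Reload ~/.bashrc so newly added aliases take effect in the current shell:
alias src='source ~/.bashrc'

# Run "alias" with no arguments to list every alias currently defined.
alias
```

Note that defining an alias at the prompt lasts only for that session; persisting it requires adding it to ~/.bashrc as described above.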

Google announces custom video transcoding chip

You know Google has more money than it could ever spend when it invests in a custom chip to do one task. And now it has done so for the third time.

The search giant has developed a new chip and deployed it in its data centers to compress video content. The chips, called Video (Trans)Coding Units, or VCUs, do that faster and more efficiently than traditional CPUs.

In a blog post discussing the project, Jeff Calow, a lead software engineer at Google, said the VCU delivers the highest YouTube video quality possible on your device while consuming less bandwidth than before.

Dell delivers lineup of on-prem, pay-per-use hardware

Dell is launching a new offering of managed storage, server, and hyperconverged infrastructure that can be deployed in a company's own data center, at edge locations, or in colocation facilities, with enterprises paying for capacity as needed.

Dubbed Dell Apex, it includes storage, cloud services, and a console for streamlined management. The launch coincides with the kickoff of Dell Technologies World 2021, which is being held virtually this year.

Pay-per-use hardware models such as Dell Apex and HPE GreenLake are designed to deliver cloud-like pricing structures and flexible capacity to private data centers. The concept of pay-per-use hardware isn't new, but the buzz around it is growing. Enterprises are looking for alternatives to buying equipment outright for workloads that aren't a fit for public cloud environments.

HPE kicks off software-defined storage-as-a-service

Hewlett Packard Enterprise took a big step toward delivering on its "entire-portfolio-as-a-service" strategy this week by unveiling cloud-based storage and data services that will help manage storage needs in distributed IT enterprises.

HPE said in 2019 that by 2022 it wanted to remake itself into a more service-oriented company, and it announced plans to transition its entire portfolio to subscription-based, pay-per-use, and as-a-service offerings. It has since made headway, for example, recently adding HPE GreenLake cloud services for HPC.

Back to Basics: Do We Need Interface Addresses?

In the world of ubiquitous Ethernet and IP, it’s common to think that one needs addresses in packet headers in every layer of the protocol stack. We have MAC addresses, IP addresses, and TCP/UDP port numbers… and low-level addresses are assigned to individual interfaces, not nodes.

Turns out that’s just one option… and not exactly the best one in many scenarios. You could have interfaces with no addresses, and you could have addresses associated with nodes, not interfaces.

JNCIE-DC Lab Experience

After plenty of hours of studying and labbing the wide-ranging topics on the JNCIE-DC blueprint, I took the JNCIE-DC lab exam and passed! I can proudly say I’m JNCIE-DC #389. In this conclusion to my previous JNCIE-DC posts about my lab setup and the remote lab environment, I will talk about my experience […]

The post JNCIE-DC Lab Experience first appeared on Rick Mur.

Tech Bytes: VMware vRealize Network Insight: App-Aware Network Monitoring And Assurance (Sponsored)

This Tech Bytes podcast explores the network assurance and verification feature in VMware's vRealize Network Insight network monitoring software. This feature builds a real-time model of your production network by collecting information from switches, routers, firewalls, and other network devices. This model can then be used for testing changes, verifying reachability, improving troubleshooting, and more. VMware is our sponsor.

The post Tech Bytes: VMware vRealize Network Insight: App-Aware Network Monitoring And Assurance (Sponsored) appeared first on Packet Pushers.

Calico Enterprise enables live view of cloud-native apps deployed in Kubernetes

We are happy to announce that the latest release of Calico Enterprise delivers unprecedented levels of Kubernetes observability! Calico Enterprise 3.5 provides full-stack observability across the entire Kubernetes environment, from application layer to networking layer.

With this new release, developers, DevOps, SREs, and platform owners get:

  • A live, high-fidelity view of microservices and workload interactions in the environment, with the ability to take corrective actions in real time
  • An easy-to-understand, action-oriented view that maintains correlations at the service, deployment, container, node, pod, network, and packet levels
  • Kubernetes context for easy filtering and subsequent analysis of traffic payloads
  • A Dynamic Service Graph representing traffic between namespaces, microservices, and deployments for faster problem identification and troubleshooting
  • An interactive display that shows DNS information categorized by microservices and workloads, to determine whether DNS is the root cause of application connectivity issues
  • The ability to customize the duration and packet size for packet capture
  • Application-level observability to detect and prevent anomalous behaviors

For more information, see our official press release.

Are you a Calico Cloud user? Not to worry—these same features are now available in Calico Cloud, too.

To learn more about new cloud-native approaches for establishing security and observability with Kubernetes, check out the full announcement.

Is Sticking With A Networking Vendor As Risky As Changing?

The networking industry has had a bumper crop of startup companies, including a few unicorns, new and novel solutions, and fresh standards-driven tech in the last decade. There’s been enough churn that you’d think the landscape would be unrecognizable from what it was ten years back. And yet, the dominant vendor supplying networks to enterprises remains Cisco.

Data networking folks sometimes wonder why Cisco remains such a dominant force after all these years. With all the churn in the industry, with all the fancy new products, companies and approaches, with the cloud changing how computing is done, and with software eating the world, there are many more options than Cisco to meet networking needs. Of course, Cisco has always had competition. Cisco’s never gotten 100% of the pie, but, depending on market segment, there’s rarely been a second juggernaut in the enterprise networking space. The choice has typically been between Cisco and everyone else.

But in 2021, the networking market is increasingly fragmented, with more startups than I’ve even heard of chasing after slivers of the diverse networking pie. Sure, that impacts Cisco. Still, Cisco tends to dominate, even if their share isn’t quite what it was, depending on which market segment you look at.