Archive

Category Archives for “Networking”

Application Performance in the Age of SD-WAN

Mike Hicks Mike is a principal solutions analyst at ThousandEyes, a part of Cisco, and a recognized expert with more than 30 years of experience in network and application performance. In the olden days, users were in offices and all apps lived in on-premises data centers. The WAN (wide area network) was what connected them all. Today, with the adoption of SaaS apps and associated dependencies such as cloud services and third-party API endpoints, the WAN is being stretched beyond recognition. In its place, the internet now directly carries a large — if not majority — share of all enterprise traffic flows. Enterprises are increasingly moving away from legacy WANs in favor of internet-centric, software-defined WANs, also called SD-WANs. Architected for interconnection with cloud and external services, SD-WANs can play a critical role in making enterprise networks cloud-ready, more cost-efficient and better suited to delivering quality digital experiences to customers and employees at all locations. But the transformation brings new visibility needs, and ensuring that SD-WAN delivers on expectations requires a new approach to monitoring, one that addresses network visibility and application performance equally…

Avesha Deploys Machine Learning for More Efficient Load Balancing

When Avesha CEO Raj Nair took a fresh look at load balancing, he found, to his surprise, that the industry hadn’t changed much over the past twenty years. This week, Avesha is demonstrating its new AI-based load balancing technology at KubeCon+CloudNativeCon 2021. Load balancing still mostly happens at a local level, within a particular cloud or cluster, and uses the same formulas that he helped popularize more than two decades ago. For example, a load balancer can use a “round-robin” formula, where requests go to each server in turn and then back to the first one. A “weighted round-robin” is similar, except that some servers get more requests than others because they have more available capacity. A “sticky cookie load balancer” is one where all the requests from a particular session are sent to the same server so that, say, customers don’t get moved around in the middle of shopping sessions and lose their shopping carts. “There are a few other variations, but they’re all based on fixed settings,” said Nair. “The state of the art hasn’t moved much in this area.” A very simple change that would make load balancers immediately more effective is to automatically adjust the weights based on server performance. “It’s actually a very low-hanging fruit,” he said. “I don’t know why they aren’t all doing this.” That’s what Avesha started looking at. Then, in addition to server performance, the company also added in other factors, like travel path times. The resulting service, the Smart Application Cloud Framework, was launched Tuesday.

Deployment Structure

Avesha is deployed as an agent that sits in its own container inside a cluster or private cloud. It talks to its fellow agents and to Avesha’s backend systems via secure virtual private networks. The backend system collects information about traffic paths and server performance, then uses machine learning to determine optimal routing strategies. The specific AI technique used is reinforcement learning: the system makes a recommendation, looks at how the recommendation works in practice, then adjusts its model accordingly. “It is continuously tuning your network,” said Nair. “The network is constantly undergoing lots of changes, with traffic and congestion.” It also looks at the performance of individual servers, and if some are having problems handling requests, it automatically routes them elsewhere. And it works across all types of deployments — multiple public clouds, private clouds and edge computing installations. “The methods currently in use in Kubernetes are static,” he said. “You set fixed thresholds with a lower bound and an upper bound. But nobody even knows how to set those thresholds.” People wind up guessing, he said, set some basic targets and then leave them in place. “You end up wasting resources,” he said. The Avesha technology is more like a self-driving car, he said. There are still parameters and guard rails but, within those constraints, the system continually optimizes for the desired outcome, whether it be the lowest latency, maximum cost savings or even compliance-related data movement restrictions. “You want your data traffic to be managed in accordance with your policies,” he said.
“For example, there might be regulations about where your data is and isn’t allowed to go.”

Performance Improvements

In internal studies, Avesha has seen improvements of 20% to 30% in the number of requests handled within their performance targets, compared to standard weighted round-robin approaches. When some clusters have hundreds of thousands of nodes, 30% is a big number, he said. Companies will see improvements in customer experience, lower bandwidth costs and less need for manual intervention when things go wrong in the middle of the night. And it’s not just about the business bottom line, he added. “If you translate that into wasted energy, wasted natural resources, there are lots of benefits,” he said. For some applications, like video streaming, better performance would translate into competitive advantage, he said. “It’s like the difference between getting high-definition and standard-definition video.” There’s no commercial product currently on the market that offers AI-powered load balancing, he said, though some companies probably have their own proprietary technology to do something similar. “Netflix is an example of a company that’s a leader in the cloud native world,” he said. “I would say there’s a fairly good chance that they’ve already incorporated AI into their load balancing.” Other large cloud native technology companies with AI expertise may have also built their own platforms, he said. “Nobody has said anything publicly,” he said. “But it’s such an obvious thing to do that I am willing to believe that they have something, but are just keeping it to themselves.” There are also some narrow use cases, like that of content delivery networks. CDNs typically deliver content, like web pages, to users. They work by distributing copies of the content across the internet and optimizing for the shortest possible distance between the end user and the source of the content. Avesha’s approach is more general, supporting connections between individual microservices. “It’s a little bigger than what a CDN is trying to do,” he said. “We are literally at the cutting edge with this.”

AI-Powered Load Balancing as a Feature

At some point, cloud vendors and third-party service providers will begin offering intelligent load balancing to their enterprise customers, either by building their own technology or by buying or partnering with Avesha or any competitors who might appear on the scene. “One way or the other, you’re going to be able to take advantage of it,” said Nair. Avesha itself is currently working with partners, he said, including some major industry players, and he expects to make announcements this summer. But enterprises can also work directly with Avesha and get a jump on the competition, he added. Enterprises that deploy workloads to multiple clouds would find the technology of most interest, he added. Avesha is currently working with several companies on proof-of-concept projects. These are typically companies at $50 million or more in revenue, in verticals such as media, manufacturing, health care and telecom. “We have also engaged with some partners who are big cloud players,” he said. More information, as well as return-on-investment analyses, will be released in the next few months.

Verizon and AWS Serve Doctors at the Edge

One case study that has been made public was a…
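To make the formulas above concrete, here is a minimal sketch of a weighted round-robin balancer plus the “low-hanging fruit” Nair describes: nudging each server’s weight up or down based on observed latency. This is illustrative Python, not Avesha’s implementation; the server names, latency target and adjustment rule are all assumptions.

```python
import itertools
import random

class WeightedRoundRobin:
    """Classic weighted round-robin: each server appears in the
    rotation in proportion to its weight."""

    def __init__(self, weights):
        # weights: dict mapping server name -> integer weight
        self.weights = dict(weights)
        self._rebuild()

    def _rebuild(self):
        # Expand {"a": 2, "b": 1} into the rotation [a, a, b].
        rotation = [s for s, w in self.weights.items()
                    for _ in range(max(1, round(w)))]
        self._cycle = itertools.cycle(rotation)

    def next_server(self):
        return next(self._cycle)

    def report_latency(self, server, latency_ms, target_ms=100):
        # The dynamic tweak Nair describes: lower a server's weight
        # when it misses the latency target, raise it when it beats
        # it, instead of leaving the weights fixed forever.
        if latency_ms > target_ms:
            self.weights[server] = max(1, self.weights[server] - 1)
        else:
            self.weights[server] += 1
        self._rebuild()

lb = WeightedRoundRobin({"us-east": 3, "eu-west": 2, "edge-1": 1})
for _ in range(10):
    server = lb.next_server()
    observed = random.uniform(20, 250)   # stand-in for a real measurement
    lb.report_latency(server, observed)
    print(f"{server}: {observed:.0f} ms -> weights {lb.weights}")
```

A reinforcement learning system like the one described would replace the fixed if/else adjustment with a learned policy, but the feedback loop, measure, adjust, re-route, is the same shape.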

The 4 Definitions of Multicloud: Part 4 — Traffic Portability

With the goal of bringing more productive discussions on this topic into focus and understanding which types of multicloud capabilities are worth pursuing, this series concludes with a look at multicloud through the lens of traffic portability.

Traffic Portability

Armon Dadgar Armon is co-founder and CTO of HashiCorp, where he brings his passion for distributed systems to the world of DevOps tooling and cloud infrastructure. Multicloud traffic portability means you can shift traffic between environments dynamically. If you have geographically dispersed users, traffic portability would allow you to route traffic to the nearest cloud provider that can serve them. So, if your app can run on Azure and AWS, maybe there’s an AWS data center closer to your customer than any Azure one. Or maybe one cloud vendor works better for data sovereignty in Europe, so you route to that particular vendor only for those requests. In most cases, the goal of traffic portability is the ability to shift traffic very quickly and dynamically between multiple cloud platforms and on-premises data centers. This could also mean you’re balancing traffic 50/50 between AWS and Azure. Or maybe you’re doing maintenance in your Google Cloud environments, so you move 100% of traffic to…
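A minimal sketch of the routing policy Dadgar describes: sovereignty pinning for EU requests, a 50/50 cross-cloud split elsewhere, and the ability to drain a cloud entirely. All endpoint names, regions and policy values here are hypothetical, and a real deployment would enforce this in a global load balancer or DNS layer rather than in application code.

```python
import random

# Hypothetical endpoints for the same app running on two clouds.
ENDPOINTS = {
    "aws-eu-central":   {"cloud": "aws",   "region": "eu"},
    "azure-westeurope": {"cloud": "azure", "region": "eu"},
    "aws-us-east":      {"cloud": "aws",   "region": "us"},
}

# Policy knobs: pin EU traffic to one vendor for data sovereignty,
# split the rest 50/50 between clouds, and allow draining a cloud
# entirely (e.g. during maintenance in that provider).
EU_VENDOR = "azure"
SPLIT = {"aws": 0.5, "azure": 0.5}
DRAINED = set()   # e.g. {"azure"} to move 100% of traffic off it

def route(user_region):
    candidates = [n for n, e in ENDPOINTS.items()
                  if e["cloud"] not in DRAINED]
    # The sovereignty rule wins over everything else.
    if user_region == "eu":
        eu = [n for n in candidates
              if ENDPOINTS[n]["region"] == "eu"
              and ENDPOINTS[n]["cloud"] == EU_VENDOR]
        if eu:
            return random.choice(eu)
    # Otherwise prefer same-region endpoints, weighted by cloud split.
    local = [n for n in candidates
             if ENDPOINTS[n]["region"] == user_region] or candidates
    weights = [SPLIT.get(ENDPOINTS[n]["cloud"], 0.0) for n in local]
    return random.choices(local, weights=weights, k=1)[0]

print(route("eu"))   # pinned to azure-westeurope
print(route("us"))   # nearest endpoint, subject to the 50/50 split
```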

Defense in Depth: The First Step to Security Certainty

Allen McNaughton Allen is the Director of Technical Sales, Public Sector at Infoblox. He has over 20 years of experience in developing security solutions for service providers, public sector and enterprise customers. Bad actors are constantly coming up with ways to evade the defensive techniques put in place by government agencies, educational institutions, healthcare providers, companies and other organizations. To keep up, network security needs what’s known as “defense in depth” — a strategy that layers different security solutions to provide robust and comprehensive protection against unauthorized intruders. Think about securing your house: locks on your doors only protect your doors. But if you have locks on your doors and windows, a high fence, security cameras, an alarm system and two highly trained guard dogs, you have what we call defense in depth. The same goes for networks. When it comes to building a defense-in-depth strategy for your network, the first and most important element is visibility — knowing what is on your network.

Why Visibility? Because You Can’t Protect What You Can’t See

If you can’t see it, you can’t protect it — it’s obvious if you think about it. Without understanding the devices, hardware, software and traffic…

How Your Network Impacts User Experience in a COVID-19 World

Before the COVID-19 pandemic, massive-scale remote connections over the internet to households largely consisted of connections to entertainment services, such as Netflix. For those types of download-heavy connections, fast download speeds ensure good service. Once the pandemic started, however, users working from home found that their upload speeds, which could be at least 10 times slower than their download speeds, were often insufficient. This quickly became problematic for work-related connections, such as video and even audio connections for web meetings, said…
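A rough back-of-the-envelope illustration of that asymmetry (all figures below are assumptions for illustration, not numbers from the article): a 100/10 Mbps plan leaves ample headroom for incoming streams but only a handful of outgoing ones.

```python
# Illustrative numbers: a typical asymmetric home plan and a rough
# per-stream rate for an HD video-call stream.
down_mbps, up_mbps = 100, 10      # the 10x asymmetry described above
stream_mbps = 2.5                 # one HD video stream, either direction

outgoing = int(up_mbps // stream_mbps)     # streams the uplink can carry
incoming = int(down_mbps // stream_mbps)   # streams the downlink can carry
print(f"uplink supports ~{outgoing} outgoing HD streams")    # ~4
print(f"downlink supports ~{incoming} incoming HD streams")  # ~40
```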

Service Meshes in the Cloud Native World

Microservices have taken center stage in the software industry. Transitioning from a monolith to a microservices-based architecture empowers companies to deploy their applications more frequently, reliably and independently, and to scale without hassle. This doesn’t mean everything is rosy in a microservices architecture; some problems need to be addressed, just as when designing any distributed system. This is where the “service mesh” concept is becoming popular. We have been thinking about breaking big monolithic applications into smaller ones for quite some time to ease software development and deployment. (A chart from Burr Sutter’s Devoxx talk illustrates this progression.) The introduction of the service mesh was mainly due to a perfect storm within the IT scene. When developers began building distributed systems using a multi-language (polyglot) approach, they needed dynamic service discovery. Operations teams were required to handle the inevitable communication failures smoothly and enforce network policies. Platform teams started adopting container orchestration systems like Kubernetes.

What Is a Service Mesh?

Pavan Belagatti Pavan Belagatti is one…
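The “handle communication failures smoothly” part is exactly the logic a service mesh pulls out of each service and into its proxies. As a sketch of what every polyglot service would otherwise have to re-implement itself, here is timeout-plus-retry-with-backoff in Python; the in-cluster service URL is a hypothetical name, and a real mesh applies such policies in a sidecar, not in application code.

```python
import time
import urllib.request

def call_with_retries(url, attempts=3, timeout_s=2.0, backoff_s=0.2):
    """Timeout and retry-with-exponential-backoff: the kind of
    resilience logic a service mesh sidecar applies uniformly so
    that each service doesn't have to."""
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=timeout_s) as resp:
                return resp.read()
        except OSError:   # covers URLError, timeouts, connection resets
            if attempt == attempts - 1:
                raise
            time.sleep(backoff_s * 2 ** attempt)   # back off before retrying

# Hypothetical in-cluster service name, assuming Kubernetes DNS:
# body = call_with_retries("http://orders.default.svc.cluster.local/api/v1/orders")
```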

Solo.io: Istio Is Winning the Service Mesh War

The open source Istio has emerged as the “dominant” service mesh to manage microservices and Kubernetes environments, Solo.io executives say. Gloo Edge 2.0, to be released in beta in the middle of the year, will be the “first and the only” Istio-native API gateway with all of Istio’s native functionality, said Christian Posta, field CTO at Solo.io, during SoloCon 2021. Solo.io “is fully committed to Istio. We see it as the dominant service mesh—it’s the one that’s most deployed to production and the most mature.” Solo.io’s proclamation also coincides with a number of new improvements for its Gloo Edge platform. The new capabilities feature, among other things, an even tighter integration between…

Gloo Edge 2.0: A Fully Istio-Integrated API Gateway for Multiple Clusters

With version 2.0 of Solo.io’s Gloo Edge, the Gloo Edge ingress controller and the open source Istio service mesh will form a single control plane, Solo.io said this week during its SoloCon 2021 conference. “Most organizations have regarded Istio as something to ‘attack once it’s become more approachable and easier to manage,’” said Torsten Volk, an analyst for Enterprise Management Associates (EMA). “These Solo.io announcements might ring in this new age of ‘service mesh for everyone.’” In a…

Liz Rice: Following the ‘Superpower’ Promise of eBPF

Liz Rice Liz is chair of the CNCF’s Technical Oversight Committee. For lots of folks in software engineering, every now and again a technology comes along that really sparks the imagination. I’m sure that many readers of The New Stack will recall their first encounter with containers, very possibly through Docker, and the realization that this was a technology that could change everything. Containerization is arguably the lynchpin of the move to cloud native. But every step forward creates new challenges and new boundaries to push. For me, eBPF is another transformational technology, and one that I’m excited to get more deeply involved in as I join the leadership team at eBPF pioneers Isovalent. Brendan Gregg of Netflix coined the phrase “superpowers for Linux” for eBPF, and that’s no exaggeration. In my role as chair of the…
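For a small taste of those superpowers, here is a minimal eBPF program driven from Python via the BCC toolkit, counting process executions inside the kernel with no instrumentation of the applications themselves. This is an illustrative sketch, not from the article; it assumes a Linux host with BCC installed and root privileges.

```python
import time
from bcc import BPF  # BCC front end; assumes the bcc package is installed

# Count exec() calls per command name, aggregated in a kernel-side map.
program = r"""
struct key_t { char comm[16]; };
BPF_HASH(counts, struct key_t, u64);

TRACEPOINT_PROBE(sched, sched_process_exec) {
    struct key_t key = {};
    bpf_get_current_comm(&key.comm, sizeof(key.comm));
    u64 zero = 0, *val = counts.lookup_or_try_init(&key, &zero);
    if (val) { (*val)++; }
    return 0;
}
"""

b = BPF(text=program)   # compiles and loads the program into the kernel
print("Counting exec() calls per command... Ctrl-C to stop")
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    # Read the kernel map from user space and print the totals.
    for key, count in b["counts"].items():
        print(f"{key.comm.decode(errors='replace'):16} {count.value}")
```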

Linkerd Goes on a Diet with Opt-In Extensions

Buoyant has released version 2.10 of Linkerd, which slims the service mesh down by moving a number of features out of the core and into opt-in extensions, explained William Morgan, CEO of Buoyant, in an interview. “An extension is basically a Kubernetes controller or operator. We’re relying as much as possible on Kubernetes primitives, but what we are doing is, there’s a little bit of wrapper magic that happens that makes those extensions feel like the rest of Linkerd.” Among those formerly default features now being offered as extensions are the multicluster extension, which contains cross-cluster communications tools, the…
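Morgan’s description of an extension as “basically a Kubernetes controller or operator” maps to the standard watch-and-reconcile loop. Here is that pattern in miniature using the official Kubernetes Python client; it is a generic illustration of the controller pattern, not Linkerd’s actual code.

```python
from kubernetes import client, config, watch

def reconcile(pod):
    # A real controller would compare desired vs. actual state here
    # and act on any difference; this sketch just logs the event.
    print(f"observed {pod.metadata.namespace}/{pod.metadata.name}: "
          f"{pod.status.phase}")

def main():
    config.load_kube_config()   # or load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    w = watch.Watch()
    # Stream add/modify/delete events from the API server, the same
    # loop an operator's informer runs continuously.
    for event in w.stream(v1.list_pod_for_all_namespaces,
                          timeout_seconds=60):
        reconcile(event["object"])

if __name__ == "__main__":
    main()
```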

Applying a DevOps Approach to the Network Your App Runs On

ThousandEyes sponsored this post. Mike Hicks Mike is a principal solutions analyst at ThousandEyes, a part of Cisco, and a recognized expert with more than 30 years of experience in network and application performance. If you were to put application and network teams into a single room and ask them whether ensuring optimal application performance and availability for their end users was critical to the success of their companies, you would undoubtedly see all heads nodding yes. The question, of course, is how? Many of us have lived through war rooms urgently called in response to degraded customer experiences caused by a performance or availability problem with a key application. Today’s modern applications are more distributed and modular than ever before, so not only has the number of stakeholders increased, but the lines of demarcation have also blurred, causing confusion over responsibilities. Managing and optimizing application performance today depends on an increasingly complex underlying network and internet infrastructure that traditional application monitoring solutions fail to cover, leaving visibility gaps that DevOps and NetOps teams struggle with. These heterogeneous environments introduce changing conditions that are sparking new tactics to manage the application experience, and monitoring is one of…

5 Key Takeaways from IstioCon 2021

Lin Sun Lin is the Director of Open Source at Solo.io. She has worked on the Istio service mesh since 2017 and serves on the Istio Technical Oversight Committee. Previously, she served on the Istio Steering Committee for three years and was a Senior Technical Staff Member and Master Inventor at IBM for more than 15 years. She is the author of the book Istio Explained and has more than 200 patents to her name. This year’s first-ever IstioCon brought together users and contributors of Istio, the service mesh that connects microservices. As the conference program co-chair, I had the incredible honor of working with the rest of the program committee to select conference submissions from a diverse range of world-class speakers. I wanted to share my five key takeaways from the show.

2020: A Year of Istio Innovation

I have heard repeatedly from users that Istio is much easier to use thanks to the consolidation of all control plane components into Istiod. The removal of Mixer and the introduction of WebAssembly extensibility capabilities have also been widely lauded by the community. A complete list…

Why the Service Mesh Will Be Essential for 5G Telecom Networks

Sagar Nangare Sagar Nangare is a technology blogger focusing on data center technologies (networking, telecom, cloud, storage) and emerging domains like edge computing, IoT, machine learning and AI. Based in Pune, he currently serves Calsoft Inc. as a digital strategist. Despite the service mesh being a fairly new technology compared to other cloud native technologies, a March 2020 Cloud Native Computing Foundation report…

Solo.io Launches Gloo Mesh Enterprise to General Availability

After a couple of years in development and a beta period, Solo.io released its Gloo Mesh Enterprise service mesh to general availability this month, marking API stability and a slate of new features built in response to customer feedback during the beta. Gloo Mesh Enterprise is the company’s enterprise-grade, Kubernetes-native solution to help organizations install and manage Istio service mesh deployments. While Gloo Mesh Enterprise may just now be reaching this milestone, Solo.io founder and CEO Idit Levine speaks of massive, unnamed customers already using the product in production, in deployments spanning more than 40 data centers and 1,200 clusters and Istio service mesh instances. “When you’re running with that scale, there are a lot of things that you need to do. This is exactly what Gloo Mesh is for. Gloo Mesh is basically saying, ‘crawl, walk, run, fly,’” said Levine, referring to the product’s ability to help not only with the initial steps of service mesh adoption and installation, but also with day-two operations and the added capabilities needed to handle complex multicluster, multicloud, multiregion deployments. To start (or “crawl”), Gloo Mesh Enterprise provides Federal Information Processing Standards (FIPS) compliance and long-term support for Istio…

Behind the Scenes of the SunBurst Attack

Check Point sponsored this post. Lior Sonntag Lior is a Security Researcher at Check Point Software Technologies. He is a security enthusiast who loves to break stuff and put it back together. He’s passionate about various InfoSec topics, such as Cloud Security, Offensive Security, Vulnerability Research and Reverse Engineering. The biggest cyberattack in recent times came in the form of what seems like a…

Why You Should Choose NGAC as Your Access Control Model

Tetrate sponsored this post. Jimmy Song Jimmy is a developer advocate at Tetrate, a CNCF Ambassador, and co-founder of ServiceMesher and Cloud Native Community (China). He mainly focuses on Kubernetes, Istio and cloud native architectures. Different companies and software providers have devised countless ways to control user access to functions or resources, such as Discretionary Access Control (DAC), Mandatory Access Control (MAC), Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC). In essence, whatever the type of access control model, three basic elements can be abstracted: user, system/application and policy. In this article, we will introduce ABAC, RBAC and a new access control model, Next Generation Access Control (NGAC), and compare the similarities and differences between the three, as well as why you should consider NGAC.

What Is RBAC?

Ignasi Barrera Ignasi is a founding engineer at Tetrate and a member of the Apache Software Foundation. RBAC, or Role-Based Access Control, takes an approach whereby users are granted (or denied) access to resources based on their role in the organization. Every role is assigned a collection of permissions and restrictions, which is great because you don’t need to keep track of every system user and their attributes. You just…
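The role-based model just described fits in a few lines: permissions attach to roles, users get roles, and an access check walks user to roles to permissions. This is a minimal sketch with illustrative names, not any particular product’s implementation.

```python
# Minimal RBAC: the system tracks role assignments, never individual
# users' attributes (the property highlighted above).
ROLE_PERMISSIONS = {
    "viewer": {("document", "read")},
    "editor": {("document", "read"), ("document", "write")},
    "admin":  {("document", "read"), ("document", "write"),
               ("user", "manage")},
}

USER_ROLES = {
    "alice": {"editor"},
    "bob":   {"viewer"},
}

def is_allowed(user, resource, action):
    # Grant access if any of the user's roles carries the permission.
    return any((resource, action) in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

assert is_allowed("alice", "document", "write")
assert not is_allowed("bob", "document", "write")
```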
