Archive

Category Archives for "Networking – The New Stack"

Video Game Security Should Be Simple for Developers

In this podcast episode, Bharat Bhat (Okta marketing lead for developer relations) covers why and how video game platforms and connections should be more secure, joined by a guest Okta senior developer advocate. Also available on Google Podcasts, PlayerFM, Spotify and TuneIn. The gaming industry has often served as a showcase for some of the industry’s greatest programming talents. As a case in point,

Solo.io Adds Legacy SOAP Integration for Gloo Edge 1.8 Release

Service mesh integration software provider Solo.io has released into general availability (GA) version 1.8 of Gloo Edge, its Kubernetes-native ingress controller and API gateway. Version 1.8 offers integration for legacy SOAP (Simple Object Access Protocol) web services, among other features, as Solo seeks to improve API-centric support for scaling needs across cloud native environments. Gloo Edge now helps DevOps teams integrate decades-old SOAP services through a single API. Gloo Edge 1.8’s support for SOAP is “the biggest breakout feature” of the release. In a blog post, Guan described how SOAP, an XML messaging protocol from the turn of the century, “remains prevalent today for enterprise web services across a number of industries, including financial services and healthcare.” Yet, “Unfortunately, SOAP (and associated legacy middleware applications) hold back large-scale modernization efforts because there hasn’t been a viable migration approach in the market,” Guan wrote. “Organizations haven’t been able to tackle incremental deprecation of SOAP web services over time without great difficulty.” Gloo Edge Enterprise 1.8, with the addition of

4 Advancements That Led to Decentralized Cloud Storage

The evolution of cloud storage as we know it is a fascinating journey filled with projects that built on one another to bring us to where we are today. Interestingly enough, most of the technology used to build a decentralized cloud storage network today has been available for decades. The fact that decentralized cloud storage is viable is mostly because of the growth of storage capacity available at the edge and the incredible increases we’ve made across the globe in bandwidth. Here are four key advancements throughout the years that have paved the way for decentralized cloud storage. JT Olio is the CTO at Storj. He oversees product development and led the re-architecture of Storj’s distributed cloud storage platform. He was previously director of engineering at Space Monkey, which was acquired by Vivint in 2014. JT has an MS in computer science from the University of Utah and a BS in computer science and mathematics from the University of Minnesota. Advancement #1: Network Bandwidth Increased. There is a great paper by Charles Blake and Rodrigo Rodrigues entitled “

Install Calico to Enhance Kubernetes’ Built-in Networking Capability

Calico, from network software provider Tigera, is a third-party plugin for Kubernetes geared to make full network connectivity more flexible and easier. Out of the box, Kubernetes provides the NetworkPolicy API for managing network policies within the cluster. The problem many Kubernetes admins find (especially those new to the technology) is that networking can quickly become a rather complicated mess of YAML configurations: you must configure traffic ingress and egress properly, or communication between Kubernetes objects (such as pods and containers) can be difficult. That’s where third-party plugins come in, although some, such as Flannel, cannot configure network policies. With Calico, you can significantly enhance the Kubernetes networking configuration. Take, for instance, the feature limitations found in the default NetworkPolicy: policies are limited to a single environment and are applied only to pods marked with labels; you can only apply rules to pods, environments, or subnets; and rules can only contain protocols, numerical ports, or named ports. When you add the Calico plugin, the Continue reading
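Those built-in limits are visible in a minimal policy. The sketch below uses illustrative names (the `demo` namespace, `allow-frontend-to-api`): pods are selected only by label, and rules are expressed only as protocols plus numerical or named ports, which is roughly the full vocabulary of the default NetworkPolicy API.

```yaml
# Minimal NetworkPolicy sketch (names are illustrative). Note the limits
# described above: selection is by pod labels only, and rules are just
# protocols plus (numerical or named) ports.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: api            # applies only to pods carrying this label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080      # numerical port; named ports also work
```

Calico’s own policy resources extend this vocabulary, for example with cluster-wide (global) policies that are not tied to a single namespace.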

Buoyant Cloud Beta Brings Simplified Linkerd

Buoyant, the network software provider behind the Linkerd service mesh, has launched the public beta of Buoyant Cloud. CEO William Morgan emphasizes that operational simplicity has always been a focus, and says the company expects Buoyant Cloud to take that one step further. “We want to take the operational burden off of the shoulders of whoever is bringing Linkerd into their organization. We want to handle that for you,” he said. “We want to carry the pager for you, we want to make it so that running Linkerd in production is a trivial task. This falls right in line with everything we’ve been doing with Linkerd since the very beginning — our focus has been really heavily on operational simplicity and on making it so that when you operate Linkerd, you’re not in this horrendous situation where you need to hire a team of experts just to maintain your service mesh. With Buoyant Cloud, we have the opportunity to take on a lot of those operational tasks for you, and make it so you get all Continue reading

Lightning-Fast Kubernetes Networking with Calico and VPP

Reza Ramezanpour Reza is a developer advocate at Tigera, working to promote adoption of Project Calico. Before joining Tigera, Reza worked as a systems engineer and network administrator. Public cloud infrastructures and microservices are pushing the limits of resources and service delivery beyond what was imaginable until very recently. To keep up with the demand, network infrastructures and network technologies had to evolve as well. Software-defined networking (SDN) is the pinnacle of advancement in cloud networking. By using SDN, developers can deliver an optimized, flexible networking experience that can adapt to the growing demands of their clients. This article will discuss Tigera’s integration of Calico with VPP. Project Calico is an open source networking and security solution. Although it focuses on securing Kubernetes networking, Calico can also be used with OpenStack and other workloads. Calico uses a modular data plane that allows a flexible approach to networking, providing a solution for both current and future networking needs. VPP Continue reading

What the Heck Happened to the Internet? Fastly’s Hard Fall and Quick Recovery

Well, wasn’t that fun? On June 8, 2021, many internet users went to their usual sites such as Amazon, Reddit, CNN, or the New York Times and found nothing but an “Error 503 service unavailable” and an ominous “connection failure” note. So, what happened? As the internet grew beyond early infrastructure such as the Commercial Internet Exchange (CIX), other features became important. In particular, everyone started demanding faster performance and lower latency. The solution? CDNs, or content delivery networks. These companies, which besides Fastly include market-leader Cloudflare, all use the same basic techniques to speed up the net. They take the data from popular sites and place it in distributed caches in points of presence (PoP) close to consumers. If that sounds familiar to you, even if you’re a cloud native developer and not a network administrator, there’s a good reason. CDNs were one of the first business models Continue reading
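The caching technique described above can be sketched in a few lines. This is a toy model with hypothetical names (`PoPCache`, `origin_fetch`), not Fastly’s implementation: a point of presence answers from its local cache when it can, and falls back to the origin server only on a miss.

```python
class PoPCache:
    """Toy model of a CDN point of presence (PoP) cache."""

    def __init__(self, origin_fetch):
        self._origin_fetch = origin_fetch  # callable: path -> content
        self._cache = {}                   # local copy of popular content

    def get(self, path):
        if path in self._cache:            # cache hit: served near the user
            return self._cache[path], "HIT"
        content = self._origin_fetch(path) # cache miss: go back to origin
        self._cache[path] = content        # keep a copy for next time
        return content, "MISS"


pop = PoPCache(lambda path: f"<page {path}>")
print(pop.get("/index.html"))  # first request misses, fills the cache
print(pop.get("/index.html"))  # second request is a local hit
```

The June 8 outage illustrates the flip side of this design: when the caching layer itself fails, every site behind it returns errors at once.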

VMware Redefines Security After a Surge in Attacks

Enterprise virtualization software giant VMware says it is “redefining” security as it seeks to help customers meet the challenges associated with a skyrocketing number of threats, more numerous attack vectors, and fewer human resources at their disposal to help keep attacks at bay. “So what we’re asking all of these IT security teams to do is essentially to do more — and there’s a lot more complexity.” According to the company’s 2020 Threat Landscape report, 81% of the survey respondents reported a breach during the past 12 months — with four out of five breaches (82%) deemed material. At the Continue reading

Birth of the Cloud: A Q&A with Vint Cerf and Linode’s Christopher Aker

Mike Maney Mike Maney leads corporate communications for Linode. Over the years, he’s led global communications teams for high-profile, culture-shifting businesses at Fortune 50 companies and helped early stage startups tell better stories. I have had the opportunity to work with a number of tech pioneers over the course of my career. So when an opportunity arose to interview two people who were at the forefront of the internet and the cloud, I jumped at it. Vint Cerf, co-creator of TCP/IP, is now a vice president and chief internet evangelist for Google. Years after the creation of TCP/IP, as Linode, the company Aker built, turns 18 this year, I asked Cerf and Aker to weigh in on where we’ve been, where we are today, and where we’re going. You’ve both been in the business of cloud for many years. Looking back to when you first started in this business, how has Continue reading

Calico Integration with WireGuard Using kOps

Reza Ramezanpour Reza is a developer advocate at Tigera, working to promote adoption of Project Calico. Before joining Tigera, Reza worked as a systems engineer and network administrator. It has been a while since I have been excited to write about encrypted tunnels. It might be the sheer pain of troubleshooting old technologies or countless hours of falling down the rabbit hole of a project’s source code that always motivated me to pursue a better alternative — without much luck. However, I believe luck is finally on my side. In this blog post, we will explore using the open source WireGuard tunnel to encrypt workload traffic; Tigera announced a tech preview of its WireGuard support in Calico. Previously, technologies such as TLS were available to encrypt workloads’ traffic at higher TCP/IP layers, in this case, the application layer. However, WireGuard targets traffic at a lower layer, the transport layer, which makes it effective for a wider range Continue reading

Magma Brings a Systems Approach to Wireless Networking

Bruce Davie Bruce is a computer scientist noted for his contributions to the field of networking. With Larry Peterson, he recently co-founded Systems Approach, LLC, to produce open source books and educational materials. He is a former VP and CTO for the Asia-Pacific region at VMware. Prior to that, he was a Fellow at Cisco Systems, leading a team of architects responsible for multiprotocol label switching (MPLS). Davie has over 30 years of networking industry experience and has co-authored 17 Requests for Comments (RFCs). He was recognized as an Association for Computing Machinery (ACM) Fellow in 2009 and chaired ACM SIGCOMM from 2009 to 2013. Wireless networking is one of those technologies that is, for most of us, so ubiquitous that we take it for granted. WiFi permeates our homes, offices and coffee shops, while cellular networks allow us to stay connected in many other settings. Of course, network access of any sort is a lot less ubiquitous once you get out of densely populated areas. It turns out that making networking ubiquitous requires some fresh thinking about how wireless networks are built. This fresh approach has been realized in an open source project called

How We Built an Open Source Drop-In Replacement for gRPC

JT Olio JT is the CTO at Storj. He oversees product development and led the re-architecture of Storj’s distributed cloud storage platform. He was previously director of engineering at Space Monkey, which was acquired by Vivint in 2014. JT has an MS in computer science from the University of Utah and a BS in computer science and mathematics from the University of Minnesota. Our team at Storj is building a decentralized cloud object storage service, and when we decided to build it using Go, we thought we’d also utilize

Upbound Universal Crossplane Wants to Replace Infrastructure as Code

Upbound, the company behind the open source Crossplane project, has created what it says is the first enterprise distribution of Crossplane, called Upbound Universal Crossplane (UXP), said Bassam Tabbara, Upbound founder and CEO, in an interview. Crossplane “becomes your universal control plane that you could use, using the same style that the Kubernetes community pioneered, to manage essentially all the infrastructure that an enterprise touches from a single control plane.” UXP, then, is an open source, vendor-supported, enterprise-grade distribution of Crossplane that also adds on a layer of 24/7 support, priority bug fixes, and consultation with a subscription. UXP is available free for individual users and by subscription for larger deployments, and is a drop-in replacement for Crossplane that installs with a single command. Tabbara noted that UXP is “vendor-supported, not community-supported,” in that Upbound will “help enterprises deploy it, support it, and give them a number of features that makes it easier for them to deploy and manage it in their environment.” As a long-term supported project, UXP also lags behind Crossplane upstream to ensure reliability, and Upbound describes UXP as “designed to help enterprises adopt a universal control plane, moving beyond infrastructure as code,” in a press statement. In the case of UXP, Crossplane is further extended with its integration with both Upbound Cloud and Upbound Registry, both of which became generally available at the same time as the release of UXP. Upbound Cloud provides teams with visibility into their UXP instances and the infrastructure being managed, giving them a place to see what is running where and by whom it was provisioned. Upbound Registry then provides a place to both publicly and privately share Crossplane Configurations, and for providers to share managed resources.
“With UXP, with Upbound Cloud and Upbound Registry, we believe we have a set of products now that can actually take this approach of using control planes in the enterprise and turn it into essentially a new way of managing infrastructure,” Tabbara said. “We see this with existing customers today, maybe even replacing a lot of what they do today with tools like Terraform and infrastructure-as-code approaches and going more towards a control plane approach, or even GitOps on top of a control plane.” The big difference Tabbara sees in all of this is that, by taking the API-driven approach rather than relying on templates, as with infrastructure as code, Crossplane and UXP can deliver a more scalable experience for managing infrastructure across large and varied environments. He explained that part of the appeal of Crossplane lies in the fact that teams can use the same Kubernetes-based tools and approaches that they are already using to deploy software to provision and manage infrastructure. Sponsor Note LaunchDarkly is a feature management platform that empowers all teams to safely deliver and control software through feature flags. By separating code deployments from feature releases, LaunchDarkly enables you to deploy faster, reduce risk, and iterate continuously. “If you are using Helm, or kustomize, or if you’re using literally any of the tools that people are deploying and love and use today with Kubernetes, as a container orchestrator, those tools work exactly in the same way,” said Tabbara. “When you’re using Kubernetes plus Crossplane to manage the rest of the cloud infrastructure and deployments across clouds and hybrid clouds, those tools work exactly in the same way. They are using Crossplane APIs that are extensions of Kubernetes, extensions of the Kubernetes control plane.” Following the most recent KubeCon+CloudNativeCon, there were some

Near Real-Time Kubernetes at Scale: Increasing App Throughput with Linkerd

Stephen Reardon The one-man band that keeps the show running, Stephen Reardon is the DevOps engineer in the Entain Trading Solutions team, operating hundreds of Kubernetes nodes in the cloud using IaC tooling, chaos engineering testing tools and end-to-end monitoring. His main responsibility is operational reliability: keeping the platform resilient and available, and above all developer-proof.

Tetrate Service Bridge to Close Enterprise Application Networking Gap via Service Mesh

At some point, you’ve got to stop building something you think people need and start putting it out there to test in the market. You have to go get users. This is where the first engineers of the Istio service mesh at Google found themselves about four years ago. But, like many things in the still-emerging cloud native space, the first response was: Well, what is it? Who else is using it? Tetrate built Tetrate Service Bridge to act as an application connectivity platform, a technical bridge to take you from those legacy to those modern environments, and to increase reliability and availability. Also called TSB, it looks to solve the issue of networking for heterogeneous workloads. Tetrate Service Bridge, built on Istio and now in general availability, presents itself as the solution to enterprise-grade challenges that can’t just be abstracted away with a Kubernetes layer. The Tetrate team has built out the core set of functionality around controlling traffic across an entire fleet of services, from the edge to the mesh. Tetrate founding engineer Zack Butcher says TSB bridges the gap between having service mesh capabilities and actually realizing those capabilities in a way that is safe. He said, “This service mesh is great, but how do I actually use it in my enterprise? How do I change my process to take advantage of the mesh? And actually changing processes is really expensive, so how do I not change my process either?” And those enterprise processes aren’t simple either. They look to use service mesh to enforce security and compliance requirements, to gain control and visibility across entire complex infrastructures, or to put security controls in place across highly heterogeneous environments. “Service mesh solves a lot of problems I have, but you are telling me I can only have it in Kubernetes? I want those things to help me get from my legacy to a modern environment, not already in that,” Butcher said.
TSB helps you manage across the full breadth of compute, connecting Kubernetes and legacy infrastructure. Butcher gives examples of how you can use it to link with Istio and Envoy and just start assembling your application network. “Tetrate Service Bridge is a platform for applications to communicate securely and successfully without having to get into the weeds of what lives there.” — Zack Butcher, Founding Engineer, Tetrate. Butcher says then there’s the enterprise management side: teams need to be able to prove they are using service mesh correctly and securely. He says TSB enables teams to divvy up their physical infrastructure and cloud-based environments, with multitenancy and controls, so you can use service mesh to “do cool things at runtime.” The connectivity tool works not only with Istio but also with Apache SkyWalking, enabling observability across whole systems. They are clear that while they are a tool to ease the use of these open source tools, and the whole Tetrate team contributes to the open source projects they depend on, they are intentionally not an open core company. “In my opinion, there’s this really big tension in open-core companies. If I, as a developer, have to decide what goes in the project or in the product that people pay for” — he doesn’t want to make that value-prop decision, Butcher explained. He continued, “We are building a layer on top of the open source pieces. We are assembling these open source pieces together in a coherent system.” Another part of this decision is that, since they are still essentially using open source tools, enterprises can adopt them in a relatively cheap way through Tetrate. Butcher points to the fundamental difference between enterprise closed source products like TSB and the open source projects it serves.
“Capabilities go in open source, and then how you manage those capabilities and how you use them within an organization, that’s what you put in the product,” he said. While they only went fully public with TSB in April, they built it alongside adopters from the start. Butcher, paraphrasing Socrates, said that after the “pain of adopting Istio — we were in a cave without users,” they were determined to build hand in hand with users. One such early adopter was FICO, the organization that creates the predominant credit risk score in the U.S. One emerging use case for service mesh is encryption in transit to ensure compliance with ever-changing regulations and standards, from HIPAA to GDPR, to automate enforcement of

Application Performance in the Age of SD-WAN

Mike Hicks Mike is a principal solutions analyst at ThousandEyes, a part of Cisco, and a recognized expert with more than 30 years of experience in network and application performance. In the olden days, users were in offices and all apps lived in on-premises data centers. The WAN (wide area network) was what connected all of them. Today, with the adoption of SaaS apps and associated dependencies such as cloud services and third-party API endpoints, the WAN is getting stretched beyond recognition. In its place, the internet is directly and exclusively carrying a large — if not majority — share of all enterprise traffic flows. Enterprises are increasingly moving away from legacy WANs in favor of internet-centric, software-defined WANs, also called SD-WANs or software-defined networking in a wide area network. Architected for interconnection with cloud and external services, adopting SD-WANs can play a critical role in making enterprise networks cloud-ready, more cost-efficient and better suited to delivering quality digital experiences to customers and employees at all locations. But the transformation brings new visibility needs, and ensuring that SD-WAN delivers on expectations requires a new approach to monitoring that addresses network visibility and application performance equally. WAN in the Light of Continue reading

Avesha Deploys Machine Learning for More Efficient Load Balancing

When the founder of Avesha took a fresh look at load balancing, he was surprised to find that the industry hadn’t changed much over the past twenty years. This week, Avesha is demonstrating its new AI-based load balancing technology at KubeCon+CloudNativeCon 2021. Load balancing still mostly happens at a local level, within a particular cloud or cluster, and uses the same formulas that he helped popularize more than two decades ago. For example, a load balancer can use a “round-robin” formula, where requests go to each server in turn, and then back to the first one. A “weighted round-robin” is similar, except that some servers get more requests than others because they have more available capacity. A “sticky cookie load balancer” is one where all the requests from a particular session are sent to the same server so that, say, customers don’t get moved around in the middle of shopping sessions and lose their shopping carts. “There are a few other variations, but they’re all based on fixed settings,” said Nair. “The state of the art hasn’t moved much in this area.” A very simple change that would make load balancers immediately more effective is to automatically adjust the weights based on server performance. “It’s actually a very low-hanging fruit,” he said. “I don’t know why they aren’t all doing this.” That’s what Avesha started looking at. Then, in addition to server performance, the company also added in other factors, like travel path times. The resulting service, the Smart Application Cloud Framework, was launched Tuesday. Deployment Structure Avesha is deployed with an agent that sits in its own container inside a cluster or private cloud. It talks to its fellow agents and to Avesha’s back-end systems via secure virtual private networks. The back-end system collects information about traffic paths and server performance, then uses machine learning to determine optimal routing strategies. The specific AI technique used is reinforcement learning.
The system makes a recommendation and looks at how the recommendation works in practice, then adjusts its model accordingly. “It is continuously tuning your network,” said Nair. “The network is constantly undergoing lots of changes, with traffic and congestion.” It also looks at the performance of individual servers and, if some are having problems handling requests, it automatically routes traffic elsewhere. And it works across all types of deployments — multiple public clouds, private clouds, and edge computing installations. “The methods currently in use in Kubernetes are static,” he said. “You set fixed thresholds with a lower bound and an upper bound. But nobody even knows how to set those thresholds.” People wind up guessing, he said, set some basic targets, and then leave them in place. “You end up wasting resources,” he said. The Avesha technology is more like a self-driving car, he said. There are still parameters and guard rails, but, within those constraints, the system continually optimizes for the desired outcome, whether it be the lowest latency, maximum cost savings, or even compliance-related data movement restrictions. “You want your data traffic to be managed in accordance with your policies,” he said. “For example, there might be regulations about where your data is and isn’t allowed to go.” Performance Improvements In internal studies, Avesha has seen improvements of 20% to 30% in the number of requests that are handled within their performance targets, compared to standard weighted round-robin approaches. When some clusters have hundreds of thousands of nodes, 30% is a big number, he said.
Companies will see improvements in customer experience, lower bandwidth costs, and less need for manual intervention when things go wrong in the middle of the night. And it’s not just about the business bottom line, he added. “If you translate that into wasted energy, wasted natural resources, there are lots of benefits,” he said. For some applications, like video streaming, better performance would translate to competitive advantage, he said. “It’s like the difference between getting high definition and standard definition video.” There’s no commercial product currently on the market that offers AI-powered load balancing, he said, though some companies probably have their own proprietary technology to do something similar. “Netflix is an example of a company that’s a leader in the cloud native world,” he said. “I would say there’s a fairly good chance that they’ve already incorporated AI into their load balancing.” Other large cloud native technology companies with AI expertise may have also built their own platforms, he said. “Nobody has said anything publicly,” he said. “But it’s such an obvious thing to do that I am willing to believe that they have something, but are just keeping it to themselves.” There are also some narrow use cases, like that of content delivery networks. CDNs typically deliver content, like web pages, to users. They work by distributing copies of the content across the internet and optimize for the shortest possible distance between the end user and the source of the content. Avesha’s approach is more general, supporting connections between individual microservices. “It’s a little bigger than what a CDN is trying to do,” he said. 
“We are literally at the cutting edge with this.” AI-Powered Load Balancing as a Feature At some point, cloud vendors and third-party service providers will begin offering intelligent load balancing to their enterprise customers, either by building their own technology or by buying or partnering with Avesha or any competitors who might appear on the scene. “One way or the other, you’re going to be able to take advantage of it,” said Nair. Avesha itself is currently working with partners, he said, including some major industry players, and he is expecting to be making announcements this summer. But enterprises can also work directly with Avesha and get a jump on the competition, he added. Enterprises who deploy workloads to multiple clouds would find the technology of most interest, he added. Avesha is currently working with several companies on proof of concept projects. These are companies that typically are at $50 million in revenues or above in verticals such as media, manufacturing, health care and telecom. “We have also engaged with some partners who are big cloud players,” he said. More information, as well as return on investment analyses, will be released in the next few months. Verizon and AWS Serve Doctors at the Edge One case study that has been made public was a
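The classic strategies Nair describes, round-robin and weighted round-robin, can be sketched in a few lines of Python. The class and server names here are illustrative, not Avesha’s code.

```python
from itertools import cycle

class RoundRobin:
    """Plain round-robin: each server gets requests strictly in turn."""
    def __init__(self, servers):
        self._cycle = cycle(servers)

    def pick(self):
        return next(self._cycle)

class WeightedRoundRobin:
    """Weighted round-robin: a server with weight w appears w times in
    the rotation, so higher-capacity servers receive more requests."""
    def __init__(self, weighted_servers):
        expanded = [s for s, w in weighted_servers for _ in range(w)]
        self._cycle = cycle(expanded)

    def pick(self):
        return next(self._cycle)

rr = RoundRobin(["a", "b", "c"])
print([rr.pick() for _ in range(6)])   # ['a', 'b', 'c', 'a', 'b', 'c']

wrr = WeightedRoundRobin([("big", 2), ("small", 1)])
print([wrr.pick() for _ in range(6)])  # ['big', 'big', 'small', 'big', 'big', 'small']
```

Both rotations are fixed in advance, which is exactly Nair’s complaint; the “very low-hanging fruit” he mentions would amount to recomputing the weights from measured server performance instead of leaving them static.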
