Archive

Category Archives for "Networking – The New Stack"

Is Network Security Relevant in the Cloud?

Vishal Jain is the co-founder and CTO of Valtix. Vishal is a seasoned executive who has held engineering leadership roles across many successful startups and big companies in the networking and security space. He was an early member of Andiamo Systems, Nuova Systems, and Insieme Networks, which were acquired by Cisco Systems. He also led the security engineering team at Akamai and built its live streaming service in its early days. Is Network Security Relevant in the Cloud? Short answers: yes, and no. But the details matter. For the last 15 months, we’ve seen a previously unimaginable acceleration in the use of cloud and a greater reliance on technology overall, all of which is pushing more application efforts to the cloud faster than originally planned. This acceleration brings several discussions to a head, but we’re here to talk about network security (netsec). Cloud netsec can be segmented in a few different ways, but this article draws the line between protecting users as they access the cloud and protecting apps deployed into the cloud. The former, protecting users, has seen plenty of investment and innovation and is a relatively well-understood problem. The latter… Continue reading

Arrcus Brings Network Automation and API Accessibility to the Edge

Arrcus is a well-funded edge network software startup working to make a name for itself in the expanding multicloud arena. But even as enterprise adoption of multicloud and hybrid cloud strategies continues to rise, Ayyar sees the future at the network and compute edge. “Everybody talks about how you can get benefits from large pools of centralized capacity in the public cloud,” said Ayyar, who was announced as chairman and CEO on Sept. 15. “What I feel very, very confident about is that this action is almost passé in terms of the clouds, and it’s moving a lot more into the edge. The pendulum is swinging from consolidated and large data centers doing everything to highly distributed and disaggregated infrastructures doing things that are point of consumption, point of sale, Continue reading

What You Can Learn from the AWS Tokyo Outage

Jason Yee is director of advocacy at Gremlin, where he helps companies build more resilient systems by learning from how they fail. He also helps lead Gremlin's internal chaos engineering practices to make the company itself more reliable. In the movies, it seems like Tokyo is constantly facing disasters — natural ones in the form of earthquakes and tsunamis, and unnatural ones like giant kaiju and oversized robots. On the morning of Sept. 1, the mechanized behemoth was Amazon Web Services. At around 7:30 am JST, AWS began experiencing networking issues in its AP-Northeast-1 region, based in Tokyo. The outage affected businesses across all sectors, from financial services to retail stores, travel systems and telecommunications. Despite being unable to access money, purchase goods, travel or call one another, the Japanese people demonstrated resilience, proving that at least some things from the movies are true. However, the financial losses due to the outage are expected to be huge. After the six-hour outage, AWS explained the issue

Black Friday Downtime: How to Avoid Impacts on Your Business

Hannah Culver is a solutions marketer at PagerDuty interested in how real-time urgent work plays out across all industries in this digital era. It’s a brisk Friday morning in November. You’re sipping your coffee and mentally preparing yourself for the day that’ll define your fiscal year. How will you fare this Black Friday? Are your teams prepared? We’ve all seen the 2020 Holiday Shopping Season Report: “The online holiday season exceeded $188B, resulting in a strong growth rate of 32% over the 2019 season.” This trend didn’t start with COVID-19, however.

gRPC: A Deep Dive into the Communication Pattern

Danesh Kuruppu is a technical lead at WSO2, with expertise in microservices, messaging protocols and service governance. Danesh has spearheaded development of Ballerina’s standard libraries, including gRPC, data and the microservices framework. He co-authored “gRPC: Up and Running,” published by O’Reilly Media. If you have built gRPC applications and know the communication fundamentals, you may already know there are four basic communication patterns used in gRPC-based applications: simple RPC, server-side streaming, client-side streaming and bidirectional streaming. In this article, I dive deeper into these communication patterns and discuss the importance of each pattern, as well as how to pick the right one according to the use case. Before I discuss each pattern, I’ll cover what they have in common, such as how gRPC sends messages between clients and servers over the network and how request/response messages are structured. gRPC over HTTP/2: According to the official documentation, the gRPC core supports different transport protocols; however, HTTP/2 is the most common among them. In HTTP/2, communication between a client and a server happens through a single TCP connection. Within the connection, there can be multiple bidirectional flows of bytes, which are called streams. In gRPC terms, one RPC call is mapped to Continue reading
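As a rough illustration of that message structure, here is a small, self-contained Go sketch of the length-prefixed framing gRPC uses for each message it sends on an HTTP/2 stream: a one-byte compression flag, a four-byte big-endian length, then the serialized protobuf payload. The function and payload names are illustrative and not part of any gRPC library.

```go
// Minimal sketch of gRPC's length-prefixed message framing over HTTP/2.
// Each message on a stream is carried as: 1-byte compressed flag,
// 4-byte big-endian length, then the serialized protobuf payload.
// frameMessage and the placeholder payload are illustrative names only.
package main

import (
	"encoding/binary"
	"fmt"
)

// frameMessage wraps an already-serialized protobuf payload in the
// length-prefixed framing gRPC places inside HTTP/2 DATA frames.
func frameMessage(payload []byte, compressed bool) []byte {
	framed := make([]byte, 5+len(payload))
	if compressed {
		framed[0] = 1 // compression flag
	}
	binary.BigEndian.PutUint32(framed[1:5], uint32(len(payload)))
	copy(framed[5:], payload)
	return framed
}

func main() {
	payload := []byte("serialized-protobuf-bytes") // stand-in payload
	fmt.Printf("% x\n", frameMessage(payload, false))
}
```

Because every message carries its own length prefix, many request and response messages can be interleaved on the same stream, which is what makes the streaming patterns above possible over a single TCP connection.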

Are ISPs Better Bets to Offer Cloud Computing for the Edge?

Edge computing is getting more attention of late — because there are advantages to having computing power and data storage near the location where they’re needed. As edge computing needs grow, users are likely to take a hard look at whether public cloud giants like AWS and Google are their best choice, or whether their local ISP is better suited for the job. ISPs — including cable, DSL and mobile providers — claim to offer benefits over public cloud providers when delivering SaaS and other services: low latency, high-bandwidth connections, fewer security vulnerabilities, regional regulatory compliance and greater data sovereignty. While they must also demonstrate that they can deliver services robust enough to meet DevOps needs, ISPs can offer tremendous benefits and fill gaps in current cloud computing offerings. “A key concern cloud customers have when leveraging their microservices architecture for the applications they offer or rely on is how to achieve and maintain ultra-low latency,” said

Ingress Controllers: The More the Merrier

As with everything in the software development space, especially in today’s cloud native world, fragmentation is everywhere. In any single category of tool — service meshes, orchestrators and observability tools — you will find multiple “brands” and variations of each tool being used in most organizations. We can identify two main causes of such fragmentation: one is deliberate, and the other is not. Let’s talk about the non-deliberate cause first and how it relates to my own service mesh company

ZeroLB, a New Decentralized Pattern for Load Balancing

Marco Palladino is an inventor, software developer and internet entrepreneur based in San Francisco. As the CTO and co-founder of Kong, he is Kong’s co-author, responsible for the design and delivery of the company’s products, while also providing technical thought leadership around APIs and microservices within both Kong and the external software community. Prior to Kong, Marco co-founded Mashape in 2010, which became the largest API marketplace and was acquired by RapidAPI in 2017. With advancements in technology driven by Kubernetes, new architectural patterns have emerged to provide decentralized load balancing that remains portable across various platforms and clouds. The old monolithic and centralized load balancer, a technology largely stuck in the early 2000s, is becoming deprecated in this new distributed world. The most common breed of load balancers being deployed across applications — centralized load balancers — are a legacy technology. They don’t work well in our new distributed and decentralized world. Remnants of a monolithic way of doing things that did not adapt to modern best practices, centralized load balancers prevent users and organizations from effectively transitioning to the cloud Continue reading
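To make the decentralized pattern concrete, here is a minimal Go sketch of client-side load balancing, in which each service instance picks an upstream endpoint itself instead of sending traffic through a central load balancer. The endpoint list and type names are illustrative, not taken from Kong’s or any particular mesh’s implementation; a real data plane would refresh endpoints from service discovery and add health checking.

```go
// Minimal sketch of the client-side (decentralized) load-balancing idea:
// each caller chooses an upstream endpoint itself, rather than routing
// every request through one centralized load balancer.
package main

import (
	"fmt"
	"sync/atomic"
)

type roundRobin struct {
	endpoints []string
	next      uint64
}

// pick returns the next endpoint in round-robin order; safe for
// concurrent use by many in-process callers.
func (r *roundRobin) pick() string {
	n := atomic.AddUint64(&r.next, 1)
	return r.endpoints[(n-1)%uint64(len(r.endpoints))]
}

func main() {
	// Illustrative endpoint addresses; in practice these would come
	// from service discovery and be kept up to date by the mesh.
	lb := &roundRobin{endpoints: []string{
		"10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080",
	}}
	for i := 0; i < 5; i++ {
		fmt.Println("dialing", lb.pick())
	}
}
```

The point of the sketch is the placement of the decision, not the algorithm: because the choice happens next to the caller, there is no single box in the middle to scale, patch or fail.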

Dynamic DNS Security Blues

Whenever you run into a network problem, the wise network admin or sysadmin always remembers: “It’s always DNS.” At the Black Hat USA 2021 security conference, Ami Luttwak and his head of research demonstrated a simple loophole that allowed them to intercept dynamic DNS (DDNS) traffic going through managed DNS providers like Amazon and Google. And, yes, that includes the DDNS you’re using on your cloud. And, if you think that’s bad, just wait until you see just how trivial this attack is. Our intrepid researchers found that “simply registering certain ‘special’ domains, specifically the name of the name server itself, has unexpected consequences on all other customers using the name server.”

How OpenInfra Can Solve the Global Connectivity Crisis

Jonathan Bryce, who has spent his career building the cloud, is Executive Director of the Open Infrastructure Foundation. Previously he was a founder of The Rackspace Cloud. He started his career working as a web developer for Rackspace, and during his tenure, he and co-worker Todd Morey had a vision to build a sophisticated web hosting environment where users and businesses alike could turn to design, develop and deploy their ideal website — all without being responsible for procuring the technology, installing it or making sure it is built to be always available. This vision became The Rackspace Cloud. Since then he has been a major driver of OpenStack, the open source cloud software initiative. When the internet began as ARPANET in 1969, it connected one computer at each of four universities. Today, it connects an estimated 50 billion devices, with that number growing each second. The computing architecture originally designed to connect four hard-wired laboratories in the southwest now connects billions of wired and wireless devices globally. On a recent episode of … Martin Casado Continue reading

Linkerd Graduates CNCF with Focus on Simplicity

Linkerd has met the CNCF’s requirements around stability, adoption, maturity, and governance, and joins more than a dozen other graduated projects, such as Helm, Prometheus, Envoy, and Kubernetes. In a press release regarding Linkerd’s graduation, H-E-B is quoted as saying that they didn’t “choose a service mesh based on hype,” and that they “weren’t worried about which mesh had the most marketing behind it.” The service mesh being alluded to here is Istio. “And the fact that it has attained graduation, that it has this community of enthusiastic and committed adopters, I think it’s pretty remarkable given that context,” said Buoyant CEO William Morgan. “It’s hard to talk about Linkerd without also talking about Istio, although I think the reality is, there’s some pretty fundamental philosophical differences between those projects.” Linkerd was created by Buoyant in 2016, and Morgan said its first iterations were rather complex before the project found its focus on simplicity. This simplicity, which starts with Linkerd not using Envoy, is a key differentiator for the service mesh, and one of the fundamental philosophical differences Morgan speaks of. “Naturally, as engineers, what you want to do when you’re building infrastructure is, you want to solve every possible problem with this beautiful platform that can do all things for all people,” Morgan said. “I think when you go down that path, which feels very natural to an engineer, you end up with something that is really unwieldy, and that’s complex, and that is fundamentally unsatisfying. It sounds great, but it’s so hard to operate that you never accomplish your goals.” Part of the balancing act, said Morgan, is to deliver all the features of the service mesh around reliability, security, and observability, “without getting mired in all the complexity, without having to hire a team of developers or a team of engineers, service mesh experts, just to run your service mesh.” In the past year, Linkerd has seen a 300% increase in downloads, and part of that acceleration may be attributed to a migration away from Istio due to its complexity. Rather than focusing on migration away from Istio, which he says some users may end up using simply because they see it first, Morgan again points to Linkerd’s simplicity as the reason behind its increased adoption. “In the absence of having these marketing bullhorns, these huge marketing budgets, the way that Linkerd has grown has been by word of mouth,” said Morgan. “It’s been like the way that open source projects used to grow. The way that we’ve been able to accomplish that is by having a really clear vision and a really clear message around simplicity.” Another key architectural decision made around simplicity was to focus Linkerd on Kubernetes. An earlier version, said Morgan, was built to work with Mesos, Zookeeper, Kubernetes and others, and the team instead decided to go with the “lowest common denominator,” which was Kubernetes. Linkerd’s decision to go with the Rust programming language, rather than Go, C, or C++, was another distinction for the service mesh in its evolution, and one Morgan stands behind. “It was a scary choice, but we did that because we felt that the future of the service mesh, and in fact the future of all cloud native technology, really has to be built in Rust,” he said. “There’s no reason for us, in 2021, to ever write code in C++ or in C anymore.
That was a pretty scary, risky, controversial decision at the time, but it’s paid off because now we have the adoption to kind of show it off.” While Morgan calls the project’s CNCF graduation “a nice moment for us to reflect and to be grateful for all the people around the world who worked so hard to get Linkerd to this point,” he says that there is a long roadmap ahead, which includes things like server and client-side policies, and “mesh expansion” to allow the Linkerd data plane to operate outside of Kubernetes. But when your focus is on simplicity, where do you draw the line on additional features? Morgan said that, as a project designer, you have to ask yourself some questions. “What is the maximum number of those problems that I can solve, and then the rest, we’re just not going to solve? Like, that’s the stopping point,” said Morgan. “There are going to be use cases that Linkerd is just not going to solve, and that’s okay. For those folks, I do actually sometimes tell people to use Istio. There’s a set of things that Istio can do, super complicated situations, where I just don’t want Linkerd to be able to solve that because it would be too complicated.”

Decentralized Chat: Matrix Offers Red Pill to Slack Users

One of the most interesting internet trends of 2021 is the experimentation going on with decentralized technologies. We’re seeing a blossoming of open source, decentralized internet applications — many of them attempting to provide alternatives to big tech products. Privacy breaches, misinformation, black-box algorithms, lack of user control — these are just some of the problems inherent in the proprietary, centralized social media and communications products of Facebook, Twitter, Apple, Google and others. The question is: can decentralized applications be a panacea? Richard MacManus is senior editor at The New Stack and writes a weekly column about web and application development trends. Previously he founded ReadWriteWeb in 2003 and built it into one of the world’s most influential technology news and analysis sites. In today’s column, I look at an emerging decentralized, open standard for real-time communications: Matrix, defined as “an open standard for interoperable, decentralized, real-time communication over IP.” Products built on top of Matrix could provide an alternative to commercial instant messaging products like Slack or WhatsApp.

Cloud Foundry HTTP/2 Support Thwarted by GoLang Indifference

A Gorouter reverse proxy removes headers that would let a Cloud Foundry application know it can send and receive HTTP/2 traffic. Such a capability could be coded in, bypassing the Go language library entirely, but the project team doesn’t want to take on the responsibility of supporting such a potentially widely used function. … spoke about this challenge at this year’s virtual

Turbocharging AKS Networking with Calico eBPF

Reza Ramezanpour is a developer advocate at Tigera, working to promote adoption of Project Calico. Before joining Tigera, Reza worked as a systems engineer and network administrator. A single Kubernetes cluster expends a small percentage of its total available assigned resources on delivering in-cluster networking. We don’t have to be satisfied with this, though; achieving the lowest possible overhead can provide significant cost savings and performance improvements if you are running network-intensive workloads. This article explores and explains the improvements that can be achieved in Microsoft Azure using Calico’s eBPF data plane; note that the standard instructions for installing Calico’s network policy engine with AKS use a version of Calico that predates eBPF mode. Test methodology: To show how Calico accelerates AKS network performance using eBPF, the Calico team ran a series of network Continue reading

Real-Time Data Access Across Highly Distributed Environments

The goal is straightforward, but getting there has proven to be a challenge: how to offer real- or near real-time access to data that is continually refreshed on an as-needed basis across a number of different distributed environments. As systems of data and their locations proliferate across different network environments — including multicloud, on-premises and, in many cases, multiple geographic zones — organizations can struggle to maintain low-latency connections to the data their applications require. The challenges are especially apparent when users increasingly demand that their experiences, which are often transactional, happen in near or real time and require data-intensive backend support. Many organizations continue to struggle with maintaining and relying on data streaming and other approaches, such as so-called “speed layers” backed by in-memory caches, to keep low-latency connections between multicloud and on-premises environments. In this article, we describe the different components necessary to maintain asynchronously updated data sources, consisting of different systems of record, for which real-time access is essential to the end-user experience. For the CIO, the challenge is ensuring that applications have low-latency access to data that is often dispersed across a number of highly distributed Continue reading
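As a rough sketch of the “speed layer” idea mentioned above, a read-through cache sitting in front of a slower, remote system of record, the following self-contained Go example illustrates the pattern. The cache structure, TTL and fetch function are hypothetical stand-ins rather than any particular product’s API.

```go
// Minimal sketch of a "speed layer": a read-through, in-memory cache in
// front of a slower, remote system of record. All names here are
// illustrative; a real deployment would use a distributed cache and
// asynchronous invalidation or change-data-capture to refresh entries.
package main

import (
	"fmt"
	"sync"
	"time"
)

type entry struct {
	value   string
	expires time.Time
}

type speedLayer struct {
	mu    sync.Mutex
	ttl   time.Duration
	cache map[string]entry
	// fetch pulls a value from the remote system of record.
	fetch func(key string) (string, error)
}

// Get serves from local memory when possible and falls back to the
// system of record, refreshing the cached copy on a miss or expiry.
func (s *speedLayer) Get(key string) (string, error) {
	s.mu.Lock()
	e, ok := s.cache[key]
	s.mu.Unlock()
	if ok && time.Now().Before(e.expires) {
		return e.value, nil // cache hit: low-latency path
	}
	v, err := s.fetch(key) // cache miss: go to the system of record
	if err != nil {
		return "", err
	}
	s.mu.Lock()
	s.cache[key] = entry{value: v, expires: time.Now().Add(s.ttl)}
	s.mu.Unlock()
	return v, nil
}

func main() {
	sl := &speedLayer{
		ttl:   30 * time.Second,
		cache: map[string]entry{},
		fetch: func(key string) (string, error) {
			time.Sleep(50 * time.Millisecond) // simulate a WAN round trip
			return "value-for-" + key, nil
		},
	}
	fmt.Println(sl.Get("order:42")) // miss: hits the system of record
	fmt.Println(sl.Get("order:42")) // hit: served from the speed layer
}
```

The trade-off the sketch makes visible is the TTL: a longer TTL keeps more reads on the low-latency path but widens the window during which the cached copy can lag the system of record.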

CNCF Projects Bring Service Mesh Interoperability, Benchmarks

Both the Meshery and Service Mesh Performance (SMP) projects joined the Cloud Native Computing Foundation (CNCF) earlier this month at the Sandbox level. Meshery is a multiservice mesh management plane offering lifecycle, configuration, and performance management of service meshes and their workloads, while SMP is a standard for capturing and characterizing the details of infrastructure capacity, service mesh configuration, and workload metadata. When the projects first applied in April for inclusion, the Technical Oversight Committee (TOC) had one clarifying question for them: should they be combined with or aligned in some manner with a related project? Meshery “verifies that, in fact, it is a certain kind of a service mesh,” said Lee Calcote, founder of Layer5. “So all in one Continue reading

Infoblox: How DDI Can Help Solve Network Security and Management Ills 

Network connections can be likened to attending an amusement park, where the Dynamic Host Configuration Protocol (DHCP) serves as the ticket to enter the park and the Domain Name System (DNS) is the map of the park. Network management and security provider Infoblox made a name for itself by collapsing those two core pieces into a single platform that lets enterprises control where IP addresses are assigned and how they manage network creation and movement. “They control their own DNS so that they can have better control over their traffic,” explained … Infoblox’s name for this unified service is DDI, which is

Scuttlebutt: Decentralize and Escape the Social Media Rat Race

Richard MacManus is senior editor at The New Stack and writes a weekly column about web and application development trends. Previously he founded ReadWriteWeb in 2003 and built it into one of the world’s most influential technology news and analysis sites. When Twitter began imposing …, Diaspora — a kind of decentralized Facebook — was founded by four New York students. Later, in 2017, a federated social network saw a surge of popularity. Now, in 2021, there is a growing underground project called Scuttlebutt, along with a mobile app built on it called Manyverse. Scuttlebutt was created by Dominic Tarr, a New Zealander who lived on a boat and had sporadic internet coverage. Tarr’s lifestyle (which, Continue reading
