Archive

Category Archives for "Networking – The New Stack"

Redis Pub/Sub vs. Apache Kafka

Redis is often called the “Swiss Army knife” of databases: it’s best known for caching, but it does even more. It can also function as a loosely coupled distributed message broker, so in this article, we’ll have a look at the original Redis messaging approach, Redis Pub/Sub, explore some use cases and compare it with Apache Kafka. The theme of “pub” pops up frequently in my articles; in a previous article, I wrote about a conversation in an outback pub.
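The core semantic difference at play in this comparison (Redis Pub/Sub's fire-and-forget delivery versus Kafka's retained, replayable log) can be sketched in a few lines of plain Python. This is an illustration of the semantics only, not real Redis or Kafka client code; the class and method names are invented for this sketch.

```python
class PubSubBroker:
    """Redis Pub/Sub-style fire-and-forget: a message reaches only the
    subscribers connected at publish time; nothing is stored for later."""

    def __init__(self):
        self.subscribers = {}  # channel -> list of callbacks

    def subscribe(self, channel, callback):
        self.subscribers.setdefault(channel, []).append(callback)

    def publish(self, channel, message):
        # Delivered now to current subscribers, or dropped entirely.
        for cb in self.subscribers.get(channel, []):
            cb(message)


class Log:
    """Kafka-style append-only log: messages are retained, and consumers
    can replay the stream from any offset at any later time."""

    def __init__(self):
        self.records = []

    def append(self, message):
        self.records.append(message)

    def read_from(self, offset):
        return self.records[offset:]


broker = PubSubBroker()
received = []
broker.publish("news", "missed")       # no subscriber yet: message is lost
broker.subscribe("news", received.append)
broker.publish("news", "seen")
print(received)                        # only the message sent after subscribing

log = Log()
log.append("missed")                   # retained even with no consumer attached
log.append("seen")
print(log.read_from(0))                # a late consumer still sees everything
```

The sketch shows why Pub/Sub suits ephemeral notifications while a retained log suits pipelines that must tolerate slow or late consumers.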

Tailscale: A Virtual Private Network for Zero Trust Security

Problems with VPN security had already emerged well before the pandemic, long before the founders launched their company. Since then, the big jump in remote work sparked by lockdowns has only revealed just how vulnerable VPNs can be. Even enterprise-grade VPNs are riddled with security problems; a Zscaler report found as much. Tailscale co-founders David Crawshaw and Avery Pennarun wanted to give developers a secure, scalable alternative to traditional VPNs. “Our big vision is to help developers be reasonable about scale,” said Pennarun, a former Google engineer. Although Continue reading

10 Criteria to Evaluate Your Cloud Network Security Solution

As organizations expand their cloud adoption and business-critical use cases, securing their cloud infrastructure often becomes more complex. For this reason, analysts and advisors recommend that organizations take a unified, multilayer approach to protect their cloud deployments and ensure a robust cloud security posture. Under the cloud’s shared responsibility model, at the infrastructure layer (IaaS), cloud providers are responsible for securing their compute, network and storage infrastructure resources. This leaves cloud users responsible for protecting the data, apps and other assets deployed on that infrastructure. Cloud providers offer a number of tools and services to help users uphold their end of the shared responsibility model, and they are important elements Continue reading

Real-Time Observability with InfluxDB for BAI Communications

Jason Myers Jason is a technical marketing writer at InfluxData. In public transportation, there’s little room for error when it comes to passenger safety. At the same time, rail operators don’t have bottomless financial resources to oversee their rail systems. The team at BAI Communications in Toronto faced these two diametrically opposed realities. Fortunately, by using their existing network infrastructure, a time-series platform and a sizable helping of ingenuity, the BAI team was able to close the gap between their technical needs and costs. Here’s how they did it. Background BAI Communications is a global company and a leader in providing communications infrastructure, pioneering the future of advanced connectivity and delivering the ubiquitous coverage that can transform lives, power business ambitions and shape the future of our cities. The company focuses on three key verticals: broadcast; neutral host and 5G; and transit. It seeks to enrich lives by connecting communities and advancing economies. BAI manages and operates the networking infrastructure for T-Connect, the wireless network used by the Toronto Transit Commission (TTC). T-Connect averages more than 200,000 sessions daily and more than 5 million per month, from approximately 100,000 unique devices every weekday. The T-Connect network consists of more than 1,000 access Continue reading

Confluent Platform 7.0: Data Streaming Across Multiclouds

The challenge is clear: how to offer real-time or near-real-time access to data that is continually refreshed across a number of different distributed environments. With different types of data streaming from various sources, such as multicloud and on-premises environments, the data, often held in shared digital layers such as so-called digital information hubs (DIHs), must be updated asynchronously. This is necessary to maintain a consistent user experience. To that end, Confluent has built its data streaming platform on Apache Kafka; hundreds of different applications and data systems can use it to migrate to the cloud or to share data between their data center and the public cloud, Confluent says. Traditionally, syncing data between multiple clouds or between on-premises systems and the cloud was “like a bad game of telephone.”

SC21: Fugaku Still Fastest Supercomputer as Exascale Looms

The latest release of the list of the fastest supercomputers in the world showed little movement for an HPC industry that is anxiously waiting for long-discussed exascale systems to come online. Japan’s massive Fugaku system again topped the Top500 list of the world’s fastest systems, a position it first reached in the summer of 2020. The latest list was released this week at the start of the SC21 conference.

Prossimo: Making the Internet Memory Safe

The Internet Security Research Group (ISRG) is best known for the Let’s Encrypt certificate authority, but it has also turned its hand to fixing memory problems. It sponsors, via Google, work on Rust in Linux, in no small part to fix the kernel’s built-in C memory problems. And it also backs a whole project, Rustls, devoted to building a safer, memory-safe TLS library. Memory-safe programs are written in languages that avoid the usual memory errors, such as use-after-free bugs. C, C++ and Assembly, for all their speed, make it all too easy to make these kinds of mistakes. Languages such as Rust, Go and C#, however, Continue reading

How eBPF Streamlines the Service Mesh

There are several service mesh products and projects today promising simplified connectivity between application microservices, while at the same time offering additional capabilities like secured connections, observability and traffic management. But as we’ve seen repeatedly over the last few years, the excitement about service mesh has been tempered by the practical overhead it adds. Sidecar proxies such as Envoy are a major source of that overhead: one engineer wrote about his experiences configuring Istio to reduce memory consumption from around 1GB per proxy (!) to a much more reasonable 60-70MB each. But even in our Continue reading

Create a Monitoring Subnet in Microsoft Azure to Feed a Security Stack

Andy Idsinga Andy Idsinga is a Cloud Engineering Manager and senior Cloud Solutions Architect at cPacket Networks. Andy has been a software engineer and architect since 1994, at Symantec, Intel and other technology companies. He has worked on firmware for smartwatches and RFID transceiver chipsets, and led a team developing a new smart bracelet as part of Intel’s internal startup incubator. He lives in Portland, OR. The 2021 Verizon

Progressive Delivery on OpenShift

Hai Huang Hai is a research scientist at IBM T. J. Watson Research Center. He is a contributing member of Kubernetes, Istio, Iter8 and TPM. We are accustomed to having high expectations of our apps: we want a constant stream of new features and bug fixes, and yet we don’t want these updates to adversely affect our user experience. These expectations put a tremendous amount of pressure on developers. This is where

Google SRE: Site Reliability Engineering at a Global Scale

When DevOps was coined around 2009, its purpose was to break down silos between development and IT operations. DevOps has since become a game of tug-of-war between the reliability needs of the operations team and the velocity goals on the developer side. Site reliability engineering (SRE) became that balancer. As Benjamin Treynor Sloss, designer of Google’s SRE program, puts it: “SRE is what happens when you ask a software engineer to design and run operations.” The SRE team has emerged as the answer to how you can build systems at scale, striking that balance between velocity, maintainability and efficiency. It was only logical, then, to take a look at this year’s books on site reliability engineering. Of course, almost no one outside of Google will work on anything at this scale, but, because increasingly distributed systems are constantly integrating with others, Continue reading

The Advantages and Challenges of Going ‘Edge Native’

As the internet fills every nook and cranny of our lives, it creates greater complexity for developers, operations engineers and the organizations that employ them. How do you reduce latency? How do you comply with the regulations of each region or country where you have a virtual presence? How do you keep data near where it’s actually used? For a growing number of organizations, the answer is to use the edge. In this episode of The New Stack Makers podcast, we were joined by Sheraline Barthelmy, head of product, marketing and customer success for Cox Edge. The edge is composed of servers that are physically located close to the customers who will use them — the “last Continue reading

Reference Architectures and Experience Kits for Cloud Native

Dana Nehama Dana is product management director for Cloud Networks at Intel. She has deep technical experience in the wireless and telecom networking arenas and collaborates with communities on technology initiatives such as SDN/NFV, cloud native, LTE, WiMAX, VoIP, DOCSIS and more. With core network infrastructure on a rapid path to becoming fully virtualized with cloud native practices, it’s critical for systems developers to be able to efficiently design, produce and deploy reliable applications and services from myriad software, networking and hardware components. I’ve been developing networking products for the telecommunications sector for most of my career, starting in Israel and then immigrating to the United States two decades ago. I’ve always had a systems engineering perspective and a passion for helping service providers better understand how they can more easily consume the latest technologies to build their applications and services. In my most recent role, I was faced with the challenge of how to help communication service providers (CoSPs) accelerate the design and deployment of applications and services running on virtualized, multi-vendor solutions tailored for their unique operating environments. These service providers want to take advantage of the latest-generation platforms and open source software innovations. Collaborating with the CNCF Continue reading

Is Network Security Relevant in the Cloud?

Vishal Jain Vishal Jain is the co-founder and CTO of Valtix. Vishal is a seasoned executive who has held engineering leadership roles across many successful startups and big companies in the networking and security space. He was an early member of Andiamo Systems, Nuova Systems and Insieme Networks, all of which were acquired by Cisco Systems. He also led the security engineering team at Akamai and built its live streaming service in the company’s early days. Is Network Security Relevant in the Cloud? Short answers: yes, and no. But the details matter. For the last 15 months, we’ve seen a previously unimaginable acceleration in the use of the cloud and a greater reliance on technology overall, all of which pushes more app efforts to the cloud faster than originally planned. This acceleration brings several discussions to a head, but we’re here to talk about network security (netsec). Within netsec in the cloud, there are a few different ways of segmenting the problem, but where this article will draw the line is between protecting users as they access the cloud and protecting apps deployed into the cloud. The former, protecting users, has seen plenty of investment and innovation and is a relatively well-understood problem. The latter Continue reading

Arrcus Brings Network Automation and API Accessibility to the Edge

Arrcus is a well-funded edge network software startup working to make a name for itself in the expanding multicloud arena. But even as enterprise adoption of multicloud and hybrid cloud strategies continues to rise, chairman and CEO Shekar Ayyar sees the future at the network and compute edge. “Everybody talks about how you can get benefits from large pools of centralized capacity in the public cloud,” said Ayyar, who was announced as chairman and CEO on Sept. 15. “What I feel very, very confident about is that this action is almost passé in terms of the clouds, and it’s moving a lot more into the edge. The pendulum is swinging from consolidated and large data centers doing everything to highly distributed and disaggregated infrastructures doing things at the point of consumption, point of sale, Continue reading

What You Can Learn from the AWS Tokyo Outage

Jason Yee Jason is director of advocacy at Gremlin, where he helps companies build more resilient systems by learning from how they fail. He also helps lead Gremlin’s internal chaos engineering practices to make the company itself more reliable. In the movies, it seems like Tokyo is constantly facing disasters — natural ones in the form of earthquakes and tsunamis, and unnatural ones like giant kaiju and oversized robots. On the morning of Sept. 1, the mechanized behemoth was Amazon Web Services. At around 7:30 am JST, AWS began experiencing networking issues in its AP-Northeast-1 region, based in Tokyo. The outage affected businesses across all sectors, from financial services to retail stores, travel systems and telecommunications. Despite being unable to access money, purchase goods, travel or call one another, the Japanese people demonstrated resilience, proving that at least some things from the movies are true. However, the financial losses due to the outage are expected to be huge. After the six-hour outage, AWS explained the issue

Black Friday Downtime: How to Avoid Impacts on Your Business

Hannah Culver Hannah is a solutions marketer at PagerDuty interested in how real-time urgent work plays out across all industries in this digital era. It’s a brisk Friday morning in November. You’re sipping your coffee and mentally preparing yourself for the day that’ll define your fiscal year. How will you fare this Black Friday? Are your teams prepared? We’ve all seen figures like the one from a 2020 Holiday Shopping Season Report: “The online holiday season exceeded $188B resulting in a strong growth rate of 32% over the 2019 season.” This trend didn’t start with COVID-19, however. A

gRPC: A Deep Dive into the Communication Pattern

Danesh Kuruppu is a technical lead at WSO2, with expertise in microservices, messaging protocols and service governance. Danesh has spearheaded development of Ballerina’s standard libraries, including its gRPC, data and microservices frameworks. He co-authored ‘gRPC: Up and Running,’ published by O’Reilly Media. If you have built gRPC applications and know the communication fundamentals, you may already know that there are four fundamental communication patterns used in gRPC-based applications: simple RPC, server-side streaming, client-side streaming and bidirectional streaming. In this article, I dive deeper into these communication patterns and discuss the importance of each pattern, as well as how to pick the right one according to the use case. Before I discuss each pattern, I’ll cover what they have in common, such as how gRPC sends messages between clients and servers over the network and how request/response messages are structured. gRPC over HTTP/2 According to the official documentation, the gRPC core supports different transport protocols; however, HTTP/2 is the most common among them. In HTTP/2, communication between a client and a server happens over a single TCP connection. Within the connection, there can be multiple bidirectional flows of bytes, which are called streams. In gRPC terms, one RPC call is mapped to Continue reading
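The four patterns named in the excerpt differ only in whether the request side, the response side, or both are declared as streams. A minimal, hypothetical protobuf service definition (the service and message names here are invented for illustration) shows all four side by side:

```proto
syntax = "proto3";

message Request  { string query = 1; }
message Response { string result = 1; }

service Demo {
  // Simple RPC: one request, one response.
  rpc Simple(Request) returns (Response);
  // Server-side streaming: one request, a stream of responses.
  rpc ServerStream(Request) returns (stream Response);
  // Client-side streaming: a stream of requests, one response.
  rpc ClientStream(stream Request) returns (Response);
  // Bidirectional streaming: both sides stream independently.
  rpc BiDi(stream Request) returns (stream Response);
}
```

Whichever pattern is declared, each RPC call still travels as one HTTP/2 stream over the shared TCP connection, which is why the patterns compose cheaply.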

Are ISPs Better Bets to Offer Cloud Computing for the Edge?

Edge computing is getting more attention of late, because there are advantages to having computing power and data storage near the location where they’re needed. As edge computing needs grow, users are likely to take a hard look at whether public cloud giants like AWS and Google are their best choice, or whether their local ISP is better suited for the job. ISPs — including cable, DSL and mobile providers — claim to offer benefits when delivering SaaS and other services compared to public cloud providers: low-latency, high-bandwidth connections, fewer security vulnerabilities, regional regulation compliance and greater data sovereignty. While they must also demonstrate that they can deliver services robust enough to meet DevOps needs, ISPs can offer tremendous benefits and fill gaps in current cloud computing offerings. “A key concern cloud customers have when leveraging their microservices architecture for the applications they offer or rely on is how to achieve and maintain ultra-low latency,” said
