In January 2021, we gave you a behind-the-scenes look at how we built Waiting Room on Cloudflare’s Durable Objects. Today, we are thrilled to announce the launch of Waiting Room Analytics and tell you more about how we built this feature. Waiting Room Analytics offers insights into end-user experience and provides visualizations of your waiting room traffic. These new metrics enable you to make well-informed configuration decisions, ensuring an optimal end-user experience while protecting your site from overwhelming traffic spikes.
If you’ve ever bought tickets for a popular concert online, you’ll likely have been put in a virtual queue. That’s what Waiting Room provides. It keeps your site up and running in the face of overwhelming traffic surges. Waiting Room sends excess visitors to a customizable virtual waiting room and admits them to your site as spots become available.
While customers have come to rely on the protection Waiting Room provides against traffic surges, they have faced challenges analyzing their waiting room’s performance and impact on end-user flow. Without feedback on how waiting room traffic relates to waiting room settings, it was difficult to make informed Waiting Room configuration decisions.
Up until now, customers could only monitor their waiting room's…
Distributed systems are complicated. Add networking to the mix, and you get traumatic challenges like the CAP theorem and Byzantine fault tolerance. Most of those challenges are unknown to engineers who have to suffer through the vendor marketing presentations, making it hard to determine whether the latest shiny gizmo works outside of PowerPoint.
I started collecting articles describing distributed-system gotchas years ago, wrote numerous blog posts on the topic in the heyday of the SDN Will Save the World lemming run, and organized them into the Distributed Systems Resources page.
Today, cloud platform engineers face new challenges when running cloud-native applications. These applications are designed, deployed, maintained, and monitored differently from the traditional monolithic applications they are used to working with.
Cloud-native applications are designed and built to exploit the scale, elasticity, resiliency, and flexibility the cloud provides. They are groups of microservices that run in containers within a Kubernetes cluster and all talk to each other. It can quickly become overwhelming for any cloud engineer to understand and visualize such an environment.
Visualizing Kubernetes network traffic and service dependencies is challenging because of the dynamic and distributed nature of Kubernetes environments. Frequent scaling of pods, creation and deletion of services, and constantly changing network connections make it difficult to capture an accurate, up-to-date representation of network traffic and service dependencies.
Additionally, Kubernetes networking involves multiple components, such as pods, services, and network policies, which further complicates visualizing traffic flow and understanding service dependencies. The microservices architecture typical of Kubernetes applications, where an application consists of many interconnected services, adds to this complexity, particularly as the number of services grows.
Moreover, the scalability…
In June 2022, after the publication of a set of HTTP-related Internet standards, including the RFC that formally defined HTTP/3, we published HTTP RFCs have evolved: A Cloudflare view of HTTP usage trends. One year on, as the RFC reaches its first birthday, we thought it would be interesting to look back at how these trends have evolved over the last year.
Our previous post reviewed usage trends for HTTP/1.1, HTTP/2, and HTTP/3 observed across Cloudflare’s network between May 2021 and May 2022, broken out by version and browser family, as well as for search engine indexing and social media bots. At the time, we found that browser-driven traffic was overwhelmingly using HTTP/2, although HTTP/3 usage was showing signs of growth. Search and social bots were mixed in terms of preference for HTTP/1.1 vs. HTTP/2, with little-to-no HTTP/3 usage seen.
Between May 2022 and May 2023, we found that HTTP/3 usage in browser-retrieved content continued to grow, but that search engine indexing and social media bots continued to effectively ignore the latest version of the web’s core protocol. (Having said that, the benefits of HTTP/3 are very user-centric, and arguably offer minimal benefits to…)
On today’s Heavy Networking we have a conversation about monitoring, visibility, and observability with sponsor Palo Alto Networks. More specifically, we’ll dig into Palo Alto Networks’ Autonomous Digital Experience Management (ADEM) product and how Palo Alto Networks is integrating ADEM with AIOps.
The post Heavy Networking 683: Palo Alto Networks Integrates AIOps Into ADEM For Faster Remediation (Sponsored) appeared first on Packet Pushers.
To achieve active/active redundancy for a Network Virtual Appliance (NVA) in a Hub-and-Spoke VNet design, we can utilize an Internal Load Balancer (ILB) to enable Spoke-to-Spoke traffic.
Figure 5-1 illustrates our example topology, which consists of a vnet-hub and spoke VNets. The ILB is associated with the subnet 10.0.1.0/24, where we allocate a Frontend IP address (FIP) using dynamic or static methods. Unlike with a public load balancer's inbound rules, we can choose the High-Availability (HA) ports option to load balance all TCP and UDP flows. The backend pool and health probe configurations remain the same as those used with a Public Load Balancer (PLB).
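A minimal Azure CLI sketch of such an ILB is shown below. The resource group and resource names (rg-hub, ilb-nva, snet-nva, and the TCP/22 health probe) are illustrative assumptions; the 10.0.1.0/24 frontend subnet and the 10.0.1.6 frontend IP follow the topology described here.

```bash
# Internal Standard Load Balancer with a static frontend IP in the hub subnet 10.0.1.0/24
az network lb create \
  --resource-group rg-hub --name ilb-nva --sku Standard \
  --vnet-name vnet-hub --subnet snet-nva \
  --frontend-ip-name fip-nva --private-ip-address 10.0.1.6 \
  --backend-pool-name bepool-nva

# Health probe that determines which NVA instances receive traffic (TCP/22 as an example)
az network lb probe create \
  --resource-group rg-hub --lb-name ilb-nva \
  --name probe-nva --protocol Tcp --port 22

# HA-ports rule: protocol "All" with frontend/backend port 0 load balances every TCP and UDP flow
az network lb rule create \
  --resource-group rg-hub --lb-name ilb-nva --name rule-ha-ports \
  --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name fip-nva --backend-pool-name bepool-nva \
  --probe-name probe-nva
```

The NVA NICs still have to be added to bepool-nva (for example with az network nic ip-config address-pool add) before the rule forwards any traffic.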
From the NVA perspective, the configuration is straightforward. We enable IP forwarding in the Linux kernel and on the virtual NIC, but not pre-routing (destination NAT). We can add post-routing policies (source NAT) if we want to hide real IP addresses or if symmetric traffic paths are required. To route egress traffic from spoke sites to the NVAs via the ILB, we create subnet-specific route tables in the spoke VNets. The "rt-spoke1" route table has the entry "10.2.0.0/24 > 10.0.1.6 (ILB)" rather than a default route because vm-prod-1 has a public IP address used for external access; if we set a default route, as we do for the subnet 10.2.0.0/24 in "vnet-spoke2", that external connection would fail.
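Below is a hedged sketch of the remaining plumbing under the same assumed names: IP forwarding on the NVA (kernel and vNIC), the optional post-routing source NAT, and the rt-spoke1 route table entry that points spoke2-bound traffic at the ILB frontend. Subnet and NIC names (snet-workload, nic-nva1) are hypothetical.

```bash
# On each NVA: enable IP forwarding in the Linux kernel
sudo sysctl -w net.ipv4.ip_forward=1
# Optional post-routing source NAT, only if address hiding or forced path symmetry is needed
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Enable IP forwarding on the NVA's virtual NIC so Azure delivers transit traffic to it
az network nic update --resource-group rg-hub --name nic-nva1 --ip-forwarding true

# rt-spoke1: send spoke2-bound traffic (10.2.0.0/24) to the ILB frontend 10.0.1.6
az network route-table create --resource-group rg-spoke1 --name rt-spoke1
az network route-table route create \
  --resource-group rg-spoke1 --route-table-name rt-spoke1 --name to-spoke2 \
  --address-prefix 10.2.0.0/24 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.1.6

# Associate the route table with the workload subnet in vnet-spoke1
az network vnet subnet update \
  --resource-group rg-spoke1 --vnet-name vnet-spoke1 --name snet-workload \
  --route-table rt-spoke1
```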
EIGRP routing updates have always contained the next hop field (similar to BGP updates), which was unused until Cisco IOS release 12.3 when the no ip next-hop-self eigrp AS-number interface configuration command was implemented.
EIGRP does not set the next hop field by default. An EIGRP router receiving a routing update thus assumes that the next hop of the received routes is the sending router. This behavior usually works well, but it prevents site-to-site shortcuts from being established in DMVPN networks and results in suboptimal routing in some route redistribution scenarios.
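As an illustration of where this matters, a DMVPN hub would typically disable next-hop-self (and split horizon) on its multipoint tunnel interface so that EIGRP advertises the originating spoke, not the hub, as the next hop. The addressing, AS number, and interface names below are hypothetical.

```
! DMVPN hub tunnel interface (hypothetical addressing, EIGRP AS 1)
interface Tunnel0
 ip address 10.0.0.1 255.255.255.0
 ! Re-advertise routes received from one spoke to the other spokes
 no ip split-horizon eigrp 1
 ! Keep the originating spoke (not the hub) as the EIGRP next hop,
 ! allowing spoke-to-spoke shortcut paths to be used for data traffic
 no ip next-hop-self eigrp 1
 ip nhrp network-id 1
 ip nhrp map multicast dynamic
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
```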