In the previous post, we covered VPC Peering, which is a quick and easy way to create a connection between two VPCs. We also discussed its limitations, primarily that it is non-transitive. This means if VPC 'A' is peered with VPC 'B', and VPC 'B' is peered with VPC 'C', VPC 'A' cannot communicate with VPC 'C' through VPC 'B'. Because of this, to connect multiple VPCs together, you need to create a full mesh, where every VPC has a direct peering connection to every other VPC.
This complexity (when you have many VPCs) is why, in this post, we will look at AWS Transit Gateway (TGW). A Transit Gateway is an incredibly important networking resource in AWS that solves these scaling challenges. You will see the TGW featured in many modern AWS architecture diagrams because of the flexibility and simplicity it provides.
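To make the hub-and-spoke idea concrete before diving in, here is a minimal boto3 (Python) sketch: create a Transit Gateway, attach two VPCs to it, and point each VPC's route table at the TGW. All IDs and CIDRs are placeholders rather than values from this series, and waits for the attachments to become available (plus error handling) are omitted.

```python
# Minimal sketch: hub-and-spoke VPC connectivity with a Transit Gateway (boto3).
# All IDs and CIDRs below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Create the Transit Gateway (the hub).
tgw = ec2.create_transit_gateway(Description="demo-tgw")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# 2. Attach each spoke VPC to the TGW via one of its subnets.
vpcs = [
    {"VpcId": "vpc-aaaa1111", "SubnetId": "subnet-aaaa1111",
     "RouteTableId": "rtb-aaaa1111", "PeerCidr": "10.1.0.0/16"},
    {"VpcId": "vpc-bbbb2222", "SubnetId": "subnet-bbbb2222",
     "RouteTableId": "rtb-bbbb2222", "PeerCidr": "10.0.0.0/16"},
]
for vpc in vpcs:
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId=vpc["VpcId"],
        SubnetIds=[vpc["SubnetId"]],
    )

# 3. In each VPC route table, send traffic for the other VPC to the TGW.
#    (In practice, wait for the attachments to become 'available' first.)
for vpc in vpcs:
    ec2.create_route(
        RouteTableId=vpc["RouteTableId"],
        DestinationCidrBlock=vpc["PeerCidr"],
        TransitGatewayId=tgw_id,
    )
```

Notice the scaling difference: with a TGW, each additional VPC needs only one attachment and one route, instead of a separate peering connection to every other VPC.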
As always, if you find this post helpful, press the ‘clap’ button. It means a lot to me and helps me know you enjoy Continue reading
Welcome back to the AWS Networking series. So far, we have covered a wide range of foundational topics. We started with the basics of building a VPC, creating subnets, configuring route tables, and providing Internet access with an Internet Gateway and a NAT Gateway. We then looked at the difference between stateful Security Groups attached to an instance's ENI and stateless Network ACLs applied at the subnet level. Most recently, we covered how to build a hybrid network using a Site-to-Site VPN.
In this post, we will continue to expand on VPC connectivity by looking at what AWS VPC Peering is and how to configure one.
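The post itself walks through the console workflow; as a rough preview of the same steps, here is an illustrative boto3 sketch with placeholder IDs and CIDRs.

```python
# Rough sketch: peering two VPCs with boto3. IDs and CIDRs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Request a peering connection from the requester VPC to the accepter VPC.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-aaaa1111",      # requester VPC (10.0.0.0/16 in this sketch)
    PeerVpcId="vpc-bbbb2222",  # accepter VPC (10.1.0.0/16 in this sketch)
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# 2. Accept the request (done by the owner of the accepter VPC).
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# 3. Add a route in each VPC's route table pointing at the peering connection,
#    with the *other* VPC's CIDR as the destination.
ec2.create_route(RouteTableId="rtb-aaaa1111",
                 DestinationCidrBlock="10.1.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId="rtb-bbbb2222",
                 DestinationCidrBlock="10.0.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
```

Keep in mind that, as covered earlier in the series, Security Groups and Network ACLs still have to permit the traffic; the peering connection and routes alone are not enough.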
If you are completely new to AWS networking, I highly recommend checking out our introductory posts linked below. However, if you are already familiar with the basics, you can carry on with this post.
As always, if you find this post helpful, press the ‘clap’ button. Continue reading
So far in the AWS Networking series, we have covered VPCs, subnets, route tables, Internet Gateways, NAT Gateways, EC2 instances, Security Groups, Network ACLs, and Elastic Network Interfaces. In this post, we will look at using a Site-to-Site VPN in AWS to securely connect your on-premise workloads to your AWS environment. This is a very important aspect of AWS networking, and a service you will almost always end up using.
If you have been following the series, you can easily follow along with this post. If you just stumbled upon this post, you can still continue, assuming you are already familiar with AWS networking basics. However, if you are completely new to AWS, I highly recommend checking out the previous posts linked below.
When we launch an instance in a public subnet with a public IP address, we have seen that we can connect to Continue reading
Are you stressed? Everyone in IT seems to be continuously stressed–but what can we do about it? Sonia Cuff joins the Hedge to talk about stress.
From time to time we like to repost episodes of significance–this week we’re reposting episode 1.
I have a PA-440 in my home lab and was happily running PAN-OS 10.2.10-h9. But with the recent announcement that PAN-OS 10.2 will enter limited support from 26th August 2025, I decided it was time to upgrade. I was deciding between 11.1 and 11.2 for a while, but after reading through a few forums and discussions, I ended up choosing 11.2, specifically PAN-OS 11.2.4-h7.
Since I was already on 10.2, I could upgrade directly to 11.2 without going through any intermediate versions. As per the upgrade guide, all I had to do was download the 11.2.0 base image, then download and install 11.2.4-h7.
After downloading both the base image and the target image, just click 'Install' on the target image. As usual, make sure to take a backup before starting. If you’re running in HA, you can upgrade the firewalls one at a time without any downtime.
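If you would rather script the same steps than click through the GUI, something along these lines should work against the PAN-OS XML API. The hostname and credentials are placeholders, and the cmd XML strings are my assumption based on how CLI commands map onto the API, so verify them (for example via the API browser at https://<firewall>/api) before relying on them.

```python
# Rough sketch: driving the PAN-OS software download/install via the XML API.
# Hostname, credentials, and the cmd XML strings are assumptions -- verify them
# against the XML API browser for your release before use.
import requests
import xml.etree.ElementTree as ET

FW = "https://192.0.2.10"   # placeholder firewall address
VERIFY_TLS = False          # lab firewall with a self-signed certificate

# 1. Fetch an API key with the keygen endpoint.
resp = requests.get(
    f"{FW}/api/",
    params={"type": "keygen", "user": "admin", "password": "changeme"},
    verify=VERIFY_TLS,
)
api_key = ET.fromstring(resp.text).find(".//key").text

def op(cmd_xml: str) -> str:
    """Run an operational command and return the raw XML response."""
    r = requests.get(
        f"{FW}/api/",
        params={"type": "op", "cmd": cmd_xml, "key": api_key},
        verify=VERIFY_TLS,
    )
    return r.text

# 2. Download the 11.2.0 base image, then download and install 11.2.4-h7.
#    Each call returns a job ID; in practice you would poll the job
#    before moving on to the next step.
print(op("<request><system><software><download><version>11.2.0</version></download></software></system></request>"))
print(op("<request><system><software><download><version>11.2.4-h7</version></download></software></system></request>"))
print(op("<request><system><software><install><version>11.2.4-h7</version></install></software></system></request>"))

# 3. Reboot once the install job completes (uncomment when ready).
# print(op("<request><restart><system></system></restart></request>"))
```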
The whole process took about 10 to 15 minutes, and now I'm running 11.2.4-h7. If I come across any issues, I'll be sure to update this post.
Martin Fowler published an interesting article about Expert Generalists. Straight from the abstract:
As computer systems get more sophisticated we’ve seen a growing trend to value deep specialists. But we’ve found that our most effective colleagues have a skill in spanning many specialties.
Also:
There are two sides to real expertise. The first is the familiar depth: a detailed command of one domain’s inner workings. The second, crucial in our fast-moving field, is the ability to learn quickly, spot the fundamentals that run beneath shifting tools and trends, and apply them wherever we land.
Remember how I told you to focus on the fundamentals? 😎
As Kubernetes environments grow in scale and complexity, platform teams face increasing pressure to secure workloads without slowing down application delivery. But managing and enforcing network policies in Kubernetes is notoriously difficult—especially when visibility into pod-to-pod communication is limited or nonexistent. Teams are often forced to rely on manual traffic inspection, standalone logs, or trial-and-error policy changes, increasing the risk of misconfiguration and service disruption. Safe policy management and microsegmentation become daunting tasks without clear insight into which services should communicate with each other.
In this detailed look, we’ll explore how Calico Cloud Free Tier builds on Calico Open Source to help platform teams visualize traffic with a dynamic service graph, simplify policy management, and even analyze actual traffic to recommend policies.
Calico Cloud Free Tier is a no-cost, managed SaaS offering that extends the capabilities of Calico Open Source 3.30 and higher to help Kubernetes teams improve network visibility, simplify policy management, and strengthen security through microsegmentation. Designed for single-cluster environments, it provides platform engineers and operators with powerful observability and policy management tools. With a seamless onboarding experience for users already Continue reading
It has taken the better part of a year and a half and some wrangling with the US Department of Justice to get it done, but Hewlett Packard Enterprise has finally completed its $14 billion acquisition of Juniper Networks. …
How Will Juniper Change HPE’s Datacenter Networking Strategy? was written by Timothy Prickett Morgan at The Next Platform.
In this blog post, we'll look at how to create a site-to-site VPN between AWS and a Palo Alto firewall. We'll go through both static routing and BGP options. This post assumes you're already somewhat familiar with AWS and Palo Alto, so we won't cover the basics like creating a VPC in AWS or setting up zones and policies on the firewall.
To create a VPN connection, you first need a compatible IPsec VPN device, like a firewall or router, at your on-premise location. In AWS, the resource you create to represent this device is called a Customer Gateway. In our example, the customer gateway is the Palo Alto firewall.
To send traffic from your VPC to your on-premise network, you route it to a Virtual Private Gateway (VGW). The VGW is a logical, redundant resource on the AWS side of the connection that you attach to your VPC. It serves as the target in your Continue reading
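As a rough sketch of the AWS-side objects described so far, the boto3 calls below create a Customer Gateway for the Palo Alto's public IP, a Virtual Private Gateway attached to the VPC, and a static-route VPN connection between them. Every IP, ASN, and ID is a placeholder; for the BGP option the post also covers, you would omit StaticRoutesOnly and let the ASNs exchange routes instead.

```python
# Rough sketch of the AWS side of a Site-to-Site VPN (static routing).
# Public IP, ASN, VPC ID, and CIDR below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Customer Gateway: represents the on-premise Palo Alto firewall.
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.10",   # the firewall's public IP
    BgpAsn=65000,              # required even when using static routing
)["CustomerGateway"]["CustomerGatewayId"]

# 2. Virtual Private Gateway: the AWS side, attached to the VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]["VpnGatewayId"]
ec2.attach_vpn_gateway(VpcId="vpc-aaaa1111", VpnGatewayId=vgw)

# 3. The VPN connection itself, using static routes instead of BGP.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw,
    VpnGatewayId=vgw,
    Options={"StaticRoutesOnly": True},
)["VpnConnection"]["VpnConnectionId"]

# 4. Tell AWS which on-premise prefix lives behind the tunnel.
ec2.create_vpn_connection_route(VpnConnectionId=vpn,
                                DestinationCidrBlock="192.168.0.0/24")

# You would also enable route propagation (or add static routes to the VGW)
# in the VPC route tables so return traffic reaches the tunnel.
```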
Have you ever managed to type reload in the wrong terminal window and bring down a core switch (I probably did)? I managed to do the Ubuntu equivalent of that stupidity: I told my main Ubuntu server to sudo poweroff instead of doing that to a Vagrant VM.
Fortunately, the open-source world doesn’t have to rely on the roadmaps created by networking vendors’ product managers; if there’s a big enough pain, someone will solve it.
The advent of cloud-native applications in the 2025 era (CRM, SaaS, storage, and ERP apps) and the public cloud has driven a re-architecture of traditional WANs, extending the familiar Ethernet and IP foundations across cloud boundaries. Arista has been the thought leader and pioneer of this leaf-spine cloud network for data centers, and the same principles that have served our customers now extend seamlessly to the WAN and to inter-data-center connectivity. The distribution of applications across AI, cloud, SaaS, edge, and enterprise environments creates new challenges for wide-area networking architecture and Internet routing as branch and WAN networks are refined.
Almost 30 years ago, two graduate students at Stanford University — Larry Page and Sergey Brin — began working on a research project they called Backrub. That, of course, was the project that resulted in Google. But also something more: it created the business model for the web.
The deal that Google made with content creators was simple: let us copy your content for search, and we'll send you traffic. You, as a content creator, could then derive value from that traffic in one of three ways: running ads against it, selling subscriptions for it, or just getting the pleasure of knowing that someone was consuming your stuff.
Google facilitated all of this. Search generated traffic. They acquired DoubleClick and built AdSense to help content creators serve ads, and they acquired Urchin to launch Google Analytics, letting you measure just who was viewing your content at any given moment.
For nearly thirty years, that relationship was what defined the web and allowed it to flourish.
But that relationship is changing. For the first time in its history, the number of searches run on Google is declining. What's taking its place? AI.
If you're like me, you've been amazed Continue reading
As a site owner, how do you know which bots to allow on your site, and which you’d like to block? Existing identification methods rely on a combination of IP address range (which may be shared by other services, or change over time) and user-agent header (easily spoofable). These have limitations and deficiencies. In our last blog post, we proposed using HTTP Message Signatures: a way for developers of bots, agents, and crawlers to clearly identify themselves by cryptographically signing requests originating from their service.
Since we published the blog post on Message Signatures and the IETF draft for Web Bot Auth in May 2025, we’ve seen significant interest around implementing and deploying Message Signatures at scale. It’s clear that well-intentioned bot owners want a clear way to identify their bots to site owners, and site owners want a clear way to identify and manage bot traffic. Both parties seem to agree that deploying cryptography for the purposes of authentication is the right solution.
Today, we’re announcing that we’re integrating HTTP Message Signatures directly into our Verified Bots Program. This announcement has two main parts: (1) for bots, crawlers, and agents, we’re simplifying enrollment into the Verified Continue reading
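To give a flavour of what signing a request looks like, here is a minimal Python sketch of an RFC 9421-style HTTP Message Signature using an Ed25519 key. The covered components, key ID, and timestamp are illustrative assumptions, not Cloudflare's exact Web Bot Auth profile, and a real deployment also needs a way for the verifier to discover the bot's public key.

```python
# Minimal sketch of an RFC 9421-style HTTP Message Signature with Ed25519.
# Covered components, key id, and timestamp are illustrative assumptions.
import base64
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # a real bot uses a stable, published key pair

created = 1735689600          # example Unix timestamp
key_id = "my-bot-key-1"       # hypothetical key identifier

# Parameters describing what is signed, shared by the signature base and the header.
signature_params = f'("@authority" "@method");created={created};keyid="{key_id}"'

# The signature base: one line per covered component, then @signature-params.
signature_base = (
    '"@authority": example.com\n'
    '"@method": GET\n'
    f'"@signature-params": {signature_params}'
)

signature = private_key.sign(signature_base.encode("ascii"))

# Headers the bot attaches to its request so the site can verify who sent it.
headers = {
    "Signature-Input": f"sig1={signature_params}",
    "Signature": f"sig1=:{base64.b64encode(signature).decode()}:",
}
print(headers)
```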
Web crawlers are not new. The World Wide Web Wanderer debuted in 1993, though the first web search engines to truly use crawlers and indexers were JumpStation and WebCrawler. Crawlers underpin one of the backbones of the Internet’s success: search. Their main purpose has been to index the content of websites across the Internet so that those websites can appear in search engine results and direct users appropriately. In this blog post, we’re analyzing recent trends in web crawling, which now has a crucial and complex new role with the rise of AI.
Not all crawlers are the same. Bots, automated scripts that perform tasks across the Internet, come in many forms: those considered non-threatening or “good” (such as API clients, search indexing bots like Googlebot, or health checkers) and those considered malicious or “bad” (like those used for credential stuffing, spam, or scraping content without permission). In fact, according to Cloudflare Radar data, around 30% of global web traffic today comes from bots, and in some locations bot traffic even exceeds human Internet traffic.
A new category, AI crawlers, has emerged in recent years. These bots collect data from across the web to train Continue reading
Many publishers, content creators and website owners currently feel like they have a binary choice — either leave the front door wide open for AI to consume everything they create, or create their own walled garden. But what if there was another way?
At Cloudflare, we started from a simple principle: we wanted content creators to have control over who accesses their work. If a creator wants to block all AI crawlers from their content, they should be able to do so. If a creator wants to allow some or all AI crawlers full access to their content for free, they should be able to do that, too. Creators should be in the driver’s seat.
After hundreds of conversations with news organizations, publishers, and large-scale social media platforms, we heard a consistent desire for a third path: They’d like to allow AI crawlers to access their content, but they’d like to get compensated. Currently, that requires knowing the right individual and striking a one-off deal, which is an insurmountable challenge if you don’t have scale and leverage.
We believe your choice need not be binary — Continue reading
Cloudflare is giving all website owners two new tools to easily control whether AI bots are allowed to access their content for model training. First, customers can let Cloudflare create and manage a robots.txt file, creating the appropriate entries to let crawlers know not to access their site for AI training. Second, all customers can choose a new option to block AI bots only on portions of their site that are monetized through ads.
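For context, the entries involved look roughly like the snippet below. GPTBot and Google-Extended are examples of published AI-training user-agent tokens; the exact set of agents and directives that Cloudflare's managed robots.txt writes may differ.

```
# Leave ordinary search crawlers alone, but tell AI-training crawlers to stay away.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```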
Creators that monetize their content by showing ads depend on traffic volume. Their livelihood is directly linked to the number of views their content receives. These creators have allowed crawlers on their sites for decades, for a simple reason: search crawlers such as Googlebot made their sites more discoverable, and drove more traffic to their content. Google benefitted from delivering better search results to their customers, and the site owners also benefitted through increased views, and therefore increased revenues.
But recently, a new generation of crawlers has appeared: bots that crawl sites to gather data for training AI models. While these crawlers operate in the same technical way as search crawlers, the relationship is no longer symbiotic. AI Continue reading
Content publishers welcomed crawlers and bots from search engines because they helped drive traffic to their sites. The crawlers would see what was published on the site and surface that material to users searching for it. Site owners could monetize their material because those users still needed to click through to the page to access anything beyond a short title.
Artificial Intelligence (AI) bots also crawl the content of a site, but with an entirely different delivery model. The Large Language Models (LLMs) they feed do their best to read the web to train a system that can repackage that content for the user, without the user ever needing to visit the original publication.
The AI applications might still try to cite the content, but we’ve found that very few users actually click through relative to how often the AI bot scrapes a given website. We have discussed this challenge in smaller settings, and today we are excited to publish our findings as a new metric shown on the AI Insights page on Cloudflare Radar.
Visitors to Cloudflare Radar can now review how often a given AI model sends traffic to a site relative to how often it crawls that site. We Continue reading