Calculate “1/(40 rods/hogshead) → L/100km” from your Zsh prompt

I often need a quick calculation or a unit conversion. Rather than reaching for a separate tool, a few lines of Zsh configuration turn = into a calculator. Typing = 660km / (2/3)c * 2 -> ms gives me 6.60457 ms without leaving my terminal, thanks to the Zsh line editor.

The equal alias

The main idea looks simple: define = as an alias to a calculator command. I prefer Numbat, a scientific calculator that supports unit conversions. Qalculate is a close second. If neither is available, we fall back to Zsh’s built-in zcalc module.

Because the alias builtin uses = as the separator between name and value, we have to set the entry in the aliases associative array directly:

# Prefer Numbat, then Qalculate, then fall back to Zsh's own zcalc.
if (( $+commands[numbat] )); then
  aliases[=]='numbat -e'     # -e evaluates the expression and exits
elif (( $+commands[qalc] )); then
  aliases[=]='qalc'
else
  autoload -Uz zcalc         # ships with Zsh; load on demand
  aliases[=]='zcalc -f -e'   # -f forces floating point, -e evaluates arguments
fi

With this in place, = 847/11 becomes numbat -e 847/11.

The quoting problem

The first problem surfaces quickly. Typing = 5 * 3 fails: Zsh expands the * character as a glob Continue reading

Calico Load Balancer: Simplifying Network Traffic Management with eBPF

Authors: Alex O’Regan, Aadhil Abdul Majeed

Ever had a load balancer become the bottleneck in an on-prem Kubernetes cluster? You are not alone. Traditional hardware load balancers add cost, create coordination overhead, and can make scaling painful. A Kubernetes-native approach can overcome many of those challenges by pushing load balancing into the cluster data plane. Calico Load Balancer is an eBPF-powered, Kubernetes-native load balancer that uses consistent hashing (Maglev) and Direct Server Return (DSR) to keep sessions stable while allowing you to scale on demand.

Below is a developer-focused walkthrough: what problem Calico Load Balancer solves, how Maglev consistent hashing works, the life of a packet with DSR, and a clear configuration workflow you can follow to roll it out.
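To make the consistent-hashing step concrete, here is a rough Python sketch of Maglev lookup-table construction as described in the original Maglev design; it is not Calico's implementation, and the MD5-based hashes are placeholders for whatever fast hashes a real data plane would use:

```python
import hashlib

def _h(s: str, seed: str) -> int:
    # Stable hash for the sketch; a real implementation uses faster hashes.
    return int.from_bytes(hashlib.md5((seed + s).encode()).digest()[:8], "big")

def maglev_table(backends: list[str], m: int = 65537) -> list:
    """Build a Maglev lookup table of (prime) size m mapping slots to backends."""
    n = len(backends)
    # Each backend's preference list is a permutation of the table slots,
    # derived from two hashes of its name (offset and skip). Because m is
    # prime, every skip value is coprime with m and the permutation is full.
    offsets = [_h(b, "offset") % m for b in backends]
    skips = [_h(b, "skip") % (m - 1) + 1 for b in backends]
    nxt = [0] * n               # next index into each backend's permutation
    table = [None] * m
    filled = 0
    while filled < m:
        for i in range(n):
            # Give backend i its next preferred, still-empty slot.
            while True:
                slot = (offsets[i] + nxt[i] * skips[i]) % m
                nxt[i] += 1
                if table[slot] is None:
                    table[slot] = backends[i]
                    filled += 1
                    break
            if filled == m:
                break
    return table

def pick_backend(table: list, flow_hash: int) -> str:
    # A packet's 5-tuple hash indexes the table, so a connection
    # keeps hitting the same backend for the life of the table.
    return table[flow_hash % len(table)]
```

The round-robin fill gives each backend an almost exactly equal share of slots, and removing one backend leaves most surviving slot assignments untouched, which is what keeps sessions stable during scaling events.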


Why a Kubernetes-native load balancer matters

On-prem clusters often rely on dedicated hardware or proprietary appliances to expose services. That comes with a few persistent problems:

  • Cost and scaling friction – You have to scale the network load balancer vertically as the size and throughput requirements of your Kubernetes clusters grow.
  • Operational overhead – Virtual IPs (VIPs) are often owned by another team, so simple service changes require coordination.
  • Stateful failure modes – Kube-proxy load balancing is stateful per node, Continue reading

Lift-and-Shift VMs to Kubernetes with Calico L2 Bridge Networks

On paper, lift-and-shift VM migration to Kubernetes sounds simple. Compute can be moved. Storage can be remapped. But many migration projects stall at the network boundary. VM workloads are often tied to IP addresses, network segments, firewall rules, and routing models that already exist in the wider environment. That is where lift-and-shift becomes much harder than it first appears.

Why lift-and-shift migration is challenging

In a traditional hypervisor environment:

  • A VM connects to a network the rest of the data center already understands.
  • Its IP address is a first-class citizen of the network.
  • Firewalls, routers, monitoring tools, and peer applications know how to reach it.
  • Existing application dependencies are often built around that network identity.

Default Kubernetes pod networking works very differently:

  • Pod IPs usually come from a cluster-managed pod CIDR.
  • Those IPs are mainly meaningful inside the Kubernetes cluster.
  • The upstream network usually does not have direct visibility into pod networks.
  • The original network segments from the VM world are not preserved by default.

This creates a major problem for VM migration:

  • The workload can no longer keep the same network presence it had before.
  • Teams often need to introduce VIPs or reconfigure the networking settings of the Continue reading

HN819: Recipes for Automation – A Look Inside Eric Chou’s AI Networking Cookbook

Eric Chou, author of the AI Networking Cookbook and host of Network Automation Nerds, joins Ethan and Drew to discuss adding artificial intelligence to your network automation toolbox. The AI Networking Cookbook is aimed at network engineers and provides a systematic approach to learning AI for network automation. Together they break down pros and cons... Read more »

TNO058: Detect. Prove. Predict. Turning Network Monitoring Into Operational Intelligence (Sponsored)

In this sponsored episode, Dylan Hensler, Customer Solutions Specialist with Statseeker, joins Scott for a breakdown of what allows Statseeker to move beyond traditional network monitoring. Together they discuss Statseeker’s ability to help NetOps teams detect issues faster, prove root cause, and operate with confidence by turning raw data into operational intelligence. They also discuss... Read more »

Why flat Kubernetes networks fail at scale

Rethinking network security hierarchies for cloud-native platforms

Kubernetes networking is powerful. Its flexibility lets teams connect hundreds of microservices across namespaces, clusters, and environments. But as platforms grow, that same flexibility can turn a neat setup into a tangled, fragile system. For many organizations, networking is where friction shows up first. Engineers struggle to debug connectivity issues. Security teams wrestle with enforcing global controls. Platform architects feel the pressure to prove compliance. And most of these headaches come from a common root cause: flat network security models that don’t scale.

The limits of flat networking

Kubernetes NetworkPolicy gives teams a way to control traffic between workloads. By default, all policies exist at the same level with no built-in manageable priority. That works fine in a small, single-team cluster. But in large, multi-team environments, it quickly becomes risky. In a flat model, security is managed by exception rather than enforcement. Protecting a critical service often means listing every allowed connection and hoping nothing else accidentally overrides it. As policies grow, it’s increasingly hard to predict what will happen when you make a change. Without Continue reading
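As a concrete instance of "security by exception", a flat allow-list policy looks something like the following sketch; the names, labels, and port are invented for illustration, and nothing in the flat model stops a sibling policy from quietly allowing more traffic to the same pods:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-allow-list   # hypothetical service
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payments
  policyTypes:
    - Ingress
  ingress:
    # Every permitted client must be enumerated here; any NetworkPolicy
    # in the namespace that also selects these pods is additive.
    - from:
        - podSelector:
            matchLabels:
              app: checkout
      ports:
        - protocol: TCP
          port: 8443
```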

Technology Short Take 192

Welcome to Technology Short Take #192! Who’s interested in some links to data center technology-related articles and posts? If that’s you, you’re in the right place. Here’s hoping you find something useful!

Networking

Security

Cloud Computing/Cloud Management

  • I’ve had this article on AWS Lambda for the containers developer sitting in my read queue since first publication in 2023. (Sorry, Massimo.) I finally got around to reading it—really reading it, not just skimming it—and found it helpful in getting a better grasp on Lambda.
  • The AWS Load Balancer Controller recently gained Continue reading

Deprecated AS_SET: Why the IETF changed the rules of BGP aggregation

For over three decades, BGP’s AS_SET path segment has been a legal, if problematic, feature of Internet routing. In May 2025, the IETF formally ended that era. RFC 9774 doesn’t merely discourage AS_SET: it prohibits it entirely.

This post unpacks what AS_SET is, why it was created, what went wrong, and what network operators need to do now that the IETF has made its deprecation a binding standard requirement.

Background: What is the AS_PATH attribute?

Every BGP UPDATE message carries an AS_PATH attribute – a record of the Autonomous Systems a route advertisement has traversed on its way from origin to destination. It serves two critical functions: loop prevention (a router seeing its own AS in the path discards the route) and policy (operators use AS_PATH to make routing decisions based on where traffic comes from or how it’s being forwarded).

The AS_PATH is composed of path segments, each of which is one of four types:

  • AS_SEQUENCE (valid) – An ordered list of ASes the route has passed through. The most common and well-understood type.
  • AS_SET (deprecated) – An unordered set of ASes created during route aggregation.
  • AS_CONFED_SEQUENCE – Ordered list of Member AS Numbers within a Continue reading
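The aggregation behavior that produces an AS_SET can be sketched in a few lines of Python. This is a simplified illustration of the idea behind the RFC 4271 rules, not a faithful BGP implementation: the common leading ASes of the component routes survive as an ordered AS_SEQUENCE, and the rest are folded into an unordered AS_SET so that loop prevention still sees every contributing ASN.

```python
def aggregate_as_paths(paths: list, aggregator_as: int):
    """Simplified aggregation of several AS_PATHs into one route.

    Returns (as_sequence, as_set): the ordered part, prefixed with the
    aggregator's own ASN, and the unordered leftover set.
    """
    # The longest common leading run of ASes stays an ordered AS_SEQUENCE.
    seq = []
    for ases in zip(*paths):
        if len(set(ases)) == 1:
            seq.append(ases[0])
        else:
            break
    # Everything else loses its ordering and becomes an AS_SET, the
    # construct that RFC 9774 now prohibits.
    leftover = set()
    for p in paths:
        leftover.update(p[len(seq):])
    return [aggregator_as] + seq, sorted(leftover)
```

For example, aggregating two routes received via the same neighbor, [65001, 64601] and [65001, 64602], keeps 65001 in the AS_SEQUENCE and collapses the two origin ASes 64601 and 64602 into the AS_SET (all ASNs here are hypothetical).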

Lab: Anycast Gateways on VXLAN Segments

Most vendors “discovered” anycast gateways when they tried implementing routing between MAC-VRFs in an EVPN environment and hit all the usual tripwires (more about that later). A few exceptions (like Arista) supported them on VLAN segments for over a decade, and it was a no-brainer to extend that support to VXLAN segments.

Want to try out how that works? The Anycast Gateways on VXLAN Segments lab exercise is just what you need.

You can run the lab on your own netlab-enabled infrastructure (more details), but also within a free GitHub Codespace or even on your Apple-silicon Mac (installation, using Arista cEOS container, using VXLAN/EVPN labs).

AI Assistant for Calico: Troubleshooting at the Speed of Thought

Despite the wealth of data available, distilling a coherent narrative from a Kubernetes cluster remains a challenge for modern infrastructure teams. Even with powerful visualization tools like the Policy Board, Service Graph, and specialized dashboards, users often find themselves spending significant time piecing together context across different screens. Making good use of this data to secure a cluster or troubleshoot an issue becomes nearly impossible when it requires manually searching across multiple sources to find a single “connecting thread.”

Inevitably, security holes happen, configurations conflict causing outages, and teams scramble to find that needle-in-the-haystack cause of cluster instability. A new approach is needed to understand the complex layers of security and the interconnected relationships among numerous microservices. Observability tools need to not only organize and present data in a coherent manner but proactively help to filter and interpret it, cutting through the noise to get to the heart of an issue. As we discussed in our 2026 outlook on the rise of AI agents, this represents a fundamental shift in Kubernetes management.

Key Insight: With AI Assistant for Calico, observability takes a leap forward, providing a proactive, conversational, and context-aware intelligence layer to extract actionable insights from a Continue reading

Powering the agents: Workers AI now runs large models, starting with Kimi K2.5

We're making Cloudflare the best place for building and deploying agents. But reliable agents aren't built on prompts alone; they require a robust, coordinated infrastructure of underlying primitives.

At Cloudflare, we have been building these primitives for years: Durable Objects for state persistence, Workflows for long-running tasks, and Dynamic Workers or Sandbox containers for secure execution. Powerful abstractions like the Agents SDK are designed to help you build agents on top of Cloudflare’s Developer Platform.

But these primitives only provided the execution environment. The agent still needed a model capable of powering it. 

Starting today, Workers AI is officially in the big models game. We now offer frontier open-source models on our AI inference platform. We’re starting by releasing Moonshot AI’s Kimi K2.5 model on Workers AI. With a full 256k context window and support for multi-turn tool calling, vision inputs, and structured outputs, the Kimi K2.5 model is excellent for all kinds of agentic tasks. By bringing a frontier-scale model directly into the Cloudflare Developer Platform, we’re making it possible to run the entire agent lifecycle on a single, unified platform.

The heart of an agent is the AI model that powers it, and that Continue reading

N4N051: MPLS Fundamentals

Today’s topic is Multiprotocol Label Switching or MPLS, a foundational technology that powers service provider networks and enterprise WANs all over the world. To help us break it down, we’ve invited James Bensley, a Network Tech Lead who’s spent fifteen years with MPLS. James explains what spurred the creation of MPLS and how it works... Read more »

Arista EOS MPLS P/PE-router Behavior

Something didn’t feel right as I tried to check whether the IPv4 ECMP I observed in the latest version of Arista cEOS containers works with my MPLS/anycast scenario. The forwarding tables seemed OK, but I wasn’t getting MPLS labels in the ICMP replies (see RFC 4950 for details), even though I know Arista EOS can generate them.

I decided to go down that rabbit hole and built the simplest possible BGP-free core (the addition of BGP will become evident in a few seconds) to investigate PE/P-router behavior:

Lab topology

What Your EKS Flow Logs Aren’t Telling You

If you’re running workloads on Amazon EKS, there’s a good chance you already have some form of network observability in place. VPC Flow Logs have been a staple of AWS networking for years, and AWS has since introduced Container Network Observability, a newer set of capabilities built on Amazon CloudWatch Network Flow Monitor that adds pod-level visibility and a service map directly in the EKS console.

It’s a reasonable assumption that between these tools, you have solid visibility into what’s happening on your cluster’s network. But for teams focused on Kubernetes security and policy enforcement, there’s a significant gap — and it’s not the one you might expect.

In this post, we’ll break down exactly what EKS native observability gives you, where it falls short for security-focused use cases, and what Calico’s observability tools, Goldmane and Whisker, provide that you simply cannot get from AWS alone.

What EKS Gives You Out of the Box

AWS offers two main sources of network observability for EKS clusters:

VPC Flow Logs capture IP traffic at the network interface level across your VPC. For each flow, you get source and destination IP addresses, ports, protocol, and whether traffic was accepted or rejected at Continue reading
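A minimal parser makes the shape of those records, and what they lack, easy to see. The field order below follows the default version 2 record format that AWS documents; the sample record in the test is fabricated for illustration:

```python
# Field order of the default (version 2) VPC Flow Log record format.
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_record(line: str) -> dict:
    """Split one space-separated flow-log line into named fields.

    Note what is missing for Kubernetes security work: no pod name,
    namespace, or policy identity, only ENI-level IPs, ports, and an
    ACCEPT/REJECT verdict.
    """
    rec = dict(zip(FIELDS, line.split()))
    for key in ("srcport", "dstport", "protocol",
                "packets", "bytes", "start", "end"):
        if rec.get(key, "-") != "-":   # "-" marks fields with no data
            rec[key] = int(rec[key])
    return rec
```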

NAN116: From NSoT to Operational Automation: Fast Time-to-Value with Nautobot Cloud (Sponsored)

Building a Network Source of Truth (NSoT) is only step one in an automation effort — turning it into operational automation is where outcomes happen. In this sponsored episode by Network to Code, Eric Fetty, a self-taught network engineer who literally automated his way through his CCIE lab, shares how he’s doing exactly that at... Read more »