In this episode, Ed, Scott, and Tom talk about DNS and IPv6. We cover legacy IPv6 brokenness and DNS, how DNS performs over v6, and how DNS works with v6-only networks.
The post IPv6 Buzz 115: DNS And IPv6 appeared first on Packet Pushers.
Containerlab is a Docker orchestration tool for creating virtual network topologies. The sflow-rt/containerlab project contains a number of topologies demonstrating industry standard sFlow streaming telemetry in realistic data center designs. This article extends the examples in Real-time telemetry from a 5 stage Clos fabric and Real-time EVPN fabric visibility to demonstrate visibility into IPv6 traffic flows.
docker run --rm -it --privileged --network host --pid="host" \
-v /var/run/docker.sock:/var/run/docker.sock -v /run/netns:/run/netns \
-v $(pwd):$(pwd) -w $(pwd) \
ghcr.io/srl-labs/clab bash
Run the above command to start Containerlab if you already have Docker installed. Otherwise, Installation provides detailed instructions for a variety of platforms.
curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/clos5.yml
Download the topology file for the 5 stage Clos fabric shown above.
containerlab deploy -t clos5.yml
Finally, deploy the topology.
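If you want to confirm the fabric is up before generating traffic, containerlab can list the deployed nodes. A quick check, assuming the same topology file downloaded above:
containerlab inspect -t clos5.yml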
The screen capture shows a real-time view of traffic flowing across the network during an iperf3 test. Click on the sFlow-RT Apps menu and select the browse-flows application, or click here for a direct link to a chart with the settings shown above.
docker exec -it clab-clos5-h1 iperf3 -c 2001:172:16:4::2
Each of the hosts in the network has an iperf3 server, so running the above command will test bandwidth between Continue reading
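As a sketch of how the same IPv6 flow data can be pulled programmatically, the commands below define an IPv6 flow and query the top active flows through sFlow-RT's REST API. This assumes sFlow-RT is reachable on localhost port 8008, as in the project's topologies; the flow name ipv6_bytes is just a placeholder.
# define a flow keyed on IPv6 source/destination, measured in bytes
curl -X PUT -H "Content-Type: application/json" \
  -d '{"keys":"ip6source,ip6destination","value":"bytes"}' \
  http://localhost:8008/flow/ipv6_bytes/json
# retrieve the current top flows across all agents
curl http://localhost:8008/activeflows/ALL/ipv6_bytes/json?maxFlows=5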
After VMware launched DPU-based acceleration for VMware NSX, marketing-focused websites frantically started discussing the benefits of DPUs. Although I’ve been writing about SmartNICs and DPUs for years, it’s time for another closer look at the emperor’s clothes.
DPU (Data Processing Unit) is a fancier name for a network adapter formerly known as a SmartNIC – a server repackaged into an interface-card form factor. We've had them for decades (anyone remember iSCSI offload adapters?)
https://codingpackets.com/blog/eufy-camera-shenanigans
https://codingpackets.com/blog/pfsense-qemu-guest-agent
You wouldn’t think that AWS re:Invent would be a big week for networking, would you? Most of the announcements are focused on everything related to the data center, but teasing out the networking-specific pieces isn’t as easy. That’s why I found mention of a new-ish protocol in an unrelated article to be fascinating.
In this Register piece about CPUs there’s a mention of the Nitro DPU. More importantly there’s also a reference to something that Amazon has apparently been working on for the last couple of years. It turns out that the world’s largest online bookstore and data center company is looking to get rid of TCP.
The new protocol was developed in 2020. Referred to as Scalable Reliable Datagram (SRD), it was built to solve specific performance challenges Amazon was seeing in their cloud. Amazon decided that TCP had bigger issues for them that needed to be addressed.
The first was that dropped packets required retransmission. In an environment like the Internet that makes sense. You want to get the data you lost. However, when TCP was developed fifty years ago the amount of data that was lost in transit was tiny compared to Continue reading
Welcome to this new blog post series about Container Networking with Antrea. In this blog, we’ll take a look at the Egress feature and show how to implement it on vSphere with Tanzu.
According to the official Antrea documentation, Egress is a Kubernetes Custom Resource Definition (CRD) which allows you to specify which Egress (SNAT) IP the traffic from the selected Pods to the external network should use. When a selected Pod accesses the external network, the Egress traffic will be tunneled to the Node that hosts the Egress IP if it’s different from the Node that the Pod runs on, and will be SNATed to the Egress IP when leaving that Node. You can see the traffic flow in the following picture.
When the Egress IP is allocated from an externalIPPool, Antrea even provides automatic high availability; i.e. if the Node hosting the Egress IP fails, another Node will be elected from the remaining Nodes selected by the nodeSelector of the externalIPPool.
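As an illustrative sketch (not taken from the post), an ExternalIPPool and an Egress resource might look like the following. The API version, IP range, labels, and names are assumptions to adapt to your own cluster.
apiVersion: crd.antrea.io/v1alpha2   # assumed API version; check your Antrea release
kind: ExternalIPPool
metadata:
  name: egress-pool                  # assumed pool name
spec:
  ipRanges:
  - start: 10.10.0.10                # assumed SNAT IP range reachable on the Node network
    end: 10.10.0.20
  nodeSelector:
    matchLabels:
      network-role: egress-gateway   # assumed Node label
---
apiVersion: crd.antrea.io/v1alpha2
kind: Egress
metadata:
  name: egress-web
spec:
  appliedTo:
    podSelector:
      matchLabels:
        app: web                     # assumed Pod label
  externalIPPool: egress-pool        # Egress IP is allocated from the pool above
With resources along these lines, traffic from Pods labeled app: web would leave the cluster SNATed to an IP drawn from egress-pool, hosted on one of the Nodes matching the nodeSelector.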
Note: The standby node will not only take over the IP but also send a layer 2 advertisement (e.g. Gratuitous ARP for IPv4) to notify the other hosts and routers on the Continue reading