The vQFX is a virtualized version of the Juniper Networks QFX10000 portfolio of Ethernet switches. It […]
The post Juniper vQFX on GNS3 first appeared on Brezular's Blog.
This article uses Containerlab to emulate a simple network and experiment with Nokia SR Linux and sFlow telemetry. Containerlab provides a convenient method of emulating network topologies and configurations before deploying into production on physical switches.
curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/srlinux.yml
Download the Containerlab topology file.
containerlab deploy -t srlinux.yml
Deploy the topology.
docker exec -it clab-srlinux-h1 traceroute 172.16.2.2
Run traceroute on h1 to verify path to h2.
traceroute to 172.16.2.2 (172.16.2.2), 30 hops max, 46 byte packets
1 172.16.1.1 (172.16.1.1) 2.234 ms * 1.673 ms
2 172.16.2.2 (172.16.2.2) 0.944 ms 0.253 ms 0.152 ms
Results show path to h2 (172.16.2.2) via router interface (172.16.1.1).
docker exec -it clab-srlinux-switch sr_cli
Access SR Linux command line on switch.
Using configuration file(s): []
Welcome to the srlinux CLI.
Type 'help' (and press <ENTER>) if you need any help using this.
--{ + running }--[ ]--
A:switch#
The SR Linux CLI welcome banner explains how to get help using the interface.
A:switch# show system sflow status
Get status of sFlow telemetry.
-------------------------------------------------------------------------
Admin State […]
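Beyond the switch CLI, the sflow-rt/containerlab examples typically pair SR Linux with an sFlow-RT analyzer that serves collected metrics as JSON over a REST API. The following Python sketch picks out the busiest interface from such a response; the sample payload and the `busiest_interface` helper are invented here for illustration, not taken from the lab topology.

```python
import json

# Hypothetical sFlow-RT-style metrics response (invented for illustration).
# A live query against the analyzer might look like:
#   curl http://localhost:8008/metric/ALL/ifinoctets/json
sample = """
[
  {"agent": "172.17.0.3", "dataSource": "ethernet-1/1",
   "metricName": "ifinoctets", "metricValue": 98765.4},
  {"agent": "172.17.0.3", "dataSource": "ethernet-1/2",
   "metricName": "ifinoctets", "metricValue": 123.4}
]
"""

def busiest_interface(payload):
    """Return (agent, dataSource) for the entry with the highest metricValue."""
    metrics = json.loads(payload)
    top = max(metrics, key=lambda m: m["metricValue"])
    return top["agent"], top["dataSource"]

print(busiest_interface(sample))  # ('172.17.0.3', 'ethernet-1/1')
```

In a real deployment the payload would come from an HTTP GET against the analyzer rather than an inline string.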
On today's Heavy Networking podcast we talk with sponsor Arelion about how it continues to build and maintain global IP networks, and why you should be considering them for your backhaul needs.
The post Heavy Networking 637: The Ongoing Evolution Of Arelion’s Global Network (Sponsored) appeared first on Packet Pushers.
Here at Cloudflare we're constantly working on improving our service. Our engineers are looking at hundreds of parameters of our traffic, making sure that we get better all the time.
One of the core numbers we keep a close eye on is HTTP request latency, which is important for many of our products. We regard latency spikes as bugs to be fixed. One example is the 2017 story of "Why does one NGINX worker take all the load?", where we optimized our TCP Accept queues to improve overall latency of TCP sockets waiting for accept().
Performance tuning is a holistic endeavor, and we monitor and continuously improve a range of other performance metrics as well, including throughput. Sometimes, tradeoffs have to be made. Such a case occurred in 2015, when a latency spike was discovered in our processing of HTTP requests. The solution at the time was to set tcp_rmem to 4 MiB, which minimizes the amount of time the kernel spends on TCP collapse processing. It was this collapse processing that was causing the latency spikes. Later in this post we discuss TCP collapse processing in more detail.
The tradeoff is that using a low value for […]
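For context, the receive-buffer ceiling discussed above is controlled by the `net.ipv4.tcp_rmem` sysctl, which takes three values: minimum, default, and maximum buffer size in bytes. A sketch of capping the maximum at 4 MiB follows; the minimum and default shown are common stock values included for illustration, not Cloudflare's actual settings.

```shell
# Inspect the current TCP receive buffer limits: min, default, max (bytes)
sysctl net.ipv4.tcp_rmem

# Cap the maximum receive buffer at 4 MiB (4194304 bytes); requires root.
# Min and default here are typical distribution defaults, for illustration.
sudo sysctl -w net.ipv4.tcp_rmem="4096 131072 4194304"
```

To persist the setting across reboots, the same line would go in a file under `/etc/sysctl.d/`.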
This post originally appeared on the Packet Pushers’ now-defunct Ignition site on October 1, 2019. Insurance companies that offer cyberinsurance policies are looking at ways to reduce their risk (and improve profit margins) by discounting for companies that deploy reviewed and approved technologies. Company executives will make decisions about the cost and value of […]
The post Analysis: Will Your Security Infrastructure Be Determined By Your Cyberinsurance? appeared first on Packet Pushers.
Cloud-native applications running on Kubernetes rely on container network plugins to establish workload communication. While Azure Kubernetes Service (AKS) provides several supported networking options (kubenet and Azure CNI) that address the needs of most deployments, Microsoft recently introduced the ability to bring your own networking solution, called BYOCNI, to help users address more advanced networking requirements. This new feature enables AKS customers to run Calico networking on AKS.
This blog will walk you through some exciting capabilities you can unlock with Calico running in your AKS deployments.
Calico is the most widely adopted container networking and security solution for Kubernetes. Powering more than 100M containers across 2M+ nodes in 166 countries, Calico is supported across all major cloud providers and Kubernetes distributions. Calico gives you a choice of data planes, including eBPF, standard Linux networking, and Windows HNS, and supports workloads running in public clouds and/or on-prem, on a single node, or across a multi-thousand-node cluster. Whether you need to scale to thousands of microservices with eBPF, or add Windows workloads to your Kubernetes deployments, Calico has you covered.
Calico’s core design principles leverage cloud-native design best practices, combined with proven, standards-based network protocols trusted by […]
This post originally appeared on the Packet Pushers’ Ignition site on January 14, 2020. There is a slow but steady trend of governments taking back control of the internet in their countries. For China, the “great firewall” is now a rigid access control on content. Russia has been progressing changes to isolate itself […]
The post Reading: The Case for a Mostly Open Internet appeared first on Packet Pushers.
This lesson walks through the basics of reaching an application running in a Kubernetes pod. Instructor Michael Levan brings his background in system administration, software development, and DevOps to this series. He has Kubernetes experience as both a developer and infrastructure engineer. He’s also a consultant and Pluralsight author, and host of the “Kubernetes Unpacked” […]
The post Kubernetes For Network Engineers: Lesson 2 – Services, Nodeports, And Load Balancers – Video appeared first on Packet Pushers.
In this IPv6 Buzz episode we talk about the benefit of IPv6 connectivity when IPv4 fails. We examine the types of IPv4 failures, how IPv6 behaves during IPv4 failure, application dependencies, and more.
The post IPv6 Buzz 104: IPv6 For Redundancy When IPv4 Fails appeared first on Packet Pushers.