Bruce Schneier wrote an article on the dangers of cryptocurrencies and the uselessness of blockchain, including this gem:
From its inception, this technology has been a solution in search of a problem and has now latched onto concepts such as financial inclusion and data transparency to justify its existence, despite far better solutions to these issues already in use.
Please feel free to tell me how he’s just another individual full of misguided opinions… after all, what does he know about crypto?
This post originally appeared on the Packet Pushers’ Ignition site on April 22, 2020. In this post I review what might happen to networking when we return to work. We won’t return to normal, but we will be back at work. To start, here are nine ideas about the pandemic’s impact, divided into two […]
The post Possible Impacts Of Covid-19 On Data Networking appeared first on Packet Pushers.
The vQFX is a virtualized version of Juniper Networks' QFX10000 Ethernet switch portfolio. It […]
The post Juniper vQFX on GNS3 first appeared on Brezular's Blog.
This article uses Containerlab to emulate a simple network and experiment with Nokia SR Linux and sFlow telemetry. Containerlab provides a convenient method of emulating network topologies and configurations before deploying into production on physical switches.
curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/srlinux.yml
Download the Containerlab topology file.
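For context, a Containerlab topology file is a short YAML document listing the nodes and links to emulate. The sketch below is a hedged illustration of the general shape of such a file; the node names, images, and link endpoints are assumptions, not the actual contents of srlinux.yml.

# Illustrative Containerlab topology: one SR Linux switch and two hosts.
# Names, images, and links are assumptions, not the real srlinux.yml.
name: srlinux
topology:
  nodes:
    switch:
      kind: nokia_srlinux            # Nokia SR Linux node
      image: ghcr.io/nokia/srlinux
    h1:
      kind: linux
      image: alpine
    h2:
      kind: linux
      image: alpine
  links:
    - endpoints: ["switch:e1-1", "h1:eth1"]
    - endpoints: ["switch:e1-2", "h2:eth1"]

Containerlab names the resulting containers clab-<lab-name>-<node-name>, which is why the nodes appear below as clab-srlinux-h1 and clab-srlinux-switch.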
containerlab deploy -t srlinux.yml
Deploy the topology.
docker exec -it clab-srlinux-h1 traceroute 172.16.2.2
Run traceroute on h1 to verify path to h2.
traceroute to 172.16.2.2 (172.16.2.2), 30 hops max, 46 byte packets
1 172.16.1.1 (172.16.1.1) 2.234 ms * 1.673 ms
2 172.16.2.2 (172.16.2.2) 0.944 ms 0.253 ms 0.152 ms
Results show path to h2 (172.16.2.2) via router interface (172.16.1.1).
docker exec -it clab-srlinux-switch sr_cli
Access SR Linux command line on switch.
Using configuration file(s): []
Welcome to the srlinux CLI.
Type 'help' (and press <ENTER>) if you need any help using this.
--{ + running }--[ ]--
A:switch#
The SR Linux welcome banner describes how to get help using the CLI.
A:switch# show system sflow status
Get status of sFlow telemetry.
-------------------------------------------------------------------------
Admin State Continue reading
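sFlow itself is configured on SR Linux under system sflow. A hedged sketch of what enabling it might look like (the configuration paths follow SR Linux conventions, and the sample rate and collector address are hypothetical values, not taken from the article):

A:switch# enter candidate
A:switch# set / system sflow admin-state enable
A:switch# set / system sflow sample-rate 1000
A:switch# set / system sflow collector 1 collector-address 172.100.100.5
A:switch# commit now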
After almost two years with data storage giant Western Digital, Ashley Gorakhpurwalla is getting used to the questions. …
Hard Drives Are The Mark Twain Of Technology was written by Jeffrey Burt at The Next Platform.
On today's Heavy Networking podcast we talk with sponsor Arelion about how it continues to build and maintain global IP networks, and why you should be considering them for your backhaul needs.
The post Heavy Networking 637: The Ongoing Evolution Of Arelion’s Global Network (Sponsored) appeared first on Packet Pushers.
Here at Cloudflare we're constantly working on improving our service. Our engineers are looking at hundreds of parameters of our traffic, making sure that we get better all the time.
One of the core numbers we keep a close eye on is HTTP request latency, which is important for many of our products. We regard latency spikes as bugs to be fixed. One example is the 2017 story of "Why does one NGINX worker take all the load?", where we optimized our TCP Accept queues to improve overall latency of TCP sockets waiting for accept().
Performance tuning is a holistic endeavor, and we monitor and continuously improve a range of other performance metrics as well, including throughput. Sometimes, tradeoffs have to be made. Such a case occurred in 2015, when a latency spike was discovered in our processing of HTTP requests. The solution at the time was to cap tcp_rmem, the kernel's TCP receive buffer setting, at 4 MiB, which limits the amount of time the kernel spends on TCP collapse processing. It was this collapse processing that was causing the latency spikes. Later in this post we discuss TCP collapse processing in more detail.
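On a plain Linux host, that kind of cap is applied through the tcp_rmem sysctl, which takes three values: minimum, default, and maximum receive buffer size in bytes. A minimal sketch, assuming stock kernel values for the first two fields and the 4 MiB cap described above:

# tcp_rmem = min, default, max receive buffer (bytes).
# Capping max at 4 MiB (4194304) bounds TCP collapse processing time.
sysctl -w net.ipv4.tcp_rmem='4096 87380 4194304'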
The tradeoff is that using a low value for Continue reading