
Author Archives: Ivan Pepelnjak
Continuing the how real is the decade-old SDN hype thread, let’s try to figure out if anyone still uses OpenFlow. OpenFlow was declared dead by the troubadour of the SDN movement in 2016, so it looks like the question is moot. However, nothing ever dies in networking (including hop-by-hop IPv6 extension headers), so here we go.
Ignoring for the moment the embarrassing we solved the global load balancing with per-flow forwarding academic blunders, OpenFlow wasn’t the worst tool for programming forwarding exceptions (ACL/PBR) into TCAM.
Even though Gartner declared SDN obsolete before plateau in their 2021 Networking Hype Cycle, most vendor marketers never got the memo. Anything that interacts with network devices in any way is called an SDN controller. Let’s try to throw some minimal amount of taxonomy into that mess based on how these controllers interact with network elements (physical or virtual).
Christoph Jaggi, the author of the Ethernet Encryption webinar, published a new version of the Ethernet Encryptor Market Overview, including:
I started preparing the materials for the SDN – 10 years later webinar, and plan to publish a series of blog posts documenting what I found on various aspects of what could be considered SDN. I’m pretty sure I missed quite a few things; your comments are most welcome.
Let’s start with an easy one: software/hardware disaggregation in network devices.
I found several widely-used open-source network operating systems:
Another interesting column by Geoff Huston: performance of TCP congestion control protocols when using Low-Earth Orbit or Geosynchronous Orbit satellites for Internet access.
The last part of the Network Addressing section of the How Networks Really Work webinar covered other addressing-related topics, starting with address assignment mechanisms.
Here’s another “do these things ever disappear?” question from Enrique Vallejo:
Regarding storage, is Fibre Channel still a thing in 2022, or do most people employ SATA over Ethernet and NVMe over fabrics?
TL&DR: Yes. So is COBOL.
To understand why some people still use Fibre Channel, we have to start with an observation made by Howard Marks: “Storage is different.” It’s OK to drop a packet in transit. It’s NOT OK to lose data at rest.
Here’s a short list of major goodies included in netsim-tools release 1.2.2:
More details in the release notes.
To upgrade netsim-tools, use pip3 install --upgrade netsim-tools; if you’re starting from scratch, read the installation instructions.
Recent news from the Department of Unintended Consequences: RFC 6724 changed the IPv4/IPv6 source/destination address selection rules a decade ago, and it seems that the common interpretation of those rules makes IPv6 Unique Local Addresses (ULA) less preferred than the IPv4 addresses, at least according to the recent Unintended Operational Issues With ULA draft by Nick Buraglio, Chris Cummings and Russ White.
End result: If you use only ULA addresses in your dual-stack network, IPv6 won’t be used at all. Even worse, if you use ULA addresses together with global IPv6 addresses (GUA) as a fallback mechanism, there might be hidden gotchas that you won’t discover until you turn off IPv4. Looks like someone did a Truly Great Job, and ULA stands for Useless Local Addresses.
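To see where this behavior comes from, here’s a minimal Python sketch of the destination-address precedence rule from RFC 6724. It uses the default policy table from the RFC, ignores all the other selection rules (scope comparison, label matching, longest-match tie-breaking between candidate pairs…), and the candidate addresses are made-up documentation-prefix examples:

```python
# A minimal sketch of the destination-address precedence part of RFC 6724.
# Policy table values are the RFC 6724 defaults.
import ipaddress

POLICY_TABLE = [                    # (prefix, precedence, label)
    ("::1/128",       50,  0),
    ("::/0",          40,  1),
    ("::ffff:0:0/96", 35,  4),      # IPv4-mapped -- i.e. plain IPv4 destinations
    ("2002::/16",     30,  2),      # 6to4
    ("2001::/32",      5,  5),      # Teredo
    ("fc00::/7",       3, 13),      # ULA
    ("::/96",          1,  3),
    ("fec0::/10",      1, 11),
    ("3ffe::/16",      1, 12),
]

def precedence(addr: str) -> int:
    """Longest-match lookup of an address in the default policy table."""
    a = ipaddress.ip_address(addr)
    if a.version == 4:              # IPv4 destinations are treated as IPv4-mapped IPv6
        a = ipaddress.IPv6Address("::ffff:" + addr)
    matches = [(ipaddress.ip_network(p), prec) for p, prec, _ in POLICY_TABLE
               if a in ipaddress.ip_network(p)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# a dual-stack destination with a ULA AAAA record and an IPv4 A record
candidates = ["fd00:1::42", "192.0.2.42"]
for c in sorted(candidates, key=precedence, reverse=True):
    print(f"{c:>12}  precedence {precedence(c)}")
# The IPv4 address (precedence 35) sorts ahead of the ULA address (precedence 3),
# so a host following the default policy table prefers IPv4 over ULA-only IPv6.
```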
A friend of mine working for a mid-sized networking vendor sent me an intriguing question:
We have a product using an old ASIC that has 12K forwarding entries, and we would like to extend its lifetime. I know you mentioned some useful tricks; would you happen to remember what they were?
This challenge has no perfect solution, but there are at least three tricks I’ve encountered so far (as always, comments are most welcome):
Erik Auerswald sent me a pointer to a blog post by Dave Taht: The state of fq_codel and sch_cake worldwide. It’s so nice to see what a huge impact Dave has made since he started the Bufferbloat project.
Hint: if you have no idea what Bufferbloat or fq_codel are, you REALLY SHOULD explore Dave’s web site.
Most large content providers use some sort of egress traffic engineering on edge web proxy/caching servers to optimize the end-user experience (avoid congested transit autonomous systems) and link utilization on egress links.
For ages I was planning to write a blog post about the tricks they use, and never found time to do it… but if you don’t mind watching a video, the Source Routing on the Edge presentation Oliver Herms gave at iNOG::14v does a pretty good job explaining the concepts and a particular implementation.
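If you’d like a feel for the underlying idea before watching the video, here’s a minimal conceptual sketch (not any particular content provider’s implementation): for every popular destination prefix, pick the egress peer with the best measured performance that still has spare link capacity. All peer names, prefixes, and measurements below are made up, and the result would have to be fed into whatever mechanism actually programs the forwarding decision (policy routes, tunnels, MPLS/SR labels…):

```python
# Conceptual sketch of per-prefix egress selection on an edge server.
MAX_UTIL = 0.85                      # don't push an egress link beyond 85% utilization

link_utilization = {                 # egress peer -> current link utilization (0..1)
    "transit-a": 0.90,
    "transit-b": 0.55,
    "peering-x": 0.40,
}

# per-prefix RTT measurements (ms) taken from the edge server via each egress peer
measurements = {
    "198.51.100.0/24": {"transit-a": 23, "transit-b": 31, "peering-x": 28},
    "203.0.113.0/24":  {"transit-a": 80, "transit-b": 45, "peering-x": 44},
}

def best_egress(prefix: str) -> str:
    """Pick the lowest-RTT egress peer whose link still has headroom."""
    rtt = measurements[prefix]
    usable = {peer: ms for peer, ms in rtt.items() if link_utilization[peer] < MAX_UTIL}
    return min(usable or rtt, key=(usable or rtt).get)   # fall back to all peers if none has headroom

for prefix in measurements:
    print(f"{prefix}: send via {best_egress(prefix)}")
```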
Christopher Werny has tons of hands-on experience with IPv6 security (or lack thereof), and described some of his findings in the Practical Aspects of IPv6 Security part of the IPv6 security webinar, including:
netsim-tools started as a simple tool to create virtual lab topologies (I hated creating Vagrantfiles describing complex topologies), but when it morphed into an ever-growing “configure all the boring stuff in your lab from a high-level description” thingie, it gave creative networking engineers an interesting idea: could we use this tool to do all the stuff we always hated doing in our physical labs?
My answer was always “of course, please feel free to submit a PR”, and Stefano Sasso did just that: he implemented an external orchestration provider that allows you to use netsim-tools to configure IPv4, IPv6, VLANs, VRFs, LLDP, BFD, OSPFv2, OSPFv3, EIGRP, IS-IS, BGP, MPLS, BGP-LU, L3VPN (VPNv4 + VPNv6), SR-MPLS, or SRv6 on supported hardware devices.
Nicola Modena created an interesting presentation describing IBGP designs using BGP Additional Paths and Optimal Route Reflection functionality.
Hope you’ll enjoy the presentation as much as I did… and make sure you understand potential circular dependencies you might be introducing when running a route reflector as a virtual machine.
Continuing the what happened to old technologies saga, here’s another question by Enrique Vallejo:
Are FabricPath, TRILL or SPB still alive, or has everyone moved to VXLAN? Are they worth studying?
TL&DR: Barely. Yes. No.
Layer-2 Fabric craziness exploded in 2010, with vendors playing the usual misinformation games that eventually resulted in a totally fragmented market full of partial or proprietary solutions. At one point in time, some HP data center switches supported only TRILL, while other data center switches from the same company supported only SPB.
Now for individual technologies:
It’s time for the bad part of the AI/ML in Networking: Good, Bad, and Ugly webinar. After describing the potential AI/ML wins, Javier Antich walked us through the long tail of AI/ML problems.
Two weeks ago I described how to create a simple VRF Lite lab with the netsim-tools VRF configuration module. Adding MPLS/VPN to the mix and creating a full-blown MPLS/VPN lab is a piece of cake. In this blog post we’ll build a simple topology with two VRFs (red and blue) and two PE-routers:
Lab topology
Enrique Vallejo asked an interesting question a while ago:
When was X.25 officially declared dead? Note that Wikipedia claims it is still in use in parts of the world.
Wikipedia is probably right, and I’ve had several encounters with X.25 that would corroborate that claim. If you happen to have more up-to-date information, please leave a comment.
One of my readers has to deal with a crappy Network Termination Equipment (NTE) that does not drop local link carrier when the remote link fails. Here’s the original ASCII art describing the topology:
PE---------------NTE--FW---NMS
<--------IP-------->
He’d like to use interface SNMP counters on the firewall to detect the PE-NTE link failure. He’s using a static default route toward PE on the firewall, and tried to detect the link failure with the ifOutDiscards counter.
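Here’s a rough Python sketch of that polling logic. The snmp_get function is a placeholder for whatever SNMP library or external tool you actually use, the ifIndex value is hypothetical, and the OID is the standard IF-MIB ifOutDiscards column:

```python
# Rough sketch: poll ifOutDiscards on the firewall's outside interface and flag
# the upstream (PE-NTE) link as suspect when the counter keeps incrementing.
# snmp_get() is a placeholder -- replace it with your SNMP library or CLI wrapper.
import time

IF_OUT_DISCARDS = "1.3.6.1.2.1.2.2.1.19"   # IF-MIB::ifOutDiscards
OUTSIDE_IFINDEX = 2                        # hypothetical ifIndex of the FW-to-NTE interface
POLL_INTERVAL   = 30                       # seconds

def snmp_get(oid: str) -> int:
    raise NotImplementedError("replace with a real SNMP GET against the firewall")

def watch_discards() -> None:
    previous = snmp_get(f"{IF_OUT_DISCARDS}.{OUTSIDE_IFINDEX}")
    while True:
        time.sleep(POLL_INTERVAL)
        current = snmp_get(f"{IF_OUT_DISCARDS}.{OUTSIDE_IFINDEX}")
        if current > previous:             # outbound traffic is being dropped
            print(f"PE-NTE link suspect: {current - previous} discards in the last poll")
        previous = current
```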