Almost 30 webinars, an online course, and over 140 blog posts later it’s time for another summer break.
While we’ll do our best to reply to support and sales requests (it might take us a bit longer than usual), don’t expect anything deeply technical for the next two months… but of course you can still watch over 280 hours of existing content, listen to over 100 podcast episodes, or read over 3500 blog posts.
We’ll be back with tons of new content in early September.
In the meantime, automate everything, get away from work, turn off the Internet, and enjoy a few days in your favorite spot with your loved ones!
This podcast introduction was written by Nick Buraglio, the host of today’s podcast.
As we all know, BGP runs the networked world. It is a protocol that has existed and operated in the vast expanse of the internet in one form or another since the early 1990s, and despite the fact that it has been extended, enhanced, twisted, and warped into performing a myriad of tasks that one would never have imagined in the silver era of internetworking, it has remained largely unchanged in its operational core.
The world as we know it would never exist without BGP, and because it is such a widely deployed protocol with such a solid track record of “just working”, the transition to a better security model around it has been extraordinarily slow.
Adam left a thoughtful comment on my How Should Network Architects Deal with Network Automation blog post, addressing numerous interesting aspects of network design in the era of booming automation hype. He started with:
A question I keep tasking myself with addressing but never finding the best answer, is how appropriate is it to reform a network environment into a flattened design such as spine-and-leaf, if that reform is with the sole intent and purpose to enable automation?
A few basic facts first:
After I published the blog post describing how infrastructure cloud providers (example: AWS) might use smart Network Interface Cards (NICs) as the sweet spot to implement overlay virtual networking, my friend Christoph Jaggi sent me links to two interesting presentations:
Both presentations describe how you can take over a smart NIC with a properly crafted packet, and even bypass the CPU on a firewall that uses smart NICs.
Daniel Teycheney published an excellent blog post with numerous hints on starting your automation journey including:
Brian Krebs wrote an interesting analysis of the CIA’s Wikileaks report. In a nutshell, they were a victim of “move fast to get the mission done” shadow IT.
It could have been worse. Someone with a credit card could have started deploying stuff in AWS ;))
Not that anyone would learn anything from the PR nightmare that followed.
A while ago Russ White invited me to be a guest on his fantastic History of Networking podcast, and we spent almost an hour talking about networking in the 1980s and 1990s in what some people love to call “behind the Iron Curtain” (we also fixed that misconception).
One of the readers commenting on the ideas in my Disaster Recovery and Failure Domains blog post effectively said “In an active/passive DR scenario, having L3 DCI separation doesn’t protect you from STP loop/flood in your active DC, so why do you care?”
He’s absolutely right - if you have a cold disaster recovery site, it doesn’t matter if it’s bombarded by a gazillion flooded packets per second… but how often do you have a cold recovery site?
Michael Mullany analyzed 20 years of Gartner hype cycles and came to some (expected but still interesting) conclusions, including:
Enjoy the reading, and keep these lessons in mind the next time you’re sitting in a software-defined, intent-based, or machine-learning $vendor presentation.
I claimed that “EVPN is the control plane for layer-2 and layer-3 VPNs” in the Using VXLAN and EVPN to Build Active-Active Data Centers interview a long long while ago and got this response from one of the readers:
To me, that doesn’t compute. For layer-3 VPNs I couldn’t care less about EVPN, they have their own control planes.
Apart from EVPN, there’s a single standardized scalable control plane for layer-3 VPNs: the BGP VPNv4 address family using MPLS labels. Maybe EVPN could be a better solution (opinions differ; see the EVPN Technical Deep Dive webinar for more details).
In late 2018 Juniper started aggressively promoting Network Reliability Engineering - the networking variant of software-driven operations derived from the GIFEE SRE concept (because it must make perfect sense to mimic whatever Google is doing, right?).
There’s nothing wrong with promoting network automation, or infrastructure-as-code concepts, and Matt Oswalt and his team did an awesome job with NRE Labs (huge “Thank you!” to whoever is financing them), but is that really all NRE should be?
In early May 2020 I wrote a blog post introducing SuzieQ, a network observability platform Dinesh Dutt has been working on for the last few years. If that blog post made you look for more details, you might like Episode 111 of Software Gone Wild, in which we went deeper and covered these topics:
Regular readers of my blog probably remember the detailed explanations Erik Auerswald creates while solving hands-on exercises from our Networking in Public Cloud Deployments online course (previous ones: create a virtual network, deploy a web server).
This time he documented the process he went through to develop a Terraform configuration file that deploys a full-blown AWS networking infrastructure (VPC, subnets, Internet gateway, route tables, security groups) and multiple servers, including an SSH bastion host. You’ll also see what he found out when he used Elastic Network Interfaces (spoiler: routing on multi-interface hosts is tough).
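In case you’re wondering how those resources hang together, here’s a minimal sketch of the same dependency chain expressed with the AWS SDK for JavaScript v3 - it’s not Erik’s Terraform configuration, just an illustration; the region, names, and CIDR ranges are made up.

```typescript
// Rough sketch of the resource chain a VPC deployment has to wire together
// (AWS SDK for JavaScript v3; region and CIDR ranges are made up).
import {
  EC2Client,
  CreateVpcCommand,
  CreateSubnetCommand,
  CreateInternetGatewayCommand,
  AttachInternetGatewayCommand,
  CreateRouteTableCommand,
  CreateRouteCommand,
  AssociateRouteTableCommand,
} from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "eu-west-1" });

async function deploySkeleton(): Promise<void> {
  // 1. The VPC is the container for everything else
  const { Vpc } = await ec2.send(new CreateVpcCommand({ CidrBlock: "10.0.0.0/16" }));
  const vpcId = Vpc!.VpcId!;

  // 2. A public subnet inside the VPC
  const { Subnet } = await ec2.send(
    new CreateSubnetCommand({ VpcId: vpcId, CidrBlock: "10.0.1.0/24" })
  );

  // 3. Internet gateway, attached to the VPC
  const { InternetGateway } = await ec2.send(new CreateInternetGatewayCommand({}));
  const igwId = InternetGateway!.InternetGatewayId!;
  await ec2.send(new AttachInternetGatewayCommand({ InternetGatewayId: igwId, VpcId: vpcId }));

  // 4. Route table with a default route via the gateway, associated with the subnet
  const { RouteTable } = await ec2.send(new CreateRouteTableCommand({ VpcId: vpcId }));
  const rtbId = RouteTable!.RouteTableId!;
  await ec2.send(
    new CreateRouteCommand({ RouteTableId: rtbId, DestinationCidrBlock: "0.0.0.0/0", GatewayId: igwId })
  );
  await ec2.send(new AssociateRouteTableCommand({ RouteTableId: rtbId, SubnetId: Subnet!.SubnetId! }));
}
```

Terraform hides most of that sequencing behind resource references, which is a big part of why it’s such a good fit for this kind of exercise.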
A network architect friend of mine sent me a series of questions trying to figure out how he should approach network automation, and how deep he should go.
There is so much focus right now on network automation, but it’s difficult for me to know how to apply it, and how it all makes sense from an Architect’s PoV.
A network architect should be the bridge between the customer requirements and the underlying technologies, which (in my opinion) means he has to have a good grasp of both as opposed to fluffy opinions gleaned from vendor white papers or picked up from so-called thought leaders.
There’s one thing no cloud vendor ever managed to change: virtual machines running on top of cloud infrastructure expect to have Ethernet interfaces.
It doesn’t matter if the virtual Ethernet Network Interface Cards (NICs) are implemented with software emulation of actual hardware (VMware emulated the ancient Novell NE1000 NIC) or with paravirtual drivers - the virtual machines expect to send and receive Ethernet frames. What happens beyond the Ethernet NIC depends on the cloud implementation details.
A long while ago I decided to write an article explaining how you could run VMware NSX on ESXi servers with redundant connections to two top-of-rack switches on top of a layer-3-only fabric (a fabric with IP subnets and VLANs limited to a single top-of-rack switch). Turns out that’s Mission Impossible, so I put the article on the back burner and slowly forgot about it.
Well, not exactly. Every now and then my subconscious would bring it back up, and I’d figure out yet another reason why it’s REALLY hard to do it right. After a while, I decided to try again and completely rewrote the article. The first part is already online, with more details coming (hopefully) soon.
Every few years someone within the ITU-T (the standards organization that mattered when we were still dealing with phones, virtual circuits and modems) realizes how obsolete they are and tries to hijack and/or fork Internet protocol development. Their latest attempt is the “New IP” framework, and Geoff Huston did a great job completely tearing that stupidity apart in his May 2020 ISP column. My favorite quote:
It’s really not up to some crusty international committee to dictate future consumer preferences. Time and time again these committees with their lofty titles, such as “the Focus Group on Technologies for Network 2030” have been distinguished by their innate ability to see their considered prognostications comprehensively contradicted by reality! Their forebears in similar committees missed computer mainframes, then they failed to see the personal computer revolution, and were then totally surprised by the smartphone.
Enjoy!
Donal O Duibhir had been trying to get me to present at INOG for ages, and as much as I’d love to get to Ireland, we always had a scheduling conflict.
Last week we finally made it work - unfortunately only as a virtual event, so I got none of the famous Irish beer - and the video about alternate universes of public cloud networking is already online.
Maximilian Wilhelm had great fun turning my usual black-and-white statements into tweets; here’s a selection of them:
CloudFlare launched yet another service: transfer speed and latency measurements done from a web browser. While it’s pretty obvious how you could measure transfer speed (start an asynchronous transfer, register for the JavaScript onreadystatechange event to find out when it has completed, and compute the transfer rate), measuring latency seems like a bit of black magic. After all, you can’t do a ping from a web browser, can you?
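Here’s a minimal TypeScript sketch of how such a browser-based measurement could work (using fetch() instead of the XMLHttpRequest events mentioned above), assuming a hypothetical test endpoint that returns a payload of the requested size - the URL and the bytes query parameter are made up. The transfer rate comes from timing a large download; the “latency” is simply the round-trip time of a near-empty request, because that’s the closest thing to ping you can get from JavaScript.

```typescript
// Minimal sketch: estimate download speed and latency from a browser.
// Assumes a hypothetical endpoint that returns a payload of the requested size;
// no ICMP involved -- "latency" is just the RTT of a tiny HTTP request.

async function measureDownloadSpeed(url: string, bytes: number): Promise<number> {
  const start = performance.now();
  const response = await fetch(`${url}?bytes=${bytes}`, { cache: "no-store" });
  await response.arrayBuffer();                 // wait until the full payload has arrived
  const seconds = (performance.now() - start) / 1000;
  return (bytes * 8) / seconds / 1e6;           // Mbps
}

async function estimateLatency(url: string, samples = 5): Promise<number> {
  const timings: number[] = [];
  for (let i = 0; i < samples; i++) {
    const start = performance.now();
    await fetch(`${url}?bytes=0`, { cache: "no-store" });  // tiny response ≈ pure RTT
    timings.push(performance.now() - start);
  }
  return Math.min(...timings);                  // best sample filters out queuing noise
}

// Usage (hypothetical test endpoint):
// const mbps  = await measureDownloadSpeed("https://example.com/__down", 10_000_000);
// const rttMs = await estimateLatency("https://example.com/__down");
```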
IPv6 is old enough to buy its own beer (in the US, not just in Europe), but there are still tons of naysayers explaining how hard it is to deploy. That’s probably true if you’re forced to work with decades-old boxes, or if you handcrafted your environment with a gazillion clicks in a fancy GUI, but if you used Terraform to deploy your application in AWS, it’s as hard as adding a few extra lines to your configuration files.
Nadeem Lughmani did a great job documenting the exact changes needed to get IPv6 working in an AWS VPC, including adjusting the IPv6 routing tables and security groups. Enjoy ;)
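To illustrate how small those changes are, here’s a hedged sketch of the equivalent API calls using the AWS SDK for JavaScript v3 - not Nadeem’s Terraform code; all resource IDs are placeholders, and per-subnet IPv6 assignment is left out. The whole exercise boils down to requesting an Amazon-provided IPv6 prefix for the VPC, adding a ::/0 route, and opening the security group to IPv6 sources.

```typescript
// Hedged sketch of the IPv6-specific additions to an existing VPC
// (AWS SDK for JavaScript v3; all resource IDs below are placeholders).
import {
  EC2Client,
  AssociateVpcCidrBlockCommand,
  CreateRouteCommand,
  AuthorizeSecurityGroupIngressCommand,
} from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "eu-west-1" });

async function enableIpv6(vpcId: string, routeTableId: string, igwId: string, sgId: string) {
  // 1. Ask AWS for an Amazon-provided IPv6 /56 prefix on the VPC
  await ec2.send(
    new AssociateVpcCidrBlockCommand({ VpcId: vpcId, AmazonProvidedIpv6CidrBlock: true })
  );

  // 2. Default IPv6 route toward the Internet gateway
  await ec2.send(
    new CreateRouteCommand({
      RouteTableId: routeTableId,
      DestinationIpv6CidrBlock: "::/0",
      GatewayId: igwId,
    })
  );

  // 3. Allow inbound HTTPS from any IPv6 source
  await ec2.send(
    new AuthorizeSecurityGroupIngressCommand({
      GroupId: sgId,
      IpPermissions: [
        { IpProtocol: "tcp", FromPort: 443, ToPort: 443, Ipv6Ranges: [{ CidrIpv6: "::/0" }] },
      ],
    })
  );
}

// Usage (placeholder IDs):
// await enableIpv6("vpc-0123", "rtb-0123", "igw-0123", "sg-0123");
```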