A few weeks ago I described the basics of AWS networking; now it’s time to describe how different Azure is.
As always, it would be best to watch my Azure Networking webinar to get the details. This blog post is the abridged CliffsNotes version of the webinar (and here’s the reason I won’t write a similar blog post for other public clouds ;).
Here’s the final push before we hit the summer break at the end of June (and recover a bit from the relentless production of new content we had throughout the first half of 2020):
While helping a friend of mine figure out the details of using Salt in Zero-Touch-Provisioning environments, Zach Moody sent me a description of their process, and was kind enough to allow me to turn it into a blog post.
We follow the same basic ZTP process you would with anything else. Salt drives the parts that interface with the network devices with information from our source-of-truth, NetBox.
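Zach’s actual implementation lives in Salt pillars and states, so here’s just a minimal Python sketch of the underlying idea: pull device data from the NetBox source-of-truth with pynetbox and render a day-zero configuration with Jinja2 (the URL, token, device name and template are made-up placeholders, not Zach’s code).

```python
# Hypothetical sketch (not Zach's code): fetch device facts from NetBox
# and render a drastically simplified day-zero configuration.
import pynetbox
from jinja2 import Template

nb = pynetbox.api(
    "https://netbox.example.com",        # placeholder NetBox URL
    token="0123456789abcdef",            # placeholder API token
)

# Look up the device record the ZTP process is about to provision
device = nb.dcim.devices.get(name="leaf01")    # hypothetical device name

config_template = Template("""\
hostname {{ name }}
snmp-server location {{ site }}
""")

print(config_template.render(name=device.name, site=device.site.name))
```

A Salt-driven pipeline performs the same lookup-and-render step with NetBox data in pillars and Jinja templates inside Salt states instead of a standalone script.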
David Varnum created a fantastic leaf-and-spine fabric of vEOS switches running in GNS3 and automated with Ansible playbooks.
Not only that - his blog post includes detailed setup instructions, and the corresponding GitHub repository contains all the source code you need to get it up and running.
After describing Cisco SD-WAN fundamentals and its network abstraction mechanisms, David Penaloza explained the components of the Cisco SD-WAN solution and its architecture, including the plane in which each element operates and its assigned role in the overlay network.
I got this question about the use of AS numbers on data center leaf switches participating in an MLAG cluster:
In the Leaf-and-Spine Fabric Architectures webinar you recommended having the same AS number on all members of an MLAG cluster and running iBGP between them. In the Autonomous Systems and AS Numbers article you discuss the option of having a different AS number per leaf. Which one should I use… and do I still need the EBGP peering between the leaf pair?
As always, there’s a bit of a gap between theory and practice ;), but let’s start with a leaf-and-spine fabric diagram illustrating both concepts:
When I started designing the Data Center Infrastructure for Networking Engineers webinar, I wanted to create something that would allow someone fluent in networking but not in adjacent fields like servers or storage to grasp the fundamentals of data center technologies, from server virtualization and containers to data center fabrics and storage protocols.
Here’s what a network architect said about the webinar:
During the Comparing Transparent Bridging and IP Routing part of the How Networks Really Work webinar I said something along the lines of:
While packets should never be reordered in transit in transparent bridging, there’s no such guarantee in IP networks, and IP applications should tolerate out-of-order packets.
One of my regular readers, who designs and builds networks supporting VoIP applications, disagreed with that, citing numerous real-life examples.
Of course he was right, but let’s get the facts straight first:
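To show what “tolerating out-of-order packets” means for an application, here’s a toy Python sketch (my illustration, not something from the webinar): the receiver buffers whatever arrives early and releases data only when the sequence gap is filled.

```python
# Toy illustration: an application-level receiver that accepts datagrams in any
# order and delivers their payloads strictly in sequence. Real mechanisms (TCP
# reassembly, RTP jitter buffers) are far more elaborate, but the idea is the same.
def make_reassembler():
    buffered = {}          # early arrivals, keyed by sequence number
    state = {"next": 0}    # next in-order sequence number we expect

    def deliver(seq, payload):
        """Accept (seq, payload) in any order, return payloads now deliverable in order."""
        buffered[seq] = payload
        released = []
        while state["next"] in buffered:
            released.append(buffered.pop(state["next"]))
            state["next"] += 1
        return released

    return deliver

deliver = make_reassembler()
print(deliver(0, "A"))   # ['A']
print(deliver(2, "C"))   # []          -- hole at seq 1, keep buffering
print(deliver(1, "B"))   # ['B', 'C']  -- hole filled, release in order
```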
A few days ago Greg Ferro published an interesting post claiming DHCP is an example of intent-based networking (a bit less tongue-in-cheek than my “so is OSPF configuration” rant from 2017). BTW, so is RADIUS or TACACS+ ;)
He got quickly “corrected” by Phil Gervasi who loosely relied on Gartner’s definition of Intent-Based Networking, and claimed that an intent-based networking system should have three major components:
If you treat your engineers like interchangeable human resources you’ll get the results you deserve ;)… but if you wonder how to keep them happy and productive, there’s no better place to start than Key Factors in Attracting and Retaining Talent by Daniel Dib.
A while ago we discussed a software-focused view of Network Interface Cards (NICs) with Luke Gorrie, and a hardware-focused view of them with Or Gerlitz (Mellanox), Andy Gospodarek (Broadcom) and Jiri Pirko (Mellanox).
Why would anyone want to implement features in hardware and not in software, and what would be the best hardware implementation? We discussed these dilemmas with Silvano Gai in Episode 110 of Software Gone Wild podcast.
I don’t know what’s wrong with me, but I rarely get emails along the lines of “I deployed SD-WAN and it was the best thing we did in the last decade” (trust me, I would publish those if they’d come from a semi-trusted source).
What I usually get are sad experiences from people being exposed to vendor brainwashing or deployments that failed to meet expectations (but according to a Systems Engineering Director working for an aggressive SD-WAN vendor, that’s just because they didn’t do their research, and thus did everything wrong).
Here’s another story coming from Adrian Giacometti.
Whenever I compared VMware NSX and Cisco ACI a few years ago (in the late 2010s, in case you’re reading this in a far-away future), someone would inevitably ask “and how would you connect a bare metal server to a VMware NSX environment?”
While NSX-T has had that capability since release 2.5 (more about that in a later blog post), let’s start with the big question: why would you need to?
In early April 2020 I ran another live session in my How Networks Really Work webinar. It was supposed to be an easy one, explaining the concepts of packet forwarding and routing protocols… but of course I decided to cover most solutions we’ve encountered in the last 50 years, ranging from Virtual Circuits and Source Route Bridging to Segment Routing (which, when you think about it, is just slightly better SRB over IPv6), so I never got to routing protocols.
That webinar was supposed to be an introductory one, but of course I got pulled down all sorts of rabbit holes, and even as I was explaining interesting stuff I realized a beginner would have a really hard time following along… so I silently gave up. Obviously I’m not meant to create introduction-to-something material.
Imagine a life where you would be able to…
… and be able to do all that in a multi-vendor environment without writing tons of Ansible playbooks or Python code.
A tweet by Corey Quinn pointed me to his hilarious riff on the “you are not Google and don’t have the same problems” theme. Enjoy!
Mat Jovanovic decided to follow my lead and migrate his blog from Blogger to Hugo, using the Docsy theme, AWS Amplify as the CI/CD pipeline, and AWS S3 as the hosting platform.
Nice job… but he did way more than that - he documented the whole process, including tool selection, setup, and Blogger migration.
Thank you Mat! Every time I see someone publishing blog posts about open-source tools on Medium I’ll send them a link to your blog (with a comment “this is how you should blog about open-source solutions”).
It’s amazing how many people assume that The Internet is a thing, whereas in reality it’s a mishmash of interconnected independent operators running mostly on goodwill, misplaced trust in other people’s competence, and (sometimes) pure dumb luck.
I described a few consequences of this sad reality in the Internet Has More than One Administrator video (part of How Networks Really Work webinar), and Nick Buraglio and Elisa Jasinska provided even more details in their Surviving the Internet Default-Free Zone webinar.
The third hands-on exercise in our Networking in Public Cloud Deployments online course asks the students to deploy a web server in a public cloud of their choice using infrastructure-as-code principles.
Not surprisingly, Erik Auerswald created another fantastic writeup while solving that exercise, including an exploration of the problem space, a detailed description of his Terraform-based solution, and testing procedures. Enjoy!
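Erik solved the exercise with Terraform; purely as an illustration of the same “web server as code” idea in Python, here’s a rough Pulumi-based sketch (my example, not Erik’s solution; the AMI ID, instance size and user data are placeholders).

```python
# Hypothetical sketch, NOT Erik's Terraform solution: the same declarative
# "web server as code" idea expressed with Pulumi's Python SDK.
import pulumi
import pulumi_aws as aws

user_data = """#!/bin/bash
yum install -y httpd
systemctl enable --now httpd
echo "Hello from infrastructure-as-code" > /var/www/html/index.html
"""

web_sg = aws.ec2.SecurityGroup(
    "web-sg",
    description="Allow inbound HTTP and all outbound traffic",
    ingress=[aws.ec2.SecurityGroupIngressArgs(
        protocol="tcp", from_port=80, to_port=80, cidr_blocks=["0.0.0.0/0"])],
    egress=[aws.ec2.SecurityGroupEgressArgs(
        protocol="-1", from_port=0, to_port=0, cidr_blocks=["0.0.0.0/0"])],
)

web = aws.ec2.Instance(
    "web",
    ami="ami-0123456789abcdef0",        # placeholder AMI ID
    instance_type="t3.micro",
    vpc_security_group_ids=[web_sg.id],
    user_data=user_data,
    tags={"Name": "iac-web-server"},
)

pulumi.export("public_ip", web.public_ip)
```

Whether you express it in HCL or Python, the point of the exercise is the same: the whole deployment is described declaratively and can be rebuilt or torn down with a single command.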
There was an obvious invisible elephant in the virtual Cloud Field Day 7 (CFD7v) event I attended in late April 2020. Most everyone was talking about AWS, how their stuff runs on AWS, how it integrates with AWS, or how it will help others leapfrog AWS (yeah, sure…).
Although you REALLY SHOULD watch my AWS Networking webinar (or something equivalent) to understand what problems vendors like VMware or Pensando are facing or solving, I’m pretty sure a lot of people think they can get away with the CliffsNotes version of it, so here it is ;)