
Author Archives: Ivan Pepelnjak
We had the usual gloomy December weather during the end-of-year holidays, and together with the partial lockdown (with confusing, ever-changing rules only someone in the Balkans could dream up) it managed to put me in an OCD mood… and so I decided to remove broken links from the old blog posts.
While doing that, I realized how fragile our industry is – I encountered a graveyard of ideas and products that would make Google proud. I removed some of those blog posts, left others intact because they still have some technical merit, and made sure to write sarcastic update notices on the product-focused ones. Consider those comments Easter eggs… now go and find them ;)
Anyway, I also salvaged from archive.org some of the old content I was stupid enough to publish somewhere else. Here it is:
As always, it’s time to shut down our virtual office and disappear until early January… unless, of course, you have an urgent support problem. Any paperwork ideas your purchasing department might have will have to wait until 2021.
I hope you’ll be able to disconnect from the crazy pace of the networking world, forget all the unicorns and rainbows (and the broccoli forest of despair), and focus on your loved ones – they need you more than the dusty router sitting in a remote office. We would also like to wish you all the best in 2021!
Imagine the following network running OSPF as the routing protocol. PE1–P1–PE2 is the primary path and PE1–P2–PE2 is the backup path. What happens on PE1 when the PE1–P1 link fails? What happens on PE2?
Sample 4-router network with a primary and a backup path
The second question is much easier to answer, and the answer is totally unambiguous as it only involves OSPF:
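Before reading on, you can play with the failure scenario yourself – here’s a minimal Python sketch (my own illustration, using networkx and made-up interface costs) of what SPF computes before and after the link failure:

# Minimal SPF sketch of the sample topology. The costs are illustrative,
# not taken from the original lab.
import networkx as nx

G = nx.Graph()
G.add_edge("PE1", "P1", cost=10)    # primary path: PE1-P1-PE2
G.add_edge("P1", "PE2", cost=10)
G.add_edge("PE1", "P2", cost=100)   # backup path: PE1-P2-PE2
G.add_edge("P2", "PE2", cost=100)

print(nx.shortest_path(G, "PE1", "PE2", weight="cost"))
# ['PE1', 'P1', 'PE2'] -- traffic uses the primary path

# The PE1-P1 link fails: PE1 detects the failure locally and reruns SPF
# right away, while PE2 has to wait for the flooded LSAs before it can
# do the same.
G.remove_edge("PE1", "P1")
print(nx.shortest_path(G, "PE1", "PE2", weight="cost"))
# ['PE1', 'P2', 'PE2'] -- traffic falls back to the backup path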
Deciding to create AWS Networking and Azure Networking webinars wasn’t easy – after all, there’s so much content out there covering all aspects of public cloud services, and a plethora of certification training courses (including free training from AWS).
With that in mind, it’s so nice to hear from people who found our AWS webinar useful ;)
Even though we are working with these technologies and have the certifications, there are always nuggets of information in these webinars that make it totally worthwhile. A good example in this series was the ingress routing feature updates in AWS.
It can be hard to filter through the noise from cloud providers to get to the new features that actually make a difference to what we are doing. This series does exactly that for me. Brilliant as always.
Other engineers use our webinars to prepare for AWS certifications – read this blog post by Jedadiah Casey for more details.
AWS Networking and Azure Networking webinars are available with the Standard ipSpace.net Subscription. For an even deeper dive into cloud networking, check out our Networking in Public Cloud Deployments online course.
Yesterday I wrote a frustrated tweet after wasting an hour trying to figure out why a combination of OSPF and IS-IS routing worked on Cisco IOS but not on Nexus OS. Having to wait for a minute (after Vagrant told me SSH on Nexus 9300v was ready) for NX-OS to “boot” its Ethernet module didn’t improve my mood either, and the inconsistencies in NX-OS interface naming (Ethernet1/1 is uppercase while loopback0 and mgmt0 are lowercase) were just the cherry on top of the pile of ****. Anyway, here’s what I wrote:
Can’t tell you how much I hate Ansible’s lame attempts to do idempotent device configuration changes. Wasted an hour trying to figure out what’s wrong with my Nexus OS config… only to find out that “interface X” cannot appear twice in the configuration you want to push.
Not unexpectedly, I got a few (polite and diplomatic) replies from engineers who felt addressed by that tweet, and a less enthusiastic response from the product manager (no surprise there), so it’s only fair to document exactly what made me so angry.
Update 2020-12-23: In the meantime, Ganesh Nalawade already implemented a fix that solves my problem. Thank you, awesome job!
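To illustrate what tripped me up (this is a toy Python sketch of section-based config diffing, not Ansible’s actual code): if a differ indexes configuration sections by their parent line, the second interface Ethernet1/1 stanza silently replaces the first one, and part of the intended configuration evaporates:

# Toy illustration of why duplicate parent stanzas confuse a differ that
# indexes config sections by parent line. NOT Ansible's actual algorithm.
intended = """\
interface Ethernet1/1
 ip router ospf 1 area 0.0.0.0
interface Ethernet1/1
 ip router isis LAB"""

sections = {}
parent = None
for line in intended.splitlines():
    if not line.startswith(" "):
        parent = line
        sections[parent] = []        # the second "interface Ethernet1/1"
    else:                            # wipes out the OSPF command
        sections[parent].append(line)

print(sections)
# {'interface Ethernet1/1': [' ip router isis LAB']}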
Remember my rant about how “fail fast, fail often sounds great in a VC pitch deck, and sucks when you have to deal with its results”? Streaming telemetry is no exception to this rule, and Avi Freedman (CEO of Kentik) has been on the receiving end of this gizmo long enough to have to deal with several generations of experiments… and formed a few strong opinions.
Unfortunately Avi is still a bit more diplomatic than Artur Bergman – another CEO I love for his blunt statements – but based on his NFD16 presentation I expected a lively debate, and I was definitely not disappointed.
I love my new Vagrant+Libvirt virtual lab environment – it creates virtual machines in parallel and builds labs much faster than my previous VirtualBox-based setup. Eight CPU cores and 32 GB of RAM in my Intel NUC don’t hurt either.
However, it’s still ridiculously boring to set up a new lab. Vagrantfiles describing the private networks I need for routing-protocol-focused network simulations are a mess to write, and it takes way too long to log into all the devices, configure common parameters, enable interfaces…
Last week I described how I configured PVLAN on a Linux bridge. After checking the desired partial connectivity with ios_ping, I wanted to verify it with LLDP neighbors. The Ansible ios_facts module collects LLDP neighbor information, and it should be really easy to use those facts to check whether port isolation works as expected.
---
- name: Display LLDP neighbors on selected interface
  hosts: all
  gather_facts: true
  vars:
    target_interface: GigabitEthernet0/1
  tasks:
  - name: Display neighbors gathered with ios_facts
    debug:
      var: ansible_net_neighbors[target_interface]
Alas, none of the routers saw any neighbors on the target interface.
In early November we organized a 2-day network automation event as part of our Network Automation course and the participants loved the new format… so we decided to use the same approach for the Spring 2021 Networking in Public Clouds course.
This time we’re trying out another bit of the puzzle: while we have plenty of ideas about whom to invite, we’d love to get the most relevant speakers with hands-on deployment experience. If you’ve built an interesting public cloud solution, created a networking-focused automation or monitoring tool, helped organizations migrate into a public cloud, or experienced a phenomenal failure, we’d like to hear from you. Please check out our Call for Papers and send us your ideas. Thank you!
Minh Ha left a great comment describing additional pitfalls of cut-through switching on my Chasing CRC Errors blog post. Here it is (lightly edited).
Ivan, I don’t know about you, but I think cut-through and deep buffer are nothing but scams, and it’s subtle problems like this [fabric-wide CRC errors] that open one’s eyes to the difference between reality and academia. Cut-through switching might improve nominal device latency a little bit compared to store-and-forward (SAF), but when one puts it into the bigger end-to-end context, it’s mostly useless.
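To put rough numbers on that argument (my back-of-the-envelope Python, with assumed frame size and link speeds, not Minh Ha’s figures): the only per-hop latency penalty of store-and-forward is one full frame serialization delay, and it shrinks with link speed:

# Per-hop latency penalty of store-and-forward vs. cut-through: SAF has to
# receive the whole frame before forwarding it. All numbers are assumptions.
frame_bytes = 1500                  # assumed full-size Ethernet frame
for link_bps in (10e9, 100e9):      # 10GE and 100GE fabric links
    penalty_us = frame_bytes * 8 / link_bps * 1e6
    print(f"{link_bps / 1e9:.0f}GE: SAF adds {penalty_us:.2f} us per hop")
# 10GE: SAF adds 1.20 us per hop
# 100GE: SAF adds 0.12 us per hop
# Compared to end-to-end latency dominated by host stacks and queuing,
# the cut-through gain is lost in the noise.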
Continuing my archeological explorations, I found a dusty bag of old QoS content:
I kept digging and turned up a few MPLS, BGP and ADSL nuggets worth saving: