Author Archives: Ivan Pepelnjak

Repost: Drawbacks and Pitfalls of Cut-Through Switching

Minh Ha left a great comment describing additional pitfalls of cut-through switching on my Chasing CRC Errors blog post. Here it is (lightly edited).


Ivan, I don’t know about you, but I think cut-through switching and deep buffers are nothing but scams, and it’s subtle problems like this [fabric-wide CRC errors] that open one’s eyes to the difference between reality and academia. Cut-through switching might improve nominal device latency a little bit compared to store-and-forward (SAF), but once you put it into the bigger end-to-end context, it’s mostly useless.
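
To put the “nominal device latency” part into numbers: store-and-forward adds roughly one frame serialization delay per hop compared to cut-through, and at 100 Gbps that delay is tiny. A back-of-the-envelope sketch (frame size, link speed, and hop count are illustrative assumptions, not measurements):

```python
# Rough estimate of how much latency cut-through saves over store-and-forward (SAF).
# Assumption: SAF waits for the whole frame at every hop, cut-through does not.

def serialization_delay_ns(frame_bytes: int, link_gbps: float) -> float:
    """Time needed to clock a frame onto the wire, in nanoseconds."""
    return frame_bytes * 8 / link_gbps   # bits divided by Gbit/s gives ns

FRAME_BYTES = 1500   # MTU-sized frame
LINK_GBPS = 100.0    # leaf-to-spine link speed
HOPS = 3             # leaf -> spine -> leaf

saf_penalty_ns = HOPS * serialization_delay_ns(FRAME_BYTES, LINK_GBPS)
print(f"Extra store-and-forward latency over {HOPS} hops: {saf_penalty_ns:.0f} ns")
# ~360 ns -- negligible next to the tens of microseconds added by host stacks,
# queuing, and application processing on the end-to-end path.
```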

Learning Networking Fundamentals at University?

One of my readers sent me this interesting question:

It raises the question of how well graduates with a degree in computer science or applied IT infrastructure (at university or college level or equivalent) actually understand networking fundamentals. I work for a vendor-independent networking firm, and a lot of my new colleagues are college graduates. On the positive side, they are very well versed in automation, scripting, and other programming skills, but I never asked them what actually happens when a packet traverses a network. I wonder what the result would be…

I can tell you what the result would have been in my day: blank stares and confusion. I “enjoyed” a half-year course in computer networking that focused exclusively on the history of networking and an academic view of layering; whatever I know about networking I learned after finishing my studies.

Implement Private VLAN Functionality with Linux Bridge and Libvirt

I wanted to test routing protocol behavior (IS-IS in particular) on partially meshed multi-access layer-2 networks like private VLANs or Carrier Ethernet E-Tree service. I recently spent plenty of time creating a Vagrant/libvirt lab environment on my Intel NUC running Ubuntu 20.04, and I wanted to use that environment in my tests.

Challenge-of-the-day: How do you implement private VLAN functionality with Vagrant using the libvirt plugin?

There might be interesting KVM/libvirt options I’ve missed, but so far I’ve figured out two ways of connecting Vagrant-controlled virtual machines in a libvirt environment:
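
For background (and not necessarily the approach described in the rest of the post): a plain Linux bridge can approximate private VLAN behavior with per-port isolation, where isolated ports can reach the non-isolated “promiscuous” port but not each other. A minimal sketch, assuming a reasonably recent kernel and iproute2, with purely hypothetical interface names:

```python
# Sketch: private-VLAN-like isolation on a Linux bridge. VM-facing ports are
# marked "isolated" (they can reach the uplink/router port but not each other).
# Interface names are hypothetical; run as root on the hypervisor.
import subprocess

def run(cmd: str) -> None:
    print(f"+ {cmd}")
    subprocess.run(cmd.split(), check=True)

BRIDGE = "pvlan-br0"
PROMISCUOUS_PORT = "tap-router"           # e.g., the router VM running IS-IS
ISOLATED_PORTS = ["tap-vm1", "tap-vm2"]   # VMs that must not see each other

run(f"ip link add name {BRIDGE} type bridge")
run(f"ip link set {BRIDGE} up")

for port in [PROMISCUOUS_PORT, *ISOLATED_PORTS]:
    run(f"ip link set dev {port} master {BRIDGE}")
    run(f"ip link set dev {port} up")

# The key knob: isolated bridge ports can talk only to non-isolated ports.
for port in ISOLATED_PORTS:
    run(f"bridge link set dev {port} isolated on")
```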

Lessons Learned: Automating Site Deployments

Some networking engineers renew their ipSpace.net subscription every year, and when they drop off the radar, I try to get in touch with them to understand whether they moved out of networking or whether we did a bad job.

One of them replied that he retired after building a fully automated site deployment solution (first lesson learned: you’re never too old to start automating your network), and graciously shared numerous lessons learned while building that solution.

Updated: Getting Network Device Operational Data with Ansible

Recording the same content for the third time because software developers decided to write code before figuring out what needs to be done is disgusting… so it took me a long, long while before I collected enough willpower to rewrite and retest all the examples and re-record the Getting Operational Data section of the Ansible for Networking Engineers webinar.

The new videos explain how to consume data generated by show commands in JSON or XML format, and how to parse traditional text-based show printouts. I dropped mentions of (semi)failed experiments like Ansible parse_cli and focused on things that work well: TextFSM (in particular with the ntc-templates library), pyATS/Genie, and TTP. On the positive side, I liked the slick new cli_parse module… let’s hope it stays that way for at least a few years.
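
For illustration, this is roughly what the TextFSM/ntc-templates combination looks like when used straight from Python (a minimal sketch assuming the ntc-templates package is installed; the show printout below is a fabricated example):

```python
# Sketch: turn a traditional text-based "show" printout into structured data
# using the TextFSM templates shipped with the ntc-templates library.
from ntc_templates.parse import parse_output

raw = """Interface              IP-Address      OK? Method Status                Protocol
GigabitEthernet0/0     10.0.0.1        YES manual up                    up
GigabitEthernet0/1     unassigned      YES unset  administratively down down
"""

# parse_output picks the right template based on platform + command
parsed = parse_output(
    platform="cisco_ios",
    command="show ip interface brief",
    data=raw,
)

# The result is a list of dictionaries, one per table row, with keys taken
# from the template's value definitions.
for row in parsed:
    print(row)
```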

On a totally unrelated topic, I realized (again) that fail fast, fail often sounds great in a VC pitch deck, and sucks when you have to deal with its results.

Interesting: Differential Availability

Someone pointed me to a high-level overview of Google’s Spanner database which included this gem:

A second refinement is that there are many other sources of outages, some of which take out the users in addition to Spanner (“fate sharing”). We actually care about the differential availability, in which the user is up (and making a request) to notice that Spanner is down. This number is strictly higher (more available) than Spanner’s actual availability — that is, you have to hear the tree fall to count it as a problem.

In other words, it doesn’t matter if your distributed database fails when its users are gone as well. Keep this concept in mind every time you’re designing a high-availability solution – some corner cases are simply not worth solving.
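
A toy calculation (with made-up numbers) shows how far the two availability figures can drift apart:

```python
# Toy example of differential availability: outages nobody is up to observe
# don't count. All numbers below are invented for illustration.
db_downtime = 0.0010             # database is down 0.10% of the time
fate_sharing_fraction = 0.8      # 80% of that downtime coincides with the
                                 # users being down as well

observed_downtime = db_downtime * (1 - fate_sharing_fraction)

print(f"Measured availability:     {1 - db_downtime:.4%}")        # 99.9000%
print(f"Differential availability: {1 - observed_downtime:.4%}")  # 99.9800%
```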

Fast Failover: Techniques and Technologies

Continuing our Fast Failover saga, let’s focus on techniques and technologies available to implement it (assuming you still think it’s worth the effort).

The following text is heavily based on comments Jeff Tantsura wrote on one of my LinkedIn posts as well as the original blog post. Thank you!

There are numerous technologies you can use to implement fast reroute, from the most complex to the easiest one:
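
Whatever ends up on that list, the common thread is precomputation: the backup path is calculated and installed in the forwarding table ahead of time, so reacting to a failure is a local data-plane action instead of a control-plane reconvergence. A toy, vendor-neutral Python model of that idea (all names and data structures are purely illustrative):

```python
# Toy model of the core fast-reroute idea: keep a precomputed backup next hop
# per prefix, so a link-down event only flips a local flag -- no need to wait
# for the routing protocol to reconverge. Purely illustrative.
from dataclasses import dataclass

@dataclass
class FibEntry:
    primary: str          # primary next-hop interface
    backup: str           # precomputed loop-free backup (think LFA/TI-LFA)
    primary_up: bool = True

    def next_hop(self) -> str:
        return self.primary if self.primary_up else self.backup

fib = {
    "10.0.0.0/24": FibEntry(primary="spine1", backup="spine2"),
    "10.0.1.0/24": FibEntry(primary="spine2", backup="spine1"),
}

def link_down(interface: str) -> None:
    """Triggered by fast failure detection (for example BFD)."""
    for entry in fib.values():
        if entry.primary == interface:
            entry.primary_up = False

link_down("spine1")
print(fib["10.0.0.0/24"].next_hop())   # -> spine2, no reconvergence required
```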

Chasing CRC Errors in a Data Center Fabric

One of my readers encountered an interesting problem when upgrading a data center fabric to 100 Gbps leaf-to-spine links:

  • They installed new fiber cables and SFPs.
  • Everything looked great… until someone started complaining about application performance problems.
  • Nothing else had changed, so the culprit must have been the network upgrade.
  • A closer look at the monitoring data revealed CRC errors on every leaf switch. Obviously, something was badly wrong with the whole batch of SFPs.

Fortunately my reader took a closer look at the data before they requested a wholesale replacement… and spotted an interesting pattern:
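
As an illustration of the kind of analysis that pays off in situations like this (all device names and counters below are made up, not my reader’s data): instead of staring at per-switch totals, correlate every errored interface with the device on the other end of the link and check whether a common element emerges.

```python
# Sketch: look for a common source behind fabric-wide CRC errors.
# In practice the counters would come from your monitoring system
# (SNMP, streaming telemetry, show interface counters, ...).
crc_errors = {
    # (switch, interface): CRC errors seen over the last hour (hypothetical)
    ("leaf1", "Eth1/49"): 0,
    ("leaf1", "Eth1/50"): 1843,
    ("leaf2", "Eth1/49"): 0,
    ("leaf2", "Eth1/50"): 1790,
    ("leaf3", "Eth1/50"): 1811,
    ("spine2", "Eth1/3"): 1902,
}

link_peer = {
    # (switch, interface): device on the other end of that link (hypothetical)
    ("leaf1", "Eth1/50"): "spine2",
    ("leaf2", "Eth1/50"): "spine2",
    ("leaf3", "Eth1/50"): "spine2",
    ("spine2", "Eth1/3"): "leaf3",
}

suspects: dict[str, int] = {}
for port, count in crc_errors.items():
    if count:
        peer = link_peer.get(port, "unknown")
        suspects[peer] = suspects.get(peer, 0) + count

# In this invented data set every errored leaf uplink faces the same spine,
# which is a far more useful clue than "CRC errors on every leaf switch".
print(sorted(suspects.items(), key=lambda kv: kv[1], reverse=True))
```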
