Author Archives: Ivan Pepelnjak

Lessons Learned: Automating Site Deployments

Some networking engineers renew their ipSpace.net subscription every year, and when they drop off the radar, I try to get in touch with them to understand whether they moved out of networking or whether we did a bad job.

One of them replied that he retired after building a fully automated site deployment solution (first lesson learned: you’re never too old to start automating your network), and graciously shared numerous lessons learned while building that solution.

Updated: Getting Network Device Operational Data with Ansible

Having to record the same content for the third time because software developers decided to write code before figuring out what needs to be done is disgusting… so it took me a long, long while before I collected enough willpower to rewrite and retest all the examples and re-record the Getting Operational Data section of the Ansible for Networking Engineers webinar.

The new videos explain how to consume data generated by show commands in JSON or XML format, and how to parse traditional text-based show printouts. I dropped mentions of (semi-)failed experiments like Ansible parse_cli and focused on things that work well: TextFSM (in particular with the ntc-templates library), pyATS/Genie, and TTP. On the positive side, I liked the slick new cli_parse module… let’s hope it stays that way for at least a few years.
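
If you want to play with the parsers before wiring them into a playbook, here’s a minimal sketch (mine, not from the webinar) that feeds a captured show printout straight into the TextFSM templates shipped with the ntc-templates library; the sample printout and the resulting field names are assumptions based on the cisco_ios templates:

  # Minimal sketch: parse a captured "show" printout with the TextFSM
  # templates from ntc-templates (assumes "pip install ntc-templates").
  from ntc_templates.parse import parse_output

  raw = (
      "Interface              IP-Address      OK? Method Status                Protocol\n"
      "GigabitEthernet0/0     192.0.2.1       YES NVRAM  up                    up\n"
      "GigabitEthernet0/1     unassigned      YES NVRAM  administratively down down\n"
  )

  # parse_output() picks the matching TextFSM template based on platform and
  # command, and returns a list of dictionaries instead of a blob of text.
  parsed = parse_output(
      platform="cisco_ios",
      command="show ip interface brief",
      data=raw,
  )

  for intf in parsed:
      print(intf["interface"], intf["ip_address"], intf["status"])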

On a totally unrelated topic, I realized (again) that fail fast, fail often sounds great in a VC pitch deck, and sucks when you have to deal with its results.

Interesting: Differential Availability

Someone pointed me to a high-level overview of Google’s Spanner database which included this gem:

A second refinement is that there are many other sources of outages, some of which take out the users in addition to Spanner (“fate sharing”). We actually care about the differential availability, in which the user is up (and making a request) to notice that Spanner is down. This number is strictly higher (more available) than Spanner’s actual availability — that is, you have to hear the tree fall to count it as a problem.

In other words, it doesn’t matter if your distributed database fails when its users are also gone. Keep this concept in mind every time you’re designing a high-availability solution – some corner cases are simply not worth solving.
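
Here’s a hypothetical back-of-the-envelope calculation (all the numbers are made up) showing why the availability observed by the surviving users is always at least as good as the raw number:

  # Made-up numbers illustrating "differential availability": users can only
  # observe the outages they survive themselves (fate sharing hides the rest).
  p_db_down = 1e-4        # raw probability that the database is unavailable
  p_shared_fate = 0.5     # fraction of those outages that also take out the users

  p_observed_down = p_db_down * (1 - p_shared_fate)

  print(f"raw availability:          {1 - p_db_down:.6f}")          # 0.999900
  print(f"differential availability: {1 - p_observed_down:.6f}")    # 0.999950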

Fast Failover: Techniques and Technologies

Continuing our Fast Failover saga, let’s focus on techniques and technologies available to implement it (assuming you still think it’s worth the effort).

The following text is heavily based on comments Jeff Tantsura wrote on one of my LinkedIn posts as well as the original blog post. Thank you!

There are numerous technologies you can use to implement fast reroute, from the most complex to the easiest one:

Chasing CRC Errors in a Data Center Fabric

One of my readers encountered an interesting problem when upgrading a data center fabric to 100 Gbps leaf-to-spine links:

  • They installed new fiber cables and SFPs.
  • Everything looked great… until someone started complaining about application performance problems.
  • Nothing else had changed, so the culprit must have been the network upgrade.
  • A closer look at the monitoring data revealed CRC errors on every leaf switch. Obviously something was badly wrong with the whole batch of SFPs.

Fortunately my reader took a closer look at the data before they requested a wholesale replacement… and spotted an interesting pattern:
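
If you ever want to run a similar analysis on your own fabric (the sketch below is generic, it’s not the pattern my reader found), a few lines of Python grouping per-link error counters by the far-end device are usually enough to see whether the errors cluster somewhere:

  # Generic sketch with made-up data: group per-link CRC error counters by the
  # device on the far end of each link to see whether the errors cluster.
  from collections import Counter

  # (leaf, leaf port, far-end device) -> CRC error count
  crc_errors = {
      ("leaf1", "Ethernet1/49", "spine1"): 913,
      ("leaf1", "Ethernet1/50", "spine2"): 0,
      ("leaf2", "Ethernet1/49", "spine1"): 1204,
      ("leaf2", "Ethernet1/50", "spine2"): 2,
  }

  per_far_end = Counter()
  for (leaf, port, far_end), errors in crc_errors.items():
      per_far_end[far_end] += errors

  for device, errors in per_far_end.most_common():
      print(f"{device}: {errors} CRC errors on links facing it")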

Fifty Shades of High Availability

A while ago we had an interesting exchange of ideas around inserting a high-availability network appliance into a public cloud environment (TL&DR: it was really hard until AWS introduced Gateway Load Balancer), and someone quickly pointed out we’re solving the wrong challenge because…

Azure Firewall […] is a fully stateful firewall-as-a-service with built-in high-availability.

Somehow he wasn’t too happy when I pointed out that there’s more to high availability than vendor marketing ;)

Worth Exploring: Pluginized Protocols

Remember my BGP route selection rules are a clear failure of intent-based networking paradigm blog post? I wrote it almost three years ago, so maybe you want to start by rereading it…

Making a long story short: every large network is a unique snowflake, and every sufficiently convoluted network architect has unique ideas about how BGP route selection should work, resulting in all sorts of crazy extended BGP communities, dozens if not hundreds of nerd knobs, and 2000+ pages of BGP documentation for a recent network operating system (no, unfortunately I’m not joking).

Fun Times: Another Broken Linux ALG

Dealing with protocols that embed network-layer addresses into application-layer messages (like FTP or SIP) is great fun, more so if the said protocol traverses a NAT device that has to find the IP addresses embedded in application messages while translating the addresses in IP headers. For whatever reason, the content rewriting functionality is called application-level gateway (ALG).
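
To illustrate what that rewriting job looks like, here’s a toy Python sketch (purely illustrative, nothing like real ALG code) of what an FTP ALG has to do with the PORT command sent by a client sitting behind a NAT device:

  # Toy illustration of the FTP ALG rewriting job: find the IP address and
  # port embedded in the application-layer PORT command and replace them with
  # the translated (outside) address and port.
  import re

  def rewrite_ftp_port(line: str, outside_ip: str, outside_port: int) -> str:
      """Rewrite 'PORT h1,h2,h3,h4,p1,p2' to use the translated address/port."""
      if not re.match(r"^PORT \d+,\d+,\d+,\d+,\d+,\d+\s*$", line):
          return line                              # not a PORT command
      h1, h2, h3, h4 = outside_ip.split(".")
      p1, p2 = divmod(outside_port, 256)
      return f"PORT {h1},{h2},{h3},{h4},{p1},{p2}\r\n"

  # The client behind the NAT device announces its private address...
  print(rewrite_ftp_port("PORT 10,0,0,5,200,21\r\n", "192.0.2.1", 51221))
  # ... and the ALG rewrites it into: PORT 192,0,2,1,200,21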

Even when we’re faced with a monstrosity like FTP or SIP that should have been killed with napalm a microsecond after it was created, there’s a proper way of doing things and a fast way of doing things. You could implement a protocol-level proxy that would intercept control-plane sessions… or you could implement a hack that tries to snoop TCP payload without tracking TCP session state.

Not surprisingly, the fast way of doing things usually results in a wonderful attack surface, more so if the attacker is smart enough to construct HTTP requests that look like SIP messages. Enjoy ;)

Reviving Old Content, Part 1

More than a decade ago I published tons of materials on a web site that eventually disappeared into digital nirvana, leaving heaps of broken links on my blog. I decided to clean up those links, and managed to save some of the vanished content from the Internet Archive:

I also updated dozens of blog posts while pretending to be Indiana Jones, including:
