Author Archives: Ivan Pepelnjak

Looking for a Simple Multihop EBGP Use Case

I plan to add several challenge labs using multihop EBGP sessions to the BGP labs project, including:

  1. Running BGP between VMs and central BGP route servers
  2. Using a multihop EBGP session to send the full Internet routing table to a customer without overloading the PE-router
  3. Running an EBGP EVPN session between loopbacks advertised with an EBGP IPv4 session (🤢)

However, I would love to start with a simple use case to help engineers unfamiliar with BGP realize when they might have to use multihop EBGP sessions. Unfortunately, I can’t find one, and the scenarios where I used multihop EBGP in the past (EBGP load balancing, and using a low-end router in the EBGP path, where I was effectively on the customer end of use case #2) are mostly irrelevant.
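
In case you have never configured one: a multihop EBGP session is a regular EBGP session with the single-hop (TTL) restriction lifted, usually running between loopback interfaces. Here’s a minimal FRR-style sketch; the AS numbers, addresses, and the static route providing loopback reachability are made up for illustration:

  ! Local loopback: 192.0.2.1/32, remote loopback: 192.0.2.2/32
  router bgp 65001
   neighbor 192.0.2.2 remote-as 65002
   ! Lift the single-hop limit on the EBGP session
   neighbor 192.0.2.2 ebgp-multihop 2
   ! Source the session from the loopback interface
   neighbor 192.0.2.2 update-source lo
  !
  ! The remote loopback must be reachable before the session can be
  ! established, for example through a static route
  ip route 192.0.2.2/32 10.1.0.2

The last line is the often-forgotten detail: multihop EBGP relies on some other mechanism (static routes or an IGP) to provide reachability between the session endpoints.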

Would you have an easy-to-understand use case that is best solved with a multihop EBGP session? Please share it in the comments. Thanks a million!

Running BGP Labs in GitHub Codespaces

I love open-source tools (and their GitHub repositories). Someone launches a cool idea, and you can dig through their source code to figure out how it works. It beats reading documentation or fixing AI hallucinations every day of the week ;)

Not too long ago, the containerlab team launched the ability to run containerlab within a free container running on GitHub, and that seemed like a perfect solution for running the BGP labs (Jeroen van Bemmel pointing me in the right direction was another significant step forward).

netlab 1.8.4: vrnetlab Containers, Catalyst 8000v

I don’t think I ever created two netlab releases in a week, but last week, I stumbled upon a motherlode of goodies, and it would be a shame not to make them available.

Someone tried to use netlab with vrnetlab containers for CSR 1000v and Nexus 9300v. We got it to work, but when I started integrating his changes into the development branch, I wanted to test them, so I installed vrnetlab to create my own container images. vrnetlab is an excellent tool, and building containers is a breeze (running them is a different story), so I added support for vrnetlab containers for every device supported by both vrnetlab and netlab for which I happened to have a disk image.
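
The build-and-use workflow is straightforward: copy the vendor disk image (qcow2) into the matching directory of your vrnetlab clone, run make, and tell netlab which container image to use. A minimal lab topology could then look something like this sketch (the device name assumes netlab’s csr device, and the image tag depends on whatever your vrnetlab build produced):

  provider: clab
  defaults.device: csr
  # Point netlab at the locally-built vrnetlab container image
  defaults.devices.csr.clab.image: vrnetlab/vr-csr:17.03.04
  nodes: [ r1, r2 ]
  links: [ r1-r2 ]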

Automated Validation of BGP Labs

In late 2023, I started playing with the idea of having automated validation in netlab. The early implementation was used in BGP labs, and a user liked it so much that he opened an issue saying:

I would suggest providing netlab validate for each lab.

Numerous rounds of yak-shaving later, I merged a humongous commit that adds automated validation to a long list of lab exercises.
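
To give you an idea of what the end result looks like: each lab topology gets a validate section describing a handful of tests, and netlab validate executes them on the lab devices and reports the results. A rough sketch (test name, node names, and plugin arguments are purely illustrative):

  validate:
    ebgp:
      description: Check the EBGP session with X1
      wait: 30                  # Give BGP some time to converge
      nodes: [ rtr ]            # Devices on which the test is executed
      plugin: bgp_neighbor(node.bgp.neighbors,'x1')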

netlab 1.8.3: RIPv2, BGP Route Servers

During the ITNOG8 netlab presentation, I jokingly said something along the lines of “all that’s missing is RIPv2 and Babel.” That’s no longer true; someone asked me how hard it would be to add RIPv2 to netlab, and I said, “give me a few days 😎”

Other new features in netlab release 1.8.3 include support for BGP route servers (and route server clients), BGP Link Bandwidth community, and OSPF/BGP validation plugins for Arista EOS, Cumulus Linux and FRR. We also fixed the installation scripts to work with Ubuntu 24.04 and Debian Bookworm.
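
If you want to play with RIPv2, a lab topology could be as simple as this sketch (assuming the new configuration module is called ripv2 and using FRR as the lab device; adjust to taste):

  defaults.device: frr
  module: [ ripv2 ]
  nodes: [ r1, r2, r3 ]
  links: [ r1-r2, r2-r3 ]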

For more details, read the release notes.

The Mythical Use Cases: Traffic Engineering for Data Center Backups

Vendor product managers love discussing mythical use cases to warrant complex functionality in their gear. Long-distance VM mobility was one of those (using it for disaster avoidance was Mission Impossible under any real-world assumptions), and high-volume network-based backups seem to be another. Here’s what someone had to say about that particular unicorn in a LinkedIn comment discussing whether we need traffic engineering in a data center fabric.

When you’re dealing with a large cluster on a fabric, you will see things like inband backup. The most common one I’ve seen is VEEAM. Those inband backups can flood a single link, and no amount of link scheduling really solves that; depending on the source, they can saturate 100G. There are a couple of solutions; IPv6 or eBGP SID has been used to avoid these links or schedule avoidance for other traffic.

It is true that (A) in-band backups can be bandwidth intensive and that (B) well-written applications can saturate 100G server links. However:

Worth Exploring: Infrahub by Opsmill

A year or two after Damien Garros told me that he “moved to France and is working on something new,” we can admire the results: Infrahub, a version-control-based system that includes a data store and a repository of all the source code you use in your network automation environment. Or, straight from the GitHub repository:

A central hub to manage the data, templates and playbooks that powers your infrastructure by combining the version control and branch management capabilities of Git with the flexible data model and UI of a graph database.

I’ve seen an early demo, and it looks highly promising and absolutely worth exploring. Have fun ;)

Fun fact: the OpsMill team includes two guest speakers in the ipSpace.net automation course and a netlab contributor.
