
Author Archives: Ivan Pepelnjak
Here’s another question from the excellent list posted by Daniel Dib on Twitter:
The BGP split-horizon rule says: “Don’t advertise IBGP-learned routes to another IBGP peer.” Its purpose is to avoid loops, because it’s assumed that all IBGP peers are connected in a full mesh. What is the reason the BGP protocol designers made this assumption?
Time for another history lesson. BGP was designed in the late 1980s (RFC 1105 was published in 1989) as a replacement for the original Exterior Gateway Protocol (EGP). In those days, the original hub-and-spoke Internet topology with the NSFNET core was gradually being replaced with a mesh of interconnections, and EGP couldn’t cope with that.
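If you want to see the full-mesh requirement in action, here’s a minimal netlab topology sketch (hypothetical node names): netlab’s bgp module builds a full mesh of IBGP sessions between all routers in the same AS (unless you introduce route reflectors), which is exactly the connectivity the split-horizon rule assumes.

```yaml
# A minimal sketch: three routers in AS 65000. netlab's bgp module
# creates a full mesh of IBGP sessions (r1-r2, r2-r3, r1-r3) over
# the loopback interfaces; OSPF provides the underlying reachability.
defaults:
  device: frr

module: [ ospf, bgp ]
bgp.as: 65000

nodes: [ r1, r2, r3 ]
links: [ r1-r2, r2-r3, r1-r3 ]
```

Remove any one of those sessions, and the split-horizon rule guarantees some IBGP-learned prefixes never reach the third router – the exact problem route reflectors were later invented to solve.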
Whenever I compare MPLS-based Segment Routing (SR-MPLS) with its distant IPv6-based cousin (SRv6), someone invariably mentions the specter of large label stacks that some hardware cannot handle, for example:
Do you think the vendors’ currently supported maximum label stack depth might be an issue when trying to source-route a packet using Adj-SIDs across relatively large (and meshed) cores? Many seem to propose using SRv6 to overcome this.
I’d dare to guess that more hardware supports MPLS with decent label stacks than SRv6, and if I’ve learned anything from my chats with Laurent Vanbever, it’s that it sometimes takes surprisingly little to push the traffic in the right direction. You do need a controller that can figure out what that little push is and where to apply it, though.
In early June 2022 I described a netlab topology using VLAN trunks. That topology provided pure bridging service for two IP subnets. Now let’s go a step further and add a router-on-a-stick:
Lab topology
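Here’s a minimal sketch of that topology (hypothetical node names; exact attribute names and merge behavior per the netlab VLAN module documentation): the switch keeps bridging the two VLANs, while the router terminates the same VLAN trunk with routed subinterfaces by setting the VLAN forwarding mode to route.

```yaml
# A minimal sketch: h1 and h2 sit in different VLANs, s1 only
# bridges them, and the "ros" router routes between them across
# a VLAN trunk (classic router-on-a-stick).
defaults:
  device: eos

vlans:
  red:
    mode: bridge          # pure bridging on the switch
  blue:
    mode: bridge

nodes:
  h1:
    device: linux
  h2:
    device: linux
  s1:
    module: [ vlan ]
  ros:
    module: [ vlan ]
    vlans:                # node-level override (assumed): the router
      red:                # gets routed subinterfaces for both VLANs
        mode: route
      blue:
        mode: route

links:
- h1:
  s1:
    vlan.access: red
- h2:
  s1:
    vlan.access: blue
- s1:
  ros:
  vlan.trunk: [ red, blue ]
```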
Just FYI: I pushed out netlab release 1.3.3 yesterday. It’s a pure bug-fix release; new functionality and a few breaking changes are coming in release 1.4 in a few weeks.
Some of the bugs we fixed weren’t exactly pleasant; if you’re using release 1.3.2, you might want to upgrade with pip3 install --upgrade networklab.
In this week’s update of the Data Center Infrastructure for Networking Engineers webinar, we talked about VLANs, VRFs, and modern data center fabrics.
Those videos are available with a Standard or Expert ipSpace.net Subscription; if you’re still sitting on the fence, you might want to watch the How Networks Really Work version of the same topic that’s available with the Free Subscription – it describes the principles of operation of bridging fabrics that don’t use STP (TRILL, SPBM, VXLAN, EVPN).
In the EVPN/MPLS Bridging Forwarding Model blog post, I mentioned the numerous services defined in RFC 7432. That blog post focused on the VLAN-Based Service Interface that mirrors the Carrier Ethernet VLAN mode.
RFC 7432 defines two other VLAN services that can be used to implement Carrier Ethernet services:
- VLAN Bundle Service Interface, where multiple VLANs share a single bridging table (and the associated MPLS pseudowires)
- Port-Based Service Interface, a special case of the VLAN bundle service where all VLANs on a port belong to the same bundle
And then there’s the VLAN-Aware Bundle Service, where a bunch of VLANs share the same MPLS pseudowires while having separate bridging tables.
Daniel Dib posted a number of excellent questions on Twitter, including:
While forwarding a received Type-5 LSA to other areas, why does the ABR not change the Advertising Router ID to its own IP address? If the ABR were able to change the Advertising Router ID in the Type-5 LSA, there would be no need for Type-4 LSAs, which would mean less OSPF overhead on the network.
TL&DR: The current implementation of external routes in OSPF minimizes topology database size (memory utilization).
Before going into the details, try to imagine the environment in which OSPF was designed, and the problems it was solving.
A few weeks ago I described how Cumulus Linux tried to put lipstick on a pig (er, reduce the Linux data plane configuration pains) with the Network Command Line Utility. NCLU is a thin shim that takes CLI arguments, translates them into FRR or ifupdown configuration syntax, and updates the configuration files (similar to what Ansible is doing with its something_config modules).
Obviously that wasn’t good enough. Cumulus Linux 4.4 introduced NVIDIA User Experience (NVUE) – a full-blown configuration engine with its own data model and REST API.
The star of netlab release 1.3.2 is Mikrotik RouterOS version 7 support. Stefano Sasso did a fantastic job adding support for VLANs, VRFs, OSPFv2, OSPFv3, BGP, MPLS, and MPLS/VPN, plus the libvirt box-building recipe.
Jeroen van Bemmel contributed another major PR adding VLANs, VRFs, VXLAN, EVPN, and OSPFv3 support to Nokia SR OS.
Other platform improvements include:
Ian Nightingale published an interesting story of connectivity problems he had in a VXLAN-based campus network. TL&DR: it’s always the MTU (unless it’s DNS or BGP).
The really fun part: VXLAN encapsulation adds roughly 50 bytes of headers (outer Ethernet + IP + UDP + VXLAN) to every bridged frame, and even though large L2 segments might have magical properties (according to vendor fluff), there’s no host-to-network communication in transparent bridging, so there’s absolutely no way for the ingress VTEP to tell the host that a packet is too big. In a layer-3 network, you have at least a fighting chance…
For more details, watch the Switching, Routing and Bridging part of the How Networks Really Work webinar (most of it is available with the Free Subscription).