Henk Smit conscientiously pointed out a major omission I made when summarizing Peter Paluch’s excellent description of how bits get parsed in network headers:
EtherType? What do you mean EtherType? There are/were 4 types of Ethernet encapsulation. Only one of them (ARPA encapsulation) has an EtherType. The other 3 encapsulations do not have an EtherType field.
What is he talking about? Time for another history lesson.
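The short version: a receiver tells these encapsulations apart by looking at the two bytes right after the MAC addresses. Here’s a quick Python sketch of that dispatch logic (my illustration, not code from the original article):

    # Classify the four classic Ethernet encapsulations (Cisco names)
    # based on the 16-bit field following the source MAC address.
    def classify_ethernet_frame(frame: bytes) -> str:
        type_or_length = int.from_bytes(frame[12:14], "big")
        if type_or_length >= 0x0600:        # 1536 and up: it is an EtherType
            return "ARPA (Ethernet II, the only one with an EtherType)"
        # Below 1536 it is an 802.3 length field; inspect the next two bytes
        if frame[14:16] == b"\xff\xff":     # raw IPX checksum, no LLC header
            return "Novell-ether (raw 802.3)"
        if frame[14:16] == b"\xaa\xaa":     # LLC DSAP/SSAP 0xAA: SNAP follows
            return "SNAP (802.3 + 802.2 LLC + SNAP)"
        return "SAP (802.3 + 802.2 LLC)"

A value of 0x0600 (1536) or higher cannot be a valid 802.3 payload length, which is what makes the dispatch unambiguous.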
Some of the blog comments never cease to amaze me. Here’s one questioning the value of network automation:
I think there is a more fundamental reason than the (in my opinion simplistic) lack-of-skills argument. As someone mentioned on Twitter:
“Rules make it harder to enact change. Automation is essentially a set of rules.”
We underestimated the fact that infrastructure is a value differentiator for many and that customization and rapid change don’t go hand in hand with automation.
Whenever someone starts using MBA-speak like value differentiator in a technical argument, I get an acute allergic reaction, but maybe he’s right.
I started one of my VXLAN tests with a simple setup – two switches connecting two hosts over a VXLAN-enabled (gray tunnel) red VLAN. The switches are connected with a single blue link.
I configured VLANs and VXLANs, and started OSPF on S1 and S2 to get connectivity between their loopback interfaces. Here’s the configuration of one of the Arista cEOS switches:
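The excerpt stops short of the actual configuration, so here’s a minimal sketch of what it could look like (interface names, VLAN/VNI numbers, and addresses are illustrative assumptions, not the original values):

    ! Hypothetical sketch: red VLAN 100 mapped to VNI 10100, VTEP on Loopback0
    vlan 100
       name red
    !
    interface Ethernet1
       description host-facing port in the red VLAN
       switchport access vlan 100
    !
    interface Ethernet2
       description blue inter-switch link
       no switchport
       ip address 10.1.0.1/30
       ip ospf area 0.0.0.0
    !
    interface Loopback0
       ip address 10.0.0.1/32
       ip ospf area 0.0.0.0
    !
    interface Vxlan1
       vxlan source-interface Loopback0
       vxlan udp-port 4789
       vxlan vlan 100 vni 10100
       ! no EVPN yet: flood BUM traffic to the other VTEP with head-end replication
       vxlan flood vtep 10.0.0.2
    !
    router ospf 1
       router-id 10.0.0.1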
I promised you a blog post explaining the intricacies of implementing MLAG with EVPN, but (as is often the case) it’s taking longer than expected. In the meantime, enjoy the EVPN Multihoming Taxonomy and Overview video from Lukas Krattiger’s EVPN Multihoming versus MLAG presentation (part of the EVPN Deep Dive webinar).
I’m always in a bit of a bind when I get an invitation to speak at a security conference (after all, I know just enough about security to make a fool of myself), but when the organizers of the DEEP Conference invited me to talk about Internet routing security I simply couldn’t resist – the topic is near and dear to my heart, and I had been planning to do a related webinar for a very long time.
Even better, that conference would have been my first on-site presentation since the COVID-19 craze started, and I love going to Dalmatia (where the conference is taking place). Alas, it was not meant to be – I came down with a high fever just days before the conference and had to cancel the talk.
Here’s another question from the excellent list posted by Daniel Dib on Twitter:
The BGP split horizon rule says “Don’t advertise IBGP-learned routes to another IBGP peer.” The purpose is to avoid loops, because it’s assumed that all IBGP peers will be connected in a full mesh. What is the reason the BGP protocol designers made this assumption?
Time for another history lesson. BGP was designed in the late 1980s (RFC 1105 was published in 1989) as a replacement for the original Exterior Gateway Protocol (EGP). In those days, the original hub-and-spoke Internet topology built around the NSFNET core was gradually being replaced with a mesh of interconnections, and EGP couldn’t cope with that.
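To visualize the rule from the question: in a full mesh, every IBGP speaker has a session with every other IBGP speaker in the AS, so nobody has to propagate IBGP-learned routes any further. A minimal sketch of one router in a three-router AS, in Cisco-style/FRRouting syntax (AS number and addresses are made up):

    router bgp 65000
     ! full mesh: one IBGP session to each of the two other routers
     neighbor 10.0.0.2 remote-as 65000
     neighbor 10.0.0.3 remote-as 65000

Routes received from 10.0.0.2 are never re-advertised to 10.0.0.3 (that’s the split horizon rule); the full mesh guarantees 10.0.0.3 heard them directly from the source.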
Whenever I compare MPLS-based Segment Routing (SR-MPLS) with its distant IPv6-based cousin (SRv6), someone invariably mentions the specter of large label stacks that some hardware cannot handle, for example:
Do you think the vendors’ currently supported maximum label stack might be an issue when trying to route a packet from the source using Adj-SIDs on relatively big (and meshed) cores? Many seem to be proposing to use SRv6 to overcome this.
I’d dare to guess that more hardware supports MPLS with decent label stacks than SRv6, and if I’ve learned anything from my chats with Laurent Vanbever, it’s that it sometimes takes surprisingly little to push the traffic in the right direction. You do need a controller that can figure out what that little push is and where to apply it, though.
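To make the label stack part concrete: source routing with SR-MPLS just means the ingress node pushes a stack of labels onto the packet. A sketch of what that looks like on a Linux ingress node (labels, prefix, next hop, and interface name are made up for illustration):

    # load the MPLS encapsulation module, then push a two-label stack
    # (think node SID + Adj-SID) on traffic towards 192.0.2.0/24
    modprobe mpls_iptunnel
    ip route add 192.0.2.0/24 encap mpls 16002/24025 via 10.1.1.2 dev eth0

The hardware concern in the question is about how many such labels a chipset can impose at ingress, not about forwarding an already-labeled packet in the core.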
In early June 2022, I described a netlab topology using VLAN trunks. That topology provided a pure bridging service for two IP subnets. Now let’s go a step further and add a router-on-a-stick:
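Here’s a sketch of what that topology file could look like (not the original; device types, node names, and VLAN names are illustrative):

    # Hypothetical netlab topology: s1 bridges two VLANs, ros does the
    # inter-VLAN routing over a VLAN trunk (router-on-a-stick)
    defaults.device: eos

    vlans:
      red:
        links: [ s1-h1 ]
      blue:
        links: [ s1-h2 ]

    nodes:
      s1:                   # bridging-only switch
        module: [ vlan ]
      ros:                  # router-on-a-stick
        module: [ vlan ]
        vlan.mode: route
      h1:
        device: linux
      h2:
        device: linux

    links:
    - s1:
      ros:
      vlan.trunk: [ red, blue ]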
Just FYI: I pushed out netlab release 1.3.3 yesterday. It’s a pure bug-fix release; new functionality and a few breaking changes are coming in release 1.4 in a few weeks.
Some of the bugs we fixed weren’t exactly pleasant; if you’re using release 1.3.2 you might want to upgrade with pip3 install --upgrade networklab.
In this week’s update of the Data Center Infrastructure for Networking Engineers webinar, we talked about VLANs, VRFs, and modern data center fabrics.
Those videos are available with Standard or Expert ipSpace.net Subscription; if you’re still sitting on the fence, you might want to watch the How Networks Really Work version of the same topic that’s available with Free Subscription. It describes the principles of operation of bridging fabrics that don’t use STP (TRILL, SPBM, VXLAN, EVPN).