
Author Archives: Ivan Pepelnjak
Using the typical default router configurations, it can take minutes between a failure of an inter-AS link and the convergence of BGP routes. You can fine-tune that behavior with BGP timers and BFD (and still get pwned by Graceful Restart). While you can’t influence link failures, you could drain the traffic from a link before starting maintenance operations on it, and it would be a shame not to do that considering there’s a standard way to do it – the GRACEFUL_SHUTDOWN BGP community defined in RFC 8326. That’s what you’ll practice in the next BGP lab exercise.
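In case you’re wondering what that looks like in practice, here’s a minimal FRRouting-style sketch (neighbor addresses and route-map names are invented, not part of the lab): the router being drained attaches the GRACEFUL_SHUTDOWN community (65535:0) to its outgoing updates, and the neighbor drops the local preference of any path carrying it.

```
! Draining router: tag everything advertised to the neighbor
route-map GSHUT-OUT permit 10
 set community 65535:0 additive
!
router bgp 65001
 address-family ipv4 unicast
  neighbor 192.0.2.1 route-map GSHUT-OUT out

! Neighbor: deprefer paths carrying GRACEFUL_SHUTDOWN
bgp community-list standard GSHUT seq 5 permit 65535:0
!
route-map GSHUT-IN permit 10
 match community GSHUT
 set local-preference 0
route-map GSHUT-IN permit 20
!
router bgp 65000
 address-family ipv4 unicast
  neighbor 192.0.2.2 route-map GSHUT-IN in
```

Once the drained path loses the best-path selection everywhere, you can take the link down without losing a single packet.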
Tom Limoncelli wrote another must-read masterpiece: sometimes you’ll save time if you make two trips instead of one.
The same lesson applies to network design: cramming too many features into a single device will inevitably result in complex, hard-to-understand configurations and weird bugs. Sometimes, it’s cheaper to split the required functionality across multiple devices.
The recent IBGP Full Mesh Between EVPN Leaf Switches blog post generated an interesting discussion on LinkedIn focused on whether we need route reflectors (in small fabrics) and whether they do more harm than good. Here are some of the highlights of that discussion, together with a running commentary.
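For context, the alternative to the full mesh is a route reflector on the spines, in which case the leaves peer only with the spines instead of with each other. A hedged FRR-style sketch (addresses and peer-group name are made up):

```
! Spine switch acting as an EVPN route reflector
router bgp 65000
 neighbor LEAVES peer-group
 neighbor LEAVES remote-as 65000
 neighbor 10.0.0.11 peer-group LEAVES
 neighbor 10.0.0.12 peer-group LEAVES
 address-family l2vpn evpn
  neighbor LEAVES activate
  neighbor LEAVES route-reflector-client
```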
Many network automation solutions generate device configurations from a data model and deploy those configurations. Last week, we focused on “how do we know the device data model is correct?” This time, we’ll take a step further and ask ourselves, “how do we know the device configurations work as expected?”
There are four (increasingly complex) questions our tests should answer:
AWS started charging for public IPv4 addresses a few months ago, supposedly to encourage users to move to IPv6. As it turns out, you need public IPv4 addresses (or a private link) to access many AWS services, clearly demonstrating that it’s just another way of fleecing the sheep – a Hotel California tax. I’m so glad I moved my videos to Cloudflare ;)
For more details, read AWS: Egress Traffic and Using AWS Services via IPv6 (rendered in beautiful, easy-to-read teletype font).
In the previous blog post in the EVPN Designs series, we explored the simplest possible VXLAN-based fabric design: static ingress replication without any L2VPN control plane. This time, we’ll add the simplest possible EVPN control plane: a full mesh of IBGP sessions between the leaf switches.
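On a leaf switch running FRR, the full mesh boils down to configuring every other leaf as an IBGP neighbor and activating the EVPN address family. A minimal sketch from one leaf in a three-leaf fabric (loopback addresses are invented):

```
router bgp 65000
 neighbor 10.0.0.2 remote-as 65000
 neighbor 10.0.0.2 update-source lo
 neighbor 10.0.0.3 remote-as 65000
 neighbor 10.0.0.3 update-source lo
 address-family l2vpn evpn
  neighbor 10.0.0.2 activate
  neighbor 10.0.0.3 activate
  advertise-all-vni
```

It’s easy to see why this approach works well in small fabrics and stops scaling as the number of leaves grows – every new leaf has to be added to the configuration of every existing leaf.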
In the previous two blog posts (Dealing with LAG Member Failures, LAG Member Failures in VXLAN Fabrics) we discovered that it’s almost trivial to deal with a LAG member failure in an MLAG cluster if we have a peer link between MLAG members. What about the holy grail of EVPN pundits: ESI-based MLAG with no peer link between MLAG members?
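As a reference point before we dive in: in EVPN multihoming, a shared Ethernet Segment replaces the MLAG peer link. In FRR-style syntax (a hedged sketch; the ID and MAC values are made up), it’s little more than giving the member bond the same ES identity on both switches:

```
! Same es-id and es-sys-mac on both switches of the pair turns
! the two bonds into a single Ethernet Segment -- no peer link
interface bond1
 evpn mh es-id 1
 evpn mh es-sys-mac 44:38:39:ff:00:01
```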
Let’s open another juicy can of BGP worms: load balancing. In the first lab exercise, you’ll configure equal-cost load balancing across EBGP paths and tweak the “What is equal cost?” algorithm to consider just the AS path length, not the contents of the AS path.
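The knob at the heart of the exercise: BGP considers paths equal-cost only when (among other attributes) their AS paths are identical. In FRR, for example, you’d relax that check to compare AS-path length only and enable multipath with something along these lines (a sketch, not the lab solution):

```
router bgp 65001
 bgp bestpath as-path multipath-relax
 address-family ipv4 unicast
  maximum-paths 4
```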
Every sufficiently complex network automation solution has to introduce a high-level (user-manageable) data model that is eventually transformed into a low-level (device) data model.
High-level overview of the process
The transformation code (business logic) is one of the most complex pieces of a network automation solution, and there’s only one way to ensure it works properly: you test the heck out of it ;) Let me show you how we solved that challenge in netlab.
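The gist of the approach is golden-file testing: keep a directory of input topologies together with the expected transformation results, and fail loudly on any mismatch. A minimal Python sketch (the transform() stub and the file layout are illustrative, not the actual netlab code):

```python
import pathlib
import yaml

def transform(topology: dict) -> dict:
    # Stand-in for the real business logic under test
    topology.setdefault("defaults", {})
    return topology

def check_case(case_dir: pathlib.Path) -> bool:
    # Compare transformation output with the saved "golden" result
    topology = yaml.safe_load((case_dir / "input.yml").read_text())
    expected = yaml.safe_load((case_dir / "expected.yml").read_text())
    return transform(topology) == expected

if __name__ == "__main__":
    for case in sorted(p for p in pathlib.Path("tests").iterdir() if p.is_dir()):
        print("PASS" if check_case(case) else "FAIL", case.name)
```

The beauty of this setup: every reported (and fixed) transformation bug becomes another test case, so it can never sneak back in.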
All the Kubernetes Service Mesh videos from the Kubernetes Networking Deep Dive webinar with Stuart Charlton are now public. Enjoy!
Daniel Dib found the ancient OSPF Protocol Analysis (RFC 1245) that includes the Router CPU section. Please keep in mind the RFC was published in 1991 (more than 30 years ago):
Steve Deering presented results for the Dijkstra calculation in the “MOSPF meeting report” in [3]. Steve’s calculation was done on a DEC 5000 (10 mips processor), using the Stanford internet as a model. His graphs are based on numbers of networks, not number of routers. However, if we extrapolate that the ratio of routers to networks remains the same, the time to run Dijkstra for 200 routers in Steve’s implementation was around 15 milliseconds.