Dmitry Perets wrote an excellent description of how typical firewall cluster solutions implement control-plane high availability, in particular how they use the routing protocol Graceful Restart feature (slightly edited):
Most of the HA clustering solutions for stateful firewalls that I know implement a single-brain model, where the entire cluster is seen by the outside network as a single node. The node that is currently primary runs the control plane (hence, I call it single-brain). Sessions and the forwarding plane are synchronized between the nodes.
A long-time friend sent me this question:
I would like your advice or a reference to a security framework I must consider when building a greenfield backbone in SR/MPLS.
Before going into the details, keep in mind that the core SR/MPLS functionality is not much different from traditional MPLS:
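If you want to poke at the SR/MPLS data plane yourself before worrying about the security angle, a minimal netlab topology along these lines should be enough to get started. This is just a sketch: the node names are mine, and the device choice is an assumption, so check the netlab platform support tables for the sr module before using it.

```yaml
# Minimal SR/MPLS core sketch. Assumption: the chosen device supports the
# netlab 'sr' module, which runs SR-MPLS on top of IS-IS.
defaults.device: eos        # assumption – replace with your platform of choice
module: [ isis, sr ]        # IS-IS as the IGP, SR-MPLS labels on top of it

nodes: [ pe1, p1, p2, pe2 ]
links: [ pe1-p1, p1-p2, p2-pe2 ]
```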
netlab release 1.8.1 added a few interesting features, including:
This time, most of the work was done behind the scenes.
Another cybersecurity rant worth reading: cybersecurity is broken due to lack of consequences.
Bonus point: a pointer to RFC 602, written in December 1973.
Dan left an interesting comment on one of my previous blog posts:
It strikes me that the entire industry lost out when we didn’t do SPB or TRILL. Specifically, I like how Avaya did SPB.
Oh, we did TRILL. Three vendors did it in different proprietary ways, but I’m digressing.
Here’s another challenge for BGP aficionados: build an MPLS-based transit network without BGP running on core routers.
That should be an easy task if you’ve configured MPLS in the past, so try to spice it up a bit:
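If you want a skeleton to start from, a rough netlab topology for the challenge could look like this. The device type, node names, and AS numbers are my assumptions, not part of the challenge: the PE routers run OSPF, LDP (enabled by the mpls module), and BGP, while the core (P) routers run only OSPF and LDP.

```yaml
# Sketch of the challenge setup: BGP only on the edge (PE) routers,
# plain IGP + LDP in the core. Device type and AS numbers are assumptions.
defaults.device: frr

nodes:
  ce1: { module: [ bgp ], bgp.as: 65100 }
  pe1: { module: [ ospf, mpls, bgp ], bgp.as: 65000 }
  p1:  { module: [ ospf, mpls ] }          # core router: no BGP
  p2:  { module: [ ospf, mpls ] }          # core router: no BGP
  pe2: { module: [ ospf, mpls, bgp ], bgp.as: 65000 }
  ce2: { module: [ bgp ], bgp.as: 65101 }

links: [ ce1-pe1, pe1-p1, p1-p2, p2-pe2, pe2-ce2 ]
```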
In this series of blog posts, we’ll explore numerous routing protocol designs that can be used to implement EVPN-with-VXLAN L2VPNs in a leaf-and-spine data center fabric. Every design will come with a companion netlab topology you can use to create a lab and explore the behavior of leaf- and spine switches.
Our leaf-and-spine fabric will have four leaves and two spines (but feel free to adjust the fabric parameters in the lab topology to build larger fabrics). The fabric will provide layer-2 connectivity to orange and blue VLANs. Two hosts will be connected to each VLAN to check end-to-end connectivity.
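As a concrete starting point, a lab topology along these lines builds the fabric described above. Treat it as a sketch: the device type, node names, host placement, and the choice of OSPF plus an IBGP full mesh between the leaves are my assumptions; the individual posts in the series vary the routing protocol design.

```yaml
# Rough sketch of the lab fabric: four leaves, two spines, two layer-2-only
# VLANs, one host per leaf. Device type, names, and the OSPF + IBGP-between-
# leaves routing design are assumptions made for this sketch.
defaults.device: eos
bgp.as: 65000                  # single AS – netlab builds an IBGP full mesh

nodes: [ l1, l2, l3, l4, s1, s2, h1, h2, h3, h4 ]

groups:
  leaves:
    members: [ l1, l2, l3, l4 ]
    module: [ ospf, bgp, vlan, vxlan, evpn ]
  spines:
    members: [ s1, s2 ]
    module: [ ospf ]           # spines only forward IP traffic in this design
  hosts:
    members: [ h1, h2, h3, h4 ]
    device: linux

vlans:
  orange:
    mode: bridge               # pure layer-2 service, no routing in the fabric
  blue:
    mode: bridge

links:
- l1-s1
- l1-s2
- l2-s1
- l2-s2
- l3-s1
- l3-s2
- l4-s1
- l4-s2
- h1:
  l1:
    vlan.access: orange
- h2:
  l2:
    vlan.access: orange
- h3:
  l3:
    vlan.access: blue
- h4:
  l4:
    vlan.access: blue
```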
An RSS hiccup brought an old blog post from Urs Baumann into my RSS reader. I’m always telling networking engineers that it’s essential to set up realistic WAN environments when testing distributed software, and wemulate (a nice tc front-end) seemed like a perfect match. Even better, it runs in a container – an ideal component for a netlab-generated virtual WAN network.
wemulate acts as a bump in the wire; it uses Linux bridges to connect two container interfaces. We’ll use it to introduce jitter into an IP subnet:
┌──┐   ┌────────┐   ┌──┐
│h1├───┤wemulate├───┤h2│
└──┘   └────────┘   └──┘
◄──────────────────────►
     192.168.33.0/24
Daryll Swer left a long comment describing how he designed a Service Provider network running in numerous private autonomous systems. While I might not agree with everything he wrote, it’s an interesting idea and conceptually pretty similar to what we did 25 years ago (IBGP without IGP, running across physical interfaces, with every router being a route-reflector client of every other router), or how some very large networks were using BGP confederations.
Just remember (as someone from Cisco TAC told me in those days) that “you might be the only one in the world doing it and might hit bugs no one has seen before.”
In 2020, Lukas Krattiger ran a short webinar describing how to insert L4-L7 services into EVPN fabrics. The videos are now public, and you can view them without an ipSpace.net account.
If you’re an Internet Service Provider running BGP with your customers, you might not want to send them the whole Internet routing table. Sending the regional prefixes and the default route is usually good enough.
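In netlab terms, the default-route half of that setup could look something like the sketch below. The bgp.session plugin attribute, the node names, and the AS numbers are my assumptions, so verify them against the netlab documentation; you would still need an outbound routing policy to limit the advertised prefixes to the regional ones.

```yaml
# Hypothetical sketch: ISP router advertising just a default route to a
# customer over EBGP. The bgp.session plugin attribute used below is an
# assumption – check the netlab docs before copying it.
plugin: [ bgp.session ]
defaults.device: frr
module: [ bgp ]

nodes:
  isp:
    bgp.as: 65000
  customer:
    bgp.as: 65100

links:
- isp:
    bgp.default_originate: true   # assumed attribute: send 0.0.0.0/0 to the EBGP neighbor on this link
  customer:
```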