
Author Archives: Ivan Pepelnjak
I had no idea how convoluted VLANs could get until I tried to implement them in netlab. We’re not done yet – we have access VLANs, VLAN trunks (including native VLAN support), and VLAN (SVI) interfaces, but we’re still missing routed VLAN subinterfaces – but we have enough functionality to show you a few VLAN examples.
We’ll start with the simplest option: a single VLAN stretched across two switches with two Linux hosts connected to it. netlab can configure VLANs on Arista EOS, Cisco IOSv, Cisco Nexus OS, VyOS, Dell OS10, and Nokia SR Linux. We’ll use the quickest (deployment-wise) option: Arista EOS on containerlab.
Simple VLAN topology
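Want to see what such a topology file could look like? Here’s a minimal sketch (node and VLAN names are made up, and it’s not necessarily identical to the example topology used in the blog post):

```yaml
# Sketch of a netlab topology: VLAN "red" stretched across two Arista EOS
# switches (containers under containerlab) with one Linux host per switch.
# Node and VLAN names are made up for illustration.
provider: clab
defaults.device: eos

vlans:
  red:
    id: 100                 # 802.1Q tag assigned to the red VLAN

nodes:
  s1: { module: [ vlan ] }  # switches use the VLAN configuration module
  s2: { module: [ vlan ] }
  h1: { device: linux }     # hosts are plain Linux containers
  h2: { device: linux }

links:
- s1:
  h1:
  vlan.access: red          # access port toward h1
- s2:
  h2:
  vlan.access: red          # access port toward h2
- s1:
  s2:
  vlan.trunk: [ red ]       # VLAN trunk between the two switches
```

Run netlab up and you should get a running lab with VLAN red configured on the access and trunk ports of both switches.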
Using Terraform to deploy networking elements with an SDN controller that cannot replace the current state of a tenant with the desired state specified in a text file (because nobody ever wants to do that, right) sounds like a great idea… until you try to do it at scale.
Noël Boulene hit interesting scalability limits when trying to provision VLANs on Cisco ACI with Terraform. If you’re thinking about doing something similar, you REALLY SHOULD read his article.
Are you afraid network automation will eat your job? You might have to worry if you’re a VLAN-provisioning CLI jockey, but then you’re not alone. Textile workers faced the same challenges in the 19th century, and according to an automation report from 1958, clerical workers were facing the same dilemma when the first computers were introduced.
Guess what: the unemployment rate has been going up and down in the meantime (US data), but mostly due to various crises. Automation had little impact.
Javier Antich concluded the AI/ML in Networking webinar with the ugly challenges of using AI/ML in networking. I won’t spoil the fun, you REALLY SHOULD watch the video (keeping in mind he was trying to stay polite and diplomatic).
Every network engineer should be familiar with the DNS basics – after all, all network failures are caused by DNS… unless it’s BGP.
The May 2022 ISP Column by Geoff Huston is an excellent place to brush up on your DNS basics and learn about new ideas, including a clever one to push DNS entries that will be needed in the future to a web client through a DNS-over-HTTPS session.
I migrated my blog to Hugo two years ago, and never regretted the decision. At the same time I implemented version control with Git, and started using GitHub (and GitLab for a convoluted set of reasons) to host the blog repository.
After hesitating for way too long, I decided to go one step further and made the blog repository public. The next time a blatant error of mine annoys you, fork the repository, fix my blunder(s), and submit a pull request (or write a comment and I’ll fix stuff like I did in the past).
I usually tell networking engineers who are seriously considering whether to automate their networks to clean up their design and simplify the network services first.
The only reasonable way forward is to simplify your processes – get rid of all corner cases, all special deals that are probably costing you more than you earned on them, all one-off kludges to support badly-designed applications – and once you get that done, you might realize you don’t need a magic platform anymore, because you can run your simpler network using traditional tools.
While seasoned automation practitioners agree with me, a lot of enterprise engineers face a different reality. Straight from a source that wished to remain anonymous…
I stumbled upon a blog post by Diptanshu Singh discussing whether IS-IS flooding in a highly meshed fabric is as much of a problem as some people would like to make it. I won’t spoil the fun, read his blog post ;)
The really interesting part (for me) was the topology he built with netlab and containerlab: seven leaf-and-spine fabrics connected with WAN links and superspines for a total of 68 instances of Arista cEOS. I hope he automated building the topology file (I’m a bit sorry we haven’t implemented composite topologies yet); after that all he had to do was execute netlab up to get a fully-configured lab running IS-IS.
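For readers who haven’t seen a netlab topology file yet, here’s a rough sketch of a single tiny leaf-and-spine pod running IS-IS (node names are made up, and the topology from the blog post was obviously much bigger than this):

```yaml
# Minimal single-pod sketch (not the 68-node topology from the blog post):
# two leaves and two spines, all Arista cEOS containers, all running IS-IS.
provider: clab
defaults.device: eos
module: [ isis ]            # enable IS-IS on every node in the topology

nodes: [ l1, l2, s1, s2 ]

links: [ l1-s1, l1-s2, l2-s1, l2-s2 ]   # full mesh of leaf-to-spine links
```

netlab up takes that description, builds the containerlab topology, starts the containers, and configures IS-IS on all of them.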
Stuart Charlton did his best to explain the concept of pods in the Kubernetes Networking Deep Dive webinar, but we were still a bit confused. Next step: let’s talk about a typical inter-pod traffic scenario.
Continuing the “how real is the decade-old SDN hype” thread, let’s try to figure out if anyone still uses OpenFlow. OpenFlow was declared dead by the troubadour of the SDN movement in 2016, so it looks like the question is moot. However, nothing ever dies in networking (including hop-by-hop IPv6 extension headers), so here we go.
Ignoring for the moment the embarrassing “we solved global load balancing with per-flow forwarding” academic blunders, OpenFlow wasn’t the worst tool for programming forwarding exceptions (ACL/PBR) into TCAM.