Category Archives for "ipSpace.net"

Dynamic MAC Learning: Hardware or CPU Activity?

An ipSpace.net subscriber sent me a question along the lines of “does it matter that EVPN uses BGP to implement dynamic MAC learning whereas in traditional switching that’s done in hardware?” Before going into those details, I wanted to establish the baseline: is dynamic MAC learning really implemented in hardware?

Hardware-based switching solutions usually use a hash table to implement MAC address lookups. The question should thus be rephrased as: is it possible to update the MAC hash table in hardware without punting the packet to the CPU? One would expect high-end (expensive) hardware to be able to do it, while low-cost hardware would depend on the CPU. It turns out reality is way more complex than that.
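
To make the mechanics concrete, here's a minimal software sketch of what the switching silicon does conceptually; it's an illustration only (names and the aging timer are arbitrary), not how any particular ASIC implements it:

    # Conceptual sketch of dynamic MAC learning -- a software illustration
    # of what switching hardware implements as a hash table in dedicated
    # memory. All names and the aging timer are illustrative.
    import time
    from dataclasses import dataclass

    AGING_TIME = 300  # seconds; a typical default MAC aging timer

    @dataclass
    class MacEntry:
        port: str
        last_seen: float

    mac_table: dict[str, MacEntry] = {}

    def handle_frame(src_mac: str, dst_mac: str, in_port: str) -> str:
        # Learning: insert/refresh the source-MAC-to-port mapping. The
        # question above is whether this update can happen in the
        # forwarding hardware or requires punting the frame to the CPU.
        mac_table[src_mac] = MacEntry(in_port, time.monotonic())

        # Forwarding: hash-table lookup on the destination MAC.
        entry = mac_table.get(dst_mac)
        if entry and time.monotonic() - entry.last_seen < AGING_TIME:
            return entry.port  # known unicast: send to a single port
        return "flood"         # unknown unicast: flood within the VLAN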

netlab: Change Stub Networks into Loopbacks

One of the least-documented limitations of virtual networking labs is the number of network interfaces a virtual machine can have. vSphere supports up to 10 interfaces per VM, the default setting for vagrant-libvirt is eight, and I couldn't find the exact numbers for KVM. Many vendors claim their KVM limit is around 25; I was able to bring up a Nexus 9300v device with 40 adapters.

Anyway, a dozen interfaces should be good enough if you’re building a proof-of-concept fabric, but it might get a bit tight if you want to emulate plenty of edge subnets.
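
If you want to check whether a topology will fit before starting it, counting interfaces is trivial; here's a hypothetical pre-flight check (the topology format and the limit are made up for illustration):

    # Hypothetical pre-flight check: count the NICs every lab node needs
    # and flag nodes exceeding a per-VM limit. Topology format is made up.
    from collections import Counter

    NIC_LIMIT = 8  # vagrant-libvirt default mentioned above

    links = [                        # each tuple lists nodes on one link;
        ("s1", "s2"), ("s1", "s3"),  # single-node tuples are stub networks
        ("s2", "s3"),
        ("s1",), ("s1",), ("s2",),
    ]

    nics = Counter(node for link in links for node in link)
    for node, count in sorted(nics.items()):
        over = "  <-- over the limit" if count > NIC_LIMIT else ""
        print(f"{node}: {count} interfaces{over}")

Turning the stub networks into loopbacks removes them from that count entirely, which is exactly what the netlab feature described in this post does.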

Video: Getting Started with netlab

After explaining how netlab fits into the virtual lab orchestration picture and what exactly it can do, let's focus on the easiest way to get started.

The next video in the Using netlab to Build Networking Labs series describes:

You need a Free ipSpace.net Subscription to watch the video and a Standard ipSpace.net Subscription to watch the rest of the webinar.

History of IP TTL in EBGP Sessions

Chris Parker wrote a wonderful blog post going deep into the weeds on how EBGP sessions use IP TTL and why we need multihop EBGP sessions between adjacent devices. However, he couldn’t find a source explaining why early BGP implementations decided to use IP TTL set to one on EBGP sessions:

If there’s a source on the internet that explains when it was decided that EBGP should use a TTL of 1, I can’t find it. I can’t even find it in any RFC. I looked in the RFC for BGP v4, and went all the way back to BGP v1. None of these documents contain the text “TTL”, “time to live”, or “time-to-live”. It’s not even in the RFC for EGP, back in 1984.
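
Whatever the historical reason, the mechanism itself is a single socket option. Here's a minimal sketch (standard Linux sockets API; the peer address is illustrative) of what a BGP daemon does when opening an EBGP session to a directly connected neighbor:

    # Minimal sketch: open a TCP session to a BGP neighbor (well-known
    # port 179) with IP TTL set to 1, the way classic EBGP implementations
    # handle directly connected peers. Peer address is illustrative.
    import socket

    PEER = "192.0.2.1"  # documentation prefix; use a real neighbor address

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # With TTL=1 the packets die at the first router hop, so the session
    # can only be established with a directly connected device.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 1)
    s.connect((PEER, 179))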

Feedback: Microsoft Azure Networking

Numerous networking engineers found my cloud webinars (AWS, Azure) useful when preparing for a cloud migration project. Here’s what one of them wrote:

We are beginning to migrate some of our offerings to Microsoft Azure and I need to get up to speed with Azure products. I found this webinar very informative; Ivan explained the concepts in a clear manner that was easy to follow. I would recommend watching these webinars and then reading the Microsoft documentation to get a thorough understanding.

Want to have some hands-on work sprinkled on top of that? You’ll find deployment examples in the Networking in Public Clouds GitHub repository.

Alternatives to IBGP within Multihomed Sites

Two weeks ago I explained why you might want to run IBGP between CE routers on a multihomed site. One of the blog readers didn’t like my ideas:

In such a small deployment I assume that both ISPs offer transit, so that both CEs would get a default route from their upstream.

In this case I would not iBGP the CEs together, but have HSRP running on the two CEs and track the uplink (interface and/or BGP session) to determine the active gateway.

Let’s see what could possibly go wrong with that design.

Video: Packet Buffers in Data Center ASICs

A few years ago, we were fortunate enough to have Pete Lumbis talking about ASICs for Networking Engineers as part of the Data Center Fabric Architectures webinar.

One of the topics he couldn’t possibly skip was the question of how many packet buffers one needs in a data center switch.
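
If you want a back-of-the-envelope number to keep in mind while watching, the classic rules of thumb (not from the video) are the bandwidth-delay product for a single flow and BDP divided by the square root of the number of concurrent long-lived TCP flows; the figures below are illustrative:

    # Back-of-the-envelope buffer sizing using classic rules of thumb:
    # bandwidth-delay product (BDP) for one flow, BDP/sqrt(n) for n
    # concurrent long-lived TCP flows (Appenzeller et al.). Numbers are
    # illustrative, not recommendations.
    from math import sqrt

    link_bps = 100e9   # 100 Gbps port
    rtt = 100e-6       # 100 microseconds, a plausible intra-DC RTT
    flows = 10_000     # assumed number of concurrent long-lived flows

    bdp_bytes = link_bps * rtt / 8
    print(f"BDP (single flow): {bdp_bytes / 1e6:.2f} MB")                # 1.25 MB
    print(f"BDP / sqrt(n):     {bdp_bytes / sqrt(flows) / 1e3:.1f} KB")  # 12.5 KB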

How Many Spines Should a Leaf-and-Spine Fabric Have?

One of my readers sent me a question along these lines:

How do we determine the number of spines needed in a leaf-and-spine fabric? It’s easy to calculate the number of leaf nodes from the required number of server ports, and two spines give you redundancy. Does it make sense to have more spines if two are good enough from the capacity perspective?

There are at least two factors to consider:
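
One of them is easy to quantify: the fraction of fabric capacity you lose when a single spine fails. A quick sketch of the arithmetic:

    # With N equal-bandwidth spines, one spine failure removes 1/N of the
    # fabric capacity: 50% with two spines, only 25% with four.
    for spines in (2, 4, 6, 8):
        print(f"{spines} spines: one failure costs {100 / spines:.1f}% of capacity")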

Measuring Virtual Network Device Boot Times

A senior engineer at Juniper Networks wasn’t happy with me mentioning resource hogs and Junos platforms in the same statement. Instead of engaging in never-ending angels-dancing-on-pins deliberations comparing the virtues of Junos with other network operating systems, I decided to throw a bit of real-life data into the mix – I created a simple script that measures:

  • The time it takes to execute vagrant up to start a single network device.
  • The time it takes to deploy a simple initial configuration on that device.
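
A minimal sketch of the measurement idea (the real script does quite a bit more, and the deploy command shown here is an assumption):

    # Minimal sketch of the measurement: time "vagrant up" and a follow-up
    # configuration deployment. The deploy command is an assumption --
    # substitute whatever pushes the initial configuration in your setup.
    import subprocess
    import time

    def timed(label: str, cmd: list[str]) -> None:
        start = time.monotonic()
        subprocess.run(cmd, check=True)
        print(f"{label}: {time.monotonic() - start:.1f} s")

    timed("vagrant up", ["vagrant", "up"])
    timed("initial config", ["netlab", "initial"])  # assumed deploy step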

Some Operations Are Not Worth Automating

Ish wrote an interesting comment on my Network Automation Expert Beginners blog post. He started with:

[Our network has] about 40 sites, but we don’t do total refresh cycles in bulk, just as needed. Everything we do is sporadic, and I’m trying to see the ROI on learning automation for things that are done once in a while that don’t take much time to do manually anyway.

There are two aspects to this part of his comment:
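
As for the ROI math itself, a back-of-the-envelope sketch (all numbers purely illustrative) shows why sporadic tasks rarely pay off on time savings alone:

    # Back-of-the-envelope automation ROI: time invested in automating a
    # task versus time saved over its remaining lifetime. All numbers are
    # illustrative.
    hours_to_automate = 40   # learning + writing + testing the automation
    manual_minutes = 30      # doing the task by hand
    runs_per_year = 6        # "sporadic" means a handful of runs per year
    years = 3                # expected lifetime of the process

    saved_hours = manual_minutes / 60 * runs_per_year * years
    print(f"Invested: {hours_to_automate} h, saved: {saved_hours:.0f} h")
    if saved_hours < hours_to_automate:
        print("Not worth automating on time savings alone")

Time savings aren’t automation’s only benefit, of course, but they’re the part of the ROI that’s easy to quantify.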
