
Author Archives: Anuradha Karuppiah

A video walkthrough of EVPN multihoming

You may have overheard someone talking about EVPN multihoming, but do you know what it is? And if you do, are you up to speed on the latest around it? I walk you through it all, beginning to end, in this three-part video series. Watch all three below.

Chapter 1:

EVPN multihoming provides all-active server redundancy. In this intro to EVPN multihoming you will get an overview of the feature and how it compares with EVPN-MLAG.
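As a small aside for the technically curious: the glue between multihoming peers is the 10-byte Ethernet Segment Identifier (ESI) they advertise for the shared server bond. Below is a minimal Python sketch, using made-up values, of the MAC-based type-3 ESI format defined in RFC 7432.

```python
# A minimal, hypothetical helper (not Cumulus/FRR code) that builds the
# 10-byte type-3 (MAC-based) Ethernet Segment Identifier from RFC 7432:
# 0x03, a 6-byte system MAC, then a 3-byte local discriminator.
def type3_esi(system_mac: str, local_discriminator: int) -> str:
    mac_bytes = bytes(int(octet, 16) for octet in system_mac.split(":"))
    assert len(mac_bytes) == 6 and 0 <= local_discriminator < 2**24
    esi = bytes([0x03]) + mac_bytes + local_discriminator.to_bytes(3, "big")
    return ":".join(f"{b:02x}" for b in esi)

# Both leafs attached to the same server bond derive the same ESI, which is
# what lets remote VTEPs see them as one Ethernet Segment.
print(type3_esi("44:38:39:ff:ff:01", 1))  # 03:44:38:39:ff:ff:01:00:00:01
```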


Chapter 2:

In this episode we dive into the various unicast packet flows in a network with EVPN multihoming, including new data-plane constructs such as MAC-ECMP and layer-2 nexthop-groups that were introduced expressly for EVPN-MH.
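For a rough mental model of those constructs, here is a toy Python sketch (the tables and names are invented for illustration, not actual kernel or switchd structures) showing how a remote MAC behind a multihomed Ethernet Segment can resolve to a layer-2 nexthop-group and be load-shared per flow.

```python
# A toy model (made-up structures, not the real kernel/switchd objects) of
# the idea behind MAC-ECMP: a remote MAC learned behind a multihomed
# Ethernet Segment resolves to a layer-2 nexthop-group, and each flow is
# hashed onto one member VTEP.
import hashlib

l2_nexthop_groups = {
    "es-1": ["10.0.0.21", "10.0.0.22"],   # the ES is reachable via two VTEPs
}

fdb = {
    "00:02:00:00:00:11": "es-1",          # MAC -> nexthop-group, not a single VTEP
}

def pick_vtep(dst_mac: str, flow_key: str) -> str:
    """Per-flow load sharing across the members of the nexthop-group."""
    members = l2_nexthop_groups[fdb[dst_mac]]
    digest = hashlib.md5(flow_key.encode()).digest()
    return members[digest[0] % len(members)]

print(pick_vtep("00:02:00:00:00:11", "h11->h21 tcp/443"))
```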


Chapter 3:

PIM-SM is used to optimize flooded traffic in a network with EVPN-MH. In this episode we walk through the implementation aspects of flooded traffic, including DF election and split-horizon filtering.
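To make those two concepts a little more concrete, here is a simplified Python sketch, with invented peer IPs, of the default modulo-based DF election (service carving) described in RFC 7432, followed by a one-line statement of the split-horizon rule.

```python
# A simplified sketch of the default modulo-based DF election (service
# carving) from RFC 7432; the peer IPs and VLAN are made up for illustration.
import ipaddress

def elect_df(es_peer_vteps, vlan):
    """Order the ES peers by VTEP IP; VLAN v is carved to peer v mod N."""
    ordered = sorted(es_peer_vteps, key=ipaddress.ip_address)
    return ordered[vlan % len(ordered)]

peers = ["10.0.0.12", "10.0.0.11"]   # VTEPs sharing one Ethernet Segment
print(elect_df(peers, 100))          # only the DF floods VLAN 100 toward the server

# The essence of split-horizon filtering: never flood a BUM frame back onto
# the Ethernet Segment it originally entered on, even when the frame arrives
# over a VXLAN tunnel from an ES peer.
def allow_flood(ingress_esi, egress_esi):
    return ingress_esi != egress_esi
```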


Want to know more? You can find more resources about EVPN and all things networking in our resource hub here.

EVPN-PIM: Anycast VTEPs

This is the second of the two-part EVPN-PIM blog series exploring the feature and network deployment choices. If you missed part one, learn about BUM optimization using PIM-SM here.

Anycast VTEPs

Servers in a data-center Clos are typically dual-connected to a pair of top-of-rack (TOR) switches for redundancy. These TOR switches are set up as an MLAG (Multi-Chassis Link Aggregation) pair, i.e. the server sees them as a single switch with two or more bonded links. In reality there are two distinct switches with an ISL/peerlink between them, syncing databases and pretending to be one.

The MLAG switches (L11 and L12 in the sample setup) use a single VTEP IP address, i.e. they appear as an anycast VTEP or virtual VTEP.
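A tiny sketch, with made-up addresses, of why the single VTEP IP matters: remote VTEPs end up with exactly one tunnel destination for the pair, and the underlay's ECMP decides which physical switch receives any given packet.

```python
# An illustration with made-up addresses: because both MLAG peers source
# their VXLAN tunnels from the same anycast IP, a remote VTEP installs just
# one tunnel destination; the underlay's ECMP picks the physical switch.
mlag_pair = {
    "L11": {"loopback": "10.0.0.11", "anycast_vtep": "10.0.0.100"},
    "L12": {"loopback": "10.0.0.12", "anycast_vtep": "10.0.0.100"},
}

remote_flood_list = {switch["anycast_vtep"] for switch in mlag_pair.values()}
print(remote_flood_list)   # {'10.0.0.100'} -> one tunnel endpoint, not two
```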

Additional procedures involved in EVPN-PIM with anycast VTEPs are discussed in this blog.

EVPN-PIM in an MLAG setup vs. PIM-MLAG

Friend: “So you are working on PIM-MLAG?”
Me: “No, I am implementing EVPN-PIM in an MLAG setup”
Friend: “Yup, same difference”
Me: “No, it is not!”
Friend: “OK, OK, so you are implementing PIM-EVPN with MLAG?”
Me: “Yes!”
Friend: “i.e. PIM-MLAG?”
Me: “Well, now that you put it like that… NO, I AM NOT!! Continue reading

EVPN-PIM: BUM optimization using PIM-SM

Does “PIM” make you break out into hives? Toss and turn at night?! You are not alone. While PIM can present some interesting troubleshooting challenges, it serves the specific and simple purpose of optimizing flooding in an EVPN underlay.

The right network design choices can eliminate some of the complexity inherent to PIM while retaining its efficiency. We will explore PIM-EVPN and its deployment choices in this two-part blog series.

Why use multicast VxLAN tunnels?

Head-end-replication

Overlay BUM (broadcast, unknown-unicast and intra-subnet unknown-multicast) traffic is VXLAN-encapsulated and flooded to all VTEPs participating in an L2-VNI. One mechanism currently available for this is ingress replication, or HREP (head-end replication).

In this mechanism, BUM traffic from a local server (say H11 on rack-1 in the sample network) is replicated by the originating VTEP, L11, once for each remote VTEP. Each copy is then encapsulated with an individual tunnel header DIP (L21, L31) and sent over the underlay.

The number of copies created by the ingress VTEP grows in proportion to the number of VTEPs associated with an L2-VNI, and this can quickly become a scale problem. Consider a POD with 100 VTEPs; here the originating VTEP would need to create 99 Continue reading
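Here is a back-of-the-envelope Python sketch of that scaling point: with head-end replication the ingress VTEP makes one copy per remote VTEP in the VNI, whereas with a PIM-SM multicast underlay it sends a single copy to the group and lets the network replicate.

```python
# Back-of-the-envelope copy counts for flooding one BUM frame out of the
# ingress VTEP, comparing head-end replication with a multicast underlay.
def hrep_copies(vteps_in_vni: int) -> int:
    return vteps_in_vni - 1        # one unicast copy per remote VTEP

def multicast_copies(vteps_in_vni: int) -> int:
    return 1                       # a single copy sent to the multicast group

for n in (3, 10, 100):
    print(f"{n} VTEPs: HREP={hrep_copies(n)} copies, multicast underlay={multicast_copies(n)}")
```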

Silicon Choices

Cumulus Networks has always strived to provide our customers with choice. And now, Cumulus Linux 3.0 has been refactored to make the user experience fun and easy … and at the same time bring you even more choice:

  • In hardware platforms, from a variety of manufacturers.
  • In CPU architectures — x86 and ARM.
  • In Broadcom networking chips for a variety of use cases — 1G, 10G, 10GBase-T, 40G and 100G switches with Helix4, Hurricane2, Trident-II, Trident-II+ or Tomahawk chips inside.
  • And now, for the first time, choice in silicon vendors with the introduction of Cumulus Linux-powered 40G and 100G switches with Mellanox Spectrum chips inside.

Shrijeet Mukherjee (our fearless engineering leader) kicked off the Cumulus Linux 3.0 development cycle with this as our main goal — to offer even more choice to our customers. And then this happened…

[Image: three dragons]

So we took on the challenge of unifying the user experience across this sweeping range of hardware platforms and switch silicon without muting the unique prowess and feature richness of any of them.

How did we do this? By deploying the three dragons in our arsenal: ONIE, switchd and the Linux kernel itself (which is now better fed by Continue reading