You may have overheard someone talking about EVPN multihoming, but do you know what it is? And if you have, are you up to speed on the latest developments? I walk you through it all, beginning to end, in this three-part video series. Watch all three below.
EVPN multihoming provides support for all-active server redundancy. In this intro to EVPN multihoming you will hear an overview of the feature and how it compares with EVPN-MLAG.
In this episode we dive into the various unicast packet flows in a network with EVPN multihoming. This includes new data plane constructs, such as MAC ECMP and layer-2 nexthop groups, introduced for the express purpose of EVPN-MH.
PIM-SM is used for optimizing flooded traffic in a network with EVPN-MH. In this episode we walk through the implementation aspects of flooded traffic, including DF election and split-horizon filtering.
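The DF election mentioned above follows the service-carving procedure standardized in RFC 7432: the PEs attached to an Ethernet segment are ordered by IP address, and the designated forwarder for a given VLAN is picked by modulo. A minimal sketch of that procedure (the peer addresses here are made up for illustration):

```python
def elect_df(peer_ips, vlan_id):
    """RFC 7432 service carving: order the ES peers by IP address
    ascending; the DF for VLAN V is the peer with ordinal (V mod N)."""
    ordered = sorted(peer_ips, key=lambda ip: tuple(int(o) for o in ip.split(".")))
    return ordered[vlan_id % len(ordered)]

# Three hypothetical VTEPs sharing an Ethernet segment
peers = ["10.0.0.2", "10.0.0.1", "10.0.0.3"]
print(elect_df(peers, 100))  # ordinal 100 % 3 = 1 in sorted order -> 10.0.0.2
```

Note this is only the default, per-VLAN election from the RFC; implementations also support preference-based election, which the video covers.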
There are many aspects to developing the skills of an effective network engineer, and this skill set falls into a few different categories. Logically, the first step to conquer is understanding the various networking technologies and protocols. This requires a more traditional form of learning: studying protocols through specs or RFCs, reading whitepapers, and so on. The next step is implementing this knowledge by configuring network devices. Learning this skill is more like trying to learn a different language. The BGP protocol itself adheres to a set of standards, but each network device might present the configuration of BGP in a different way. The final, and possibly most difficult, skill to acquire is a combination of the first two: troubleshooting.
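To make the "different language" point concrete, here is roughly how the same single eBGP neighbor might be expressed on a traditional IOS-style CLI versus Cumulus Linux's NCLU (addresses and ASNs are illustrative, and the exact syntax varies by release):

```
! IOS-style CLI
router bgp 65000
 neighbor 10.1.1.2 remote-as 65001

# Cumulus Linux NCLU
net add bgp autonomous-system 65000
net add bgp neighbor swp1 interface remote-as external
net commit
```

The protocol behavior on the wire is the same in both cases; only the operator-facing grammar differs, which is exactly what makes multi-vendor fluency its own skill.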
Effective troubleshooting requires not just solid foundational knowledge of the technology and how it works, but also an understanding of how to configure and validate that configuration on the network devices. The foundational knowledge carries across the various implementations regardless of vendor, but configuration and validation vary drastically from one to the next. This leads to perhaps the most difficult aspect of troubleshooting. It's not enough just to understand how a technology works; you must also understand a Continue reading
Discussions about networking in a work-from-home world often focus on employees and endpoints, but how can network administrators do more than just keep the lights on if they can’t go to the data center? Maintaining what exists isn’t enough, especially as the entire world is redefining the future of work. Organizations need to be able to adapt to change, so how is that possible when administrators can’t go hands on?
There are any number of remote administration options available today, and any number of ways to compare them. Deciding between them is all about finding the right balance between cost, capability, and the labor intensity of implementation. In other words, they’re subject to all of the same considerations as any other technology implementation.
To dispense with the network administration 101 portion of the discussion: yes, networking is mostly a matter of remote administration anyway. If you can remote into something that has access to the management network, you can use SSH, HTTPS, or what-have-you to administer networks just as you would from the office. That's maintenance, not change.
Accomplishing change remotely and at scale requires automation and orchestration. In practice, this is heavily dependent upon virtualization and/or Continue reading
Want to try open networking for free? Try NVIDIA® Cumulus VX – a free virtual appliance that provides all the features of NVIDIA Cumulus Linux. You can preview and test NVIDIA Cumulus Linux in your own environment, at your own pace, without organizational and economic barriers. You can also produce sandbox environments for prototype assessment, pre-production rollouts, and script development.
NVIDIA Cumulus VX runs on all popular hypervisors, such as VirtualBox and VMware vSphere, and orchestrators, such as Vagrant and GNS3.
Our website has the images needed to run NVIDIA Cumulus VX on your preferred hypervisor, and downloading is simple. What's more, we provide a detailed guide on how to install and set up NVIDIA Cumulus VX to create this simple two-leaf, one-spine topology:
With these three switches up and running, you are all set to try out NVIDIA Cumulus Linux features, such as traditional networking protocols (BGP and MLAG), and NVIDIA (formerly Cumulus Networks) specific technologies, such as ONIE and Prescriptive Topology Manager (PTM). And not to worry, the NVIDIA Cumulus Linux user guide is always close at hand to help you out, as is the community Slack channel, where you can submit questions and engage with the wider Continue reading
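Once the virtual switches are up, a few NCLU show commands are a quick way to confirm the topology is healthy (a sketch of common checks; exact output varies by release and by which features you have configured):

```
net show interface        # link and admin state of the swp ports
net show lldp             # verify cabling matches the intended topology
net show bgp summary      # BGP session state, if BGP is configured
net show clag             # MLAG peering status, if MLAG is configured
```

Working through these on the two-leaf, one-spine lab is a low-stakes way to build the validation habits the user guide describes.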
Click here for our previous episode.
Some of your favorites are back together on this episode of the Kernel of Truth podcast: Roopa Prabhu, Brian O'Sullivan, and Pete Lumbis. Things have changed a little around here since the last time the three of them chatted together on the podcast, but one thing hasn't: how much they love to talk all things open networking. In this episode the group talks about how to navigate the open networking operating system space: how to choose an open network operating system, what works best for deployments, and what resources and communities are out there for you to tap into. We have it all here to help you get started. Sit back, enjoy the episode, and don't forget to check out the links below with resources referenced in the podcast.
Roopa Prabhu: Roopa is a Linux Architect at NVIDIA, formerly Cumulus Networks. She and her team work on all things kernel networking and Linux system infrastructure areas. Her primary Continue reading
Virtual private networks (VPNs) provide security when remote workers access corporate networks, but they’re notoriously slow. Backhauling all traffic for all remote users through the corporate data center just isn’t practical when work from home really starts to scale. Fortunately, VPNs can be configured to operate in more than one way.
Today, most organizations—regardless of size—use some combination of on-premises and public cloud computing. This means that some requests need to go to one or more corporate data centers, while some need to find their way to the Internet.
Traditional VPNs send all requests—both corporate-bound and Internet-bound—through the corporate network because that’s where the corporate information security defenses are located. Today, this approach is causing significant performance problems.
The most popular traditional solution to VPN performance problems was to just buy a bigger router or firewall. The throughput overhead of the VPN tunnel itself isn't that large, and many traditional corporate applications weren't latency sensitive. This meant that performance problems usually occurred because the device terminating the VPNs (the router or firewall) just didn't have enough processing power to handle the required number of concurrent sessions at the required throughput.
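An alternative to buying bigger termination hardware is split tunneling, where only corporate-bound prefixes enter the tunnel and Internet-bound traffic exits the home network directly. As an illustration (not taken from the article, and with made-up addresses and keys), a WireGuard client expresses the difference entirely through its AllowedIPs setting:

```
# Full tunnel: every packet is backhauled to the corporate concentrator
[Peer]
PublicKey  = <concentrator-public-key>
Endpoint   = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0

# Split tunnel: only corporate prefixes are tunneled
[Peer]
PublicKey  = <concentrator-public-key>
Endpoint   = vpn.example.com:51820
AllowedIPs = 10.0.0.0/8, 172.16.0.0/12
```

The trade-off, of course, is that split-tunneled Internet traffic bypasses the corporate security stack, which is why the traditional design backhauled everything in the first place.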
Times have changed, Continue reading
Supply chains are fragile things. They’re a web of suppliers and distributors, of storage and shipping facilities, and of resellers, all working at just the right speeds and with just the right margin of error to keep things flowing smoothly. But any fragile system is inevitably vulnerable to world events.
With the increasing requirement to support remote work, a robust, adaptable network is a business necessity. But it can be a challenge to source the networking equipment you need when global trade is disrupted. Open networking—where you’re not locked into specific network components—gives you many supplier and platform options to choose from, increasing your flexibility to deal with sudden and substantial change.
Lean manufacturing has become a common business practice. An IndustryWeek survey in 2016 ranked lean manufacturing systems as one of the most important technological advancements (second only to quality management systems).
Lean companies prioritize efficiency and work to reduce waste. This often means that they don’t stockpile components or keep a large inventory of completed products, which keeps money from being tied up in excess goods or unused warehouse space.
Companies source parts and labor from across the globe in an effort to trim Continue reading
Today's modern data center and cloud architectures are horizontally scalable, disaggregated distributed systems. Distributed systems have many individual components that work together independently, creating a powerful, cohesive solution. Just as compute is the brain of a data center's distributed system, the network is the nervous system, responsible for ensuring communication reaches all the individual components. This blog explains how Mellanox gives NVIDIA a larger footprint in the data center, and how NVIDIA, Mellanox, and Cumulus together can provide end-to-end acceleration technologies for the modern disaggregated data center.
All parties coming together in this acquisition are involved in acceleration technologies in the modern data center:
Click here for our previous episode.
In this episode, Kernel of Truth host Roopa Prabhu is joined by Barak Gafni. The two of them chat about the evolution of hardware telemetry and its software interfaces, and catch up on some of the IOAM work Barak has been involved with. We hope you enjoy this episode, and don't forget to check out the links below with resources referenced in the podcast.
Roopa Prabhu: Roopa is a Linux Architect at NVIDIA, formerly Cumulus Networks. She and her team work on all things kernel networking and Linux system infrastructure areas. Her primary focus areas in the Linux kernel are Linux bridge, Netlink, VxLAN, Lightweight tunnels. She is currently focused on building Linux kernel dataplane for E-VPN. She loves working with the Linux kernel networking and debian communities. Her past experience includes Linux clusters, ethernet drivers and Linux KVM virtualization platforms. She has a BS and MS in Computer Science. You can find her on Twitter at @__roopa.
Barak Gafni: Barak is a Staff Architect at NVIDIA, formerly Mellanox Technologies, focusing on enabling Continue reading
The NVIDIA® Cumulus Linux 4.2.0 release introduces a nifty new feature called auto BGP, which makes BGP ASN assignment in a two-tier leaf and spine network a breeze. Auto BGP does the work for you, without changing standard BGP behavior or configuration, so that you don't have to think about which numbers to allocate to your switches. This helps you build optimal ASN configurations in your data center and avoid suboptimal routing and path hunting, which can occur when spine ASNs are assigned incorrectly.
If you don't care about ASNs, then this feature is for you. But if you do, you can always configure BGP the traditional way, where you control which ASN is allocated to each switch. What I like about this feature is that you can mix and match; you don't have to use auto BGP across all switches in your configuration. You can use it to configure one switch but allocate ASNs manually on others.
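As I understand the feature, auto BGP hands out ASNs from the private 4-byte range (RFC 6996): each leaf gets a unique ASN derived from its switch MAC, while spines share a common one. Cumulus's actual algorithm is its own; the idea of a deterministic, per-switch assignment can be sketched like this (hypothetical helper, not the shipping implementation):

```python
import hashlib

# RFC 6996 private 4-byte ASN range: 4200000000..4294967294
ASN_BASE = 4200000000
ASN_SPAN = 4294967294 - ASN_BASE + 1

def leaf_asn(switch_mac: str) -> int:
    """Deterministically map a switch MAC to a private 4-byte ASN.
    Illustrative only -- not Cumulus Linux's actual hash function."""
    digest = hashlib.sha256(switch_mac.lower().encode()).digest()
    return ASN_BASE + int.from_bytes(digest[:8], "big") % ASN_SPAN

print(leaf_asn("44:38:39:00:00:01"))  # the same MAC always yields the same ASN
```

Because the assignment is a pure function of the MAC, a switch re-derives the same ASN after a reimage, and two leaves are (with overwhelming probability) never assigned the same value, which is what avoids the path-hunting problem described above.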
June seems like a lifetime ago, but there was so much content we wanted to make sure was on your radar. We know you may be thinking: "But wait, didn't something big happen to Cumulus Networks in June?" You would be right! We're excited to share that we are now officially NVIDIA®. Along with the news, we kept busy with fresh podcast episodes, informative blog posts, and much more, so take a minute to dive in and catch up on it all here.
Cumulus Networks' President and Chief Product Officer, Partho Mishra, on the NVIDIA-Cumulus acquisition: Partho Mishra answers your questions regarding the strategic focus of the new networking business unit at NVIDIA and the future of open networking.
Open source — the great equalizer: Technology is a great equalizer, and the open source movement has played a huge role in making this true and accelerating the process.
Remote work makes network visibility more critical than ever: We’re living through a major shift in the way employees work, extending the boundaries of what was once a tightly controlled environment.
Organizational change, growth, and environmental diversity are all challenges for IT teams, and they’re going to be a part of everyday life for the foreseeable future. As the number of device models and network architectures increases, so, too, does management complexity. Coping with 2020’s ongoing gift of unpredictability requires technological agility, something Cumulus Networks, acquired by NVIDIA, can help you with.
It’s easy to worry about the consequences of our collective, rapidly changing economic circumstances as though the problems presented are somehow novel. They’re not.
2020 has increased uncertainty, leading to an increased velocity of change. But change is the only constant in life, and the need for agile networking has been obvious to many in the industry for some time. Even without problems like rapidly figuring out how to cope with large chunks of the workforce working from home, change-responsive networking has challenged growing organizations for decades, and many continue to struggle with it today.
At a practical level, one of the biggest problems with rapid change is that it quickly leads to a dilemma: precisely meet the needs of the moment, resulting in a significant uptick in equipment diversity, or deploy Continue reading
Click here for our previous episode.
If you've listened to the podcast before, you may have heard us reference our customers from time to time. In this episode we're switching things up: instead of referencing a customer, you're going to hear directly from one! Manuel Schweizer, CEO of Cloudscale, joins host Roopa Prabhu, Attilla de Groot, and Mark Horsfield to chat about Cloudscale's first-hand experience with open networking, and what they hope the near and distant future of open networking will look like.
Roopa Prabhu: Roopa Prabhu is a Linux Architect at Cumulus Networks, now NVIDIA. At Cumulus she and her team work on all things kernel networking and Linux system infrastructure areas. Her primary focus areas in the Linux kernel are Linux bridge, Netlink, VxLAN, Lightweight tunnels. She is currently focused on building Linux kernel dataplane for E-VPN. She loves working with the Linux kernel networking and debian communities. Her past experience includes Linux clusters, ethernet drivers and Linux KVM virtualization platforms. She has a BS and MS in Computer Science. You can find her on Twitter at Continue reading
What, exactly, is on your network? More to the point, where is your network? Ask yourself that now, then compare this to how your network looked a year ago. The answers have almost certainly changed, with most organizations seeing a rapid increase in the number of employees working remotely.
Hardened, policy-managed corporate networks are being exposed via remote VPNs to home network environments and, in some cases, employees’ home computers. This increases network complexity and may introduce new security and performance issues. To keep things running smoothly, having an in-depth view of the devices and events on your network is crucial.
When employees work from home, troubleshooting becomes more complex. Even if an employee is using a company-supplied computer, it is operating on an unmanaged network, and is exposed to everything else that happens to be on that network.
Today’s home networks often have multiple computers, smartphones, tablets, smart TVs, game consoles, and even Internet of Things devices like security camera doorbells. In addition to the security risks of putting a company computer on an insecure network, there are IT infrastructure problems that can arise when work-from-home becomes normalized.
Technology is a great equalizer, and the open source movement has played a huge role in making this true and accelerating the process. Open source levels the playing field for many. Gone are the days when you had to get a job or make a big investment to learn a technology. Open source and Linux opened up access and opportunities to learn and innovate.
We live in an era where hardware and software architectures are powered by open source technologies. Modern system architectures (distributed, cloud native and others) are built with open source technologies and many others continue to move to open technologies (open networking, open Firmware, Linux BIOS just to name a few). Open source has been the driving force in commoditizing hardware in many markets. Open communities like OCP are taking this to the next level.
Open source platforms like GitHub and GitLab have promoted open source development and made it easier to build open source communities and ecosystems. Existing open source communities fuel new ones. Success stories from disaggregated, open server operating systems fueled the open networking revolution, leading to the birth of open network operating systems. Investing in open source and having open source development centers are Continue reading
With the acquisition of Cumulus Networks and Mellanox by NVIDIA, there have been a lot of questions regarding the strategic focus of the new networking business unit at NVIDIA and the future of the open networking approach that Cumulus Networks pioneered.
Mellanox and Cumulus are absolutely committed to open networking and to allowing our customers to pick best-of-breed solutions. Cumulus will continue to support all of the hardware platforms on our Hardware Compatibility List (HCL) and to add new hardware platforms from multiple hardware partners to the HCL. Mellanox already offers customers multiple network operating systems (ONYX, SONIC, Cumulus Linux), and this continues unchanged. In terms of total code commits to open source projects such as FRR, SONIC, and the Linux networking kernel, Cumulus and Mellanox have contributed heavily in the past and will continue to do so in the future. This is an integral part of our DNA.
Check out the latest episode of Kernel of Truth to hear Amit Katz and me discuss more about the future of open networking, including how SONIC and Cumulus Linux will work together, what happens to open "campus" networking, and the next generation of in-band telemetry.
May means more content! There was a very exciting announcement from us this month, and if you missed it don’t worry, you can read all about it below. In addition, we were keeping very busy with fresh podcast episodes, informative blog posts and much more. Ready to dive into all things open networking? Get comfortable and let’s dive in.
UCMP: Augmenting L3-only designs: So what makes a purely L3 design so aspirational? Can UCMP increase efficiency? Read this blog post by Rama Dharba as he addresses these questions and more. He delves into the challenges surrounding this type of design, possible solutions, and a recent augmentation in Cumulus Linux 4.1 that increases the design's flexibility.
Kernel of Truth season 3 episode 6: Building modern campus networks: Let's talk about all things modern campus networks. In this new Kernel of Truth episode, Brian O'Sullivan, Roopa Prabhu, Eric Pulvino and David Marshall dive into trends, technologies, architecture and much more. Grab your headphones and get ready to hear first-hand experiences from building these networks as well as tips and tricks learned along the way.
Click here for our previous episode.
With the recent acquisition of Cumulus Networks by NVIDIA, what does that mean for open networking? Kernel of Truth host Roopa Prabhu is joined by Partho Mishra and Amit Katz to discuss what the acquisition means for the future of the accelerated data center and for open networking in the data center. This is a must-listen episode!
Roopa Prabhu: Roopa Prabhu is Chief Linux Architect at Cumulus Networks. At Cumulus she and her team work on all things kernel networking and Linux system infrastructure areas. Her primary focus areas in the Linux kernel are Linux bridge, Netlink, VxLAN, Lightweight tunnels. She is currently focused on building Linux kernel dataplane for E-VPN. She loves working at Cumulus and with the Linux kernel networking and debian communities. Her past experience includes Linux clusters, ethernet drivers and Linux KVM virtualization platforms. She has a BS and MS in Computer Science. You can find her on Twitter at @__roopa.
Amit Katz: Amit is Vice President Ethernet Switch at Mellanox, Nvidia Business Unit. Amit served as Senior Director of Continue reading
I’m going to assume that at this stage, you’ve got a fully working (and tested) GNS3 install on a suitably powerful Linux host. Once that is complete, the next step is to download the two virtual machine images we discussed in part 1 of this blog, and integrate them into GNS3.
In my setup, I downloaded the Cumulus VX 4.0 QCOW2 image (though you are welcome to try newer releases which should work), which you can obtain by visiting this link: https://cumulusnetworks.com/accounts/login/?next=/products/cumulus-vx/download/
I also downloaded the Ubuntu Server 18.04.4 QCOW2 image from here: https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img
Once you have downloaded these two images, the next task is to integrate them into GNS3. To do this:
I must have built OpenStack demos a dozen times or more over the past few years, for the purposes of learning, training others, or providing proof of concept environments to clients. However, these environments always had one thing in common: they were purely demo environments, bearing little relation to how you would build OpenStack in a real production environment. Indeed, most of them were "all-in-one" environments, where every single service runs on a single node, and the loss of that node would mean the loss of the entire environment, never mind the lack of scalability!
Having been tasked with building a prototype OpenStack environment for an internal proof of concept, I decided it was time to start looking at how to build OpenStack "properly". However, I had a problem: I didn't have at my disposal the half-dozen or so physical nodes one might typically build a production cluster on, never mind a highly resilient switch core for the network. The ongoing lockdown during which I write this didn't help; in fact, it made obtaining hardware more difficult.
I’ve always been inspired by the “cldemo” environments on Cumulus Networks’ GitHub and my first thought was Continue reading