Click here for our previous episode.
Guess who’s back? Back again? The real Kernel of Truth podcast is back with season 2 and we’re starting off this season with all things EVPN! This topic is near and dear to Attilla de Groot’s heart, having been the subject of his recent blog here. He now joins Atul Patel and our host Brian O’Sullivan to talk more about EVPN on the host for multi-tenancy.
Join us as we discuss the problem we’re solving for, how to deploy EVPN on the host, what the caveats are when deploying, and more.
Brian O’Sullivan: Brian currently heads Product Management for Cumulus Linux. For 15 or so years he’s held software Product Management positions at Juniper Networks as well as other smaller companies. Once he saw the change that was happening in the networking space, he decided to join Cumulus Networks to be a part of the open networking innovation. When not working, Brian is a voracious reader and has held a variety of jobs, including bartending in three countries and working as an extra in a German Continue reading
Continuous Integration and Continuous Delivery (CI/CD) and containers are both at the heart of modern software development. CI/CD developers regularly break applications up into microservices, each running in its own container. Individual microservices can be updated independently of one another, and CI/CD developers aim to make those updates frequently.
This approach to application development has serious implications for networking.
There are a lot of things to consider when talking about the networking implications of CI/CD, containers, microservices and other modern approaches to application development. For starters, containers offer more density than virtual machines (VMs); you can stuff more containers into a given server than is possible with VMs.
Meanwhile, containers have networking requirements just like VMs do, and more workloads per server means more networking resources are required per server: more MAC addresses, IPs, DNS entries, load balancers, monitoring, intrusion detection, and so forth. Network plumbing hasn’t changed, so more workloads means more plumbing to instantiate and keep track of.
Containers can live inside a VM or on a physical server. This means they may have different networking requirements than traditional VMs (talking only to other containers within the same VM, for example) or other workloads. Continue reading
The new year is now in full swing and we’re excited about all the great content we’ve shared with you so far! In case you missed some of it, here’s our Cumulus content roundup: January edition. As always, we kept busy last month with lots of great resources and news for you to read. One of the biggest things we announced was our new partnership with Nutanix, but wait, there’s so much more! We’ve rounded up the rest of it right here, so settle in and stay a while!
From Cumulus Networks:
Cumulus + Nutanix = Building and Simplifying Open, Modern Data Centers at Scale: We are excited to announce that Cumulus and Nutanix are partnering to build and operate modern data centers with open networking software.
Cumulus Networks Strengthens Board of Directors Amid Record Growth and Market Adoption of its Open, Modern Networking Software: Former Deutsche Bank Group COO, Kim Hammonds, joins board as company leads the transition to open networking and data center modernization
Moving a Prototype Network to Production: By prototyping production networks, the network is elevated to a standard far superior to traditional approaches.
We are excited to announce that Cumulus and Nutanix are partnering to build and operate modern data centers with open networking software. We’ve worked closely with Nutanix, a leader in enterprise cloud computing, to develop a joint integration that will solve one of the most pressing enterprise infrastructure problems by unlocking the power of hyperconverged systems with open networking.
It’s a challenge every enterprise knows all too well: siloed servers, storage and compute make traditional IT infrastructure expensive and complex to maintain and creates a dynamic that holds back business innovation. Hyperconverged infrastructure with modern, open networking software allows for agility, flexibility, and a greatly simplified operational model across compute, storage, and networking. Our joint solution brings a fully automated and highly distributed network fabric to hyperconverged workloads for the modern data center.
Cumulus Linux and NetQ with Nutanix delivers tangible business value: increased operational efficiency by shortening the time required to stand up Nutanix clusters, organizational agility by improving the user experience via a single interface using Nutanix Prism, and streamlined procurement through common hardware partners such Continue reading
Network Engineers create and operate prototype networks all the time. Prototype networks are used to validate designs, test features or changes, troubleshoot use-case scenarios, and often just for learning. Typically, pre-prod testing environments are set up in such a way that device host names, attributes, configurations, IP assignments, software versions, and topologies are mostly inconsistent with production environments. This inconsistency is counter-intuitive, considering that accurate design validations should closely match reality to avoid any mistakes when deploying in production.
Cumulus Linux can run as a virtual appliance, allowing network engineers to build to-scale virtual networks for activities like modeling changes and performing validations, while opening the door to the DevOps methodologies application developers have used for years: validated testing before deploying to production, for continuous integration.
Cumulus VX (Virtual Experience) is a Cumulus Linux virtual appliance. You can test drive Cumulus Linux on a laptop, while those fluent with Cumulus Linux can prototype large networks and develop software integrations before deploying into production environments.
Cumulus VX is a platform — just like Cumulus Linux on a real switch — and therefore is designed to perform just like an actual switch running Cumulus Linux. Every feature you Continue reading
One of the most common requests we, as consultants, get from our customers is for an operations guide as the final deliverable for any data center build out. There are a few goals for such a guide:
Since Scott and I have been working on many operations guides, we thought it would be great to document our process so that customers can write their own operations guides.
The operations guide for web scale networking goes beyond just documenting configuration backups, user account access and change requests though. Web scale networking integrates proven software development processes and as such, the operations guide needs to account for these workflows.
The starting point of all operations guides is the initial build. Most of the cabling architecture, traffic flows and features, along with decision making and architectural choices, are captured within the High Level Design and Low Level Design documents. The operations guide on the other Continue reading
Who controls containers: developers, or operations teams? While this might seem like something of an academic discussion, the question has very serious implications for the future of IT in any organization. IT infrastructure is not made up of islands; each component interacts with, and depends on, others. Tying all components of all infrastructures together is the network.
If operations teams control containers, they can carefully review the impact that the creation of those containers will have on all the rest of an organization’s infrastructure. They can carefully plan for the consequences of new workloads, assign and/or reserve resources, map out lifecycle, and plan for the retirement of the workload, including the return of those resources.
If developers control containers, they don’t have the training to see how one small piece fits into the wider puzzle, and almost certainly don’t have the administrative access to all the other pieces of the puzzle to gain that insight. Given the above, it might seem like a no-brainer to let operations teams control containers, yet in most organizations deploying containers, developers are responsible for the creation and destruction of containers, which they do as they see fit.
This is not as irrational as it Continue reading
Click here for our previous episode.
As we enter the year, many, if not most, organizations have already been engaging in 2019 planning and strategizing. With that in mind, we thought what better way to wrap up our first season of Kernel of Truth than with an episode dedicated to trends and predictions straight from the brains of some of Cumulus’ brightest — CTO and Co-founder, JR Rivers, TME manager Pete Lumbis, and consultant David Marshall.
Join us as we discuss EVPN, virtualization, keeping up with the demands of digital transformation and more. This will be our last episode of the season, with our next season kicking off later this month.
JR Rivers: JR is a co-founder and CTO of Cumulus Networks where he works on company, technology, and product direction. JR’s early involvement in home-grown networking at Google and as the VP of System Architecture for Cisco’s Unified Computing System both helped fine tune his perspective on networking for the modern data center. Follow him on Twitter at @JRCumulus
Pete Lumbis: Pete is a Technical Marketing Engineer at Cumulus Networks. He helps Continue reading
Today we are launching our partnership with FS.com and with that comes an opportunity to engage our customers in a new and unique way. FS.com has been providing networking solutions since 2009. The joint partnership of Cumulus and FS.com allows a new way for our collective customers to achieve web-scale networking solutions in a convenient and timely manner. FS.com’s commitment to fast response times and comprehensive networking solutions brings a layer of convenience we feel our clients will appreciate.
Cumulus Networks is driven to provide flexibility, choice and affordability when it comes to building out the next generation of network infrastructures. By adding FS.com as an additional option to our portfolio we continue that commitment to our customers. It is exciting to see how this space will evolve and the new ways in which customers will source network infrastructure moving forward.
Whether you are looking for data center ToR solutions or an enterprise feature set, corporate buying behavior is evolving as our consumer buying habits blend more into our corporate lives. This method of sourcing and buying consumer goods has grown significantly over the past decade as our consumer selves buy more and more of Continue reading
Containers are unlike any other compute infrastructure. Prior to containers, compute infrastructure was composed of a set of brittle technologies that often took weeks to deploy. Containers made the automation of workload deployment mainstream, and brought workload deployment down to minutes, if not seconds.
Now, to be perfectly clear, containers themselves aren’t some sort of magical automation sauce that changed everything. Containers are something of a totem for IT operations automation, for a few different reasons.
Unlike the Virtual Machines (VMs) that preceded them, containers don’t require a full operating system for every workload. A single operating system can host hundreds or even thousands of containers, moving the necessary per-workload RAM requirement from several gigabytes to a few dozen megabytes. Similarly, containerized workloads share certain basic functions – libraries, for instance – from the host operating system, which can make maintaining key aspects of the container operating environment easier. When you update the underlying host, you update all the containers running on it.
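The density argument is easy to make concrete with back-of-the-envelope arithmetic. The per-workload figures below are illustrative assumptions (a few gigabytes per VM, a few dozen megabytes per container, as the paragraph above describes), not benchmarks:

```python
# Rough density comparison: VMs vs. containers on a single host.
# All figures are illustrative assumptions, not measurements.
HOST_RAM_GB = 256

VM_OVERHEAD_GB = 4          # assume ~4 GB per VM (guest OS + application)
CONTAINER_OVERHEAD_MB = 50  # assume ~50 MB per container (application + shared libs)

vms_per_host = HOST_RAM_GB // VM_OVERHEAD_GB
containers_per_host = (HOST_RAM_GB * 1024) // CONTAINER_OVERHEAD_MB

print(vms_per_host)         # 64 VMs
print(containers_per_host)  # 5242 containers
```

Under these assumptions the same server hosts roughly eighty times as many container workloads as VM workloads, which is exactly why per-server networking resources (MACs, IPs, DNS entries) balloon.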
Unlike VMs, however, containers are feature poor. For example, they have no resiliency: traditional vMotion-like workload migration doesn’t exist, and we’re only just now – several years after containers went mainstream – starting to get decent persistent Continue reading
If you’re a consumer-facing business, Black Friday and Cyber Monday are D-Day for IT operations. Conservative estimates indicate that upwards of 20% of a company’s annual revenue can occur within these two days. The stakes are even higher if you’re a payment processor, as you aggregate the purchases across all consumer businesses. This means that the need to remain available during the crucial 96-hour window from Black Friday through Cyber Monday is paramount.
My colleague, David, and I have spent the past 10 months preparing for this day. In January 2018 we started a new deployment with a large payment processor to help them build out capacity for their projected 2018 holiday payment growth. Our goal was to build a brand new, 11-rack data center serving as a third region to supplement the existing two regions used for payment processing. In addition, we helped deploy additional Cumulus racks and capacity at the existing two regions, which were historically built with traditional vendors.
Now that both days have come and gone, read on to find out what we learned from this experience.
Payment processing places most of its weight on the payment applications running in the data center. As with Continue reading
As most know, Cumulus Linux was originally intended for data center switching and routing, but over the years our customer base has requested that we expand into the enterprise campus feature set too. Slowly, we’ve done just that.
With this expansion though, there are a few items that IT managers tend to take for granted in an all Cisco environment that may need some extra attention when using Cumulus Linux as a campus switch. This is especially the case when it comes to IEEE 802.1x, desk phones, etc.
Most of the phones we interoperate with have been of the Cisco variety, and quite often those phones are connected to Cisco switches. There are a few tweaks from the default Cumulus settings that need to be called out in this environment, and we’ll now go over what those are and how you can make them.
Cisco IP phones may revert to a different VLAN after initial negotiation. One of our enterprise customers found that according to a Cisco tech note on LLDP-MED and CDP, CDP should be disabled on non-Cisco switches connecting to Cisco phones.
To eliminate this behavior, make the following adjustment to the Continue reading
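The specific adjustment is cut off above, so purely as a hypothetical illustration of where such settings live: on Cumulus Linux, LLDP behavior is controlled by lldpd, whose startup flags are set in `/etc/default/lldpd`. The flag shown below is an assumption for illustration, not the article’s actual fix; check the lldpd(8) man page before using it.

```
# /etc/default/lldpd  (hypothetical sketch, not the article's elided fix)
# -M 4 advertises LLDP-MED as a Network Connectivity Device, which lets
# Cisco phones learn voice-VLAN settings via LLDP-MED rather than CDP.
DAEMON_ARGS="-M 4"
```

After editing the file, restart the daemon (`sudo systemctl restart lldpd`) for the change to take effect.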
We’re at it again with the Cumulus content roundup: November edition. As always, we kept busy this month with lots of great resources and news for you to read. From choosing an EVPN underlay routing protocol to the benefits of Layer 3, we’ve rounded it all up right here, so settle in and stay a while!
From Cumulus Networks:
Choosing an EVPN Underlay Routing Protocol: We take a look at the routing protocols that could be used as an underlay, with the objective of understanding what might make them a fit (or not) for deployment in an EVPN network.
The Benefits of Flexible Multi-Cloud and Multi-Region Networking: Here we explore some of the reasons multi-cloud is fantastic for enterprises when they consider security, flexibility, reliability, and cost-effectiveness.
Cumulus Linux Automation with Standard Linux Tooling: This blog focuses on the different options available for modern automation, and how the Cumulus Linux approach provides the greatest amount of flexibility.
Cumulus Networks Open-Ended NCLU Net Example Command: NCLU is the always helpful Network Command Line Utility and supports both inspection and modification of Cumulus Networks configuration data.
Layer 3 can do it better. I’m convinced. You should be too.: Are you bringing the best solution Continue reading
There are lots of reasons why we have a tendency to stick to what we know best, but when new solutions present themselves, as the decision makers, we have to make sure we’re still bringing the best solution to our business and our customers. This post will highlight the virtues of building an IP based fabric of point to point routed links arranged in a Clos spine and leaf topology and why it is superior to legacy layer 2 hierarchical designs in the data center.
It’s not only possible, but far easier to build, maintain and operate a pure IP based fabric than you might think. The secret is that by pushing layer 2 broadcast domains as far out to the edges as possible, the data center network can be simpler, more reliable and easier to scale. For context, consider the existing layer 2 hierarchical model illustrated below:
This design depends heavily on MLAG. The peer link between the two switches providing an MLAG is compulsory, and a failure of that peer link would be more consequential than the failure of any other link. Ideally, we try to avoid linchpin situations like this. This design does provide redundancy, but depending on Continue reading
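As a sketch of what a "pure IP fabric" means in practice: each leaf-to-spine link is a routed point-to-point interface with BGP running on top, and no bridging or MLAG peer link anywhere between leaf and spine. The interface name and ASN below are assumptions for illustration, in FRR-style syntax as used on Cumulus Linux:

```
# /etc/network/interfaces on a leaf (illustrative fragment)
auto swp51
iface swp51
    # routed point-to-point uplink to a spine; no bridge, no bond, no MLAG

# /etc/frr/frr.conf (illustrative fragment, FRR syntax)
router bgp 65011
 neighbor swp51 interface remote-as external
```

Because the session runs unnumbered over the interface, losing one uplink just removes one equal-cost path; there is no single peer link whose failure is catastrophic.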
NCLU is the always helpful Network Command Line Utility. It’s a command interface for our products and platforms that’s designed to provide direct, simple access to network configuration information. Thus, NCLU supports both inspection and modification of Cumulus Networks configuration data. Better yet, NCLU is easy to customize for local environments and naming conventions using its net example facility.
In general, NCLU enables users at the command line to learn about current configurations and make changes or additions to them. NCLU reports on interfaces and can provide information about IP addresses, VLANs, access controls, trunking, STP, and more. At the routing level, NCLU provides information about Border Gateway Protocol (BGP) and Open Shortest Path First (OSPF) routing protocol settings and configurations. NCLU also offers information about services, including hostnames, NTP (Network Time Protocol), time zone, and so on.
NCLU also includes comprehensive, context-sensitive help. Starting with the basic net command, users can learn about the various sub-commands available to them. Similarly, entering net <sub-command-name> provides help for that specific sub-command. This is how Cumulus (and other forms of) Linux delivers help information for users of complex commands like net.
In addition, NCLU commands provide control over configuration staging, Continue reading
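The inspect-stage-commit workflow described above looks something like the following transcript. These commands must be run on a Cumulus Linux switch (or Cumulus VX), and the VLAN and address are made-up examples:

```
net show interface                         # inspect current interface state
net add vlan 10 ip address 10.0.10.1/24   # stage a configuration change
net pending                                # review staged changes as a diff
net commit                                 # apply the staged configuration
```

Staging changes with `net add` and reviewing them with `net pending` before `net commit` is what makes NCLU friendly for operators coming from traditional CLIs.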
One thing’s for sure: The world of networking and networking administration is quickly changing. Part of this change is an evolution from old-school, proprietary centralized networking to more open options. This evolution has several different effects on the way network designers, administrators and engineers design and operate the network. This blog will focus on the different options available for modern automation, and how the Cumulus Linux approach provides the greatest amount of flexibility.
It wasn’t too long ago that the few big networking vendors had an almost unbreakable grip on organizational networking implementations, and correspondingly, with the way these implementations were managed. For most, this included the configuration of the various types of networking equipment using a command-line interface (CLI) and proprietary commands. Automating these types of solutions most often required either an offering developed by the vendors themselves, or the use of an application programming interface (API) written to interface with their products.
The question is whether this was a good thing or not. Generally, vendor-specific solutions have their advantages because they’re able to interface closely with the specific device code and take advantage of communications between the device coding team and the tools coding team.
EVPN is all the rage these days. The ability to do L2 extension and L3 isolation over a single IP fabric is a cornerstone to building the next-generation of private clouds. BGP extensions spelled out in RFC 7432 and the addition of VxLAN in IETF draft-ietf-bess-evpn-overlay established VxLAN as the datacenter overlay encapsulation and BGP as the control plane from VxLAN endpoint (VTEP) to VxLAN endpoint. Although RFC 7938 tells us how to use BGP in the data center, it doesn’t discuss how it would behave with BGP as an overlay as well. As a result, every vendor seems to have their own ideas about how we should build the “underlay” network to get from VTEP to VTEP, allowing BGP-EVPN to run over the top.
Let’s take a look at the routing protocols we could use as an underlay and understand the strengths and weaknesses that make each a good or bad fit for deployment in an EVPN network. We’ll go through IS-IS, OSPF, iBGP and eBGP. I won’t discuss EIGRP. Although it’s now an IETF standard, it’s still not widely supported Continue reading
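Whichever underlay you choose, its job is the same: provide VTEP-to-VTEP reachability for the loopbacks, so that the EVPN overlay sessions can run over the top. A hedged FRR-style sketch of the eBGP option, where one session carries both the underlay IPv4 routes and the EVPN overlay routes (the ASN, interface and loopback are assumptions for illustration):

```
# /etc/frr/frr.conf on a leaf/VTEP (illustrative, not a complete config)
router bgp 65011
 neighbor swp51 interface remote-as external   # eBGP unnumbered underlay to spine
 address-family ipv4 unicast
  network 10.0.0.11/32                         # advertise the VTEP loopback
 address-family l2vpn evpn
  neighbor swp51 activate                      # carry EVPN routes on the same session
  advertise-all-vni
```

The other candidates (IS-IS, OSPF, iBGP) change only the underlay portion of this config; the `l2vpn evpn` overlay piece stays essentially the same.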
We’re back with the Cumulus content roundup: October edition. We’ve kept busy this month with a new whiteboarding video series, podcasts, resources and more. Covering everything from open source to digital transformation, we’ve rounded it all up right here, so settle in and stay a while!
From Cumulus Networks:
Preparing your network for digital transformation: Learn about the primary challenges with digital transformation and how web-scale networking principles make digital transformation possible and profitable. Is your network ready for the future?
Web-scale networking for cloud service providers: Find out why cloud service providers need to have agile, highly scalable and cost effective infrastructure in order to stand out to their customers.
Our dedicated approach to open source networking: Read our philosophy and how we’ve contributed to and participated in the open source community.
Web-scale Whiteboarding: OpenStack Overview: Watch our brand new series of whiteboarding videos with our very own Pete Lumbis.
Kernel of Truth: Episode 9: Tune into this podcast episode as we dive into Layer 3 networking and why we believe it’s the future of network design.
News from the web:
Gartner Peer Insights: See the full list of companies recognized for Best Data Networking of 2018, including Cumulus Networks!
Cumulus Linux includes a RESTful programming interface for accessing network devices running the OS. It’s called the HTTP API, and it implements an API to access the OpenStack ML2 driver and the Network Command Line Utility (NCLU). Understanding exactly what this means, and how it works, is essential before digging into the possibilities it presents. Here’s an overview to get things going.
The ML2 Driver, a.k.a. (in OpenStack’s terms) the Modular Layer 2 neutron plug-in, provides a framework. It enables OpenStack-based networking to use a variety of Layer 2 networking technologies, including those from Cumulus (for which a specific ML2 driver is available and ready to use). To use the OpenStack ML2 driver with Cumulus Linux switches, two essential ingredients must be present:
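To give a flavor of what calling the HTTP API looks like, here is a hypothetical sketch: the endpoint path, port, address and credentials below are all assumptions for illustration, so consult the Cumulus Linux HTTP API documentation for the exact interface before use.

```
# Hypothetical sketch: URL, port and credentials are illustrative assumptions.
curl -k -u cumulus:password \
     -H "Content-Type: application/json" \
     -X POST https://192.168.0.10:8080/nclu/v1/rpc \
     -d '{"cmd": "show interface json"}'
```

The idea is that the same commands you would type at the NCLU prompt can be driven remotely over HTTPS, which is what makes integrations like the ML2 driver possible.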
The Border Gateway Protocol (BGP) is an IP reachability protocol that you can use to exchange IP prefixes. Traditionally, one of the nuisances of configuring BGP is that if you want to exchange IPv4 prefixes you have to configure an IPv4 address for each BGP peer. In a large network, this can consume a lot of your address space, requiring a separate IP address for each peer-facing interface.
To understand where BGP unnumbered fits in, it helps to understand how BGP has historically worked over IPv4. Peers connect via IPv4 over TCP port 179. Once they’ve established a session, they exchange prefixes. When a BGP peer advertises an IPv4 prefix, it must include an IPv4 next hop address, which is usually the address of the advertising router. This requires, of course, that each BGP peer has an IPv4 address.
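The address-space cost described above is easy to quantify with Python’s standard `ipaddress` module. The fabric size below is an assumption for illustration:

```python
import ipaddress

# Each numbered BGP peering consumes a /31 point-to-point subnet
# (two IPv4 addresses, per RFC 3021).
leaves, spines = 32, 4
links = leaves * spines                  # every leaf peers with every spine

addresses_needed = links * 2             # two addresses per /31 link
supernet = ipaddress.ip_network("10.1.0.0/24")

print(links)                             # 128 point-to-point links
print(addresses_needed)                  # 256 addresses
print(addresses_needed == supernet.num_addresses)  # True: exactly fills a /24
```

Even this modest 36-switch fabric burns an entire /24 on peering links alone. With BGP unnumbered (supported in Cumulus Linux via RFC 5549 extended next-hop encoding), the sessions run over link-local addresses and none of this IPv4 addressing is needed.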
As a simple example, using the Cumulus Reference Topology, let’s configure BGP peerings as follows:
Between spine01 (AS 65020, 10.1.0.0/31) and leaf01 (AS 65011, 10.1.0.1/31)
Between spine01 (10.1.0.4/31) and leaf02 (AS 65012, 10.1.0.5/31)
Leaf01 will advertise the prefix 192.0.2.1/32 and leaf02 will Continue reading
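On spine01, the numbered peerings listed above might be configured like this in FRR syntax. This is a sketch consistent with the addresses given, not the exact Reference Topology configuration:

```
# /etc/frr/frr.conf on spine01 (illustrative sketch)
router bgp 65020
 neighbor 10.1.0.1 remote-as 65011   # leaf01, across the 10.1.0.0/31 link
 neighbor 10.1.0.5 remote-as 65012   # leaf02, across the 10.1.0.4/31 link
```

Note that every one of these statements hard-codes a neighbor address that had to be allocated, assigned to an interface, and kept in sync on both ends, which is exactly the bookkeeping BGP unnumbered eliminates.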