Author Archives: Nolan Leake
Meet us at OpenStack Summit Tokyo and learn how to build fast, scalable, secure OpenStack networking.
Mark McClain (CTO, Akanda Inc) and I will be presenting at the OpenStack Summit in Tokyo about the next-generation physical and virtual network that DreamHost is deploying for their DreamCompute cloud.
The design marries Cumulus Networks Dynamic LNV (Lightweight Network Virtualization) with Akanda’s Astara L3-7 services, all orchestrated by OpenStack Neutron.
We’ll be expanding on the talk we gave at the last OpenStack Summit in Vancouver, which covered the design and the case for deploying it. This time, we’ll dig into our experiences deploying it in production.
If you can’t make it to Tokyo, don’t worry: the talk will be recorded.
Watch this space for updates on the talk!
The post OpenStack Summit Tokyo: Learn Open Networking with OpenStack appeared first on Cumulus Networks Blog.
Gregory Pickett of Hellfire Security reached out to me last Wednesday about some interesting research he is presenting tomorrow at Black Hat USA. There are two parts to his research: a security bug in Cumulus Linux (which we have already patched) and other network operating systems, and a serious flaw in how all network switches are designed and built.
The security bug was the easy part: it is not exploitable in our default configuration, and Gregory politely gave us a heads up well ahead of time, so we put the fix out last Friday to protect customers who have modified their sudoers configuration in a way that exposed them to the vulnerability. You can see the details in our security fix announcement from last Friday. (If you’re interested in being notified about future security fixes in Cumulus Linux, please sign up for our security mailing list.)
The much more serious issue he will present is the exploitability of firmware in all network switches. This same exploitability has been known about in servers, laptops and PCs for years (and in some cases mitigated with technologies like Trusted Platform Modules), but its application to networking devices is new.
This issue means …
What drives its popularity? Being open source, it puts cloud builders in charge of their own destiny, whether they choose to work with a partner or deploy it themselves. Because it is Linux-based, it is highly amenable to automation, whether you’re building out your network or running it in production. At build time, it’s great for provisioning, installing and configuring the physical resources. In production, it’s just as effective, since provisioning tenants, users, VMs, virtual networks and storage is done via self-service Web interfaces or automatable APIs. Finally, it has always been designed to run well on commodity servers, avoiding reliance on proprietary vendor features.
Cumulus Linux fits naturally into an OpenStack cloud, because it shares a similar design and philosophy. Built on open source, Cumulus Linux is Linux, allowing common management, monitoring and configuration on both servers and switches. The same automation and provisioning tools you commonly use for OpenStack servers can also be used, unmodified, on Cumulus Linux switches, giving a single …
There has been an aspiration to replace Nova-net with Neutron for about four years now, but it hasn’t happened yet. The latest is that Neutron is being threatened with demotion back into “incubation,” after promising to make itself production-ready in each of the last four releases and then failing to follow through. All of the handful of production deployments of Neutron are in conjunction with Nicira/NSX-MH, which does all the heavy lifting.
The Neutron folks are optimistic that they will be production-ready in Juno (the next release, due in October this year), but I’m betting on Kilo, the release early next year.
The post Nova-net is the networking component of Nova appeared first on Cumulus Networks Blog.
Comparing network virtualization to server virtualization is a useful tool, as there are clear similarities.
Server virtualization changed the amount of time it took to get a new compute resource up and running from weeks (order hardware, rack gear, install OS) to hours or even minutes. It allowed location independence, so admins could start VMs wherever capacity was available, and move them around at will.
Network virtualization is starting to provide similar benefits to the network. Creating a new virtual network can be done in minutes, compared to hours if we have to file a ticket with the networking team to provision a new VLAN and plumb it across the physical network. And the scope of VM mobility can be increased radically, as VMs are no longer bound by size-limited physical L2 domains.
But there is one place the analogy breaks down, at least for networking gear from OEMs that take the traditional proprietary appliance approach.
First, let’s back up briefly and examine something I glossed over when talking …
We recently published a program that we wrote in conjunction with our friends at MetaCloud: the VXLAN Flooder, or vxfld. vxfld is the basis of our Lightweight Network Virtualization feature (new with Cumulus Linux 2.2!), as well as MetaCloud’s next-generation OpenStack networking. It enables easy-to-deploy, scalable virtual switched networks built on top of an L3 fabric.
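Setting vxfld’s actual protocol aside, the core service such a daemon provides — tracking which VTEPs host which VNI, so broadcast/unknown-unicast/multicast (BUM) traffic can be head-end replicated over unicast instead of relying on multicast in the fabric — can be sketched in a few lines. The class and method names below are illustrative only, not vxfld’s real API:

```python
# Toy sketch of a vxfld-style VTEP registry: VTEPs register the VNIs they
# host; a sender queries the registry to learn which peers need a unicast
# copy of BUM traffic. Names and structure are invented for illustration.
from collections import defaultdict

class VtepRegistry:
    def __init__(self):
        self._vni_to_vteps = defaultdict(set)  # VNI -> set of VTEP IPs

    def register(self, vni, vtep_ip):
        """A VTEP announces that it hosts workloads on this VNI."""
        self._vni_to_vteps[vni].add(vtep_ip)

    def withdraw(self, vni, vtep_ip):
        """A VTEP stops hosting the VNI (e.g. last VM migrated away)."""
        self._vni_to_vteps[vni].discard(vtep_ip)

    def flood_list(self, vni, sender_ip):
        """Peers that need a copy of BUM traffic sent by sender_ip on vni."""
        return sorted(self._vni_to_vteps[vni] - {sender_ip})

registry = VtepRegistry()
registry.register(10100, "10.0.0.1")
registry.register(10100, "10.0.0.2")
registry.register(10100, "10.0.0.3")
registry.register(10200, "10.0.0.2")

# 10.0.0.1 floods on VNI 10100: replicate to the other two VTEPs only.
print(registry.flood_list(10100, "10.0.0.1"))  # ['10.0.0.2', '10.0.0.3']
```

The real daemon also has to age out stale registrations and handle the actual VXLAN encapsulation, but the registry above is the essence of why an L3 fabric suffices.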
Of course, vxfld is just the latest in a series of contributions! There are projects we’ve written from scratch, such as ONIE, the Open Network Install Environment, which we contributed to the Open Compute Project; Prescriptive Topology Manager, which simplifies the deployment of large L3 networks; and ifupdown2, a rewrite of Debian’s network configuration tool that greatly simplifies large, complicated networking configurations.
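To give a sense of the simplification ifupdown2 brings, a hypothetical `/etc/network/interfaces` fragment like the one below declares a VLAN-aware bridge over dozens of ports in a handful of lines, rather than one stanza per port (interface names, port range and VLAN IDs here are invented for illustration):

```
# Hypothetical ifupdown2-style configuration (illustrative values only):
# one VLAN-aware bridge spanning switch ports swp1 through swp48.
auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports glob swp1-48
    bridge-vids 100-200
```

With classic ifupdown, expressing the same intent would typically require enumerating every port and VLAN combination by hand.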
And then there are our contributions back to the upstream projects that we include in Cumulus Linux. These include (in order of decreasing number of contributions) the Quagga routing protocol suite, the Linux …
sFlow is used to collect statistics, such as packet counts, error counts and CPU usage, from a large number of individual switches. What is especially interesting is that it can also be used to collect sampled packets (usually only the first n bytes, containing the header), along with some metadata about those packets.
Bringing sFlow to Cumulus Linux was particularly easy, because “hsflowd” was already available for implementing sFlow support on Linux servers. We were able to reuse that existing code, with extremely minimal modification, to implement sFlow on our Linux-based switches.
sFlow allows a collector to get a statistical view of what is going on in a collection of switches, approaching per-flow granularity. This is extremely useful information to present to users for capacity planning and debugging purposes, but things really get interesting when the collector can make decisions based on the information.
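Because sFlow forwards only 1-in-N sampled packet headers, the collector scales what it sees back up to estimate actual traffic. A back-of-the-envelope version of that estimation looks like this (simplified: real collectors also use the sampling-pool and drop counters that sFlow carries to correct the estimate):

```python
# Simplified sFlow-style estimation: with 1-in-N packet sampling, each
# sampled packet stands in for roughly N real packets, so the collector
# multiplies observed sample sizes by the sampling rate per flow.
def estimate_traffic(samples, sampling_rate):
    """samples: iterable of (flow_key, sampled_bytes) pairs.
    Returns estimated total bytes per flow."""
    totals = {}
    for flow, nbytes in samples:
        totals[flow] = totals.get(flow, 0) + nbytes * sampling_rate
    return totals

# One switch sampling 1 in 1000 packets: three samples from flow "A",
# one small sample from flow "B".
samples = [("A", 1500), ("A", 1500), ("A", 1500), ("B", 64)]
print(estimate_traffic(samples, 1000))  # {'A': 4500000, 'B': 64000}
```

The statistical nature of this view is exactly why sFlow scales to whole fabrics: the switch does almost no work per packet, and accuracy improves with traffic volume.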
For example, our friends at inMon implemented detection of elephant (high-bandwidth) flows, followed by marking those flows on the switch at network ingress for special QoS handling. This nearly …
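inMon’s actual implementation aside, the detection half of that control loop reduces to scaling the sampled byte counts back up and flagging flows whose estimated volume crosses a threshold; the collector would then push a QoS-marking rule to the ingress switch for each flagged flow. The threshold and flow names below are invented for illustration:

```python
# Toy elephant-flow detector: scale sampled byte counts up by the sampling
# rate, and flag any flow whose estimated bytes in the interval exceed a
# threshold. A real collector would follow up by installing a marking rule
# at the ingress switch for each flagged flow.
ELEPHANT_THRESHOLD_BYTES = 10_000_000  # per interval; illustrative value

def find_elephants(sampled_bytes_per_flow, sampling_rate,
                   threshold=ELEPHANT_THRESHOLD_BYTES):
    """sampled_bytes_per_flow: dict of flow_key -> sampled bytes seen.
    Returns the sorted list of flows estimated to exceed the threshold."""
    estimated = {flow: nbytes * sampling_rate
                 for flow, nbytes in sampled_bytes_per_flow.items()}
    return sorted(flow for flow, est in estimated.items() if est >= threshold)

flows = {"web": 2_000, "backup": 40_000, "migration": 25_000}
print(find_elephants(flows, sampling_rate=1000))  # ['backup', 'migration']
```

The interesting systems work is in the follow-up action (programming the switch), but the detection itself is this simple precisely because sFlow has already done the hard part of exporting a statistically sound view of the traffic.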