AT&T’s dNOS Initiative Spotlights Fake News from Cisco (et al.)

One thing that’s clear from recent events is that the “alternative” path for network infrastructure refreshes and build-outs – disaggregation – has just become exciting again, due in part to AT&T’s recent announcement of the company’s dNOS (disaggregated Network Operating System) initiative. Actually, even before this proposal, the fact that Pica8 and Cumulus – the only two pure open-standards-based NOS vendors in the market – have close to 2,000 current customers between them running on common hardware suggests that it’s been pretty exciting for some time now.

But AT&T’s new proposal does present us with a perfect opportunity to finally throw a bright spotlight on the palette of Fake News that has been tossed around the industry about disaggregated networking solutions and white-box networking in general. Of course, the elephant in the networking room is always Cisco, so let’s start there to see why AT&T pushed out this proposal in the first place.

Over the years, Cisco has successfully stared down any real threats to its account-control-plus-per-hardware-port-revenue business model, building itself up to the hegemony that it has today and, in the process, inadvertently laying waste to its customers’ ability to innovate in their own market segments based on differentiated network services. Continue reading

5 Ways you can leverage the Linux community for your data center network

Here at Cumulus, we often talk about the benefits of having an operating system built on Linux (if you need to be re-schooled on the benefits of unifying the stack, head here). But something that possibly goes overlooked, or at least underappreciated, is the value of the Linux community itself. The community is made up of 50,000 or so engineers, all passionate about learning, improving and creating code. People like to say that when you go with a Linux operating system, you’re “standing on the shoulders of giants,” meaning that you don’t have to rely solely on your in-house engineering team (even if they’re world-class engineers); instead, you’re relying on thousands of engineers, including some of the absolute best in the business. Because Cumulus Linux is built on Linux, our customers have this community at their disposal. So why does that really matter? Here are five reasons to consider.

1. Security

The most widely cited benefit of having a community of 50,000 behind you is security. Basically it looks something like this. Let’s say you’re with a proprietary vendor (*cough* Cisco *cough* Juniper *cough*), and there is a glitch in your latest package installation causing a security vulnerability. Maybe Continue reading

Don’t Forget! Tune Into Our CCNA/CCNP Q&A: February 2018

Presented by INE instructor Keith Bogart (CCIE #4923), this free 60-minute session is an open forum for anyone seeking information about the Cisco CCNA or CCNP Routing & Switching exams and related technologies. Ask questions live with an experienced industry expert!

When: February 9th at 10 am (PST) / 1 pm (EST)

Who Should Watch: Anyone with questions about earning their associate or professional level Cisco certification

Instructor: Keith Bogart CCIE #4923

Aerohive’s Atom boldly goes where no Wi-Fi has gone before

The opening to “Star Trek: The Original Series” featured Captain Kirk proclaiming that space was the “Final Frontier” and that the Enterprise was going to “boldly go where no man has gone before.” In networking, Wi-Fi is really the final frontier, as it lets us explore strange new apps and seek out new tweets regardless of where we are. Untethered from cables, we are as free to roam around as the Enterprise was in space. There should be no question that good Wi-Fi is as important to us today as dilithium crystals were to the Enterprise. But what happens when Wi-Fi isn’t available? Or just as bad, when the connectivity is almost there but not quite strong enough to be useful. I recall being in a hotel where I couldn’t connect to Wi-Fi at the desk in the room, but I could connect if I sat in the hallway by the entry door, so I wound up sitting there all night trying to get work done. It’s easy to say that Wi-Fi should be everywhere, but sometimes it’s hard to achieve that because of interference or cabling problems. To read this article in full, please click here

Different Server Workhorses For Different Workload Courses

Co-design is all the rage these days in systems design, where the hardware and software components of a system – whether it is aimed at compute, storage, or networking – are designed in tandem, not one after the other, and immediately affect how each aspect of a system is ultimately crafted. It is a smart idea that wrings the maximum amount of performance out of a system for very specific workloads.

The era of general-purpose computing, which is on the wane, brought an ever-increasing amount of capacity to bear in the datacenter at an ever-lower cost, enabling an

Different Server Workhorses For Different Workload Courses was written by Timothy Prickett Morgan at The Next Platform.

It’s launch day for Sylabs: Promoting portable high-performance containers for Linux

Today is launch day for Sylabs — a new company focused on promoting Singularity within the enterprise and high-performance computing (HPC) environments and on advancing the fields of artificial intelligence (AI), machine/deep learning, and advanced analytics. And while it's launch day for Sylabs, it's not launch day for the technology it will be promoting. Singularity has already made great strides for HPC and has given Linux itself more prominence in HPC as it has moved more deeply into the areas of scientific and enterprise computing. With its roots at Lawrence Berkeley National Laboratory (Berkeley Lab), Singularity is already providing a platform for a lot of heavy-duty scientific research and is expected to move into many other areas, such as machine learning, and may even change the way some difficult analytical problems are approached. To read this article in full, please click here

Researchers find malware samples that exploit Meltdown and Spectre

It was inevitable. Once Google published its findings for the Meltdown and Spectre vulnerabilities in CPUs, the bad guys used that as a roadmap to create their malware. And so far, researchers have found more than 130 malware samples designed to exploit Spectre and Meltdown. If there is any good news, it’s that the majority of the samples appear to be in the testing phase, according to antivirus testing firm AV-TEST, or are based on proof-of-concept software created by security researchers. Still, the number is rising fast. To read this article in full, please click here

A Statistical View Of Cloud Storage

Cloud datacenters in many ways are like melting pots of technologies. The massive facilities hold a broad array of servers, storage systems, and networking hardware that come in a variety of sizes. Their components come with different speeds, capacities, bandwidths, power consumption, and pricing, and they are powered by different processor architectures, optimized for disparate applications, and carry the logos of a broad array of hardware vendors, from the largest OEMs to the smaller ODMs. Some hardware systems are homegrown or built atop open designs.

As such, they are good places to compare and contrast how the components of these

A Statistical View Of Cloud Storage was written by Jeffrey Burt at The Next Platform.

5G won’t cope, terahertz will provide more bandwidth

Terahertz data links promise significant advantages over existing microwave-based wireless data transmissions, and the technology will ultimately beat out the upcoming 5G millimeter-wave frequencies if progress continues, researchers say. The reason for the optimism is that terahertz is more capacious than existing radio bands. It’s also less power hungry, and new technical advances keep being made. (Also read: New wireless science promises 100-times faster Wi-Fi.) The latest terahertz-level advance, announced this week by scientists at Brown University, is the ability to bounce the mega-bandwidth-carrying waves of energy around corners. Quashing that line-of-sight requirement introduces a level of robustness not seen before. To read this article in full, please click here

IDG Contributor Network: Why WAN metrics are not enough in SD-WAN policy enforcement

On the topic of measuring WAN metrics, most engineers think to look at the standard statistics of loss, latency, jitter, and reachability for determining path quality. This is good information for a routing protocol that is making decisions about packet flow at layer 3 of the OSI model. However, it is incomplete information when looked at from the perspective of the overall user experience. In order for an SD-WAN solution to provide materially better value than a typical packet router, it must look beyond the metrics considered by the router. SD-WAN devices shouldn’t be considered routers in the conventional sense. Routers use local tables and algorithms such as Dijkstra’s to determine the shortest path to a destination for a packet. The term packet is important here; it is all that the router cares about. If you look up the definition of a router, it is a device that functions at layer 3 to deliver packets to their destination network. When there is a problem, the router processes the topology change and computes new routing table entries that represent a point-in-time decision about the available paths. These topology changes take time to process. This can Continue reading
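As a rough illustration of the difference being described here — not code from the article — the sketch below scores candidate WAN paths by a weighted blend of loss, latency, and jitter and filters out paths an application cannot tolerate; the path names, weights, and thresholds are purely hypothetical.

```python
# Hypothetical sketch: score candidate WAN paths by measured loss, latency,
# and jitter, then pick the best usable path for a given application profile.
from dataclasses import dataclass

@dataclass
class PathMetrics:
    name: str         # e.g. "mpls", "broadband" (illustrative labels)
    loss_pct: float   # packet loss, percent
    latency_ms: float
    jitter_ms: float

def score(path: PathMetrics, weights=(50.0, 1.0, 2.0)) -> float:
    """Composite path-quality score; lower is better. Weights are illustrative."""
    w_loss, w_latency, w_jitter = weights
    return w_loss * path.loss_pct + w_latency * path.latency_ms + w_jitter * path.jitter_ms

def pick_path(paths, max_loss_pct=1.0):
    # Drop paths the application cannot tolerate, then take the best score.
    usable = [p for p in paths if p.loss_pct <= max_loss_pct]
    return min(usable or paths, key=score)

if __name__ == "__main__":
    candidates = [
        PathMetrics("mpls", loss_pct=0.1, latency_ms=40, jitter_ms=2),
        PathMetrics("broadband", loss_pct=0.5, latency_ms=25, jitter_ms=8),
    ]
    print(pick_path(candidates).name)
```

A real SD-WAN controller would feed such a score with continuous, per-application measurements (and richer signals than these three metrics), which is exactly the point about looking beyond what a conventional layer-3 router considers.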

Coming Soon: Networking Features in Ansible 2.5

The upcoming Ansible 2.5 open source project release has some really exciting improvements, and the following blog highlights just a few of the notable additions. In typical Ansible fashion, development of networking enhancements is done in the open with the help of the community. You can follow along by watching the networking GitHub project board, as well as the roadmap for Ansible 2.5 via the networking wiki page.

A few highlighted features include:

New Connection Types: network_cli and NETCONF

Ansible Fact Improvements

Improved Logging

Continued Enablement for Declarative Intent

Persistent SSH Connection Improvements

Additional Platforms and Modules

Let's dive into each of these topics and elaborate on what they mean for your Ansible Playbooks!

New Connection Types: network_cli and NETCONF

Prior to Ansible 2.5, using networking modules required the connection type to be set to local. A playbook executed the Python module locally, and then connected to a networking platform to perform tasks. This was sufficient, but different from how most non-networking Ansible modules functioned. In general, most Ansible modules are executed on the remote host, rather than locally on the Ansible control node. Although many networking platforms can execute Python code, the vast Continue reading
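As a rough sketch of what the new connection type looks like in a playbook — an illustrative example rather than one from the release notes, with a placeholder inventory group and a Cisco IOS module chosen only for familiarity — the pre-2.5 pattern of connection: local plus a per-task provider dictionary of credentials can be replaced with something like this (check the Ansible 2.5 documentation for the exact variables your platform needs):

```yaml
---
# Illustrative Ansible 2.5 playbook: network_cli replaces connection: local
# for CLI-based network platforms (the group name is a placeholder).
- name: Gather version information over network_cli
  hosts: ios_switches
  gather_facts: no
  connection: network_cli        # persistent CLI connection type added in 2.5
  vars:
    ansible_network_os: ios      # tells network_cli which CLI dialect to expect
  tasks:
    - name: Run show version on each device
      ios_command:
        commands:
          - show version
      register: output

    - name: Show the command output
      debug:
        var: output.stdout_lines
```

The practical win is that credentials and connection handling move into standard inventory variables, so network tasks start to look and behave like any other Ansible task.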