Why Network Automation Won’t Kill Your Job

I’ve been focusing lately on closing the gap between traditional automation teams and network engineering. This week I was fortunate enough to attend the DevOps 4 Networks event, and though I’d like to save most of my thoughts for a post dedicated to the event, I will say I was super pleased to spend the time with the legends of this industry. There are a lot of bright people looking at this space right now, and I am really enjoying the community that is emerging.

SSLv3 Support Disabled By Default Due to POODLE Vulnerability

SSLv3 Vulnerability

For the last week we've been tracking rumors about a new vulnerability in SSL. This specific vulnerability, which was just announced, targets SSLv3. The vulnerability allows an attacker to add padding to a request in order to recover the plaintext of traffic encrypted with the SSLv3 protocol. Effectively, this allows an attacker to compromise the encryption when using the SSLv3 protocol. Full details have been published by Google in a paper which dubs the bug POODLE (PDF).

Generally, modern browsers will default to a more modern encryption protocol (e.g., TLSv1.2). However, it's possible for an attacker to simulate conditions in many browsers that will cause them to fall back to SSLv3. The risk from this vulnerability is that if an attacker could force a downgrade to SSLv3 then any traffic exchanged over an encrypted connection using that protocol could be intercepted and read.

In response, CloudFlare has disabled SSLv3 across our network by default for all customers. This will have an impact on some older browsers, resulting in an SSL connection error. The biggest impact is Internet Explorer 6 running on Windows XP or older. To quantify this, we've been tracking SSLv3 usage.
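The mitigation is the same anywhere you terminate TLS: refuse SSLv3 outright so a forced downgrade simply fails the handshake rather than falling back to the broken protocol. As a rough illustration (not from the CloudFlare post), here is a minimal Python sketch using the standard ssl module; the hostname is a placeholder and certificate handling is left at the defaults.

```python
import socket
import ssl

# Negotiate the best mutually supported protocol, but explicitly refuse
# SSLv2 and SSLv3 so a POODLE-style downgrade cannot succeed.
context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
context.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3

# Connect (placeholder host) and report which protocol was actually negotiated.
with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())  # e.g. "TLSv1.2" -- never "SSLv3"
```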

Continue reading

vPC order of operations

Cisco Nexus switches can be very temperamental or capricious (pick the one you prefer), and the vPC technology is no exception. There is a certain order in which to configure vPC, and we will walk through it in this blog post. The following topology will be used. Enabling the feature: obviously, we need to activate the […]

Standards are a farce

Today (October 14) is "World Standards Day", celebrating the founding of the ISO (the International Organization for Standardization). It's a good time to point out that people are wrong about standards.

You are reading this blog post via "Internet standards". It's important to note that through its early existence, the Internet was officially not a standard. Through the 1980s, the ISO was busy standardizing a competing set of internetworking protocols.

What made the Internet different is that its standards were de facto, not de jure. In other words, the Internet standards body, the IETF, documented things that worked, not how they should work. Whenever somebody came up with a new protocol to replace an old one, and people started using it, the IETF would declare it "something people are using". Protocols were documented so that others could interoperate with them if they wanted, but there was no claim that they should. Internet evolution in these times was driven by rogue individualism -- people rushed to invent new things without waiting for the standards body to catch up.

The ISO's approach was different. Instead of individualism, it was based on "design by committee", where committees were Continue reading

Don’t Be Afraid of Changing Jobs

Some people are corporate survivors, sticking with one company for decades. Some people move around when it suits, while others would like to move but are fearful of change. Here are a few things I’ve learnt about adapting to new work environments. It’s not that scary.

Corporate Survivors

We’ve all seen the people who seem to survive in a corporate environment. They seem to know everyone, and almost everything about the business. Return to a company after 10 years, and they’re still there. Somehow they survive, through mergers, acquisitions, and round after round of re-organisation. But often they seem to be doing more or less the same job for years, with little change.

Why Do People Stay?

There are four possible reasons for staying at a job for a long time:

  1. You’re really happy with what you do, and you’re well looked after.
  2. You just don’t care. You come to work to eat your lunch and talk to your friends. You don’t care how you’re treated, or what work you do, as long as you get paid.
  3. This is the only possible job you can get, due to location/skills/whatever.
  4. You’re comfortable where you are, and you’re scared of moving, scared of what Continue reading

OpenStack and Cumulus Linux: Two Great Tastes that Taste Great Together

OpenStack is a very popular open source technology stack used to build private and public cloud computing platforms. It powers clouds for thousands of companies like Yahoo!, Dreamhost, Rackspace, eBay, and many more.

What drives its popularity? Being open source, it puts cloud builders in charge of their own destiny, whether they choose to work with a partner or deploy it themselves. Because it is Linux based, it is highly amenable to automation, whether you’re building out your network or running it in production. At build time, it’s great for provisioning, installing and configuring the physical resources. In production, it’s just as effective, since provisioning tenants, users, VMs, virtual networks and storage is done via self-service Web interfaces or automatable APIs. Finally, it’s always been designed to run well on commodity servers, avoiding reliance on proprietary vendor features.

Cumulus Linux fits naturally into an OpenStack cloud, because it shares a similar design and philosophy. Built on open source, Cumulus Linux is Linux, allowing common management, monitoring and configuration on both servers and switches. The same automation and provisioning tools that you commonly use for OpenStack servers can also be used unmodified on Cumulus Linux switches, giving a single Continue reading

MindshaRE: Statically Extracting Malware C2s Using Capstone Engine

It’s been far too long since the last MindshaRE post, so I decided to share a technique I’ve been playing around with to pull C2 and other configuration information out of malware that does not store all of its configuration information in a set structure or in the resource section (for a nice set of publicly available decoders check out KevTheHermit’s RATDecoders repository on GitHub). Being able to statically extract this information becomes important in the event that the malware does not run properly in your sandbox, the C2s are down or you don’t have the time / sandbox bandwidth to manually run and extract the information from network indicators.

Intro

To find C2 info, one could always just extract all hostname-/IP-/URI-/URL-like elements via string regex matching, but it's entirely possible to end up with false positives or, in some cases, multiple hostname and URI combinations and mismatch the information. In addition to that issue, there are known families of malware that will include benign or junk hostnames in their disassembly that may never get referenced, or are referenced only to make false phone-homes. Manually locating references and then disassembling using a disassembler (in my case, Capstone Engine) can help Continue reading
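To make the idea concrete, here is a small Python sketch using Capstone's Python bindings. It is not taken from the post; the image base, string address, and byte sequence are fabricated purely for illustration. It disassembles a code blob and flags any instruction whose immediate operand points at the virtual address of a candidate C2 string, i.e. a code cross-reference suggesting the hostname is actually used rather than being junk.

```python
from capstone import Cs, CS_ARCH_X86, CS_MODE_32
from capstone.x86 import X86_OP_IMM

IMAGE_BASE = 0x400000   # hypothetical image base
STRING_VA  = 0x403010   # hypothetical VA of a candidate C2 string

# Fabricated code bytes: "push 0x403010" followed by a relative call.
code = b"\x68\x10\x30\x40\x00\xe8\x12\x34\x00\x00"

md = Cs(CS_ARCH_X86, CS_MODE_32)
md.detail = True        # populate operand details so immediates are visible

for insn in md.disasm(code, IMAGE_BASE + 0x1000):
    for op in insn.operands:
        # An immediate equal to the string's VA is a reference to that string.
        if op.type == X86_OP_IMM and op.imm == STRING_VA:
            print("0x%x: %s %s  -> references candidate C2 string"
                  % (insn.address, insn.mnemonic, insn.op_str))
```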

VMware Meets The Physical Network — What If?

With other acquisition rumors floating around, I figured I would add my own 2 cents and do some speculating. 

It’s not uncommon to hear that VMware might acquire Cumulus.  Like others, it’s one acquisition that I’ve speculated about for a while.  There is already an interesting dynamic between Cisco and VMware, but as both companies continue to go to market with their Software Defined Networking (SDN), or controller based solutions, VMware still needs to run over a physical data center network.  The physical network market is still largely dominated by Cisco though.  Does VMware want or need to control the physical network?
Extending the SDDC

If VMware took their network strategy one step further and kept true to the Software Defined Data Center (SDDC), they would need a network operating system (NOS) that could run on approved hardware, e.g. a hardware compatibility list (HCL).  They need a bare metal (white box) switch company.  Cumulus fits this bill well because they are focused on creating open IP fabrics using tried and true protocols and already have their own HCL.  They’ve also already partnered with VMware and support VXLAN termination on certain platforms Continue reading

The Current State of SDN Protocols

by Hariharan Ananthakrishnan, Distinguished Engineer - October 14, 2014

In last week’s blog post I started outlining the various standards needed to make SDN a reality. Here is more detail about the relevant protocols and the IETF’s progress on each one. 

OpenFlow has emerged as a Layer 2 software defined networking (SDN) southbound protocol. Similarly, Path Computation Element Protocol (PCEP), BGP Link State Distribution (BGP-LS), and NETCONF/YANG are becoming the de facto SDN southbound protocols for Layer 3. The problem is that these protocols are stuck in various draft forms that are not interoperable, which limits the industry’s SDN progress.
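For a feel of what one of these southbound interfaces looks like in practice, here is a minimal NETCONF sketch using the Python ncclient library (my own illustration, not something from the article); the device address and credentials are placeholders for any NETCONF-capable router or switch.

```python
from ncclient import manager

# Open a NETCONF session over SSH (port 830) and retrieve the running
# configuration, which the device returns as XML shaped by its YANG models.
with manager.connect(host="198.51.100.10",   # placeholder device address
                     port=830,
                     username="admin",       # placeholder credentials
                     password="admin",
                     hostkey_verify=False) as m:
    reply = m.get_config(source="running")
    print(reply.data_xml)
```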

Path Computation Element Protocol (PCEP) 

PCEP is used for communicating Label Switched Paths (LSPs) between a Path Computation Client (PCC) and a Path Computation Element (PCE). PCEP has been in use since 2006. The stateful [draft-ietf-pce-stateful-pce] and PCE-initiated LSP [draft-ietf-pce-pce-initiated-lsp] extensions were added more recently and enable PCEP use for SDN deployments. The IETF drafts for both extensions have not yet advanced to “Proposed Standard” status after more than two years. 

Because the drafts went through many significant revisions, vendors are struggling to keep up with the Continue reading

The Great Tech Reaving

It seems as though the entire tech world is splitting up.  HP announced they are splitting into HP Inc, which takes the Personal Systems Group, and HP Enterprise, which keeps the rest of the enterprise business.  Symantec is spinning Veritas off into a separate company as it focuses on security and leaves the backup and storage pieces to the new group.  IBM completed the sale of their x86 server business to Lenovo.  There are calls for EMC and Cisco to split as well.  It’s like the entire tech world is breaking up right before the prom.

Acquisition Fever

The Great Tech Reaving is a logical conclusion to the acquisition rush that has been going on throughout the industry for the past few years.  Companies have been amassing smaller companies like trading cards.  Some of the acquisitions have been strategic.  Buying a company that focuses on a line of work similar to the one you are working on makes a lot of sense.  For instance, EMC buying XtremIO to help bolster flash storage.

Other acquisitions look a bit strange.  Cisco buying Flip Video.  Yahoo buying Tumblr. There’s always talk around these left field mergers.  Is the CEO Continue reading

Automatic protection for common web platforms

If you are a CloudFlare Pro or above customer you enjoy the protection of the CloudFlare WAF. If you use one of the common web platforms, such as WordPress, Drupal, Plone, WHMCS, or Joomla, then it's worth checking if the relevant CloudFlare WAF ruleset is enabled.

That's because CloudFlare pushes updates to these rules automatically when new vulnerabilities are found. If you enable the relevant ruleset for your technology then you'll be protected the moment new rules are published.

For example, here's a screenshot of the WAF Settings for a customer who uses WordPress (but doesn't use Joomla). If CloudFlare pushes rules to the WordPress set then they'll be protected automatically.

Enabling a ruleset is simple. Just click the ON/OFF button and make sure it's set to ON.

Here's a customer with the Drupal ruleset disabled. Clicking the ON/OFF button would enable that ruleset and provide protection from existing vulnerabilities and automatic protection if new rules are deployed.

For common problems we've rolled out protection across the board. For example, we rolled out Heartbleed protection and Shellshock automatically, but for technology-specific updates it's best to enable the appropriate ruleset in the WAF Settings.

Should you run OSPF over DMVPN?

Original content from Roger's CCIE Blog, tracking the journey towards the ultimate Cisco certification: the Routing & Switching Lab Exam.
You may have heard that you should not run OSPF over DMVPN, but do you actually know why? There is no technical reason why you cannot run OSPF over DMVPN, but it does not scale very well. So why is that? The reason is that OSPF is a link-state protocol, so each spoke […]

Theory of Constraints and common staffing mistakes

In many companies—both large and small—getting staffing right is a challenge. Critical teams are always starved for resources, and the common peanut butter approach to distributing headcount rarely maps to an explicit strategy. In fact, over time, some teams will grow disproportionately large, leaving other teams (system test, anyone?) struggling to keep pace.

But why is it that staffing is so difficult to get right?

Theory of Constraints

In yesterday’s blog post, I referenced Eliyahu Goldratt’s seminal business book The Goal, and highlighted the Theory of Constraints. I’ll summarize the theory here (if you read yesterday, you can safely skip to the next section).

The Theory of Constraints is basically a management paradigm that promotes attention to bottleneck resources above all else. In any complex system that has multiple subsystems required to deliver something of value, there will be one or more bottleneck resources. A bottleneck resource is the limiting subsystem in the overall system. To increase the overall output of the system as a whole, you have to address the bottlenecks.

The book focuses on a manufacturing plant. Basically, there are different stations in the manufacturing process. Each makes a different part of the widget. To Continue reading

Dude… cover me, I’m going in…

You know it needs to be done, it could be easy… or it could get messy, and you’re sure that the world will be a better place when you’re finished.

That’s the dilemma that some of our enterprise customers have when grappling with “the cloud.”

We’ve noticed a distinct trend among customers that grew up outside the “cloud era”; they’ve been trying to bolt “cloud” onto their legacy IT blueprint, and it has been a struggle. They expected to realize operational and capital efficiencies approaching those of high-scale Internet businesses. Unfortunately, they are missing by a long shot.

At some point along the way, these customers realize that they need to be willing to drive structural change. They need to create a “cloud blueprint” for their applications and IT infrastructure. In some cases, this means a transition to public/hosted infrastructure; in other cases, it means building new private infrastructure based on cloud principles. In many cases, it’s a mixture of both.

When private cloud is part of the answer, we’ve consistently found design patterns built on infrastructure platforms like VMware vSphere and OpenStack and big data platforms such as Hortonworks.  Customers want to get these services operational quickly so they often stay with Continue reading

Accelerating Hadoop With Cumulus Linux

One of the questions I’ve encountered in talking to our customers has been “What environments are a good example of working on top of the Layer-3 Clos design?”  Most engineers are familiar with the classic Layer-2 based Core/Distribution/Access/Edge model for building a data center.  And while that has served us well in the older client-server north-south traffic flow approaches and in smaller deployments, modern distributed applications stress the approach to its breaking point.  Since L2 designs normally need to be built around pairs of devices, relying on individual platforms to carry 50% of your data center traffic can present a risk at scale.  On top of this, you need a long list of protocols, which can result in a brittle and operationally complex environment as you deploy tens of devices.

Hence the rise of the L3 Clos approach allowing for combining many small boxes, each carrying only a subset of your traffic, along with running industry standard protocols that have a long history of operational stability and troubleshooting ease.  And, while the approach can be applied to many different problems, building a practical implementation of a problem is the best way to show Continue reading

Open Networking for VMware vSphere: The Last Piece of the SDDC Puzzle?

VMware vSphere is one of the most prevalent virtualization platforms in the market today. Despite stiff competition from proprietary and open-source initiatives, VMware vSphere has continued to innovate and provide value to the enterprise.

Now that I’ve kept marketing/SEO happy, let’s dive right in. Two of the latest initiatives from the VMware marketing engine are “SDDC” (software-defined data center) and “Hyper-Converged.” Hype aside, these two concepts are fundamentally aligned with what hyper-scale operators have been doing for years. It boils right down to having generic hardware configured for complex and varied roles by layering different software.

At Cumulus Networks, we help businesses build cost-effective networks by leveraging “white box” switches together with a Linux® operating system. We feel this is the crucial missing piece to the overall SDDC vision.

What is VMware’s SDDC and Hyper-converged Strategy All about Anyway?

First, let’s back up a little and talk a little history. VMware started as a hypervisor stack for abstracting compute resources from the underlying server (and before that, workstations… but I digress). They moved on to more advanced management of that newly disaggregated platform, with the rise of vCenter and everything that followed.

Then VMware turned their attention to the Continue reading

Nova-net is the networking component of Nova

There has been an aspiration to replace Nova-net with Neutron for about 4 years now, but it hasn’t happened yet.  The latest is that Neutron is being threatened with being demoted back into “Incubation”, due to promising to make itself production ready for each of the last 4 releases and then totally failing to follow through.  All of the handful of production deployments of Neutron are in conjunction with Nicira/NSX-MH, which does all the heavy lifting.

The Neutron folks are optimistic that they will be production ready in Juno (the next release, Oct this year), but I’m betting on Kilo, the release early next year.

The post Nova-net is the networking component of Nova appeared first on Cumulus Networks Blog.

Linux for Network Admins: The Good stuff just got Better!

Man is a tool-using animal. Without tools he is nothing, with tools he is all. – Thomas Carlyle
The most satisfactory definition of man from the scientific point of view is probably Man the Tool-maker. – Kenneth Oakley

The Artist and the Scientist both agree that tools are an integral part of what it means to be human. Even human history is divided into ages based on the power and sophistication of its tools: the Stone Age, Bronze Age, Iron Age and now the Information Age. Our capabilities are enormously magnified by the tools we have at hand.

A server admin has the tools to manage 100,000 servers fairly easily, while the network admin struggles to manage a small fraction of that number of networking devices. In the modern data center, where scale, agility and flexibility are part of its DNA, this lack of netadmin tools is a major hurdle in the operation of the data center. Consequently, a big part of the transformation and turmoil in networking today is an attempt to get rid of this limitation.

Among the tools the server admin has at his or her disposal, automation is key.  It is impossible to manually manage devices at the Continue reading