Archive

Category Archives for "Security"

Freaking out over the DBIR

Many in the community are upset over the recent "Verizon DBIR" because it claims widespread exploitation of the "FREAK" vulnerability. They know this is impossible, because of the vulnerability details. But really, the problem lies in misconceptions about how "intrusion detection" (IDS) works. As a sort of expert in intrusion detection (by which I mean the expert), I thought I'd describe what really went wrong.

First, let's talk FREAK. It's a man-in-the-middle attack. In other words, you can't attack a web server remotely by sending bad data at it. Instead, you have to break into a network somewhere and install a man-in-the-middle computer. This fact alone means it cannot be the most widely exploited attack.

Second, let's talk FREAK. It works by downgrading RSA to 512-bit keys, which can be cracked by supercomputers. This fact alone means it cannot be the most widely exploited attack -- even the NSA does not have sufficient compute power to crack as many keys as the Verizon DBIR claims were cracked.
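To put rough numbers on that second point, here is a back-of-the-envelope sketch (my illustration, not anything from the DBIR) of the heuristic GNFS work factor for factoring one 512-bit RSA modulus, constants and real-world optimizations ignored. A single 512-bit key is within reach of an attacker with rented compute; repeating that for every connection the DBIR counts as "exploited" is not.

```python
# Back-of-the-envelope sketch: heuristic GNFS work factor for factoring a single
# 512-bit RSA modulus, L_N[1/3, (64/9)^(1/3)], constants ignored.
import math

n_bits = 512
ln_n = n_bits * math.log(2)                       # ln(N) for a 512-bit modulus
work = math.exp((64 / 9) ** (1 / 3)
                * ln_n ** (1 / 3)
                * math.log(ln_n) ** (2 / 3))
print(f"~2^{math.log2(work):.0f} symbolic operations per key")   # roughly 2^64
```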

Now let's talk about how Verizon calculates when a vulnerability is responsible for an attack. They use this methodology:
  1. look at a compromised system (identified by AV scanning, IoCs, etc.)
  2. look at Continue reading

Vulns are sparse, code is dense

The question posed by Bruce Schneier is whether vulnerabilities are "sparse" or "dense". If they are sparse, then finding and fixing them will improve things. If they are "dense", then all this work put into finding/disclosing/fixing them is really doing nothing to improve things.

I propose a third option: vulns are sparse, but code is dense.

In other words, we can secure specific things, like OpenSSL and Chrome, by researching the heck out of them, finding vulns, and patching them. The vulns in those projects are sparse.

But, the amount of code out there is enormous, considering all software in the world. And it changes fast -- adding new vulns faster than our feeble efforts at disclosing/fixing them.

So measured across all software, no, the security community hasn't found any significant number of bugs. But when looking at critical software, like OpenSSL and Chrome, I think we've made great strides forward.

More importantly, let's ignore the actual benefits/costs of fixing bugs for the moment. What all this effort has done is teach us about the nature of vulns. Critical software is written today in a vastly more secure manner than it was in the 1980s, 1990s, or even the Continue reading

Software-Defined Security and VMware NSX Events

I’m presenting at two Data Center Interest Group Switzerland events organized by Gabi Gerber in Zurich in early June:

  • In the morning of June 7th we’ll talk about software-defined security, data center automation and open networking;
  • In the afternoon of the same day (so you can easily attend both events) we’ll talk about VMware NSX microsegmentation and real-life implementations.

I hope to see you in Zurich in a bit more than a month!

Security ‘net: Privacy and Cybercrime Edition

DDoS blackmail is an increasingly common form of cybercrime, it appears. The general pattern is something like this: the administrator of a large corporate site receives an email, threatening a large scale DDoS attack unless the company deposits some amount of bitcoin in an untraceable account. Sometimes, if the company doesn’t comply, the blackmail is followed up with a small “sample attack,” and a second contact or email asking for more bitcoin than the first time.

The best reaction to these types of things is either to work with your service provider to hunker down and block the attack, or to simply ignore the threat. For instance, there has been a spate of threats from someone called Armada Collective over the last several weeks that appear to be completely empty; while threats have been reported, no action appears to have been taken.

We heard from more than 100 existing and prospective CloudFlare customers who had received the Armada Collective’s emailed threats. We’ve also compared notes with other DDoS mitigation vendors with customers that had received similar threats. -via Cloudflare

The bottom line is this: you should never pay up in response to these threats. It's always better to contact your provider and work Continue reading

Satoshi: how Craig Wright’s deception worked

My previous post shows how anybody can verify Satoshi using a GUI. In this post, I'll do the same, with command-line tools (openssl). It's just a simple application of crypto (hashes, public-keys) to the problem.

I go through this step-by-step discussion in order to demonstrate Craig Wright's scam. Dan Kaminsky's post and the redditors come to the same point through a different sequence, but I think my way is clearer.

Step #1: the Bitcoin address


We know certain Bitcoin addresses correspond to Satoshi Nakamoto him/herself. For the sake of discussion, we'll use the address 15fszyyM95UANiEeVa4H5L6va7Z7UFZCYP. It's actually my address, but we'll pretend it's Satoshi's. In this post, I'm going to prove that this address belongs to me.

The address isn't the public-key, as you'd expect, but the hash of the public-key. Hashes are a lot shorter, and easier to pass around. We only pull out the public-key when we need to do a transaction. The hashing algorithm is explained on this website [http://gobittest.appspot.com/Address]. It's basically base58(ripemd160(sha256(public-key))).
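As a concrete illustration of that pipeline (a minimal sketch, not code from the post), assuming you already have the SEC-encoded public key as raw bytes:

```python
# Minimal sketch: Bitcoin P2PKH address = Base58Check(0x00 || RIPEMD160(SHA256(pubkey))).
# Note: hashlib's "ripemd160" depends on the underlying OpenSSL build; some systems
# need a third-party RIPEMD-160 implementation instead.
import hashlib

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(version: bytes, payload: bytes) -> str:
    """Append a 4-byte double-SHA256 checksum, then encode in base58."""
    data = version + payload
    data += hashlib.sha256(hashlib.sha256(data).digest()).digest()[:4]
    num, encoded = int.from_bytes(data, "big"), ""
    while num:
        num, rem = divmod(num, 58)
        encoded = B58_ALPHABET[rem] + encoded
    pad = len(data) - len(data.lstrip(b"\x00"))   # each leading 0x00 byte -> '1'
    return "1" * pad + encoded

def pubkey_to_address(pubkey: bytes) -> str:
    h160 = hashlib.new("ripemd160", hashlib.sha256(pubkey).digest()).digest()
    return base58check(b"\x00", h160)
```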

Step #2: You get the public-key


Hashes are one-way, so given a Bitcoin address, we can't immediately convert it into a public-key. Instead, we have to look it Continue reading

Satoshi: That’s not how any of this works

In this WIRED article, Gavin Andresen says why he believes Craig Wright's claim to be Satoshi Nakamoto:
“It’s certainly possible I was bamboozled,” Andresen says. “I could spin stories of how they hacked the hotel Wi-fi so that the insecure connection gave us a bad version of the software. But that just seems incredibly unlikely. It seems the simpler explanation is that this person is Satoshi.”
That's not how this works. That's not how any of this works.

The entire point of Bitcoin is that it's decentralized. We don't need to take Andresen's word for it. We don't need to take anybody's word for it. Nobody needs to fly to London and check it out on a private computer. Instead, you can just send somebody the signature, and they can verify it themselves. That the story was embargoed means nothing -- either way, Andresen was constrained by an NDA. Since they didn't do it the correct way, and were doing it the roundabout way, the simpler explanation is that he was being bamboozled.
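To make the "verify it yourself" point concrete, here is a minimal sketch using the third-party python-ecdsa package. It is not Bitcoin's exact signed-message format (wallets prefix the message and use recoverable signatures tied to an address), but the trust model is the same: anyone holding the public key and the signature can check it on their own machine, with no embargo, no NDA, and no hotel Wi-Fi.

```python
# Minimal sketch of self-service signature verification (not Bitcoin's exact
# signed-message format). Requires the third-party "ecdsa" package.
import hashlib
from ecdsa import SECP256k1, SigningKey, BadSignatureError

sk = SigningKey.generate(curve=SECP256k1)   # stand-in for the claimed Satoshi key
vk = sk.get_verifying_key()                 # the public key anyone can hold
message = b"I control the key behind this address"

signature = sk.sign(message, hashfunc=hashlib.sha256)

# Anyone, anywhere, can run this check locally:
try:
    vk.verify(signature, message, hashfunc=hashlib.sha256)
    print("signature verifies")
except BadSignatureError:
    print("signature does NOT verify")
```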

Below is an example of this, using the Electrum Bitcoin wallet software:


This proves that the owner of the Bitcoin Address has signed the Message Continue reading

Securing BGP: A Case Study (9)

There are a number of systems that have been proposed to validate (or secure) the path in BGP. To finish off this series on BGP as a case study, I only want to look at three of them. At some point in the future, I will probably write a couple of posts on what actually seems to be making it to some sort of deployment stage, but for now I just want to compare various proposals against the requirements outlined in the last post on this topic (you can find that post here).

The first of these systems is BGPSEC—or as it was known before it was called BGPSEC, S-BGP. I’m not going to spend a lot of time explaining how S-BGP works, as I’ve written a series of posts over at Packet Pushers on this very topic:

Part 1: Basic Operation
Part 2: Protections Offered
Part 3: Replays, Timers, and Performance
Part 4: Signatures and Performance
Part 5: Leaks

Considering S-BGP against the requirements:

  • Centralized versus decentralized balance: S-BGP distributes path validation information throughout the internetwork, as this information is actually contained in a new attribute carried with route advertisements (a toy sketch of this signature-chaining idea appears below). Authorization and authentication are implicitly centralized, however, with the Continue reading
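A toy sketch of that signature-chaining idea (an illustration only, not the real S-BGP attestation or BGPSEC wire format): each AS signs the prefix, its own AS number, the AS it is advertising to, and everything signed so far, so a receiver can validate the advertised path hop by hop.

```python
# Toy illustration of per-hop route attestations, using the third-party "ecdsa"
# package. Key distribution (the PKI) is waved away as a simple dictionary.
import hashlib
from ecdsa import SECP256k1, SigningKey

keys = {asn: SigningKey.generate(curve=SECP256k1) for asn in (65001, 65002, 65003)}
pubkeys = {asn: sk.get_verifying_key() for asn, sk in keys.items()}

def attest(prefix, signer_asn, target_asn, prior_sigs):
    """Sign (prefix, signer, target, all earlier signatures) -- chaining the path."""
    blob = repr((prefix, signer_asn, target_asn, prior_sigs)).encode()
    return keys[signer_asn].sign(blob, hashfunc=hashlib.sha256)

def check(prefix, signer_asn, target_asn, prior_sigs, sig):
    blob = repr((prefix, signer_asn, target_asn, prior_sigs)).encode()
    return pubkeys[signer_asn].verify(sig, blob, hashfunc=hashlib.sha256)

# AS65001 originates 192.0.2.0/24 toward AS65002, which forwards it to AS65003.
prefix = "192.0.2.0/24"
sig1 = attest(prefix, 65001, 65002, [])
sig2 = attest(prefix, 65002, 65003, [sig1])
print(check(prefix, 65001, 65002, [], sig1) and check(prefix, 65002, 65003, [sig1], sig2))
```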

Touch Wipe: a question for you lawyers

Whether the police can force you to unlock your iPhone depends upon technicalities. They can't ask you for your passcode, because that would violate the 5th Amendment right against "self-incrimination". On the other hand, they can force you to press your finger on the TouchID button, or (as has been demonstrated) unlock the phone themselves using only your fingerprint.

So I propose adding a new technicality into the mix: "Touch Wipe". In addition to recording fingerprints to unlock the phone, Apple/Android should add a feature where users record fingerprints to wipe (erase) the phone. For example, I may choose my thumb to unlock, and my forefinger to wipe.

Indeed, I may record only one digit to unlock, and all nine remaining digits to wipe. Or I may even decide to record all 10 digits on both hands to wipe, and not use Touch ID at all to unlock (relying solely on the passcode).
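As a toy sketch of the enrollment idea (purely hypothetical -- neither Apple nor Google exposes anything like this today): each enrolled finger carries an action, and a wipe-finger match destroys the device keys instead of unlocking.

```python
# Hypothetical "Touch Wipe" enrollment logic; illustration only.
from enum import Enum, auto

class Action(Enum):
    UNLOCK = auto()
    WIPE = auto()

# e.g. thumb unlocks, forefinger wipes -- or enroll nine fingers to wipe
enrolled = {"right_thumb": Action.UNLOCK, "right_index": Action.WIPE}

def on_fingerprint_match(finger: str) -> str:
    action = enrolled.get(finger)
    if action is Action.UNLOCK:
        return "unlock"
    if action is Action.WIPE:
        return "erase device encryption keys"   # data unrecoverable afterward
    return "no match; stay locked"

print(on_fingerprint_match("right_index"))   # -> erase device encryption keys
```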

This now presents a problem for the police. They can't force me to unlock the phone. They can't get around that by using my fingerprints, because they might inadvertently destroy evidence.

The legal system is resilient against legal trickery such as this. If you think you've Continue reading

Errata Security 2016-04-27 17:48:00

Who's your lawyer? Insights & Wisdom via HBO's Silicon Valley (S.3, E. 1)

The company's attorney may be your friend, but they're not your lawyer.  In this guest post, friend of Errata Elizabeth Wharton (@lawyerliz) looks at the common misconception highlighted in this week's Silicon Valley episode.

 
by Elizabeth Wharton


Amidst the usual startup shenanigans and inside-valley jokes, HBO's Silicon Valley Season 3, Episode 1 contained a sharp reminder: lawyer loyalty runs with the "client"; know whether you are the client. A lawyer hired by a company has the entity as its client, not the individuals or officers of that company. If you want an attorney, then hire your own.

Silicon Valley Season 3, Episode 1 - Setting the Scene (without too many spoilers, I promise)
Upon learning of a boardroom ouster demoting him from CEO to CTO, the startup company's founder Richard storms into the meeting with two board "friends" in Continue reading

My next scan

So starting next week, running for a week, I plan on scanning for ports 0-65535 (TCP). Each probe will be a completely random selection of IP+port. The purpose is to answer the question of which ports are most commonly open.

It would take a couple of years to scan all ports across the entire Internet, so I'm not going to do that. But scanning for a week should give me a good statistical sample of roughly 1% of the total possible combinations.
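A quick check of that 1% figure, using the post's own "couple of years" estimate rather than an assumed probe rate:

```python
# Back-of-the-envelope: if the full IPv4 x TCP-port space takes ~2 years to
# cover, one week of random sampling hits roughly 1% of it.
total_probes = 2**32 * 2**16                 # every IPv4 address x every TCP port
full_scan_weeks = 2 * 52                     # "a couple years", in weeks
rate = total_probes / (full_scan_weeks * 7 * 24 * 3600)
print(f"implied rate: ~{rate / 1e6:.1f}M probes/sec")           # ~4.5M pps
print(f"one week samples ~{100 / full_scan_weeks:.2f}% of it")  # ~0.96%
```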

Specifically, the scan will open a connection and wait a few seconds for a banner. Protocols like FTP, SSH, and VNC reply first with data, before you send requests. Doing this should find such things lurking at odd ports. We know that port 22 is the most common for SSH, but what is the second most common?

Then, if I get no banner in response, I'll send an SSL "Hello" message. We know that port 443 is the most common SSL port, but what is the second most common?

In other words, by waiting for SSH, then sending SSL, I'll find SSH even if it's on the (wrong) port of 443, and I'll find SSL even if it's on port 22. And all other ports, too.
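A minimal sketch of that probe logic using only Python's standard library (an illustration of the idea, not the scanner actually being used for the survey):

```python
# Connect and wait briefly for a server-first banner (FTP/SSH/VNC style); if
# nothing arrives, reconnect and speak TLS by sending a ClientHello.
import socket
import ssl

def probe(host: str, port: int, timeout: float = 3.0) -> str:
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            banner = s.recv(256)               # SSH/FTP/VNC announce themselves first
            if banner:
                return f"banner: {banner[:40]!r}"
        except OSError:
            pass                               # silent or reset; try TLS next
    ctx = ssl.create_default_context()
    ctx.check_hostname = False                 # we only care whether TLS answers
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            with ctx.wrap_socket(s) as tls:    # this sends the ClientHello
                return f"TLS: {tls.version()}"
    except (ssl.SSLError, OSError):
        return "no banner, not TLS"

# probe("example.net", 22)  might return "banner: b'SSH-2.0-...'"
# probe("example.net", 443) might return "TLS: TLSv1.3"
```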

Continue reading

Securing BGP: A Case Study (8)

Throughout the last several months, I’ve been building a set of posts examining securing BGP as a sort of case study around protocol and/or system design. The point of this series of posts isn’t to find a way to secure BGP specifically, but rather to look at the kinds of problems we need to think about when building such a system. The interplay between technical and business requirements is wide and deep. In this post, I’m going to summarize the requirements drawn from the last seven posts in the series.

Don’t try to prove things you can’t. This might feel like a bit of an “anti-requirement,” but the point is still important. In this case, we can’t prove the path along which traffic will actually flow. We also can’t enforce policies, specifically “don’t transit this AS;” the best we can do is to provide information and let other operators make a local decision about what to follow and what not to follow. In the larger sense, it’s important to understand what can, and what can’t, be solved, or rather what the practical limits of any solution might be, as close to the beginning of the design phase as possible.

In the Continue reading

Technology Short Take #65

Welcome to Technology Short Take #65! As usual, I gathered an odd collection of links and articles from around the web on key data center technologies and trends. I hope you find something useful!

Networking

  • Michael Ryom has a nice (but short) article on using Log Insight along with a NetFlow proxy to help provide more detailed visibility into traffic flows between VMs on NSX logical networks.
  • Brent Salisbury has an article on GoBGP, a Go-based BGP implementation. BGP seems to be emerging as an early front-runner for a standards-based control plane for software networking. Couple something like GoBGP with IPVLAN L3 (see Brent’s article) and you’ve got a new model for your data center network.
  • Andy Hill has an article on doing rolling F5 upgrades using Ansible.
  • Filip Verloy has an article that discusses the integration between Nuage Networks and Fortinet.
  • This should probably go in the “Cloud Computing/Cloud Management” section, but the boundaries between areas are getting more and more blurry every day. (Thankfully, due to LASIK my vision is sharper than ever.) In any case, here’s a post by Marcos Hernandez on the use of subnet pools in OpenStack. Although Marcos’ post discusses them Continue reading

Securing BGP: A Case Study (7)

In the last post in this series on securing BGP, I considered a couple of extra questions around business problems that relate to BGP. This time, I want to consider the problem of convergence speed in light of any sort of BGP security system. The next post (to provide something of a road map) should pull the entire requirements side together into a single post, so we can begin working through some of the solutions available. Ultimately, as this is a case study, we’re after a set of tradeoffs for each solution, rather than a final decision about which solution to use.

The question we need to consider here is: should the information used to provide validation for BGP be somewhat centralized, or fully distributed? The CAP theorem tells us that there is a range of choices here, with the two extreme cases being—

  • A single copy of the database we’re using to provide validation information, which is always consistent
  • Multiple disconnected copies of the database we’re using to provide validation information, which are only intermittently consistent

Between these two extremes there is a range of choices (reducing all possibilities to these two extremes is, in fact, a misuse of the Continue reading