At CloudFlare, we spend a lot of time talking about the PoPs (Points of Presence) we have around the globe. However, on December 14th, another kind of POP came to the world: a vulnerability being exploited in the wild against the Joomla Content Management System. This is known as a zero-day attack: the vulnerability was being exploited before a patch was available, leaving site owners zero days to apply a fix. This particular vulnerability has been assigned CVE-2015-8562. Jaime Cochran and I decided to take a closer look.
In this blog post we’ll explain what the vulnerability is, give examples of actual attack payloads we’ve seen, and show how CloudFlare automatically protects Joomla users. If you are using Joomla with CloudFlare today and have our WAF enabled, you are already protected.
The Joomla Web Application Firewall rule set, which blocks this attack, is enabled by default for CloudFlare customers on a Pro plan or higher. You can find it in the Joomla section of the CloudFlare Rule Set in the WAF Dashboard.
Joomla is an open source Content Management System which allows you to build web applications and control every aspect of the content of your Continue reading
In a previous post we described our work on a new netmap mode called single-rx-queue.
After submitting the pull request, the netmap maintainers told us that the patch was interesting, but they would prefer something more configurable instead of a tailored custom mode.
After an exchange of ideas and some more work, our patch has just been merged into mainline netmap.
Before our patch, netmap was an all-or-nothing deal: there was no way to put a network adapter partially in netmap mode. All of the queues had to be detached from the host network stack. Even the netmap mode called “single ring pair” didn't help.
Our final patch is extended and more generic, while still supporting the simple functionality of our original single-rx-queue mode.
First we modified netmap to leave queues that are not explicitly requested in netmap mode attached to the host stack. In this way, if a user requests a single pair of rings (for example using nm_open("netmap:eth0-4")), it will get a reference to both RX and TX rings number 4, while the other rings remain attached to the kernel stack.
But since the NIC is Continue reading
At first glance, the potential performance improvement of HTTP/2 over HTTP/1.1 on our demo page may seem a bit hard to believe. So, we put together a technical explanation of how this demo actually works. We’d also like to credit the Gophertiles demo, which served as a basis for our own HTTP/2 demo.
A web page can only be served over either HTTP/1.1 or HTTP/2—mixing protocols is not allowed. Our demo page is HTTP/2-enabled, so there’s no way to load HTTP/1.1 content directly on the same page. Inline frames (iframes) can be used to solve this issue. We embedded two iframes on our demo page, both containing the same source code. The key difference is that one iframe loads over an HTTP/1.1 CDN while the other loads over an HTTP/2 CDN.
We chose Amazon CloudFront for the HTTP/1.1 CDN because it can only serve content over HTTP/1.1. For the HTTP/2 CDN, we’re using our own HTTP/2-enabled network. You can take a look at the individual HTTP/1.1 and HTTP/2 iframe content, which should have similar load times to the side-by-side example on our demo page.
So, what is contained in Continue reading
HTTP/2 changes the way web developers optimize their websites. In HTTP/1.1, it’s become common practice to eke out an extra 5% of page load speed by hacking away at your TCP connections and HTTP requests with techniques like spriting, inlining, domain sharding, and concatenation.
Life’s a little bit easier in HTTP/2. It gives the typical website a 30% performance gain without a complicated build and deploy process. In this article, we’ll discuss the new best practices for website optimization in HTTP/2.
Most of the website optimization techniques in HTTP/1.1 revolved around minimizing the number of HTTP requests to an origin server. A browser can only open a limited number of simultaneous TCP connections to an origin, and downloading assets over each of those connections is a serial process: the response for one asset has to be returned before the next one can be sent. This is called head-of-line blocking.
As a result, web developers began squeezing as many assets as they could into a single connection and finding other ways to trick browsers into avoiding head-of-line blocking. In HTTP/2, some of these practices can actually hurt page load times.
After December 31, 2015, SSL certificates that use the SHA-1 hash algorithm for their signature will be declared technology non grata on the modern Internet. Google's Chrome browser has already begun displaying a warning for SHA-1 based certs that expire after 2015. Other browsers are mirroring Google and, over the course of 2016, will begin issuing warnings and eventually completely distrust connections to sites using SHA-1 signed certs. And, starting January 1, 2016, you will no longer be able to get a new SHA-1 certificate from most certificate authorities.
For the most part, that's a good thing. Certificate signatures that are prohibitively difficult to forge are part of what keeps encryption systems secure. As computers get faster, the risk grows that, for any given hashing algorithm, an attacker can forge a certificate with the same signature. If an attacker can forge a certificate, they could potentially impersonate the identity of a real site and intercept its encrypted traffic or masquerade as that site.
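If you want to check whether a site you operate still serves a SHA-1 signed certificate, one quick way (not from the original post) is to inspect the leaf certificate's signature algorithm. Here is a minimal sketch in Go; example.com:443 is a placeholder for the host you actually want to test:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"strings"
)

func main() {
	// example.com is a placeholder; substitute the host you want to check.
	conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{})
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// The leaf certificate is the first one presented by the server.
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("signature algorithm:", cert.SignatureAlgorithm)
	if strings.Contains(cert.SignatureAlgorithm.String(), "SHA1") {
		fmt.Println("this certificate is SHA-1 signed and will soon be distrusted")
	}
}
```

Algorithm names such as "SHA1-RSA" or "ECDSA-SHA1" indicate a SHA-1 signature; "SHA256-RSA" and newer are fine.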
This isn't the first time we've been through this exercise. The original hashing algorithm used for most certificate signatures in the early days of the web was MD5. In 2008, researchers demonstrated they were able to Continue reading
Last month, CloudFlare participated in the tenth annual Internet Governance Forum (IGF) in Joao Pessoa, Brazil. Since it was launched at the United Nations’ World Summit on the Information Society (WSIS) in 2005, the IGF has provided valuable opportunities for thousands of representatives of non-profit groups, businesses, governments, and others to debate decisions that will affect the future of the Internet. While the Forum does not negotiate any treaties or other agreements, what participants learn there can influence corporate strategies, standards proposals, and national government policies. Even more importantly, discussions in the hallways (or in the bar or on the beach) can lead to new projects, new thinking, and new collaborations.
The range of issues and the diversity of speakers on panels and at the podium were even greater this year than at previous IGFs. Issues ranged from the need for strong encryption to whether net neutrality regulations are needed—from countering the abuse of women online to how to foster deployment of IPv6 and Internet Exchange Points. You can watch all 167 IGF sessions, which were webcast and archived. I represent CloudFlare as a member of the Multistakeholder Advisory Group (MAG), which organizes the IGF program. Together with the other MAG Continue reading
With CloudFlare's release of HTTP/2 for all our customers, the web suddenly has a lot of HTTP/2 connections. To get the most out of HTTP/2 you'll want to be using an up-to-date web browser (all the major browsers support HTTP/2).
But there are some non-browser tools that come in handy when working with HTTP/2. This blog post starts with a useful browser add-on, and then delves into command-line tools, load testing, conformance verification, development libraries and packet decoding for HTTP/2.
If you know of something that I've missed please write a comment.
For Google Chrome there's a handy HTTP/2 and SPDY Indicator extension that adds a colored lightning bolt to the browser bar showing the protocol being used when a web page is viewed.
The blue lightning bolt shown here indicates that the CloudFlare home page was served using HTTP/2:
A green lightning bolt indicates the site was served using SPDY and gives the SPDY version number. In this case SPDY/3.1:
A grey lightning bolt indicates that neither HTTP/2 nor SPDY was used. Here the web page was served using HTTP/1.1.
There's a similar extension for Firefox.
There's also a handy online Continue reading
Why choose, if you can have both? Today CloudFlare is introducing HTTP/2 support for all customers using SSL/TLS connections, while still supporting SPDY. There is no need to make a decision between SPDY or HTTP/2. Both are automatically there for you and your customers.
If you are a customer on the Free or Pro plan, there is no need to do anything at all. Both SPDY and HTTP/2 are already enabled for you. With this improvement, your website’s audience will always use the fastest protocol version when accessing your site over TLS/SSL.
Customers on Business and Enterprise plans may enable HTTP/2 within the "Network" application of the CloudFlare Dashboard.
In February 2015, the IETF’s steering group approved the HTTP/2 and associated HPACK specifications for publication as standards-track RFCs.
After more than 15 years, the Hypertext Transfer Protocol (HTTP) received a long-overdue upgrade. HTTP/2 is largely based on Google's experimental SPDY protocol, which was first announced in November 2009 as an internal project to increase the speed of the web.
The main focus of both SPDY and HTTP/2 is on performance, especially latency as perceived by the end-user while using Continue reading
Grüetzi Zürich, our 5th point of presence (PoP) to be announced this week, and 69th globally! Located at the northern tip of Lake Zürich in Switzerland, the city of Zürich, often referred to as "Downtown Switzerland," is the largest city in the country. Following this expansion, traffic from Switzerland's seven million internet users to sites and apps using CloudFlare is now mere milliseconds away. Although best known to some for its chocolate and banks, Switzerland is home to many of the most significant developments prefacing the modern internet.
It was in 1989 that Tim Berners-Lee, a British scientist at CERN, the large particle physics laboratory near Geneva, Switzerland, invented the World Wide Web (WWW). Tim laid out his vision to meet the demand for automatic information-sharing between scientists in universities and institutes around the world in a memo titled, "Information Management: a Proposal". Amusingly, his initial proposal wasn't immediately accepted. In fact, his boss at the time noted that the proposal was, "vague but exciting" on the cover page.
The first website at CERN—and in the world—was dedicated to the Continue reading
India is home to 400 million Internet users, second only to China, and will add more new users this year than any other country in the world. CloudFlare protects and accelerates 4 million websites, mobile apps and APIs, and is trusted by over 10,000 new customers each day. Combine these forces, and we are positioned to connect hundreds of millions of Indian users with the millions of internet applications they use each day.
Today, we accelerate this momentum with the announcement of three new points of presence (PoPs) in Mumbai, Chennai and New Delhi. These new sites represent the 66th, 67th and 68th data centers respectively across our global network.
The beginnings of the “internet” in India as we know it started in 1986 when the country launched ERNET (the Education and Research Network). Six years later, a 64 Kbps digital leased line was commissioned from the National Centre for Software Technology in Mumbai to UUNet in Virginia to connect India with the rest of the internet. By comparison, a single port on our router in each of Mumbai, Chennai and New Delhi has nearly 160,000 times the capacity today.
The pace of progress has Continue reading
A customer reported an unusual problem with our CloudFlare CDN: our servers were responding to some HTTP requests slowly. Extremely slowly. 30 seconds slowly. This happened very rarely and wasn't easily reproducible. To make things worse, none of our usual monitoring had caught the problem. At the application layer everything was fine: our NGINX servers were not reporting any long running requests.
Time to send in The Wolf.
He solves problems.
First, we attempted to reproduce what the customer reported—long HTTP responses. Here is a chart of test HTTP request times measured against our CDN:
We ran thousands of HTTP queries against one server over a couple of hours. Almost all the requests finished in milliseconds but, as you can clearly see, 5 requests out of thousands took as long as 1000ms to finish. When debugging network problems, delays of 1s or 30s are very characteristic: they may indicate packet loss, since SYN packets are usually retransmitted at 1s, 3s, 7s, 15s, and 31s.
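A measurement loop along these lines is easy to reproduce. This is not the script we used, just a minimal sketch in Go against a hypothetical test URL, flagging outliers like the ones in the chart:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	for i := 0; i < 1000; i++ {
		start := time.Now()
		resp, err := http.Get("https://example.com/") // placeholder test URL
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		resp.Body.Close()
		elapsed := time.Since(start)

		// Durations clustering near 1s, 3s or 7s hint at SYN retransmissions.
		if elapsed > 500*time.Millisecond {
			fmt.Printf("slow request #%d: %v\n", i, elapsed)
		}
	}
}
```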
At first we thought the spikes in HTTP load times might indicate some sort of network problem. To be sure we ran ICMP pings against two IPs over many Continue reading
To get the week started it's our distinct pleasure to introduce CloudFlare's latest PoP (point of presence) in Copenhagen, Denmark. Our Copenhagen data center extends the CloudFlare network to 65 PoPs across 34 countries, with 17 in Europe alone. The CloudFlare network, including all of the Internet applications and content of our users, is now delivered with a median latency of under 40ms throughout the entire continent—by comparison, it takes 300-400ms to blink one's eyes!
As can be seen above, traffic has already started to reach Copenhagen, with steady increases over the course of the day (all times in UTC). The new site is also already mitigating cyber attacks launched against our customers. The spike in traffic around 08:46 UTC is a modest portion of a globally distributed denial of service (DDoS) attack targeted at CloudFlare. By distributing the attack across an ever growing footprint of data centers, mitigation is made easy (and our site reliability engineers can sleep soundly!).
In December 2014 we announced our intention to launch one data center per week throughout 2015. It's an ambitious goal, but we're well on Continue reading
CloudFlare launched just five years ago with the goal of building a better Internet. That’s why we are excited to announce that beginning today, anyone on CloudFlare can secure their traffic with DNSSEC in just one simple step.
This comes one year after we made SSL available for free and, in one week, more than doubled the size of the encrypted web. Today we will do the same with DNSSEC: this year, we’ll double the size of the DNSSEC-enabled web, bringing DNSSEC to millions of websites, for free.
If DNS is the phone book of the Internet, DNSSEC is the unspoofable caller ID. DNSSEC ensures that a website’s traffic is safely directed to the correct servers, so that a connection to a website is not intercepted by a man-in-the-middle.
Every website visit begins with a DNS query. When I visit cloudflare.com, my browser first needs to find the IP address:
cloudflare.com. 272 IN A 198.41.215.163
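For illustration (this is not from the original post), the same A-record lookup can be reproduced with Go's standard resolver:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Resolve cloudflare.com exactly as the browser's first step does.
	ips, err := net.LookupIP("cloudflare.com")
	if err != nil {
		panic(err)
	}
	for _, ip := range ips {
		fmt.Println(ip)
	}
}
```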
When DNS was invented in 1983, the Internet was used by only a handful of professors and researchers, and no one imagined that there could be foul play. Thus, DNS relies on Continue reading
Three years and 46 data centers later our expansion returns to the United States. Phoenix, the latest addition to the CloudFlare network, is our 10th point of presence in North America, and the start of our effort to further regionalize traffic across the continent. This means faster page loads and transaction speeds for your sites and applications, as well as for the 6 million Internet users throughout the Southwestern US that use them.
The vast majority of Internet traffic in the US is exchanged in only a small handful of cities: Los Angeles, the San Francisco Bay Area, Dallas, Chicago, Miami, Ashburn (Virginia) and New York. These locations evolved into key interconnection points largely as a result of their status as population and economic centers. However, if you're one of the 236 million Americans that live outside of these metro areas, you have to hike quite a bit further to access your favorite content on the Internet.
To illustrate this, we measured the level of local interconnection between a handful of our Tier 1 Internet providers—NTT, TeliaSonera, Tata Communications and Cogent—in different metro areas. For the uninitiated, Tier 1 networks are the group of networks that Continue reading
The Payment Card Industry Data Security Standard (PCI DSS) is a global financial information security standard that keeps credit card holders safe. It ensures that any company processing credit card transactions adheres to the highest technical standards.
PCI certification has several levels. Level one (the highest level) is reserved for those companies that handle the greatest numbers of credit cards. Companies at level one PCI compliance are subject to the most stringent checks.
CloudFlare’s mission leads it to provide security for some of the most important companies in the world. This is why CloudFlare chose to be audited as a level one service provider. By adhering to PCI’s rigorous financial security controls, CloudFlare ensures that security is held to the highest standard and that those controls are validated independently by a recognised body.
If you are interested in learning more, see these details about the Payment Card Industry Data Security Standard.
This year’s update from PCI 2.0 to 3.1 was long overdue. PCI DSS 2.0 was issued in October 2010, and the information security threat landscape does not stand still—especially when it comes to industries that deal with financial payments or credit cards. New attacks are almost Continue reading
Hi, I'm Filippo and today I managed to surprise myself! (And not in a good way.)
I'm developing a new module ("filter" as we call them) for RRDNS, CloudFlare's Go DNS server. It's a rewrite of the authoritative module, the one that adds the IP addresses to DNS answers.
It has a table of CloudFlare IPs that looks like this:
```go
type IPMap struct {
	sync.RWMutex
	M map[string][]net.IP
}
```
It's a global filter attribute:
```go
type V2Filter struct {
	name    string
	IPTable *IPMap
	// [...]
}
```
The table changes often, so a background goroutine periodically reloads it from our distributed key-value store: it acquires the lock (f.IPTable.Lock()), updates the table, and releases the lock (f.IPTable.Unlock()). This happens every 5 minutes.
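A minimal sketch of that writer goroutine, assuming the types above and "time" imported; loadFromKV is a hypothetical stand-in for the real key-value store client:

```go
// reloadLoop periodically replaces the IP table, as described above.
// loadFromKV is a hypothetical helper returning map[string][]net.IP.
func (f *V2Filter) reloadLoop() {
	for {
		newTable := loadFromKV()
		f.IPTable.Lock() // writer lock: blocks until all readers release RLock
		f.IPTable.M = newTable
		f.IPTable.Unlock()
		time.Sleep(5 * time.Minute)
	}
}
```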
Everything worked in tests, including multiple and concurrent requests.
Today we deployed to an off-production test machine and everything worked. For a few minutes. Then RRDNS stopped answering queries for the beta domains served by the new code.
What. That worked on my laptop™.
Here's the IPTable consumer function. You can probably spot the bug.
func (f *V2Filter) getCFAddr(...) (result []dns.RR) {
f. Continue reading
I’m sure some of you are scratching your head right about now wondering why I would join an Internet security and optimization company. But, Ben, this is not even close to your passion: operating systems.
I had the same reaction when I first saw the CloudFlare website. I wasn’t even sure it made sense for me to go interview here. After taking a closer look, however, I realized that it would be the perfect new home for me. Take a look at this page for a brief introduction to what CloudFlare does and how we do it.
If you know me, you know that I'm a sucker for distributed systems. I fall for a hard computer science problem every time. So, it shouldn’t be a surprise to you that CloudFlare’s John Graham-Cumming had me at “hello” when he nonchalantly described one of the company's projects: a globally distributed key value store with sub-second consistency guarantees! Ho hum! No big deal.
As the interview process progressed, the team graciously spent several hours walking me through the architecture as well as future plans and product roadmaps. These discussions and email exchanges were frequently interrupted by my cries of protest: Continue reading
Compression is one of the most important tools CloudFlare has to accelerate website performance. Compressed content takes less time to transfer, and consequently reduces load times. On expensive mobile data plans, compression even saves money for consumers. However, compression is not free: it is one of the most compute-expensive operations our servers perform, and the better the compression ratio we want, the more effort we have to spend.
The most popular compression format on the web is gzip. We have put a great deal of effort into improving the performance of gzip compression, so we can compress on the fly with fewer CPU cycles. Recently a potential replacement for gzip, called Brotli, was announced by Google. As early adopters of many technologies, we at CloudFlare wanted to see for ourselves whether it is as good as claimed.
This post takes a look at a bit of history behind gzip and Brotli, followed by a performance comparison.
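As a toy illustration of the ratio-versus-effort trade-off (not the benchmark from this post), Go's standard compress/gzip package lets you compare compression levels directly; input.html is a placeholder for whatever file you want to test:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"os"
)

func main() {
	// input.html is a placeholder for the file you want to test.
	data, err := os.ReadFile("input.html")
	if err != nil {
		panic(err)
	}
	for _, level := range []int{gzip.BestSpeed, gzip.DefaultCompression, gzip.BestCompression} {
		var buf bytes.Buffer
		w, _ := gzip.NewWriterLevel(&buf, level) // error only for invalid levels
		w.Write(data)
		w.Close()
		fmt.Printf("level %d: %d -> %d bytes\n", level, len(data), buf.Len())
	}
}
```

Higher levels generally shrink the output further but burn more CPU per byte, which is exactly the trade-off discussed below.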
Many popular lossless compression algorithms rely on LZ77 and Huffman coding, so it’s important to have a basic understanding of these two techniques before getting into gzip or Brotli.
LZ77 is a simple technique developed Continue reading
Since January, CloudFlare has been running a small, private beta for DNSSEC. Starting today, the DNSSEC beta is open for everyone. To request access, email [email protected].
DNS is the system that lets your browser know which web server to connect to when you request to visit a website. It’s the underlying backbone of the usable Internet, and yet it is vulnerable to man-in-the-middle attacks.
In DNS, an attacker sitting in the middle of your connection to the internet can tell your browser to connect to any web server they’d like. Browsers trust any DNS records they receive as a response to a DNS query, because DNS, invented in 1983 before the public consumption of the Internet, does not perform any authentication.
There is a solution. It’s called DNSSEC and it adds cryptographic hashes and signatures for authenticating DNS records. You can read more about DNSSEC and how it works in a previous blog post.
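To see whether a resolver actually validated a response, you can set the DNSSEC OK (DO) bit on a query and check the AD (authenticated data) flag in the reply. Here is a minimal sketch using the third-party github.com/miekg/dns library, which is not part of the post; the resolver address is just an example of a validating resolver:

```go
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	m := new(dns.Msg)
	m.SetQuestion(dns.Fqdn("cloudflare.com"), dns.TypeA)
	m.SetEdns0(4096, true) // request DNSSEC records by setting the DO bit

	c := new(dns.Client)
	r, _, err := c.Exchange(m, "8.8.8.8:53") // example validating resolver
	if err != nil {
		panic(err)
	}
	// AD is set only when the resolver validated the DNSSEC chain of trust.
	fmt.Println("authenticated data (AD) flag:", r.AuthenticatedData)
}
```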
The DNSSEC beta is open to all websites that use CloudFlare for DNS. If you want to be a part of our beta and be one of the first CloudFlare websites with DNSSEC, email us for beta Continue reading
Recently, a new brute force attack method for WordPress instances was identified by Sucuri. This latest technique allows attackers to try a large number of WordPress username and password login combinations in a single HTTP request.
The vulnerability can easily be abused by a simple script to try a significant number of username and password combinations with a relatively small number of HTTP requests. The following diagram shows a 4-to-1 ratio of login attempts to HTTP requests, but this can trivially be expanded to a thousand logins per request.
This form of brute force attack is harder to detect, since you won’t necessarily see a flood of requests. Fortunately, all CloudFlare paid customers have the option to enable a Web Application Firewall ruleset to stop this new attack method.
To understand the vulnerability, it’s important to understand the basics of the XML remote procedure call protocol (XML-RPC).
XML-RPC uses XML encoding over HTTP to provide a remote procedure call protocol. It’s commonly used to execute various functions in a WordPress instance for APIs and other automated tasks. Requests that modify, manipulate, or view data using XML-RPC require user credentials with sufficient permissions.
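As a quick illustration of the protocol's shape (this is not the post's own example), a bare-bones call such as system.listMethods can be sent from Go like this; example.com/xmlrpc.php is a placeholder endpoint:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// system.listMethods takes no parameters and returns the methods the
	// endpoint exposes; WordPress serves XML-RPC at /xmlrpc.php.
	payload := `<?xml version="1.0"?>
<methodCall>
  <methodName>system.listMethods</methodName>
  <params></params>
</methodCall>`

	resp, err := http.Post("https://example.com/xmlrpc.php", "text/xml",
		bytes.NewBufferString(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```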
Here is an example that requests a list Continue reading