In a recent post we discussed how we have been adding resilience to our network.
The strength of the Internet is its ability to interconnect all sorts of networks — big data centers, e-commerce websites at small hosting companies, Internet Service Providers (ISPs), and Content Delivery Networks (CDNs) — to name a few. These networks interconnect in several ways: directly, over a dedicated physical fiber cable; through a common interconnection platform called an Internet Exchange (IXP); or simply by being on the Internet, reaching each other through intermediaries called transit providers.
The Internet is like the network of roads across a country and navigating roads means answering questions like “How do I get from Atlanta to Boise?” The Internet equivalent of that question is asking how to reach one network from another. For example, as you are reading this on the Cloudflare blog, your web browser is connected to your ISP and packets from your computer found their way across the Internet to Cloudflare’s blog server.
Figuring out the route between networks is accomplished through a protocol designed 25 years ago (on two napkins) called BGP.
BGP allows interconnections between Continue reading
Come join us at Cloudflare HQ in San Francisco on Tuesday, November 22 for another cryptography meetup. We had such a great time at the last one, we decided to host another.
We’ll start the evening at 6:00 p.m. with time for networking, followed by short talks from leading experts starting at 6:30 p.m. Pizza and beer are provided! RSVP here.
Here are the confirmed speakers:
Emily Stark is a software engineer on the Google Chrome security team, where she focuses on making TLS more usable and secure. She spends lots of time analyzing field data about the HTTPS ecosystem and improving web platform features like Referrer Policy and Content Security Policy that help developers migrate their sites to HTTPS. She has also worked on the DevTools security panel and the browser plumbing that supports other security UI surfaces like the omnibox. (That green lock icon is more complicated than you'd think!)
Previously, she was a core developer at Meteor Development Group, where she worked on web framework security and internal infrastructure, and a graduate student researching client-side cryptography in web browsers. Emily has a master's degree from MIT and a bachelor's degree from Stanford, Continue reading
Last Friday the popular DNS service Dyn suffered three waves of DDoS attacks that affected users first on the East Coast of the US, and later users worldwide. Popular websites, some of which are also Cloudflare customers, were inaccessible. Although Cloudflare was not attacked, joint Dyn/Cloudflare customers were affected.
Almost as soon as Dyn came under attack we noticed a sudden jump in DNS errors on our edge machines and alerted our SRE and support teams that Dyn was in trouble. Support was ready to help joint customers and we began looking in detail at the effect the Dyn outage was having on our systems.
An immediate concern internally was that since our DNS servers were unable to reach Dyn, they would be consuming resources waiting on timeouts and retrying. The first question I asked the DNS team was: “Are we seeing increased DNS response latency?”, rapidly followed by “If this gets worse, are we likely to?” Happily, the response to both those questions (after the team analyzed the situation) was no.
CC BY-SA 2.0 image by tracyshaun
However, that didn’t mean we had nothing to do. Operating a large-scale system like Cloudflare that Continue reading
The last few weeks have seen several high-profile outages in legacy DNS and DDoS-mitigation services due to large scale attacks. Cloudflare's customers have, understandably, asked how we are positioned to handle similar attacks.
While there are limits to any service, including Cloudflare, we are well architected to withstand these recent attacks and continue to scale to stop the larger attacks that will inevitably come. We are, multiple times per day, mitigating the very botnets that have been in the news. Based on the attack data that has been released publicly, and what has been shared with us privately, we have been successfully mitigating attacks of a similar scale and type without customer outages.
I thought it was a good time to talk about how Cloudflare's architecture is different from most legacy DNS and DDoS-mitigation services, and how that's helped us keep our customers online in the face of these extremely high-volume attacks.
Before delving into our architecture, it's worth taking a second to think about another analogous technology problem that is better understood: scaling databases. From the mid-1980s, when relational databases started taking off, through the early 2000s, the way companies thought of scaling Continue reading
Today there is an ongoing, large-scale Denial-of-Service attack directed against Dyn DNS. While Cloudflare services are operating normally, if you are using both Cloudflare and Dyn services, your website may be affected.
Specifically, if you are using CNAME records which point to a zone hosted on Dyn, our DNS queries directed to Dyn might fail, making your website unavailable and presenting a “1001” error message.
Some popular services that might rely on Dyn for part of their operations include GitHub Pages, Heroku, Shopify and AWS.
As a possible workaround, you might be able to update your Cloudflare DNS records from CNAMEs (referring to Dyn-hosted records) to A/AAAA records specifying the origin IP of your website. This will allow Cloudflare to reach your origin without the need for an external DNS lookup.
Note that if you use different origin IP addresses, for example based on the visitor’s geographic location, you may lose some of that functionality by using plain A/AAAA records. We recommend that you provide addresses for several of your locations, so that load will be shared among them.
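If you want to script that change, here is a minimal sketch using the Cloudflare v4 API (the zone ID, record ID, credentials, hostname, and origin IP below are all placeholders):

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Replace the CNAME with an A record pointing at your origin IP.
	// ZONE_ID, RECORD_ID, the email, key, name, and IP are placeholders.
	body := []byte(`{"type":"A","name":"www.example.com","content":"203.0.113.10","ttl":1,"proxied":true}`)

	req, err := http.NewRequest("PUT",
		"https://api.cloudflare.com/client/v4/zones/ZONE_ID/dns_records/RECORD_ID",
		bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("X-Auth-Email", "user@example.com")
	req.Header.Set("X-Auth-Key", "API_KEY")
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```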
Customers with a CNAME setup (which means Cloudflare is not configured in your domain NS records) where the main Continue reading
One of the base principles of cryptography is that you can't just encrypt multiple messages with the same key. At the very least, what will happen is that two messages that have identical plaintext will also have identical ciphertext, which is a dangerous leak. (This is similar to why you can't encrypt blocks with ECB.)
If you think about it, a pure encryption function is just like any other pure computer function: deterministic. Given the same set of inputs (key and message) it will always return the same output (the encrypted message). And we don't want an attacker to be able to tell that two encrypted messages came from the same plaintext.
The solution is the use of IVs (Initialization Vectors) or nonces (numbers used once). These are byte strings that differ for each encrypted message. They are the source of non-determinism needed to make duplicates indistinguishable. They are usually not secret, and are commonly distributed prepended to the ciphertext, since they are necessary for decryption.
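To make this concrete, here is a minimal sketch in Go (my illustration, using the standard library's AES-GCM): encrypting the same message twice under the same key yields different ciphertexts, because a fresh random nonce is generated and prepended each time.

```go
package main

import (
	"bytes"
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// encrypt seals msg with AES-256-GCM and prepends the random nonce
// to the ciphertext, since the nonce is needed for decryption.
func encrypt(key, msg []byte) []byte {
	block, _ := aes.NewCipher(key)
	gcm, _ := cipher.NewGCM(block)
	nonce := make([]byte, gcm.NonceSize())
	rand.Read(nonce)
	// Seal appends the ciphertext to the nonce slice.
	return gcm.Seal(nonce, nonce, msg, nil)
}

func main() {
	key := make([]byte, 32) // AES-256 key
	rand.Read(key)

	a := encrypt(key, []byte("attack at dawn"))
	b := encrypt(key, []byte("attack at dawn"))
	// Identical plaintexts, different ciphertexts: the nonce is the
	// source of non-determinism.
	fmt.Println(bytes.Equal(a, b)) // false
}
```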
The distinction between IVs and nonces is controversial and not binary. Different encryption schemes require different properties to be secure: some just need them to never repeat, in which case we commonly Continue reading
Over the last few weeks we've seen DDoS attacks hitting our systems that show that attackers have switched to new, large-scale methods of bringing down web applications. They appear to come from the Mirai botnet (and its relatives), which were responsible for the large attacks against Brian Krebs.
Our automatic DDoS mitigation systems have been handling these attacks, but we thought it would be interesting to publish some of the details of what we are seeing. In this article we'll share data on two attacks, which are perfect examples of the new trends in DDoS.
In the past we've written extensively about volumetric DDoS attacks and how to mitigate them. The Mirai attacks are distinguished by their heavy use of L7 (i.e. HTTP) attacks as opposed to the more familiar SYN floods, ACK floods, and NTP and DNS reflection attacks.
Many DDoS mitigation systems are tuned to handle volumetric L3/4 attacks; in this instance, attackers have switched to L7 attacks in an attempt to knock web applications offline.
Seeing the move towards L7 DDoS attacks, we put in place a new system that recognizes and blocks these attacks as they happen. The Continue reading
Over the last six years, we’ve built the tooling, infrastructure and expertise to run a DNS network that handles our scale: we’ve answered a few million DNS queries in the few seconds since you started reading this.
DNS is the backbone of the Internet. Every email, website visit, and API call ultimately begins with a DNS lookup. The Internet is built on DNS, so every hosting company, registrar, TLD operator, and cloud provider must be able to run reliable DNS.
Last year CloudFlare launched Virtual DNS, providing DDoS mitigation and a strong caching layer of 100 global data centers to those running DNS infrastructure.
Today we’re expanding that offering with two new features for an extra layer of reliability: Serve Stale and DNS Rate Limiting.
Virtual DNS sits in front of your DNS infrastructure. When DNS resolvers look up answers on your authoritative DNS, the query first goes to CloudFlare Virtual DNS. If we have the answer in cache, we serve it from there; otherwise we reach out to your nameservers to get the answer and respond to the DNS resolver.
Even if your DNS servers are down, Virtual DNS can now answer on your behalf Continue reading
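That fallback behavior can be sketched in a few lines of Go (my own illustration, not the actual Virtual DNS implementation): keep answers around past their TTL, and serve the stale copy only when the upstream nameservers can't be reached.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// entry holds a cached DNS answer and the time it expires.
type entry struct {
	answer  []byte
	expires time.Time
}

// Cache sketches serve-stale behavior: fresh answers come from cache,
// and expired ones are retained as a fallback for when the
// authoritative nameservers are unreachable.
type Cache struct {
	mu      sync.Mutex
	entries map[string]entry
	ttl     time.Duration
}

func (c *Cache) Resolve(name string, upstream func(string) ([]byte, error)) ([]byte, error) {
	c.mu.Lock()
	e, ok := c.entries[name]
	c.mu.Unlock()

	if ok && time.Now().Before(e.expires) {
		return e.answer, nil // fresh cache hit
	}

	ans, err := upstream(name)
	if err != nil {
		if ok {
			return e.answer, nil // upstream down: serve the stale answer
		}
		return nil, err // nothing cached at all; the failure propagates
	}

	c.mu.Lock()
	c.entries[name] = entry{answer: ans, expires: time.Now().Add(c.ttl)}
	c.mu.Unlock()
	return ans, nil
}

func main() {
	c := &Cache{entries: make(map[string]entry), ttl: 5 * time.Minute}
	ans, err := c.Resolve("example.com", func(string) ([]byte, error) {
		return []byte("203.0.113.10"), nil // stand-in for a real upstream query
	})
	fmt.Println(string(ans), err)
}
```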
Cloudflare has certified with the U.S. Department of Commerce under the new EU-U.S. Privacy Shield framework.
This summer, the U.S. Department of Commerce began accepting submissions to certify under the EU-U.S. Privacy Shield framework, a new mechanism by which European companies can transfer personal data to their counterparts in the United States. By certifying under Privacy Shield, Cloudflare is taking a strong and proactive stance toward further protecting the security and privacy of our customers.
Since 1998, following the European Union’s implementation of EU Data Protection Directive 95/46/EC, companies in Europe wishing to transfer the personal data of Europeans overseas have had to ensure that the recipient of such data provides an adequate level of protection when handling this information. Until last October, American companies were able to certify under the U.S.-EU Safe Harbor Accord, which provided a legal means to accept European personal data in exchange for assurances of privacy commitments and the enactment of specific internal controls.
However, in October 2015, after the Safe Harbor had been in effect for roughly fifteen years, the European Court of Justice overturned it and declared that a new mechanism for transatlantic data transfers would need Continue reading
CC BY 2.0 image by Brian Hefele
Cloudflare helps customers control their own traffic at the edge. One of two products that we introduced to empower customers to do so is Cloudflare Traffic Control.
Traffic Control allows a customer to rate limit, shape or block traffic based on the rate of requests per client IP address, cookie, authentication token, or other attributes of the request. Traffic can be controlled on a per-URI (with wildcards for greater flexibility) basis giving pinpoint control over a website, application, or API.
Cloudflare has been dogfooding Traffic Control to add more granular controls against Layer 7 DoS and brute-force attacks. For example, we've experienced attacks on cloudflare.com from more than 4,000 IP addresses sending 600,000+ requests in 5 minutes to the same URL but with random parameters. These types of attacks send large volumes of HTTP requests intended to bring down our site or to crack login passwords.
Traffic Control protects websites and APIs from similar types of bad traffic. By leveraging our massive network, we are able to process and enforce rate limiting near the client, shielding the customer's application from unnecessary load.
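As a rough sketch of the idea (my illustration, not the production system), a sliding-window limiter keyed by client IP, cookie, or authentication token can fit in a few dozen lines:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Limiter sketches per-key rate limiting: each key (a client IP,
// cookie, or authentication token) gets a budget of requests per
// sliding window.
type Limiter struct {
	mu     sync.Mutex
	hits   map[string][]time.Time
	limit  int
	window time.Duration
}

func NewLimiter(limit int, window time.Duration) *Limiter {
	return &Limiter{hits: make(map[string][]time.Time), limit: limit, window: window}
}

// Allow records a request for key and reports whether it is still
// within budget for the current window.
func (l *Limiter) Allow(key string) bool {
	l.mu.Lock()
	defer l.mu.Unlock()

	// Drop timestamps that have aged out of the window.
	cutoff := time.Now().Add(-l.window)
	recent := l.hits[key][:0]
	for _, t := range l.hits[key] {
		if t.After(cutoff) {
			recent = append(recent, t)
		}
	}
	if len(recent) >= l.limit {
		l.hits[key] = recent
		return false // over budget: block, throttle, or challenge
	}
	l.hits[key] = append(recent, time.Now())
	return true
}

func main() {
	l := NewLimiter(3, time.Minute)
	for i := 0; i < 5; i++ {
		fmt.Println(l.Allow("203.0.113.7")) // true, true, true, false, false
	}
}
```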
To make this more concrete, let's look at a Continue reading
When we launched Universal SSL in September 2014 we eliminated the costly and confusing process of securing a website or application with SSL, and replaced it with one free step: sign up for Cloudflare.
When you complete the sign-up process, we batch your domain together with a few dozen other recently signed-up domains, and fire off a request to one of our Certificate Authority (CA) partners. The CA then sends us back a shared certificate covering the root (e.g. example.com) and top-level wildcard (e.g. *.example.com) of your domain, along with the hostnames of the other customers in the request. We then package this shared certificate with its encrypted private key and distribute it to our data centers around the world, ensuring that your visitors’ HTTPS sessions are speedy no matter where they originate.
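You can observe this batching yourself. A minimal sketch (the hostname is a placeholder; point it at any domain served from a shared Universal SSL certificate) that prints the Subject Alternative Names on the certificate the server presents:

```go
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// The hostname is a placeholder; substitute a domain on a
	// shared Universal SSL certificate.
	conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{})
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// On a shared certificate, DNSNames lists the root and wildcard
	// entries for every customer batched into that certificate.
	cert := conn.ConnectionState().PeerCertificates[0]
	for _, name := range cert.DNSNames {
		fmt.Println(name)
	}
}
```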
Since that process was created, we have used it to secure millions of domains with free Universal SSL certificates and helped make the Internet a faster and more secure place.
But along the way we heard from customers who wanted more control over the certificates used for their domains. They want Continue reading
Cloudflare's investment in building a large global network protects our customers from DDoS attacks, secures them with our Web Application Firewall and Universal SSL, and improves performance through our CDN and the many network-level optimizations we're constantly iterating on.
Building on these products, we just introduced Cloudflare Traffic. To explain the benefits, we'll dive into the nitty-gritty details of how the monitoring and load-balancing features of Traffic Manager can be configured, and how we use them on our own website to reduce visitor latency and improve redundancy across continents. We'll do a similar post on Traffic Control soon.
We're also kicking off the Early Access program for Traffic Manager, with details at the end of this post.
One of our primary goals when building Traffic Manager was to make it available to everyone: from those with just two servers in a single region, to those with 400 scattered across the globe (and everything in between). We also wanted to solve some of the key limitations of existing products: slow failover times (measured in minutes), and a lack of granular decision making when failing over.
Today, we're introducing two new Cloudflare Traffic products to give customers control over how Cloudflare’s edge network handles their traffic, allowing them to shape and direct it for their specific needs.
More than 10 trillion requests flow through Cloudflare every month. More than 4 million customers and 10% of Internet requests benefit from our global network. Cloudflare's virtual backbone gives every packet improved performance, security, and reliability.
That's the macro picture.
What's more interesting is keeping each individual customer globally available. While every customer benefits from the network effect of Cloudflare, each customer is (appropriately) focused on their application uptime, security and performance.
Cloudflare’s new Traffic Control allows a customer to rate limit, shape or block traffic based on the number of requests per second per IP, cookie, or authentication token. Traffic can be controlled on a per-URI (with wildcards for greater flexibility) basis giving pinpoint control over a website, application, or API.
Customers seek reliability and availability in the face of popularity or unexpected traffic, such as slow brute-force attacks on a WordPress site, Denial of Service attacks against dynamic pages, or the stampede of requests that comes with success. We are the leader at stopping significant Continue reading
Today, Cloudflare turns six years old, and if you’re reading this on our blog, you may have noticed that we look a bit different today than the cloudflare.com that you’ve visited in the past. More on that a bit later in this post.
What we’re most excited about today is that over the past six years, we’ve made the Internet a safer, faster and more reliable place for any domain, whether it’s used for a website, web application or API.
We currently count more than 4,000,000 customers as members of the Cloudflare community, and we’ve been working very hard to bring the best of the modern Internet to you.
Leveling the Internet playing field is Cloudflare’s mission, and it’s what gets us out of bed every morning and into one of our offices. Last week, we took away what we think are the last excuses for any domain to not be encrypted with our three launches during Encryption Week.
Yesterday, we announced the 100th city added to the Cloudflare global network of data centers. In the coming days, we have more exciting products that we’re opening up to the public for early access that will expand our offering to Continue reading
We’re excited to kick off Cloudflare’s sixth birthday celebrations by announcing data center locations in 14 new cities across 5 continents. This expansion makes our global network one of the largest in the world, spanning 100 unique cities across 49 countries. Every new Cloudflare data center improves the performance, security and reliability of millions of websites, as we expand our surface area to fight growing attacks and serve web requests even closer to the Internet user.
Each birthday has given us the opportunity to thank our customers with new announcements, from our automatic IPv6 Gateway to making SSL free and easy for all to unveiling our China network. Launching 14 new data center locations is one of many gifts to our users we’ll reveal this week.
Six years ago, within weeks of Cloudflare launching, we passed a major milestone: serving one billion web requests across our network every month. Since then, our traffic has grown 10,000x, and we now see over a billion web requests every month just from the country of Angola — located on the western coast of southern Africa and three times the geographic size Continue reading
CloudFlare's mission is to make HTTPS accessible for all our customers. It provides security for their websites, improved ranking on search engines, better performance with HTTP/2, and access to browser features such as geolocation that are being deprecated for plaintext HTTP. With Universal SSL or similar features, a simple button click can now enable encryption for a website.
Unfortunately, as described in a previous blog post, this is only half of the problem. To make sure that a page is secure and can't be controlled or eavesdropped on by third parties, browsers must ensure that not only the page itself but also all of its dependencies are loaded via secure channels. Page elements that don't fulfill this requirement are called mixed content, and they can result in either the entire page being reported as insecure or even being blocked outright, thus breaking the page for the end user.
When we conceived the Automatic HTTPS Rewrites project, we aimed to automatically reduce the amount of mixed content on customers' web pages without breaking their websites, and without any delay noticeable to end users when a page is rewritten on the fly.
A naive way Continue reading
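To give a flavor of what rewriting involves, here is a toy sketch (my illustration, emphatically not CloudFlare's on-the-fly rewriter, which must also verify that each resource is actually reachable over HTTPS before upgrading it):

```go
package main

import (
	"fmt"
	"regexp"
)

// attrRe matches http:// URLs inside src= and href= attributes. A real
// rewriter parses the HTML properly and checks that the https://
// version of each resource exists; this toy upgrades unconditionally.
var attrRe = regexp.MustCompile(`(src|href)=(["'])http://`)

func rewrite(html string) string {
	return attrRe.ReplaceAllString(html, `$1=${2}https://`)
}

func main() {
	page := `<img src="http://example.com/logo.png"> <a href="http://example.com/">home</a>`
	fmt.Println(rewrite(page))
}
```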
The CloudFlare London office hosts weekly internal Tech Talks (with free lunch picked by the speaker). My recent one was an explanation of the latest version of TLS, 1.3: how it works and why it's faster and safer.
You can watch the complete talk below or just read my summarized transcript.
The Q&A session is open! Send us your questions about TLS 1.3 at [email protected] or leave them in the Disqus comments below and I'll answer them in an upcoming blog post.
To understand why TLS 1.3 is awesome, we need to take a step back and look at how TLS 1.2 works. In particular we will look at modern TLS 1.2, the kind that a recent browser would use when connecting to the CloudFlare edge.
The client starts by sending a message called the ClientHello that essentially says "hey, I want to speak TLS 1.2, with one of these cipher suites".
The server receives that and answers with a ServerHello that says "sure, let's speak TLS 1.2, and I pick this cipher suite".
Along with that the server sends its key share. The Continue reading
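You can observe the outcome of this negotiation from client code. A minimal sketch in Go (assuming a modern crypto/tls, which postdates this talk) that dials a server and reports the negotiated version and cipher suite:

```go
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	conn, err := tls.Dial("tcp", "blog.cloudflare.com:443", &tls.Config{})
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// ConnectionState reports what the ClientHello/ServerHello
	// exchange actually agreed on.
	state := conn.ConnectionState()
	fmt.Printf("version: %#x\n", state.Version) // e.g. 0x304 for TLS 1.3
	fmt.Println("cipher suite:", tls.CipherSuiteName(state.CipherSuite))
}
```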
CloudFlare aims to put an end to the unencrypted Internet. But the web has a chicken-and-egg problem moving to HTTPS.
Long ago it was difficult, expensive, and slow to set up an HTTPS-capable website. Then along came services like CloudFlare’s Universal SSL that made switching from http:// to https:// as easy as clicking a button. With one click, a site was served over HTTPS with a freshly minted, free SSL certificate.
Boom.
Suddenly, the website is available over HTTPS, and, even better, the website gets faster because it can take advantage of the latest web protocol HTTP/2.
Unfortunately, the story doesn’t end there. Many otherwise secure sites suffer from the problem of mixed content. And mixed content means the green padlock icon will not be displayed for an https:// site because, in fact, it’s not truly secure.
Here’s the problem: if an https:// website includes any content from a site (even its own) served over http://, the green padlock can’t be displayed. That’s because resources like images, JavaScript, audio, video, etc. included over http:// open up a security hole into the secure website. A backdoor to trouble.
Web browsers have known this was a problem Continue reading
Encrypting the web is not an easy task. Various complexities prevent websites from migrating from HTTP to HTTPS, including mixed content, which can prevent sites from functioning with HTTPS.
Opportunistic Encryption provides an additional level of security to websites that have not yet moved to HTTPS and the performance benefits of HTTP/2. Users will not see a security indicator for HTTPS in the address bar when visiting a site using Opportunistic Encryption, but the connection from the browser to the server is encrypted.
In December 2015, CloudFlare introduced HTTP/2, the latest version of HTTP, which can result in improved performance for websites. HTTP/2 can’t be used without encryption, and before now, that meant HTTPS. Opportunistic Encryption, based on an IETF draft, enables servers to accept HTTP requests over an encrypted connection, allowing HTTP/2 connections for non-HTTPS sites. This is a first.
Combined with TLS 1.3 and HTTP/2 Server Push, Opportunistic Encryption can result in significant performance gains, while also providing security benefits.
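Mechanically, the server advertises the encrypted alternative with an Alt-Svc response header, per the IETF draft. A minimal sketch of a cleartext handler doing so (my illustration of the mechanism, not CloudFlare's edge code):

```go
package main

import "net/http"

func handler(w http.ResponseWriter, r *http.Request) {
	// Advertise that this origin is also reachable over an encrypted
	// HTTP/2 channel on port 443, valid for the next 24 hours.
	w.Header().Set("Alt-Svc", `h2=":443"; ma=86400`)
	w.Write([]byte("hello over cleartext; an encrypted alternative is advertised\n"))
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":80", nil)
}
```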
Opportunistic Encryption is now available to all CloudFlare customers, enabled by default for Free and Pro plans. The option is available in the Crypto tab of the CloudFlare dashboard:
Opportunistic Encryption Continue reading
CloudFlare is turbocharging the encrypted Internet
The encrypted Internet is about to become a whole lot snappier. When it comes to browsing, we’ve been driving around in a beat-up car from the 90s for a while. Little does anyone know, we’re all about to trade in our station wagons for a smoking new sports car. The reason for this speed boost is TLS 1.3, a new encryption protocol that improves both speed and security for Internet users everywhere. As of today, TLS 1.3 is available to all CloudFlare customers.
Many of the major web properties you visit are encrypted, which is indicated by the padlock icon and the presence of “https” instead of “http” in the address bar. The “s” stands for secure. When you connect to an HTTPS site, the communication between you and the website is encrypted, which makes browsing the web dramatically more secure, protecting your communication from prying eyes and the injection of malicious code. HTTPS is not only used by websites, it also secures the majority of APIs and mobile application backends.
The underlying technology that enables secure communication on the Internet is a protocol called Transport Layer Security (TLS). Continue reading