Archive

Category Archives for "CloudFlare"

CloudFlare Publishes Semiannual Transparency Report

Painting by René Magritte

Today CloudFlare is publishing its third Transparency Report covering the first half of 2014. This report covers government information requests from January 1, 2013 to June 30, 2014, and updates our two existing transparency reports: partial January 2013 Transparency Report and complete 2013 Transparency Report.

CloudFlare’s Transparency Reports show how many subpoenas, court orders, search warrants, pen register/trap and trace (PRTT) orders, and national security orders CloudFlare received during the reporting period. In this current Transparency Report, we have also added a separate category for wiretap orders CloudFlare received. CloudFlare’s Transparency Reports also show how many domains and accounts were affected by our response to those requests during the reporting period. CloudFlare’s Transparency Reports do not include non-governmental requests.

We will continue to update this report on a semiannual basis at Transparency Report.

Special thanks to our legal intern, Murtaza Sajjad, for helping to compile this report.

DNSSEC: An Introduction

At CloudFlare our mission is to help build a better Internet. Part of this effort includes making web sites faster, more reliable, and more trustworthy. The obvious first choice in protocols to help make websites more secure is HTTPS. CloudFlare’s latest product—Universal SSL—helps web site operators provide a trustworthy browsing experience for their site visitors by giving their site HTTPS support for free. In this blog post we look at another protocol, DNS, and explore one proposal to improve its trustworthiness: DNSSEC.

DNS is one of the pillars of authority on the Internet. DNS is used to translate domain names (like www.cloudflare.com) to numeric Internet addresses (like 198.41.214.163)—it’s often referred to as the “phone book of the Internet”.

DNSSEC is a set of security extensions to DNS that provides the means for authenticating DNS records. CloudFlare is planning to introduce DNSSEC in the next six months, and has brought Olafur Gudmundsson, one of the co-inventors of DNSSEC, on board to help lead the project.

CC BY 2.0 by Eric Fischer

Introduction

The Domain Name System (DNS) is one of the oldest and most fundamental components of the modern Internet. As the Continue reading

The little extra that comes with Universal SSL

CC BY 2.0 by JD Hancock

Last Monday we announced Universal SSL, free SSL for all of our Free plan users. Universal SSL means that any site running on CloudFlare gets a free SSL certificate, and is automatically secured over HTTPS.

Using SSL for a web site helps make the site more secure, but there's another benefit: it can also make the site faster. That's because the SPDY protocol, created by Google to speed up the web, actually requires SSL and only web sites that support HTTPS can use SPDY.

CloudFlare has long supported SPDY, and kept up to date with improvements in the protocol. We currently support the most recent version of SPDY: 3.1.

CloudFlare's mission to bring the tools of the Internet giants to everyone is twofold: security and performance. As part of the Universal SSL launch, we also rolled out SPDY for everyone. Many of the web's largest sites use SPDY; now all sites that use CloudFlare are in the same league.

If your site is on CloudFlare, and you use a modern browser that supports SPDY, you'll find that the HTTPS version of your site is now served over SPDY. SPDY allows the Continue reading

Route leak incident on October 2, 2014

Today, CloudFlare suffered downtime which caused customers’ sites to be inaccessible in certain parts of the world. We take the availability of our customers’ web properties very seriously. Incidents like this get the absolute highest priority, attention, and follow up. The pain felt by our customers is also felt deeply by the CloudFlare team in London and San Francisco.

This downtime was the result of a BGP route leak by Internexa, an ISP in Latin America. Internexa accidentally directed large amounts of traffic destined for CloudFlare data centers around the world to a single data center in Medellín, Colombia. At the same time Internexa also leaked routes belonging to Telecom Argentina causing disruption in Argentina. This was the result of Internexa announcing via BGP that their network, instead of ours, handled traffic for CloudFlare. This miscommunication caused a flood of traffic to quickly overwhelm the data center in Medellín. The incident lasted 49 minutes, from 15:08 UTC to 15:57 UTC.

The exact impact of the route leak to our customers’ visitors depended on the geography of the Internet. Traffic to CloudFlare’s customers sites dropped by 50% in North America and 12% in Europe. The impact on our network in Asia was isolated Continue reading

Universal SSL: How It Scales

On Monday, we announced Universal SSL, enabling HTTPS for all websites using CloudFlare’s Free plan. Universal SSL represents a massive increase in the number of sites we serve over HTTPS—from tens of thousands, to millions. People have asked us, both in comments and in person, how our servers handle this extra load. The answer, in a nutshell, is this: we found that with the right hardware, software, and configuration, the cost of SSL on web servers can be reduced to almost nothing.

Modern Hardware

CloudFlare’s entire infrastructure is built on modern commodity hardware. Specifically, our web servers are running on CPUs manufactured by Intel that were designed with cryptography in mind.

All Intel CPUs based on the Westmere CPU microarchitecture (introduced in 2010) and later have specialized cryptographic instructions. Important for CloudFlare’s Universal SSL rollout are the AES-NI instructions, which speed up the Advanced Encryption Standard (AES) algorithm. There’s also a set of instructions called Carry-less Multiplication (CLMUL) that computes mathematical operations in binary finite fields. CLMUL can be used to speed up AES in Galois/Counter Mode (GCM): our preferred mode of encryption due to its resistance against recent attacks like BEAST.
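Go's standard library can take advantage of these same instructions: on CPUs with AES-NI, crypto/aes dispatches to hardware AES, and its GCM code can use CLMUL for the underlying GHASH math. Purely as an illustrative sketch (this is not CloudFlare's TLS stack), here is what AES-256-GCM encryption and decryption look like in Go:

package main

import (
    "crypto/aes"
    "crypto/cipher"
    "crypto/rand"
    "fmt"
    "io"
)

func main() {
    // 256-bit key; in a real TLS session this comes from the handshake,
    // here it is random purely for illustration.
    key := make([]byte, 32)
    if _, err := io.ReadFull(rand.Reader, key); err != nil {
        panic(err)
    }

    block, err := aes.NewCipher(key) // uses AES-NI where the CPU supports it
    if err != nil {
        panic(err)
    }
    gcm, err := cipher.NewGCM(block) // GHASH can use CLMUL on modern CPUs
    if err != nil {
        panic(err)
    }

    // GCM needs a unique nonce per message under the same key.
    nonce := make([]byte, gcm.NonceSize())
    if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
        panic(err)
    }

    ciphertext := gcm.Seal(nil, nonce, []byte("hello, TLS"), nil)
    plaintext, err := gcm.Open(nil, nonce, ciphertext, nil)
    if err != nil {
        panic(err)
    }
    fmt.Printf("%s\n", plaintext)
}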

As we described in our primer on TLS Continue reading

Inside Shellshock: How hackers are using it to exploit systems

On Wednesday of last week, details of the Shellshock bash bug emerged. This bug started a scramble to patch computers, servers, routers, firewalls, and other computing appliances using vulnerable versions of bash.

CloudFlare immediately rolled out protection for Pro, Business, and Enterprise customers through our Web Application Firewall. On Sunday, after studying the extent of the problem, and looking at logs of attacks stopped by our WAF, we decided to roll out protection for our Free plan customers as well.

Since then we've been monitoring attacks we've stopped in order to understand what they look like, and where they come from. Based on our observations, it's clear that hackers are exploiting Shellshock worldwide.

(CC BY 2.0 aussiegall)

Eject

The Shellshock problem is an example of an arbitrary code execution (ACE) vulnerability. Typically, ACE vulnerability attacks are executed on programs that are running, and require a highly sophisticated understanding of the internals of code execution, memory layout, and assembly language—in short, this type of attack requires an expert.

An attacker will also use an ACE vulnerability to upload or run a program that gives them a simple way of controlling the targeted machine. This is often achieved by running a "shell". Continue reading

Universal SSL: Be just a bit more patient

Universal SSL

It turns out it takes a while to deploy SSL certificates for 2 million websites. :-) Even longer when you get a flood of new sign ups. While we'd hoped to have the deployment complete within 24 hours of the announcement, it now looks like it's going to take a bit longer. We now expect that the full deployment will be complete about 48 hours from now (0700 UTC). Beyond that, nothing about the plan for Universal SSL has changed and hundreds of thousands of sites are already active.

Errors you may see

In order to get through the highest priority sites first, we've prioritized provisioning the sites with the most traffic.

While you wait for your site to get provisioned, you may see a certificate mismatch error if you try to visit it over HTTPS. (Rest assured, there are no errors if you visit over HTTP.) The errors over HTTPS are expected and normal during the provisioning process. Examples of what these errors look like in various browsers (Chrome, Safari, Firefox, and Internet Explorer) are below:

Chrome

Safari

Firefox

Internet Explorer

Tracking our progress

To give you a sense of our progress provisioning Universal SSL for your sites, we've updated the alert that Continue reading

Origin Server Connection Security with Universal SSL

Earlier today, CloudFlare enabled Universal SSL: HTTPS support for all sites by default. Universal SSL provides state-of-the-art encryption between browsers and CloudFlare’s edge servers keeping web traffic private and secure from tampering.

CloudFlare’s Flexible SSL mode is the default for CloudFlare sites on the Free plan. Flexible SSL mode means that traffic from browsers to CloudFlare will be encrypted, but traffic from CloudFlare to a site's origin server will not be. To take advantage of our Full and Strict SSL mode—which encrypts the connection between CloudFlare and the origin server—it’s necessary to install a certificate on the origin server.

We made Universal SSL free so that everyone can use modern, strong encryption tools to protect their web traffic. More encrypted traffic helps build a safer, better Internet. In keeping with CloudFlare’s goal to help build a better Internet, we have some tips on how to upgrade your site from Flexible SSL to Full or Strict SSL.

Option 1: Full SSL: create a self-signed certificate

Dealing with Certificate Authorities (CAs) can be frustrating, and the process of obtaining a certificate can be time consuming. In the meantime, you can get started by installing a self-signed certificate on your origin server. This Continue reading
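Self-signed certificates are usually generated with the openssl command-line tool, but to show what is actually involved, here is an illustrative Go sketch that produces a certificate and key for a hypothetical example.com origin. The hostnames, key size, and one-year lifetime are arbitrary choices for the example, not CloudFlare recommendations:

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "os"
    "time"
)

func main() {
    // Generate the origin server's private key.
    key, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }

    template := x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{CommonName: "example.com"},
        DNSNames:     []string{"example.com", "www.example.com"},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(365 * 24 * time.Hour),
        KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    }

    // Self-signed: the template acts as both the certificate and its own issuer.
    der, err := x509.CreateCertificate(rand.Reader, &template, &template, &key.PublicKey, key)
    if err != nil {
        panic(err)
    }

    // Write the certificate and key as PEM files for the web server to load.
    certOut, _ := os.Create("origin.crt")
    pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    certOut.Close()

    keyOut, _ := os.Create("origin.key")
    pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    keyOut.Close()
}

With the resulting certificate and key installed on the origin server, the Full SSL mode described above can encrypt the CloudFlare-to-origin connection.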

Introducing Universal SSL

CloudFlare's Universal SSL

The team at CloudFlare is excited to announce the release of Universal SSL™. Beginning today, we will support SSL connections to every CloudFlare customer, including the 2 million sites that have signed up for the free version of our service.

This morning we began rolling out Universal SSL across all our current customers. We expect this process to be complete for all current customers before the end of the day. Yesterday, there were about 2 million sites active on the Internet that supported encrypted connections. By the end of the day today, we'll have doubled that.

For new customers who sign up for CloudFlare's free plan, after we get through provisioning existing customers, it will take up to 24 hours to activate Universal SSL. As always, SSL for paid plans will be provisioned instantly upon signup.

How does it work?

For all customers, we will now automatically provision an SSL certificate on CloudFlare's network that will accept HTTPS connections for a customer's domain and subdomains. Those certificates include an entry for the root domain (e.g., example.com) as well as a wildcard entry for all first-level subdomains (e.g., www.example.com, blog.example.com, etc. Continue reading

Shellshock protection enabled for all customers

On Thursday, we rolled out protection against the Shellshock bash vulnerability for all paying customers through the CloudFlare WAF. This protection was enabled automatically and immediately started blocking malicious requests.

We had a number of requests for protection from Shellshock for all our customers, including those on the Free plan.

After observing the actual Shellshock traffic across our network and after seeing the true severity of the vulnerability become clear, we've built and tested a special Basic ShellShock Protection for all customers.

That protection is now operating and enabled for every CloudFlare customer (Free, Pro, Business and Enterprise). Paying customers have the additional protection of more complex Shellshock rules in the CloudFlare WAF.

Every CloudFlare customer is now being protected from the most common attack vectors based on the Shellshock problem and paying customers continue to have the more advanced protection that was rolled out yesterday.

One More Thing: Keyless SSL and CloudFlare’s Growing Network

One more thing...

I wanted to write one more thing about Keyless SSL, our announcement from last week, before attention shifts to what we'll be announcing on Monday. Keyless allows us to provide CloudFlare's service without having private SSL keys stored locally on our edge servers. The news last week focused on how this could allow very large customers, like major financial institutions, to use CloudFlare without trusting us with their private keys.

But there's another use that will benefit the entire CloudFlare userbase, not just our largest enterprise customers, and it's this: Keyless SSL is a key part of our strategy to continue to expand CloudFlare's global network.

CloudFlare's Global Network Today

CloudFlare's network today consists of 28 edge data centers that span much of the globe. We have technical and security requirements for these facilities in order to ensure that the equipment they house remains secure. Generally, we're in Tier III or IV data center facilities with the highest level of security. In our San Jose facility, for instance, you have to pass through 5 biometric scans, in addition to multiple 24x7 manned guard check points, before you can get to the electronically locked cabinets housing our servers.

There Continue reading

Celebrating CloudFlare’s 4th Birthday

Save the web / CloudFlare

Since CloudFlare launched to the public four years ago today, we've always considered September 27th our birthday. We like to celebrate by doing something nice for our team and also for our customers. Two years ago, for example, we brought a cake into the office and then enabled free IPv6 support for all our customers.

Saturday is our birthday this year, so we decided to celebrate it a few days later when we'd all be back in the office on Monday, September 29th. That actually corresponds to the day we presented at the finals of the TechCrunch Disrupt startup contest where we launched. We ended up coming in second. Mike Arrington, the founder of TechCrunch, said we were basically "muffler repair for the Internet."

Looking back, that's actually not a bad description. At its core, CloudFlare's mission is to help build a better Internet by fixing its biggest problems -- its metaphorical rusty mufflers. This year, we thought it would be great to repair a big, ugly muffler that should have been fixed a long time ago.

This Monday, we'll bring a cake into the office. (It'll have to be a lot bigger as our team has grown substantially.) Continue reading

Bash vulnerability CVE-2014-6271 patched

This morning, Stephane Chazelas disclosed a vulnerability in the program bash, the GNU Bourne-Again-Shell. This software is widely used, especially on Linux servers, such as the servers used to provide CloudFlare’s performance and security cloud services.

This vulnerability is a serious risk to Internet infrastructure, as it allows remote code execution in many common configurations, and the severity is heightened due to bash being in the default configuration of most Linux servers. While bash is not directly used by remote users, it is used internally by popular software packages such as web, mail, and administration servers. In the case of a web server, a specially formatted web request, when passed by the web server to the bash application, can cause the bash software to run commands on the server for the attacker. More technical information was posted on the oss-sec mailing list.

The security community has assigned this bash vulnerability the ID CVE-2014-6271.

As soon as we became aware of this vulnerability, CloudFlare’s engineering and operations teams tested a patch to protect our servers, and deployed it across our infrastructure. As of now, all CloudFlare servers are protected against CVE-2014-6271.
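If you run your own servers and want to check whether your bash is affected, the standard test is to pass bash a crafted environment variable and see whether it executes the command that trails the function definition. Here is an illustrative Go sketch of that check, equivalent to the well-known one-line shell test; run it only on machines you control:

package main

import (
    "fmt"
    "os"
    "os/exec"
    "strings"
)

func main() {
    // A function-style environment variable with a trailing command.
    // A patched bash ignores everything after the closing brace; a
    // vulnerable bash executes it while importing the variable.
    cmd := exec.Command("bash", "-c", "echo completed test")
    cmd.Env = append(os.Environ(), "x=() { :;}; echo VULNERABLE")

    out, err := cmd.CombinedOutput()
    if err != nil {
        fmt.Println("error running bash:", err)
        return
    }
    if strings.Contains(string(out), "VULNERABLE") {
        fmt.Println("this bash executes trailing commands: vulnerable to CVE-2014-6271")
    } else {
        fmt.Println("this bash appears to be patched")
    }
}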

Everyone who is using the bash software package should upgrade Continue reading

Keyless SSL: The Nitty Gritty Technical Details

CloudFlare's Keyless SSL

We announced Keyless SSL yesterday to an overwhelmingly positive response. We read through the comments on this blog, Reddit, Hacker News, and people seem interested in knowing more and getting deeper into the technical details. In this blog post we go into extraordinary detail to answer questions about how Keyless SSL was designed, how it works, and why it’s secure. Before we do so, we need some background about how encryption works on the Internet. If you’re already familiar, feel free to skip ahead.

TLS

Transport Layer Security (TLS) is the workhorse of web security. It lets websites prove their identity to web browsers, and protects all information exchanged from prying eyes using encryption. The TLS protocol has been around for years, but it’s still mysterious to even hardcore tech enthusiasts. Understanding the fundamentals of TLS is the key to understanding Keyless SSL.

Dual goals

TLS has two main goals: confidentiality and authentication. Both are critically important to securely communicating on the Internet.

Communication is considered confidential when two parties are confident that nobody else can understand their conversation. Confidentiality can be achieved using symmetric encryption: use a key known only to the two parties involved to encrypt Continue reading

Announcing Keyless SSL™: All the Benefits of CloudFlare Without Having to Turn Over Your Private SSL Keys


CloudFlare is an engineering-driven company. This is a story we're proud of because it embodies the essence of who we are: when faced with a problem, we found a novel solution. Technical details to follow but, until then, welcome to the no hardware world.

Fall in San Francisco

The story begins on a Saturday morning, in the Fall of 2012, almost exactly two years ago. I got a call on my cell phone that woke me. It was a man who introduced himself as the Chief Information Security Officer (CISO) at one of the world's largest banks.

"I got your number from a reporter," he said. "We have an incident. Could you and some of your team be in New York Monday morning? We'd value your advice." We were a small startup. Of course we were going to drop everything and fly across the country to see if we could help.

I called John Roberts and Sri Rao, two members of CloudFlare's team. John had an air of calm about him and owned more khaki pants than any of the rest of us. Sri was a senior member of our technical operations team and could, already at that point, Continue reading

How Stacks are Handled in Go

At CloudFlare, we use Go for a variety of services and applications. In this blog post, we're going to take a deep dive into some of the technical intricacies of Go.

One of the more important features of Go is goroutines. They are cheap, cooperatively scheduled threads of execution that are used for a variety of operations, like timeouts, generators and racing multiple backends against each other. To make goroutines suitable for as many tasks as possible, we have to make sure that each goroutine takes up as little memory as possible, while still letting people start them up with minimal configuration.
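As a small illustration of the "racing multiple backends" pattern mentioned above (a generic sketch, not CloudFlare code), each backend gets its own goroutine and the first response to arrive wins:

package main

import (
    "fmt"
    "math/rand"
    "time"
)

// query pretends to ask one backend for an answer, taking a random
// amount of time to respond.
func query(backend string) string {
    time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
    return "response from " + backend
}

// race starts one goroutine per backend and returns whichever
// response arrives first.
func race(backends ...string) string {
    ch := make(chan string, len(backends)) // buffered so the losers don't block forever
    for _, b := range backends {
        go func(b string) { ch <- query(b) }(b)
    }
    return <-ch
}

func main() {
    fmt.Println(race("replica-1", "replica-2", "replica-3"))
}

Each of those goroutines needs its own stack, which is exactly what the rest of this post is about.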

To keep goroutines this cheap, Go manages stacks in a way that behaves like any other language's, but is quite different in how they're implemented.

An introduction to thread stacks

Before we look at Go, let's look at how stacks are managed in a traditional language like C.

When you start up a thread in C, the standard library is responsible for allocating a block of memory to be used as that thread's stack. It will allocate this block, tell the kernel where it is and let the kernel handle the execution of the thread. Continue reading

Participate in the “Internet Slowdown” with One Click

Net Neutrality is an important issue for CloudFlare as well as for our more than 2 million customers, whose success depends on a vibrant, dynamic, and open Internet. An open Internet promotes innovation, removes barriers to entry, and provides a platform for free expression.

That's why we’re announcing a new app that lets you easily participate in the “Internet Slowdown” on September 10th, 2014.

Battleforthenet.com (a project of Demand Progress, Engine Advocacy, Fight for the Future, and Free Press) has organized a day of protest against the United States Federal Communications Commission (FCC) proposal that will allow Internet providers to charge companies additional fees to provide access to those companies’ content online. Those additional fees will allow Internet service providers to essentially choose which parts of the Internet you will get to access normally, and which parts may be slow or inaccessible.

We’ve seen that bandwidth pricing does not reflect the underlying fair market value when Internet service providers have monopolistic control, and we worry that a lack of net neutrality will lead to a similar situation.

The Battle for the Net pop-up (intentionally obtrusive) will simulate a loading screen that website users may see Continue reading

SXSW Interactive 2015: Vote for CloudFlare’s Submissions

Has your Twitter feed been flooded with “vote for my SXSW panel” tweets? With so much buzz all over the place, we wanted to keep it simple and share all of the presentations and panels affiliated with CloudFlare, in one place. Check out CloudFlare's presentations and panels below. If our topics interest you, casting a vote will take just a few minutes!

How to vote:

  1. To sign up go to this link
  2. Enter your name & email address, then confirm your account
  3. Log in with your new account and go to the “PanelPicker”
  4. Click “search/vote” and search for your panel by title
  5. VOTE

Please note: Voting ends on September 6th!

PanelPicker voting counts for 30% of a session’s acceptance to SXSW. Our panels cover a variety of topics, from a tell-all that reveals the real story behind the male/female co-founder dynamic to exploring ways to protect human rights online. There’s something for everyone, so check them out and vote for your favorite! Every vote counts!

Help CloudFlare get to SXSW!

Presentations:

“Lean On” is the New “Lean In”
Matthew Prince, co-founder and CEO of CloudFlare, will sit down with Michelle Zatlyn, co-founder and Head of User Experience at CloudFlare for Continue reading

Go interfaces make test stubbing easy

Go's "object-orientation" approach is through interfaces. Interfaces provide a way of specifying the behavior expected of an object: rather than saying what an object itself can do, they specify what's expected of it. If any object meets the interface specification it can be used anywhere that interface is expected.

I was working on a new, small piece of software that does image compression for CloudFlare and found a nice use for interfaces when stubbing out a complex piece of code in the unit test suite. Central to this code is a collection of goroutines that run jobs. Jobs are provided from a priority queue and performed in priority order.

The jobs ask for images to be compressed in myriad ways and the actual package that does the work contained complex code for compressing JPEGs, GIFs and PNGs. It had its own unit tests that checked that the compression worked as expected.

But I wanted a way to test the part of the code that runs the jobs (and, itself, doesn't actually know what the jobs do). Because I only wanted to test whether the jobs got run correctly (and not the compression), I didn't want to have to create (and configure) the complex job type that gets used when the code really runs.

What I wanted was a DummyJob.

The Worker package actually runs jobs in a goroutine like this:

func (w *Worker) do(id int, ready chan int) {
    for {
        // Announce that this worker is free by sending its ID.
        ready <- id

        // Block until a job arrives; a closed channel means shut down.
        j, ok := <-w.In
        if !ok {
            return
        }

        // Run the job and log (but don't stop on) any error.
        if err := j.Do(); err != nil {
            logger.Printf("Error performing job %v: %s", j, err)
        }
    }
}

do gets started as a goroutine passed a unique ID (the id parameter) and a channel called ready. Whenever do is able to perform work it sends a message containing its id down ready and then waits for a job on the worker w.In channel. Many such workers run concurrently and a separate goroutine pulls the IDs of workers that are ready for work from the ready channel and sends them work.

If you look at do above you'll see that the job (stored in j) is only required to offer a single method:

func (j *CompressionJob) Do() error

The worker's do just calls the job's Do function and checks for an error return. But the code originally had w.In defined like this:

w := &Worker{In: make(chan *job.CompressionJob)}

which would have required that the test suite for Worker know how to create a CompressionJob and make it runnable. Instead I defined a new interface like this:

type Job interface {
    Priority() int
    Do() error
}

The Priority method is used by the queueing mechanism to figure out the order in which jobs should be run. Then all I needed to do was change the creation of the Worker to

w := &Worker{In: make(chan job.Job)}

The w.In channel is no longer a channel of CompressionJobs, but of interfaces of type Job. This shows a really powerful aspect of Go: anything that meets the Job interface can be sent down that channel and only a tiny amount of code had to be changed to use an interface instead of the more 'concrete' type CompressionJob.

Then in the unit test suite for Worker I was able to create a DummyJob like this:

var Done bool

type DummyJob struct {
}

func (j DummyJob) Priority() int {
    return 1
}

func (j DummyJob) Do() error {
    Done = true
    return nil
}

It sets a Done flag when the Worker's do function actually runs the DummyJob. Since DummyJob meets the Job interface it can be sent down the w.In channel to a Worker for processing.
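Putting the pieces together, the unit test might look something like the sketch below. It lives alongside Worker so it can call the unexported do method; the exact setup is an assumption, since only do, the In channel, and the Job interface are shown above.

func TestWorkerRunsDummyJob(t *testing.T) {
    Done = false
    ready := make(chan int, 1)

    w := &Worker{In: make(chan job.Job)}
    go w.do(1, ready)

    <-ready            // worker is waiting for its first job
    w.In <- DummyJob{} // anything satisfying job.Job can go down the channel
    <-ready            // worker has looped around, so DummyJob.Do has finished
    close(w.In)        // lets the do goroutine return

    if !Done {
        t.Error("expected DummyJob.Do to have been run")
    }
}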

Creating that Job interface totally isolated the interface that the Worker needs to be able to run jobs and hides any of the other details greatly simplifying the unit test suite. Most interesting of all, no changes at all were needed to CompressionJob to achieve this.

The Relative Cost of Bandwidth Around the World

CC BY 2.0 by Kendrick Erickson

Over the last few months, there’s been increased attention on networks and how they interconnect. CloudFlare runs a large network that interconnects with many others around the world. From our vantage point, we have incredible visibility into global network operations. Given our unique situation, we thought it might be useful to explain how networks operate, and the relative costs of Internet connectivity in different parts of the world.

A Connected Network

The Internet is a vast network made up of a collection of smaller networks. The networks that make up the Internet are connected in two main ways. Networks can connect with each other directly, in which case they are said to be “peered”, or they can connect via an intermediary network known as a “transit provider”.

At the core of the Internet are a handful of very large transit providers that all peer with one another. This group of approximately twelve companies is known as the Tier 1 network providers. Whether directly or indirectly, every ISP (Internet Service Provider) around the world connects with one of these Tier 1 providers. And, since the Tier 1 providers are all interconnected themselves, from any point on the network you should be able to reach any other point. That's what makes the Internet the Internet: it’s a huge group of networks that are all interconnected.

Paying to Connect

To be a part of the Internet, CloudFlare buys bandwidth, known as transit, from a number of different providers. The rate we pay for this bandwidth varies from region to region around the world. In some cases we buy from a Tier 1 provider. In other cases, we buy from regional transit providers that either peer with the networks we need to reach directly (bypassing any Tier 1), or interconnect themselves with other transit providers.

CloudFlare buys transit wholesale and on the basis of the capacity we use in any given month. Unlike some cloud services like Amazon Web Services (AWS) or traditional CDNs that bill for individual bits delivered across a network (called "stock"), we pay for a maximum utilization for a period of time (called "flow"). Typically, we pay based on the maximum number of megabits per second we use during a month on any given provider.

Traffic levels across CloudFlare's global network over the last 3 months. Each color represents one of our 28 data centers.

Most transit agreements bill the 95th percentile of utilization in any given month. That means you throw out approximately 36 not-necessarily-contiguous hours' worth of peak utilization when calculating usage for the month. Legend has it that in its early days, Google used to take advantage of these contracts by using very little bandwidth for most of the month and then shipping its indexes between data centers, a very high-bandwidth operation, during one 24-hour period. A clever, if undoubtedly short-lived, strategy to avoid high bandwidth bills.
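To make the arithmetic concrete: with 5-minute samples, a 30-day month has 8,640 measurements, and the top 5% of them (432 samples, or 36 hours) get thrown away before billing. Here is an illustrative sketch of the calculation with made-up traffic numbers:

package main

import (
    "fmt"
    "sort"
)

// ninetyFifth returns the 95th-percentile value of a month's worth of
// 5-minute utilization samples (in Mbps): sort the samples, discard the
// top 5%, and bill at the highest remaining sample.
func ninetyFifth(samples []float64) float64 {
    sorted := append([]float64(nil), samples...)
    sort.Float64s(sorted)
    idx := int(float64(len(sorted))*0.95) - 1
    return sorted[idx]
}

func main() {
    // 8,640 samples = 30 days of 5-minute measurements.
    samples := make([]float64, 8640)
    for i := range samples {
        samples[i] = 1000 // steady 1 Gbps...
    }
    for i := 0; i < 400; i++ {
        samples[i] = 9000 // ...with roughly 33 hours of 9 Gbps bursts
    }
    // The 400 burst samples fall under the 432-sample allowance, so they
    // are thrown out and the billable rate stays at 1000 Mbps.
    fmt.Printf("billable rate: %.0f Mbps\n", ninetyFifth(samples))
}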

Another subtlety is that when you buy transit wholesale you typically only pay for traffic coming in (“ingress") or traffic going out (“egress”) of your network, not both. Generally you pay whichever one is greater.

CloudFlare is a caching proxy so egress (out) typically exceeds ingress (in), usually by around 4-5x. Our bandwidth bill is therefore calculated on egress so we don't pay for ingress. This is part of the reason we don't charge extra when a site on our network comes under a DDoS attack. An attack increases our ingress but, unless the attack is very large, our ingress traffic will still not exceed egress, and therefore doesn’t increase our bandwidth bill.

Peering

While we pay for transit, peering directly with other providers is typically free — with some notable exceptions recently highlighted by Netflix. In CloudFlare's case, unlike Netflix, all our peering is currently "settlement free," meaning we don't pay for it. Therefore, the more we peer the less we pay for bandwidth. Peering also typically increases performance by cutting out intermediaries that may add latency. In general, peering is a good thing.

The chart above shows how CloudFlare has increased the number of networks we peer with over the last three months (both over IPv4 and IPv6). Currently, we peer around 45% of our total traffic globally (depending on the time of day), across nearly 3,000 different peering sessions. The chart below shows the split between peering and transit and how it's improved over the last three months as we’ve added more peers.

North America

We don't disclose exactly what we pay for transit, but I can give you a relative sense of regional differences. To start, let's assume as a benchmark that in North America you'd pay a blended average across all the transit providers of $10/Mbps (per megabit per second, per month). In reality we pay less than that, but it serves as a benchmark and keeps the numbers round as we compare regions. At that benchmark, for every 1,000 Mbps (1 Gbps) you'd pay $10,000/month (again, that's higher than reality; it's just an illustrative benchmark that keeps the numbers round, so bear with me).

While that benchmark establishes the transit price, the effective price for bandwidth in the region is the blended price of transit ($10/Mbps) and peering ($0/Mbps). Every byte delivered over peering is a would-be transit byte that doesn't need to be paid for. While North America has some of the lowest transit pricing in the world, it also has below average rates of peering. The chart below shows the split between peering and transit in the region. While it's gotten better over the last three months, North America still lags behind every other region in the world in terms of peering.

While we peer nearly 40% of traffic globally, we only peer around 20-25% in North America. At the benchmark transit price of $10/Mbps, that rate of peering brings the effective price down to about $8/Mbps. Based only on bandwidth costs, that makes North America the second least expensive region in the world to provide an Internet service like CloudFlare. So what's the least expensive?

Europe

Europe's transit pricing roughly mirrors North America's so, again, assume a benchmark of $10/Mbps. While transit is priced similarly to North America, in Europe there is a significantly higher rate of peering. CloudFlare peers 50-55% of traffic in the region, making the effective bandwidth price $5/Mbps. Because of the high rate of peering and the low transit costs, Europe is the least expensive region in the world for bandwidth.

The higher rate of peering is due in part to the organization of the region's “peering exchanges”. A peering exchange is a service where networks can pay a fee to join, and then easily exchange traffic between each other without having to run individual cables between each others' routers. Networks connect to a peering exchange, run a single cable, and then can connect to many other networks. Since using a port on a router has a cost (routers cost money, have a finite number of ports, and a port used for one network cannot be used for another), and since data centers typically charge a monthly fee for running a cable between two different customers (known as a "cross connect"), connecting to one service, using one port and one cable, and then being able to connect to many networks can be very cost effective.

The value of an exchange depends on the number of networks that are a part of it. The Amsterdam Internet Exchange (AMS-IX), Frankfurt Internet Exchange (DE-CIX), and the London Internet Exchange (LINX) are three of the largest exchanges in the world. (Note: these links point to PeeringDB.com which provides information on peering between networks. You'll need to use the username/password guest/guest in order to login.)

In Europe, and most other regions outside North America, these and other exchanges are generally run as non-profit collectives set up to benefit their member networks. In North America, while there are Internet exchanges, they are typically run by for-profit companies. The largest of these for-profit exchanges in North America are run by Equinix, a data center company, which uses exchanges in its facilities to increase the value of locating equipment there. Since they are run with a profit motive, pricing to join North American exchanges is typically higher than exchanges in the rest of the world.

CloudFlare is a member of many of Equinix's exchanges, but, overall, fewer networks connect with Equinix compared with Europe's exchanges (compare, for instance, Equinix Ashburn, which is their most popular exchange with about 400 networks connected, versus 1,200 networks connected to AMS-IX). In North America the combination of relatively cheap transit and relatively expensive exchanges lowers the value of joining an exchange. With fewer networks joining exchanges, there are fewer opportunities for networks to easily peer. The corollary is that in Europe transit is also cheap but peering is very easy, making the effective price of bandwidth in the region the lowest in the world.

Asia

Asia’s peering rates are similar to Europe’s: like in Europe, CloudFlare peers 50-55% of traffic in Asia. However, transit pricing is significantly more expensive. Compared with the benchmark of $10/Mbps in North America and Europe, Asia's transit pricing is approximately 7x as expensive ($70/Mbps, based on the benchmark). When peering is taken into account, however, the effective price of bandwidth in the region is $32/Mbps.

There are three primary reasons transit is so much more expensive in Asia. First, there is less competition, and a greater number of large monopoly providers. Second, the market for Internet services is less mature. And finally, if you look at a map of Asia you’ll see a lot of one thing: water. Running undersea cabling is more expensive than running fiber optic cable across land so transit pricing offsets the cost of the infrastructure to move bytes.

Latin America

Latin America is CloudFlare's newest region. When we opened our first data center in Valparaíso, Chile, we delivered 100 percent of our traffic over transit, which you can see from the graph above. To peer traffic in Latin America you need to either be in a "carrier neutral" data center — which means multiple network operators come together in a single building where they can directly plug into each other's routers — or you need to be able to reach an Internet exchange. Both are in short supply in much of Latin America.

The country with the most robust peering ecosystem is Brazil, which also happens to be the largest country and largest source of traffic in the region. You can see that as we brought our São Paulo, Brazil data center online about two months ago we increased our peering in the region significantly. We've also worked out special arrangements with ISPs in Latin America to set up facilities directly in their data centers and peer with their networks, which is what we did in Medellín, Colombia.

While today our peering ratio in Latin America is the best of anywhere in the world at approximately 60 percent, the region's transit pricing is 8x ($80/Mbps) the benchmark of North America and Europe. That means the effective bandwidth pricing in the region is $32/Mbps, or approximately the same as Asia.

Australia

Australia is the most expensive region in which we operate, but for an interesting reason. We peer with virtually every ISP in the region except one: Telstra. Telstra, which controls approximately 50% of the market, and was traditionally the monopoly telecom provider, charges some of the highest transit pricing in the world — 20x the benchmark ($200/Mbps). Given that we are able to peer approximately half of our traffic, the effective bandwidth benchmark price is $100/Mbps.

To give you some sense of how out-of-whack Australia is, at CloudFlare we pay about as much every month for bandwidth to serve all of Europe as we do for Australia. That’s in spite of the fact that approximately 33x the number of people live in Europe (750 million) versus Australia (22 million).

If Australians wonder why Internet and many other services are more expensive in their country than anywhere else in the world they need only look to Telstra. What's interesting is that Telstra maintains their high pricing even if only delivering traffic inside the country. Given that Australia is one large land mass with relatively concentrated population centers, it's difficult to justify the pricing based on anything other than Telstra's market power. In regions like North America where there is increasing consolidation of networks, Australia's experience with Telstra provides a cautionary tale.
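Pulling the regional numbers together, the effective price is just the benchmark transit price scaled by the share of traffic that still rides transit. A quick sketch using the round figures from this post (illustrative benchmarks, not our actual prices):

package main

import "fmt"

// effective returns the blended bandwidth price given a transit price
// (per Mbps per month) and the fraction of traffic delivered over
// settlement-free peering.
func effective(transit, peered float64) float64 {
    return transit * (1 - peered)
}

func main() {
    // Benchmark figures from the sections above.
    fmt.Printf("North America: $%.0f/Mbps\n", effective(10, 0.20))  // about $8
    fmt.Printf("Europe:        $%.0f/Mbps\n", effective(10, 0.50))  // about $5
    fmt.Printf("Asia:          $%.0f/Mbps\n", effective(70, 0.55))  // about $32
    fmt.Printf("Latin America: $%.0f/Mbps\n", effective(80, 0.60))  // $32
    fmt.Printf("Australia:     $%.0f/Mbps\n", effective(200, 0.50)) // $100
}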

Conclusion

The chart above shows the relative cost of bandwidth assuming a benchmark transit cost of $10/Megabits per second (Mbps) per month (which we know is higher than actual pricing, it’s just a benchmark) in North America and Europe.

While we keep our pricing at CloudFlare straightforward, charging a flat rate regardless of where traffic is delivered around the world, actual bandwidth prices vary dramatically between regions. We’ll continue to work to decrease our transit pricing and increase our peering in order to offer the best possible service at the lowest possible price. In the meantime, if you’re an ISP who wants to offer better connectivity to the increasing portion of the Internet behind CloudFlare’s network, we have an open policy and are always happy to peer.