Cryptographic code is everywhere: it runs when we connect to the bank, when we send messages to our friends, or when we watch cat videos. But it is not at all easy to take a cryptographic specification written in a natural language and produce running code from it, and it is even harder to validate both the theoretical assumptions and the correctness of the implementation itself. Mathematical proofs, as we talked about in our previous blog post, and code inspection are simply not enough. Testing and fuzzing can catch common or well-known bugs or mistakes, but might miss rare ones that can, nevertheless, be triggered by an attacker. Static analysis can detect mistakes in the code, but cannot check whether the code behaves as the natural-language specification describes (that is, functional correctness). This gap between implementation and validation can have grave consequences for real-world security, and we need to bridge this chasm.
In this blog post, we will be talking about ways to make this gap smaller by making the code we deploy better through analyzing its security properties and its implementation. This blog post continues our work on high assurance Continue reading
This post is also available in Français and Español.
Cloudflare’s mission is to help build a better Internet. We’ve invested heavily in building the world’s most powerful cloud network to deliver a faster, safer and more reliable Internet for our users. Today, we’re taking a big step towards enhancing our ability to secure our customers.
Earlier today we announced that Cloudflare has agreed to acquire Area 1 Security. Area 1’s team has built exceptional cloud-native technology to protect businesses from email-based security threats. Cloudflare will integrate Area 1’s technology with our global network to give customers the most complete Zero Trust security platform available.
Back at the turn of the century I was involved in the fight against email spam. At the time, before the mass use of cloud-based email, spam was a real scourge: clogging users’ inboxes, taking excruciatingly long to download, and running up people’s Internet bills. The fight against spam involved two things, one technical and one architectural.
Technically, we figured out how to use machine learning to successfully differentiate between spam and genuine email. And fairly quickly, email migrated to being largely cloud-based. Together, these changes didn’t kill spam, but they relegated it to a Continue reading
Ever since the (public) invention of cryptography based on mathematical trapdoors by Whitfield Diffie, Martin Hellman, and Ralph Merkle, the world has had key agreement and signature schemes based on discrete logarithms. Rivest, Shamir, and Adleman invented integer factorization-based signature and encryption schemes a few years later. The core idea, which has perhaps changed the world in ways that are hard to comprehend, is that of public key cryptography. We can give you a piece of information that is completely public (the public key), known to all our adversaries, and yet we can still securely communicate as long as we do not reveal our piece of extra information (the private key). With the private key, we can then efficiently solve mathematical problems that, without the secret information, would be practically unsolvable.
In later decades, advances in our understanding of integer factorization required us to bump up the key sizes for finite-field based schemes. The cryptographic community largely solved that problem by figuring out how to base the same schemes on elliptic curves. The world has since grown accustomed to algorithms where public keys, secret keys, and signatures are just a handful of Continue reading
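To make the idea concrete, here is a toy sketch of discrete-logarithm key agreement in Python. The parameters are deliberately tiny and the scheme is far too small to be secure; it only illustrates how two parties can publish values openly and still arrive at a shared secret:

# Toy Diffie-Hellman over a tiny prime field. Illustrative only: real deployments
# use large, carefully chosen parameters (or elliptic curves).
p, g = 23, 5                              # public parameters: prime modulus and generator
alice_secret, bob_secret = 6, 15          # private keys, never revealed
alice_public = pow(g, alice_secret, p)    # public keys, safe to publish
bob_public = pow(g, bob_secret, p)
# Each side combines its own secret with the other's public value...
alice_shared = pow(bob_public, alice_secret, p)
bob_shared = pow(alice_public, bob_secret, p)
assert alice_shared == bob_shared         # ...and both arrive at the same shared secret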
This is not what I imagined my first blog article would look like, but here we go.
On February 1, 2022, a configuration error on one of our routers caused a route leak of up to 2,000 Internet prefixes to one of our Internet transit providers. The leak lasted for 32 seconds and then, at a later time, for another 7 seconds. We did not see any traffic spikes or drops in our network and did not see any customer impact because of this error, but it may have affected external parties, and we are sorry for the mistake.
All timestamps are UTC.
As part of our efforts to build the best network, we regularly update our Internet transit and peering links throughout our network. On February 1, 2022, we had a “hot-cut” scheduled with one of our Internet transit providers to simultaneously update router configurations on Cloudflare and ISP routers to migrate one of our existing Internet transit links in Newark to a link with more capacity. Doing a “hot-cut” means that both parties will change cabling and configuration at the same time, usually while being on a conference call, to reduce downtime and impact on the network. Continue reading
The Internet is accustomed to the fact that any two parties can exchange information securely without ever having to meet in advance. This magic is made possible by key exchange algorithms, which are core to certain protocols, such as the Transport Layer Security (TLS) protocol, that are used widely across the Internet.
Key exchange algorithms are an elegant solution to a vexing, seemingly impossible problem. Imagine a scenario where keys are transmitted in person: if Persephone wishes to send her mother Demeter a secret message, she can first generate a key, write it on a piece of paper and hand that paper to her mother. Later, she can scramble the message with the key and send the scrambled result to her mother, knowing that her mother will be able to unscramble the message, since she is also in possession of the same key.
But what if Persephone is kidnapped (as the story goes) and cannot deliver this key in person? What if she can no longer write it on a piece of paper because someone (say, Hades, the kidnapper) might read that paper and use the key to decrypt any messages between them? Key exchange algorithms Continue reading
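For readers who want to see this in code, here is a minimal sketch of a modern key exchange (X25519) using the Python cryptography package. The names are just for the story, and in practice the raw shared secret would be fed through a key derivation function before use:

# Minimal X25519 key exchange sketch using the `cryptography` package
# (pip install cryptography). Only the public keys ever travel over the network.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

persephone_private = X25519PrivateKey.generate()
demeter_private = X25519PrivateKey.generate()

persephone_public = persephone_private.public_key()   # safe to send in the clear
demeter_public = demeter_private.public_key()

# Each side combines its own private key with the other's public key.
persephone_shared = persephone_private.exchange(demeter_public)
demeter_shared = demeter_private.exchange(persephone_public)
assert persephone_shared == demeter_shared             # same 32-byte shared secret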
To provide authentication is no more than to assert, and to provide proof of, an identity. We can claim to be who we say we are, but if there is no proof of it (recognition of our face, voice or mannerisms), there is no assurance of that. In fact, we can claim to be someone we are not. We can even claim to be someone who does not exist, as clever Odysseus once did.
The story goes that there was a man named Odysseus who angered the gods and was punished with perpetual wandering. He traveled and traveled the seas, meeting people and suffering calamities. On one of his trips, he came across the Cyclops Polyphemus who, in short, wanted to eat him. Clever Odysseus got away (as he usually did) by wounding the Cyclops’ eye. The wounded Cyclops asked for Odysseus’ name, to which the latter replied:
“Cyclops, you asked for my glorious name, and I will tell it; but do give the stranger's gift, just as you promised. Nobody I am called. Nobody they called me: by mother, father, and by all my comrades”
(As seen in The Odyssey, book 9. Translation by the authors of the blogpost).
The Continue reading
Tonga, the South Pacific archipelago nation (with 169 islands), was reconnected to the Internet early this morning (UTC) and is back online after successful repairs to the undersea cable that was damaged on Saturday, January 15, 2022, by the January 14 volcanic eruption.
After 38 days without full access to the Internet, Cloudflare Radar shows that a little after midnight (UTC) — it was around 13:00 local time — on February 22, 2022, Internet traffic in Tonga started to increase to levels similar to those seen before the eruption.
The faded line shows what was normal in Tonga at the start of the year, and the dark blue line shows the evolution of traffic in the last 30 days. Digicel, Tonga’s main ISP, announced at 02:13 UTC that “data connectivity has been restored on the main island Tongatapu and Eua after undersea submarine cable repairs”.
When we expand the view to the previous 45 days, we can see more clearly how Internet traffic evolved before the volcanic eruption and after the undersea cable was repaired.
The repair ship Reliance took 20 days to replace a 92 km (57 mile) section of the 827 km submarine fiber optic cable that Continue reading
At Cloudflare, we help build a better Internet. In the face of quantum computers and their threat to cryptography, we want to provide protections for this future challenge. The only way that we can change the future is by analyzing and perusing the past. Only in the present, with the past in mind and the future in sight, can we categorize and unfold events. Predicting, understanding and anticipating quantum computers (with the opportunities and challenges they bring) is a daunting task. We can, though, create a taxonomy of these challenges, so that the future can better unfold.
This is the first blog post in a post-quantum series, where we talk about our past, present and future “adventures in the Post-Quantum land”. We have written about previous post-quantum efforts at Cloudflare, but we think that here first we need to understand and categorize the problem by looking at what we have done and what lies ahead. So, welcome to our adventures!
A taxonomy of the challenges ahead that quantum computers and their threat to cryptography bring (for more information about it, read our other blog posts) could be a good way to approach this problem. This taxonomy should Continue reading
Not only is the universe stranger than we think, it is stranger than we can think
— Werner Heisenberg
Even for a physicist as renowned as Heisenberg, the universe was strange. And it was strange because several phenomena could only be explained through the lens of quantum mechanics. This field changed the way we understood the world, challenged our imagination, and, since the Fifth Solvay Conference in 1927, has been integrated into every explanation of the physical world (it is, to this day, our best description of the inner workings of nature). Quantum mechanics created a rift: physical phenomena (even the most micro and the most macro ones) stopped being explained by classical physics alone and started to be explained by quantum mechanics. There is another world in which quantum mechanics has not yet created this rift: the realm of computers (note, though, that manufacturers have been affected by quantum effects for a long time). That is about to change.
In the 80s, several physicists (including, for example, Richard Feynman and Yuri Manin) asked themselves these questions: are there computers that can, with high accuracy and in a reasonable amount of time, simulate physics? And, specifically, can they Continue reading
During CIO week we announced the general availability of our client-side security product, Page Shield. Page Shield protects websites’ end users from client-side attacks that target vulnerable JavaScript dependencies in order to run malicious code in the victim’s browser. One of the biggest client-side threats is the exfiltration of sensitive user data to an attacker-controlled domain (known as a Magecart-style attack). This kind of attack has impacted large organizations like British Airways and Ticketmaster, resulting in substantial GDPR fines in both cases. Today we are sharing details of how we detect these types of attacks and how we’re going to be developing the product into the future.
Magecart-style attacks are generally quite simple, involving just two stages. First, an attacker finds a way to compromise one of the JavaScript files running on the victim’s website. The attacker then inserts malicious code which reads personally identifiable information (PII) being entered by the site’s users, and exfiltrates it to an attacker-controlled domain. This is illustrated in the diagram below.
Magecart-style attacks are of particular concern to online retailers with users entering credit card details on the checkout page. Forms for online banking are also high-value Continue reading
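As a simplified illustration of the underlying idea (this is not how Page Shield’s detection actually works), the sketch below scans a page’s HTML for script tags that load JavaScript from domains outside an expected allowlist; the domain names are hypothetical:

# A naive illustration: flag <script> tags that load JavaScript from domains
# outside an expected allowlist. The domains below are hypothetical.
import re
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"example-shop.com", "cdn.example-shop.com"}

def suspicious_scripts(html: str):
    """Yield script URLs whose host is not on the allowlist."""
    for src in re.findall(r'<script[^>]+src=["\']([^"\']+)["\']', html, re.IGNORECASE):
        host = urlparse(src).hostname
        if host and host not in ALLOWED_DOMAINS:
            yield src

page = '<script src="https://evil-exfil.example/skimmer.js"></script>'
print(list(suspicious_scripts(page)))   # ['https://evil-exfil.example/skimmer.js']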
As we develop new products, we often push our operating system - Linux - beyond what is commonly possible. A common theme has been relying on eBPF to build technology that would otherwise have required modifying the kernel. For example, we’ve built DDoS mitigation and a load balancer, and we use eBPF to monitor our fleet of servers.
This software usually consists of a small-ish eBPF program written in C, executed in the context of the kernel, and a larger user space component that loads the eBPF into the kernel and manages its lifecycle. We’ve found that the ratio of eBPF code to userspace code differs by an order of magnitude or more. We want to shed some light on the issues that a developer has to tackle when dealing with eBPF and present our solutions for building rock-solid, production-ready applications which contain eBPF.
For this purpose we are open sourcing the production tooling we’ve built for the sk_lookup hook we contributed to the Linux kernel, called tubular. It exists because we’ve outgrown the BSD sockets API. To deliver some products we need features that are just not possible using the standard API.
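As a rough sketch of the pattern described above (a small eBPF program in C plus a user space component that loads and manages it), here is a minimal example using the bcc Python bindings rather than the Go tooling this post is about:

# A minimal sketch of the "small eBPF program in C + user space loader" pattern,
# using the bcc Python bindings instead of the Go tooling (tubular) discussed here.
# Needs root privileges and the bcc package installed.
from bcc import BPF

PROG = r"""
int trace_clone(void *ctx) {
    bpf_trace_printk("clone() called\n");
    return 0;
}
"""

b = BPF(text=PROG)  # compile the C snippet and load it into the kernel
b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="trace_clone")

# The user space side manages the program's lifecycle and reads its output.
print("Tracing clone()... hit Ctrl-C to exit")
b.trace_print()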
“It's ridiculous for a country to get all worked up about a game—except the Super Bowl, of course. Now that's important.”
- Andy Rooney, American radio and television writer
When the Super Bowl is on, there are more winners than just one of the teams playing, especially when we look at Internet trends. By now, everyone knows that the Los Angeles Rams won, but we also want to look at which Super Bowl advertisers were the biggest winners, and how traffic to food delivery services, social media and messaging apps, and sports and betting websites changed throughout the game.
We covered some of these questions during our Super Bowl live-tweeting on our Cloudflare Radar account. (Hint: follow us if you’re interested in Internet trends).
Cloudflare Radar uses a variety of sources to provide aggregate information about Internet traffic and attack trends. In this blog post, as we did last year, we use DNS name resolution data to estimate traffic to websites. We can’t see who visited the websites mentioned, or what anyone did on the websites, but DNS can give us an estimate of the interest generated by the ads or across a set of sites in Continue reading
We are excited to share that Vectrix has been acquired by Cloudflare!
Vectrix helps IT and security teams detect security issues across their SaaS applications. We look at both data and users in SaaS apps to alert teams to issues ranging from unauthorized user access and file exposure to misconfigurations and shadow IT.
We built Vectrix to solve a problem that terrified us as security engineers ourselves: how do we know if the SaaS apps we use have the right controls in place? Is our company data protected? SaaS tools make it easy to work with data and collaborate across organizations of any size, but that also makes them vulnerable.
The past two years have accelerated SaaS adoption much faster than any of us could have imagined and without much input on how to secure this new business stack.
Google Workspace for collaboration. Microsoft Teams for communication. Workday for HR. Salesforce for customer relationship management. The list goes on.
With this new reliance on SaaS, IT and security teams are faced with a new set of problems like files and folders being made public on the Internet, external users joining private chat channels, or an Continue reading
Earlier today, Cloudflare announced that we have acquired Vectrix, a cloud access security broker (CASB) company focused on solving the problem of control and visibility in the SaaS applications and public cloud providers that your team uses.
We are excited to welcome the Vectrix team and their technology to the Cloudflare Zero Trust product group. We don’t believe a CASB should be a point solution. Instead, the features of a CASB should be one component of a comprehensive Zero Trust deployment. Each piece of technology, CASB included, should work better together than it would as a standalone product.
We know that this migration is a journey for most customers. That’s true for our own team at Cloudflare, too. We’ve built our own Zero Trust platform to solve problems for customers at any stage of that journey.
Several years ago, we protected the internal resources that Cloudflare employees needed by creating a private network with hardware appliances. We deployed applications in a data center and made them available to this network. Users inside the San Francisco office connected to a secure Wi-Fi network that placed them on the network.
For everyone else, we punched a Continue reading
I won’t beat around the bush: we’ve moved Cloudflare Email Routing from closed beta to open beta 🎉
What does this mean? It means that there’s no waitlist anymore; every zone* in every Cloudflare account has Email Routing available to it.
To get started just open one of the zones in your Cloudflare Dashboard and click on Email in the navigation pane.
Back in September 2021, during Cloudflare’s Birthday Week, we introduced Email Routing as the simplest solution for creating custom email addresses for your domains without the hassle of managing multiple mailboxes.
Many of us at Cloudflare saw a need for this type of product, and we’ve been using it since before it had a UI. After Birthday Week, we started gradually opening it to Cloudflare customers that requested access through the wait list; starting with just a few users per week and gradually ramping up access as we found and fixed edge cases.
Most recently, with users wanting to set up Email Routing for more of their domains and with some G Suite legacy users looking for an alternative to starting a subscription, we have been onboarding tens of thousands of new zones Continue reading
During Speed Week 2021 we announced a new offering for Enterprise customers, Instant Logs. Since then, the team has not slowed down and has been working on new ways to enable our customers to consume their logs and gain powerful insights into their HTTP traffic in real time.
We recognize that, as developers, we find UIs useful, but sometimes there is the need for a more powerful alternative. Today, I am going to introduce you to Instant Logs in your terminal! In order to get started, we need to install a few open-source tools to help us:
For enterprise zones with access to Instant Logs, you can create a new session by sending a POST request to our jobs endpoint. You will need:
curl -X POST 'https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/logpush/edge/jobs' \
-H 'X-Auth-Key: <KEY>' \
-H 'X-Auth-Email: <EMAIL>' \
-H 'Content-Type: application/json' \
--data-raw '{
"fields": "ClientIP,ClientRequestHost,ClientRequestMethod,ClientRequestPath,EdgeEndTimestamp,EdgeResponseBytes,EdgeResponseStatus,EdgeStartTimestamp,RayID",
"sample": 1,
"filter": "",
"kind": "instant-logs"
}'
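If you prefer to script this instead of using curl, an equivalent request with Python’s requests library looks roughly like the sketch below (ZONE_ID, the API key, and the email are placeholders for your own values):

# Sketch: create an Instant Logs session with Python's requests library,
# mirroring the curl call above. ZONE_ID, <KEY> and <EMAIL> are placeholders.
import requests

ZONE_ID = "your_zone_id"
resp = requests.post(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/logpush/edge/jobs",
    headers={
        "X-Auth-Key": "<KEY>",
        "X-Auth-Email": "<EMAIL>",
        "Content-Type": "application/json",
    },
    json={
        "fields": "ClientIP,ClientRequestHost,ClientRequestMethod,ClientRequestPath,"
                  "EdgeEndTimestamp,EdgeResponseBytes,EdgeResponseStatus,"
                  "EdgeStartTimestamp,RayID",
        "sample": 1,
        "filter": "",
        "kind": "instant-logs",
    },
)
print(resp.json())  # prints the session details returned by the API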
The Continue reading
Chances are you might have heard of io_uring. It first appeared in Linux 5.1, back in 2019, and was advertised as the new API for asynchronous I/O. Its goal was to be an alternative to the deemed-to-be-broken-beyond-repair AIO, the “old” asynchronous I/O API.
Calling io_uring just an asynchronous I/O API doesn’t do it justice, though. Underneath the API calls, io_uring is a full-blown runtime for processing I/O requests. One that spawns threads, sets up work queues, and dispatches requests for processing. All this happens “in the background” so that the user space process doesn’t have to, but can, block while waiting for its I/O requests to complete.
A runtime that spawns threads and manages the worker pool for the developer makes life easier, but using it in a project raises the questions:
1. How many threads will be created for my workload by default?
2. How can I monitor and control the thread pool size?
I could not find the answers to these questions in either the Efficient I/O with io_uring article, or the Lord of the io_uring guide – two well-known pieces of available documentation.
And while a recent enough io_uring man page touches on the Continue reading
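As a rough way to answer the second question yourself, a sketch like the one below counts a process’s io_uring worker threads by reading thread names from /proc. It assumes a kernel recent enough (roughly 5.12 onwards) to name the workers iou-wrk-*; treat it as an approximation, not an API:

# Sketch: count a process's io_uring worker threads by scanning its thread
# names under /proc. Assumes workers are named "iou-wrk-*" on this kernel.
import os, sys

def io_uring_workers(pid: int) -> int:
    count = 0
    task_dir = f"/proc/{pid}/task"
    for tid in os.listdir(task_dir):
        try:
            with open(f"{task_dir}/{tid}/comm") as f:
                if f.read().startswith("iou-wrk"):
                    count += 1
        except FileNotFoundError:
            continue  # the thread exited while we were scanning
    return count

if __name__ == "__main__":
    pid = int(sys.argv[1])
    print(f"pid {pid} has {io_uring_workers(pid)} io_uring worker threads")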
A recent decision from the Austrian Data Protection Authority (the Datenschutzbehörde) has network engineers scratching their heads and EU companies that use Google Analytics scrambling. The Datenschutzbehörde found that an Austrian website’s use of Google Analytics violates the EU General Data Protection Regulation (GDPR) as interpreted by the “Schrems II” case because Google Analytics can involve sending full or truncated IP addresses to the United States.
While disabling such trackers might be one (extreme) solution, doing so would leave website operators blind to how users are engaging with their site. A better approach: find a way to use tools like Google Analytics, but do so with an approach that protects the privacy of personal information and keeps it in the EU, avoiding a data transfer altogether. Enter Cloudflare Zaraz.
But before we get into just how Cloudflare Zaraz can help, we need to explain a bit of the background for the Datenschutzbehörde’s ruling, and why it’s a big deal.
The GDPR is a comprehensive data privacy law that applies to EU residents’ personal data, regardless of where it is processed. The GDPR itself does not insist that personal data must Continue reading
Often programmers have assumptions that turn out, to their surprise, to be invalid. In my experience this happens a lot. Every API, technology or system can be abused beyond its limits and break in a miserable way.
It's particularly interesting when basic things used everywhere fail. Recently we've reached such a breaking point in a ubiquitous part of Linux networking: establishing a network connection using the connect() system call.
Since we are not doing anything special, just establishing TCP and UDP connections, how could anything go wrong? Here's one example: we noticed alerts from a misbehaving server, logged in to check it out and saw:
marek@:~# ssh 127.0.0.1
ssh: connect to host 127.0.0.1 port 22: Cannot assign requested address
You can imagine the face of my colleague who saw that. SSH to localhost refuses to work, while she was already using SSH to connect to that server! On another occasion:
marek@:~# dig cloudflare.com @1.1.1.1
dig: isc_socket_bind: address in use
This time a basic DNS query failed with a weird networking error. Failing DNS is a bad sign!
In both cases the problem was Linux running out of ephemeral ports. When Continue reading
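To get a feel for how close a machine is to this limit, a rough sketch like the following reads the configured ephemeral port range and counts IPv4 TCP sockets whose local port falls inside it (an approximation, since ports can be reused across different four-tuples):

# Sketch: read the ephemeral port range and count IPv4 TCP sockets whose local
# port falls inside it, by parsing /proc/net/tcp. Approximate, Linux-only.
def ephemeral_port_usage():
    with open("/proc/sys/net/ipv4/ip_local_port_range") as f:
        low, high = map(int, f.read().split())

    in_use = set()
    with open("/proc/net/tcp") as f:
        next(f)  # skip the header line
        for line in f:
            local_address = line.split()[1]          # e.g. "0100007F:A6B4"
            port = int(local_address.split(":")[1], 16)
            if low <= port <= high:
                in_use.add(port)

    total = high - low + 1
    print(f"{len(in_use)} of {total} ephemeral ports ({low}-{high}) in use")

ephemeral_port_usage()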
Today we are launching Cloudflare’s paid public bug bounty program. We believe bug bounties are a vital part of every security team’s toolbox and have been working hard on improving and expanding our private bug bounty program over the last few years. The first iteration of our bug bounty was a pure vulnerability disclosure program without cash bounties. In 2018, we added a private bounty program and are now taking the next step to a public program.
Starting today, anyone can report vulnerabilities related to any Cloudflare product to our public bug bounty program, hosted on HackerOne’s platform.
Let's walk through our journey so far.
In 2014, when the company had fewer than 100 employees, we created a responsible disclosure policy to provide a safe place for security researchers to submit potential vulnerabilities to our security team, with some established rules of engagement. A vulnerability disclosure policy is an important first step for a company to take because it is an invitation to researchers to look at company assets without fear of repercussions, provided the researchers follow certain guidelines intended to protect everyone involved. We still stand by that policy and welcome Continue reading