Archive

Category Archives for "CloudFlare"

Introducing Low-Latency HLS Support for Cloudflare Stream

Stream Live lets users easily scale their live streaming apps and websites to millions of creators and concurrent viewers without having to worry about bandwidth costs or purchasing hardware for real-time encoding at scale. Stream Live lets users focus on the content rather than the infrastructure — taking care of the codecs, protocols, and bitrate automatically. When we launched Stream Live last year, we focused on bringing high quality, feature-rich streaming to websites and applications with HTTP Live Streaming (HLS).

Today, we're excited to introduce support for Low-Latency HTTP Live Streaming (LL-HLS) in a closed beta, offering you an even faster streaming experience. LL-HLS will reduce the latency a viewer may experience on their player from highs of around 30 seconds to less than 10 in many cases. Lower latency brings creators even closer to their viewers, empowering customers to build more interactive features like Q&A or chat and enabling the use of live streaming in more time-sensitive applications like sports, gaming, and live events.

Broadcast with less than 10-second latency

LL-HLS is an extension of HLS and allows us to reduce glass-to-glass latency — the time between something happening on the broadcast end and a user seeing it on Continue reading

Every request, every microsecond: scalable machine learning at Cloudflare

In this post, we will take you through the advancements we've made in our machine learning capabilities. We'll describe the technical strategies that have enabled us to expand the number of machine learning features and models, all while substantially reducing the processing time for each HTTP request on our network. Let's begin.

Background

For a comprehensive understanding of our evolved approach, it's important to grasp the context within which our machine learning detections operate. Cloudflare, on average, serves over 46 million HTTP requests per second, surging to more than 63 million requests per second during peak times.

Machine learning detection plays a crucial role in ensuring the security and integrity of this vast network. In fact, it classifies the largest volume of requests among all our detection mechanisms, providing the final Bot Score decision for over 72% of all HTTP requests. Going beyond, we run several machine learning models in shadow mode for every HTTP request.

At the heart of our machine learning infrastructure lies our reliable ally, CatBoost. It enables ultra-low-latency model inference and high-quality predictions for detecting novel threats, such as bots targeting our customers' mobile apps. However, it's worth noting that machine learning Continue reading
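
The post goes on to detail the CatBoost-based inference pipeline. As a rough, hypothetical illustration of what scoring a single request with a gradient-boosted model looks like, here is a minimal Python sketch; the feature rows, labels, and parameters are invented for the example and are not Cloudflare's actual bot signals or production setup.

```python
# Hypothetical sketch: train a tiny CatBoost classifier and score one request.
# The feature rows and labels below are invented (0 = human, 1 = bot); real
# bot detection relies on far richer request-derived signals.
from catboost import CatBoostClassifier

X = [[0.10, 3, 1], [0.90, 40, 0], [0.20, 5, 1], [0.95, 60, 0]]
y = [0, 1, 0, 1]

model = CatBoostClassifier(iterations=50, depth=3, verbose=False)
model.fit(X, y)

# predict_proba returns [P(human), P(bot)]; fast single-row inference on a
# trained gradient-boosted tree model is what makes per-request scoring feasible.
request_features = [[0.85, 45, 0]]
bot_probability = model.predict_proba(request_features)[0][1]
print(f"bot probability: {bot_probability:.3f}")
```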

How Orpheus automatically routes around bad Internet weather

Cloudflare’s mission is to help build a better Internet for everyone, and Orpheus plays an important role in realizing this mission. Orpheus identifies Internet connectivity outages beyond Cloudflare’s network in real time, then leverages the scale and speed of Cloudflare’s network to find alternative paths around those outages. This ensures that everyone can reach a Cloudflare customer’s origin server no matter what is happening on the Internet. The end result is powerful: Cloudflare protects customers from Internet incidents outside our network while maintaining the average latency and speed of our customers’ traffic.

A little less than two years ago, Cloudflare made Orpheus automatically available to all customers for free. Since then, Orpheus has saved 132 billion Internet requests from failing by intelligently routing them around connectivity outages, prevented 50+ Internet incidents from impacting our customers, and made our customers’ origins more reachable to everyone on the Internet. Let’s dive into how Orpheus accomplished these feats over the last year.

Increasing origin reachability

One service that Cloudflare offers is a reverse proxy that receives Internet requests from end users and then applies any number of services like DDoS protection, caching, load balancing, and/or encryption. If the response Continue reading

Smart Hints make code-free performance simple

This post is also available in 简体中文, 日本語, Deutsch, Français and Español.

Today, we’re excited to announce how we’re making Early Hints and Fetch Priorities automatic using the power of Cloudflare’s network. Almost a year ago we launched Early Hints. Early Hints are a method that allows web servers to asynchronously send instructions to the browser whilst the web server is getting the full response ready. This gives the browser proactive suggestions on how to load the webpage faster for the visitor, rather than leaving it to idly wait for the full webpage response.

In initial lab experiments, we observed page load improvements exceeding 30%. Since then, we have sent about two trillion hints on behalf of over 150,000 websites using the product.

In order to effectively use Early Hints on a website, HTTP link headers or HTML link elements must be configured to specify which assets should be preloaded or which third-party servers should be preconnected. Making these decisions requires understanding how your website interacts with browsers, and identifying render-blocking assets to hint on without implementing prioritization strategies that saturate network bandwidth on non-critical assets (i.e. you can’t just Early Hint everything and expect good results).
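
As a concrete illustration of what that manual configuration involves, the sketch below composes the Link header values a server might send (the same values a 103 Early Hints response carries). The asset paths and third-party host are placeholders for the example, not recommendations.

```python
# Minimal sketch: building HTTP Link header values for preload/preconnect
# hints. The asset paths and font host below are placeholders.
critical_assets = [
    ("/static/app.css", "style"),
    ("/static/hero.js", "script"),
]
third_party_origins = ["https://fonts.example.com"]

link_values = [f"<{path}>; rel=preload; as={kind}" for path, kind in critical_assets]
link_values += [f"<{origin}>; rel=preconnect" for origin in third_party_origins]

# A single Link header may carry multiple comma-separated values.
headers = {"Link": ", ".join(link_values)}
print(headers["Link"])
```

The hard part, as noted above, is deciding which assets belong in that list in the first place; hinting everything tends to saturate bandwidth on non-critical assets.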

For Continue reading

Cloudflare’s global network grows to 300 cities and ever closer to end users with connections to 12,000 networks

We make no secret about how passionate we are about building a world-class global network to deliver the best possible experience for our customers. This means an unwavering and continual dedication to always improving the breadth (number of cities) and depth (number of interconnects) of our network.

This is why we are pleased to announce that Cloudflare is now connected to over 12,000 Internet networks in over 300 cities around the world!

The Cloudflare global network runs every service in every data center so your users have a consistent experience everywhere—whether you are in Reykjavík, Guam or in the vicinity of any of the 300 cities where Cloudflare lives. This means all customer traffic is processed at the data center closest to its source, with no backhauling or performance tradeoffs.

Having Cloudflare’s network present in hundreds of cities globally is critical to providing new and more convenient ways to serve our customers and their customers. However, the breadth of our infrastructure network serves other critical purposes. Let’s take a closer look at the reasons we build and the real-world impact we’ve seen on customer experience:

Reduce latency

Our network allows us to sit approximately 50 ms from 95% Continue reading

How Cloudflare runs machine learning inference in microseconds

Cloudflare executes an array of security checks on servers spread across our global network. These checks are designed to block attacks and prevent malicious or unwanted traffic from reaching our customers’ servers. But every check carries a cost - some amount of computation, and therefore some amount of time must be spent evaluating every request we process. As we deploy new protections, the amount of time spent executing security checks increases.

Latency is a key metric on which CDNs are evaluated. Just as we optimize network latency by provisioning servers in close proximity to end users, we also optimize processing latency - which is the time spent processing a request before serving a response from cache or passing the request forward to the customers’ servers. Due to the scale of our network and the diversity of use-cases we serve, our edge software is subject to demanding specifications, both in terms of throughput and latency.

Cloudflare's bot management module is one suite of security checks which executes during the hot path of request processing. This module calculates a variety of bot signals and integrates directly with our front line servers, allowing us to customize behavior based on those signals. This module Continue reading

Welcome to Speed Week 2023

What we consider ‘fast’ is changing. In just over a century we’ve cut the time taken to travel to the other side of the world from 28 days to 17 hours. We developed a vaccine for a virus causing a global pandemic in just one year - 10% of the typical time. AI has reduced the time taken to complete software development tasks by 55%. As a society, we are driven by metrics - and the need to beat what existed before.

At Cloudflare we don't focus on metrics of days gone by. We’re not aiming for “faster horses”. Instead we are driven by questions such as “What does it actually look like for users?”, “How is this actually speeding up the Internet?”, and “How does this make the customer faster?”.

This innovation week we are helping users measure what matters. We will cover a range of topics, including how we are fastest at Zero Trust and have the fastest network, plus a deep dive on cache purge and why global purge latency might not be the gold star it's made out to be. We'll also cover why Time to First Byte is generally a bad measurement. And what Continue reading

Exam-related Internet shutdowns in Iraq and Algeria put connectivity to the test

Over the last several years, governments in a number of countries in the Middle East/Northern Africa (MENA) region have taken to implementing widespread Internet shutdowns in an effort to prevent cheating on nationwide academic exams. Although it is unclear whether such shutdowns are actually successful in curbing cheating, it is clear that they take a financial toll on the impacted countries, with estimated losses in the millions of US dollars.

During the first two weeks of June 2023, we’ve seen Iraq implementing a series of multi-hour shutdowns that will reportedly occur through mid-July, as well as Algeria taking similar actions to prevent cheating on baccalaureate exams. Shutdowns in Syria were reported to begin on June 7, but there’s been no indication of them in traffic data as of this writing (June 13). These actions echo those taken in Iraq, Syria, Sudan, and Algeria in 2022 and in Syria and Sudan in 2021.

(Note: The interactive graphs below have been embedded directly into the blog post using a new Cloudflare Radar feature. This post is best viewed in landscape mode when on a mobile device.)

Iraq

Iraq had reportedly committed on May 15 to not implementing Internet shutdowns during the Continue reading

Protecting GraphQL APIs from malicious queries

Starting today, Cloudflare’s API Gateway can protect GraphQL APIs against malicious requests that may cause a denial of service to the origin. In particular, API Gateway will now protect against two of the most common GraphQL abuse vectors: deeply nested queries and queries that request more information than they should.

Typical RESTful HTTP APIs contain tens or hundreds of endpoints. GraphQL APIs differ by typically only providing a single endpoint for clients to communicate with and offering highly flexible queries that can return variable amounts of data. While GraphQL’s power and usefulness rests on the flexibility to query an API about only the specific data you need, that same flexibility adds an increased risk of abuse. Abusive requests to a single GraphQL API can place disproportional load on the origin, abuse the N+1 problem, or exploit a recursive relationship between data dimensions. In order to add GraphQL security features to API Gateway, we needed to obtain visibility inside the requests so that we could apply different security settings based on request parameters. To achieve that visibility, we built our own GraphQL query parser. Read on to learn about how we built the parser and the security features it enabled.
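
To make the "deeply nested queries" vector concrete, here is a toy Python depth check of the kind a GraphQL-aware gateway can apply before forwarding a query to the origin. It is a simplified brace counter for illustration only, not Cloudflare's parser, and the depth limit of 5 is an arbitrary example value.

```python
# Toy sketch: reject GraphQL queries whose selection sets nest deeper than a
# configured limit. This brace counter ignores braces inside string literals
# and is not a real GraphQL parser.
MAX_DEPTH = 5  # arbitrary example limit

def query_depth(query: str) -> int:
    depth = max_depth = 0
    for ch in query:
        if ch == "{":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == "}":
            depth -= 1
    return max_depth

query = "{ user { posts { comments { author { friends { name } } } } } }"
if query_depth(query) > MAX_DEPTH:
    print("rejected: query too deeply nested")
else:
    print("allowed")
```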

Continue reading

Cloudflare Area 1 earns SOC 2 report

Cloudflare Area 1 is a cloud-native email security service that identifies and blocks attacks before they hit user inboxes, enabling more effective protection against spear phishing, Business Email Compromise (BEC), and other advanced threats. Cloudflare Area 1 is part of the Cloudflare Zero Trust platform and an essential component of a modern security and compliance strategy, helping organizations to reduce their attack surface, detect and respond to threats faster, and improve compliance with industry regulations and security standards.

This announcement is another step in our commitment to maintaining a strong security posture.

Our SOC 2 Journey

Many customers want assurance that the sensitive information they send to us can be kept safe. One of the best ways to provide this assurance is a SOC 2 Type II report. We decided to obtain the report as it is the best way for us to demonstrate the controls we have in place to keep Cloudflare Area 1 and its infrastructure secure and available.  

Cloudflare Area 1’s SOC 2 Type II report covers a three-month period from 1 January 2023 to 31 March 2023. Our auditors assessed the operating effectiveness of the 70 controls we’ve implemented to meet the Continue reading

Understand the impact of Waiting Room settings with Waiting Room Analytics

In January 2021, we gave you a behind-the-scenes look at how we built Waiting Room on Cloudflare’s Durable Objects. Today, we are thrilled to announce the launch of Waiting Room Analytics and tell you more about how we built this feature. Waiting Room Analytics offers insights into end-user experience and provides visualizations of your waiting room traffic. These new metrics enable you to make well-informed configuration decisions, ensuring an optimal end-user experience while protecting your site from overwhelming traffic spikes.

If you’ve ever bought tickets for a popular concert online, you’ll likely have been put in a virtual queue. That’s what Waiting Room provides. It keeps your site up and running in the face of overwhelming traffic surges. Waiting Room sends excess visitors to a customizable virtual waiting room and admits them to your site as spots become available.
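
Conceptually, the admission logic behind a virtual waiting room can be pictured in a few lines of Python. The capacity value and data structures below are purely illustrative; the actual product is built on Durable Objects and tracks far more state than this.

```python
# Conceptual sketch of waiting room admission: visitors beyond the configured
# capacity are queued and admitted in order as active users leave. Numbers and
# structure are illustrative only.
from collections import deque

CAPACITY = 100            # illustrative cap on concurrent active users
active_users: set[str] = set()
queue: deque[str] = deque()

def arrive(user_id: str) -> str:
    if len(active_users) < CAPACITY:
        active_users.add(user_id)
        return "admitted"
    queue.append(user_id)
    return "waiting"

def leave(user_id: str) -> None:
    active_users.discard(user_id)
    if queue:                         # a spot opened up: admit the next visitor
        active_users.add(queue.popleft())
```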

While customers have come to rely on the protection Waiting Room provides against traffic surges, they have faced challenges analyzing their waiting room’s performance and impact on end-user flow. Without feedback about waiting room traffic as it relates to waiting room settings, it was challenging to make Waiting Room configuration decisions.

Up until now, customers could only monitor their waiting room's Continue reading

Examining HTTP/3 usage one year on

In June 2022, after the publication of a set of HTTP-related Internet standards, including the RFC that formally defined HTTP/3, we published HTTP RFCs have evolved: A Cloudflare view of HTTP usage trends. One year on, as the RFC reaches its first birthday, we thought it would be interesting to look back at how these trends have evolved over the last year.

Our previous post reviewed usage trends for HTTP/1.1, HTTP/2, and HTTP/3 observed across Cloudflare’s network between May 2021 and May 2022, broken out by version and browser family, as well as for search engine indexing and social media bots. At the time, we found that browser-driven traffic was overwhelmingly using HTTP/2, although HTTP/3 usage was showing signs of growth. Search and social bots were mixed in terms of preference for HTTP/1.1 vs. HTTP/2, with little-to-no HTTP/3 usage seen.

Between May 2022 and May 2023, we found that HTTP/3 usage in browser-retrieved content continued to grow, but that search engine indexing and social media bots continued to effectively ignore the latest version of the web’s core protocol. (Having said that, the benefits of HTTP/3 are very user-centric, and arguably offer minimal benefits to Continue reading

Nine years of Project Galileo and how the last year has changed it

If you follow Cloudflare, you know that Birthday Week is a big deal. We’ve taken a similar approach to Project Galileo since its founding in 2014. For the anniversary, we typically give an overview of what we have learned to protect the most vulnerable in the last year and announce new product features, partnerships, and how we’ve been able to expand the project.

When our Cloudflare Impact team was preparing for the anniversary, we noticed a theme. Many of the projects we worked on throughout the year involved Project Galileo. From access to new products and the development of privacy-enhancing technologies to collaborations with civil society and governments, we saw that the project played a role in either facilitating conversations with the right people or bridging gaps.

After reflecting on the last year, we’ve seen a project that was initially intended to keep journalism and media sites online grow into more. So, for this year, in addition to new announcements, we want to take the time to reflect on how we have seen Project Galileo transform and how we look toward the future in protecting the most vulnerable on the Internet.

Project Galileo +

The original goal of Project Galileo was simple. Although Continue reading

Dynamic data collection with Zaraz Worker Variables

Bringing dynamic data to the server

Since its inception, Cloudflare Zaraz, the server-side third-party manager built for speed, privacy and security, has strived to offer a way for marketers and developers alike to get the data they need to understand their user journeys, without compromising on page performance. Cloudflare Zaraz makes it easy to transition from traditional client-side data collection based on marketing pixels in users’ browsers, to a server-side paradigm that shares events with vendors from the edge.

When implementing data collection on websites or mobile applications, analysts and digital marketers usually first define the set of interactions and attributes they want to measure, formalizing those requirements alongside technical specifications in a central document (“tagging plan”). Developers will later implement the required code to make those attributes available for the third-party manager to pick up. For instance, an analyst may want to analyze page views based on an internal name instead of the page title or page pathname. They would therefore define an example “page name” attribute that would need to be made available in the context of the page, by the developer. From there, the analyst would configure the tag management system to pick the attribute’s Continue reading

Cloudflare is deprecating Railgun

Cloudflare will deprecate the Railgun product on January 31, 2024. At that time, existing Railgun deployments and connections will stop functioning. Customers have the next eight months to migrate to a supported Cloudflare alternative which will vary based on use case.

Cloudflare first launched Railgun more than ten years ago. Since then, we have released several products in different areas that better address the problems that Railgun set out to solve. However, we shied away from the work to formally deprecate Railgun.

That reluctance led to Railgun stagnating and customers suffered the consequences. We did not invest time in better support for Railgun. Feature requests never moved. Maintenance work needed to occur and that stole resources away from improving the Railgun replacements. We allowed customers to deploy a zombie product and, starting with this deprecation, we are excited to correct that by helping teams move to significantly better alternatives that are now available in Cloudflare’s network.

We know that this will require migration effort from Railgun customers over the next eight months. We want to make that as smooth as possible. Today’s announcement features recommendations on how to choose a replacement, how to get started, and guidance on where you Continue reading

Reduce latency and increase cache hits with Regional Tiered Cache

Today we’re excited to announce an update to our Tiered Cache offering: Regional Tiered Cache.

Tiered Cache allows customers to organize Cloudflare data centers into tiers so that only some “upper-tier” data centers can request content from an origin server, and then send content to “lower-tiers” closer to visitors. Tiered Cache helps content load faster for visitors, makes it cheaper to serve, and reduces origin resource consumption.

Regional Tiered Cache provides an additional layer of caching for Enterprise customers who have a global traffic footprint and want to serve content faster by avoiding network latency when there is a cache miss in a lower-tier, resulting in an upper-tier fetch in a data center located far away. In our trials, customers who have enabled Regional Tiered Cache have seen a 50-100ms improvement in tail cache hit response times from Cloudflare’s CDN.
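
To picture how the tiers interact, the Python sketch below models a cache miss walking from a lower tier through a regional tier to an upper tier and finally the origin, with each tier storing the response on the way back so later requests hit closer to the visitor. This is a conceptual illustration of the idea, not Cloudflare's implementation.

```python
# Conceptual sketch of a tiered cache lookup: lower tier -> regional tier ->
# upper tier -> origin, with each tier filling itself on the way back.
from typing import Callable, Optional

def fetch_from_origin(key: str) -> bytes:
    return b"response for " + key.encode()

class Tier:
    def __init__(self, name: str, parent: Optional["Tier"],
                 origin_fetch: Callable[[str], bytes]):
        self.name = name
        self.parent = parent
        self.origin_fetch = origin_fetch
        self.store: dict[str, bytes] = {}

    def get(self, key: str) -> bytes:
        if key in self.store:
            print(f"HIT at {self.name}")
            return self.store[key]
        print(f"MISS at {self.name}")
        # Only the top tier contacts the origin; everyone else asks its parent.
        body = self.parent.get(key) if self.parent else self.origin_fetch(key)
        self.store[key] = body
        return body

upper = Tier("upper tier (near origin)", None, fetch_from_origin)
regional = Tier("regional tier", upper, fetch_from_origin)
lower = Tier("lower tier (near visitor)", regional, fetch_from_origin)

lower.get("/index.html")   # misses all the way to the origin
lower.get("/index.html")   # now a hit at the lower tier
```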

What problem does Tiered Cache help solve?

First, a quick refresher on caching: a request for content is initiated from a visitor on their phone or computer. This request is generally routed to the closest Cloudflare data center. When the request arrives, we look to see if we have the content cached to respond to Continue reading

How Oxy uses hooks for maximum extensibility

We recently introduced Oxy, our Rust framework for building proxies. Through a YAML file, Oxy allows applications to easily configure listeners (e.g. IP, MASQUE, HTTP/1), telemetry, and much more. However, when it comes to application logic, a programming language is often a better tool for the job. That’s why in this post we’re introducing Oxy’s rich dependency injection capabilities for programmatically modifying all aspects of a proxy.

The idea of extending proxies with scripting is well established: we've had great past success with Lua in our OpenResty/NGINX deployments and there are numerous web frameworks (e.g. Express) with middleware patterns. While Oxy is geared towards the development of forward proxies, they all share the model of a pre-existing request pipeline with a mechanism for integrating custom application logic. However, the use of Rust greatly helps developer productivity when compared to embedded scripting languages. Having confidence in the types and mutability of objects being passed to and returned from callbacks is wonderful.

Oxy exports a series of hook traits that “hook” into the lifecycle of a connection, not just a request. Oxy applications need to control almost every layer of the OSI model: how Continue reading

Unbounded memory usage by TCP for receive buffers, and how we fixed it

At Cloudflare, we are constantly monitoring and optimizing the performance and resource utilization of our systems. Recently, we noticed that some of our TCP sessions were allocating more memory than expected.

The Linux kernel allows TCP sessions that match certain characteristics to ignore memory allocation limits set by autotuning and allocate excessive amounts of memory, all the way up to net.ipv4.tcp_rmem max (the per-session limit). On Cloudflare’s production network, there are often many such TCP sessions on a server, causing the total amount of allocated TCP memory to reach net.ipv4.tcp_mem thresholds (the server-wide limit). When that happens, the kernel imposes memory use constraints on all TCP sessions, not just the ones causing the problem. Those constraints have a negative impact on throughput and latency for the user. Internally within the kernel, the problematic sessions trigger TCP collapse processing, “OFO” pruning (dropping of packets already received and sitting in the out-of-order queue), and the dropping of newly arriving packets.
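
For reference, the two limits mentioned above live in procfs on a Linux host and can be read directly; a small sketch is below. tcp_rmem is the per-session receive-buffer triple in bytes, while tcp_mem is the server-wide threshold triple expressed in pages.

```python
# Minimal sketch: read the per-session and server-wide TCP memory limits the
# post refers to. Run on a Linux host; values are kernel defaults or whatever
# the operator has tuned them to.
def read_triple(path: str) -> tuple[int, int, int]:
    with open(path) as f:
        low, default, high = (int(x) for x in f.read().split())
    return low, default, high

rmem = read_triple("/proc/sys/net/ipv4/tcp_rmem")   # per-session, in bytes
mem = read_triple("/proc/sys/net/ipv4/tcp_mem")     # server-wide, in pages

print(f"per-session receive buffer (bytes): min={rmem[0]} default={rmem[1]} max={rmem[2]}")
print(f"server-wide TCP memory (pages):     low={mem[0]} pressure={mem[1]} max={mem[2]}")
```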

This blog post describes in detail the root cause of the problem and shows the test results of a solution.

TCP receive buffers are excessively big for some sessions

Our journey began when we started noticing a lot Continue reading

Recapping Developer Week

Developer Week 2023 is officially a wrap. Last week, we shipped 34 posts highlighting what has been going on with our developer platform and where we’re headed in the future – including new products & features, in-depth tutorials to help you get started, and customer stories to inspire you.

We’ve loved already hearing feedback from you all about what we’ve shipped:

Launching our new Open Source Software Sponsorships Program

In 2018, we first launched our Open Source Software Sponsorships program, and since then we've been listening to your feedback and have realized that it's time to introduce a fresh and enhanced version of the program that's more inclusive and better addresses the needs of the OSS community.

A subset of open source projects on Cloudflare.

Previously, our sponsorship focused on engineering tools, but we're excited to announce that we've now opened it to any non-profit, open source project.

Program criteria and eligibility

To qualify for our Open Source Sponsorship Program, projects must be open source and meet the following criteria:

  1. Operate on a non-profit basis.
  2. Include a link back to our home page.

Please keep in mind that this program isn't intended for event sponsorships, but rather for project-based support.

Sponsorship benefits

As part of our sponsorship program, we offer the following benefits to projects:

Can Cloudflare help your open source project be successful and sustainable? Fill out the application form to submit your project Continue reading
