Introducing Low-Latency HLS Support for Cloudflare Stream

Stream Live lets users easily scale their live streaming apps and websites to millions of creators and concurrent viewers without having to worry about bandwidth costs or purchasing hardware for real-time encoding at scale. It lets users focus on the content rather than the infrastructure — taking care of the codecs, protocols, and bitrate automatically. When we launched Stream Live last year, we focused on bringing high quality, feature-rich streaming to websites and applications with HTTP Live Streaming (HLS).

Today, we're excited to introduce support for Low-Latency HTTP Live Streaming (LL-HLS) in a closed beta, offering you an even faster streaming experience. LL-HLS will reduce the latency a viewer may experience on their player from highs of around 30 seconds to less than 10 seconds in many cases. Lower latency brings creators even closer to their viewers, empowering customers to build more interactive features like Q&A or chat and enabling the use of live streaming in more time-sensitive applications like sports, gaming, and live events.

Broadcast with less than 10-second latency

LL-HLS is an extension of HLS and allows us to reduce glass-to-glass latency — the time between something happening on the broadcast end and a user seeing it on Continue reading
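
To make the mechanism a little more concrete, here is a rough sketch of what an LL-HLS media playlist looks like, built as a Python string so the tags can be annotated. It is illustrative only: the segment names, part durations, and hold-back values are invented and are not what Stream Live actually serves.

```python
# Illustrative only: a minimal LL-HLS media playlist. LL-HLS splits each segment
# into short "parts" and lets players hold playlist requests open, which is where
# the latency reduction comes from. All names and durations below are invented.
LL_HLS_PLAYLIST = "\n".join([
    "#EXTM3U",
    "#EXT-X-VERSION:9",
    "#EXT-X-TARGETDURATION:4",
    # Parts (partial segments) of roughly one second each.
    "#EXT-X-PART-INF:PART-TARGET=1.0",
    # CAN-BLOCK-RELOAD lets the player block on playlist requests until a new part
    # exists; PART-HOLD-BACK says how far from the live edge playback may start.
    "#EXT-X-SERVER-CONTROL:CAN-BLOCK-RELOAD=YES,PART-HOLD-BACK=3.0",
    "#EXT-X-MEDIA-SEQUENCE:100",
    # A completed 4-second segment, previously delivered as four 1-second parts.
    '#EXT-X-PART:DURATION=1.0,URI="seg100.part0.mp4"',
    '#EXT-X-PART:DURATION=1.0,URI="seg100.part1.mp4"',
    '#EXT-X-PART:DURATION=1.0,URI="seg100.part2.mp4"',
    '#EXT-X-PART:DURATION=1.0,URI="seg100.part3.mp4"',
    "#EXTINF:4.0,",
    "seg100.mp4",
    # The in-progress segment: parts are published as soon as they are encoded,
    # so the player never waits for a full segment before fetching media.
    '#EXT-X-PART:DURATION=1.0,URI="seg101.part0.mp4"',
    '#EXT-X-PRELOAD-HINT:TYPE=PART,URI="seg101.part1.mp4"',
])

print(LL_HLS_PLAYLIST)
```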

Every request, every microsecond: scalable machine learning at Cloudflare

In this post, we will take you through the advancements we've made in our machine learning capabilities. We'll describe the technical strategies that have enabled us to expand the number of machine learning features and models, all while substantially reducing the processing time for each HTTP request on our network. Let's begin.

Background

For a comprehensive understanding of our evolved approach, it's important to grasp the context within which our machine learning detections operate. Cloudflare, on average, serves over 46 million HTTP requests per second, surging to more than 63 million requests per second during peak times.

Machine learning detection plays a crucial role in ensuring the security and integrity of this vast network. In fact, it classifies the largest volume of requests among all our detection mechanisms, providing the final Bot Score decision for over 72% of all HTTP requests. Beyond that, we run several machine learning models in shadow mode for every HTTP request.

At the heart of our machine learning infrastructure lies our reliable ally, CatBoost. It enables ultra-low-latency model inference and ensures high-quality predictions for detecting novel threats, such as bots targeting our customers' mobile apps. However, it's worth noting that machine learning Continue reading
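
As a hedged illustration of the per-request inference pattern described above (not Cloudflare's actual model, features, or serving stack), the sketch below trains a tiny CatBoost classifier on synthetic data and times single-row predictions:

```python
# A minimal sketch of per-request scoring with CatBoost. The features, labels,
# and model size are synthetic stand-ins, not Cloudflare's bot-detection model.
import time

import numpy as np
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
X = rng.random((10_000, 8))                 # eight made-up request features
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)   # synthetic "bot / not bot" label

model = CatBoostClassifier(iterations=100, depth=6, verbose=False)
model.fit(X, y)

# Score one "request" at a time, as a per-HTTP-request pipeline would.
row = X[:1]
start = time.perf_counter()
for _ in range(1_000):
    model.predict_proba(row)
elapsed = time.perf_counter() - start
print(f"mean single-row inference: {elapsed / 1_000 * 1e6:.1f} microseconds")
```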

How Orpheus automatically routes around bad Internet weather

Cloudflare’s mission is to help build a better Internet for everyone, and Orpheus plays an important role in realizing this mission. Orpheus identifies Internet connectivity outages beyond Cloudflare’s network in real time, then leverages the scale and speed of Cloudflare’s network to find alternative paths around those outages. This ensures that everyone can reach a Cloudflare customer’s origin server no matter what is happening on the Internet. The end result is powerful: Cloudflare protects customers from Internet incidents outside our network while maintaining the average latency and speed of our customers’ traffic.

A little less than two years ago, Cloudflare made Orpheus automatically available to all customers for free. Since then, Orpheus has saved 132 billion Internet requests from failing by intelligently routing them around connectivity outages, prevented 50+ Internet incidents from impacting our customers, and made our customers’ origins more reachable to everyone on the Internet. Let’s dive into how Orpheus accomplished these feats over the last year.
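
The excerpt doesn't describe Orpheus's internals, but the core idea (skip paths that probes report as unhealthy and retry the request over an alternative) can be sketched roughly as follows; the function names and health signals here are invented for illustration:

```python
# Rough illustration of routing around bad Internet weather: prefer the first
# healthy candidate path to the origin and fall back to the next one on failure.
# Path selection, probing, and transport details are placeholders, not Orpheus.
from typing import Callable, Iterable, Optional


def fetch_via(path: str, url: str) -> bytes:
    """Placeholder for 'send the origin request over this network path'."""
    raise NotImplementedError


def fetch_with_failover(
    url: str,
    candidate_paths: Iterable[str],
    is_healthy: Callable[[str], bool],
) -> bytes:
    last_error: Optional[Exception] = None
    for path in candidate_paths:
        if not is_healthy(path):       # skip paths recent probes show as broken
            continue
        try:
            return fetch_via(path, url)
        except OSError as exc:         # reset, timeout, unreachable, and so on
            last_error = exc           # remember the failure, try the next path
    raise ConnectionError(f"no healthy path to origin for {url}") from last_error
```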

Increasing origin reachability

One service that Cloudflare offers is a reverse proxy that receives Internet requests from end users, then applies any number of services like DDoS protection, caching, load balancing, and/or encryption. If the response Continue reading

Smart Hints make code-free performance simple

Today, we’re excited to announce how we’re making Early Hints and Fetch Priorities automatic using the power of Cloudflare’s network. Almost a year ago we launched Early Hints, a method that allows web servers to asynchronously send instructions to the browser whilst the web server is getting the full response ready. This gives the browser proactive suggestions on how to load the webpage faster for the visitor, rather than leaving it to wait idly for the full webpage response.

In initial lab experiments, we observed page load improvements exceeding 30%. Since then, we have sent about two trillion hints on behalf of over 150,000 websites using the product.

In order to effectively use Early Hints on a website, HTTP link headers or HTML link elements must be configured to specify which assets should be preloaded or which third-party servers should be preconnected. Making these decisions requires understanding how your website interacts with browsers, and identifying render-blocking assets to hint on without implementing prioritization strategies that saturate network bandwidth on non-critical assets (i.e. you can’t just Early Hint everything and expect good results).

For Continue reading
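
For reference, the Link configuration described above uses standard preload and preconnect syntax. A hedged sketch of the kind of hints a site might emit, with made-up asset paths and a made-up third-party host:

```python
# Illustrative Link header values a site might configure for Early Hints.
# The asset paths and the fonts host are made up for this example.
early_hints = [
    # Preload render-blocking assets the page will certainly need...
    '</static/app.css>; rel=preload; as=style',
    '</static/app.js>; rel=preload; as=script',
    # ...and warm up connections to third-party origins used early in the page.
    '<https://fonts.example.com>; rel=preconnect',
]

# These values are sent on a 103 Early Hints response (or declared as HTML
# <link> elements) while the server prepares the full 200 response.
for value in early_hints:
    print(f"Link: {value}")
```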

Cloudflare’s global network grows to 300 cities and ever closer to end users with connections to 12,000 networks

We make no secret about how passionate we are about building a world-class global network to deliver the best possible experience for our customers. This means an unwavering and continual dedication to always improving the breadth (number of cities) and depth (number of interconnects) of our network.

This is why we are pleased to announce that Cloudflare is now connected to over 12,000 Internet networks in over 300 cities around the world!

The Cloudflare global network runs every service in every data center so your users have a consistent experience everywhere—whether you are in Reykjavík, Guam or in the vicinity of any of the 300 cities where Cloudflare lives. This means all customer traffic is processed at the data center closest to its source, with no backhauling or performance tradeoffs.

Having Cloudflare’s network present in hundreds of cities globally is critical to providing new and more convenient ways to serve our customers and their customers. However, the breadth of our infrastructure also serves other critical purposes. Let’s take a closer look at the reasons we build and the real-world impact we’ve seen on customer experience:

Reduce latency

Our network allows us to sit approximately 50 ms from 95% Continue reading

How Cloudflare runs machine learning inference in microseconds

Cloudflare executes an array of security checks on servers spread across our global network. These checks are designed to block attacks and prevent malicious or unwanted traffic from reaching our customers’ servers. But every check carries a cost - some amount of computation, and therefore some amount of time must be spent evaluating every request we process. As we deploy new protections, the amount of time spent executing security checks increases.

Latency is a key metric on which CDNs are evaluated. Just as we optimize network latency by provisioning servers in close proximity to end users, we also optimize processing latency - which is the time spent processing a request before serving a response from cache or passing the request forward to the customers’ servers. Due to the scale of our network and the diversity of use-cases we serve, our edge software is subject to demanding specifications, both in terms of throughput and latency.

Cloudflare's bot management module is one suite of security checks which executes during the hot path of request processing. This module calculates a variety of bot signals and integrates directly with our front line servers, allowing us to customize behavior based on those signals. This module Continue reading
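
To make the "every check adds processing latency" point concrete, here is a toy sketch that times a chain of per-request checks and reports percentiles; the checks are stand-ins, not Cloudflare's modules:

```python
# Toy illustration: each security check costs some compute, so per-request
# processing latency grows as checks are added. The work below is a stand-in.
import statistics
import time


def run_checks(request: dict) -> None:
    for _ in range(5):                      # imagine five checks in the hot path
        sum(i * i for i in range(200))      # stand-in for real per-check work


samples = []
for _ in range(10_000):
    start = time.perf_counter()
    run_checks({"path": "/", "ua": "example"})
    samples.append((time.perf_counter() - start) * 1e6)   # microseconds

samples.sort()
print(f"p50: {statistics.median(samples):.1f} microseconds")
print(f"p99: {samples[int(len(samples) * 0.99)]:.1f} microseconds")
```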

How to navigate the co-management conundrum in MSP engagements

Co-management is a key part of many arrangements between enterprise IT teams and their managed service providers (MSP), but it’s not always clear where the management boundaries and overlaps exist and how they should be handled.

Oftentimes, enterprises land on a co-management approach because they don’t want to give up total control, and the MSP may be promising productive cooperation with prospective customers to provide reassurance and close the deal. In practice, co-managed technology services can vary widely depending on the type of services being offered and the parties involved.

For the sake of this article, let’s assume that enterprises are already committed to outsourcing some elements of their IT and communications services to an MSP partner. The benefits of outsourcing – such as expense or headcount reduction, increased expertise, improved productivity, core business focus and enhanced capability – are well established, and the potential risks and concerns – including loss of control, reduced flexibility, dwindling internal expertise and fears about data protection and ownership – are also well known.

To read this article in full, please click here

Tech Byte: DWDM at the Edge with Nokia PSE6 Coherent Optics

Today’s Tech Byte is a discussion on Nokia’s Photonic Service Engine (PSE) optics. Release 6 of its PSEs promises huge changes at the DWDM edge by bringing coherent optical DWDM circuits directly to your Nokia routers and switches: no more costly DWDM shelves and transponders just to terminate a tail circuit, which reduces lead times and provides more options for resilience.

Welcome to Speed Week 2023

What we consider ‘fast’ is changing. In just over a century we’ve cut the time taken to travel to the other side of the world from 28 days to 17 hours. We developed a vaccine for a virus causing a global pandemic in just one year - 10% of the typical time. AI has reduced the time taken to complete software development tasks by 55%. As a society, we are driven by metrics - and the need to beat what existed before.

At Cloudflare we don't focus on metrics of days gone by. We’re not aiming for “faster horses”. Instead we are driven by questions such as “What does it actually look like for users?”, “How is this actually speeding up the Internet?”, and “How does this make the customer faster?”.

This innovation week we are helping users measure what matters. We will cover a range of topics, including how we are fastest at Zero Trust and have the fastest network, plus a deep dive on cache purge and why global purge latency might not be the gold star it's made out to be. We’ll also cover why Time to First Byte is generally a bad measurement. And what Continue reading

Worth Reading: A Primer on Communication Fundamentals

Dip Singh published an excellent primer on communication fundamentals including:

  • Waves: frequency, amplitude, wavelength, phase
  • Composite signals, frequency domain and Fourier transform
  • Bandwidth, fundamental and harmonic frequency
  • Decibels in a nutshell
  • Transmission impairments: attenuation, distortion, noise
  • Principles of modern communications: Nyquist theorem, Shannon’s law, bit and baud rate (both capacity formulas are recapped below)
  • Line encoding techniques, quadrature methods (including QPSK and QAM)

Even if you don’t care about layer-1 technologies, you MUST read it to get at least a basic appreciation of why the stuff you’re using to read this blog post works.
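
For quick reference, the two channel-capacity limits named in the list above, in their standard textbook form (recapped here, not quoted from the primer):

```latex
% Nyquist rate for a noiseless channel of bandwidth B using M signal levels:
C_{\text{Nyquist}} = 2B \log_2 M \quad \text{bits/s}

% Shannon capacity for a channel of bandwidth B with signal-to-noise ratio S/N:
C_{\text{Shannon}} = B \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{bits/s}
```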

Observations on Seven Years of Maintaining Open Source

June 27th marks the seventh anniversary of NetBox, a one-time hobby project which quickly took off and today largely consumes my life. What began as a proof-of-concept solution for the network engineering team at DigitalOcean is now perhaps the most widely deployed network source of truth in the world.

This feels like an opportune time to reflect on some lessons I've learned along the way, with the hope of offering mixed encouragement and caution to those considering a similar path. And as I've felt the urge to pick up blogging again, this post will also serve to share what I've been up to recently.

Most articles about open source in general are boring. Reading about licenses and software governance feels like a punishment. Keenly aware of this fact, I'll do my best to navigate around the theory and stick with observations that are of practical use to the aspiring open source maintainer.

Continue reading