Our mission is to enable developers to build their applications, end to end, on our platform, and to ruthlessly eliminate the limitations that get in the way. Today, we're excited to announce that you can build large, data-intensive applications on our network without breaking the bank: starting today, we're dropping egress fees to zero.
More Affordable: No Egress Fees
Building more on any platform has historically come with a caveat: high data transfer costs, often in the form of egress fees. For data-intensive workloads especially, these fees can come at a steep premium, depending on the provider.
What exactly are data egress fees? They are the costs of retrieving data from a cloud provider. Cloud infrastructure providers generally pay for bandwidth based on capacity, but often bill customers based on the amount of data transferred. Curious to learn more about what this means for end users? We recently wrote an analysis of AWS’ Egregious Egress — a good read if you would like to learn more about the ‘Hotel California’ model AWS has spun up. Effectively, data egress fees lock you into their platform, making you choose your provider based not on Continue reading
Two months ago we launched Cloudflare Images for everyone, and we are amazed by the adoption and the feedback we have received.
Let’s start with some numbers:
More than 70 million images delivered per day on average in the week of November 5 to 12.
More than 1.5 million images have been uploaded so far, growing faster every day.
But we are just getting started, and today we are happy to announce the release of the most requested features. First up is AVIF support for Images: converting as many images as possible to AVIF results in highly compressed, quickly delivered images without compromising on quality.
Second, we introduce blur. Blurring an image, in combination with the already supported protection of private images via signed URLs, makes Cloudflare Images a great solution for previews of paid content.
For many of our customers it is important to be able to serve Images from their own domain and not only via imagedelivery.net. Here we show an easy solution for this using a custom Worker or a special URL.
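For illustration, here is a minimal sketch of the custom Worker approach, assuming the Worker runs on a route like example.com/images/* and proxies through to imagedelivery.net; the account hash and URL layout below are placeholders, not real values:

// Sketch: serve Cloudflare Images from your own domain by proxying to imagedelivery.net.
// ACCOUNT_HASH is a placeholder for your Images account hash; adjust the path mapping to your needs.
const ACCOUNT_HASH = "YOUR_ACCOUNT_HASH"

export default {
  async fetch(request) {
    const url = new URL(request.url)
    // Expect URLs like https://example.com/images/<image_id>/<variant>
    const [, , imageId, variant] = url.pathname.split("/")
    if (!imageId || !variant) {
      return new Response("Not found", { status: 404 })
    }
    // Forward the original headers so content negotiation (e.g. AVIF) keeps working.
    return fetch(`https://imagedelivery.net/${ACCOUNT_HASH}/${imageId}/${variant}`, {
      headers: request.headers,
    })
  },
}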
Last but not least we announce the launch of new attractively priced bundles for both Cloudflare Images and Stream.
Next up on the Developer Spotlight is another favourite of mine. Today’s post is by Jacob Hands. Jacob operates TriTails Premium Beef, which is an online store for meat, a very perishable good. So he has a lot of unique challenges when it comes to shipping. To deal with their growth, Jacob, a developer by trade, turned to Airtable and Cloudflare Workers to automate a lot of their workflow.
One of Jacob’s quotes is one of my favourites:
“Sure, Cloudflare Workers allows you to scale to billions of requests per day, but it is also awesome for a few hundred requests a day.”
Here is Jacob talking about how it only took him a few days to put together a fully customised workflow tool by integrating Airtable and Workers. And how it saves them multiple hours every single day.
Shipping Requirements
Working at a new e-commerce business shipping perishable goods presents several challenges as operations scale up. One of our biggest challenges is that daily shipping throughput is limited, partly because a small workspace limits how many employees can pack orders simultaneously, and partly because, despite having a requested pickup time with UPS, they often show up Continue reading
HTTP headers are central to how the web works. They are used for passing additional information between the client and server, such as which security permissions to apply and information about the client, allowing the correct content to be served.
Today we are announcing the immediate availability of the third action within Transform Rules, “HTTP Response Header Modification”, available for all Cloudflare plans. This new functionality provides Cloudflare users the ability to set or remove HTTP response headers as traffic returns through Cloudflare back to the client. This allows customers to enrich responses with information about how their request was handled, debugging information and even recruitment messages.
Previously, HTTP response header modification was done using a Cloudflare Worker. Today we’re introducing an easier way to do this without writing a single line of code.
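For comparison, here is a minimal sketch of how that looked when done in a Worker; the header names are purely illustrative:

// Sketch: set and remove response headers in a Worker as the response returns to the client.
export default {
  async fetch(request) {
    const upstream = await fetch(request)
    // Re-create the response so its headers become mutable.
    const response = new Response(upstream.body, upstream)
    response.headers.set("X-Debug-Handled-By", "example-worker") // illustrative header
    response.headers.delete("X-Powered-By") // remove a header you would rather not expose
    return response
  },
}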
Luggage tags of the World Wide Web
Think of HTTP headers as the “luggage tag” attached to your bags when you check in at the airport.
Generally, you don't need to know what those numbers and words mean. You just know they are important in getting your suitcase from the boarding desk, to the correct airplane, and back to the correct luggage carousel at your destination.
Today we’re launching the Cloudflare Developer Expert Program: an initiative to support and recognize our VIP users who build with Workers, Pages, and the entire Cloudflare developer ecosystem.
A Cloudflare Developer Expert is an early adopter of new releases, a frequent participant in feedback sessions, and an evangelist for Cloudflare products made for the larger developer community.
But first, what are the benefits of becoming a Cloudflare Developer Expert?
Early access to features (e.g., private betas)
Admission to a private community of power users
Routine calls with product managers, engineers, and developer advocates
Sponsorships for OSS work
Our best swag, of course
We have already sent invites to our first batch of power users, but if you’d like to join or want to nominate a developer, please fill out this form.
Why We Made This Program
We ship very quickly at Cloudflare.
This is because we want feedback early in development, allowing users to challenge our assumptions and validate what we’re building. In the Workers team, this strategy has been very successful.
For example, we began beta testing custom builds for Wrangler (our CLI tool) that allow you to run any JavaScript bundler you want. This was Continue reading
I recently had the opportunity to present our Red Hat Ansible Automation Platform cloud strategy at Cloud Field Day 12.
Cloud Field Day 12 was a three day event that focused on the impact of cloud on enterprise IT. As a presenter, you can use any combination of slides and live demos to foster a discussion with a group of thought leaders. This roundtable included people from many different companies, skill sets, backgrounds and favorite tools. Check out the Cloud Field Day website to see the delegate panel, their backgrounds and Twitter handles. I quite enjoyed, and preferred, the conversational tone of Cloud Field Day, and the delegates who asked questions during the demo made it a lot more interactive.
Red Hat presented three products at Cloud Field Day: Red Hat OpenShift, which is our enterprise-ready Kubernetes container platform, Ansible Automation Platform, which I co-presented with Richard Henshall, our Head of Product and Strategy for Ansible Automation Platform, and finally Red Hat Advanced Cluster Management for Kubernetes, which extends the value of Red Hat OpenShift by deploying apps, managing multiple clusters and enforcing policies across multiple clusters at scale. I will list all three videos below.
Is there a real difference in the underlying hardware of switches and routers in terms of the traffic processing chips and their capabilities in terms of routing and switching (or should I say only switching)?
Let’s get the terminology straight. Router is a technical term for a device that forwards packets based on network layer information. Switch is a marketing term for a device that does something with packets.
Rephrasing the question: is there a hardware difference between a box marketed as a router and another box marketed as a layer-3 switch?
TL&DR: Yes.
Doing packet forwarding at high speeds is expensive, and a simpler forwarding pipeline results in cheaper (or faster) silicon.
If you don’t need complex high-speed functionality (like a thousand interface output queues with a per-flow classifier), you create a simpler ASIC and call the device a switch. If you thrive on overpriced products, you create as complex an ASIC as you can and call the device that uses it a router. EX9200 is an obvious counterexample, but then Juniper always looked like the DEC of networking to me.
There’s even a difference in capabilities between spine- and leaf data Continue reading
Today on Day Two Cloud, we talk about new ways of thinking about security for cloud. As organizations adopt cloud services, they're applying on-prem security designs. Our guest Adeel Ahmad is here to argue that this doesn't work, and that you need a different approach.
The world would be a simpler place for all processing engine makers if they just had to make one device to cover all use cases, thus maximizing volumes and minimizing per unit costs. …
The latest release of the list of the fastest supercomputers in the world showed little movement for an HPC industry that is anxiously waiting for long-discussed exascale systems to come online.
Japan’s massive Fugaku system retained the top spot on the Top500 list of the world’s fastest systems, a position it first reached in the summer of 2020. The latest list was released this week at the start of the …
High performance computing hardware is really a software game, and the software we are referring to is at a very low level where deep expertise in libraries and solvers can make the difference between a capable device performing up to its specifications and, well, not so much. …
Advanced capabilities and analytics take center stage as enterprises consider third-party offerings to span the premises, cloud, and hybrid communications scenarios.
A commonly used term in the sports betting world is handicapper. A handicapper is a person who analyzes sports events to predict the winning team or player. This person (or team) focuses on all the moving pieces in a chaotic or high-stakes environment to make business-critical decisions. Similarly, in managing a multi-cloud environment, organizations have a lot at stake, and they must make crucial operational choices for the sake of security and the end-user experience. Having the ability to spot challenges in advance when moving through a multi-cloud journey will make the difference between success and failure. We’re going to look at three of the key multi-cloud challenges organizations face, as well as a real-life customer success story, William Hill, and how they overcame some of their biggest obstacles in their quest for multi-cloud success.
3 Roadblocks to Multi-Cloud
Regardless of where your organization started, there are three primary challenges you will likely face in moving to multi-cloud. To begin, every cloud is different in the way that it operates. This creates issues when it comes to connecting services across different cloud environments. Second, each cloud has its own methods and APIs when it comes to securing workloads. Thus, the process can lose consistency when different clouds are trying to communicate with one another. Lastly, providing a winning end-user experience requires strong observability within a multi-cloud environment. If that doesn’t exist, the bread and butter of your enterprise is at stake.
So, how do you move past these roadblocks?
There are three must-haves to keep in mind — and to keep you calm, cool, and collected when facing Continue reading
Paid Post Intel has been at the forefront of democratizing high performance computing (HPC) for the past three decades, and the HPC leader is taking its efforts up several more notches with the Aurora exascale HPC and AI supercomputer being designed and built by Intel and Hewlett Packard Enterprise for Argonne National Laboratory. …
When we announced Cloudflare Pages as generally available in April, we promised you it was just the beginning. The journey of our platform started with support for static sites with small bits of dynamic functionality like setting redirects and custom headers. But we wanted to give even more power to you and your teams to begin building the unimaginable. We envisioned a future where your entire application — frontend, APIs, storage, data — could all be deployed with a single commit, easily testable in staging and requiring a single merge to deploy to production. So in the spirit of “Full Stack” Week, we’re bringing you the tools to do just that.
Welcome to the future, everyone. We’re thrilled to announce that Pages is now a Full Stack platform with help from our very own Cloudflare Workers. But how?
Cloudflare Workers provides a serverless execution environment that allows you to create entirely new applications or augment existing ones without configuring or maintaining infrastructure. Before today, it was possible to connect Workers to a Pages project, but it meant installing Wrangler and manually deploying a Worker, writing your app across both Pages and Workers. But we didn’t just want “possible”, we wanted something that came as second nature to you so you wouldn’t have to think twice about adding dynamic functionality to your site.
How it works
By using your repo’s filesystem convention and exporting one or more function handlers, Pages can leverage Workers to deploy serverless functions on your behalf. To begin, simply add a ./functions directory in the root of your project, and inside a JavaScript or TypeScript file, export a function handler. For example, let’s say in your ./functions directory, you have a file, hello.js, containing:
// GET requests to /hello would return "Hello, world!"
export const onRequestGet = () => {
  return new Response("Hello, world!")
}

// POST requests to /hello with a JSON-encoded body would return "Hello, <name>!"
export const onRequestPost = async ({ request }) => {
  const { name } = await request.json()
  return new Response(`Hello, ${name}!`)
}
If you perform a git commit, it will trigger a new Pages build to deploy your dynamic site! During the build pipeline, Pages traverses your directory, mapping the filenames to URLs relative to your repo structure.
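As a rough illustration of that mapping, with hypothetical filenames and assuming the bracketed-segment convention Functions uses for dynamic routes:

./functions/hello.js            =>  example.com/hello
./functions/api/todos/index.js  =>  example.com/api/todos
./functions/api/todos/[id].js   =>  example.com/api/todos/:id   (single dynamic segment)
./functions/docs/[[path]].js    =>  example.com/docs/*          (catch-all, any depth)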
Under the hood, Pages generates Workers which include all your routing and functionality from the source. Functions supports deeply-nested routes, wildcard matching, middleware for things like authentication and error-handling, and more! To demonstrate all of its bells and whistles, we’ve created a blog post to walk through an example full stack application.
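As one small, hedged example of the middleware hook, a _middleware.js file placed in ./functions could wrap every function below it with error handling (the error message format here is ours, not a Pages default):

// ./functions/_middleware.js (sketch): error-handling middleware.
export const onRequest = async ({ next }) => {
  try {
    // Run whatever function (or static asset) matches further down the chain.
    return await next()
  } catch (err) {
    return new Response(`Something went wrong: ${err.message}`, { status: 500 })
  }
}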
Letting you do what you do best
As your site grows in complexity, with Pages’ new full stack functionality, your developer experience doesn’t have to. You can enjoy the workflow you know and love while unlocking even more depth to your site.
Seamlessly build
In the same way we’ve handled builds and deployments with your static sites — with a `git commit` and `git push` — we’ll deploy your functions for you automatically. As long as your directory follows the proper structure, Pages will identify and deploy your functions to our network with your site.
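To make “the proper structure” concrete, a hypothetical project layout might look like the following (only the functions directory name matters; the static output directory depends on your framework and build tool):

my-project/
  functions/        <- function handlers, deployed for you alongside your site
    api/
      hello.js
  public/           <- static assets (your framework's output directory may differ)
  package.json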
Define your bindings
Bindings are a big part of what makes your application a full stack application when you bring your Workers to Pages. We’re so excited to bring to Pages all the bindings you’ve previously used with regular Workers! (A short sketch of reading these bindings from a function follows the list below.)
KV namespace: Our serverless and globally accessible key-value storage solution. Within Pages, you can integrate with any of the KV namespaces you set in your Workers dashboard for your Pages project.
Durable Object namespace: Our strongly consistent coordination primitive that makes connecting WebSockets, handling state and building entire applications a breeze. As with KV, you can set your namespaces within the Workers dashboard and choose from that list within the Pages interface.
Environment variable: An injected value that can be accessed by your functions and is stored as plain-text. You can set your environment variables directly within the Pages interface for both your production and preview environments at build-time and run-time.
Secret (coming soon!): An encrypted environment variable, which cannot be viewed by wrangler or any dashboard interfaces. Secrets are a great home for sensitive data including passwords and API tokens.
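To make the list above concrete, here is a minimal sketch of a function reading a KV namespace and an environment variable; the binding names TODOS and API_HOST are hypothetical and would be configured for your Pages project:

// ./functions/api/todos.js (sketch): bindings arrive on the env object of the function context.
export const onRequestGet = async ({ env }) => {
  const todos = await env.TODOS.get("todo-list", "json") // TODOS: a KV namespace binding
  return new Response(JSON.stringify({ todos, apiHost: env.API_HOST }), {
    headers: { "Content-Type": "application/json" },
  })
}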
Preview deployments — now for your backend too
With the deployment of your serverless functions, you can still enjoy the ease of collaboration and testing like you did previously. Before you deploy to production, you can easily deploy your project to a preview environment to stage your changes. Even with your functions, Pages lets you keep a version history of every commit with a unique URL for each, making it easy to gather feedback whether it’s from a fellow developer, PM, designer or marketer! You can also enjoy the same infinite staging privileges that you did for static sites, with a consistent URL for the latest changes.
Develop and preview locally too
However, we realize that building and deploying every small change just to stage it can be cumbersome if you’re iterating quickly. You can now develop full stack Pages applications with the latest release of our wrangler CLI. Backed by Miniflare, you can run your entire application locally with support for mocked secrets, environment variables, and KV (Durable Objects support coming soon!). Point wrangler at a directory of static assets, or seamlessly connect to your existing tools:
# Install wrangler v2 beta
npm install wrangler@beta
# Serve a folder of static assets
npx wrangler pages dev ./dist
# Or automatically proxy your existing tools
npx wrangler pages dev -- npx react-scripts start
This is just the beginning of Pages' integrations with wrangler. Stay tuned as we continue to enhance your developer experience.
What else can you do?
Everything you can do with HTTP Workers today!
When deploying a Pages application with functions, Pages is compiling and deploying first class Workers on your behalf. This means there is zero functionality loss when deploying a Worker within your Pages application — instead, there are only new benefits to be gained!
Integrate with SvelteKit — out of the box!
SvelteKit is a web framework for building Svelte applications. It’s built and maintained by the Svelte team, which makes it the Svelte user’s go-to solution for all their application needs. Out of the box, SvelteKit allows users to build projects with complex API backends.
As of today, SvelteKit projects can attach and configure the @sveltejs/adapter-cloudflare package. After doing this, the project can be added to Pages and is ready for its first deployment! With Pages, your SvelteKit project(s) can deploy with API endpoints and full server-side rendering support. Better yet, the entire project — including the API endpoints — can enjoy the benefits of preview deployments, too! This, even on its own, is a huge victory for advanced projects that were previously on the Workers adapter. Check out this example to see the SvelteKit adapter for Pages in action!
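For reference, wiring up the adapter typically comes down to a few lines in svelte.config.js; treat this as a sketch of the adapter's documented usage rather than a complete configuration:

// svelte.config.js (sketch): point SvelteKit at the Cloudflare adapter.
import adapter from '@sveltejs/adapter-cloudflare'

export default {
  kit: {
    adapter: adapter()
  }
}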
Use server-side rendering
You are now able to intercept any request that comes into your Pages project. This means that you can define Workers logic that will receive incoming URLs and, instead of serving static HTML, your Worker can render fresh HTML responses with dynamic data.
For example, an application with a product page can define a single product/[id].js file that will receive the id parameter, retrieve the product information from a Workers KV binding, and then generate an HTML response for that page. Compared to a static-site generator approach, this is more succinct and easier to maintain over time since you do not need to build a static HTML page per product at build-time… which may potentially be tens or even hundreds of thousands of pages!
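A hedged sketch of that product page might look like the following; the PRODUCTS KV binding and the markup are illustrative only:

// ./functions/product/[id].js (sketch): server-side render a product page from KV.
export const onRequestGet = async ({ params, env }) => {
  const product = await env.PRODUCTS.get(params.id, "json") // PRODUCTS: hypothetical KV binding
  if (!product) {
    return new Response("Product not found", { status: 404 })
  }
  const html = `<!DOCTYPE html>
<html>
  <body>
    <h1>${product.name}</h1>
    <p>${product.description}</p>
  </body>
</html>`
  return new Response(html, { headers: { "Content-Type": "text/html;charset=UTF-8" } })
}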
Already have a Worker? We’ve got you!
If you already have a single Worker and want to bring it right on over to Pages to reap the developer experience benefits of our platform, our announcement today also enables you to do precisely that. Your build can generate an ES module Worker called _worker.js in the output directory of your project, perform your git commands to deploy, and we’ll take care of the rest! This can be especially advantageous to you if you’re a framework author or have a more complex use case that doesn’t follow our provided file structure.
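In case it helps, here is a minimal sketch of what such a _worker.js might contain; the ASSETS binding we use to fall back to static assets is an assumption on our part, so check the docs for the exact contract:

// _worker.js (sketch): an ES module Worker that handles dynamic routes and falls back to static assets.
export default {
  async fetch(request, env) {
    const url = new URL(request.url)
    if (url.pathname.startsWith("/api/")) {
      return new Response(JSON.stringify({ ok: true }), {
        headers: { "Content-Type": "application/json" },
      })
    }
    // Assumption: env.ASSETS exposes the project's static files to the Worker.
    return env.ASSETS.fetch(request)
  },
}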
Try it at no cost — for a limited time only
We’re thrilled to be releasing our open beta today for everyone to try at no additional cost to your Cloudflare plan. While we will still have limits in place, we are using this open beta period to learn more about how you and your teams are deploying functions with your Pages projects. For the time being, we encourage you to lean into your creativity and build out that site you’ve been thinking about for a long time — without the worry of getting billed.
In just a few short months, when we announce General Availability, you can expect our billing to reflect that of the Workers Bundled plan — after all, these are just Workers under the hood!
Coming up…
As we’re only announcing this release as an open beta, we have some really exciting things planned for the coming weeks and months. We want to improve on the quick and easy Pages developer experience that you're already familiar with by adding support for integrated logging and more analytics for your deployed functions.
Beyond that, we'll be expanding our first-class support for the next generation of frontend frameworks. As we've shown with SvelteKit, Pages' ability to seamlessly deploy both static and dynamic code together enables unbeatable end-user performance and developer ease, and we're excited to unlock that for more people. Work is already underway on making NextJS, NuxtJS, React Server Components, Remix, Shopify Hydrogen and more integrate seamlessly as you develop your full stack apps, so Pages can become the primary home for your preferred frameworks. Stay tuned to this blog for more announcements, or better yet, come join us and help make it happen!
Finally, we’re working to speed up those build times, so you can focus on pushing changes and iterating quickly — without the wait!
Getting started
To get started head over to our Pages docs and check out our demo blog to learn more about how to deploy serverless functions to Pages using Cloudflare Workers.
Of course, what we love most is seeing what you build! Pop into our Discord and show us how you’re using Pages to build your full stack apps.
Chris Coyier has been building on the web for over 15 years. Chris made his mark on the web development world with CSS-Tricks in 2007, one of the web's leading publications for frontend and full-stack developers.
In 2012, Chris co-founded CodePen, which is an online code editor that lives in the browser and allows developers to collaborate and share code examples written in HTML, CSS, and JavaScript.
Due to the nature of CodePen — namely, hosting code and an incredibly popular embedding feature, allowing developers to share their CodePen “pens” around the world — any sort of optimization can have a massive impact on CodePen’s business. Increasingly, CodePen relies on the ability to both execute code and store data on Cloudflare’s network as a first stop for those optimizations. As Chris puts it, CodePen uses Cloudflare Workers for "so many things":
"We pull content from an external CMS and use Workers to manipulate HTML before it arrives to the user's browser. For example, we fetch the original page, fetch the content, then stitch them together for a full response."
Workers allows you to work with responses directly using the native Request/Response classes and, with the addition of our Continue reading
We were so excited to announce support for full stack applications in Cloudflare Pages that we knew we had to show it off in a big way. We've built a sample image-sharing platform to demonstrate how you can add serverless functions right from within Pages with help from Cloudflare Workers. With just one new file in your project, you can add dynamic rendering, interact with other APIs, and persist data with KV and Durable Objects. The possibilities for full-stack applications, in combination with Pages' quick development cycles and unlimited preview environments, give you the power to create almost any application.
Today, we're walking through our example image-sharing platform. We want to be able to share pictures with friends while keeping some images private. We'll build a JSON API with Functions (storing data on KV and Durable Objects), integrate with Cloudflare Images and Cloudflare Access, and use React for our front end.