Making static sites dynamic with Cloudflare D1

Introduction

There are many ways to store data in your applications. For example, in Cloudflare Workers applications, we have Workers KV for key-value storage and Durable Objects for real-time, coordinated storage without compromising on consistency. Outside the Cloudflare ecosystem, you can also plug in other tools like NoSQL and graph databases.

But sometimes, you want SQL. Indexes allow us to retrieve data quickly. Joins enable us to describe complex relationships between different tables. SQL declaratively describes how our application's data is validated, created, and efficiently queried.
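
As a quick, hypothetical illustration (the `posts` and `authors` tables and the `DB` binding are made up for this sketch, not from the original post), here is what an index and a join look like when queried from a Worker with D1's SQLite dialect:

```ts
// Hypothetical schema: authors(id, name) and posts(id, author_id, title, created_at).
export default {
  async fetch(request: Request, env: { DB: D1Database }): Promise<Response> {
    // An index speeds up queries that filter posts by author.
    await env.DB.prepare(
      "CREATE INDEX IF NOT EXISTS idx_posts_author ON posts (author_id)"
    ).run();

    // A join declaratively ties posts back to their authors.
    const { results } = await env.DB.prepare(
      `SELECT authors.name, posts.title
         FROM posts
         JOIN authors ON authors.id = posts.author_id
        ORDER BY posts.created_at DESC
        LIMIT ?`
    ).bind(10).all();

    return Response.json(results);
  },
};
```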

D1 was released today in open alpha, and to celebrate, I want to share my experience building apps with D1: specifically, how to get started, and why I’m excited about D1 joining the long list of tools you can use to build apps on Cloudflare.

D1 is remarkable because it's an instant value-add to applications without needing new tools or stepping out of the Cloudflare ecosystem. Using wrangler, we can do local development on our Workers applications, and with the addition of D1 in wrangler, we can now develop proper stateful applications locally as well. Then, when it's time to deploy the application, wrangler allows us to both access and execute commands to Continue reading
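
For context, here is roughly what wiring a D1 database into a Worker looks like in wrangler.toml; treat this as a sketch, since the database name and id below are placeholders:

```toml
name = "my-worker"
main = "src/index.ts"

[[d1_databases]]
binding = "DB"                 # available in the Worker as env.DB
database_name = "my-database"  # placeholder name
database_id = "<database-id>"  # printed by `wrangler d1 create my-database`
```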

Automate an isolated browser instance with just a few lines of code

If you’ve ever created a website that shows any kind of analytics, you’ve probably also thought about adding a “Save Image” or “Save as PDF” button to store and share results. This isn’t as easy as it seems (I can attest to this firsthand) and it’s not long before you go down a rabbit hole of trying 10 different libraries, hoping one will work.

This is why we’re excited to announce a private beta of the Workers Browser Rendering API, improving the browser automation experience for developers. With browser automation, you can programmatically do anything that a user can do when interacting with a browser.

The Workers Browser Rendering API, or just Rendering API for short, is our out-of-the-box solution for simplifying developer workflows, including capturing images or screenshots, by running browser automation in Workers.
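
To give a feel for the workflow, here is a minimal screenshot Worker using the Puppeteer-style bindings the Rendering API exposes; the `MYBROWSER` binding name and target URL are illustrative, not from the announcement:

```ts
import puppeteer from "@cloudflare/puppeteer";

interface Env {
  MYBROWSER: Fetcher; // the browser binding configured in wrangler.toml
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Launch an isolated browser instance and screenshot a page.
    const browser = await puppeteer.launch(env.MYBROWSER);
    const page = await browser.newPage();
    await page.goto("https://example.com/dashboard");
    const screenshot = await page.screenshot();
    await browser.close();

    return new Response(screenshot, {
      headers: { "content-type": "image/png" },
    });
  },
};
```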

Browser automation, everywhere

As with many of the best Cloudflare products, Rendering API was born out of an internal need. Many of our teams were setting up or wanted to set up their own tools to perform what sounds like an incredibly simple task: taking automated screenshots.

When gathering use cases, we realized that much of what our internal teams wanted would also be useful Continue reading

Xata Workers: client-side database access without client-side secrets

We’re excited to have Xata building their serverless functions product – Xata Workers – on top of Workers for Platforms. Xata Workers act as middleware to simplify database access and allow their developers to deploy functions that sit in front of their databases. Workers for Platforms opens up a whole suite of use cases for Xata developers all while providing the security, scalability and performance of Cloudflare Workers.

Now, handing it over to Alexis, a Senior Software Engineer at Xata, to tell us more.

Introduction

In the last few years, there has been a rise of Jamstack and new ways of thinking about the cloud that some people call serverless or edge computing. Instead of maintaining dedicated servers to run a single service, these architectures split applications into smaller services or functions.

By simplifying the state and context of our applications, we can benefit from external providers deploying these functions across dozens of servers worldwide. This architecture benefits the developer and user experience alike: developers don’t have to manage servers, and users don’t have to experience latency. Your application simply scales, even if you receive hundreds of thousands of unexpected visitors.

When it comes to databases though, we still Continue reading

BGP in ipSpace.net Design Clinic

The ipSpace.net Design Clinic has been running for a bit over a year. We covered tons of interesting technologies and design challenges, resulting in over 13 hours of content (so far), including several BGP-related discussions:

All the Design Clinic discussions are available with Standard or Expert ipSpace.net Subscription, and anyone can submit new design/discussion challenges.

Using the MITRE ATT&CK framework to understand container security

As innovations in the world of application development and data computation grow every year, the “attack surface” of these technologies grows as well. The attack surface has to be understood from two sides: the attacker’s side and the organization being attacked. Attackers benefit from the ever-growing perimeter of public-facing applications and from the people managing those applications, because more entry points mean a higher probability of finding a weakness. For the organization, managing the attack surface requires investing in more security tools and personnel, and an unmanaged attack surface can cascade into bigger security issues, which is why addressing it is essential.

The MITRE adversarial tactics, techniques, and common knowledge (ATT&CK) framework can help us understand how this large attack surface can be exploited by an adversary and how they strategize an attack. In this two-part blog, I will cover the new ATT&CK matrix for containers and how Calico provides mitigation solutions for each tactic in the matrix. In this blog, we will explore the first four tactics, which mostly deal with reconnaissance. In the second part, we will discuss the techniques and mitigation strategies once an attacker Continue reading

DPUs And The Future Of Distributed Infrastructure: A Packet Pushers Livestream Event

If you want to understand Data Processing Units (DPUs) and how they might impact your work as a network or infrastructure professional, join the Packet Pushers and Dell Technologies for a sponsored Livestream event on December 13th. What’s A DPU? Is It Like A DUI? It’s definitely not a DUI. DPUs are dedicated hardware that […]

The post DPUs And The Future Of Distributed Infrastructure: A Packet Pushers Livestream Event appeared first on Packet Pushers.

Scientific network tags (scitags)

The data shown in the chart was gathered from The International Conference for High Performance Computing, Networking, Storage, and Analysis (SC22) being held this week in Dallas. The conference network, SCinet, is described as the fastest and most powerful network on Earth, connecting the SC community to the world. The chart shows data generated as part of the Packet Marking for Networked Scientific Workflows demonstration using SCinet - Booth 2847 (StarLight).

Scientific network tags (scitags) is an initiative promoting identification of science domains and their high-level activities at the network level. Participants include dCache, ESnet, GÉANT, Internet2, Jisc, NORDUnet, OFTS, OSG, RNP, RUCIO, StarLight, and XRootD.

This article will demonstrate how industry standard sFlow telemetry streaming from switches and routers can be used to report on science domain activity in real-time using the sFlow-RT analytics engine.

The scitags initiative makes use of the IPv6 packet header to mark traffic. Experiment and activity identifiers are encoded in the IPv6 Flow label field. Identifiers are published in an online registry in the form of a JSON document, https://www.scitags.org/api.json.
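
As a rough sketch of the idea only: the normative bit layout (including entropy bits) is defined by the scitags specification, so the field widths below are assumptions for illustration, not the real encoding:

```ts
// Illustrative only: assume the 20-bit IPv6 flow label carries an
// experiment id in the high bits and an activity id in the low bits.
// The real scitags spec defines the normative layout and entropy bits.
const ACTIVITY_BITS = 6;

function packFlowLabel(experimentId: number, activityId: number): number {
  return ((experimentId << ACTIVITY_BITS) | activityId) & 0xfffff; // keep 20 bits
}

function unpackFlowLabel(label: number): { experimentId: number; activityId: number } {
  return {
    experimentId: label >>> ACTIVITY_BITS,
    activityId: label & ((1 << ACTIVITY_BITS) - 1),
  };
}

// Example: experiment 42, activity 3.
const label = packFlowLabel(42, 3);
console.log(label.toString(2), unpackFlowLabel(label));
```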

One might expect IPFIX / NetFlow to be a Continue reading

Full Stack Journey 072: A Peek Inside The Comp Sci Ivory Tower

On today's Full Stack Journey podcast we climb the ivory tower to get a glimpse of academic life in the field of networking and computer science with guest Dave Levin. Dr. Levin is Assistant Professor of Computer Science at the University of Maryland. His research focuses on networking and security, including measurement, cryptography, artificial intelligence, and economics.

The post Full Stack Journey 072: A Peek Inside The Comp Sci Ivory Tower appeared first on Packet Pushers.

Cisco study: Network teams look to SDN, automation to manage multicloud operations

Networking teams have been challenged to provide hybrid workers with secure access to cloud-based applications, and they’re looking for greater automation and additional visibility tools to help them manage today’s diverse enterprise workloads.

That’s among the key findings of Cisco’s new Global Hybrid Cloud Trends report, which surveyed 2,500 IT decision makers in 13 countries to discern the trends and priorities of organizations that manage workloads across multiple private cloud, public cloud, and edge computing environments. Cisco said the report is aimed at understanding how these multicloud environments impact network operations.

To read this article in full, please click here

AMD gives new Epyc processors a big launch with help from partners

AMD has officially launched the fourth generation of its Epyc server processors for high-performance computing (HPC) in the data center, and all the top OEMs showed up for the party.

Officially named the Epyc 9004 but commonly referred to by its codename “Genoa,” the new chip is based on the fourth generation of AMD’s Zen microarchitecture and built on a 5nm manufacturing process by TSMC.

Thanks to its chiplet design, which breaks the monolithic CPU into smaller “chiplets” tied together with a high-speed interconnect, Genoa has up to 96 cores (double Intel’s best at the moment). The chiplets, with 16 cores each, are easier to manufacture than a single 96-core CPU. Genoa includes the latest I/O technology, such as PCI Express 5.0, 12 channels of DDR5 memory, and CXL 1.1.

To read this article in full, please click here

Migrate from S3 easily with the R2 Super Slurper

R2 is S3-compatible, globally distributed object storage that allows developers to store large amounts of unstructured data without the costly egress bandwidth fees you commonly find with other providers.
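
Because R2 speaks the S3 API, existing S3 tooling can point at it with little more than an endpoint change. A minimal sketch with the AWS SDK, where the account id and credentials are placeholders:

```ts
import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";

// Placeholders: substitute your Cloudflare account id and R2 API tokens.
const r2 = new S3Client({
  region: "auto",
  endpoint: "https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
  credentials: {
    accessKeyId: "<R2_ACCESS_KEY_ID>",
    secretAccessKey: "<R2_SECRET_ACCESS_KEY>",
  },
});

// List objects in a bucket exactly as you would against S3.
const listed = await r2.send(new ListObjectsV2Command({ Bucket: "my-bucket" }));
console.log(listed.Contents?.map((o) => o.Key));
```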

To enjoy this egress freedom, you’ll have to plan how to move all the data you currently keep elsewhere into R2. You might want to do it all at once, moving as much data as quickly as possible while ensuring data consistency. Or you might prefer to move the data to R2 gradually, slowly shifting your reads from your old provider to R2, and only then decide whether to cut off your old storage or keep it as a backup for new objects in R2.

There are multiple options for the architecture and implementation of this move, but taking terabytes of data from one cloud storage provider to another is always problematic: it always involves planning, and it likely requires dedicated staffing.
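
For the gradual path described above, one common shape (a sketch, not a prescription) is a read-through Worker in front of both stores: serve from R2 when the object is already there, otherwise fall back to the old provider and copy the object into R2 as it is served. The binding and origin URL here are hypothetical:

```ts
interface Env {
  MIGRATED: R2Bucket; // R2 bucket binding from wrangler.toml
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const key = new URL(request.url).pathname.slice(1);

    // Already migrated? Serve straight from R2.
    const object = await env.MIGRATED.get(key);
    if (object) {
      return new Response(object.body);
    }

    // Otherwise fall back to the old provider and copy the object into R2
    // in the background. Buffering via arrayBuffer keeps the sketch simple;
    // it is fine for small objects.
    const origin = await fetch(`https://old-provider.example.com/${key}`);
    if (origin.ok) {
      ctx.waitUntil(
        origin.clone().arrayBuffer().then((body) => env.MIGRATED.put(key, body))
      );
    }
    return origin;
  },
};
```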

And that was hard. But not anymore.

Today we're announcing the R2 Super Slurper, the feature that will enable you to move all your data to R2 in one giant slurp or sip by sip — all in a friendly, intuitive UI and API.

The first step: R2 Super Slurper Private Beta

One Continue reading

Indexing millions of HTTP requests using Durable Objects

Our customers rely on their Cloudflare logs to troubleshoot problems and debug issues. One of the biggest challenges with logs is the cost of managing them, so earlier this year, we launched the ability to store and retrieve Cloudflare logs using R2.

In this post, I’ll explain how we built the R2 Log Retrieval API using Cloudflare Workers, with a focus on Durable Objects and the Streams API. Together, these allow a customer to index and query millions of their Cloudflare logs stored in batches on R2.

Before we dive into the internals, you might be wondering why we don’t just use a traditional database to index these logs. After all, databases are a well-proven technology. The reason is that individual developers and companies, both large and small, often don’t have the resources necessary to maintain such a database and the surrounding infrastructure required for this kind of setup.

Our approach instead relies on Durable Objects to maintain indexes of the data stored in R2, removing the complexity of managing and maintaining your own database. It was also super easy to add Durable Objects to our existing Workers code with just a few lines of config and some Continue reading
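
To show the shape of those pieces (this is a simplified sketch, not the actual Log Retrieval implementation; the `LogIndex` class and its fields are hypothetical), a Durable Object that maps a log’s unique id to the R2 batch containing it might look like:

```ts
// wrangler.toml additions (roughly):
//   [durable_objects]
//   bindings = [{ name = "LOG_INDEX", class_name = "LogIndex" }]
//
//   [[migrations]]
//   tag = "v1"
//   new_classes = ["LogIndex"]

// A hypothetical Durable Object that keeps a per-customer index in its
// transactional storage, mapping a log's unique id to the R2 batch key
// that contains it.
export class LogIndex {
  constructor(private state: DurableObjectState) {}

  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);

    if (request.method === "PUT") {
      // Record which R2 batch a given log id lives in.
      const { rayId, batchKey } = await request.json<{ rayId: string; batchKey: string }>();
      await this.state.storage.put(rayId, batchKey);
      return new Response("indexed");
    }

    // GET: look up the batch key for a given log id.
    const rayId = url.searchParams.get("rayId");
    if (!rayId) return new Response("missing rayId", { status: 400 });

    const batchKey = await this.state.storage.get<string>(rayId);
    return batchKey
      ? new Response(batchKey)
      : new Response("not found", { status: 404 });
  }
}
```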

Reduce origin load, save on cloud egress fees, and maximize cache hits with Cache Reserve

Earlier this year, we introduced Cache Reserve. Cache Reserve helps users serve content from Cloudflare’s cache for longer by using R2’s persistent data storage. Serving content from Cloudflare’s cache benefits website operators by reducing their bills for egress fees from origins, while also benefiting website visitors by having content load faster.

Cache Reserve has been in closed beta for a few months while we’ve collected feedback from our initial users and continued to develop the product. After several rounds of iterating on this feedback, today we’re extremely excited to announce that Cache Reserve is graduating to open beta – users will now be able to test it and integrate it into their content delivery strategy without any additional waiting.

If you want to see the benefits of Cache Reserve for yourself and give us some feedback, you can go to the Cloudflare dashboard, navigate to the Caching section, and enable Cache Reserve by pushing one button.

How does Cache Reserve fit into the larger picture?

Content served from Cloudflare’s cache begins its journey at an origin server, where the content is hosted. When a request reaches the origin, the origin compiles the content needed for the response and sends Continue reading

Store and process your Cloudflare Logs… with Cloudflare

Millions of customers trust Cloudflare to accelerate their website, protect their network, or as a platform to build their own applications. But, once you’re running in production, how do you know what’s going on with your application? You need logs from Cloudflare – a record of what happened on our network when your customers interacted with your product that uses Cloudflare.

Cloudflare Logs are an indispensable tool for debugging applications, identifying security vulnerabilities, or just understanding how users are interacting with your product. However, our customers generate petabytes of logs, and store them for months or years at a time. Log data is tantalizing: all those answers, just waiting to be revealed with the right query! But until now, it’s been too hard for customers to actually store, search, and understand their logs without expensive and cumbersome third party tools.

Today we’re announcing Cloudflare Logs Engine: a new product to enable any kind of investigation with Cloudflare Logs — all within Cloudflare.

Starting today, Cloudflare customers who push their logs to R2 can retrieve them by time range and unique identifier. Over the coming months we want to enable customers to: