Archive

Category Archives for "Networking"

Cloudflare Queues: messages at your speed with consumer concurrency and explicit acknowledgement

Communicating between systems can be a balancing act that has a major impact on your business. APIs have limits, billing frequently depends on usage, and end users are always looking for more speed in the services they use. With so many conflicting considerations, it can feel like a challenge to get it just right. Cloudflare Queues is a tool to make this balancing act simple. With our latest features, consumer concurrency and explicit acknowledgement, it’s easier than ever for developers to focus on writing great code, rather than worrying about the fees and rate limits of the systems they work with.

Queues is a messaging service that enables developers to send and receive messages across systems asynchronously, with guaranteed delivery. It integrates directly with Cloudflare Workers, making it easy to produce and consume messages alongside the many products and services we offer.
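As a sketch of what explicit acknowledgement looks like in a consumer, the `ack()`/`retry()` method names mirror the Queues per-message acknowledgement API; `processOrder` and the mocked batch are our own illustration (a real handler is the async `queue()` export and receives its batch from the runtime):

```javascript
// Sketch of a queue consumer using explicit acknowledgement. Acking each
// message individually means one bad message doesn't force the whole
// batch to be redelivered. Kept synchronous so the demo runs standalone.
function queueHandler(batch) {
  for (const message of batch.messages) {
    try {
      processOrder(message.body); // your business logic
      message.ack();              // mark just this message as done
    } catch (err) {
      message.retry();            // redeliver just this message later
    }
  }
}

// Hypothetical business logic: reject malformed orders.
function processOrder(body) {
  if (!body || typeof body.id !== "string") throw new Error("bad order");
}

// --- Demo with a mocked batch (the real one comes from the runtime) ---
function mockMessage(body) {
  return {
    body,
    acked: false,
    retried: false,
    ack() { this.acked = true; },
    retry() { this.retried = true; },
  };
}

const batch = { messages: [mockMessage({ id: "a1" }), mockMessage({})] };
queueHandler(batch);
// The well-formed message ends up acked; the malformed one is retried.
```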

What’s new in Queues?

Consumer concurrency

Oftentimes, the systems we pull data from can produce information faster than other systems can consume it. This can occur when consumption involves processing information, storing it, or exchanging it with a third-party system. As a result, a queue can sometimes fall behind where it should be. Continue reading
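As a sketch of how concurrency is expressed, a consumer's scale-out limit lives alongside its other settings in wrangler.toml (queue name and values here are illustrative; treat the exact fields as a sketch of the consumer-settings shape):

```toml
[[queues.consumers]]
queue = "my-queue"
max_batch_size = 10   # messages delivered per consumer invocation
max_concurrency = 5   # allow up to 5 consumer invocations in parallel
```

Raising `max_concurrency` lets the consumer catch up when producers outpace it, while capping it protects rate-limited downstream systems.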

Workers Browser Rendering API enters open beta

The Workers Browser Rendering API allows developers to programmatically control and interact with a headless browser instance and create automation flows for their applications and products.

Since the private beta announcement, the team has been working on the developer experience and improving the platform architecture for the best possible performance and reliability, based on the feedback we've been receiving and our own roadmap. Today we enter the open beta and will start onboarding customers on the waitlist.

Developer experience

Starting today, Wrangler, our command-line tool for configuring, building, and deploying applications with Cloudflare developer products, has support for the Browser Rendering API bindings.

You can install the Wrangler beta using npm:

npm install wrangler --save-dev

Bindings allow your Workers to interact with resources on the Cloudflare developer platform. In this case, they will provide your Worker script with an authenticated endpoint to interact with a dedicated Chromium browser instance.

This is all you need in your wrangler.toml once this service is enabled for your account:

browser = { binding = "MYBROWSER", type = "browser" }
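As a sketch of what a Worker using this binding can look like (the pattern follows Cloudflare's Browser Rendering examples with the `@cloudflare/puppeteer` package; `MYBROWSER` is the binding name from the configuration above, and this only runs on the Workers runtime, not locally):

```javascript
import puppeteer from "@cloudflare/puppeteer";

export default {
  async fetch(request, env) {
    // Launch a browser session against the MYBROWSER binding
    const browser = await puppeteer.launch(env.MYBROWSER);
    const page = await browser.newPage();
    await page.goto("https://example.com/");
    const screenshot = await page.screenshot();
    await browser.close();
    return new Response(screenshot, {
      headers: { "content-type": "image/png" },
    });
  },
};
```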

Now you can deploy any Worker script that requires Browser Rendering capabilities. You can spawn Chromium instances and interact with Continue reading

Developer Week Performance Update: Spotlight on R2

For developers, performance is everything. If your app is slow, it will get outclassed and no one will use it. In order for your application to be fast, every underlying component and system needs to be as performant as possible. In the past, we’ve shown how our network helps make your apps faster, even in remote places. We’ve focused on how Workers provides the fastest compute, even in regions that are really far away from traditional cloud datacenters.

For Developer Week 2023, we’re going to be looking at one of the newest Cloudflare developer offerings and how it compares to an alternative when retrieving assets from buckets: R2 versus Amazon Simple Storage Service (S3). Spoiler alert: we’re faster than S3 when serving media content via public access. Our tests showed that, on average, Cloudflare R2 was 20-40% faster than Amazon S3. For this comparison, we used 95th percentile response tests, which measure the time it takes for a user to make a request to the bucket and receive the entirety of the response. This test was designed to measure end-user performance when accessing content in public buckets.
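To make the metric concrete: a 95th-percentile response time is the value below which 95% of sampled request timings fall. A minimal sketch of that computation (nearest-rank method, our own illustration, not the actual test harness):

```javascript
// Nearest-rank percentile: sort the samples, take the value at
// index ceil(p/100 * n) - 1.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

// 100 fake response times: 1ms .. 100ms
const times = Array.from({ length: 100 }, (_, i) => i + 1);
console.log(percentile(times, 95)); // → 95
```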

In this blog we’re going to talk about why your Continue reading

D1: We turned it up to 11

This post is also available in Deutsch, 简体中文, 日本語, Español, Français.

We’re not going to bury the lede: we’re excited to launch a major update to our D1 database, with dramatic improvements to performance and scalability. Alpha users (which includes any Workers user) can create new databases using the new storage backend right now with the following command:

$ wrangler d1 create your-database --experimental-backend

In the coming weeks, it’ll be the default experience for everyone, but we want to invite developers to start experimenting with the new version of D1 immediately. We’ll also be sharing more about how we built D1’s new storage subsystem, and how it benefits from Cloudflare’s distributed network, very soon.

Remind me: What’s D1?

D1 is Cloudflare’s native serverless database, which we launched into alpha in November last year. Developers have been building complex applications with Workers, KV, Durable Objects, and more recently, Queues & R2, but they’ve also been consistently asking us for one thing: a database they can query.
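As a sketch of what querying that database looks like from a Worker (the `DB` binding name and the `users` table are hypothetical; the prepare/bind/all pattern follows D1's Workers API):

```javascript
export default {
  async fetch(request, env) {
    // env.DB is a D1 binding configured in wrangler.toml
    const { results } = await env.DB
      .prepare("SELECT id, name FROM users WHERE id = ?")
      .bind(1)
      .all();
    return Response.json(results);
  },
};
```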

We also heard consistent feedback that it should be SQL-based, scale-to-zero, and (just like Workers itself), take a Region: Earth approach to replication. And so we took that feedback and set Continue reading

How to quickly make minor changes to complex Linux commands

When working in the Linux terminal window, you have a lot of options for moving on the Linux command line; backing up over a command you’ve just typed is only one of them.

Using the Backspace key

We likely all use the Backspace key fairly often to fix typos. It can also make running a series of related commands easier. For example, you can type a command, press the up arrow key to redisplay it and then use the Backspace key to back over and replace some of the characters to run a similar command. In the examples below, a single character is backed over and replaced. To read this article in full, please click here

Ampere launches 192-core AmpereOne server processor

Ampere has announced it has begun shipping its next-generation AmpereOne processor, a server chip with up to 192 cores and special instructions aimed at AI processing. It is also the first generation of chips from the company using homegrown cores rather than cores licensed from Arm. Among the features of these new cores is support for bfloat16, the popular 16-bit floating-point format used in AI training and inferencing.

“AI is a big piece [of the processor] because you need more compute power,” said Jeff Wittich, chief products officer for Ampere. “AI inferencing is one of the big workloads that is driving the need for more and more compute, whether it’s in your big hyperscale data centers or the need for more compute performance out at the edge.” To read this article in full, please click here

Improving customer experience in China using China Express

Global organizations have always strived to provide a consistent app experience for their Internet users all over the world. Cloudflare has helped in this endeavor with our mission to help build a better Internet. In 2021, we announced an upgraded Cloudflare China Network, in partnership with JD Cloud to help improve performance for users in China. With this option, Cloudflare customers can serve cached content locally within China without all requests having to go to a data center outside of China. This results in significant performance benefits for end users, but requests to the origin still need to travel overseas.

We wanted to go a step further to solve this problem. In early 2023, we launched China Express, a suite of connectivity and performance offerings in partnership with China Mobile International (CMI), CBC Tech and Niaoyun. One of the services available through China Express is Private Link, an optimized, high-quality circuit for overseas connectivity. Offered by our local partners, it provides a more reliable, higher-performance connection from China to the global Internet.

A real world example

“Acme Corp” is a global Online Shopping Platform business that serves lots of direct to consumer brands, transacting primarily over Continue reading

DOE funds $40 million for advanced data-center cooling

The Department of Energy has awarded $40 million to 15 vendors and university labs as part of a government program that aims to reduce the portion of data centers' power usage that's used for cooling to just 5% of their total energy consumption.

The DOE's Advanced Research Projects Agency–Energy (ARPA-E) is providing the funding to jumpstart a program called COOLERCHIPS, an acronym for Cooling Operations Optimized for Leaps in Energy, Reliability, and Carbon Hyperefficiency for Information Processing Systems.

For chip cooling to account for just 5% of total energy consumption, that would translate to a PUE of 1.05. (Power usage effectiveness, or PUE, is a metric to measure data center efficiency. It’s the ratio of the total amount of energy used by a data center facility to the energy delivered to computing equipment.) To read this article in full, please click here
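The 1.05 figure follows directly from the definition: if cooling takes 5% of total facility power and the remaining 95% reaches the IT equipment, then PUE = total / IT ≈ 1.05. A quick check of that arithmetic:

```javascript
// PUE = total facility energy / energy delivered to IT equipment.
// Assume cooling takes 5% of total and the rest reaches IT gear.
const total = 100;          // arbitrary units of facility power
const cooling = 0.05 * total;
const it = total - cooling; // 95 units reach computing equipment
const pue = total / it;     // 100 / 95 ≈ 1.0526
console.log(pue.toFixed(2)); // → "1.05"
```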

Case study: Calico Enterprise empowers Aldagi to achieve EU GDPR compliance

Founded in 1990, Aldagi is Georgia’s first and biggest private insurance firm. With a 32% market share in Georgia’s insurance sector, Aldagi provides a broad range of services to corporate and retail clients.

With the onset of the pandemic in 2019, Aldagi wanted to make its services available to customers online. To this end, the company adopted an Agile methodology for software development and re-architected its traditional VM-based applications into cloud-native applications. Aldagi then began using containers and Kubernetes as a part of this process. Using self-managed clusters on Rancher Kubernetes Engine (RKE), Aldagi created distributed, multi-tenant applications to serve its broad EU customer base.

In collaboration with Tigera, Aldagi details its journey using Calico to achieve EU GDPR compliance, in order to share its experience with the rest of the Kubernetes community.

Vasili Grigolaia, Vice President of Engineering, Aldagi, on his company’s experience with using Calico

Case study highlights

Because Aldagi’s applications are distributed and multi-tenanted, the company faced three major challenges when it came to achieving EU GDPR compliance:

  1. Granular access control
  2. Visibility and security controls for workloads with sensitive data
  3. Continuous compliance reporting and auditing
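As a sketch of what the first challenge, granular access control, looks like in practice, here is a Calico NetworkPolicy of our own invention (not Aldagi's actual policy) that restricts which workloads may reach a service holding sensitive data:

```yaml
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: restrict-pii-db
  namespace: billing
spec:
  selector: app == 'pii-database'
  types:
    - Ingress
  ingress:
    # Only the billing API may reach the PII database, and only on 5432
    - action: Allow
      protocol: TCP
      source:
        selector: app == 'billing-api'
      destination:
        ports: [5432]
```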

By deploying Calico, Aldagi solved these challenges and achieved EU GDPR compliance, Continue reading

IBM wants drag-and-drop connectivity for hybrid cloud applications

IBM is developing a SaaS package to help enterprises securely network heterogeneous environments, including edge, on-prem and multicloud resources.

The IBM Hybrid Cloud Mesh is a SaaS service that implements a virtualized Layer 3-7 environment to rapidly enable secure connectivity between users, applications, and data distributed across multiple locations and environments, according to Andrew Coward, general manager of IBM’s software-defined networking group. In a nutshell, Hybrid Cloud Mesh deploys gateways within the clouds – including on-premises, AWS or other providers’ clouds, and transit points, if needed – to support the infrastructure, and then it builds a secure Layer 3-7 mesh overlay to deliver applications, Coward said. At the application level, the exposure to developers occurs at Layer 7, and the networking teams see Layer 3 and 4 activities, Coward said. To read this article in full, please click here
