D1: open beta is here

D1 is now in open beta, and the theme is “scale”: with higher per-database storage limits and the ability to create more databases, we’re unlocking the ability for developers to build production-scale applications on D1. Any developers with an existing paid Workers plan don’t need to lift a finger to benefit: we’ve retroactively applied this to all existing D1 databases.

If you missed the last D1 update back during Developer Week, the multitude of updates in the changelog, or are just new to D1 in general: read on.

Remind me: D1? Databases?

D1 is our native serverless database, which we launched into alpha in November last year: the queryable database complement to Workers KV, Durable Objects and R2.

When we set out to build D1, we knew a few things for certain: it needed to be fast, it needed to be incredibly easy to create a database, and it needed to be SQL-based.

That last one was critical: it means developers can a) avoid learning another custom query language and b) connect existing query builders, ORM (object-relational mapper) libraries and other tools to D1 with minimal effort. From this, we’ve seen a Continue reading
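Because D1 is SQL-based, querying it from a Worker is just ordinary parameterized SQL. The sketch below is illustrative only: the `DB` binding name and `users` table are hypothetical, and the D1 binding is modeled with a minimal interface so the `prepare()`/`bind()`/`all()` shape is visible without a real deployment.

```typescript
// Sketch only: "DB" and the "users" table are made up for illustration.
// A minimal model of the D1 binding surface used below.
interface D1Like {
  prepare(sql: string): {
    bind(...params: unknown[]): { all(): Promise<{ results: unknown[] }> };
  };
}

const worker = {
  async fetch(request: Request, env: { DB: D1Like }): Promise<Response> {
    const email = new URL(request.url).searchParams.get("email") ?? "";
    // Standard SQLite-flavored SQL: any query builder or ORM that emits
    // SQLite syntax can produce this statement unchanged.
    const { results } = await env.DB
      .prepare("SELECT id, name FROM users WHERE email = ?1")
      .bind(email)
      .all();
    return new Response(JSON.stringify(results), {
      headers: { "content-type": "application/json" },
    });
  },
};

export default worker;
```

The point is that nothing here is D1-specific query language: the Worker just hands plain SQL to the binding.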

New Workers pricing — never pay to wait on I/O again

Today we are announcing new pricing for Cloudflare Workers and Pages Functions, where you are billed based on CPU time, and never for the idle time that your Worker spends waiting on network requests and other I/O. Unlike other platforms, when you build applications on Workers, you only pay for the compute resources you actually use.

Why is this exciting? To date, all large serverless compute platforms have billed based on how long your function runs — its duration or “wall time”. This is a relic of an older paradigm built on a leaky abstraction: your code may be neatly packaged up into a “function”, but under the hood there’s a virtual machine (VM). A VM can’t be paused and resumed quickly enough to execute another piece of code while it waits on I/O. So while a typical function might take 100ms to run, it might spend only 10ms doing CPU work, like crunching numbers or parsing JSON, with the rest of the time spent waiting on I/O.

This status quo has meant that you are billed for this idle time, while nothing is happening.
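To make the 100ms/10ms example concrete, here is a sketch of the two billing models side by side. The per-millisecond rate is made up for illustration and is not Cloudflare's actual price; the point is what gets metered, not the dollar amount.

```typescript
// Illustrative rate: the same per-millisecond price for both models,
// so the comparison isolates what is metered, not how much it costs.
const RATE_PER_MS = 0.00000002; // dollars per metered millisecond (made up)

// Duration ("wall time") billing meters the whole invocation, idle I/O included.
function durationCost(wallMs: number): number {
  return wallMs * RATE_PER_MS;
}

// CPU-time billing meters only active compute; time awaiting I/O is free.
function cpuCost(cpuMs: number): number {
  return cpuMs * RATE_PER_MS;
}

const wall = durationCost(100); // billed for all 100 ms of wall time
const cpu = cpuCost(10);        // billed only for the 10 ms of CPU work
console.log(cpu / wall);        // ≈ 0.1: a 10x reduction for this workload
```

The more I/O-bound the workload, the larger the gap between the two models grows.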

With this announcement, Cloudflare is the first and only global serverless platform to Continue reading

Announcing Containerized Ansible Automation Platform

Everything you know and love about Ansible Automation Platform in containerized form

We’re excited to announce something that we’ve been working on for a while now: the technical preview of a containerized Red Hat Ansible Automation Platform solution.

Currently, this allows you to install and run the containerized automation controller, Ansible automation hub, and Event-Driven Ansible controller services on one or more underlying RHEL hosts, on x86_64 and ARM64 architectures. It does not require a Kubernetes-based platform; it simply uses native Podman on top of a RHEL host.

 

The rationale behind containerized Ansible Automation Platform

As Ansible Automation Platform evolved, we added more services and components into the stack. Over time, the increasing complexity and inter-dependencies between these components have introduced new challenges in terms of maintenance, installation, and support. They have also opened up opportunities for growth and innovation.

Containerized Ansible Automation Platform is the first step towards a more streamlined and improved platform management experience, incorporating our future vision and strategy.

 

The benefits

Just containerizing existing services was not enough for us, so we set some goals to provide:

  • a slimmed down installation experience
  • a layered installation approach
  • a containerized Continue reading

How GitHub Saved My Day

I always tell networking engineers who aspire to be more than VLAN-munging CLI jockeys to get fluent with Git. I should also be telling them that while doing local version control is the right thing to do, you should always have backups (in this case, a remote repository).

I’m eating my own dog food – I’m using half a dozen Git repositories in ipSpace.net production. If they break, my blog stops working, and I cannot publish new documents.

Now for a fun fact: Git is not transactionally consistent.

Cloudflare wants you to build AI applications on its edge network

Content delivery network (CDN), security and web services company Cloudflare is opening its worldwide network to companies looking to build and deploy AI models, with new serverless AI, database and observability features, working with several new tech partners to do so. Part one of Cloudflare’s new AI-focused initiative, announced today, is the Workers AI framework, which offers access to GPUs in Cloudflare’s network for a serverless way to run AI models. For users trying to run AI systems that are heavily latency dependent, the framework should offer the option of running workloads much closer to the network edge, reducing round-trip time. The company said that Workers AI is also designed to separate inference from training data, ensuring that consumer information is not misused.

Day Two Cloud 212: Cloud Essentials – Object, File, And Block Storage

Day Two Cloud continues the Cloud Essentials series with cloud storage. We focus specifically on AWS's offerings, which include object, file, and block storage options. We also discuss special file systems, file caching, instance stores, and more. We cover use cases for the major storage options and their costs. We also touch briefly on storage services including data migration, hybrid cloud storage, and disaster recovery and backup.

The post Day Two Cloud 212: Cloud Essentials – Object, File, And Block Storage appeared first on Packet Pushers.

HPE restructures around hybrid cloud

Hewlett Packard Enterprise is undergoing a reorganization that includes the formation of a new hybrid cloud business unit and top-level executive shifts. Fortunately, CEO Antonio Neri is going nowhere. Neri may not have a rock star profile, but his success as a leader is undeniable. Two key executives are departing, however. Vishal Lall, general manager of HPE GreenLake and the cloud solutions group, is leaving the company. Pradeep Kumar, senior vice president and general manager of HPE services, is retiring after 27 years with the company. With Kumar’s departure, all operational activities for HPE services, supply chain and quote-to-cash will now be handled by Mark Bakker, executive vice president and general manager of global operations.

Cisco boosts Catalyst SD-WAN capabilities

Cisco is unwrapping a range of enhancements for its SD-WAN package that it says will help enterprise IT organizations secure, simplify and optimize their wide-area network operations and management. The upgrades include new routing management capabilities, integration with Microsoft Sentinel and Skyhigh Security systems, a new Catalyst edge device, and improved support for Catalyst cellular connectivity. Cisco’s SD-WAN package includes myriad features to tie together routers, switches or virtualized customer premises equipment (vCPE) from cloud, branch and remote sites, all managed through a single console, the Catalyst SD-WAN Manager.

You can now use WebGPU in Cloudflare Workers

The browser as an app platform is real and grows stronger every day; long gone are the Browser Wars. Vendors and standards bodies have done amazingly well over the last few years, working together and advancing web standards with new APIs that allow developers to build fast and powerful applications, finally comparable to those we are used to seeing in native OS environments.

Today, browsers can render web pages and run code that interfaces with an extensive catalog of modern Web APIs. Things like networking, rendering accelerated graphics, or even accessing low-level hardware features like USB devices are all now possible within the browser sandbox.

One of the most exciting new browser APIs that vendors have been rolling out over the last few months is WebGPU: a modern, low-level GPU programming interface designed for high-performance 2D and 3D graphics and general-purpose GPU compute.

Today, we are introducing WebGPU support to Cloudflare Workers. This blog will explain why it's important, why we did it, how you can use it, and what comes next.

The history of the GPU in the browser

To understand why WebGPU is a big deal, we must revisit history and see how browsers went from relying only Continue reading

Workers AI: serverless GPU-powered inference on Cloudflare’s global network

If you're anywhere near the developer community, it's almost impossible to avoid the impact that AI’s recent advancements have had on the ecosystem. Whether you're using AI in your workflow to improve productivity or shipping AI-based features to your users, it’s everywhere. The pace of AI improvement is extraordinary, and we’re super excited about the opportunities that lie ahead, but it's not enough.

Not too long ago, if you wanted to leverage the power of AI, you needed to know the ins and outs of machine learning, and be able to manage the infrastructure to power it.

As a developer platform with over one million active developers, we believe there is so much potential yet to be unlocked, so we’re changing the way AI is delivered to developers. Many of the current solutions, while powerful, are based on closed, proprietary models and don't address privacy needs that developers and users demand. Alternatively, the open source scene is exploding with powerful models, but they’re simply not accessible enough to every developer. Imagine being able to run a model, from your code, wherever it’s hosted, and never needing to find GPUs or deal with setting up the infrastructure to support Continue reading

Writing poems using LLama 2 on Workers AI

Matthew and Michelle, co-founders of Cloudflare, published their annual founders’ letter today. The letter ends with a poem written by an AI running using Workers AI on Cloudflare’s global network.

Here’s the code that wrote the poem. It uses Workers AI and the Meta Llama 2 model with 7B parameters and 8-bit integers. Just 14 lines of code running on the Cloudflare global network, and you’ve got your very own AI to chat with.

import { Ai } from "@cloudflare/ai";

export default {
    async fetch(request: Request, env: Env): Promise<Response> {
        const body = await request.json();
        const ai = new Ai(env.AI);
        const response = await ai.run("@cf/meta/llama-2-7b-chat-int8", body);
        return new Response(JSON.stringify(response));
    },
};

export interface Env {
    AI: any;
}

That was deployed on Workers AI and all I had to do was ask for poems. Here’s my terminal output (with just the domain name changed).

% curl -X POST https://example.com/ -d '{"prompt":"Write a poem \
that talks about the connectivity cloud"}' | jq -r .response

Cloud computing provides a connectivity that's unmatched,
A bridge that spans the globe with ease and grace.
It brings us closer, no matter where we are,
And makes the world a Continue reading