Archive

Category Archives for "Networking"

Cloudflare is now powering Microsoft Edge Secure Network

From third-party cookies that track your activity across websites to highly targeted advertising based on your IP address and browsing data, it’s no secret that today’s Internet browsing experience isn’t as private as it should be. Here at Cloudflare, we believe everyone should be able to browse the Internet free of persistent tracking and prying eyes.

That’s why we’re excited to announce that we’ve partnered with Microsoft Edge to provide a fast and secure VPN, right in the browser. Users don’t have to install anything new or understand complex concepts to get the latest in network-level privacy: Edge Secure Network VPN is available on the latest consumer version of Microsoft Edge in most markets, and automatically comes with 5 GB of data. Just enable the feature by going to [Microsoft Edge Settings & more (…) > Browser essentials, and click Get VPN for free]. See Microsoft’s Edge Secure Network page for more details.

Cloudflare’s Privacy Proxy platform isn’t your typical VPN

To take a step back: a VPN is a way in which the Internet traffic leaving your device is tunneled through an intermediary server operated by a provider – in this case, Cloudflare! There are many important pieces that Continue reading

D1: open beta is here

D1 is now in open beta, and the theme is “scale”: with higher per-database storage limits and the ability to create more databases, we’re unlocking the ability for developers to build production-scale applications on D1. Any developers with an existing paid Workers plan don’t need to lift a finger to benefit: we’ve retroactively applied this to all existing D1 databases.

If you missed the last D1 update back during Developer Week, the multitude of updates in the changelog, or are just new to D1 in general: read on.

Remind me: D1? Databases?

D1 is our native serverless database, which we launched into alpha in November last year: the queryable database complement to Workers KV, Durable Objects and R2.

When we set out to build D1, we knew a few things for certain: it needed to be fast, it needed to be incredibly easy to create a database, and it needed to be SQL-based.

That last one was critical: it lets developers a) avoid learning another custom query language and b) connect existing query builders, ORM (object-relational mapper) libraries and other tools to D1 with minimal effort. From this, we’ve seen a Continue reading
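As a sketch of what that looks like in practice, here is a hypothetical Worker querying a D1 database through the standard `prepare`/`bind`/`all` pattern. The `posts` table, the `author` query parameter, and the `DB` binding name are illustrative, and the inline interfaces are stand-ins for the official `@cloudflare/workers-types` definitions:

```typescript
// Minimal structural types standing in for @cloudflare/workers-types;
// a real project would use the official type package instead.
interface D1Result { results: Record<string, unknown>[] }
interface D1PreparedStatement {
  bind(...values: unknown[]): D1PreparedStatement;
  all(): Promise<D1Result>;
}
interface D1Database { prepare(sql: string): D1PreparedStatement }
export interface Env { DB: D1Database }

const worker = {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const author = url.searchParams.get("author") ?? "1";
    // Parameterized query: `?` placeholders are bound by D1,
    // so user input is never concatenated into the SQL string.
    const { results } = await env.DB
      .prepare("SELECT id, title FROM posts WHERE author_id = ?")
      .bind(author)
      .all();
    return Response.json(results);
  },
};

export default worker;
```

Because D1 speaks plain SQL, the same statement could just as easily be generated by a query builder or an ORM instead of being written by hand.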

New Workers pricing — never pay to wait on I/O again

Today we are announcing new pricing for Cloudflare Workers and Pages Functions, where you are billed based on CPU time, and never for the idle time that your Worker spends waiting on network requests and other I/O. Unlike other platforms, when you build applications on Workers, you only pay for the compute resources you actually use.

Why is this exciting? To date, all large serverless compute platforms have billed based on how long your function runs — its duration or “wall time”. This reflects a paradigm built on a leaky abstraction — your code may be neatly packaged up into a “function”, but under the hood there’s a virtual machine (VM). A VM can’t be paused and resumed quickly enough to execute another piece of code while it waits on I/O. So while a typical function might take 100ms to run, it might typically spend only 10ms doing CPU work, like crunching numbers or parsing JSON, with the rest of the time spent waiting on I/O.

This status quo has meant that you are billed for this idle time, while nothing is happening.
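To make the gap concrete, here is a back-of-the-envelope sketch using the numbers above (100ms of wall time, of which only 10ms is CPU work). The per-millisecond price is made up purely for illustration and is not Cloudflare’s actual rate:

```typescript
// Hypothetical per-millisecond price, for illustration only --
// not an actual Cloudflare rate.
const PRICE_PER_MS = 0.00002;

function wallTimeBill(wallMs: number): number {
  // Duration-based billing charges for the whole invocation,
  // including time spent blocked on I/O.
  return wallMs * PRICE_PER_MS;
}

function cpuTimeBill(cpuMs: number): number {
  // CPU-time billing charges only while code is actually executing.
  return cpuMs * PRICE_PER_MS;
}

const wall = wallTimeBill(100); // 100 ms of wall time
const cpu = cpuTimeBill(10);    // only 10 ms of that is CPU work
console.log(wall / cpu);        // ~10x cheaper under CPU-time billing
```

The more I/O-bound a workload is, the wider this gap grows; a Worker that spends 99% of its duration awaiting an upstream API would see a roughly 100x difference.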

With this announcement, Cloudflare is the first and only global serverless platform to Continue reading

How GitHub Saved My Day

I always tell networking engineers who aspire to be more than VLAN-munging CLI jockeys to get fluent with Git. I should also be telling them that while doing local version control is the right thing to do, you should always have backups (in this case, a remote repository).

I’m eating my own dog food – I’m using a half dozen Git repositories in ipSpace.net production. If they break, my blog stops working, and I cannot publish new documents.

Now for a fun fact: Git is not transactionally consistent.

Cloudflare wants you to build AI applications on its edge network

Content delivery network (CDN), security and web services company Cloudflare is opening its worldwide network to companies looking to build and deploy AI models with new serverless AI, database and observability features, working with several new tech partners to do so. Part one of Cloudflare’s new AI-focused initiative, announced today, is the Workers AI framework, which offers access to GPUs in Cloudflare’s network for a serverless way to run AI models. For users trying to run AI systems that are heavily latency dependent, the framework should offer the option of running workloads much closer to the network edge, reducing round-trip time. The company said that Workers AI is also designed to separate inference from training data, ensuring that consumer information is not misused.

Day Two Cloud 212: Cloud Essentials – Object, File, And Block Storage

Day Two Cloud continues the Cloud Essentials series with cloud storage. We focus specifically on AWS's offerings, which include object, file, and block storage options. We also discuss special file systems, file caching, instance stores, and more. We cover use cases for the major storage options and their costs. We also touch briefly on storage services including data migration, hybrid cloud storage, and disaster recovery and backup.

The post Day Two Cloud 212: Cloud Essentials – Object, File, And Block Storage appeared first on Packet Pushers.

HPE restructures around hybrid cloud

Hewlett Packard Enterprise is undergoing a reorganization that includes the formation of a new hybrid cloud business unit and top-level executive shifts. Fortunately, CEO Antonio Neri is going nowhere. Neri may not have a rock star profile, but his success as a leader is undeniable. Two key executives are departing, however. Vishal Lall, general manager of HPE GreenLake and the cloud solutions group, is leaving the company. Pradeep Kumar, senior vice president and general manager of HPE services, is retiring after 27 years with the company. With Kumar’s departure, all operational activities for HPE services, supply chain and quote-to-cash will now be handled by Mark Bakker, executive vice president and general manager of global operations.

Cisco boosts Catalyst SD-WAN capabilities

Cisco is unwrapping a range of enhancements for its SD-WAN package that it says will help enterprise IT organizations secure, simplify and optimize their wide-area network operations and management. The upgrades include new routing management capabilities, integration with Microsoft Sentinel and Skyhigh Security systems, a new Catalyst edge device, and improved support for Catalyst cellular connectivity. Cisco’s SD-WAN package includes myriad features to tie together routers, switches or virtualized customer premises equipment (vCPE) from cloud, branch and remote sites, all managed through a single console, the Catalyst SD-WAN Manager.

You can now use WebGPU in Cloudflare Workers

The browser as an app platform is real and growing stronger every day; long gone are the Browser Wars. Vendors and standards bodies have done amazingly well over the past few years, working together and advancing web standards with new APIs that let developers build fast and powerful applications, finally comparable to those we’re used to seeing in native OS environments.

Today, browsers can render web pages and run code that interfaces with an extensive catalog of modern Web APIs. Things like networking, rendering accelerated graphics, or even accessing low-level hardware features like USB devices are all now possible within the browser sandbox.

One of the most exciting new browser APIs that browser vendors have been rolling out over the past few months is WebGPU, a modern, low-level GPU programming interface designed for high-performance 2D and 3D graphics and general-purpose GPU compute.

Today, we are introducing WebGPU support to Cloudflare Workers. This blog will explain why it's important, why we did it, how you can use it, and what comes next.
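As a minimal sketch of what using it looks like, requesting a GPU device follows the same feature-detection pattern in a Worker as in a browser: check for `navigator.gpu`, request an adapter, then request a device. `getGpuDevice` is a hypothetical helper; the calls it makes (`navigator.gpu`, `requestAdapter`, `requestDevice`) are standard WebGPU API surface:

```typescript
// WebGPU entry point: navigator.gpu may be absent on runtimes
// without GPU support, so detect it before using it.
async function getGpuDevice(): Promise<unknown> {
  const gpu = (globalThis as any).navigator?.gpu;
  if (!gpu) return null; // no WebGPU on this runtime

  // An adapter represents a physical GPU; it can still come back
  // null (e.g. no compatible hardware available).
  const adapter = await gpu.requestAdapter();
  if (!adapter) return null;

  // The device is the handle used to create buffers, shader
  // modules, and compute or render pipelines.
  return adapter.requestDevice();
}
```

In a Worker you would call this from `fetch` and fall back to a CPU code path (or return an error response) whenever it resolves to null.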

The history of the GPU in the browser

To understand why WebGPU is a big deal, we must revisit history and see how browsers went from relying only Continue reading

Workers AI: serverless GPU-powered inference on Cloudflare’s global network

If you're anywhere near the developer community, it's almost impossible to avoid the impact that AI’s recent advancements have had on the ecosystem. Whether you're using AI in your workflow to improve productivity, or you’re shipping AI-based features to your users, it’s everywhere. The pace of AI improvements is extraordinary, and we’re super excited about the opportunities that lie ahead, but it's not enough.

Not too long ago, if you wanted to leverage the power of AI, you needed to know the ins and outs of machine learning, and be able to manage the infrastructure to power it.

As a developer platform with over one million active developers, we believe there is so much potential yet to be unlocked, so we’re changing the way AI is delivered to developers. Many of the current solutions, while powerful, are based on closed, proprietary models and don't meet the privacy needs that developers and users demand. Alternatively, the open source scene is exploding with powerful models, but they’re simply not accessible enough to every developer. Imagine being able to run a model, from your code, wherever it’s hosted, and never needing to find GPUs or deal with setting up the infrastructure to support Continue reading

Writing poems using LLama 2 on Workers AI

Matthew and Michelle, co-founders of Cloudflare, published their annual founders’ letter today. The letter ends with a poem written by an AI running using Workers AI on Cloudflare’s global network.

Here’s the code that wrote the poem. It uses Workers AI and the Meta Llama 2 model with 7B parameters and 8-bit integers. Just 14 lines of code running on the Cloudflare global network, and you’ve got your very own AI to chat with.

import { Ai } from "@cloudflare/ai";

export default {
    async fetch(request: Request, env: Env): Promise<Response> {
        const body = await request.json();
        const ai = new Ai(env.AI);
        const response = await ai.run("@cf/meta/llama-2-7b-chat-int8", body);
        return new Response(JSON.stringify(response));
    },
};

export interface Env {
    AI: any;
}

That was deployed on Workers AI and all I had to do was ask for poems. Here’s my terminal output (with just the domain name changed).

% curl -X POST https://example.com/ -d '{"prompt":"Write a poem \
that talks about the connectivity cloud"}' | jq -r .response

Cloud computing provides a connectivity that's unmatched,
A bridge that spans the globe with ease and grace.
It brings us closer, no matter where we are,
And makes the world a Continue reading

Announcing AI Gateway: making AI applications more observable, reliable, and scalable

Today, we’re excited to announce our beta of AI Gateway – the portal to making your AI applications more observable, reliable, and scalable.

AI Gateway sits between your application and the AI APIs that your application makes requests to (like OpenAI) – so that we can cache responses, limit and retry requests, and provide analytics to help you monitor and track usage. AI Gateway handles the things that nearly all AI applications need, saving you engineering time, so you can focus on what you're building.

Connecting your app to AI Gateway

It only takes one line of code for developers to get started with Cloudflare’s AI Gateway. All you need to do is replace the URL in your API calls with your unique AI Gateway endpoint. For example, with OpenAI you would define your baseURL as "https://gateway.ai.cloudflare.com/v1/ACCOUNT_TAG/GATEWAY/openai" instead of "https://api.openai.com/v1" – and that’s it. You can keep your tokens in your code environment, and we’ll log the request through AI Gateway before letting it pass through to the final API with your token.

// configuring AI gateway with the dedicated OpenAI endpoint

const openai = new OpenAI({
  apiKey: env.OPENAI_API_KEY,
  baseURL: "https://gateway.ai. Continue reading

Partnering with Hugging Face to make deploying AI easier and more affordable than ever 🤗

Today, we’re excited to announce that we are partnering with Hugging Face to make AI models more accessible and affordable than ever before to developers.

There are three things we look forward to making available to developers over the coming months:

  1. Bring serverless GPU models to Hugging Face — no more wrangling infrastructure or paying for unused capacity. Just pick your model, and go;
  2. Bring popular Hugging Face optimized models to Cloudflare’s model catalog;
  3. Introduce Cloudflare integrations as a part of Hugging Face’s Inference solutions.

Hosting over 500,000 models and serving over one million model downloads a day, Hugging Face is the go-to place for developers to add AI to their applications.

Meanwhile, over the past six years at Cloudflare, our goal has been to make it as easy as possible for developers to bring their ideas and applications to life on our developer platform.

As AI has become a critical part of every application, this partnership has felt like a natural match to put tools in the hands of developers to make deploying AI easy and affordable.

“Hugging Face and Cloudflare both share a deep focus on making the latest AI innovations as accessible and affordable Continue reading