YOU are the community

New year, new role, new strategy...2023 is officially the year when I return to my roots. Back in 2014, I became part of the Ansible community. Admittedly, back then my focus was solely on figuring out how best to demonstrate to my customers the power of having an OpenStack private cloud. Anyone who has ever stood up or experimented with OpenStack knows that this is a tall order. Imagine having to stand up that platform over and over again on a daily basis. My focus was to find a way---a tool---that could help me do that, so I could focus on helping solve the customers' true challenges. Fast forward to now, and the decision to do it with Ansible still stands as the best choice, hands down.

Many of you have stories just like mine. You are seeking out a way to simplify your daily tasks, so you can focus on the business. Just like me, you have decided that Ansible is the tool to do it. Before I started in this new role, I did some reflecting on my experience as part of the community. I have so many encouraging, positive, and fun Continue reading

VPP MPLS – Part 3

About this series

Special Thanks: Adrian vifino Pistol for writing this code and for the wonderful collaboration!

Ever since I first saw VPP - the Vector Packet Processor - I have been deeply impressed with its performance and versatility. For those of us who have used Cisco IOS/XR devices, like the classic ASR (Aggregation Services Router), VPP will look and feel quite familiar as many of the approaches are shared between the two.

In the first article of this series, I took a look at MPLS in general, and how setting up static Label Switched Paths can be done in VPP. A few details on special case labels (such as Implicit Null, which enables the fabled Penultimate Hop Popping) were missing, so I took a good look at them in the second article of the series.

This was all just good fun but also allowed me to buy some time for @vifino, who has been implementing MPLS handling within the Linux Control Plane plugin for VPP! This final article in the series shows the engineering considerations that went into writing the plugin, which is currently under review but reasonably complete. Considering the VPP Continue reading

NVA Part III: NVA Redundancy – Connection from the Internet

This chapter is the first part of a series on Azure's highly available Network Virtual Appliance (NVA) solutions. It explains how we can use load balancers to achieve active/active NVA redundancy for connections initiated from the Internet.

In Figure 4-1, Virtual Machine (VM) vm-prod-1 uses the load balancer's Frontend IP address 20.240.9.27 to publish an application (SSH connection) to the Internet. The VM sits behind an active/active NVA firewall cluster. Both vm-prod-1 and the NVAs have vNICs attached to the subnet 10.0.2.0/24.

Both NVAs have identical Pre- and Post-routing policies. If the ingress packet's destination IP address is 20.240.9.27 (load balancer's Frontend IP) and the transport layer protocol is TCP, the policy changes the destination IP address to 10.0.2.6 (vm-prod-1). Additionally, before routing the packet through the Ethernet 1 interface, the Post-routing policy replaces the original source IP with the IP address of the egress interface Eth1.
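On a Linux-based NVA, policies like these are commonly implemented with iptables NAT rules. As a rough sketch only (the chapter does not show the actual firewall configuration; the interface name eth1 is assumed to correspond to the Eth1 interface in the example), the two policies could look like this:

# Pre-routing policy: rewrite the Frontend IP to vm-prod-1 for inbound TCP
iptables -t nat -A PREROUTING -p tcp -d 20.240.9.27 -j DNAT --to-destination 10.0.2.6

# Post-routing policy: replace the source IP with the egress interface address
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE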

The second vNICs of the NVAs are connected to the subnet 10.0.1.0/24. We have associated these vNICs with the load balancer's backend pool. The Inbound rule binds the Frontend IP address to the Backend pool and defines the load-sharing policies. In our example, the packets of SSH connections from the remote host to the Frontend IP are distributed between NVA1 and NVA2. Moreover, the Inbound rule specifies the Health Probe used to verify the availability of the backend pool members.

Note! Using a single VNet design eliminates the need to define static routes in the subnet-specific route table and the VM's Linux kernel. This solution is suitable for small-scale implementations. However, the Hub-and-Spoke VNet topology offers simplified network management, enhanced security, scalability, performance, and hybrid connectivity. I will explain how to achieve NVA redundancy in the Hub-and-Spoke VNet topology in upcoming chapters.

Figure 4-1: Example Diagram.

Why Is Source Address Validation Still a Problem?

I mentioned IP source address validation (SAV) as one of the MANRS-recommended actions in the Internet Routing Security webinar but did not go into any details (as the webinar deals with routing security, not data-plane security)… but I stumbled upon a wonderful companion article published by RIPE Labs: Why Is Source Address Validation Still a Problem?.

The article goes through the basics of SAV, best practices, and (most interestingly) using free testing tools to detect non-compliant networks. Definitely worth reading!

Heavy Networking 680: Speed Up Mean Time To WAN Innocence With Broadcom NetOps (Sponsored)

It's common for SD-WAN vendors to offer monitoring as part of the solution, but that leaves the question … how do I monitor the rest of the network? Today's sponsor, Broadcom, offers digital experience monitoring that is independent of the underlying WAN infrastructure. We explore how it works with guest Jeremy Rossbach, Chief Technical Evangelist, NetOps by Broadcom.

The post Heavy Networking 680: Speed Up Mean Time To WAN Innocence With Broadcom NetOps (Sponsored) appeared first on Packet Pushers.

UK announces $1.2B chip strategy, faces criticism over funding size

The UK government has finally unveiled its delayed 10-year strategy for supporting the country's semiconductor industry, which includes £1 billion ($1.24 billion) in investments to drive research and development efforts and shore up the industry's talent pipeline.

More than two years after the strategy was first promised, Prime Minister Rishi Sunak announced the policy Friday at a meeting of leaders of the G7 group of nations in Japan, coinciding with an agreement to launch a "semiconductors partnership" between the two countries in order to boost supply-chain resilience.

“Semiconductors underpin the devices we use every day and will be crucial to advancing the technologies of tomorrow,” Sunak said in a statement. “Our new strategy focuses our efforts on where our strengths lie, in areas like research and design, so we can build our competitive edge on the global stage.”

Meta is working on its own chip, data center design for AI workloads

Facebook parent company Meta has revealed plans to develop its own custom chip for running artificial intelligence models, along with a new data center architecture for AI workloads.

“We are executing on an ambitious plan to build the next generation of Meta’s AI infrastructure and today, we’re sharing some details on our progress. This includes our first custom silicon chip for running AI models, a new AI-optimized data center design and the second phase of our 16,000 GPU supercomputer for AI research,” Santosh Janardhan, head of infrastructure at Meta, wrote in a blog post Thursday.

More Node.js APIs in Cloudflare Workers — Streams, Path, StringDecoder

Today we are announcing support for three additional APIs from Node.js in Cloudflare Workers. This increases compatibility with the existing ecosystem of open source npm packages, allowing you to use your preferred libraries in Workers, even if they depend on APIs from Node.js.

We recently added support for AsyncLocalStorage, EventEmitter, Buffer, assert and parts of util. Today, we are adding support for streams, path, and string_decoder.

We are also sharing a preview of a new module type, available in the open-source Workers runtime, that mirrors a Node.js environment more closely by making some APIs available as globals, and allowing imports without the node: specifier prefix.

You can start using these APIs today, in the open-source runtime that powers Cloudflare Workers, in local development, and when you deploy your Worker. Get started by enabling the nodejs_compat compatibility flag for your Worker.
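As a quick illustration (the flag name comes from this post; everything else in your wrangler.toml stays as it is), enabling the flag takes a single line:

compatibility_flags = [ "nodejs_compat" ]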

Stream

The Node.js streams API is the original API for working with streaming data in JavaScript that predates the WHATWG ReadableStream standard. Now, a full implementation of Node.js streams (based directly on the official implementation provided by the Node.js project) is available within the Workers runtime.

Let's start with a quick example:

 Continue reading
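The quick example itself is truncated in this excerpt; as a rough sketch of the idea (not the original code, and assuming the nodejs_compat flag shown earlier), a Worker can build its response body from a Node.js stream like this:

import { Readable } from 'node:stream';

export default {
  async fetch(request) {
    // Build a Node.js Readable from a few in-memory chunks.
    const readable = Readable.from(['Hello ', 'from ', 'node:stream!']);
    // Convert it to a WHATWG ReadableStream, which Response accepts as a body.
    return new Response(Readable.toWeb(readable), {
      headers: { 'content-type': 'text/plain' },
    });
  },
};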

Cloudflare Queues: messages at your speed with consumer concurrency and explicit acknowledgement

Communicating between systems can be a balancing act that has a major impact on your business. APIs have limits, billing frequently depends on usage, and end-users are always looking for more speed in the services they use. With so many conflicting considerations, it can feel like a challenge to get it just right. Cloudflare Queues is a tool to make this balancing act simple. With our latest features like consumer concurrency and explicit acknowledgment, it’s easier than ever for developers to focus on writing great code, rather than worrying about the fees and rate limits of the systems they work with.

Queues is a messaging service, enabling developers to send and receive messages across systems asynchronously with guaranteed delivery. It integrates directly with Cloudflare Workers, making it easy to produce and consume messages while working with the many products and services we offer.

What’s new in Queues?

Consumer concurrency

Oftentimes, the systems we pull data from can produce information faster than other systems can consume it. This can occur when consumption involves processing information, storing it, or sending and receiving information to a third-party system. The result is that, sometimes, a queue can fall behind where it should be. Continue reading
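To make explicit acknowledgement concrete, here is a sketch of a consumer Worker (illustrative only: handleMessage is a hypothetical processing function, but ack() and retry() are the per-message operations this feature introduces):

export default {
  async queue(batch, env) {
    for (const message of batch.messages) {
      try {
        // Process each message individually...
        await handleMessage(message.body);
        // ...and acknowledge it explicitly so it is not redelivered.
        message.ack();
      } catch (err) {
        // Retry just this message instead of failing the whole batch.
        message.retry();
      }
    }
  },
};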

Workers Browser Rendering API enters open beta

The Workers Browser Rendering API allows developers to programmatically control and interact with a headless browser instance and create automation flows for their applications and products.

Since the private beta announcement, based on the feedback we've been receiving and our own roadmap, the team has been working on the developer experience and improving the platform architecture for the best possible performance and reliability. Today we enter the open beta and will start onboarding customers from the waitlist.

Developer experience

Starting today, Wrangler, our command-line tool for configuring, building, and deploying applications with Cloudflare developer products, has support for the Browser Rendering API bindings.

You can install Wrangler Beta using npm:

npm install wrangler --save-dev

Bindings allow your Workers to interact with resources on the Cloudflare developer platform. In this case, they will provide your Worker script with an authenticated endpoint to interact with a dedicated Chromium browser instance.

This is all you need in your wrangler.toml once this service is enabled for your account:

browser = { binding = "MYBROWSER", type = "browser" }

Now you can deploy any Worker script that requires Browser Rendering capabilities. You can spawn Chromium instances and interact with Continue reading
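As an illustrative sketch (the MYBROWSER binding name matches the wrangler.toml line above; the target URL is a placeholder), spawning Chromium through Cloudflare's Puppeteer fork might look like this:

import puppeteer from '@cloudflare/puppeteer';

export default {
  async fetch(request, env) {
    // Launch a headless Chromium instance through the browser binding.
    const browser = await puppeteer.launch(env.MYBROWSER);
    const page = await browser.newPage();
    await page.goto('https://example.com/');
    // Capture a screenshot and return it as the response.
    const screenshot = await page.screenshot();
    await browser.close();
    return new Response(screenshot, {
      headers: { 'content-type': 'image/png' },
    });
  },
};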

Developer Week Performance Update: Spotlight on R2

For developers, performance is everything. If your app is slow, it will get outclassed and no one will use it. In order for your application to be fast, every underlying component and system needs to be as performant as possible. In the past, we’ve shown how our network helps make your apps faster, even in remote places. We’ve focused on how Workers provides the fastest compute, even in regions that are really far away from traditional cloud datacenters.

For Developer Week 2023, we’re going to be looking at one of the newest Cloudflare developer offerings and how it compares to an alternative when retrieving assets from buckets: R2 versus Amazon Simple Storage Service (S3). Spoiler alert: we’re faster than S3 when serving media content via public access. Our test showed that on average, Cloudflare R2 was 20-40% faster than Amazon S3. For this test, we used 95th percentile Response tests, which measure the time it takes for a user to make a request to the bucket and get the entirety of the response. This test was designed with the goal of measuring end-user performance when accessing content in public buckets.

In this blog we’re going to talk about why your Continue reading

D1: We turned it up to 11

We’re not going to bury the lede: we’re excited to launch a major update to our D1 database, with dramatic improvements to performance and scalability. Alpha users (which includes any Workers user) can create new databases using the new storage backend right now with the following command:

$ wrangler d1 create your-database --experimental-backend

In the coming weeks, it’ll be the default experience for everyone, but we want to invite developers to start experimenting with the new version of D1 immediately. We’ll also be sharing more about how we built D1’s new storage subsystem, and how it benefits from Cloudflare’s distributed network, very soon.

Remind me: What’s D1?

D1 is Cloudflare’s native serverless database, which we launched into alpha in November last year. Developers have been building complex applications with Workers, KV, Durable Objects, and more recently, Queues & R2, but they’ve also been consistently asking us for one thing: a database they can query.

We also heard consistent feedback that it should be SQL-based, scale-to-zero, and (just like Workers itself), take a Region: Earth approach to replication. And so we took that feedback and set Continue reading
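To make "a database they can query" concrete, here is a sketch of querying D1 from a Worker (illustrative only: the DB binding name and the users table are invented for this example):

export default {
  async fetch(request, env) {
    // Run a parameterized SQL query against the D1 binding.
    const { results } = await env.DB
      .prepare('SELECT id, name FROM users WHERE id = ?')
      .bind(1)
      .all();
    return Response.json(results);
  },
};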

Ampere launches 192-core AmpereOne server processor

Ampere has announced it has begun shipping its next-generation AmpereOne processor, a server chip with up to 192 cores and special instructions aimed at AI processing.

It is also the first generation of chips from the company using homegrown cores rather than cores licensed from Arm. Among the features of these new cores is support for bfloat16, the popular number format used in AI training and inferencing.

“AI is a big piece [of the processor] because you need more compute power,” said Jeff Wittich, chief product officer for Ampere. “AI inferencing is one of the big workloads that is driving the need for more and more compute, whether it’s in your big hyperscale data centers or the need for more compute performance out at the edge.”