In October 2024, we started publishing roundup blog posts to share the latest features and updates from our teams. Today, we are announcing general availability of Account Owned Tokens, which allow organizations to improve access control for their Cloudflare services. We are also launching Zaraz Automated Actions, a new feature that streamlines event tracking and tool integration when setting up third-party tools. By automating common actions such as pageviews, custom events, and e-commerce tracking, it removes the need for manual configuration.
Cloudflare is critical infrastructure for the Internet, and we understand that many of the organizations that build on Cloudflare rely on apps and integrations outside the platform to make their lives easier. To access Cloudflare resources, these apps and integrations interact with Cloudflare via our API, authenticated with access tokens and API keys. Today, the API tokens and API keys on the Cloudflare platform are owned by individual users, which makes it awkward to represent services and adds the extra burden of managing users alongside token permissions.
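To make the mechanics concrete, here is a minimal sketch (my illustration, not code from the announcement) of how an integration typically authenticates against the Cloudflare v4 API with an API token via the Authorization: Bearer header; the environment variable name and the specific verification endpoint are illustrative assumptions.

// Minimal sketch: authenticate to the Cloudflare v4 API with an API token.
// The legacy per-user API key scheme instead sends the X-Auth-Email and
// X-Auth-Key headers; an account-owned token would presumably be sent the
// same way as a token here, just without being tied to an individual user.
package main

import (
    "fmt"
    "io"
    "net/http"
    "os"
)

func main() {
    // Assumption: the token is provided via an environment variable.
    token := os.Getenv("CLOUDFLARE_API_TOKEN")

    // Verify the supplied token is valid and active.
    req, err := http.NewRequest("GET",
        "https://api.cloudflare.com/client/v4/user/tokens/verify", nil)
    if err != nil {
        panic(err)
    }
    req.Header.Set("Authorization", "Bearer "+token)
    req.Header.Set("Content-Type", "application/json")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println(resp.Status)
    fmt.Println(string(body))
}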
First, a little explanation because the terms can Continue reading
The previous chapter explained how Feed-forward Neural Networks (FNNs) can be used for multi-class classification of 28 x 28 pixel handwritten digits from the MNIST dataset. While FNNs work well for this type of task, they have significant limitations when dealing with larger, high-resolution color images.
In neural network terminology, each RGB value of an image is treated as an input feature. For instance, a high-resolution 600 dpi RGB color image with dimensions 3.937 x 3.937 inches is roughly 2,362 x 2,362 pixels, i.e. approximately 5.58 million pixels, resulting in roughly 16.7 million RGB values.
If we use a fully connected FNN for training, all of these roughly 17 million input values are fed into every neuron in the first hidden layer, and each neuron must compute a weighted sum over them. The memory required for storing the weights depends on the numerical precision format used. For example, using the 16-bit floating-point (FP16) format, each weight requires 2 bytes, so each neuron needs roughly 33 MB (about 32 MiB) just for its weights. If the first hidden layer has 10,000 neurons, the total memory required for storing the weights in this layer would be around 335 GB.
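To make the arithmetic easy to check, here is a small back-of-the-envelope calculation (a sketch based on the figures above; the image size, FP16 precision, and 10,000-neuron layer are the stated assumptions):

// Back-of-the-envelope estimate of the weight memory needed by a fully
// connected first hidden layer, using the figures from the text above.
package main

import "fmt"

func main() {
    const (
        dpi       = 600.0   // image resolution in dots per inch
        inches    = 3.937   // image width and height in inches
        channels  = 3.0     // R, G, B values per pixel
        bytesFP16 = 2.0     // bytes per weight in FP16
        neurons   = 10000.0 // neurons in the first hidden layer
    )

    pixelsPerSide := dpi * inches           // ~2,362 pixels
    pixels := pixelsPerSide * pixelsPerSide // ~5.58 million pixels
    inputs := pixels * channels             // ~16.7 million RGB values

    perNeuronBytes := inputs * bytesFP16 // one weight per input
    layerBytes := perNeuronBytes * neurons

    fmt.Printf("input features: %.1f million\n", inputs/1e6)
    fmt.Printf("weights per neuron: %.1f MB (%.1f MiB)\n",
        perNeuronBytes/1e6, perNeuronBytes/(1<<20))
    fmt.Printf("first hidden layer: %.0f GB (%.0f GiB)\n",
        layerBytes/1e9, layerBytes/(1<<30))
}

Running it prints roughly 16.7 million inputs, about 33 MB (32 MiB) of weights per neuron, and around 335 GB (312 GiB) for the whole layer.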
In contrast, Convolutional Neural Networks (CNNs) use Continue reading
If you want to sell a lot of hardware to support AI workloads, then the best way to do that is to convince every country on Earth that AI is so important that they must have a lot of it within their borders. …
How Convenient: Every Country Needs AI Sovereignty was written by Jeffrey Burt at The Next Platform.
Any time you can get a lot of companies with very technically adept and strongly opinionated people to work together on a problem, or a set of problems, then you know for a fact that there is a real problem. …
The Collaboration That Will Drive Ethernet Into The HPC And AI Future was written by Timothy Prickett Morgan at The Next Platform.
Industry standard sFlow telemetry is widely supported by network equipment vendors and network management platforms. However, the advent of real-time sFlow analytics has opened up a range of new applications for sFlow. The map above shows the proportion of sFlow-RT instances running in each of the over 70 countries in which it is deployed.
The following use cases are driving current deployments:
Addressing the challenge of operating AI / ML clusters is the emerging application for sFlow visibility. The high speed (400/800G) data center switches needed to handle machine learning traffic flows include sFlow agents, and real-time analytics are essential for optimizing the network so that expensive GPU and compute resources are fully utilized; see Leveraging open technologies to monitor packet drops in AI cluster fabrics.
If you would like to see how real-time network analytics can transform network operations, Getting Started describes how to download and configure the sFlow-RT analytics software for use in your network, or how to try it out using an emulator or pre-captured data.
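As a quick illustration of what "real-time analytics" means in practice, the sketch below simply polls a local sFlow-RT instance over its REST interface; the host, the default port 8008, and the /version endpoint used here are assumptions for illustration rather than details taken from the Getting Started guide.

// Minimal sketch: check that a local sFlow-RT instance is reachable.
// Assumptions (not from the article): sFlow-RT is listening on its default
// REST port 8008 and exposes a /version endpoint returning the version string.
package main

import (
    "fmt"
    "io"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{Timeout: 2 * time.Second}

    resp, err := client.Get("http://localhost:8008/version")
    if err != nil {
        fmt.Println("sFlow-RT not reachable:", err)
        return
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println("sFlow-RT version:", string(body))
}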
Anyone who thinks that Intel is easy to kill need look no further than the historical trends of the Mercury Research market share statistics that we see each quarter. …
The Server Recession Ends, And Both Intel And AMD Won was written by Timothy Prickett Morgan at The Next Platform.
COMMISSIONED As the AI era unfolds, I have been reflecting on my journey in the tech industry. …
AI Workloads Are Changing IT Demands At The Edge was written by Timothy Prickett Morgan at The Next Platform.
It’s time for another “the vendor IS-IS defaults are all wrong” blog post. Wide IS-IS metrics were standardized in RFC 3784 in June 2004, yet most vendors still use the ancient narrow metrics as the default setting.
Want to know more? The Using IS-IS Metrics lab exercise provides all the gory details.
Sponsored Feature Arm is starting to fulfill its promise of transforming the nature of compute in the datacenter, and it is getting some big help from traditional chip makers as well as the hyperscalers and cloud builders that have massive computing requirements and who also need to drive efficiency up and costs down each and every year. …
Custom Arm CPUs Drive Many Different AI Approaches In The Datacenter was written by Timothy Prickett Morgan at The Next Platform.
Caddy is an open-source web server written in Go. It handles TLS certificates automatically and comes with a simple configuration syntax. Users can extend its functionality through plugins to add features like rate limiting, caching, and Docker integration.

While Caddy is available in Nixpkgs, adding extra plugins is not simple. The compilation process needs Internet access, which Nix denies during build to ensure reproducibility. When trying to build the following derivation using xcaddy, a tool for building Caddy with plugins, it fails with this error: dial tcp: lookup proxy.golang.org on [::1]:53: connection refused.
{ pkgs }:
pkgs.stdenv.mkDerivation {
  name = "caddy-with-xcaddy";
  nativeBuildInputs = with pkgs; [ go xcaddy cacert ];
  unpackPhase = "true";
  buildPhase = ''
    xcaddy build --with github.com/caddy-dns/[email protected]
  '';
  installPhase = ''
    mkdir -p $out/bin
    cp caddy $out/bin
  '';
}
Fixed-output derivations are an exception to this rule and get network access during build. They need to specify their output hash. For example, the fetchurl function produces a fixed-output derivation:
{ stdenv, fetchurl }:
stdenv.mkDerivation rec {
  pname = "hello";
  version = "2.12.1";
  src Continue reading
Hello my friend,
We continue our blog series about learning Go (Golang) as a second programming language, which you can use for network and IT infrastructure automation. Today we'll talk about basic data types and variables in both Python and Go.
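As a small preview (my own minimal sketch, not code from the post or the training materials), this is what variable declarations and the basic data types look like on the Go side:

// Minimal sketch of Go variable declarations and basic data types.
package main

import "fmt"

func main() {
    // Explicit declarations with a type.
    var hostname string = "leaf-01"
    var interfaces int = 48
    var uptimeHours float64 = 1234.5
    var isReachable bool = true

    // Short declaration: the compiler infers the type (string here).
    vendor := "example-vendor"

    fmt.Println(hostname, interfaces, uptimeHours, isReachable, vendor)

    // Unlike Python, Go is statically typed: assigning a value of a
    // different type to an existing variable is a compile-time error.
    // interfaces = "forty-eight" // would not compile
}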
Any programming language, whether it is Python or Go (Golang), is a tool to implement your business logic. While it is very important to be experienced with the tool, it is equally important to understand the wider context of network automation, and this is where our trainings will kick-start you:
We offer the following training programs in network automation for you:
During these trainings you will learn the following topics:
For most of the history of high performance computing, a supercomputer was a freestanding, isolated machine that was designed to run some simulation or model, and the only link it needed to the outside world was a relatively small one to show some visualization. …
The Back End AI Network Puts Pressure On The Front End was written by Timothy Prickett Morgan at The Next Platform.