On this episode of the Hedge, Anil Varanasi joins Russ to talk about the complexities of network operations and what Meter is doing in this space.
note: even though this is a more product-heavy episode of the Hedge than usual, it is not sponsored
In this challenge lab, you’ll configure a BIRD daemon running in a container as a BGP route reflector in a transit autonomous system. If you completed the IBGP lab exercises, you should be familiar with the configuration concepts, but you will probably struggle if you haven’t worked with BIRD configuration before.
Click here to start the lab in your browser using GitHub Codespaces (or set up your own lab infrastructure). After starting the lab environment, change the directory to challenge/01-bird-rr, build the BIRD container with netlab clab build bird if needed, and execute netlab up.
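If you’ve never configured BIRD, an IBGP route-reflector session in bird.conf looks roughly like this (a generic BIRD 2.x sketch with made-up router ID, addresses, and AS number; it is not the lab solution):

```
# Hypothetical BIRD 2.x route-reflector sketch
router id 10.0.0.1;

protocol bgp rr_client_1 {
  local 10.0.0.1 as 65000;
  neighbor 10.0.0.2 as 65000;    # IBGP route-reflector client
  rr client;                     # reflect routes received from other clients
  rr cluster id 10.0.0.1;        # cluster ID used in loop prevention
  ipv4 {
    import all;
    export all;
  };
}
```

You’d repeat a similar protocol stanza per client; the actual addressing and policy depend on the lab topology.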
“An intellectual is a man who says a simple thing in a difficult way; an artist is a man who says a difficult thing in a simple way.” (Charles Bukowski, Notes of a Dirty Old Man, 1969)

In a world where technology seems to be working against us and seemingly simple things require far …
The post Containerlab – The Anti-Pattern appeared first on The Gratuitous Arp.
Of all of the hyperscalers and cloud builders, Meta Platforms has always been the one that we expected to design and have manufactured its own CPU and XPU accelerator compute engines. …
Meta Buys Rivos To Accelerate Compute Engine Engineering was written by Timothy Prickett Morgan at The Next Platform.
For years, netlab has had custom configuration templates that can be used to deploy custom configurations onto lab devices. The custom configuration templates can be Jinja2 templates, and you can create different templates (for the same functionality) for different platforms. However, using that machinery when all you need is an extra command or two makes approximately as much sense as using a Kubernetes cluster to deploy a BusyBox container.
netlab release 25.09 solves that problem with the files plugin and the inline config functionality.
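For context, the long-standing custom-template mechanism looks roughly like this (a hypothetical topology snippet; the device type, node names, and template name are made up):

```
# Hypothetical netlab topology using a custom configuration template.
# 'extra' refers to extra.j2 (or a platform-specific extra.<platform>.j2)
# in the lab directory; netlab renders and deploys it to r1.
defaults.device: eos
nodes:
  r1:
    config: [ extra ]
  r2:
links: [ r1-r2 ]
```

The 25.09 files plugin and inline configs remove the need for a separate template file in the simple cases.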
The Address Vector (AV) is a provider-managed mapping that connects remote fabric addresses to compact integer handles (fi_addr_t) used in communication operations. Unlike a routing table, the AV does not store IP-to-device mappings. Instead, it converts an opaque Fabric Address (FA)—which may contain IP, port, and transport-specific identifiers—into a simple handle that endpoints can use for sending and receiving messages. The application never needs to reference the raw IP addresses directly.
The application begins by requesting an Address Vector (AV) through the fi_av_open() call. To do this, it first defines the desired AV properties in a fi_av_attr structure:
int fi_av_open(struct fid_domain *domain, struct fi_av_attr *attr, struct fid_av **av, void *context);
struct fi_av_attr av_attr = {
    .type = FI_AV_TABLE,   /* addresses indexed by simple integer handles */
    .count = 16,           /* expected number of addresses to insert */
    .rx_ctx_bits = 0,      /* no scalable-endpoint receive contexts */
    .ep_per_node = 1,      /* endpoints per fabric node */
    .name = "my_av",       /* name used when sharing a named AV */
    .map_addr = NULL,      /* no pre-mapped shared address region */
    .flags = 0
};
Example 4-1: fi_av_attr structure
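Once the AV is open, the application inserts remote fabric addresses to obtain the fi_addr_t handles used in data-transfer calls. A minimal sketch (assuming the av object opened above, and a peer address obtained out of band, e.g. via fi_getname() on the remote side):

```
/* Sketch: insert one peer address into the AV and get back its handle.
   'peer_addr' is assumed to hold the peer's opaque fabric address,
   exchanged out of band. */
fi_addr_t peer;
int ret = fi_av_insert(av, peer_addr, 1, &peer, 0, NULL);
if (ret != 1) {
    /* fewer than one address inserted -- handle the error */
}
/* 'peer' can now be used as the destination address in send operations */
```

fi_av_insert() returns the number of addresses successfully inserted, so checking against the requested count is the usual error test for synchronous insertion.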
I am so happy the first post about sharing my Python scripts got such a […]
The post From Classroom to Community 2 - ROT13 and Math Quizzer first appeared on Brezular's Blog.
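The ROT13 cipher mentioned in the post title rotates each letter 13 places in the alphabet, so applying it twice restores the original text. A generic Python illustration (not the author’s script):

```python
def rot13(text: str) -> str:
    """Rotate each ASCII letter 13 places, leaving other characters alone."""
    result = []
    for ch in text:
        if "a" <= ch <= "z":
            result.append(chr((ord(ch) - ord("a") + 13) % 26 + ord("a")))
        elif "A" <= ch <= "Z":
            result.append(chr((ord(ch) - ord("A") + 13) % 26 + ord("A")))
        else:
            result.append(ch)
    return "".join(result)

# ROT13 is its own inverse:
print(rot13("Hello"))         # → Uryyb
print(rot13(rot13("Hello")))  # → Hello
```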
What does Cerebras Systems, the first successful waferscale computing commercializer and a contender in the race to provide compute for the world’s burgeoning AI inference workload, do for an encore? …
What Is Cerebras Going To Do With That $1.1 Billion In New Funding? was written by Timothy Prickett Morgan at The Next Platform.
Back at the end of July, we had a discussion with CPU maker AMD and the topic of conversation was hybrid cloud. …
Arm Says Neoverse Is A More Universal Compute Substrate Than X86 was written by Timothy Prickett Morgan at The Next Platform.
In the EVPN Designs: Layer-3 Inter-AS Option A article, I described the simplest multi-site design, in which the WAN edge routers exchange IP routes in individual VRFs, resulting in two isolated layer-2 fabrics connected with a layer-3 link.
Today, let’s explore a design that will excite the True Believers in end-to-end layer-2 networks: two EVPN fabrics connected with an EBGP session to form a unified, larger EVPN fabric. We’ll use the same “physical” topology as the previous example; the only modification is that the WA-WB link is now part of the underlay IP network.
What’s the difference between Meta Platforms and OpenAI? The big one – and perhaps the most important one in the long run – is that when Meta Platforms does a deal with neocloud CoreWeave, it actually has a revenue stream from its advertising on various Web properties that it can pump back into AI investments while OpenAI is still burning money much faster than it is making it. …
Get Your Money For Nothing, Chips Definitely Not For Free was written by Timothy Prickett Morgan at The Next Platform.
At some point, when the cloud bills get very high and the AI usage starts going up and up, the enterprise is going to want to stop paying a cloud, get its own iron, and co-locate it in a rentable datacenter. …
Managing AI Datacenter Networks With – You Guessed It – AI was written by Timothy Prickett Morgan at The Next Platform.
Tucked behind the administrator login screen of countless websites is one of the Internet’s unsung heroes: the Content Management System (CMS). This seemingly basic piece of software is used to draft and publish blog posts, organize media assets, manage user profiles, and perform countless other tasks across a dizzying array of use cases. One standout in this category is a vibrant open-source project called Payload, which has over 35,000 stars on GitHub and has generated so much community excitement that it was recently acquired by Figma.
Today we’re excited to showcase a new template from the Payload team, which makes it possible to deploy a full-fledged CMS to Cloudflare’s platform in a single click: just click the Deploy to Cloudflare button to generate a fully-configured Payload instance, complete with bindings to Cloudflare D1 and R2. Below we’ll dig into the technical work that enables this, some of the opportunities it unlocks, and how we’re using Payload to help power Cloudflare TV. But first, a look at why hosting a CMS on Workers is such a game changer.
Behind the scenes: Cloudflare TV’s Payload instance
Most CMSs are designed to be hosted on a conventional server that runs …
The purpose of this phase is to define the queue where operation completions will be reported. Completion queues are used to report the completion of operations submitted to endpoints, such as data transfers, RMA accesses, or remote write requests. By preparing a struct fi_cq_attr, the application describes exactly what it needs, so the provider can allocate a CQ that meets its requirements.
Example API Call:
struct fi_cq_attr cq_attr = {
.size = 2048,
.format = FI_CQ_FORMAT_DATA,
.wait_obj = FI_WAIT_FD,
.flags = FI_WRITE | FI_REMOTE_WRITE | FI_RMA,
.data_size = 64
};
struct fid_cq *cq;
int ret = fi_cq_open(domain, &cq_attr, &cq, NULL);
Explanation of fields:
.size = 2048: The CQ can hold up to 2048 completions. This determines how many completed operations can be buffered before the application consumes them.
.format = FI_CQ_FORMAT_DATA: This setting determines the level of detail included in each completion entry. With FI_CQ_FORMAT_DATA, the CQ entries contain information about the operation, such as the buffer pointer, the length of data, and optional completion data. If the application uses tagged messaging, choosing FI_CQ_FORMAT_TAGGED expands the entries to …
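After the CQ is open, completions are harvested with fi_cq_read(). A minimal polling sketch (assuming the cq handle opened above; with FI_CQ_FORMAT_DATA, entries are read as struct fi_cq_data_entry):

```
/* Sketch: poll the CQ for one completion. */
struct fi_cq_data_entry entry;
ssize_t n = fi_cq_read(cq, &entry, 1);
if (n > 0) {
    /* one completion: entry.op_context, entry.len, entry.data are valid */
} else if (n == -FI_EAGAIN) {
    /* no completions yet -- poll again, or block on the FD wait object */
}
```

Because the CQ was opened with .wait_obj = FI_WAIT_FD, the application could alternatively retrieve the file descriptor and wait on it with epoll instead of busy-polling.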