The web is the most powerful application platform in existence. As long as you have the right API, you can safely run anything you want in a browser.
Well… anything but cryptography.
It is as true today as it was in 2011 that Javascript cryptography is Considered Harmful. The main problem is code distribution. Consider an end-to-end-encrypted messaging web application. The application generates cryptographic keys in the client’s browser that let users view and send end-to-end encrypted messages to each other. If the application is compromised, what would stop the malicious actor from simply modifying their Javascript to exfiltrate messages?
Smartphone apps, notably, don’t have this issue, because app stores do a lot of heavy lifting to provide security for the app ecosystem. Specifically, they provide integrity, ensuring that the apps being delivered are not tampered with; consistency, ensuring that all users get the same app; and transparency, ensuring that the record of an app’s versions is truthful and publicly visible.
It would be nice if we could get these properties for our end-to-end encrypted web application, and the web as a whole, without requiring a single central authority like Continue reading

The National Research Platform (NRP) operates a globally distributed, high-performance computing and networking environment, with an average of 15,000 pods across 450 nodes supporting more than 3,000 scientific project namespaces. With its head node in San Diego, NRP connects research institutions and data centers worldwide via links ranging from 10 to 400 Gbps, serving more than 5,000 users in 70+ locations.
NRP is a partnership of more than 50 institutions, led by researchers at UC San Diego, University of Nebraska-Lincoln, and the Massachusetts Green High Performance Computing Center, and includes contributions from the National Science Foundation, the Department of Energy, the Department of Defense, and many research universities and R&E networking organizations in the US and around the world.

NRP needed a way to diagnose connectivity problems across globally distributed storage nodes. Frequent changes to edge network configurations, ACLs, firewalls, and static routes caused blocked ports, forcing manual troubleshooting with tools such as nmap and iperf. This process slowed down root-cause analysis and problem resolution.
Scientific workflows demanded maximum throughput over 100/400 Gbps links and jumbo frames. Traditional host firewalls introduced unacceptable Continue reading
For years, Oracle has found itself solidly in the second tier of cloud providers, well behind the top three of Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform, which combined account for more than 60 percent of the global cloud infrastructure services market. …
Ellison: Oracle Is Leveraging All Of Its Advantages To Build A Different Cloud was written by Jeffrey Burt at The Next Platform.
To enable Remote Memory Access (RMA) operations between processes, each endpoint — representing a communication channel much like a TCP socket — must know the destination process’s location within the fabric. This location is represented by the Fabric Address (FA) assigned to a Fabric Endpoint (FEP).
During job initialization, FAs are distributed through a control-plane–like procedure in which the master rank collects FAs from all ranks and then broadcasts the complete Rank-to-FA mapping to every participant (see Chapter 3 for details). Each process stores this Rank–FA mapping locally as a structure, which can then be inserted into the Address Vector (AV) Table.
When FAs from the distributed Rank-to-FA table are inserted into the AV Table, the provider assigns each entry an index number, which is published to the application as an fi_addr_t handle. After an endpoint object is bound to the AV Table, the application uses this handle — rather than the full address — when referencing a destination process. This abstraction hides the underlying address structure from the application and allows fast and efficient lookups during communication.
This mechanism resembles the functionality of a BGP Route Reflector (RR) in IP networks. Each RR client advertises its Continue reading
In the multi-pod EVPN design, I described a simple way to merge two EVPN fabrics into a single end-to-end fabric. Here are a few highlights of that design:
In that design, the WAN edge routers have to support EVPN (at least in the control plane) and carry all EVPN routes for both fabrics. Today, we’ll change the design to use simpler WAN edge routers that support only IP forwarding.
On October 4, independent developer Theo Browne published a series of benchmarks designed to compare server-side JavaScript execution speed between Cloudflare Workers and Vercel, a competing compute platform built on AWS Lambda. The initial results showed Cloudflare Workers performing worse than Node.js on Vercel at a variety of CPU-intensive tasks, by a factor of as much as 3.5x.
We were surprised by the results. The benchmarks were designed to compare JavaScript execution speed in a CPU-intensive workload that never waits on external services. But Cloudflare Workers and Node.js both use the same underlying JavaScript engine: V8, the open source engine from Google Chrome. Hence, one would expect the benchmarks to be executing essentially identical code in each environment. Physical CPUs can vary in performance, but modern server CPUs do not vary by anywhere near 3.5x.
On investigation, we discovered a wide range of small problems that contributed to the disparity, ranging from some bad tuning in our infrastructure, to differences between the JavaScript libraries used on each platform, to some issues with the test itself. We spent the week working on many of these problems, which means over the past week Workers got better and faster Continue reading
It is Oracle OpenWorld CloudWorld AI World this week, so we expect a lot of AI infrastructure announcements from Big Red, with AI being the biggest new workload to hit the enterprise in decades. …
Oracle First In Line For AMD “Altair” MI450 GPUs, “Helios” Racks was written by Timothy Prickett Morgan at The Next Platform.
“Platformization” eventually comes to every high-profile IT space as the number of tools and amount of complexity increases. …
Google Rolls Up Gemini And AI Tools Into An Enterprise Platform was written by Jeffrey Burt at The Next Platform.
Daniel Dib asked a sad question on LinkedIn:
Where did all the great documentation go?
In more detail:
There was a time when documentation answered almost all questions:
- What is the thing?
- What does the thing do?
- Why would you use the thing?
- How do you configure the thing?
I’ve seen the same thing happening in training, and here’s my cynical TL&DR answer: the managers of the documentation/training departments don’t understand the true value of what they’re producing, and thus cannot justify a decent budget to make it happen.
If it seems like OpenAI is shaking up the IT market every other day or so, that is because that is precisely what it is doing. …
Broadcom Goes Wide With AI Systems And Takes On The ODMs was written by Timothy Prickett Morgan at The Next Platform.
A few days ago, I described how you can use the new config.inline functionality to apply additional configuration commands to individual devices in a netlab-powered lab.
However, sometimes you have to apply the same set of commands to several devices. Although you could use device groups to do that, netlab release 25.09 offers a much better mechanism: you can embed custom configuration templates in the lab topology file.
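As a rough sketch, an embedded template in a topology file might look something like this. The attribute names below (`config`, the group layout) are illustrative assumptions, not verified against the netlab 25.09 release notes; check the netlab documentation for the exact syntax.

```yaml
# Hypothetical netlab topology fragment -- attribute names are
# illustrative only; consult the netlab docs for the 25.09 syntax.
defaults.device: eos

nodes: [ r1, r2, r3 ]

groups:
  routers:
    members: [ r1, r2, r3 ]
    config: [ lldp ]      # apply the same custom template to all members
```

The appeal of embedding the template in the topology file is that the lab stays a single self-contained artifact instead of a topology plus a directory of template snippets.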
[Updated 12 October 2025: figure & UET addressing section]
In libfabric and Ultra Ethernet Transport (UET), the endpoint, represented by the object fid_ep, serves as the primary communication interface between a process and the underlying network fabric. Every data exchange, whether it involves message passing, remote memory access (RMA), or atomic operations, ultimately passes through an endpoint. It acts as a software abstraction of the transport hardware, exposing a programmable interface that the application can use to perform high-performance data transfers.
Conceptually, an endpoint resembles a socket in the TCP/IP world. However, while sockets hide much of the underlying network stack behind a simple API, endpoints expose far more detail and control. They allow the process to define which completion queues to use, what capabilities to enable, and how multiple communication contexts are managed concurrently. This design gives applications, especially large distributed training frameworks and HPC workloads, direct control over latency, throughput, and concurrency in ways that traditional sockets cannot provide.
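The extra control the endpoint exposes shows up directly in the setup sequence. The outline below sketches the usual libfabric call order for bringing up an RDM endpoint; it is not a complete program (declarations, attribute structs, and error handling are omitted), just the shape of the API.

```
/* Sketch of typical libfabric endpoint setup (error handling and
 * attribute tuning omitted; see the fi_endpoint(3) man page). */
struct fi_info *hints = fi_allocinfo(), *info;
hints->ep_attr->type = FI_EP_RDM;                /* reliable datagram endpoint */
hints->caps = FI_MSG | FI_RMA;                   /* messages + remote memory access */
fi_getinfo(FI_VERSION(1, 20), NULL, NULL, 0, hints, &info);

fi_fabric(info->fabric_attr, &fabric, NULL);     /* fabric and domain objects */
fi_domain(fabric, info, &domain, NULL);

fi_endpoint(domain, info, &ep, NULL);            /* the fid_ep object */
fi_cq_open(domain, &cq_attr, &cq, NULL);         /* completion queue */
fi_av_open(domain, &av_attr, &av, NULL);         /* address vector */

fi_ep_bind(ep, &cq->fid, FI_TRANSMIT | FI_RECV); /* bind CQ and AV to the EP */
fi_ep_bind(ep, &av->fid, 0);
fi_enable(ep);                                   /* endpoint ready for data transfer */
```

Each of these binding decisions (which completion queue, which address vector, which capabilities) is exactly the detail a BSD socket would hide behind `socket()` and `connect()`.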
Furthermore, socket-based communication typically relies on the operating system’s networking stack and consumes CPU cycles for data movement and protocol handling. In contrast, endpoint communication paths can interact directly with the NIC, enabling user-space data transfers Continue reading