As we have noted before, vector databases aren’t new, even though people often talk about them as if they were; in fact, they can trace their origins back a few decades. …
Couchbase Joins The Vector Search In Database Fray was written by Jeffrey Burt at The Next Platform.
The edge is continuing to become a place where IT infrastructure vendors need to be, and that includes chip makers, all of whom have strategies to push their silicon to where the data is increasingly being generated and needs to be stored, processed, and analyzed. …
AMD Flexing Spartan FPGA Muscles In Clouds And At Edges was written by Jeffrey Burt at The Next Platform.
Things would go a whole lot better for server designs if we had a two-year, or better still a four-year, moratorium on adding faster compute engines to machines. …
Pushing PCI-Express Switches And Retimers To Boost Server Bandwidth was written by Timothy Prickett Morgan at The Next Platform.
Note: This story augments and corrects information that originally appeared in Half Eos’d: Even Nvidia Can’t Get Enough H100s For Its Supercomputers, which was published on February 15. …
A Tale Of Two Nvidia Eos Supercomputers was written by Timothy Prickett Morgan at The Next Platform.
By the time that the founders of Achronix, who were all techies from Cornell University, decided to found their own FPGA company twenty years ago, FPGAs had already been in the field for twenty years and the market was dominated by Xilinx (now part of AMD) and Altera (still part of Intel until it gets spun out sometime in the future). …
There Is Still A Place For FPGAs In The Datacenter was written by Timothy Prickett Morgan at The Next Platform.
It is a strange time in the generative AI revolution, with things changing on so many vectors so quickly it is hard to figure out what all of this hardware and software and people-hours costs and what it might be worth when it comes to transforming, well, just about everything. …
Anthropic Fires Off Performance And Price Salvos In AI War was written by Timothy Prickett Morgan at The Next Platform.
It is beginning to look like Dell Technologies and Hewlett Packard Enterprise, the world’s two biggest original equipment manufacturers, are finally going to start benefiting from the generative AI wave, mainly because they are finally getting enough allocations of GPUs from Nvidia and AMD that they can start addressing the needs of customers who don’t happen to be among the hyperscalers and largest cloud builders. …
The AI Wave Finally Starts Lifting Dell And HPE was written by Timothy Prickett Morgan at The Next Platform.
The Ethernet roadmap has had a few bumps and potholes in the four and a half decades since the 10 Mb/sec generation was first published in 1980. …
Synopsys Shepherds Circuits Towards 1.6T Ethernet was written by Jeffrey Burt at The Next Platform.
In the ten years since Google released Kubernetes to the open source community, it has become the dominant platform for orchestrating and managing software containers and microservices, along the way muscling out competitors like Docker Swarm and Mesosphere. …
Kubernetes Clusters Have Massive Overprovisioning Of Compute And Memory was written by Jeffrey Burt at The Next Platform.
Back in 2015, when we were launching The Next Platform, a lot of stuff was going on all at the same time, which is part of the zeitgeist that we were tapping into and that we wanted to chronicle and participate in. …
The Once And Future FPGA Maker Altera was written by Timothy Prickett Morgan at The Next Platform.
Sponsored Feature: With technology, as with real estate, location is everything. …
A Different View From The Edge was written by Joseph Martins at The Next Platform.
There is more than one way to get to a large language model with over 1 trillion parameters that can do lots of different things and that enterprises can use to create AI training and inference infrastructure to extend and enrich their thousands of applications. …
SambaNova Pits LLM Collective Against Monolithic AI Models was written by Timothy Prickett Morgan at The Next Platform.
What is the most important factor that will drive the Nvidia datacenter GPU accelerator juggernaut in 2024? …
He Who Can Pay Top Dollar For HBM Memory Controls AI Training was written by Timothy Prickett Morgan at The Next Platform.
PARTNER CONTENT: High performance computing (HPC) decision-makers are starting to prioritize energy efficiency in operations and procurement plans. …
How Does HPC In The Cloud Enable Energy Efficiency? was written by Martin Courtney at The Next Platform.
Pat Gelsinger, current chief executive officer at Intel and formerly the head of its Data Center Group as well as its chief technology officer, famously invented the tick-tock method of chip launches to bring some order and reason to the way the world’s largest chip maker – as it was in the mid-2000s – mitigated risk and spurred innovation in its products. …
Intel: I Was Lostry, But Now I Am Foundry was written by Timothy Prickett Morgan at The Next Platform.
SPONSORED FEATURE: We might all be working for the same organization, even on the same infrastructure. …
Tighter IT/OT Integration Starts With Zero Touch was written by Joseph Martins at The Next Platform.
Here is a history question for you: How many IT suppliers who do a reasonable portion of their business in the commercial IT sector – and a lot of that in the datacenter – have ever broken through the $100 billion barrier? …
Nvidia Will Be The Next IT Giant To Break $100 Billion In Sales was written by Timothy Prickett Morgan at The Next Platform.
Spoiler alert!
A lot of neat things have just been added to the Arm Neoverse datacenter compute roadmap, but one of them is not a datacenter-class, discrete GPU accelerator. …
Arm Neoverse Roadmap Brings CPU Designs, But No Big Fat GPU was written by Timothy Prickett Morgan at The Next Platform.
For a lot of state universities in the United States, and for their regional or provincial equivalents in other nations across the globe, it is a lot easier to find extremely interested undergraduate and graduate students who want to contribute to the fount of knowledge in high performance computing than it is to find the budget to build a top-notch supercomputer of reasonable scale. …
OSC Blends Intel HBM CPUs And Nvidia HBM GPUs For “Cardinal” Supercomputer was written by Timothy Prickett Morgan at The Next Platform.
Riding high on the AI hype cycle, Lambda – formerly known as Lambda Labs and well known to readers of The Next Platform – has received a $320 million cash infusion to expand its GPU cloud to support training clusters spanning thousands of Nvidia’s top-specced accelerators. …
Lambda Snags $320 Million To Grow Its Rent-A-GPU Cloud was written by Tobias Mann at The Next Platform.