Author Archives: Timothy Prickett Morgan
As far as we have been concerned since founding The Next Platform literally a decade ago this week, AI training and inference in the datacenter are a kind of HPC. …
HPC Market Bigger Than Expected, And Growing Faster was written by Timothy Prickett Morgan at The Next Platform.
Every couple of years, Lawrence Livermore National Laboratory gets to install the world’s fastest supercomputer. …
“El Capitan” Supercomputer Blazes The Trail for Converged CPU-GPU Compute was written by Timothy Prickett Morgan at The Next Platform.
There has been a lot more churn on the November Top500 supercomputer rankings, which are the talk of the SC24 conference in Atlanta this week, than there was on the June list that was announced at the ISC24 conference in Hamburg, Germany back in May, and there are some interesting developments in the new machinery that is being installed. …
AMD Now Has More Compute On The Top500 Than Nvidia was written by Timothy Prickett Morgan at The Next Platform.
Lawrence Livermore National Laboratory, Sandia National Laboratories, and Los Alamos National Laboratory are known by the shorthand “Tri-Labs” in the HPC community, but these HPC centers perhaps could be called “Try-Labs” because they historically have tried just about any new architecture to see what promise it might hold in advancing the missions of the US Department of Energy. …
Sandia To Push Both HPC And AI With Cerebras “Kingfisher” Cluster was written by Timothy Prickett Morgan at The Next Platform.
Welcome to the second part in our series of chats with J Metz, chair of the Ultra Ethernet Consortium. …
UEC Doesn’t Want To Kill InfiniBand, But It Wants Ethernet To Beat It was written by Timothy Prickett Morgan at The Next Platform.
Any time you can get a lot of companies with very technically adept and strongly opinionated people to work together on a problem, or a set of problems, then you know for a fact that there is a real problem. …
The Collaboration That Will Drive Ethernet Into The HPC And AI Future was written by Timothy Prickett Morgan at The Next Platform.
Anyone who thinks that Intel is easy to kill need look no further than the historical trends of the Mercury Research market share statistics that we see each quarter. …
The Server Recession Ends, And Both Intel And AMD Won was written by Timothy Prickett Morgan at The Next Platform.
COMMISSIONED As the AI era unfolds, I have been reflecting on my journey in the tech industry. …
AI Workloads Are Changing IT Demands At The Edge was written by Timothy Prickett Morgan at The Next Platform.
Sponsored Feature Arm is starting to fulfill its promise of transforming the nature of compute in the datacenter, and it is getting some big help from traditional chip makers as well as the hyperscalers and cloud builders that have massive computing requirements and who also need to drive efficiency up and costs down each and every year. …
Custom Arm CPUs Drive Many Different AI Approaches In The Datacenter was written by Timothy Prickett Morgan at The Next Platform.
For most of the history of high performance computing, a supercomputer was a freestanding, isolated machine that was designed to run some simulation or model, and the only link it needed to the outside world was a relatively small one to show some visualization. …
The Back End AI Network Puts Pressure On The Front End was written by Timothy Prickett Morgan at The Next Platform.
COMMISSIONED In the fast-paced world of AI, GPUs are often hailed as the quiet powerhouse driving innovation. …
Stop Bottlenecking Your AI: Storage Hacks for Hungry GPUs was written by Timothy Prickett Morgan at The Next Platform.
There are lots of ways that we might build out the memory capacity and memory bandwidth of compute engines to drive AI and HPC workloads better than we have been able to do thus far. …
We Can’t Get Enough HBM, Or Stack It Up High Enough was written by Timothy Prickett Morgan at The Next Platform.
Think of it as the ultimate offload model.
One of the geniuses of the cloud – perhaps the central genius – is that a big company with a large IT budget, perhaps on the order of hundreds of millions of dollars per year, and a certain amount of expertise creates a much, much larger IT organization with billions of dollars – and with AI now tens of billions of dollars – in investments, and then rents out the vast majority of that capacity to third parties, who essentially allow the original cloud builder to get its own IT operations for close to free. …
You Are Paying The Clouds To Build Better AI Than They Will Rent You was written by Timothy Prickett Morgan at The Next Platform.
It is beginning to look like chip maker Intel hit the bottom in its products and foundry businesses in the second quarter of this year and that revenues are slowly – we won’t go so far as to say surely – improving. …
Intel Takes The Big Restructuring Hits As It Looks Ahead was written by Timothy Prickett Morgan at The Next Platform.
COMMISSIONED As enterprises increasingly adopt GenAI-powered AI agents, making high-quality data available for these software assistants will come into sharper focus. …
High Quality Data Is Key For Effective AI Agents was written by Timothy Prickett Morgan at The Next Platform.
The minute that search engine giant Google wanted to be a cloud, and then several years later when Google realized that companies were not ready to buy full-on platform services that masked the underlying hardware but instead wanted lower level infrastructure services that gave them more optionality as well as more responsibility, it was inevitable that Google Cloud would have to buy compute engines from Intel, AMD, and Nvidia for its server fleet. …
Google Covers Its Compute Engine Bases Because It Has To was written by Timothy Prickett Morgan at The Next Platform.
Lisa Su has turned in her first ten years at the helm of AMD, and what a hell of a run it has been. …
AMD Will Need Another Decade To Try To Pass Nvidia was written by Timothy Prickett Morgan at The Next Platform.
No matter how elegant and clever the design is for a compute engine, the difficulty and cost of moving existing – and sometimes very old – code from the device it currently runs on to that new compute engine is a very big barrier to adoption. …
HPC Gets A Reconfigurable Dataflow Engine To Take On CPUs And GPUs was written by Timothy Prickett Morgan at The Next Platform.
The third quarter earnings season starts this week for the hyperscaler and cloud giants, and it is fortuitous that the economists and IT analysts at Gartner have updated their forecast for IT spending for 2024, added a jaw-dropping forecast for 2025, and hinted at a brave new world of massive datacenter spending out to 2028. …
System Spending Forecast Goes Through The Datacenter Roof was written by Timothy Prickett Morgan at The Next Platform.
It was only a few months ago that waferscale compute pioneer Cerebras Systems was bragging that a handful of its WSE-3 engines lashed together could run circles around GPU instances based on Nvidia's "Hopper" H100 GPUs when running the open source Llama 3.1 foundation model created by Meta Platforms. …
Cerebras Trains Llama Models To Leap Over GPUs was written by Timothy Prickett Morgan at The Next Platform.