Author Archives: Timothy Prickett Morgan

Nvidia’s $2 Billion Investment In CoreWeave Is A Drop In A $250 Billion Bucket

With the hyperscalers and the cloud builders all working on their own CPU and AI XPU designs, it is no wonder that Nvidia has been championing the neoclouds, which cannot afford to try to be everything to everyone – this is the very definition of enterprise computing – and which, frankly, are having trouble coming up with the trillions of dollars to cover the 150 gigawatts to more than 200 gigawatts of datacenter capacity estimated to be on the books for AI workloads between 2025 and 2030.

Nvidia’s $2 Billion Investment In CoreWeave Is A Drop In A $250 Billion Bucket was written by Timothy Prickett Morgan at The Next Platform.

Intel Is Still Struggling In The Datacenter, But It Could Get Better

Intel has been pushing its two-core server CPU strategy for so long, in one form or another, that we have become accustomed to differentiating products the way Intel does and then trying to figure out what workloads these chips might be useful for.


Upscale AI Nabs Cash To Forge “SkyHammer” Scale Up Fabric Switch

The first company that can make a UALink switch with high radix – meaning lots of ports – and high aggregate bandwidth across those ports that can compete toe-to-toe with Nvidia’s NVSwitch memory fabric and NVLink ports is going to make a lot of money.


By Decade’s End, AI Will Drive More Than Half Of All Chip Sales

As the year came to an end, we tore apart IDC’s assessments of server spending, including the huge jump in accelerated supercomputers for running GenAI and more traditional machine learning workloads. And as this year got started, we did forensic analysis and modeling based on the company’s reckoning of Ethernet switching and routing revenues.


Pushed By GenAI And Front End Upgrades, Ethernet Switching Hits New Highs

By virtue of its scale out capability, which is key to driving the size of absolutely enormous AI clusters, and its universality, Ethernet switch sales are booming. And if recent history is any guide, we can expect Ethernet revenues to climb exponentially higher in the coming quarters as well.


Nvidia Is The Only AI Model Maker That Can Afford To Give It Away

An alien flying in from space aboard a comet would look down on Earth and see that there is this highly influential and famous software company called Nvidia that just so happens to have a massively complex and ridiculously profitable hardware business running a collection of proprietary and open source software that about three quarters of its approximately 40,000 employees create.


How Sustainable Is This Crazy Server Spending?

We can all talk until we are blue in the face about how weird it is for so much money to be spent on servers during the GenAI boom. But after reviewing the latest market report from IDC – which is once again, if sporadically, giving out some stats to the public – we thought that to feel the full impact of this change, we should draw you a picture of the past 26 years of server revenues by quarter so you can take it all in.


What Do You Do When You Want GPFS On The Cloud?

While there are a lot of different file system and object storage options available for HPC and AI customers, many AI organizations and a lot of traditional HPC simulation and modeling centers choose either the open source Lustre parallel file system or the modern variants of IBM’s General Parallel File System (GPFS), known previously as Spectrum Scale and now known as IBM Storage Scale, as the storage underpinning of their applications.
