
Author Archives: Timothy Prickett Morgan
Wide area networks and datacenter interconnects, or DCIs, as we have known them for the past decade or so are nowhere near beefy enough or fast enough to take on the job of scaling AI training workloads across multiple datacenters. …
Cisco Takes On Broadcom, Nvidia For Fat AI Datacenter Interconnects was written by Timothy Prickett Morgan at The Next Platform.
It is one thing for the market researchers of the world to make prognostications about hardware, software, and services spending relating to the GenAI boom. …
Dell Says It Can Finally Make Some Big Money On GenAI was written by Timothy Prickett Morgan at The Next Platform.
In early 2024, we all had been wondering how OpenAI was going to pay for the $100 billion Stargate datacenter infrastructure project it was rumored to be planning with Microsoft. …
Did AMD Use ChatGPT To Come Up With Its OpenAI Partnership Deal? was written by Timothy Prickett Morgan at The Next Platform.
Of all of the hyperscalers and cloud builders, Meta Platforms has always been the one that we expected to design and have manufactured its own CPU and XPU accelerator compute engines. …
Meta Buys Rivos To Accelerate Compute Engine Engineering was written by Timothy Prickett Morgan at The Next Platform.
What does Cerebras Systems, the first successful waferscale computing commercializer and a contender in the race to provide compute for the world’s burgeoning AI inference workload, do for an encore? …
What Is Cerebras Going To Do With That $1.1 Billion In New Funding? was written by Timothy Prickett Morgan at The Next Platform.
Back at the end of July, we had a discussion with CPU maker AMD and the topic of conversation was hybrid cloud. …
Arm Says Neoverse Is A More Universal Compute Substrate Than X86 was written by Timothy Prickett Morgan at The Next Platform.
What’s the difference between Meta Platforms and OpenAI? The big one – and perhaps the most important one in the long run – is that when Meta Platforms does a deal with neocloud CoreWeave, it actually has a revenue stream from its advertising on various Web properties that it can pump back into AI investments while OpenAI is still burning money much faster than it is making it. …
Get Your Money For Nothing, Chips Definitely Not For Free was written by Timothy Prickett Morgan at The Next Platform.
At some point, when the cloud bills get very high and the AI usage starts going up and up, the enterprise is going to want to stop paying a cloud, get its own iron, and co-locate it in a rentable datacenter. …
Managing AI Datacenter Networks With – You Guessed It – AI was written by Timothy Prickett Morgan at The Next Platform.
It is now a well-known fact in the datacenters of the world, which are trying to cram ten pounds of power usage into a five pound bit barn bag, that liquid cooling is an absolute necessity if the density of high performance computing systems is to be increased to drive down latency between components and therefore drive up performance. …
Microsoft And Corintis Champion Microfluidics Cooling Pioneered By IBM was written by Timothy Prickett Morgan at The Next Platform.
When we try to predict the weather, we use ensembles of the initial conditions on the ground, in the oceans, and throughout the air to create a kind of probabilistic average forecast, and then we take ensembles of models, which often have very different answers for extreme weather conditions like hurricanes and typhoons, to get a better sense of what might happen wherever and whenever we are concerned. …
Dicing, Slicing, And Augmenting Gartner’s AI Spending Forecast was written by Timothy Prickett Morgan at The Next Platform.
The United States may not have an indigenous foundry that makes high performance XPU compute engines for AI and HPC applications, but it certainly does have a home-based maker of high performance memory in Micron Technology. …
Micron Humming Along On All Memory Cylinders was written by Timothy Prickett Morgan at The Next Platform.
UPDATED: Watching Nvidia grow over the past two decades, from a gaming and visualization GPU supplier to the behemoth of the modern AI datacenter, has been nothing short of amazing. …
With Enlightened Self-Interest, Nvidia Reshapes The Tech World In Its Own Image was written by Timothy Prickett Morgan at The Next Platform.
If the hyperscalers are masters of anything, it is driving scale up and driving costs down so that a new type of information technology can be cheap enough to be widely deployed. …
Google Shows Off Its Inference Scale And Prowess was written by Timothy Prickett Morgan at The Next Platform.
Well, that didn’t take long. In April 2020, Nvidia completed its $6.9 billion acquisition of Mellanox Technologies for its InfiniBand and Ethernet switching, and a little more than five years and a GenAI boom later, Nvidia has been crowned the leading revenue generator for Ethernet switching in the datacenter by IDC. …
Nvidia Takes The Commanding Lead In Datacenter Ethernet Switching was written by Timothy Prickett Morgan at The Next Platform.
“If you build it, they will come,” as we all learned from watching Field of Dreams two and a half decades ago. …
Field Of GPUs was written by Timothy Prickett Morgan at The Next Platform.
Wouldn’t it be funny if Larry Ellison, who has become the elder statesman of the datacenter, had the last laugh on the cloud builders and model builders by beating them at their own game? …
Oracle Cloud Can Be As Big As AWS This Decade was written by Timothy Prickett Morgan at The Next Platform.
It is beginning to look like the period spanning from the second half of 2026 through the first half of 2027 is going to be a local maximum in spending on XPU-accelerated systems for AI workloads. …
Nvidia Disaggregates Long-Context Inference To Drive Bang For The Buck was written by Timothy Prickett Morgan at The Next Platform.
If AI is to become pervasive, as the model builders and datacenter builders who are investing enormous sums of money are clearly banking on it to be, then it really does have to be a global phenomenon. …
OpenAI Lays Out The Principles Of Global-Scale Computing was written by Timothy Prickett Morgan at The Next Platform.
Ever since Nvidia reported its most recent financial results, where company co-founder and chief executive officer Jensen Huang said that there would be somewhere between $3 trillion and $4 trillion in spending on AI between now and the end of the decade, we have been on the prowl for any market research that backs up this claim or is its source. …
IDC Makes Ebullient AI Spending Forecast Out To 2029 was written by Timothy Prickett Morgan at The Next Platform.
Broadcom turned in its financial results for its third quarter last night, and all of the tongues in the IT sector are wagging about how the chip maker and enterprise software giant has landed a fourth customer for its burgeoning custom XPU design and shepherding business. …
Broadcom Lands Shepherding Deal For OpenAI “Titan” XPU was written by Timothy Prickett Morgan at The Next Platform.