Author Archives: Timothy Prickett Morgan

With “Big Chip,” China Lays Out Aspirations For Waferscale

The end of Moore’s Law – the real Moore’s Law where transistors get cheaper and faster with every process shrink – is making chip makers crazy.

The post With “Big Chip,” China Lays Out Aspirations For Waferscale first appeared on The Next Platform.

With “Big Chip,” China Lays Out Aspirations For Waferscale was written by Timothy Prickett Morgan at The Next Platform.

Ethernet Switching Bucks The Server Recession Trend

You might be thinking that, with all of the investment in AI systems these days, the boom in InfiniBand interconnect sales would be eating into sales of high-end Ethernet interconnects in the datacenter.

Great Accelerations: Just How Much Will We Spend On GenAI Again?

Ever since the launch of the “Antares” MI300X and MI300A compute engines by AMD back in early December, we have been mulling over the forecasts for AI spending in general and for infrastructure and accelerators more specifically.

Building A Hassle-Free Way To Port CUDA Code To AMD GPUs

Emulation is not just the sincerest form of flattery. It is also how you jump start the adoption of a new compute engine or move an entire software stack from one platform to another with a different architecture.

University Of Stuttgart Spends €115M To Go Exascale

The University of Stuttgart’s High Performance Computing Center (HLRS) in Germany has tapped Hewlett Packard Enterprise to build a pair of its next-generation supercomputers.

Datacenter Infrastructure Report Card, Q3 2023

It is hard to keep a model of datacenter infrastructure spending in your head at the same time you want to look at trends in cloud and on-premises spending as well as keep score among the key IT suppliers to figure out who is winning and who is losing.

Intel “Emerald Rapids” Xeon SPs: A Little More Bang, A Little Less Bucks

With each successive Intel Xeon SP server processor launch, we can’t help but think the same thing: it would have been better for Intel and customers alike if this chip had come out a year ago, or two years ago, as must have been planned.

Dell, Lenovo Also Waiting For Their AI Server Waves

If the original equipment manufacturers of the world had massive software divisions – like many of them tried to build two decades ago as they emulated IBM, and then changed their minds a decade ago when they sold off their software and services businesses and pared down to pushing tin and iron – then maybe they would have been on the front edge of AI software development, and maybe they would have been closer to the front of the line for GPU allocations from Nvidia and AMD.

AMD Is The Undisputed Datacenter GPU Performance Champ – For Now

There is nothing quite like great hardware to motivate people to create and tune software to take full advantage of it during a boom time.

How AWS Can Undercut Nvidia With Homegrown AI Compute Engines

Amazon Web Services may not be the first of the hyperscalers and cloud builders to create its own custom compute engines, but it has been hot on the heels of Google, which started using its homegrown TPU accelerators for AI workloads in 2015.

If You Want To Sell AI To Enterprises, You Need To Sell Ethernet

Server makers Dell, Hewlett Packard Enterprise, and Lenovo – the three largest original manufacturers of systems in the world, ranked in that order – are adding to the spectrum of interconnects they offer to their enterprise customers.

AWS Taps Nvidia NVSwitch For Beefy AI GPU Nodes

Since the advent of distributed computing, there has been a tension between the tight coherency of memory and its compute within a node – the base level of a unit of compute – and the looser coherency over the network across those nodes.

AWS Adopts Arm V2 Cores For Expansive Graviton4 Server CPU

For more than a year, we have been expecting Amazon Web Services to launch its Graviton4 processor for its homegrown servers at this year’s re:Invent, and lo and behold, chief executive officer Adam Selipsky rolled out the fourth generation in the Graviton CPU lineup – and the fifth iteration, counting last year’s overclocked Graviton3E processor aimed at HPC workloads – during his thrombosis-inducing keynote at the conference.

Groq Says It Can Deploy 1 Million AI Inference Chips In Two Years

If you are looking for an alternative to Nvidia GPUs for AI inference – and who isn’t these days with generative AI being the hottest thing since a volcanic eruption – then you might want to give Groq a call.

Nvidia Proves The Enormous Potential For Generative AI

The exorbitant cost of GPU-accelerated systems for training and inference and the latest rush to find gold in mountains of corporate data are combining to exert tectonic forces on the datacenter landscape and push up a new Himalaya range – with Nvidia as its steepest and highest peak.

Pushing The Limits Of HPC And AI Is Becoming A Sustainability Headache

As Moore’s Law continues to slow, delivering more powerful HPC and AI clusters means building larger, more power-hungry facilities.