Author Archives: Timothy Prickett Morgan

The Datacenter GPU Gravy Train That No One Will Derail

We have five decades of very fine-grained analysis of CPU compute engines in the datacenter, and when it comes to CPU servers, change comes at a steady but glacial pace.

The post The Datacenter GPU Gravy Train That No One Will Derail first appeared on The Next Platform.

The Datacenter GPU Gravy Train That No One Will Derail was written by Timothy Prickett Morgan at The Next Platform.

The Future We Simulate Is The One We Create

Investment in supercomputing and related HPC technologies is not just a sign of how much we are willing to bet on the future with someone else’s money, but how much we believe in it ourselves, and more importantly, how much we believe in the core idea that we can predict and therefore shape the future of the world.

With “Big Chip,” China Lays Out Aspirations For Waferscale

The end of Moore’s Law – the real Moore’s Law where transistors get cheaper and faster with every process shrink – is making chip makers crazy.

Ethernet Switching Bucks The Server Recession Trend

You might be thinking that, with all of the investment in AI systems these days, the boom in InfiniBand interconnect sales would be eating into sales of high-end Ethernet interconnects in the datacenter.

Great Accelerations: Just How Much Will We Spend On GenAI Again?

Ever since the launch of the “Antares” MI300X and MI300A compute engines by AMD back in early December, we have been mulling over the forecasts for AI spending in general and for infrastructure and accelerators more specifically.

Building A Hassle-Free Way To Port CUDA Code To AMD GPUs

Emulation is not just the sincerest form of flattery. It is also how you jump start the adoption of a new compute engine or move an entire software stack from one platform to another with a different architecture.

University Of Stuttgart Spends €115M To Go Exascale

The University of Stuttgart’s High Performance Computing Center (HLRS) in Germany has tapped Hewlett Packard Enterprise to build a pair of its next-generation supercomputers.

Datacenter Infrastructure Report Card, Q3 2023

It is hard to keep a model of datacenter infrastructure spending in your head at the same time you want to look at trends in cloud and on-premises spending as well as keep score among the key IT suppliers to figure out who is winning and who is losing.

Intel “Emerald Rapids” Xeon SPs: A Little More Bang, A Little Less Bucks

With each successive Intel Xeon SP server processor launch, we can’t help but think the same thing: it would have been better for Intel and customers alike if this chip had been out the door a year ago, or two years ago, as must have been planned.

Dell, Lenovo Also Waiting For Their AI Server Waves

If the original equipment manufacturers of the world had massive software divisions – like many of them tried to build two decades ago as they emulated IBM, before changing their minds a decade ago, selling off their software and services businesses, and paring down to pushing tin and iron – then maybe they would have been on the front edge of AI software development, and maybe they would have been closer to the front of the line for GPU allocations from Nvidia and AMD.

AMD Is The Undisputed Datacenter GPU Performance Champ – For Now

There is nothing quite like great hardware to motivate people to create and tune software to take full advantage of it during a boom time.

How AWS Can Undercut Nvidia With Homegrown AI Compute Engines

Amazon Web Services may not be the first of the hyperscalers and cloud builders to create its own custom compute engines, but it has been hot on the heels of Google, which started using its homegrown TPU accelerators for AI workloads in 2015.

If You Want To Sell AI To Enterprises, You Need To Sell Ethernet

Server makers Dell, Hewlett Packard Enterprise, and Lenovo, which are the three largest original equipment manufacturers of systems in the world, ranked in that order, are adding to the spectrum of interconnects they offer to their enterprise customers.
