Author Archives: Timothy Prickett Morgan

Dell, Lenovo Also Waiting For Their AI Server Waves

If the original equipment manufacturers of the world had massive software divisions – as many of them tried to build two decades ago when they were emulating IBM, before changing their minds a decade ago, selling off their software and services businesses, and paring down to pushing tin and iron – then maybe they would have been on the front edge of AI software development, and maybe they would have been closer to the front of the line for GPU allocations from Nvidia and AMD.

The post Dell, Lenovo Also Waiting For Their AI Server Waves first appeared on The Next Platform.

Dell, Lenovo Also Waiting For Their AI Server Waves was written by Timothy Prickett Morgan at The Next Platform.

AMD Is The Undisputed Datacenter GPU Performance Champ – For Now

There is nothing quite like great hardware to motivate people to create and tune software to take full advantage of it during a boom time.

How AWS Can Undercut Nvidia With Homegrown AI Compute Engines

Amazon Web Services may not be the first of the hyperscalers and cloud builders to create its own custom compute engines, but it has been hot on the heels of Google, which started using its homegrown TPU accelerators for AI workloads in 2015.

If You Want To Sell AI To Enterprises, You Need To Sell Ethernet

Server makers Dell, Hewlett Packard Enterprise, and Lenovo – the three largest original equipment manufacturers of systems in the world, ranked in that order – are adding to the spectrum of interconnects they offer to their enterprise customers.

AWS Taps Nvidia NVSwitch For Beefy AI GPU Nodes

Since the advent of distributed computing, there has been a tension between the tight coherency of memory and its compute within a node – the base level of a unit of compute – and the looser coherency over the network across those nodes.

AWS Adopts Arm V2 Cores For Expansive Graviton4 Server CPU

For more than a year, we have been expecting Amazon Web Services to launch its Graviton4 processor for its homegrown servers at this year’s re:Invent, and lo and behold, chief executive officer Adam Selipsky rolled out the fourth generation in the Graviton CPU lineup – the fifth iteration, counting last year’s overclocked Graviton3E processor aimed at HPC workloads – during his thrombosis-inducing keynote at the conference.

Groq Says It Can Deploy 1 Million AI Inference Chips In Two Years

If you are looking for an alternative to Nvidia GPUs for AI inference – and who isn’t these days with generative AI being the hottest thing since a volcanic eruption – then you might want to give Groq a call.

Nvidia Proves The Enormous Potential For Generative AI

The exorbitant cost of GPU-accelerated systems for training and inference and the latest rush to find gold in mountains of corporate data are combining to exert tectonic forces on the datacenter landscape and push up a new Himalaya range – with Nvidia as its steepest and highest peak.

Pushing The Limits Of HPC And AI Is Becoming A Sustainability Headache

As Moore’s law continues to slow, delivering more powerful HPC and AI clusters means building larger, more power-hungry facilities.

What To Do When You Can’t Get Nvidia H100 GPUs

In a world where allocations of “Hopper” H100 GPUs coming out of Nvidia’s factories are going out well into 2024, and the allocations for the impending “Antares” MI300X and MI300A GPUs are probably long since spoken for, anyone trying to build a GPU cluster to power a large language model for training or inference has to think outside of the box.

Microsoft Holds Chip Makers’ Feet To The Fire With Homegrown CPU And AI Chips

After many years of rumors, Microsoft has finally confirmed that it is following rivals Amazon Web Services and Google into the design of custom processors and accelerators for its cloud.

Will Isambard 4 Be The UK’s First True Exascale Machine?

Here is a story you don’t hear very often: A supercomputing center was just given a blank check – up to the peak power consumption of its facility – to build a world-class AI/HPC supercomputer, instead of a sidecar partition with some GPUs to play around with while its researchers wish they had a lot more capacity.

Top500 Supercomputers: Who Gets The Most Out Of Peak Performance?

The most exciting thing about the Top500 rankings of supercomputers that come out each June and November is not who is on the top of the list.

Nvidia Pushes Hopper HBM Memory, And That Lifts GPU Performance

For very sound technical and economic reasons, processors of all kinds have been overprovisioned on compute and underprovisioned on memory bandwidth – and sometimes memory capacity depending on the device and depending on the workload – for decades.

You Can Load Up On Cheap Cores With Updated Milan Epycs

There are two ways that CPU makers can deliver more bang for the buck, and those running distributed computing workloads can go either way – or somewhere in between – as they build out their server clusters.

Supermicro Racks Up The System Revenues

There is racking up the money, and racking up the servers – and Supermicro, which is sometimes an OEM and sometimes an ODM as well as a motherboard and component supplier to those who want to be either, is doing both here at the beginning of its fiscal 2024 year.
