
Author Archives: Timothy Prickett Morgan

Lining Up The “El Capitan” Supercomputer Against The AI Upstarts

The question is no longer whether the “El Capitan” supercomputer, which has been under installation at Lawrence Livermore National Laboratory for the past week – with photographic evidence to prove it – will be the most powerful system in the world.

PCI-Express Must Match The Cadence Of Compute Engines And Networks

When system architects sit down to design their next platforms, they start by looking at a bunch of roadmaps from suppliers of CPUs, accelerators, memory, flash, network interface cards – and PCI-Express controllers and switches.

The $1 Billion And Higher Ante To Play The AI Game

If you want to get the attention of server makers and compute engine providers – especially if you are going to be building GPU-laden clusters with shiny new gear to drive AI training and possibly AI inference for large language models and recommendation engines – the first thing you need is $1 billion.

Cisco Guns For InfiniBand With Silicon One G200

It was a fortuitous coincidence that Nvidia was already working on massively parallel GPU compute engines for doing calculations in HPC simulations and models when the machine learning tipping point happened, and similarly, it was fortunate for InfiniBand that it had the advantage of high bandwidth, low latency, and remote direct memory access across GPUs at that same moment.

The Third Time Charm Of AMD’s Instinct GPU

The great thing about the Cambrian explosion in compute that has been forced by the end of Dennard scaling of clock frequencies and of Moore’s Law lowering of the cost of transistors is not only that we are getting an increasing diversity of highly tuned compute engines and broadening SKU stacks across those engines, but also that we are getting many different interpretations of the CPU, GPU, DPU, and FPGA themes.

No Server Recession At Lenovo And Supermicro So Far

We think that server spending is a leading indicator of economic growth or decline, and we are tracking the public companies that peddle systems to get a sense of how they are doing and, in turn, a better sense of what enterprises, governments, academic institutions, and other organizations are buying separate from the hyperscalers and cloud builders, the latter of which comprise around half of server shipments and slightly less than half of server spending.

Intel Downplays Hybrid CPU-GPU Engines, Merges NNP Into GPU

When Intel announced its “Falcon Shores” project back in February 2022 – a hybrid CPU-GPU compute engine that allowed the independent scaling of CPU and GPU capacity within a single socket – it looked like the chip maker was preparing to take on rivals Nvidia and AMD head-on with hybrid compute motors, which Intel calls XPUs, AMD calls APUs, and Nvidia doesn’t really have if you want to be strict about what its “superchips” are and what they are not.

MGX: Nvidia Standardizes Multi-Generation Server Designs

Whenever a compute engine maker also does motherboards as well as system designs, those companies that make motherboards (there are dozens who do) and create system designs (the original design manufacturers and the original equipment manufacturers) get a little bit nervous as well as a bit relieved.
