Author Archives: Timothy Prickett Morgan

Cisco Guns For InfiniBand With Silicon One G200

It was a fortuitous coincidence that Nvidia was already working on massively parallel GPU compute engines for doing calculations in HPC simulations and models when the machine learning tipping point happened, and similarly, it was fortunate for InfiniBand that it had the advantage of high bandwidth, low latency, and remote direct memory access across GPUs at that same moment.

Cisco Guns For InfiniBand With Silicon One G200 was written by Timothy Prickett Morgan at The Next Platform.

The Third Time Charm Of AMD’s Instinct GPU

The great thing about the Cambrian explosion in compute that has been forced by the end of Dennard scaling of clock frequencies and of Moore’s Law lowering of the cost of transistors is not only that we are getting an increasing diversity of highly tuned compute engines and broadening SKU stacks across those engines, but also that we are getting many different interpretations of the CPU, GPU, DPU, and FPGA themes.

The Third Time Charm Of AMD’s Instinct GPU was written by Timothy Prickett Morgan at The Next Platform.

No Server Recession At Lenovo And Supermicro So Far

We think that server spending is a leading indicator of economic growth or decline, and we are tracking the public companies that peddle systems to get a sense of how they are doing, and thereby to get a better sense of how enterprises, governments, academic institutions, and other organizations are doing separate from the hyperscalers and cloud builders, which comprise around half of server shipments and slightly less than half of server spending.

No Server Recession At Lenovo And Supermicro So Far was written by Timothy Prickett Morgan at The Next Platform.

Intel Downplays Hybrid CPU-GPU Engines, Merges NNP Into GPU

When Intel announced its “Falcon Shores” project to build a hybrid CPU-GPU compute engine back in February 2022 that allowed the independent scaling of CPU and GPU capacity within a single socket, it looked like the chip maker was preparing to take on rivals Nvidia and AMD head on with hybrid compute motors, which Intel calls XPUs, AMD calls APUs, and Nvidia doesn’t really have if you want to be strict about what its “superchips” are and what they are not.

Intel Downplays Hybrid CPU-GPU Engines, Merges NNP Into GPU was written by Timothy Prickett Morgan at The Next Platform.

MGX: Nvidia Standardizes Multi-Generation Server Designs

Whenever a compute engine maker also does motherboards as well as system designs, those companies that make motherboards (there are dozens who do) and create system designs (the original design manufacturers and the original equipment manufacturers) get a little bit nervous as well as a bit relieved.

MGX: Nvidia Standardizes Multi-Generation Server Designs was written by Timothy Prickett Morgan at The Next Platform.

Aurora Rising: A Massive Machine For HPC And AI

As long as great science gets done on the final incarnation of the “Aurora” supercomputer at Argonne National Laboratory, based on Intel’s CPUs and GPUs but not on its now defunct Omni-Path interconnect, people will eventually forget all of – well, most of – the grief that it took to get the massive machine to market.

Aurora Rising: A Massive Machine For HPC And AI was written by Timothy Prickett Morgan at The Next Platform.

Ampere Gets Out In Front Of X86 With 192-Core “Siryn” AmpereOne

The largest clouds will always have to buy X86 processors from Intel or AMD so long as the enterprises of the world – and the governments and educational institutions who also consume a fair number of servers – have X86 applications that are not easily ported to Arm or RISC-V architectures.

Ampere Gets Out In Front Of X86 With 192-Core “Siryn” AmpereOne was written by Timothy Prickett Morgan at The Next Platform.
