Author Archives: Timothy Prickett Morgan

How Much Of A Premium Will Nvidia Charge For Hopper GPUs?

There is increasing competition coming at Nvidia in the AI training and inference market, and at the same time, researchers at Google, Cerebras, and SambaNova are showing off the benefits of porting sections of traditional HPC simulation and modeling code to their matrix math engines.

How Much Of A Premium Will Nvidia Charge For Hopper GPUs? was written by Timothy Prickett Morgan at The Next Platform.

Lawrence Livermore Kicks In Funds To Foster Omni-Path Networking

Decades before hyperscalers and cloud builders started creating their own variants of compute, storage, and networking for their massive distributed systems, the major HPC centers of the world fostered innovative technologies that might otherwise have died on the vine and never propagated in the market at large.

Intel Adjusts, However Slowly, To New Realities In The Datacenter

While chip designer and maker Intel has a new strategy and a new executive team to implement it, it is going to take a long time for changes made last year and this year to be felt and for product and process roadmap changes to put the company into a better competitive situation.

With Aquila, Google Abandons Ethernet To Outdo InfiniBand

Frustrated by the limitations of Ethernet, Google has taken the best ideas from InfiniBand and Cray’s “Aries” interconnect and created a new distributed switching architecture called Aquila and a new GNet protocol stack that delivers the kind of consistent and low latency that the search engine giant has been seeking for decades.

Expanding DevOps With Infrastructure As Code

The hyperscalers have taught us many lessons over the past two decades, and one of them is that everything that can be defined in software should be, so that it can be controlled automatically and programmatically – and that goes double for hardware, which has required so much human babysitting over the decades.

Stacking Up L2 Cache, RIKEN Shows 10X Speedup For A64FX By 2028

Let the era of 3D V-Cache in HPC begin.

Inspired by AMD’s “Milan-X” Epyc 7003 processors with their stacked 3D V-Cache L3 memory, and propelled by actual benchmark tests pitting regular Milan CPUs against Milan-X processors on real-world and synthetic HPC applications, researchers at RIKEN Lab in Japan – home of the “Fugaku” supercomputer based on Fujitsu’s impressive A64FX vectorized Arm server chip – have fired up a simulation of a hypothetical A64FX follow-on that could, in theory, be built in 2028 and deliver nearly an order of magnitude more performance than the current A64FX.
