Author Archives: Timothy Prickett Morgan

AMD Now Has More Compute On The Top500 Than Nvidia

There has been a lot more churn on the November Top500 supercomputer rankings, which are the talk of the SC24 conference in Atlanta this week, than there was in the list that came out at the ISC24 conference in Hamburg, Germany back in May, and there are some interesting developments in the new machinery that is being installed.

AMD Now Has More Compute On The Top500 Than Nvidia was written by Timothy Prickett Morgan at The Next Platform.

Sandia To Push Both HPC And AI With Cerebras “Kingfisher” Cluster

Lawrence Livermore National Laboratory, Sandia National Laboratories, and Los Alamos National Laboratory are known by the shorthand “Tri-Labs” in the HPC community, but these HPC centers perhaps could be called “Try-Labs” because they historically have tried just about any new architecture to see what promise it might hold in advancing the missions of the US Department of Energy.

Sandia To Push Both HPC And AI With Cerebras “Kingfisher” Cluster was written by Timothy Prickett Morgan at The Next Platform.

Custom Arm CPUs Drive Many Different AI Approaches In The Datacenter

Sponsored Feature: Arm is starting to fulfill its promise of transforming the nature of compute in the datacenter, and it is getting some big help from traditional chip makers as well as the hyperscalers and cloud builders that have massive computing requirements and who also need to drive efficiency up and costs down each and every year.

Custom Arm CPUs Drive Many Different AI Approaches In The Datacenter was written by Timothy Prickett Morgan at The Next Platform.

You Are Paying The Clouds To Build Better AI Than They Will Rent You

Think of it as the ultimate offload model.

One of the geniuses of the cloud, perhaps the central genius, is this: A big company that would otherwise have a large IT budget, perhaps on the order of hundreds of millions of dollars per year, and a certain amount of expertise instead creates a much, much larger IT organization with billions of dollars in investments, and with AI now tens of billions of dollars, and rents out the vast majority of that capacity to third parties. Those third parties essentially allow the original cloud builder to get its own IT operations for close to free.

You Are Paying The Clouds To Build Better AI Than They Will Rent You was written by Timothy Prickett Morgan at The Next Platform.

Google Covers Its Compute Engine Bases Because It Has To

The minute that search engine giant Google decided to be a cloud, and certainly several years later when Google realized that companies were not ready to buy full-on platform services that masked the underlying hardware but instead wanted lower level infrastructure services that gave them more optionality as well as more responsibility, it became inevitable that Google Cloud would have to buy compute engines from Intel, AMD, and Nvidia for its server fleet.

Google Covers Its Compute Engine Bases Because It Has To was written by Timothy Prickett Morgan at The Next Platform.

HPC Gets A Reconfigurable Dataflow Engine To Take On CPUs And GPUs

No matter how elegant and clever the design is for a compute engine, the difficulty and cost of moving existing – and sometimes very old – code from the device it currently runs on to that new compute engine is a very big barrier to adoption.

HPC Gets A Reconfigurable Dataflow Engine To Take On CPUs And GPUs was written by Timothy Prickett Morgan at The Next Platform.

System Spending Forecast Goes Through The Datacenter Roof

The third quarter earnings season starts this week for the hyperscalers and cloud giants, and it is fortuitous that the economists and IT analysts at Gartner have updated their forecast for IT spending in 2024, added a jaw-dropping forecast for 2025, and hinted at a brave new world of massive datacenter spending out to 2028.

System Spending Forecast Goes Through The Datacenter Roof was written by Timothy Prickett Morgan at The Next Platform.

Cerebras Trains Llama Models To Leap Over GPUs

It was only a few months ago that waferscale compute pioneer Cerebras Systems was bragging that a handful of its WSE-3 engines lashed together could run circles around instances based on Nvidia's "Hopper" H100 GPUs when running the open source Llama 3.1 foundation model created by Meta Platforms.

Cerebras Trains Llama Models To Leap Over GPUs was written by Timothy Prickett Morgan at The Next Platform.
