Author Archives: Timothy Prickett Morgan

What Do You Do When You Want GPFS On The Cloud?

While there are a lot of different file system and object storage options available for HPC and AI customers, many AI organizations and a lot of traditional HPC simulation and modeling centers choose either the open source Lustre parallel file system or the modern variants of IBM’s General Parallel File System (GPFS), known previously as Spectrum Scale and now known as IBM Storage Scale, as the storage underpinning of their applications.

What Do You Do When You Want GPFS On The Cloud? was written by Timothy Prickett Morgan at The Next Platform.

Driving HPC Performance Up Is Easier Than Keeping The Spending Constant

We are still mulling over all of the new HPC-AI supercomputer systems that were announced in recent months before and during the SC25 supercomputing conference in St Louis, particularly how the slew of new machines announced by the HPC national labs will be advancing not just the state of the art, but also pushing down the cost of the FP64 floating point operations that still drive a lot of HPC simulation and modeling work.


With Celestial AI Buy, Marvell Scales Up The Datacenter And Itself

It was only a matter of time before Marvell was going to make another silicon photonics acquisition, and the $2.5 billion sale of its automotive Ethernet business to Infineon this past summer netted out to about half of the $3.25 billion that the company is shelling out to get its hands on Celestial AI, one of the several upstarts that hopes to hook compute engines, memory, and switches together using on-chip optical engines and light pipes.


With Trainium4, AWS Will Crank Up Everything But The Clocks

The AI model makers of the world have been waiting for more than a year to get their hands on the Trainium3 XPUs, which have been designed explicitly for both training and inference and which present a credible alternative to Nvidia’s “Blackwell” B200 and B300 GPUs as well as Google’s “Trillium” TPU v6e and “Ironwood” TPU v7p accelerators.


The Road To HPC And AI Profits Is Paved With Good Intentions

With a profitable PC business that has 25 percent of global shipments (thanks in large part to its acquisition of IBM's PC business two decades ago) plus a respectable smartphone business (by virtue of its Motorola acquisition), the client device business at Lenovo is finally back to where it was during the peak of the coronavirus pandemic and is consistently delivering decent profits in this cut-throat part of the IT sector.


IBM Ships Homegrown “Spyre” Accelerators, Embraces Anthropic For AI Push

Big Blue may have missed the boat on being one of the big AI model builders, but its IBM Research division has built its own enterprise-grade family of models and its server and research divisions have plenty of experience building accelerators and supercomputers.
