
Author Archives: Timothy Prickett Morgan

Broadcom Tries To Kill InfiniBand And NVSwitch With One Ethernet Stone

InfiniBand was always supposed to be a mainstream fabric used across PCs, servers, storage, and networks, but that effort collapsed, and its remains found a second life at the turn of the millennium as a high performance, low latency interconnect for supercomputers running simulations and models.

Broadcom Tries To Kill InfiniBand And NVSwitch With One Ethernet Stone was written by Timothy Prickett Morgan at The Next Platform.

The World’s Most Powerful Server Embiggens A Bit With Power11

If you need a big, badass box that can support tens of terabytes of memory, dozens of PCI-Express peripheral slots, thousands of directly attached storage devices, all feeding into hundreds of cores that can span that memory footprint with lots of bandwidth, you do not have a lot of options.

The World’s Most Powerful Server Embiggens A Bit With Power11 was written by Timothy Prickett Morgan at The Next Platform.

The Art Of The GPU Deal

Perhaps the most interesting conversation that has happened so far in the White House in 2025, at least from the point of view of the IT sector, came on July 10, when Nvidia co-founder and chief executive officer Jensen Huang put on his Sunday best suit and visited President Donald Trump, presumably to talk about technology, AI, trade, and war.

The Art Of The GPU Deal was written by Timothy Prickett Morgan at The Next Platform.

Brazil Lays The Hardware Foundation For Its AI Ambitions

Every major economy that is not the United States or China, which have a disproportionate share of HPC national labs as well as hyperscaler and cloud builder tech titans, wants AI sovereignty a whole lot more than it ever worried about HPC simulation and modeling.

Brazil Lays The Hardware Foundation For Its AI Ambitions was written by Timothy Prickett Morgan at The Next Platform.

Sizing Up AWS “Blackwell” GPU Systems Against Prior GPUs And Trainiums

This week, Amazon Web Services announced the availability of its first UltraServer pre-configured supercomputers based on Nvidia’s “Grace” CG100 CPUs and its “Blackwell” B200 GPUs in what is called a GB200 NVL72 shared GPU memory configuration.

Sizing Up AWS “Blackwell” GPU Systems Against Prior GPUs And Trainiums was written by Timothy Prickett Morgan at The Next Platform.

Will Companies Build Or Buy Their GenAI Models?

One of the biggest questions that enterprises, governments, academic institutions, and HPC centers the world over are going to have to answer very soon – if they have not made the decision already – is whether they are going to train their own AI models and the inference software stacks that make them useful, or just buy them from third parties and get to work integrating AI with their applications a lot faster.

Will Companies Build Or Buy Their GenAI Models? was written by Timothy Prickett Morgan at The Next Platform.

Some Thoughts On The Future “Doudna” NERSC-10 Supercomputer

Right or wrong, we still believe that we live in a world where traditional HPC simulation and modeling at high precision matters more than mashing up the sum total of human knowledge and mixing it with the digital exhaust of our lives to create a globe-spanning automation that will leave us all with very little to do and a commensurate amount of wealth and power to show for it.

Some Thoughts On The Future “Doudna” NERSC-10 Supercomputer was written by Timothy Prickett Morgan at The Next Platform.

Intel Starts Re-Engineering Its Executive Ranks

It has been two and a half months since new chief executive officer Lip-Bu Tan gave the keynote at Intel’s Vision 2025 event, and the company has been relatively quiet by the standards it has set over the past several decades as Tan gets the lay of the land and tries to plot out the course of the company to rebuild its foundry business and reorient and focus its chip design and sales business.

Intel Starts Re-Engineering Its Executive Ranks was written by Timothy Prickett Morgan at The Next Platform.

AMD Plots Interception Course With Nvidia GPU And System Roadmaps

To a certain extent, Nvidia and AMD are not really selling GPU compute capacity as much as they are reselling whatever HBM memory capacity and bandwidth they can get their hands on – just barely enough to balance out the ever-embiggening amount of compute their GPU complexes get overstuffed with.

AMD Plots Interception Course With Nvidia GPU And System Roadmaps was written by Timothy Prickett Morgan at The Next Platform.
