Author Archives: Timothy Prickett Morgan

AMD Feels The Server Recession, Too, But Growth Is Looming Large

With a server recession underway and its latest Epyc CPUs and Instinct GPU accelerators still ramping, this was a predictably soft quarter for AMD, but not a terrible one in the scheme of things.

Unleashing An Open Source Torrent On CPUs And AI Engines

When you combine the forces of open source and the wide and deep semiconductor experience of legendary chip architect Jim Keller, something interesting is bound to happen.

Vast Data Intentionally Blurs The Line Between Storage And Database

Depending on how you look at it, a database is a kind of sophisticated storage system, or storage is a kind of reduction of a database.

Bookkeeping Helps Intel Recover From Server Recession A Little

Accounting is something of an art, and companies always save some accounting tricks – perfectly legitimate items that pass the discerning eye of financial standards – to goose their numbers when they really need it.

Micron Revs Up Bandwidth And Capacity On HBM3 Stacks

As we have seen with the addition of various kinds of high bandwidth, stacked DRAM memory to compute engines over the past decade, just adding this wide, fast, and expensive memory to a compute engine can radically improve the effective performance of the device.

In G42, Cerebras Finds The Deep Pockets And Partnership It Needs To Grow

When you are competing against the hyperscalers and cloud builders in the AI revolution, you need backers as well as customers that have deep pockets and that can not only think big, but pay big.

Stampede3: A Smaller HPC System That Will Get More Work Done

All of the major HPC centers of the world, whether they are funded by straight science or nuclear weapons management, have enough need and enough money to have two classes of supercomputers.

AI Is A Modest – But Important – Slice Of TSMC’s Business

Given the exorbitant demand for compute and networking for running AI workloads and the dominance of Taiwan Semiconductor Manufacturing Co in making the compute engine chips and providing the complex packaging for them, you would think that the world’s largest foundry would be making money hand over fist in the second quarter.

Ethernet Consortium Shoots For 1 Million Node Clusters That Beat InfiniBand

Here we go again. Some big hyperscalers and cloud builders and their ASIC and switch suppliers are unhappy about Ethernet, and rather than wait for the IEEE to address issues, they are taking matters into their own hands to create what will ultimately become an IEEE standard that moves Ethernet forward in a direction and at a speed of their choosing.

Microsoft’s Chiplet Cloud To Bring The Cost Of LLMs Way Down

If Nvidia and AMD are licking their lips thinking about all of the GPUs they can sell to Microsoft to support its huge aspirations in generative AI – particularly when it comes to the OpenAI GPT large language model that is the centerpiece of all of the company’s future software and services – they had better think again.

NCSA Builds Out Delta Supercomputer With An AI Extension

The National Center for Supercomputing Applications at the University of Illinois only fired up its Delta system back in April 2022, and now it has been given $10 million by the National Science Foundation to expand that machine with an AI partition, appropriately called DeltaAI, that is based on Nvidia’s “Hopper” H100 GPU accelerators.

Lining Up The “El Capitan” Supercomputer Against The AI Upstarts

The question is no longer whether the “El Capitan” supercomputer that has been in the process of being installed at Lawrence Livermore National Laboratory for the past week – with photographic evidence to prove it – will be the most powerful system in the world.

PCI-Express Must Match The Cadence Of Compute Engines And Networks

When system architects sit down to design their next platforms, they start by looking at a bunch of roadmaps from suppliers of CPUs, accelerators, memory, flash, network interface cards – and PCI-Express controllers and switches.

The $1 Billion And Higher Ante To Play The AI Game

If you want to get the attention of server makers and compute engine providers, and especially if you are going to be building GPU-laden clusters with shiny new gear to drive AI training and possibly AI inference for large language models and recommendation engines, the first thing you need is $1 billion.
