Author Archives: Timothy Prickett Morgan

Enfabrica Nabs $125 Million To Ramp Networking Godbox

For the past decade or so, we have been convinced by quite a large number of IT suppliers that security functions, network and storage virtualization functions, and even the server virtualization hypervisor for carving up compute itself should be offloaded from servers to intermediaries somewhat illogically called data processing units, or DPUs.

Supply Chain Easing Creates Ethernet Switching Boom

There are a lot of things going on in the datacenter and campus interconnect markets, but one of the weirder things we observe from the most recent market data coming out of IDC about the Ethernet portion of this market is that it is like a country music record being played backwards.

Other Than Nvidia, Who Will Use Arm’s Neoverse V2 Core?

We are still plowing through the many, many presentations from the Hot Interconnects, Hot Chips, Google Cloud Next, and Meta Networking @ Scale conferences that all happened recently and at essentially the same time.

Optimizing AI Inference Is As Vital As Building AI Training Beasts

The history of computing teaches us that software always and necessarily lags hardware, and unfortunately that lag can stretch for many years when it comes to wringing the best performance out of iron by tweaking algorithms.

Is Mojo The Fortran For AI Programming, Or More?

When Jim Keller talks about compute engines, you listen. And when Keller name-drops a programming language and AI runtime environment, as he did in a recent interview with us, you do a little research and keep an eye out for developments.

Just How Big – Or Small – Is The Quantum Computing Racket?

There is no question in our minds here at The Next Platform that quantum computing, in some fashion, will be part of the workflow for solving some of the peskiest computational problems the world can think of.

HashiCorp Retools Licenses And Software To Grow Its Business

It is hard to make a living in the open source software business, although it is possible, through the contributions of many, to make great software.

What Would You Do With A 16.8 Million Core Graph Processing Beast?

If you look back at it now, especially with the advent of massively parallel computing on GPUs, maybe the techies at Tera Computing and then Cray had the right idea with their “ThreadStorm” massively threaded processors and high bandwidth interconnects.

Dell Making The Most Of Its GPU Allocations, Like Everyone Else

In a world where Nvidia is allocating proportional shares of its GPU hotcakes to all of the OEMs and ODMs, companies like Dell, Hewlett Packard Enterprise, Lenovo, and Supermicro get their shares and then turn around and try to sell systems using them at the highest possible price.

The Edge Propels HPE While Datacenter Taps The Brakes

Customers of Hewlett Packard Enterprise have one foot on the gas and one foot on the brakes at the same time that the company is transitioning from selling gear outright to customers to selling them subscriptions that spread the cost – and therefore HPE’s recognized revenues – out over time.

The Next 100X For AI Hardware Performance Will Be Harder

For those of us who like hardware and were hoping for a big reveal about the TPUv5e AI processor and surrounding system, interconnect, and software stack at the Hot Chips 2023 conference this week, the opening keynote by Jeff Dean and Amin Vahdat, the two most important techies at Google, was a bit of a disappointment.

Cornelis Unveils Ambitious Omni-Path Interconnect Roadmap

As we are fond of pointing out, when it comes to high performance, low latency InfiniBand-style networks, Nvidia is not the only choice in town and has not been since the advent of InfiniBand interconnects back in the late 1990s.

Nvidia Gooses Grace-Hopper GPU Memory, Gangs Them Up For LLM

If large language models are the foundation of a new programming model, as Nvidia and many others believe they are, then the hybrid CPU-GPU compute engine is the new general purpose computing platform.

The Interplay Of GDP, Inflation, And IT Spending

Data changes behavior and behavior changes data. It is a phenomenon that is akin to the Observer Effect in physics in that you can’t observe something without changing its behavior.

Supermicro Sets Its Sights On $20 Billion Business

Only a few years ago, motherboard and system maker Supermicro set a target of breaking through $10 billion in sales, and thanks to the explosion in systems for training and inference for AI applications, it looks like the company is going to bust through that goal in its fiscal 2025 ending next June.

Crafting A DGX-Alike AI Server Out Of AMD GPUs And PCI Switches

Not everybody can afford an Nvidia DGX AI server loaded up with the latest “Hopper” H100 GPU accelerators or even one of its many clones available from the OEMs and ODMs of the world.

GPU Shortages Will Prop Up The Clouds In More Ways Than One

For the last two quarters at least, the generic infrastructure server market – the one running databases, application servers, various web layers, and print and file serving workloads the world over – has been in a recession.
