Archive

Category Archives for "The Next Platform"

How AWS Can Undercut Nvidia With Homegrown AI Compute Engines

Amazon Web Services may not be the first of the hyperscalers and cloud builders to create its own custom compute engines, but it has been hot on the heels of Google, which started using its homegrown TPU accelerators for AI workloads in 2015.

How AWS Can Undercut Nvidia With Homegrown AI Compute Engines was written by Timothy Prickett Morgan at The Next Platform.

If You Want To Sell AI To Enterprises, You Need To Sell Ethernet

Server makers Dell, Hewlett Packard Enterprise, and Lenovo, which are the three largest original manufacturers of systems in the world, ranked in that order, are adding to the spectrum of interconnects they offer to their enterprise customers.

If You Want To Sell AI To Enterprises, You Need To Sell Ethernet was written by Timothy Prickett Morgan at The Next Platform.

Meta Sees Little Risk in RISC-V Custom Accelerators

Many have waited years to hear someone like Prahlad Venkatapuram, Senior Director of Engineering at Meta, say what came out this week at the RISC-V Summit:

“We’ve identified that RISC-V is the way to go for us moving forward for all the products we have in the roadmap.”

Meta Sees Little Risk in RISC-V Custom Accelerators was written by Nicole Hemsoth Prickett at The Next Platform.

Arrow Hits the Mark for Petabyte-Class Analytics Problems

When we first talked to Voltron Data following their launch in early 2022, we had to take care to explain why Apache Arrow was worth paying attention to and why it might warrant the level of enterprise support the startup promised.

Arrow Hits the Mark for Petabyte-Class Analytics Problems was written by Nicole Hemsoth Prickett at The Next Platform.

Redefining datacenter connectivity with open source networking

SPONSORED FEATURE: The face of modern networking is changing dramatically in parallel with the exponential increase in the volume of data traffic over the last several years.

Redefining datacenter connectivity with open source networking was written by Martin Courtney at The Next Platform.

AWS Taps Nvidia NVSwitch For Beefy AI GPU Nodes

Since the advent of distributed computing, there has been a tension between the tight coherency of memory and its compute within a node – the base level of a unit of compute – and the looser coherency over the network across those nodes.

AWS Taps Nvidia NVSwitch For Beefy AI GPU Nodes was written by Timothy Prickett Morgan at The Next Platform.

AWS Adopts Arm V2 Cores For Expansive Graviton4 Server CPU

For more than a year, we have been expecting Amazon Web Services to launch its Graviton4 processor for its homegrown servers at this year’s re:Invent, and lo and behold, chief executive officer Adam Selipsky rolled out the fourth generation in the Graviton CPU lineup – and the fifth iteration, counting last year’s overclocked Graviton3E processor aimed at HPC workloads – during his thrombosis-inducing keynote at the conference.

AWS Adopts Arm V2 Cores For Expansive Graviton4 Server CPU was written by Timothy Prickett Morgan at The Next Platform.

Groq Says It Can Deploy 1 Million AI Inference Chips In Two Years

If you are looking for an alternative to Nvidia GPUs for AI inference – and who isn’t these days with generative AI being the hottest thing since a volcanic eruption – then you might want to give Groq a call.

Groq Says It Can Deploy 1 Million AI Inference Chips In Two Years was written by Timothy Prickett Morgan at The Next Platform.

Nvidia Proves The Enormous Potential For Generative AI

The exorbitant cost of GPU-accelerated systems for training and inference and the latest rush to find gold in mountains of corporate data are combining to exert tectonic forces on the datacenter landscape and push up a new Himalaya range – with Nvidia as its steepest and highest peak.

Nvidia Proves The Enormous Potential For Generative AI was written by Timothy Prickett Morgan at The Next Platform.

Pushing The Limits Of HPC And AI Is Becoming A Sustainability Headache

As Moore’s law continues to slow, delivering more powerful HPC and AI clusters means building larger, more power-hungry facilities.

Pushing The Limits Of HPC And AI Is Becoming A Sustainability Headache was written by Timothy Prickett Morgan at The Next Platform.

What To Do When You Can’t Get Nvidia H100 GPUs

In a world where allocations of “Hopper” H100 GPUs coming out of Nvidia’s factories are going out well into 2024, and the allocations for the impending “Antares” MI300X and MI300A GPUs are probably long since spoken for, anyone trying to build a GPU cluster to power a large language model for training or inference has to think outside of the box.

What To Do When You Can’t Get Nvidia H100 GPUs was written by Timothy Prickett Morgan at The Next Platform.

Billion-Dollar AI Promise a Bright Spot in Gloomy Quarter for Cisco

Cisco navigated a rocky road in its first quarter of the year, as evidenced by the dips in the networking giant’s share price this morning.

Billion-Dollar AI Promise a Bright Spot in Gloomy Quarter for Cisco was written by Nicole Hemsoth Prickett at The Next Platform.

Microsoft Holds Chip Makers’ Feet To The Fire With Homegrown CPU And AI Chips

After many years of rumors, Microsoft has finally confirmed that it is following rivals Amazon Web Services and Google into the design of custom processors and accelerators for their clouds.

Microsoft Holds Chip Makers’ Feet To The Fire With Homegrown CPU And AI Chips was written by Timothy Prickett Morgan at The Next Platform.

Will Isambard 4 Be The UK’s First True Exascale Machine?

Here is a story you don’t hear very often: A supercomputing center was just given a blank check, up to the peak power consumption of its facility, to build a world-class AI/HPC supercomputer instead of a sidecar partition with some GPUs for its researchers to play around with while wishing they had a lot more capacity.

Will Isambard 4 Be The UK’s First True Exascale Machine? was written by Timothy Prickett Morgan at The Next Platform.

Top500 Supercomputers: Who Gets The Most Out Of Peak Performance?

The most exciting thing about the Top500 rankings of supercomputers that come out each June and November is not who is at the top of the list.
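
As a quick aside on what "getting the most out of peak performance" means in Top500 terms: the list reports both sustained Linpack performance (Rmax) and theoretical peak (Rpeak), and the interesting number is the ratio between them. Here is a minimal sketch of that efficiency calculation, using hypothetical figures rather than anything from the actual list:

```python
# Computational efficiency in Top500 terms: Rmax is the sustained HPL
# (Linpack) result, Rpeak is the theoretical peak of the machine.
def hpl_efficiency(rmax_pflops: float, rpeak_pflops: float) -> float:
    """Return Rmax/Rpeak as a percentage."""
    return 100.0 * rmax_pflops / rpeak_pflops

# Hypothetical system delivering 1.0 PFLOPS sustained out of 1.5 PFLOPS peak.
print(f"{hpl_efficiency(1.0, 1.5):.1f}% of peak")  # prints "66.7% of peak"
```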

Top500 Supercomputers: Who Gets The Most Out Of Peak Performance? was written by Timothy Prickett Morgan at The Next Platform.

Nvidia Pushes Hopper HBM Memory, And That Lifts GPU Performance

For very sound technical and economic reasons, processors of all kinds have been overprovisioned on compute and underprovisioned on memory bandwidth – and sometimes on memory capacity, depending on the device and the workload – for decades.
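
To make that compute-versus-bandwidth imbalance concrete, here is a back-of-the-envelope sketch (not from the article) using approximate public figures for an Nvidia H100 SXM module, treated here as illustrative assumptions:

```python
# Rough machine balance for an H100 SXM (approximate public specs, used
# as illustrative assumptions): dense BF16 tensor throughput versus
# HBM3 memory bandwidth.
peak_bf16_tflops = 989.0        # ~989 TFLOPS dense BF16 tensor throughput
hbm_bandwidth_tbps = 3.35       # ~3.35 TB/s HBM3 bandwidth

flops_per_byte = (peak_bf16_tflops * 1e12) / (hbm_bandwidth_tbps * 1e12)
print(f"Machine balance: ~{flops_per_byte:.0f} FLOPs per byte of HBM traffic")

# A memory-bound kernel such as a large BF16 GEMV performs roughly
# 2 FLOPs (multiply + add) per 2-byte weight loaded, i.e. ~1 FLOP/byte,
# so it can use only a small fraction of the available compute. That gap
# is why adding HBM bandwidth and capacity lifts delivered GPU performance.
```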

Nvidia Pushes Hopper HBM Memory, And That Lifts GPU Performance was written by Timothy Prickett Morgan at The Next Platform.
