Author Archives: Jeffrey Burt

Nvidia Rolls Out Blueprints For The Next Wave Of Generative AI

Hardware is always the star of Nvidia’s GPU Technology Conference, and this year we got previews of “Blackwell” datacenter GPUs, the cornerstone of a 2025 platform that includes “Grace” CPUs, the NVLink Switch 5 chip, the BlueField-3 DPU, and other components, all of which Nvidia is talking about again this week at the Hot Chips 2024 conference.

Nvidia Rolls Out Blueprints For The Next Wave Of Generative AI was written by Jeffrey Burt at The Next Platform.

For Meta Platforms, An Open AI Policy Is The Best Policy

For Mark Zuckerberg, the decision by Meta Platforms – dating back to when it was still known as Facebook – to open up much of its technology, including server and storage designs, datacenter designs, and most recently its Llama large language models, came about because the company often found itself trailing competitors in deploying advanced technologies.

Meta Lets Its Largest Llama AI Model Loose Into The Open Field

A scant three months ago, when Meta Platforms released the Llama 3 AI model in 8B and 70B versions – the numbers denoting the billions of parameters in each model – we asked the question we have asked of every open source tool or platform since the dawn of Linux: Who’s going to profit from it, and how are they going to do it?

The Increasing Impatience With The Speed Of The PCI-Express Roadmap

Richard Solomon has heard the rumblings over the years. As vice president of PCI-SIG, the organization that controls the development of the PCI-Express specification, he has listened to questions about how long it takes the group to bring the latest spec to the industry.

Cisco Pushes Nvidia Enterprise AI, But Has Its Own Network Agenda

In those heady months following OpenAI’s launch of ChatGPT in November 2022, much of the IT industry’s focus was on huge and expensive cloud infrastructures running on powerful GPU clusters to train the large language models that underpin the chatbots and other generative AI workloads.

Power Efficiency, Customization Will Drive Arm’s Role In AI

More than a decade ago, executives at Arm Ltd saw energy costs in datacenters soaring and sensed an opportunity to extend into enterprise servers the low-power architecture of its eponymous systems-on-chip, which has dominated the mobile phone market from the get-go and took over the embedded device market from PowerPC.

AWS Hedges Its Bets With Nvidia GPUs And Homegrown AI Chips

There was a time – and it doesn’t seem like that long ago – when the datacenter chip market was a big-money but relatively simple landscape: CPUs from Intel and AMD, with Arm looking to muscle its way in, and GPUs mostly from Nvidia, with AMD and Intel trying to do the same.