Author Archives: Jeffrey Burt

Accenture Melds Smarts And Wares With Nvidia For Agentic AI Push

Over the past two years, enterprises have tried to keep up with the staggering pace of innovation in generative AI, mapping out ways to implement the emerging technology in their operations in hopes of saving time and money, increasing productivity, improving customer service and support, and driving efficiencies.

Accenture Melds Smarts And Wares With Nvidia For Agentic AI Push was written by Jeffrey Burt at The Next Platform.

Oracle Puts AI, Automation Between Its Cloud And the Bad Guys

Despite a slow start several years ago, Oracle has refashioned itself into a cloud builder, rapidly expanding its Oracle Cloud Infrastructure to make it one of the top second-tier providers, although it still trails well behind the likes of Amazon Web Services, Microsoft Azure, and Google Cloud.

Oracle Puts AI, Automation Between Its Cloud And the Bad Guys was written by Jeffrey Burt at The Next Platform.

Nvidia Rolls Out Blueprints For The Next Wave Of Generative AI

Hardware is always the star of Nvidia’s GPU Technology Conference, and this year we got previews of the “Blackwell” datacenter GPUs, the cornerstone of a 2025 platform that also includes “Grace” CPUs, the NVLink Switch 5 chip, the BlueField-3 DPU, and other components, all of which Nvidia is talking about again this week at the Hot Chips 2024 conference.

Nvidia Rolls Out Blueprints For The Next Wave Of Generative AI was written by Jeffrey Burt at The Next Platform.

For Meta Platforms, An Open AI Policy Is The Best Policy

For Mark Zuckerberg, the decision by Meta Platforms – dating back to when it was still known as Facebook – to open up much of its technology, including server and storage designs, datacenter designs, and most recently its Llama AI large language models, came about because the company often found itself trailing competitors in deploying advanced technologies.

For Meta Platforms, An Open AI Policy Is The Best Policy was written by Jeffrey Burt at The Next Platform.

Meta Lets Its Largest Llama AI Model Loose Into The Open Field

A scant three months ago, when Meta Platforms released the Llama 3 AI model in 8B and 70B versions, the numbers denoting their parameter counts in billions, we asked the question we have asked of every open source tool or platform since the dawn of Linux: Who is going to profit from it, and how are they going to do it?

Meta Lets Its Largest Llama AI Model Loose Into The Open Field was written by Jeffrey Burt at The Next Platform.

The Increasing Impatience With The Speed Of The PCI-Express Roadmap

Richard Solomon has heard the rumblings over the years. As vice president of PCI-SIG, the organization that controls the development of the PCI-Express specification, he has listened to questions about how long it takes the group to bring the latest spec to the industry.

The Increasing Impatience With The Speed Of The PCI-Express Roadmap was written by Jeffrey Burt at The Next Platform.

Cisco Pushes Nvidia Enterprise AI, But Has Its Own Network Agenda

In those heady months following OpenAI’s launch of ChatGPT in November 2022, much of the IT industry’s focus was on huge and expensive cloud infrastructures running on powerful GPU clusters to train the large language models that underpin chatbots and other generative AI workloads.

Cisco Pushes Nvidia Enterprise AI, But Has Its Own Network Agenda was written by Jeffrey Burt at The Next Platform.
