Category Archives for "The Next Platform"

The Sugar Daddy Boomerang Effect: How AI Investments Puff Up The Clouds

Here’s a question for you: How much of the growth in cloud spending at Microsoft Azure, Amazon Web Services, and Google Cloud in the second quarter came from OpenAI and Anthropic spending money they got as investments out of the treasure chests of Microsoft, Amazon, and Google?

The Sugar Daddy Boomerang Effect: How AI Investments Puff Up The Clouds was written by Timothy Prickett Morgan at The Next Platform.

AMD Breaks $1 Billion In Datacenter GPU Sales In Q2

As expected, AMD has once again raised its forecast for sales of its Instinct MI300 series GPUs, and having broken through $1 billion in revenues for its “Antares” line of compute engines in the second quarter, it now expects to surpass $4.5 billion in sales of these devices for all of 2024.

AMD Breaks $1 Billion In Datacenter GPU Sales In Q2 was written by Timothy Prickett Morgan at The Next Platform.

For Meta Platforms, An Open AI Policy Is The Best Policy

For Mark Zuckerberg, the decision by Meta Platforms – going back to when it was still known as Facebook – to open up much of its technology – including server and storage designs, datacenter designs, and most recently its Llama AI large language models – came about because the company often found itself trailing competitors when it came to deploying advanced technologies.

For Meta Platforms, An Open AI Policy Is The Best Policy was written by Jeffrey Burt at The Next Platform.

Meta Lets Its Largest Llama AI Model Loose Into The Open Field

A scant three months ago, when Meta Platforms released the Llama 3 AI model in 8B and 70B versions – the numbers referring to the billions of parameters in each model – we asked the question we have asked of every open source tool or platform since the dawn of Linux: Who’s going to profit from it and how are they going to do it?

Meta Lets Its Largest Llama AI Model Loose Into The Open Field was written by Jeffrey Burt at The Next Platform.

Scaling The Datacenter: Five Best Practices For CSPs

Service providers such as cloud service providers (CSPs), managed service providers (MSPs), software-as-a-service (SaaS) providers, and enterprise private cloud operators face a myriad of challenges in today’s dynamic datacenter environment.

Scaling The Datacenter: Five Best Practices For CSPs was written by Timothy Prickett Morgan at The Next Platform.

AMD’s Long And Winding Road To The Hybrid CPU-GPU Instinct MI300A

Back in 2012, when AMD was in the process of backing out of the datacenter CPU business and did not really have its datacenter GPU act together at all, the US Department of Energy exhibited the enlightened self-interest that is a strong foundation of both economics and politics, taking a chance on AMD and funding research into memory technologies and hybrid CPU-GPU computing at exascale.

AMD’s Long And Winding Road To The Hybrid CPU-GPU Instinct MI300A was written by Timothy Prickett Morgan at The Next Platform.

Ongoing Saga: How Much Money Will Be Spent On AI Chips?

Everybody knows that companies, particularly hyperscalers and cloud builders but now increasingly enterprises hoping to leverage generative AI, are spending giant round bales of money on AI accelerators and related chips to create AI training and inference clusters.

Ongoing Saga: How Much Money Will Be Spent On AI Chips? was written by Timothy Prickett Morgan at The Next Platform.
