We have been watching the big original equipment manufacturers like a hawk to see how they are generating revenues and income from GPU-accelerated system sales. …
Dell’s AI Server Business Now Bigger Than VMware Used To Be was written by Timothy Prickett Morgan at The Next Platform.
PARTNER CONTENT Given the size and complexity of modern semiconductor designs, functional verification has become a dominant phase in the development cycle. …
Reduce Manual Effort, Achieve Better Coverage With AI And Formal Techniques was written by Timothy Prickett Morgan at The Next Platform.
Welcome to the most important earnings call in history, with the weight of the aggregate stock markets of the entire world hanging on what Nvidia says and doesn’t say. …
Nvidia Says “Blackwell” GPU Issues Are Fixed, Ramp Starts In Fiscal Q4 was written by Timothy Prickett Morgan at The Next Platform.
After Wall Street closed the markets for the day and Nvidia reported its financial results for the second quarter of fiscal 2025, we had the opportunity to chat with Colette Kress, chief financial officer of the accelerated computing giant. …
Interview: Post-Earnings Insight With Nvidia CFO Colette Kress was written by Timothy Prickett Morgan at The Next Platform.
We all had been wondering what VMware would look like when it became part of Broadcom’s massive universe following the semiconductor giant’s $69 billion acquisition of the virtualization juggernaut. …
VMware Wants To Redefine Private Cloud With VCF 9 was written by Jeffrey Burt at The Next Platform.
Big Blue might be a little late to the AI acceleration game, but it has a captive audience in its System z mainframe and Power Systems servers. …
IBM Shows Off Next-Gen AI Acceleration, On Chip DPU For Big Iron was written by Timothy Prickett Morgan at The Next Platform.
Hardware is always the star of Nvidia’s GPU Technology Conference, and this year we got previews of “Blackwell” datacenter GPUs, the cornerstone of a 2025 platform that includes “Grace” CPUs, the NVLink Switch 5 chip, the BlueField-3 DPU, and other components, all of which Nvidia is talking about again this week at the Hot Chips 2024 conference. …
Nvidia Rolls Out Blueprints For The Next Wave Of Generative AI was written by Jeffrey Burt at The Next Platform.
COMMISSIONED Organizations must consider many things before deploying generative AI services, from choosing models and tech stacks to selecting relevant use cases. …
Get Your Data House In Order To Unlock Your Generative AI Strategy was written by Timothy Prickett Morgan at The Next Platform.
Effects are multiplicative, not additive, when it comes to increasing compute engine performance. …
Bechtolsheim Outlines Scaling XPU Performance By 100X By 2028 was written by Timothy Prickett Morgan at The Next Platform.
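Bechtolsheim’s multiplicative point is easy to check with a little arithmetic. The factors and values in the sketch below are purely illustrative assumptions, not his actual roadmap breakdown; the point is that five independent gains of 2X to 4X compound to 100X, where the same gains added together would reach only 13X.

```python
# Illustrative sketch: how a handful of modest, independent gains compound
# multiplicatively. The factor names and values below are assumptions made for
# the arithmetic, not Bechtolsheim's actual roadmap breakdown.
from math import prod

gains = {
    "process node (density and transistor perf)": 2.0,
    "advanced packaging and memory bandwidth":     2.5,
    "architecture and lower-precision math":       4.0,
    "scale-up interconnect":                       2.5,
    "software and kernel efficiency":              2.0,
}

for factor, gain in gains.items():
    print(f"{factor:45s} {gain:4.1f}x")

print(f"{'compounded (multiplicative) total':45s} {prod(gains.values()):5.1f}x")  # 100.0x
print(f"{'naive additive total':45s} {sum(gains.values()):5.1f}x")                #  13.0x
```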
When you are designing applications that span the scale of an entire datacenter, that are composed of hundreds to thousands of microservices running on countless individual servers, and that have to be called within a matter of microseconds to give the illusion of a monolithic application, building fully connected, high bisection bandwidth Clos networks is a must. …
This AI Network Has No Spine – And That’s A Good Thing was written by Timothy Prickett Morgan at The Next Platform.
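As a rough illustration of what “fully connected, high bisection bandwidth Clos network” means in numbers, here is a minimal sketch of a two-tier leaf-spine fabric. The switch counts, port counts, and link speeds are assumptions chosen for the example, not the specific network the article describes.

```python
# Minimal sketch of a two-tier (leaf-spine) Clos fabric. Port counts and link
# speeds here are illustrative assumptions, not the network from the article.
from dataclasses import dataclass

@dataclass
class LeafSpineFabric:
    leaves: int            # number of leaf switches
    servers_per_leaf: int  # server-facing (downlink) ports per leaf
    downlink_gbps: int     # speed of each server-facing port, in Gb/sec
    uplinks_per_leaf: int  # ports from each leaf up into the spine layer
    uplink_gbps: int       # speed of each leaf-to-spine port, in Gb/sec

    def oversubscription(self) -> float:
        """Downlink capacity over uplink capacity per leaf; 1.0 means non-blocking."""
        return (self.servers_per_leaf * self.downlink_gbps) / \
               (self.uplinks_per_leaf * self.uplink_gbps)

    def bisection_tbps(self) -> float:
        """Bandwidth across a cut splitting the leaves into two equal halves:
        traffic between the halves rides the uplinks of one half."""
        return (self.leaves // 2) * self.uplinks_per_leaf * self.uplink_gbps / 1_000

fabric = LeafSpineFabric(leaves=64, servers_per_leaf=32, downlink_gbps=400,
                         uplinks_per_leaf=32, uplink_gbps=400)
print(f"oversubscription: {fabric.oversubscription():.1f}:1")        # 1.0:1 (non-blocking)
print(f"bisection bandwidth: {fabric.bisection_tbps():.1f} Tb/sec")  # 409.6 Tb/sec
```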
Nvidia hit a rare patch of bad news earlier this month when reports began circulating that the company’s much-anticipated “Blackwell” GPU accelerators could be delayed by as much as three months due to design flaws. …
When Nvidia Says Hot Chips, It Means Hot Platforms was written by Jeffrey Burt at The Next Platform.
Rackspace Technology has admittedly been relatively quiet in recent years when it comes to OpenStack, the open source cloud infrastructure platform that was born in 2010 out of the collaboration between the cloud computing company and NASA. …
Rackspace Goes All In – Again – On OpenStack was written by Jeffrey Burt at The Next Platform.
A complex world demands complex systems.
Designing and improving new industrial systems, semiconductors, or vehicles, whether Earth-bound or space-bound, presents massive engineering and manufacturing challenges. …
When You’re Building The Future, The Past Is No Longer A Guide was written by Timothy Prickett Morgan at The Next Platform.
Current AI training and some AI inference clusters have two networks. …
Arista Banks On The AI Network Double Whammy was written by Timothy Prickett Morgan at The Next Platform.
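The two networks in question are typically a back-end fabric that lashes the GPUs together for training traffic and a front-end network that links nodes to storage, users, and the rest of the datacenter. The sketch below just tallies per-node bandwidth for each side of an assumed configuration; the NIC counts and speeds are illustrative, not figures from Arista.

```python
# Back-of-the-envelope tally of the two networks in an AI training node.
# NIC counts and link speeds are illustrative assumptions, not Arista's figures.
def node_bandwidth_gbps(nic_count: int, nic_gbps: int) -> int:
    """Aggregate bandwidth of one network attachment on a single node."""
    return nic_count * nic_gbps

# Back-end (scale-out) fabric: commonly one NIC per GPU for collective operations.
backend_gbps = node_bandwidth_gbps(nic_count=8, nic_gbps=400)

# Front-end network: a pair of NICs for storage, checkpointing, and management.
frontend_gbps = node_bandwidth_gbps(nic_count=2, nic_gbps=200)

print(f"back-end fabric, per node:   {backend_gbps:,} Gb/sec")   # 3,200 Gb/sec
print(f"front-end network, per node: {frontend_gbps:,} Gb/sec")  # 400 Gb/sec
```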
It’s still Ketchup Week here at The Next Platform, and we are going to be circling back to look at the financials of a number of bellwether datacenter companies that we could not get to during a number of medical crises – including but not limited to our family catching COVID when we took a week of vacation at a lake in Michigan. …
Supermicro Financials Get Better As The Company Gets Bigger was written by Timothy Prickett Morgan at The Next Platform.
COMMISSIONED On a bustling factory floor, an advanced AI system orchestrates a symphony of robotic arms, each performing its task with precision. …
Building Blocks of AI: How Storage Architecture Shapes AI Success was written by Timothy Prickett Morgan at The Next Platform.
If AMD is willing and eager to spend $4.9 billion to buy a systems company – that is more than its entire expected haul for sales of datacenter GPUs for 2024 – then you have to figure that acquisition is pretty important. …
Why AMD Spent $4.9 Billion To Buy ZT Systems was written by Timothy Prickett Morgan at The Next Platform.
Like many other suppliers of hardware and systems software, Cisco Systems is trying to figure out how to make money on the AI revolution. …
AI Pervades The Cisco Stack, But Is Only Starting To Drive Sales was written by Timothy Prickett Morgan at The Next Platform.
We were away on vacation at a lakeside beach in northern Michigan when we caught the news that the UK government was pulling the plug on a plan for an exascale supercomputer to be installed at the EPCC at the University of Edinburgh. …
Why The UK Should Have Its Own Exascale AI/HPC Machine, And How was written by Timothy Prickett Morgan at The Next Platform.
Each time the United States has figured out that it needed to impose export controls on massively parallel compute engines to discourage China from buying such gear and building supercomputers with them, it has already been too late to have much of a long-term effect on China’s ability to run the advanced HPC simulations and AI training workloads that we were worried such computing oomph would enable. …
Huawei’s HiSilicon Can Compete With Nvidia GPUs In China was written by Timothy Prickett Morgan at The Next Platform.