If you think about it for a minute, it is amazing that any of the old-time IT suppliers, like IBM and Hewlett Packard, and to a certain extent now Microsoft and Dell, have persisted in the datacenter for decades or, in the case of Big Blue, for more than a century. …
To Be Always Surfing On Tectonic Shifts was written by Timothy Prickett Morgan at The Next Platform.
The San Diego Supercomputer Center is overdue for a new supercomputer and, thanks to a $10 million grant from the National Science Foundation, next year it will finally get one. …
SDSC Doubles Up Performance With Expanse Supercomputer was written by Michael Feldman at The Next Platform.
Processor hardware for machine learning is still in its early stages, but it is already taking different paths. …
Habana Takes Training And Inference Down Different Paths was written by Michael Feldman at The Next Platform.
More than a decade ago, VMware and its new server virtualization technology represented a significant threat to traditional OEMs like Dell, IBM, and Hewlett-Packard (now Hewlett Packard Enterprise), which were selling their boxes to enterprises that had to over-provision the systems they were bringing in to make sure there was enough compute capacity to handle the biggest spikes in demand over the course of the year. …
VMware’s Head – And Future – Is In The Clouds was written by Jeffrey Burt at The Next Platform.
Startup Cerebras Systems has unveiled the world’s largest microprocessor, a waferscale chip custom-built for machine learning. …
Machine Learning Chip Breaks New Ground With Waferscale Integration was written by Michael Feldman at The Next Platform.
Carey Kloss has been intimately involved with the rise of AI hardware over the last several years, most notably with his work building the first Nervana compute engine, which Intel acquired and is rolling into two separate products: one chip for training, another for inference. …
Intel Focuses on Scale Out AI Training With New Chip was written by Nicole Hemsoth at The Next Platform.
It has been a long time coming, and it might have been better if this had been done a decade ago. …
Big Blue Open Sources Power Chip Instruction Set was written by Timothy Prickett Morgan at The Next Platform.
The rapid changes underway in modern datacenters and HPC environments are demanding more compute power from a tech industry that is running into significant barriers to supplying that capacity. …
The View From On High On How To Beat Moore’s Law was written by Jeffrey Burt at The Next Platform.
If you want the rapid pace of innovation in datacenter networking to continue, then you had better hope that the hyperscalers and major public cloud builders don’t run out of money. …
The Future Of Networks Depends On Hyperscalers And Big Clouds was written by Timothy Prickett Morgan at The Next Platform.
Forget in-memory computing for the moment because it requires a complete re-architecting of applications and most of the time the underlying hardware, too. …
Getting Around The Limits Of Memory To Accelerate Applications was written by Timothy Prickett Morgan at The Next Platform.
Last fall, supercomputer maker Cray announced that it was getting back to making high performance cluster interconnects after a six-year hiatus, but the company had already been working on its “Rosetta” switch ASIC for the Slingshot interconnect for quite a while before it started talking publicly about it. …
How Cray Makes Ethernet Suited For HPC And AI With Slingshot was written by Timothy Prickett Morgan at The Next Platform.
If you were expecting Nvidia to start talking about its future “Einstein” GPUs for Tesla accelerated computing soon just because AMD is getting ready to roll out “Navi” GPUs next year and Intel is working on its Xe GPU cards for delivery next year, too, you are going to have to wait a bit longer. …
Nvidia Readies For Battle In The Datacenter In 2020 was written by Timothy Prickett Morgan at The Next Platform.
In any chip design, the devil – and the angel – is always in the details. …
A Deep Dive Into AMD’s Rome Epyc Architecture was written by Timothy Prickett Morgan at The Next Platform.
As artificial neural networks for natural language processing (NLP) continue to improve, it is becoming easier and easier to chat with our computers. …
Nvidia Elevates The Conversation For Natural Language Processing was written by Michael Feldman at The Next Platform.
After a long wait, now we know. All three of the initial exascale-class supercomputer systems being funded by the US Department of Energy through its CORAL-2 procurement are going to be built by Cray, with that venerable maker of supercomputers being the prime contractor on two of them. …
Cray Runs The Exascale Table In The United States was written by Timothy Prickett Morgan at The Next Platform.
Under two unrelated US Department of Defense procurements, Cray has been awarded a total of $71 million to supply the Air Force and Army with a trio of HPC systems. …
US Military Buys Three Cray Supercomputers was written by Michael Feldman at The Next Platform.
Accelerators of many kinds, but particularly those with GPUs and FPGAs, can be pretty hefty compute engines that meet or exceed the power, thermal, and spatial envelopes of modern processors. …
Xilinx Keeps A Low Profile With Mainstream FPGA Accelerator was written by Michael Feldman at The Next Platform.
AMD had been down this road before. In 2003, the chip maker launched the “SledgeHammer” Opteron, the first 64-bit X86 server processor with backward compatibility to its 32-bit predecessors that came at a time when much larger rival Intel was still pumping up Itanium as the next-generation architecture – and its only 64-bit option. …
With Rome, AMD Will Build Off Momentum For Naples Epyc Chips was written by Jeffrey Burt at The Next Platform.
Nvidia has unveiled GPUDirect Storage, a new capability that enables its GPUs to talk directly with NVM-Express storage. …
Nvidia GPU Accelerators Get A Direct Pipe To Big Data was written by Michael Feldman at The Next Platform.
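To make the idea concrete, here is a minimal sketch, not taken from the article, of what a direct file-to-GPU read looks like through the cuFile API that ships with GPUDirect Storage; the file path, transfer size, and the choice to pin the buffer are assumptions for illustration, and error handling is omitted.

    // Sketch only: read a file from NVM-Express storage straight into GPU memory
    // using libcufile (link with -lcufile -lcudart). Path and size are placeholders.
    #include <fcntl.h>
    #include <unistd.h>
    #include <cuda_runtime.h>
    #include <cufile.h>

    int main(void) {
        const size_t size = 1 << 20;                               // 1 MiB example transfer
        int fd = open("/mnt/nvme/data.bin", O_RDONLY | O_DIRECT);  // hypothetical file

        cuFileDriverOpen();                                        // bring up the GDS driver

        CUfileDescr_t descr = {0};
        descr.handle.fd = fd;
        descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
        CUfileHandle_t handle;
        cuFileHandleRegister(&handle, &descr);                     // register the file with cuFile

        void *devbuf;
        cudaMalloc(&devbuf, size);                                 // destination buffer on the GPU
        cuFileBufRegister(devbuf, size, 0);                        // optional: pin the GPU buffer

        // DMA from storage into GPU memory without staging through a host bounce buffer
        ssize_t n = cuFileRead(handle, devbuf, size, /*file_offset=*/0, /*devPtr_offset=*/0);
        (void)n;

        cuFileBufDeregister(devbuf);
        cudaFree(devbuf);
        cuFileHandleDeregister(handle);
        close(fd);
        cuFileDriverClose();
        return 0;
    }

Whether the transfer truly bypasses host memory depends on the filesystem and driver stack underneath; when the direct path is unavailable, cuFile falls back to a compatibility mode that stages through the host.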
It has been a long time coming: The day when AMD can put a processor up against any Xeon that Intel can deliver and absolutely compete on technology, price, predictability of availability, and consistency of roadmap looking ahead. …
AMD Doubles Down – And Up – With Rome Epyc Server Chips was written by Timothy Prickett Morgan at The Next Platform.