Hyperconverged infrastructure has been with us for a while now, and it looks like the market for the technology is still growing, if analyst figures can be believed. …
Red Hat Takes On VMware, Nutanix For Hyperconverged Storage was written by Daniel Robinson.
Every important benchmark needs to start somewhere.
The first round of MLPerf results are in, and while they might not deliver on what we would have expected in terms of processor diversity and a complete view into scalability and performance, they do shed light on some developments that go beyond sheer hardware when it comes to deep learning training. …
Reading Between the MLPerf Lines was written by Nicole Hemsoth.
It took a very, very long time, but if current conditions persist, we could see a server market that rakes in more than $100 billion next year. …
The Vital Engines Of Commerce was written by Timothy Prickett Morgan.
Innovation requires motivation, and there is nothing quite like a competitor trying to eat your lunch to wake you up every morning to the reality of the situation. …
Intel Bets Heavily On Chip Stacking For The Future Of Compute was written by Timothy Prickett Morgan.
The list of technologies that have been created because of the limitations of traditional, enterprise-grade relational databases is quite long, and yet these tried and true technologies persist at the heart of the modern enterprise. …
Talking Databases With Hadoop Creator Doug Cutting was written by Timothy Prickett Morgan.
The history of digital computing is one of providing ever-increasing levels of abstraction, moving programmers further and further away from directly manipulating the ones and zeros. …
Red Hat, Google, IBM, And SAP Go Knative For Serverless was written by Jeffrey Burt.
The husband and wife team of Abdurrahman and Tülay Ateşin are experimental scientists who unexpectedly became involved with supercomputers when they moved to Texas in 2013. …
Exploring The Frontiers of Chemistry With HPC was written by Timothy Prickett Morgan.
For more than a decade, datacenter system administrators have been trying to figure out how to get their increasingly complex infrastructure under control and to manage it in a way that allows them to keep up with, adapt to, and scale with the rapid changes that are the new norm. …
Pulling The Puppet Strings In HPC was written by Jeffrey Burt.
The idea of a “private” file system—one that runs within a user’s specific job and is tailored to the I/O requirements of a particular workload—is not necessarily new, but it is gaining steam given changing hardware capabilities and workload demands in large HPC environments. …
File Systems Go Private to Meet Evolving HPC Demands was written by Nicole Hemsoth.
When it comes to parallel file systems, few people understand the evolution of challenges better than Sven Oehme, who was part of the original team at IBM building GPFS. …
Long Live the HPC Parallel File System was written by Nicole Hemsoth.
Barefoot Networks is on a mission, and it is a simple one: To give datacenter switches the same kind of openness and programmability that X86 servers have enjoyed for decades in the datacenter. …
Programmable Networks Get a Bigger Foot In The Datacenter Door was written by Timothy Prickett Morgan.
What if a hyperscale rack designer decided not to locally optimize legacy form factors for thermal management, but instead to start over and design a rack based on optimizing thermal efficiency? …
The Leading Edge Of Air-Cooled Servers Leads To The Edge was written by Paul Teich.
Proctor & Gamble, the massive multinational conglomerate and the world’s largest seller of consumer goods, has long been an advocate of computer modeling and simulation running atop HPC clusters. …
The Critical And Pervasive Role HPC Plays At Proctor & Gamble was written by Jeffrey Burt at .
The Sequana line of supercomputers from the Bull division of Atos offers some of the highest compute density available in the HPC realm. …
Atos Rejiggers Sequana Supercomputers, Adds AMD Rome CPUs was written by Timothy Prickett Morgan.
As we well know by now, workloads at supercomputing sites large and small are changing with the introduction of machine learning and more complex applications that require both large and small files. …
Re-Architecting Storage Around HPC’s Mixed Workloads Problem was written by Nicole Hemsoth.
We have spent the past several years speculating about what the “Summit” supercomputer built by IBM, Nvidia, and Mellanox Technologies for the US Department of Energy and installed at Oak Ridge National Laboratory might be. …
Opening Up The Aperture On The World With Summit was written by Timothy Prickett Morgan.
What is the difference between a SmartNIC and a server processor? …
AWS Tests The Waters With Homegrown Arm Servers was written by Timothy Prickett Morgan.
HPC and the cloud have an uneasy, lukewarm relationship. Some corporations running HPC environments take the view that they already have the infrastructure and software capabilities they need to run their own often massive workloads, and that taking on the networking costs, security concerns, and management hassles of running applications and keeping data in the cloud doesn’t make sense to them. …
Pratt & Whitney Launches HPC to The Cloud To Push Jet Engine Design was written by Jeffrey Burt.
It tends to get overlooked in favor of GPU acceleration, but scaling deep learning on existing CPU-based HPC infrastructure is not just possible; with the right level of optimization and fine-tuning, the performance and efficiency results can be comparable. …
Scaling with Accuracy on CPU-Only HPC Infrastructure was written by Nicole Hemsoth.
Emerging technologies like machine learning, deep learning, and natural language processing promise significant improvements in an array of industries, including the healthcare field. …
Machine Learning Sharpens Medical Imaging was written by Jeffrey Burt.