In the early days of artificial intelligence, Hans Moravec asserted what became known as Moravec’s paradox: “It is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”
This assertion is now unraveling, primarily due to the ascent of deep learning. …
Deep Learning Is Coming Of Age was written by Timothy Prickett Morgan at The Next Platform.
When you look at IBM, it is as if you are seeing many different instantiations of Big Blue across time playing out in the present, side by side. …
Someone Has To Pay To Push The Bleeding Edge Of Systems was written by Timothy Prickett Morgan at The Next Platform.
Ever since the “Aurora” vector processor designed by NEC was launched last year, we have been wondering if it might be used as a tool to accelerate workloads other than the traditional HPC simulation and modeling jobs that are based on crunching numbers in single and double precision floating point. …
Hadoop And Spark Get A Vector Performance Boost was written by Timothy Prickett Morgan at The Next Platform.
Thanks to transistor shrinks over the past decade, the gap between processor architectures in the datacenter server and those on the desktop, in our laps, or now in our hands is getting bigger. …
A New Datacenter Compels Arm To Create A New Chip Line was written by Timothy Prickett Morgan at The Next Platform.
The growth of a new technology as it enters the industry tends to take on a certain pattern. …
Red Hat Flexes CoreOS Muscle In OpenShift Kubernetes Platform was written by Jeffrey Burt at The Next Platform.
There are a growing number of ways to do machine learning inference in the datacenter, but one increasingly popular approach pairs traditional CPUs, acting as hosts, with FPGAs that run the bulk of the inference work. …
Where The FPGA Hits The Server Road For Inference Acceleration was written by Timothy Prickett Morgan at The Next Platform.
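Since the host-plus-FPGA division of labor in the item above comes up repeatedly in this digest, here is a minimal host-side sketch of the pattern using the standard OpenCL C API that the major FPGA toolchains expose. The bitstream file name (model.aocx), kernel name (dense_infer), and buffer sizes are hypothetical placeholders, not any vendor's actual artifacts; error checking is trimmed for brevity.

```c
/* Host-side sketch of the CPU-hosts-FPGA inference pattern, using the
 * standard OpenCL C API. File name "model.aocx", kernel "dense_infer",
 * and buffer sizes are hypothetical placeholders. Error checks trimmed. */
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    cl_platform_id plat;
    cl_device_id dev;
    cl_int err;

    clGetPlatformIDs(1, &plat, NULL);
    /* FPGA cards enumerate as accelerators, not GPUs. */
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_ACCELERATOR, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

    /* FPGA kernels are synthesized offline (hours, not seconds), so the
     * host loads a precompiled bitstream instead of building source. */
    FILE *f = fopen("model.aocx", "rb");
    fseek(f, 0, SEEK_END);
    size_t len = (size_t)ftell(f);
    rewind(f);
    unsigned char *bin = malloc(len);
    if (fread(bin, 1, len, f) != len) return 1;
    fclose(f);

    cl_program prog = clCreateProgramWithBinary(ctx, 1, &dev, &len,
            (const unsigned char **)&bin, NULL, &err);
    clBuildProgram(prog, 1, &dev, "", NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "dense_infer", &err);

    /* The CPU only stages one batch of inputs and collects the scores;
     * the FPGA pipeline does the actual inference work. */
    enum { N_IN = 1024, N_OUT = 16 };
    float in[N_IN] = {0}, out[N_OUT];
    cl_mem din  = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  sizeof(in),  NULL, &err);
    cl_mem dout = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(out), NULL, &err);

    clEnqueueWriteBuffer(q, din, CL_TRUE, 0, sizeof(in), in, 0, NULL, NULL);
    clSetKernelArg(k, 0, sizeof(cl_mem), &din);
    clSetKernelArg(k, 1, sizeof(cl_mem), &dout);
    size_t gsz = 1;  /* FPGA kernels often run as a single-work-item pipeline. */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &gsz, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dout, CL_TRUE, 0, sizeof(out), out, 0, NULL, NULL);

    printf("first score: %f\n", out[0]);
    return 0;
}
```

Unlike the GPU flow, the program is loaded with clCreateProgramWithBinary rather than compiled from source at runtime, because synthesizing FPGA logic on the fly is impractical; that offline step is what makes the CPU a pure orchestrator in this pattern.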
There is a battle heating up in the datacenter, and there are tens of billions of dollars at stake as chip makers chase the burgeoning market for engines that do machine learning inference. …
Teasing Out The Bang For The Buck Of Inference Engines was written by Timothy Prickett Morgan at The Next Platform.
We have written much over the last few years about the convergence of deep learning and traditional supercomputing, but as the two grow together, it is clear that the tools for one area don’t always mesh well with those of the other. …
Getting HPC Simulations to Speak Deep Learning was written by Nicole Hemsoth at The Next Platform.
There are a lot of different kinds of machine learning, and some of them are not based exclusively on deep neural networks that learn from tagged text, audio, image, and video data to analyze and sometimes transpose that data into a different form. …
Shooting The Machine Learning Rapids With Open Source was written by Timothy Prickett Morgan at The Next Platform.
When IBM’s Summit supercomputer was officially unveiled in June, some hailed it as the first exascale system because of its peak performance in applications that made heavy use of GPU acceleration at low precision. …
Architecting Storage And Compute At Exascale was written by Daniel Robinson at The Next Platform.
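The arithmetic behind that “first exascale” framing in the item above is worth making explicit, using published peak figures rather than measured application performance. Summit couples 4,608 nodes with six NVIDIA V100 GPUs apiece, and each V100 peaks at roughly 125 teraops of mixed-precision tensor throughput:

$$
4{,}608 \;\text{nodes} \times 6 \;\text{GPUs} \times 125 \times 10^{12} \;\text{ops/s} \;\approx\; 3.46 \times 10^{18} \;\text{ops/s},
$$

comfortably past the $10^{18}$ threshold at low precision, while the machine’s double-precision peak sits near $0.2 \times 10^{18}$ flops, an order of magnitude short of exascale.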
Now that deep learning at traditional supercomputing centers is becoming a more pervasive combination, the infrastructure challenges of making both AI and simulations run efficiently on the same hardware and software stacks are emerging. …
HPC File Systems Fail for Deep Learning at Scale was written by Nicole Hemsoth at The Next Platform.
Advances in visualization are essential for managing—and maximizing value from—the rising flood of data, the growing sophistication of simulation codes, the convergence of machine learning (ML) and simulation workloads, and the development of extreme-scale computers. …
Accelerating the Shift to Software Defined Visualization was written by Nicole Hemsoth at The Next Platform.
Editor’s Note: We are arranging interviews with leads on both the hardware and software sides of this story and will update it with more information throughout the day. …
Deep Learning Just Dipped into Exascale Territory was written by Nicole Hemsoth at The Next Platform.
It is safe to say that a little more than a decade ago, when Google’s MapReduce and Google File System distributed storage and computing platform was cloned at Yahoo and offered up to the world as a way to transform the nature of data analytics at scale, we all had much higher hopes for the emergence of platforms centered around Hadoop that would change enterprise, not just webscale, computing. …
Hadoop Needs To Be A Business, Not Just A Platform was written by Timothy Prickett Morgan at The Next Platform.
As we argued a few weeks ago, the cloud is where quantum competition gets real. …
Full Qubit, Tooling Access a Game-Changer for Quantum Development was written by Nicole Hemsoth at The Next Platform.
Many programs have a tough time scaling across high levels of concurrency, but if they are cleverly coded, databases can make great use of massively parallel compute hardware to radically speed up the time it takes to run complex queries against large datasets. …
In A Parallel Universe, Data Warehouses Run On GPUs was written by Timothy Prickett Morgan at The Next Platform.
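To make the claim in the item above concrete, here is a sketch of the data-parallel core of a query such as SELECT SUM(price) FROM sales WHERE qty > 10, written as an OpenCL C device kernel (host setup would mirror the FPGA example earlier, just targeting a GPU device). The column names and the integer-cents encoding are illustrative assumptions, not any particular GPU database’s implementation.

```c
/* Each work-item evaluates the WHERE predicate for one row of a
 * column-oriented table; rows that pass contribute to a single global
 * accumulator via an atomic add. Prices are stored as integer cents so
 * the core OpenCL 1.1 32-bit atomic_add suffices. */
__kernel void filter_sum(__global const int *qty,          /* sales.qty, one entry per row   */
                         __global const int *price_cents,  /* sales.price, one entry per row */
                         const int nrows,
                         volatile __global int *total_cents) {
    int row = get_global_id(0);
    if (row < nrows && qty[row] > 10)
        atomic_add(total_cents, price_cents[row]);
}
```

A production engine would reduce within each work-group first and only then touch global memory, since a single contended atomic serializes; the sketch simply shows why a scan-and-aggregate query maps so naturally onto thousands of hardware threads.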
European supercomputing centers are known for deploying innovative architectures and working on cutting-edge applications, but the sheer number of these systems lags rather far behind that of the U.S. …
Will a Billion Dollars Buy Europe Exascale Dominance? was written by Nicole Hemsoth at The Next Platform.
Technologies often start out in one place and then find themselves in another. …
Inferring The Future Of The FPGA, And Then Making It was written by Timothy Prickett Morgan at The Next Platform.
It is certainly true that no technology company can grow if it is not able to do business in China. …
Open Compute A Foot In the Datacenter Door For Inspur was written by Timothy Prickett Morgan at The Next Platform.
A few years ago the market was rife with deep learning chip startups aiming at AI training. …
Boosting the Clock for High Performance FPGA Inference was written by Nicole Hemsoth at The Next Platform.