Google has paused its Fiber initiative.
We have written much about large-scale deep learning implementations over the last couple of years, but one question being posed with increasing frequency is how these workloads (training in particular) will scale across many nodes. While some companies, Baidu among them, have managed to scale their deep learning training clusters across many GPU-laden nodes, for non-hyperscale companies, even those with their own development teams, this scalability remains a sticking point.
The answer to deep learning framework scalability can be found in the world of supercomputing. For the many nodes required for large-scale jobs, the de facto …
Pushing MPI into the Deep Learning Training Stack was written by Nicole Hemsoth at The Next Platform.
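Since the article's subject is MPI in the deep learning training stack, a minimal sketch of the usual data-parallel pattern may be useful here. This is an illustration rather than the article's own code: it assumes mpi4py and NumPy are available, and compute_gradients() is a hypothetical stand-in for a framework's backward pass on one rank's shard of the data.

```python
# A minimal sketch of data-parallel training with MPI collectives, assuming
# mpi4py and NumPy. compute_gradients() is a hypothetical placeholder for
# one backpropagation pass over this rank's local mini-batch.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N_PARAMS = 1_000_000          # toy parameter count
weights = np.zeros(N_PARAMS)
learning_rate = 0.01

def compute_gradients(step: int) -> np.ndarray:
    # Placeholder gradient: a real job would run a framework's backward
    # pass here instead of drawing random numbers.
    rng = np.random.default_rng(rank * 100_000 + step)
    return rng.standard_normal(N_PARAMS)

for step in range(10):
    grad = compute_gradients(step)
    # Sum the local gradients across all ranks in place, then average,
    # so every replica applies the identical update and stays in sync.
    comm.Allreduce(MPI.IN_PLACE, grad, op=MPI.SUM)
    grad /= size
    weights -= learning_rate * grad

if rank == 0:
    print(f"completed 10 synchronized steps across {size} ranks")
```

Launched with something like mpirun -np 8 python train_sketch.py, each rank trains on its own data shard while the allreduce keeps the model replicas identical; this collective is the communication workhorse of MPI-based distributed training.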
The post Worth Reading: Addressing in 2016 appeared first on 'net work.
For years, the United States was the dominant player in the high-performance computing world, with more than half of the systems on the Top500 list of the world's fastest supercomputers housed in the country. At the same time, most HPC systems around the globe were powered by technologies from major US tech companies such as Intel, IBM, AMD, Cray and Nvidia.
That has changed rapidly over the last several years, as the Chinese government has invested tens of billions of dollars to expand the capabilities of the country's own technology community, with a promise to spend even more …
Rise of China, Real-World Benchmarks Top Supercomputing Agenda was written by Nicole Hemsoth at The Next Platform.