We don’t have a Moore’s Law problem so much as we have a materials science or alchemy problem. If you believe in materials science, what seems abundantly clear from listening to so many discussions about the end of scale for chip manufacturing processes is that, for whatever reason, the industry as a whole has not invested enough to discover the new materials that will allow us to enhance or move beyond CMOS chip technology.
The only logical conclusion is that people must actually believe in alchemy, that some kind of magic is going to save the day and allow …
Alchemy Can’t Save Moore’s Law was written by Timothy Prickett Morgan at The Next Platform.
The rumors that supercomputer maker Fujitsu would be dropping the Sparc architecture and moving to ARM cores for its next generation of supercomputers have been going around since last fall. At the International Supercomputing Conference in Frankfurt, Germany this week, officials at the server maker and at RIKEN – the research and development arm of the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT) that currently houses the mighty K supercomputer – confirmed that this is indeed true.
The ARM architecture now gets a heavy-hitter system maker with expertise in developing processors to support diverse commercial and technical workloads, and …
Inside Japan’s Future Exascale ARM Supercomputer was written by Timothy Prickett Morgan at The Next Platform.
As we have written about extensively here at The Next Platform, there is no shortage of use cases in deep learning and machine learning where HPC hardware and software approaches have bled over to power next generation applications in image, speech, video, and other classification and learning tasks.
Since we focus on high performance computing systems here in their many forms, that trend has been exciting to follow, particularly watching GPU computing and matrix math-based workloads find a home outside of the traditional scientific supercomputing center.
This widened attention has been good for HPC as well since it has …
HPC is Great for AI, But What Does Supercomputing Stand to Gain? was written by Nicole Hemsoth at The Next Platform.
Sales of HPC systems were a lot brisker in 2015 than anticipated, and according to the latest prognostications from the market researchers at IDC presented from the International Supercomputing Conference in Frankfurt, Germany this week, growth in the HPC sector will continue to outpace that of the overall IT market for many years to come.
In a sense, the good numbers that the HPC market turned in last year are perhaps a little undercounted. In his traditional early morning breakfast briefing at the conference, Earl Joseph, program vice president for high performance computing at IDC, said that he had been …
HPC Spending Outpaces The IT Market, And Will Continue To was written by Timothy Prickett Morgan at The Next Platform.
When we cover the bi-annual listing of the world’s most powerful supercomputers, the metric at the heart of those results is the high performance Linpack benchmark, the gold standard for over two decades. However, many have argued the benchmark is getting long in the tooth, with its myopic focus on sheer floating point performance over other important factors that determine a supercomputer’s value for real-world applications.
This shift in value stands to reason, since larger machines mean more data coursing through the system, thus an increased reliance on memory and the I/O subsystem, among other factors. While raw floating …
Measuring Top Supercomputer Performance in the Real World was written by Nicole Hemsoth at The Next Platform.
Intel has finally opened the first public discussions of its investment in the future of machine learning and deep learning, and while some might argue it is a bit late to the game, with its rivals dominating the training market for such workloads, the company had to wait for the official rollout of Knights Landing and extensions to the scalable system framework to make it official – and meaty enough to capture real share from the few players doing deep learning at scale.
Yesterday, we detailed the announcement of the first volume shipments of Knights Landing, which already is finding a home …
Knights Landing Proves Solid Ground for Intel’s Stake in Deep Learning was written by Nicole Hemsoth at The Next Platform.
The long wait for volume shipments of Intel’s “Knights Landing” parallel X86 processors is over, and at the International Supercomputing Conference in Frankfurt, Germany, Intel is unveiling the official lineup of the Xeon Phi chips that are aimed at high performance computing and machine learning workloads alike.
The lineup is uncharacteristically simple for a Xeon product line, which tends to have a lot of different options turned on and off to meet the myriad requirements of features and price points that a diverse customer base usually compels Intel to support. Over time, the Xeon Phi lineup will become more complex, with …
Intel Knights Landing Yields Big Bang For The Buck Jump was written by Timothy Prickett Morgan at The Next Platform.
For the first time since the Top 500 ranking of the most powerful supercomputers in the world was started 23 years ago, the United States is not home to the largest number of machines on the list – and China, after decades of intense investment and engineering, is.
Supercomputing is not just an academic or government endeavor, but it is an intensely nationalistic one given the enormous sums that are required to create the components of these massive machines, write software for them, and keep them running until some new approach comes along. And given that the machines support the …
China Topples United States As Top Supercomputer User was written by Timothy Prickett Morgan at The Next Platform.
Much to the surprise of the supercomputing community, which is gathered in Germany for the International Supercomputing Conference this morning, news arrived that a new system has dramatically topped the Top 500 list of the world’s fastest and largest machines. And like the last one that took this group by surprise a few years ago, the new system is also in China.
Recall that the reigning supercomputer in China, the Tianhe-2 machine, has stood firmly at the top of that list for three years, outpacing the U.S. “Titan” system at Oak Ridge National Laboratory. We have a more detailed analysis …
A Look Inside China’s Chart-Topping New Supercomputer was written by Nicole Hemsoth at The Next Platform.
Nvidia wants its latest “Pascal” GP100 generation of GPUs to be broadly adopted in the market, not just used in capability-class supercomputers that push the limits of performance for traditional HPC workloads as well as for emerging machine learning systems. And to accomplish this, Nvidia needs to put Pascal GPUs into a number of distinct devices that fit into different system form factors and offer various capabilities at multiple price points.
At the International Supercomputing Conference in Frankfurt, Germany, Nvidia is therefore taking the wraps off two new Tesla accelerators based on the Pascal GPUs that plug into systems …
Nvidia Rounds Out Pascal Tesla Accelerator Lineup was written by Timothy Prickett Morgan at The Next Platform.
This week at the International Supercomputing Conference (ISC ’16) we are expecting a wave of vendors and high performance computing pros to blur the borders between traditional supercomputing and what is around the corner on the application front—artificial intelligence and machine learning.
For some, merging those two areas is a stretch, but for others, particularly GPU maker Nvidia, which just extended its supercomputing/deep learning roadmap this morning, the story is far more direct, since much of the recent deep learning work has hinged on GPUs for training neural networks and machine learning algorithms.
We have written extensively over …
What Will GPU Accelerated AI Lend to Traditional Supercomputing? was written by Nicole Hemsoth at The Next Platform.
Might doesn’t make right, but it sure does help. One of the recurring bothers about any technology upstart is that it is a smaller David usually up against vastly larger Goliaths with a broader and deeper set of technologies covering multiple markets. The best way to get traction in one market, then, seems to be to have significant footing in several markets.
This is the strategy that ARM server chip and switch ASIC maker Cavium is taking as it shells out approximately $1.36 billion to acquire network and storage switch chip maker QLogic. The combination of the two companies will …
Cavium Buys Access To Enterprise With QLogic Deal was written by Timothy Prickett Morgan at The Next Platform.
There are two endpoints in any network connection, and you have to focus on both the server adapter and the switch to get the best and most balanced performance out of the network and the proper return on what amounts to a substantial investment in a cluster.
With the upcoming ConnectX-5 server adapters, Mellanox Technologies is continuing in its drive to have more and more of the network processing in a server node offloaded to its adapter cards. And it is also rolling out significant new functionality such as background checkpointing and switchless networking, and of course there is …
Next-Gen Network Adapters: More Oomph, Switchless Clusters was written by Timothy Prickett Morgan at The Next Platform.
Open source software has done a lot to transform the IT industry, but perhaps more than anything else it has reminded those who architect complex systems that all elements of a datacenter have to be equally open and programmable for them to make the customizations that are necessary to run specific workloads efficiently and therefore cost effectively.
Servers have been smashed wide open in large enterprises, HPC centers, hyperscalers, and cloud builders (excepting Microsoft Azure, of course) by the double whammy of the ubiquity of the X86 server and the open source Linux operating system, and storage has followed suit …
The Walls Come Down On The Last Bastion Of Proprietary was written by Timothy Prickett Morgan at The Next Platform.
Building high performance systems with bleeding-edge hardware but without considering the way data actually moves through such a system is all too common, and woefully so, given that understanding and articulating an application’s requirements can lead to dramatic I/O improvements.
A range of “Frequently Unanswered Questions” are at the root of inefficient storage design, stemming from a lack of specified workflows, and the problem is widespread, especially in verticals where data isn’t the sole business driver.
One could make the argument that data is at the heart of any large-scale computing endeavor, but as workflows change, the habit of …
Framing Questions for Optimized I/O Subsystems was written by Nicole Hemsoth at The Next Platform.
AMD gets a lot of credit for creating Accelerated Processing Units that merge CPUs and GPUs on a single package or on a single die, but Intel also has a line of Core and Xeon processors that do the same thing for workstation and server workloads. The “Skylake-H” Xeon E3-1500 v5 chips that Intel recently announced with its new Iris Pro Graphics P580 GPUs pack quite a wallop – enough, in fact, that for certain kinds of floating point math on hybrid workloads, system architects should probably give them consideration as they are building out clusters to do various …
Skylake Xeon E3s Serve Up Cheap Flops was written by Timothy Prickett Morgan at The Next Platform.
The powerful Cori supercomputer, now being readied for deployment at NERSC (The National Energy Research Scientific Computing Center), has been named in honor of Gerty Cori. Cori was a Czech-American biochemist (August 15, 1896 – October 26, 1957) who became the first American woman to be awarded a Nobel Prize in science.
Cori (a.k.a. NERSC-8) is the Center’s newest supercomputer. Phase 1 of the system is currently installed, with Phase 2 slated to be up and running this year. Phase 1 is a Cray XC40 supercomputer based on the multi-core Intel Haswell processor with a theoretical peak performance of 1.92 petaflops. It …
NERSC Preps for Next Generation “Cori” Supercomputer was written by Nicole Hemsoth at The Next Platform.
With the International Supercomputing 2016 conference fast approaching, the HPC community is champing at the bit to share insights on the latest technologies and techniques to make simulation and modeling applications scale further and run faster.
The hot topic of conversation is often hardware at such conferences, but hardware is easy. Software is the hard part, and techniques for exploiting the compute throughput of an increasingly diverse collection of engines – from multicore CPUs to GPUs to DSPs and to FPGAs – evolve more slowly than hardware. And they do so by necessity.
The OpenACC group is getting out ahead …
The Challenge Of Coding Across HPC Architectures was written by Timothy Prickett Morgan at The Next Platform.
Compute is by far still the largest part of the hardware budget at most IT organizations, and even with the advance of technology, which allows more compute, memory, storage, and I/O to be crammed into a server node, we still seem to always want more. But with a tighter coupling of flash in systems and new memories coming to market like 3D XPoint, the server is set to become a more complex bit of machinery.
To try to figure out what is going on out there with memory on systems in the real world and how future technologies might affect …
Systems To Morph As Memory Options Expand was written by Timothy Prickett Morgan at The Next Platform.
There are two competing trends in platform designs that architects always have to contend with. They can build a platform that performs a specific function and does it well, or create a more generic platform that sacrifices some efficiency but does a lot of jobs well. Sometimes you try to shoot the gap between these two poles.
That is precisely what Arista Networks, the networking upstart that has serial entrepreneur Andy Bechtolsheim as its chief development officer, is doing with a new line of what it is calling “universal leaf” switches. The leaf switches (does one say “leafs” or “leaves” …
Leaving Fixed Function Switches Behind For Universal Leafs was written by Timothy Prickett Morgan at The Next Platform.