Category Archives for "The Next Platform"

The New Server Economies Of Scale For AMD

In the first story of this series, we discussed the Infinity fabric that is at the heart of the new “Naples” Epyc processor from AMD, and how this modified and extended HyperTransport interconnect glues together the cores, dies, and sockets of Epyc-based machines into a unified system.

In this follow-on story, we will expand out from the Epyc processor design to the basic feeds and speeds of the system components based on this chip and then take a look at some of the systems that AMD and its partners were showing off at the Epyc launch a few

The New Server Economies Of Scale For AMD was written by Timothy Prickett Morgan at The Next Platform.

The Convergence Or Divergence Of HPC And AI Iron

Based on datacenter practices of the past two decades, it is a matter of faith that it is always better to run a large number of applications on a given set of generic infrastructure than it is to have highly tuned machines running specific workloads. Siloed applications on separate machines are a thing of the past. However, depending on how Moore’s Law progresses (or doesn’t) and how the software stacks shake out for various workloads, organizations might be running applications on systems with very different architectures, either in a siloed, standalone fashion or across a complex workflow that links the

The Convergence Or Divergence Of HPC And AI Iron was written by Timothy Prickett Morgan at The Next Platform.

The Heart Of AMD’s Epyc Comeback Is Infinity Fabric

At AMD’s Epyc launch a few weeks ago, Lisa Su, Mark Papermaster, and the rest of the AMD Epyc team hammered home that AMD designed its new Zen processor core for servers first. This server-first approach has implications for performance, performance per watt, and cost in both the datacenter and consumer markets.

AMD designed Epyc as a modular architecture around its “Zeppelin” processor die with its eight “Zen” architecture cores. To allow multi-die scalability, AMD first reworked its HyperTransport socket-to-socket system I/O architecture for use on a chip, across a multi-chip module (MCM), and for inter-socket connectivity. AMD has named this

The Heart Of AMD’s Epyc Comeback Is Infinity Fabric was written by Timothy Prickett Morgan at The Next Platform.

Securing The HPC Infrastructure

In the world of high performance computing (HPC), the most popular buzzwords include speed, performance, durability, and scalability. Security is one aspect of HPC that is not often discussed, or else seems to be relatively low on the list of priorities when organizations begin building out their infrastructures to support demanding new applications, whether for oil and gas exploration, machine learning, simulations, or visualization of complex datasets.

While IT security is paramount for businesses in the digital age, HPC systems typically do not encounter the same risks as public-facing infrastructures – in the same way as a web server

Securing The HPC Infrastructure was written by Timothy Prickett Morgan at The Next Platform.

High Expectations for Low Precision at CERN

The last couple of years have seen a steady drumbeat for the use of low precision in a growing number of workloads, driven in large part by the rise of machine learning and deep learning applications and the ongoing desire to cut back on the amount of power consumed.

The interest in low precision is rippling through the high-performance computing (HPC) field, spanning from the companies running application sets to the tech vendors that are creating the systems and components on which the work is done.
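To make the trade-off concrete, here is a minimal, purely illustrative sketch of the idea: a float32 matrix multiply is quantized down to int8 with a single per-tensor scale, and the result is compared against the full-precision answer. The shapes, the symmetric scaling scheme, and the NumPy implementation are assumptions for the example, not details from any specific chip or application mentioned here.

```python
import numpy as np

def quantize_int8(x):
    # Symmetric per-tensor quantization: one scale maps float32 values onto int8.
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)   # stand-in weights
a = rng.standard_normal((256, 256)).astype(np.float32)   # stand-in activations

qw, sw = quantize_int8(w)
qa, sa = quantize_int8(a)

# Multiply in int32 to avoid overflow, then rescale back to float32.
y_int8 = (qw.astype(np.int32) @ qa.astype(np.int32)).astype(np.float32) * (sw * sa)
y_fp32 = w @ a

rel_err = np.linalg.norm(y_int8 - y_fp32) / np.linalg.norm(y_fp32)
print(f"relative error of int8 matmul versus float32: {rel_err:.4f}")
```

The appeal is that 8-bit operands quarter the memory traffic of 32-bit floats and map onto cheaper arithmetic units, at the cost of exactly the kind of error measured above.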

The Next Platform has kept a steady eye on developments in the deep-learning and machine-learning

High Expectations for Low Precision at CERN was written by Nicole Hemsoth at The Next Platform.

The X86 Battle Lines Drawn With Intel’s Skylake Launch

At long last, Intel’s “Skylake” converged Xeon server processors are entering the field, the competition with AMD’s “Naples” Epyc X86 alternatives can begin, and the ARM server chips from Applied Micro, Cavium, and Qualcomm as well as the Power9 chip from IBM know exactly what they are aiming at.

It is a good time to be negotiating with a chip maker for compute power.

The Skylake chips, which are formally known as the Xeon Scalable Processor family, are the result of the convergence of the workhorse Xeon E5 family of chips for two-socket and four-socket servers with the higher-end Xeon E7

The X86 Battle Lines Drawn With Intel’s Skylake Launch was written by Timothy Prickett Morgan at The Next Platform.

China Tunes Neural Networks for Custom Supercomputer Chip

Supercomputing centers around the world are preparing their next generation architectural approaches for the insertion of AI into scientific workflows. For some, this means retooling around an existing architecture to make it capable of doing double duty for both HPC and AI.

Teams in China working on the top performing supercomputer in the world, the Sunway TaihuLight machine with its custom processor, have shown that their optimizations for the SW26010 architecture on deep learning models have yielded a 1.91X to 9.75X speedup over a GPU-accelerated model using the Nvidia Tesla K40m in a test convolutional neural network run with over 100 parameter configurations.
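The published numbers come from a sweep across many layer configurations. As a rough sketch of what such a sweep looks like in software (using an im2col formulation, NumPy, and made-up layer shapes rather than the actual benchmark code run on the SW26010 or the K40m), one might write:

```python
import itertools, time
import numpy as np

# Hypothetical parameter configurations: (batch size, in/out channels, kernel size).
batches  = [1, 8]
channels = [(3, 32), (32, 64)]
kernels  = [3, 5]

rng = np.random.default_rng(0)
for n, (cin, cout), k in itertools.product(batches, channels, kernels):
    h = w = 32  # spatial size of the (hypothetical) input feature map
    # im2col turns the convolution into one large matrix multiply.
    cols    = rng.standard_normal((n * h * w, cin * k * k)).astype(np.float32)
    filters = rng.standard_normal((cin * k * k, cout)).astype(np.float32)
    t0 = time.perf_counter()
    _ = cols @ filters
    dt = time.perf_counter() - t0
    print(f"batch={n} cin={cin} cout={cout} k={k}: {dt * 1e3:.2f} ms")
```

Scaling this kind of loop out to more than 100 configurations is what lets researchers compare architectures across the full range of shapes a real network exercises.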

Efforts on

China Tunes Neural Networks for Custom Supercomputer Chip was written by Nicole Hemsoth at The Next Platform.

OpenPower, Efficiency Tweaks Define Europe’s DAVIDE Supercomputer

When talking about the future of supercomputers and high-performance computing, the focus tends to fall on the ongoing and high-profile competition between the United States, whose place as the kingpin of the industry is slowly eroding, and China, whose government has invested tens of billions of dollars in recent years to rapidly expand the reach of the country’s tech community and the use of home-grown technologies in massive new systems.

Both trends were on display at the recent International Supercomputing Conference in Frankfurt, Germany, where China not only continued to hold the top two spots on the

OpenPower, Efficiency Tweaks Define Europe’s DAVIDE Supercomputer was written by Nicole Hemsoth at The Next Platform.

Ethernet Getting Back On The Moore’s Law Track

It would be ideal if we lived in a universe where it was possible to increase the capacity of compute, storage, and networking at the same pace so as to keep all three elements expanding in balance. The irony is that over the past two decades, when the industry needed networking to advance the most, Ethernet got a little stuck in the mud.

But Ethernet has pulled out of its boots, left them in the swamp, and is back to being barefoot again on much more solid ground where it can run faster. The move from 10 Gb/sec

Ethernet Getting Back On The Moore’s Law Track was written by Timothy Prickett Morgan at The Next Platform.

Parameter Encoding on FPGAs Boosts Neural Network Efficiency

The key to creating more efficient neural network models is rooted in trimming and refining the many parameters in deep learning models without losing accuracy. Much of this work is happening on the software side, but devices like FPGAs that can be tuned for trimmed parameters are offering promising early results for implementation.
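One common way to trim parameters is to cluster the weights into a small codebook so that each parameter is stored as a short index rather than a full float. The sketch below is purely illustrative of that idea in NumPy; it is not the UC San Diego team's method or the FPGA implementation described next.

```python
import numpy as np

def cluster_weights(w, n_clusters=16, iters=20):
    # Tiny 1-D k-means over the weight values: each weight is replaced by the
    # centroid of its cluster, so storage is a codebook plus per-weight indices.
    flat = w.ravel()
    centroids = np.linspace(flat.min(), flat.max(), n_clusters)
    for _ in range(iters):
        idx = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        for c in range(n_clusters):
            members = flat[idx == c]
            if members.size:
                centroids[c] = members.mean()
    return centroids, idx.reshape(w.shape)

rng = np.random.default_rng(0)
w = rng.standard_normal((128, 128)).astype(np.float32)
codebook, codes = cluster_weights(w, n_clusters=16)

approx = codebook[codes]        # reconstructed (encoded) weights
err = np.abs(approx - w).mean()
# 16 clusters means 4-bit indices instead of 32-bit floats per parameter.
print(f"mean absolute error {err:.4f}, roughly 8x smaller parameter storage")
```

The hardware angle is that a small codebook can sit close to the compute units, which is attractive precisely when memory access bandwidth is the limiting factor.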

A team from UC San Diego has created a reconfigurable clustering approach to deep neural networks that encodes the parameters of the network according to the accuracy requirements and limitations of the platform—which are often bound by memory access bandwidth. Encoding the trimmed parameters in an FPGA resulted in

Parameter Encoding on FPGAs Boosts Neural Network Efficiency was written by Nicole Hemsoth at The Next Platform.

Using The Network To Break Down Server Silos

Virtual machines and virtual network functions, or VMs and VNFs for short, are the standard compute units in modern enterprise, cloud, and telecommunications datacenters. But varying VM and VNF resource needs as well as networking and security requirements often force IT departments to manage servers in separate silos, each with their own respective capabilities.

For example, some VMs or VNFs may require a moderate number of vCPU cores and lower I/O bandwidth, while VMs and VNFs associated with real-time voice and video, IoT, and telco applications require a moderate-to-high number of vCPU cores, rich networking services, and high I/O bandwidth,

Using The Network To Break Down Server Silos was written by Timothy Prickett Morgan at The Next Platform.

A High Performance Hiatus

The Next Platform will be on summer holiday for the coming week.

We will return to our normal publishing schedule on Monday, July 10th.

For our U.S. readership, we hope you have an excellent Independence Day holiday week and for our many readers outside the states, here’s hoping you find time to relax and enjoy great weather.

Thanks as always for reading, and for our sponsors, we appreciate your support.

Cheers,

Timothy Prickett Morgan, Nicole Hemsoth

Co-Founders, Co-Editors, The Next Platform

A High Performance Hiatus was written by Nicole Hemsoth at The Next Platform.

InfiniBand And Proprietary Networks Still Rule Real HPC

With the network comprising as much as a quarter of the cost of a high performance computing system and being absolutely central to the performance of applications running on parallel systems, it is fair to say that the choice of network is at least as important as the choice of compute engine and storage hierarchy. That’s why we like to take a deep dive into the networking trends present in each iteration of the Top 500 supercomputer rankings as they come out.
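As a small, purely hypothetical illustration of the kind of tally behind that analysis (the entries below are placeholders, not actual Top 500 systems), the counting itself is simple:

```python
from collections import Counter

# Placeholder records: (system name, interconnect family). A real analysis would
# iterate over the full Top 500 list for a given edition.
systems = [
    ("System A", "InfiniBand"),
    ("System B", "Custom"),
    ("System C", "10G Ethernet"),
    ("System D", "InfiniBand"),
    ("System E", "Omni-Path"),
]

counts = Counter(fabric for _, fabric in systems)
total = len(systems)
for family, n in counts.most_common():
    print(f"{family:15s} {n:3d} systems ({100.0 * n / total:.1f}%)")
```

The interesting work, of course, is in how those shares shift from list to list, which is what the deep dive itself tracks.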

It has been a long time since the Top 500 gave a snapshot of pure HPC centers that

InfiniBand And Proprietary Networks Still Rule Real HPC was written by Timothy Prickett Morgan at The Next Platform.

Fix Your NAS With Metadata

Enterprises are purchasing storage by the truckload to support an explosion of data in the datacenter. IDC reports that in the first quarter of 2017, total capacity shipments were up 41.4 percent year-over-year and reached 50.1 exabytes of storage capacity shipped. As IT departments continue to increase their spending on capacity, few realize that their existing storage is a pile of gold that can be fully utilized once enterprises overcome the inefficiencies created by storage silos.

A metadata engine can virtualize the view of data for an application by separating the data (physical) path from the metadata (logical) path. This
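As a toy sketch of that logical/physical split (purely illustrative, not any vendor's product or API), a metadata engine boils down to a mapping that applications resolve before touching the data itself:

```python
class MetadataEngine:
    """Toy metadata engine: applications see stable logical paths, while the
    engine tracks which silo and physical object actually holds the bytes."""

    def __init__(self):
        self._map = {}  # logical path -> (storage silo, physical location)

    def place(self, logical_path, silo, physical_location):
        self._map[logical_path] = (silo, physical_location)

    def resolve(self, logical_path):
        return self._map[logical_path]

    def migrate(self, logical_path, new_silo, new_location):
        # Data can move between silos; the logical namespace the application
        # sees never changes, only the mapping behind it.
        self._map[logical_path] = (new_silo, new_location)

engine = MetadataEngine()
engine.place("/projects/seismic/run42.dat", "nas-tier1", "vol7/obj/00af31")
print(engine.resolve("/projects/seismic/run42.dat"))

engine.migrate("/projects/seismic/run42.dat", "object-archive", "bucket-a/00af31")
print(engine.resolve("/projects/seismic/run42.dat"))
```

Because the application only ever sees the logical path, idle capacity in any silo can be put to work without rewriting the application or its mount points.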

Fix Your NAS With Metadata was written by Timothy Prickett Morgan at The Next Platform.

Momentum is Building for ARM in HPC

2011 marked ARM’s first step into the world of HPC with the European Mont-Blanc project. The premise was simple: leverage the energy efficiency of ARM-based mobile designs for high performance computing applications.

Unfortunately, making the leap from the mobile market to HPC was not an easy feat. Longtime players in this space, such as Intel and IBM, hold home field advantage: that of legacy software. HPC-optimized libraries, compilers, and applications were already present for these platforms. This was not, however, the case for ARM. Early adopters had to start, largely from scratch, porting and building an ecosystem with a

Momentum is Building for ARM in HPC was written by Nicole Hemsoth at The Next Platform.

A Deep Learning Performance Lens for Low Precision Inference

Few companies have provided better insight into how they think about new hardware for large-scale deep learning than Chinese search giant, Baidu.

As we have detailed in the past, the company’s Silicon Valley AI Lab (SVAIL) in particular has been at the cutting edge of model development and hardware experimentation, some of which is evidenced in their publicly available (and open source) DeepBench deep learning benchmarking effort, which allowed users to test different kernels across various hardware devices for training.
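As a rough sketch of that style of kernel benchmarking (using NumPy GEMMs and made-up shapes, not DeepBench's actual kernel list or harness), the core loop is just careful timing of the operations a model spends its cycles on:

```python
import time
import numpy as np

def time_gemm(m, n, k, warmup=3, repeats=10):
    # Time a dense float32 GEMM, with warm-up runs so caches and threads settle.
    a = np.random.rand(m, k).astype(np.float32)
    b = np.random.rand(k, n).astype(np.float32)
    for _ in range(warmup):
        a @ b
    t0 = time.perf_counter()
    for _ in range(repeats):
        a @ b
    secs = (time.perf_counter() - t0) / repeats
    gflops = 2.0 * m * n * k / secs / 1e9
    return secs, gflops

# Hypothetical problem shapes; a real benchmark sweeps the shapes its models use.
for m, n, k in [(1760, 128, 1760), (2560, 64, 2560), (4096, 16, 4096)]:
    secs, gflops = time_gemm(m, n, k)
    print(f"GEMM {m}x{k} * {k}x{n}: {secs * 1e3:.2f} ms, {gflops:.1f} GFLOP/s")
```

Inference changes the picture mainly through smaller batch dimensions and lower precision, which is why extending a training benchmark to cover it is more than a relabeling exercise.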

Today, Baidu SVAIL extended DeepBench to include support for inference as well as expanded training kernels. Also of

A Deep Learning Performance Lens for Low Precision Inference was written by Nicole Hemsoth at The Next Platform.

Giving Out Grades For Exascale Efforts

Just by being the chief architect of IBM’s BlueGene massively parallel supercomputer, which was built as part of a protein folding simulation grand challenge effort undertaken by IBM in the late 1990s, Al Gara would be someone whom the HPC community would listen to whenever he spoke. But Gara is now an Intel Fellow and also chief exascale architect at Intel, which has emerged as the second dominant supplier of supercomputer architectures alongside Big Blue’s OpenPower partnership with founding members Nvidia, Mellanox Technologies, and Google.

It may seem ironic that Gara did not stay around IBM to help this

Giving Out Grades For Exascale Efforts was written by Timothy Prickett Morgan at The Next Platform.

U.S. Military Sees Future in Neuromorphic Computing

The novel architectures story is still shaping up for 2017 when it comes to machine learning, hyperscale, supercomputing, and other areas.

From custom ASICs at Google and new uses for quantum machines to FPGAs finding new routes into wider application sets, advanced GPUs primed for deep learning, and hybrid combinations of all of the above, it is clear there is serious exploration of non-CMOS devices. When the Department of Energy in the U.S. announced its mission to explore novel architectures, one of the clear candidates for investment appeared to be neuromorphic chips—efficient pattern matching devices that are in development at Stanford (NeuroGrid), The

U.S. Military Sees Future in Neuromorphic Computing was written by Nicole Hemsoth at The Next Platform.

Machine Learning and the Language of the Brain

For years, researchers have been trying to figure out how the human brain organizes language – what happens in the brain when a person is presented with a word or an image. The work has academic rewards of its own, given the ongoing push by researchers to better understand the myriad ways in which the human brain works.

At the same time, ongoing studies can help doctors and scientists learn how to better treat people with aphasia or other brain disorders caused by strokes, tumors or trauma that impair a person’s ability to communicate – to speak, read, write and

Machine Learning and the Language of the Brain was written by Nicole Hemsoth at The Next Platform.

Exascale on the Far Horizon for Cash-Strapped Oil and Gas

All the compute power in the world is useless against code that cannot scale. And neither compute nor code can be useful if growing data for simulations cannot be collected and managed.

But ultimately, none of this is useful at all if the industry that needs these HPC resources is having more trouble than ever acquiring them. It comes as no surprise that the national labs and major research centers will be the first to get exaflop-capable systems, but in a normal market (like the one oil and gas knew not long ago) these machines would be relatively quickly followed

Exascale on the Far Horizon for Cash-Strapped Oil and Gas was written by Nicole Hemsoth at The Next Platform.