
Category Archives for "The Next Platform"

IBM Storage Rides Up Flash And NVM-Express

IBM’s systems hardware business finished 2017 in a stronger position than it has seen in years, due in large part to the continued growth of the company’s stalwart System z mainframes and Power platform. As we at The Next Platform noted, the last three months of last year were also the first full quarter of shipments of IBM’s new System z14 mainframes, while the first nodes of the “Summit” supercomputer at Oak Ridge National Laboratory and the “Sierra” system at Lawrence Livermore National Laboratory began to ship.

Not to be overlooked was the strong performance of IBM’s storage

IBM Storage Rides Up Flash And NVM-Express was written by Jeffrey Burt at The Next Platform.

Quantum Computing Performance By the Glass

On today’s episode of “The Interview” with The Next Platform, we talk about quantum computing performance and functionality with Rigetti Computing quantum hardware engineer, Matt Reagor.

We talked with Rigetti not long ago about the challenges of having an end-to-end quantum computing startup (developing the full stack—from hardware to software to the fabs that make the quantum processing units). This conversation takes that one step further by looking at how performance can be considered via an analogy of wine glasses and their various resonances. Before we get to that, however, we talk more generally about Reagor’s early work

Quantum Computing Performance By the Glass was written by Nicole Hemsoth at The Next Platform.

Pushed To The Edge

Computing, which always includes storage and networking, evolves. Just like everything else on Earth. Anything with a benefit in efficiency will always find its niche, and it will change to plug into new niches as they arise and make use of ever-cheaper technologies as they advance from the edges.

It is with this in mind that we ponder the datacenter. As in the center of data, which has been expanding and thinning for a very long time now, and which is pushing itself – and us – to the edge. What, we wonder, is a datacenter that doesn’t have

Pushed To The Edge was written by Timothy Prickett Morgan at The Next Platform.

Machine Learning with a Memristor Boost

On today’s podcast episode of “The Interview” with The Next Platform, we talk with computer architecture researcher Roman Kaplan about the role memristors might play in accelerating common machine learning algorithms including K-means. Kaplan and team have been looking at performance and efficiency gains by letting ReRAM pick up some of the data movement tab on traditional architectures.

Kaplan, a researcher at the Viterbi Faculty of Electrical Engineering in Israel, and his team have produced some interesting benchmarks comparing K-means and K-nearest neighbor computations on CPU, GPU, FPGA, and most notably, the Automata Processor from Micron to their
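For readers unfamiliar with the workload being benchmarked, a minimal K-means sketch in NumPy is below. It is illustrative only: the repeated distance computations and centroid updates in the two inner steps are exactly the kind of data movement that the ReRAM/memristor research aims to absorb in-memory rather than shuttling between DRAM and the processor.

```python
import numpy as np

def kmeans(points, k, iters=10, seed=0):
    """Plain Lloyd's algorithm: assign points to nearest centroid, then
    move each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from k randomly chosen input points.
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assignment step: distance from every point to every centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid becomes the mean of its cluster.
        for j in range(k):
            members = points[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, labels
```

Note that each iteration touches every point twice (once to measure distances, once to recompute means), which is why data movement, not arithmetic, tends to dominate on conventional architectures.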

Machine Learning with a Memristor Boost was written by Nicole Hemsoth at The Next Platform.

Add Firepower to Your Data with HPC-Virtual GPU Convergence

High performance computing (HPC) enables organizations to work more quickly and effectively than they could with traditional compute platforms—but that might not be enough to succeed in today’s evolving digital marketplace.

Mainstream HPC usage is transforming the modern workplace as organizations utilize individually deployed HPC clusters and composable infrastructures to increase IT speed and performance and help employees achieve higher levels of productivity. However, maintaining disparate and isolated systems can pose significant challenges—such as preventing workloads from reaching optimal efficiency. By converging the muscle of HPC and virtualized environments, organizations can deliver a superior virtual graphics experience to any device in order

Add Firepower to Your Data with HPC-Virtual GPU Convergence was written by Nicole Hemsoth at The Next Platform.

Supercomputing At The Crossroads

The supercomputing business, the upper stratosphere of the much broader high performance computing segment of the IT industry, is without question one of the most exciting areas in data processing and visualization.

It is also one of the most frustrating sectors in which to try to make a profitable living. The customers are the most demanding, the applications are the most complex, the budget pressures are intense, the technical challenges are daunting, the governments behind major efforts can be capricious, and the competition is fierce.

This is the world where Cray, which literally invented the supercomputing field, and its competitors

Supercomputing At The Crossroads was written by Timothy Prickett Morgan at The Next Platform.

Inference is the Hammer That Breaks the Datacenter

Two important changes to the datacenter are happening in the same year—one on the hardware side, another on the software side. And together, they create a force big enough to blow away the clouds, at least over the long haul.

As we covered this year from a datacentric (and even supercomputing) point of view, 2018 is the time for Arm to shine. With a bevy of inroads to commercial markets at the high-end all the way down to the micro-device level, the architecture presents a genuine challenge to the processor establishment. And now, coupled with the biggest trend since

Inference is the Hammer That Breaks the Datacenter was written by Nicole Hemsoth at The Next Platform.

Gen-Z Interconnect Ready To Restore Compute Memory Balance

For several years, work has been underway to develop a standard interconnect that can address the increasing speeds in servers, driven by the growing use of accelerators such as GPUs and field-programmable gate arrays (FPGAs), the pressures put on memory by the massive amounts of data being generated, and the bottleneck between the CPUs and the memory.

Any time the IT industry wants a standard, you can always expect at least two, and this time around is no different. Today there is a cornucopia of emerging interconnects, some of them overlapping in purpose, some working side by side, to break

Gen-Z Interconnect Ready To Restore Compute Memory Balance was written by Jeffrey Burt at The Next Platform.

The Ins And Outs Of IBM’s Power9 ZZ Systems

It has taken nearly four years for the low-end, workhorse machines in IBM’s Power Systems line to be updated, and the long-awaited Power9 processors and the shiny new “ZZ” systems have been unveiled. We have learned quite a bit about these machines, many of which are not really intended for the kinds of IT organizations that The Next Platform is focused on. But several of the machines are aimed at large enterprises, service providers, and even cloud builders who want something with a little more oomph on a lot of fronts than an X86 server can deliver in

The Ins And Outs Of IBM’s Power9 ZZ Systems was written by Timothy Prickett Morgan at The Next Platform.

Programmable Networks Train Neural Nets Faster

When it comes to machine learning training, people tend to focus on the compute. We always want to know if the training is being done on specialized parallel X86 devices, like Intel’s Xeon Phi, or on massively parallel GPU devices, like Nvidia’s “Pascal” and “Volta” accelerators, or even on custom devices from the likes of Nervana Systems (now part of Intel), Wave Computing, Graphcore, Google, or Fujitsu.

But as is the case with other kinds of high performance computing, the network matters when it comes to machine learning, and it can be the differentiating

Programmable Networks Train Neural Nets Faster was written by Timothy Prickett Morgan at The Next Platform.

Looking Back: The Evolution of HPC Power, Efficiency and Reliability

On today’s podcast episode of “The Interview” with The Next Platform, we talk about exascale power and resiliency by way of a historical overview of architectures with long-time HPC researcher, Dr. Robert Fowler.

Fowler’s career in HPC began at his alma mater, Harvard, in the early 1970s with scientific codes and expanded across the decades to include roles at several universities, including the University of Washington, the University of Rochester, Rice University, and most recently, RENCI at the University of North Carolina at Chapel Hill, where he spearheads high performance computing initiatives and projects, including one we will

Looking Back: The Evolution of HPC Power, Efficiency and Reliability was written by Nicole Hemsoth at The Next Platform.

A Look at What’s in Store for China’s Tianhe-2A Supercomputer

The field of competitors looking to bring exascale-capable computers to the market is a somewhat crowded one, but the United States and China continue to be the ones that most eyes are on.

It’s a clash of an established global superpower and another one on the rise, one that envelops a struggle for economic, commercial, and military advantages and a healthy dose of national pride. And because of these two countries, the future of exascale computing – which to a large extent to this point has been more about discussion, theory and promise – will come into sharper

A Look at What’s in Store for China’s Tianhe-2A Supercomputer was written by Jeffrey Burt at The Next Platform.

Establishing Early Neural Network Standards

Today’s podcast episode of “The Interview” with The Next Platform will focus on an effort to standardize key neural network features to make development and innovation easier and more productive.

While it is still too early to standardize across major frameworks for training, for instance, portability for new architectures via a common file format is a critical first step toward more interoperability between frameworks and between training and inferencing tools.
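The idea behind such a common file format can be illustrated with a toy sketch: a network’s structure is written as framework-neutral metadata, and its weights as raw tensors, so that any runtime which understands the layout can reload the model. This miniature format is invented for the example; it is not NNEF, ONNX, or any real specification.

```python
import json
import struct

def export_model(path, layers):
    """Write a model as two files: path.json (graph metadata) and
    path.bin (flat little-endian float32 weight blob).
    layers: list of (name, op, weight_list) tuples."""
    meta = []
    with open(path + ".bin", "wb") as blob:
        offset = 0
        for name, op, weights in layers:
            data = struct.pack("<%df" % len(weights), *weights)
            blob.write(data)
            meta.append({"name": name, "op": op,
                         "offset": offset, "count": len(weights)})
            offset += len(data)
    with open(path + ".json", "w") as f:
        json.dump({"format_version": 1, "graph": meta}, f)

def import_model(path):
    """Reload the model from the metadata and weight blob."""
    with open(path + ".json") as f:
        graph = json.load(f)["graph"]
    layers = []
    with open(path + ".bin", "rb") as blob:
        for node in graph:
            blob.seek(node["offset"])
            raw = blob.read(node["count"] * 4)
            weights = list(struct.unpack("<%df" % node["count"], raw))
            layers.append((node["name"], node["op"], weights))
    return layers
```

Because nothing in the on-disk representation refers to the framework that produced it, a model trained in one tool can, in principle, be consumed by another—which is the interoperability property a real standard aims to guarantee at much larger scale.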

To explore this, we are joined by Neil Trevett, Vice President of the Developer Ecosystem at Nvidia and President of the Khronos Group, an industry consortium focused on creating open

Establishing Early Neural Network Standards was written by Nicole Hemsoth at The Next Platform.

A First Look At IBM’s Power9 ZZ Systems

The HPC crowd got a little taste of IBM’s “Nimbus” Power9 processors for scale-out systems, juiced by Nvidia “Volta” Tesla GPU accelerators, last December with the Power AC922 system that is the basis of the “Summit” and “Sierra” pre-exascale supercomputers being built by Big Blue for the US Department of Energy.

Now, IBM’s enterprise customers that use more standard iron in their clusters, and who predominantly have CPU-only setups rather than adding in GPUs or FPGAs and who need a lot more local storage, are getting more of a Power9 meal with the launch of six new machines

A First Look At IBM’s Power9 ZZ Systems was written by Timothy Prickett Morgan at The Next Platform.

Even at the Edge, Scale is the Real Challenge

Neural networks live on data and rely on computational firepower to help them take in that data, train on it and learn from it. The challenge increasingly is ensuring there is enough computational power to keep up with the massive amounts of data being generated today and the rising demands from modern neural networks for speed and accuracy in consuming the data and training on datasets that continue to grow in size.

These challenges can be seen playing out in the fast-growing autonomous vehicle market, where pure-play companies like Waymo – born from Google’s self-driving car initiative –

Even at the Edge, Scale is the Real Challenge was written by Jeffrey Burt at The Next Platform.

Delivering Predictive Outcomes with Superhuman Knowledge

Massive data growth and advances in acceleration technologies are pushing modern computing capabilities to unprecedented levels and changing the face of entire industries.

Today’s organizations are quickly realizing that the more data they have, the more they can learn, and powerful new techniques like artificial intelligence (AI) and deep learning are helping them convert that data into actionable intelligence that can transform nearly every aspect of their business. NVIDIA GPUs and Hewlett Packard Enterprise (HPE) high performance computing (HPC) platforms are accelerating these capabilities and helping organizations arrive at deeper insights, enable dynamic correlation, and deliver predictive outcomes with superhuman

Delivering Predictive Outcomes with Superhuman Knowledge was written by Nicole Hemsoth at The Next Platform.

Exascale Storage Gets a GPU Boost

Alex St. John is a familiar name in the GPU and gaming industry given his role at Microsoft in the creation of DirectX technology in the 90s. And while his fame may be rooted in graphics for PC players, his newest venture has sparked the attention of both the supercomputing and enterprise storage crowds—and for good reason.

It likely helps to have some notoriety when it comes to securing funding, especially for a venture with roots in the notoriously venture capital-starved supercomputing ecosystem. While St. John’s startup Nyriad may be a spin-out of technology developed for the Square Kilometre Array (SKA), the

Exascale Storage Gets a GPU Boost was written by Nicole Hemsoth at The Next Platform.

At the Cutting Edge of Quantum Computing Research

On today’s podcast episode of “The Interview” with The Next Platform, we focus on some of the recent quantum computing developments out of Oak Ridge National Lab’s Quantum Computing Institute with the center’s director, Dr. Travis Humble.

Regular readers will recall previous work Humble has done on the quantum simulator, as well as other lab and Quantum Institute efforts on creating hybrid quantum and neuromorphic supercomputers and building software frameworks to support quantum interfacing. In our discussion we check in on progress along all of these fronts, including a more detailed conversation about the XACC programming framework for

At the Cutting Edge of Quantum Computing Research was written by Nicole Hemsoth at The Next Platform.

Google Boots Up Tensor Processors On Its Cloud

Google laid down its path forward in the machine learning and cloud computing arenas when it first unveiled plans for its tensor processing unit (TPU), an accelerator designed by the hyperscaler to speed up machine learning workloads that are programmed using its TensorFlow framework.

Almost a year ago, at its Google I/O event, the company rolled out the architectural details of its second-generation TPUs – also called the Cloud TPU – for both neural network training and inference, with the custom ASICs providing up to 180 teraflops of floating point performance and 64 GB of High Bandwidth Memory.

Google Boots Up Tensor Processors On Its Cloud was written by Jeffrey Burt at The Next Platform.

Lustre Shines at HPC Peaks, But Rest of Market is Fertile Ground

The Lustre file system has been the canonical choice for the world’s largest supercomputers, but for the rest of the high performance computing user base, it is moving beyond reach without the support and guidance it has had from its many backers, including most recently Intel, which dropped Lustre from its development ranks in mid-2017.

While Lustre users have seen the support story fall to pieces before, for many HPC shops, the need is greater than ever to look toward a fully supported scalable parallel file system that snaps well to easy-to-manage appliances. Some of these commercial HPC sites

Lustre Shines at HPC Peaks, But Rest of Market is Fertile Ground was written by Nicole Hemsoth at The Next Platform.