Author Archives: Nicole Hemsoth

Japan Invests in Fusion Energy Future with New Cray Supercomputer

There are a number of key areas where exascale computing power will be required to turn simulations into real-world good. One of these is fusion energy research with the ultimate goal of building efficient plants that can safely deliver hundreds of megawatts of clean, renewable fusion energy.

Japan has announced that it will install Cray’s top-end XC50 supercomputer at the Rokkasho Fusion Institute.

The new system will achieve four petaflops, more than double the capability of the current machine for international collaborations in fusion energy, Helios, which was built by European supercomputer maker Bull. The Helios system

Japan Invests in Fusion Energy Future with New Cray Supercomputer was written by Nicole Hemsoth at The Next Platform.

Volkswagen Refining Machine Learning on D-Wave System

Researchers at Volkswagen have been at the cutting edge of implementing D-Wave quantum computers for a number of complex optimization problems, including traffic flow optimization, among other potential use cases.

These efforts are generally focused on developing algorithms suitable for the company’s recently purchased 2000-qubit quantum system and have expanded to a range of new machine learning possibilities, including what a research team at the company’s U.S. R&D office and the Volkswagen Data:Lab in Munich are calling quantum-assisted cluster analysis.

The art and science of clustering is well established in machine learning on classical computing architectures, but the VW approach
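For a sense of that classical baseline, a K-means run on a conventional architecture takes only a few lines. The sketch below is a generic scikit-learn example with invented data, not VW’s code or dataset:

```python
# Minimal classical K-means baseline (illustrative only; not VW's quantum-assisted method).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic data: three Gaussian blobs standing in for real feature vectors.
points = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 2))
                    for c in ((0, 0), (4, 0), (2, 3))])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)
print(kmeans.cluster_centers_)   # learned centroids
print(kmeans.labels_[:10])       # cluster assignment for the first ten points
```

The quantum-assisted variants aim at the same end result, grouping, but rework the objective into a form a quantum annealer can sample, which is where the VW research picks up.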

Volkswagen Refining Machine Learning on D-Wave System was written by Nicole Hemsoth at The Next Platform.

Open Source Data Management for All

On today’s episode of “The Interview” with The Next Platform, we talk about an open source data management platform (and related standards group) called iRODS, which many in scientific computing already know—but that also has applicability in enterprise.

We found that several of our readers had heard of iRODS and knew it was associated with scientific computing, but few understood what the technology actually was or knew that there was a consortium behind it. To dispel any confusion, we spoke with Jason Coposky, executive director of the iRODS Consortium, about both the technology itself and the group’s role

Open Source Data Management for All was written by Nicole Hemsoth at The Next Platform.

Networks Within Networks: Optimization at Massive Scale

On today’s episode of “The Interview” with The Next Platform we talk about the growing problem of networks within networks (within networks) and what that means for future algorithms and systems that will support smart cities, smart grids, and other highly complex and interdependent optimization problems.

Our guest on this audio interview episode (player below) is Hadi Amini, a researcher at Carnegie Mellon who has focused on the interdependency of many factors for power grids and smart cities in a recent book series on these and related interdependent network topics. Here, as in the podcast, the focus is on the

Networks Within Networks: Optimization at Massive Scale was written by Nicole Hemsoth at The Next Platform.

Changing HPC Workloads Mean Tighter Storage Stacks for Panasas

Changes to workloads in HPC mean alterations are needed up and down the stack—and that certainly includes storage. Traditionally these workloads were dominated by large-file handling, but as newer applications (OpenFOAM is a good example) bring small-file and mixed-workload requirements to the HPC environment, storage approaches need to shift to match.

With these changing workload demands in mind, recall that in the first part of our series on future directions for storage for enterprise HPC shops we focused on the ways open source parallel file systems like Lustre fall short for users

Changing HPC Workloads Mean Tighter Storage Stacks for Panasas was written by Nicole Hemsoth at The Next Platform.

Pushing Greater Stream Processing Platform Evolution

Today’s episode of “The Interview” with The Next Platform is focused on the evolution of stream processing—from the early days to more recent times with vast volumes of social, financial, and other data challenging data analysts and systems designers alike.

Our guest is Nathan Trueblood, a veteran of companies including Mirantis, Western Digital, and EMC, and currently VP of product management at DataTorrent—a company staffed by many ex-Yahoo engineers who worked with the Hadoop platform and have pushed the evolution of that framework to include more real-time requirements with Apache Apex.

Trueblood’s career has roots in high performance computing

Pushing Greater Stream Processing Platform Evolution was written by Nicole Hemsoth at The Next Platform.

Expanding Use Cases Mean Tape Storage is Here to Stay

On today’s episode of “The Interview” with The Next Platform we talk about the past, present, and future of tape storage with industry veteran Matt Starr.

Starr is CTO at tape giant Spectra Logic and has been with the company for almost twenty-five years. He was the lead engineer and architect for the design and production of Spectra’s enterprise tape library family, which is still a core product.

We talk about some of the key evolutions in tape capacity and access speeds over the course of his career before moving on to where the new use cases at massive scale are. In

Expanding Use Cases Mean Tape Storage is Here to Stay was written by Nicole Hemsoth at The Next Platform.

Leverage Extreme Performance with GPU Acceleration

Hewlett Packard Enterprise (HPE) and NVIDIA have partnered to accelerate innovation, combining the extreme compute capabilities of high performance computing (HPC) with the groundbreaking processing power of NVIDIA GPUs.

In this fast-paced digital climate, traditional CPU technology is no longer sufficient to support growing data centers. Many enterprises are struggling to keep pace with escalating compute and graphics requirements, particularly as computational models become larger and more complex. NVIDIA GPU accelerators for HPC integrate seamlessly with HPE servers to deliver greater speed, better power efficiency, and dramatically higher application performance than CPUs alone. High-end data centers rely on these high performance

Leverage Extreme Performance with GPU Acceleration was written by Nicole Hemsoth at The Next Platform.

Machine Learning for Auto-Tuning HPC Systems

On today’s episode of “The Interview” with The Next Platform we discuss the art and science of tuning high performance systems for maximum performance—something that has traditionally come at high time cost for performance engineering experts.

While the role of performance engineer will not disappear anytime soon, machine learning is making the tuning of systems—everything from CPUs to application-specific parameters—less of a burden. Despite the highly custom nature of systems and applications, reinforcement learning is allowing new leaps in time-saving tuning as software learns what works best for user applications and architectures, freeing up performance engineers to focus on the finer
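To make the idea concrete, here is a toy sketch of learning-based tuning: an epsilon-greedy search that discovers which setting of a single knob minimizes measured runtime. Everything here is invented for illustration: the knob, the candidate values, and the fake runtime measurement. Real auto-tuners search far larger, structured parameter spaces.

```python
# Toy epsilon-greedy tuner: learn which thread-count setting minimizes runtime.
import random

CANDIDATES = [1, 2, 4, 8, 16, 32]  # hypothetical values for one tuning knob

def measure_runtime(threads):
    """Stand-in for timing the real application at this setting."""
    return abs(threads - 8) + random.random()  # pretend 8 threads is optimal, plus noise

estimates = {c: 0.0 for c in CANDIDATES}  # running mean runtime per setting
counts = {c: 0 for c in CANDIDATES}

for _ in range(200):
    if random.random() < 0.1:  # explore occasionally
        choice = random.choice(CANDIDATES)
    else:                      # otherwise exploit the best estimate so far
        choice = min(CANDIDATES,
                     key=lambda c: estimates[c] if counts[c] else float("inf"))
    runtime = measure_runtime(choice)
    counts[choice] += 1
    estimates[choice] += (runtime - estimates[choice]) / counts[choice]

best = min((c for c in CANDIDATES if counts[c]), key=lambda c: estimates[c])
print(f"learned best setting: {best} threads")
```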

Machine Learning for Auto-Tuning HPC Systems was written by Nicole Hemsoth at The Next Platform.

Enterprise HPC Tightens Storage, I/O Strategy Around Support

File system changes in high performance computing take time. Good file systems are long-lived. It took several years for some parallel file systems to win out over others, and it will be many more decades before the file system as we know it is replaced by something entirely different.

In the meantime, however, there are important points to consider for real-world production HPC deployments that go beyond mere performance comparisons, especially as these workloads grow more complex, become more common, and put new pressures on storage and I/O systems.

The right combination of performance, stability, reliability, and ease of management

Enterprise HPC Tightens Storage, I/O Strategy Around Support was written by Nicole Hemsoth at The Next Platform.

OpenACC Developments: Past, Present, and Future

On today’s episode of “The Interview” with The Next Platform we talk with Doug Miles who runs the PGI compilers and tools team at Nvidia about the past, present, and future of OpenACC with an emphasis on what lies ahead in the next release.

Over the last few years we have described momentum with OpenACC in a number of articles covering everything from what it means in the wake of new memory options to how it pushes on OpenMP to develop alongside. Today, however, we take a step back with an HPC code veteran for the bigger picture and

OpenACC Developments: Past, Present, and Future was written by Nicole Hemsoth at The Next Platform.

IoT Will Force HPC Spending–But Not For the Reasons You Think

The high performance computing market might get some windfall from the processing requirements of IoT and edge devices, but the real driver for spending will come well before the device ever hits the market.

The rise in IoT and edge device demand means an exponential increase in the number of devices that need to be modeled, simulated, and tested, and that means greater investment in HPC hardware as well as in the engineering software tools that have generally served the HPC set.

In other words, it is not the new, enlarged, complex dataset from IoT and edge that presents the next

IoT Will Force HPC Spending–But Not For the Reasons You Think was written by Nicole Hemsoth at The Next Platform.

Geographic Information Systems (GIS) Field Upended by Neural Networks

On today’s episode of “The Interview” with The Next Platform, we focus on how the field of geographic information systems (GIS) is being revolutionized by deep learning.

This stands to reason given the large volumes of satellite image data and the robust deep learning frameworks that excel at image classification and analysis—a volume issue that has been compounded by more satellites producing ever-higher resolution output.
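The workhorse on the deep learning side is the convolutional network. As a rough illustration of what a satellite-tile classifier looks like (a generic PyTorch sketch with made-up tile size and class count, not any specific GIS pipeline):

```python
# Minimal CNN for classifying small satellite image tiles (illustrative sketch).
import torch
import torch.nn as nn

class TileClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # for 64x64 inputs

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TileClassifier()
tiles = torch.randn(8, 3, 64, 64)  # a batch of eight RGB 64x64 tiles (random stand-ins)
print(model(tiles).shape)          # -> torch.Size([8, 10]), class scores per tile
```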

Our audio interview (player below) reveals that, unlike other areas of large-scale scientific data analysis that have traditionally relied on massive supercomputers, a great deal of GIS analysis can be done on smaller systems. However,

Geographic Information Systems (GIS) Field Upended by Neural Networks was written by Nicole Hemsoth at The Next Platform.

Deep Learning on HPC Systems for Astronomy

On today’s episode of “The Interview” with The Next Platform we talk about the use of petascale supercomputers for training deep learning algorithms. More specifically, how this is happening in astronomy to enable real-time analysis of LIGO detector data.

We are joined by Daniel George, a researcher in the Gravity Group at the National Center for Supercomputing Applications, or NCSA. His team garnered a great deal of attention at the annual supercomputing conference in November with work blending traditional HPC simulation data and deep learning.

George and his team have shown that deep learning with convolutional neural networks can provide many
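The same convolutional machinery applies even though the input is a time series rather than an image; the filters simply slide along one dimension. A minimal sketch of a 1D CNN over detector-style strain segments (illustrative of the approach, not the NCSA team’s actual model or data):

```python
# Sketch of a 1D convolutional network over detector-style time series.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=16, stride=4), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=8, stride=4), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 2),            # e.g., signal vs. noise
)

strain = torch.randn(4, 1, 8192)  # four mock strain segments, 8192 samples each
print(net(strain).shape)          # -> torch.Size([4, 2])
```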

Deep Learning on HPC Systems for Astronomy was written by Nicole Hemsoth at The Next Platform.

The Evolution of NAMD: A Scalability Story from Single-Core to GPU Boosted

On today’s episode of “The Interview” with The Next Platform, we take a look at the evolution of the NAMD molecular dynamics code and how the introduction of GPU computing upended performance expectations and set the stage for new metrics now that the Volta GPU architecture will be available on large supercomputers like the Summit machine coming to Oak Ridge National Lab.

Our guest is well known for being part of the team that won a Gordon Bell Prize in 2002 for work on scaling NAMD. Dr. Jim Phillips is a Senior Research Programmer in the NCSA Blue Waters Project

The Evolution of NAMD: A Scalability Story from Single-Core to GPU Boosted was written by Nicole Hemsoth at The Next Platform.

The Next Platform Announces Renowned HPC Expert Joins Team

Former Harvard Computer Science Lead Brings Distributed Systems Experience to Top Publication’s Readers

The Next Platform is proud to announce that former Assistant Dean and Distinguished Engineer for Research Computing at Harvard, Dr. James Cuff, has joined the editorial team in a full-time capacity as Distinguished Technical Author.

The Next Platform is the leading publication covering distributed systems in research and large enterprise, and Dr. Cuff rounds out a seasoned editorial team that delivers in-depth analysis from the worlds of supercomputing, artificial intelligence, cloud and hyperscale datacenters, and the many other technology areas that comprise the highest end of today’s IT ecosystems.

Dr. Cuff

The Next Platform Announces Renowned HPC Expert Joins Team was written by Nicole Hemsoth at The Next Platform.

Quantum Computing Performance By the Glass

On today’s episode of “The Interview” with The Next Platform, we talk about quantum computing performance and functionality with Rigetti Computing quantum hardware engineer Matt Reagor.

We talked with Rigetti not long ago about the challenges of having an end-to-end quantum computing startup (developing the full stack—from hardware to software to the fabs that make the quantum processing units). This conversation takes that one step further by looking at how performance can be considered via an analogy of wine glasses and their various resonances. Before we get to that, however, we talk more generally about Reagor’s early work
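For readers who want the physics behind the analogy: a wine glass rings at a pitch set by its geometry and material, and a superconducting qubit circuit behaves, to first order, like an electrical LC resonator whose frequency is set the same way. The textbook relation (our gloss, not a formula from the interview) is:

```latex
% Resonant frequency of an LC oscillator; superconducting qubit circuits are,
% to first order, resonators of this kind with an added nonlinear element.
f_0 = \frac{1}{2\pi\sqrt{LC}}
```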

Quantum Computing Performance By the Glass was written by Nicole Hemsoth at The Next Platform.

Machine Learning with a Memristor Boost

On today’s podcast episode of “The Interview” with The Next Platform, we talk with computer architecture researcher Roman Kaplan about the role memristors might play in accelerating common machine learning algorithms including K-means. Kaplan and team have been looking at performance and efficiency gains by letting ReRAM pick up some of the data movement tab on traditional architectures.

Kaplan, a researcher at the Technion’s Viterbi Faculty of Electrical Engineering in Israel, and his team have produced some interesting benchmarks comparing K-means and K-nearest neighbor computations on CPU, GPU, FPGA, and, most notably, the Automata Processor from Micron to their
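Both kernels in those benchmarks share the same bottleneck: every assignment or query step streams the full dataset past the processor, so data movement rather than arithmetic dominates, which is exactly the cost in-memory ReRAM designs try to cut. A minimal NumPy version of the two distance kernels (illustrative sizes, not the paper’s implementation):

```python
# The distance computations at the heart of K-means and K-nearest-neighbors.
import numpy as np

rng = np.random.default_rng(1)
points = rng.normal(size=(10_000, 8))  # dataset (illustrative size)
centroids = rng.normal(size=(16, 8))   # K-means with K=16 centroids

# K-means assignment step: nearest centroid for every point.
d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
assignments = d2.argmin(axis=1)

# K-nearest neighbors of a single query point (k=5).
query = rng.normal(size=8)
dist = ((points - query) ** 2).sum(axis=1)
neighbors = np.argsort(dist)[:5]
print(assignments[:10], neighbors)
```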

Machine Learning with a Memristor Boost was written by Nicole Hemsoth at The Next Platform.

Add Firepower to Your Data with HPC-Virtual GPU Convergence

High performance computing (HPC) enables organizations to work more quickly and effectively than traditional compute platforms—but that might not be enough to succeed in today’s evolving digital marketplace.

Mainstream HPC usage is transforming the modern workplace as organizations deploy HPC clusters and composable infrastructures to increase IT speed and performance and help employees achieve higher levels of productivity. However, maintaining disparate and isolated systems poses significant challenges, such as keeping workloads from reaching optimal efficiency. By converging the muscle of HPC with virtualized environments, organizations can deliver a superior virtual graphics experience to any device in order

Add Firepower to Your Data with HPC-Virtual GPU Convergence was written by Nicole Hemsoth at The Next Platform.

Inference is the Hammer That Breaks the Datacenter

Two important changes to the datacenter are happening in the same year—one on the hardware side, another on the software side. And together, they create a force big enough to blow away the clouds, at least over the long haul.

As we covered this year from a datacentric (and even supercomputing) point of view, 2018 is the time for Arm to shine. With a bevy of inroads into commercial markets, from the high end all the way down to the micro-device level, the architecture presents a genuine challenge to the processor establishment. And now, coupled with the biggest trend since

Inference is the Hammer That Breaks the Datacenter was written by Nicole Hemsoth at The Next Platform.
