
Author Archives: Nicole Hemsoth

Looking Back: The Evolution of HPC Power, Efficiency and Reliability

On today’s podcast episode of “The Interview” with The Next Platform, we talk about exascale power and resiliency by way of a historical overview of architectures with long-time HPC researcher, Dr. Robert Fowler.

Fowler’s career in HPC began at his alma mater, Harvard, in the early seventies with scientific codes and expanded across the decades to include roles at several universities, including the University of Washington, the University of Rochester, Rice University, and most recently, RENCI at the University of North Carolina at Chapel Hill, where he spearheads high performance computing initiatives and projects, including one we will

Looking Back: The Evolution of HPC Power, Efficiency and Reliability was written by Nicole Hemsoth at The Next Platform.

Establishing Early Neural Network Standards

Today’s podcast episode of “The Interview” with The Next Platform will focus on an effort to standardize key neural network features to make development and innovation easier and more productive.

While it is still too early to standardize training across the major frameworks, for instance, portability to new architectures via a common file format is a critical first step toward more interoperability between frameworks and between training and inferencing tools.

To explore this, we are joined by Neil Trevett, Vice President of the Developer Ecosystem at Nvidia and President of the Khronos Group, an industry consortium focused on creating open

Establishing Early Neural Network Standards was written by Nicole Hemsoth at The Next Platform.

Delivering Predictive Outcomes with Superhuman Knowledge

Massive data growth and advances in acceleration technologies are pushing modern computing capabilities to unprecedented levels and changing the face of entire industries.

Today’s organizations are quickly realizing that the more data they have the more they can learn, and powerful new techniques like artificial intelligence (AI) and deep learning are helping them convert that data into actionable intelligence that can transform nearly every aspect of their business. NVIDIA GPUs and Hewlett Packard Enterprise (HPE) high performance computing (HPC) platforms are accelerating these capabilities and helping organizations arrive at deeper insights, enable dynamic correlation, and deliver predictive outcomes with superhuman

Delivering Predictive Outcomes with Superhuman Knowledge was written by Nicole Hemsoth at The Next Platform.

Exascale Storage Gets a GPU Boost

Alex St. John is a familiar name in the GPU and gaming industry given his role at Microsoft in the creation of DirectX technology in the 90s. And while his fame may be rooted in graphics for PC players, his newest venture has sparked the attention of both the supercomputing and enterprise storage crowds—and for good reason.

It likely helps to have some notoriety when it comes to securing funding, especially for a venture with roots in the notoriously venture capital-starved supercomputing ecosystem. While St. John’s startup Nyriad may be a spin-out of technology developed for the Square Kilometre Array (SKA), the

Exascale Storage Gets a GPU Boost was written by Nicole Hemsoth at The Next Platform.

At the Cutting Edge of Quantum Computing Research

On today’s podcast episode of “The Interview” with The Next Platform, we focus on some of the recent quantum computing developments out of Oak Ridge National Lab’s Quantum Computing Institute with the center’s director, Dr. Travis Humble.

Regular readers will recall previous work Humble has done on the quantum simulator, as well as other lab and Quantum Computing Institute efforts on creating hybrid quantum and neuromorphic supercomputers and building software frameworks to support quantum interfacing. In our discussion we check in on progress along all of these fronts, including a more detailed conversation about the XACC programming framework for

At the Cutting Edge of Quantum Computing Research was written by Nicole Hemsoth at The Next Platform.

Lustre Shines at HPC Peaks, But Rest of Market is Fertile Ground

The Lustre file system has been the canonical choice for the world’s largest supercomputers, but for the rest of the high performance computing user base, it is moving beyond reach without the support and guidance it has had from its many backers, including most recently Intel, which dropped Lustre from its development ranks in mid-2017.

While Lustre users have seen the support story fall to pieces before, for many HPC shops the need is greater than ever to look toward a fully supported, scalable parallel file system that snaps well into easy-to-manage appliances. Some of these commercial HPC sites

Lustre Shines at HPC Peaks, But Rest of Market is Fertile Ground was written by Nicole Hemsoth at The Next Platform.

HPC System & Processor Trends for 2018

In this episode of The Interview from The Next Platform, we talk with Andrew Jones from the independent high performance computing consulting firm NAG about processor and system acquisition trends in HPC, from users at the smaller commercial end of the spectrum up through the large research centers.

In the course of the conversation, we cover how acquisition trends are being affected by machine learning entering the HPC workflow in the coming years, the differences over time between commercial HPC and academic supercomputing, and some of the issues around processor choices for both markets.

Given his experiences talking to end users

HPC System & Processor Trends for 2018 was written by Nicole Hemsoth at The Next Platform.

Momentum Gathers for Persistent Memory Preppers

While it is possible to reap at least some benefits from persistent memory, for those that are performance-focused, the work to establish an edge is getting underway now, with many of the OS and larger ecosystem players working together on new standards for existing codes.

Before we talk about some of the efforts to bring easier programming for persistent memory closer, it is useful to level-set on what it is and isn’t, how it works, and who will benefit in the near term. The most common point of confusion is that persistent memory is not necessarily about hardware, a fact
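
To make the programming angle concrete, here is a minimal, illustrative sketch (not drawn from the article itself) of what writing to persistent memory can look like today using the PMDK libpmem library; the library choice and the file path are assumptions for illustration, and the path presumes a DAX-mounted persistent memory filesystem.

    #include <libpmem.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        size_t mapped_len;
        int is_pmem;

        /* Create (or open) a 4 KB file on a DAX-mounted filesystem and map it
           directly into the address space; the path is just a placeholder. */
        char *addr = pmem_map_file("/mnt/pmem/example", 4096, PMEM_FILE_CREATE,
                                   0666, &mapped_len, &is_pmem);
        if (addr == NULL) {
            perror("pmem_map_file");
            return 1;
        }

        /* Ordinary stores reach the persistent media once they are flushed. */
        strcpy(addr, "hello, persistent memory");

        /* Flush the stores out of the CPU caches so they survive power loss;
           fall back to msync if the mapping is not true persistent memory. */
        if (is_pmem)
            pmem_persist(addr, mapped_len);
        else
            pmem_msync(addr, mapped_len);

        pmem_unmap(addr, mapped_len);
        return 0;
    }

On Linux this would typically be built against the PMDK development package, for example with cc example.c -lpmem; the point of the sketch is simply that loads and stores replace explicit I/O, with flushing taking the place of fsync.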

Momentum Gathers for Persistent Memory Preppers was written by Nicole Hemsoth at The Next Platform.

New Memory Challenges Legacy Approaches to HPC Code

From DRAM to NUMA to memory that is non-volatile, stacked, remote, or even phase change, the coming years will bring big changes for code developers on the world’s largest parallel supercomputers.

While these memory advancements can translate to major performance leaps, the code complexity these devices will create poses big challenges in terms of performance portability for legacy and newer codes alike.

While the programming side of the emerging memory story may not be as widely appealing as the hardware, work that people like Ron Brightwell, R&D manager at Sandia National Lab and head of numerous exascale programming efforts, do to expose

New Memory Challenges Legacy Approaches to HPC Code was written by Nicole Hemsoth at The Next Platform.

AI Will Not Be Taking Away Code Jobs Anytime Soon

There has been much recent talk about a near future in which code writes itself with the help of trained neural networks, but outside of some limited use cases, that reality is still quite some time away—at least for ordinary development efforts.

Although auto-code generation is not a new concept, it has been getting fresh attention due to better capabilities and ease of use in neural network frameworks. But just as in other areas where AI is touted as being the near-term automation savior, the hype does not match the technological complexity needed to make it a reality. Well, at least not

AI Will Not Be Taking Away Code Jobs Anytime Soon was written by Nicole Hemsoth at The Next Platform.

What It Takes to Build a Quantum Computing Startup

If you thought the up-front costs and risks were high for a silicon startup, consider the economics of building a full-stack quantum computing company from the ground up—and at a time when the applications are described in terms of their potential and the algorithms are still in primitive stages.

Quantum computing company D-Wave managed to bootstrap its annealing-based approach and secure early big-name customers with a total of $200 million raised over the years, but as we have seen with a range of use cases, it has been able to put at least some funds back in investors’ pockets with system sales

What It Takes to Build a Quantum Computing Startup was written by Nicole Hemsoth at The Next Platform.

OpenMP Has More in Store for GPU Supercomputing

Just before the large-scale GPU accelerated Titan supercomputer came online in 2012, the first use cases of the OpenACC parallel programming model showed efficient, high performance interfacing with GPUs on big HPC systems.

At the time, OpenACC and CUDA were the only higher-level tools for the job. However, OpenMP, which has had twenty-plus years to develop roots in HPC, was starting to see the opportunities for GPUs in HPC at about the same time OpenACC was forming. As legend has it, OpenACC itself was developed based on early GPU work done in an OpenMP accelerator subcommittee, generating some bad
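
For readers who have not used these models, below is a minimal, illustrative sketch (not taken from the article) of what OpenMP’s GPU support looks like in practice: a simple loop sent to an accelerator with the target directives introduced in the OpenMP 4.x era.

    #include <stdio.h>

    #define N (1 << 20)

    static double a[N], b[N];

    int main(void) {
        for (int i = 0; i < N; i++) {
            a[i] = 1.0;
            b[i] = 2.0;
        }

        /* Map the arrays to the device, spread the loop across GPU teams and
           threads, and copy the result back to the host afterwards. */
        #pragma omp target teams distribute parallel for map(tofrom: a[0:N]) map(to: b[0:N])
        for (int i = 0; i < N; i++) {
            a[i] += b[i];
        }

        printf("a[0] = %f\n", a[0]);
        return 0;
    }

Built with an offload-capable compiler (for example GCC with -fopenmp -foffload=nvptx-none, or a vendor toolchain), the same source simply runs the loop on the host when no accelerator is present, which is part of the portability argument for the directive-based approach.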

OpenMP Has More in Store for GPU Supercomputing was written by Nicole Hemsoth at The Next Platform.

How AlphaGo Sparked a New Approach to De Novo Drug Design

Researcher Olexandr Isayev wasn’t just impressed to see an AI framework best the top player of a game so complex it was considered impossible for an algorithm to track. He was inspired.

“The analogy of the complexity of chemistry, the number of possible molecules we don’t know about, is roughly the same order of complexity as Go,” the University of North Carolina computational biology and chemistry expert explained.

“Instead of playing with checkers on a board, we envisioned a neural network that could play the game of generating molecules—one that did not rely on human intuition for this initial but

How AlphaGo Sparked a New Approach to De Novo Drug Design was written by Nicole Hemsoth at The Next Platform.

Putting Graph Analytics Back on the Board

Even though graph analytics has not disappeared, especially in the select areas where it is the only efficient way to handle large-scale pattern matching and analysis, the attention has been largely drowned out by the new wave of machine learning and deep learning applications.

Before this newest hype cycle displaced its “big data” predecessor, there was a small explosion of new hardware and software approaches to tackling graphs at scale—from system-level offerings from companies like Cray with its Urika appliance (which is now available as software on its standard server platforms) to unique hardware startups (ThinCI, for example) and graph

Putting Graph Analytics Back on the Board was written by Nicole Hemsoth at The Next Platform.

Graphcore Builds Momentum with Early Silicon

There has been a great deal of interest in deep learning chip startup Graphcore since we first got the limited technical details of the company’s first-generation chip last year, which was followed by revelations about how its custom software stack can run a range of convolutional, recurrent, and generative adversarial neural network jobs.

In our conversations with those currently using GPUs for large-scale training (often with separate CPU-only inference clusters), we have found generally that there is great interest in all new architectures for deep learning workloads. But what would really seal the deal is something that could both training

Graphcore Builds Momentum with Early Silicon was written by Nicole Hemsoth at The Next Platform.

Oil and Gas Industry Gets GPU, Deep Learning Injection

Although oil and gas software giant Baker Hughes is not in the business of high performance computing, the software it creates for the world’s leading oil and gas companies requires supercomputing capabilities for some use cases, and increasingly, these systems can serve double duty for emerging deep learning workloads.

The HPC requirements make sense for an industry awash in hundreds of petabytes each year of sensor and equipment data and many terabytes per day of seismic and discovery simulations, and the deep learning angle is becoming the next best way of building meaning out of so many bytes.

In an effort to

Oil and Gas Industry Gets GPU, Deep Learning Injection was written by Nicole Hemsoth at The Next Platform.

Startup Builds GPU Native Custom Neural Network Framework

It is estimated that each day over a million malicious files are created and kicked to every corner of the web.

While there are plenty of options for defending against these potential attacks, keeping up with the pace, scope, and complexity of modern malicious files has left traditional detection in the dust—even methods based on heuristics or machine learning rather than signatures.

With those traditional methods falling short of what large enterprises need for multi-device and system security, the answer (to everything in IT in 2018, it seems) is to look to deep learning. But this

Startup Builds GPU Native Custom Neural Network Framework was written by Nicole Hemsoth at The Next Platform.

The Hard Limits for Deep Learning in HPC

If the hype is to be believed, there is no computational problem that cannot be tackled faster and better by artificial intelligence. But many of the supercomputing sites of the world beg to differ.

With that said, the deep learning boom has benefitted HPC in numerous ways, including bringing new cred to the years of hardware engineering around GPUs, software scalability tooling for complex parallel codes, and other feats of efficient performance at scale. And there are indeed areas of high performance computing that stand to benefit from integration of deep learning into the larger workflow including weather, cosmology, molecular

The Hard Limits for Deep Learning in HPC was written by Nicole Hemsoth at The Next Platform.

Could Algorithmic Accelerators Spur a Hardware Startup Revival?

It would be interesting to find out how many recent college graduates in electronics engineering, computer science, or related fields expect to roll out their own silicon startup in the next five years compared to similar polls from ten or even twenty years ago. Our guess is that only a select few now would even consider the possibility in the near term.

The complexity of chip designs is growing, which means higher design costs, which in turn limits the number of startups that can make a foray into the market. Estimates vary, but bringing a new chip to market can cost

Could Algorithmic Accelerators Spur a Hardware Startup Revival? was written by Nicole Hemsoth at The Next Platform.

Intel, Nervana Shed Light on Deep Learning Chip Architecture

Almost two years after the acquisition by Intel, the deep learning chip architecture from startup Nervana Systems will finally be moving from its codenamed “Lake Crest” status to an actual product.

In that time, Nvidia, which owns the deep learning training market by a long shot, has had time to firm up its commitment to this expanding (if not overhyped in terms of overall industry dollar figures) market with new deep learning-tuned GPUs and appliances on the horizon as well as software tweaks to make training at scale more robust. In other words, even with solid technology at a reasonable

Intel, Nervana Shed Light on Deep Learning Chip Architecture was written by Nicole Hemsoth at The Next Platform.
