
Author Archives: Nicole Hemsoth

Quantum Computing Performance By the Glass

On today’s episode of “The Interview” with The Next Platform, we talk about quantum computing performance and functionality with Rigetti Computing quantum hardware engineer Matt Reagor.

We talked with Rigetti not long ago about the challenges of having an end-to-end quantum computing startup (developing the full stack—from hardware to software to the fabs that make the quantum processing units). This conversation takes that one step further by looking at how performance can be considered via an analogy of wine glasses and their various resonances. Before we get to that, however, we talk more generally about Reagor’s early work

Quantum Computing Performance By the Glass was written by Nicole Hemsoth at The Next Platform.

Machine Learning with a Memristor Boost

On today’s podcast episode of “The Interview” with The Next Platform, we talk with computer architecture researcher Roman Kaplan about the role memristors might play in accelerating common machine learning algorithms including K-means. Kaplan and team have been looking at performance and efficiency gains by letting ReRAM pick up some of the data movement tab on traditional architectures.
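To make the data movement point concrete, consider the assignment step of K-means itself. The plain C sketch below is only an illustration of the textbook algorithm, not Kaplan’s implementation: every point must be streamed past every centroid on each iteration, and it is exactly this traffic that ReRAM-based processing-in-memory designs aim to reduce by computing where the data already sits.

```c
/* Minimal sketch of the K-means assignment step (textbook version, for
 * illustration only). On a conventional architecture, all n*k*d operands
 * move through the cache hierarchy every iteration, which is the data
 * movement cost that in-memory ReRAM approaches try to avoid. */
#include <float.h>
#include <stddef.h>

/* points: n x d row-major, centroids: k x d row-major, assign: n labels */
void kmeans_assign(const float *points, const float *centroids,
                   int *assign, size_t n, size_t k, size_t d)
{
    for (size_t i = 0; i < n; i++) {
        float best = FLT_MAX;
        int best_c = 0;
        for (size_t c = 0; c < k; c++) {
            float dist = 0.0f;
            for (size_t j = 0; j < d; j++) {
                float diff = points[i * d + j] - centroids[c * d + j];
                dist += diff * diff;      /* squared Euclidean distance */
            }
            if (dist < best) { best = dist; best_c = (int)c; }
        }
        assign[i] = best_c;               /* nearest centroid wins */
    }
}
```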

Kaplan, a researcher at the Viterbi Faculty of Electrical Engineering in Israel, and his team have produced some interesting benchmarks comparing K-means and K-nearest neighbor computations on CPU, GPU, FPGA, and most notably, the Automata Processor from Micron to their

Machine Learning with a Memristor Boost was written by Nicole Hemsoth at The Next Platform.

Add Firepower to Your Data with HPC-Virtual GPU Convergence

High performance computing (HPC) enables organizations to work more quickly and effectively than they could with traditional compute platforms, but that might not be enough to succeed in today’s evolving digital marketplace.

Mainstream HPC usage is transforming the modern workplace as organizations utilize individually deployed HPC clusters and composable infrastructures to increase IT speed and performance and help employees achieve higher levels of productivity. However, maintaining disparate and isolated systems can pose significant challenges, such as preventing workloads from reaching optimal efficiency. By converging the muscle of HPC and virtualized environments, organizations can deliver a superior virtual graphics experience to any device in order

Add Firepower to Your Data with HPC-Virtual GPU Convergence was written by Nicole Hemsoth at The Next Platform.

Inference is the Hammer That Breaks the Datacenter

Two important changes to the datacenter are happening in the same year—one on the hardware side, another on the software side. And together, they create a force big enough to blow away the clouds, at least over the long haul.

As we covered this year from a datacentric (and even supercomputing) point of view, 2018 is the time for Arm to shine. With a bevy of inroads into commercial markets, from the high end all the way down to the micro-device level, the architecture presents a genuine challenge to the processor establishment. And now, coupled with the biggest trend since

Inference is the Hammer That Breaks the Datacenter was written by Nicole Hemsoth at The Next Platform.

Looking Back: The Evolution of HPC Power, Efficiency and Reliability

On today’s podcast episode of “The Interview” with The Next Platform, we talk about exascale power and resiliency by way of a historical overview of architectures with long-time HPC researcher, Dr. Robert Fowler.

Fowler’s career in HPC began at his alma mater, Harvard, in the early seventies with scientific codes and expanded across the decades to include roles at several universities, including the University of Washington, the University of Rochester, Rice University, and most recently, RENCI at the University of North Carolina at Chapel Hill, where he spearheads high performance computing initiatives and projects, including one we will

Looking Back: The Evolution of HPC Power, Efficiency and Reliability was written by Nicole Hemsoth at The Next Platform.

Establishing Early Neural Network Standards

Today’s podcast episode of “The Interview” with The Next Platform will focus on an effort to standardize key neural network features to make development and innovation easier and more productive.

While it is still too early, for instance, to standardize training across the major frameworks, portability to new architectures via a common file format is a critical first step toward more interoperability between frameworks and between training and inference tools.

To explore this, we are joined by Neil Trevett, Vice President of the Developer Ecosystem at Nvidia and President of the Khronos Group, an industry consortium focused on creating open

Establishing Early Neural Network Standards was written by Nicole Hemsoth at The Next Platform.

Delivering Predictive Outcomes with Superhuman Knowledge

Massive data growth and advances in acceleration technologies are pushing modern computing capabilities to unprecedented levels and changing the face of entire industries.

Today’s organizations are quickly realizing that the more data they have, the more they can learn, and powerful new techniques like artificial intelligence (AI) and deep learning are helping them convert that data into actionable intelligence that can transform nearly every aspect of their business. NVIDIA GPUs and Hewlett Packard Enterprise (HPE) high performance computing (HPC) platforms are accelerating these capabilities and helping organizations arrive at deeper insights, enable dynamic correlation, and deliver predictive outcomes with superhuman

Delivering Predictive Outcomes with Superhuman Knowledge was written by Nicole Hemsoth at The Next Platform.

Exascale Storage Gets a GPU Boost

Alex St. John is a familiar name in the GPU and gaming industry given his role at Microsoft in the creation of DirectX technology in the 90s. And while his fame may be rooted in graphics for PC players, his newest venture has sparked the attention of both the supercomputing and enterprise storage crowds—and for good reason.

It likely helps to have some name recognition when it comes to securing funding, especially for a startup with roots in a supercomputing ecosystem that venture capital has notoriously shunned. While St. John’s startup Nyriad may be a spin-out of technology developed for the Square Kilometre Array (SKA), the

Exascale Storage Gets a GPU Boost was written by Nicole Hemsoth at The Next Platform.

At the Cutting Edge of Quantum Computing Research

On today’s podcast episode of “The Interview” with The Next Platform, we focus on some of the recent quantum computing developments out of Oak Ridge National Lab’s Quantum Computing Institute with the center’s director, Dr. Travis Humble.

Regular readers will recall previous work Humble has done on the quantum simulator, as well as other lab and Quantum Computing Institute efforts on creating hybrid quantum and neuromorphic supercomputers and building software frameworks to support quantum interfacing. In our discussion we check in on progress along all of these fronts, including a more detailed conversation about the XACC programming framework for

At the Cutting Edge of Quantum Computing Research was written by Nicole Hemsoth at The Next Platform.

Lustre Shines at HPC Peaks, But Rest of Market is Fertile Ground

The Lustre file system has been the canonical choice for the world’s largest supercomputers, but for the rest of the high performance computing user base, it is moving beyond reach without the support and guidance it has had from its many backers, including, most recently, Intel, which dropped Lustre from its development ranks in mid-2017.

While Lustre users have seen the support story fall to pieces before, for many HPC shops the need is greater than ever to look toward a fully supported, scalable parallel file system that snaps into easy-to-manage appliances. Some of these commercial HPC sites

Lustre Shines at HPC Peaks, But Rest of Market is Fertile Ground was written by Nicole Hemsoth at The Next Platform.

HPC System & Processor Trends for 2018

In this episode of The Interview from The Next Platform, we talk with Andrew Jones of the independent high performance computing consulting firm N.A.G. about processor and system acquisition trends in HPC, for users ranging from the smaller commercial end of the spectrum up through the large research centers.

In the course of the conversation, we cover how acquisition trends are being affected by machine learning entering the HPC workflow in the coming years, the differences over time between commercial HPC and academic supercomputing, and some of the issues around processor choices for both markets.

Given his experiences talking to end users

HPC System & Processor Trends for 2018 was written by Nicole Hemsoth at The Next Platform.

Momentum Gathers for Persistent Memory Preppers

While it is possible to reap at least some benefits from persistent memory, for those that are performance-focused, the work to establish an edge is getting underway now, with many of the OS and larger ecosystem players working together on new standards for existing codes.
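As a rough illustration of what explicit persistence programming involves today, the sketch below uses the libpmem portion of the open source Persistent Memory Development Kit (PMDK); the mount point and sizes are placeholders, and this is just one of several routes that the standards work hopes to simplify.

```c
/* A minimal, hedged sketch of explicit persistent memory programming with
 * PMDK's libpmem (one illustrative route, not the emerging standard itself).
 * The path below assumes a filesystem-DAX mount and is a placeholder. */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Map a file on a DAX-mounted persistent memory filesystem. */
    char *buf = pmem_map_file("/mnt/pmem/example", 4096,
                              PMEM_FILE_CREATE, 0666,
                              &mapped_len, &is_pmem);
    if (buf == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Ordinary stores land in the CPU caches; persistence requires an
     * explicit flush, which is the step new standards aim to hide. */
    strcpy(buf, "hello, persistent world");
    if (is_pmem)
        pmem_persist(buf, mapped_len);   /* cache-line flush plus fence */
    else
        pmem_msync(buf, mapped_len);     /* fall back to msync otherwise */

    pmem_unmap(buf, mapped_len);
    return 0;
}
```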

Before we talk about some of the efforts to bring easier programming for persistent memory closer, it is useful to level-set on what it is and isn’t, how it works, and who will benefit in the near term. The most common point of confusion is that persistent memory is not necessarily about hardware, a fact

Momentum Gathers for Persistent Memory Preppers was written by Nicole Hemsoth at The Next Platform.

New Memory Challenges Legacy Approaches to HPC Code

From DRAM to NUMA to memory that is non-volatile, stacked, remote, or even phase change, the coming years will bring big changes for code developers on the world’s largest parallel supercomputers.

While these memory advancements can translate to major performance leaps, the code complexity these devices will create poses big challenges in terms of performance portability for legacy and newer codes alike.

While the programming side of the emerging memory story may not be as widely appealing as the hardware, the work that people like Ron Brightwell, R&D manager at Sandia National Lab and head of numerous exascale programming efforts, do to expose

New Memory Challenges Legacy Approaches to HPC Code was written by Nicole Hemsoth at The Next Platform.

AI Will Not Be Taking Away Code Jobs Anytime Soon

There has been much recent talk about a near future in which code writes itself with the help of trained neural networks, but outside of some limited use cases, that reality is still quite some time away, at least for ordinary development efforts.

Although auto-code generation is not a new concept, it has been getting fresh attention due to better capabilities and ease of use in neural network frameworks. But just as in other areas where AI is touted as the near-term automation savior, the hype does not match the technological complexity needed to make it a reality. Well, at least not

AI Will Not Be Taking Away Code Jobs Anytime Soon was written by Nicole Hemsoth at The Next Platform.

What It Takes to Build a Quantum Computing Startup

If you thought the up-front costs and risks were high for a silicon startup, consider the economics of building a full-stack quantum computing company from the ground up, at a time when the applications are described in terms of their potential and the algorithms are still in primitive stages.

Quantum computing company D-Wave managed to bootstrap its annealing-based approach and secure early big-name customers with a total of $200 million over the years, but as we have seen with a range of use cases, it has been able to put at least some funds back in investor pockets with system sales

What It Takes to Build a Quantum Computing Startup was written by Nicole Hemsoth at The Next Platform.

OpenMP Has More in Store for GPU Supercomputing

Just before the large-scale, GPU-accelerated Titan supercomputer came online in 2012, the first use cases of the OpenACC parallel programming model showed efficient, high-performance interfacing with GPUs on big HPC systems.
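For readers who have not seen the two directive styles side by side, here is a hedged sketch (not from the article) of the same simple loop offloaded to a GPU, first with OpenACC and then with the OpenMP target constructs that grew out of the accelerator subcommittee work; exact behavior and performance depend on the compiler and device.

```c
/* Illustrative only: the same SAXPY loop written with OpenACC directives
 * and with OpenMP 4.x+ target offload directives. Compile with an
 * offload-capable compiler (e.g. -fopenmp or -acc flags as appropriate). */

/* OpenACC version: the compiler moves x and y to the GPU and parallelizes. */
void saxpy_acc(int n, float a, const float *x, float *y)
{
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* OpenMP target offload version of the same loop. */
void saxpy_omp(int n, float a, const float *x, float *y)
{
    #pragma omp target teams distribute parallel for \
            map(to: x[0:n]) map(tofrom: y[0:n])
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```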

At the time, OpenACC and CUDA were the only higher-level tools for the job. However, OpenMP, which has had twenty-plus years to develop roots in HPC, was starting to see the opportunities for GPUs in HPC at about the same time OpenACC was forming. As legend has it, OpenACC itself was developed based on early GPU work done in an OpenMP accelerator subcommittee, generating some bad

OpenMP Has More in Store for GPU Supercomputing was written by Nicole Hemsoth at The Next Platform.

How AlphaGo Sparked a New Approach to De Novo Drug Design

Researcher Olexandr Isayev wasn’t just impressed to see an AI framework best the top player of a game so complex it was considered impossible for an algorithm to master. He was inspired.

“The analogy of the complexity of chemistry, the number of possible molecules we don’t know about, is roughly the same order of complexity as Go,” the University of North Carolina computational biology and chemistry expert explained.

“Instead of playing with checkers on a board, we envisioned a neural network that could play the game of generating molecules—one that did not rely on human intuition for this initial but

How AlphaGo Sparked a New Approach to De Novo Drug Design was written by Nicole Hemsoth at The Next Platform.

Putting Graph Analytics Back on the Board

Even though graph analytics has not disappeared, especially in the select areas where it is the only efficient way to handle large-scale pattern matching and analysis, the attention it once drew has largely been drowned out by the new wave of machine learning and deep learning applications.

Before this newest hype cycle displaced its “big data” predecessor, there was a small explosion of new hardware and software approaches to tackling graphs at scale, from system-level offerings from companies like Cray with its Urika appliance (which is now available as software on its standard server platforms) to unique hardware startups (ThinCI, for example) and graph

Putting Graph Analytics Back on the Board was written by Nicole Hemsoth at The Next Platform.

Graphcore Builds Momentum with Early Silicon

There has been a great deal of interest in deep learning chip startup Graphcore since we first got the limited technical details of the company’s first-generation chip last year, which was followed by revelations about how its custom software stack can run a range of convolutional, recurrent, and generative adversarial neural network jobs.

In our conversations with those currently using GPUs for large-scale training (often with separate CPU-only inference clusters), we have found generally that there is great interest in all new architectures for deep learning workloads. But what would really seal the deal is something that could handle both training

Graphcore Builds Momentum with Early Silicon was written by Nicole Hemsoth at The Next Platform.

Oil and Gas Industry Gets GPU, Deep Learning Injection

Although oil and gas software giant Baker Hughes is not in the business of high performance computing, the software it creates for the world’s leading oil and gas companies requires supercomputing capabilities for some use cases, and increasingly, these systems can serve double duty for emerging deep learning workloads.

The HPC requirements make sense for an industry awash in hundreds of petabytes of sensor and equipment data each year and many terabytes per day from seismic and discovery simulations, and the deep learning angle is becoming the next best way of building meaning out of so many bytes.

In an effort to

Oil and Gas Industry Gets GPU, Deep Learning Injection was written by Nicole Hemsoth at The Next Platform.
