Author Archives: Nicole Hemsoth

FPGAs Give Microsoft a “Von Neumann Tax” Break

At the annual Supercomputing Conference (SC16) last week, the emphasis was on deep learning and its future role in supercomputing applications and systems. Even before that focus took hold, however, novel architectures and reconfigurable accelerators (as alternatives to building a custom ASIC) had been on a swift rise.

Feeding on that trend, a panel on non-Von Neumann architectures looked at the different ways the high performance computing set might approach non-stored-program machines, and at what the many burgeoning options might mean for energy efficiency and performance.

Among the presenters was Gagan Gupta, a computer architect with Microsoft Research, who detailed the

FPGAs Give Microsoft a “Von Neumann Tax” Break was written by Nicole Hemsoth at The Next Platform.

Inside Intel’s Strategy to Integrate Nervana Deep Learning Assets

There is little doubt that 2017 will be a dense year for deep learning. With a sudden new wave of applications that integrate neural networks into existing workflows (not to mention entirely new uses) and a fresh array of hardware architectures to meet them, we expect the space to start shaking out its early winners and losers and show a clearer path ahead.

As we described earlier this week, Intel has big plans to integrate the Nervana ASIC and software stack with its Knights family of processors in the next several years. This effort, codenamed Knights Crest, is a long-term

Intel Declares War on GPUs at Disputed HPC, AI Border

In Supercomputing Conference (SC) years past, chipmaker Intel has always come forth with a strong story, either as an enabling processor or co-processor force, or more recently, as a prime contractor for a leading-class national lab supercomputer.

But outside of a few announcements at this year’s SC related to beefed-up SKUs for high performance computing and Skylake plans, the real emphasis back in Portland seemed to ring far fainter for HPC and much louder for the newest server tech darlings, deep learning and machine learning. Far from the HPC crowd last week was Intel’s AI Day, an event in

Wringing Cost and Complexity Out of HPC

The race toward exascale supercomputing gets a lot of attention, as it should. Driving up top-end performance levels in high performance computing (HPC) is essential to generate new insights into the fundamental laws of physics, the origins of the universe, global climate systems, and more. The wow factor is huge.

There is another area of growth in HPC that is less glamorous, but arguably even more important. It is the increasing use of small to mid-sized clusters by individuals and small groups that have long been reliant on workstations to handle their most compute-intensive tasks. Instead of a small number

A Deep Learning Supercomputer Approach to Cancer Research

Deep learning and machine learning are major themes at this year’s annual Supercomputing Conference (SC16), both in terms of vendors showcasing systems that are a fit for both high performance computing and machine learning, and in the revelation of new efforts to combine traditional simulations with neural networks for greater efficiency and insight.

We have already described this momentum in the context of announcements from supercomputer makers like Cray, which just unveiled a Pascal GPU-based addition to their modeling and simulation-oriented XC supercomputer line, complete with deep learning frameworks integrated into the stack. The question was, how many HPC workloads

A Closer Look at 2016 Top 500 Supercomputer Rankings

The biannual Top 500 rankings of supercomputers are now live for November 2016. While the top of the list is static, with the same two Chinese supercomputers dominating, several new machines have cropped up to replace decommissioned systems throughout the list, and the momentum at the very top shows some telling architectural trends, particularly among the newcomers in the top 20.

We already described the status of the major Chinese and Japanese systems in our analysis of the June 2016 list and thought it might be more useful to look at some of the broader

Inside Six of the Newest Top 20 Supercomputers

The latest listing of the Top 500 rankings of the world’s most powerful supercomputers has just been released. While there were no big surprises at the top of the list, there have been some notable additions to the top tier, all of which feature various elements of supercomputers yet to come as national labs and research centers prepare for their pre-exascale and eventual exascale systems.

We will be providing a deep dive on the list results this morning, but for now, what is most interesting about the list is what it is just beginning to contain at the top–and what

Cray’s New Pascal XC50 Supercomputer Points to Richer HPC Future

Over the course of the last five years, GPU computing has featured prominently in supercomputing as an accelerator on some of the world’s fastest machines. If some supercomputer makers are correct, GPUs will continue to play a major role in high performance computing, but the acceleration they provide will go beyond boosts to numerical simulations. This has been great news for Nvidia’s bottom line, since the market for GPU computing is swelling, and for HPC vendors that can integrate those GPUs and wrap the proper software stacks around both HPC and machine learning, it could be an equal boon.

SC16 for HPC Programmers: What to Watch

An event as large and diverse as the annual Supercomputing Conference (SC16) presents a daunting array of content, even for those who specialize in a particular area inside the wider HPC spectrum. For HPC programmers, there are many sub-tracks to follow, depending on where in the stack one sits.

The conference program includes a “Programming Systems” label for easily finding additional relevant sessions, but we wanted to highlight a few of these here based on larger significance to the overall HPC programming ecosystem.

HPC programmers often face considerations in how they program that other fields do not. For example, nothing

What Sort of Burst Buffer Are You?

Burst buffer technology is closely associated with HPC applications and supercomputer sites as a means of ensuring that persistent storage, typically a parallel file system, does not become a bottleneck to overall performance, specifically where checkpoints and restarts are concerned. But attention is now turning to how burst buffers might find broader use cases beyond this niche, and how they could be used for accelerating performance in other areas where the ability to handle a substantial volume of data with high speed and low latency is key.

The term burst buffer is applied to this storage technology simply because this

JVM Boost Shows Warm Java is Better than Cold

The Java Virtual Machine (JVM) is a vital part of modern distributed computing. It is the platform for big data applications like Spark, HDFS, Cassandra, and Hive. While the JVM provides “write once, run anywhere” platform independence, this comes at a cost. The JVM takes time to “warm up,” that is, to load the classes, interpret the bytecode, and so on. This time may not matter much for a long-running Tomcat server, but big data jobs are typically short-lived. Thus the parallelization often used to speed up time-to-results compounds the JVM warmup problem.
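The warm-up effect is easy to observe on any JVM. The sketch below is a hypothetical illustration, not code from the study: it times the same method repeatedly within one process. Early iterations typically run slower, while the bytecode is still being interpreted; later iterations speed up once the JIT compiler has compiled the hot loop.

```java
public class WarmupDemo {
    // A fixed chunk of work with a deterministic result, so only the
    // execution time varies from run to run.
    static long work() {
        long sum = 0;
        for (int i = 0; i < 5_000_000; i++) sum += i % 7;
        return sum;
    }

    public static void main(String[] args) {
        // Time the same method several times in one JVM. Early runs are
        // usually slower (interpreted bytecode); later runs are faster
        // once the JIT has compiled the hot loop.
        for (int run = 0; run < 5; run++) {
            long t0 = System.nanoTime();
            long result = work();
            long micros = (System.nanoTime() - t0) / 1_000;
            System.out.println("run " + run + ": " + micros
                    + " microseconds (result=" + result + ")");
        }
    }
}
```

Exact timings vary by machine and JVM, but the downward trend across runs is the warm-up cost in miniature; a job that exits after one run pays that cost in full.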

David Lion and his colleagues examined

Microsoft Research Pens Quill for Data Intensive Analysis

Collecting data is only useful to the extent that the data is analyzed. These days, human Internet usage is generating more data (particularly for advertising purposes) and Internet of Things devices are providing data about our homes, our cars, and our bodies.

Analyzing that data can become a challenge at scale. Streaming platforms work well with incoming data but aren’t designed for post hoc analysis. Traditional database management systems can perform complex queries against stored data, but are not built for real-time use.

One proposal to address these challenges, called Quill, was developed by Badrish Chandramouli and colleagues at Microsoft

Physics Code Modifications Push Xeon Phi Peak Performance

In the worlds of high performance computing (HPC) and physics, seemingly straightforward challenges are frequently not what they seem at first glance.

For example, doing justice to an outwardly simple physics experiment involving a pendulum and drive motor can involve the need to process billions of data points. Moreover, even when aided by the latest high performance technology, such as the Intel Xeon Phi processor, achieving optimal compute levels requires ingenuity for addressing unexpected coding considerations.

Jeffery Dunham, the William R. Kenan Jr. Professor of Natural Sciences at Middlebury College in Vermont, should know. For about eight years, Professor Dunham

Advances in In Situ Processing Tie to Exascale Targets

Waiting for a simulation to complete before visualizing the results is often an unappealing prospect for researchers.

Verifying that output matches expectations early in a run helps prevent wasted computation time, which is particularly important on systems in high demand or when a limited allocation is available. In addition, the ability to perform computation continues to grow faster than the ability to store the results at comparable speed. The ability to analyze simulation output while it is still resident in memory, known as in situ processing, is appealing and sometimes necessary for researchers running large-scale simulations.
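As a toy illustration of the in situ idea (a hypothetical sketch, not code from any project mentioned here), the snippet below runs a small diffusion-style simulation and computes a lightweight summary of the field while it is still in memory at regular intervals, rather than dumping the full state to disk for later visualization.

```java
public class InSituDemo {
    // One explicit diffusion-style step on a ring: each cell becomes the
    // average of its two neighbors. The physics is unimportant; it stands
    // in for any iterative simulation producing a large in-memory field.
    static double[] step(double[] field) {
        int n = field.length;
        double[] next = new double[n];
        for (int i = 0; i < n; i++)
            next[i] = 0.5 * (field[(i + n - 1) % n] + field[(i + 1) % n]);
        return next;
    }

    // In situ analysis: a cheap summary (mean and max) computed on the
    // in-memory field, instead of writing the whole field out each step.
    static double[] summarize(double[] field) {
        double sum = 0, max = Double.NEGATIVE_INFINITY;
        for (double v : field) { sum += v; if (v > max) max = v; }
        return new double[] { sum / field.length, max };
    }

    public static void main(String[] args) {
        double[] field = new double[64];
        field[32] = 1.0; // initial spike that diffuses outward
        for (int t = 0; t < 100; t++) {
            field = step(field);
            if (t % 25 == 0) { // periodic in situ check, not a full dump
                double[] s = summarize(field);
                System.out.printf("t=%d mean=%.5f max=%.5f%n", t, s[0], s[1]);
            }
        }
    }
}
```

A researcher can watch the mean (which should stay constant here) and the decaying maximum to confirm the run is behaving as expected, and kill it early if not, without touching the file system.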

In light of

Mainstreaming Machine Learning: Emerging Solutions

In the course of this three-part series on the challenges and opportunities for enterprise machine learning, we have worked to define the landscape and ecosystem for these workloads in large-scale business settings and have taken an in-depth look at some of the roadblocks on the path to more mainstream machine learning applications.

In this final part of the series, we will turn from pointing to the problems and look at the ways the barriers can be removed, both in terms of leveraging the technology ecosystem around machine learning and addressing more difficult problems, most notably, how to implement the human

The State of HPC Cloud in 2016

We are pleased to announce that the first book from Next Platform Press, titled “The State of HPC Cloud: 2016 Edition,” is complete. The printed book will be available on Amazon.com and other online bookstores in December 2016. In the meantime, however, supercomputing cloud company Nimbix is backing an effort to offer a free digital download edition for this entire week, from today, October 31, until November 6.

As you will note from looking at the Next Platform Press page, we have other books we will be delivering in a similar manner this year. However, that this is the

Major Roadblocks on the Path to Machine Learning

In part one of this series last week, we discussed the emerging ecosystem of machine learning applications and what promise those portend. But of course, as with any emerging application area (although to be fair, machine learning is not new), there are bound to be some barriers.

Even in analytically sophisticated organizations, machine learning often operates in “silos of expertise.” For example, the financial crimes unit in a bank may use advanced techniques to catch money laundering; the credit risk team uses completely different and incompatible tools to predict loan defaults and set risk-based pricing; while treasury uses still other

It Takes a Lot of Supercomputing to Simulate Future Computing

The chip industry is quickly reaching the limits of traditional lithography in its effort to cram more transistors onto a piece of silicon at a pace consistent with Moore’s Law. Accordingly, new approaches, including using extreme ultraviolet light sources, are being developed. While this can promise new output for chipmakers, developing this technology to enhance future computing is going to take a lot of supercomputing.

Lawrence Livermore National Lab’s Dr. Fred Streitz and his teams at the HPC Innovation Center at LLNL are working with Dutch semiconductor company, ASML, to push advances in lithography for next-generation chips. Even as a

The State of Enterprise Machine Learning

For a topic that generates so much interest, it is surprisingly difficult to find a concise definition of machine learning that satisfies everyone. Complicating things further is the fact that much of machine learning, at least in terms of its enterprise value, looks somewhat like existing analytics and business intelligence tools.

To set the course for this three-part series that puts the scope of machine learning into enterprise context, we define machine learning as software that extracts high-value knowledge from data with little or no human supervision. Academics who work in formal machine learning theory may object to a
