Author Archives: Nicole Hemsoth

AWS Outlines Current HPC Cloud User Trends

Last week we discussed the projected momentum for FPGAs in the cloud with Deepak Singh, general manager of container and HPC projects at Amazon Web Services. In the second half of our interview, we delved into the current state of high performance computing on Amazon’s cloud.

While the company tends to offer generalizations rather than specific breakdowns of “typical” workloads for different HPC application types, the conversation reveals a continued emphasis on rolling out new instances to feed both HPC and machine learning, a continued drive to push ISVs toward more flexible license models, and continued work to make running complex workflows more seamless.

AWS Outlines Current HPC Cloud User Trends was written by Nicole Hemsoth at The Next Platform.

FPGAs Focal Point for Efficient Neural Network Inference

Over the last couple of years, we have focused extensively on the hardware required for training deep neural networks and other machine learning algorithms. Focal points have included the use of general purpose and specialized CPUs, GPUs, custom ASICs, and more recently, FPGAs.

As the battle to match the correct hardware devices to these training workloads continues, another battle has flared up on the deep learning inference side. Training neural networks has its own challenges that can be met with accelerators, but for inference, efficiency, performance, and accuracy all need to be in balance.

One developing area in inference is in

Heating Up the Exascale Race by Staying Cool

High performance computing is a hot field, and not just in the sense that it gets a lot of attention. The hardware necessary to perform the countless simulations performed every day consumes a lot of power, which is largely turned into heat. How to handle all of that heat is a subject that is always on the mind of facilities managers. If the thermal energy is not moved elsewhere in short order, the delicate electronics that comprise the modern computer will cease to function.

The computer room air handler (CRAH) is the usual approach. Chilled water cools the air, which
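The heat-removal problem described above comes down to simple sensible-heat arithmetic: the heat a chilled-water loop carries away is Q = ṁ · c_p · ΔT. A minimal sketch of that calculation; the flow rate and temperature rise are illustrative assumptions, not figures from any particular facility:

```python
# Back-of-the-envelope heat removal by a chilled-water loop: Q = m_dot * c_p * dT.
# The flow rate and temperature rise below are illustrative assumptions.

C_P_WATER = 4186.0  # specific heat of water, J/(kg*K)

def heat_removed_kw(flow_kg_per_s, delta_t_k):
    """Sensible heat carried away by the water, in kilowatts."""
    return flow_kg_per_s * C_P_WATER * delta_t_k / 1000.0

# A loop moving 10 kg/s of water that warms by 10 K absorbs roughly 419 kW
# of IT heat, which hints at why water, not air, does the heavy lifting.
print(round(heat_removed_kw(10, 10)))  # → 419
```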

Refining Oil and Gas Discovery with Deep Learning

Over the last two years, we have highlighted deep learning use cases in enterprise areas including genomics, large-scale business analytics, and beyond, but there are many market areas still building a profile for where such approaches fit into existing workflows. For areas with complex simulation-driven workflows, model training and inference could bring great efficiencies, but integrating those elements is difficult.

The oil and gas industry is one area where deep learning holds promise, at least in theory. For some steps in the resource

AWS Details FPGA Rationale and Market Trajectory

At the end of 2016, Amazon Web Services announced it would be making high-end Xilinx FPGAs available via a cloud delivery model, beginning in a developer preview mode before branching out with higher-level tools to help potential new users onboard and experiment with FPGA acceleration as the year rolls on.

As Deepak Singh, general manager of the container and HPC division within AWS, tells The Next Platform, the application areas where the most growth is expected for cloud-based FPGAs are many of the same ones we detailed in our recent book, FPGA Frontiers: New Applications in Reconfigurable Computing. These

Adapting InfiniBand for High Performance Cloud Computing

When it comes to low-latency interconnects for high performance computing, InfiniBand immediately springs to mind. On the most recent Top 500 list, over 37 percent of systems used some form of InfiniBand – the highest representation of any interconnect family. Since 2009, InfiniBand has accounted for between 30 and 51 percent of the systems on every Top 500 list.

But when you look to the clouds, InfiniBand is hard to find. Of the three major public cloud offerings (Amazon Web Services, Google Cloud, and Microsoft Azure), only Azure currently has an InfiniBand offering. Some smaller players do as well (ProfitBricks, for example), but it’s

FPGA Frontiers: New Applications in Reconfigurable Computing

There is little doubt that this is a new era for FPGAs.

While it is not news that FPGAs have been deployed in many different environments, particularly on the storage and networking side, there are fresh use cases emerging in part due to much larger datacenter trends. Energy efficiency, scalability, and the ability to handle vast volumes of streaming data are more important now than ever before. At a time when traditional CPUs are facing a future where Moore’s Law is less certain and other accelerators and custom ASICs are potential solutions with their own sets of expenses and hurdles,

Looking Ahead To The Next Platforms That Will Define 2017

The old Chinese saying, “May you live in interesting times,” is supposed to be a curse, uttered perhaps with a wry smile and a glint in the eye. But it is also, we think, a blessing, particularly in an IT sector that could use some constructive change.

Considering how much change the tech market, and therefore the companies, governments, and educational institutions of the world, have had to endure in the past three decades – and the accelerating pace of change over that period – you might be thinking it would be good to take a breather, to coast for

Bolstering Lustre on ZFS: Highlights of Continuing Work

The Zettabyte File System (ZFS) has long been supported as a back-end file system to Lustre. But in the last few years it has gained greater importance, likely due to Lustre’s push into the enterprise and increasing demands from both enterprise and non-enterprise IT for more reliability and flexibility features in Lustre. As a result, ZFS has seen significant traction in recent Lustre deployments.

However, over the last 18 months, a few challenges have been the focus of several open source projects in the Lustre developer community to improve performance, align the enterprise-grade features in ZFS

Molecular Dynamics a Next Frontier for FPGA Acceleration

Molecular dynamics codes have a wide range of uses across scientific research and represent a target base for a variety of accelerators and approaches, from GPUs to custom ASICs. The iterative nature of these codes means that on a CPU alone, even relatively simple simulations can require massive amounts of compute time, so the push to find ways to bolster performance is strong.

It is not practical for all users to make use of a custom ASIC (as exists on domain-specific machines like those from D.E. Shaw, for instance). Accordingly, this community has looked to a midway step between general

The Road Ahead for Deep Learning in Healthcare

While there are some sectors of the tech-driven economy that thrive on rapid adoption of new innovations, other areas remain rooted in traditional approaches due to regulatory and other constraints. Despite great advances toward precision medicine goals, the healthcare industry, like other important segments of the economy, is bound by several specific constraints that make it slower to adopt potentially higher performing tools and techniques.

Although deep learning is nothing new, its application set is expanding. There is promise for the more mature variants of traditional deep learning (convolutional and recurrent neural networks are the prime examples) to morph

From Monolith to Microservices

Microservices are big in the tech world these days. The evolutionary heir to service-oriented architecture, microservice-based design is the ultimate manifestation of everything you learned about good application design.

Loosely coupled with high cohesion, microservices are the application embodiment of DevOps principles. So why isn’t everything a microservice now? At the LISA Conference, Anders Wallgren and Avantika Mathur from Electric Cloud gave some insight with their talk “The Hard Truths about Microservices and Software Delivery”.

Perhaps the biggest impediment to the adoption of microservices-based application architecture is an organizational culture that is not supportive. Microservices proponents recommend a team

Let There Be Light: The Year in Silicon Photonics

Computing historians may look back on 2016 as the Year of Silicon Photonics. Not because the technology has become ubiquitous – that may yet be years away – but because the long-awaited silicon photonics offerings are now commercially available in networking hardware. While the advancements in networking provided by silicon photonics are indisputable, the real game changer is in the CPU.

For over half a century, Moore’s Law has been the name of the game. The transistor density on chips has been remarkably cooperative in doubling on schedule since Gordon Moore first made his observation in 1965. But Moore’s Law
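The compounding implied by that doubling cadence is easy to underestimate. A minimal sketch, assuming the commonly cited two-year doubling period (the function name and the year span are illustrative, not from the article):

```python
# Compound growth implied by Moore's Law: transistor density doubles each period.
# The two-year period is the commonly cited form of the observation; the
# fifty-year span below is an illustrative example, not a figure from the text.

def density_multiplier(years, doubling_period_years=2):
    """How many times transistor density multiplies over a span of years."""
    return 2 ** (years // doubling_period_years)

# Fifty years at a two-year cadence is 25 doublings: a factor of ~33.5 million.
print(density_multiplier(2015 - 1965))  # → 33554432
```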

U.S. Bumps Exascale Timeline, Focuses on Novel Architectures for 2021

A shift in thinking is underway among the exascale computing leads in the U.S. government, one that offers the potential of the United States installing an exascale-capable machine in 2021 and, of even more interest, a system based on a novel architecture.

As Paul Messina, Argonne National Lab Distinguished Fellow and head of the Exascale Computing Project (ECP), tells The Next Platform, the roadmap to an exascale-capable machine (meaning one capable of 50X the current 20 petaflop machines topping the Top 500 supercomputer list) is on a seven-year,
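The 50X figure is straightforward to check: fifty times a 20 petaflops machine is 1,000 petaflops, which is one exaflops. A quick sketch using the article's round numbers (not benchmark results):

```python
# Arithmetic behind the "exascale capable" target: 50X today's ~20 petaflops
# leaders. The figures are the article's round numbers, not measured benchmarks.

PETA = 10**15  # flop/s in one petaflops
EXA = 10**18   # flop/s in one exaflops

current_top_flops = 20 * PETA          # ~20 petaflops class Top 500 leaders
target_flops = 50 * current_top_flops  # the 50X goal cited for 2021

print(target_flops / EXA)  # → 1.0, i.e. exactly one exaflops
```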

Configuring the Future for FPGAs in Genomics

With the announcement of FPGA instances hitting the Amazon cloud, and similar news expected from FPGA veteran Microsoft via Azure, among others, the spotlight has swung back to reconfigurable hardware and the path ahead. This has certainly been a year-plus of refocusing for the two main makers of such hardware, Altera and Xilinx, with the former acquired by Intel and the latter picking up a range of new users, including AWS.

In addition to exploring what having a high-end Xilinx FPGA available in the cloud means for adoption, we talked to a couple of companies that have carved

Building Intelligence into Machine Learning Hardware

Machine learning is a rising star in the compute constellation, and for good reason. It has the ability to not only make life more convenient – think email spam filtering, shopping recommendations, and the like – but also to save lives by powering the intelligence behind autonomous vehicles, heart attack prediction, etc. While the applications of machine learning are bounded only by imagination, the execution of those applications is bounded by the available compute resources. Machine learning is compute-intensive and it turns out that traditional compute hardware is not well-suited for the task.

Many machine learning shops have approached the

What it Takes to Build True FPGA as a Service

Amazon Web Services might be offering FPGAs in an EC2 cloud environment, but this is still a far cry from the FPGA-as-a-service vision many hold for the future. Nonetheless, it is a remarkable offering in terms of the bleeding-edge Xilinx accelerator. The real success of these FPGA (F1) instances now depends on pulling in the right partnerships and tools to snap a larger user base together—one that would ideally include non-FPGA experts.

In its F1 instance announcement this week, AWS made it clear that for the developer preview, there are only VHDL and Verilog programmer tools, which are very

The FPGA Accelerated Cloud Push Just Got Stronger

FPGAs have been an emerging topic on the application acceleration front over the last couple of years, but despite increased attention around use cases in machine learning and other hot areas, hands have been tied simply by the difficulty of onboarding with both the hardware and the software.

As we have covered here, this is changing, especially with the addition of OpenCL and other higher-level interfaces that let developers talk to FPGAs for both network and application double-duty. For that matter, getting systems with integrated capabilities to handle FPGAs just as they do GPUs (over PCIe) takes extra footwork as well.

Pushing Back Against Cheap and Deep Storage

It is not always easy, but several companies dedicated to the supercomputing market have managed to retune their wares to fit more mainstream market niches. This has been true across both the hardware and software subsets of high performance computing. Such efforts have been aided by a well-timed shift in enterprise needs toward more robust, compute- and data-intensive workhorses as new workloads, most of them driven by dramatic increases in data volumes and analytical capabilities, keep emerging.

For supercomputer makers, the story is a clear one. However, on the storage side, especially for those select few

Nvidia CEO’s “Hyper-Moore’s Law” Vision for Future Supercomputers

Over the last year in particular, we have documented the merger between high performance computing and deep learning and their various shared hardware and software ties. This next year promises far more on both horizons, and while GPU maker Nvidia might not have seen it coming to this extent when it was outfitting the former top “Titan” supercomputer with its first GPUs, the company sensed the convergence on the horizon when the first hyperscale deep learning shops were deploying CUDA and GPUs to train neural networks.

All of this portends an exciting year ahead and for once, the mighty CPU
