
Author Archives: Nicole Hemsoth

How Yahoo’s Internal Hadoop Cluster Does Double-Duty on Deep Learning

Five years ago, many bleeding edge IT shops had either implemented a Hadoop cluster for production use or at least had a cluster set aside to explore the mysteries of MapReduce and the HDFS storage system.

While it is not clear all these years later how many ultra-scale production Hadoop deployments actually exist (something we are analyzing for a later in-depth piece), those same shops are likely now trying to exploit the next big thing in the datacenter: machine learning, or, for the more intrepid, deep learning.

For those that were able to get large-scale Hadoop clusters into

How Yahoo’s Internal Hadoop Cluster Does Double-Duty on Deep Learning was written by Nicole Hemsoth at The Next Platform.

IBM Wants to Make Mainframes Next Platform for Machine Learning

Despite the emphasis on X86 clusters, large public clouds, accelerators for commodity systems, and the rise of open source analytics tools, there is a very large base of transactional processing and analysis that happens far from this landscape. This is the mainframe, and these fully integrated, optimized systems account for a large majority of the enterprise world’s most critical data processing for the largest companies in banking, insurance, retail, transportation, healthcare, and beyond.

With great memory bandwidth, I/O, powerful cores, and robust security, mainframes are still the supreme choice for business-critical operations at many Global 1000 companies, even if the

IBM Wants to Make Mainframes Next Platform for Machine Learning was written by Nicole Hemsoth at The Next Platform.

Intel Gets Serious About Neuromorphic, Cognitive Computing Future

Like all hardware device makers eager to meet the newest market opportunity, Intel is placing multiple bets on the future of machine learning hardware. The chipmaker has already cast its Xeon Phi and future integrated Nervana Systems chips into the deep learning pool while touting regular Xeons to do the heavy lifting on the inference side.

However, a recent conversation we had with Intel turned up a surprising new addition to the machine learning conversation—an emphasis on neuromorphic devices and what Intel is openly calling “cognitive computing” (a term used primarily—and heavily—for IBM’s Watson-driven AI technologies). This is the first

Intel Gets Serious About Neuromorphic, Cognitive Computing Future was written by Nicole Hemsoth at The Next Platform.

Making the Connections in Disparate Data

Enterprises are awash in data, and the number of sources of that data is only increasing. For some of the larger companies, data sources can rise into the thousands – from databases, files and tables to ERP and CRM programs – and the data itself can come in different formats, making it difficult to bring together and integrate into a unified pool. This can create a variety of challenges for businesses in everything from securing the data they have to analyzing it.

The problem isn’t going to go away. The rise of mobile and cloud computing and the Internet of

Making the Connections in Disparate Data was written by Nicole Hemsoth at The Next Platform.

Unwinding Moore’s Law from Genomics with Co-Design

More than almost any other market or research segment, genomics is vastly outpacing Moore’s Law.

The continued march of new sequencing and other instruments has created a flood of data, and the development of the DNA analysis software stack has turned that flood into a tsunami. For some, high performance genomic research can only move at the pace of innovation with custom hardware and software, co-designed and tuned for the task.

We have described efforts to build custom ASICs for sequence alignment, as well as using reprogrammable hardware for genomics research, but for centers that have defined workloads and are limited by performance constraints

Unwinding Moore’s Law from Genomics with Co-Design was written by Nicole Hemsoth at The Next Platform.

Pushing MPI into the Deep Learning Training Stack

We have written much about large-scale deep learning implementations over the last couple of years, but one question that is being posed with increasing frequency is how these workloads (training in particular) will scale to many nodes. While different companies, including Baidu and others, have managed to get their deep learning training clusters to scale across many GPU-laden nodes, for the non-hyperscale companies with their own development teams, this scalability is a sticking point.
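
To make the scaling problem concrete, here is a minimal, purely illustrative sketch (ours, not the specific stack any of these companies use) of the common data-parallel pattern: averaging gradients across nodes with an allreduce. The snippet assumes mpi4py and NumPy, and the array simply stands in for a flattened gradient buffer.

# Minimal sketch: averaging gradients across MPI ranks (mpi4py + NumPy).
# Each rank is assumed to have computed local_grads on its own shard of the batch.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
size = comm.Get_size()

# Stand-in for the gradients from one training step on this node's data shard.
local_grads = np.random.rand(1_000_000).astype(np.float32)

# Sum the gradient buffers from all ranks, then divide to get the average.
global_grads = np.empty_like(local_grads)
comm.Allreduce(local_grads, global_grads, op=MPI.SUM)
global_grads /= size

# Every rank now applies the same averaged update, keeping model replicas in sync.

Launched with something like mpirun -np 4 python train_step.py (a hypothetical file name), every node ends each step with identical weights.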

The answer to deep learning framework scalability can be found in the world of supercomputing. For the many nodes required for large-scale jobs, the de facto

Pushing MPI into the Deep Learning Training Stack was written by Nicole Hemsoth at The Next Platform.

Rise of China, Real-World Benchmarks Top Supercomputing Agenda

The United States for years was the dominant player in the high-performance computing world, with more than half of the systems on the Top500 list of the world’s fastest supercomputers being housed in the country. At the same time, most HPC systems around the globe were powered by technologies from such major US tech companies as Intel, IBM, AMD, Cray and Nvidia.

That has changed rapidly over the last several years, as the Chinese government has invested tens of billions of dollars to expand the capabilities of the country’s own technology community, with a promise to spend even more

Rise of China, Real-World Benchmarks Top Supercomputing Agenda was written by Nicole Hemsoth at The Next Platform.

One Small Shop, One Extreme HPC Storage Challenge

Being at the bleeding edge of computing in the life sciences does not always mean operating at extreme scale. For some shops, advancements in new data-generating scientific tools require forward thinking at the infrastructure level—even if it doesn’t require a massive cluster with exotic architectures. We tend to cover much of what happens at the extreme scale of computing here, but it’s worth stepping back and observing how dramatic problems in HPC are addressed in much smaller environments.

This “small shop, big problem” situation is familiar to the Van Andel Research Institute (VARI), which recently moved from a genomics and

One Small Shop, One Extreme HPC Storage Challenge was written by Nicole Hemsoth at The Next Platform.

Solving the Challenges of Scientific Clouds

In distributed computing, there are two choices: move the data to the computation or move the computation to the data. Public and off-site private clouds present a third option: move them both. In any case, something is moving somewhere. The “right” choice depends on a variety of factors – including performance, quantity, and cost – but data seems to have more inertia in many cases, especially when the volume approaches the scale of terabytes.
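
To make that inertia concrete, here is a rough back-of-the-envelope estimate (our illustration, with an assumed link efficiency, not a figure from the article) of how long bulk data takes to move over a given network link.

# Rough transfer-time estimate for moving a dataset over a network link.
# The efficiency factor is an assumption; real throughput rarely hits line rate.
def transfer_hours(data_terabytes: float, link_gbps: float, efficiency: float = 0.7) -> float:
    bits = data_terabytes * 8e12                      # decimal terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * efficiency)   # bits / effective bits per second
    return seconds / 3600

# Ten terabytes over a 10 Gb/s link at ~70% efficiency: roughly 3.2 hours.
print(f"{transfer_hours(10, 10):.1f} hours")

At those timescales, moving the computation to the data, or provisioning it where the data already lives, often looks more attractive.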

For the modern cloud, adding additional compute power is trivial. Moving the data to that compute power is less so. With a 10 gigabit connection, the

Solving the Challenges of Scientific Clouds was written by Nicole Hemsoth at The Next Platform.

Data, Analytics, Probabilities and the Super Bowl

Data is quickly becoming the coin of the realm in most aspects of the business world, and analytics the best way for organizations to cash in on it. It’s easy to be taken in by the systems and devices – much of the discussion around the Internet of Things tends to be around the things themselves, whether small sensors, mobile devices, self-driving cars or huge manufacturing systems. But the real value is in the data generated by these machines, and the ability to extract that data, analyze it and make decisions based on it in as close to real-time as

Data, Analytics, Probabilities and the Super Bowl was written by Nicole Hemsoth at The Next Platform.

Chip Makers and the China Challenge

China represents a big and growing market opportunity for IT vendors around the world. Its huge population and market upside compared with the more mature regions across the globe are hugely attractive to system and component makers, and the Chinese government’s willingness to spend money to help build up the country’s compute capabilities only adds to the allure. In addition, it is home to such hyperscale players as Baidu, Alibaba and Tencent, which, like US counterparts Google, Facebook and eBay, are building out massive datacenters that are housing tens of thousands of servers.

However, those same Chinese government officials aren’t

Chip Makers and the China Challenge was written by Nicole Hemsoth at The Next Platform.

Memory at the Core of New Deep Learning Research Chip

Over the last two years, there has been a push for novel architectures to feed the needs of machine learning and, more specifically, deep neural networks.

We have covered the many architectural options for both the training and inference sides of that workload here at The Next Platform, and in doing so, started to notice an interesting trend. Some companies with custom ASICs targeted at that market seemed to be developing along a common thread—using memory as the target for processing.

Processing in memory (PIM) architectures are certainly nothing new, but because the relatively simple logic units inside of

Memory at the Core of New Deep Learning Research Chip was written by Nicole Hemsoth at The Next Platform.

Many Life Sciences Workloads, One Single System

The trend at the high end, from supercomputer simulations to large-scale genomics studies, is to push heterogeneity and software complexity while reducing the overhead on the infrastructure side. This might sound like a case of dueling forces, but there is progress in creating a unified framework to run multiple workloads simultaneously on one robust cluster.

To put this into context from a precision medicine angle, Dr. Michael McManus shared his insights about the years he spent designing infrastructure for life sciences companies and research. Those fields have changed dramatically in just the last five years in terms of data

Many Life Sciences Workloads, One Single System was written by Nicole Hemsoth at The Next Platform.

Orchestrating HPC Engineering in the Cloud

Public clouds have proven useful to a growing number of organizations looking for ways to run their high-performance computing applications at scale without having to limit themselves to whatever computing capabilities they have in-house or to spending a lot of money to build up their infrastructure to meet their growing needs.

The big three – Amazon Web Services, Microsoft Azure and Google Cloud – have rolled out a broad array of compute, networking and storage technologies that companies can leverage when their HPC workloads scale to the point that they can no longer be run on their in-house workstations or

Orchestrating HPC Engineering in the Cloud was written by Nicole Hemsoth at The Next Platform.

ARM Server Chips Challenge X86 in the Cloud

The idea of ARM processors being used in datacenter servers has been kicking around for most of the decade. The low-power architecture dominates the mobile world of smartphones and tablets as well as embedded IoT devices, and with datacenters increasingly consuming more power and generating more heat, the idea of using highly efficient ARM chips in IT infrastructure systems gained steam.

That was furthered by the rise of cloud computing environments and hyperscale datacenters, which can be packed with tens of thousands of small servers running massive numbers of workloads. The thought of using ARM-based server chips that are more

ARM Server Chips Challenge X86 in the Cloud was written by Nicole Hemsoth at The Next Platform.

Veteran IT Journalist, Jeffrey Burt, Joins The Next Platform as Senior Editor

We are thrilled to announce the full-time addition of veteran IT journalist Jeffrey Burt to The Next Platform ranks.

Jeffrey Burt has been a journalist for more than 30 years, with the last 16-plus years spent writing about the IT industry. During his long tenure with eWeek, he covered a broad range of subjects, from processors and IT infrastructure to collaboration, PCs, AI and autonomous vehicles.

He’s written about FPGAs, supercomputers, hyperconverged infrastructure and SDN, cloud computing, deep learning and exascale computing. Regular readers here will recognize that his expertise in these areas fits in directly with our coverage

Veteran IT Journalist, Jeffrey Burt, Joins The Next Platform as Senior Editor was written by Nicole Hemsoth at The Next Platform.

OpenCL Opens Doors to Deep Learning Training on FPGA

Hardware and device makers are in a mad dash to create or acquire the perfect chip for performing deep learning training and inference. While we have yet to see anything that can handle both parts of the workload on a single chip with spectacular results (the Pascal generation GPUs are the closest thing yet, with threats coming from Intel/Nervana in the future), there is promise for FPGAs to find inroads.

So far, most of the work we have focused on for FPGAs and deep learning has centered more on the acceleration of inference versus boosting training times and accuracy

OpenCL Opens Doors to Deep Learning Training on FPGA was written by Nicole Hemsoth at The Next Platform.

IARPA Spurs Race to Speed Cryogenic Computing Reality

The race is on to carve a path to efficient extreme-scale machines in the next five years, but existing processing approaches fall far short of the efficiency and performance targets required. As we reported at the end of 2016, the Department of Energy in the U.S. is keeping its eye on non-standard processing approaches for one of its exascale-class systems by 2021, and other groups, including the IEEE, are equally keeping pace with new architectures to explore as CMOS alternatives.

While there is no silver bullet technology yet that we expect will sweep current computing norms, superconducting circuits appear

IARPA Spurs Race to Speed Cryogenic Computing Reality was written by Nicole Hemsoth at The Next Platform.

Looking Through the Windows at HPC OS Trends

High performance computing (HPC) is traditionally considered the domain of large, purpose-built machines running some *nix operating system (predominantly Linux in recent years). Windows is given little, if any, consideration. Indeed, it has never accounted for even a full percent of the Top500 list. Some of this may be due to technical considerations: Linux can be custom built for optimum performance, including recompiling the kernel. It is also historically more amenable to headless administration, which is a critical factor when maintaining thousands of nodes.

But at some point does the “Windows isn’t for high-performance computing” narrative become self-fulfilling?

Looking Through the Windows at HPC OS Trends was written by Nicole Hemsoth at The Next Platform.

A Case for CPU-Only Approaches to HPC, Analytics, Machine Learning

With the current data science boom, many companies and organizations are stepping outside of their traditional business models to scope work that applies rigorous quantitative methodology and machine learning – areas of analysis previously in the realm of HPC organizations.

Dr. Franz Kiraly, an inaugural Faculty Fellow at the Alan Turing Institute, observed at the recent Intel HPC developer conference that companies are not necessarily struggling with “big” data, but rather with data management issues as they begin to systematically and electronically collect specific data in one place that makes analytics feasible. These companies, as newcomers to “machine learning” and

A Case for CPU-Only Approaches to HPC, Analytics, Machine Learning was written by Nicole Hemsoth at The Next Platform.
