Category Archives for "The Next Platform"

Skylake Xeon Ramp Cuts Into Intel’s Datacenter Profits

Every successive processor generation presents its own challenges to all chip makers, and the ramp of the 14 nanometer process that will be used in the future “Skylake” Xeon processors, due in the second half of this year, cut into the operating profits of Intel’s Data Center Group in the final quarter of 2016. Intel also apparently had an issue with one of its chip lines – it did not say if it was a Xeon or Xeon Phi, or detail what that issue was – that needed to be fixed, and that hurt Data Center Group’s middle line, too.

Still,

Skylake Xeon Ramp Cuts Into Intel’s Datacenter Profits was written by Timothy Prickett Morgan at The Next Platform.

A Case for CPU-Only Approaches to HPC, Analytics, Machine Learning

With the current data science boom, many companies and organizations are stepping outside of their traditional business models to scope work that applies rigorous quantitative methodology and machine learning – areas of analysis previously in the realm of HPC organizations.

Dr. Franz Kiraly, an inaugural Faculty Fellow at the Alan Turing Institute, observed at the recent Intel HPC developer conference that companies are not necessarily struggling with “big” data, but rather with data management issues as they begin to systematically and electronically collect specific data in one place, which makes analytics feasible. These companies, as newcomers to “machine learning” and

A Case for CPU-Only Approaches to HPC, Analytics, Machine Learning was written by Nicole Hemsoth at The Next Platform.

AWS Outlines Current HPC Cloud User Trends

Last week we discussed the projected momentum for FPGAs in the cloud with Deepak Singh, general manager of container and HPC projects at Amazon Web Services. In the second half of our interview, we delved into the current state of high performance computing on Amazon’s cloud.

While the company tends to offer generalizations rather than specific breakdowns of “typical” workloads for different HPC application types, the insight reveals a continued emphasis on pushing new instances to feed both HPC and machine learning, a continued drive to push ISVs to expand their license models, and continued work to make running complex workflows more seamless.

AWS Outlines Current HPC Cloud User Trends was written by Nicole Hemsoth at The Next Platform.

FPGAs Focal Point for Efficient Neural Network Inference

Over the last couple of years, we have focused extensively on the hardware required for training deep neural networks and other machine learning algorithms. Focal points have included the use of general purpose and specialized CPUs, GPUs, custom ASICs, and more recently, FPGAs.

As the battle to match the correct hardware devices for these training workloads continues, another has flared up on the deep learning inference side. Training neural networks has its own challenges that can be met with accelerators, but for inference, the efficiency, performance, and accuracy need to be in balance.

One developing area in inference is in

FPGAs Focal Point for Efficient Neural Network Inference was written by Nicole Hemsoth at The Next Platform.

Heating Up the Exascale Race by Staying Cool

High performance computing is a hot field, and not just in the sense that it gets a lot of attention. The hardware necessary to run the countless simulations performed every day consumes a lot of power, most of which is turned into heat. How to handle all of that heat is a subject that is always on the mind of facilities managers. If the thermal energy is not moved elsewhere in short order, the delicate electronics that comprise the modern computer will cease to function.

The computer room air handler (CRAH) is the usual approach. Chilled water cools the air, which

Heating Up the Exascale Race by Staying Cool was written by Nicole Hemsoth at The Next Platform.

IBM Reorg Forges Cognitive Systems, Merges Cloud And Analytics

It is the first month of a new year, and this is the time when IBM traditionally reorganizes its business lines and plays musical chairs with its executives to reconfigure itself for the coming year. And just like clockwork, late last week the top brass at Big Blue made internal announcements explaining the changes it is making to transform its wares into a platform better suited to the times.

The first big change, and one that may have precipitated all of the others that have been set in place, is Robert LeBlanc, who is the senior vice president

IBM Reorg Forges Cognitive Systems, Merges Cloud And Analytics was written by Timothy Prickett Morgan at The Next Platform.

Refining Oil and Gas Discovery with Deep Learning

Over the last two years, we have highlighted deep learning use cases in enterprise areas including genomics, large-scale business analytics, and beyond, but many market areas are still building a profile for where such approaches fit into existing workflows. Even though model training and inference might be useful on their own, for areas that have complex simulation-driven workflows, the great efficiencies that deep neural nets could bring depend on integrating those elements, and that integration is difficult.

The oil and gas industry is one area where deep learning holds promise, at least in theory. For some steps in the resource

Refining Oil and Gas Discovery with Deep Learning was written by Nicole Hemsoth at The Next Platform.

Multi-Threaded Programming By Hand Versus OpenMP

For a long time now, researchers have been working on automating the process of breaking up otherwise single-threaded code to run on multiple processors by way of multiple threads. The results, although occasionally successful, have fallen well short of anything approaching a unified theory of everything.

Still, there appears to be some interesting success via OpenMP. The good thing about OpenMP is that its developers realized that what is really necessary is for the C or Fortran programmer to provide just enough hints to the compiler that say “Hey, this otherwise single-threaded loop, this sequence of code, might benefit from being split amongst multiple

Multi-Threaded Programming By Hand Versus OpenMP was written by Timothy Prickett Morgan at The Next Platform.

AWS Details FPGA Rationale and Market Trajectory

At the end of 2016, Amazon Web Services announced it would be making high-end Xilinx FPGAs available via a cloud delivery model, beginning first in a developer preview mode before branching with higher-level tools to help potential new users onboard and experiment with FPGA acceleration as the year rolls on.

As Deepak Singh, general manager of container and HPC projects at AWS, tells The Next Platform, the application areas where the most growth is expected for cloud-based FPGAs are many of the same ones we detailed in our recent book, FPGA Frontiers: New Applications in Reconfigurable Computing. These

AWS Details FPGA Rationale and Market Trajectory was written by Nicole Hemsoth at The Next Platform.

BSC’s Mont Blanc 3 Puts ARM Inside Bull Sequana Supers

The HPC industry has been waiting a long time for the ARM ecosystem to mature enough to yield real-world clusters, with hundreds or thousands of nodes and running a full software stack, as a credible alternative to clusters based on X86 processors. But the wait is almost over, particularly if the Mont-Blanc 3 system that will be installed by the Barcelona Supercomputing Center is any indication.

BSC has never been shy about trying new architectures in its clusters, and the original Mare Nostrum super that was installed a decade ago and that ranked fifth on the Top 500 list when it

BSC’s Mont Blanc 3 Puts ARM Inside Bull Sequana Supers was written by Timothy Prickett Morgan at The Next Platform.

The New IBM Glass Is Almost Half Full

It takes an incredible amount of resilience for any company to make it decades, much less more than a century, in any industry. IBM has taken big risks to create new markets, first with time clocks and meat slicers and tabulating machines early in the last century, and some decades later it created the modern computer industry with the System/360 mainframe. It survived a near-death experience in the middle 1990s when the IT industry was changing faster than it was, and now it is trying to find its footing in cognitive computing and public and private clouds as its legacy

The New IBM Glass Is Almost Half Full was written by Timothy Prickett Morgan at The Next Platform.

HPE Gets Serious About Hyperconverged Storage With SimpliVity Buy

Rumors have been running around for months that Hewlett Packard Enterprise was shopping for a way to be a bigger player in the hyperconverged storage arena, and the recent scuttlebutt was that HPE was considering paying close to $4 billion for one of the larger players in server-storage hybrids. This turns out not to be true. HPE is paying only $650 million to snap up what was, until now, thought to be one of Silicon Valley’s dozen or so unicorns with a valuation of over $1 billion.

It is refreshing to see that HPE is not overpaying for an

HPE Gets Serious About Hyperconverged Storage With SimpliVity Buy was written by Timothy Prickett Morgan at The Next Platform.

The Essence Of Multi-Threaded Applications

In the prior two articles in this series, we have gone through the theory behind programming multi-threaded applications, with the management of shared memory being accessed by multiple threads, and of even creating those threads in the first place. Now, we need to put one such multi-threaded application together and see how it works. You will find that the pieces fall together remarkably easily.

If we wanted to build a parallel application using multiple threads, we would likely first think of one where we split up a loop amongst the threads. We will be looking at such later in a

The Essence Of Multi-Threaded Applications was written by Timothy Prickett Morgan at The Next Platform.

Adapting InfiniBand for High Performance Cloud Computing

When it comes to low-latency interconnects for high performance computing, InfiniBand immediately springs to mind. On the most recent Top 500 list, over 37 percent of systems used some form of InfiniBand – the highest representation of any interconnect family. Since 2009, InfiniBand has occupied between 30 percent and 51 percent of every Top 500 list.

But when you look to the clouds, InfiniBand is hard to find. Of the three major public cloud offerings (Amazon Web Services, Google Cloud, and Microsoft Azure), only Azure currently has an InfiniBand offering. Some smaller players do as well (ProfitBricks, for example), but it’s

Adapting InfiniBand for High Performance Cloud Computing was written by Nicole Hemsoth at The Next Platform.

On Premises Object Storage Mimics Big Public Clouds

Object storage is not a new concept, but this type of storage architecture is beginning to garner more attention from large organizations as they grapple with the difficulties of managing increasingly large volumes of unstructured data gathered from applications, social media, and myriad other sources.

The properties of object-based storage systems mean that they can scale easily to handle hundreds or even thousands of petabytes of capacity if required. Throw in the fact that object storage can be less costly in terms of management overhead (somewhere around 20 percent, so that means needing to buy 20 percent less capacity

On Premises Object Storage Mimics Big Public Clouds was written by Timothy Prickett Morgan at The Next Platform.

FPGA Frontiers: New Applications in Reconfigurable Computing

There is little doubt that this is a new era for FPGAs.

While it is not news that FPGAs have been deployed in many different environments, particularly on the storage and networking side, there are fresh use cases emerging in part due to much larger datacenter trends. Energy efficiency, scalability, and the ability to handle vast volumes of streaming data are more important now than ever before. At a time when traditional CPUs are facing a future where Moore’s Law is less certain and other accelerators and custom ASICs are potential solutions with their own sets of expenses and hurdles,

FPGA Frontiers: New Applications in Reconfigurable Computing was written by Nicole Hemsoth at The Next Platform.

Looking Ahead To The Next Platforms That Will Define 2017

The old Chinese saying, “May you live in interesting times,” is supposed to be a curse, uttered perhaps with a wry smile and a glint in the eye. But it is also, we think, a blessing, particularly in an IT sector that could use some constructive change.

Considering how much change the tech market, and therefore the companies, governments, and educational institutions of the world, have had to endure in the past three decades – and the accelerating pace of change over that period – you might be thinking it would be good to take a breather, to coast for

Looking Ahead To The Next Platforms That Will Define 2017 was written by Nicole Hemsoth at The Next Platform.

The Interplay Of HPC Interconnects And CPU Utilization

Choosing the right interconnect for high-performance compute and storage platforms is critical for achieving the highest possible system performance and overall return on investment.

Over time, interconnect technologies have become more sophisticated and include more intelligent capabilities (offload engines), which enable the interconnect to do more than just transfer data. An intelligent interconnect can increase system efficiency; an interconnect with offload engines (an offload interconnect) dramatically reduces CPU overhead, allowing more CPU cycles to be dedicated to applications and thereby enabling higher application performance and user productivity.

Today, interconnect technology has become more critical than ever before, due to a number

The Interplay Of HPC Interconnects And CPU Utilization was written by Timothy Prickett Morgan at The Next Platform.

Bolstering Lustre on ZFS: Highlights of Continuing Work

The Zettabyte File System (ZFS) has long been supported as a back-end file system for Lustre. But in the last few years it has gained greater importance, likely due to Lustre’s push into the enterprise and the increasing demands by both enterprise and non-enterprise IT to add more reliability and flexibility features to Lustre. As a result, ZFS has seen significant traction in recent Lustre deployments.

However, over the last 18 months, a few challenges have been the focus of several open source projects in the Lustre developer community to improve performance, align the enterprise-grade features in ZFS

Bolstering Lustre on ZFS: Highlights of Continuing Work was written by Nicole Hemsoth at The Next Platform.

What Is SMP Without Shared Memory?

This is the second article in the series on the essentials of multiprocessor programming. This time around, we are going to look at some of the normally little-considered effects of having memory shared by a lot of processors and by the work concurrently executing there.

We start, though, by observing that there has been quite a market for parallelizing applications even when they do not share data. There has been remarkable growth in applications capable of using multiple distributed memory systems for parallelism, and interestingly, that very nicely demonstrates the opportunity that exists for using massive compute capacity

What Is SMP Without Shared Memory? was written by Timothy Prickett Morgan at The Next Platform.