
Category Archives for "The Next Platform"

Docker Inevitably Embraces Kubernetes Container Orchestration

Sometimes you can beat them, and sometimes you can join them. If you are Docker, the commercial entity behind the Docker container runtime and a stack of enterprise-class software that wraps around it, and you are facing the rising popularity of the Kubernetes container orchestrator open sourced by Google, you can do both. And so, even though it has its own Swarm orchestration layer, Docker is embracing Kubernetes as a peer to Swarm in its own stack.

This is not an either/or proposition, and in fact, the way that the company has integrated Kubernetes inside of Docker Enterprise Edition, the

Docker Inevitably Embraces Kubernetes Container Orchestration was written by Timothy Prickett Morgan at The Next Platform.

GPUs Mine Astronomical Datasets For Golden Insight Nuggets

As humankind continues to stare into the dark abyss of deep space in an eternal quest to understand our origins, new computational tools and technologies are needed at unprecedented scales. Gigantic datasets from advanced high resolution telescopes and huge scientific instrumentation installations are overwhelming classical computational and storage techniques.

This is the key issue with exploring the Universe – it is very, very large. Combining advances in machine learning and high speed data storage is starting to provide levels of insight that were previously in the realm of pure science fiction. Using computer systems to infer knowledge

GPUs Mine Astronomical Datasets For Golden Insight Nuggets was written by James Cuff at The Next Platform.

Another Step In Building The HPC Ecosystem For Arm

Many of us are impatient for Arm processors to take off in the datacenter in general and in HPC in particular. And ever so slowly, it looks like it is starting to happen.

Every system buyer wants choice because choice increases competition, which lowers cost and mitigates risk. But no organization, no matter how large, can afford to build its own software ecosystem. Even the hyperscalers like Google and Facebook, who literally make money on the apps running on their vast infrastructure, rely heavily on the open source community, taking as much as they give back. So it is

Another Step In Building The HPC Ecosystem For Arm was written by Timothy Prickett Morgan at The Next Platform.

Pushing Up The Scale For Hyperconverged Storage

Hyperconverged storage is a hot commodity right now. Enterprises want to dump their disk arrays and get an easier and less costly way to scale the capacity and performance of their storage to keep up with application demands. Nutanix has become a significant player in a space where established vendors like Hewlett Packard Enterprise, Dell EMC, and Cisco Systems are broadening their portfolios and capabilities.

But as hyperconverged infrastructure (HCI) becomes increasingly popular and begins moving up from midrange environments into larger enterprises, challenges are becoming evident, from the need to bring in new – and at times

Pushing Up The Scale For Hyperconverged Storage was written by Jeffrey Burt at The Next Platform.

The Battle Of The InfiniBands, Part Two

For decades, the IT market has been obsessed with the competition between suppliers of processors, but the rivalries between the makers of networking chips and of the full-blown switches based on them are just as intense. Such a rivalry exists between the InfiniBand chips from Mellanox Technologies and the Omni-Path chips from Intel, which are based on technologies Intel got six years ago when it acquired the InfiniBand business from QLogic for $125 million.

At the time, we quipped that AMD needed to buy Mellanox, but instead AMD turned right around and shelled out $334 million to

The Battle Of The InfiniBands, Part Two was written by Timothy Prickett Morgan at The Next Platform.

Building Bigger, Faster GPU Clusters Using NVSwitches

Nvidia launched its second-generation DGX system in March. In order to build the 2 petaflops half-precision DGX-2, Nvidia first had to design and build a new NVLink 2.0 switch chip, named NVSwitch. While Nvidia is only shipping NVSwitch as an integral component of its DGX-2 systems today, it has not precluded selling NVSwitch chips to datacenter equipment manufacturers.

This article will answer many of the questions we asked in our first look at the NVSwitch chip, using DGX-2 as an example architecture.

Nvidia’s NVSwitch is a two-billion transistor non-blocking switch design incorporating 18 complete NVLink 2.0 ports
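
Those headline numbers can be sanity-checked with a little arithmetic. The sketch below is ours, not Nvidia's: the 25 GB/s per-direction NVLink 2.0 rate, the 125 teraflops half-precision tensor peak of a single "Volta" V100, and the count of 16 GPUs per DGX-2 are published figures we are assuming, not numbers from this excerpt.

```python
# Back-of-envelope check on the NVSwitch and DGX-2 headline numbers.
# Assumed inputs (Nvidia's published specs, not from the excerpt above):
nvlink2_gbs_per_dir = 25   # GB/s each way on one NVLink 2.0 port
v100_fp16_tflops = 125     # half-precision tensor peak of one V100
gpus_per_dgx2 = 16         # V100 count in a DGX-2

ports = 18                 # NVLink 2.0 ports per NVSwitch, from the excerpt

# Aggregate bidirectional bandwidth through a single NVSwitch:
print(f"{ports * nvlink2_gbs_per_dir * 2} GB/s per NVSwitch")      # 900 GB/s

# The "2 petaflops half-precision" claim for the whole DGX-2:
print(f"{gpus_per_dgx2 * v100_fp16_tflops / 1000:.0f} petaflops")  # 2 petaflops
```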

Building Bigger, Faster GPU Clusters Using NVSwitches was written by Timothy Prickett Morgan at The Next Platform.

The Evolution Of Hyperconverged Storage To Composable Systems

Hyperconverged infrastructure in some ways is like the credit card in those old TV ads: in this case, it’s everywhere that enterprises want to be. HCI puts compute and storage on the same cluster, tightly integrates them with networking and unified management tools, and essentially gives enterprises a private cloud for the datacenter as well as pushing compute out to the edges in a consistent manner.

HCI also promises a bunch of other things beneficial to enterprises, including streamlined management, lower costs, faster speeds, and easier scalability than traditional IT systems to better address the rise of cloud computing, analytics,

The Evolution Of Hyperconverged Storage To Composable Systems was written by Jeffrey Burt at The Next Platform.

HPC Provides Big Bang, But Needs Big Bucks, Too

Supercomputers keep getting faster, but they also keep getting more expensive. This is a problem, and it is one that is eventually going to affect every kind of computer until we get a new technology that is not based on CMOS chips.

The general budget and some of the feeds and speeds are out thanks to the requests for proposal for the “Frontier” and “El Capitan” supercomputers that will eventually be built for Oak Ridge National Laboratory and Lawrence Livermore National Laboratory. So now is a good time to take a look at not just the historical performance of capability

HPC Provides Big Bang, But Needs Big Bucks, Too was written by Timothy Prickett Morgan at The Next Platform.

Dell EMC and Fujitsu Roll Intel FPGAs Into Servers

Nvidia caused a shift in high-end computing more than a decade ago when it introduced its general-purpose GPUs and CUDA development platform to work with CPUs to increase the performance of compute-intensive workloads in HPC and other environments and drive greater energy efficiencies in datacenters.

Nvidia and to a lesser extent AMD, with its Radeon GPUs, took advantage of the growing demand for more speed and less power consumption to build out their portfolios of GPU accelerators and expand their use in a range of systems, to the point where in the last Top500 list of the world’s fastest

Dell EMC and Fujitsu Roll Intel FPGAs Into Servers was written by Jeffrey Burt at The Next Platform.

Talking Up the Expanding Markets for GPU Compute

There is a direct correlation between the length of time that Nvidia co-founder and chief executive officer Jensen Huang speaks during the opening keynote of each GPU Technology Conference and the total addressable market of accelerated computing based on GPUs.

This stands to reason since the market for GPU compute is expanding. We won’t discuss which is the cause and which is the effect. Or maybe we will.

It all started with offloading the parallel chunks of HPC applications from CPUs to GPUs in the early 2000s in academia, which were then first used in production HPC centers a decade
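
For readers who have not seen that offload model in practice, here is a minimal sketch of the pattern: ship the parallel chunk to the GPU, compute, and copy the result back. CuPy is our stand-in here purely for brevity; the early academic work described above was done in CUDA C, so treat this as illustrative rather than historical.

```python
# A minimal sketch of the CPU-to-GPU offload pattern (assumes CuPy and a CUDA GPU).
import numpy as np
import cupy as cp

# The "parallel chunk" of the application: a dense matrix multiply.
a_host = np.random.rand(2048, 2048).astype(np.float32)
b_host = np.random.rand(2048, 2048).astype(np.float32)

a_dev = cp.asarray(a_host)    # host -> device copy
b_dev = cp.asarray(b_host)
c_dev = a_dev @ b_dev         # the multiply executes on the GPU
c_host = cp.asnumpy(c_dev)    # device -> host copy of the result
```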

Talking Up the Expanding Markets for GPU Compute was written by Timothy Prickett Morgan at The Next Platform.

Will Chapel Mark Next Great Awakening for Parallel Programmers?

Just because supercomputers are engineered to be far more powerful over time does not necessarily mean programmer productivity will follow the same curve. Added performance means more complexity, which for parallel programmers means working harder, especially when accelerators are thrown into the mix.

It was with all of this in mind that DARPA’s High Productivity Computing Systems (HPCS) project rolled out in 2002 to support higher performance but more usable HPC systems. This is the era that spawned systems like Blue Waters, for instance, and various co-design efforts from IBM and Intel to bring parallelism within broader reach of the

Will Chapel Mark Next Great Awakening for Parallel Programmers? was written by Nicole Hemsoth at The Next Platform.

Riding the AI Cycle Instead of Building It

We all remember learning to ride a bike. Those early wobbly moments with “experts” holding on to your seat while you furiously pedaled and tugged away at the handlebars trying to find your own balance.

Training wheels were the obvious hardware choice for those unattended and slightly dangerous practice sessions. Training wheel hardware was often installed by your then “expert” in an attempt to avoid your almost inevitable trip to the ER. Eventually one day, often without planning, you no longer needed the support, and you could make it all happen on your own.

Today, AI and ML need this

Riding the AI Cycle Instead of Building It was written by James Cuff at The Next Platform.

Bidders Off And Running After $1.8 Billion DOE Exascale Super Deals

Supercomputer makers have been on their exascale marks, and they have been getting ready, and now the US Department of Energy has just said “Go!”

The requests for proposal have been opened up for two more exascale systems, with a budget ranging from $800 million to $1.2 billion for a pair of machines to be installed at Oak Ridge National Laboratory and Lawrence Livermore National Laboratory and a possible sweetener of anywhere from $400 million to $600 million that, provided funding can be found, allows Argonne National Laboratory to also buy yet another exascale machine.

Oak Ridge, Argonne, and Livermore
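
To see where the $1.8 billion in the headline comes from, a quick tally of the ranges quoted above is enough; the dollar figures are straight from the excerpt, and only the addition is ours.

```python
# Tallying the RFP budget ranges quoted above: the headline $1.8 billion
# is the sum of the two upper bounds.
pair_low, pair_high = 800e6, 1.2e9       # Oak Ridge + Livermore pair
extra_low, extra_high = 400e6, 600e6     # possible Argonne sweetener

total_low = pair_low + extra_low         # $1.2 billion floor
total_high = pair_high + extra_high      # $1.8 billion ceiling
print(f"${total_low / 1e9:.1f}B to ${total_high / 1e9:.1f}B")
```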

Bidders Off And Running After $1.8 Billion DOE Exascale Super Deals was written by Timothy Prickett Morgan at The Next Platform.

Deep Learning In R: Documentation Drives Algorithms

Hard to believe, but the R programming language has been with us since 1993.

A quarter century has now passed since the authors Gentleman and Ihaka originally conceived the R platform as an implementation of the S programming language.

Continuous global software development has taken the original concepts, inspired by John Chambers’ S from 1975, to now include parallel computing, bioinformatics, social science, and more recently complex AI and deep learning methods. Layers have been built on top of layers, and today’s R looks nothing like 1990s R.

So where are we at, especially with the emerging opportunities

Deep Learning In R: Documentation Drives Algorithms was written by James Cuff at The Next Platform.

Open Compute Iron Is All About Acceleration

The Open Compute Project (OCP) held its 9th annual US Summit recently, with 3,441 registered attendees this year. While that might seem small for a top-tier high tech event, there were also 80 exhibitors representing most of the cloud datacenter supply chain, plus a host of outstanding technical sessions. We are always on the hunt for new iron, and not surprisingly the most important gear we saw at OCP this year was related to compute acceleration.

Here is how that new iron we saw breaks down across the major trends in acceleration.

The first interesting thing we saw was a

Open Compute Iron Is All About Acceleration was written by Timothy Prickett Morgan at The Next Platform.

MapD Fires Up GPU Cloud Service

In the long run, provided there are enough API pipes into the code, software as a service might be the most popular way to consume applications and systems software for all but the largest organizations that are running at such a scale that they can command almost as good prices for components as the public cloud intermediaries. The hassle of setting up and managing complex code is in a lot of cases larger than the volume pricing benefits of doing it yourself. The difference can be a profit margin for both cloud builders and the software companies that peddle their

MapD Fires Up GPU Cloud Service was written by Timothy Prickett Morgan at The Next Platform.

New Approaches to Optimizing Workflow Automation

Workflow automation has been born of necessity and has evolved an increasingly sophisticated set of tools to manage the growing complexity of the automation itself.

The same theme keeps emerging across the broader spectrum of enterprise and research IT. For instance, we spoke recently about the need to profile software and algorithms when billions of events per iteration are generated from modern GPU systems. This is a similar challenge, and fortunately, not all traditional or physical business processes fall into this scale bucket. Many are much less data intensive, but can have such a critical impact on “time to

New Approaches to Optimizing Workflow Automation was written by James Cuff at The Next Platform.

AWS Puts More Muscle Behind Machine Learning And Database

Amazon Web Services essentially sparked the public cloud race a dozen years ago when it first launched the Elastic Compute Cloud (EC2) service and then in short order the Simple Storage Service (S3), giving enterprises access to the large amounts of compute and storage resources that its giant retail business leaned on.

Since that time, AWS has grown rapidly in the number of services it offers, the number of customers it serves, the amount of money it brings in and the number of competitors – including Microsoft, IBM, Google, Alibaba, and Oracle – looking to chip away
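
As a reminder of what those two founding services actually expose, here is a minimal sketch against the boto3 Python SDK; it assumes AWS credentials and a default region are already configured in the environment.

```python
# EC2 for rentable compute, S3 for object storage: the original AWS pairing.
# Assumes AWS credentials and a default region are configured.
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

reservations = ec2.describe_instances()["Reservations"]   # current compute
buckets = s3.list_buckets()["Buckets"]                    # current storage
print(f"{len(reservations)} EC2 reservations, {len(buckets)} S3 buckets")
```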

AWS Puts More Muscle Behind Machine Learning And Database was written by Jeffrey Burt at The Next Platform.

Inside Nvidia’s NVSwitch GPU Interconnect

At the GPU Technology Conference last week, we told you all about the new NVSwitch memory fabric interconnect that Nvidia has created to link multiple “Volta” GPUs together and that is at the heart of the DGX-2 system that the company has created to demonstrate its capabilities and to use on its internal Saturn V supercomputer at some point in the future.

Since the initial announcements, Nvidia has revealed more about NVSwitch, including details of the chip itself and how it helps applications wring a lot more performance out of the GPU accelerators.

Our first observation upon looking

Inside Nvidia’s NVSwitch GPU Interconnect was written by Timothy Prickett Morgan at The Next Platform.

Another Step Toward FPGAs in Supercomputing

There has been plenty of talk about where FPGA acceleration might fit into high performance computing, but there are only a few testbeds and purpose-built clusters pushing this vision forward for scientific applications.

While we do not necessarily expect supercomputing centers to turn their backs on GPUs as the accelerator of choice in favor of FPGAs anytime in the foreseeable future, there is some smart investment happening in Europe and, to a lesser extent, in the U.S. that takes advantage of recent hardware additions and high-level tool development that put field programmable devices within closer reach – even for centers whose users

Another Step Toward FPGAs in Supercomputing was written by Nicole Hemsoth at The Next Platform.