
Category Archives for "The Next Platform"

Riding the AI Cycle Instead of Building It

We all remember learning to ride a bike. Those early wobbly moments with “experts” holding on to your seat while you furiously pedaled and tugged away at the handlebars, trying to find your own balance.

Training wheels were the obvious hardware choice for those unattended and slightly dangerous practice sessions. That hardware was often installed by your then “expert” in an attempt to avoid your almost inevitable trip to the ER. Eventually, one day, often without planning, you no longer needed the support, and you could make it all happen on your own.

Today, AI and ML need this…

Riding the AI Cycle Instead of Building It was written by James Cuff at The Next Platform.

Bidders Off And Running After $1.8 Billion DOE Exascale Super Deals

Supercomputer makers have been on their exascale marks and getting set, and now the US Department of Energy has just said “Go!”

The requests for proposal have been opened up for two more exascale systems, with a budget ranging from $800 million to $1.2 billion for a pair of machines to be installed at Oak Ridge National Laboratory and Lawrence Livermore National Laboratory. There is also a possible sweetener of anywhere from $400 million to $600 million that, provided funding can be found, would allow Argonne National Laboratory to buy yet another exascale machine. Take the top of both ranges and you get the $1.8 billion headline figure.

Oak Ridge, Argonne, and Livermore…

Bidders Off And Running After $1.8 Billion DOE Exascale Super Deals was written by Timothy Prickett Morgan at The Next Platform.

Deep Learning In R: Documentation Drives Algorithms

Hard to believe, but the R programming language has been with us since 1993.

A quarter century has now passed since Ross Ihaka and Robert Gentleman originally conceived the R platform as an implementation of the S programming language.

Continuous global software development has taken the original concepts, inspired by John Chambers’ S language of 1975 and the scoping semantics of Scheme, to now include parallel computing, bioinformatics, social science, and more recently complex AI and deep learning methods. Layers have been built on top of layers, and today’s R looks nothing like 1990s R.

So where are we now, especially with the emerging opportunities…

Deep Learning In R: Documentation Drives Algorithms was written by James Cuff at The Next Platform.

Open Compute Iron Is All About Acceleration

The Open Compute Project (OCP) held its 9th annual US Summit recently, with 3,441 registered attendees this year. While that might seem small for a top-tier high tech event, there were also 80 exhibitors representing most of the cloud datacenter supply chain, plus a host of outstanding technical sessions. We are always on the hunt for new iron, and not surprisingly the most important gear we saw at OCP this year was related to compute acceleration.

Here is how that new iron we saw breaks down across the major trends in acceleration.

The first interesting thing we saw was a…

Open Compute Iron Is All About Acceleration was written by Timothy Prickett Morgan at The Next Platform.

MapD Fires Up GPU Cloud Service

In the long run, provided there are enough API pipes into the code, software as a service might be the most popular way to consume applications and systems software for all but the largest organizations – those running at such a scale that they can command almost as good prices for components as the public cloud intermediaries. The hassle of setting up and managing complex code is, in a lot of cases, larger than the volume pricing benefits of do-it-yourself. The difference can be a profit margin for both cloud builders and the software companies that peddle their…

MapD Fires Up GPU Cloud Service was written by Timothy Prickett Morgan at The Next Platform.

New Approaches to Optimizing Workflow Automation

Workflow automation was born of necessity and has evolved an increasingly sophisticated set of tools to manage the growing complexity of the automation itself.

The same theme keeps emerging across the broader spectrum of enterprise and research IT. For instance, we spoke recently about the need to profile software and algorithms when billions of events per iteration are generated from modern GPU systems. This is a similar challenge. Fortunately, not all traditional or physical business processes fall into this scale bucket. Many are much less data intensive, but can have such a critical impact on “time to…

New Approaches to Optimizing Workflow Automation was written by James Cuff at The Next Platform.

AWS Puts More Muscle Behind Machine Learning And Database

Amazon Web Services essentially sparked the public cloud race a dozen years ago when it first launched the Elastic Compute Cloud (EC2) service and then, in short order, the Simple Storage Service (S3), giving enterprises access to the large amounts of compute and storage resources that its giant retail business leaned on.

Since that time, AWS has grown rapidly in the number of services it offers, the number of customers it serves, the amount of money it brings in, and the number of competitors – including Microsoft, IBM, Google, Alibaba, and Oracle – looking to chip away…

AWS Puts More Muscle Behind Machine Learning And Database was written by Jeffrey Burt at The Next Platform.

Inside Nvidia’s NVSwitch GPU Interconnect

At the GPU Technology Conference last week, we told you all about the new NVSwitch memory fabric interconnect that Nvidia has created to link multiple “Volta” GPUs together, and which sits at the heart of the DGX-2 system the company built to demonstrate its capabilities and, at some point in the future, to use on its internal Saturn V supercomputer.

Since the initial announcements, Nvidia has revealed more about NVSwitch, including details of the chip itself and how it helps applications wring a lot more performance out of the GPU accelerators.

Our first observation upon looking…
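
While we wait for the full picture, a minimal back-of-the-envelope sketch in Python shows the bandwidth math at work. The per-port bandwidth and port counts below are our assumptions drawn from Nvidia’s public GTC materials, not figures confirmed in this excerpt:

    # Back-of-the-envelope NVSwitch bandwidth math (assumed figures).
    NVLINK_BW_GBS = 50        # assumed bidirectional bandwidth per NVLink 2.0 port, GB/s
    PORTS_PER_NVSWITCH = 18   # assumed NVLink ports on each NVSwitch chip
    GPUS_PER_DGX2 = 16        # Volta GPUs in a DGX-2
    LINKS_PER_GPU = 6         # NVLink ports on a Volta SXM2 package

    # Aggregate switching capacity of a single NVSwitch chip.
    per_switch_gbs = PORTS_PER_NVSWITCH * NVLINK_BW_GBS  # 900 GB/s

    # Bisection bandwidth: the eight GPUs in one half of the system each
    # drive all six of their links across the switch plane to the other half.
    bisection_gbs = (GPUS_PER_DGX2 // 2) * LINKS_PER_GPU * NVLINK_BW_GBS  # 2,400 GB/s

    print(f"one NVSwitch: {per_switch_gbs} GB/s aggregate")
    print(f"DGX-2 bisection: {bisection_gbs / 1000:.1f} TB/s")

Under those assumptions, the fabric offers roughly 2.4 TB/s of bisection bandwidth across the 16 GPUs, which is the property that lets applications treat the complex as one large accelerator.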

Inside Nvidia’s NVSwitch GPU Interconnect was written by Timothy Prickett Morgan at The Next Platform.

Another Step Toward FPGAs in Supercomputing

There has been plenty of talk about where FPGA acceleration might fit into high performance computing but there are only a few testbeds and purpose-built clusters pushing this vision forward for scientific applications.

While we do not necessarily expect supercomputing centers to turn their backs on GPUs as the accelerator of choice in favor of FPGAs anytime in the foreseeable future, there is some smart investment happening in Europe and, to a lesser extent, in the US that takes advantage of recent hardware additions and high-level tool development that put field programmable devices within closer reach – even for centers whose users…

Another Step Toward FPGAs in Supercomputing was written by Nicole Hemsoth at The Next Platform.

Startup Tackles Cloud Migration And Management Hassle

Enterprises can see cost and efficiency benefits when they migrate workloads into the cloud, but such moves also come with their share of challenges in complexity and management. This is particularly true as organizations embrace a compute environment that includes multiple clouds – both public and private – as well as one or more on-premises datacenters. True, the cloud enables businesses to easily scale up or down depending on the workloads they’re running, to pay for only the infrastructure they’re using rather than having to invest upfront in hardware, to put the onus of integration on the cloud providers, and…

Startup Tackles Cloud Migration And Management Hassle was written by Jeffrey Burt at The Next Platform.

The Buck Stops – And Starts – Here For GPU Compute

Ian Buck doesn’t just run the Tesla accelerated computing business at Nvidia, one of the fastest growing and most profitable product lines in the company’s twenty-five year history. The work that Buck and other researchers started at Stanford University in 2000 and then continued at Nvidia helped to transform a graphics card shader into a parallel compute engine that is helping to solve some of the world’s toughest simulation and machine learning problems.

Nvidia held its annual GPU Technology Conference last week, and we sat down and had a chat with Buck about a bunch of things…

The Buck Stops – And Starts – Here For GPU Compute was written by Timothy Prickett Morgan at The Next Platform.

Momentum for Bioinspired GPU Computing at the Edge

Bioinspired computing is nothing new, but with the rise in mainstream interest in machine learning, these architectures and software frameworks are seeing fresh light. This is prompting a wave of young companies to crop up providing hardware, software, and management tools—something that has also spurred a new era of thinking about AI problems.

We most often think of these innovations happening at the server and datacenter level, but more algorithmic work is being done (to better suit embedded hardware) to deploy comprehensive models on mobile devices that allow for long-term learning on single instances of object recognition…

Momentum for Bioinspired GPU Computing at the Edge was written by Nicole Hemsoth at The Next Platform.

Fueling AI With A New Breed of Accelerated Computing

A major transformation is happening now as technological advancements and escalating volumes of diverse data drive change across all industries. Cutting-edge innovations are fueling digital transformation on a global scale, and organizations are leveraging faster, more powerful machines to operate more intelligently and effectively than ever.

Recently, Hewlett Packard Enterprise (HPE) announced the new HPE Apollo 6500 Gen10 server, a groundbreaking platform designed to tackle the most compute-intensive high performance computing (HPC) and deep learning workloads. Deep learning – an exciting development in artificial intelligence (AI) – enables machines to solve highly complex problems quickly by autonomously analyzing…

Fueling AI With A New Breed of Accelerated Computing was written by Timothy Prickett Morgan at The Next Platform.

Removing The Storage Bottleneck For AI

If the history of high performance computing has taught us anything, it is that we cannot focus too much on compute at the expense of storage and networking. Having all of the compute in the world doesn’t mean diddlysquat if the storage can’t get data to the compute elements – whatever they might be – in a timely fashion with good sustained performance.

Many organizations that have invested in GPU accelerated servers are finding this out the hard way when performance comes up short once they get down to the work of training their neural networks, and this is particularly…
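
As a rough illustration of why the storage side gives out first, consider the sustained read rate needed just to keep one GPU server fed during training. All of the figures in this Python sketch are hypothetical, chosen only to show the shape of the calculation:

    # Hypothetical sizing of storage throughput for GPU training (illustration only).
    images_per_sec_per_gpu = 2_000   # assumed per-GPU training throughput
    gpus_per_node = 8                # e.g. a DGX-1 class server
    bytes_per_image = 150 * 1024     # assumed average compressed image, ~150 KB

    node_read_rate = images_per_sec_per_gpu * gpus_per_node * bytes_per_image
    print(f"sustained read rate per node: {node_read_rate / 1e9:.2f} GB/s")  # ~2.46 GB/s

    # Multiply by the number of nodes in the cluster, and the storage tier,
    # not the Teslas, quickly becomes the element that sets the pace.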

Removing The Storage Bottleneck For AI was written by Timothy Prickett Morgan at The Next Platform.

Aruba Networks Leads HPE to the Edge

When pre-split Hewlett-Packard bought Aruba Networks three years ago for $3 billion, the goal was to create a stronger and larger networking business that combined both wired and wireless networking capabilities and could challenge market leader Cisco Systems at a time when enterprises were more fully embracing mobile computing and public clouds.

Aruba was launched in 2002 and by the time of the acquisition had established itself as a leading vendor in the wireless networking market and had an enthusiastic following of users who call themselves “Airheads.” The worry among many of them was that once the deal was closed…

Aruba Networks Leads HPE to the Edge was written by Nicole Hemsoth at The Next Platform.

An Inside Look at What Powers Microsoft’s Internal Systems for AI R&D

For those who might expect Microsoft to favor its own Windows-centric platforms and tools to power comprehensive infrastructure for serving AI compute and software services for internal R&D groups, plan on being surprised.

While Microsoft does rely on some core Windows features and certainly its Azure cloud services, much of its infrastructure is powered by a broad suite of open source tools. As Jim Jernigan, senior R&D systems engineer at Microsoft Research, told us at the GPU Technology Conference (GTC18) this week, the highest volume of workloads running on the diverse research clusters Microsoft uses for AI development are running…

An Inside Look at What Powers Microsoft’s Internal Systems for AI R&D was written by Nicole Hemsoth at The Next Platform.

Getting to the Heart of HPC and AI at the Edge in Healthcare

For more than a decade, GE has partnered with Nvidia to support its healthcare devices. Increasing demand for high quality medical imaging and mobile diagnostics alone has built a $4 billion segment of the $19 billion total life sciences budget within GE Healthcare.

This year at the GPU Technology Conference (GTC18), The Next Platform sat in as Keith Bigelow, GM & SVP of Analytics, and Erik Steen, Chief Engineer at GE Healthcare, discussed the challenges of deploying AI, focusing on cardiovascular ultrasound imaging.

There are a wide range of GPU accelerated medical devices as well as those that…

Getting to the Heart of HPC and AI at the Edge in Healthcare was written by James Cuff at The Next Platform.

A First Look at Summit Supercomputer Application Performance

Big iron aficionados packed the room when ORNL’s Jack Wells gave the latest update on the upcoming 207 petaflop Summit supercomputer at the GPU Technology Conference (GTC18) this week.

In just eight years, the folks at Oak Ridge have pushed the high performance bar from the 18.5 teraflop Phoenix system to the 27 petaflop Titan – a more than 1,000x improvement.

Summit will deliver 5-10x more performance than the existing Titan machine, but what is noteworthy is how Summit will do this. The system is set to have far fewer nodes (18,688 for Titan vs. ~4,800 for Summit)…
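
The arithmetic behind those claims is easy to check with the figures quoted above; the per-node division at the end is our own back-of-the-envelope step, not something from the talk:

    # Checking the Oak Ridge performance arithmetic quoted above.
    phoenix_tf = 18.5     # Phoenix, teraflops
    titan_pf = 27.0       # Titan, petaflops
    summit_pf = 207.0     # Summit, petaflops (announced figure)
    titan_nodes, summit_nodes = 18_688, 4_800   # ~4,800 is an estimate

    print(f"Phoenix -> Titan: {titan_pf * 1000 / phoenix_tf:,.0f}x")  # ~1,459x, i.e. 1,000x+
    print(f"Titan -> Summit: {summit_pf / titan_pf:.1f}x")            # ~7.7x, inside the 5-10x claim
    print(f"per node: {titan_pf * 1000 / titan_nodes:.1f} TF (Titan) vs "
          f"{summit_pf * 1000 / summit_nodes:.1f} TF (Summit)")       # ~1.4 TF vs ~43 TF

With roughly a quarter of the nodes delivering 5-10x the performance, each Summit node has to be on the order of 30x more powerful than a Titan node, which is exactly the role of the GPU-dense node design.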

A First Look at Summit Supercomputer Application Performance was written by Nicole Hemsoth at The Next Platform.

Nvidia’s DGX-2 System Packs An AI Performance Punch

When Nvidia co-founder and chief executive officer Jensen Huang told the assembled multitudes at the keynote opening to the GPU Technology Conference that the new DGX-2 system, weighing in at 2 petaflops at half precision using the latest Tesla GPU accelerators, would cost $1.5 million when it became available in the third quarter, the audience paused for a few seconds, doing the human-speed math to try to reckon how that stacked up to the DGX-1 servers sporting eight Teslas.

This sounded like a pretty high price, even for such an impressive system – really a GPU cluster with some CPU…
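
Here is that human-speed math in sketch form. The DGX-2 price and half-precision rating come from the keynote; the DGX-1 figures are our assumptions for the comparison, not numbers from this article:

    # DGX-2 vs DGX-1 price/performance, back of the envelope.
    dgx2_price, dgx2_fp16_pflops = 1_500_000, 2.0   # keynote figures
    dgx1_price, dgx1_fp16_pflops = 149_000, 1.0     # assumed DGX-1 (8x Volta) list price and FP16 rating

    print(f"DGX-2: ${dgx2_price / (dgx2_fp16_pflops * 1000):,.0f} per teraflops")  # ~$750/TF
    print(f"DGX-1: ${dgx1_price / (dgx1_fp16_pflops * 1000):,.0f} per teraflops")  # ~$149/TF

On those assumed numbers, the DGX-2 costs several times more per peak teraflops, which is why the audience paused; the value argument rests on the NVSwitch fabric and the unified pool of GPU memory, not raw flops per dollar.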

Nvidia’s DGX-2 System Packs An AI Performance Punch was written by Timothy Prickett Morgan at The Next Platform.

Nvidia Memory Switch Welds Together Massive Virtual GPU

It has happened time and time again in computing in the past three decades in the datacenter: A device scales up its capacity – be it compute, storage, or networking – as high as it can go, and then it has to go parallel and scale out.

The NVLink interconnect that Nvidia created to lash together its “Pascal” and “Volta” GPU accelerators into a kind of giant virtual GPU was the first phase of this scale out for Tesla compute. But with only six NVLink ports on a Volta SXM2 device, there is a limit to how many Teslas can…
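
The port count is exactly what caps direct scaling. A small Python sketch of the topology limit (our reasoning; the only vendor figure used is the six ports):

    # Why six NVLink ports cap an all-to-all Tesla complex.
    ports_per_gpu = 6   # NVLink ports on a Volta SXM2 device

    # With only direct point-to-point links, a fully connected clique needs
    # one port per peer, so at most ports + 1 GPUs can be all-to-all.
    max_all_to_all = ports_per_gpu + 1
    print(f"max all-to-all GPUs without a switch: {max_all_to_all}")  # 7

    # The 8-GPU DGX-1 already had to settle for a hybrid cube mesh rather
    # than a full clique, which is the gap a memory switch closes.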

Nvidia Memory Switch Welds Together Massive Virtual GPU was written by Timothy Prickett Morgan at The Next Platform.