
Category Archives for "The Next Platform"

Even at the Edge, Scale is the Real Challenge

Neural networks live on data and rely on computational firepower to take that data in, train on it, and learn from it. The challenge, increasingly, is ensuring there is enough computational power to keep up with the massive amounts of data being generated today and with the rising demands of modern neural networks for speed and accuracy as they consume that data and train on datasets that continue to grow in size.

These challenges can be seen playing out in the fast-growing autonomous vehicle market, where pure-play companies like Waymo – born from Google’s self-driving car initiative –

Even at the Edge, Scale is the Real Challenge was written by Jeffrey Burt at The Next Platform.

Delivering Predictive Outcomes with Superhuman Knowledge

Massive data growth and advances in acceleration technologies are pushing modern computing capabilities to unprecedented levels and changing the face of entire industries.

Today’s organizations are quickly realizing that the more data they have the more they can learn, and powerful new techniques like artificial intelligence (AI) and deep learning are helping them convert that data into actionable intelligence that can transform nearly every aspect of their business. NVIDIA GPUs and Hewlett Packard Enterprise (HPE) high performance computing (HPC) platforms are accelerating these capabilities and helping organizations arrive at deeper insights, enable dynamic correlation, and deliver predictive outcomes with superhuman

Delivering Predictive Outcomes with Superhuman Knowledge was written by Nicole Hemsoth at The Next Platform.

Exascale Storage Gets a GPU Boost

Alex St. John is a familiar name in the GPU and gaming industry given his role at Microsoft in the creation of DirectX technology in the 90s. And while his fame may be rooted in graphics for PC players, his newest venture has sparked the attention of both the supercomputing and enterprise storage crowds—and for good reason.

It likely helps to have some notoriety when it comes to securing funding, especially for a venture with roots in the notoriously venture capital-denied supercomputing ecosystem. While St. John’s startup Nyriad may be a spin-out of technology developed for the Square Kilometre Array (SKA), the

Exascale Storage Gets a GPU Boost was written by Nicole Hemsoth at The Next Platform.

At the Cutting Edge of Quantum Computing Research

On today’s podcast episode of “The Interview” with The Next Platform, we focus on some of the recent quantum computing developments out of Oak Ridge National Lab’s Quantum Computing Institute with the center’s director, Dr. Travis Humble.

Regular readers will recall previous work Humble has done on the quantum simulator, as well as other lab and Quantum Institute efforts on creating hybrid quantum and neuromorphic supercomputers and building software frameworks to support quantum interfacing. In our discussion we check in on progress along all of these fronts, including a more detailed conversation about the XACC programming framework for

At the Cutting Edge of Quantum Computing Research was written by Nicole Hemsoth at The Next Platform.

Google Boots Up Tensor Processors On Its Cloud

Google laid down its path forward in the machine learning and cloud computing arenas when it first unveiled plans for its tensor processing unit (TPU), an accelerator designed by the hyperscaler to speed up machine learning workloads that are programmed using its TensorFlow framework.

Almost a year ago, at its Google I/O event, the company rolled out the architectural details of its second-generation TPUs – also called the Cloud TPU – for both neural network training and inference, with the custom ASICs providing up to 180 teraflops of floating point performance and 64 GB of High Bandwidth Memory.

Google Boots Up Tensor Processors On Its Cloud was written by Jeffrey Burt at The Next Platform.

Lustre Shines at HPC Peaks, But Rest of Market is Fertile Ground

The Lustre file system has been the canonical choice for the world’s largest supercomputers, but for the rest of the high performance computing user base it is moving beyond reach without the support and guidance it has had from its many backers, including most recently Intel, which dropped Lustre from its development ranks in mid-2017.

While Lustre users have seen the support story fall to pieces before, for many HPC shops the need is greater than ever to look toward a fully supported, scalable parallel file system that snaps well into easy-to-manage appliances. Some of these commercial HPC sites

Lustre Shines at HPC Peaks, But Rest of Market is Fertile Ground was written by Nicole Hemsoth at The Next Platform.

HPC System & Processor Trends for 2018

In this episode of The Interview from The Next Platform, we talk with Andrew Jones of the independent high performance computing consulting firm N.A.G. about processor and system acquisition trends in HPC for users at the smaller commercial end of the spectrum up through the large research centers.

In the course of the conversation, we cover how acquisition trends are being affected by machine learning entering the HPC workflow in the coming years, the differences over time between commercial HPC and academic supercomputing, and some of the issues around processor choices for both markets.

Given his experiences talking to end users

HPC System & Processor Trends for 2018 was written by Nicole Hemsoth at The Next Platform.

Just How Large Can Nvidia’s Datacenter Business Grow?

The excitement for new video games, the machine learning software revolution, the buildout of very large supercomputers based on hybrid CPU-GPU architectures, and the mining of cryptocurrencies like Bitcoin and Ethereum have combined into a quadruple whammy that is driving Nvidia to new heights for revenues, profits, and market capitalization. And thus it is no surprise that Nvidia is one of the few companies bucking the trend during a very tough couple of weeks on Wall Street.

But with demand spiking for both its current “Volta” GPUs, which are aimed at HPC and AI compute,

Just How Large Can Nvidia’s Datacenter Business Grow? was written by Timothy Prickett Morgan at The Next Platform.

Different Server Workhorses For Different Workload Courses

Co-design is all the rage these days in systems design, where the hardware and software components of a system – whether it is aimed at compute, storage, or networking – are designed in tandem, not one after the other, and immediately shape how each aspect of the system is ultimately crafted. It is a smart idea that wrings the maximum amount of performance out of a system for very specific workloads.

The era of general purpose computing, which is on the wane, brought an ever-increasing amount of capacity to bear in the datacenter at an ever-lower cost, enabling an

Different Server Workhorses For Different Workload Courses was written by Timothy Prickett Morgan at The Next Platform.

A Statistical View Of Cloud Storage

Cloud datacenters in many ways are like melting pots of technologies. The massive facilities hold a broad array of servers, storage systems, and networking hardware that come in a variety of sizes. Their components come with different speeds, capacities, bandwidths, power consumption, and pricing, and they are powered by different processor architectures, optimized for disparate applications, and carry the logos of a broad array of hardware vendors, from the largest OEMs to the smaller ODMs. Some hardware systems are homegrown or built atop open designs.

As such, they are good places to compare and contrast how the components of these

A Statistical View Of Cloud Storage was written by Jeffrey Burt at The Next Platform.

Intel Sharpens The Edge With Skylake Xeon D

Compute is being embedded in everything, and there is another wave of distributed computing pushing out from the datacenter into all kinds of network, storage, and other devices that collect and process data in their own right as well as passing it back up to the glass house for final processing and permanent storage.

The computing requirements at the edge are different from the core compute in the datacenter, and it is very convenient indeed that they align nicely with some of the more modest processing needs of network devices, storage clusters, and smaller jobs in the

Intel Sharpens The Edge With Skylake Xeon D was written by Timothy Prickett Morgan at The Next Platform.

Momentum Gathers for Persistent Memory Preppers

While it is possible to reap at least some benefits from persistent memory, for those that are performance focused the work to establish an edge is getting underway now, with many of the OS and larger ecosystem players working together on new standards for existing codes.

Before we talk about some of the efforts to bring easier programming for persistent memory closer, it is useful to level-set about what it is, isn’t, how it works, and who will benefit in the near term. The most common point of confusion is that persistent memory is not necessarily about hardware, a fact
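For readers who want a concrete feel for the load/store programming style these ecosystem efforts revolve around, here is a minimal sketch in C, not drawn from the article, that maps a file into memory and flushes stores explicitly, in the spirit of the SNIA persistent memory programming model. The device path and region size are hypothetical, and production code would more likely lean on a library such as PMDK than on raw mmap() and msync().

```c
/* Minimal sketch of load/store access to a memory-mapped region.
 * Path and size are hypothetical; real persistent memory code would
 * typically use a library such as PMDK rather than raw mmap()/msync(). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_SIZE 4096

int main(void)
{
    int fd = open("/mnt/pmem0/example.dat", O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, REGION_SIZE) != 0) { perror("ftruncate"); return 1; }

    char *region = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    /* Updates are ordinary stores into the mapped region... */
    strcpy(region, "hello, persistent world");

    /* ...but durability requires explicitly flushing the dirty range. */
    if (msync(region, REGION_SIZE, MS_SYNC) != 0) { perror("msync"); return 1; }

    munmap(region, REGION_SIZE);
    close(fd);
    return 0;
}
```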

Momentum Gathers for Persistent Memory Preppers was written by Nicole Hemsoth at The Next Platform.

IBM’s 2018 Rollout Plan For Power9 Systems

In a way, the processor market started moving in slow motion through 2017 as server makers and their customers were awaiting a veritable cornucopia of processor options, something the industry has not seen in many a year. We have been predicting that there would be a Cambrian Explosion of compute, first in 2017, but it has taken a bit longer for many of these processors to come to market and it looks like 2018 might be the year.

This might be, in fact, the year when IBM’s Power RISC processors see a long-awaited resurgence, and frankly, if it doesn’t happen

IBM’s 2018 Rollout Plan For Power9 Systems was written by Timothy Prickett Morgan at The Next Platform.

New Memory Challenges Legacy Approaches to HPC Code

From DRAM to NUMA to memory that is non-volatile, stacked, remote, or even phase change, the coming years will bring big changes for code developers on the world’s largest parallel supercomputers.

While these memory advancements can translate to major performance leaps, the code complexity these devices will create poses big challenges in terms of performance portability for legacy and newer codes alike.

While the programming side of the emerging memory story may not be as widely appealing as the hardware, work that people like Ron Brightwell, R&D manager at Sandia National Lab and head of numerous exascale programming efforts, do to expose

New Memory Challenges Legacy Approaches to HPC Code was written by Nicole Hemsoth at The Next Platform.

Private Equity Amps Up Arm Servers With Applied X86 Techies

The Carlyle Group, the publicly traded investment firm that has invested in nearly 300 companies that have a net worth of $170 billion and which itself could make around $4 billion in management fees and income from those investments for 2017, does not invest in any technology lightly.

So the fact that it has acquired the X-Gene server processor assets that were created over many years by Applied Micro and briefly owned last year by chipmaker MACOM means that Carlyle believes Arm servers have a shot in the datacenter and that its investors want to get a

Private Equity Amps Up Arm Servers With Applied X86 Techies was written by Timothy Prickett Morgan at The Next Platform.

DARPA’s $200 Million JUMP Into Future Microelectronics

DARPA has always been about driving the development of emerging technologies for the benefit of both the military and the commercial world at large.

The Defense Advanced Research Projects Agency has been a driving force behind U.S. efforts around exascale computing and in recent years has targeted everything from robotics and cybersecurity to big data to implantable technologies. The agency has doled out millions of dollars to vendors like Nvidia and Rex Computing as well as national laboratories and universities to explore new CPU and GPU technologies for upcoming exascale-capable systems that hold the promise of 1,000

DARPA’s $200 Million JUMP Into Future Microelectronics was written by Jeffrey Burt at The Next Platform.

Navigating The Revenue Streams And Profit Pools Of AWS

It will not happen for a long time, if ever, but we surely do wish that Amazon Web Services, the public cloud division of the online retailing giant, were a separate company. Because if AWS were a separate company, and a public company at that, it would have finer-grained financial results that might give us some insight into exactly what more than 1 million customers are actually renting on the AWS cloud.

As it is, all that the Amazon parent tells Wall Street about its AWS offspring is the revenue stream and operating profit levels for each

Navigating The Revenue Streams And Profit Pools Of AWS was written by Timothy Prickett Morgan at The Next Platform.

AI Will Not Be Taking Away Code Jobs Anytime Soon

There has been much recent talk about a near future in which code writes itself with the help of trained neural networks, but outside of some limited use cases that reality is still quite some time away – at least for ordinary development efforts.

Although auto-code generation is not a new concept, it has been getting fresh attention due to better capabilities and ease of use in neural network frameworks. But just as in other areas where AI is touted as the near-term automation savior, the hype does not match the technological complexity needed to make it a reality. Well, at least not

AI Will Not Be Taking Away Code Jobs Anytime Soon was written by Nicole Hemsoth at The Next Platform.

What It Takes to Build a Quantum Computing Startup

If you thought the up-front costs and risks were high for a silicon startup, consider the economics of building a full-stack quantum computing company from the ground up – and at a time when the applications are described in terms of their potential and the algorithms are still in primitive stages.

Quantum computing company D-Wave managed to bootstrap its annealing-based approach and secure early big-name customers with a total of $200 million over the years, but as we have seen with a range of use cases, it has been able to put at least some funds back in investor pockets with system sales

What It Takes to Build a Quantum Computing Startup was written by Nicole Hemsoth at The Next Platform.

OpenMP Has More in Store for GPU Supercomputing

Just before the large-scale GPU-accelerated Titan supercomputer came online in 2012, the first use cases of the OpenACC parallel programming model showed efficient, high performance interfacing with GPUs on big HPC systems.

At the time, OpenACC and CUDA were the only higher-level tools for the job. However, OpenMP, which has had twenty-plus years to develop roots in HPC, was starting to see the opportunities for GPUs in HPC at about the same time OpenACC was forming. As legend has it, OpenACC itself was developed based on early GPU work done in an OpenMP accelerator subcommittee, generating some bad
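To make the comparison between these directive-based models concrete, here is a minimal sketch, not taken from the article, showing the same vector-add loop offloaded to a GPU first with an OpenACC directive and then with OpenMP target-offload directives of the kind that grew out of that accelerator subcommittee work; the function and variable names are purely illustrative.

```c
/* Illustrative only: one loop expressed with OpenACC and with OpenMP
 * target offload. Names are hypothetical, not from the article. */
void vector_add_acc(int n, const float *a, const float *b, float *c)
{
    /* OpenACC: the compiler generates the GPU kernel and moves the data. */
    #pragma acc parallel loop copyin(a[0:n], b[0:n]) copyout(c[0:n])
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}

void vector_add_omp(int n, const float *a, const float *b, float *c)
{
    /* OpenMP 4.x-style accelerator offload of the same loop. */
    #pragma omp target teams distribute parallel for \
        map(to: a[0:n], b[0:n]) map(from: c[0:n])
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}
```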

OpenMP Has More in Store for GPU Supercomputing was written by Nicole Hemsoth at The Next Platform.