Category Archives for "The Next Platform"

Exascale Timeline Pushed to 2023: What’s Missing in Supercomputing?

The roadmap to build and deploy an exascale computer has extended over the last few years–and more than once.

Initially, the timeline marked 2018 as the year an exaflop-capable system would be on the floor, just one year after the CORAL pre-exascale machines are installed at three national labs in the U.S. That was later shifted to 2020, and now, according to a new report setting forth the initial hardware requirements for such a system, it is anywhere from 2023 to 2025. For those who follow high performance computing and the efforts toward exascale computing, this extended timeline might not come as

Exascale Timeline Pushed to 2023: What’s Missing in Supercomputing? was written by Nicole Hemsoth at The Next Platform.

Telcos Dial Up OpenStack And Mainstream It

OpenStack was born at the nexus of high performance computing at NASA and the cloud at Rackspace Hosting, but it might be the phone companies of the world that help it go mainstream.

It is safe to say that most of us think our data plans and voice services for our mobile phones are way too expensive, and as it turns out, that is exactly how our mobile phone operators feel about the racks, rows, and datacenters they have filled with the essentially proprietary network equipment that comprises their wired and wireless networks. And so, all of

Telcos Dial Up OpenStack And Mainstream It was written by Timothy Prickett Morgan at The Next Platform.

The Once And Future IBM Platform

More than anything else, over its long history in the computing business, IBM has been a platform company, and say what you will about the woes it has had through several phases of that history, what seems obvious is that when Big Blue forgets this, it runs into trouble.

If you stare at its quarterly financial results long enough, you can still see that platform company looking back at you, even through the artificially dissected product groups the company has used for the past decade and the new ones that IBM is using starting in 2016.

It is important to

The Once And Future IBM Platform was written by Timothy Prickett Morgan at The Next Platform.

Emphasis on Data Wrangling Brings Materials Science Up to Speed

In 2011, the United States launched a multi-agency effort to discover, develop, and produce advanced materials under the Materials Genome Initiative as part of an overall push to get out from under the 20-year process typically involved with researching a new material and bringing it to market.

At roughly the same time, the government was investing in other technology-driven initiatives to bolster competitiveness, with particular emphasis on manufacturing. While key areas in research were developing incredibly rapidly, it appeared that manufacturing, materials, and other more concrete physical problems were waiting on better solutions while genomics, nanotechnology, and other areas were

Emphasis on Data Wrangling Brings Materials Science Up to Speed was written by Nicole Hemsoth at The Next Platform.

OpenStack Still Has A Place In The Stack

The IT industry likes drama perhaps a bit more than is warranted by what actually goes on in the datacenters of the world. We are always spoiling for a good fight between rival technologies because the clash results in competition, which drives technologies forward and prices down.

Ultimately, organizations have to pick some kind of foundation for their modern infrastructure, and OpenStack, the cloud controller spawned from NASA and Rackspace Hosting nearly six years ago, is a growing and vibrant community that, despite the advent of Docker containers and the rise of Mesos and Kubernetes as an alternative substrate for

OpenStack Still Has A Place In The Stack was written by Timothy Prickett Morgan at The Next Platform.

Volkswagen’s “Cloud First” Approach to Infrastructure Decisions

As automotive companies like Ford begin to consider themselves technology companies, others, including Volkswagen Group, are taking a similar route. The company’s new CEO, who took over in September 2015, has a background in computer science and began his career managing the IT department for the Audi division. Under his more technically minded watch, IT teams within the company are taking major strides to shore up their infrastructure to support a 1.5 billion Euro investment in R&D for forthcoming electric and connected cars.

Part of the roadmap for Volkswagen Group includes a shift to OpenStack to manage

Volkswagen’s “Cloud First” Approach to Infrastructure Decisions was written by Nicole Hemsoth at The Next Platform.

First Steps In The Program Model For Persistent Memory

In the previous article, we left off with the basic storage model having its objects’ changes first existing in the processor’s cache, then being aged into volatile DRAM memory, often with those changes first logged synchronously into I/O-based persistent storage, and with the object’s changes proper later copied from volatile memory into persistent storage. That has been the model for what seems like forever.
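As a loose illustration of that sequence, here is a minimal Python sketch of the classic model: mutate a volatile in-memory object, synchronously log the change to persistent storage first, and copy the object proper out later. The file names, record format, and helper functions are invented for this sketch, not taken from the article.

```python
import json
import os

LOG_PATH = "updates.log"    # hypothetical write-ahead log file
DATA_PATH = "object.json"   # hypothetical persistent copy of the object

state = {"balance": 100}    # the "object" living in volatile DRAM

def apply_update(key, value):
    # 1. Log the change synchronously before touching the object.
    with open(LOG_PATH, "a") as log:
        log.write(json.dumps({"key": key, "value": value}) + "\n")
        log.flush()
        os.fsync(log.fileno())  # force the record onto persistent media
    # 2. Mutate the volatile in-memory copy.
    state[key] = value

def checkpoint():
    # 3. Later, copy the object proper into persistent storage and
    #    truncate the log, mirroring the "aging out" step above.
    with open(DATA_PATH, "w") as data:
        json.dump(state, data)
        data.flush()
        os.fsync(data.fileno())
    open(LOG_PATH, "w").close()

apply_update("balance", 250)
checkpoint()
```

The fsync() calls are what make the logging step "synchronous"; everything else is ordinary volatile-memory bookkeeping.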

With variations, that can be the storage model for Hewlett-Packard Enterprise’s The Machine as well. Since The Machine has a separate class of volatile DRAM memory along with rapidly-accessible, byte-addressable persistent memory accessible globally, the

First Steps In The Program Model For Persistent Memory was written by Timothy Prickett Morgan at The Next Platform.

Baidu Eyes Deep Learning Strategy in Wake of New GPU Options

This month Nvidia bolstered its GPU strategy to stretch further into deep learning, high performance computing, and other markets, and while there are new options to consider, particularly for the machine learning set, it is useful to understand what these new arrays of chips and capabilities mean for users at scale. As one of the companies directly in the lens for Nvidia with its recent wave of deep learning libraries and GPUs, Baidu has keen insight into what might tip the architectural scales—and what might still stay the same, at least for now.

Back in December, when we talked to

Baidu Eyes Deep Learning Strategy in Wake of New GPU Options was written by Nicole Hemsoth at The Next Platform.

Nvidia’s Tesla P100 Steals Machine Learning From The CPU

Pattern analytics, deep learning, and machine learning have fueled a rapid rise in interest in GPU computing, alongside established GPU applications in high performance computing (HPC) and cloud-based data analytics.

As a high profile example, Facebook recently contributed its “Big Sur” design to the Open Compute Project (OCP), for use specifically in training neural networks and implementing artificial intelligence (AI) at scale. Facebook’s announcement of Big Sur says “Big Sur was built with the Nvidia Tesla M40 in mind but is qualified to support a wide range of PCI-e cards,” pointing out how pervasive Nvidia’s Tesla

Nvidia’s Tesla P100 Steals Machine Learning From The CPU was written by Timothy Prickett Morgan at The Next Platform.

Intel Owns The Server, Wants To Own The Rack

Chip maker Intel has been getting a lot of grief in recent days about missing the boat on putting chips in Apple’s iPhone when the product was announced back in 2007, and then subsequently losing out on the opportunity to have Intel Inside the Apple iPad tablet that came out three years later. Say what you will, but the folks who have been running Intel’s Data Center Group have not missed any boats, but rather have built a warship.

As the traditional PC client business continues to erode, the part of the company that is dedicated to the

Intel Owns The Server, Wants To Own The Rack was written by Timothy Prickett Morgan at The Next Platform.

Programming For Persistent Memory Takes Persistence

Some techies are capable of writing programs in assembler, but all will agree that they are very glad they don’t need to. More are fully capable of writing programs that manage their own heap memory, but are often enough pleased that the programming model of some languages allows them to avoid it. Since seemingly the beginning of time, computer scientists have been creating simplifying abstractions that allow programmers to avoid managing the hardware’s quirks, to write code more rapidly, and even to enhance its maintainability.
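To make the idea concrete, here is a small Python sketch assuming an mmap’d file as a stand-in for byte-addressable persistent memory. This is not The Machine’s programming model or any real persistent-memory library, just an illustration of load/store-style access plus the explicit-flush quirk that such abstractions try to hide.

```python
import mmap
import os
import struct

PATH = "pmem.bin"   # hypothetical file standing in for persistent memory
SIZE = 4096

# Create the backing file once, zero-filled.
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(b"\x00" * SIZE)

fd = os.open(PATH, os.O_RDWR)
pmem = mmap.mmap(fd, SIZE)  # the "persistent memory" region

# Stores look like ordinary memory writes...
struct.pack_into("<q", pmem, 0, 42)

# ...but durability still requires an explicit flush, which is exactly
# the kind of hardware quirk the abstractions discussed above hide.
pmem.flush()

print(struct.unpack_from("<q", pmem, 0)[0])  # -> 42

pmem.close()
os.close(fd)
```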

And so it is with a relatively new, and indeed both

Programming For Persistent Memory Takes Persistence was written by Timothy Prickett Morgan at The Next Platform.

First Wave of Pascal GPUs Coming to European Supercomputer

There are few international supercomputing hubs sporting the systems and software prowess of the Swiss National Supercomputing Center (CSCS), which started with large-scale vector machines in 1992 and moved through a series of other architectures and vendors; from NEC at the beginning, to IBM, and most recently, Cray. In fact, the center has had an ongoing preference for Cray supercomputers, with an unbroken stretch of machines beginning in 2007.

In addition to choosing Cray as the system vendor, CSCS has been an early adopter and long-term user of GPU acceleration. According to the center’s director, Thomas Schulthess, teams there firmed

First Wave of Pascal GPUs Coming to European Supercomputer was written by Nicole Hemsoth at The Next Platform.

Drilling Down Into Nvidia’s “Pascal” GPU

Nvidia made a lot of big bets to bring its “Pascal” GP100 GPU to market and its first implementation of the GPU is aimed at its Tesla P100 accelerator for radically improving the performance of massively parallel workloads like scientific simulations and machine learning algorithms. The Pascal GPU is a beast, in all of the good senses of that word, and warrants a deep dive that was not possible on announcement day back at the GPU Technology Conference.

We did an overview of the chip back on announcement day, as well as talking about Nvidia’s own DGX-1 hybrid server

Drilling Down Into Nvidia’s “Pascal” GPU was written by Timothy Prickett Morgan at The Next Platform.

Power9 Will Bring Competition To Datacenter Compute

The Power9 processor that IBM is working on in conjunction with hyperscale and HPC customers could be the most important chip that Big Blue has brought to market since the Power4 processor back in 2001. That was another world, back then, with the dot-com boom having gone bust and enterprises looking for a less expensive but beefy NUMA server on which to run big databases and transaction processing systems.

The world that the Power9 processor will enter in 2017 is radically changed. A two-socket system has more compute, memory, and I/O capacity and bandwidth than those behemoths from a decade

Power9 Will Bring Competition To Datacenter Compute was written by Timothy Prickett Morgan at The Next Platform.

Future Cray Clusters to Storm Deep Learning

Supercomputer maker Cray might not roll out machines for deep learning anytime in 2016, but like other system vendors with deep roots in high performance computing, which leverages many of the same hardware elements (strong interconnect and GPU acceleration, among others), it is looking at how to loop its expertise into a future where machine learning rules.

As Cray CTO Steve Scott tells The Next Platform, “Deep learning and machine learning dovetails with our overall strategy and that overall merger between HPC and analytics that has already happened.” HPC is not just one single application or set of

Future Cray Clusters to Storm Deep Learning was written by Nicole Hemsoth at The Next Platform.

Nvidia Lead Details Future Convergence of Supercomputing, Deep Learning

Deep learning could not have developed at the rapid pace it has over the last few years without companion work that has happened on the hardware side in high performance computing. While the applications and requirements for supercomputers versus neural network training are quite different (scalability, programming, etc.), without the rich base of GPU computing, high performance interconnect development, memory, storage, and other benefits from the HPC set, the boom around deep learning would be far quieter.

In the midst of this convergence, Marc Hamilton has watched advancements on the HPC side over the years, beginning in the mid-1990s

Nvidia Lead Details Future Convergence of Supercomputing, Deep Learning was written by Nicole Hemsoth at The Next Platform.

Oracle Steps With Moore’s Law To Rev Exadata Database Machines

Absorbing a collection of new processing, memory, storage, and networking technologies in fast fashion on a complex system is no easy task for any system maker or end user creating their own infrastructure, and it takes time even for a big company like Oracle to gather all the pieces and weld them together seamlessly. But that time is getting shorter, and Oracle has absorbed a slew of new tech in its latest Exadata X6-2 platforms.

The updated machines are available only weeks after Intel launched its new “Broadwell” Xeon E5 v4 processors, which have been shipping

Oracle Steps With Moore’s Law To Rev Exadata Database Machines was written by Timothy Prickett Morgan at The Next Platform.

Boosting Deep Learning with the Intel Scalable System Framework

Training ‘complex multi-layer’ neural networks is referred to as deep learning because these multi-layer neural architectures interpose many neural processing layers between the input data and the predicted output results – hence the use of the word deep in the deep learning catchphrase.
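For illustration, a minimal NumPy forward pass shows the structure that definition describes. The layer sizes and random weights here are invented for this sketch; the point is that evaluating the stack is just a handful of matrix multiplies, which is also why inference, discussed next, is so much cheaper than training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three layers interposed between a 64-element input and a 10-element
# output; each is an (untrained, randomly initialized) weight/bias pair.
layers = [(rng.standard_normal((64, 32)), np.zeros(32)),
          (rng.standard_normal((32, 16)), np.zeros(16)),
          (rng.standard_normal((16, 10)), np.zeros(10))]

def forward(x):
    *hidden, (W_out, b_out) = layers
    for W, b in hidden:
        x = np.maximum(x @ W + b, 0.0)  # affine transform + ReLU
    return x @ W_out + b_out            # raw output scores

scores = forward(rng.standard_normal(64))
print(scores.shape)  # (10,) -- one cheap pass, just matrix multiplies
```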

While the training procedure is computationally expensive, evaluating the resulting trained neural network is not, which explains why trained networks can be extremely valuable: they can very quickly perform complex, real-world pattern recognition tasks on a variety of low-power devices, including security cameras, mobile phones, and wearable technology. These architectures can also be implemented on FPGAs

Boosting Deep Learning with the Intel Scalable System Framework was written by Nicole Hemsoth at The Next Platform.

Hyperscalers And Clouds On The Xeon Bleeding Edge

Hyperscalers and cloud builders are different in a lot of ways from the typical enterprise IT shop. Perhaps the most profound one that has emerged in recent years is something that used to be only possible in the realm of the most exotic supercomputing centers, and that is this: They get what they want, and they get it ahead of everyone else.

Back in the day, before the rise of mass customization of the Xeon product line by chip maker Intel, it was HPC customers who were often trotted out as early adopters of a new processor technology and usually

Hyperscalers And Clouds On The Xeon Bleeding Edge was written by Timothy Prickett Morgan at The Next Platform.

Concept Data Tower Scrapes The Sky For Efficiency

We talk about architecture all the time when discussing systems, but when it comes to the datacenters that house these machines, the shells are about as exciting as a monstrous warehouse for a massive distribution operation. Considering that data and processing are among the most profitable products on the planet these days, maybe this is fitting, but another way to look at it is that datacenters should be not only marvels of engineering but also inspiring structures like other kinds of buildings.

Every year, the architecture magazine Evolo hosts a skyscraper design competition so that architects can let their

Concept Data Tower Scrapes The Sky For Efficiency was written by Timothy Prickett Morgan at The Next Platform.