Some techies are capable of writing programs in assembler, but most will agree that they are very glad they don’t need to. More still are fully capable of writing programs that manage their own heap memory, but are often enough pleased that the programming model of some languages allows them to avoid it. Almost since the beginning of the field, computer scientists have been creating simplifying abstractions that allow programmers to avoid managing the hardware’s quirks, to write code more rapidly, and even to enhance its maintainability.
And so it is with a relatively new, and indeed both …
Programming For Persistent Memory Takes Persistence was written by Timothy Prickett Morgan at The Next Platform.
There are few international supercomputing hubs sporting the systems and software prowess of the Swiss National Supercomputing Center (CSCS), which started with large-scale vector machines in 1992 and moved through a series of other architectures and vendors: from NEC at the beginning, to IBM, and most recently, Cray. In fact, the center has had an ongoing preference for Cray supercomputers, with an unbroken stretch of machines beginning in 2007.
In addition to choosing Cray as the system vendor, CSCS has been an early adopter and long-term user of GPU acceleration. According to the center’s director, Thomas Schulthess, teams there firmed …
First Wave of Pascal GPUs Coming to European Supercomputer was written by Nicole Hemsoth at The Next Platform.
Nvidia made a lot of big bets to bring its “Pascal” GP100 GPU to market, and its first implementation of the GPU, the Tesla P100 accelerator, is aimed at radically improving the performance of massively parallel workloads like scientific simulations and machine learning algorithms. The Pascal GPU is a beast, in all of the good senses of that word, and warrants a deep dive that was not possible on announcement day back at the GPU Technology Conference.
We did an overview of the chip back on announcement day and talked about Nvidia’s own DGX-1 hybrid server …
Drilling Down Into Nvidia’s “Pascal” GPU was written by Timothy Prickett Morgan at The Next Platform.
The Power9 processor that IBM is working on in conjunction with hyperscale and HPC customers could be the most important chip that Big Blue has brought to market since the Power4 processor back in 2001. That was another world, back then, with the dot-com boom having gone bust and enterprises looking for a less expensive but beefy NUMA server on which to run big databases and transaction processing systems.
The world that the Power9 processor will enter in 2017 is radically changed. A two-socket system has more compute, memory, and I/O capacity and bandwidth than those behemoths from a decade …
Power9 Will Bring Competition To Datacenter Compute was written by Timothy Prickett Morgan at The Next Platform.
Supercomputer maker Cray might not roll out machines for deep learning anytime in 2016, but like other system vendors with deep roots in high performance computing, which leverages many of the same hardware elements (strong interconnect and GPU acceleration, among others), it is looking at how to loop its expertise into a future where machine learning rules.
As Cray CTO Steve Scott tells The Next Platform, “Deep learning and machine learning dovetails with our overall strategy and that overall merger between HPC and analytics that has already happened.” HPC is not just one single application or set of …
Future Cray Clusters to Storm Deep Learning was written by Nicole Hemsoth at The Next Platform.
Deep learning could not have developed at the rapid pace it has over the last few years without companion work that has happened on the hardware side in high performance computing. While the applications and requirements for supercomputers versus neural network training are quite different (scalability, programming, and so on), without the rich base of GPU computing, high performance interconnect development, memory, storage, and other benefits from the HPC set, the boom around deep learning would be far quieter.
In the midst of this convergence, Marc Hamilton has watched advancements on the HPC side over the years, beginning in the mid-1990s …
Nvidia Lead Details Future Convergence of Supercomputing, Deep Learning was written by Nicole Hemsoth at The Next Platform.
Absorbing a collection of new processing, memory, storage, and networking technologies quickly on a complex system is no easy task for any system maker or end user building their own infrastructure, and it takes time even for a big company like Oracle to get all the pieces together and weld them together seamlessly. But that time is getting shorter, and Oracle has absorbed a slew of new tech in its latest Exadata X6-2 platforms.
The updated machines are available only weeks after Intel launched its new “Broadwell” Xeon E5 v4 processors, which have been shipping …
Oracle Steps With Moore’s Law To Rev Exadata Database Machines was written by Timothy Prickett Morgan at The Next Platform.
Training ‘complex multi-layer’ neural networks is referred to as deep learning because these multi-layer neural architectures interpose many neural processing layers between the input data and the predicted output results – hence the use of the word deep in the deep learning catchphrase.
While the training procedure is computationally expensive, evaluating the resulting trained neural network is not, which is why trained networks can be extremely valuable: they can very quickly perform complex, real-world pattern recognition tasks on a variety of low-power devices, including security cameras, mobile phones, and wearable technology. These architectures can also be implemented on FPGAs …
Boosting Deep Learning with the Intel Scalable System Framework was written by Nicole Hemsoth at The Next Platform.
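The training-versus-inference asymmetry described above can be sketched in a few lines. The snippet below is a minimal, pure-Python illustration (the layer sizes and all weight values are made-up placeholders, not trained parameters): once training has fixed the weights, a prediction is just a short, fixed sequence of multiply-adds, which is why trained networks can run on low-power devices.

```python
# Minimal "deep" network: two layers of weights interposed between the
# input and the predicted output. All numbers are illustrative placeholders.

def relu(xs):
    # Simple nonlinearity applied between layers.
    return [max(0.0, v) for v in xs]

def dense(inputs, weights, biases):
    # One fully connected layer: out_j = sum_i(inputs_i * W[j][i]) + b_j
    return [sum(i * w for i, w in zip(inputs, row)) + b
            for row, b in zip(weights, biases)]

# Layer shapes: 3 inputs -> 4 hidden units -> 2 outputs.
W1 = [[0.1, -0.2, 0.3], [0.0, 0.5, -0.1], [0.2, 0.2, 0.2], [-0.3, 0.1, 0.4]]
b1 = [0.0, 0.1, -0.1, 0.0]
W2 = [[0.5, -0.5, 0.25, 0.1], [-0.2, 0.3, 0.0, 0.4]]
b2 = [0.05, -0.05]

def predict(x):
    h = relu(dense(x, W1, b1))   # hidden layer (the "deep" part, stacked)
    return dense(h, W2, b2)      # output layer: cheap, fixed arithmetic

print(predict([1.0, 2.0, 3.0]))
```

Training would search for good values of `W1`, `b1`, `W2`, and `b2` over many passes of the data, which is the computationally expensive part; the `predict` call itself is a handful of arithmetic operations and runs anywhere.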
Hyperscalers and cloud builders are different in a lot of ways from the typical enterprise IT shop. Perhaps the most profound one that has emerged in recent years is something that used to be only possible in the realm of the most exotic supercomputing centers, and that is this: They get what they want, and they get it ahead of everyone else.
Back in the day, before the rise of mass customization of the Xeon product line by chip maker Intel, it was HPC customers who were often trotted out as early adopters of a new processor technology and usually …
Hyperscalers And Clouds On The Xeon Bleeding Edge was written by Timothy Prickett Morgan at The Next Platform.
We talk about architecture all the time when discussing systems, but when it comes to the datacenters that house these machines, the shells are about as exciting as a monstrous warehouse for a massive distribution operation. Considering that data and processing are the most profitable products on the planet these days, maybe this is fitting, but another way to look at it is that datacenters should not only be marvels of engineering but also inspiring structures like other kinds of buildings.
Every year, the architecture magazine Evolo hosts a skyscraper design competition so that architects can let their …
Concept Data Tower Scrapes The Sky For Efficiency was written by Timothy Prickett Morgan at The Next Platform.
If component suppliers want to win deals at hyperscalers and cloud builders, they have to be proactive. They can’t just sit around and wait for the OEMs and ODMs to pick their stuff like a popularity contest. They have to engineer great products with performance and then do what it takes on price, power, and packaging to win deals.
This is why memory maker Micron Technology is ramping up its efforts to get its DRAM and flash products into the systems that these companies buy, and why it is also creating a set of “architected solutions” focused on storage that …
Micron Enlists Allies For Datacenter Flash Assault was written by Timothy Prickett Morgan at The Next Platform.
For those who wonder what kind of life is left in the market for elastic block storage beyond Ceph and the luxury all-flash high rises, Datera, which emerged from stealth today with $40 million in backing and some big name users, has a tale to tell. While it will likely not end with blocks, these do form the foundation as the company looks to reel in enterprises who need more scalable performance than they might find with Ceph but aren’t looking to the high-end flash appliances either.
The question is, what might the world do with an on-premises take …
Datera Bets on Massive Middle Ground for Block Storage at Scale was written by Nicole Hemsoth at The Next Platform.
As readers of The Next Platform are well aware, Hewlett Packard Enterprise is staking a lot of the future of its systems business on The Machine, which embodies the evolving concepts for disaggregated and composable systems that are heavy on persistent storage that sometimes functions like shared memory, on various kinds of compute, and on the interconnects between the two.
To get a sense of how The Machine might do on in-memory workloads that normally run on clusters that have their memory distributed, researchers at HPE Labs have fired up the Spark in-memory framework on a Superdome X shared …
Spark On Superdome X Previews In-Memory On The Machine was written by Timothy Prickett Morgan at The Next Platform.
Switch chips have very long technical and economic lives, considerably longer than those of a Xeon processor used in a server – something on the order of seven or eight years compared to three or four. As it turns out, the various GPUs used in Nvidia’s Tesla accelerators look like they, too, will have very long technical and economic lives.
Even after a new technology is introduced, sometimes the old one can be had at a much lower price and therefore continues to be a good price/performer even after it has been presumably obsoleted by an improved product. Its …
Nvidia Not Sunsetting Tesla Kepler And Maxwell GPUs Just Yet was written by Timothy Prickett Morgan at The Next Platform.
There are two things that underdogs have to do to take a big bite out of a market. First, they have to tell prospective customers precisely what the plan is to develop future products, and then they have to deliver on that roadmap. The OpenPower collective behind the Power chip did the first thing at its eponymous summit in San Jose this week, and now it is up to the OpenPower partners to do the hard work of finishing the second.
Getting a chip as complex as a server processor into the field, along with its chipsets and memory …
IBM Unfolds Power Chip Roadmap Out Past 2020 was written by Timothy Prickett Morgan at The Next Platform.
After close to twenty years at IBM, where he began as an IBM Fellow and Chief Architect for the SOA Foundation, Rob High has developed a number of core technologies that back Big Blue’s enterprise systems, including the suite of tools behind IBM WebSphere, and more recently, those that support the wide-ranging ambitions of the Watson cognitive computing platform.
Although High gave the second day keynote this afternoon at the GPU Technology Conference, there was no mention of accelerated computing. Interestingly, while the talk was about software, specifically the machine learning behind Watson, there was also very little about the …
IBM Watson CTO on What’s Ahead for Cognitive Computing was written by Nicole Hemsoth at The Next Platform.