Category Archives for "The Next Platform"

Stretching the Business of Tape Storage to Extreme Scale

IPOs and major investments in storage startups are one thing, but when it comes to a safe tech company investment, all bets are still on tape.

The rumors of tape’s death are greatly exaggerated, but there have been some changes to the market. While the number of installed sites might be shrinking for long-time tape storage maker SpectraLogic, the installation sizes of its remaining customers keep growing, which produces a nice uptick in revenue for the company, according to its CTO, Matt Starr.

This makes sense, since relatively small backups and archives are a better performance fit on disk—and many

Stretching the Business of Tape Storage to Extreme Scale was written by Nicole Hemsoth at The Next Platform.

FPGAs, OpenHMC Push SKA HPC Processing Capabilities

Astronomy is the oldest research arena, but the technologies required to process the massive amounts of data created by radio telescope arrays represent some of the most bleeding-edge research in modern computer science.

With an exabyte of data expected to stream off the Square Kilometer Array (SKA), teams from both the front and back ends of the project have major challenges ahead. One “small” part of that larger picture of seeing farther into the universe than ever before is moving the data from the various distributed telescopes into a single unified platform and data format. This means transferring data from

FPGAs, OpenHMC Push SKA HPC Processing Capabilities was written by Nicole Hemsoth at The Next Platform.

Knights Landing System Development Targets Dark Matter Study

Despite the best efforts of leading cosmologists, the nature of dark energy and dark matter – which comprise approximately 95% of the total mass-energy content of the universe – is still a mystery.

Dark matter remains undetected even with all the different methods that have been employed so far to directly find it. The origin of dark energy is one of the greatest puzzles in physics. Cosmologist Katrin Heitmann, PI of an Aurora Early Science Program effort at the Argonne Leadership Computing Facility (ALCF), and her team are conducting research to shed some light on the dark universe.

“The reach

Knights Landing System Development Targets Dark Matter Study was written by Nicole Hemsoth at The Next Platform.

Early Benchmarks on Argonne’s New Knights Landing Supercomputer

We are heading into International Supercomputing Conference (ISC) week, and as such, there are several new items of interest from the HPC side of the house.

As far as supercomputer architectures go for mid-2017, we can expect to see a lot of new machines with Intel’s Knights Landing architecture, perhaps a scattered few finally adding Nvidia K80 GPUs as an upgrade from older generation accelerators (for those who are not holding out for Volta with NVLink a la the Summit supercomputer), and of course, it remains to be seen what happens with the Tianhe-2 and Sunway machines in China in

Early Benchmarks on Argonne’s New Knights Landing Supercomputer was written by Nicole Hemsoth at The Next Platform.

Clever RDMA Technique Delivers Distributed Memory Pooling

More databases and data stores, and the applications that run atop them, are moving to in-memory processing, and sometimes the memory capacity of a single big iron NUMA server isn’t enough while the latencies across a cluster of smaller nodes are too high for decent performance.

For example, server memory capacity tops out at 48 TB in the Superdome X server and at 64 TB in the UV 300 server from Hewlett Packard Enterprise using NUMA architectures. HPE’s latest iteration of The Machine packs 160 TB of shared memory capacity across its nodes, and has an early version of

Clever RDMA Technique Delivers Distributed Memory Pooling was written by Timothy Prickett Morgan at The Next Platform.

Unifying Massive Data at Cloud Scale

Enterprises continue to struggle with the issue of data: how to process and move the massive amounts that are coming in from multiple sources, how to analyze the different types of data to best leverage its capabilities, and how to store and unify it across various environments, including on-premises infrastructure and cloud environments. A broad array of major storage players, such as Dell EMC, NetApp, and IBM, are building out their offerings to create platforms that can do a lot of those things.

MapR Technologies, which made its bones with its commercial Hadoop distribution, is moving in a similar direction.

Unifying Massive Data at Cloud Scale was written by Jeffrey Burt at The Next Platform.

The Composable Systems Wave Is Rising

Hardware is, by its very nature, physical, and therefore, unlike software or the virtual hardware and software routines encoded in FPGAs, it is the one thing that cannot be easily changed. The dream of composable systems, which we have discussed in the past, has been swirling around in the heads of system architects for more than a decade, and we are without question getting closer to realizing it: making the components of systems, and the clusters created from them, programmable like software.

The hyperscalers, of course, have been on the bleeding edge of

The Composable Systems Wave Is Rising was written by Timothy Prickett Morgan at The Next Platform.

One Hyperscaler Gets The Jump On Skylake, Everyone Else Sidelined

Well, it could have been a lot worse. About 5.6 percent worse, if you do the math.

As we here at The Next Platform have been anticipating for quite some time, with so many stars aligning here in 2017 and a slew of server processor and GPU coprocessor announcements and deliveries expected starting in the summer and rolling into the fall, there is indeed a slowdown in the server market and one that savvy customers might be able to take advantage of. But we thought those on the bleeding edge of performance were going to wait to see what Intel,

One Hyperscaler Gets The Jump On Skylake, Everyone Else Sidelined was written by Timothy Prickett Morgan at The Next Platform.

When No One Can Make Money In Systems

Making money in the information technology market has always been a challenge, but it keeps getting increasingly difficult as the tumultuous change in how companies consume compute, storage, and networking rips through all aspects of this $3 trillion market.

It is tough to know exactly what to do, and we see companies chasing the hot new things, doing acquisitions to bolster their positions, and selling off legacy businesses to generate the cash to do the deals and to keep Wall Street at bay. Companies like IBM, Dell, HPE, and Lenovo have sold things off and bought other things to try

When No One Can Make Money In Systems was written by Timothy Prickett Morgan at The Next Platform.

HPE Looks Ahead To Composable Infrastructure, Persistent Memory

Over the past several years, the server market has been roiled by the rise of cloud computing, which runs the applications created by companies, and by services offered by hyperscalers that augment or replace such applications. This is a tougher and lumpier market, to be sure.

The top tier cloud providers like Amazon, Microsoft, and Google not only have become key drivers in server sales but also have turned to original design manufacturers (ODMs) from Taiwan and China for lower cost systems to help populate their massive datacenters. Overall, global server shipments have slowed, and top-tier OEMs are working to

HPE Looks Ahead To Composable Infrastructure, Persistent Memory was written by Jeffrey Burt at The Next Platform.

Python Coils Around FPGAs for Broader Accelerator Reach

Over the last couple of years, much work has gone into making FPGAs more usable and accessible. From building around OpenCL for a higher-level interface to having reconfigurable devices available on AWS, there is momentum—but FPGAs are still out of reach for everyday scientific and technical application developers.

In an effort to bridge the gap between FPGA acceleration and everyday domain scientists who are well-versed in the common scripting language, a team from the University of Southern California has created a new platform for Python-based development that abstracts away the complexity of low-level approaches (HDL, Verilog). “Rather

Python Coils Around FPGAs for Broader Accelerator Reach was written by Nicole Hemsoth at The Next Platform.
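As a rough sketch of the kind of abstraction the article above describes, the snippet below shows how a domain scientist might express a kernel in ordinary Python and hand it to an offload decorator. The decorator here is a hypothetical stand-in written purely for illustration (it simply runs the function on the CPU); it is not the USC team's actual API.

    import numpy as np

    # Hypothetical stand-in for a Python-to-FPGA offload decorator. In this
    # sketch it just calls the function on the CPU; a real toolchain would
    # compile the body to hardware and dispatch it to the accelerator.
    def fpga_kernel(func):
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)  # CPU fallback in this illustration
        return wrapper

    @fpga_kernel
    def saxpy(a, x, y):
        # Plain NumPy arithmetic; the scientist never touches Verilog or HDL.
        return a * x + y

    x = np.arange(1024, dtype=np.float32)
    y = np.ones(1024, dtype=np.float32)
    print(saxpy(2.0, x, y)[:4])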

Machine Learning on Stampede2 Supercomputer to Bolster Brain Research

In our ongoing quest to understand the human mind and banish abnormalities that interfere with life, we’ve always drawn upon the most advanced science available. During the last century, neuroimaging – most recently, the magnetic resonance imaging (MRI) scan – has held the promise of showing the connection between brain structure and brain function.

Just last year, cognitive neuroscientist David Schnyer and colleagues Peter Clasen, Christopher Gonzalez, and Christopher Beevers published a compelling new proof of concept in Psychiatry Research: Neuroimaging. It suggests that machine learning algorithms running on high-performance computers to classify neuroimaging data may deliver the most

Machine Learning on Stampede2 Supercomputer to Bolster Brain Research was written by Nicole Hemsoth at The Next Platform.
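For readers unfamiliar with the workflow sketched above, a minimal illustration of classifying neuroimaging-style data follows. The feature matrix is synthetic, and scikit-learn's linear support vector machine stands in for whatever classifier the researchers actually used; only the general shape of the problem (many features, few subjects, cross-validated accuracy) is meant to carry over.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Synthetic stand-in for features extracted from MRI scans:
    # 200 subjects by 5,000 voxel-level measurements, with binary labels.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 5000))
    y = rng.integers(0, 2, size=200)

    # A linear SVM is a common choice for high-dimensional, low-sample
    # neuroimaging data; cross-validation guards against overfitting.
    clf = SVC(kernel="linear", C=1.0)
    scores = cross_val_score(clf, X, y, cv=5)
    print("mean cross-validated accuracy:", scores.mean())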

Unifying Oil and Gas Data at Scale

The oil and gas industry has been among the most aggressive in pursuing internet of things (IoT), cloud and big data technologies to collect, store, sort and analyze massive amounts of data in both the drilling and refining sectors to improve efficiencies and decision-making capabilities. Systems are increasingly becoming automated, sensors are placed throughout processes to send back data on the various systems, and software has been put in place to crunch the data into useful information.

According to a group of researchers from Turkey, the oil and gas industry is well suited to embrace all the new

Unifying Oil and Gas Data at Scale was written by Jeffrey Burt at The Next Platform.

Logistics in Application Path of Neural Networks

Accurately forecasting resource demand within the supply chain has never been easy, particularly given the constantly changing nature of the data over time.

What may have been true in measurements around demand or logistics one minute might be entirely different an hour, day, or week later, which can throw off a short-term load forecast (STLF) and lead to costly over- or under-estimations that in turn result in too much or too little supply.

To improve such forecasts, there are multiple efforts underway to create new models that can more accurately predict load needs, and while they have

Logistics in Application Path of Neural Networks was written by Jeffrey Burt at The Next Platform.
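To make the STLF framing above concrete, here is a minimal sketch that treats the previous 24 hours of a synthetic load series as the input for predicting the next hour. The sliding-window setup and the small multilayer perceptron are illustrative assumptions, not the models described in the article.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Synthetic hourly load series: a daily cycle plus noise, standing in for
    # demand measurements that drift from hour to hour and week to week.
    hours = np.arange(24 * 60)
    rng = np.random.default_rng(1)
    load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

    # Frame STLF as supervised learning: the previous 24 hours predict the next hour.
    window = 24
    X = np.array([load[i:i + window] for i in range(load.size - window)])
    y = load[window:]

    # A small neural network stands in for the forecasting models under study.
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    model.fit(X[:-100], y[:-100])

    # Over- or under-forecasting here maps directly to too much or too little supply.
    errors = np.abs(model.predict(X[-100:]) - y[-100:])
    print("held-out mean absolute error:", errors.mean())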

HPC Center NERSC Eases Path to Optimization at Scale

The National Energy Research Scientific Computing Center (NERSC) application performance team knows that for many users, “optimization is hard.” They’ve thought a lot about how to distill the application optimization process for users in a way that would resonate with them.

One of the analogies they use is the “Ant Farm.” Optimizing code is like continually “running a lawnmower over a lawn to find and cut down the next tallest blade of grass,” where the blade of grass is analogous to a code bottleneck that consumes the greatest amount of runtime. One of the challenges is that each bottleneck

HPC Center NERSC Eases Path to Optimization at Scale was written by Nicole Hemsoth at The Next Platform.
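The “tallest blade of grass” in the analogy above is simply whatever routine currently dominates runtime, and the first step is always to measure rather than guess. A minimal sketch using Python's built-in profiler (the toy functions below are stand-ins for real application hot spots, not anything from NERSC's workflow):

    import cProfile
    import pstats

    # Two toy routines standing in for real application code; one dominates runtime.
    def tall_blade():
        return sum(i * i for i in range(2_000_000))

    def short_blade():
        return sum(range(10_000))

    def application():
        for _ in range(3):
            tall_blade()
            short_blade()

    # Profile a run and list functions by cumulative time, tallest first;
    # the top entry is the next bottleneck to mow down.
    profiler = cProfile.Profile()
    profiler.runcall(application)
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)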

Memory-Like Storage Means File Systems Must Change

The term software-defined storage is in the new job title that Eric Barton has at DataDirect Networks, and he is a bit amused by this. As one of the creators of early parallel file systems for supercomputers and one of the people who took the Lustre file system from a handful of supercomputing centers to one of the two main data management platforms for high performance computing, to a certain way of looking at it, Barton has always been doing software-defined storage.

The world has just caught up with the idea.

Now Barton, who is leaving Intel in the

Memory-Like Storage Means File Systems Must Change was written by Timothy Prickett Morgan at The Next Platform.

A Health Check For Code And Infrastructure In The Cloud

As businesses continue their migration to the cloud, the issue of monitoring the performance and health of their applications gets more challenging as they try to track them across on-premises environments and both private and public clouds. At the same time, as they become more cloud-based, they have to keep an eye on the entire stack, from the customer-facing applications to the underlying infrastructure they run on.

Since its founding eight years ago, New Relic has steadily built upon its first product, a cloud-based application performance management (APM) tool that is designed to assess how well the

A Health Check For Code And Infrastructure In The Cloud was written by Jeffrey Burt at The Next Platform.

One Programming Model To Support Them All

Many hands make light work, or so they say. So do many cores, many threads and many data points when addressed by a single computing instruction. Parallel programming – writing code that breaks down computing problems into small parts that run in unison – has always been a challenge. Since 2011, OpenACC has been gradually making it easier. OpenACC is a de facto standard set of parallel extensions to Fortran, C and C++ designed to enable portable parallel programming across a variety of computing platforms.

Compilers, which are used to translate higher-level programming languages into binary executables, first appeared in

One Programming Model To Support Them All was written by Nicole Hemsoth at The Next Platform.

The Last Itanium, At Long Last

In a world of survival of the fittest coupled with mutations, something always has to be the last of its kind. And so it is with the “Kittson” Itanium 9700 processors, which Intel quietly released earlier this month and which will mostly see action in the last of the Integrity line of midrange and high-end systems from Hewlett Packard Enterprise.

The Itanium line has a complex history, perhaps fitting for a computing architecture that was evolving from the 32-bit X86 architecture inside of Intel and that was taken in a much more experimental and bold direction when the aspiring server

The Last Itanium, At Long Last was written by Timothy Prickett Morgan at The Next Platform.

Some Surprises in the 2018 DoE Budget for Supercomputing

The US Department of Energy fiscal year 2018 budget request is in. While it reflects much of what we might expect in pre-approval format in terms of forthcoming supercomputers in particular, there are some elements that strike us as noteworthy.

In the just-released 2018 FY budget request from Advanced Scientific Computing Research (ASCR), page eight of the document states that “The Argonne Leadership Computing Facility will operate Mira (at 10 petaflops) and Theta (at 8.5 petaflops) for existing users, while turning focus to site preparations for deployment of an exascale system of novel architecture.”

Notice anything missing in this description?

Some Surprises in the 2018 DoE Budget for Supercomputing was written by Nicole Hemsoth at The Next Platform.