
Category Archives for "The Next Platform"

Argonne Hints at Future Architecture of Aurora Exascale System

There are two supercomputers named “Aurora” affiliated with Argonne National Laboratory – the one that was supposed to be built this year, and the one that for a short time last year was known as “A21,” which will be built in 2021 and will be the first exascale system built in the United States.

Details have just emerged on the second, and now only important, Aurora system, thanks to Argonne opening up the proposal process for the early science program that lets researchers put code on the supercomputer for three months before it starts its production work. The proposal

Argonne Hints at Future Architecture of Aurora Exascale System was written by Timothy Prickett Morgan at The Next Platform.

FPGA Maker Xilinx Says the Future of Computing is ACAP

The field programmable gate array space is heating up, with new use cases driven by everything from emerging network and IoT trends to application acceleration. Keeping ahead of the curve means expanding on devices that have fairly steady improvement cycles, so the few companies at the top need to get creative to stay competitive.

Xilinx and Altera – which was bought by Intel in 2015 for $16.7 billion – have been the top vendors of FPGAs, which can be programmed and reprogrammed, giving organizations the ability to adapt the processors to the varying workloads running on their systems. The high price

FPGA Maker Xilinx Says the Future of Computing is ACAP was written by Jeffrey Burt at The Next Platform.

How Spectre And Meltdown Mitigation Hits Xeon Performance

It has been more than two months since Google revealed its research on the Spectre and Meltdown speculative execution security vulnerabilities in modern processors, and caused the whole IT industry to slam on the brakes and brace for the impact. The initial microbenchmark results on the mitigations for these security holes, put out by Red Hat, showed the impact could be quite dramatic. But according to recent tests done by Intel, the impact is not as bad as one might think in many cases. In other cases, the impact is quite severe.
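For readers who want to see which of these mitigations are actually enabled on a given Linux machine before weighing their cost, reasonably recent kernels report the status through sysfs. The short sketch below is generic Python, not part of Intel’s or Red Hat’s test setups, and simply prints whatever the kernel exposes.

```python
import pathlib

# On reasonably recent Linux kernels, the active speculative-execution
# mitigations are reported under sysfs; this just prints that status.
vuln_dir = pathlib.Path("/sys/devices/system/cpu/vulnerabilities")
if vuln_dir.is_dir():
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")
else:
    print("sysfs vulnerability reporting not available on this kernel")
```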

The Next Platform has gotten its hands on

How Spectre And Meltdown Mitigation Hits Xeon Performance was written by Timothy Prickett Morgan at The Next Platform.

IBM Unwinds Tangled Data for Enterprise AI

These days, organizations are creating and storing massive amounts of data, and in theory this data can be used to drive business decisions through application development, particularly with new techniques such as machine learning. Data is arguably the most important asset, and it is also probably the most difficult thing to manage. Well, excepting people.

Data is a tangled mess. It can be structured or unstructured, and it is increasingly scattered across different locations – in on-premises infrastructure, in a public cloud, on a mobile device. It is a challenge to move, thanks to the costs in everything from bandwidth to

IBM Unwinds Tangled Data for Enterprise AI was written by Jeffrey Burt at The Next Platform.

Getting AI Leverage With GPU-Optimized Systems

The artificial intelligence revolution is quickly changing every industry, and modern data centers must be equipped to capitalize on these extraordinary new capabilities. Hewlett Packard Enterprise (HPE) and Nvidia are partnering to bring best-of-breed AI solutions to every customer, offering AI-integrated systems, services, and support capabilities to help all organizations seamlessly optimize their AI foundation, deliver differentiated outcomes, and gain competitive advantage.

High performance computing has become key to solving many of the world’s grand challenges in the realms of science, industry, and engineering. However, traditional CPUs are increasingly failing to deliver the performance gains they used to, and the

Getting AI Leverage With GPU-Optimized Systems was written by Timothy Prickett Morgan at The Next Platform.

Practical Computational Balance: Contending with Unplanned Data

In part one of our series on reaching computational balance, we described how computational complexity is increasing logarithmically. Unfortunately, data and storage follow an identical trend.

The challenge of balancing compute and data at scale remains constant. Because providers and consumers don’t have access to “the crystal ball of demand prediction”, the appropriate computational response to vast, unpredictable amounts of highly variable complex data becomes unintentionally unplanned.

We must address computational balance in a world barraged by vast and unplanned data.

Before starting any discussion of data balance, it is important to first remind ourselves of scale.  Small

Practical Computational Balance: Contending with Unplanned Data was written by James Cuff at The Next Platform.

Using Python to Snake Closer to Simplified Deep Learning

On today’s episode of “The Interview” with The Next Platform, we discuss the role of higher level interfaces to common machine learning and deep learning frameworks, including Caffe.

Despite the existence of multiple deep learning frameworks, there is a lack of comprehensible and easy-to-use high-level tools for the design, training, and testing of deep neural networks (DNNs), according to this episode’s guest, Soren Klemm, one of the creators of the Python-based Barista, an open-source graphical high-level interface for the Caffe framework.
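To give a sense of what a tool like Barista abstracts away, here is a rough sketch, assuming a working pycaffe installation, of how Caffe’s own net_spec module generates the prototxt text that would otherwise be hand-edited; the data source path and layer sizes are illustrative only.

```python
# A minimal sketch, assuming pycaffe is installed: a Caffe net is ultimately
# a protobuf text description (prototxt), which net_spec can emit from Python.
import caffe
from caffe import layers as L, params as P

n = caffe.NetSpec()
n.data, n.label = L.Data(batch_size=64, backend=P.Data.LMDB,
                         source="train_lmdb",            # hypothetical path
                         transform_param=dict(scale=1. / 255), ntop=2)
n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=20,
                        weight_filler=dict(type="xavier"))
n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)
n.fc1   = L.InnerProduct(n.pool1, num_output=500,
                         weight_filler=dict(type="xavier"))
n.relu1 = L.ReLU(n.fc1, in_place=True)
n.score = L.InnerProduct(n.relu1, num_output=10,
                         weight_filler=dict(type="xavier"))
n.loss  = L.SoftmaxWithLoss(n.score, n.label)

# Emits the prototxt text that would otherwise be edited by hand.
print(n.to_proto())
```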

While Caffe is one of the most popular frameworks for training DNNs, editing prototxt files in

Using Python to Snake Closer to Simplified Deep Learning was written by Nicole Hemsoth at The Next Platform.

Japan Invests in Fusion Energy Future with New Cray Supercomputer

There are a number of key areas where exascale computing power will be required to turn simulations into real-world good. One of these is fusion energy research with the ultimate goal of building efficient plants that can safely deliver hundreds of megawatts of clean, renewable fusion energy.

Japan has announced that it will install Cray’s top-end XC50 supercomputer at the Rokkasho Fusion Institute.

The new system will deliver four petaflops, more than double the capability of Helios, the current machine for international collaborations in fusion energy, which was built by European supercomputer maker Bull. The Helios system

Japan Invests in Fusion Energy Future with New Cray Supercomputer was written by Nicole Hemsoth at The Next Platform.

Volkswagen Refining Machine Learning on D-Wave System

Researchers at Volkswagen have been at the cutting edge of implementing D-Wave quantum computers for a number of complex optimization problems, including traffic flow optimization, as well as other potential use cases.

These efforts are generally focused on developing algorithms suitable for the company’s recently purchased 2000-qubit quantum system and have expanded to a range of new machine learning possibilities, including what a research team at the company’s U.S. R&D office and the Volkswagen Data:Lab in Munich are calling quantum-assisted cluster analysis.
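As a point of reference for the classical baseline that such quantum-assisted approaches rework, a bare-bones clustering routine can be sketched in a few lines of NumPy; this is a generic k-means example, not Volkswagen’s code and nothing D-Wave specific.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: the classical clustering baseline."""
    rng = np.random.default_rng(seed)
    # Initialize centroids by picking k distinct random points.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

# Toy usage: two well-separated blobs of points in the plane.
points = np.vstack([np.random.randn(50, 2) + [0, 0],
                    np.random.randn(50, 2) + [5, 5]])
labels, centroids = kmeans(points, k=2)
```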

The art and science of clustering is well known for machine learning on classical computing architectures, but the VW approach

Volkswagen Refining Machine Learning on D-Wave System was written by Nicole Hemsoth at The Next Platform.

Open Source Data Management for All

On today’s episode of “The Interview” with The Next Platform, we talk about an open source data management platform (and related standards group) called iRODS, which many in scientific computing already know—but which also has applicability in the enterprise.

We found that several of our readers had heard of iRODS and knew it was associated with a scientific computing base, but few understood what the technology actually was or were aware that there was a consortium behind it. To dispel any confusion, we spoke with Jason Coposky, executive director of the iRODS Consortium, about both the technology itself and the group’s role

Open Source Data Management for All was written by Nicole Hemsoth at The Next Platform.

Networks Within Networks: Optimization at Massive Scale

On today’s episode of “The Interview” with The Next Platform we talk about the growing problem of networks within networks (within networks) and what that means for future algorithms and systems that will support smart cities, smart grids, and other highly complex and interdependent optimization problems.

Our guest on this audio interview episode (player below) is Hadi Amini, a researcher at Carnegie Mellon who has focused on the interdependency of many factors for power grids and smart cities in a recent book series on these and related interdependent network topics. Here, as in the podcast, the focus is on the

Networks Within Networks: Optimization at Massive Scale was written by Nicole Hemsoth at The Next Platform.

Sandia, NREL Look to Aquarius to Cool HPC Systems

The idea of bringing liquids into the datacenter to cool off hot-running systems and components has often unnerved many in the IT field. Organizations are doing it anyway as they look for more efficient and cost-effective ways to run their infrastructures, particularly as workloads become larger and more complex, more compute resources are needed, parts like processors become more powerful, and density increases.

But the concept of running water and other liquids through a system, and the threat of those liquids leaking into components and into the datacenter, has long created uneasiness.

Still, the growing demands

Sandia, NREL Look to Aquarius to Cool HPC Systems was written by Jeffrey Burt at The Next Platform.

Changing HPC Workloads Mean Tighter Storage Stacks for Panasas

Changes to workloads in HPC mean alterations are needed up and down the stack—and that certainly includes storage. Traditionally these workloads were dominated by large file handling needs, but as newer applications (OpenFOAM is a good example) bring small file and mixed workload requirements to the HPC environment, storage approaches need to shift to meet them.

With these changing workload demands in mind, recall that in the first part of our series on future directions for storage for enterprise HPC shops we focused on the ways open source parallel file systems like Lustre fall short for users

Changing HPC Workloads Mean Tighter Storage Stacks for Panasas was written by Nicole Hemsoth at The Next Platform.

FPGA Interconnect Boosted In Concert With Compute

To keep their niche in computing, field programmable gate arrays not only need to stay on the cutting edge of chip manufacturing processes, but also have to include the most advanced networking to balance out that compute, rivaling what the makers of switch ASICs provide in their chips.

By comparison, CPUs have it easy. They don’t have the serializer/deserializer (SerDes) circuits that switch chips have as the foundation of their switch fabric. Rather, they might have a couple of integrated Ethernet network interface controllers embedded on the die, maybe running at 1 Gb/sec or 10 Gb/sec, and they offload

FPGA Interconnect Boosted In Concert With Compute was written by Timothy Prickett Morgan at The Next Platform.

Why Cisco Should – And Should Not – Acquire Pure Storage

Flash memory has become absolutely normal in the datacenter, but that does not mean it is ubiquitous and it most certainly does not mean that all flash arrays, whether homegrown and embedded in servers or purchased as appliances, are created equal. They are not, and you can tell not only from the feeds and speeds, but from the dollars and sense.

It has been nine years since Pure Storage, one of the original flash array upstarts, was founded and seven years since the company dropped out of stealth with its first generation of FlashArray products. In that relatively short time,

Why Cisco Should – And Should Not – Acquire Pure Storage was written by Timothy Prickett Morgan at The Next Platform.

Drilling Down Into Ethernet Switch Trends

Of the three pillars of the datacenter – compute, storage, and networking – the one that consistently still has some margins and yet does not dominate the overall system budget is networking. While these elements affect each other, they are still largely standalone realms, with their own specialized devices and suppliers. And so it is important to know the trends in the technologies.

Until fairly recently, the box counters like IDC and Gartner have been pretty secretive about the data they have about the networking business. But IDC has been gradually giving a little more flavor than just saying Cisco

Drilling Down Into Ethernet Switch Trends was written by Timothy Prickett Morgan at The Next Platform.

Pushing Greater Stream Processing Platform Evolution

Today’s episode of “The Interview” with The Next Platform is focused on the evolution of stream processing—from the early days to more recent times with vast volumes of social, financial, and other data challenging data analysts and systems designers alike.

Our guest is Nathan Trueblood, a veteran of companies including Mirantis, Western Digital, and EMC, and currently VP of product management at DataTorrent—a company made up of many ex-Yahoo employees who worked with the Hadoop platform and have pushed the evolution of that framework to include more real-time requirements with Apache Apex.

Trueblood’s career has roots in high performance computing

Pushing Greater Stream Processing Platform Evolution was written by Nicole Hemsoth at The Next Platform.

Spinning the Bottleneck for Data, AI, Analytics and Cloud

High performance computing experts came together recently at Stanford for their annual HPC Advisory Council Meeting to share strategies after what has been an interesting year in supercomputing thus far. 

As always, there was a vast amount of material covering everything from interconnects to containerized compute. In the midst of this, The Next Platform noted an obvious and critical thread over the two days – how best to map infrastructure to software in order to reduce the “computational back pressure” associated with new “data heavy” AI workloads.
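As a loose illustration of the concept (generic Python, not code from any system shown at the meeting), a bounded queue between a fast producer and a slower consumer is the simplest way to see back pressure in action: when the downstream stage cannot keep up, the upstream stage is forced to wait.

```python
import queue
import threading
import time

# A bounded queue between a fast producer and a slow consumer. When the
# queue fills, put() blocks, which is exactly the "back pressure" that
# throttles the upstream stage to the pace of the bottleneck.
work = queue.Queue(maxsize=4)

def producer():
    for i in range(16):
        work.put(i)            # blocks whenever the consumer falls behind
        print(f"produced {i}")
    work.put(None)             # sentinel to stop the consumer

def consumer():
    while True:
        item = work.get()
        if item is None:
            break
        time.sleep(0.1)        # simulate the slow (bottleneck) stage
        print(f"consumed {item}")

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```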

In the “real world,” back pressure results from a bottleneck as opposed to desired

Spinning the Bottleneck for Data, AI, Analytics and Cloud was written by James Cuff at The Next Platform.

Expanding Use Cases Mean Tape Storage is Here to Stay

On today’s episode of “The Interview” with The Next Platform we talk about the past, present, and future of tape storage with industry veteran Matt Starr.

Starr is CTO at tape giant Spectra Logic and has been with the company for almost twenty-five years. He was the lead engineer and architect for the design and production of Spectra’s enterprise tape library family, which is still a core product.

We talk about some of the key evolutions in tape capacity and access speeds over the course of his career before moving on to where the new use cases at massive scale are. In

Expanding Use Cases Mean Tape Storage is Here to Stay was written by Nicole Hemsoth at The Next Platform.

Leverage Extreme Performance with GPU Acceleration

Hewlett Packard Enterprise (HPE) and NVIDIA have partnered to accelerate innovation, combining the extreme compute capabilities of high performance computing (HPC) with the groundbreaking processing power of NVIDIA GPUs.

In this fast-paced digital climate, traditional CPU technology is no longer sufficient to support growing data centers. Many enterprises are struggling to keep pace with escalating compute and graphics requirements, particularly as computational models become larger and more complex. NVIDIA GPU accelerators for HPC seamlessly integrate with HPE servers to achieve greater speed, optimal power efficiency, and dramatically higher application performance than CPUs. High-end data centers rely on these high performance

Leverage Extreme Performance with GPU Acceleration was written by Nicole Hemsoth at The Next Platform.