Archive

Category Archives for "The Next Platform"

BSC’s Mont Blanc 3 Puts ARM Inside Bull Sequana Supers

The HPC industry has been waiting a long time for the ARM ecosystem to mature enough to yield real-world clusters, with hundreds or thousands of nodes and running a full software stack, as a credible alternative to clusters based on X86 processors. But the wait is almost over, particularly if the Mont-Blanc 3 system that will be installed by the Barcelona Supercomputing Center is any indication.

BSC has not been shy about trying new architectures in its clusters, and the original Mare Nostrum super that was installed a decade ago and that ranked fifth on the Top 500 list when it

BSC’s Mont Blanc 3 Puts ARM Inside Bull Sequana Supers was written by Timothy Prickett Morgan at The Next Platform.

The New IBM Glass Is Almost Half Full

It takes an incredible amount of resilience for any company to make it decades, much less more than a century, in any industry. IBM has taken big risks to create new markets, first with time clocks and meat slicers and tabulating machines early in the last century, and some decades later it created the modern computer industry with the System/360 mainframe. It survived a near-death experience in the middle 1990s when the IT industry was changing faster than it was, and now it is trying to find its footing in cognitive computing and public and private clouds as its legacy

The New IBM Glass Is Almost Half Full was written by Timothy Prickett Morgan at The Next Platform.

HPE Gets Serious About Hyperconverged Storage With SimpliVity Buy

Rumors have been running around for months that Hewlett Packard Enterprise was shopping around for a way to be a bigger player in the hyperconverged storage arena, and the recent scuttlebutt was that HPE was considering paying close to $4 billion for one of the larger players in server-storage hybrids. This turns out not to be true. HPE is paying only $650 million to snap up what was, until now, thought to be one of Silicon Valley’s dozen or so unicorns with a valuation of over $1 billion.

It is refreshing to see that HPE is not overpaying for an

HPE Gets Serious About Hyperconverged Storage With SimpliVity Buy was written by Timothy Prickett Morgan at The Next Platform.

The Essence Of Multi-Threaded Applications

In the prior two articles in this series, we have gone through the theory behind programming multi-threaded applications, with the management of shared memory being accessed by multiple threads, and even the creation of those threads in the first place. Now, we need to put one such multi-threaded application together and see how it works. You will find that the pieces fall together remarkably easily.

If we wanted to build a parallel application using multiple threads, we would likely first think of one where we split up a loop amongst the threads. We will be looking at such later in a
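The loop-splitting pattern the excerpt describes can be sketched in a few lines. This is an illustrative example, not code from the article series; the workload (summing an array) and the helper name are assumptions for demonstration:

```python
# Hypothetical sketch of splitting a loop across threads (not from the
# article series): each thread sums one contiguous slice of the data,
# and the partial results are combined at the end.
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, num_threads=4):
    # Divide the index range of the loop into roughly equal chunks.
    chunk = (len(data) + num_threads - 1) // num_threads
    slices = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        # Each thread handles one slice; no thread writes to shared state.
        return sum(pool.map(sum, slices))

print(parallel_sum(list(range(1000))))  # same result as sum(range(1000))
```

Because each thread only reads its own slice and returns a private partial sum, no locking of shared memory is needed until the final combination step.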

The Essence Of Multi-Threaded Applications was written by Timothy Prickett Morgan at The Next Platform.

Adapting InfiniBand for High Performance Cloud Computing

When it comes to low-latency interconnects for high performance computing, InfiniBand immediately springs to mind. On the most recent Top 500 list, over 37 percent of systems used some form of InfiniBand – the highest representation of any interconnect family. Since 2009, InfiniBand has accounted for between 30 and 51 percent of the systems on every Top 500 list.

But when you look to the clouds, InfiniBand is hard to find. Of the three major public cloud offerings (Amazon Web Services, Google Cloud, and Microsoft Azure), only Azure currently has an InfiniBand offering. Some smaller players do as well (ProfitBricks, for example), but it’s

Adapting InfiniBand for High Performance Cloud Computing was written by Nicole Hemsoth at The Next Platform.

On Premises Object Storage Mimics Big Public Clouds

Object storage is not a new concept, but this type of storage architecture is beginning to garner more attention from large organisations as they grapple with the difficulties of managing increasingly large volumes of unstructured data gathered from applications, social media, and myriad other sources.

The properties of object-based storage systems mean that they can scale easily to handle hundreds or even thousands of petabytes of capacity if required. Throw in the fact that object storage can be less costly in terms of management overhead (somewhere around 20 percent so that means needing to buy 20 percent less capacity

On Premises Object Storage Mimics Big Public Clouds was written by Timothy Prickett Morgan at The Next Platform.

FPGA Frontiers: New Applications in Reconfigurable Computing

There is little doubt that this is a new era for FPGAs.

While it is not news that FPGAs have been deployed in many different environments, particularly on the storage and networking side, there are fresh use cases emerging in part due to much larger datacenter trends. Energy efficiency, scalability, and the ability to handle vast volumes of streaming data are more important now than ever before. At a time when traditional CPUs are facing a future where Moore’s Law is less certain and other accelerators and custom ASICs are potential solutions with their own sets of expenses and hurdles,

FPGA Frontiers: New Applications in Reconfigurable Computing was written by Nicole Hemsoth at The Next Platform.

Looking Ahead To The Next Platforms That Will Define 2017

The old Chinese saying, “May you live in interesting times,” is supposed to be a curse, uttered perhaps with a wry smile and a glint in the eye. But it is also, we think, a blessing, particularly in an IT sector that could use some constructive change.

Considering how much change the tech market, and therefore the companies, governments, and educational institutions of the world, have had to endure in the past three decades – and the accelerating pace of change over that period – you might be thinking it would be good to take a breather, to coast for

Looking Ahead To The Next Platforms That Will Define 2017 was written by Nicole Hemsoth at The Next Platform.

The Interplay Of HPC Interconnects And CPU Utilization

Choosing the right interconnect for high-performance compute and storage platforms is critical for achieving the highest possible system performance and overall return on investment.

Over time, interconnect technologies have become more sophisticated and now include more intelligent capabilities (offload engines), which enable the interconnect to do more than just transfer data. An intelligent interconnect can increase system efficiency; an interconnect with offload engines (an offload interconnect) dramatically reduces CPU overhead, allowing more CPU cycles to be dedicated to applications and therefore enabling higher application performance and user productivity.

Today, the interconnect technology has become even more critical than ever before, due to a number

The Interplay Of HPC Interconnects And CPU Utilization was written by Timothy Prickett Morgan at The Next Platform.

Bolstering Lustre on ZFS: Highlights of Continuing Work

The Zettabyte File System (ZFS) has long been supported as a back-end file system for Lustre. But in the last few years it has gained greater importance, likely due to Lustre’s push into the enterprise and the increasing demands by both enterprise and non-enterprise IT to add more reliability and flexibility features to Lustre. As a result, ZFS has had significant traction in recent Lustre deployments.

However, over the last 18 months, a few challenges have been the focus of several open source projects in the Lustre developer community to improve performance, align the enterprise-grade features in ZFS

Bolstering Lustre on ZFS: Highlights of Continuing Work was written by Nicole Hemsoth at The Next Platform.

What Is SMP Without Shared Memory?

This is the second in the series on the essentials of multiprocessor programming. This time around we are going to look at some of the normally little-considered effects of having memory shared by a lot of processors and by the work concurrently executing there.

We start, though, by observing that there has been quite a market for parallelizing applications even when they do not share data. There has been remarkable growth of applications capable of using multiple distributed memory systems for parallelism, and interestingly that very nicely demonstrates the opportunity that exists for using massive compute capacity
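The distributed-memory style the excerpt alludes to can be sketched as follows. This is a hypothetical illustration, not code from the article: it uses threads with explicit message queues to mimic the send/receive pattern of distributed-memory systems, so the workers share no data structures and communicate only through messages:

```python
# Hypothetical sketch (not from the article): workers mimic the
# distributed-memory style -- each receives its input as a message
# carrying a private copy, and sends back only its partial result.
import threading
import queue

def worker(inbox, outbox):
    chunk = inbox.get()          # receive a private copy of the work
    outbox.put(sum(chunk))       # send back only the result

def distributed_sum(data, num_workers=4):
    outbox = queue.Queue()
    step = max(1, len(data) // num_workers)
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    for chunk in chunks:
        inbox = queue.Queue()
        inbox.put(list(chunk))   # the message carries its own copy of the data
        threading.Thread(target=worker, args=(inbox, outbox)).start()
    # Combine the partial results as they arrive; no memory was shared.
    return sum(outbox.get() for _ in chunks)

print(distributed_sum(list(range(100))))  # matches sum(range(100)) == 4950
```

Because no worker touches another worker's data, there is nothing to lock; this is exactly the property that lets such applications scale across machines that share no memory at all.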

What Is SMP Without Shared Memory? was written by Timothy Prickett Morgan at The Next Platform.

Why Google Is Driving Compute Diversity

In the ideal hyperscaler and cloud world, there would be one processor type with one server configuration and it would run any workload that could be thrown at it. Earth is not an ideal world, though, and it takes different machines to run different kinds of workloads.

In fact, if Google is any measure – and we believe that it is – then the number of different types of compute that need to be deployed in the datacenter to run an increasingly diverse application stack is growing, not shrinking. It is the end of the General Purpose Era, which began

Why Google Is Driving Compute Diversity was written by Timothy Prickett Morgan at The Next Platform.

Molecular Dynamics a Next Frontier for FPGA Acceleration

Molecular dynamics codes have a wide range of uses across scientific research and represent a target base for a variety of accelerators and approaches, from GPUs to custom ASICs. The iterative nature of these codes means that on a CPU alone even relatively simple simulations can require massive amounts of compute time, so the push to find ways to bolster performance is strong.

It is not practical for all users to make use of a custom ASIC (as exists on domain-specific machines like those from D.E. Shaw, for instance). Accordingly, this community has looked to a midway step between general

Molecular Dynamics a Next Frontier for FPGA Acceleration was written by Nicole Hemsoth at The Next Platform.

HPE Powers Up The Machine Architecture

Hewlett Packard Enterprise is not just a manufacturer that takes components from Intel and assembles them into systems. The company also has a heritage of innovating, and it was showing off its new datacenter architecture research and development testbed, dubbed The Machine, as 2016 came to a close.

While The Machine had originally attracted considerable attention as a vehicle for HPE to commercialize memristors, it is a much broader architectural testbed. This first generation of hardware can use any standard DDR4 DIMM-based memories, volatile or non-volatile. And while large, non-volatile memory pools are interesting research targets, HPE realizes that it

HPE Powers Up The Machine Architecture was written by Timothy Prickett Morgan at The Next Platform.

The Road Ahead for Deep Learning in Healthcare

While there are some sectors of the tech-driven economy that thrive on rapid adoption of new innovations, other areas become rooted in traditional approaches due to regulatory and other constraints. Despite great advances toward precision medicine goals, the healthcare industry, like other important segments of the economy, is bound by several specific constraints that make it slower to adapt to potentially higher-performing tools and techniques.

Although deep learning is nothing new, its application set is expanding. There is promise for the more mature variants of traditional deep learning (convolutional and recurrent neural networks are the prime examples) to morph

The Road Ahead for Deep Learning in Healthcare was written by Nicole Hemsoth at The Next Platform.

The Essentials Of Multiprocessor Programming

One near constant that you have been seeing in the pages of The Next Platform is that the downside of a slowing rate of increase in the speed of new processors is offset by the upside of having a lot more processing elements in a device. While the performance of programs running on individual processors might not be speeding up as we might like, we instead get huge capacity increases in today’s systems by having tens through many thousands of processors.

You have also seen in these pages that our usual vanilla view of a processor or

The Essentials Of Multiprocessor Programming was written by Timothy Prickett Morgan at The Next Platform.

From Monolith to Microservices

Microservices are big in the tech world these days. The evolutionary heir to service-oriented architecture, microservice-based design is the ultimate manifestation of everything you learned about good application design.

Loosely coupled with high cohesion, microservices are the application embodiment of DevOps principles. So why isn’t everything a microservice now? At the LISA Conference, Anders Wallgren and Avantika Mathur from Electric Cloud gave some insight with their talk “The Hard Truths about Microservices and Software Delivery”.

Perhaps the biggest impediment to the adoption of microservices-based application architecture is an organizational culture that is not supportive. Microservices proponents recommend a team

From Monolith to Microservices was written by Nicole Hemsoth at The Next Platform.

Let There Be Light: The Year in Silicon Photonics

Computing historians may look back on 2016 as the Year of Silicon Photonics. Not because the technology has become ubiquitous – that may yet be years away – but because the long-awaited silicon photonics offerings are now commercially available in networking hardware. While the advancements in networking provided by silicon photonics are indisputable, the real game changer is in the CPU.

For over half a century, Moore’s Law has been the name of the game. The transistor density on chips has been remarkably cooperative in doubling on schedule since Gordon Moore first made his observation in 1965. But Moore’s Law

Let There Be Light: The Year in Silicon Photonics was written by Nicole Hemsoth at The Next Platform.

Arista Gives Tomahawk 25G Ethernet Some XPliant Competition

Processing for server compute got more general purpose over the past two decades, but the market is now seeing a resurgence in built-for-purpose chips. Network equipment makers have made their own specialized chips as well as bought merchant chips of varying kinds to meet very specific switching and routing needs.

Of the upstarts that are competing against industry juggernaut Cisco Systems, Arista Networks stands out as the company that decided from its founding in 2009 to rely only on merchant silicon for switches and to differentiate on speed to market and software functionality and commonality across many different switch ASICs with its

Arista Gives Tomahawk 25G Ethernet Some XPliant Competition was written by Timothy Prickett Morgan at The Next Platform.

All Flash No Longer Has To Compete With Disk

All-flash arrays are still new enough to be somewhat exotic but are – finally – becoming mainstream. As we have been saying for years, there will come a point where enterprises, which have much more complex and unpredictable workloads than hyperscalers and do not need the exabytes of capacity of cloud builders, just throw in the towel with disk drives and move to all flash and be done with it.

To be sure, disk drives will persist for many years to come, but perhaps not for as long as many disk drive makers and flash naysayers had expected –

All Flash No Longer Has To Compete With Disk was written by Timothy Prickett Morgan at The Next Platform.