
Category Archives for "The Next Platform"

Casing The HPC Market Is Hard, And Getting Harder

Markets are always changing. Sometimes information technology is replaced by a new thing, and sometimes it morphs from one thing to another so gradually that it just becomes computing or networking or storage as we know it. For instance, in the broadest sense, all infrastructure will be cloudy, even if it is made up of bare metal machines or those using containers or heavier server virtualization. In a similar way, in the future all high performance computing may largely be a kind of artificial intelligence, bearing little resemblance to the crunch-heavy simulations we are used to.

It has taken two decades for cloud

Casing The HPC Market Is Hard, And Getting Harder was written by Timothy Prickett Morgan at The Next Platform.

The Biggest Shift in Supercomputing Since GPU Acceleration

For years, the pace of change in large-scale supercomputing neatly tracked with the curve of Moore’s Law. As that swell flattens, and as the competitive pressure ticks up to build productive exascale supercomputers in the next few years, HPC has been scrambling to find the silver bullet architecture to reach sustained exascale performance. And as it turns out, there isn’t one.

But there is something else—something that few saw coming three years ago, that has less to do with hardware than with a shift in how we approach massive-scale simulations, and that is happening so fast that too-far-ahead-of-time procurements are

The Biggest Shift in Supercomputing Since GPU Acceleration was written by Nicole Hemsoth at The Next Platform.

Thinking Through The Cognitive HPC Nexus With Big Blue

While there are plenty of things that the members of the high performance computing community do not agree on, there is a growing consensus that machine learning applications will, at least in some way, be part of the workflow at HPC centers that do traditional simulation and modeling.

Some HPC vendors think that HPC and AI systems have either already converged or will soon do so, and others think that the performance demands (both in terms of scale and in time to result) on both HPC and AI will necessitate radically different architectures and therefore distinct systems for these two workloads. IBM,

Thinking Through The Cognitive HPC Nexus With Big Blue was written by Timothy Prickett Morgan at The Next Platform.

Competition Returns To X86 Servers In Epyc Fashion

AMD has been absent from the X86 server market for so long that many of us have gotten into the habit of only speaking about the Xeon server space and how it relates to the relatively modest (in terms of market share, not in terms of architecture and capability) competition that Intel has faced in the past eight years.

Those days are over now that AMD has successfully got its first X86 server chip out the door with the launch of the “Naples” chip, the first in a line of processors that will carry the Epyc brand and, if all

Competition Returns To X86 Servers In Epyc Fashion was written by Timothy Prickett Morgan at The Next Platform.

AMD Winds Up One-Two Compute Punch For Servers

While AMD voluntarily exited the server processor arena in the wake of Intel’s onslaught with the “Nehalem” Xeon processors during the Great Recession, it never stopped innovating with its graphics processors and it kept enough of a hand in smaller processors used in consumer and selected embedded devices to start making money again in PCs and to take the game console business away from IBM’s Power chip division.

Now, after five long years of investing, AMD is poised to get its act together and to storm the glass house with a new line of server processors based on its Zen

AMD Winds Up One-Two Compute Punch For Servers was written by Timothy Prickett Morgan at The Next Platform.

Cray CTO On The Cambrian Compute Explosion

What goes around comes around. After fighting so hard to drive volume economics in the HPC arena with relatively inexpensive X86 clusters over the past twenty years, the industry is finding that those economies of scale are running out of gas. That is why we are seeing an explosion in the diversity of compute, not just in the processor, but in adjunct computing elements that make a processor look smarter than it really is.

The desire of system architects to try everything because it is fun has to be counterbalanced by a desire to make systems that can be manufactured at a price that is

Cray CTO On The Cambrian Compute Explosion was written by Timothy Prickett Morgan at The Next Platform.

Knights Landing Can Stand Alone—But Often Won’t

It is a time of interesting architectural shifts in the world of supercomputing, but one would be hard-pressed to prove that using the mid-year list of the top 500 HPC systems in the world. We are still very much in an X86-dominated world, with a relatively stable number of accelerated systems to spice up the numbers, but there are big changes afoot—as we described in depth when the rankings were released this morning.

This year, the list began to lose one of the designees in its “accelerator/offload” category as the Xeon Phi moves from offload model to host

Knights Landing Can Stand Alone—But Often Won’t was written by Nicole Hemsoth at The Next Platform.

HPC Poised For Big Changes, Top To Bottom

There is a lot of change coming down the pike in the high performance computing arena, but it has not happened yet, and that is reflected in the current Top 500 rankings of supercomputers in the world. But the June 2017 list gives us a glimpse into a future that we think will be as diverse and contentious from an architectural standpoint as in the past.

No one architecture is winning and taking it all, and many different architectures are getting a piece of the budget action. This means HPC is a healthy and vibrant ecosystem and not, like enterprise

HPC Poised For Big Changes, Top To Bottom was written by Timothy Prickett Morgan at The Next Platform.

Is HPE’s “Machine” the Novel Architecture to Fit Exascale Bill?

The exascale effort in the U.S. got a fresh injection with R&D funding set to course through six HPC vendors to develop scalable, reliable, and efficient architectures and components for new systems in the post-2020 timeframe.

However, this investment, coming rather late in the game for machines that need to hit sustained exaflop performance in a 20 to 30 megawatt envelope in less than five years, raises a few questions about potential shifts in what the Department of Energy (DoE) is looking for in next-generation architectures, from changes in the exascale timeline to new focal points on “novel architectures” to solve exascale challenges,
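To put that power envelope in perspective (our back-of-the-envelope arithmetic, not a DoE figure): a sustained exaflop is 10^18 floating point operations per second, so delivering it within 20 megawatts to 30 megawatts implies an efficiency of

    10^18 flops / (2 to 3 × 10^7 watts) ≈ 33 to 50 gigaflops per watt

which is several times better than the most energy efficient systems of mid-2017, which delivered on the order of 10 to 15 gigaflops per watt.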

Is HPE’s “Machine” the Novel Architecture to Fit Exascale Bill? was written by Nicole Hemsoth at The Next Platform.

Tackling Computational Fluid Dynamics in the Cloud

Cloud computing isn’t just for running office productivity software or realising your startup idea. It can support high-performance computing (HPC) applications that crunch through large amounts of data to produce actionable results.

Using elastic cloud resources to process data in this way can have a real business impact. What might one of these applications look like, and how could the cloud support it?

Let’s take buildings as an example. London’s ‘Walkie Talkie’ skyscraper has suffered from a bad rap of late. First it gave the term ‘hot wheels’ a whole new meaning, melting cars by inadvertently focusing the sun’s rays

Tackling Computational Fluid Dynamics in the Cloud was written by Nicole Hemsoth at The Next Platform.

The Memory Scalability At The Heart Of The Machine

Much has been made of the ability of The Machine, the system with the novel silicon photonics interconnect and massively scalable shared memory pool being developed by Hewlett Packard Enterprise, to already address more main memory at once across many compute elements than many big iron NUMA servers can. With the latest prototype, which was unveiled last month, the company was able to address a whopping 160 TB of DDR4 memory.

This is a considerable feat, but HPE has the ability to significantly expand the memory addressability of the platform, using both standard DRAM memory and lower cost memories such
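For scale (our arithmetic, not HPE’s): 160 TB is 160 × 2^40 bytes, or roughly 2^47.3 bytes, so a single flat, byte-addressable pool of that size already needs a 48-bit physical address. That is beyond the 46-bit, 64 TB physical reach of the Xeon server processors of the day, which is why no single conventional X86 node could front that much memory on its own.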

The Memory Scalability At The Heart Of The Machine was written by Timothy Prickett Morgan at The Next Platform.

American HPC Vendors Get Government Boost for Exascale R&D

The US Department of Energy – and the hardware vendors it partners with – is set to enliven the exascale effort with nearly a half billion dollars in research, development, and deployment investments. The push is led by the DoE’s Exascale Computing Project and its extended PathForward program, which was announced today.

The future of exascale computing in the United States has been subjected to several changes—some public, some still in question (although we received a bit more clarification, which we will get to in a moment). The timeline for delivering an exascale capability system has also shifted, with most

American HPC Vendors Get Government Boost for Exascale R&D was written by Nicole Hemsoth at The Next Platform.

Stretching the Business of Tape Storage to Extreme Scale

IPOs and major investments in storage startups are one thing, but when it comes to a safe tech company investment, all bets are still on tape.

The rumors of tape’s death are greatly exaggerated, but there have been some changes to the market. While the number of installed sites might be shrinking for long-time tape storage maker SpectraLogic, the installation sizes of its remaining customers keep growing, which produces a nice uptick in revenue for the company, according to its CTO, Matt Starr.

This makes sense, since relatively small backups and archives perform better on disk—and many

Stretching the Business of Tape Storage to Extreme Scale was written by Nicole Hemsoth at The Next Platform.

FPGAs, OpenHMC Push SKA HPC Processing Capabilities

Astronomy is the oldest research arena, but the technologies required to process the massive amounts of data created by radio telescope arrays represent some of the most bleeding-edge research in modern computer science.

With an exabyte of data expected to stream off the Square Kilometer Array (SKA), teams from both the front and back ends of the project have major challenges ahead. One “small” part of that larger picture of seeing farther into the universe than ever before is moving the data from the various distributed telescopes into a single unified platform and data format. This means transferring data from

FPGAs, OpenHMC Push SKA HPC Processing Capabilities was written by Nicole Hemsoth at The Next Platform.

Knights Landing System Development Targets Dark Matter Study

Despite the best efforts of leading cosmologists, the nature of dark energy and dark matter – which comprise approximately 95% of the total mass-energy content of the universe – is still a mystery.

Dark matter remains undetected even with all the different methods that have been employed so far to directly find it. The origin of dark energy is one of the greatest puzzles in physics. Cosmologist Katrin Heitmann, PI of an Aurora Early Science Program effort at the Argonne Leadership Computing Facility (ALCF), and her team are conducting research to shed some light on the dark universe.

“The reach

Knights Landing System Development Targets Dark Matter Study was written by Nicole Hemsoth at The Next Platform.

Early Benchmarks on Argonne’s New Knights Landing Supercomputer

We are heading into International Supercomputing Conference week (ISC) and as such, there are several new items of interest from the HPC side of the house.

As far as supercomputer architectures go for mid-2017, we can expect to see a lot of new machines with Intel’s Knights Landing architecture, perhaps a scattered few finally adding Nvidia K80 GPUs as an upgrade from older generation accelerators (for those who are not holding out for Volta with NVLink à la the Summit supercomputer), and, of course, it remains to be seen what happens with the Tianhe-2 and Sunway machines in China in

Early Benchmarks on Argonne’s New Knights Landing Supercomputer was written by Nicole Hemsoth at The Next Platform.

Clever RDMA Technique Delivers Distributed Memory Pooling

More databases and data stores, and the applications that run atop them, are moving to in-memory processing, and sometimes the memory capacity in a single big iron NUMA server isn’t enough and the latencies across a cluster of smaller nodes are too high for decent performance.

For example, server memory capacity tops out at 48 TB in the Superdome X server and at 64 TB in the UV 300 server from Hewlett Packard Enterprise, both using NUMA architectures. HPE’s latest iteration of The Machine packs 160 TB of shared memory capacity across its nodes, and has an early version of
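The article digs into the specific technique, but as a minimal sketch of the one-sided access that underpins any RDMA memory pool, here is how a node might expose a slab of its DRAM through the standard libibverbs API (the device choice, the 1 MB slab size, and the read-only remote access flag are our illustrative assumptions, not details from the article):

    /* expose.c: register a buffer for remote RDMA reads (libibverbs).
       Build with: gcc expose.c -o expose -libverbs */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void) {
        /* Find an RDMA-capable NIC and open it. */
        int num_devices = 0;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);
        if (devices == NULL || num_devices == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }
        struct ibv_context *context = ibv_open_device(devices[0]);

        /* Allocate a protection domain and a 1 MB slab to share. */
        struct ibv_pd *pd = ibv_alloc_pd(context);
        size_t length = 1 << 20;
        void *slab = malloc(length);

        /* Register the slab so the NIC can serve remote reads from it
           without involving this host's CPU at all. */
        struct ibv_mr *mr = ibv_reg_mr(pd, slab, length,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ);

        /* A peer that learns this (address, rkey) pair out of band can
           post IBV_WR_RDMA_READ work requests against the slab directly. */
        printf("slab exposed: addr=%p len=%zu rkey=0x%x\n",
               slab, length, mr->rkey);

        ibv_dereg_mr(mr);
        ibv_dealloc_pd(pd);
        ibv_close_device(context);
        ibv_free_device_list(devices);
        free(slab);
        return 0;
    }

Everything a clever pooling scheme has to be clever about is what this sketch leaves out: connection setup, naming and allocation across nodes, and hiding the fact that a remote read is a posted network operation rather than a plain CPU load.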

Clever RDMA Technique Delivers Distributed Memory Pooling was written by Timothy Prickett Morgan at The Next Platform.

Unifying Massive Data at Cloud Scale

Enterprises continue to struggle with the issue of data: how to process and move the massive amounts that are coming in from multiple sources, how to analyze the different types of data to best leverage their capabilities, and how to store and unify the data across various environments, including on-premises infrastructure and cloud environments. A broad array of major storage players, such as Dell EMC, NetApp, and IBM, are building out their offerings to create platforms that can do a lot of those things.

MapR Technologies, which made its bones with its commercial Hadoop distribution, is moving in a similar direction.

Unifying Massive Data at Cloud Scale was written by Jeffrey Burt at The Next Platform.

The Composable Systems Wave Is Rising

Hardware is, by its very nature, physical, and therefore, unlike software or the virtual hardware and software routines encoded in FPGAs, it is the one thing that cannot be easily changed. The dream of composable systems, which we have discussed in the past, has been swirling around in the heads of system architects for more than a decade, and we are without question getting closer to realizing the dream of making the components of systems, and the clusters created from them, programmable like software.

The hyperscalers, of course, have been on the bleeding edge of

The Composable Systems Wave Is Rising was written by Timothy Prickett Morgan at The Next Platform.

One Hyperscaler Gets The Jump On Skylake, Everyone Else Sidelined

Well, it could have been a lot worse. About 5.6 percent worse, if you do the math.

As we here at The Next Platform have been anticipating for quite some time, with so many stars aligning here in 2017 and a slew of server processor and GPU coprocessor announcements and deliveries expected starting in the summer and rolling into the fall, there is indeed a slowdown in the server market, and one that savvy customers might be able to take advantage of. But we thought those on the bleeding edge of performance were going to wait to see what Intel,

One Hyperscaler Gets The Jump On Skylake, Everyone Else Sidelined was written by Timothy Prickett Morgan at The Next Platform.