Category Archives for "The Next Platform"

Knights Landing Will Waterfall Down From On High

With the general availability of the “Knights Landing” Xeon Phi many-core processors from Intel last month, some of the largest supercomputing labs on the planet are getting their first taste of what the future style of high performance computing could look like for the rest of us.

We are not suggesting that the Xeon Phi processor will be the only compute engine that will be deployed to run traditional simulation and modeling applications as well as data analytics, graph processing, and deep learning algorithms. But we are suggesting that this style of compute engine – it is more than

Knights Landing Will Waterfall Down From On High was written by Timothy Prickett Morgan at The Next Platform.

Optimization Tests Confirm Knights Landing Performance Projections

Close to a year ago, when more information was becoming available about the Knights Landing processor, Intel released projections for its relative performance against two-socket Haswell machines. As one might imagine, the performance improvements were impressive, but now that there are systems in the field that can be optimized and benchmarked, we are finally getting a boots-on-the-ground view into the performance bump.

As it turns out, optimization and benchmarking on the “Cori” supercomputer at NERSC are showing that those figures were right on target. In a conversation with one of the co-authors of a new report highlighting the optimization
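To give a flavor of the kind of optimization work involved (the report itself goes into far more detail), one common Knights Landing tuning step is placing bandwidth-bound arrays in the chip's on-package MCDRAM. Below is a minimal sketch using the memkind library's hbw_* interface; the array, its size, and the kernel are illustrative and not drawn from the Cori study.

    /* Minimal sketch: place a bandwidth-bound working array in KNL's
     * on-package MCDRAM via the memkind library's hbw_* interface.
     * Array name, size, and kernel are illustrative only. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <hbwmalloc.h>   /* from the memkind package */

    #define N (1 << 26)      /* ~64M doubles, roughly 512 MB */

    int main(void)
    {
        /* Fall back to ordinary DDR if no high-bandwidth memory is present. */
        int have_hbw = (hbw_check_available() == 0);
        double *a = have_hbw ? hbw_malloc(N * sizeof(double))
                             : malloc(N * sizeof(double));
        if (a == NULL)
            return 1;

        /* Streaming update that benefits from MCDRAM bandwidth. */
        for (long i = 0; i < N; i++)
            a[i] = 0.5 * i;

        double sum = 0.0;
        for (long i = 0; i < N; i++)
            sum += a[i];
        printf("sum = %f\n", sum);

        if (have_hbw)
            hbw_free(a);
        else
            free(a);
        return 0;
    }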

Optimization Tests Confirm Knights Landing Performance Projections was written by Nicole Hemsoth at The Next Platform.

China’s Triple Play For Pre-Exascale Systems

Before any country can deploy an exascale system, it has to get pre-exascale prototypes into the field to test out the underlying technologies and determine which approaches have the best chance of scaling up performance and being manufactured affordably. It appears that China is pursuing three different pre-exascale systems, and none of them will deploy processors or accelerators made by US companies.

It is no secret that China has wanted to develop an indigenous capability to design chips and build supercomputer-class systems, and this was true even before the US government put the kibosh on selling Intel Xeon and

China’s Triple Play For Pre-Exascale Systems was written by Timothy Prickett Morgan at The Next Platform.

Competition Heats Up In Cluster Interconnects

Any time a ranking of a technology is put together, that ranking is called into question as to whether or not it is representative of reality. Rankings such as the Top 500 list of the top supercomputers in the world have been the subject of such debate with regard to the Linpack Fortran performance benchmark used to create them and its relevance to the performance of actual workloads. When it comes to networking, the changes in the list in recent years are likely a better reflection of what is going on in high performance computing in

Competition Heats Up In Cluster Interconnects was written by Timothy Prickett Morgan at The Next Platform.

Startup Takes a Risk on RISC-V Custom Silicon

As we are carefully watching here, there is a perfect storm brewing in the semiconductor space, both for manufacturers and system designers.

On the one hand, the impending demise of Moore’s Law presents a set of challenges—and opportunities—for emerging chip companies to arise and offer alternatives, often with customization cooked into the business model. And for end users, there is a rising tide of options that might lift a lot of boats if ecosystems are rapidly adopted. This is the case in the ARM space, as we’ve seen clearly this year, as well as for other architectures, including efforts from

Startup Takes a Risk on RISC-V Custom Silicon was written by Nicole Hemsoth at The Next Platform.

Supercomputing’s Scramble to Keep Thinking in Parallel

As supercomputing centers look to future exascale systems, among the other pressing concerns (power consumption in particular) is adopting the right programming approach to scale applications across millions of cores.

And while this might sound like a big enough challenge on its own, it gets more complicated because a new programming model (or system) might not be the scalability and performance answer either. It could be that tweaking existing tools and methods is what turns programming evolution into programming revolution, if, of course, the supercomputing programmer community can agree.
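For concreteness, the "tweak existing tools" camp usually means evolving the familiar MPI+X hybrid style rather than replacing it outright. The sketch below shows that pattern in its simplest form, MPI between processes and OpenMP threads within each one, with a purely illustrative workload; it is not a prescription from the article.

    /* Minimal sketch of the MPI+OpenMP hybrid style often cited as the
     * evolutionary path for scaling codes: MPI between nodes/processes,
     * threads within. The workload (a local sum reduced globally) is
     * purely illustrative. Compile with an MPI wrapper and -fopenmp. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank;
        /* Ask for threaded MPI; MPI_THREAD_FUNNELED means only the
         * main thread makes MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double local = 0.0;
        /* Threads share the work inside one rank. */
        #pragma omp parallel for reduction(+:local)
        for (int i = 0; i < 1000000; i++)
            local += 1.0 / (i + 1.0);

        double global = 0.0;
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("global sum = %f\n", global);

        MPI_Finalize();
        return 0;
    }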

Like all things in

Supercomputing’s Scramble to Keep Thinking in Parallel was written by Nicole Hemsoth at The Next Platform.

OpenPower Developers Primed for Big Wins at IBM Hackathon

IBM has created a virtual hackathon for all you lovely developers to test drive your data-intensive applications on the OpenPOWER server, GPU and accelerator platform. And there’s $27,000 worth of prizes on the table. Want to give it a go? Check out the competition rules and register for the OpenPOWER Developer Challenge.

The closing deadline is September 1 and already 277 individuals have signed up. So don’t dilly dally: tear down those hardware performance barriers and submit your entry. Choose which track is the one for you and connect with the experts ‘round the clock on Slack to get

OpenPower Developers Primed for Big Wins at IBM Hackathon was written by Nicole Hemsoth at The Next Platform.

When HPC Becomes Normal

Sometimes, it seems that people are of two minds about high performance computing. They want it to be special and distinct from the rest of the broader IT market, and at the same time they want the distributed simulation and modeling workloads that have for decades been the most exotic things around to be so heavily democratized that they become pervasive. Democratized. Normal.

We are probably a few years off from HPC reaching this status, but this is one of the goals that the new HPC team at Dell has firmly in mind as the world’s second largest system maker

When HPC Becomes Normal was written by Timothy Prickett Morgan at The Next Platform.

Stretching Software Across Future Exascale Systems

If money were no object, then arguably the major nations of the world that always invest heavily in supercomputing would have already put an exascale-class system into the field. But money always matters, and ultimately supercomputers have to justify their very existence by enabling scientific breakthroughs and enhancing national security.

This, perhaps, is why the Exascale Computing Project established by the US government last summer is taking such a measured pace in fostering the technologies that will ultimately result in bringing three exascale-class systems with two different architectures into the field after the turn of the next decade. The

Stretching Software Across Future Exascale Systems was written by Timothy Prickett Morgan at The Next Platform.

Dreaming Of 100 Exaflops In 2030

The supercomputing industry is as insatiable as it is dreamy. We have not even reached our ambitions of hitting the exascale level of performance in a single system by the end of this decade, and we are stretching our vision out to the far future and wondering how the capacity of our largest machines will scale by many orders of magnitude more.

Dreaming is the foundation of the technology industry, and supercomputing has always been where the most REM action takes place among the best and brightest minds in computing, storage, and networking – as it should be. But to

Dreaming Of 100 Exaflops In 2030 was written by Timothy Prickett Morgan at The Next Platform.

In Situ Analysis to Push Supercomputing Efficiency

As supercomputers expand in terms of processing, storage, and network capabilities, the size and scope of simulations is also expanding outward. While this is great news for scientific progress, this naturally creates some new bottlenecks, particularly on the analysis and visualization fronts.

Historically, most large-scale simulations would dump time step and other data at defined intervals onto disk for post-processing and visualization, but as that process climbs into the petabyte range, it is becoming less practical. Further, for those who know what they want to find in that data, using an in situ approach to finding the answer
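The difference between the two workflows is easy to sketch: instead of writing the full field to disk every few time steps for later post-processing, an in situ approach reduces the data to the quantity of interest while it is still in memory. The example below is a hedged illustration only; the field size, interval, and "analysis" (a simple maximum) are made up.

    /* Illustrative contrast between post-processing and in situ analysis.
     * Instead of dumping the whole field to disk every few steps for later
     * visualization, the loop computes the quantity of interest (here just
     * a maximum) while the data is still in memory. Field size, interval,
     * and the solver stand-in are made-up examples. */
    #include <stdio.h>

    #define NCELLS 1000000
    #define NSTEPS 100
    #define ANALYSIS_INTERVAL 10

    static double field[NCELLS];

    static void advance(double *f, int n, int step)
    {
        for (int i = 0; i < n; i++)      /* stand-in for the real solver */
            f[i] += 0.001 * step;
    }

    int main(void)
    {
        for (int step = 0; step < NSTEPS; step++) {
            advance(field, NCELLS, step);

            if (step % ANALYSIS_INTERVAL == 0) {
                /* In situ: reduce in memory rather than writing NCELLS
                 * doubles to disk and post-processing them later. */
                double max = field[0];
                for (int i = 1; i < NCELLS; i++)
                    if (field[i] > max)
                        max = field[i];
                printf("step %d: max = %f\n", step, max);
            }
        }
        return 0;
    }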

In Situ Analysis to Push Supercomputing Efficiency was written by Nicole Hemsoth at The Next Platform.

Inside Look at Key Applications on China’s New Top Supercomputer

As the world is now aware, China is home to the world’s most powerful supercomputer, which topples the previous reigning system, Tianhe-2, also located in the country.

In the wake of the news, we took an in-depth look at the architecture of the new Sunway TaihuLight machine, which will be useful background as we examine a few of the practical applications that have been ported to and are now running on the 10-million-core, 125 petaflop-capable supercomputer.

The sheer size and scale of the system is what initially grabbed headlines when we broke news about the system last

Inside Look at Key Applications on China’s New Top Supercomputer was written by Nicole Hemsoth at The Next Platform.

Oracle Takes On Xeons With Sparc S7

It is an accepted principle of modern infrastructure that at a certain scale, customization like that done by Google, Amazon Web Services, Microsoft, or Baidu pays off. While Oracle is building its own public cloud, it does not have the kind of scale that these companies do, but it does have something else that warrants customization and co-design up and down its stack: more than 420,000 customers who generate $38.5 billion in sales.

This, in a nutshell, is why Oracle continues to invest in its Sparc processors even though many of its customers deploy Oracle’s middleware, database, and application software

Oracle Takes On Xeons With Sparc S7 was written by Timothy Prickett Morgan at The Next Platform.

System Software, Orchestration Gets an OpenHPC Boost

System software setup and maintenance has become a major efficiency drag on HPC labs and OEMs alike, but community and industry efforts are now underway to reduce the huge amounts of duplicated development, validation and maintenance work across the HPC ecosystem. Disparate efforts and approaches, while necessary on some levels, slow adoption of hardware innovation and progress toward exascale performance. They also complicate adoption of complex workloads like big data and machine learning.

With the creation of the OpenHPC Community, a Linux Foundation collaborative project, the push is on to minimize duplicated efforts in the HPC software stack wherever

System Software, Orchestration Gets an OpenHPC Boost was written by Nicole Hemsoth at The Next Platform.

Emerging “Universal” FPGA, GPU Platform for Deep Learning

In the last couple of years, we have written and heard about the usefulness of GPUs for deep learning training as well as, to a lesser extent, custom ASICs and FPGAs. All of these options have shown performance or efficiency advantages over commodity CPU-only approaches, but programming for all of these is often a challenge.

Programmability hurdles aside, deep learning training on accelerators is standard, but is often limited to a single choice—GPUs or, to a far lesser extent, FPGAs. Now, a research team from the University of California Santa Barbara has proposed a new middleware platform that can combine

Emerging “Universal” FPGA, GPU Platform for Deep Learning was written by Nicole Hemsoth at The Next Platform.

Lenovo HPC Bounces Back After IBM Spinoff

When IBM sold off its System x division to Lenovo Group in the fall of 2014, some big supercomputing centers in the United States and Europe that were long-time customers of Big Blue had to stop and think about what their future systems would look like and who would supply them. It was not a foregone conclusion that the Xeon-based portion of IBM’s HPC business would just move over to Lenovo as part of the sale.

Quite the opposite, in fact. Many believed that Lenovo could not hold onto its HPC business, and Hewlett Packard Enterprise and Dell were quick

Lenovo HPC Bounces Back After IBM Spinoff was written by Timothy Prickett Morgan at The Next Platform.

Novel Architectures on the Far Horizon for Weather Prediction

Weather modeling and forecasting centers are among some of the top users of supercomputing systems and are at the top of the list when it comes to areas that could benefit from exascale-class compute power.

However, for modeling centers, even those with the most powerful machines, there is a great deal of leg work on the code front in particular to scale to that potential. Still, many, including most recently the UK Met Office, have planted a stake in the ground for exascale—and they are looking beyond traditional architectures to meet the power and scalability demands they’ll be facing

Novel Architectures on the Far Horizon for Weather Prediction was written by Nicole Hemsoth at The Next Platform.

Cisco Connects With SGI For Big NUMA Iron

When supercomputer maker SGI tweaked its NUMA server technology to pursue sales in the datacenter, the plan was not to go it alone but rather to partner with the makers of workhorse Xeon servers that did not – and would not – make their own big iron but who nonetheless wanted to sell high-end machines to their customers.

This, company officials have said all along, is the only way that SGI, which is quite a bit smaller than many of the tier one server makers, can reach the total addressable market that the company has forecast for its

Cisco Connects With SGI For Big NUMA Iron was written by Timothy Prickett Morgan at The Next Platform.

The Hype About Converged Systems

Converged systems are a hot commodity in the IT sector these days. But it looks to us like the hype over various kinds of integrated systems that weld servers and storage together into preconfigured stacks – including hyperconverged stacks that literally merge the compute and storage layers on the same servers – is just a bit bigger than the appetite for such iron in the datacenters of the world.

According to the latest statistics from IDC, which define converged systems broadly to include integrated systems, certified reference systems, and hyperconverged systems, the market

The Hype About Converged Systems was written by Timothy Prickett Morgan at The Next Platform.

Mitigating MPI Message Matching Issues

Since the 1990s, MPI (Message Passing Interface) has been the dominant communications protocol for high-performance scientific and commercial distributed computing. MPI was designed in an era when processors with two or four cores were considered high-end parallel devices, and the recent move to processors containing tens to a few hundred cores (as exemplified by the current Intel Xeon and Intel Xeon Phi processor families) has exacerbated scaling issues inside MPI itself. Increased network traffic, amplified by high performance communications fabrics such as InfiniBand and the Intel Omni-Path Architecture (Intel OPA), manifests as an MPI performance and scaling issue.
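The matching itself is simple to state: every incoming message must be paired with a posted receive on the same communicator whose source and tag match (wildcards included), and with hundreds of ranks per node the lists of posted and unexpected messages that must be searched for each match grow long. The sketch below illustrates the matching triple with a wildcard receive; the tags and message counts are illustrative, not taken from the research being recognized.

    /* Minimal sketch of MPI's message-matching rule: a receive matches an
     * incoming message on (communicator, source, tag). With many ranks,
     * every arrival forces a search through the posted/unexpected message
     * lists, which is the scaling pressure described above. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank != 0) {
            int payload = rank;
            /* Every non-root rank sends with its own tag. */
            MPI_Send(&payload, 1, MPI_INT, 0, /*tag=*/rank, MPI_COMM_WORLD);
        } else {
            /* Rank 0 receives with wildcards, so each arrival is matched
             * against the posted receive in order of arrival. */
            for (int i = 1; i < size; i++) {
                int payload;
                MPI_Status status;
                MPI_Recv(&payload, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                         MPI_COMM_WORLD, &status);
                printf("got %d from rank %d (tag %d)\n",
                       payload, status.MPI_SOURCE, status.MPI_TAG);
            }
        }

        MPI_Finalize();
        return 0;
    }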

In recognition of their outstanding research and

Mitigating MPI Message Matching Issues was written by Nicole Hemsoth at The Next Platform.