Archive

Category Archives for "The Next Platform"

Oak Ridge Lab’s Quantum Simulator Pulls HPC Future Closer

Oak Ridge National Laboratory has been investing heavily in quantum computing across the board. From testing new devices and programming models to figuring out workflows that combine classical bits with qubits, this is where the Department of Energy’s quantum investment seems to be centered.

Teams at Oak Ridge have access to the range of available quantum hardware devices—something that is now possible without having to own the difficult-to-manage quantum computer on site. IBM’s Q processor is available through a web interface, as is D-Wave’s technology, which means researchers at ORNL can test their quantum applications on actual hardware. As we just

Oak Ridge Lab’s Quantum Simulator Pulls HPC Future Closer was written by Nicole Hemsoth at The Next Platform.

Europe Elbows For A Place At Exascale Table

When talking about the ongoing international race to exascale computing, it might be easy to overlook the European Union. A lot of the attention over the past several years has focused on the efforts by the United States and China, the world’s economic powerhouses and the centers of technology development.

Through its Exascale Computing Project, the United States is putting money and resources behind its plans to roll out the first of its exascale systems in 2021. For its part, China is planning at least three pre-exascale systems using mostly home-grown technologies, backed by significant investments by the Chinese

Europe Elbows For A Place At Exascale Table was written by Jeffrey Burt at The Next Platform.

Mitigating Cybersecurity Threats With Advanced Datacenter Tech

In this fast-paced global economy, enterprises must innovate to evolve and succeed. Today’s industry experts are seeking transformative technologies – like high performance computing and artificial intelligence – to help them accelerate data analytics, support increasingly complex workloads, and facilitate business growth to meet the challenges of tomorrow. However, data security remains a chief concern as enterprises race to implement these cutting-edge innovations.

The digital age is marked by several key trends – including IT modernization, business transformation, and digital disruptions such as proliferating mobility, the Internet of Things, cloud computing, and much more. Many businesses are investing heavily

Mitigating Cybersecurity Threats With Advanced Datacenter Tech was written by Timothy Prickett Morgan at The Next Platform.

Looking Ahead to Intel’s Secret Exascale Architecture

There has been a lot of talk this week about what architectural direction Intel will be taking for its forthcoming exascale efforts. As we learned when the Aurora system (expected to be the first U.S. exascale system) at Argonne National Lab shifted from the planned Knights Hill course, Intel was seeking a replacement architecture—one that we understand will not be part of the Knights family at all but something entirely different.

Just how different that will be is up for debate. Some have posited that the exascale architecture will feature fully integrated hardware acceleration (no offload model needed for

Looking Ahead to Intel’s Secret Exascale Architecture was written by Nicole Hemsoth at The Next Platform.

D-Wave Makes Quantum Leap with Reverse Annealing

The art and science of quantum annealing to arrive at a best of all worlds answer to difficult questions has been well understood for years (even if implementing it as a computational device took time). But that area is now being turned on its head—all for the sake of achieving more nuanced results that balance the best of quantum and classical algorithms.

This new approach to quantum computing is called reverse annealing, something that has been on the research wish-list at Google and elsewhere, but is now a reality on the newest D-Wave 2000Q (2048 qubit) hardware. The company described
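To make the distinction concrete, here is a minimal sketch, in plain C and deliberately not tied to D-Wave’s actual programming interface, of the two schedule shapes: a forward anneal sweeps the anneal fraction s from 0 to 1, while a reverse anneal starts at s = 1 from a known classical state, backs off to an intermediate point, pauses to let quantum fluctuations explore around that state, and then anneals forward to s = 1 again. The waypoint times and the 0.45 target below are purely illustrative.

    #include <stdio.h>

    /* Illustrative anneal schedules as (time_us, s) waypoints, where s is the
     * anneal fraction. The numbers are made up for illustration only. */
    typedef struct { double time_us; double s; } waypoint;

    static void print_schedule(const char *name, const waypoint *w, int n) {
        printf("%s:\n", name);
        for (int i = 0; i < n; i++)
            printf("  t=%6.1f us  s=%.2f\n", w[i].time_us, w[i].s);
    }

    int main(void) {
        /* Forward anneal: s ramps monotonically from 0 to 1. */
        const waypoint forward[] = { {0.0, 0.0}, {20.0, 1.0} };

        /* Reverse anneal: start fully annealed (s = 1) in a classical state,
         * back off to an intermediate s, hold there, then anneal back to 1. */
        const waypoint reverse[] = {
            {0.0, 1.0}, {5.0, 0.45}, {15.0, 0.45}, {20.0, 1.0}
        };

        print_schedule("forward", forward, 2);
        print_schedule("reverse", reverse, 4);
        return 0;
    }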

D-Wave Makes Quantum Leap with Reverse Annealing was written by Nicole Hemsoth at The Next Platform.

Red Hat Throws Its Full Support Behind Arm Server Chips

The gatekeeper to Arm in the datacenter has finally swung that gate wide open.

Red Hat has always been a vocal supporter of Arm’s efforts to migrate its low-power architecture into the datacenter. The largest distributor of commercial Linux has spent years working with other tech vendors and industry groups like Linaro to build an ecosystem of hardware and software makers to support Arm systems-on-a-chip (SoCs) in servers and to build standards and policies for products that are powered by the chips. The company was a key player in the development of the Arm Server Base System Architecture (SBSA) specification

Red Hat Throws Its Full Support Behind Arm Server Chips was written by Jeffrey Burt at The Next Platform.

Samsung Invests in Cray Supercomputer for Deep Learning Initiatives

One of the reasons this year’s Supercomputing Conference (SC) is nearing attendance records has far less to do with traditional scientific HPC and much more to do with growing interest in deep learning and machine learning.

Since the supercomputing set has pioneered many of the hardware advances required for AI (and some software and programming techniques as well), it is no surprise new interest from outside HPC is filtering in.

On the subject of pioneering HPC efforts, one of the industry’s longest-standing companies, supercomputer maker Cray, is slowly but surely beginning to reap the benefits of the need for this

Samsung Invests in Cray Supercomputer for Deep Learning Initiatives was written by Nicole Hemsoth at The Next Platform.

The TOP500 is Dead, Long Live The TOP500

Twice a year, the TOP500 project publishes a list of the 500 most powerful computer systems, aka supercomputers. The TOP500 list is widely considered to be HPC-related, and many analyze the list statistics to understand the HPC market and technology trends. As the rules of the list do not preclude non-HPC systems from being submitted and listed, various OEMs have regularly submitted non-HPC platforms to the list in order to improve their apparent market position in the HPC arena. Thus, the task of analyzing the list for HPC markets and trends has grown more complicated.

In 2007, I published an

The TOP500 is Dead, Long Live The TOP500 was written by Timothy Prickett Morgan at The Next Platform.

Dell EMC Wants to Take AI Mainstream

One of the challenges vendors are trying to address when it comes to artificial intelligence is expanding the technology and its elements of machine learning and deep learning beyond the realm of hyperscalers and some HPC centers and into the enterprise, where businesses can leverage them for such workloads as simulations, modeling, and analytics.

For the past several years, system makers have been trying to crack the code that will make it easier for mainstream enterprises to adopt and deploy traditional HPC technologies, and now they want to dovetail those efforts with the expanding AI opportunity. The difference with enterprises is

Dell EMC Wants to Take AI Mainstream was written by Jeffrey Burt at The Next Platform.

ARM Benchmarks Show HPC Ripe for Processor Shakeup

Every year at the Supercomputing Conference (SC), an unofficial theme emerges. For the last two years, machine learning and deep learning were the focal points; before that, it was all about data-intensive computing and, stretching even farther back, the potential of cloud to reshape supercomputing.

What all of these themes have in common is that they did not focus on the processor. In fact, they centered around a generalized X86 hardware environment with well-known improvement and ecosystem cadences. Come to think of it, the closest we have come to seeing the device at the center of a theme in recent years

ARM Benchmarks Show HPC Ripe for Processor Shakeup was written by Nicole Hemsoth at The Next Platform.

Mellanox Poised For HDR InfiniBand Quantum Leap

InfiniBand and Ethernet are in a game of tug of war and are pushing the bandwidth and price/performance envelopes constantly. But the one thing they cannot do is get too far out ahead of the PCI-Express bus through which network interface cards hook into processors. The 100 Gb/sec links commonly used in Ethernet and InfiniBand server adapters run up against bandwidth ceilings with two ports running on PCI-Express 3.0 slots, and it is safe to say that 200 Gb/sec speeds will really need PCI-Express 4.0 slots to have two ports share a slot.
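To put rough numbers on that ceiling (a back-of-the-envelope sketch using nominal encoding overheads, not vendor figures): a PCI-Express 3.0 lane runs at 8 GT/sec with 128b/130b encoding, so an x16 slot tops out at roughly 126 Gb/sec in each direction, while PCI-Express 4.0 doubles that to about 252 Gb/sec. The short C program below compares those x16 budgets against the aggregate demand of one or two adapter ports at 100 Gb/sec and 200 Gb/sec.

    #include <stdio.h>

    /* Back-of-the-envelope PCIe vs. NIC port bandwidth, per direction.
     * Per-lane rates are approximate (GT/s adjusted for 128b/130b encoding);
     * real-world throughput is lower once protocol overheads are counted. */
    int main(void) {
        const double gen3_lane = 8.0  * 128.0 / 130.0;  /* ~7.88 Gb/s per lane  */
        const double gen4_lane = 16.0 * 128.0 / 130.0;  /* ~15.75 Gb/s per lane */
        const double gen3_x16  = 16 * gen3_lane;        /* ~126 Gb/s per direction */
        const double gen4_x16  = 16 * gen4_lane;        /* ~252 Gb/s per direction */

        const double port_speeds[] = { 100.0, 200.0 };  /* EDR- and HDR-class line rates */

        printf("PCIe 3.0 x16: ~%.0f Gb/s   PCIe 4.0 x16: ~%.0f Gb/s\n", gen3_x16, gen4_x16);
        for (int i = 0; i < 2; i++) {
            for (int ports = 1; ports <= 2; ports++) {
                double demand = ports * port_speeds[i];
                printf("%d x %.0f Gb/s ports need %.0f Gb/s -> Gen3 x16 %s, Gen4 x16 %s\n",
                       ports, port_speeds[i], demand,
                       demand <= gen3_x16 ? "ok" : "saturated",
                       demand <= gen4_x16 ? "ok" : "saturated");
            }
        }
        return 0;
    }

Run that and two 100 Gb/sec ports already overrun a Gen3 x16 slot, as does a single 200 Gb/sec port, which is the squeeze that makes PCI-Express 4.0 the natural landing spot for HDR-class adapters.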

This, more than any other factor, is

Mellanox Poised For HDR InfiniBand Quantum Leap was written by Timothy Prickett Morgan at The Next Platform.

Top 500 Supercomputer Rankings Losing Accuracy Despite High Precision

If the hyperscalers have taught us anything, it is that more data is always better. And because of this, we have to start out by saying that we are grateful to the researchers who have created and administered the Top 500 supercomputer rankings for the past 25 years, producing an astonishing 50 consecutive lists that rank the most powerful machines in the world as gauged by the double precision Linpack Fortran parallel matrix math test.

This set of data stands out among a few other groups of benchmarks that have been used by the tens of thousands of organizations – academic

Top 500 Supercomputer Rankings Losing Accuracy Despite High Precision was written by Timothy Prickett Morgan at The Next Platform.

Cray ARMs Highest End Supercomputer with ThunderX2

Just this time last year, the projection was that by 2020, ARM processors would be chewing on twenty percent of HPC workloads. In that short span of time, the skepticism that figure invited has faded with the addition of some very attractive supercomputing options from ARM hardware makers.

Last winter, the big ARM news for HPC was mostly centered on the Mont Blanc project at the Barcelona Supercomputing Center. However, as the year unfolded, details on new projects with ARM at the core, including the Post-K supercomputer in Japan and the Isambard supercomputer in the

Cray ARMs Highest End Supercomputer with ThunderX2 was written by Nicole Hemsoth at The Next Platform.

Nvidia Breaks $2 Billion Datacenter Run Rate

If GPU acceleration had not been conceived of by academics and researchers at companies like Nvidia more than a decade ago, how much richer would Intel be today? How many more datacenters would have had to be expanded or built? Would HPC have stretched to try to reach exascale, and would machine learning have fulfilled the long-sought promise of artificial intelligence, or at least something that looks like it?

These are big questions, and relevant ones, as Nvidia’s datacenter business has just broken through the $2 billion run rate barrier. With something on the order of a 10X speedup across

Nvidia Breaks $2 Billion Datacenter Run Rate was written by Timothy Prickett Morgan at The Next Platform.

IBM Bolsters Quantum Capability, Emphasizes Device Differentiation

Much of the quantum computing hype of the last few years has centered on D-Wave, which has installed a number of functional systems and is hard at work making quantum programming more practical.

Smaller companies like Rigetti Computing are gaining traction as well, but all the while, in the background, IBM has been steadily furthering quantum computing work that kicked off at IBM Research in the mid-1970s with the introduction of the quantum information concept by Charlie Bennett.

Since those early days, IBM has hit some important milestones on the road to quantum computing, including demonstrating the first quantum

IBM Bolsters Quantum Capability, Emphasizes Device Differentiation was written by Nicole Hemsoth at The Next Platform.

HPE Developing its Own Low Power “Neural Network” Chips

With so many chip startups targeting the future of deep learning training and inference, one might expect it would be far easier for tech giant Hewlett Packard Enterprise to buy versus build. However, when it comes to select applications at the extreme edge (for space missions in particular), nothing in the ecosystem fits the bill.

In the context of a broader discussion about the company’s Extreme Edge program focused on space-bound systems, HPE’s Dr. Tom Bradicich, VP and GM of Servers, Converged Edge, and IoT systems, described a future chip that would be ideally suited for high performance computing under

HPE Developing its Own Low Power “Neural Network” Chips was written by Nicole Hemsoth at The Next Platform.

HPC Heavyweight Goes All-In On OpenACC

Across the HPC community, commercial firms, government labs and academic institutions are adapting their code to embrace GPU architectures. They are motivated by the faster performance and lower energy consumption provided by GPUs, and many of them are using OpenACC to annotate their code and make it GPU-friendly. The Next Platform recently interviewed one key organization to learn why it is using the OpenACC programming model to expand its computing capabilities and platform support.
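For readers who have not seen it, that annotation is directive-based: a standard C (or Fortran) loop stays as it is, and a pragma tells an OpenACC-capable compiler to offload it to the GPU. The saxpy-style sketch below is a generic illustration, not code from the organization profiled in this article, and the data clauses are kept deliberately simple.

    #include <stdlib.h>

    /* Minimal OpenACC sketch: a saxpy-style loop annotated for GPU offload.
     * The pragma asks the compiler to parallelize the loop on an accelerator,
     * copying x to the device and copying y both in and out. */
    void saxpy(int n, float a, const float *restrict x, float *restrict y) {
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; i++) {
            y[i] = a * x[i] + y[i];
        }
    }

    int main(void) {
        int n = 1 << 20;
        float *x = malloc(n * sizeof *x);
        float *y = malloc(n * sizeof *y);
        for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }
        saxpy(n, 3.0f, x, y);   /* each y[i] becomes 5.0f */
        free(x);
        free(y);
        return 0;
    }

Because the directive is just a pragma, a compiler without OpenACC support ignores it and the loop runs serially on the CPU, which is much of the appeal for the kind of incremental porting described here.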

If the earth were the size of a basketball, its atmosphere would be the thickness of shrink wrap. It is fragile enough that in 1960, the

HPC Heavyweight Goes All-In On OpenACC was written by Timothy Prickett Morgan at The Next Platform.

Nutanix Expands, Adds Breadth to Cloud Platform

Nutanix has been on a journey for well over a year to transform itself from a supplier of software for hyperconverged infrastructure to a company with a platform that allows enterprises to build private datacenter environments that give them the same kinds of tools, automation, agility, scalability, and consumption options that they can find in public clouds like Amazon Web Services and Microsoft Azure.

Nutanix was one of several vendors whose software helped propel the fast-growing hyperconverged infrastructure space through partnerships with such top-tier system OEMs as Dell EMC, IBM, and Lenovo, and is among the last independent companies standing,

Nutanix Expands, Adds Breadth to Cloud Platform was written by Jeffrey Burt at The Next Platform.

Qualcomm’s Amberwing Arm Server Chip Finally Takes Flight

It is going to be a busy week for chip maker Qualcomm as it formally jumps from smartphones to servers with its new “Amberwing” Centriq 2400 Arm server processor during the same week that it has received an unsolicited $130 billion takeover offer from sometimes rival chipmaker Broadcom.

The Centriq 2400 is the culmination of over four years of work and investment, which, according to the experts in the semiconductor industry we have talked to, easily took on the order of $100 million to $125 million to make happen – remember there was a prototype as well as the

Qualcomm’s Amberwing Arm Server Chip Finally Takes Flight was written by Timothy Prickett Morgan at The Next Platform.

Arm Smooths the Path for Porting HPC Apps

One of the arguments Intel officials and others have made against Arm’s push to get its silicon designs into the datacenter has been the burden it would place on enterprises and HPC organizations, which would have to modify their application code to get their software running on the Arm architecture.

For HPC organizations, that would mean moving the applications from the Intel-based and IBM systems that have dominated the space for years, a time-consuming and possibly costly process.

Arm officials over the years have acknowledged the challenge, but have noted their infrastructure’s embrace of open-source software and

Arm Smooths the Path for Porting HPC Apps was written by Nicole Hemsoth at The Next Platform.