Servers have become increasingly powerful in recent years, gaining more processing cores as well as accelerators like GPUs and field-programmable gate arrays (FPGAs), and the amount of data they can process is growing rapidly.
However, a key problem has been getting interconnect technologies to keep pace with server evolution. It is a challenge that last year spawned the Gen-Z Consortium, a group founded by a dozen top-tier tech vendors including Hewlett Packard Enterprise, IBM, Dell EMC, AMD, Arm, and Cray that wanted to create a next-generation interconnect that can leverage existing tech while paving the way …
Breaking Memory Free Of Compute With Gen-Z was written by Jeffrey Burt at The Next Platform.
Oracle was late to the cloud game, but in recent years has moved aggressively to catch up. While still behind the top companies like Amazon Web Services, Microsoft Azure, and Google Cloud Platform, Oracle is seeing gains in revenue and customers for its cloud environment, thanks in large part to the hundreds of thousands of enterprise customers that use its various operating system, middleware, database, and application software.
The cloud revenue jump at Oracle is pretty steep. In a conference call discussing the most recent quarterly financial numbers, Oracle co-CEO Safra Catz noted that cloud revenue for the quarter …
Expanded Oracle Cloud Rains Down GPUs, Skylake Xeons was written by Jeffrey Burt at The Next Platform.
Academic centers and government agencies often design and write their own applications, but some of them and the vast majority of enterprise customers with HPC applications usually depend on third parties for their software. They also depend upon those software developers to continually enhance and scale those applications, and that means adding support for GPU accelerators. Two important ones, Gaussian and ANSYS, depend not only on GPUs but also on the OpenACC programming model to extend across more cores and therefore do more work faster.
Let’s start with Gaussian.
The way that chemicals react can be the difference between a product success …
Key Software Firms Reengineer Their Code With OpenACC was written by Timothy Prickett Morgan at The Next Platform.
The market for high performance computing can be a capricious one in any short term, but in general has been growing and, at least according to some of the experts who have spent decades tracking it, is set to grow a little bit faster than the IT sector at large in the coming years.
A lot, of course, will depend on whether or not the United States, Europe, China, and Japan come through with what are expected to be substantial investments in pre-exascale and exascale systems in the next few years, and quite possibly resulting in a bumper crop of …
HPC Revenues Under Pressure, But Outlook Optimistic was written by Timothy Prickett Morgan at The Next Platform.
It was 31 years ago when Alan Karp, then an IBM employee, decided to put up $100 of his own money in hopes of solving a vexing issue for him and others in the computing field. When looking at the HPC space, there were supercomputers armed with eight powerful processors and designed to run the biggest applications of the day. However, there also were people putting 1,000 wimpy chips into machines that leveraged parallelism to run workloads, a rarity at the time.
According to Amdahl’s Law in 1986, even if 95 percent of a workload runs in parallel, the speedup …
Gordon Bell Looks Out Into A Parallel World was written by Jeffrey Burt at The Next Platform.
The landscape of HPC storage performance measurement is littered with unrealistic expectations. While there are seemingly endless strings of benchmarks aimed at providing balanced metrics, these far too often come from the vendors themselves.
What is needed is an independent set of measurements for supercomputing storage environments that takes into account all of the many nuances of HPC (versus enterprise) setups. Of course, building such a benchmark suite is no simple task—and ranking the results is not an easy exercise either because there are a great many dependencies; differences between individual machines, networks, memory and I/O tricks in software, and …
IO-500 Goes Where No HPC Storage Metric Has Gone Before was written by Nicole Hemsoth at The Next Platform.
Oak Ridge National Laboratory has been investing heavily in quantum computing across the board. From testing new devices and programming models to figuring out workflows that combine classical bits with qubits, this is where the DoE quantum investment seems to be centered.
Teams at Oak Ridge have access to the range of available quantum hardware devices, something that is now possible without having to own a difficult-to-manage quantum computer on site. IBM’s Q processor is available through a web interface, as is D-Wave’s technology, which means researchers at ORNL can test their quantum applications on actual hardware. As we just …
Oak Ridge Lab’s Quantum Simulator Pulls HPC Future Closer was written by Nicole Hemsoth at The Next Platform.
When talking about the ongoing international race to exascale computing, it might be easy to overlook the European Union. A lot of the attention over the past several years has focused on the efforts by the United States and China, the world’s economic powerhouses and the centers of technology development.
Through its Exascale Computing Project, the United States is putting money and resources behind its plans to roll out the first of its exascale systems in 2021. For its part, China is planning at least three pre-exascale systems using mostly home-grown technologies, backed by significant investments by the Chinese …
Europe Elbows For A Place At Exascale Table was written by Jeffrey Burt at The Next Platform.
In this fast-paced global economy, enterprises must innovate to evolve and succeed. Today’s industry experts are seeking transformative technologies – like high performance computing and artificial intelligence – to help them accelerate data analytics, support increasingly complex workloads, and facilitate business growth to meet the challenges of tomorrow. However, data security remains a chief concern as enterprises race to implement these cutting-edge innovations.
The digital age is marked by several key trends – including IT modernization, business transformation, and digital disruptions such as proliferating mobility, the Internet of Things, cloud computing, and much more. Many businesses are investing heavily …
Mitigating Cybersecurity Threats With Advanced Datacenter Tech was written by Timothy Prickett Morgan at The Next Platform.
There has been a lot of talk this week about what architectural direction Intel will be taking for its forthcoming exascale efforts. As we learned when the Aurora system (expected to be the first U.S. exascale system) at Argonne National Lab shifted from the planned Knights Hill course, Intel was seeking a replacement architecture—one that we understand will not be part of the Knights family at all but something entirely different.
Just how different that will be is up for debate. Some have posited that the exascale architecture will feature fully integrated hardware acceleration (no offload model needed for …
Looking Ahead to Intel’s Secret Exascale Architecture was written by Nicole Hemsoth at The Next Platform.
The art and science of quantum annealing to arrive at a best of all worlds answer to difficult questions has been well understood for years (even if implementing it as a computational device took time). But that area is now being turned on its head—all for the sake of achieving more nuanced results that balance the best of quantum and classical algorithms.
This new approach to quantum computing is called reverse annealing, something that has been on the research wish-list at Google and elsewhere, but is now a reality on the newest D-Wave 2000Q (2048 qubit) hardware. The company described …
D-Wave Makes Quantum Leap with Reverse Annealing was written by Nicole Hemsoth at The Next Platform.
The gatekeeper to Arm in the datacenter has finally swung that gate wide open.
Red Hat has always been a vocal supporter of Arm’s efforts to migrate its low-power architecture into the datacenter. The largest distributor of commercial Linux has spent years working with other tech vendors and industry groups like Linaro to build an ecosystem of hardware and software makers to support Arm systems-on-a-chip (SoCs) in servers and to build standards and policies for products that are powered by the chips. The company was a key player in the development of the Arm Server Base System Architecture (SBSA) specification …
Red Hat Throws Its Full Support Behind Arm Server Chips was written by Jeffrey Burt at The Next Platform.
One of the reasons this year’s Supercomputing Conference (SC) is nearing attendance records has far less to do with traditional scientific HPC and much more to do with growing interest in deep learning and machine learning.
Since the supercomputing set has pioneered many of the hardware advances required for AI (and some software and programming techniques as well), it is no surprise new interest from outside HPC is filtering in.
On the subject of pioneering HPC efforts, one of the industry’s longest-standing companies, supercomputer maker Cray, is slowly but surely beginning to reap the benefits of the need for this …
Samsung Invests in Cray Supercomputer for Deep Learning Initiatives was written by Nicole Hemsoth at The Next Platform.
Twice a year, the TOP500 project publishes a list of the 500 most powerful computer systems, aka supercomputers. The TOP500 list is widely considered to be HPC-related, and many analyze the list statistics to understand the HPC market and technology trends. As the rules of the list do not preclude non-HPC systems from being submitted and listed, various OEMs have regularly submitted non-HPC platforms to the list in order to improve their apparent market position in the HPC arena. Thus, the task of analyzing the list for HPC markets and trends has grown more complicated.
In 2007, I published an …
The TOP500 is Dead, Long Live The TOP500 was written by Timothy Prickett Morgan at The Next Platform.
One of the challenges vendors are trying to address when it comes to artificial intelligence is expanding the technology and its elements of machine learning and deep learning beyond the realm of hyperscalers and some HPC centers and into the enterprise, where businesses can leverage them for such workloads as simulations, modeling, and analytics.
For the past several years, system makers have been trying to crack the code that will make it easier for mainstream enterprises to adopt and deploy traditional HPC technologies, and now they want to dovetail those efforts with the expanding AI opportunity. The difference with enterprises is …
Dell EMC Wants to Take AI Mainstream was written by Jeffrey Burt at The Next Platform.
Every year at the Supercomputing Conference (SC) an unofficial theme emerges. For the last two years, machine learning and deep learning were focal points; before that it was all about data-intensive computing and stretching even farther back, the potential of cloud to reshape supercomputing.
What all of these themes have in common is that they did not focus on the processor. In fact, they centered around a generalized X86 hardware environment with well-known improvement and ecosystem cadences. Come to think of it, the closest we have come to seeing the device at the center of a theme in recent years …
ARM Benchmarks Show HPC Ripe for Processor Shakeup was written by Nicole Hemsoth at The Next Platform.
InfiniBand and Ethernet are in a game of tug of war and are pushing the bandwidth and price/performance envelopes constantly. But the one thing they cannot do is get too far out ahead of the PCI-Express bus through which network interface cards hook into processors. The 100 Gb/sec links commonly used in Ethernet and InfiniBand server adapters run up against bandwidth ceilings with two ports running on PCI-Express 3.0 slots, and it is safe to say that 200 Gb/sec speeds will really need PCI-Express 4.0 slots to have two ports share a slot.
This, more than any other factor, is …
Mellanox Poised For HDR InfiniBand Quantum Leap was written by Timothy Prickett Morgan at The Next Platform.
If the hyperscalers have taught us anything, it is that more data is always better. And because of this, we have to start out by saying that we are grateful to the researchers who created the Top 500 supercomputer benchmark tests and have administered them for the past 25 years, producing an astonishing 50 consecutive lists ranking the most powerful machines in the world as gauged by the double precision Linpack Fortran parallel matrix math test.
This set of data stands out among a few other groups of benchmarks that have been used by the tens of thousands of organizations – academic …
Top 500 Supercomputer Rankings Losing Accuracy Despite High Precision was written by Timothy Prickett Morgan at The Next Platform.
Just this time last year, the projection was that by 2020, ARM processors would be chewing on twenty percent of HPC workloads. In that short span of time, the grain of salt many took with that figure has shrunk with the addition of some very attractive options for supercomputing from ARM hardware makers.
Last winter, the big ARM news for HPC was mostly centered on the Mont Blanc project at the Barcelona Supercomputer Center. However, as the year unfolded, details on new projects with ARM at the core including the Post-K supercomputer in Japan and the Isambard supercomputer in the …
Cray ARMs Highest End Supercomputer with ThunderX2 was written by Nicole Hemsoth at The Next Platform.
If GPU acceleration had not been conceived of by academics and researchers at companies like Nvidia more than a decade ago, how much richer would Intel be today? How many more datacenters would have had to be expanded or built? Would HPC have stretched to try to reach exascale, and would machine learning have fulfilled the long-sought promise of artificial intelligence, or at least something that looks like it?
These are big questions, and relevant ones, as Nvidia’s datacenter business has just broken through the $2 billion run rate barrier. With something on the order of a 10X speedup across …
Nvidia Breaks $2 Billion Datacenter Run Rate was written by Timothy Prickett Morgan at The Next Platform.