Much of the quantum computing hype of the last few years has centered on D-Wave, which has installed a number of functional systems and is hard at work making quantum programming more practical.
Smaller companies like Rigetti Computing are gaining traction as well, but all the while, in the background, IBM has been steadily furthering quantum computing work that kicked off at IBM Research in the mid-1970s, when Charlie Bennett introduced the concept of quantum information.
Since those early days, IBM has hit some important milestones on the road to quantum computing, including demonstrating the first quantum …
IBM Bolsters Quantum Capability, Emphasizes Device Differentiation was written by Nicole Hemsoth at The Next Platform.
With so many chip startups targeting the future of deep learning training and inference, one might expect it would be far easier for tech giant Hewlett Packard Enterprise to buy versus build. However, when it comes to select applications at the extreme edge (for space missions in particular), nothing in the ecosystem fits the bill.
In the context of a broader discussion about the company’s Extreme Edge program focused on space-bound systems, HPE’s Dr. Tom Bradicich, VP and GM of Servers, Converged Edge, and IoT systems, described a future chip that would be ideally suited for high performance computing under …
HPE Developing its Own Low Power “Neural Network” Chips was written by Nicole Hemsoth at The Next Platform.
Across the HPC community, commercial firms, government labs and academic institutions are adapting their code to embrace GPU architectures. They are motivated by the faster performance and lower energy consumption provided by GPUs, and many of them are using OpenACC to annotate their code and make it GPU-friendly. The Next Platform recently interviewed one key organization to learn why it is using the OpenACC programming model to expand its computing capabilities and platform support.
If the earth were the size of a basketball, its atmosphere would be the thickness of shrink wrap. It is fragile enough that in 1960, the …
HPC Heavyweight Goes All-In On OpenACC was written by Timothy Prickett Morgan at The Next Platform.
Nutanix has been on a journey for well over a year to transform itself from a supplier of hyperconverged infrastructure software into a company with a platform that allows enterprises to build private datacenter environments offering the same kinds of tools, automation, agility, scalability, and consumption options they can find in public clouds like Amazon Web Services and Microsoft Azure.
Nutanix was one of several vendors whose software helped propel the fast-growing hyperconverged infrastructure space through partnerships with top-tier system OEMs such as Dell EMC, IBM, and Lenovo, and is among the last independent companies standing, …
Nutanix Adds Breadth to Cloud Platform was written by Jeffrey Burt at The Next Platform.
It is going to be a busy week for chip maker Qualcomm, which is formally jumping from smartphones to servers with its new “Amberwing” Centriq 2400 Arm server processor in the same week that it has received an unsolicited $130 billion takeover offer from sometime rival chipmaker Broadcom.
The Centriq 2400 is the culmination of over four years of work and investment, which according to the experts in the semiconductor industry we have talked to, easily took on the order of $100 million to $125 million to make happen – remember there was a prototype as well as the …
Qualcomm’s Amberwing Arm Server Chip Finally Takes Flight was written by Timothy Prickett Morgan at The Next Platform.
One of the arguments Intel officials and others have made against Arm’s push to get its silicon designs into the datacenter has been the burden it would mean for enterprises and organizations in the HPC field that would have to modify application codes to get their software to run on the Arm architecture.
For HPC organizations, that would mean moving the applications from the Intel-based and IBM systems that have dominated the space for years, a time-consuming and possibly costly process.
Arm officials over the years have acknowledged the challenge, but have noted their infrastructure’s embrace of open-source software and …
Arm Smooths the Path for Porting HPC Apps was written by Nicole Hemsoth at The Next Platform.
Nobody likes to talk about the scope and scale of platforms more than we do at The Next Platform. Almost all of the interesting frameworks for various kinds of distributed computing are open source projects, but a lack of fit and finish is a common complaint across open source software.
As Mark Collier, chief operating officer at the OpenStack Foundation, puts it succinctly: “Open source doesn’t have an innovation problem. It has an integration problem.”
Collier’s chief concern, as well as that of his compatriot, Jonathan Bryce, executive director of the OpenStack Foundation and a former Racker – meaning …
Keeping OpenStack On The Edge, Bleeding And Otherwise was written by Timothy Prickett Morgan at The Next Platform.
For several years, a debate has raged about ARM and its future in the datacenter. That debate goes on, but the talk is changing.
At the beginning of the decade, ARM Holdings, the company behind the ARM chip architecture and now owned by Japanese high-tech conglomerate Softbank, said its low-power system-on-a-chip (SoC) designs were a good alternative to Intel’s dominant Xeon and derivative processors for servers and other hardware at a time when energy efficiency in systems was becoming increasingly important.
Over the years there has been speculation about when ARM-based chips would find a foothold …
Computing Is Bigger Than The Datacenter was written by Jeffrey Burt at The Next Platform.
Organizations are turning to artificial intelligence and deep learning in hopes of being able to more quickly make the right business decisions, to remake their business models and become more efficient, and to improve the experience of their customers. The fast-emerging technologies will let enterprises gain more insight into the massive amounts of data they are generating and find the trends that normally would have been hidden from them. And enterprises are quickly moving in that direction.
A Gartner survey found that 59 percent of organizations are gathering information to help them build out their AI strategies, while the rest …
Easing The Pain Of Prepping Data For AI was written by Jeffrey Burt at The Next Platform.
The global server market is increasingly driven by the hyperscalers, and the trendsetter for all of them is Amazon Web Services. The massive company dominates the fast-growing public cloud space, outpacing rivals like Microsoft Azure, Google Cloud Platform, and IBM Cloud, and is the top consumer of servers among a group of hyperscalers that are becoming the most powerful buyers of systems and new components, such as processors.
This can be seen in the numbers. According to IDC analysts, hyperscalers in the first and second quarters this year made a significant push to deploy servers, with AWS accounting for more …
New AWS Instances Sport Customized Intel Skylakes, KVM Hypervisor was written by Jeffrey Burt at The Next Platform.
Distributed telecommunications cloud environments offer service providers a way to deliver services to end users more quickly, efficiently, and cost-effectively, but they come with their share of complexity: management headaches, integration challenges, and the difficulty of coordinating operations among multiple cloud vendors.
In a recent survey by Juniper Networks, service providers said that a lack of visibility into all parts of the network cloud was the most difficult challenge facing them as they migrate to the cloud; more than half of respondents said they use two or more cloud vendors in their distributed environments, adding to the complexity and the lack …
Juniper Dons Red Hat To Ease Cloud Migration was written by Jeffrey Burt at The Next Platform.
When Hewlett Packard Enterprise bought supercomputer maker SGI back in August 2016 for $275 million, it had already invested years in creating its own “DragonHawk” chipset to build big-memory Superdome X systems that were to be the follow-ons to its PA-RISC and Itanium Superdome systems. The Superdome X machines did not support HPE’s own VMS or HP-UX operating systems, but the venerable Tandem NonStop fault-tolerant distributed database platform was put on the road to Intel’s Xeon processors four years ago.
Now, HPE is making another leap, as we suspected it would, and anointing the SGI UV-300 platform as its …
HPE’s Superdome Gets An SGI NUMAlink Makeover was written by Timothy Prickett Morgan at The Next Platform.
Red Hat is no stranger to Linux containers, considering the work its engineers have done in creating the OpenShift application development and management platform.
As The Next Platform has noted over the past couple of years, Red Hat has rapidly expanded the capabilities within OpenShift for developing and deploying Docker containers and managing them with the open source Kubernetes orchestrator, culminating with OpenShift 3.0, which was based on Kubernetes and Docker containers. It has continued to enhance the platform since. Most recently, Red Hat in September launched OpenShift Container Platform 3.6, which added upgraded security features and more consistency across …
Red Hat Wraps OpenStack In Containers was written by Jeffrey Burt at The Next Platform.
Private equity firm Silver Lake Partners has an appetite for tech. Securing the funding for Dell to take itself private and then go out and buy EMC and VMware is going to take a backseat, both in deal size and in potential ripple effects in the datacenter, now that chip giant Broadcom is making an unsolicited bid, backed by Silver Lake, to take over sometime chip rival Qualcomm.
Should this deal pass shareholder and regulatory muster, it could finally create a chip giant that can counterbalance Intel in the datacenter – something that Broadcom and Qualcomm both …
How The Largest Tech Deal In History Might Affect Systems was written by Timothy Prickett Morgan at The Next Platform.
Converged and hyperconverged infrastructure, the tightly integrated systems that bring together compute and storage in pre-tested and pre-configured stacks, continues to be in high demand from enterprises looking to rework their datacenters into private clouds that can more easily and, in the long run, more cheaply host fast-emerging workloads like analytics, mobile applications, Internet of Things telemetry, virtual and augmented reality, and various forms of software-defined infrastructure. These CI and HCI platforms are designed to bring greater flexibility and scalability, ease deployment and management, and reduce costs in areas such as acquisition and power consumption.
IDC analysts have been …
Fujitsu, NetApp Tag Team For Converged Infrastructure was written by Jeffrey Burt at The Next Platform.
If you want to build infrastructure that scales larger than a single image of a server and an operating system, you have no choice but to network together multiple machines. And so, the network becomes a kind of hyper backplane between compute elements and, in many cases, also a kind of virtual peripheral bus for things like disk and flash storage. From the outside, a warehouse-scale computer, as Google has been calling them for nearly a decade, is meant to look and behave like one machine even if it most certainly is not.
It is hard to quantify how …
For Google Networks, Predictable Latency Trumps Everything was written by Timothy Prickett Morgan at The Next Platform.
When enterprises talk about cloud computing, they invariably talk about hybrid and multi-cloud environments. Not all of their workloads will run on Amazon Web Services, Microsoft Azure, Google Cloud Platform – or only on one public cloud, for that matter.
In highly regulated industries like healthcare and financial services, some workloads will run in private clouds hosted by the enterprises themselves for compliance, security, and privacy reasons. Companies that have invested millions of dollars in their datacenters over the years also will want to protect those investments by leveraging them for private clouds. What’s important to them is being …
IBM Builds Private Cloud Stack With Kubernetes And Containers was written by Jeffrey Burt at The Next Platform.
The general HPC market might be growing, and the very definition of HPC is expanding thanks to the addition of advanced analytics and machine learning to the HPC toolbox. But it is tough slogging right now in the upper echelons of HPC where supercomputers roam.
There is perhaps no better barometer of the state of supercomputing than Cray, which sells a mix of processing, storage, and interconnect technologies to address the ever-widening scope of modern supercomputing. Because of a general slowdown in supercomputer sales thanks to the fact that organizations are keeping their systems around for longer than they usually …
Cray Looks Forward To Supercomputing Rebound was written by Timothy Prickett Morgan at The Next Platform.
If you want to get a microcosmic view of the epic battle between Ethernet and InfiniBand (which also includes Omni-Path, no matter how much Intel protests) as they relate to high performance computing in its many modern guises, there is perhaps no better place to look than at what Mellanox Technologies is selling.
Mellanox, which has been peddling InfiniBand chips, switches, and adapters since the inception of this technology, bought its biggest rival in switch sales, Voltaire, for $218 million back in November 2010. And that was perhaps its smartest move right up to the moment where the company launched …
The Tug Of War Between InfiniBand And Ethernet was written by Timothy Prickett Morgan at The Next Platform.
GPU-accelerated supercomputing is not a new phenomenon, with many high performance computing codes already primed to run on Nvidia hardware in particular.
However, for some legacy codes with special needs (changing models, high computational demands), particularly in areas like weather, the gap between those codes and the promise of GPU acceleration is rather large, even with higher-level tools like OpenACC that promise to bridge the divide without major code rewrites.
Given the limitations of porting some legacy Fortran codes to GPUs, a research team at Tokyo Tech has devised what it calls “hybrid Fortran,” which is designed to “increase productivity when …
Hybrid Fortran Pulls Legacy Codes into Acceleration Era was written by Nicole Hemsoth at The Next Platform.