When looking for all-flash storage arrays, buyers have no lack of options. Small businesses and hyperscalers alike helped fuel the initial uptake of flash storage several years ago, and since then larger businesses have taken the plunge to help drive savings in such areas as power and cooling costs, floor and rack space, and software licensing.
The increasing demand for the technology – see the rapid growth of Pure Storage, the original flash array upstart, over the past nine years – has not only fueled the rise of smaller vendors but also the portfolio expansion of such established …
Vexata Has Its Own Twist On Scaling Flash Storage was written by Jeffrey Burt at The Next Platform.
It is hard to make a profit selling hardware to supercomputing centers, hyperscalers, and cloud builders, all of whom demand the highest performance at the lowest prices. But in the first quarter of this year, network chip, adapter, switch, and cable supplier Mellanox Technologies – which has products aimed at all three of these segments – managed to do it.
And with activist investor Starboard Value pressing Mellanox to make the kinds of profits that other networking companies command, the swing to a very decent net income could not have come at a better time. Starboard has been on the …
Taking The Long View On High Performance Networking was written by Timothy Prickett Morgan at The Next Platform.
Intel has been making some interesting moves in the community space recently: free licenses for its compiler suite can now be had by educators and open source contributors, as can rotating 90-day licenses for its full System Studio environment for anyone who takes the time to sign up.
In the AI space, Intel recently announced that its nGraph code for managing AI graph APIs has also been opened to the community. After opening it up last month, Intel has followed up on the initial work on MXNet with further improvements to TensorFlow.
The Next Platform …
Is Open Source The AI Nirvana for Intel? was written by James Cuff at The Next Platform.
The European Union has never been willing to cede the exascale computing race to the United States, Japan, or China.
In recent years, Europe has ramped up its investments in the HPC space through such programs as Horizon 2020, an effort to grow R&D in Europe; EuroHPC, which aims to drive development of exascale systems; and the Partnership for Advanced Computing in Europe (PRACE), which aims to develop a distributed supercomputing infrastructure that will be accessible to researchers, businesses, and academic institutions throughout the EU. The SAGE project will create a multi-tiered storage platform for data-centric exascale computing to enable …
EU Swaps JuQueen BlueGene/Q For Modular Xeon JUWELS Supercomputer was written by Jeffrey Burt at The Next Platform.
The thing about platforms that have wide adoption and a deep history is that they tend to persist. They have such economic inertia that, so long as they can keep morphing and grafting on new technologies, they persist long after alternatives have emerged and dominated data processing. Every company ultimately wants to build a platform, and has since the dawn of commercial computing, for precisely this reason: the inertia – it takes too much effort to change or replace a platform – is what generates the profits.
It is with this in mind that we contemplate …
The Contradictions Of IBM’s Platform Strategy was written by Timothy Prickett Morgan at The Next Platform.
With choice comes complexity, and the Cambrian explosion in compute options is only going to make choosing harder, even if it is a much more satisfying intellectual and financial challenge. This added complexity is worth it because companies will be able to more closely align the hardware to the applications. This is why search engine giant Google has been driving compute diversity and why supercomputer maker Cray has been looking forward to it as well.
This expanding of the compute ecosystem is also necessary because big jumps in raw compute performance for general purpose processors are no longer possible as they once were …
Cray’s Ever-Expanding Compute For HPC was written by Timothy Prickett Morgan at The Next Platform.
When a company has 500,000 enterprise customers that are paying for perpetual licenses and support on systems software – this is an absolutely enormous base by corporate standards, and a retro licensing model straight from the 1980s and 1990s – what does it do for an encore?
That’s a very good question, and for now the answer for VMware seems to be to sell virtual storage and virtual networking to that vast base of virtual compute customers, and take wheelbarrows full of money to the bank on behalf of parent Dell Technologies. Virtualization took root during the Great Recession …
VMware’s Platform Can Only Reflect The Enterprise Datacenter was written by Timothy Prickett Morgan at The Next Platform.
For the past several years, machine learning as evolved by the hyperscalers has been trickling down from on high, through frameworks and services, into enterprises.
Machine learning is becoming a regular technique underpinning applications in a growing number of industries like manufacturing, energy, telecommunications, and engineering, where companies see it as a way to not only reduce the costs and improve the efficiencies of their operations but also to more quickly detect patterns in and gain insights from the huge amounts of data they are generating. The goal is to make better and faster business decisions, and to …
DHL Gets Logical – And Logistical – About Machine Learning was written by Jeffrey Burt at The Next Platform.
HPC software evolves continuously. Those now finding themselves on the frontlines of HPC support are having to invent and build new technologies just to keep up with the deluge of layers and layers of software on top of software – and software is only part of the bigger picture.
We have talked in the past about computational balance and the challenges of unplanned data; these are both real and tangible issues. However, now in addition to all of that, those in support roles living at the sharp end of having to support research are also faced with what is increasingly turning …
Containing the Complexity of the Long Tail was written by James Cuff at The Next Platform.
Sometimes you can beat them, and sometimes you can join them. If you are Docker, the commercial entity behind the Docker container runtime and a stack of enterprise-class software that wraps around it, and you are facing the rising popularity of the Kubernetes container orchestrator open sourced by Google, you can do both. And so, even though it has its own Swarm orchestration layer, Docker is embracing Kubernetes as a peer to Swarm in its own stack.
This is not an either/or proposition, and in fact, the way that the company has integrated Kubernetes inside of Docker Enterprise Edition, the …
Docker Inevitably Embraces Kubernetes Container Orchestration was written by Timothy Prickett Morgan at The Next Platform.
As humankind continues to stare into the dark abyss of deep space in an eternal quest to understand our origins, new computational tools and technologies are needed at unprecedented scales. Gigantic datasets from advanced high resolution telescopes and huge scientific instrumentation installations are overwhelming classical computational and storage techniques.
This is the key issue with exploring the Universe – it is very, very large. Combining advances in machine learning and high speed data storage is starting to provide hitherto unheard of levels of insight that were previously in the realm of pure science fiction. Using computer systems to infer knowledge …
GPUs Mine Astronomical Datasets For Golden Insight Nuggets was written by James Cuff at The Next Platform.
Many of us are impatient for Arm processors to take off in the datacenter in general and in HPC in particular. And ever so slowly, it looks like it is starting to happen.
Every system buyer wants choice because choice increases competition, which lowers cost and mitigates against risk. But no organization, no matter how large, can afford to build its own software ecosystem. Even the hyperscalers like Google and Facebook, who literally make money on the apps running on their vast infrastructure, rely heavily on the open source community, taking as much as they give back. So it is …
Another Step In Building The HPC Ecosystem For Arm was written by Timothy Prickett Morgan at The Next Platform.
Hyperconverged storage is a hot commodity right now. Enterprises want to dump their disk arrays and get an easier and less costly way to scale the capacity and performance of their storage to keep up with application demands. Nutanix has become a significant player in a space where established vendors like Hewlett Packard Enterprise, Dell EMC, and Cisco Systems are broadening their portfolios and capabilities.
But as hyperconverged infrastructure (HCI) becomes increasingly popular and begins moving up from midrange environments into larger enterprises, challenges are becoming evident, from the need to bring in new – and at times …
Pushing Up The Scale For Hyperconverged Storage was written by Jeffrey Burt at The Next Platform.
For decades, the IT market has been obsessed with the competition between suppliers of processors, but there are rivalries between the makers of networking chips and the full-blown switches that are based on them that are just as intense. Such a rivalry exists between the InfiniBand chips from Mellanox Technologies and the Omni-Path chips from Intel, which are based on technologies Intel got six years ago when it acquired the InfiniBand business from QLogic for $125 million.
At the time, we quipped that AMD needed to buy Mellanox, but instead AMD turned right around and shelled out $334 million to …
The Battle Of The InfiniBands, Part Two was written by Timothy Prickett Morgan at The Next Platform.
Nvidia launched its second-generation DGX system in March. To build the 2 petaflops half-precision DGX-2, Nvidia had to first design and build a new NVLink 2.0 switch chip, named NVSwitch. While Nvidia is only shipping NVSwitch as an integral component of its DGX-2 systems today, it has not precluded selling NVSwitch chips to datacenter equipment manufacturers.
This article will answer many of the questions we asked in our first look at the NVSwitch chip, using DGX-2 as an example architecture.
Nvidia’s NVSwitch is a two-billion transistor non-blocking switch design incorporating 18 complete NVLink 2.0 ports …
Building Bigger, Faster GPU Clusters Using NVSwitches was written by Timothy Prickett Morgan at The Next Platform.
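As a side note to the NVSwitch piece above, here is a minimal back-of-the-envelope sketch of what that port count implies. It assumes the commonly cited 25 GB/s per direction rate for an NVLink 2.0 link, the 18-port figure from the excerpt, and six NVLink ports per Volta GPU in a DGX-2 – assumptions for illustration, not vendor-verified numbers.

```python
# Back-of-the-envelope NVSwitch / DGX-2 bandwidth arithmetic.
# Assumptions (not vendor-verified): 25 GB/s per direction per NVLink 2.0 link,
# 18 ports per NVSwitch (per the excerpt above), 6 NVLink ports per Volta GPU.

NVLINK2_GBPS_PER_DIRECTION = 25   # assumed NVLink 2.0 rate, GB/s each way
PORTS_PER_NVSWITCH = 18           # port count cited in the article
NVLINKS_PER_GPU = 6               # assumed NVLink ports on each Volta GPU

link_bidir = 2 * NVLINK2_GBPS_PER_DIRECTION          # 50 GB/s per link, both directions
switch_aggregate = PORTS_PER_NVSWITCH * link_bidir   # 18 x 50 = 900 GB/s through one NVSwitch
gpu_fabric_bw = NVLINKS_PER_GPU * link_bidir         # 6 x 50 = 300 GB/s per GPU into the fabric

print(f"Bidirectional bandwidth per NVLink 2.0 link: {link_bidir} GB/s")
print(f"Aggregate bandwidth through one NVSwitch:    {switch_aggregate} GB/s")
print(f"NVLink fabric bandwidth per GPU:             {gpu_fabric_bw} GB/s")
```

The point of the arithmetic is simply that, under those assumptions, a full complement of 18 ports gives a single NVSwitch far more aggregate bandwidth than any one GPU can source or sink, which is what allows the chip to act as the non-blocking switch the article describes.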
Hyperconverged infrastructure in some ways is like the credit card in those old TV ads: in this case, it’s everywhere that enterprises want to be. HCI puts compute and storage on the same cluster, tightly integrates them with networking and unified management tools, and essentially gives enterprises a private cloud for the datacenter, as well as pushing compute out to the edges in a consistent manner.
HCI also promises a bunch of other things beneficial to enterprises, including streamlined management, lower costs, faster speeds, and easier scalability than traditional IT systems to better address the rise of cloud computing, analytics, …
The Evolution Of Hyperconverged Storage To Composable Systems was written by Jeffrey Burt at The Next Platform.
Supercomputers keep getting faster, but they also keep getting more expensive. This is a problem, and it is one that is going to eventually affect every kind of computer until we get a new technology that is not based on CMOS chips.
The general budget and some of the feeds and speeds are out thanks to the requests for proposal for the “Frontier” and “El Capitan” supercomputers that will eventually be built for Oak Ridge National Laboratory and Lawrence Livermore National Laboratory. So now is a good time to take a look at not just the historical performance of capability …
HPC Provides Big Bang, But Needs Big Bucks, Too was written by Timothy Prickett Morgan at The Next Platform.
Nvidia caused a shift in high-end computing more than a decade ago when it introduced its general-purpose GPUs and CUDA development platform to work with CPUs to increase the performance of compute-intensive workloads in HPC and other environments and drive greater energy efficiencies in datacenters.
Nvidia and to a lesser extent AMD, with its Radeon GPUs, took advantage of the growing demand for more speed and less power consumption to build out their portfolios of GPU accelerators and expand their use in a range of systems, to the point where in the last Top500 list of the world’s fastest …
Dell EMC and Fujitsu Roll Intel FPGAs Into Servers was written by Jeffrey Burt at The Next Platform.
There is a direct correlation between the length of time that Nvidia co-founder and chief executive officer Jensen Huang speaks during the opening keynote of each GPU Technology Conference and the total addressable market of accelerated computing based on GPUs.
This stands to reason since the market for GPU compute is expanding. We won’t discuss which is the cause and which is the effect. Or maybe we will.
It all started with offloading the parallel chunks of HPC applications from CPUs to GPUs in the early 2000s in academia, which were then first used in production HPC centers a decade …
Talking Up the Expanding Markets for GPU Compute was written by Timothy Prickett Morgan at The Next Platform.
Just because supercomputers are engineered to be far more powerful over time does not necessarily mean programmer productivity will follow the same curve. Added performance means more complexity, which for parallel programmers means working harder, especially when accelerators are thrown into the mix.
It was with all of this in mind that DARPA’s High Productivity Computing Systems (HPCS) program rolled out in 2002 to support higher performance but more usable HPC systems. This is the era that spawned systems like Blue Waters, for instance, and various co-design efforts from IBM and Intel to bring parallelism within broader reach of the …
Will Chapel Mark Next Great Awakening for Parallel Programmers? was written by Nicole Hemsoth at The Next Platform.