Author Archives: Timothy Prickett Morgan

Lagging In AI? Don’t Worry, It’s Still Early

Without splitting a lot of hairs on definitions, it is safe to say that machine learning in its myriad forms is absolutely shaking up data processing. The techniques for training neural networks to chew through mountains of labeled data and make inferences against new data are set to transform every aspect of computation and automation. There is a mad dash to do something, as there always is at the beginning of every technology hype cycle.

Enterprises need to breathe. The hyperscalers are perfecting these technologies, which are changing fast, and by the time things settle out and the software …

Lagging In AI? Don’t Worry, It’s Still Early was written by Timothy Prickett Morgan at The Next Platform.

Making The Case For Fully Converged Arm Servers

There has been a lot of research and development devoted to bringing the Arm architecture to servers and storage in the datacenter, and a lot of that has focused on making beefier and usually custom Arm cores that look more like an X86 core than they do the kind of compute element we find in our smartphones and tablets. The other way to bring Arm to the datacenter is to use more modest processing elements and to gang a lot of them up together, cramming a lot more cores in a rack and making up the performance in volume.

This …

Making The Case For Fully Converged Arm Servers was written by Timothy Prickett Morgan at The Next Platform.

The Majority Of Systems Sold Are Converged, Maybe

In the early days of computing in the datacenter, vendors of systems pretty much owned their platforms, from the chip all the way up to the compiler.

When you invested in an IBM, Sperry, Burroughs, NEC, Bull, Hitachi, or Fujitsu mainframe, or one of the myriad minicomputer systems from Big Blue, Digital, Hewlett-Packard, or eventually Unix systems from Sun Microsystems and its competition (mainly Data General, SGI, HP, and IBM), you were really investing in a way of computing life. A lot of the decisions about what to buy were already made, and you didn’t have to think much about …

The Majority Of Systems Sold Are Converged, Maybe was written by Timothy Prickett Morgan at The Next Platform.

Taking The Long View On High Performance Networking

It is hard to make a profit selling hardware to supercomputing centers, hyperscalers, and cloud builders, all of whom demand the highest performance at the lowest prices. But in the first quarter of this year, network chip, adapter, switch, and cable supplier Mellanox Technologies – which has products aimed at all three of these segments – managed to do it.

And with activist investor Starboard Value pressing Mellanox to make the kinds of profits that other networking companies command, the swing to a very decent net income could not have come at a better time. Starboard has been on the …

Taking The Long View On High Performance Networking was written by Timothy Prickett Morgan at The Next Platform.

The Contradictions Of IBM’s Platform Strategy

The thing about platforms that have wide adoption and deep history is that they tend to persist. They have such economic inertia that, so long as they can keep morphing and grafting on new technologies, they persist long after alternatives have emerged and dominated data processing. Every company ultimately wants to build a platform, and has since the dawn of commercial computing, precisely because this inertia – it takes too much effort to change or replace a platform – is what generates the profits.

It is with this in mind that we contemplate …

The Contradictions Of IBM’s Platform Strategy was written by Timothy Prickett Morgan at The Next Platform.

Cray’s Ever-Expanding Compute For HPC

With choice comes complexity, and the Cambrian explosion in compute options is only going to make choosing harder, even if it is a much more satisfying intellectual and financial challenge. This added complexity is worth it because companies will be able to more closely align the hardware to the applications. This is why search engine giant Google has been driving compute diversity and why supercomputer maker Cray has been looking forward to it as well.

This expanding of the compute ecosystem is also necessary because big jumps in raw compute performance for general purpose processors are no longer possible as they were …

Cray’s Ever-Expanding Compute For HPC was written by Timothy Prickett Morgan at The Next Platform.

VMware’s Platform Can Only Reflect The Enterprise Datacenter

When a company has 500,000 enterprise customers that are paying for perpetual licenses and support on systems software – this is an absolutely enormous base by corporate standards, and a retro licensing model straight from the 1980s and 1990s – what does it do for an encore?

That’s a very good question, and for now the answer for VMware seems to be to sell virtual storage and virtual networking to that vast base of virtual compute customers, and take wheelbarrows full of money to the bank on behalf of parent Dell Technologies. Virtualization took root during the Great Recession …

VMware’s Platform Can Only Reflect The Enterprise Datacenter was written by Timothy Prickett Morgan at The Next Platform.

Docker Inevitably Embraces Kubernetes Container Orchestration

Sometimes you can beat them, and sometimes you can join them. If you are Docker, the commercial entity behind the Docker container runtime and a stack of enterprise-class software that wraps around it, and you are facing the rising popularity of the Kubernetes container orchestrator open sourced by Google, you can do both. And so, even though it has its own Swarm orchestration layer, Docker is embracing Kubernetes as a peer to Swarm in its own stack.

This is not an either/or proposition, and in fact, the way that the company has integrated Kubernetes inside of Docker Enterprise Edition, the …

Docker Inevitably Embraces Kubernetes Container Orchestration was written by Timothy Prickett Morgan at The Next Platform.

Another Step In Building The HPC Ecosystem For Arm

Many of us are impatient for Arm processors to take off in the datacenter in general and in HPC in particular. And ever so slowly, it looks like it is starting to happen.

Every system buyer wants choice because choice increases competition, which lowers cost and mitigates risk. But no organization, no matter how large, can afford to build its own software ecosystem. Even the hyperscalers like Google and Facebook, who literally make money on the apps running on their vast infrastructure, rely heavily on the open source community, taking as much as they give back. So it is …

Another Step In Building The HPC Ecosystem For Arm was written by Timothy Prickett Morgan at The Next Platform.

The Battle Of The InfiniBands, Part Two

For decades, the IT market has been obsessed with the competition between suppliers of processors, but there are rivalries between the makers of networking chips and the full-blown switches that are based on them that are just as intense. Such a rivalry exists between the InfiniBand chips from Mellanox Technologies and the Omni-Path chips from Intel, which are based on technologies Intel got six years ago when it acquired the InfiniBand business from QLogic for $125 million.

At the time, we quipped that AMD needed to buy Mellanox, but instead AMD turned right around and shelled out $334 million to …

The Battle Of The InfiniBands, Part Two was written by Timothy Prickett Morgan at The Next Platform.

Building Bigger, Faster GPU Clusters Using NVSwitches

Nvidia launched its second-generation DGX system in March. In order to build the 2 petaflops half-precision DGX-2, Nvidia had to first design and build a new NVLink 2.0 switch chip, named NVSwitch. While Nvidia is only shipping NVSwitch as an integral component of its DGX-2 systems today, Nvidia has not precluded selling NVSwitch chips to datacenter equipment manufacturers.

This article will answer many of the questions we asked in our first look at the NVSwitch chip, using DGX-2 as an example architecture.

Nvidia’s NVSwitch is a two-billion transistor non-blocking switch design incorporating 18 complete NVLink 2.0 ports …

Building Bigger, Faster GPU Clusters Using NVSwitches was written by Timothy Prickett Morgan at The Next Platform.

HPC Provides Big Bang, But Needs Big Bucks, Too

Supercomputers keep getting faster, but they keep getting more expensive, too. This is a problem, and it is one that is going to eventually affect every kind of computer until we get a new technology that is not based on CMOS chips.

The general budget and some of the feeds and speeds are out thanks to the requests for proposal for the “Frontier” and “El Capitan” supercomputers that will eventually be built for Oak Ridge National Laboratory and Lawrence Livermore National Laboratory. So now is a good time to take a look at not just the historical performance of capability …

HPC Provides Big Bang, But Needs Big Bucks, Too was written by Timothy Prickett Morgan at The Next Platform.

Talking Up the Expanding Markets for GPU Compute

There is a direct correlation between the length of time that Nvidia co-founder and chief executive officer Jensen Huang speaks during the opening keynote of each GPU Technology Conference and the total addressable market of accelerated computing based on GPUs.

This stands to reason since the market for GPU compute is expanding. We won’t discuss which is the cause and which is the effect. Or maybe we will.

It all started with offloading the parallel chunks of HPC applications from CPUs to GPUs in the early 2000s in academia, which were then first used in production HPC centers a decade …

Talking Up the Expanding Markets for GPU Compute was written by Timothy Prickett Morgan at The Next Platform.

Bidders Off And Running After $1.8 Billion DOE Exascale Super Deals

Supercomputer makers have been on their exascale marks, and they have been getting ready, and now the US Department of Energy has just said “Go!”

The requests for proposal have been opened up for two more exascale systems, with a budget ranging from $800 million to $1.2 billion for a pair of machines to be installed at Oak Ridge National Laboratory and Lawrence Livermore National Laboratory and a possible sweetener of anywhere from $400 million to $600 million that, provided funding can be found, allows Argonne National Laboratory to also buy yet another exascale machine.

Oak Ridge, Argonne, and Livermore …

Bidders Off And Running After $1.8 Billion DOE Exascale Super Deals was written by Timothy Prickett Morgan at The Next Platform.

Open Compute Iron Is All About Acceleration

The Open Compute Project (OCP) held its 9th annual US Summit recently, with 3,441 registered attendees this year. While that might seem small for a top-tier high tech event, there were also 80 exhibitors representing most of the cloud datacenter supply chain, plus a host of outstanding technical sessions. We are always on the hunt for new iron, and not surprisingly the most important gear we saw at OCP this year was related to compute acceleration.

Here is how that new iron we saw breaks down across the major trends in acceleration.

The first interesting thing we saw was a …

Open Compute Iron Is All About Acceleration was written by Timothy Prickett Morgan at The Next Platform.

MapD Fires Up GPU Cloud Service

In the long run, provided there are enough API pipes into the code, software as a service might be the most popular way to consume applications and systems software for all but the largest organizations that are running at such a scale that they can command almost as good prices for components as the public cloud intermediaries. The hassle of setting up and managing complex code is in a lot of cases larger than the volume pricing benefits of doing it yourself. The difference can be a profit margin for both cloud builders and the software companies that peddle their …

MapD Fires Up GPU Cloud Service was written by Timothy Prickett Morgan at The Next Platform.

Inside Nvidia’s NVSwitch GPU Interconnect

At the GPU Technology Conference last week, we told you all about the new NVSwitch memory fabric interconnect that Nvidia created to link multiple “Volta” GPUs together and that is at the heart of the DGX-2 system the company built to demonstrate its capabilities and to use on its internal Saturn V supercomputer at some point in the future.

Since the initial announcements, more details have been revealed by Nvidia about NVSwitch, including details of the chip itself and how it helps applications wring a lot more performance from the GPU accelerators.

Our first observation upon looking …

Inside Nvidia’s NVSwitch GPU Interconnect was written by Timothy Prickett Morgan at The Next Platform.

The Buck Stops – And Starts – Here For GPU Compute

Ian Buck doesn’t just run the Tesla accelerated computing business at Nvidia, which is one of the company’s fastest-growing and most profitable products in its twenty-five year history. The work that Buck and other researchers started at Stanford University in 2000 and then continued at Nvidia helped to transform a graphics card shader into a parallel compute engine that is helping to solve some of the world’s toughest simulation and machine learning problems.

The annual GPU Technology Conference was held by Nvidia last week, and we sat down and had a chat with Buck about a bunch of things …

The Buck Stops – And Starts – Here For GPU Compute was written by Timothy Prickett Morgan at The Next Platform.

Fueling AI With A New Breed of Accelerated Computing

A major transformation is happening now as technological advancements and escalating volumes of diverse data drive change across all industries. Cutting-edge innovations are fueling digital transformation on a global scale, and organizations are leveraging faster, more powerful machines to operate more intelligently and effectively than ever.

Recently, Hewlett Packard Enterprise (HPE) announced the new HPE Apollo 6500 Gen10 server, a groundbreaking platform designed to tackle the most compute-intensive high performance computing (HPC) and deep learning workloads. Deep learning – an exciting development in artificial intelligence (AI) – enables machines to solve highly complex problems quickly by autonomously analyzing …

Fueling AI With A New Breed of Accelerated Computing was written by Timothy Prickett Morgan at The Next Platform.

Removing The Storage Bottleneck For AI

If the history of high performance computing has taught us anything, it is that we cannot focus too much on compute at the expense of storage and networking. Having all of the compute in the world doesn’t mean diddlysquat if the storage can’t get data to the compute elements – whatever they might be – in a timely fashion with good sustained performance.

Many organizations that have invested in GPU accelerated servers are finding this out the hard way when their performance comes up short once they get down to the work of training their neural networks, and this is particularly …

Removing The Storage Bottleneck For AI was written by Timothy Prickett Morgan at The Next Platform.
