
Author Archives: Timothy Prickett Morgan

MapD Fires Up GPU Cloud Service

In the long run, provided there are enough API pipes into the code, software as a service might be the most popular way to consume applications and systems software for all but the largest organizations, which run at such a scale that they can command almost as good prices for components as the public cloud intermediaries can. In many cases, the hassle of setting up and managing complex code outweighs the volume pricing benefits of doing it yourself. The difference can be a profit margin for both cloud builders and the software companies that peddle their

MapD Fires Up GPU Cloud Service was written by Timothy Prickett Morgan at The Next Platform.

Inside Nvidia’s NVSwitch GPU Interconnect

At the GPU Technology Conference last week, we told you all about the new NVSwitch memory fabric interconnect that Nvidia has created to link multiple “Volta” GPUs together and that is at the heart of the DGX-2 system that the company has created to demonstrate its capabilities and to use on its internal Saturn V supercomputer at some point in the future.

Since the initial announcements, more details have been revealed by Nvidia about NVSwitch, including details of the chip itself and how it helps applications wring a lot more performance from the GPU accelerators.

Our first observation upon looking

Inside Nvidia’s NVSwitch GPU Interconnect was written by Timothy Prickett Morgan at The Next Platform.

The Buck Stops – And Starts – Here For GPU Compute

Ian Buck doesn’t just run the Tesla accelerated computing business at Nvidia, which is one of the fastest-growing and most profitable product lines in the company’s twenty-five year history. The work that Buck and other researchers started at Stanford University in 2000 and then continued at Nvidia helped to transform a graphics card shader into a parallel compute engine that is helping to solve some of the world’s toughest simulation and machine learning problems.

The annual GPU Technology Conference was held by Nvidia last week, and we sat down and had a chat with Buck about a bunch of things

The Buck Stops – And Starts – Here For GPU Compute was written by Timothy Prickett Morgan at The Next Platform.

Fueling AI With A New Breed of Accelerated Computing

A major transformation is happening now as technological advancements and escalating volumes of diverse data drive change across all industries. Cutting-edge innovations are fueling digital transformation on a global scale, and organizations are leveraging faster, more powerful machines to operate more intelligently and effectively than ever.

Recently, Hewlett Packard Enterprise (HPE) has announced the new HPE Apollo 6500 Gen10 server, a groundbreaking platform designed to tackle the most compute-intensive high performance computing (HPC) and deep learning workloads. Deep learning – an exciting development in artificial intelligence (AI) – enables machines to solve highly complex problems quickly by autonomously analyzing

Fueling AI With A New Breed of Accelerated Computing was written by Timothy Prickett Morgan at The Next Platform.

Removing The Storage Bottleneck For AI

If the history of high performance computing has taught us anything, it is that we cannot focus too much on compute at the expense of storage and networking. Having all of the compute in the world doesn’t mean diddlysquat if the storage can’t get data to the compute elements – whatever they might be – in a timely fashion with good sustained performance.

Many organizations that have invested in GPU accelerated servers are finding this out the hard way when their performance comes up short when they get down to do work training their neural networks, and this is particularly

Removing The Storage Bottleneck For AI was written by Timothy Prickett Morgan at The Next Platform.
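The back-of-envelope math behind that bottleneck is easy to sketch. The figures below (images per second per GPU, average image size) are illustrative assumptions, not measurements from any particular system:

```python
# Back-of-envelope check: can the storage subsystem feed the GPUs during
# neural network training? All numbers here are illustrative assumptions.

def required_read_bandwidth(gpus=8, images_per_sec_per_gpu=1000,
                            bytes_per_image=150_000):
    """Sustained read bandwidth (GB/s) needed to keep the GPUs busy."""
    return gpus * images_per_sec_per_gpu * bytes_per_image / 1e9

# An eight-GPU server chewing through 1,000 modest images/sec per GPU
# already needs over a gigabyte per second of sustained reads.
print(f"~{required_read_bandwidth():.1f} GB/s sustained, input data alone")
```

If the storage tier cannot sustain that rate, the GPUs sit idle no matter how much compute was purchased, which is precisely the hard lesson described above.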

Nvidia’s DGX-2 System Packs An AI Performance Punch

When Nvidia co-founder and chief executive officer Jensen Huang told the assembled multitudes at the keynote opening to the GPU Technology Conference that the new DGX-2 system, weighing in at 2 petaflops at half precision using the latest Tesla GPU accelerators, would cost $1.5 million when it became available in the third quarter, the audience paused for a few seconds, doing the human-speed math to try to reckon how that stacked up to the DGX-1 servers sporting eight Teslas.

This sounded like a pretty high price, even for such an impressive system – really a GPU cluster with some CPU

Nvidia’s DGX-2 System Packs An AI Performance Punch was written by Timothy Prickett Morgan at The Next Platform.
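The human-speed math the audience was doing can be written down directly. The DGX-2 figures ($1.5 million, 2 petaflops at half precision) come from the keynote; the DGX-1 figures used for comparison ($149,000 list, roughly 1 petaflops at half precision from eight Teslas) are assumptions for the sake of the sketch:

```python
# Rough dollars-per-teraflops comparison of DGX-2 against DGX-1.
# DGX-2 numbers are from the keynote; DGX-1 numbers are assumed.

def dollars_per_teraflops(price, teraflops):
    """Price per teraflops of half-precision compute."""
    return price / teraflops

dgx2 = dollars_per_teraflops(1_500_000, 2_000)  # $1.5M for 2 PF
dgx1 = dollars_per_teraflops(149_000, 1_000)    # assumed: $149K for ~1 PF
print(f"DGX-2: ${dgx2:.0f}/TF   DGX-1: ${dgx1:.0f}/TF")
```

On those assumed numbers, the raw dollars-per-teraflops of the DGX-2 is several times that of the DGX-1, which is why the price drew a pause, and why the NVSwitch fabric, memory capacity, and scaling behavior have to carry the justification.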

Nvidia Memory Switch Welds Together Massive Virtual GPU

It has happened time and time again in computing in the past three decades in the datacenter: A device scales up its capacity – be it compute, storage, or networking – as high as it can go, and then it has to go parallel and scale out.

The NVLink interconnect that Nvidia created to lash together its “Pascal” and “Volta” GPU accelerators into a kind of giant virtual GPU was the first phase of this scale out for Tesla compute. But with only six NVLink ports on a Volta SXM2 device, there is a limit to how many Teslas can

Nvidia Memory Switch Welds Together Massive Virtual GPU was written by Timothy Prickett Morgan at The Next Platform.
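The port-count limit is simple combinatorics: a directly connected all-to-all mesh of n devices needs n-1 ports on each one, which is why six NVLink ports cap a full mesh at seven GPUs and why anything bigger wants a switch. A quick sketch of the arithmetic:

```python
# Why six NVLink ports cap the size of a directly connected GPU complex:
# a full (all-to-all) mesh of n devices needs n-1 ports on each device.

def max_full_mesh(ports_per_gpu):
    """Largest all-to-all mesh reachable without a switch."""
    return ports_per_gpu + 1

def ports_needed(n_gpus):
    """Ports each GPU would need for a full mesh of n_gpus devices."""
    return n_gpus - 1

print(max_full_mesh(6))   # Volta's six ports: at most 7 GPUs, all-to-all
print(ports_needed(16))   # a 16-GPU full mesh would need 15 ports each
```

A 16-GPU full mesh would need fifteen ports per device, far beyond what the SXM2 package offers, and that gap is exactly what a memory switch sitting between the GPUs closes.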

In Modern Datacenters, The Latency Tail Wags The Network Dog

The expression, the tail wags the dog, is used when a seemingly unimportant factor or infrequent event actually dominates the situation. It turns out that in modern datacenters, this is precisely the case – with relatively rare events determining overall performance.

As the world continues to undergo a digital transformation, one of the most pressing challenges faced by cloud and web service providers is building hyperscale datacenters to handle the growing pace of interactive and real-time requests, generated by the enormous growth of users and mobile apps. With the increasing scale and demand for services, IT organizations have turned

In Modern Datacenters, The Latency Tail Wags The Network Dog was written by Timothy Prickett Morgan at The Next Platform.
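The arithmetic behind the headline is the classic tail-at-scale effect: when a user request fans out to many servers in parallel and waits for all of them, even a rare slow response comes to dominate. The probabilities below are illustrative, not measured:

```python
# Tail-at-scale sketch: a request that fans out to n servers in parallel
# is slow if ANY one of them is slow. With independent servers, the odds
# of hitting at least one slow response grow quickly with fan-out.

def prob_request_is_slow(p_slow_server, fanout):
    """Probability that at least one of `fanout` parallel calls is slow."""
    return 1 - (1 - p_slow_server) ** fanout

print(f"{prob_request_is_slow(0.01, 1):.0%}")    # single call: 1%
print(f"{prob_request_is_slow(0.01, 100):.0%}")  # fan-out of 100: ~63%
```

With each server slow just 1 percent of the time, a fan-out of 100 makes roughly 63 percent of user-facing requests slow, which is exactly the sense in which the latency tail wags the network dog.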

Google And Its Hyperscale Peers Add Power To The Server Fleet

Six years ago, when Google decided to get involved with the OpenPower consortium being put together by IBM as its third attempt to bolster the use of Power processors in the datacenter, the online services giant had three applications that had over 1 billion users: Gmail, YouTube, and the eponymous search engine that has become the verb for search.

Now, after years of working with Rackspace Hosting on a Power9 server design, Google is putting systems based on IBM’s Power9 processor into production, and not just because it wants pricing leverage with Intel and other chip suppliers. Google now has

Google And Its Hyperscale Peers Add Power To The Server Fleet was written by Timothy Prickett Morgan at The Next Platform.

OpenPower At The Inflection Point

When IBM launched the OpenPower initiative publicly five years ago, to many it seemed like a classic case of too little, too late. But hope springs eternal, particularly with a datacenter sector that is eagerly and actively seeking an alternative to the Xeon processor to curtail the hegemony that Intel has in the glass house.

Perhaps the third time will be the charm. Back in 1991, Apple and IBM and Motorola teamed up to create the AIM Alliance, which sought to create a single unified computing architecture that was suitable for embedded and desktop applications, replacing the Motorola 68000 processors

OpenPower At The Inflection Point was written by Timothy Prickett Morgan at The Next Platform.

Turbulence – And Opportunity – Ahead In The Oracle Sparc Base

You can’t swing a good-sized cat without hitting an enterprise running Oracle software in some shape or form. If it’s not Oracle’s ubiquitous database, then it’s one of its middleware platforms or its enterprise applications in the Fusion suite or its predecessors in the Oracle, Siebel, PeopleSoft, and JD Edwards suites.

Currently Oracle boasts 430,000 customers running its software – that’s quite an installed base. And it’s all teed up to become quite a battleground. Why?

Six months or so ago, news broke that Oracle was laying off a large number of hardware folks. Something like 2,500 Sparc and Solaris

Turbulence – And Opportunity – Ahead In The Oracle Sparc Base was written by Timothy Prickett Morgan at The Next Platform.

Argonne Hints at Future Architecture of Aurora Exascale System

There are two supercomputers named “Aurora” affiliated with Argonne National Laboratory: the one that was supposed to be built this year, and the one that was known for a short time last year as “A21,” which will be built in 2021 and which will be the first exascale system built in the United States.

Details have just emerged on the second, and now the only important, Aurora system, thanks to Argonne opening up proposals for the early science program that lets researchers put code on the supercomputer for three months before it starts its production work. The proposal

Argonne Hints at Future Architecture of Aurora Exascale System was written by Timothy Prickett Morgan at The Next Platform.

How Spectre And Meltdown Mitigation Hits Xeon Performance

It has been more than two months since Google revealed its research on the Spectre and Meltdown speculative execution security vulnerabilities in modern processors, and caused the whole IT industry to slam on the brakes and brace for the impact. The initial microbenchmark results on the mitigations for these security holes, put out by Red Hat, showed the impact could be quite dramatic. But according to recent tests done by Intel, the impact is not as bad as one might think in many cases. In other cases, the impact is quite severe.

The Next Platform has gotten its hands on

How Spectre And Meltdown Mitigation Hits Xeon Performance was written by Timothy Prickett Morgan at The Next Platform.
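For readers wanting to check what mitigations their own machines are running before worrying about the performance tax, the Linux kernel reports per-vulnerability status under sysfs. A small sketch that reads that interface (it returns nothing on kernels or operating systems that do not expose it):

```python
# Read the Linux kernel's per-vulnerability mitigation status from sysfs.
# On kernels that predate the interface (or non-Linux systems), this
# simply returns an empty dict rather than failing.
from pathlib import Path

def mitigation_status(sysdir="/sys/devices/system/cpu/vulnerabilities"):
    """Return {vulnerability_name: kernel-reported status string}."""
    p = Path(sysdir)
    if not p.is_dir():
        return {}
    return {f.name: f.read_text().strip() for f in sorted(p.iterdir())}

for name, status in mitigation_status().items():
    print(f"{name:20s} {status}")
```

Typical entries include `spectre_v1`, `spectre_v2`, and `meltdown`, with status strings such as "Mitigation: PTI"; the exact set depends on the kernel version and CPU.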

Getting AI Leverage With GPU-Optimized Systems

The artificial intelligence revolution is quickly changing every industry, and modern data centers must be equipped to capitalize on these extraordinary new capabilities. Hewlett Packard Enterprise (HPE) and Nvidia are partnering to bring best-of-breed AI solutions to every customer, offering AI-integrated systems, services, and support capabilities to help all organizations seamlessly optimize their AI foundation, deliver differentiated outcomes, and gain competitive advantage.

High performance computing has become key to solving many of the world’s grand challenges in the realms of science, industry, and engineering. However, traditional CPUs are increasingly failing to deliver the performance gains they used to, and the

Getting AI Leverage With GPU-Optimized Systems was written by Timothy Prickett Morgan at The Next Platform.

FPGA Interconnect Boosted In Concert With Compute

To keep their niche in computing, field programmable gate arrays not only need to stay on the cutting edge of chip manufacturing processes. They also have to include the most advanced networking to balance out that compute, rivalling that which the makers of switch ASICs provide in their chips.

By comparison, CPUs have it easy. They don’t have the serializer/deserializer (SerDes) circuits that switch chips have as the foundation of their switch fabric. Rather, they might have a couple of integrated Ethernet network interface controllers embedded on the die, maybe running at 1 Gb/sec or 10 Gb/sec, and they offload

FPGA Interconnect Boosted In Concert With Compute was written by Timothy Prickett Morgan at The Next Platform.

Why Cisco Should – And Should Not – Acquire Pure Storage

Flash memory has become absolutely normal in the datacenter, but that does not mean it is ubiquitous and it most certainly does not mean that all flash arrays, whether homegrown and embedded in servers or purchased as appliances, are created equal. They are not, and you can tell not only from the feeds and speeds, but from the dollars and sense.

It has been nine years since Pure Storage, one of the original flash array upstarts, was founded and seven years since the company dropped out of stealth with its first generation of FlashArray products. In that relatively short time,

Why Cisco Should – And Should Not – Acquire Pure Storage was written by Timothy Prickett Morgan at The Next Platform.

Drilling Down Into Ethernet Switch Trends

Of the three pillars of the datacenter – compute, storage, and networking – the one that consistently still has some margins and yet does not dominate the overall system budget is networking. While these elements affect each other, they are still largely standalone realms, with their own specialized devices and suppliers. And so it is important to know the trends in the technologies.

Until fairly recently, the box counters like IDC and Gartner have been pretty secretive about the data they gather on the networking business. But IDC has been gradually giving a little more flavor than just saying Cisco

Drilling Down Into Ethernet Switch Trends was written by Timothy Prickett Morgan at The Next Platform.

Weaving A Streaming Stack Like Twitter And Yahoo

The hyperscalers of the world have to deal with dataset sizes – both streaming and at rest – and real-time processing requirements that put them into an entirely different class of computing.

They are constantly inventing and reinventing what they do in compute, storage, and networking not just because they enjoy the intellectual challenge, but because they have swelling customer bases that hammer on their systems so hard they can break them.

This is one of the reasons why an upstart called Streamlio has created a new event-driven platform that is based on the work of software engineers at Twitter, Yahoo,

Weaving A Streaming Stack Like Twitter And Yahoo was written by Timothy Prickett Morgan at The Next Platform.

The Roadmap Ahead For Exascale HPC In The US

The first step in rolling out a massive supercomputer installed at a government sponsored HPC laboratory is to figure out when you want to get it installed and doing useful work. The second is to consider the different technologies that will be available to reach performance and power envelope goals. And the third is to give it a cool name.

Last but not least is to put a stake in the ground by telling the world about the name of the supercomputer and its rough timing, thereby confirming the plans. These being publicly funded machines, this is only right.

As of today, it’s

The Roadmap Ahead For Exascale HPC In The US was written by Timothy Prickett Morgan at The Next Platform.

Tesla GPU Accelerator Bang For The Buck, Kepler To Volta

If you are running applications in the HPC or AI realms, you might be in for some sticker shock when you shop for GPU accelerators – thanks in part to the growing demand for Nvidia’s Tesla cards in those markets but also because cryptocurrency miners who can’t afford to etch their own ASICs are creating a huge demand for the company’s top-end GPUs.

Nvidia does not provide list prices or suggested street prices for its Tesla line of GPU accelerator cards, so it is somewhat more problematic to try to get a handle on the bang for the buck over

Tesla GPU Accelerator Bang For The Buck, Kepler To Volta was written by Timothy Prickett Morgan at The Next Platform.
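Even without list prices, the shape of the bang-for-the-buck calculation is straightforward. The peak FP32 figures below are from Nvidia's published specs for each generation, but the street prices are purely assumed placeholders, since Nvidia does not publish Tesla pricing:

```python
# Bang-for-the-buck sketch across Tesla generations, Kepler to Volta.
# Peak FP32 teraflops are from Nvidia's published specs; the street
# prices are ASSUMED for illustration, as Nvidia publishes no list prices.

cards = {
    #                 (peak FP32 TF, assumed street price in $)
    "K40 (Kepler)":   (4.29,  3_000),
    "P100 (Pascal)":  (10.6,  6_000),
    "V100 (Volta)":   (15.7, 10_000),
}

for name, (tflops, price) in cards.items():
    print(f"{name}: ${price / tflops:,.0f} per FP32 teraflops")
```

With those assumed prices, dollars per teraflops actually improves generation over generation, which is part of why demand keeps absorbing the higher sticker prices; the real exercise, of course, requires actual street prices, which is exactly what makes the accounting problematic.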
