Author Archives: Timothy Prickett Morgan

Telcos Dial Up OpenStack And Mainstream It

OpenStack was born at the nexus of high performance computing at NASA and the cloud at Rackspace Hosting, but it might be the phone companies of the world that help it go mainstream.

It is safe to say that most of us think our mobile data plans and voice services are far too expensive, and as it turns out, that is exactly how the mobile phone operators feel about the racks, rows, and datacenters full of essentially proprietary network equipment that comprises their wired and wireless networks. And so, all of

Telcos Dial Up OpenStack And Mainstream It was written by Timothy Prickett Morgan at The Next Platform.

The Once And Future IBM Platform

More than anything else, over its long history in the computing business, IBM has been a platform company, and say what you will about the woes it has had through several phases of its history, one thing seems obvious: when Big Blue forgets this, it runs into trouble.

If you stare at its quarterly financial results long enough, you can still see that platform company looking back at you, even through the artificially dissected product groups the company has used for the past decade and the new ones that IBM is using starting in 2016.

It is important to

The Once And Future IBM Platform was written by Timothy Prickett Morgan at The Next Platform.

OpenStack Still Has A Place In The Stack

The IT industry likes drama perhaps a bit more than is warranted by what actually goes on in the datacenters of the world. We are always spoiling for a good fight between rival technologies because the clash results in competition, which drives technologies forward and prices down.

Ultimately, organizations have to pick some kind of foundation for their modern infrastructure, and OpenStack, the cloud controller spawned from NASA and Rackspace Hosting nearly six years ago, is a growing and vibrant community that, despite the advent of Docker containers and the rise of Mesos and Kubernetes as an alternative substrate for

OpenStack Still Has A Place In The Stack was written by Timothy Prickett Morgan at The Next Platform.

First Steps In The Program Model For Persistent Memory

In the previous article, we left off with the basic storage model: an object’s changes first exist in the processor’s cache, then age into volatile DRAM memory, often with those changes first logged synchronously to I/O-based persistent storage, and the changed object proper later copied from volatile memory into persistent storage. That has been the model for what seems like forever.

With variations, that can be the storage model for Hewlett-Packard Enterprise’s The Machine as well. Since The Machine has a separate class of volatile DRAM memory along with rapidly-accessible, byte-addressable persistent memory accessible globally, the

First Steps In The Program Model For Persistent Memory was written by Timothy Prickett Morgan at The Next Platform.

Nvidia’s Tesla P100 Steals Machine Learning From The CPU

Pattern analytics, deep learning, and machine learning have fueled a rapid rise in interest in GPU computing, on top of GPU computing’s established applications in high performance computing (HPC) and cloud-based data analytics.

As a high profile example, Facebook recently contributed its “Big Sur” design to the Open Compute Project (OCP), for use specifically in training neural networks and implementing artificial intelligence (AI) at scale. Facebook’s announcement of Big Sur says “Big Sur was built with the Nvidia Tesla M40 in mind but is qualified to support a wide range of PCI-e cards,” pointing out how pervasive Nvidia’s Tesla

Nvidia’s Tesla P100 Steals Machine Learning From The CPU was written by Timothy Prickett Morgan at The Next Platform.

Intel Owns The Server, Wants To Own The Rack

Chip maker Intel has been getting a lot of grief in recent days about missing the boat on putting chips in Apple’s iPhone when the product was announced back in 2007, and then subsequently also losing out on the opportunity to have Intel Inside the Apple iPad tablet that came out three years later. Say what you will, but the folks who have been running Intel’s Data Center Group have not missed any boats, but rather have built a warship.

As the traditional PC client business continues to erode, the part of the company that is dedicated to the

Intel Owns The Server, Wants To Own The Rack was written by Timothy Prickett Morgan at The Next Platform.

Programming For Persistent Memory Takes Persistence

Some techies are capable of writing programs in assembler, but all will agree that they are glad they don’t need to. More are fully capable of writing programs that manage their own heap memory, but are often enough pleased that the programming model of some languages allows them to avoid it. Since nearly the beginning of time, computer scientists have been creating simplifying abstractions that allow programmers to avoid managing the hardware’s quirks, to write code more rapidly, and even to enhance its maintainability.

And so it is with a relatively new, and indeed both

Programming For Persistent Memory Takes Persistence was written by Timothy Prickett Morgan at The Next Platform.

Drilling Down Into Nvidia’s “Pascal” GPU

Nvidia made a lot of big bets to bring its “Pascal” GP100 GPU to market and its first implementation of the GPU is aimed at its Tesla P100 accelerator for radically improving the performance of massively parallel workloads like scientific simulations and machine learning algorithms. The Pascal GPU is a beast, in all of the good senses of that word, and warrants a deep dive that was not possible on announcement day back at the GPU Technology Conference.

We did an overview of the chip back on announcement day, as well as talking about Nvidia’s own DGX-1 hybrid server

Drilling Down Into Nvidia’s “Pascal” GPU was written by Timothy Prickett Morgan at The Next Platform.

Power9 Will Bring Competition To Datacenter Compute

The Power9 processor that IBM is working on in conjunction with hyperscale and HPC customers could be the most important chip that Big Blue has brought to market since the Power4 processor back in 2001. That was another world, back then, with the dot-com boom having gone bust and enterprises looking for a less expensive but beefy NUMA server on which to run big databases and transaction processing systems.

The world that the Power9 processor will enter in 2017 is radically changed. A two-socket system has more compute, memory, and I/O capacity and bandwidth than those behemoths from a decade

Power9 Will Bring Competition To Datacenter Compute was written by Timothy Prickett Morgan at The Next Platform.

Oracle Steps With Moore’s Law To Rev Exadata Database Machines

Absorbing a collection of new processing, memory, storage, and networking technologies in a fast fashion on a complex system is no easy task for any system maker or end user creating their own infrastructure, and it takes time even for a big company like Oracle to gather all the pieces and weld them together seamlessly. But that time is getting shorter, and Oracle has absorbed a slew of new tech in its latest Exadata X6-2 platforms.

The updated machines are available only weeks after Intel launched its new “Broadwell” Xeon E5 v4 processors, which have been shipping

Oracle Steps With Moore’s Law To Rev Exadata Database Machines was written by Timothy Prickett Morgan at The Next Platform.

Hyperscalers And Clouds On The Xeon Bleeding Edge

Hyperscalers and cloud builders are different in a lot of ways from the typical enterprise IT shop. Perhaps the most profound one that has emerged in recent years is something that used to be only possible in the realm of the most exotic supercomputing centers, and that is this: They get what they want, and they get it ahead of everyone else.

Back in the day, before the rise of mass customization of the Xeon product line by chip maker Intel, it was HPC customers who were often trotted out as early adopters of a new processor technology and usually

Hyperscalers And Clouds On The Xeon Bleeding Edge was written by Timothy Prickett Morgan at The Next Platform.

Concept Data Tower Scrapes The Sky For Efficiency

We talk about architecture all the time when discussing systems, but when it comes to the datacenters that house these machines, the shells are about as exciting as a monstrous warehouse for a massive distribution operation. Considering that data and processing are the most profitable products on the planet these days, maybe this is fitting, but another way to look at it is that datacenters should not only be marvels of engineering but also inspiring structures like other kinds of buildings.

Every year, the architecture magazine Evolo hosts a skyscraper design competition so that architects can let their

Concept Data Tower Scrapes The Sky For Efficiency was written by Timothy Prickett Morgan at The Next Platform.

Micron Enlists Allies For Datacenter Flash Assault

If component suppliers want to win deals at hyperscalers and cloud builders, they have to be proactive. They can’t just sit around and wait for the OEMs and ODMs to pick their stuff like a popularity contest. They have to engineer great products with performance and then do what it takes on price, power, and packaging to win deals.

This is why memory maker Micron Technology is ramping up its efforts to get its DRAM and flash products into the systems that these companies buy, and why it is also creating a set of “architected solutions” focused on storage that

Micron Enlists Allies For Datacenter Flash Assault was written by Timothy Prickett Morgan at The Next Platform.

Spark On Superdome X Previews In-Memory On The Machine

As readers of The Next Platform are well aware, Hewlett Packard Enterprise is staking a lot of the future of its systems business on The Machine, which embodies the evolving concepts for disaggregated and composable systems that are heavy on persistent storage that sometimes functions like shared memory, on various kinds of compute, and on the interconnects between the two.

To get a sense of how The Machine might do on in-memory workloads that normally run on clusters that have their memory distributed, researchers at HPE Labs have fired up the Spark in-memory framework on a Superdome X shared

Spark On Superdome X Previews In-Memory On The Machine was written by Timothy Prickett Morgan at The Next Platform.

Nvidia Not Sunsetting Tesla Kepler And Maxwell GPUs Just Yet

Switch chips have very long technical and economic lives, considerably longer than those of a Xeon processor used in a server – something on the order of seven or eight years compared with three or four. As it turns out, the various GPUs used in Nvidia’s Tesla accelerators look like they, too, will have very long technical and economic lives.

Even after a new technology is introduced, sometimes the old one can be had at a much cheaper price and therefore continues to be a good price/performer even after it has been presumably obsoleted by an improved product. Its

Nvidia Not Sunsetting Tesla Kepler And Maxwell GPUs Just Yet was written by Timothy Prickett Morgan at The Next Platform.

IBM Unfolds Power Chip Roadmap Out Past 2020

There are two things that underdogs have to do to take a big bite out of a market. First, they have to tell prospective customers precisely what the plan is to develop future products, and then they have to deliver on that roadmap. The OpenPower collective behind the Power chip did the first thing at its eponymous summit in San Jose this week, and now it is up to the OpenPower partners to do the hard work of finishing the second.

Getting a chip as complex as a server processor into the field, along with its chipsets and memory

IBM Unfolds Power Chip Roadmap Out Past 2020 was written by Timothy Prickett Morgan at The Next Platform.
