Author Archives: Timothy Prickett Morgan

When No One Can Make Money In Systems

Making money in the information technology market has always been a challenge, but it is getting increasingly difficult as the tumultuous change in how companies consume compute, storage, and networking rips through all aspects of this $3 trillion market.

It is tough to know exactly what to do, and we see companies chasing the hot new things, doing acquisitions to bolster their positions, and selling off legacy businesses to generate the cash to do the deals and to keep Wall Street at bay. Companies like IBM, Dell, HPE, and Lenovo have sold things off and bought other things to try

When No One Can Make Money In Systems was written by Timothy Prickett Morgan at The Next Platform.

Memory-Like Storage Means File Systems Must Change

The term software-defined storage is in the new job title that Eric Barton has at DataDirect Networks, and he is a bit amused by this. As one of the creators of the early parallel file systems for supercomputers, and as one of the people who took the Lustre file system from a handful of supercomputing centers to one of the two main data management platforms for high performance computing, Barton has, by a certain way of looking at it, always been doing software-defined storage.

The world has just caught up with the idea.

Now Barton, who is leaving Intel in the

Memory-Like Storage Means File Systems Must Change was written by Timothy Prickett Morgan at The Next Platform.

The Last Itanium, At Long Last

In a world of survival of the fittest coupled with mutations, something always has to be the last of its kind. And so it is with the “Kittson” Itanium 9700 processors, which Intel quietly released earlier this month and which will mostly see action in the last of the Integrity line of midrange and high-end systems from Hewlett Packard Enterprise.

The Itanium line has a complex history, perhaps fitting for a computing architecture that was evolving from the 32-bit X86 architecture inside of Intel and that was taken in a much more experimental and bold direction when the aspiring server

The Last Itanium, At Long Last was written by Timothy Prickett Morgan at The Next Platform.

Under The Hood Of Google’s TPU2 Machine Learning Clusters

As we previously reported, Google unveiled its second-generation Tensor Processing Unit (TPU2) at Google I/O last week. Google calls this new generation “Google Cloud TPUs”, but offered very little information about the TPU2 chip and the systems that use it beyond a few colorful photos. Pictures do say more than words, so in this article we will dig into the photos and provide our thoughts based on the pictures and on the few bits of detail Google did provide.

To start with, it is unlikely that Google will sell TPU-based chips, boards, or servers – TPU2

Under The Hood Of Google’s TPU2 Machine Learning Clusters was written by Timothy Prickett Morgan at The Next Platform.

Big Bang For The Buck Jump With Volta DGX-1

One of the reasons why Nvidia has been able to quadruple revenues for its Tesla accelerators in recent quarters is that it doesn’t just sell raw accelerators and PCI-Express cards, but has become a system vendor in its own right through its DGX-1 server line. The company has also engineered new adapter cards specifically aimed at hyperscalers who want to crank up the performance on their machine learning inference workloads with a cheaper and cooler Volta GPU.

Nvidia does not break out revenues for the DGX-1 line separately from other Tesla and GRID accelerator product sales, but we

Big Bang For The Buck Jump With Volta DGX-1 was written by Timothy Prickett Morgan at The Next Platform.

AMD Disrupts The Two-Socket Server Status Quo

It is funny to think that Advanced Micro Devices has been around almost as long as the IBM System/360 mainframe, and that it was founded the same year that the United States landed people on the moon. The company has gone through many gut-wrenching transformations, adapting to changing markets. Like IBM and Apple, just to name two, AMD has had its share of disappointments and near-death experiences, but unlike Sun Microsystems, Silicon Graphics, Sequent Computer, Data General, Tandem Computer, and Digital Equipment, it has managed to stay independent and live to fight another day.

AMD wants a second chance in the datacenter,

AMD Disrupts The Two-Socket Server Status Quo was written by Timothy Prickett Morgan at The Next Platform.

The Embiggening Bite That GPUs Take Out Of Datacenter Compute

We are still chewing through all of the announcements and talk at the GPU Technology Conference that Nvidia hosted in its San Jose stomping grounds last week, and as such we are thinking about the much bigger role that graphics processors are playing in datacenter compute – a realm that has seen five decades of dominance by central processors of one form or another.

That is how CPUs got their name, after all. And perhaps this is a good time to remind everyone that systems used to be a collection of different kinds of compute, and that is why the

The Embiggening Bite That GPUs Take Out Of Datacenter Compute was written by Timothy Prickett Morgan at The Next Platform.

Nvidia’s Tesla Volta GPU Is The Beast Of The Datacenter

Graphics chip maker Nvidia has spent more than a year carefully and methodically transforming its GPUs into the compute engines for modern HPC, machine learning, and database workloads. Doing so has meant staying on the cutting edge of many technologies, and with the much-anticipated but not very long-awaited “Volta” GV100 GPUs, the company is once again skating on the bleeding edge of several different technologies.

This aggressive strategy allows Nvidia to push the performance envelope on GPUs and therefore maintain its lead over CPUs for the parallel workloads it is targeting while at the same time setting up

Nvidia’s Tesla Volta GPU Is The Beast Of The Datacenter was written by Timothy Prickett Morgan at The Next Platform.

GOAI: Keeping Databases, Analytics, And Machine Learning All On The GPU

Moving data is the biggest problem in computing, and probably has been since there was data processing, if we really want to be honest about it. Because of the cost in bandwidth, latency, energy, and iron of doing multiple stages of processing in a modern application, one that might include a database as well as machine learning algorithms run against data stored there and against data from other sources, you want to do all of your computation from the memory of one set of devices.

That, in a nutshell, is what the GPU Open Analytics Initiative is laying the

GOAI: Keeping Databases, Analytics, And Machine Learning All On The GPU was written by Timothy Prickett Morgan at The Next Platform.

Impatient For Fabrics, Micron Forges Its Own NVM-Express Arrays

There may be a shortage in the supply of DRAM main memory and NAND flash memory that is having an adverse effect on the server and storage markets, but there is no shortage of vendors who are trying to push the envelope on clustered storage using a mix of these memories and others such as the impending 3D XPoint.

Micron Technology, which makes and sells all three of these types of memories, is so impatient with the rate of technological advancement in clustered flash arrays based on the NVM-Express protocol that it decided to engineer and launch its own product

Impatient For Fabrics, Micron Forges Its Own NVM-Express Arrays was written by Timothy Prickett Morgan at The Next Platform.

Crunching Machine Learning And Databases Together On GPUs

While it is always best to have the right tool for the job, it is better still if a tool can be used by multiple jobs and therefore have its utilization be higher than it might otherwise be. This is one of the reasons why general purpose, X86-based computing took over the datacenter. Economies of scale trumped the efficiency that can come from limited scope or just leaving legacy applications alone in place on alternate platforms.

The idea of offloading computational tasks from CPUs to GPU accelerators took off in academia a little more than a decade ago, and

Crunching Machine Learning And Databases Together On GPUs was written by Timothy Prickett Morgan at The Next Platform.

HPC System Delays Stall InfiniBand

Enterprise spending on servers was a bit soft in the first quarter, as evidenced by the financial results posted by Intel and by its sometime rival IBM, and the hyperscale and HPC markets, at least when it comes to networking, were a bit soft as well, according to high-end network chip and equipment maker Mellanox Technologies.

In the first quarter ended March 31, Mellanox had a 4.1 percent revenue decline, to $188.7 million, and because of higher research and development costs, presumably associated with the rollout of 200 Gb/sec Quantum InfiniBand technology (which the company has talked about) and
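To put a number on that comparison: a 4.1 percent decline to $188.7 million implies revenue of roughly $188.7 million / (1 − 0.041), or about $196.8 million, in the quarter it is being measured against.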

HPC System Delays Stall InfiniBand was written by Timothy Prickett Morgan at The Next Platform.

Rambus, Microsoft Put DRAM Into Deep Freeze To Boost Performance

Energy efficiency and operating costs for systems are as important as raw performance in today’s datacenters. Everyone from the largest hyperscalers and high performance computing centers to large enterprises that are sometimes like them are trying to squeeze as much performance as they can from their infrastructure while reining in power consumption and the costs associated with keeping it all from overheating.

Throw in the slowing down of Moore’s Law and new emerging workloads like data analytics and machine learning, and the challenge to these organizations becomes apparent.

In response, organizations on the cutting edge have embraced accelerators like GPUs and

Rambus, Microsoft Put DRAM Into Deep Freeze To Boost Performance was written by Timothy Prickett Morgan at The Next Platform.

Intel Melds Xeon E5 And E7 With Skylake

We have been saying for the past two years that the impending “Skylake” Xeon processors represented the biggest platform architectural change in the Xeon processor business at Intel since the transformational “Nehalem” Xeon 5500s that debuted back in March 2009 into the gaping maw of the Great Recession.

There is no global recession breathing down the IT sector’s neck like a hungry wolf here in 2017, eight years and seven chip generations later. But Intel is facing competitive pressures from AMD’s Naples Opterons, IBM’s Power9, and the ARM collective (mainly Cavium and Qualcomm at this point, but Applied Micro is

Intel Melds Xeon E5 And E7 With Skylake was written by Timothy Prickett Morgan at The Next Platform.

OpenMP: From Parallel Loops To Exaflops

This fall will mark twenty years since the publication of the v1.0 specification of OpenMP Fortran. From early loop parallelism to heterogeneous, exascale systems, OpenMP has apparently weathered well the vicissitudes and tumultuous changes of the computer industry over the past two decades and appears to be positioned to address the needs of our exascale future.
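For readers who have never written it, the kind of loop parallelism that the early specification standardized looks something like the minimal C sketch below (the v1.0 specification covered Fortran, with C/C++ following soon after); this is an illustration of the style, not code from the article, and it compiles with something like gcc -fopenmp:

    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double a[N], b[N];
        double sum = 0.0;

        /* Fill the input vectors with something to chew on. */
        for (int i = 0; i < N; i++) {
            a[i] = (double)i;
            b[i] = 2.0;
        }

        /* Classic OpenMP worksharing: the iterations of this loop are
           split across a team of threads, and the reduction clause
           combines the per-thread partial sums at the end. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            sum += a[i] * b[i];
        }

        printf("dot product = %f\n", sum);
        return 0;
    }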

In the 1990s when the OpenMP specification was first created, memory was faster than the processors that performed the computation. This is the exact opposite of today’s systems where memory is the key bottleneck and the HPC community is rapidly adopting faster memory

OpenMP: From Parallel Loops To Exaflops was written by Timothy Prickett Morgan at The Next Platform.

Lessons Learned From Facebook’s Split Network Backbone

Distributed applications, whether they are containerized or not, have a lot of benefits when it comes to modularity and scale. But in a world of feature creep on all applications, whether they are internally facing ones running a business or hyperscale consumer applications like Google’s search engine or Facebook’s social media network, these distributed applications put a huge strain on the network.

This, more than any other factor, is why network costs are rising faster than any other aspect of the datacenter. Gone are the days when everything was done in three or four tiers, with a Web server like

Lessons Learned From Facebook’s Split Network Backbone was written by Timothy Prickett Morgan at The Next Platform.

The Datacenter Does Not Revolve Around AWS, Despite Its Gravity

If the public cloud computing market were our solar system, then Amazon Web Services would be Jupiter and Saturn together, and the remaining five fast-growing big clouds would be the inner planets: Mercury, Venus, Earth, Mars, and that pile of rocks that used to be a planet. Mixed in with them are those clouds that are finding growth a bit more challenging: think Uranus and Neptune, and maybe even Pluto if you still want to count it as a planet.

This analogy came to us in the wake of Amazon’s reporting of its financial results for the first quarter of

The Datacenter Does Not Revolve Around AWS, Despite Its Gravity was written by Timothy Prickett Morgan at The Next Platform.

Intel Moves Xeons To The Moore’s Law Leading Edge

In the wake of the Technology and Manufacturing Day event that Intel hosted last month, we pondered this week what effect the tick-tock-clock method of advancing chip designs and manufacturing processes might have on the Xeon server chip line from Intel, and we suggested that it might close the gaps between the Core client chips and the Xeons. It turns out that Intel is not only going to close the gaps, but reverse them and put the Xeons on the leading edge.

To be precise, Brian Krzanich, Intel’s chief executive officer, and Robert Swan, the company’s chief financial

Intel Moves Xeons To The Moore’s Law Leading Edge was written by Timothy Prickett Morgan at The Next Platform.

Mapping Intel’s Tick Tock Clock Onto Xeon Processors

Chip maker Intel takes Moore’s Law very seriously, and not just because one of its founders observed the consistent rate at which the price of a transistor scales down with each tweak in manufacturing. Moore’s Law is not just personal with Intel. It is business because Intel is a chip maker first and a chip designer second, and that is how it has been able to take over the desktops and datacenters of the world.
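Loosely formalized, and hedging that this describes a business cadence rather than a law of physics: if each tweak in manufacturing arrives roughly every two years and halves the price of a transistor, then after n process generations the price per transistor is about c(n) = c(0) / 2^n, which is the consistent scaling the observation refers to.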

Last month, the top brass in Intel’s chip manufacturing operations vigorously defended Moore’s Law, contending that not only was the two-year cadence of

Mapping Intel’s Tick Tock Clock Onto Xeon Processors was written by Timothy Prickett Morgan at The Next Platform.

Pushing A Trillion Row Database With GPU Acceleration

There is an arms race in the nascent market for GPU-accelerated databases, and the winner will be the one that can scale to the largest datasets while also providing the most compatibility with industry-standard SQL.

MapD and Kinetica are the leaders in this market, but BlazingDB, Blazegraph, and PG-Strom are also in the field, and we think it won’t be long before the commercial relational database makers start adding GPU acceleration to their products, much as they have followed SAP HANA with in-memory processing.

MapD is newer than Kinetica, and up until now it has been content to allow clustering

Pushing A Trillion Row Database With GPU Acceleration was written by Timothy Prickett Morgan at The Next Platform.
