Author Archives: Timothy Prickett Morgan

Counting The Cost Of Scaling HPC Applications

It is very rare indeed to get benchmark data on HPC applications that shows how they scale over a representative number of nodes, and it is almost never possible to get cost figures that allow price/performance comparisons between clusters of different physical sizes and the added throughput that more scale brings.

Counting The Cost Of Scaling HPC Applications was written by Timothy Prickett Morgan.

Nvidia Makes Arm A Peer To X86 And Power For GPU Acceleration

Creating the Tesla GPU compute platform has taken Nvidia the better part of a decade and a half, and it has culminated in a software stack comprising various HPC and AI frameworks, the CUDA parallel programming environment, compilers from Nvidia’s PGI division with their OpenACC extensions as well as open source GCC compilers, and various other tools that together account for tens of millions of lines of code and tens of thousands of individual APIs.

Intel Finally Serious About Switching With Barefoot Networks Buy

In one fell swoop, Intel has finally filled a giant hole in its switching product line by acquiring upstart Barefoot Networks, the creator of the P4 programming language for networking devices and the “Tofino” family of Ethernet switch ASICs that make use of it.

Future Kubernetes Will Mimic What Facebook Already Does

If you want to see what the future of the Kubernetes container management system will look like, then the closed source, homegrown Tupperware container control system that Facebook has been using and evolving since 2011 – before Docker containers and Kubernetes were around – might be a good place to find inspiration.

Where Does Nvidia Go In The Datacenter From Here?

We have to admit that it is often a lot more fun watching an upstart carve out whole new slices of business in the datacenter, or create them out of what appears to be thin air, than it is to watch that same company respond to intense competitive pressures and somehow manage to keep growing despite them.

Those Without Persistent Memory Are Fated To Repeat It

We have published a number of stories lately that talk about the innovative uses of Intel’s 3D XPoint Optane persistent memory modules, which are a key component of the company’s “Cascade Lake” Xeon SP systems and which are also becoming a foundational technology in clustered storage based on NVM-Express over Fabrics interconnects from a number of storage upstarts.

DUG Sets Foundation For Exascale HPC Utility With Xeon Phi

While exascale systems, even at the single precision computational capability commonly used in the oil and gas industry, will cost on the order of $250 million, that cost pales in comparison to the capital outlay of drilling exploratory deep water wells, which can cost $100 million a pop.
