Author Archives: Timothy Prickett Morgan

Injecting Machine Learning And Bayesian Optimization Into HPC

No matter what kind of traditional HPC simulation and modeling system you have, no matter what kind of fancy new machine learning AI system you have, IBM has an appliance that it wants to sell you to help make these systems work better – and work better together if you are mixing HPC and AI.

Injecting Machine Learning And Bayesian Optimization Into HPC was written by Timothy Prickett Morgan at The Next Platform.

InfiniBand Is Still Setting The Network Pace For HPC And AI

If this is the middle of November, even during a global pandemic, this must be the SC20 supercomputing conference, and there must be a speed bump for the InfiniBand interconnect commonly used for HPC and AI that is either being previewed or actually shipping in systems.

InfiniBand Is Still Setting The Network Pace For HPC And AI was written by Timothy Prickett Morgan at The Next Platform.

The Many Facets Of Hybrid Supercomputing As Exascale Dawns

There may not be a lot of new systems on the November 2020 edition of the Top500 rankings of supercomputers, but there have been a bunch of upgrades and system tunings of recently added machines, expanding their performance, as well as a handful of new machines that are interesting in their own right.

The Many Facets Of Hybrid Supercomputing As Exascale Dawns was written by Timothy Prickett Morgan at The Next Platform.

AMD At A Tipping Point With Instinct MI100 GPU Accelerators

It is hard enough to chase one competitor. Imagine how hard it is to chase two different ones in different but complementary markets while at the same time those two competitors are thinking about fighting each other in those two different markets and thus bringing even more intense competitive pressure on both fronts.

AMD At A Tipping Point With Instinct MI100 GPU Accelerators was written by Timothy Prickett Morgan at The Next Platform.

Intel’s First Discrete Xe Server GPU Aimed At Hyperscalers

We have been waiting for years to see the first discrete Xe GPU from Intel that is aimed at the datacenter, and as it turns out, the first one is not the heavy compute engine we have been anticipating, but rather a souped-up version of the Iris Xe LP and Iris Xe Max LP graphics cards that were launched at the end of October, which themselves are essentially the GPU extracted from the hybrid CPU-GPU “Tiger Lake” Core processors for PC clients.

Intel’s First Discrete Xe Server GPU Aimed At Hyperscalers was written by Timothy Prickett Morgan at The Next Platform.

176 Steps Closer To The Mythical All-Flash Datacenter

We have nothing against disk drives. Seriously. And in fact, we are amazed at the amount of innovation that continues to go into the last electromechanical device still in use in computing, which from a commercial standpoint started out with the tabulating machines created by Herman Hollerith in 1884 and used to process the 1890 census in the United States, thus laying the foundation of International Business Machines.

176 Steps Closer To The Mythical All-Flash Datacenter was written by Timothy Prickett Morgan at The Next Platform.

Switching Back Into A Higher Gear

If you want to get a sense of what is happening at the high end of the Ethernet switch and routing market, Arista Networks, formerly an upstart and now one of the bigger vendors taking on the hegemony of Cisco Systems in the datacenter and now on the campus and at the edge, is probably the best bellwether there is.

Switching Back Into A Higher Gear was written by Timothy Prickett Morgan at The Next Platform.

Sometimes HPC Means Big Memory, Not Big Compute

Not every HPC or analytics workload – meaning an algorithmic solver and the data that it chews on – fits nicely in a 128 GB or 256 GB or even a 512 GB memory space, and sometimes the dataset is quite large and runs best in a larger memory space rather than being carved up into smaller pieces and distributed across nodes with the same amount of raw compute.

Sometimes HPC Means Big Memory, Not Big Compute was written by Timothy Prickett Morgan at The Next Platform.

AMD Girds For Compute War With Xilinx Deal

The rumors were right, and AMD president and chief executive officer Lisa Su is indeed printing out a tower of stock to acquire FPGA maker Xilinx for what amounts to about $35 billion and, as it turns out, she is relinquishing her position as president to Victor Peng, chief executive at Xilinx, to close the deal.

AMD Girds For Compute War With Xilinx Deal was written by Timothy Prickett Morgan at The Next Platform.
