Author Archives: Timothy Prickett Morgan
It would be ideal if we lived in a universe where it was possible to increase the capacity of compute, storage, and networking at the same pace so as to keep all three elements expanding in balance. The irony is that over the past two decades, when the industry needed networking to advance the most, Ethernet got a little stuck in the mud.
But Ethernet has pulled out of its boots, left them in the swamp, and is back to being barefoot on much more solid ground where it can run faster. The move from 10 Gb/sec …
Ethernet Getting Back On The Moore’s Law Track was written by Timothy Prickett Morgan at The Next Platform.
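As a rough, back-of-envelope illustration (our own, not taken from the article above): aggregate Ethernet speeds are built by ganging up SerDes lanes, so the shift from 10 Gb/sec lanes to 25 Gb/sec lanes is what put the roadmap back on a steeper curve.

```python
# Illustrative only: how common Ethernet speeds are assembled from lanes.
lane_configs = {
    "10GbE":  (1, 10),   # (lanes, Gb/sec per lane)
    "40GbE":  (4, 10),
    "25GbE":  (1, 25),
    "50GbE":  (2, 25),
    "100GbE": (4, 25),
}

for name, (lanes, rate) in lane_configs.items():
    print(f"{name}: {lanes} lane(s) x {rate} Gb/sec = {lanes * rate} Gb/sec")
```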
Virtual machines and virtual network functions, or VMs and VNFs for short, are the standard compute units in modern enterprise, cloud, and telecommunications datacenters. But varying VM and VNF resource needs as well as networking and security requirements often force IT departments to manage servers in separate silos, each with their own respective capabilities.
For example, some VMs or VNFs may require a moderate number of vCPU cores and lower I/O bandwidth, while VMs and VNFs associated with real-time voice and video, IoT, and telco applications require a moderate-to-high number of vCPU cores, rich networking services, and high I/O bandwidth, …
Using The Network To Break Down Server Silos was written by Timothy Prickett Morgan at The Next Platform.
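A toy sketch of the silo problem described above (our own assumption about how profiles get mapped, not any vendor's actual scheduler): hosts with high-bandwidth NICs and rich packet-processing services are a scarcer, costlier pool, so demanding VNF profiles land there while ordinary VMs go elsewhere.

```python
# Hypothetical illustration of why mixed VM/VNF profiles end up in silos.
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    vcpus: int
    io_gbps: int          # required network I/O bandwidth
    rich_networking: bool # real-time or telco-grade networking services

def pick_silo(p: Profile) -> str:
    # Only the demanding profiles justify the costlier telco-grade hosts.
    if p.io_gbps >= 25 or p.rich_networking:
        return "telco/real-time silo"
    return "general-purpose silo"

for p in (Profile("web VM", 4, 10, False), Profile("video VNF", 16, 40, True)):
    print(p.name, "->", pick_silo(p))
```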
With the network comprising as much as a quarter of the cost of a high performance computing system and being absolutely central to the performance of applications running on parallel systems, it is fair to say that the choice of network is at least as important as the choice of compute engine and storage hierarchy. That’s why we like to take a deep dive into the networking trends present in each iteration of the Top 500 supercomputer rankings as they come out.
It has been a long time since the Top 500 gave a snapshot of pure HPC centers that …
InfiniBand And Proprietary Networks Still Rule Real HPC was written by Timothy Prickett Morgan at The Next Platform.
Enterprises are purchasing storage by the truckload to support an explosion of data in the datacenter. IDC reports that in the first quarter of 2017, total capacity shipments were up 41.4 percent year-over-year and reached 50.1 exabytes of storage capacity shipped. As IT departments continue to increase their spending on capacity, few realize that their existing storage is a pile of gold that can be fully utilized once enterprises overcome the inefficiencies created by storage silos.
A metadata engine can virtualize the view of data for an application by separating the data (physical) path from the metadata (logical) path. This …
Fix Your NAS With Metadata was written by Timothy Prickett Morgan at The Next Platform.
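To make the logical/physical split above concrete, here is a minimal sketch (names and layout are illustrative, not any vendor's actual metadata engine): the application keeps a single logical namespace, and the metadata layer maps each logical path to whichever physical silo holds the data today.

```python
# Minimal sketch of separating the metadata (logical) path from the
# data (physical) path; purely illustrative.
class MetadataEngine:
    def __init__(self):
        self._map = {}   # logical path -> (storage silo, physical location)

    def place(self, logical_path, silo, physical_path):
        self._map[logical_path] = (silo, physical_path)

    def resolve(self, logical_path):
        # The application only ever sees the logical path; data I/O then
        # goes straight to whichever silo the file currently lives on.
        return self._map[logical_path]

engine = MetadataEngine()
engine.place("/projects/sim/run42.dat", "nas-tier2", "/vol7/objs/9f3c")
print(engine.resolve("/projects/sim/run42.dat"))
```

Because the application never sees the physical location, data can be migrated between silos without changing a single application path.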
Just by being the chief architect of IBM’s BlueGene massively parallel supercomputer, which was built as part of a protein folding simulation grand challenge effort undertaken by IBM in the late 1990s, Al Gara would be someone whom the HPC community would listen to whenever he spoke. But Gara is now an Intel Fellow and also chief exascale architect at Intel, which has emerged as the second dominant supplier of supercomputer architectures alongside Big Blue’s OpenPower partnership with founding members Nvidia, Mellanox Technologies, and Google.
It may seem ironic that Gara did not stay around IBM to help this …
Giving Out Grades For Exascale Efforts was written by Timothy Prickett Morgan at The Next Platform.
Markets are always changing. Sometimes information technology is replaced by a new thing, and sometimes it morphs from one thing to another so gradually that it just becomes computing or networking or storage as we know it. For instance, in the broadest sense, all infrastructure will be cloudy, even if it is bare metal machines or those using containers or heavier server virtualization. In a similar way, in the future all high performance computing may largely be a kind of artificial intelligence, bearing little resemblance to the crunch-heavy simulations we are used to.
It has taken two decades for cloud …
Casing The HPC Market Is Hard, And Getting Harder was written by Timothy Prickett Morgan at The Next Platform.
While there are plenty of things that the members of the high performance computing community do not agree on, there is a growing consensus that machine learning applications will, at least in some way, be part of the workflow at HPC centers that do traditional simulation and modeling.
Some HPC vendors think the HPC and AI systems have either already converged or will soon do so, and others think that the performance demands (both in terms of scale and in time to result) on both HPC and AI will necessitate radically different architectures and therefore distinct systems for these two workloads. IBM, …
Thinking Through The Cognitive HPC Nexus With Big Blue was written by Timothy Prickett Morgan at The Next Platform.
AMD has been absent from the X86 server market for so long that many of us have gotten into the habit of only speaking about the Xeon server space and how it relates to the relatively modest (in terms of market share, not in terms of architecture and capability) competition that Intel has faced in the past eight years.
Those days are over now that AMD has successfully got its first X86 server chip out the door with the launch of the “Naples” chip, the first in a line of processors that will carry the Epyc brand and, if all …
Competition Returns To X86 Servers In Epyc Fashion was written by Timothy Prickett Morgan at The Next Platform.
While AMD voluntarily exited the server processor arena in the wake of Intel’s onslaught with the “Nehalem” Xeon processors during the Great Recession, it never stopped innovating with its graphics processors and it kept enough of a hand in smaller processors used in consumer and selected embedded devices to start making money again in PCs and to take the game console business away from IBM’s Power chip division.
Now, after five long years of investing, AMD is poised to get its act together and to storm the glass house with a new line of server processors based on its Zen …
AMD Winds Up One-Two Compute Punch For Servers was written by Timothy Prickett Morgan at The Next Platform.
What goes around comes around. After fighting so hard to drive volume economics in the HPC arena with relatively inexpensive X86 clusters in the past twenty years, those economies of scale are running out of gas. That is why we are seeing an explosion in the diversity of compute, not just on the processor, but in adjunct computing elements that make a processor look smarter than it really is.
The desire of system architects to try everything because it is fun has to be counterbalanced by a desire to make systems that can be manufactured at a price that is …
Cray CTO On The Cambrian Compute Explosion was written by Timothy Prickett Morgan at The Next Platform.
There is a lot of change coming down the pike in the high performance computing arena, but it has not happened as yet and that is reflected in the current Top 500 rankings of supercomputers in the world. But the June 2017 list gives us a glimpse into the future, which we think will be as diverse and contentious from an architectural standpoint as in the past.
No one architecture is winning and taking all, and many different architectures are getting a piece of the budget action. This means HPC is a healthy and vibrant ecosystem and not, like enterprise …
HPC Poised For Big Changes, Top To Bottom was written by Timothy Prickett Morgan at The Next Platform.
Much has been made of the ability of The Machine, the system with the novel silicon photonics interconnect and massively scalable shared memory pool being developed by Hewlett Packard Enterprise, to already address more main memory at once across many compute elements than many big iron NUMA servers. With the latest prototype, which was unveiled last month, the company was able to address a whopping 160 TB of DDR4 memory.
This is a considerable feat, but HPE has the ability to significantly expand the memory addressability of the platform, using both standard DRAM memory and lower cost memories such …
The Memory Scalability At The Heart Of The Machine was written by Timothy Prickett Morgan at The Next Platform.
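A quick bit of arithmetic (ours, not HPE’s) puts the 160 TB figure in perspective: a flat, byte-addressable pool of that size needs only about 48 address bits, so a 64-bit address leaves room to grow the pool by orders of magnitude before addressing itself becomes the constraint.

```python
# Back-of-envelope: address bits needed for a flat 160 TB memory pool.
import math

pool_bytes = 160 * 2**40                        # 160 TB of DDR4
bits_needed = math.ceil(math.log2(pool_bytes))  # 48 bits
print(f"{pool_bytes:,} bytes -> {bits_needed} address bits")
```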
More databases and data stores and the applications that run atop them are moving to in-memory processing, and sometimes the memory capacity in a single big iron NUMA server isn’t enough and the latencies across a cluster of smaller nodes are too high for decent performance.
For example, server memory capacity tops out at 48 TB in the Superdome X server and at 64 TB in the UV 300 server from Hewlett Packard Enterprise using NUMA architectures. HPE’s latest iteration of The Machine packs 160 TB of shared memory capacity across its nodes, and has an early version of …
Clever RDMA Technique Delivers Distributed Memory Pooling was written by Timothy Prickett Morgan at The Next Platform.
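For intuition only, here is a toy model (our own, not the RDMA technique the article covers): a memory pool striped across nodes, where a read is either a local DRAM access or a fetch from a remote node’s memory at a much higher, but still cluster-friendly, latency. The latency figures are placeholders, not measurements.

```python
# Toy model of a distributed memory pool; latencies are illustrative only.
LOCAL_NS, REMOTE_NS = 100, 2_000

class PooledMemory:
    def __init__(self, node_id, bytes_per_node):
        self.node_id = node_id
        self.bytes_per_node = bytes_per_node

    def read(self, addr):
        owner = addr // self.bytes_per_node    # which node holds this address
        latency = LOCAL_NS if owner == self.node_id else REMOTE_NS
        return owner, latency

pool = PooledMemory(node_id=0, bytes_per_node=16 * 2**40)
for addr in (2**40, 40 * 2**40):
    owner, ns = pool.read(addr)
    print(f"addr {addr:#x}: node {owner}, ~{ns} ns")
```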
Hardware is, by its very nature, physical and therefore, unlike software or the virtual hardware and software routines encoded in FPGAs, is the one thing that cannot be easily changed. The dream of composable systems, which we have discussed in the past, has been swirling around in the heads of system architects for more than a decade, and we are without question getting closer to realizing the dream of making the components of systems, and the clusters that are created from them, programmable like software.
The hyperscalers, of course, have been on the bleeding edge of …
The Composable Systems Wave Is Rising was written by Timothy Prickett Morgan at The Next Platform.
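A minimal sketch of the composability idea (not any vendor’s actual API): a “server” is assembled on demand from disaggregated pools of cores, memory, and accelerators, and the parts go back to the pools when the job is done.

```python
# Illustrative composable infrastructure model; resource counts are made up.
pools = {"cpu_cores": 512, "dram_gb": 8192, "gpus": 32}

def compose(cores, dram_gb, gpus):
    req = {"cpu_cores": cores, "dram_gb": dram_gb, "gpus": gpus}
    if any(pools[k] < v for k, v in req.items()):
        raise RuntimeError("pool exhausted")
    for k, v in req.items():
        pools[k] -= v
    return req

def decompose(server):
    for k, v in server.items():
        pools[k] += v

node = compose(cores=64, dram_gb=1024, gpus=4)   # stand up a fat node
print("composed:", node, "remaining:", pools)
decompose(node)                                   # hand the parts back
```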
Well, it could have been a lot worse. About 5.6 percent worse, if you do the math.
As we here at The Next Platform have been anticipating for quite some time, with so many stars aligning here in 2017 and a slew of server processor and GPU coprocessor announcements and deliveries expected starting in the summer and rolling into the fall, there is indeed a slowdown in the server market and one that savvy customers might be able to take advantage of. But we thought those on the bleeding edge of performance were going to wait to see what Intel, …
One Hyperscaler Gets The Jump On Skylake, Everyone Else Sidelined was written by Timothy Prickett Morgan at The Next Platform.
Making money in the information technology market has always been a challenge, but it keeps getting increasingly difficult as the tumultuous change in how companies consume compute, storage, and networking rips through all aspects of this $3 trillion market.
It is tough to know exactly what to do, and we see companies chasing the hot new things, doing acquisitions to bolster their positions, and selling off legacy businesses to generate the cash to do the deals and to keep Wall Street at bay. Companies like IBM, Dell, HPE, and Lenovo have sold things off and bought other things to try …
When No One Can Make Money In Systems was written by Timothy Prickett Morgan at The Next Platform.
The term software-defined storage is in the new job title that Eric Barton has at DataDirect Networks, and he is a bit amused by this. As one of the creators of early parallel file systems for supercomputers and one of the people who took the Lustre file system from a handful of supercomputing centers to one of the two main data management platforms for high performance computing, Barton has, to a certain way of looking at it, always been doing software-defined storage.
The world has just caught up with the idea.
Now Barton, who is leaving Intel in the …
Memory-Like Storage Means File Systems Must Change was written by Timothy Prickett Morgan at The Next Platform.
In a world of survival of the fittest coupled with mutations, something always has to be the last of its kind. And so it is with the “Kittson” Itanium 9700 processors, which Intel quietly released earlier this month and which will mostly see action in the last of the Integrity line of midrange and high-end systems from Hewlett Packard Enterprise.
The Itanium line has a complex history, perhaps fitting for a computing architecture that was evolving from the 32-bit X86 architecture inside of Intel and that was taken in a much more experimental and bold direction when the aspiring server …
The Last Itanium, At Long Last was written by Timothy Prickett Morgan at The Next Platform.
As we previously reported, Google unveiled its second-generation Tensor Processing Unit (TPU2) at Google I/O last week. Google calls this new generation “Google Cloud TPUs”, but provided very little information about the TPU2 chip and the systems that use it beyond a few colorful photos. Pictures do say more than words, so in this article we will dig into the photos and provide our thoughts based on the pictures and the few bits of detail Google did provide.
To start with, it is unlikely that Google will sell TPU-based chips, boards, or servers – TPU2 …
Under The Hood Of Google’s TPU2 Machine Learning Clusters was written by Timothy Prickett Morgan at The Next Platform.
One of the reasons why Nvidia has been able to quadruple revenues for its Tesla accelerators in recent quarters is that it doesn’t just sell raw accelerators and PCI-Express cards, but has become a system vendor in its own right through its DGX-1 server line. The company has also engineered new adapter cards specifically aimed at hyperscalers who want to crank up the performance on their machine learning inference workloads with a cheaper and cooler Volta GPU.
Nvidia does not break out revenues for the DGX-1 line separately from other Tesla and GRID accelerator product sales, but we …
Big Bang For The Buck Jump With Volta DGX-1 was written by Timothy Prickett Morgan at The Next Platform.