The two-socket Xeon server has been the default workhorse machine in the datacenter for so long and to such a great extent that using anything else almost looks aberrant. But there are occasions where a fatter machine makes sense based on the applications under consideration and the specific economics of the hardware and software supporting those applications.
All things being equal, of course companies would want to buy the most powerful machines they can, and indeed, Intel has said time and time again that customers are continuing to buy up the Xeon stack within the Xeon D, Xeon E5, and …
Making The Case For Big Xeon Iron was written by Timothy Prickett Morgan at The Next Platform.
The concept of composable or disaggregated infrastructure is nothing new, but with advances approaching on both the software and network sides (photonics in particular), an old idea might be infused with new life.
Several vendors have already taken a disaggregated architecture approach at the storage, network, and system level. Cisco Systems’ now-defunct UCS M Series is one example, and one can consider Hewlett Packard Enterprise’s The Machine and its Project Synergy as two contemporary others; DriveScale, which we covered back in May, is possibly another. But thus far, none of …
The (Second) Coming of Composable Systems was written by Nicole Hemsoth at The Next Platform.
Call it a phase that companies will have to go through to get to the promised land of the public cloud. Call it a temporary inevitability, as Microsoft does. Call it stupid if you want to be ungenerous to IT shops used to controlling their own infrastructure. Call it what you will, but it sure does look like all of the big public clouds are going to have to figure out how to offer private cloud versions of their public cloud infrastructure, and Amazon Web Services will be no exception if it hopes to capture dominant market share as it …
The Inevitability Of Private Public Clouds was written by Nicole Hemsoth at The Next Platform.
With this summer’s announcement of China’s dramatic shattering of top supercomputing performance numbers using ten million relatively simple cores, there is a slight shift in how some are considering the future of the world’s fastest, largest systems.
While one approach, which we will see with the pre-exascale machines at the national labs in the United States, is to build complex systems based on sophisticated cores (with a focus on balance in terms of memory), the Chinese approach with the top-ranked Sunway TaihuLight machine, which is based on lightweight, simple, and cheap components used in volume, has …
Changing the Exascale Efficiency Narrative at Memory Start Point was written by Nicole Hemsoth at The Next Platform.
Cannibalize your own products or someone else will do it for you, as the old adage goes.
And so it is that Amazon Web Services, the largest provider of infrastructure services available on the public cloud, has been methodically building up a set of data and processing services that will allow customers to run functions against streams or lakes of data without ever setting up a server as we know it.
Just saying the words makes us a little woozy, with systems being the very foundation of the computing platforms that everyone deploys today to do the data processing that …
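The serverless model sketched above can be made concrete in a few lines. The handler signature and record shape below are illustrative assumptions, not AWS’s actual Lambda API; the point is that the programmer writes only the function, and the provider invokes it against batches of stream records, with no server to provision:

```python
import json

def handler(event, context=None):
    # Hypothetical stream-processing function: sum a numeric field
    # across the batch of records the platform hands us.
    total = sum(json.loads(r["data"])["value"] for r in event["records"])
    return {"count": len(event["records"]), "total": total}

# Usage: simulate one invocation locally with a fabricated event.
event = {"records": [{"data": json.dumps({"value": v})} for v in (1, 2, 3)]}
print(handler(event))
```

The design point is that scaling, scheduling, and the machines underneath are the provider’s problem; the customer ships only the function body.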
First, Kill All The Servers was written by Timothy Prickett Morgan at The Next Platform.
The battle between the Mesos and Kubernetes tools for managing applications on modern clusters continues to heat up, with the former reaching its milestone 1.0 with a “universal containerizer” feature that supports native Docker container formats and a shiny new API stack that is a lot more friendly and flexible than the manner in which APIs are implemented in systems management software these days.
Ultimately, something has to be in control of the clusters and divvy up scarce resources to hungry applications, and there has been an epic battle shaping up between Mesos, Kubernetes, and OpenStack.
Mesos is the …
Mesos Reaches Milestone, Adds Native Docker was written by Timothy Prickett Morgan at The Next Platform.
Because Google is such a wildly successful company and a true innovator when it comes to IT platforms, and because we know more about its infrastructure at a theoretical level than what has been built by other hyperscalers and cloud providers, it is natural enough to think that the future of computing for the rest of us will look like what Google has already created for itself.
But ironically, only by going into the public cloud business would Google have had to change its infrastructure enough to actually make it look more like what large enterprises will need, and …
Google Fosters Another OpenStack Kubernetes Mashup was written by Timothy Prickett Morgan at The Next Platform.
Just because Intel doesn’t make a lot of noise about a product does not mean that it is not important for the company. Rather, it is a gauge of relative importance, and with such a broad and deep portfolio of chips, not everything can be cause for rolling out the red carpet.
So it is, as usual, with the Xeon E5-4600 processors, the variant of Intel’s server chips that has some of the scalability attributes of the high-end Xeon E7 family while being based on the workhorse Xeon E5 chip that is used in the vast majority of the servers …
Intel Broadwell Rollout Complete With Xeon E5 Quads was written by Timothy Prickett Morgan at The Next Platform.
We are hitting the limits of what can be crammed into DRAM in a number of application areas. As data volumes continue to mount, this limitation will be more keenly felt.
Accordingly, there has been a great deal of recent work looking to flash to create more efficient and capable systems that can accelerate deeply data-intensive problems, but few efforts have gotten enough traction to filter their way into big news items. With that said, there are some potential breakthroughs on this front coming out of MIT, where some rather impressive performance improvements have been snagged by taking a …
MIT Research Pushes Latency Limits with Distributed Flash was written by Nicole Hemsoth at The Next Platform.
Some technology trends get their start among enterprises, some from hyperscalers or HPC organizations. With flash storage, it was small businesses and hyperscalers who, for their own reasons, got the market growing, drawing in engineering talent and venture capital to give us the plethora of options available on the market today. Now, the big customers are ready to take the plunge.
It is no coincidence, then, that Pure Storage has architected systems that scale to multiple petabytes of capacity to meet their needs. Large enterprises with pressing demands for scaling in terms of both performance and capacity need a different …
A Thirst For Petabyte Scale All-Flash Arrays was written by Timothy Prickett Morgan at The Next Platform.
As we have seen with gathering force, ARM is making a solid bid for the datacenters of the future. However, a key requirement for the many server farms looking to exploit the energy efficiency benefits of 64-bit ARM is the ability to maintain performance in a virtualized environment.
Neither X86 nor ARM was built with virtualization in mind, which meant an uphill battle for Intel to build hardware support for hypervisors into its chips. VMware led the charge here beginning in the late 1990s, and over time, Intel made it its business to ensure an ability to support several different …
Are ARM Virtualization Woes Overstated? was written by Nicole Hemsoth at The Next Platform.
Would you rather have tens of thousands of customers who collectively spend a lot of money but their spending rises and falls with the gross domestic product, or a couple of dozen customers who spend almost as much on your product but who do so with massive checks that are not always predictable?
For Intel, this question is moot because it has both kinds of customers, and sometimes they both take a slight pause at exactly the same time. This is precisely what happened for Intel’s Data Center Group in the second quarter of 2016, as revenue growth slowed as …
Datacenters, Poised To Spend, Take A Breather From Intel was written by Timothy Prickett Morgan at The Next Platform.
Even though the Xeon processor has become the default engine for most kinds of compute in the datacenter, it is by no means the only option available to large enterprises, which can afford to indulge in different kinds of systems because they do not have to homogenize their infrastructure as hyperscalers must to keep their IT costs in check.
Sometimes, there are benefits to being smaller, and the ability to pick point solutions that are good for a specific job is one of them. This has been the hallmark of the high-end of computing since …
Stacking Up Oracle S7 Against Intel Xeon was written by Timothy Prickett Morgan at The Next Platform.
Big Blue does not participate in any meaningful sense in the booming market for infrastructure for the massive hyperscale and public cloud buildout that is transforming the face of the IT business. But the company is still a bellwether for computing at large enterprises, and its efforts to – once again – transform itself to address the very different needs that companies have compared to a decade or two ago are fascinating to contemplate.
In a very real way, the manner that IBM talks about its own business these days, which is very different from how it described the rising …
Systems Are The Table Stakes For IBM’s Evolution was written by Timothy Prickett Morgan at The Next Platform.
Dell recently unveiled its datacenter liquid cooling technology under the codename of Triton. Dell’s Extreme Scale Infrastructure team originally designed and developed Triton as a proof of concept for eBay, leveraging Dell’s existing rack-scale infrastructure.
In addition to liquid-cooled cold plates that directly contact the CPUs, Triton is also designed with embedded liquid-to-air heat exchangers that use 80 percent of their cooling capacity to remove the airborne heat of a large number of tightly packed, hot processor nodes. That leaves 20 percent of Triton’s cooling capacity as “overhead”. The overhead cooling capacity is then used to …
HPC Flows Into Hyperscale With Dell Triton was written by Timothy Prickett Morgan at The Next Platform.
For the first time, access to cutting-edge quantum computing is open to the public, free, over the web. On 3 May 2016, IBM launched its IBM Quantum Experience website, on which enthusiasts and professionals alike can program a prototype quantum processor chip within a simulation environment. Users, accepted over email by IBM, are given a straightforward ‘composer’ interface, much like a musical note chart, to run a program and test the output. In a little over a month, more than 25,000 users have signed up.
The quantum chip itself combines five superconducting quantum bits (qubits) operating at a cool minus 273.135 degrees …
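For a feel of what the composer’s programs compute, here is a minimal hand-rolled sketch in plain linear algebra (an illustration, not IBM’s composer or chip): applying a Hadamard gate to a qubit in state |0⟩ puts it into an equal superposition, so either measurement outcome has probability one half.

```python
import numpy as np

# Illustrative single-qubit simulation: a qubit is a two-element
# state vector, a gate is a 2x2 unitary matrix.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)  # Hadamard gate
state = np.array([1.0, 0.0])   # qubit initialized to |0>
state = H @ state              # equal superposition of |0> and |1>
probs = np.abs(state) ** 2     # Born rule: |amplitude|^2 gives probabilities
```

Running a circuit on the real five-qubit chip then amounts to repeating the measurement many times and comparing the observed frequencies to these ideal probabilities.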
IBM Quantum Computing Push Gathering Steam was written by Nicole Hemsoth at The Next Platform.
In the first part of this series on the proposed Cache Coherence Interconnect for Accelerators (CCIX) standard, we talked about the issues of cache coherence and the need to share memory across various kinds of compute elements in a system. In this second part, we will go deeper into the approach of providing memory coherence across CPUs and various kinds of accelerators that have their own local memory.
A local accelerator could potentially be anything. You want something to execute faster than what is possible in today’s generic processors, and so you throw specialized hardware at the problem. Still, …
Weaving Accelerators Into The Memory Complex was written by Timothy Prickett Morgan at The Next Platform.
With the increasing adoption of scale-out architectures and cloud computing, high performance interconnect (HPI) technologies have become a more critical part of IT systems. Today, HPI represents its own market segment at the upper echelons of the networking equipment market, supporting applications requiring extremely low latency and exceptionally high bandwidth.
As big data analytics, machine learning, and business optimization applications become more prevalent, HPI technologies are of increasing importance for enterprises as well. These most demanding enterprise applications, as well as high performance computing (HPC) applications, are generally addressed with scale-out clusters based on large numbers of ‘skinny’ nodes. The …
Ranking High Performance Interconnects was written by Timothy Prickett Morgan at The Next Platform.
MPI (Message Passing Interface) is the de facto standard distributed communications framework for scientific and commercial parallel distributed computing. The Intel MPI implementation is a core technology in the Intel Scalable System Framework that provides programmers a “drop-in” MPICH replacement library that can deliver the performance benefits of the Intel Omni-Path Architecture (Intel OPA) communications fabric plus high core count Intel Xeon and Intel Xeon Phi processors.
“Drop-in” literally means that programmers can set an environment variable to dynamically load the highly tuned and optimized Intel MPI library – no recompilation required! Of course, Intel’s MPI library supports other …
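The drop-in mechanism can be sketched as an environment-setup fragment. The install path below is an assumption, not a verified layout; the idea is that because Intel MPI is ABI-compatible with MPICH, pointing the dynamic loader at Intel’s library is, in principle, all an MPICH-linked binary needs:

```shell
# Hypothetical environment setup; the install path is an assumption.
# The MPICH-linked binary picks up Intel MPI through the dynamic
# loader, with no recompilation of the application itself.
export I_MPI_ROOT=/opt/intel/impi
export LD_LIBRARY_PATH="$I_MPI_ROOT/lib:$LD_LIBRARY_PATH"
mpirun -np 64 ./my_mpich_linked_app
```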
MPI and Scalable Distributed Machine Learning was written by Nicole Hemsoth at The Next Platform.
The past decade or so has seen some really phenomenal capacity growth and similarly remarkable software technology in support of distributed-memory systems. When work can be spread out across a lot of processors and/or a lot of disjointed memory, life has been good.
Pity, though, the poor application needing access to a lot of shared memory, or which could use the specialized, and therefore faster, resources of local accelerators. For such applications, distributed memory just does not cut it, and having to send work out to an IO-attached accelerator chews into much of what would otherwise be an accelerator’s advantages. With …
Drilling Into The CCIX Coherence Standard was written by Timothy Prickett Morgan at The Next Platform.