
Category Archives for "The Next Platform"

The Long Future Ahead For Intel Xeon Processors

The personal computer has been the driver of innovation in the IT sector in a lot of ways for the past three and a half decades. But perhaps one of the most important aspects of the PC business is that it gave chip maker Intel a means of perfecting each successive manufacturing technology at high volume before moving it over to more complex server processors, which would otherwise have lower yields and be more costly if they were the only chips Intel made with each process.

That PC volume is what gave Intel datacenter prowess, in essence, and it is

The Long Future Ahead For Intel Xeon Processors was written by Timothy Prickett Morgan at The Next Platform.

Mashing Up OpenStack With Hyperconverged Storage

While innovators in the HPC and hyperscale arenas usually have the talent, and often the desire, to get into the code for the tools they use to create their infrastructure, most enterprises want their software with a bit more fit and finish. If they can get it so that it is easy to operate and yet still in some ways open, they are willing to pay a decent amount of cash for commercial-grade support.

OpenStack has pretty much vanquished Eucalyptus, CloudStack, and a few other open source alternatives from the corporate datacenter, and it is giving

Mashing Up OpenStack With Hyperconverged Storage was written by Timothy Prickett Morgan at The Next Platform.

Storage Performance Models Highlight Burst Buffers at Scale

For storage at scale, particularly for large scientific computing centers, burst buffers have become a hot topic for both checkpoint and application performance reasons. Major vendors in high performance computing have climbed on board and we will be seeing a new crop of big machines featuring burst buffers this year to complement the few that have already emerged.

The what, why, and how of burst buffers can be found in our interview with the inventor of the concept, Gary Grider at Los Alamos National Lab. But for centers that have already captured the message and are looking for the

Storage Performance Models Highlight Burst Buffers at Scale was written by Nicole Hemsoth at The Next Platform.

NVLink Takes GPU Acceleration To The Next Level

One of the breakthrough moments in computing, which was compelled by necessity, was the advent of symmetric multiprocessor, or SMP, clustering to make two or more processors look and act, as far as the operating system and applications were concerned, as a single, more capacious processor. With NVLink clustering for GPUs and for lashing GPUs to CPUs, Nvidia is bringing something as transformative as SMP was for CPUs to GPU accelerators.

The NVLink interconnect has been in development for years, and is one of the “five miracles” that Nvidia CEO and co-founder Jen-Hsun Huang said at the GPU Technology Conference

NVLink Takes GPU Acceleration To The Next Level was written by Timothy Prickett Morgan at The Next Platform.

Google Pits Dataflow Against Spark

It is almost without question that search engine giant Google has the most sophisticated and scalable data analytics platform on the planet. The company has been on the leading edge of analytics and the infrastructure that supports it for a decade and a half and through its various services it has an enormous amount of data on which to chew and draw inferences to drive its businesses.

In the wake of the launch of Amazon Web Services a decade ago, Google came to the conclusion that what end users really needed was services to store and process data, not access

Google Pits Dataflow Against Spark was written by Timothy Prickett Morgan at The Next Platform.

How Big Is The Ecosystem Growing On Clouds?

Letting go of infrastructure is hard, but once you do – or perhaps more precisely, once you can – picking it back up again is a whole lot harder.

This is perhaps going to be the long-term lesson that cloud computing teaches the information technology industry as it moves back in time to data processing, as it used to be called, and back towards a rental model that got IBM sued by the US government in the nascent days of computing and compelled changes in Big Blue’s behavior that made it possible for others to create and sell systems against

How Big Is The Ecosystem Growing On Clouds? was written by Timothy Prickett Morgan at The Next Platform.

The Uptake Of Broadwell Xeons Begins

Servers are still monolithic pieces of machinery and the kind of disaggregated and composable infrastructure that will eventually be the norm in the datacenter is still many years into the future. And that means organizations have to time their upgrade cycles for their clusters in a manner that is mindful of processor launches from their vendors.

With Intel dominating the server these days, its Xeon processor release dates are arguably the dominant component of the overall server cycle. But even taking this into account, there is a steady beat of demand for more computing in the datacenters of the world

The Uptake Of Broadwell Xeons Begins was written by Timothy Prickett Morgan at The Next Platform.

Bolder Battle Lines Drawn At Extreme Scale Computing Borders

The race to reach exascale computing has been a point of national ambition in the United States, which is currently home to the highest number of top-performing supercomputers in the world. But without focused effort, investment, and development in research and the vendor communities, that position is set to come under fire from high performance computing (HPC) developments in China and Japan in particular, as well as elsewhere around the globe.

This national priority of moving toward ever-more capable supercomputers as a source of national competitiveness in manufacturing, healthcare, and other segments was the subject of a detailed report that

Bolder Battle Lines Drawn At Extreme Scale Computing Borders was written by Nicole Hemsoth at The Next Platform.

Are ARM Server Chips Xeon Class, And Does It Matter?

We have often opined that ARMv8 processors would struggle to meet Intel Xeon chips head-on until they got a few microarchitecture revisions under their belts to improve per-core performance and until they narrowed the manufacturing gap to 14 nanometers or 16 nanometers, or perhaps even 10 nanometers.

But it looks like ARM server chip maker Applied Micro is aiming to do just that with its X-Gene 3 chip, which we profiled last November when its architecture was announced. Applied Micro has reached for this lofty goal before, with its X-Gene 1 and X-Gene 2 processors, but it appears that

Are ARM Server Chips Xeon Class, And Does It Matter? was written by Timothy Prickett Morgan at The Next Platform.

Intel Does The Math On Broadwell Server Upgrades

The “Broadwell” generation of Xeon processors debuted a month ago, and now that the basic feeds and speeds are out there, customers are trying to figure out what to buy as they upgrade their systems and when to do it. This being a “tick” in the Intel chip cadence – meaning a shrink to smaller transistors instead of a “tock” rearchitecting of the Xeon core and the surrounding electronics – the Broadwell Xeons snap into existing “Haswell” systems and an upgrade is fairly straightforward for both system makers and their customers.
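The upgrade decision the article alludes to usually reduces to simple performance-per-dollar arithmetic. The sketch below is purely illustrative; the relative performance figures and prices are hypothetical placeholders, not Intel's actual Xeon pricing or benchmarks:

```python
def perf_per_dollar(relative_perf, price):
    """Crude figure of merit for comparing processor options."""
    return relative_perf / price

# Hypothetical numbers purely for illustration -- not actual Xeon data.
haswell = perf_per_dollar(relative_perf=1.00, price=2000.0)
broadwell = perf_per_dollar(relative_perf=1.20, price=2200.0)

# A drop-in "tick" upgrade pays off when the new chip delivers more
# performance per dollar than the part it replaces.
worth_upgrading = broadwell > haswell
```

Because Broadwell is socket-compatible with Haswell systems, the platform cost on the other side of this ratio is far smaller than in a full system refresh, which tilts the math toward upgrading.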

It all comes down to math about what to

Intel Does The Math On Broadwell Server Upgrades was written by Timothy Prickett Morgan at The Next Platform.

Exascale Timeline Pushed to 2023: What’s Missing in Supercomputing?

The roadmap to build and deploy an exascale computer has been extended over the last few years, and more than once.

Initially, the timeline marked 2018 as the year an exaflop-capable system would be on the floor, just one year after the CORAL pre-exascale machines are installed at three national labs in the U.S. That was later shifted to 2020, and now, according to a new report setting forth the initial hardware requirements for such a system, it is anywhere from 2023 to 2025. For those who follow high performance computing and the efforts toward exascale computing, this extended timeline might not come as

Exascale Timeline Pushed to 2023: What’s Missing in Supercomputing? was written by Nicole Hemsoth at The Next Platform.

Telcos Dial Up OpenStack And Mainstream It

OpenStack was born at the nexus of high performance computing at NASA and the cloud at Rackspace Hosting, but it might be the phone companies of the world that help it go mainstream.

It is safe to say that most of us probably think that our data plans and voice services for our mobile phones are way too expensive, and as it turns out, that is exactly how our mobile phone operators feel about the racks and rows and datacenters they have filled with the essentially proprietary network equipment that comprises their wired and wireless networks. And so, all of

Telcos Dial Up OpenStack And Mainstream It was written by Timothy Prickett Morgan at The Next Platform.

The Once And Future IBM Platform

More than anything else, over its long history in the computing business, IBM has been a platform company, and say what you will about the woes it has had through several phases of its history, what seems obvious is that when Big Blue forgets this, it runs into trouble.

If you stare at its quarterly financial results long enough, you can still see that platform company looking back at you, even through the artificially dissected product groups the company has used for the past decade and the new ones that IBM is using starting in 2016.

It is important to

The Once And Future IBM Platform was written by Timothy Prickett Morgan at The Next Platform.

Emphasis on Data Wrangling Brings Materials Science Up to Speed

In 2011, the United States launched a multi-agency effort to discover, develop, and produce advanced materials under the Materials Genome Initiative as part of an overall push to get out from under the 20-year process typically involved with researching a new material and bringing it to market.

At roughly the same time, the government was investing in other technology-driven initiatives to bolster competitiveness, with particular emphasis on manufacturing. While key areas in research were developing incredibly rapidly, it appeared that manufacturing, materials, and other more concrete physical problems were waiting on better solutions while genomics, nanotechnology, and other areas were

Emphasis on Data Wrangling Brings Materials Science Up to Speed was written by Nicole Hemsoth at The Next Platform.

OpenStack Still Has A Place In The Stack

The IT industry likes drama perhaps a bit more than is warranted by what actually goes on in the datacenters of the world. We are always spoiling for a good fight between rival technologies because the clash results in competition, which drives technologies forward and prices down.

Ultimately, organizations have to pick some kind of foundation for their modern infrastructure, and OpenStack, the cloud controller spawned from NASA and Rackspace Hosting nearly six years ago, is a growing and vibrant community that, despite the advent of Docker containers and the rise of Mesos and Kubernetes as an alternative substrate for

OpenStack Still Has A Place In The Stack was written by Timothy Prickett Morgan at The Next Platform.

Volkswagen’s “Cloud First” Approach to Infrastructure Decisions

As automotive companies like Ford begin to consider themselves technology companies, others, including Volkswagen Group, are taking a similar route. The company's new CEO, who took over in September 2015, has a background in computer science and began his career managing the IT department for the Audi division. Under his more technically minded leadership, IT teams within the company are taking major strides to shore up their infrastructure to support a 1.5 billion Euro R&D investment in forthcoming electric and connected cars.

Part of the roadmap for Volkswagen Group includes a shift to OpenStack to manage

Volkswagen’s “Cloud First” Approach to Infrastructure Decisions was written by Nicole Hemsoth at The Next Platform.

First Steps In The Program Model For Persistent Memory

In the previous article, we left off with the basic storage model: objects first exist as changed in the processor's cache, are then aged into volatile DRAM memory, often with changes first logged synchronously into I/O-based persistent storage, and with the objects' changes proper later copied from volatile memory into persistent storage. That has been the model for what seems like forever.

With variations, that can be the storage model for Hewlett-Packard Enterprise’s The Machine as well. Since The Machine has a separate class of volatile DRAM memory along with rapidly-accessible, byte-addressable persistent memory accessible globally, the

First Steps In The Program Model For Persistent Memory was written by Timothy Prickett Morgan at The Next Platform.

Baidu Eyes Deep Learning Strategy in Wake of New GPU Options

This month Nvidia bolstered its GPU strategy to stretch further into deep learning, high performance computing, and other markets, and while there are new options to consider, particularly for the machine learning set, it is useful to understand what these new chips and capabilities mean for users at scale. As one of the companies squarely in Nvidia's sights with its recent wave of deep learning libraries and GPUs, Baidu has keen insight into what might tip the architectural scales, and what might still stay the same, at least for now.

Back in December, when we talked to

Baidu Eyes Deep Learning Strategy in Wake of New GPU Options was written by Nicole Hemsoth at The Next Platform.

Nvidia’s Tesla P100 Steals Machine Learning From The CPU

Pattern analytics, deep learning, and machine learning have fueled a rapid rise in interest in GPU computing, alongside established GPU computing applications in high performance computing (HPC) and cloud-based data analytics.

As a high profile example, Facebook recently contributed its “Big Sur” design to the Open Compute Project (OCP), for use specifically in training neural networks and implementing artificial intelligence (AI) at scale. Facebook’s announcement of Big Sur says “Big Sur was built with the Nvidia Tesla M40 in mind but is qualified to support a wide range of PCI-e cards,” pointing out how pervasive Nvidia’s Tesla

Nvidia’s Tesla P100 Steals Machine Learning From The CPU was written by Timothy Prickett Morgan at The Next Platform.

Intel Owns The Server, Wants To Own The Rack

Chip maker Intel has been getting a lot of grief in recent days about missing the boat on putting chips in Apple's iPhone when the product was announced back in 2007, and then subsequently losing out on the opportunity to have Intel Inside the Apple iPad tablet that came out three years later. Say what you will, but the folks who have been running Intel's Data Center Group have not missed any boats, but rather have built a warship.

As the traditional PC client business continues to erode, the part of the company that is dedicated to the

Intel Owns The Server, Wants To Own The Rack was written by Timothy Prickett Morgan at The Next Platform.