
Category Archives for "The Next Platform"

The Serverless Revolution Will Make Us All Developers

At Build 2017, Microsoft’s annual and influential developer event, CEO Satya Nadella introduced the idea of the “intelligent cloud” and “intelligent edge.” This vision of software’s immediate future considers the plethora of smart devices – cell phones, appliances, home environment controls, business machinery and the like – that permeate and, in large part, orchestrate our daily lives.

We all know about the Internet of Things. Today, the ability to glean valuable business insights from seemingly mundane device telemetry is impressive. Consider the case of the connected cows. Researchers at a farm attached pedometers to dairy cows, largely to monitor

The Serverless Revolution Will Make Us All Developers was written by Timothy Prickett Morgan at The Next Platform.

Azure Stack Finally Takes Microsoft Public Cloud Private

Microsoft announced Azure Stack at its Ignite event in September 2016 and soft-launched Azure Stack at its Inspire event in July, when it announced that the private cloud solution was available for customer orders. The first wave of Microsoft’s Azure Stack system partners – Dell EMC, Hewlett Packard Enterprise, and Lenovo – plan to ship their certified solutions to customers in September. We will be surprised if Microsoft does not announce first customer shipments with those vendors at Microsoft’s Ignite event in late September.

Azure Stack will compete with other hybrid private cloud frameworks, such as OpenStack, Cloud Foundry, HPE’s

Azure Stack Finally Takes Microsoft Public Cloud Private was written by Timothy Prickett Morgan at The Next Platform.

Supercomputing Advancing Too Fast for Key Codes to Keep Pace

The high performance computing world is set to become more diverse over the next several years on the hardware front, but for software development, this new array of ever-higher performance options creates big challenges for codes.

While the hardware advances might be moving too quickly for long-standing software to take optimal advantage of, in some areas things are at a relative standstill in terms of how to approach this future. Is it better to keep optimizing old codes that could be ticked along with the X86 tocks, or does a new architectural landscape mean starting from scratch with scientific codes–even

Supercomputing Advancing Too Fast for Key Codes to Keep Pace was written by Nicole Hemsoth at The Next Platform.

China Arms Upgraded Tianhe-2A Hybrid Supercomputer

As an economic powerhouse and with a rising military and political presence around the world, you would expect, given the inherent political nature of supercomputing, that China would have multiple and massive supercomputing centers as well as a desire to spread its risk and demonstrate its technical breadth by investing in many different kinds of capability class supercomputers.

And this is precisely what China is doing, including creating its own offload accelerator, based on digital signal processors. This Matrix2000 DSP accelerator, which was unveiled at the ISC16 supercomputing event last year and which is being created by the National University

China Arms Upgraded Tianhe-2A Hybrid Supercomputer was written by Timothy Prickett Morgan at The Next Platform.

The Power9 Rollout Begins With Summit And Sierra

At the end of July, Oak Ridge National Laboratory started receiving the first racks of servers that will eventually be expanded to become the “Summit” supercomputer, the long-awaited replacement to the “Titan” hybrid CPU-GPU system that was built by Cray and installed back in the fall of 2012. So, technically speaking, IBM has begun shipping its Power9-based “Witherspoon” system, the kicker to the Power8-based “Minsky” machine that Big Blue unveiled in September 2016 as a precursor and a testbed for the Power9 iron.

Given that IBM is shipping Summit nodes to Oak Ridge and has also started shipping similar (but

The Power9 Rollout Begins With Summit And Sierra was written by Timothy Prickett Morgan at The Next Platform.

Is M8 The Last Hurrah For Oracle Sparc?

Intel is not the only system maker that is looking to converge its processor lines to make life a bit simpler for itself and for its customers as well as to save some money on engineering work. Oracle has just announced its Sparc M8 processor, and while this is an interesting chip, what is also interesting is that a Sparc T8 companion processor aimed at entry and midrange systems was not already introduced and does not appear to be in the works.

There is plenty that is a little weird here. The new Sparc T8 systems are, in fact, going to be

Is M8 The Last Hurrah For Oracle Sparc? was written by Timothy Prickett Morgan at The Next Platform.

A Rare Peek Inside A 400G Cisco Network Chip

Server processor architectures are trying to break the ties between memory and compute to allow the capacities of each to scale independently of each other, but switching and routing giant Cisco Systems has already done this for a high-end switch chip that looks remarkably like a CPU tuned for network processing.

At the recent Hot Chips conference in Silicon Valley, Jamie Markevitch, a principal engineer at Cisco, showed off the guts of an unnamed but currently shipping network processor, something that happens very rarely in the switching and routing racket. With the exception of the upstarts like Barefoot Networks with

A Rare Peek Inside A 400G Cisco Network Chip was written by Timothy Prickett Morgan at The Next Platform.

Shedding Light on Dark Bandwidth

We have heard much about the concept of dark silicon but there is a separate, related companion to this idea.

Dark bandwidth is a term that is being bandied about to describe the major inefficiencies of data movement. The idea of this is not unknown or new, but some of the ways the problem is being tackled present new practical directions as the emphasis on system balance over pure performance persists.
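
As a minimal sketch of that imbalance (an illustration of the general idea, not code from the article; the record layout is hypothetical), the loop below reads one 8-byte field out of each 64-byte cache line, so roughly seven-eighths of the bytes the memory system delivers are never used:

```c
/* Hypothetical illustration of "dark bandwidth": bytes moved vs. bytes used. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define N (1 << 20)  /* one million records, about 64 MB */

struct record {
    uint64_t key;        /* the only field this pass actually reads */
    uint64_t payload[7]; /* 56 bytes of cold data sharing the same cache line */
};

int main(void) {
    struct record *recs = calloc(N, sizeof(*recs));
    if (!recs)
        return 1;

    uint64_t sum = 0;
    for (size_t i = 0; i < N; i++)
        sum += recs[i].key;  /* uses 8 bytes, but a typical 64-byte line is fetched */

    double moved = (double)N * sizeof(struct record); /* bytes the memory system delivers */
    double used  = (double)N * sizeof(uint64_t);      /* bytes the computation consumes */
    printf("sum=%llu  moved=%.0f bytes  used=%.0f bytes  utilization=%.1f%%\n",
           (unsigned long long)sum, moved, used, 100.0 * used / moved);

    free(recs);
    return 0;
}
```

The 12.5 percent utilization it reports is exactly the kind of gap between bandwidth consumed and bandwidth put to work that the dark bandwidth label is meant to capture.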

As ARM Research architect Jonathan Beard describes it, the way systems work now is a lot like ordering a tiny watch battery online and having it delivered in a

Shedding Light on Dark Bandwidth was written by Nicole Hemsoth at The Next Platform.

Custom Server Makers Set The Datacenter Pace

Makers of tightly coupled, shared memory machines can make all of the arguments they want about how it is much more efficient and easier to program these NUMA machines than it is to do distributed computing across a cluster of more loosely coupled boxes, but for the most part, the IT market doesn’t care.

Distributed computing, in its more modern implementation of frameworks running on virtual machines or containers – or both – is by far the norm, both in the datacenter and on the public clouds. You don’t have to look any further than the latest server sales statistics

Custom Server Makers Set The Datacenter Pace was written by Timothy Prickett Morgan at The Next Platform.

Hospital Captures First Commercial Volta GPU Based DGX-1 Systems

At well over $150,000 per appliance, the Volta GPU based DGX appliances from Nvidia, which take aim at deep learning with framework integration and eight Volta GPUs linked with NVLink, are set to appeal to the most bleeding edge of machine learning shops.

Nvidia has built its own clusters by stringing several of these together, just as researchers at Tokyo Tech have done with the Pascal generation systems. But one of the first commercial customers for the Volta based boxes is the Center for Clinical Data Science, which is part of the first wave of hospitals set to

Hospital Captures First Commercial Volta GPU Based DGX-1 Systems was written by Nicole Hemsoth at The Next Platform.

What’s So Bad About POSIX I/O?

POSIX I/O is almost universally agreed to be one of the most significant limitations standing in the way of I/O performance as exascale system designs push toward 100,000 client nodes.
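
To make that concrete, here is a minimal, annotated POSIX write path (an illustrative sketch; the file name and buffer size are hypothetical), with comments flagging the guarantees most commonly cited as the hard part to uphold across that many clients:

```c
/* Sketch of an ordinary POSIX I/O sequence, annotated with the semantics
 * that are commonly blamed for poor scaling on parallel file systems. */
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <stdio.h>

int main(void) {
    /* open() with O_CREAT must resolve the path and create a directory entry
     * that every other client sees immediately; with ~100,000 nodes doing
     * this at once, the metadata service becomes the bottleneck. */
    int fd = open("checkpoint.dat", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) { perror("open"); return 1; }

    char buf[4096] = {0};
    /* POSIX requires that once write() returns, the data is visible to any
     * subsequent read from any process, which forces locking and cache
     * invalidation even when no other client will ever touch this range. */
    if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
        perror("write");

    /* fstat() must report a size and timestamps that all clients agree on,
     * which serializes otherwise independent writers. */
    struct stat st;
    if (fstat(fd, &st) == 0)
        printf("size=%lld bytes\n", (long long)st.st_size);

    close(fd);
    return 0;
}
```

None of these guarantees is a problem on a single node; the trouble is that a parallel file system has to uphold them for every client at once.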

The desire to kill off POSIX I/O is a commonly beaten drum among high-performance computing experts, and a variety of new approaches—ranging from I/O forwarding layers and user-space I/O stacks to completely new I/O interfaces—are bandied about as remedies to the impending exascale I/O crisis.

However, it is much less common to hear exactly why POSIX I/O is so detrimental to scalability and performance, and what needs to change to have a suitably

What’s So Bad About POSIX I/O? was written by Nicole Hemsoth at The Next Platform.

Signposts On The Roadmap Out To 10 Tb/sec Ethernet

The world of Ethernet switching and routing used to be more predictable than just about any other part of the datacenter, but for the past decade the old adage – ten times the bandwidth for three times the cost – has not held. While 100 Gb/sec Ethernet was launched in 2010 and saw a fair amount of uptake amongst telecom suppliers for their backbones, the hyperscalers decided, quite correctly, that 100 Gb/sec Ethernet was too expensive and opted for 40 Gb/sec instead.

Now, we are sitting on the cusp of the real 100 Gb/sec Ethernet rollout among hyperscalers and enterprise

Signposts On The Roadmap Out To 10 Tb/sec Ethernet was written by Timothy Prickett Morgan at The Next Platform.

Mesos Borgs Google’s Kubernetes Right Back

The rivalry between Mesos, Kubernetes, and OpenStack just keeps getting more interesting, and instead of a winner take all situation, it has become more of a take what you need approach. That said, it is looking like Kubernetes is emerging as the de facto standard for container control, even though Google was not the first out of the gate in open sourcing Kubernetes, and Docker Swarm and the full Docker Enterprise are seeing plenty of momentum in the enterprise.

Choice is a good thing for the IT industry, and the good news is that because of architectural choices made by

Mesos Borgs Google’s Kubernetes Right Back was written by Timothy Prickett Morgan at The Next Platform.

The Prospects For A Leaner And Meaner HPE

The era of Hewlett Packard Enterprise’s envious – and expensive – desire to become an IT software and services behemoth like the IBM of the 1990s and 2000s is coming to a close.

The company has finalized its spinout-merger of substantially all of its software assets to Micro Focus. HPE has already spun out the lion’s share of its outsourcing and consulting businesses to Computer Sciences and even earlier had split from its troublesome PC and very profitable printer businesses. These were spun out together to give the combined HP Inc a chance to live on Wall Street and because PCs

The Prospects For A Leaner And Meaner HPE was written by Timothy Prickett Morgan at The Next Platform.

Future Interconnects: Gen-Z Stitches A Memory Fabric

It is difficult not to be impatient for the technologies of the future, which is one reason that this publication is called The Next Platform. But those who are waiting for the Gen-Z consortium to deliver a memory fabric that will break the hegemony of the CPU in controlling access to memory and to deepen the memory hierarchy while at the same time flattening memory addressability are going to have to wait a little longer.

About a year longer, in fact, which is a bit further away than the founders of the Gen-Z consortium were hoping when they launched

Future Interconnects: Gen-Z Stitches A Memory Fabric was written by Timothy Prickett Morgan at The Next Platform.

The Huge Premium Intel Is Charging For Skylake Xeons

There is no question that Intel has reached its peak in the datacenter when it comes to compute. For years now, it has had very little direct competition and only some indirect competition from the few remaining RISC upstarts and the threat of the newbies with their ARM architectures.

The question now, as we ponder the “Skylake” Xeon SP processors and their “Purley” platform that launched in July, is this: Is Intel at a local maximum, with another peak off in the distance, perhaps after a decline or perhaps after steady growth or a flat spot, or is this the

The Huge Premium Intel Is Charging For Skylake Xeons was written by Timothy Prickett Morgan at The Next Platform.

NASA Supercomputing Strategy Takes the Road Less Traveled

For a large institution playing at the leadership-class supercomputing level, NASA tends to do things a little differently than its national lab and academic peers.

One of the most striking differences in how the space agency views its supercomputing future can be found at the facilities level. Instead of building massive brick and mortar datacenters within a new or existing complex, NASA has taken the modular route, beginning with its Electra supercomputer and, in the near future, with a new 30 megawatt-capable modular installation that can house about a million compute cores.

“What we found is that the modular approach

NASA Supercomputing Strategy Takes the Road Less Traveled was written by Nicole Hemsoth at The Next Platform.

Kafka Wakes Up And Is Metamorphosed Into A Database

Sometimes a database is like a collection of wax tablets that you can stack and sort through to update, and these days, sometimes it is more like a river whose shape is defined by its geography but that is constantly changing and flowing, and that flow, more than anything else, defines the information that drives the business. There is no time to persist it, organize it, and then query it.

In this case, embedding a database right in that stream makes good sense, and that is precisely what Confluent, the company that has commercialized Apache Kafka, which is a

Kafka Wakes Up And Is Metamorphosed Into A Database was written by Timothy Prickett Morgan at The Next Platform.

VMware’s Platform Revolves Around ESXi, Except Where It Can’t

Building a platform is hard enough, and there are very few companies that can build something that scales, supports a diversity of applications, and, in the case of either cloud providers or software or whole system sellers, can be suitable for tens of thousands, much less hundreds of thousands or millions, of customers.

But if building a platform is hard, keeping it relevant is even harder, and those companies who demonstrate the ability to adapt quickly and to move to new ground while holding old ground are the ones that get to make money and wield influence in the datacenter.

VMware’s Platform Revolves Around ESXi, Except Where It Can’t was written by Timothy Prickett Morgan at The Next Platform.

Heterogeneous Supercomputing on Japan’s Most Powerful System

We continue with our second part of the series on the Tsubame supercomputer (first section here) with the next segment of our interview with Professor Satoshi Matsuoka, of the Tokyo Institute of Technology (Tokyo Tech).

Matsuoka researches and designs large scale supercomputers and similar infrastructures. More recently, he has worked on the convergence of Big Data, machine/deep learning, and AI with traditional HPC, as well as investigating post-Moore technologies towards 2025. He has designed supercomputers for years and has collaborated on projects involving basic elements for the current and, more importantly, future exascale systems.

TNP: Will you be running

Heterogeneous Supercomputing on Japan’s Most Powerful System was written by Nicole Hemsoth at The Next Platform.