Category Archives for "The Next Platform"

Pascal GPUs On All Fronts Push Nvidia To New Highs

Chip maker Nvidia was founded by people who loved gaming and who wanted to make better 3D graphics cards, and decades later, the company has become a force in computing, first in HPC and then in machine learning and now database acceleration. And it all works together, with gaming graphics providing the foundation on which Nvidia can build a considerable compute business, much as Intel’s PC business provided the foundation for its Xeon assault on the datacenter over the past two and a half decades.

At some point, Nvidia may not need an explicit link to PC graphics and gaming

Pascal GPUs On All Fronts Push Nvidia To New Highs was written by Timothy Prickett Morgan at The Next Platform.

What Sort of Burst Buffer Are You?

Burst buffer technology is closely associated with HPC applications and supercomputer sites as a means of ensuring that persistent storage, typically a parallel file system, does not become a bottleneck to overall performance, specifically where checkpoints and restarts are concerned. But attention is now turning to how burst buffers might find broader use cases beyond this niche, and how they could be used for accelerating performance in other areas where the ability to handle a substantial volume of data with high speed and low latency is key.

The term burst buffer is applied to this storage technology simply because this

What Sort of Burst Buffer Are You? was written by Nicole Hemsoth at The Next Platform.

InfiniBand Breaks Through The 200G Barrier

Moore’s Law may be slowing down performance increases in compute capacity, but InfiniBand networking did not get the memo. Mellanox Technologies has actually picked up the pace, in fact, and is previewing 200 Gb/sec InfiniBand switches and server adapters that are timed to come to market with a slew of Xeon, Opteron, ARM, and Power processors due around the middle of next year.

The new Quantum InfiniBand switch ASIC and its companion ConnectX-6 adapter ASICs come relatively hot on the heels of the 100 Gb/sec Enhanced Data Rate, or EDR, products that were announced in the fall of 2014 and

InfiniBand Breaks Through The 200G Barrier was written by Timothy Prickett Morgan at The Next Platform.

JVM Boost Shows Warm Java is Better than Cold

The Java Virtual Machine (JVM) is a vital part of modern distributed computing. It is the platform for big data applications like Spark, HDFS, Cassandra, and Hive. While the JVM provides “write once, run anywhere” platform independence, this comes at a cost. The JVM takes time to “warm up”, that is, to load the classes, interpret the bytecode, and so on. This time may not matter much for a long-running Tomcat server, but big data jobs are typically short-lived. Thus the parallelization often used to speed up the time-to-results compounds the JVM warmup time problem.
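The warmup effect described above is easy to observe directly. The following is a minimal sketch (not from the article; the method and iteration counts are illustrative) that times the same work once on a cold JVM and again after the method has become hot enough for the JIT compiler to kick in:

```java
// Sketch: measure the same method cold (interpreted) vs. warm (JIT-compiled).
// Timings vary by JVM and hardware; the point is the cold/warm gap, not the numbers.
public class WarmupDemo {
    // Simple work the JIT can optimize once the method becomes "hot".
    static long sum(long n) {
        long s = 0;
        for (long i = 1; i <= n; i++) s += i;
        return s;
    }

    static long timeSumNanos(long n) {
        long t0 = System.nanoTime();
        long r = sum(n);
        long elapsed = System.nanoTime() - t0;
        if (r < 0) System.out.println(r); // keep the result live so it isn't optimized away
        return elapsed;
    }

    public static void main(String[] args) {
        long cold = timeSumNanos(10_000_000);       // first call: largely interpreted
        for (int i = 0; i < 20_000; i++) sum(100);  // many invocations trigger JIT compilation
        long warm = timeSumNanos(10_000_000);       // same work, now compiled
        System.out.printf("cold=%dns warm=%dns%n", cold, warm);
    }
}
```

A short-lived big data task pays the "cold" cost on every JVM it spins up, which is exactly why wide parallelization multiplies the warmup penalty.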

David Lion and his colleagues examined

JVM Boost Shows Warm Java is Better than Cold was written by Nicole Hemsoth at The Next Platform.

Getting Hyper About Converged Storage, And Then Some

There is no question that information technology is always too complex, and people have been complaining about this for over five decades now. It keeps us employed, so perhaps we should not point this out, and moreover, perhaps we should not be so eager to automate ourselves out of jobs. But if the advance of computing from mainframes to artificial intelligence teaches us anything, it is that we always want to make IT simpler to get people out of the way of doing business or research.

The founders of hyperconverged systems maker Nutanix learned their lessons from hyperscalers like

Getting Hyper About Converged Storage, And Then Some was written by Timothy Prickett Morgan at The Next Platform.

Microsoft Research Pens Quill for Data Intensive Analysis

Collecting data is only useful to the extent that the data is analyzed. These days, human Internet usage is generating more data (particularly for advertising purposes) and Internet of Things devices are providing data about our homes, our cars, and our bodies.

Analyzing that data can become a challenge at scale. Streaming platforms work well with incoming data but aren’t designed for post hoc analysis. Traditional database management systems can perform complex queries against stored data, but cannot be put to real-time usage.

One proposal to address these challenges, called Quill, was developed by Badrish Chandramouli and colleagues at Microsoft

Microsoft Research Pens Quill for Data Intensive Analysis was written by Nicole Hemsoth at The Next Platform.

DDN Turns The Crank On “Wolfcreek” Storage

In the high performance computing arena, the stress is always on performance. Anything and everything that can be done to try to make data retrieval and processing faster ultimately adds up to better simulations and models that more accurately reflect the reality we are trying to recreate and often cast forward in time to figure out what will happen next.

Pushing performance is an interesting challenge here at the beginning of the 21st century, since a lot of server and storage components are commoditized and therefore available to others. The real engineering is coming up with innovative ways of putting

DDN Turns The Crank On “Wolfcreek” Storage was written by Timothy Prickett Morgan at The Next Platform.

Google Wants Kubernetes To Rule The World

At some point, all of the big public cloud providers will have to eat their own dog food, as the parlance goes, and run their applications atop the cloudy version of their infrastructure that they sell to other people, not distinct and sometimes legacy systems that predate the ascent of their clouds. In this regard, none of the cloud providers are any different from any major enterprise or government agency that struggles with any kind of legacy system.

Search engine and online advertising giant Google wants its Cloud Platform business to compete against Amazon Web Services and Microsoft Azure and

Google Wants Kubernetes To Rule The World was written by Timothy Prickett Morgan at The Next Platform.

Chasing The Dream Of Code HPC Once, Run Anywhere

In many ways, enterprises and hyperscalers have it easy. Very quickly in the wake of its announcement more than two decades ago, the Java programming language, a kind of virtualized C++, became the de facto standard for coding enterprise applications that run the business. And a slew of innovative storage and data analytics applications that have transformed computing were created by hyperscalers in Java and often open sourced so enterprises could use them.

The HPC community – and it is probably more accurate to say the many HPC communities – has it a bit tougher because they use a variety

Chasing The Dream Of Code HPC Once, Run Anywhere was written by Timothy Prickett Morgan at The Next Platform.

Physics Code Modifications Push Xeon Phi Peak Performance

In the worlds of high performance computing (HPC) and physics, seemingly straightforward challenges are frequently not what they seem at first glance.

For example, doing justice to an outwardly simple physics experiment involving a pendulum and drive motor can involve the need to process billions of data points. Moreover, even when aided by the latest high performance technology, such as the Intel Xeon Phi processor, achieving optimal compute levels requires ingenuity for addressing unexpected coding considerations.

Jeffery Dunham, the William R. Kenan Jr. Professor of Natural Sciences at Middlebury College in Vermont, should know. For about eight years, Professor Dunham

Physics Code Modifications Push Xeon Phi Peak Performance was written by Nicole Hemsoth at The Next Platform.

Advances in In Situ Processing Tie to Exascale Targets

Waiting for a simulation to complete before visualizing the results is often an unappealing prospect for researchers.

Verifying that output matches expectations early in a run helps prevent wasted computation time, which is particularly important on systems in high demand or when a limited allocation is available. In addition, the ability to perform computation continues to grow faster than the ability to store the results at comparable speed. The ability to analyze simulation output while it is still resident in memory, known as in situ processing, is appealing and sometimes necessary for researchers running large-scale simulations.
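The in situ pattern described above can be sketched very simply: instead of writing every timestep to disk and analyzing afterward, the simulation hands each timestep's in-memory state to an analysis hook. This is a hypothetical illustration, not the tooling from the article; the "physics" and statistics are stand-ins.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class InSituDemo {
    // Run `steps` timesteps; the analyzer sees the live state after each step.
    static void simulate(int steps, double[] state, Consumer<double[]> analyzer) {
        for (int t = 0; t < steps; t++) {
            for (int i = 0; i < state.length; i++) state[i] += 0.5 * i; // stand-in physics
            analyzer.accept(state); // in situ: state is still resident in memory
        }
    }

    public static void main(String[] args) {
        List<Double> maxPerStep = new ArrayList<>();
        double[] state = new double[4];
        simulate(3, state, s -> {
            double max = Double.NEGATIVE_INFINITY;
            for (double v : s) max = Math.max(max, v);
            maxPerStep.add(max); // keep a cheap per-step statistic; full state never hits disk
        });
        System.out.println(maxPerStep); // prints [1.5, 3.0, 4.5]
    }
}
```

Only the small derived statistic survives each step, which is the bargain in situ analysis makes: early verification and reduced output volume in exchange for deciding up front what to extract.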

In light of

Advances in In Situ Processing Tie to Exascale Targets was written by Nicole Hemsoth at The Next Platform.

Building The Stack Above And Below OpenStack

It has been six years now since the “Austin” release of the OpenStack cloud controller emerged from the partnership of Rackspace Hosting, which contributed its Swift object storage, and NASA, which contributed its Nova compute controller. NASA was frustrated by the open source Eucalyptus cloud controller, which was not completely open source and which did not add features fast enough, and Rackspace was in a fight for mindshare and marketshare against much larger cloud rival Amazon Web Services and wanted to leverage both open source and community to push back against its much larger rival.

OpenStack may not have

Building The Stack Above And Below OpenStack was written by Timothy Prickett Morgan at The Next Platform.

Getting Agile And Staying That Way

Let’s be honest. Although the old saying “slow and steady wins the race” may be a lesson that helps us get through school, it isn’t a realistic credo for the unrelenting demands of today’s fast-paced businesses. Faster may be better, but only if quality doesn’t suffer – and this puts immense strain on agile organizations that must continually deliver new features and software to their customers.

Meeting these needs, and doing so with efficiency, requires rethinking how we view application development and operations. For organizations embracing and addressing these challenges, the pursuit of DevOps is the new normal, but it

Getting Agile And Staying That Way was written by Timothy Prickett Morgan at The Next Platform.

Cisco Drives Density With S Series Storage Server

Getting the ratio of compute to storage right is not something that is easy within a single server design. Some workloads are wrestling with either more bits of data or heavier file types (like video), and the amount of capacity required per given unit of compute is much higher than can fit in a standard 2U machine with either a dozen large 3.5-inch drives or two dozen 2.5-inch drives.
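The density gap implied above is simple arithmetic. Here is an illustrative comparison of capacity per rack unit for a standard 2U server versus a denser chassis; the per-drive capacity is an assumption for the sake of the example, not a figure from the article.

```java
// Illustrative capacity-per-rack-unit arithmetic (drive capacity assumed).
public class DensityDemo {
    static double tbPerRackUnit(int drives, double tbPerDrive, int rackUnits) {
        return drives * tbPerDrive / rackUnits;
    }

    public static void main(String[] args) {
        // Standard 2U server: a dozen 3.5-inch drives at an assumed 8 TB each.
        System.out.println(tbPerRackUnit(12, 8.0, 2));  // prints 48.0 (TB per U)
        // A denser 4U storage chassis holding 60 of the same drives.
        System.out.println(tbPerRackUnit(60, 8.0, 4));  // prints 120.0 (TB per U)
    }
}
```

Whatever the actual drive sizes, a chassis built around drive count rather than a balanced compute-to-storage ratio can more than double the capacity packed into each rack unit, which is the use case the S Series targets.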

To attack these use cases, Cisco Systems is tweaking a storage-dense machine it debuted two years ago, and equipping it with some of the System Link virtualization technologies that it created

Cisco Drives Density With S Series Storage Server was written by Timothy Prickett Morgan at The Next Platform.

Microsoft Azure Goes Back To Rack Servers With Project Olympus

The thing we hear time and time again from the hyperscalers is that technology is a differentiator, but supply chain can make or break them. Designing servers, storage, switching, and datacenters is fun, but if all of the pieces can’t be brought together at volume, and at a price that is the best in the industry, then their operations can’t scale.

It is with this in mind that we ponder Microsoft’s new “Project Olympus” hyperscale servers, which it debuted today at the Zettastructure conference in London. Or, to be more precise, the hyperscale server designs that it has created but

Microsoft Azure Goes Back To Rack Servers With Project Olympus was written by Timothy Prickett Morgan at The Next Platform.

Mainstreaming Machine Learning: Emerging Solutions

In the course of this three-part series on the challenges and opportunities for enterprise machine learning, we have worked to define the landscape and ecosystem for these workloads in large-scale business settings and have taken an in-depth look at some of the roadblocks on the path to more mainstream machine learning applications.

In this final part of the series, we will turn from pointing to the problems and look at the ways the barriers can be removed, both in terms of leveraging the technology ecosystem around machine learning and addressing more difficult problems, most notably, how to implement the human

Mainstreaming Machine Learning: Emerging Solutions was written by Nicole Hemsoth at The Next Platform.

The State of HPC Cloud in 2016

We are pleased to announce that the first book from Next Platform Press, titled “The State of HPC Cloud: 2016 Edition,” is complete. The printed book will be available on Amazon.com and other online bookstores in December 2016. However, in the meantime, the supercomputing cloud company Nimbix is backing an effort to offer a digital download edition for free for this entire week, from today, October 31, until November 6.

As you will note from looking at the Next Platform Press page, we have other books we will be delivering in a similar manner this year. However, that this is the

The State of HPC Cloud in 2016 was written by Nicole Hemsoth at The Next Platform.

Major Roadblocks on the Path to Machine Learning

In part one of this series last week, we discussed the emerging ecosystem of machine learning applications and what promise those portend. But of course, as with any emerging application area (although to be fair, machine learning is not new), there are bound to be some barriers.

Even in analytically sophisticated organizations, machine learning often operates in “silos of expertise.” For example, the financial crimes unit in a bank may use advanced techniques to catch money laundering; the credit risk team uses completely different and incompatible tools to predict loan defaults and set risk-based pricing; while treasury uses still other

Major Roadblocks on the Path to Machine Learning was written by Nicole Hemsoth at The Next Platform.

Broadcom Strikes 100G Ethernet Harder With Tomahawk-II

Because space costs so much money and having multiple machines adds complexity and even more costs on top of that, there is always pressure to increase the density of the devices that provide compute, storage, and networking capacity in the datacenter. Moore’s Law, in essence, doesn’t just drive chips, but also the devices that are composed of chips.

Often, it is the second or third iteration of a technology that takes off because the economics and density of the initial products can’t match the space and power constraints of a system rack. Such was the case with the initial 100

Broadcom Strikes 100G Ethernet Harder With Tomahawk-II was written by Timothy Prickett Morgan at The Next Platform.

Learning From Google’s Cloud Storage Evolution

Making storage cheaper on the cloud does not necessarily mean using tape or Blu-Ray discs to hold data. In a datacenter with enormous bandwidth and consistent latency over a Clos network interconnecting hundreds of thousands of compute and storage servers, a hyperscaler can change the durability and availability of data on the network, trading off storage costs against data access and movement costs, to offer a mix of price and performance while cutting costs.

That, in a nutshell, is what search engine giant and public cloud provider Google is doing with the latest variant of persistent storage

Learning From Google’s Cloud Storage Evolution was written by Timothy Prickett Morgan at The Next Platform.