Archive

Category Archives for "The Next Platform"

The Machine Learning Opportunity in Manufacturing, Logistics

There is increasing pressure in such fields as manufacturing, energy and transportation to adopt AI and machine learning to help improve efficiencies in operations, optimize workflows, enhance business decisions through analytics and reduce costs in logistics.

We have talked about how industries like telecommunications and transportation are looking at recurrent neural networks to better forecast resource demand in supply chains. However, adopting AI and machine learning comes with its share of challenges. Companies whose datacenters are crowded with traditional systems powered by CPUs now have to consider buying and bringing in GPU-based hardware that is better suited to
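To make the forecasting idea concrete, here is a minimal sketch of how a recurrent cell folds a history of demand readings into a hidden state and reads out a one-step-ahead prediction. This is our own illustration, not any production system: the weights are random placeholders, and a real model would learn them from historical demand data.

```python
import numpy as np

# Minimal Elman-style recurrent cell for one-step demand forecasting.
rng = np.random.default_rng(0)
HIDDEN = 8

W_xh = rng.normal(scale=0.1, size=(1, HIDDEN))       # input -> hidden
W_hh = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))  # hidden -> hidden (the recurrence)
W_hy = rng.normal(scale=0.1, size=(HIDDEN, 1))       # hidden -> output

def forecast_next(demand_history):
    """Fold a sequence of past demand values into a hidden state,
    then read out a one-step-ahead forecast."""
    h = np.zeros(HIDDEN)
    for x in demand_history:
        h = np.tanh(np.array([x]) @ W_xh + h @ W_hh)
    return (h @ W_hy).item()

history = [102.0, 98.0, 110.0, 105.0, 99.0]  # e.g. weekly unit demand
prediction = forecast_next(history)
```

The recurrence is what lets the network condition each forecast on the whole history rather than a fixed-size snapshot, which is why it suits supply-chain series with seasonality and trend.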

The Machine Learning Opportunity in Manufacturing, Logistics was written by Jeffrey Burt at The Next Platform.

Prying The Lid Off Black Box Switch SDKs

It would be hard to find a business that has been more proprietary, insular, and secretive than the networking industry, and for good reasons. The sealed boxes that switch vendors sell, and that are the very backbone of the Internet, have been wickedly profitable – and in a way that neither servers nor storage have been.

There are so many control points in the networking stack that it is no wonder the hyperscalers and cloud builders have been leaning so heavily on switch ASIC vendors to open up their entire stack. The only reason they don’t build their own switch

Prying The Lid Off Black Box Switch SDKs was written by Timothy Prickett Morgan at The Next Platform.

How AlphaGo Sparked a New Approach to De Novo Drug Design

Researcher Olexandr Isayev wasn’t just impressed to see an AI framework best the top player of a game so complex it was considered impossible for an algorithm to track. He was inspired.

“The analogy of the complexity of chemistry, the number of possible molecules we don’t know about, is roughly the same order of complexity as Go,” the University of North Carolina computational biology and chemistry expert explained.

“Instead of playing with checkers on a board, we envisioned a neural network that could play the game of generating molecules—one that did not rely on human intuition for this initial but

How AlphaGo Sparked a New Approach to De Novo Drug Design was written by Nicole Hemsoth at The Next Platform.

Putting Graph Analytics Back on the Board

Even though graph analytics has not disappeared, especially in the select areas where it is the only efficient way to handle large-scale pattern matching and analysis, attention has been largely drowned out by the new wave of machine learning and deep learning applications.

Before this newest hype cycle displaced its “big data” predecessor, there was a small explosion of new hardware and software approaches to tackling graphs at scale—from system-level offerings from companies like Cray with their Eureka appliance (which is now available as software on its standard server platforms) to unique hardware startups (ThinCI, for example) and graph

Putting Graph Analytics Back on the Board was written by Nicole Hemsoth at The Next Platform.

Deep Learning is the Next Platform for Pathology

It is a renaissance for companies that sell GPU-dense systems and low-power clusters that are right for handling AI inference workloads, especially as they look to the healthcare market–one that for a while was moving toward increasing compute on medical devices.

The growth of production deep learning in medical imaging and diagnostics has spurred investments in hospitals and research centers, pushing high performance systems for medicine back to the forefront.

We have written quite a bit about some of the emerging use cases for deep learning in medicine with an eye on the systems angle in particular, and while these

Deep Learning is the Next Platform for Pathology was written by Jeffrey Burt at The Next Platform.

Red Hat Shakes Up Container Ecosystem With CoreOS Deal

The container craze on Linux platforms just took an interesting twist now that Red Hat is shelling out $250 million to acquire its upstart rival in Linux and containers, CoreOS.

As the largest and by far the most profitable open source software company in the world – it had $2.4 billion in sales in fiscal 2017, brought $253.7 million of that to the bottom line, and ended that fiscal year in February with a $2.7 billion subscription and services backlog – Red Hat has not been afraid to spend some money to get its hands on control of key open

Red Hat Shakes Up Container Ecosystem With CoreOS Deal was written by Timothy Prickett Morgan at The Next Platform.

Networking With Intent

Networking has always been the laggard in the enterprise datacenter. As servers and then storage appliances became increasingly virtualized and disaggregated over the past 15 years or so, the network stubbornly stuck with the appliance model, closed and proprietary. As other datacenter resources became faster, more agile and easier to manage, many of those efficiencies were hobbled by the network, which could take months to program and could require new hardware before making any significant changes.

However slowly, and thanks largely to the hyperscalers and now telcos and other communications service providers, that has begun to change. The rise of

Networking With Intent was written by Jeffrey Burt at The Next Platform.

Reckoning The Spectre And Meltdown Performance Hit For HPC

While no one has yet created an exploit to take advantage of the Spectre and Meltdown speculative execution vulnerabilities that were exposed by Google six months ago and that were revealed in early January, it is only a matter of time. The patching frenzy has not settled down yet, and a big concern is not just whether these patches fill the security gaps, but at what cost they do so in terms of application performance.

To try to ascertain the performance impact of the Spectre and Meltdown patches, most people have relied on comments from Google on the negligible

Reckoning The Spectre And Meltdown Performance Hit For HPC was written by Timothy Prickett Morgan at The Next Platform.

Graphcore Builds Momentum with Early Silicon

There has been a great deal of interest in deep learning chip startup Graphcore since we first got the limited technical details of the company’s first-generation chip last year, which was followed by revelations about how its custom software stack can run a range of convolutional, recurrent, and generative adversarial neural network jobs.

In our conversations with those currently using GPUs for large-scale training (often with separate CPU-only inference clusters), we have generally found that there is great interest in all new architectures for deep learning workloads. But what would really seal the deal is something that could both training

Graphcore Builds Momentum with Early Silicon was written by Nicole Hemsoth at The Next Platform.

Oil and Gas Industry Gets GPU, Deep Learning Injection

Although oil and gas software giant Baker Hughes is not in the business of high performance computing, the software it creates for the world’s leading oil and gas companies requires supercomputing capabilities for some use cases, and increasingly these systems can serve double duty for emerging deep learning workloads.

The HPC requirements make sense for an industry awash in hundreds of petabytes of sensor and equipment data each year and many terabytes per day from seismic and discovery simulations, and the deep learning angle is becoming the next best way of extracting meaning from so many bytes.

In effort to

Oil and Gas Industry Gets GPU, Deep Learning Injection was written by Nicole Hemsoth at The Next Platform.

For Many, Hyperconverged Is The Next Platform

There is a kind of dichotomy in the datacenter. The upstart hyperconverged storage makers will tell you that the server-storage half-bloods that they have created are inspired by the storage at Google or Facebook or Amazon Web Services, but this is not, strictly speaking, true.  Hyperscalers and cloud builders are creating completely disaggregated compute and storage, linked by vast Clos networks with incredible amounts of bandwidth. But enterprises, who operate on a much more modest scale, are increasingly adopting hyperconverged storage – which mixes compute and storage on the same virtualized clusters.

One camp is splitting up servers and storage,

For Many, Hyperconverged Is The Next Platform was written by Timothy Prickett Morgan at The Next Platform.

Intel’s Glass House Is Definitely More Than Half Full

An unexpected jump in enterprise spending, coupled with ongoing heavy spending by hyperscalers, cloud builders, and communications companies revamping their networks of gear, and with the ramp of the “Skylake” Xeon SP processors launched last July, gave Intel the best overall quarter in its history, gauged by revenues and profits, and also the best one that its Data Center Group has ever posted.

Many are wondering if this boom can last. Intel’s top brass are not among them, but they do concede that the fourth quarter of 2017 was an unusually good one. It is hard to see if

Intel’s Glass House Is Definitely More Than Half Full was written by Timothy Prickett Morgan at The Next Platform.

Startup Builds GPU Native Custom Neural Network Framework

It is estimated that each day over a million malicious files are created and kicked to every corner of the web.

While there are plenty of options for defending against these potential attacks, the methods for doing so at the pace, scope, and complexity of modern malicious files have left traditional detection in the dust, even approaches based on heuristics or machine learning rather than signatures.

With those traditional methods falling short of what large enterprises need for multi-device and system security, the answer (to everything in IT in 2018, it seems) is to look to deep learning. But this

Startup Builds GPU Native Custom Neural Network Framework was written by Nicole Hemsoth at The Next Platform.

The Hard Limits for Deep Learning in HPC

If the hype is to be believed, there is no computational problem that cannot be tackled faster and better by artificial intelligence. But many of the supercomputing sites of the world beg to differ.

With that said, the deep learning boom has benefitted HPC in numerous ways, including bringing new cred to the years of hardware engineering around GPUs, software scalability tooling for complex parallel codes, and other feats of efficient performance at scale. And there are indeed areas of high performance computing that stand to benefit from integration of deep learning into the larger workflow including weather, cosmology, molecular

The Hard Limits for Deep Learning in HPC was written by Nicole Hemsoth at The Next Platform.

It’s About Time For Time Series Databases

To get straight to the point: nobody wants coarse-grained snapshots of any dataset that is actually a continuous stream of data points. With data storage and stream processing now so cost-effective (relatively speaking, of course) that anybody can do it – not just national security agencies or hedge funds and brokerages with big budgets – there is pent-up demand for a SQL-friendly time series database.
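The difference between coarse snapshots and a continuous stream comes down to time-bucketed aggregation, the core operation a time series database optimizes. Here is a toy Python sketch of the idea – our own illustration, not any vendor’s API – that rolls a stream of readings up into fixed-width buckets:

```python
from collections import defaultdict

def bucket_avg(readings, width):
    """Average (epoch_seconds, value) readings into fixed-width time buckets."""
    buckets = defaultdict(list)
    for ts, value in readings:
        buckets[ts // width * width].append(value)  # align to bucket start
    return {k: sum(v) / len(v) for k, v in sorted(buckets.items())}

# A reading every 10 seconds for 10 minutes, rolled up per minute.
stream = [(10 * i, float(i % 7)) for i in range(60)]
per_minute = bucket_avg(stream, 60)
```

The point of a time series database is that it keeps the raw stream and computes rollups like this on demand, so no resolution is thrown away up front.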

So that is what the founders of Timescale set out to create. And while they are by no means alone in this market, the open source

It’s About Time For Time Series Databases was written by Timothy Prickett Morgan at The Next Platform.

Energy Giant Eni Starts Investing In Supercomputers Again

Energy is not free, not even to energy companies, and so they are just as concerned with being efficient with their supercomputers as the most penny-pinching hyperscaler or cloud builder where the computing is the product.

As for the other major oil and gas producers on Earth, the last few years have not been easy ones for Ente Nazionale Idrocarburi, the Italian energy major that employs 33,000 people, operates in 76 countries worldwide, and now has the distinction of fielding the most powerful supercomputer in the energy sector – and indeed, among all kinds of commercial entities in the

Energy Giant Eni Starts Investing In Supercomputers Again was written by Timothy Prickett Morgan at The Next Platform.

Could Algorithmic Accelerators Spur a Hardware Startup Revival?

It would be interesting to find out how many recent college graduates in electronics engineering, computer science, or related fields expect to roll out their own silicon startup in the next five years compared to similar polls from ten or even twenty years ago. Our guess is that only a select few now would even consider the possibility in the near term.

The complexity of chip designs is growing, which means higher design costs, which thus limits the number of startups that can make a foray into the market. Estimates vary, but bringing a new chip to market can cost

Could Algorithmic Accelerators Spur a Hardware Startup Revival? was written by Nicole Hemsoth at The Next Platform.

Google’s Vision for Mainstreaming Machine Learning

Here at The Next Platform, we’ve touched on the convergence of machine learning, HPC, and enterprise requirements, looking at ways that vendors are trying to lower the barriers so enterprises can leverage AI and machine learning to better address the rapid changes brought about by emerging trends such as the cloud, edge computing, and mobility.

At the SC17 show in November 2017, Dell EMC unveiled efforts underway to bring AI, machine learning and deep learning into the mainstream, similar to how the company and other vendors in recent years have been working to make it easier for enterprises

Google’s Vision for Mainstreaming Machine Learning was written by Jeffrey Burt at The Next Platform.

Mellanox Trims Down To Reach For A Profitable $1 Billion

If Mellanox Technologies had not begun investing in Ethernet switching as it came out of the Great Recession, it would be a much different company than it is today. It might well have long since been acquired by Oracle or some other company, for instance.

To be sure, Mellanox might have been able to capture some business at the cloud builders, hyperscalers, and clustered storage makers with InfiniBand. But it would not have been the one to capitalize on moving InfiniBand-style technologies such as remote direct memory access (RDMA) to Ethernet, and it would not have been able to

Mellanox Trims Down To Reach For A Profitable $1 Billion was written by Timothy Prickett Morgan at The Next Platform.

Flattening Networks – And Budgets – With 400G Ethernet

If it were not for the insatiable bandwidth needs of the twenty major hyperscalers and cloud builders, it is safe to say that the innovation necessary to get Ethernet switching and routing up to 200 Gb/sec or 400 Gb/sec might not have been done at the fast pace that the industry has been able to pull off.

Knowing that there are 100 million ports of switching up for grabs among these companies from 2017 through 2021 – worth tens of billions of dollars per year in revenues – is a strong motivator to get clever.

And that is precisely what

Flattening Networks – And Budgets – With 400G Ethernet was written by Timothy Prickett Morgan at The Next Platform.