Archive

Category Archives for "The Next Platform"

Intel’s Glass House Is Definitely More Than Half Full

An unexpected jump in enterprise spending, coupled with ongoing heavy spending by hyperscalers, cloud builders, and communications companies revamping their networks, plus the ramp of the “Skylake” Xeon SP processors launched last July, gave Intel the best overall quarter in its history, gauged by revenues and profits, and the best one its Data Center Group has ever posted.

Many are wondering if this boom can last. Intel’s top brass are not among them, but they do concede that the fourth quarter of 2017 was an unusually good one. It is hard to see if

Intel’s Glass House Is Definitely More Than Half Full was written by Timothy Prickett Morgan at The Next Platform.

Startup Builds GPU Native Custom Neural Network Framework

It is estimated that more than a million malicious files are created each day and kicked to every corner of the web.

While there are plenty of options for defending against these potential attacks, the pace, scope, and complexity of modern malicious files have left traditional detection methods in the dust, even those based on heuristics or machine learning rather than signatures.

With those traditional methods falling short of what large enterprises need for multi-device and system security, the answer (to everything in IT in 2018, it seems) is to look to deep learning. But this

Startup Builds GPU Native Custom Neural Network Framework was written by Nicole Hemsoth at The Next Platform.

The Hard Limits for Deep Learning in HPC

If the hype is to be believed, there is no computational problem that cannot be tackled faster and better by artificial intelligence. But many of the supercomputing sites of the world beg to differ.

With that said, the deep learning boom has benefitted HPC in numerous ways, including bringing new cred to the years of hardware engineering around GPUs, software scalability tooling for complex parallel codes, and other feats of efficient performance at scale. And there are indeed areas of high performance computing that stand to benefit from integration of deep learning into the larger workflow including weather, cosmology, molecular

The Hard Limits for Deep Learning in HPC was written by Nicole Hemsoth at The Next Platform.

It’s About Time For Time Series Databases

To get straight to the point: nobody wants coarse-grained snapshots of any dataset that is actually a continuous stream of data points. With data storage and stream processing now so cost-effective (relatively speaking, of course) that anybody can do it – not just national security agencies or hedge funds and brokerages with big budgets – there is pent-up demand for a SQL-friendly time series database.
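The pattern the article describes – a continuous stream of raw data points rolled up into coarser time buckets with plain SQL – can be sketched with nothing more than the standard library; the table and column names here are hypothetical, and a purpose-built system like TimescaleDB automates the partitioning and indexing that sit behind this at scale.

```python
import sqlite3

# A minimal sketch: per-second readings from one device, rolled up into
# five-minute (300-second) buckets with ordinary SQL aggregation.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics (ts INTEGER, device TEXT, temp REAL)")

# Simulate ten minutes of a continuous stream of per-second readings.
readings = [(t, "sensor-1", 20.0 + (t % 10) * 0.1) for t in range(600)]
conn.executemany("INSERT INTO metrics VALUES (?, ?, ?)", readings)

# Roll the raw stream up into time buckets: average value and point count.
rows = conn.execute(
    """
    SELECT (ts / 300) * 300 AS bucket, AVG(temp), COUNT(*)
    FROM metrics
    GROUP BY bucket
    ORDER BY bucket
    """
).fetchall()

for bucket, avg_temp, n in rows:
    print(bucket, round(avg_temp, 2), n)
```

The point of a dedicated time series database is that this kind of bucketed query stays fast as the raw table grows into billions of rows, which a plain relational table does not.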

So that is what the founders of Timescale set out to create. And while they are by no means alone in this market, the open source

It’s About Time For Time Series Databases was written by Timothy Prickett Morgan at The Next Platform.

Energy Giant Eni Starts Investing In Supercomputers Again

Energy is not free, not even to energy companies, and so they are just as concerned with being efficient with their supercomputers as the most penny-pinching hyperscaler or cloud builder where the computing is the product.

As for the other major oil and gas producers on Earth, the last few years have not been easy ones for Ente Nazionale Idrocarburi, the Italian energy major that employs 33,000 people, operates in 76 countries worldwide, and now has the distinction of having the most powerful supercomputer in the energy sector – and indeed, among all kinds of commercial entities in the

Energy Giant Eni Starts Investing In Supercomputers Again was written by Timothy Prickett Morgan at The Next Platform.

Could Algorithmic Accelerators Spur a Hardware Startup Revival?

It would be interesting to find out how many recent college graduates in electronics engineering, computer science, or related fields expect to roll out their own silicon startup in the next five years compared to similar polls from ten or even twenty years ago. Our guess is that only a select few now would even consider the possibility in the near term.

The complexity of chip designs is growing, which means higher design costs, which in turn limits the number of startups that can make a foray into the market. Estimates vary, but bringing a new chip to market can cost

Could Algorithmic Accelerators Spur a Hardware Startup Revival? was written by Nicole Hemsoth at The Next Platform.

Google’s Vision for Mainstreaming Machine Learning

Here at The Next Platform, we’ve touched on the convergence of machine learning, HPC, and enterprise requirements, looking at ways that vendors are trying to lower the barriers that keep enterprises from leveraging AI and machine learning to better address the rapid changes brought about by such emerging trends as the cloud, edge computing, and mobility.

At the SC17 show in November 2017, Dell EMC unveiled efforts underway to bring AI, machine learning and deep learning into the mainstream, similar to how the company and other vendors in recent years have been working to make it easier for enterprises

Google’s Vision for Mainstreaming Machine Learning was written by Jeffrey Burt at The Next Platform.

Mellanox Trims Down To Reach For A Profitable $1 Billion

If Mellanox Technologies had not begun investing in Ethernet switching as it came out of the Great Recession, it would be a much different company than it is today. It might even have long since been acquired by Oracle or some other company, for instance.

To be sure, Mellanox might have been able to capture some business at the cloud builders, hyperscalers, and clustered storage makers with InfiniBand. But it would not have been the one to capitalize on moving InfiniBand-style technologies such as remote direct memory access (RDMA) to Ethernet, and it would not have been able to

Mellanox Trims Down To Reach For A Profitable $1 Billion was written by Timothy Prickett Morgan at The Next Platform.

Flattening Networks – And Budgets – With 400G Ethernet

If it were not for the insatiable bandwidth needs of the twenty major hyperscalers and cloud builders, it is safe to say that the innovation necessary to get Ethernet switching and routing up to 200 Gb/sec or 400 Gb/sec might not have happened at the fast pace that the industry has been able to pull off.

Knowing that there are 100 million ports of switching up for grabs among these companies from 2017 through 2021 – worth tens of billions of dollars per year in revenues – is a strong motivator to get clever.

And that is precisely what

Flattening Networks – And Budgets – With 400G Ethernet was written by Timothy Prickett Morgan at The Next Platform.

IBM Gets Machines Back Into International Business

Designing and manufacturing processors – or paying a third party foundry to do some of the work – and then manufacturing systems and updating and modernizing operating systems and middleware is tough work. And it is work that few IT vendors and about the same number of hyperscalers do these days. Despite all of the gut-wrenching changes in the datacenter over the past six decades, International Business Machines is still in the game that it largely defined so long ago.

In the fourth quarter of 2017, the company’s System z mainframes had the highest shipment level, as gauged by aggregate

IBM Gets Machines Back Into International Business was written by Timothy Prickett Morgan at The Next Platform.

A New Architecture For NVM-Express

NVM-Express is the latest hot thing in storage, with server and storage array vendors big and small making a mad dash to bring the protocol into their products and get an advantage in what promises to be a fast-growing market.

With the rapid rise in the amount of data being generated and processed, and the growth of such technologies as artificial intelligence and machine learning in managing and processing the data, demand for faster speeds and lower latency in flash and other non-volatile memory will continue to increase in the coming years, and established companies like Dell EMC, NetApp

A New Architecture For NVM-Express was written by Jeffrey Burt at The Next Platform.

Datacenters Brace For Spectre And Meltdown Impact

The Spectre and Meltdown speculative execution security vulnerabilities fall into the category of “low probability, but very high impact” potential exploits. The holes that Spectre and Meltdown open up might enable any application to read the data of any other app running on the same server in the same pool of system memory – bypassing any and all security permissions. These potential exploits apply to every IT shop, from single-tenant servers potentially exposed to malware, to apps running in a virtual machine (VM) framework in an enterprise datacenter, to apps running in a multi-tenant public cloud instance.

Datacenters Brace For Spectre And Meltdown Impact was written by Timothy Prickett Morgan at The Next Platform.

Samsung Puts the Crunch on Emerging HBM2 Market

The memory market can be a volatile one, swinging from tight availability and high prices one year to plenty of inventory and falling prices a couple of years later. The fortunes of vendors can similarly swing with the market changes, with Samsung recently displacing Intel at the top of the semiconductor space as a shortage in the market drove up prices and, with it, the company’s revenues.

Demand for high-performance, high-speed memory is only going to grow in the HPC and supercomputing arena with the rise of technologies like artificial intelligence (AI), machine learning, and graphics processing, and

Samsung Puts the Crunch on Emerging HBM2 Market was written by Jeffrey Burt at The Next Platform.

Bringing a New HPC File System to Bear

File systems have never been the flashiest segment of the IT space, which might explain why big shakeups and new entrants into the market don’t draw the attention they could.

Established vendors have rolled out offerings that primarily are based on GPFS or Lustre, and enterprises and HPC organizations have embraced those products. However, changes in the IT landscape in recent years have convinced some companies and vendors to rethink file systems. Such changes as the rise of large-scale analytics and machine learning, the expansion of HPC into more mainstream enterprises, and the growth of cloud storage all have brought

Bringing a New HPC File System to Bear was written by Jeffrey Burt at The Next Platform.

NOAA Weather Forecasts Stick With CPUs, Keep An Eye On GPUs

When it comes to supercomputing, more is almost always better. More data and more compute – and more bandwidth to link the two – almost always result in a better set of models, whether they are descriptive or predictive. This has certainly been the case in weather forecasting, where the appetite for capacity to support more complex models of the atmosphere and the oceans and the integration of models running across different (and always increasing) resolutions never abates.

This is certainly the case with the National Oceanic and Atmospheric Administration, which does weather and climate forecasting on a regional, national,

NOAA Weather Forecasts Stick With CPUs, Keep An Eye On GPUs was written by Timothy Prickett Morgan at The Next Platform.

Intel, Nervana Shed Light on Deep Learning Chip Architecture

Almost two years after the acquisition by Intel, the deep learning chip architecture from startup Nervana Systems will finally be moving from its codenamed “Lake Crest” status to an actual product.

In that time, Nvidia, which owns the deep learning training market by a long shot, has had time to firm up its commitment to this expanding (if not overhyped in terms of overall industry dollar figures) market with new deep learning-tuned GPUs and appliances on the horizon as well as software tweaks to make training at scale more robust. In other words, even with solid technology at a reasonable

Intel, Nervana Shed Light on Deep Learning Chip Architecture was written by Nicole Hemsoth at The Next Platform.

Quantum Computing Enters 2018 Like It Is 1968

The quantum computing competitive landscape continues to heat up in early 2018. But today’s quantum computing landscape looks a lot like the semiconductor landscape 50 years ago.

The silicon-based integrated circuit (IC) entered its “medium-scale” integration phase in 1968. Transistor counts ballooned from ten transistors on a chip to hundreds of transistors on a chip within a few short years. After a while, there were thousands of transistors on a chip, then tens of thousands, and now we have, fifty years later, tens of billions.
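The growth curve described above can be checked with back-of-envelope arithmetic: going from hundreds of transistors per chip around 1968 to tens of billions fifty years later implies a doubling period of roughly two years, the familiar Moore’s Law cadence. The endpoint values below are illustrative assumptions, not figures from the article.

```python
import math

# Implied doubling period for the transistor-count growth sketched above:
# roughly hundreds of transistors circa 1968, tens of billions circa 2018.
start_count = 100      # "hundreds of transistors" on a chip, circa 1968
end_count = 20e9       # "tens of billions" on a chip, fifty years later
years = 50

# Number of doublings needed to cover that growth, and the period per doubling.
doublings = math.log2(end_count / start_count)
years_per_doubling = years / doublings

print(f"{doublings:.1f} doublings, one every {years_per_doubling:.2f} years")
```

The result lands close to the two-year doubling period usually quoted for Moore’s Law, which is the comparison the article is drawing for qubit counts.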

Quantum computing is a practical application of quantum physics using individual subatomic particles chilled to

Quantum Computing Enters 2018 Like It Is 1968 was written by Timothy Prickett Morgan at The Next Platform.

Machine Learning Drives Changing Disaster Recovery At Facebook

Hyperscalers have billions of users who get access to their services for free, but the funny thing is that these users act like they are paying for it and expect these services to be always available, no excuses.

Organizations and consumers also rely on Facebook, Google, Microsoft, Amazon, Alibaba, Baidu, and Tencent for services that they pay for, too, and they reasonably expect that their data will always be immediately accessible and secure, the services always available, their search returns always popping up milliseconds after their queries are entered, and the recommendations that come to them

Machine Learning Drives Changing Disaster Recovery At Facebook was written by Jeffrey Burt at The Next Platform.

The Spectre And Meltdown Server Tax Bill

The new year in the IT sector got off to a roaring start with the revelation of the Meltdown and Spectre security threats, the latter of which affects most of the processors used in consumer and commercial computing gear made in the last decade or so.

Much has been written about the nature of the Meltdown and Spectre threats, which leverage the speculative execution features of modern processors to give user-level applications access to operating system kernel memory, which is a very big problem. Chip suppliers and operating system and hypervisor makers have known about these exploits since last June,

The Spectre And Meltdown Server Tax Bill was written by Timothy Prickett Morgan at The Next Platform.

Facebook’s Expanding Machine Learning Infrastructure

Here at The Next Platform, we tend to keep a close eye on how the major hyperscalers evolve their infrastructure to support massive scale and ever more complex workloads.

Not so long ago the core services were relatively standard transactions and operations, but with the addition of training and inferencing against complex deep learning models—something that requires a two-handed approach to hardware—the hyperscale hardware stack has had to quicken its step to keep pace with the new performance and efficiency demands of machine learning at scale.

While not innovating on the custom hardware side quite the same way as Google,

Facebook’s Expanding Machine Learning Infrastructure was written by Nicole Hemsoth at The Next Platform.