Category Archives for "The Next Platform"

The New Dell Stops Trying To Be The Old IBM

It is week one of the new Dell Technologies, the conglomerate glued together with $60 billion and the remaining parts of the old Dell that it has not sold off to raise cash, all to buy storage giant EMC and therefore server virtualization juggernaut VMware, which is owned mostly by EMC but remains a public company in the wake of the deal.

By adding EMC and VMware to itself and shedding its outsourcing services and software business units, Dell is becoming the largest supplier of IT gear in the world, at least by its own reckoning. You could argue that consumer PCs

The New Dell Stops Trying To Be The Old IBM was written by Timothy Prickett Morgan at The Next Platform.

Refreshed IBM Power Linux Systems Add NVLink

The very first systems that allow for GPUs to be hooked directly to CPUs using Nvidia’s NVLink high-speed interconnect are coming to market now that Big Blue is updating its Power Systems LC line of Linux-based systems with the help of hardware partners in the OpenPower Foundation collective.

Interestingly, the advent of the Power Systems S822LC for HPC system, code-named “Minsky” inside of IBM because human beings like real names even if marketeers are not allowed to use them, gives the DGX-1 machine crafted by Nvidia for deep learning workloads some competition. Right now, these systems are the only two machines on

Refreshed IBM Power Linux Systems Add NVLink was written by Timothy Prickett Morgan at The Next Platform.

The Next Wave of Deep Learning Architectures

Intel has planted some solid stakes in the ground for the future of deep learning over the last month with its acquisition of deep learning chip startup, Nervana Systems, and most recently, mobile and embedded machine learning company, Movidius.

These new pieces will snap into Intel’s still-forming puzzle for capturing the supposed billion-plus dollar market ahead for deep learning, which is complemented by its own Knights Mill effort and software optimization work on machine learning codes and tooling. At the same time, just down the coast, Nvidia is firming up the market for its own GPU training and inference

The Next Wave of Deep Learning Architectures was written by Nicole Hemsoth at The Next Platform.

The Vast Potential For VMware’s OpenStack Cloud

While hyperscalers and HPC centers like the bleeding edge – their very existence commands that they be on it – enterprises are a more conservative lot. No IT supplier ever went broke counting on enterprises to be risk averse, but plenty of companies have gone the way of all flesh by not innovating enough and not seeing market inflections when they exist.

VMware, the virtualization division of the new Dell Technologies empire that formally comes into being this week, does not want to miss such changes and very much wants to continue to extract revenues and profits from its impressively

The Vast Potential For VMware’s OpenStack Cloud was written by Timothy Prickett Morgan at The Next Platform.

Exascale Might Prove To Be More Than A Grand Challenge

The supercomputing industry is accustomed to 1,000X performance strides, and that is because people like to think in big round numbers and bold concepts. Every leap in performance is exciting not just because of the engineering challenges in bringing systems with kilo, mega, tera, peta, and exa scales into being, but because of the science that is enabled by such increasingly massive machines.

But every leap is getting a bit more difficult as imagination meets up with the constraints of budgets and the laws of physics. The exascale leap is proving to be particularly difficult, and not just because it

Exascale Might Prove To Be More Than A Grand Challenge was written by Timothy Prickett Morgan at The Next Platform.

Big Data Rides Up The Cloud Abstraction Wave

There is no workload in the datacenter that can’t, in theory and in practice, be supplied as a service from a public cloud. Big Data as a Service, or BDaaS for short, is an emerging category of services that delivers data processing for analytics in the cloud and it is getting a lot of buzz these days – and for good reason. These BDaaS products vary in features, functions, and target use cases, but all address the same basic problem: Big data and data warehousing in the cloud are deceptively challenging, and customers want to abstract away the complexity.

Data

Big Data Rides Up The Cloud Abstraction Wave was written by Timothy Prickett Morgan at The Next Platform.

VMware’s Embrace And Extend Strategy

The most successful players in the information technology space are those that can adapt, again and again, to tumultuous change. With a vast installed base and plenty of technical talent, it is unwise to count VMware out as the enterprise customers who have embraced its server virtualization tools ponder how they want to evolve to something that looks more like what a hyperscaler would build.

Many companies have faced these moments, but here is perhaps a pertinent parallel.

IBM’s vaunted mainframe business reacted successfully to the onslaught of minicomputers in the late 1970s and early 1980s, spearheaded by Digital Equipment

VMware’s Embrace And Extend Strategy was written by Timothy Prickett Morgan at The Next Platform.

Details Emerge On China’s 64-Core ARM Chip

While the world waits for the AMD K12 and Qualcomm Hydra ARM server chips to join the ranks of the Applied Micro X-Gene and Cavium ThunderX processors already in the market, it could be upstart Chinese chip maker Phytium Technology that gets a brawny chip into the field first and also gets traction among actual datacenter server customers, not just tire kickers.

Phytium was on hand at last week’s Hot Chips 28 conference, showing off its chippery and laptop, desktop and server machines employing its “Earth” and “Mars” FT series of ARM chips. Most of the interest that people showed in

Details Emerge On China’s 64-Core ARM Chip was written by Timothy Prickett Morgan at The Next Platform.

CPU, GPU Put to Deep Learning Framework Test

In the last couple of years, we have examined how deep learning shops are thinking about hardware. From GPU acceleration, to CPU-only approaches, and of course, FPGAs, custom ASICs, and other devices, there are a range of options—but these are still early days. The algorithmic platforms for deep learning are still evolving and it is incumbent on hardware to keep up. Accordingly, we have been seeing more benchmarking efforts of various approaches from the research community.

This week yielded a new benchmark effort comparing various deep learning frameworks on a short list of CPU and

CPU, GPU Put to Deep Learning Framework Test was written by Nicole Hemsoth at The Next Platform.

Inside VMware Before It Dons The Dell Invisibility Cloak

Where do you go when you are an infrastructure software provider that already has 500,000 enterprise customers? That is about as good and as big as it gets, particularly when the biggest spenders in IT infrastructure, the hyperscalers and the largest cloud builders, create their own hardware and infrastructure software and inspire legions of companies to follow their lead, often with open source projects they have founded.

So VMware, which has grown into a nearly $7 billion software powerhouse, has done so in the only way that any company can that has reached such a saturation point in the market. Having

Inside VMware Before It Dons The Dell Invisibility Cloak was written by Timothy Prickett Morgan at The Next Platform.

Nutanix Pivots From Hyperconvergence To Platform

The chant for years and years from hyperconverged storage pioneer Nutanix has been “Ban the SAN.” But going forward, as the upstart is moving closer to its initial public offering, Nutanix wants to do much more. With two recent acquisitions, of PernixData and Calm.io, Nutanix is trying to transform itself into a proper, self-contained platform.

It will take either more acquisitions or lots more development to accomplish this goal. So Nutanix is by no means done. PernixData was equally ambitious in flash-accelerated and all-flash storage, and seems to have overextended itself as it invested in an effort to bring an

Nutanix Pivots From Hyperconvergence To Platform was written by Timothy Prickett Morgan at The Next Platform.

KiloCore Pushes On-Chip Scale Limits with Killer Core

We have profiled a number of processor updates and novel architectures in the wake of the Hot Chips conference this week, many of which have focused on clever FPGA implementations, specialized ASICs, or additions to well-known architectures, including Power and ARM.

Among the presentations that provided yet another way to loop around the Moore’s Law wall is a 1000-core processor “KiloCore” from UC Davis researchers, which they noted during Hot Chips (and the press repeated) was the first to wrap 1000 processors on a single die. Actually, Japanese startup, Exascaler, Inc. beat them to this with the PEZY-SC

KiloCore Pushes On-Chip Scale Limits with Killer Core was written by Nicole Hemsoth at The Next Platform.

AMD Strikes A Balance – And Strikes Back – With Zen

Being too dependent on one source for a key component is not just a bad idea because of supply chain risks, but also because it can result in higher prices.

Intel customers don’t need to be reminded of the lack of direct competitive pressure in the X86 chip market for servers, because they remember what competition felt like. And customers and system makers that had taken a risk with AMD Opteron processors a decade ago don’t need to be reminded of either of these facts, particularly after AMD walked away from the server business in the wake of technical problems

AMD Strikes A Balance – And Strikes Back – With Zen was written by Timothy Prickett Morgan at The Next Platform.

Inside the Manycore Research Chip That Could Power Future Clouds

For those interested in novel architectures for large-scale datacenters and complex computing domains, this year has offered plenty of fodder for exploration.

From a rise in custom ASICs to power next generation deep learning, to variations on FPGAs, DSPs, and ARM processor cores, and advancements in low-power processors for webscale datacenters, it is clear that the Moore’s Law death knell is clanging loud enough to spur faster, more voluminous action.

At the Hot Chips conference this week, we analyzed the rollout of a number of new architectures (more on the way as the week unfolds), but one that definitely grabbed

Inside the Manycore Research Chip That Could Power Future Clouds was written by Nicole Hemsoth at The Next Platform.

Baidu Takes FPGA Approach to Accelerating SQL at Scale

While much of the work at Baidu that we have focused on this year has centered on the Chinese search giant’s deep learning initiatives, many other critical, albeit less bleeding edge, applications present true big data challenges.

As Baidu’s Jian Ouyang detailed this week at the Hot Chips conference, Baidu sits on over an exabyte of data, processes around 100 petabytes per day, updates 10 billion webpages daily, and handles over a petabyte of log updates every 24 hours. These numbers are on par with Google and as one might imagine, it takes a Google-like approach to problem solving at

Baidu Takes FPGA Approach to Accelerating SQL at Scale was written by Nicole Hemsoth at The Next Platform.

Big Blue Aims For The Sky With Power9

Intel has the kind of control in the datacenter that only one vendor in the history of data processing has ever enjoyed. That other company is, of course, IBM, and Big Blue wants to take back some of the real estate it lost in the datacenters of the world in the past twenty years.

The Power9 chip, unveiled at the Hot Chips conference this week, is the best chance the company has had to make some share gains against X86 processors since the Power4 chip came out a decade and a half ago and set IBM on the path to

Big Blue Aims For The Sky With Power9 was written by Timothy Prickett Morgan at The Next Platform.

FPGA Based Deep Learning Accelerators Take on ASICs

Over the last couple of years, the idea has taken hold that the most efficient and highest performance way to accelerate deep learning training and inference is with a custom ASIC—something designed to fit the specific needs of modern frameworks.

While this idea has racked up major mileage, especially recently with the acquisition of Nervana Systems by Intel (and competitive efforts from Wave Computing and a handful of other deep learning chip startups), yet another startup is challenging the idea that a custom ASIC is the smart, cost-effective path.

The argument is a simple one: deep learning frameworks are not unified, they are

FPGA Based Deep Learning Accelerators Take on ASICs was written by Nicole Hemsoth at The Next Platform.

ARM Puts Some Muscle Into Vector Number Crunching

If the ARM processor in its many incarnations is to take on the reigning Xeon champ in the datacenter and the born-again Power processor that is also trying to knock Xeons from the throne, it is going to need some bigger vector math capabilities. This is why, as we have previously reported, supercomputer maker Fujitsu has teamed up with ARM Holdings to add better vector processing to the ARM architecture.

Details of that new vector format, known as Scalable Vector Extension (SVE), were revealed by ARM at the Hot Chips 28 conference in Silicon Valley, and any licensee

ARM Puts Some Muscle Into Vector Number Crunching was written by Timothy Prickett Morgan at The Next Platform.
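As a rough illustration of what SVE's vector-length agnostic model means for programmers, here is a minimal sketch of a predicated loop written with the ACLE SVE intrinsics that ARM later published; the axpy function and its parameters are hypothetical examples, not code from the article, and compiler support for these intrinsics postdates the Hot Chips reveal.

```c
#include <arm_sve.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative vector-length agnostic sketch: y[i] += a * x[i].
 * The same binary is meant to run on any SVE vector width (128 to
 * 2048 bits); svcntd() reports how many 64-bit lanes the hardware
 * provides, and the predicate masks off the loop tail. */
void axpy(double a, const double *x, double *y, size_t n)
{
    for (size_t i = 0; i < n; i += svcntd()) {
        svbool_t pg = svwhilelt_b64_u64(i, n);      /* active lanes for this pass */
        svfloat64_t vx = svld1_f64(pg, &x[i]);      /* predicated load of x */
        svfloat64_t vy = svld1_f64(pg, &y[i]);      /* predicated load of y */
        vy = svmla_n_f64_x(pg, vy, vx, a);          /* vy += vx * a */
        svst1_f64(pg, &y[i], vy);                   /* predicated store back to y */
    }
}
```

The point of the sketch is that no vector width appears anywhere in the source, which is what lets one binary span implementations with different register sizes.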

Specialized Supercomputing Cloud Turns Eye to Machine Learning

Back in 2010, when the term “cloud computing” was still laden with peril and mystery for many users in enterprise and high performance computing, HPC cloud startup, Nimbix, stepped out to tackle that perceived risk for some of the most challenging, latency-sensitive applications.

At the time, there were only a handful of small companies catering to the needs of high performance computing applications and those that existed were developing clever middleware to hook into AWS infrastructure. There were a few companies offering true “HPC as a service” (distinct datacenters designed to fit such workloads that could be accessed via a

Specialized Supercomputing Cloud Turns Eye to Machine Learning was written by Nicole Hemsoth at The Next Platform.

Why Intel Is Tweaking Xeon Phi For Deep Learning

If there is anything that chip giant Intel has learned over the past two decades as it has gradually climbed to dominance in processing in the datacenter, it is, ironically, that one size most definitely does not fit all. Quite the opposite, and increasingly so.

As the tight co-design of hardware and software continues in all parts of the IT industry, we can expect fine-grained customization for very precise – and lucrative – workloads, like data analytics and machine learning, just to name two of the hottest areas today.

Software will run most efficiently on hardware that is tuned for

Why Intel Is Tweaking Xeon Phi For Deep Learning was written by Timothy Prickett Morgan at The Next Platform.