Category Archives for "The Next Platform"

VMware’s Embrace And Extend Strategy

The most successful players in the information technology space are those that can adapt, again and again, to tumultuous change. With a vast installed base and plenty of technical talent, it is unwise to count VMware out as the enterprise customers who have embraced its server virtualization tools ponder how they want to evolve to something that looks more like what a hyperscaler would build.

Many companies have faced these moments, but here is perhaps a pertinent parallel.

IBM’s vaunted mainframe business reacted successfully to the onslaught of minicomputers in the late 1970s and early 1980s, spearheaded by Digital Equipment

VMware’s Embrace And Extend Strategy was written by Timothy Prickett Morgan at The Next Platform.

Details Emerge On China’s 64-Core ARM Chip

While the world waits for the AMD K12 and Qualcomm Hydra ARM server chips to join the ranks of the Applied Micro X-Gene and Cavium ThunderX processors already in the market, it could be upstart Chinese chip maker Phytium Technology that gets a brawny chip into the field first and also gets traction among actual datacenter server customers, not just tire kickers.

Phytium was on hand at last week’s Hot Chips 28 conference, showing off its chippery and laptop, desktop and server machines employing its “Earth” and “Mars” FT series of ARM chips. Most of the interest that people showed in

Details Emerge On China’s 64-Core ARM Chip was written by Timothy Prickett Morgan at The Next Platform.

CPU, GPU Put to Deep Learning Framework Test

In the last couple of years, we have examined how deep learning shops are thinking about hardware. From GPU acceleration, to CPU-only approaches, and of course, FPGAs, custom ASICs, and other devices, there are a range of options—but these are still early days. The algorithmic platforms for deep learning are still evolving and it is incumbent on hardware to keep up. Accordingly, we have been seeing more benchmarking efforts of various approaches from the research community.
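Benchmarking efforts like the ones described here typically reduce to timing a framework's core operations on identical inputs across devices. As a minimal sketch of that methodology — assuming NumPy as a stand-in for a framework's tensor backend, since the actual benchmark code is not shown in the source — the harness below times a dense layer's forward pass and keeps the best of several repeats:

```python
# Minimal micro-benchmark harness sketch (assumption: NumPy stands in for
# a deep learning framework's backend; real studies time full networks).
import time
import numpy as np

def bench(fn, repeats=5):
    """Return the best wall-clock time over several repeats,
    which filters out one-off interference from the OS."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

# A dense layer's core operation: matrix multiply plus bias broadcast.
x = np.random.rand(256, 512).astype(np.float32)   # batch of activations
w = np.random.rand(512, 128).astype(np.float32)   # layer weights
b = np.zeros(128, dtype=np.float32)               # bias vector

t = bench(lambda: x @ w + b)
print(f"dense layer forward pass: {t * 1e3:.3f} ms")
```

Taking the best-of-N time rather than the mean is a common choice in such harnesses, since the minimum is the closest observable estimate of the hardware's uncontended speed.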

This week yielded a new benchmark effort comparing various deep learning frameworks on a short list of CPU and

CPU, GPU Put to Deep Learning Framework Test was written by Nicole Hemsoth at The Next Platform.

Inside VMware Before It Dons The Dell Invisibility Cloak

Where do you go when you are an infrastructure software provider that already has 500,000 enterprise customers? That is about as good and as big as it gets, particularly when the biggest spenders in IT infrastructure, the hyperscalers and the largest cloud builders, create their own hardware and infrastructure software and inspire legions of companies to follow their lead, often with open source projects they founded.

So VMware, which has grown into a nearly $7 billion software powerhouse, has done so in the only way that any company can that has reached such a saturation point in the market. Having

Inside VMware Before It Dons The Dell Invisibility Cloak was written by Timothy Prickett Morgan at The Next Platform.

Nutanix Pivots From Hyperconvergence To Platform

The chant for years and years from hyperconverged storage pioneer Nutanix has been “Ban the SAN.” But going forward, as the upstart is moving closer to its initial public offering, Nutanix wants to do much more. With two recent acquisitions, of PernixData and Calm.io, Nutanix is trying to transform itself into a proper, self-contained platform.

It will take either more acquisitions or lots more development to accomplish this goal. So Nutanix is by no means done. PernixData was equally ambitious in flash-accelerated and all-flash storage, and seems to have overextended itself as it invested in an effort to bring an

Nutanix Pivots From Hyperconvergence To Platform was written by Timothy Prickett Morgan at The Next Platform.

KiloCore Pushes On-Chip Scale Limits with Killer Core

We have profiled a number of processor updates and novel architectures in the wake of this week’s Hot Chips conference, many of which have focused on clever FPGA implementations, specialized ASICs, or additions to well-known architectures, including Power and ARM.

Among the presentations that provided yet another way to loop around the Moore’s Law wall is a 1000-core processor, “KiloCore,” from UC Davis researchers, which they noted during Hot Chips (and the press repeated) was the first to wrap 1000 processors on a single die. Actually, the Japanese startup Exascaler, Inc. beat them to it with the PEZY-SC

KiloCore Pushes On-Chip Scale Limits with Killer Core was written by Nicole Hemsoth at The Next Platform.

AMD Strikes A Balance – And Strikes Back – With Zen

Being too dependent on one source for a key component is not just a bad idea because of supply chain risks, but because it can result in higher prices.

Intel customers don’t need to be reminded of the lack of direct competitive pressure in the X86 chip market for servers, because they remember what that competition felt like. And customers and system makers that had taken a risk with AMD Opteron processors a decade ago don’t need to be reminded of either of these facts, particularly after AMD walked away from the server business in the wake of technical problems

AMD Strikes A Balance – And Strikes Back – With Zen was written by Timothy Prickett Morgan at The Next Platform.

Inside the Manycore Research Chip That Could Power Future Clouds

For those interested in novel architectures for large-scale datacenters and complex computing domains, this year has offered plenty of fodder for exploration.

From a rise in custom ASICs to power next generation deep learning, to variations on FPGAs, DSPs, and ARM processor cores, and advancements in low-power processors for webscale datacenters, it is clear that the Moore’s Law death knell is clanging loud enough to spur faster, more voluminous action.

At the Hot Chips conference this week, we analyzed the rollout of a number of new architectures (more on the way as the week unfolds), but one that definitely grabbed

Inside the Manycore Research Chip That Could Power Future Clouds was written by Nicole Hemsoth at The Next Platform.

Baidu Takes FPGA Approach to Accelerating SQL at Scale

While much of the work at Baidu we have focused on this year has centered on the Chinese search giant’s deep learning initiatives, many other critical, albeit less bleeding-edge, applications present true big data challenges.

As Baidu’s Jian Ouyang detailed this week at the Hot Chips conference, Baidu sits on over an exabyte of data, processes around 100 petabytes per day, updates 10 billion webpages daily, and handles over a petabyte of log updates every 24 hours. These numbers are on par with Google and as one might imagine, it takes a Google-like approach to problem solving at

Baidu Takes FPGA Approach to Accelerating SQL at Scale was written by Nicole Hemsoth at The Next Platform.

Big Blue Aims For The Sky With Power9

Intel has the kind of control in the datacenter that only one vendor in the history of data processing has ever enjoyed. That other company is, of course, IBM, and Big Blue wants to take back some of the real estate it lost in the datacenters of the world in the past twenty years.

The Power9 chip, unveiled at the Hot Chips conference this week, is the best chance the company has had to make some share gains against X86 processors since the Power4 chip came out a decade and a half ago and set IBM on the path to

Big Blue Aims For The Sky With Power9 was written by Timothy Prickett Morgan at The Next Platform.

FPGA Based Deep Learning Accelerators Take on ASICs

Over the last couple of years, the idea has taken hold that the most efficient and highest-performance way to accelerate deep learning training and inference is with a custom ASIC—something designed to fit the specific needs of modern frameworks.

While this idea has racked up major mileage, especially recently with the acquisition of Nervana Systems by Intel (and competitive efforts from Wave Computing and a handful of other deep learning chip startups), yet another startup is challenging the idea that a custom ASIC is the smart, cost-effective path.

The argument is a simple one; deep learning frameworks are not unified, they are

FPGA Based Deep Learning Accelerators Take on ASICs was written by Nicole Hemsoth at The Next Platform.

ARM Puts Some Muscle Into Vector Number Crunching

If the ARM processor in its many incarnations is to take on the reigning Xeon champ in the datacenter and the born-again Power processor that is also trying to knock Xeons from the throne, it is going to need some bigger vector math capabilities. This is why, as we have previously reported, supercomputer maker Fujitsu has teamed up with ARM Holdings to add better vector processing to the ARM architecture.

Details of that new vector format, known as Scalable Vector Extension (SVE), were revealed by ARM at the Hot Chips 28 conference in Silicon Valley, and any licensee
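A defining trait of SVE is that code does not hard-code the hardware vector width; the same binary runs on implementations with anywhere from 128-bit to 2048-bit vectors, with predicate masks covering loop tails. As a rough conceptual model — written in plain Python for illustration, not actual ARM code, with the lane count `VL` an arbitrary stand-in for whatever width the hardware reports — the vector-length-agnostic loop looks like this:

```python
# Conceptual Python model of an SVE vector-length-agnostic loop.
# Assumption: VL stands in for the lane count the hardware would report;
# the mask models SVE's whilelt-style predication for the loop tail.
VL = 8  # lanes per "vector"; real SVE hardware picks this, not the code

def predicated_add(a, b):
    """c[i] = a[i] + b[i], processed VL lanes at a time with a tail mask."""
    n = len(a)
    c = [0] * n
    i = 0
    while i < n:
        # Predicate: lane is active only while its index is in bounds,
        # so the final partial vector needs no separate scalar cleanup.
        mask = [i + lane < n for lane in range(VL)]
        for lane in range(VL):
            if mask[lane]:           # inactive lanes simply do nothing
                c[i + lane] = a[i + lane] + b[i + lane]
        i += VL
    return c

print(predicated_add(list(range(1, 11)), [10] * 10))
```

Because the loop structure never mentions a fixed width, changing `VL` changes only how many trips the loop takes, not the result — which is precisely the portability property SVE is after.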

ARM Puts Some Muscle Into Vector Number Crunching was written by Timothy Prickett Morgan at The Next Platform.

Specialized Supercomputing Cloud Turns Eye to Machine Learning

Back in 2010, when the term “cloud computing” was still laden with peril and mystery for many users in enterprise and high performance computing, HPC cloud startup Nimbix stepped out to tackle that perceived risk for some of the most challenging, latency-sensitive applications.

At the time, there were only a handful of small companies catering to the needs of high performance computing applications and those that existed were developing clever middleware to hook into AWS infrastructure. There were a few companies offering true “HPC as a service” (distinct datacenters designed to fit such workloads that could be accessed via a

Specialized Supercomputing Cloud Turns Eye to Machine Learning was written by Nicole Hemsoth at The Next Platform.

Why Intel Is Tweaking Xeon Phi For Deep Learning

If there is anything that chip giant Intel has learned over the past two decades as it has gradually climbed to dominance in processing in the datacenter, it is, ironically, that one size most definitely does not fit all. Quite the opposite, and increasingly so.

As the tight co-design of hardware and software continues in all parts of the IT industry, we can expect fine-grained customization for very precise – and lucrative – workloads, like data analytics and machine learning, just to name two of the hottest areas today.

Software will run most efficiently on hardware that is tuned for

Why Intel Is Tweaking Xeon Phi For Deep Learning was written by Timothy Prickett Morgan at The Next Platform.

Growing Hyperconverged Platforms Takes Patience, Time, And Money

In this day and age when the X86 server has pretty much taken over compute in the datacenter, enterprise customers still have their preferences and prejudices when it comes to the make and model of X86 machine that they deploy to run their applications. So a company that is trying to get its software into the datacenter, as server-storage hybrid Nutanix is, needs to befriend the big incumbent server makers and get its software onto their boxes.

This is not always an easy task, given that some of these companies have their own hyperconverged storage products or they have a

Growing Hyperconverged Platforms Takes Patience, Time, And Money was written by Timothy Prickett Morgan at The Next Platform.

Seven Years Later, SGI Finds a New Ending

Seven years ago, it was the end for SGI. The legendary company had gone bankrupt, its remains were up for liquidation, and its relatively few remaining loyal customers were left in limbo.

This week, SGI reached a new ending, significantly different from its last one, as HPE announced an intended deal to purchase the company for approximately $275 million.

SGI was reincarnated in 2009 when Rackable bought its assets, including its brand, off the scrap heap, for only $42.5 million (originally reported as $25 million at the time, but later updated). Rackable—that is to say, the new SGI—protected employees, key

Seven Years Later, SGI Finds a New Ending was written by Nicole Hemsoth at The Next Platform.

Intel Leverages Chip Might To Etch Photonics Future

Computing has gone through a few waves. There was human-to-human computing in the first few decades, and in recent years it has been dominated by human-to-machine computing with hyperscale consumer-facing applications, and we are on the cusp of a third wave of machine-to-machine computing that will swell compute, storage, and networking to untold zettabytes of traffic.

Under such data strain, there is an explosive need for bandwidth across datacenters as a whole, but particularly among hyperscalers with their hundreds of millions to billions of users. (Ironically, some datacenters are only now moving to 10

Intel Leverages Chip Might To Etch Photonics Future was written by Timothy Prickett Morgan at The Next Platform.

The Cloud Startup that Just Keeps Kicking

Many startups have come and gone since the early days of cloud, but when it comes to those that started small and grew organically with the expansion of use cases, Cycle Computing still stands tall.

Tall being relative, of course. As with that initial slew of cloud startups, a lot of investment money has sloshed around as well. As Cycle Computing CEO Jason Stowe reminds The Next Platform, the small team started with an $8,000 credit card bill, set its sights on the burgeoning needs of scientific computing users hungry for spare compute capacity, and didn’t take funding until

The Cloud Startup that Just Keeps Kicking was written by Nicole Hemsoth at The Next Platform.

Intel SSF Optimizations Boost Machine Learning

Data scientists and deep and machine learning researchers rely on frameworks and libraries such as Torch, Caffe, TensorFlow, and Theano. Studies by Colfax Research and Kyoto University have found that existing open source packages such as Torch and Theano deliver significantly faster performance through the use of Intel Scalable System Framework (Intel SSF) technologies: the Intel compilers and performance libraries, including the Intel Math Kernel Library (Intel MKL), Intel MPI (Message Passing Interface), Intel Threading Building Blocks (Intel TBB), and the Intel Distribution for Python (Intel Python).
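The speedups these studies report come largely from routing a framework's inner math loops through tuned library kernels instead of interpreted code. As a hedged illustration of that principle — using NumPy, which dispatches to whatever BLAS it was built against (MKL in Intel's distribution, though that linkage is an assumption about the build, not something this snippet verifies) — the same dot product can be computed both ways:

```python
# Illustration of the library-optimization principle behind the reported
# speedups: one dot product via a tuned BLAS kernel (through NumPy),
# one via an interpreter-bound pure-Python loop. Whether the BLAS is
# MKL depends on how NumPy was built; this sketch does not check that.
import numpy as np

def dot_python(a, b):
    """Reference implementation, one interpreted multiply-add per element."""
    return sum(x * y for x, y in zip(a, b))

a = np.random.rand(100_000)
b = np.random.rand(100_000)

fast = float(np.dot(a, b))          # dispatches to the linked BLAS kernel
slow = dot_python(a.tolist(), b.tolist())

# Both paths compute the same value (to floating-point reordering error);
# the library path is typically orders of magnitude faster at this size.
assert abs(fast - slow) < 1e-6 * abs(fast)
print("results agree:", fast)
```

The point the Colfax and Kyoto studies make is essentially this one writ large: swapping the slow path for the tuned path inside Torch or Theano leaves results unchanged while cutting training time substantially.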

Andrey Vladimirov (Head of HPC Research, Colfax Research) noted

Intel SSF Optimizations Boost Machine Learning was written by Nicole Hemsoth at The Next Platform.

Getting Cloud Out Of A Fugue State

The polyphonic weaving of a fugue in baroque music is a beautiful thing and an apt metaphor for how we want orchestration on cloud infrastructure to behave in a harmonic fashion. Unfortunately, most cloudy infrastructure is in more of a fugue state, complete with multiple personalities and amnesia.

A startup founded by some architects and engineers from Amazon Web Services wants to get the metaphor, and therefore the tools, right, and it has just popped out of stealth mode under the apt name Fugue to do just that.

Programmers are in charge of some of the largest and most profitable

Getting Cloud Out Of A Fugue State was written by Timothy Prickett Morgan at The Next Platform.