What is good for the simulation and the machine learning is, as it turns out, also good for the database. The performance and thermal limits of traditional CPUs have made GPUs the go-to accelerator for these workloads at extreme scale, and now databases, which are thread monsters in their own right, are also turning to GPUs to get a performance and scale boost.
Commercializing GPU databases takes time, and Kinetica, formerly known as GPUdb, is making a bit of a splash ahead of the Strata+Hadoop World conference next week as it brags about the performance and scale of the parallel …
Pushing Database Scalability Up And Out With GPUs was written by Timothy Prickett Morgan at The Next Platform.
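To make the appeal concrete, here is a minimal, hypothetical sketch – not Kinetica's engine, just an illustration – of the kind of columnar scan-and-aggregate query that maps naturally onto thousands of GPU threads. It assumes an NVIDIA GPU with the CuPy library installed; the column names and values are invented for the example.

    # Hypothetical illustration of a columnar filter-and-sum on CPU vs. GPU.
    # Not any vendor's engine; it just shows why scan/aggregate queries
    # parallelize well on GPUs.
    import numpy as np

    try:
        import cupy as cp  # requires an NVIDIA GPU and CUDA
    except ImportError:
        cp = None

    def revenue_for_region_cpu(region_ids, revenue, target_region):
        # Classic columnar scan: filter one column, aggregate another.
        mask = region_ids == target_region
        return float(revenue[mask].sum())

    def revenue_for_region_gpu(region_ids, revenue, target_region):
        # Same query, but the filter and reduction fan out across GPU threads.
        d_regions = cp.asarray(region_ids)
        d_revenue = cp.asarray(revenue)
        mask = d_regions == target_region
        return float(d_revenue[mask].sum())

    if __name__ == "__main__":
        n = 10_000_000  # ten million rows of made-up data
        rng = np.random.default_rng(42)
        region_ids = rng.integers(0, 50, size=n, dtype=np.int32)
        revenue = rng.random(n, dtype=np.float32)
        print("CPU:", revenue_for_region_cpu(region_ids, revenue, 7))
        if cp is not None:
            print("GPU:", revenue_for_region_gpu(region_ids, revenue, 7))

The same filter and reduction pattern is what GPU databases push across many devices and nodes to get the scan rates they advertise.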
Two years ago, when Big Blue put a stake through the heart of its impartial attitude about the X86 server business, it was also putting a stake in the ground for its Power systems business.
IBM bet that it could make more money selling Power machinery to its existing customer base while expanding it out to hyperscalers like Google through the OpenPower Foundation, and at the same time gradually building out a companion public cloud offering of Power machinery on its SoftLayer cloud and through partners like Rackspace Hosting. This is a big bet, and …
IBM Builds A Bridge Between Private And Public Power Clouds was written by Timothy Prickett Morgan at The Next Platform.
As Moore’s Law spirals downward, ultra-high bandwidth memory matched with custom accelerators for specialized workloads might be the only saving grace for the pace of innovation we are accustomed to.
With advancements on both the memory and ASIC sides driven by machine learning and other workloads pushing for greater innovation, this could be great news for big datacenters with inefficient legions of machines dedicated to ordinary processing tasks, jobs that could be handled far more efficiently with more tailored approaches.
We have recently described this trend in the context of architectures built on stacked memory with FPGAs and other custom accelerators inside, and we …
Baking Specialization into Hardware Cools CPU Concerns was written by Nicole Hemsoth at The Next Platform.
It’s elastic! It’s on-demand! It scales dynamically to meet your needs! It streamlines your operations, gives you persistent access to data, and it’s always, always cheaper. It’s cloud computing, and it’s here to save your enterprise.
And yet, for all the promise of cloud, there are still segments of IT, such as HPC and many categories of big data analytics, that have been resistant to wholesale outsourcing to public cloud resources. At present, cloud computing makes up only 2.4% of the HPC market by revenue, and although Intersect360 Research forecasts its growth at a robust 10.9%, that still keeps …
The Three Great Lies of Cloud Computing was written by Nicole Hemsoth at The Next Platform.
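For a rough sense of why that share stays small, here is some back-of-the-envelope arithmetic in Python. The 2.4% share and 10.9% growth rate are the figures cited above; the 5% growth rate assumed for the overall HPC market is purely a placeholder for illustration and does not come from the article.

    # Back-of-the-envelope only: how the cloud share of HPC revenue evolves
    # if cloud HPC grows 10.9% a year (cited above) while the overall HPC
    # market grows at an ASSUMED 5% a year (placeholder, not from the article).
    cloud_share = 0.024           # 2.4% of HPC revenue today
    cloud_growth = 0.109          # forecast cited above
    hpc_growth_assumed = 0.05     # assumption for illustration

    for year in range(1, 6):
        cloud_share *= (1 + cloud_growth) / (1 + hpc_growth_assumed)
        print(f"Year {year}: cloud is roughly {cloud_share:.1%} of HPC revenue")
    # Even after five years of robust growth, cloud stays a low single-digit share.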
With the record-breaking $60 billion Dell/EMC acquisition now complete, both of these companies and their customers now have more options than ever before to meet evolving storage needs. Joining forces helps the newly minted Dell Technologies combine the best of both worlds to better serve customers by blending EMC storage and support with Dell pricing and procurement.
But there is some trouble in paradise. Even when sold by the same vendor, most storage systems have been designed as secluded islands of data, meaning they aren’t terribly good at talking to each other.
In fact, this silo effect is exacerbated …
Modern Storage Software Erodes Resistant Data Silos was written by Timothy Prickett Morgan at The Next Platform.
One of the reasons why Dell spent $60 billion on the EMC-VMware conglomerate was to become the top supplier of infrastructure in the corporate datacenter. But even before the deal closed, Dell was on its way – somewhat surprisingly to many – to toppling Hewlett Packard Enterprise as the dominant supplier of X86 systems in the world.
But that computing world is set to change, we think. And perhaps more quickly – some might say jarringly – than any of the server incumbents are prepared to absorb.
After Intel, with the help of a push from AMD a decade ago, …
The Server At Peak X86 was written by Timothy Prickett Morgan at The Next Platform.
Last week we described the next stage of deep learning hardware developments in some detail, focusing on a few specific architectures that capture what the rapidly evolving field of machine learning algorithms requires. This week we are focusing on a trend that is moving faster than the devices can keep up with: the codes and application areas that are set to make this market spin in 2017.
It was with reserved skepticism that we listened, not even one year ago, to dramatic predictions about the future growth of the deep learning market—numbers that climbed into the billions despite the fact …
The Next Wave of Deep Learning Applications was written by Nicole Hemsoth at The Next Platform.
The jury is still out when it comes to how wide-ranging the application set and market potential for quantum computing will be. Optimistic estimates project that in the 2020s it will be a billion-dollar field, while others expect the novelty will wear off and the one company behind the actual production of quantum annealing machines will go bust.
Ultimately, whichever direction the market goes with quantum computing will depend on two things. First, applications of sufficient value to warrant the cost of quantum systems have to be in place. Second, and connected to that point, is the …
So, You Want to Program Quantum Computers? was written by Nicole Hemsoth at The Next Platform.
While containers are old news in enterprise circles, by and large, high performance computing centers have just recently begun to consider packaging up their complex applications. A few centers are known for their rapid progress in this area, but for smaller sites, especially those that serve users from a diverse domain base via medium-sized HPC clusters, progress has been slower—even though containers could zap some serious deployment woes and make collaboration simpler.
When it comes to containers in HPC, there are a couple of noteworthy efforts that go beyond the more enterprise-geared Docker and CoreOS options. These include Shifter out …
When Will Containers Be the Total Package for HPC? was written by Nicole Hemsoth at The Next Platform.
No one knows for sure how pervasive deep learning and artificial intelligence are in the aggregate across all of the datacenters in the world, but what we do know is that the use of these techniques is growing and could represent a big chunk of the processing that gets done every millisecond of every day.
We spend a lot of time thinking about such things, and as Nvidia was getting ready to launch its new Tesla P4 and P40 GPU accelerator cards, we asked Ian Buck, vice president of accelerated computing at Nvidia, just how much computing could be devoted …
Nvidia Pushes Deep Learning Inference With New Pascal GPUs was written by Timothy Prickett Morgan at The Next Platform.
We have heard about a great number of new architectures and approaches to scalable and efficient deep learning processing that sit outside of the standard CPU, GPU, and FPGA box, and while each is different, many are leveraging a common element at the all-important memory layer.
The Hybrid Memory Cube (HMC), which we expect to see much more of over the coming year and beyond, is at the heart of several custom architectures to suit the deep learning market. Nervana Systems, which was recently acquired by Intel (a close partner of HMC maker Micron), Wave Computing, and other research efforts all see a …
Deep Learning Architectures Hinge on Hybrid Memory Cube was written by Nicole Hemsoth at The Next Platform.
If money were no object and accountants allowed companies to write off investments in systems instantly, then datacenters would be tossing hardware into the scrap heap as soon as new technology came along. But in the real world, companies have to take a more measured approach to adding new systems and upgrading old ones, and that can make the time when customers buy shiny new boxes a bit tough to predict.
Forecasting sales and trying to close them are two of the many challenges that all server, storage, and switching vendors face, and Supermicro, which straddles the line between the …
Surfing On Tech Waves With Supermicro was written by Timothy Prickett Morgan at The Next Platform.
Over the long course of IT history, the burden has been on the software side to keep pace with rapid hardware advances—to exploit new capabilities and boldly go where no benchmarks have gone before. However, as we swiftly ride into a new age where machine learning and deep learning take the place of more static applications and software advances are far faster than chipmakers can tick and tock to, hardware device makers are scrambling.
That problem is profound enough on its own, and is an entirely different architectural dance than general purpose devices have ever had to step to. Shrinking …
Hardware Slaves to the Master Algorithm was written by Nicole Hemsoth at The Next Platform.
The Hewlett Packard that Carly Fiorina and Mark Hurd created through aspiration and acquisition is hardly recognizable in the increasingly streamlined Hewlett Packard Enterprise that Meg Whitman is whittling.
We joked earlier this week that with its acquisition of VMware and EMC and the sales of its outsourcing and software businesses, the new Dell has stopped trying to be the old IBM. Well, the same is true of the new HP. It is not clear when and if Oracle will get the same memo, but it seems content to build engineered systems, from top to bottom, and we …
HPE Trims Back To The Core Enterprise Essentials was written by Timothy Prickett Morgan at The Next Platform.
It is week one of the new Dell Technologies, the conglomerate glued together with $60 billion from the parts of the old Dell that it has not sold off to raise cash to buy storage giant EMC and, by extension, server virtualization juggernaut VMware, which is owned mostly by EMC but remains a public company in the wake of the deal.
By adding EMC and VMware to itself and shedding its outsourcing services and software business units, Dell is becoming the largest supplier of IT gear in the world, at least by its own reckoning. You could argue that consumer PCs …
The New Dell Stops Trying To Be The Old IBM was written by Timothy Prickett Morgan at The Next Platform.
The very first systems that allow for GPUs to be hooked directly to CPUs using Nvidia’s NVLink high-speed interconnect are coming to market now that Big Blue is updating its Power Systems LC line of Linux-based systems with the help of hardware partners in the OpenPower Foundation collective.
Interestingly, the advent of the Power Systems S822LC for HPC system, code-named “Minsky” inside of IBM because human beings like real names even if marketeers are not allowed to use them, gives the DGX-1 machine crafted by Nvidia for deep learning workloads some competition. Right now, these systems are the only two machines on …
Refreshed IBM Power Linux Systems Add NVLink was written by Timothy Prickett Morgan at The Next Platform.
Intel has planted some solid stakes in the ground for the future of deep learning over the last month with its acquisition of deep learning chip startup, Nervana Systems, and most recently, mobile and embedded machine learning company, Movidius.
These new pieces will snap into Intel’s still-forming puzzle for capturing the supposed billion-plus dollar market ahead for deep learning, which is complemented by its own Knights Mill effort and software optimization work on machine learning codes and tooling. At the same time, just down the coast, Nvidia is firming up the market for its own GPU training and inference …
The Next Wave of Deep Learning Architectures was written by Nicole Hemsoth at The Next Platform.
While hyperscalers and HPC centers like the bleeding edge – their very existence commands that they be on it – enterprises are a more conservative lot. No IT supplier ever went broke counting on enterprises to be risk averse, but plenty of companies have gone the way of all flesh by not innovating enough and not seeing market inflections when they exist.
VMware, the virtualization division of the new Dell Technologies empire that formally comes into being this week, does not want to miss such changes and very much wants to continue to extract revenues and profits from its impressively …
The Vast Potential For VMware’s OpenStack Cloud was written by Timothy Prickett Morgan at The Next Platform.
The supercomputing industry is accustomed to 1,000X performance strides, and that is because people like to think in big round numbers and bold concepts. Every leap in performance is exciting not just because of the engineering challenges in bringing systems with kilo, mega, tera, peta, and exa scales into being, but because of the science that is enabled by such increasingly massive machines.
But every leap is getting a bit more difficult as imagination meets up with the constraints of budgets and the laws of physics. The exascale leap is proving to be particularly difficult, and not just because it …
Exascale Might Prove To Be More Than A Grand Challenge was written by Timothy Prickett Morgan at The Next Platform.
There is no workload in the datacenter that can’t, in theory and in practice, be supplied as a service from a public cloud. Big Data as a Service, or BDaaS for short, is an emerging category of services that delivers data processing for analytics in the cloud, and it is getting a lot of buzz these days – and for good reason. These BDaaS products vary in features, functions, and target use cases, but all address the same basic problem: big data and data warehousing in the cloud are deceptively challenging, and customers want to abstract away the complexity.
Data …
Big Data Rides Up The Cloud Abstraction Wave was written by Timothy Prickett Morgan at The Next Platform.