Among the many challenges ahead for programming in the exascale era are the portability and performance of codes on heterogeneous machines.
Since future architectures will bring new memory and accelerator capabilities, along with advances in general purpose cores, developing on a solid base that offers flexibility and support for many hardware architectures is a priority. Some contend that the best place to start is with C++, which has been gathering steam in HPC in recent years.
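To make that argument concrete, here is a minimal sketch (our illustration, not an example from the article) of the single-source style that C++ performance portability layers such as Kokkos enable: one kernel, written once, compiled for CPU threads or GPU backends depending on how the library was built.

```cpp
// Minimal sketch of single-source C++ portability using Kokkos.
// The same kernel can target OpenMP threads, CUDA, or other
// backends chosen when Kokkos itself is configured and built.
#include <Kokkos_Core.hpp>

int main(int argc, char* argv[]) {
    Kokkos::initialize(argc, argv);
    {
        const int N = 1 << 20;
        Kokkos::View<double*> x("x", N), y("y", N);
        Kokkos::deep_copy(x, 1.0);  // fill x with 1.0
        Kokkos::deep_copy(y, 2.0);  // fill y with 2.0
        const double a = 0.5;

        // axpy: y = a*x + y, expressed once for every architecture.
        Kokkos::parallel_for("axpy", N, KOKKOS_LAMBDA(const int i) {
            y(i) = a * x(i) + y(i);
        });
        Kokkos::fence();  // wait for the kernel before shutting down
    }
    Kokkos::finalize();
    return 0;
}
```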
As our own Douglas Eadline noted back in January, choosing a programming language for HPC used to be an easy task. Select …
Exascale Code Performance and Portability in the Tune of C was written by Nicole Hemsoth at The Next Platform.
In the public cloud business, scale is everything – hyper, in fact – and having too many different kinds of compute, storage, or networking makes support more complex and investment in infrastructure more costly. So when a big public cloud like Amazon Web Services invests in a non-standard technology, that means something. In the case of Nvidia’s Tesla accelerators, it means that GPU compute has gone mainstream.
It may not be obvious, but AWS tends to hang back on some of the Intel Xeon compute on its cloud infrastructure, at least compared to the largest supercomputer centers and hyperscalers like …
Amazon Gets Serious About GPU Compute On Clouds was written by Timothy Prickett Morgan at The Next Platform.
What constitutes an operating system changes with the work a system performs and the architecture that defines how that work is done. All operating systems tend to expand out from their initial core functionality, embedding more and more functions. And then, every once in a while, there is a break, a shift in technology that marks a fundamental change in how computing gets done.
It is fair to say that Windows Server 2016, which made its formal debut at Microsoft’s Ignite conference today and which starts shipping on October 1, is at the fulcrum of a profound change where an …
Windows Server 2016: End Of One Era, Start Of Another was written by Timothy Prickett Morgan at The Next Platform.
Oil and natural resource discovery and production is an incredibly risky endeavor, with the cost of simply finding a new barrel of oil tripling over the last ten years. Discovery teams want to ensure they are only drilling in the most lucrative locations, which these days means looking to sources of hydrocarbons that are, for a bevy of reasons, increasingly inaccessible.
Even with renewable resources like wind, there are still major financial risks. Accurately predicting shifting output and choosing locations for expensive turbines are two early-stage challenges, and maintaining, monitoring, and optimizing those turbines is an ongoing pressure.
The common thread …
Exascale Capabilities Underpin Future of Energy Sector was written by Nicole Hemsoth at The Next Platform.
When it comes to deep learning innovation on the hardware front, few other research centers have been as forthcoming with their results as Baidu. Specifically, the company’s Silicon Valley AI Lab (SVAIL) has been the center of some noteworthy work on GPU-based deep learning as well as exploratory efforts using novel architectures specifically for ultra-fast training and inference.
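As a rough illustration of what such a hardware yardstick measures, the toy micro-benchmark below (ours, not Baidu’s code) times a dense matrix multiply, the operation that dominates deep learning training, and reports sustained floating point throughput. Production benchmarks sweep many matrix shapes and precisions and call tuned vendor libraries such as cuBLAS; the naive loop here is only a stand-in for the idea.

```cpp
// Toy sketch of a deep learning micro-benchmark: time one GEMM
// (C = A * B) and report GFLOP/s. Real benchmarks sweep many
// matrix shapes and use tuned vendor libraries, not naive loops.
#include <chrono>
#include <cstdio>
#include <vector>

static void gemm(int M, const std::vector<float>& A,
                 const std::vector<float>& B, std::vector<float>& C) {
    for (int i = 0; i < M; ++i)
        for (int k = 0; k < M; ++k) {
            const float a = A[i * M + k];
            for (int j = 0; j < M; ++j)
                C[i * M + j] += a * B[k * M + j];
        }
}

int main() {
    const int M = 512;  // kept small so the naive version finishes quickly
    std::vector<float> A(M * M, 1.0f), B(M * M, 1.0f), C(M * M, 0.0f);

    const auto t0 = std::chrono::steady_clock::now();
    gemm(M, A, B, C);
    const auto t1 = std::chrono::steady_clock::now();

    const double secs = std::chrono::duration<double>(t1 - t0).count();
    const double flops = 2.0 * M * M * M;  // one multiply and one add per term
    std::printf("%dx%dx%d GEMM: %.3f s, %.2f GFLOP/s\n",
                M, M, M, secs, flops / secs / 1e9);
    return 0;
}
```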
It stands to reason that teams at SVAIL don’t simply throw hardware at the wall to see what sticks, even though they seem to have more to toss around than most. Over the last couple of years, they have broken down …
Baidu’s New Yardstick for Deep Learning Hardware Makers was written by Nicole Hemsoth at The Next Platform.
If you want to study how datacenter design has changed over the past two decades, a good place to visit is Quincy, Washington. There are five different datacenter operators in this small farming community of around 7,000 people, including Microsoft, Yahoo, Intuit, Sabey Data Centers, and Vantage Data Centers, and they have located there thanks to the proximity of Quincy to hydroelectric power generated from the Columbia River and the relatively cool and arid climate, which can be used to great advantage to keep servers, storage, and switches cool.
All of the datacenter operators are pretty secretive about their glass …
A Rare Tour Of Microsoft’s Hyperscale Datacenters was written by Timothy Prickett Morgan at The Next Platform.
What is good for the simulation and the machine learning is, as it turns out, also good for the database. The performance and thermal limits of traditional CPUs have made GPUs the go-to accelerator for these workloads at extreme scale, and now databases, which are thread monsters in their own right, are also turning to GPUs to get a performance and scale boost.
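A quick sketch of why that mapping works (our illustration, not any vendor’s code): a columnar filter tests every row independently, so a query like SELECT COUNT(*) … WHERE decomposes into exactly the kind of massive data-parallel scan that thousands of GPU threads chew through. The stand-in below expresses the same shape with C++17 parallel algorithms on the CPU; a GPU database runs the equivalent kernel across its streaming multiprocessors.

```cpp
// Sketch: a predicate scan over one column, the data-parallel core
// of a database filter. Each element is tested independently, which
// is exactly the shape that suits thousands of lightweight threads.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <execution>
#include <vector>

int main() {
    // One column of a hypothetical fact table: 100 million amounts.
    std::vector<std::uint32_t> amounts(100'000'000);
    for (std::size_t i = 0; i < amounts.size(); ++i)
        amounts[i] = static_cast<std::uint32_t>(i % 10'000);

    // SELECT COUNT(*) WHERE amount > 9000, as a parallel reduction.
    const auto hits = std::count_if(std::execution::par_unseq,
                                    amounts.begin(), amounts.end(),
                                    [](std::uint32_t a) { return a > 9'000; });
    std::printf("matching rows: %td\n", static_cast<std::ptrdiff_t>(hits));
    return 0;
}
```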
Commercializing GPU databases takes time, and Kinetica, formerly known as GPUdb, is making a bit of a splash ahead of the Strata+Hadoop World conference next week as it brags about the performance and scale of the parallel …
Pushing Database Scalability Up And Out With GPUs was written by Timothy Prickett Morgan at The Next Platform.
Two years ago, when Big Blue put a stake through the heart of its impartial attitude about the X86 server business, it was also putting a stake in the ground for its Power systems business.
IBM bet that it could make more money selling Power machinery to its existing customer base while expanding it out to hyperscalers like Google through the OpenPower Foundation, and at the same time gradually building out a companion public cloud offering of Power machinery on its SoftLayer cloud and through partners like Rackspace Hosting. This is a big bet, and …
IBM Builds A Bridge Between Private And Public Power Clouds was written by Timothy Prickett Morgan at The Next Platform.
As Moore’s Law spirals downward, ultra-high bandwidth memory matched with custom accelerators for specialized workloads might be the only saving grace for the pace of innovation we are accustomed to.
With machine learning and other demanding workloads driving greater innovation on both the memory and ASIC sides, this could be great news for big datacenters with inefficient legions of machines dedicated to ordinary processing tasks – jobs that could be handled far more efficiently with more tailored approaches.
We have recently described this trend in the context of architectures built on stacked memory with FPGAs and other custom accelerators inside, and we …
Baking Specialization into Hardware Cools CPU Concerns was written by Nicole Hemsoth at The Next Platform.
It’s elastic! It’s on-demand! It scales dynamically to meet your needs! It streamlines your operations, gives you persistent access to data, and it’s always, always cheaper. It’s cloud computing, and it’s here to save your enterprise.
And yet, for all the promise of cloud, there are still segments of IT, such as HPC and many categories of big data analytics, that have been resistant to wholesale outsourcing to public cloud resources. At present cloud computing makes up only 2.4% of the HPC market by revenue, and although Intersect360 Research forecasts its growth at a robust 10.9%, that still keeps …
The Three Great Lies of Cloud Computing was written by Nicole Hemsoth at The Next Platform.
With the record-breaking $60 billion Dell/EMC acquisition complete, both of these companies and their customers now have more options than ever before to meet evolving storage needs. Joining forces helps the newly minted Dell Technologies combine the best of both worlds to better serve customers by blending EMC storage and support with Dell pricing and procurement.
But there is some trouble in paradise. Even when sold by the same vendor, most storage systems have been designed as secluded islands of data, meaning they aren’t terribly good at talking to each other.
In fact, this silo effect is exacerbated …
Modern Storage Software Erodes Resistant Data Silos was written by Timothy Prickett Morgan at The Next Platform.
One of the reasons why Dell spent $60 billion on the EMC-VMware conglomerate was to become the top supplier of infrastructure in the corporate datacenter. But even before the deal closed, Dell was on its way – somewhat surprisingly to many – to toppling Hewlett Packard Enterprise as the dominant supplier of X86 systems in the world.
But that computing world is set to change, we think. And perhaps more quickly – some might say jarringly – than any of the server incumbents are prepared to absorb.
After Intel, with the help of a push from AMD a decade ago, …
The Server At Peak X86 was written by Timothy Prickett Morgan at The Next Platform.
Last week we described the next stage of deep learning hardware developments in some detail, focusing on a few specific architectures that capture what the rapidly evolving field of machine learning algorithms requires. This week we are focusing on a trend that is moving faster than the devices can keep up with: the codes and application areas that are set to make this market spin in 2017.
It was with reserved skepticism that we listened, not even one year ago, to dramatic predictions about the future growth of the deep learning market—numbers that climbed into the billions despite the fact …
The Next Wave of Deep Learning Applications was written by Nicole Hemsoth at The Next Platform.
The jury is still out when it comes to how wide-ranging the application set and market potential for quantum computing will be. Optimistic estimates project that in the 2020s it will be a billion-dollar field, while others expect the novelty will wear off and the one company behind the actual production of quantum annealing machines will go bust.
Ultimately, whichever direction the market goes with quantum computing will depend on two things. First, applications of sufficient value to warrant the cost of quantum systems have to be in place. Second, and connected to that point, is the …
So, You Want to Program Quantum Computers? was written by Nicole Hemsoth at The Next Platform.
While containers are old news in enterprise circles, by and large, high performance computing centers have just recently begun to consider packaging up their complex applications. A few centers are known for their rapid progress in this area, but for smaller sites, especially those that serve users from a diverse domain base via medium-sized HPC clusters, progress has been slower—even though containers could zap some serious deployment woes and make collaboration simpler.
When it comes to containers in HPC, there are a couple of noteworthy efforts that go beyond the more enterprise-geared Docker and CoreOS options. These include Shifter out …
When Will Containers Be the Total Package for HPC? was written by Nicole Hemsoth at The Next Platform.
No one knows for sure how pervasive deep learning and artificial intelligence are in the aggregate across all of the datacenters in the world, but what we do know is that the use of these techniques is growing and could represent a big chunk of the processing that gets done every millisecond of every day.
We spend a lot of time thinking about such things, and as Nvidia was getting ready to launch its new Tesla P4 and P40 GPU accelerator cards, we asked Ian Buck, vice president of accelerated computing at Nvidia, just how much computing could be devoted …
Nvidia Pushes Deep Learning Inference With New Pascal GPUs was written by Timothy Prickett Morgan at The Next Platform.
We have heard about a great number of new architectures and approaches to scalable and efficient deep learning processing that sit outside the standard CPU, GPU, and FPGA box, and while each is different, many leverage a common element at the all-important memory layer.
The Hybrid Memory Cube (HMC), which we expect to see much more of over the coming year and beyond, is at the heart of several custom architectures to suit the deep learning market. Nervana Systems, which was recently acquired by Intel (a close partner of HMC maker Micron), Wave Computing, and other research efforts all see a …
Deep Learning Architectures Hinge on Hybrid Memory Cube was written by Nicole Hemsoth at The Next Platform.
If money were no object and accountants allowed companies to write off investments in systems instantly, then datacenters would be tossing hardware into the scrap heap as soon as new technology came along. But in the real world, companies have to take a more measured approach to adding new systems and upgrading old ones, and that can make the time when customers buy shiny new boxes a bit tough to predict.
Forecasting sales and trying to close them are two of the many challenges that all server, storage, and switching vendors face, and Supermicro, which straddles the line between the …
Surfing On Tech Waves With Supermicro was written by Timothy Prickett Morgan at The Next Platform.
Over the long course of IT history, the burden has been on the software side to keep pace with rapid hardware advances—to exploit new capabilities and boldly go where no benchmarks have gone before. However, as we swiftly ride into a new age where machine learning and deep learning take the place of more static applications and software advances are far faster than chipmakers can tick and tock to, hardware device makers are scrambling.
That problem is profound enough on its own, and it is an entirely different architectural dance than general purpose devices have ever had to step to. Shrinking …
Hardware Slaves to the Master Algorithm was written by Nicole Hemsoth at The Next Platform.
The Hewlett Packard that Carly Fiorina and Mark Hurd created through aspiration and acquisition is hardly recognizable in the increasingly streamlined Hewlett Packard Enterprise that Meg Whitman is whittling.
We joked earlier this week that, with its acquisition of VMware and EMC and the sales of its outsourcing and software businesses, the new Dell had stopped trying to be the old IBM. Well, the same is true of the new HP. It is not clear when and if Oracle will get the same memo, but it seems content to build engineered systems, from top to bottom, and we …
HPE Trims Back To The Core Enterprise Essentials was written by Timothy Prickett Morgan at The Next Platform.