Category Archives for "IT Industry"

Pushing Back Against Cheap and Deep Storage

It is not always easy, but several companies dedicated to the supercomputing market have managed to retune their wares to fit more mainstream market niches. This has been true across both the hardware and software subsets of high performance computing, and such efforts have been aided by a well-timed shift in enterprise needs toward more robust, compute- and data-intensive workhorses as new workloads keep emerging, most of them driven by dramatic increases in data volumes and analytical capabilities.

For supercomputer makers, the story is a clear one. However, on the storage side, especially for those select few

Pushing Back Against Cheap and Deep Storage was written by Nicole Hemsoth at The Next Platform.

HPE Takes On The High End With SGI Expertise

SGI has always had scalable technology that should have been deployed more broadly in the academic, government, and enterprise datacenters of the world. But fighting for those budget dollars at the high end of the market always came down to needing more feet on the street, a larger global footprint for service and support, and broader certification of software stacks to exploit that SGI iron.

Now that Hewlett Packard Enterprise owns SGI – or more precisely, owns its operations in the United States and will finish off its acquisition, announced in August, probably sometime in the middle of next

HPE Takes On The High End With SGI Expertise was written by Timothy Prickett Morgan at The Next Platform.

Nvidia CEO’s “Hyper-Moore’s Law” Vision for Future Supercomputers

Over the last year in particular, we have documented the merger of high performance computing and deep learning and their various shared hardware and software ties. This next year promises far more on both fronts, and while GPU maker Nvidia might not have seen it coming to this extent when it was outfitting the former top-ranked “Titan” supercomputer with its GPUs, the company sensed a convergence on the horizon when the first hyperscale deep learning shops were deploying CUDA and GPUs to train neural networks.

All of this portends an exciting year ahead, and for once, the mighty CPU

Nvidia CEO’s “Hyper-Moore’s Law” Vision for Future Supercomputers was written by Nicole Hemsoth at The Next Platform.

OpenHPC Pedal Put To The Compute Metal

The ultimate success of any platform depends on the seamless integration of diverse components into a synergistic whole – well, as much as is possible in the real world – while at the same time being flexible enough to allow for components to be swapped out and replaced by others to suit personal preferences.

Is OpenHPC, the open source software stack aimed at simulation and modeling workloads that was spearheaded by Intel a year ago, going to be the dominant and unifying platform for high performance computing? Will OpenHPC be analogous to the Linux distributions that grew up around

OpenHPC Pedal Put To The Compute Metal was written by Timothy Prickett Morgan at The Next Platform.

As The Server Goes, So Goes The World

At this point in the 21st Century, a surprisingly large portion of the manufacturing, warehousing, distribution, marketing, and retailing of every good and service known to humankind is dependent on a piece of circuit board with two Xeon processors welded to it, wrapped in some bent sheet metal with a few blinky lights peeking out of the darkness.

Back in 1885, as the United States was beginning its rise to power, Reverend Josiah Strong declared in his populist book, Our Country: “As America goes, so goes the world.” National borders and national interests still exist, but networks cross boundaries

As The Server Goes, So Goes The World was written by Timothy Prickett Morgan at The Next Platform.

FPGAs Give Microsoft a “Von Neumann Tax” Break

At the annual Supercomputing Conference (SC16) last week, the emphasis was on deep learning and its future role as part of supercomputing applications and systems. Before that focus took hold, however, the rise of novel architectures and reconfigurable accelerators (as alternatives to building a custom ASIC) was swift.

Feeding on that trend, a panel exploring non-Von Neumann architectures looked at the different ways the high performance computing set might consider non-stored-program machines and what the many burgeoning options might mean for energy efficiency and performance.

Among the presenters was Gagan Gupta, a computer architect with Microsoft Research, who detailed the

FPGAs Give Microsoft a “Von Neumann Tax” Break was written by Nicole Hemsoth at The Next Platform.

Inside Intel’s Strategy to Integrate Nervana Deep Learning Assets

There is little doubt that 2017 will be a dense year for deep learning. With a sudden new wave of applications that integrate neural networks into existing workflows (not to mention entirely new uses) and a fresh array of hardware architectures to meet them, we expect the space to start shaking out its early winners and losers and show a clearer path ahead.

As we described earlier this week, Intel has big plans to integrate the Nervana ASIC and software stack with its Knights family of processors in the next several years. This effort, codenamed Knights Crest, is a long-term

Inside Intel’s Strategy to Integrate Nervana Deep Learning Assets was written by Nicole Hemsoth at The Next Platform.

Intel Declares War on GPUs at Disputed HPC, AI Border

In Supercomputing Conference (SC) years past, chipmaker Intel has always come forth with a strong story, either as an enabling processor or co-processor force, or more recently, as a prime contractor for a leading-class national lab supercomputer.

But outside of a few announcements at this year’s SC related to beefed-up SKUs for high performance computing and Skylake plans, the real emphasis back in Portland seemed to ring far fainter for HPC and much louder for the newest server tech darlings, deep learning and machine learning. Far from the HPC crowd last week was Intel’s AI Day, an event in

Intel Declares War on GPUs at Disputed HPC, AI Border was written by Nicole Hemsoth at The Next Platform.

Details Emerge On “Summit” Power Tesla AI Supercomputer

The future “Summit” pre-exascale supercomputer that is being built out in late 2017 and early 2018 for the US Department of Energy’s Oak Ridge National Laboratory looks like a giant cluster of systems that might be used for training neural networks. And that is an extremely convenient development.

More than once during the SC16 supercomputing conference this week in Salt Lake City, the Summit system and its companion “Sierra” system, which will be deployed at Lawrence Livermore National Laboratory, were referred to as “AI supercomputers.” This is a reflection of the fact that the national labs around the

Details Emerge On “Summit” Power Tesla AI Supercomputer was written by Timothy Prickett Morgan at The Next Platform.

Networks Drive HPC Harder Than Compute

It is hard to tell which part of the HPC market is more competitive: compute, networking, or storage. From where we sit, there is an increasing level of competition on all three fronts and the pressure to perform, both financially and technically, has never been higher. This is great for customers, of course, who are being presented with lots of technology to choose from. But HPC customers tend to pick architectures for several generations, so there is also pressure on them to make the right choices – whatever that means.

In a sense, enterprises and hyperscalers and cloud builders, who

Networks Drive HPC Harder Than Compute was written by Timothy Prickett Morgan at The Next Platform.

Stacking Up Software To Drive FPGAs Into The Datacenter

Every new hardware device that offers some kind of benefit compared to legacy devices faces the task of overcoming the immense inertia that is imparted to a platform by the software that runs upon it. While FPGAs have so many benefits compared to general purpose CPUs and even GPUs because of their malleability, the inertia seems even heavier.

Any substantial change requires a big payoff to be implemented because change is inherently risky. It doesn’t help that the methods of programming FPGAs efficiently using VHDL and Verilog are so alien to Java and C programmers, and that the tools for

Stacking Up Software To Drive FPGAs Into The Datacenter was written by Timothy Prickett Morgan at The Next Platform.

Wringing Cost and Complexity Out of HPC

The race toward exascale supercomputing gets a lot of attention, as it should. Driving up top-end performance levels in high performance computing (HPC) is essential to generate new insights into the fundamental laws of physics, the origins of the universe, global climate systems, and more. The wow factor is huge.

There is another area of growth in HPC that is less glamorous, but arguably even more important. It is the increasing use of small to mid-sized clusters by individuals and small groups that have long been reliant on workstations to handle their most compute-intensive tasks. Instead of a small number

Wringing Cost and Complexity Out of HPC was written by Nicole Hemsoth at The Next Platform.

The Business Of HPC Is Evolving

With most of the year finished and a new one coming up fast, with a slew of new compute and networking technologies that have been ramping for the past year, and with more on the horizon for a very exciting 2017, now is the natural time to take stock of what has happened in the HPC business and what is expected to happen in the coming years.

The theme of the SC16 supercomputing conference this year is that HPC matters, and of course, we have all known this since the first such machines became distinct from enterprise-class electronic computers back in the 1960s. HPC

The Business Of HPC Is Evolving was written by Timothy Prickett Morgan at The Next Platform.

Intel Sets Up Skylake Xeon For HPC, Knights Mill Xeon Phi For AI

With the “Skylake” Xeon E5 v5 processors not slated until the middle of next year and the “Knights Landing” Xeon Phi processors and Omni-Path interconnect still ramping after entering the HPC space a year ago, there are no blockbuster announcements coming out of Intel this year at the SC16 supercomputing conference in Salt Lake City. But there are some goodies for HPC shops that were unveiled at the event and the chip giant also set the stage for big changes in the coming year in both traditional HPC and its younger and fast-growing sibling, machine learning.

Speaking ahead of the

Intel Sets Up Skylake Xeon For HPC, Knights Mill Xeon Phi For AI was written by Timothy Prickett Morgan at The Next Platform.

A Deep Learning Supercomputer Approach to Cancer Research

Deep learning and machine learning are major themes at this year’s annual Supercomputing Conference (SC16), both in terms of vendors showcasing systems that are a fit for both high performance computing and machine learning, and in the revelation of new efforts to combine traditional simulations with neural networks for greater efficiency and insight.

We have already described this momentum in the context of announcements from supercomputer makers like Cray, which just unveiled a Pascal GPU-based addition to its modeling and simulation-oriented XC supercomputer line, complete with deep learning frameworks integrated into the stack. The question was, how many HPC workloads

A Deep Learning Supercomputer Approach to Cancer Research was written by Nicole Hemsoth at The Next Platform.

How Nvidia’s Own Saturn V DGX-1 Cluster Stacks Up

Not all of the new and interesting high performance computing systems land in the upper echelons of the Top 500 supercomputing list, which was announced at the opening of the SC16 supercomputing conference in Salt Lake City this week. Sometimes, an intriguing system breaks into the list outside of the top ten or twenty most powerful machines in the bi-annual rankings of number-crunching performance, and such is the case with the new “Saturn V” supercomputer built by Nvidia using its latest GPUs and interconnects.

The Saturn V system, nicknamed of course for the NASA launch vehicle that eventually

How Nvidia’s Own Saturn V DGX-1 Cluster Stacks Up was written by Timothy Prickett Morgan at The Next Platform.

A Closer Look at 2016 Top 500 Supercomputer Rankings

The bi-annual rankings of the Top 500 supercomputers for November 2016 are now live. While the top of the list is static, with the same two Chinese supercomputers dominating, several new machines have cropped up to replace decommissioned systems throughout, and the momentum at the very top shows some telling architectural trends, particularly among the newcomers in the top 20.

We already described the status of the major Chinese and Japanese systems in our analysis of the June 2016 list and thought it might be more useful to look at some of the broader

A Closer Look at 2016 Top 500 Supercomputer Rankings was written by Nicole Hemsoth at The Next Platform.

Inside Six of the Newest Top 20 Supercomputers

The latest listing of the Top 500 rankings of the world’s most powerful supercomputers has just been released. While there were no big surprises at the top of the list, there have been some notable additions to the top tier, all of which feature various elements of supercomputers yet to come as national labs and research centers prepare for their pre-exascale and eventual exascale systems.

We will be providing a deep dive on the list results this morning, but for now, what is most interesting about the list is what it is just beginning to contain at the top – and what

Inside Six of the Newest Top 20 Supercomputers was written by Nicole Hemsoth at The Next Platform.

IBM Shows Off AI And HPC Oomph On Power8 Tesla Hybrids

While the machine learning applications created by hyperscalers and the simulations and models run by HPC centers are very different animals, the kinds of hardware that help accelerate the performance of one are also helping to boost the other in many cases. And that means that the total addressable market for systems like the latest GPU-accelerated Power Systems machines or the alternatives from Nvidia and others has rapidly expanded as enterprises try to deploy both HPC and AI to better run their businesses.

HPC as we know it has obviously been around for a long time, and is in a

IBM Shows Off AI And HPC Oomph On Power8 Tesla Hybrids was written by Timothy Prickett Morgan at The Next Platform.

Cray’s New Pascal XC50 Supercomputer Points to Richer HPC Future

Over the course of the last five years, GPU computing has featured prominently in supercomputing as an accelerator on some of the world’s fastest machines. If some supercomputer makers are correct, GPUs will continue to play a major role in high performance computing, but the acceleration they provide will go beyond boosts to numerical simulations. This has been great news for Nvidia’s bottom line since the market for GPU computing is swelling, and for HPC vendors that can integrate those GPUs and wrap the proper software stacks around both HPC and machine learning, it could be an equal boon.

Cray’s New Pascal XC50 Supercomputer Points to Richer HPC Future was written by Nicole Hemsoth at The Next Platform.