If you happen to believe that spending on core IT infrastructure is a leading indicator of the robustness of national economies – and of the global economy that is stitched together from them, somewhat piecemeal, like a patchwork quilt – then the third quarter sales and shipments of servers are probably sounding a note of caution for you.
It certainly does for us here at The Next Platform. But it is important, particularly if we have in fact hit the peak of the X86 server market, as we mused three months ago, not to get carried away. A slowdown in spending …
Is This A Server Slowdown, Or Increasing Efficiency? was written by Timothy Prickett Morgan at The Next Platform.
Amazon Web Services might be offering FPGAs in an EC2 cloud environment, but this is still a far cry from the FPGA-as-a-service vision many hold for the future. Nonetheless, it is a remarkable offering in terms of the bleeding-edge Xilinx accelerator. The real success of these FPGA (F1) instances now depends on pulling in the right partnerships and tools to snap a larger user base together—one that would ideally include non-FPGA experts.
In its F1 instance announcement this week, AWS made it clear that for the developer preview, there are only VHDL and Verilog programmer tools, which are very …
What it Takes to Build True FPGA as a Service was written by Nicole Hemsoth at The Next Platform.
When it comes to supercomputing, you don’t only have to strike while the iron is hot, you have to spend while the money is available. And that fact is what often determines the technologies that HPC centers deploy as they expand the processing and storage capacity of their systems.
A good case in point is the MareNostrum 4 hybrid cluster that the Barcelona Supercomputing Center, one of the flagship research and computing institutions in Europe, has just commissioned IBM to build with the help of partners Lenovo and Fujitsu. The system balances the pressing need for more general purpose computing …
BSC Keeps Its HPC Options Open With MareNostrum 4 was written by Timothy Prickett Morgan at The Next Platform.
FPGAs have been an emerging topic on the application acceleration front over the last couple of years, but despite increased attention around use cases in machine learning and other hot areas, hands have been tied by the sheer difficulty of onboarding with both the hardware and the software.
As we have covered here, this is changing, especially with the addition of OpenCL and other higher-level interfaces that let developers talk to FPGAs for both network and application double-duty. For that matter, getting systems with integrated capabilities that handle FPGAs just as they do PCIe-attached GPUs takes extra footwork as well. …
The FPGA Accelerated Cloud Push Just Got Stronger was written by Nicole Hemsoth at The Next Platform.
While we all spend a lot of time talking about the massive supercomputers that cultivate new architectures, it is the more modest clusters that make use of these technologies many years hence that actually cultivate a healthy and vibrant HPC market.
Lenovo picked up a substantial HPC business when it acquired IBM’s System x server division two years ago and also licensed key software, such as the Platform Computing stack and the GPFS file system, to drive its own HPC agenda. The Sino-American system maker has been buoyed by higher volume manufacturing thanks to the …
Lenovo Drives HPC From The Middle Ground was written by Timothy Prickett Morgan at The Next Platform.
It is not always easy, but several companies dedicated to the supercomputing market have managed to retune their wares to fit more mainstream market niches. This has been true across both the hardware and software subsets of high performance computing, and such efforts have been aided by a well-timed shift in the needs of enterprises toward more robust, compute- and data-intensive workhorses as new workloads, most of which are driven by dramatic increases in data volumes and analytical capabilities, keep emerging.
For supercomputer makers, the story is a clear one. However, on the storage side, especially for those select few …
Pushing Back Against Cheap and Deep Storage was written by Nicole Hemsoth at The Next Platform.
SGI has always had scalable technology that should have been deployed more broadly in the academic, government, and enterprise datacenters of the world. But fighting for those budget dollars at the high end of the market always came down to needing more feet on the street, a larger global footprint for service and support, and broader certification of software stacks to exploit that SGI iron.
Now that Hewlett Packard Enterprise owns SGI – or more precisely, owns its operations in the United States and will finish off its acquisition, announced in August, probably sometime in the middle of next …
HPE Takes On The High End With SGI Expertise was written by Timothy Prickett Morgan at The Next Platform.
Over the last year in particular, we have documented the merger between high performance computing and deep learning and their various shared hardware and software ties. This next year promises far more on both horizons, and while GPU maker Nvidia might not have seen it coming to this extent when it was outfitting the former top-ranked “Titan” supercomputer with its first GPUs, the company sensed a meshing of the two fields on the horizon when the first hyperscale deep learning shops were deploying CUDA and GPUs to train neural networks.
All of this portends an exciting year ahead and for once, the mighty CPU …
Nvidia CEO’s “Hyper-Moore’s Law” Vision for Future Supercomputers was written by Nicole Hemsoth at The Next Platform.
The ultimate success of any platform depends on the seamless integration of diverse components into a synergistic whole – well, as much as is possible in the real world – while at the same time being flexible enough to allow for components to be swapped out and replaced by others to suit personal preferences.
Is OpenHPC, the open source software stack aimed at simulation and modeling workloads that was spearheaded by Intel a year ago, going to be the dominant and unifying platform for high performance computing? Will OpenHPC be analogous to the Linux distributions that grew up around …
OpenHPC Pedal Put To The Compute Metal was written by Timothy Prickett Morgan at The Next Platform.
At this point in the 21st Century, a surprisingly large portion of the manufacturing, warehousing, distribution, marketing, and retailing of every good and service known to humankind is dependent on a piece of circuit board with two Xeon processors welded to it, wrapped in some bent sheet metal with a few blinky lights peeking out of the darkness.
Back in 1885, as the United States was beginning its rise to power, Reverend Josiah Strong declared in his populist book, Our Country: “As America goes, so goes the world.” National borders and national interests still exist, but networks cross boundaries …
As The Server Goes, So Goes The World was written by Timothy Prickett Morgan at The Next Platform.
At the annual Supercomputing Conference (SC16) last week, the emphasis was on deep learning and its future role as part of supercomputing applications and systems. Before that focus, however, the rise of novel architectures and reconfigurable accelerators (as alternatives to building a custom ASIC) was swift.
Feeding on that trend, a panel exploring non-Von Neumann architectures looked at the different ways the high performance computing set might consider non-stored-program machines and what the many burgeoning options might mean for energy efficiency and performance.
Among the presenters was Gagan Gupta, a computer architect with Microsoft Research, who detailed the …
FPGAs Give Microsoft a “Von Neumann Tax” Break was written by Nicole Hemsoth at The Next Platform.
There is little doubt that 2017 will be a dense year for deep learning. With a sudden new wave of applications that integrate neural networks into existing workflows (not to mention entirely new uses) and a fresh array of hardware architectures to meet them, we expect the space to start shaking out its early winners and losers and show a clearer path ahead.
As we described earlier this week, Intel has big plans to integrate the Nervana ASIC and software stack with its Knights family of processors in the next several years. This effort, codenamed Knights Crest, is a long-term …
Inside Intel’s Strategy to Integrate Nervana Deep Learning Assets was written by Nicole Hemsoth at The Next Platform.
In Supercomputing Conference (SC) years past, chipmaker Intel has always come forth with a strong story, either as an enabling processor or co-processor force, or more recently, as a prime contractor for a leading-class national lab supercomputer.
But outside of a few announcements at this year’s SC related to beefed-up SKUs for high performance computing and Skylake plans, the real emphasis back in Salt Lake City seemed to ring far fainter for HPC and much louder for the newest server tech darlings, deep learning and machine learning. Far from the HPC crowd last week was Intel’s AI Day, an event in …
Intel Declares War on GPUs at Disputed HPC, AI Border was written by Nicole Hemsoth at The Next Platform.
The future “Summit” pre-exascale supercomputer that is being built out in late 2017 and early 2018 for the US Department of Energy’s Oak Ridge National Laboratory looks like a giant cluster of systems that might be used for training neural networks. And that is an extremely convenient development.
More than once during the SC16 supercomputing conference this week in Salt Lake City, the Summit system and its companion “Sierra” system, which will be deployed at Lawrence Livermore National Laboratory, were referred to as “AI supercomputers.” This is a reflection of the fact that the national labs around the …
Details Emerge On “Summit” Power Tesla AI Supercomputer was written by Timothy Prickett Morgan at The Next Platform.
It is hard to tell which part of the HPC market is more competitive: compute, networking, or storage. From where we sit, there is an increasing level of competition on all three fronts and the pressure to perform, both financially and technically, has never been higher. This is great for customers, of course, who are being presented with lots of technology to choose from. But HPC customers tend to pick architectures for several generations, so there is also pressure on them to make the right choices – whatever that means.
In a sense, enterprises and hyperscalers and cloud builders, who …
Networks Drive HPC Harder Than Compute was written by Timothy Prickett Morgan at The Next Platform.
Every new hardware device that offers some kind of benefit compared to legacy devices faces the task of overcoming the immense inertia that is imparted to a platform by the software that runs upon it. While FPGAs have so many benefits compared to general purpose CPUs and even GPUs because of their malleability, the inertia seems even heavier.
Any substantial change requires a big payoff to be implemented because change is inherently risky. It doesn’t help that the methods of programming FPGAs efficiently using VHDL and Verilog are so alien to Java and C programmers, and that the tools for …
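That alienness is easy to illustrate. In an HDL description, every stage of a circuit runs in parallel on every clock tick, and state lives in registers that all update simultaneously at the clock edge rather than in variables mutated one statement at a time. The following is a minimal, purely illustrative Python sketch (not real HDL, and not any vendor's toolchain) that mimics how a three-stage pipeline behaves:

```python
# Illustrative sketch: simulating a clocked 3-stage pipeline in Python to
# show the hardware mindset that trips up sequential C and Java programmers.
# All register names here are hypothetical, for illustration only.

def clock_tick(regs, inputs):
    """Compute the next state of every pipeline register from the current
    state, then commit them all at once, the way flip-flops latch on a
    clock edge. Every stage is 'executing' on every cycle."""
    return {
        "stage1": inputs["a"] + inputs["b"],     # adder stage
        "stage2": regs["stage1"] * inputs["c"],  # multiplier stage
        "stage3": regs["stage2"],                # output register
    }

# Feed a new operand set every cycle. Each result emerges three cycles
# after its operands enter, but a new result appears every cycle once the
# pipeline is full -- multi-cycle latency, single-cycle throughput.
regs = {"stage1": 0, "stage2": 0, "stage3": 0}
stream = [{"a": i, "b": i, "c": 10} for i in range(1, 6)]
outputs = []
for inputs in stream:
    regs = clock_tick(regs, inputs)
    outputs.append(regs["stage3"])

print(outputs)  # zeros while the pipeline fills, then one result per cycle
```

The point of the sketch is the mental inversion: a software developer reasons about one statement at a time, while an HDL developer reasons about what every register in the design does on every clock edge, which is why higher-level toolchains such as OpenCL compilers have become so important for broadening the FPGA audience.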
Stacking Up Software To Drive FPGAs Into The Datacenter was written by Timothy Prickett Morgan at The Next Platform.
The race toward exascale supercomputing gets a lot of attention, as it should. Driving up top-end performance levels in high performance computing (HPC) is essential to generate new insights into the fundamental laws of physics, the origins of the universe, global climate systems, and more. The wow factor is huge.
There is another area of growth in HPC that is less glamorous, but arguably even more important. It is the increasing use of small to mid-sized clusters by individuals and small groups that have long been reliant on workstations to handle their most compute-intensive tasks. Instead of a small number …
Wringing Cost and Complexity Out of HPC was written by Nicole Hemsoth at The Next Platform.
With most of the year finished and a new one coming up fast – and with a slew of new compute and networking technologies that have been ramping for the past year, plus more on the horizon for a very exciting 2017 – now is the natural time to take stock of what has happened in the HPC business and what is expected to happen in the coming years.
The theme of the SC16 supercomputing conference this year is that HPC matters, and of course, we have all known this since the first such machines were distinct from enterprise-class electronic computers back in the 1960s. HPC …
The Business Of HPC Is Evolving was written by Timothy Prickett Morgan at The Next Platform.
With the “Skylake” Xeon E5 v5 processors not slated until the middle of next year and the “Knights Landing” Xeon Phi processors and Omni-Path interconnect still ramping after entering the HPC space a year ago, there are no blockbuster announcements coming out of Intel this year at the SC16 supercomputing conference in Salt Lake City. But there are some goodies for HPC shops that were unveiled at the event and the chip giant also set the stage for big changes in the coming year in both traditional HPC and its younger and fast-growing sibling, machine learning.
Speaking ahead of the …
Intel Sets Up Skylake Xeon For HPC, Knights Mill Xeon Phi For AI was written by Timothy Prickett Morgan at The Next Platform.
Deep learning and machine learning are major themes at this year’s annual Supercomputing Conference (SC16), both in terms of vendors showcasing systems that are a fit for both high performance computing and machine learning, and in the revelation of new efforts to combine traditional simulations with neural networks for greater efficiency and insight.
We have already described this momentum in the context of announcements from supercomputer makers like Cray, which just unveiled a Pascal GPU-based addition to its modeling and simulation-oriented XC supercomputer line, complete with deep learning frameworks integrated into the stack. The question was, how many HPC workloads …
A Deep Learning Supercomputer Approach to Cancer Research was written by Nicole Hemsoth at The Next Platform.