There was concern in some scientific quarters last year that President Trump’s election could mean budget cuts to the Department of Energy (DoE) that could cascade down to the country’s exascale program at a time when China was ramping up investments in its own initiatives.
The worry was that any cuts slowing the work of the Exascale Computing Project would hand the advantage to China in a critical race with far-reaching implications for a wide range of scientific and commercial fields, including oil and gas exploration, financial services, high-end healthcare, national security, and the military. …
U.S. Exascale Efforts Benefit in FY 2019 Budget was written by Jeffrey Burt at The Next Platform.
On today’s episode of “The Interview” with The Next Platform, we talk with Doug Miles, who runs the PGI compilers and tools team at Nvidia, about the past, present, and future of OpenACC, with an emphasis on what lies ahead in the next release.
Over the last few years we have described momentum with OpenACC in a number of articles covering everything from what it means in the wake of new memory options to how it pushes OpenMP to develop alongside it. Today, however, we take a step back with an HPC code veteran for the bigger picture and …
OpenACC Developments: Past, Present, and Future was written by Nicole Hemsoth at The Next Platform.
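For readers who have not worked with the directive-based approach discussed above, here is a minimal sketch of what an OpenACC-annotated loop looks like in C. It is an illustrative SAXPY routine of our own, not code from the interview or from PGI, and it assumes an OpenACC-capable compiler.

    #include <stdlib.h>

    /* Illustrative SAXPY kernel: with the pragma below, an OpenACC
     * compiler can generate GPU code for the loop and manage the
     * data movement named in the copyin/copy clauses. */
    void saxpy(int n, float a, const float *x, float *y)
    {
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        int n = 1 << 20;
        float *x = malloc(n * sizeof(float));
        float *y = malloc(n * sizeof(float));
        for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy(n, 3.0f, x, y);   /* every y[i] is now 5.0f */

        free(x);
        free(y);
        return 0;
    }

Built with the PGI compilers and OpenACC enabled (for example, pgcc -acc), the annotated loop can be offloaded to a GPU, with the copyin/copy clauses telling the compiler which arrays to move; compiled without OpenACC support, the pragma is simply ignored and the code runs as ordinary serial C.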
Computing used to be far away.
It was accessed via remote command terminals, through time-sliced services. It was a pretty miserable experience. During the personal computing revolution, computing once again became local. It would fit under your desk, or in a small dedicated “computer room”. You could touch it. It was, once more, a happy and contented time for computer users. The computer was personal again. There was a clue in the name.
However, as complexity grew, and as networks improved, computing was effectively taken away again and placed in cold, dark rooms once more far, far away for …
Hardware as a Service: The New Missing Middle? was written by James Cuff at The Next Platform.
The general consensus, for as long as anyone can remember, is that there is an insatiable appetite for compute in the datacenter. Were it not for the constraints of budgets, covering the acquisition of iron, the facilities to house it, the electricity to power and cool it, and the people to manage it, it is safe to say that the installed base of servers worldwide would be much larger than it is.
But the world doesn’t work that way, and there are constraints. Yet thanks to ever more complex applications, ever richer media, and a burning desire to save …
The Server Market Booms, And It Could Last For A While was written by Timothy Prickett Morgan at The Next Platform.
The high performance computing market might get some windfall from the processing requirements of IoT and edge devices, but the real driver for spending will come well before the device ever hits the market.
The rise in IoT and edge device demand means an exponential increase in the number of devices that need to be modeled, simulated, and tested, which in turn means greater investment in HPC hardware as well as the engineering software tools that have generally served the HPC set.
In other words, it is not the new, enlarged, complex dataset from IoT and edge that presents the next …
IoT Will Force HPC Spending–But Not For the Reasons You Think was written by Nicole Hemsoth at The Next Platform.
Since its inception, the OpenStack cloud controller co-created by NASA and Rackspace Hosting, with these respective organizations supplying the core Nova compute and Swift object storage foundations, has been focused on the datacenter. But as the “Queens” release of OpenStack is being made available, the open source community that controls that cloud controller is being pulled out of the datacenter and out to the edge, where a minimalist variant of the software is expected to have a major presence in managing edge computing devices.
The Queens release of OpenStack is the 17th drop of software since NASA and Rackspace first …
OpenStack Stretches To The Edge, Embraces Accelerators was written by Timothy Prickett Morgan at The Next Platform.
The rapid proliferation of connected devices and the huge amounts of data they are generating are forcing tech vendors and enterprises alike to cast their eyes to the network edge, which has become a focus of the distributed computing movement as more compute, storage, network, analytics, and other resources move closer to where these devices live.
Compute is being embedded in everything, and this makes sense. The costs in time, due to latency, and in money, due to budgets, of transferring all that data from those distributed devices back to a private or public datacenter are too high …
VMware Crafts Compute And Storage Stacks For The Edge was written by Jeffrey Burt at The Next Platform.
On today’s episode of “The Interview” with The Next Platform, we focus on how geographic information systems (GIS), as a field, is being revolutionized by deep learning.
This stands to reason given the large volumes of satellite image data and the robust deep learning frameworks that excel at image classification and analysis, a volume issue that has been compounded by more satellites producing ever-higher resolution output.
Unlike other areas of large-scale scientific data analysis that have traditionally relied on massive supercomputers, our audio interview (player below) reveals that a great deal of GIS analysis can be done on smaller systems. However, …
Geographic Information Systems (GIS) Field Upended by Neural Networks was written by Nicole Hemsoth at The Next Platform.
There is no question right now that if you have a big computing job in either high performance computing – the colloquial name for traditional massively parallel simulation and modeling applications – or in machine learning – the set of statistical analysis routines with feedback loops that can do identification and transformation tasks that used to be solely the realm of humans – then an Nvidia GPU accelerator is the engine of choice to run that work at the best efficiency.
It is usually difficult to make such clean proclamations in the IT industry, with so many different kinds of …
The Engine Of HPC And Machine Learning was written by Timothy Prickett Morgan at The Next Platform.
On today’s episode of “The Interview” with The Next Platform, we talk about the use of petascale supercomputers for training deep learning algorithms. More specifically, we look at how this is happening in astronomy to enable real-time analysis of LIGO detector data.
We are joined by Daniel George, a researcher in the Gravity Group at the National Center for Supercomputing Applications, or NCSA. His team garnered a great deal of attention at the annual supercomputing conference in November with work blending traditional HPC simulation data and deep learning.
George and his team have shown that deep learning with convolutional neural networks can provide many …
Deep Learning on HPC Systems for Astronomy was written by Nicole Hemsoth at The Next Platform.
There is a new sheriff in town at Hewlett Packard Enterprise – that would be chief executive officer Antonio Neri – and that means a new way of looking at the books and therefore steering the business. No, we didn’t mean that the other way around.
In opening up its books for the first quarter of its fiscal 2018 year, which ended in January, we can see some important things about HPE’s business, and at the same time, we have lost some visibility about core parts of its business.
First of all, the books have been reclassified significantly in the …
The New HPE Sheriff Lays Down The Hybrid IT Law was written by Timothy Prickett Morgan at The Next Platform.
Among all of the hardware and software that is in a datacenter, the database – whether it is SQL, NoSQL, NewSQL, or some other contraption in which to pour data and ask questions about it – is probably the stickiest. Companies can and do change server, storage, or networking hardware, and they change operating systems and even applications, but they are loath to mess with the repository of the information that is used to run the company.
This is understandably so, given the risk of inadvertently altering or losing that vital data. Ironically, this is one reason why databases proliferate at …
Bringing GPUs To Bear On Big Standard Relational Databases was written by Timothy Prickett Morgan at The Next Platform.
On today’s episode of “The Interview” with The Next Platform, we take a look at the evolution of the NAMD molecular dynamics code and how the introduction of GPU computing upended performance expectations and set the stage for new metrics now that the Volta GPU architecture will be available on large supercomputers like the Summit machine coming to Oak Ridge National Lab.
Our guest is well known for being part of the team that won a Gordon Bell Prize in 2002 for work on scaling NAMD. Dr. Jim Phillips is a Senior Research Programmer in the NCSA Blue Waters Project …
The Evolution of NAMD: A Scalability Story from Single-Core to GPU Boosted was written by Nicole Hemsoth at The Next Platform.
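For readers who have never looked inside a molecular dynamics code like NAMD, the sketch below shows the kind of pairwise-force and time-integration loops that dominate the runtime of such applications and that GPU offload accelerates. It is a deliberately naive Lennard-Jones example of our own devising, not NAMD’s implementation, and the particle count and timestep are arbitrary.

    #include <stdio.h>

    #define N  256      /* particle count, arbitrary for illustration   */
    #define DT 0.001    /* integration timestep, arbitrary reduced units */

    static double x[N][3], v[N][3], f[N][3];

    /* Naive O(N^2) pairwise Lennard-Jones forces (epsilon = sigma = 1).
     * Loops of this shape are the hot spot that GPU offload targets. */
    static void compute_forces(void)
    {
        for (int i = 0; i < N; i++)
            for (int d = 0; d < 3; d++)
                f[i][d] = 0.0;

        for (int i = 0; i < N; i++) {
            for (int j = i + 1; j < N; j++) {
                double dr[3], r2 = 0.0;
                for (int d = 0; d < 3; d++) {
                    dr[d] = x[i][d] - x[j][d];
                    r2 += dr[d] * dr[d];
                }
                double inv6 = 1.0 / (r2 * r2 * r2);
                double fmag = 24.0 * (2.0 * inv6 * inv6 - inv6) / r2;
                for (int d = 0; d < 3; d++) {
                    f[i][d] += fmag * dr[d];
                    f[j][d] -= fmag * dr[d];
                }
            }
        }
    }

    /* One velocity Verlet timestep, assuming unit mass. */
    static void step(void)
    {
        for (int i = 0; i < N; i++)
            for (int d = 0; d < 3; d++) {
                v[i][d] += 0.5 * DT * f[i][d];
                x[i][d] += DT * v[i][d];
            }
        compute_forces();
        for (int i = 0; i < N; i++)
            for (int d = 0; d < 3; d++)
                v[i][d] += 0.5 * DT * f[i][d];
    }

    int main(void)
    {
        /* Start the particles on a cubic lattice so none overlap. */
        for (int i = 0; i < N; i++) {
            x[i][0] = 1.5 * (i % 8);
            x[i][1] = 1.5 * ((i / 8) % 8);
            x[i][2] = 1.5 * (i / 64);
        }
        compute_forces();
        for (int t = 0; t < 100; t++)
            step();
        printf("x[0] after 100 steps: %f %f %f\n", x[0][0], x[0][1], x[0][2]);
        return 0;
    }

The all-pairs force loop is the part that maps naturally onto thousands of GPU threads; production codes add cutoffs, neighbor lists, and long-range electrostatics, but the basic structure is the same.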
The phenomenal complexity of computing is not decreasing. Charts of growth, investment, and scale keep climbing steeply enough that they are best drawn on a logarithmic scale.
But how is computational balance to be maintained with any level of objectivity under such extreme circumstances? How do we plan for this known, and yet highly unknown, challenge of building balanced systems to operate at scale? The ever more bewildering set of options (e.g., price lists now have APIs) may, if not managed with utmost care, result in chaos and confusion.
This first in a series of articles will set some background and perspective on the …
Striking Practical Computational Balance was written by James Cuff at The Next Platform.
The HPC field hasn’t always had the closest of relationships with the cloud.
Concerns about the performance of workloads on a hypervisor running in the cloud, the speed of the networking and the capacity of storage, the security and privacy of research data and results, and the millions of dollars already invested in massive on-premises supercomputers and other systems all become issues when considering moving applications to the cloud.
However, HPC workloads also are getting more complex and compute-intensive, and demand from researchers for more compute time and power on those on-premises supercomputers is growing. Cloud …
An Adaptive Approach to Bursting HPC to the Cloud was written by Jeffrey Burt at The Next Platform.
If there is one thing that can be said about modern distributed computing that has held true for three decades now, it is that the closer you get to the core of the datacenter, the beefier the compute tends to be. Conversely, as computing gets pushed to the edge, it gets lighter by the necessity of using little power and delivering just enough performance to accomplish whatever data crunching is necessary outside of the datacenter.
While we have focused on the compute in the traditional datacenter since founding The Next Platform three years ago, occasionally dabbling in the microserver arena …
AMD Gets Zen About The Edge was written by Timothy Prickett Morgan at The Next Platform.
The demands for more compute resources, power, and density in HPC environments are fueling the need for innovative ways to cool datacenters that are churning through petabyte levels of data to run modern simulation workloads that touch on everything from healthcare and climate change to space exploration and oil and gas initiatives.
The top cooling technologies for most datacenters are air and chilled water. However, Lenovo is promoting its latest warm-water cooling system for HPC clusters with its ThinkSystem SD650 systems, which the company says will lower datacenter power consumption by 30 to 40 percent compared with more traditional cooling …
Lenovo Sees Expanding Market for Dense Water-Cooled HPC was written by Jeffrey Burt at The Next Platform.
Former Harvard Computer Science Lead Brings Distributed Systems Experience to Top Publication’s Readers
The Next Platform is proud to announce that former Assistant Dean and Distinguished Engineer for Research Computing at Harvard, Dr. James Cuff, has joined the editorial team in a full-time capacity as Distinguished Technical Author.
At the leading publication covering distributed systems in research and large enterprise, Dr. Cuff rounds out a seasoned editorial team that delivers in-depth analysis from the worlds of supercomputing, artificial intelligence, cloud and hyperscale datacenters, and the many other technology areas that comprise the highest end of today’s IT ecosystems.
Dr. Cuff …
The Next Platform Announces Renowned HPC Expert Joins Team was written by Nicole Hemsoth at The Next Platform.
The best way to make a wave is to make a big splash, which is something that Andy Bechtolsheim, perhaps the most famous serial entrepreneur in IT infrastructure, is very good at doing. As one of the co-founders of Sun Microsystems and a slew of networking and system startups as well as the first investor in Google, he doesn’t just see waves, but generates them and then surfs on them, creating companies and markets as he goes along.
Bechtolsheim was a PhD student at Stanford University, working on a project that aimed to integrate networking interfaces with processors when he …
The Road To 400G Ethernet Is Paved With Bechtolsheim’s Intentions was written by Timothy Prickett Morgan at The Next Platform.
Much of the focus of the recent high-profile budget battle in Washington – and for that matter, many of the financial debates over the past few decades – has been around how much money should go to the military and how much to domestic programs like Social Security and Medicare.
In the bipartisan deal struck earlier this month, both sides saw funding increase over the next two years, with the military seeing its budget jump by $160 billion. Congressional Republicans boasted of a critical win for the Department of Defense (DoD) that will result in more soldiers, better weapons, and improved …
HPE Brings More HPC To The DoD was written by Jeffrey Burt at The Next Platform.