Author Archives: Jeffrey Burt

Sandia, NREL Look to Aquarius to Cool HPC Systems

The idea of bringing liquids into the datacenter to cool hot-running systems and components has long unnerved many in the IT field. Organizations are nevertheless doing it as they look for more efficient and cost-effective ways to run their infrastructures, particularly as workloads grow larger and more complex, more compute resources are needed, components like processors become more powerful, and density increases.

But the concept of running water and other liquids through a system, and the threat of the liquids leaking into the various components and into the datacenter, has created uneasiness with the idea.

Still, the growing demands

Sandia, NREL Look to Aquarius to Cool HPC Systems was written by Jeffrey Burt at The Next Platform.

China’s Global Cloud and AI Ambitions Keep Extending

Gone are the days when the early pioneers of warehouse-scale computing were all based in the U.S. Over the last several years, China’s web giants have been extending their reach through robust shared AI and cloud efforts, and those efforts are pushing ever further into territory once thought separate.

Alibaba is much like its compatriots Baidu and Tencent in its desire to expand well beyond China’s borders and compete with global players like Amazon Web Services, Google, Microsoft, and Facebook in such fast-growing areas as the cloud, supercomputing, and artificial intelligence (AI).

The tech giant has significant resources at its disposal, pulling in

Accelerating HPC Investments In Canada

Details about the technologies being used in Canada’s newest and most powerful research supercomputer have been coming out piecemeal over the past several months, but now the complete story is emerging.

At the SC17 show in November, it was revealed that the HPC system will use Mellanox’s Dragonfly+ network topology and an NVM-Express burst buffer fabric from Excelero as key parts of a cluster that will offer a peak performance of more than 4.6 petaflops.

Now Lenovo, which last fall won the contract for the Niagara system over 11 other vendors, is unveiling this week that it is

U.S. Exascale Efforts Benefit in FY 2019 Budget

There was concern in some scientific quarters last year that President Trump’s election could mean budget cuts to the Department of Energy (DoE) that could cascade down to the country’s exascale program at a time when China was ramping up investments in its own initiatives.

The worry was that any cuts that could slow down the work of the Exascale Computing Project would hand the advantage to China in this critical race that will have far-reaching implications in a wide range of scientific and commercial fields like oil and gas exploration, financial services, high-end healthcare, national security and the military.

VMware Crafts Compute And Storage Stacks For The Edge

The rapid proliferation of connected devices and the huge amounts of data they generate are forcing tech vendors and enterprises alike to cast their eyes to the network edge. The edge has become a focus of the distributed computing movement as more compute, storage, network, analytics, and other resources move closer to where these devices live.

Compute is being embedded in everything, and this makes sense. The costs of transferring all that data from those distributed devices back to a private or public datacenter, in time because of latency and in money because of budget pressures, are too high
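
To make that cost argument concrete, here is a back-of-the-envelope sketch. All of the numbers in it (fleet data volume, uplink speed, per-gigabyte backhaul rate) are illustrative assumptions, not figures from the article.

```python
# Illustrative arithmetic for the cost of hauling device data back to a
# central datacenter -- the pressure pushing compute toward the edge.

def transfer_seconds(data_gb, link_mbps):
    """Seconds to move `data_gb` gigabytes over a `link_mbps` megabit/s link."""
    bits = data_gb * 8e9          # gigabytes -> bits
    return bits / (link_mbps * 1e6)

def transfer_cost_dollars(data_gb, dollars_per_gb):
    """Backhaul cost at an assumed per-gigabyte rate."""
    return data_gb * dollars_per_gb

# A hypothetical fleet producing 500 GB/day over a 100 Mbit/s uplink,
# at an assumed $0.05/GB backhaul rate.
print(f"{transfer_seconds(500, 100) / 3600:.1f} hours/day on the wire")   # 11.1 hours/day
print(f"${transfer_cost_dollars(500, 0.05):.2f}/day in transfer charges") # $25.00/day
```

Even at these modest assumed volumes, the link is saturated for nearly half the day, which is the kind of arithmetic that makes processing data near where it is produced attractive.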

An Adaptive Approach to Bursting HPC to the Cloud

The HPC field hasn’t always had the closest of relationships with the cloud.

Concerns about the performance of workloads on a hypervisor running in the cloud, the speed of the networking and the capacity of the storage, the security and privacy of research data and results, and the millions of dollars already invested in massive on-premises supercomputers and other systems all become issues when considering moving applications to the cloud.

However, HPC workloads also are getting more complex and compute-intensive, and demand from researchers for more compute time and power on those on-premises supercomputers is growing. Cloud

Lenovo Sees Expanding Market for Dense Water-Cooled HPC

The demands for more compute resources, power, and density in HPC environments are fueling the need for innovative ways to cool datacenters that churn through petabytes of data to run modern simulation workloads touching on everything from healthcare and climate change to space exploration and oil and gas initiatives.

The top cooling technologies for most datacenters are air and chilled water. However, Lenovo is promoting its latest warm-water cooling system for HPC clusters with its ThinkSystem SD650 systems, which the company says will lower datacenter power consumption by 30 to 40 percent compared with more traditional cooling
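
As a minimal sketch of what that claimed range means in practice: the 30 to 40 percent figures come from Lenovo's claim above, but the baseline facility load below is an assumed number, and whether the savings apply to total draw or only to the cooling share is not specified here.

```python
# Apply a claimed fractional power saving to an assumed baseline load.

def cooled_power_kw(baseline_kw, savings_fraction):
    """Facility power draw after a fractional saving on the baseline."""
    if not 0 <= savings_fraction < 1:
        raise ValueError("savings_fraction must be in [0, 1)")
    return baseline_kw * (1 - savings_fraction)

# A hypothetical 1,000 kW facility at the low and high ends of the claim.
for frac in (0.30, 0.40):
    print(f"{frac:.0%} saving: {cooled_power_kw(1000, frac):.0f} kW")
# 30% saving: 700 kW
# 40% saving: 600 kW
```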

HPE Brings More HPC To The DoD

Much of the focus of the recent high-profile budget battle in Washington – and for that matter, many of the financial debates over the past few decades – has been around how much money should go to the military and how much to domestic programs like Social Security and Medicare.

In the bipartisan deal struck earlier this month, both sides saw funding increase over the next two years, with the military seeing its budget jump $160 billion. Congressional Republicans boasted of a critical win for the Department of Defense (DoD) that will result in more soldiers, better weapons, and improved

IBM Storage Rides Up Flash And NVM-Express

IBM’s systems hardware business finished 2017 in a stronger position than it has seen in years, due in large part to the continued growth of the company’s stalwart System z mainframes and Power platform. As we at The Next Platform noted, the last three months of last year were also the first full quarter of shipments of IBM’s new System z14 mainframes, while the first nodes of the “Summit” supercomputer at Oak Ridge National Laboratory and the “Sierra” system at Lawrence Livermore National Laboratory began to ship.

Not to be overlooked was the strong performance of IBM’s storage

Gen-Z Interconnect Ready To Restore Compute Memory Balance

For several years, work has been underway to develop a standard interconnect that can address the increasing speeds in servers, driven by the growing use of accelerators such as GPUs and field-programmable gate arrays (FPGAs), as well as the pressure put on memory by the massive amounts of data being generated and the bottleneck between the CPUs and the memory.

Any time the IT industry wants a standard, you can always expect at least two, and this time around is no different. Today there is a cornucopia of emerging interconnects, some of them overlapping in purpose, some working side by side, to break

A Look at What’s in Store for China’s Tianhe-2A Supercomputer

The field of competitors looking to bring exascale-capable computers to the market is a somewhat crowded one, but the United States and China continue to be the ones that most eyes are on.

It’s a clash between an established global superpower and another on the rise, one that envelops a struggle for economic, commercial, and military advantage as well as a healthy dose of national pride. And because of these two countries, the future of exascale computing, which to a large extent has so far been about discussion, theory, and promise, will come into sharper

Even at the Edge, Scale is the Real Challenge

Neural networks live on data and rely on computational firepower to take that data in, train on it, and learn from it. The challenge, increasingly, is ensuring there is enough computational power to keep up with the massive amounts of data being generated today and the rising demands of modern neural networks for speed and accuracy in consuming that data and training on datasets that continue to grow in size.

These challenges can be seen playing out in the fast-growing autonomous vehicle market, where pure-play companies like Waymo – born from Google’s self-driving car initiative –

Google Boots Up Tensor Processors On Its Cloud

Google laid down its path forward in the machine learning and cloud computing arenas when it first unveiled plans for its tensor processing unit (TPU), an accelerator designed by the hyperscaler to speed up machine learning workloads that are programmed using its TensorFlow framework.

Almost a year ago, at its Google I/O event, the company rolled out the architectural details of its second-generation TPUs – also called the Cloud TPU – for both neural network training and inference, with the custom ASICs providing up to 180 teraflops of floating point performance and 64 GB of High Bandwidth Memory.
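
The quoted 180 teraflops is a peak figure, and real workloads sustain only a fraction of it. As a rough illustration of what that number buys, here is a back-of-the-envelope estimate; the total training FLOPs and the 30 percent sustained-utilization figure are assumptions for the sake of the example, not numbers from Google.

```python
# Back-of-the-envelope: wall-clock time for a fixed training workload on a
# Cloud TPU, given its quoted 180 teraflops peak and an assumed sustained
# utilization (real workloads rarely approach peak).

PEAK_FLOPS = 180e12  # 180 teraflops, as quoted for the second-generation TPU

def training_time_seconds(total_flops, utilization=0.3):
    """Seconds to execute `total_flops` of work at a given fraction of peak."""
    if not 0 < utilization <= 1:
        raise ValueError("utilization must be in (0, 1]")
    return total_flops / (PEAK_FLOPS * utilization)

# A hypothetical model needing 1e18 FLOPs of training work (an assumed
# figure) at 30 percent sustained utilization.
seconds = training_time_seconds(1e18)
print(f"{seconds / 3600:.1f} hours")  # 5.1 hours
```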

A Statistical View Of Cloud Storage

Cloud datacenters in many ways are like melting pots of technologies. The massive facilities hold a broad array of servers, storage systems, and networking hardware that come in a variety of sizes. Their components come with different speeds, capacities, bandwidths, power consumption, and pricing, and they are powered by different processor architectures, optimized for disparate applications, and carry the logos of a broad array of hardware vendors, from the largest OEMs to the smaller ODMs. Some hardware systems are homegrown or built atop open designs.

As such, they are good places to compare and contrast how the components of these

DARPA’s $200 Million JUMP Into Future Microelectronics

DARPA has always been about driving the development of emerging technologies for the benefit of both the military and the commercial world at large.

The Defense Advanced Research Projects Agency has been a driving force behind U.S. efforts around exascale computing, and in recent years has targeted everything from robotics and cybersecurity to big data and implantable devices. The agency has doled out millions of dollars to vendors like Nvidia and Rex Computing, as well as to national laboratories and universities, to explore new CPU and GPU technologies for upcoming exascale-capable systems that hold the promise of 1,000

The Machine Learning Opportunity in Manufacturing, Logistics

There is increasing pressure in such fields as manufacturing, energy and transportation to adopt AI and machine learning to help improve efficiencies in operations, optimize workflows, enhance business decisions through analytics and reduce costs in logistics.

We have talked about how industries like telecommunications and transportation are looking at recurrent neural networks to better forecast resource demand in supply chains. However, adopting AI and machine learning comes with its share of challenges. Companies whose datacenters are crowded with traditional CPU-powered systems now have to consider buying and bringing in GPU-based hardware that is better suited to

Deep Learning is the Next Platform for Pathology

It is a renaissance for companies that sell GPU-dense systems and low-power clusters that are right for handling AI inference workloads, especially as they look to the healthcare market – one that for a while was moving toward increasing compute on medical devices.

The growth of production deep learning in medical imaging and diagnostics has spurred investments in hospitals and research centers, pushing high performance systems for medicine back to the forefront.

We have written quite a bit about some of the emerging use cases for deep learning in medicine with an eye on the systems angle in particular, and while these

Networking With Intent

Networking has always been the laggard in the enterprise datacenter. As servers and then storage appliances became increasingly virtualized and disaggregated over the past 15 years or so, the network stubbornly stuck with the appliance model, closed and proprietary. As other datacenter resources became faster, more agile and easier to manage, many of those efficiencies were hobbled by the network, which could take months to program and could require new hardware before making any significant changes.

However slowly, and thanks largely to the hyperscalers and now telcos and other communications service providers, that has begun to change. The rise of

Google’s Vision for Mainstreaming Machine Learning

Here at The Next Platform, we’ve touched on the convergence of machine learning, HPC, and enterprise requirements, looking at ways vendors are trying to reduce the barriers so enterprises can leverage AI and machine learning to better address the rapid changes brought about by such emerging trends as the cloud, edge computing, and mobility.

At the SC17 show in November 2017, Dell EMC unveiled efforts underway to bring AI, machine learning and deep learning into the mainstream, similar to how the company and other vendors in recent years have been working to make it easier for enterprises

A New Architecture For NVM-Express

NVM-Express is the latest hot thing in storage, with server and storage array vendors big and small making a mad dash to bring the protocol into their products and get an advantage in what promises to be a fast-growing market.

With the rapid rise in the amount of data being generated and processed, and the growth of such technologies as artificial intelligence and machine learning in managing and processing the data, demand for faster speeds and lower latency in flash and other non-volatile memory will continue to increase in the coming years, and established companies like Dell EMC, NetApp
