Archive

Category Archives for "The Next Platform"

When No One Can Make Money In Systems

Making money in the information technology market has always been a challenge, but it keeps getting harder as tumultuous change in how companies consume compute, storage, and networking rips through all aspects of this $3 trillion market.

It is tough to know exactly what to do, and we see companies chasing the hot new things, doing acquisitions to bolster their positions, and selling off legacy businesses to generate the cash to do the deals and to keep Wall Street at bay. Companies like IBM, Dell, HPE, and Lenovo have sold things off and bought other things to try

When No One Can Make Money In Systems was written by Timothy Prickett Morgan at The Next Platform.

HPE Looks Ahead To Composable Infrastructure, Persistent Memory

Over the past several years, the server market has been roiled by the rise of cloud computing, which runs the applications companies create, and by hyperscaler services that augment or replace those applications. This is a tougher and lumpier market, to be sure.

The top tier cloud providers like Amazon, Microsoft, and Google not only have become key drivers in server sales but also have turned to original design manufacturers (ODMs) from Taiwan and China for lower cost systems to help populate their massive datacenters. Overall, global server shipments have slowed, and top-tier OEMs are working to

HPE Looks Ahead To Composable Infrastructure, Persistent Memory was written by Jeffrey Burt at The Next Platform.

Python Coils Around FPGAs for Broader Accelerator Reach

Over the last couple of years, much work has been shifted into making FPGAs more usable and accessible. From building around OpenCL for a higher-level interface to having reconfigurable devices available on AWS, there is momentum—but FPGAs are still far from the grip of everyday scientific and technical application developers.

In an effort to bridge the gap between FPGA acceleration and everyday domain scientists who are well-versed in using the common scripting language, a team from the University of Southern California has created a new platform for Python-based development that abstracts the complexity of using low-level approaches (HDL, Verilog). “Rather

Python Coils Around FPGAs for Broader Accelerator Reach was written by Nicole Hemsoth at The Next Platform.

Machine Learning on Stampede2 Supercomputer to Bolster Brain Research

In our ongoing quest to understand the human mind and banish abnormalities that interfere with life, we’ve always drawn upon the most advanced science available. During the last century, neuroimaging – most recently, the magnetic resonance imaging (MRI) scan – has held the promise of showing the connection between brain structure and brain function.

Just last year, cognitive neuroscientist David Schnyer and colleagues Peter Clasen, Christopher Gonzalez, and Christopher Beevers published a compelling new proof of concept in Psychiatry Research: Neuroimaging. It suggests that machine learning algorithms running on high-performance computers to classify neuroimaging data may deliver the most
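The core idea can be sketched in miniature: represent each scan as a feature vector and train a classifier to separate the groups. The snippet below is a deliberately simplified stand-in (a nearest-centroid classifier rather than whatever model the paper actually used), with made-up four-voxel "scans" — an illustration of the general approach, not the authors' method.

```python
from math import dist

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labeled):
    """labeled: dict mapping class label -> list of feature vectors."""
    return {label: centroid(vecs) for label, vecs in labeled.items()}

def classify(model, vector):
    """Assign the label of the nearest class centroid."""
    return min(model, key=lambda label: dist(model[label], vector))

# Invented toy data: each "scan" reduced to four voxel intensities
training = {
    "control":   [[0.1, 0.2, 0.1, 0.0], [0.2, 0.1, 0.0, 0.1]],
    "depressed": [[0.8, 0.9, 0.7, 0.8], [0.9, 0.8, 0.8, 0.9]],
}
model = train(training)
print(classify(model, [0.85, 0.9, 0.75, 0.8]))
```

Real neuroimaging classification works on millions of voxels per scan, which is why the work lands on a supercomputer like Stampede2 rather than a laptop.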

Machine Learning on Stampede2 Supercomputer to Bolster Brain Research was written by Nicole Hemsoth at The Next Platform.

Unifying Oil and Gas Data at Scale

The oil and gas industry has been among the most aggressive in pursuing internet of things (IoT), cloud and big data technologies to collect, store, sort and analyze massive amounts of data in both the drilling and refining sectors to improve efficiencies and decision-making capabilities. Systems are increasingly becoming automated, sensors are placed throughout processes to send back data on the various systems, and software has been put in place to crunch that data into useful information.

According to a group of researchers from Turkey, the oil and gas industry is well suited to embrace all the new

Unifying Oil and Gas Data at Scale was written by Jeffrey Burt at The Next Platform.

Logistics in Application Path of Neural Networks

Accurately forecasting resource demand within the supply chain has never been easy, particularly given the constantly changing nature of the data over time.

What may have been true in measurements around demand or logistics one minute might be entirely different an hour, day or week later, which can throw off a short-term load forecast (STLF) and lead to costly over- or under-estimations, which in turn can lead to too much or too little supply.

To improve such forecasts, there are multiple efforts underway to create new models that can more accurately predict load needs, and while they have
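To make the problem concrete, here is one classic STLF baseline in miniature: simple exponential smoothing, which weights recent observations more heavily than older ones. Real load forecasting models are far richer (weather, calendar effects, and the neural networks the article discusses); this sketch, with invented demand numbers, only shows the core difficulty that the forecast must keep adapting as the data drifts.

```python
def exp_smooth_forecast(loads, alpha=0.5):
    """One-step-ahead forecast from a history of load readings.

    alpha near 1 tracks recent changes quickly; alpha near 0 smooths
    them out. Picking alpha badly is one way a short-term load
    forecast ends up over- or under-estimating demand.
    """
    level = loads[0]
    for x in loads[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

history = [100, 102, 110, 130, 150]  # demand trending sharply upward
print(round(exp_smooth_forecast(history, alpha=0.8), 1))
```

Note how even a responsive alpha lags a strong trend — the forecast trails the latest reading — which is the kind of systematic error the newer models aim to close.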

Logistics in Application Path of Neural Networks was written by Jeffrey Burt at The Next Platform.

HPC Center NERSC Eases Path to Optimization at Scale

The National Energy Research Scientific Computing Center (NERSC) application performance team knows that for many users, “optimization is hard.” They’ve thought a lot about how to distill the application optimization process for users in a way that would resonate with them.

One of the analogies they use is the “Ant Farm.” Optimizing code is like continually “running a lawnmower over a lawn to find and cut down the next tallest blade of grass,” where the blade of grass is analogous to a code bottleneck that consumes the greatest amount of runtime. One of the challenges is that each bottleneck
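The "lawnmower" loop can be sketched with Python's built-in profiler: run the code under cProfile, find the function with the most cumulative time (the tallest blade of grass), optimize it, and repeat. The workload below is an invented toy, not a NERSC code, and real HPC profiling uses tools like those NERSC documents rather than cProfile — this only illustrates the iterative process.

```python
import cProfile
import pstats

def hot_loop():
    # Deliberately dominates the runtime -- the "tallest blade of grass"
    return sum(i * i for i in range(200_000))

def cheap_step():
    return 42

profiler = cProfile.Profile()
profiler.enable()
cheap_step()
hot_loop()
profiler.disable()

# pstats.Stats.stats maps (file, line, name) -> (cc, nc, tt, ct, callers);
# ct is cumulative time, the metric this kind of pass sorts by.
stats = pstats.Stats(profiler)
hottest = max(
    ((name, ct)
     for (filename, lineno, name), (cc, nc, tt, ct, callers) in stats.stats.items()
     if name in ("hot_loop", "cheap_step")),
    key=lambda pair: pair[1],
)[0]
print(hottest)  # the next target for optimization
```

Once the hottest function is fixed, the next-tallest blade takes its place at the top of the profile, and the mower makes another pass.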

HPC Center NERSC Eases Path to Optimization at Scale was written by Nicole Hemsoth at The Next Platform.

Memory-Like Storage Means File Systems Must Change

The term software defined storage is in the new job title that Eric Barton has at DataDirect Networks, and he is a bit amused by this. As one of the creators of early parallel file systems for supercomputers, and one of the people who took the Lustre file system from a handful of supercomputing centers to one of the two main data management platforms for high performance computing, Barton has, in a certain way of looking at it, always been doing software-defined storage.

The world has just caught up with the idea.

Now Barton, who is leaving Intel in the

Memory-Like Storage Means File Systems Must Change was written by Timothy Prickett Morgan at The Next Platform.

A Health Check For Code And Infrastructure In The Cloud

As businesses continue their migration to the cloud, the issue of monitoring the performance and health of their applications gets more challenging as they try to track them across on-premises environments and both private and public clouds. At the same time, as they become more cloud-based, they have to keep an eye on the entire stack, from the customer-facing applications to the underlying infrastructure they run on.

Since its founding eight years ago, New Relic has steadily built upon its first product, a cloud-based application performance management (APM) tool that is designed to assess how well the

A Health Check For Code And Infrastructure In The Cloud was written by Jeffrey Burt at The Next Platform.

One Programming Model To Support Them All

Many hands make light work, or so they say. So do many cores, many threads and many data points when addressed by a single computing instruction. Parallel programming – writing code that breaks down computing problems into small parts that run in unison – has always been a challenge. Since 2011, OpenACC has been gradually making it easier.  OpenACC is a de facto standard set of parallel extensions to Fortran, C and C++ designed to enable portable parallel programming across a variety of computing platforms.

Compilers, which are used to translate higher-level programming languages into binary executables, first appeared in

One Programming Model To Support Them All was written by Nicole Hemsoth at The Next Platform.

The Last Itanium, At Long Last

In a world of survival of the fittest coupled with mutations, something always has to be the last of its kind. And so it is with the “Kittson” Itanium 9700 processors, which Intel quietly released earlier this month and which will mostly see action in the last of the Integrity line of midrange and high-end systems from Hewlett Packard Enterprise.

The Itanium line has a complex history, perhaps fitting for a computing architecture that was evolving from the 32-bit X86 architecture inside of Intel and that was taken in a much more experimental and bold direction when the aspiring server

The Last Itanium, At Long Last was written by Timothy Prickett Morgan at The Next Platform.

Some Surprises in the 2018 DoE Budget for Supercomputing

The US Department of Energy fiscal year 2018 budget request is in. While it reflects much of what we might expect in pre-approval format in terms of forthcoming supercomputers in particular, there are some elements that strike us as noteworthy.

In the just-released 2018 FY budget request from Advanced Scientific Computing Research (ASCR), page eight of the document states that “The Argonne Leadership Computing Facility will operate Mira (at 10 petaflops) and Theta (at 8.5 petaflops) for existing users, while turning focus to site preparations for deployment of an exascale system of novel architecture.”

Notice anything missing in this description?

Some Surprises in the 2018 DoE Budget for Supercomputing was written by Nicole Hemsoth at The Next Platform.

Under The Hood Of Google’s TPU2 Machine Learning Clusters

As we previously reported, Google unveiled its second-generation Tensor Processing Unit (TPU2) at Google I/O last week. Google calls this new generation “Google Cloud TPUs”, but provided very little information about the TPU2 chip and the systems that use it other than to provide a few colorful photos. Pictures do say more than words, so in this article we will dig into the photos and provide our thoughts based on the pictures and on the few bits of detail Google did provide.

To start with, it is unlikely that Google will sell TPU-based chips, boards, or servers – TPU2

Under The Hood Of Google’s TPU2 Machine Learning Clusters was written by Timothy Prickett Morgan at The Next Platform.

FPGA Startup Gathers Funding Force for Merged Hyperscale Inference

Around this time last year, we delved into a new FPGA-based architecture from startup DeePhi Tech that targeted efficient, scalable machine learning inference. The company just closed its first funding round, an undisclosed sum from major investors including Banyan Capital and, as we learned this week, FPGA maker Xilinx.

As that initial article details, the Stanford and Tsinghua University-fed research focused on network pruning and compression at low precision with a device that could be structured for low latency and custom memory allocations. These efforts were originally built on Xilinx FPGA hardware and given this first round of

FPGA Startup Gathers Funding Force for Merged Hyperscale Inference was written by Nicole Hemsoth at The Next Platform.

Big Bang For The Buck Jump With Volta DGX-1

One of the reasons why Nvidia has been able to quadruple revenues for its Tesla accelerators in recent quarters is that it doesn’t just sell raw accelerators and PCI-Express cards, but has become a system vendor in its own right through its DGX-1 server line. The company has also engineered new adapter cards specifically aimed at hyperscalers that want to crank up the performance of their machine learning inference workloads with a cheaper and cooler Volta GPU.

Nvidia does not break out revenues for the DGX-1 line separately from other Tesla and GRID accelerator product sales, but we

Big Bang For The Buck Jump With Volta DGX-1 was written by Timothy Prickett Morgan at The Next Platform.

Singularity is the Hinge To Swing HPC Cloud Adoption

For almost a decade now, the cloud has been pitched as a cost-effective way to bring supercomputing out of the queue and into public IaaS or HPC on-demand environments. While there are certainly many use cases to prove that tightly-coupled problems can still work in the cloud despite latency hits (among other issues), application portability is one sticking point.

For instance, let’s say you have developed a financial modeling application on an HPC on demand service to prove that the model works so you can make the case for purchasing a large cluster to run it at scale on-prem. This
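Singularity’s answer to that portability problem is the container definition file: describe the application’s whole environment once, build an image, and run the identical image on the on-demand service and on the eventual on-prem cluster. A minimal, hypothetical sketch (the base image, packages, and paths here are invented for illustration):

```
Bootstrap: docker
From: ubuntu:16.04

%post
    # Install the model's dependencies once, inside the image
    apt-get update && apt-get install -y python3 python3-numpy

%runscript
    # The container runs the same way wherever the image lands
    exec python3 /opt/model/run_model.py "$@"
```

Because the image travels with the application, the case made on the rented cluster holds on the purchased one.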

Singularity is the Hinge To Swing HPC Cloud Adoption was written by Nicole Hemsoth at The Next Platform.

AMD Disrupts The Two-Socket Server Status Quo

It is funny to think that Advanced Micro Devices has been around almost as long as the IBM System/360 mainframe, and that it was founded the same year the United States landed people on the moon. The company has gone through many gut-wrenching transformations, adapting to changing markets. Like IBM and Apple, just to name two, AMD has had its share of disappointments and near-death experiences, but unlike Sun Microsystems, Silicon Graphics, Sequent Computer, Data General, Tandem Computer, and Digital Equipment, it has managed to stay independent and live to fight another day.

AMD wants a second chance in the datacenter,

AMD Disrupts The Two-Socket Server Status Quo was written by Timothy Prickett Morgan at The Next Platform.

First In-Depth Look at Google’s New Second-Generation TPU

It was only just last month that we spoke with Google distinguished hardware engineer, Norman Jouppi, in depth about the tensor processing unit used internally at the search giant to accelerate deep learning inference, but that device—that first TPU—is already appearing rather out of fashion.

This morning at Google’s I/O event, the company stole Nvidia’s recent Volta GPU thunder by releasing details about its second-generation tensor processing unit (TPU), which will manage both training and inference on a rather staggering 180 teraflops system board, complete with a custom network to lash several together into “TPU pods” that can deliver Top

First In-Depth Look at Google’s New Second-Generation TPU was written by Nicole Hemsoth at The Next Platform.

The Embiggening Bite That GPUs Take Out Of Datacenter Compute

We are still chewing through all of the announcements and talk at the GPU Technology Conference that Nvidia hosted in its San Jose stomping grounds last week, and as such we are thinking about the much bigger role that graphics processors are playing in datacenter compute – a realm that has seen five decades of dominance by central processors of one form or another.

That is how CPUs got their name, after all. And perhaps this is a good time to remind everyone that systems used to be a collection of different kinds of compute, and that is why the

The Embiggening Bite That GPUs Take Out Of Datacenter Compute was written by Timothy Prickett Morgan at The Next Platform.

Cray Supercomputing as a Service Becomes a Reality

For a mature company that kickstarted supercomputing as we know it, Cray has done a rather impressive job of reinventing itself over the years.

From its original vector machines, to HPC clusters with proprietary interconnects and custom software stacks, to graph analytics appliances engineered in-house, and now to machine learning, the company tends not to let trends in computing slip by without a new machine.

However, all of this engineering and tuning comes at a cost—something that, arguably, has kept Cray at bay when it comes to reaching the new markets that sprung up in the “big data” days of

Cray Supercomputing as a Service Becomes a Reality was written by Nicole Hemsoth at The Next Platform.