We are thrilled to announce the full-time addition of veteran IT journalist Jeffrey Burt to The Next Platform's ranks.
Jeffrey Burt has been a journalist for more than 30 years, with the last 16-plus years spent writing about the IT industry. During his long tenure with eWeek, he covered a broad range of subjects, from processors and IT infrastructure to collaboration, PCs, AI, and autonomous vehicles.
He’s written about FPGAs, supercomputers, hyperconverged infrastructure and SDN, cloud computing, deep learning and exascale computing. Regular readers here will recognize that his expertise in these areas fits in directly with our coverage …
Veteran IT Journalist, Jeffrey Burt, Joins The Next Platform as Senior Editor was written by Nicole Hemsoth at The Next Platform.
Hardware and device makers are in a mad dash to create or acquire the perfect chip for performing deep learning training and inference. While we have yet to see anything that can handle both parts of the workload on a single chip with spectacular results (the Pascal general-purpose GPUs are the closest thing yet, with threats coming from Intel/Nervana in the future), there is promise for FPGAs to make inroads.
So far, most of the work we have focused on for FPGAs and deep learning has centered more on the acceleration of inference versus boosting training times and accuracy …
OpenCL Opens Doors to Deep Learning Training on FPGA was written by Nicole Hemsoth at The Next Platform.
The race is on to carve a path to efficient extreme-scale machines in the next five years, but existing processing approaches fall far short of the efficiency and performance targets required. As we reported at the end of 2016, the Department of Energy in the U.S. is keeping its eye on non-standard processing approaches for one of its exascale-class systems by 2021, and other groups, including the IEEE, are equally keeping pace with new architectures to explore as CMOS alternatives.
While there is no silver bullet technology yet that we expect will sweep current computing norms, superconducting circuits appear …
IARPA Spurs Race to Speed Cryogenic Computing Reality was written by Nicole Hemsoth at The Next Platform.
Breaking into the switch market is not an easy task, whether you are talking about providing whole switches or just the chips that drive them. But there is always room for innovation, which is why some of the upstarts have a pretty credible chance to shake up networking, which is the last bastion of proprietary technology within the datacenter.
Barefoot Networks is one of the up-and-coming switch chip makers, with its “Tofino” family of ASICs that, among other things, has circuits and software that allow for the data plane – that part of the device that controls how data moves …
Hyperscalers Ready To Run Barefoot In The Datacenter was written by Timothy Prickett Morgan at The Next Platform.
High performance computing (HPC) is traditionally considered the domain of large, purpose-built machines running some *nix operating system (predominantly Linux in recent years). Windows is given little, if any, consideration. Indeed, it has never accounted for even a full percent of the Top500 list. Some of this may be due to technical considerations: Linux can be custom built for optimum performance, including recompiling the kernel. It is also historically more amenable to headless administration, which is a critical factor when maintaining thousands of nodes.
But at some point does the “Windows isn’t for high-performance computing” narrative become self-fulfilling? …
Looking Through the Windows at HPC OS Trends was written by Nicole Hemsoth at The Next Platform.
High performance computing in its various guises is not determined solely by the kind and amount of computing that is made available at scale to applications. More and more, the choice of network adapters and switches, as well as the software stack that links the network to applications, plays an increasingly important role. Moreover, networks are consuming a larger and larger portion of the cluster budget, too.
So picking the network that lashes servers to each other and to their shared storage is important. And equally important is having a roadmap for the technology that is going to provide …
The Relentless Yet Predictable Pace Of InfiniBand Speed Bumps was written by Timothy Prickett Morgan at The Next Platform.
Every successive processor generation presents its own challenges to all chip makers, and the ramp of the 14 nanometer processes that will be used in the future “Skylake” Xeon processors, due in the second half of this year, cut into the operating profits of Intel’s Data Center Group in the final quarter of 2016. Intel also apparently had an issue with one of its chip lines – it did not say if it was a Xeon or Xeon Phi, or detail what that issue was – that needed to be fixed, and that hurt Data Center Group’s middle line, too.
Still, …
Skylake Xeon Ramp Cuts Into Intel’s Datacenter Profits was written by Timothy Prickett Morgan at The Next Platform.
With the current data science boom, many companies and organizations are stepping outside of their traditional business models to scope work that applies rigorous quantitative methodology and machine learning – areas of analysis previously in the realm of HPC organizations.
Dr. Franz Kiraly, an inaugural Faculty Fellow at the Alan Turing Institute, observed at the recent Intel HPC developer conference that companies are not necessarily struggling with “big” data, but rather with data management issues as they begin to systematically and electronically collect specific data in one place that makes analytics feasible. These companies, as newcomers to “machine learning” and …
A Case for CPU-Only Approaches to HPC, Analytics, Machine Learning was written by Nicole Hemsoth at The Next Platform.
Last week we discussed the projected momentum for FPGAs in the cloud with Deepak Singh, general manager of container and HPC projects at Amazon Web Services. In the second half of our interview, we delved into the current state of high performance computing on Amazon’s cloud.
While the company tends to offer generalizations rather than specific breakdowns of “typical” workloads for different HPC application types, the insight reveals a continued emphasis on pushing new instances to feed both HPC and machine learning, a continued drive to push ISVs to expand license models, and continued work to make running complex workflows more seamless. …
AWS Outlines Current HPC Cloud User Trends was written by Nicole Hemsoth at The Next Platform.
Over the last couple of years, we have focused extensively on the hardware required for training deep neural networks and other machine learning algorithms. Focal points have included the use of general purpose and specialized CPUs, GPUs, custom ASICs, and more recently, FPGAs.
As the battle to match the correct hardware devices for these training workloads continues, another has flared up on the deep learning inference side. Training neural networks has its own challenges that can be met with accelerators, but for inference, the efficiency, performance, and accuracy need to be in balance.
One developing area in inference is in …
FPGAs Focal Point for Efficient Neural Network Inference was written by Nicole Hemsoth at The Next Platform.
High performance computing is a hot field, and not just in the sense that it gets a lot of attention. The hardware necessary to run the countless simulations performed every day consumes a lot of power, which is largely turned into heat. How to handle all of that heat is a subject that is always on the mind of facilities managers. If the thermal energy is not moved elsewhere in short order, the delicate electronics that comprise the modern computer will cease to function.
The computer room air handler (CRAH) is the usual approach. Chilled water chills the air, which …
Heating Up the Exascale Race by Staying Cool was written by Nicole Hemsoth at The Next Platform.
It is the first month of a new year, and this is the time that IBM traditionally reorganizes its business lines and plays musical chairs with its executives to reconfigure itself for the coming year. And just like clockwork, late last week the top brass at Big Blue made internal announcements explaining the changes it is making to transform its wares into a platform better suited to the times.
The first big change, and one that may have precipitated all of the others that have been set in place, is Robert LeBlanc, who is the senior vice president …
IBM Reorg Forges Cognitive Systems, Merges Cloud And Analytics was written by Timothy Prickett Morgan at The Next Platform.
Over the last two years, we have highlighted deep learning use cases in enterprise areas including genomics, large-scale business analytics, and beyond, but many market areas are still building a profile for where such approaches fit into existing workflows. Even where model training and inference might be useful, areas with complex simulation-driven workflows could gain great efficiencies from deep neural nets, but integrating those elements is difficult.
The oil and gas industry is one area where deep learning holds promise, at least in theory. For some steps in the resource …
Refining Oil and Gas Discovery with Deep Learning was written by Nicole Hemsoth at The Next Platform.
For a long time now, researchers have been working on automating the process of breaking up otherwise single-threaded code to run on multiple processors by way of multiple threads. The results, although occasionally successful, have stopped well short of anything approaching a unified theory of everything.
Still, there appears to be some interesting success via OpenMP. The good thing about OpenMP is that its developers realized that what is really necessary is for the C or Fortran programmer to provide just enough hints to the compiler that say “Hey, this otherwise single-threaded loop, this sequence of code, might benefit from being split amongst multiple …
Multi-Threaded Programming By Hand Versus OpenMP was written by Timothy Prickett Morgan at The Next Platform.
At the end of 2016, Amazon Web Services announced it would be making high-end Xilinx FPGAs available via a cloud delivery model, beginning first in a developer preview mode before branching out with higher-level tools to help potential new users onboard and experiment with FPGA acceleration as the year rolls on.
As Deepak Singh, General Manager for the Container and HPC division within AWS tells The Next Platform, the application areas where the most growth is expected for cloud-based FPGAs are many of the same we detailed in our recent book, FPGA Frontiers: New Applications in Reconfigurable Computing. These …
AWS Details FPGA Rationale and Market Trajectory was written by Nicole Hemsoth at The Next Platform.
The HPC industry has been waiting a long time for the ARM ecosystem to mature enough to yield real-world clusters, with hundreds or thousands of nodes and running a full software stack, as a credible alternative to clusters based on X86 processors. But the wait is almost over, particularly if the Mont-Blanc 3 system that will be installed by the Barcelona Supercomputer Center is any indication.
BSC has never been shy about trying new architectures in its clusters, and the original Mare Nostrum super that was installed a decade ago and that ranked fifth on the Top 500 list when it …
BSC’s Mont Blanc 3 Puts ARM Inside Bull Sequana Supers was written by Timothy Prickett Morgan at The Next Platform.
It takes an incredible amount of resilience for any company to make it decades, much less more than a century, in any industry. IBM has taken big risks to create new markets, first with time clocks and meat slicers and tabulating machines early in the last century, and some decades later it created the modern computer industry with the System/360 mainframe. It survived a near-death experience in the mid-1990s when the IT industry was changing faster than it was, and now it is trying to find its footing in cognitive computing and public and private clouds as its legacy …
The New IBM Glass Is Almost Half Full was written by Timothy Prickett Morgan at The Next Platform.
Rumors have been running around for months that Hewlett Packard Enterprise was shopping around for a way to be a bigger player in the hyperconverged storage arena, and the recent scuttlebutt was that HPE was considering paying close to $4 billion for one of the larger players in server-storage hybrids. This turns out not to be true. HPE is paying only $650 million to snap up what was, until now, thought to be one of Silicon Valley’s dozen or so unicorns with a valuation of over $1 billion.
It is refreshing to see that HPE is not overpaying for an …
HPE Gets Serious About Hyperconverged Storage With SimpliVity Buy was written by Timothy Prickett Morgan at The Next Platform.
In the prior two articles in this series, we have gone through the theory behind programming multi-threaded applications, with the management of shared memory being accessed by multiple threads, and of even creating those threads in the first place. Now, we need to put one such multi-threaded application together and see how it works. You will find that the pieces fall together remarkably easily.
If we wanted to build a parallel application using multiple threads, we would likely first think of one where we split up a loop amongst the threads. We will be looking at such later in a …
The Essence Of Multi-Threaded Applications was written by Timothy Prickett Morgan at The Next Platform.
When it comes to low-latency interconnects for high performance computing, InfiniBand immediately springs to mind. On the most recent Top 500 list, over 37 percent of systems used some form of InfiniBand – the highest representation of any interconnect family. Since 2009, InfiniBand has occupied between 30 and 51 percent of every Top 500 list.
But when you look to the clouds, InfiniBand is hard to find. Of the three major public cloud offerings (Amazon Web Services, Google Cloud, and Microsoft Azure), only Azure currently has an InfiniBand offering. Some smaller players do as well (ProfitBricks, for example), but it’s …
Adapting InfiniBand for High Performance Cloud Computing was written by Nicole Hemsoth at The Next Platform.