Author Archives: Nicole Hemsoth
This year at the International Supercomputing Conference (ISC) Dr. Keren Bergman from the Lightwave Research Lab at Columbia University detailed the road ahead for optical interconnects in supercomputing and what it means for exascale energy efficiency and beyond. …
Efficiency Gains of Optical Interconnects at Exascale was written by Nicole Hemsoth at The Next Platform.
There is no way to predict which quantum system will garner system share in the coming years, but most large chip, system, and software companies are hedging their bets and establishing a quantum research effort in the form of hardware, simulations, or software stacks. …
Microsoft Builds Quantum Strategy Around Q# was written by Nicole Hemsoth at The Next Platform.
FPGA maker Xilinx has acquired Chinese deep learning chip startup DeePhi Tech for an undisclosed sum. …
FPGA Maker Snaps Up Deep Learning Chip Startup was written by Nicole Hemsoth at The Next Platform.
No matter what language or techniques are being applied, there are enough similarities between data science approaches that some broad parallels can be drawn. …
Why an Active Ontology Matters for Data Science was written by Nicole Hemsoth at The Next Platform.
HPC luminary Jack Dongarra (University Distinguished Professor, University of Tennessee) presented a new direction for math libraries at the International Supercomputing Conference (ISC) in Frankfurt, Germany in his presentation “Numerical Linear Algebra for Future Extreme-Scale Systems (NLAFET)”. …
A New Direction for HPC Math Libraries was written by Nicole Hemsoth at The Next Platform.
It has been difficult for the supercomputing community to watch the tools they have honed over many years get snatched up and popularized by a more commercially oriented community like AI. …
The Supercomputing of 2025 Looks Much Like AI Today was written by Nicole Hemsoth at The Next Platform.
Most companies in the HPC space have made the natural transition to moving into the deep learning and AI space over the last couple of years. …
Cray Bolsters HPC to AI Crossover Appeal was written by Nicole Hemsoth at The Next Platform.
When it comes to deep learning chip startups, hype moves fast but crossing the finish line to real production silicon takes an incredibly long time. …
MIPS in Hand, AI Chip Startup Wave Computing Delivers First Silicon was written by Nicole Hemsoth at The Next Platform.
There are several competing processor efforts targeting deep learning training and inference but even for these specialized devices, the old performance ghosts found in other areas haunt machine learning as well. …
The Neuromorphic Memory Landscape for Deep Learning was written by Nicole Hemsoth at The Next Platform.
Just because supercomputers are engineered to be far more powerful over time does not necessarily mean programmer productivity will follow the same curve. Added performance means more complexity, which for parallel programmers means working harder, especially when accelerators are thrown into the mix.
It was with all of this in mind that DARPA’s High Productivity Computing Systems (HPCS) program rolled out in 2002 to support higher performance but more usable HPC systems. This is the era that spawned systems like Blue Waters, for instance, and various co-design efforts from IBM and Intel to bring parallelism within broader reach of the …
Will Chapel Mark Next Great Awakening for Parallel Programmers? was written by Nicole Hemsoth at The Next Platform.
There has been plenty of talk about where FPGA acceleration might fit into high performance computing but there are only a few testbeds and purpose-built clusters pushing this vision forward for scientific applications.
While we do not necessarily expect supercomputing centers to turn their backs on GPUs as the accelerator of choice in favor of FPGAs anytime in the foreseeable future, there is some smart investment happening in Europe and, to a lesser extent, in the U.S. that takes advantage of recent hardware additions and high-level tool development that put field programmable devices within closer reach, even for centers whose users …
Another Step Toward FPGAs in Supercomputing was written by Nicole Hemsoth at The Next Platform.
Bioinspired computing is nothing new but with the rise in mainstream interest in machine learning, these architectures and software frameworks are seeing fresh light. This is prompting a new wave of young companies that are cropping up to provide hardware, software, and management tools—something that has also spurred a new era of thinking about AI problems.
We most often think of these innovations happening at the server and datacenter level, but more algorithmic work is being done (to better suit embedded hardware) to deploy comprehensive models on mobile devices that allow for long-term learning on single instances of object recognition …
Momentum for Bioinspired GPU Computing at the Edge was written by Nicole Hemsoth at The Next Platform.
When pre-split Hewlett-Packard bought Aruba Networks three years ago for $3 billion, the goal was to create a stronger and larger networking business that combined both wired and wireless networking capabilities and could challenge market leader Cisco Systems at a time when enterprises were more fully embracing mobile computing and public clouds.
Aruba was launched in 2002 and by the time of the acquisition had established itself as a leading vendor in the wireless networking market and had an enthusiastic following of users who call themselves “Airheads.” The worry among many of them was that once the deal was closed, …
Aruba Networks Leads HPE to the Edge was written by Nicole Hemsoth at The Next Platform.
For those who might expect Microsoft to favor its own Windows-centric platforms and tools to power comprehensive infrastructure for serving AI compute and software services for internal R&D groups, prepare to be surprised.
While Microsoft does rely on some core Windows features and certainly its Azure cloud services, much of its infrastructure is powered by a broad suite of open source tools. As Jim Jernigan, senior R&D systems engineer at Microsoft Research, told us at the GPU Technology Conference (GTC18) this week, the highest volume of workloads running on the diverse research clusters Microsoft uses for AI development are running …
An Inside Look at What Powers Microsoft’s Internal Systems for AI R&D was written by Nicole Hemsoth at The Next Platform.
Big iron aficionados packed the room when ORNL’s Jack Wells gave the latest update on the upcoming 207 petaflop Summit supercomputer at the GPU Technology Conference (GTC18) this week.
In just eight years, the folks at Oak Ridge have pushed the high performance bar from the 18.5 teraflop Phoenix system to the 27 petaflop Titan, a more than 1,000x improvement.
Summit will deliver 5-10x more performance than the existing Titan machine, but what is noteworthy is how Summit will do this. The system is set to have far fewer nodes (18,688 for Titan vs. ~4,800 for Summit) …
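The striking part of those figures is the per-node jump: total performance goes up 5-10x while the node count drops by roughly a factor of four. A quick back-of-the-envelope calculation using only the numbers quoted above makes the point:

```python
# Rough per-node throughput comparison from the figures cited above:
# Titan: 27 petaflops across 18,688 nodes; Summit: 207 petaflops across ~4,800 nodes.
def per_node_teraflops(total_petaflops, nodes):
    """Average teraflops per node (1 petaflop = 1,000 teraflops)."""
    return total_petaflops * 1000.0 / nodes

titan = per_node_teraflops(27, 18688)    # roughly 1.4 TF per node
summit = per_node_teraflops(207, 4800)   # roughly 43 TF per node
print(f"Titan: {titan:.1f} TF/node, Summit: {summit:.1f} TF/node, "
      f"about {summit / titan:.0f}x more per node")
```

The roughly 30x per-node gain reflects the shift to a small number of fat, GPU-dense nodes rather than a large number of thin ones.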
A First Look at Summit Supercomputer Application Performance was written by Nicole Hemsoth at The Next Platform.
Machine learning has moved from prototype to production across a wide range of business units at financial services giant Capital One due in part to a centralized approach to evaluating and rolling out new projects.
This is no easy task given the scale and scope of the enterprise, but according to Zachary Hanif, who is director of Capital One’s machine learning “center of excellence”, the trick is to define use cases early that touch as broad a base within the larger organization as possible and build outwards. This is encapsulated in the philosophy Hanif spearheads—locating machine learning talent in …
Capital One Machine Learning Lead on Lessons at Scale was written by Nicole Hemsoth at The Next Platform.
There are few people as visible in high performance computing programming circles as Michael Wolfe—and fewer still with his level of experience. With 20 years working on PGI compilers and another 20 years before that working on languages and HPC compilers in industry, when he talks about the past, present and future of programming supercomputers, it is worthwhile to listen.
In his early days at PGI (formerly known as The Portland Group) Wolfe focused on building out the company’s suite of Fortran, C, and C++ compilers for HPC, a role that changed after Nvidia Tesla GPUs came onto the scene and …
The Future of Programming GPU Supercomputers was written by Nicole Hemsoth at The Next Platform.
We all know about the Top 500 supercomputing benchmark, which measures raw floating point performance. But over the past several years there has been talk that this no longer represents real-world application performance.
This has opened the door for a new benchmark to come to the fore, in this case the high performance conjugate gradients benchmark, or HPCG.
Here to talk about this on today’s episode of “The Interview” with The Next Platform is one of the creators of HPCG, Sandia National Lab’s Dr. Michael Heroux. Interestingly, Heroux co-developed HPCG with one of the founders of the Top …
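HPCG is built around the conjugate gradient method, whose inner loop is dominated by sparse matrix-vector products and dot products: memory-bandwidth-bound operations rather than the dense compute that LINPACK rewards. As an illustration only (HPCG itself runs a sparse, multigrid-preconditioned variant at massive scale), here is a minimal unpreconditioned CG solver in plain Python:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for symmetric positive-definite A (dense, list-of-lists).

    Each iteration does one matrix-vector product plus a few dot products,
    the same operation mix that makes HPCG bandwidth-bound at scale.
    """
    n = len(b)
    x = [0.0] * n
    r = b[:]            # residual b - A@x, with x = 0 initially
    p = r[:]            # search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:
            break
        beta = rs_new / rs_old
        p = [r[i] + beta * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# Example: solve [[4,1],[1,3]] x = [1,2]; exact answer is [1/11, 7/11].
x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
print(x)
```

Because every iteration streams the whole matrix through memory, the benchmark stresses memory and interconnect bandwidth far more than peak flops, which is exactly the balance HPCG was designed to measure.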
What’s Ahead for Supercomputing’s Balanced Benchmark was written by Nicole Hemsoth at The Next Platform.
Containerization as a concept of isolating application processes while sharing the same operating system (OS) kernel has been around since the beginning of this century. Its journey started as early as Jails in the FreeBSD era. Jails heavily leveraged the chroot environment but expanded capabilities to include a virtualized path to other system attributes such as storage, interconnects, and users. Solaris Zones and AIX Workload Partitions also fall into a similar category.
Since then, the advent and advancement in technologies such as cgroups, systemd and user-namespaces greatly improved the security and isolation of containers when compared to their …
Singularity Containers for HPC & Deep Learning was written by Nicole Hemsoth at The Next Platform.
On today’s episode of “The Interview” with The Next Platform, we discuss the role of higher level interfaces to common machine learning and deep learning frameworks, including Caffe.
Despite the existence of multiple deep learning frameworks, there is a lack of comprehensible and easy-to-use high-level tools for the design, training, and testing of deep neural networks (DNNs), according to this episode’s guest, Soren Klemm, one of the creators of Python-based Barista, which is an open-source graphical high-level interface for the Caffe framework.
While Caffe is one of the most popular frameworks for training DNNs, editing prototxt files in …
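To see why a graphical front end is attractive, consider what hand-editing a Caffe network definition involves. A representative (hypothetical) fragment of a prototxt file, defining a single fully connected layer, looks like this:

```
layer {
  name: "fc1"              # hypothetical layer name for illustration
  type: "InnerProduct"     # Caffe's fully connected layer type
  bottom: "data"           # input blob
  top: "fc1"               # output blob
  inner_product_param {
    num_output: 128        # number of output neurons
  }
}
```

A realistic network repeats stanzas like this dozens of times, and a typo in any blob name silently breaks the graph, which is the kind of bookkeeping a tool like Barista is meant to take off the user’s hands.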
Using Python to Snake Closer to Simplified Deep Learning was written by Nicole Hemsoth at The Next Platform.