
Author Archives: Nicole Hemsoth

IBM Research Lead Charts Scope of Watson AI Effort

Over the past few years, IBM has been devoting a great deal of corporate energy to developing Watson, the company’s Jeopardy-beating supercomputing platform. Watson represents a larger focus at IBM that integrates machine learning and data analytics technologies to bring cognitive computing capabilities to its customers.

To find out how the company perceives its own invention, we asked IBM Fellow Dr. Alessandro Curioni to characterize Watson and how it has evolved into new application domains. Curioni will be speaking on the subject at the upcoming ISC High Performance conference. He is an IBM Fellow, Vice President Europe and

IBM Research Lead Charts Scope of Watson AI Effort was written by Nicole Hemsoth at The Next Platform.

Intel Stretches Deep Learning on Scalable System Framework

The strong interest in deep learning lies in the ability of neural networks to solve complex pattern recognition tasks – sometimes better than humans. Once trained, these machine learning solutions can run very quickly – even in real time – and very efficiently on low-power mobile devices and in the datacenter.

However, training a machine learning algorithm to accurately solve complex problems requires large amounts of data, which greatly increases the computational workload. Scalable distributed parallel computing using a high-performance communications fabric is an essential part of what makes the training of deep learning on large complex datasets
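To make the scaling pressure concrete, below is a minimal sketch of synchronous data-parallel training, the usual pattern behind distributed deep learning. The worker counts, shard sizes, and the plain-Python gradient averaging that stands in for a fabric-level allreduce are illustrative assumptions, not details of Intel’s Scalable System Framework.

```python
# Minimal sketch of synchronous data-parallel training (illustrative only).
# Hypothetical sizes; a plain-Python "allreduce" stands in for a real
# communications fabric's collective operation.
import numpy as np

rng = np.random.default_rng(0)
n_workers, batch, n_features = 4, 256, 1024

# Each worker holds a shard of the training batch and a copy of the weights.
weights = rng.standard_normal(n_features)
shards = [(rng.standard_normal((batch, n_features)),
           rng.standard_normal(batch)) for _ in range(n_workers)]

def local_gradient(w, X, y):
    """Gradient of a least-squares loss on one worker's shard."""
    return X.T @ (X @ w - y) / len(y)

for step in range(10):
    # 1) Compute: every worker evaluates its gradient independently (parallel).
    grads = [local_gradient(weights, X, y) for X, y in shards]
    # 2) Communicate: average gradients across workers -- this allreduce-style
    #    step is the part whose cost grows with scale.
    grad = np.mean(grads, axis=0)
    # 3) Update: all workers apply the same averaged gradient.
    weights -= 0.01 * grad
```

The gradient-averaging step is what grows more expensive as more workers are added, which is why a high-performance communications fabric figures so prominently in training at scale.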

Intel Stretches Deep Learning on Scalable System Framework was written by Nicole Hemsoth at The Next Platform.

Next Generation Supercomputing Strikes Data, Compute Balance

Although the future of exascale computing might be garnering the most headlines in high performance computing, one of the most important stories unfolding in the supercomputing space, at least from a system design angle, is the merging of compute- and data-intensive machines.

In many ways, merging the compute horsepower of today’s top systems with data-intensive support in terms of data movement, storage, and software is directly at odds with current visions of exascale supercomputers. Hence there appear to be two camps forming on either side of the Top 500-level centers: one that argues strongly in favor of

Next Generation Supercomputing Strikes Data, Compute Balance was written by Nicole Hemsoth at The Next Platform.

More than Moore: IEEE Set to Standardize on Uncertainty

Several decades ago, Gordon Moore made it far simpler to create technology roadmaps along the lines of processor capabilities, but as his namesake law begins to slow, the IEEE is stepping in to create a new, albeit more diverse, roadmap for future systems.

The organization has launched a new effort to identify and trace the course of what follows Moore’s Law with the International Roadmap for Devices and Systems (IRDS), which will take a workload-focused view of the mixed landscape and the systems that will be required. In other words, instead of pegging a

More than Moore: IEEE Set to Standardize on Uncertainty was written by Nicole Hemsoth at The Next Platform.

Storage Performance Models Highlight Burst Buffers at Scale

For storage at scale, particularly for large scientific computing centers, burst buffers have become a hot topic for both checkpoint and application performance reasons. Major vendors in high performance computing have climbed on board and we will be seeing a new crop of big machines featuring burst buffers this year to complement the few that have already emerged.

The what, why, and how of burst buffers can be found in our interview with the inventor of the concept, Gary Grider at Los Alamos National Lab. But for centers that have already gotten the message and are looking for the
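As a rough illustration of why burst buffers matter for checkpointing, here is a back-of-the-envelope performance model; the node count, memory size, and bandwidth figures are hypothetical assumptions, not numbers from any system discussed here.

```python
# Back-of-the-envelope checkpoint model (all figures are hypothetical
# assumptions, not measurements from any particular machine).
nodes           = 10_000
mem_per_node_gb = 128        # checkpoint size per node
pfs_bw_gb_s     = 1_000      # aggregate parallel file system bandwidth
bb_bw_gb_s      = 10_000     # aggregate burst buffer (flash) bandwidth

checkpoint_gb = nodes * mem_per_node_gb

# Writing straight to the parallel file system stalls the application for
# the full write; a burst buffer absorbs the write quickly and drains to
# the file system in the background while computation resumes.
t_pfs   = checkpoint_gb / pfs_bw_gb_s   # application stall, PFS only
t_bb    = checkpoint_gb / bb_bw_gb_s    # application stall, burst buffer
t_drain = checkpoint_gb / pfs_bw_gb_s   # asynchronous drain time

print(f"stall without burst buffer: {t_pfs:,.0f} s")
print(f"stall with burst buffer:    {t_bb:,.0f} s (drain continues for {t_drain:,.0f} s)")
```

The point of the model is the gap between the application stall time with and without the flash tier; the slower drain to the parallel file system can overlap with computation instead of blocking it.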

Storage Performance Models Highlight Burst Buffers at Scale was written by Nicole Hemsoth at The Next Platform.

Bolder Battle Lines Drawn At Extreme Scale Computing Borders

The race to reach exascale computing has been a point of national ambition in the United States, which is currently home to the highest number of top-performing supercomputers in the world. But without focused effort, investment, and development in research and the vendor communities, that position is set to come under fire from high performance computing (HPC) developments in China and Japan in particular, as well as elsewhere around the globe.

This national priority of moving toward ever-more capable supercomputers as a source of national competitiveness in manufacturing, healthcare, and other segments was the subject of a detailed report that

Bolder Battle Lines Drawn At Extreme Scale Computing Borders was written by Nicole Hemsoth at The Next Platform.

Exascale Timeline Pushed to 2023: What’s Missing in Supercomputing?

The roadmap to build and deploy an exascale computer has been extended more than once over the last few years.

Initially, the timeline marked 2018 as the year an exaflop-capable system would be on the floor, just one year after the CORAL pre-exascale machines are installed at three national labs in the U.S. That was later shifted to 2020, and now, according to a new report setting forth the initial hardware requirements for such a system, it is anywhere between 2023 and 2025. For those who follow high performance computing and the efforts toward exascale computing, this extended timeline might not come as

Exascale Timeline Pushed to 2023: What’s Missing in Supercomputing? was written by Nicole Hemsoth at The Next Platform.

Emphasis on Data Wrangling Brings Materials Science Up to Speed

In 2011, the United States launched a multi-agency effort to discover, develop, and produce advanced materials under the Materials Genome Initiative as part of an overall push to get out from under the 20-year process typically involved in researching a new material and bringing it to market.

At roughly the same time, the government was investing in other technology-driven initiatives to bolster competitiveness, with particular emphasis on manufacturing. While key areas in research were developing incredibly rapidly, it appeared that manufacturing, materials, and other more concrete physical problems were waiting on better solutions while genomics, nanotechnology, and other areas were

Emphasis on Data Wrangling Brings Materials Science Up to Speed was written by Nicole Hemsoth at The Next Platform.

Volkswagen’s “Cloud First” Approach to Infrastructure Decisions

As automotive companies like Ford begin to consider themselves technology companies, others, including Volkswagen Group, are taking a similar route. The company’s new CEO, who took over in September 2015, has a background in computer science and began his career managing the IT department for the Audi division. Under his more technically minded leadership, IT teams within the company are taking major strides to shore up their infrastructure to support a 1.5 billion euro investment in R&D for forthcoming electric and connected cars.

Part of the roadmap for Volkswagen Group includes a shift to OpenStack to manage

Volkswagen’s “Cloud First” Approach to Infrastructure Decisions was written by Nicole Hemsoth at The Next Platform.

Baidu Eyes Deep Learning Strategy in Wake of New GPU Options

This month Nvidia bolstered its GPU strategy to stretch further into deep learning, high performance computing, and other markets, and while there are new options to consider, particularly for the machine learning set, it is useful to understand what these new arrays of chips and capabilities mean for users at scale. As one of the companies directly in Nvidia’s sights with its recent wave of deep learning libraries and GPUs, Baidu has keen insight into what might tip the architectural scales, and what might still stay the same, at least for now.

Back in December, when we talked to

Baidu Eyes Deep Learning Strategy in Wake of New GPU Options was written by Nicole Hemsoth at The Next Platform.

First Wave of Pascal GPUs Coming to European Supercomputer

There are few international supercomputing hubs sporting the systems and software prowess of the Swiss National Supercomputing Center (CSCS), which started with large-scale vector machines in 1992 and moved through a series of other architectures and vendors; from NEC at the beginning, to IBM, and most recently, Cray. In fact, the center has had an ongoing preference for Cray supercomputers, with an unbroken stretch of machines beginning in 2007.

In addition to choosing Cray as the system vendor, CSCS has been an early adopter and long-term user of GPU acceleration. According to the center’s director, Thomas Schulthess, teams there firmed

First Wave of Pascal GPUs Coming to European Supercomputer was written by Nicole Hemsoth at The Next Platform.

Future Cray Clusters to Storm Deep Learning

Supercomputer maker Cray might not roll out machines for deep learning in 2016, but like other system vendors with deep roots in high performance computing, a field that leverages many of the same hardware elements (a strong interconnect and GPU acceleration, among others), the company is looking at how to loop its expertise into a future where machine learning rules.

As Cray CTO Steve Scott tells The Next Platform, “Deep learning and machine learning dovetails with our overall strategy and that overall merger between HPC and analytics that has already happened.” HPC is not just one single application or set of

Future Cray Clusters to Storm Deep Learning was written by Nicole Hemsoth at The Next Platform.

Nvidia Lead Details Future Convergence of Supercomputing, Deep Learning

Deep learning could not have developed at the rapid pace it has over the last few years without companion work that has happened on the hardware side in high performance computing. While the applications and requirements for supercomputers versus neural network training are quite different (scalability, programming, etc.), without the rich base of GPU computing, high performance interconnect development, memory, storage, and other benefits from the HPC set, the boom around deep learning would be far quieter.

In the midst of this convergence, Marc Hamilton has watched advancements on the HPC side over the years, beginning in the mid-1990s

Nvidia Lead Details Future Convergence of Supercomputing, Deep Learning was written by Nicole Hemsoth at The Next Platform.

Boosting Deep Learning with the Intel Scalable System Framework

Training complex multi-layer neural networks is referred to as deep learning because these architectures interpose many neural processing layers between the input data and the predicted output results – hence the “deep” in the deep learning catchphrase.

While the training procedure is computationally expensive, evaluating the resulting trained neural network is not. That is why trained networks can be extremely valuable: they can very quickly perform complex, real-world pattern recognition tasks on a variety of low-power devices, including security cameras, mobile phones, and wearable technology. These architectures can also be implemented on FPGAs
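To show how light the inference side is, a minimal forward-pass sketch follows; the layer sizes are arbitrary and the random weights are placeholders for the parameters a real training run would produce.

```python
# Minimal sketch of inference through a trained multi-layer network
# (weights are random placeholders standing in for a trained model).
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [1024, 512, 256, 10]   # hypothetical "deep" architecture

# In practice these would be the parameters produced by the expensive
# training phase; here they are random for illustration.
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def predict(x):
    """One forward pass: just a few matrix multiplies and nonlinearities,
    cheap enough for phones, cameras, or an FPGA implementation."""
    for w in weights[:-1]:
        x = np.maximum(x @ w, 0.0)   # ReLU hidden layers
    return x @ weights[-1]           # raw scores for each class

scores = predict(rng.standard_normal(1024))
print(scores.argmax())               # predicted class index
```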

Boosting Deep Learning with the Intel Scalable System Framework was written by Nicole Hemsoth at The Next Platform.

Datera Bets on Massive Middle Ground for Block Storage at Scale

For those who wonder what kind of life is left in the market for elastic block storage beyond Ceph and the luxury all-flash high rises, Datera, which emerged from stealth today with $40 million in backing and some big-name users, has a tale to tell. While it will likely not end with blocks, these do form the foundation as the company looks to reel in enterprises that need more scalable performance than they might find with Ceph but aren’t looking to high-end flash appliances either.

The question is, what might the world do with an on-premises take

Datera Bets on Massive Middle Ground for Block Storage at Scale was written by Nicole Hemsoth at The Next Platform.

IBM Watson CTO on What’s Ahead for Cognitive Computing

After close to twenty years at IBM, where he began as an IBM Fellow and Chief Architect for the SOA Foundation, Rob High has developed a number of core technologies that back Big Blue’s enterprise systems, including the suite of tools behind IBM WebSphere, and more recently, those that support the wide-ranging ambitions of the Watson cognitive computing platform.

Although High gave the second-day keynote this afternoon at the GPU Technology Conference, there was no mention of accelerated computing. Interestingly, while the talk was about software, specifically the machine learning behind Watson, there was also very little about the

IBM Watson CTO on What’s Ahead for Cognitive Computing was written by Nicole Hemsoth at The Next Platform.
