
Category Archives for "The Next Platform"

New Optimizations Improve Deep Learning Frameworks For CPUs

Today, most machine learning is done on CPUs. Some would say that acceleration of learning has to be done on GPUs, but for most users that is not good advice, for several reasons. The biggest reason is now the Intel Xeon SP processor, formerly codenamed “Skylake.”

Until recently, the software for machine learning has often been more optimized for GPUs than for anything else. A series of efforts by Intel has changed that – and when coupled with the Platinum version of the Intel Xeon SP family, the top performance gap is closer to 2X than it is to 100X. This

New Optimizations Improve Deep Learning Frameworks For CPUs was written by Timothy Prickett Morgan at The Next Platform.

Cisco Stretches ACI Network Fabrics, Eases Management

For disaster recovery, political, and organizational reasons, enterprises like to have multiple datacenters, and now they are going hybrid, adding public cloud capacity into the mix. Having networks scattered across the globe brings operational challenges, from easily migrating and managing workloads across multiple sites, to increased complexity around networking and security, to adopting emerging datacenter technologies like containers.

As the world becomes more cloud-centric, organizations are looking for ways to gain greater visibility and scalability across their environments, automate as many processes as possible and manage all these sites as a single entity.

Cisco Systems

Cisco Stretches ACI Network Fabrics, Eases Management was written by Jeffrey Burt at The Next Platform.

Japan’s ABCI System Shows The Subtleties Separating AI And HPC

Governments like to spread the money around their indigenous IT companies when they can, and so it is with the AI Bridging Cloud Infrastructure, or ABCI, supercomputer that is being commissioned by the National Institute of Advanced Industrial Science and Technology (AIST) in Japan. NEC built the ABCI prototype last year, and now Fujitsu has been commissioned to build the actual ABCI system.

The resulting machine, which is being purchased specifically to offer cloud access to compute and storage capacity for artificial intelligence and data analytics workloads, would make a fine system for running HPC simulation and models. But that

Japan’s ABCI System Shows The Subtleties Separating AI And HPC was written by Timothy Prickett Morgan at The Next Platform.

IBM Combines PowerAI, Data Science Experience in Enterprise AI Push

IBM has spent the past several years putting a laser focus on what it calls cognitive computing, using its Watson platform as the foundation for its efforts in such emerging fields as artificial intelligence (AI) and its successful spinoff, deep learning. Big Blue has leaned on Watson technology, its traditional Power systems, and increasingly powerful GPUs from Nvidia to drive its efforts to not only bring AI and deep learning into the cloud, but also to push AI into the enterprise.

The technologies are part of a larger push in the industry to help enterprises transform their businesses to take

IBM Combines PowerAI, Data Science Experience in Enterprise AI Push was written by Jeffrey Burt at The Next Platform.

Baidu Sheds Precision Without Paying Deep Learning Accuracy Cost

One of the reasons we have written so much about Chinese search and social web giant, Baidu, in the last few years is because they have openly described both the hardware and software steps to making deep learning efficient and high performance at scale.

In addition to providing several benchmarking efforts and GPU use cases, researchers at the company’s Silicon Valley AI Lab (SVAIL) have been at the forefront of eking power efficiency and performance out of new hardware by lowering precision. This is a trend that has kickstarted similar thinking in hardware usage in other areas, including supercomputing
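To make the idea concrete, here is a minimal sketch (not SVAIL's actual code, and with illustrative shapes chosen arbitrarily) of why lowering precision is attractive: storing weights and activations as 16-bit rather than 32-bit floats halves memory traffic, while the result of a typical neural network layer computation barely moves.

```python
import numpy as np

# Hypothetical layer: a matrix-vector product, as in a fully connected
# neural network layer. Shapes are arbitrary, for illustration only.
rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 256)).astype(np.float32)
activations = rng.standard_normal(256).astype(np.float32)

# Full-precision (float32) reference result.
out_fp32 = weights @ activations

# Same computation with inputs cast down to half precision (float16),
# which halves the storage and bandwidth needed for the operands.
out_fp16 = (weights.astype(np.float16) @ activations.astype(np.float16)).astype(np.float32)

# The worst-case relative deviation from the float32 result is small --
# the kind of gap that, in practice, leaves model accuracy intact.
rel_error = np.abs(out_fp16 - out_fp32).max() / np.abs(out_fp32).max()
print(f"max relative error: {rel_error:.4f}")
```

In real deployments the savings come from hardware that executes half-precision (or lower) arithmetic natively, not just from casting arrays; the sketch only shows that the numerical cost of shedding bits is modest.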

Baidu Sheds Precision Without Paying Deep Learning Accuracy Cost was written by Nicole Hemsoth at The Next Platform.

Intel Takes First Steps To Universal Quantum Computing

Someone is going to commercialize a general purpose, universal quantum computer first, and Intel wants to be the first. So does Google. So does IBM. And D-Wave is pretty sure it already has done this, even if many academics and a slew of upstart competitors don’t agree. What we can all agree on is that there is a very long road ahead in the development of quantum computing, and it will be a costly endeavor that could nonetheless help solve some intractable problems.

This week, Intel showed off the handiwork of its engineers and those of partner QuTech, a

Intel Takes First Steps To Universal Quantum Computing was written by Timothy Prickett Morgan at The Next Platform.

Public Cloud Doesn’t Dominate IT Quite Yet

Everyone in the IT industry likes drama, and we here at The Next Platform are no different. But it is also important, as the industry is undergoing gut-wrenching transformations – as it has been for five decades now and probably will be for a decade or two more – to keep some perspective. While the public cloud is certainly an exciting part of the IT market, it hasn’t taken over the world, even if it has become the dominant metaphor that all kinds of IT – public, private, and hybrid – aspire to mimic.

That’s something, and it is important. But

Public Cloud Doesn’t Dominate IT Quite Yet was written by Timothy Prickett Morgan at The Next Platform.

The Clever Machinations Of Livermore’s Sierra Supercomputer

The potent combination of powerful CPUs, floating point laden GPU accelerators, and fast InfiniBand networking is coming to market and reshaping the upper echelons of supercomputing. While Intel is having issues with its future Knights massively parallel X86 processors, which it has not really explained, the two capability class supercomputers that are being built for the US Department of Energy by IBM with the help of Nvidia and Mellanox Technologies – named “Summit” and “Sierra” and installed at Oak Ridge National Laboratory and Lawrence Livermore National Laboratory – are beginning to be assembled.

We have previously profiled the nodes in

The Clever Machinations Of Livermore’s Sierra Supercomputer was written by Timothy Prickett Morgan at The Next Platform.

Red Hat Stretches Gluster Clustered Storage Under Containers

Red Hat has been aggressive in building out its capabilities around containers. The company last month unveiled its OpenShift Container Platform 3.6, its enterprise-grade Kubernetes container platform for cloud native applications that added enhanced security features and greater consistency across hybrid and multi-cloud deployments.

A couple of weeks later, Red Hat and Microsoft expanded their alliance to make it easier for organizations to adopt containers. Red Hat last year debuted OpenShift 3.0, which was based on the open source Kubernetes orchestration system and Docker containers, and the company has since continued to roll out enhancements to the platform.

The

Red Hat Stretches Gluster Clustered Storage Under Containers was written by Jeffrey Burt at The Next Platform.

Oracle Emulates Google, AWS On Its Cloud

During the dot-com boom, when Oracle was the dominant supplier of relational databases to startups and established enterprises alike, it used its profits to fund the acquisition of application serving middleware, notably BEA WebLogic, and then applications, such as PeopleSoft and Siebel, and then Java and hardware systems, from its acquisition of Sun Microsystems. It was an expensive proposition, but one that paid off handsomely for the software giant.

In the cloud and hyperscale era, open source middleware is the driving force and in a lot of cases there is nothing to acquire. Projects either go open themselves or are

Oracle Emulates Google, AWS On Its Cloud was written by Timothy Prickett Morgan at The Next Platform.

Intel Gears Up For FPGA Push

Chip giant Intel has been talking about CPU-FPGA compute complexes for so long that it is hard to remember sometimes that its hybrid Xeon-Arria compute unit, which puts a Xeon server chip and a midrange FPGA into a single Xeon processor socket, is not shipping as a volume product. But Intel is working to get it into the field and has given The Next Platform an update on the current plan.

The hybrid CPU-FPGA devices, which are akin to AMD’s Accelerated Processing Units, or APUs, in that they put compute and, in this case, FPGA acceleration into a single

Intel Gears Up For FPGA Push was written by Timothy Prickett Morgan at The Next Platform.

IBM Brings Analytics To The Data For Faster Processing

Data analytics is a rapidly evolving field, and IBM and other vendors have built numerous tools over the past several years to address segments of it. But now Big Blue is shifting its focus to give data scientists and developers the technologies they need to more easily and quickly analyze data and derive insights that they can apply to their business strategies.

“We have [created] a ton of different products that solve parts of the problem,” Rob Thomas, general manager of IBM Analytics, tells The Next Platform. “We’re moving toward a strategy of developing platforms for analytics. This trend

IBM Brings Analytics To The Data For Faster Processing was written by Jeffrey Burt at The Next Platform.

How Oakforest-PACS Outpaced The K Supercomputer

In high performance computing in the public sector, dollars follow teraflops, and now petaflops – especially in the datacenters of academia, where cutting-edge computational research projects funded by large grants seek out the most powerful supercomputers in the region.

Institutions with limited budgets these days are creatively financing their supercomputing needs by creating collectives that pool funds and share computing resources. Some institutions, such as Iowa State University, are doing this internally, with various departments pitching in to buy a single, large HPC cluster, as with their Condo supercomputer.

In Japan, the University of Tokyo (U Tokyo) and University of

How Oakforest-PACS Outpaced The K Supercomputer was written by Timothy Prickett Morgan at The Next Platform.

Anaconda Teams With Microsoft In Machine Learning Push

Microsoft is embedding Anaconda’s Python distribution into its Azure Machine Learning products, the latest move by the software vendor to expand its capabilities in the fast-growing artificial intelligence space and an example of Anaconda extending its reach beyond high performance computing and into AI.

The two companies announced the partnership this week at the Strata Data Conference in New York City, with the news dovetailing with other announcements around AI that Microsoft officials made this week at its own Ignite 2017 show. The vendors said they will offer Anaconda for Microsoft, which they described as a subset of the Anaconda

Anaconda Teams With Microsoft In Machine Learning Push was written by Jeffrey Burt at The Next Platform.

Harnessing Data Insights To Achieve Optimal Energy Consumption

In an age of ongoing digital advancement, leaders across all industries are seeking new ways to improve workplace productivity, ensure competitive advantage, and facilitate continued growth. Success hinges on their ability to accelerate time to value, and work more efficiently and effectively than the competition. Innovation and sustainability are key.

This is particularly true for the energy, oil, and gas (EOG) sector. As the global economy progressively moves away from fossil fuels in search of renewable resources, EOG companies are challenged to operate faster and smarter than ever before. Many organizations are utilizing high performance computing (HPC) technologies in order

Harnessing Data Insights To Achieve Optimal Energy Consumption was written by Timothy Prickett Morgan at The Next Platform.

Volta GPU Accelerators Hit The Streets

Announcements of new iron are exciting, but it doesn’t get real until customers beyond the handful of elite early adopters can get their hands on the gear.

Nvidia launched its “Volta” Tesla V100 GPU accelerators back in May, meeting and in some important ways exceeding most of its performance goals, and has been shipping devices, both in PCI-Express and SXM2 form factors, for a few months. Now, the ramp of this complex processor and its packaging of stacked High Bandwidth Memory – HBM2 from Samsung, to be specific – is progressing and the server OEMs and ODMs of the

Volta GPU Accelerators Hit The Streets was written by Timothy Prickett Morgan at The Next Platform.

Plans for First Exascale Supercomputer in U.S. Released

This morning a presentation filtered out of the Department of Energy’s Office of Science, showing the roadmap to exascale with a 2021 machine at Argonne National Lab.

This is the Aurora machine, which had an uncertain future this year when its budgetary and other details were thrown into question. We understood the deal was being restructured, and indeed it has been. The system was originally slated to appear in 2018 with 180 petaflops of potential. Now it is 1,000 petaflops, an exascale-capable machine, and will be delivered in 2021 – right on target with the revised plans for exascale released earlier this

Plans for First Exascale Supercomputer in U.S. Released was written by Nicole Hemsoth at The Next Platform.

The Ascendancy Of Ethernet Storage Fabrics

It is hard to remember that for decades, whether a system was large or small, its storage was intricately and inescapably linked to its compute.

Network attached storage, as pioneered by NetApp, helped break those links between compute and storage in the enterprise at the file level. But it was the advent of storage area networks that allowed for storage to still be reasonably tightly coupled to servers and work at the lower block level, below file systems, while at the same time allowing that storage to scale independently from the number of disks you might jam into a single

The Ascendancy Of Ethernet Storage Fabrics was written by Timothy Prickett Morgan at The Next Platform.

With Machine Learning, Can HPC Be Self Healing?

High performance computing, long the domain of research centers and academia, is increasingly becoming a part of mainstream IT infrastructure and being opened up to a broader range of enterprise workloads, and in recent years, that includes big data analytics and machine learning. At the forefront of this expanded use is MasterCard, a financial services giant that is looking to drive the real-time business benefits of HPC.

As MasterCard has learned, however, alongside business value come additional needs around data protection. As HPC systems are more likely to hold customer-facing and other compliance related data, such infrastructure has the potential

With Machine Learning, Can HPC Be Self Healing? was written by Timothy Prickett Morgan at The Next Platform.

MapR Bulks Up Database for Modern Apps

MapR Technologies has been busy in recent years building out its capabilities as a data platform company that can support a broad range of open-source technologies, from Hadoop and Spark to Hive, and can reach from the datacenter through the edge and out into the cloud. At the center of its efforts is its Converged Data Platform, which comes with the MapR-FS Posix file system and includes enterprise-level database and storage that are designed to handle emerging big data workloads.

At the Strata Data Conference in New York City Sept. 26, company officials are putting their focus

MapR Bulks Up Database for Modern Apps was written by Nicole Hemsoth at The Next Platform.