Category Archives for "The Next Platform"

The Dream Of Software Only Shared Memory Clusters

It is hard to quantify the amount of effort in systems and application software development that has been expended precisely because there is no easy, efficient, and cheap way to make a bunch of commodity servers look like one big wonking system with a flat memory space: one that is as easy to program as a PC but that brings to bear all of the compute, memory, and I/O of a cluster as a single system image.
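To see why that flat memory model is so attractive to programmers, consider what sharing memory looks like within a single node, which is exactly the convenience that software shared memory schemes try to stretch across a cluster. Here is a minimal sketch using Python's standard multiprocessing.shared_memory module, offered purely as an analogy for the programming model, not as an implementation of distributed shared memory: two processes touch the same buffer with plain loads and stores, and no explicit messages are exchanged.

```python
from multiprocessing import Process, shared_memory

N = 8

def worker(name, n):
    # Attach to the segment the parent created: plain loads and stores
    # into a shared flat buffer, with no messages and no explicit copies.
    shm = shared_memory.SharedMemory(name=name)
    for i in range(n):
        shm.buf[i] += 1
    shm.close()

def run():
    shm = shared_memory.SharedMemory(create=True, size=N)
    shm.buf[:N] = bytes(N)  # zero the region before the child touches it
    child = Process(target=worker, args=(shm.name, N))
    child.start()
    child.join()
    result = list(shm.buf[:N])  # the child's stores are directly visible
    shm.close()
    shm.unlink()
    return result

if __name__ == "__main__":
    print(run())  # each slot was incremented once by the child
```

Extending this illusion across machines, where every remote load becomes a network round trip, is the hard part that hardware glue chips solve for a handful of sockets and that software schemes have long dreamed of solving at cluster scale.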

We have SMP and NUMA glue chips to do such shared memory clustering in hardware, scaling from two to four and sometimes eight,

The Dream Of Software Only Shared Memory Clusters was written by Timothy Prickett Morgan at The Next Platform.

Qualcomm Builds Momentum For Centriq ARM Server Chip

The talk about ARM-based servers pushing their way into the datacenter has been going on for almost a decade now, during which time we have seen companies like Samsung drop their interest before they really got started and others like AMD get an ARM-based chip out but then turn their attention to other initiatives.

We have also seen vendors like Cavium and Applied Micro get chips to market with some levels of adoption. Top system OEMs like Dell, Hewlett Packard Enterprise, Lenovo, and Cray are using these chips to various degrees in commercially available or test servers. And the

Qualcomm Builds Momentum For Centriq ARM Server Chip was written by Jeffrey Burt at The Next Platform.

HPE Demystifies Deep Learning For Faster Intelligence

Today’s enterprises need deep learning, but most don’t know how to get started. As rising data volumes and evolving industry trends push the limits of traditional IT, the latest innovations are helping them operate faster and smarter—and high performance computing is just the beginning.

Enterprises are deploying robust server platforms to power HPC applications, leveraging optimal performance, reliability, and flexibility to handle increasingly dense workloads. And with these industry-leading tools, modeling and simulation capabilities are rapidly evolving. Artificial intelligence is transforming how we operate and relate to technology. AI allows machines to think and learn like the human brain, while

HPE Demystifies Deep Learning For Faster Intelligence was written by Timothy Prickett Morgan at The Next Platform.

Connecting The Dots With Graph Databases

Graph querying of data housed in massive data lakes and data warehouses has been part of the big data and analytics scene for many years, but it hasn’t always been a particularly easy process. Making sense of data with graphs has in many ways been a highly manual process, and not all data scientists have had access to the Cypher graph database query language. Executives at graph company Neo4j are looking to change that.

At the GraphConnect New York show this week, Neo4j announced it has donated an early version of its Cypher for Apache Spark language toolkit to the openCypher project, a

Connecting The Dots With Graph Databases was written by Jeffrey Burt at The Next Platform.

BSC Builds 21st Century HPC In A 19th Century Cathedral

This summer, the Partnership for Advanced Computing in Europe (PRACE) added to its roster another of the world’s most powerful high performance computing systems. The Barcelona Supercomputing Center’s new MareNostrum 4, delivered by IBM with the help of partners Lenovo and Fujitsu, and fueled by HPC technologies from Intel, will facilitate extensive engineering and scientific research in fields like astrophysics, weather forecasting, and genome research. Nestled within a unique building – the Torre Girona chapel, which fell out of use – the fourth generation MareNostrum system relies on a general purpose cluster working with three specialized clusters to achieve its

BSC Builds 21st Century HPC In A 19th Century Cathedral was written by Timothy Prickett Morgan at The Next Platform.

Cray Supercomputers One Step Closer to Cloud Users

Supercomputer maker Cray is always looking for ways to extend its reach outside of the traditional academic and government markets where the biggest deals are often made.

From its forays into graph analytics appliances and, more recently, machine and deep learning, the company has the potential to exploit its long history of building some of the world’s fastest machines. This has expanded into some new ventures in which potential new Cray users can try out the company’s systems, including via an on-demand partnership with datacenter provider Markley and now inside of Microsoft’s Azure datacenters.

For Microsoft Azure cloud users looking to bolster modeling

Cray Supercomputers One Step Closer to Cloud Users was written by Nicole Hemsoth at The Next Platform.

Intel Pumps Funds Into Data Processing In All Shapes And Sizes

Intel’s multi-year effort to expand its reach beyond its PC and server processor roots has taken the chip maker down multiple paths, some of which have ended in dead ends.

The most memorable of those was the billion-plus-dollar attempt to challenge ARM Holdings and its various partners – such as Qualcomm and Samsung – in making chips for mobile devices. Under current CEO Brian Krzanich, Intel has retrenched, dropping its mobile device efforts and pulling back from wearables, and instead is pushing to provide the foundational technologies that will underpin the trends that will continue to shape the industry, from

Intel Pumps Funds Into Data Processing In All Shapes And Sizes was written by Jeffrey Burt at The Next Platform.

The IBM Transformation Can Gather Steam Now

For the past five and a half years, which is not quite an eternity in the IT business but is something akin to half of a generation or so, IBM’s revenues have been declining, quarter in and quarter out. As has happened many, many times in its more than a century of existence, Big Blue, which peddled meat slicers, time clocks, scales, and punch card tabulators early in its history, has had to constantly evolve and reimagine itself.

The transformation that IBM had to undergo in the late 1980s and early 1990s was a near

The IBM Transformation Can Gather Steam Now was written by Timothy Prickett Morgan at The Next Platform.

Easing Enterprise Migration To The Cloud

No one knows better than IBM that the time, money, energy, and risk associated with changing platforms can hinder that change. In some cases, as with the System z mainframe, this helps the company preserve its footprint in the datacenter. But in other cases, it hurts IBM’s ability to get people to try out different public or private infrastructure.

It is no secret that Big Blue wants a much bigger cloud business, and that it got a late start compared to Amazon, Microsoft, and Google. But IBM does have a presence at most of the large companies on earth, and

Easing Enterprise Migration To The Cloud was written by Jeffrey Burt at The Next Platform.

A Match Made In Hyperscale: Docker Borgs Kubernetes

For more than a year, container pioneer Docker has pushed its own Docker Swarm as the orchestration tool for managing highly distributed computing environments based on its eponymous containers in physical and virtual environments. But it is hard to deny the rapid uptake of Kubernetes, the container orchestration technology that was derived from Google’s internal Borg and Omega cluster managers and that the search engine giant open sourced three years ago.

Kubernetes has become highly popular, gaining momentum with top cloud providers like Amazon Web Services and Microsoft Azure, and obviously Google Cloud Platform, and is getting support from

A Match Made In Hyperscale: Docker Borgs Kubernetes was written by Jeffrey Burt at The Next Platform.

Live Today: HPC, Machine Learning, And Security – Can HPC Be Self-Healing?

SPONSORED WEBCAST

Today at 10 am Eastern / 15:00 UK, this free webcast will be broadcast live.

In this webcast, we learn from Nick Curcuru, vice president of the big data practice at MasterCard, about what needs to be in place, both technically and in terms of management models and processes, so that the benefits of these technologies can be fully realized.

High performance computing, long the domain of research centers and academia, is increasingly becoming a part of mainstream IT infrastructure and being opened up to a broader range of enterprise workloads, and in recent years, that includes big data analytics and machine

Live Today: HPC, Machine Learning, And Security – Can HPC Be Self-Healing? was written by Matt Proud at The Next Platform.

Getting With The Program On Software Defined Networks

If the profit margins are under pressure among the switch and router makers of the world, their chief financial officers can probably place a lot of the blame on Nick McKeown and his various partners over the years. And if McKeown is right about what is happening as the network software is increasingly disaggregated from the hardware – what is called software defined networking – they will either have to adapt or be relegated to the dustbin of history.

McKeown cut his teeth after university in the late 1980s at Hewlett Packard Labs in Bristol, England, one of the hotbeds

Getting With The Program On Software Defined Networks was written by Timothy Prickett Morgan at The Next Platform.

Cisco Knows No One Wants To Manage The Management Stack

The highly distributed and increasingly cloud-based nature of the modern IT environment is adding to the complexity that organizations have to deal with, particularly in terms of managing their infrastructures. Mobility, the internet of things, new development paradigms, containerization, more distributed applications, data analytics and multi-cloud deployments are all conspiring to create even more challenges in what is an already complicated management scenario for enterprises facing cost and time constraints.

At a time when speed and scalability are imperative and human errors can be costly, the answer to many of these challenges may lie in the cloud. That’s the

Cisco Knows No One Wants To Manage The Management Stack was written by Jeffrey Burt at The Next Platform.

IBM Preps Power9 For AI And HPC Launch, Forges Big NUMA Iron

After a long, long wait and years of anticipation, it looks like IBM is finally getting ready to ship commercial versions of its Power9 chips, and as expected, its first salvo of processors for the datacenter will be aimed at HPC, data analytics, and machine learning workloads.

We are also catching wind about IBM’s Power9-based scale-up NUMA machines, which will debut sometime next year and take on big iron systems based on Intel Xeon SP, Oracle Sparc M8, and Fujitsu Sparc64-XII processors as well as give some competition to IBM’s own System z14 mainframes.

The US Department

IBM Preps Power9 For AI And HPC Launch, Forges Big NUMA Iron was written by Timothy Prickett Morgan at The Next Platform.

New Optimizations Improve Deep Learning Frameworks For CPUs

Today, most machine learning is done on processors. Some would say that acceleration of learning has to be done on GPUs, but for most users that is not good advice for several reasons. The biggest reason is now the Intel Xeon SP processor, formerly codenamed “Skylake.”

Up until recently, the software for machine learning has often been more optimized for GPUs than anything else. A series of efforts by Intel have changed that – and when coupled with the Platinum versions of the Intel Xeon SP family, the top performance gap is closer to 2X than it is to 100X. This
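Much of that gap-closing is a software story: the same CPU can look hopeless or competitive depending on whether the math underneath is tuned for its vector units and cores. As a toy illustration of the general principle (not of Intel's actual framework optimizations), compare a matrix multiply written as an interpreted Python loop with the same operation dispatched to the optimized BLAS library that NumPy links against:

```python
import time
import numpy as np

def naive_matmul(a, b):
    # Straightforward triple loop in interpreted Python: no vectorization,
    # no cache blocking, no multithreading.
    n, k, m = len(a), len(b), len(b[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += a[i][p] * b[p][j]
            out[i][j] = s
    return out

n = 128
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
slow = naive_matmul(a.tolist(), b.tolist())
t_naive = time.perf_counter() - t0

t0 = time.perf_counter()
fast = a @ b  # dispatches to the optimized BLAS NumPy was built against
t_blas = time.perf_counter() - t0

assert np.allclose(slow, fast)  # same answer, wildly different cost
print(f"naive: {t_naive:.3f}s  BLAS: {t_blas:.5f}s")
```

The two results agree to floating point tolerance; only the quality of the software exploiting the hardware differs, which is the same lever the optimized deep learning frameworks pull on Xeon SP.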

New Optimizations Improve Deep Learning Frameworks For CPUs was written by Timothy Prickett Morgan at The Next Platform.

Cisco Stretches ACI Network Fabrics, Eases Management

For disaster recovery, political, and organizational reasons, enterprises like to have multiple datacenters, and now they are going hybrid, with public cloud capacity added into the mix. Having networks scattered across the globe brings operational challenges, from easily migrating and managing workloads across multiple sites, to increased complexity around networking and security, to adopting emerging datacenter technologies like containers.

As the world becomes more cloud-centric, organizations are looking for ways to gain greater visibility and scalability across their environments, automate as many processes as possible and manage all these sites as a single entity.

Cisco Systems

Cisco Stretches ACI Network Fabrics, Eases Management was written by Jeffrey Burt at The Next Platform.

Japan’s ABCI System Shows The Subtleties Separating AI And HPC

Governments like to spread the money around their indigenous IT companies when they can, and so it is with the AI Bridging Cloud Infrastructure, or ABCI, supercomputer that is being commissioned by the National Institute of Advanced Industrial Science and Technology (AIST) in Japan. NEC built the ABCI prototype last year, and now Fujitsu has been commissioned to build the actual ABCI system.

The resulting machine, which is being purchased specifically to offer cloud access to compute and storage capacity for artificial intelligence and data analytics workloads, would make a fine system for running HPC simulation and models. But that

Japan’s ABCI System Shows The Subtleties Separating AI And HPC was written by Timothy Prickett Morgan at The Next Platform.

IBM Combines PowerAI, Data Science Experience in Enterprise AI Push

IBM has spent the past several years putting a laser focus on what it calls cognitive computing, using its Watson platform as the foundation for its efforts in such emerging fields as artificial intelligence (AI) and its successful spinoff, deep learning. Big Blue has leaned on Watson technology, its traditional Power systems, and increasingly powerful GPUs from Nvidia to drive its efforts to not only bring AI and deep learning into the cloud, but also to push AI into the enterprise.

The technologies are part of a larger push in the industry to help enterprises transform their businesses to take

IBM Combines PowerAI, Data Science Experience in Enterprise AI Push was written by Jeffrey Burt at The Next Platform.

Baidu Sheds Precision Without Paying Deep Learning Accuracy Cost

One of the reasons we have written so much about Chinese search and social web giant Baidu in the last few years is because it has openly described both the hardware and software steps to making deep learning efficient and high performance at scale.

In addition to providing several benchmarking efforts and GPU use cases, researchers at the company’s Silicon Valley AI Lab (SVAIL) have been at the forefront of eking power efficiency and performance out of new hardware by lowering precision. This is a trend that has kickstarted similar thinking in hardware usage in other areas, including supercomputing
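The basic idea behind precision lowering can be sketched in a few lines of NumPy. This is a toy illustration of the trade-off, not Baidu's actual training code: the dot products of a dense layer are computed at 32-bit and at 16-bit floating point, and the resulting error is typically small relative to the signal, while every value takes half the storage and memory bandwidth.

```python
import numpy as np

# Simulate one dense layer's dot products at full and half precision.
rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 256)).astype(np.float32)
x = rng.standard_normal(256).astype(np.float32)

y32 = weights @ x  # baseline: 32-bit floating point
y16 = (weights.astype(np.float16) @ x.astype(np.float16)).astype(np.float32)

# Relative error introduced by halving the bits per value.
rel_err = np.abs(y32 - y16) / (np.abs(y32) + 1e-8)
print(f"median relative error: {np.median(rel_err):.2%}")
```

In practice the research goes further than this sketch, carefully choosing where in training and inference reduced precision is safe so that model accuracy is preserved while throughput and efficiency climb.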

Baidu Sheds Precision Without Paying Deep Learning Accuracy Cost was written by Nicole Hemsoth at The Next Platform.

Intel Takes First Steps To Universal Quantum Computing

Someone is going to commercialize a general purpose, universal quantum computer first, and Intel wants to be the first. So does Google. So does IBM. And D-Wave is pretty sure it already has done this, even if many academics and a slew of upstart competitors don’t agree. What we can all agree on is that there is a very long road ahead in the development of quantum computing, and it will be a costly endeavor that could nonetheless help solve some intractable problems.

This week, Intel showed off the handiwork of its engineers and those of partner QuTech, a

Intel Takes First Steps To Universal Quantum Computing was written by Timothy Prickett Morgan at The Next Platform.