Category Archives for "IT Industry"

Oracle Takes On Xeons With Sparc S7

It is an accepted principle of modern infrastructure that at a certain scale, customization like that done by Google, Amazon Web Services, Microsoft, or Baidu pays off. While Oracle is building its own public cloud, it does not have the kind of scale that these companies do, but it does have something else that warrants customization and co-design up and down its stack: more than 420,000 customers who generate $38.5 billion in sales.

This, in a nutshell, is why Oracle continues to invest in its Sparc processors even though many of its customers deploy Oracle’s middleware, database, and application software

Oracle Takes On Xeons With Sparc S7 was written by Timothy Prickett Morgan at The Next Platform.

System Software, Orchestration Gets an OpenHPC Boost

System software setup and maintenance has become a major efficiency drag on HPC labs and OEMs alike, but community and industry efforts are now underway to reduce the huge amounts of duplicated development, validation and maintenance work across the HPC ecosystem. Disparate efforts and approaches, while necessary on some levels, slow adoption of hardware innovation and progress toward exascale performance. They also complicate adoption of complex workloads like big data and machine learning.

With the creation of the OpenHPC Community, a Linux Foundation collaborative project, the push is on to minimize duplicated efforts in the HPC software stack wherever

System Software, Orchestration Gets an OpenHPC Boost was written by Nicole Hemsoth at The Next Platform.

Emerging “Universal” FPGA, GPU Platform for Deep Learning

In the last couple of years, we have written and heard about the usefulness of GPUs for deep learning training as well as, to a lesser extent, custom ASICs and FPGAs. All of these options have shown performance or efficiency advantages over commodity CPU-only approaches, but programming for all of these is often a challenge.

Programmability hurdles aside, deep learning training on accelerators is standard, but is often limited to a single choice—GPUs or, to a far lesser extent, FPGAs. Now, a research team from the University of California Santa Barbara has proposed a new middleware platform that can combine

Emerging “Universal” FPGA, GPU Platform for Deep Learning was written by Nicole Hemsoth at The Next Platform.

Lenovo HPC Bounces Back After IBM Spinoff

When IBM sold off its System x division to Lenovo Group in the fall of 2014, some big supercomputing centers in the United States and Europe that were long-time customers of Big Blue had to stop and think about what their future systems would look like and who would supply them. It was not a foregone conclusion that the Xeon-based portion of IBM’s HPC business would just move over to Lenovo as part of the sale.

Quite the opposite, in fact. Many believed that Lenovo could not hold onto its HPC business, and Hewlett Packard Enterprise and Dell were quick

Lenovo HPC Bounces Back After IBM Spinoff was written by Timothy Prickett Morgan at The Next Platform.

Novel Architectures on the Far Horizon for Weather Prediction

Weather modeling and forecasting centers are among some of the top users of supercomputing systems and are at the top of the list when it comes to areas that could benefit from exascale-class compute power.

However, for modeling centers, even those with the most powerful machines, there is a great deal of leg work on the code front in particular to scale to that potential. Still, many, including most recently the UK Met Office, have planted a stake in the ground for exascale—and they are looking beyond traditional architectures to meet the power and scalability demands they’ll be facing

Novel Architectures on the Far Horizon for Weather Prediction was written by Nicole Hemsoth at The Next Platform.

Cisco Connects With SGI For Big NUMA Iron

When supercomputer maker SGI tweaked its NUMA server technology to try to pursue sales in the datacenter, the plan was not to go it alone but rather to partner with the makers of workhorse Xeon servers that did not – and would not – make their own big iron but that nonetheless wanted to sell high-end machines to their customers.

This, company officials have said all along, is the only way that SGI, which is quite a bit smaller than many of the tier one server makers, can reach the total addressable market that the company has forecast for its

Cisco Connects With SGI For Big NUMA Iron was written by Timothy Prickett Morgan at The Next Platform.

The Hype About Converged Systems

Converged systems are a hot commodity in the IT sector these days. But it looks to us like the hype over various kinds of integrated systems that weld servers and storage together into preconfigured stacks – including hyperconverged stacks that literally merge the compute and storage layers on the same servers – is just a bit bigger than the appetite for such iron in the datacenters of the world.

According to the latest statistics from IDC, which uses a broad definition of converged systems that includes integrated systems, certified reference systems, and hyperconverged systems, the market

The Hype About Converged Systems was written by Timothy Prickett Morgan at The Next Platform.

Mitigating MPI Message Matching Issues

Since the 1990s, MPI (Message Passing Interface) has been the dominant communications protocol for high-performance scientific and commercial distributed computing. MPI was designed in an era when processors with two or four cores were considered high-end parallel devices, and the recent move to processors containing tens to a few hundred cores (as exemplified by the current Intel Xeon and Intel Xeon Phi processor families) has exacerbated scaling issues inside MPI itself. Increased network traffic, amplified by high-performance communications fabrics such as InfiniBand and the Intel Omni-Path Architecture (Intel OPA), manifests as an MPI performance and scaling issue.
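The matching step behind these scaling issues can be illustrated with a toy model. MPI implementations typically keep a posted-receive queue and an unexpected-message queue, each searched linearly for the first entry whose (source, tag, communicator) triple matches; as core counts and message rates grow, those linear searches get longer. The sketch below is illustrative only – it mimics the matching semantics, not any vendor's actual implementation – and the class and constant names are invented for the example.

```python
# Toy model of MPI message matching. Real MPI libraries keep a
# posted-receive queue and an unexpected-message queue, each scanned
# linearly for the first entry matching on (source, tag, communicator).
# Illustrative sketch only; names here are invented for the example.
from collections import deque

ANY_SOURCE = -1  # stand-in for MPI_ANY_SOURCE
ANY_TAG = -2     # stand-in for MPI_ANY_TAG

class MatchEngine:
    def __init__(self):
        self.posted = deque()      # receives waiting for a message
        self.unexpected = deque()  # messages that arrived before a receive

    @staticmethod
    def _matches(recv, msg):
        src_ok = recv["source"] in (ANY_SOURCE, msg["source"])
        tag_ok = recv["tag"] in (ANY_TAG, msg["tag"])
        return src_ok and tag_ok and recv["comm"] == msg["comm"]

    def post_recv(self, source, tag, comm):
        recv = {"source": source, "tag": tag, "comm": comm}
        # First scan the unexpected queue: O(n) work that grows with the
        # message rate, which is why many-core nodes stress matching.
        for msg in self.unexpected:
            if self._matches(recv, msg):
                self.unexpected.remove(msg)
                return msg["payload"]
        self.posted.append(recv)
        return None

    def deliver(self, source, tag, comm, payload):
        msg = {"source": source, "tag": tag, "comm": comm, "payload": payload}
        for recv in self.posted:
            if self._matches(recv, msg):
                self.posted.remove(recv)
                return recv  # matched a waiting receive
        self.unexpected.append(msg)
        return None

engine = MatchEngine()
engine.post_recv(source=ANY_SOURCE, tag=7, comm=0)    # receive waits
engine.deliver(source=3, tag=7, comm=0, payload="x")  # matches it
engine.deliver(source=3, tag=9, comm=0, payload="y")  # no match: unexpected
```

The linear scans are the crux: with hundreds of ranks per node pumping messages, both queues lengthen, and every match costs a walk through them.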

In recognition of their outstanding research and

Mitigating MPI Message Matching Issues was written by Nicole Hemsoth at The Next Platform.

Alchemy Can’t Save Moore’s Law

We don’t have a Moore’s Law problem so much as we have a materials science or alchemy problem. If you believe in materials science, what seems abundantly clear in listening to so many discussions about the end of scale for chip manufacturing processes is that, for whatever reason, the industry as a whole has not done enough investing to discover the new materials that will allow us to enhance or move beyond CMOS chip technology.

The only logical conclusion is that people must actually believe in alchemy, that some kind of magic is going to save the day and allow

Alchemy Can’t Save Moore’s Law was written by Timothy Prickett Morgan at The Next Platform.

Inside Japan’s Future Exascale ARM Supercomputer

The rumors that supercomputer maker Fujitsu would be dropping the Sparc architecture and moving to ARM cores for its next generation of supercomputers have been going around since last fall, and at the International Supercomputing Conference in Frankfurt, Germany this week, officials at the server maker and RIKEN, the research and development arm of the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT) that currently houses the mighty K supercomputer, confirmed that this is indeed true.

The ARM architecture now gets a heavy-hitter system maker with expertise in developing processors to support diverse commercial and technical workloads, and

Inside Japan’s Future Exascale ARM Supercomputer was written by Timothy Prickett Morgan at The Next Platform.

HPC is Great for AI, But What Does Supercomputing Stand to Gain?

As we have written about extensively here at The Next Platform, there is no shortage of use cases in deep learning and machine learning where HPC hardware and software approaches have bled over to power next generation applications in image, speech, video, and other classification and learning tasks.

Since we focus on high performance computing systems here in their many forms, that trend has been exciting to follow, particularly watching GPU computing and matrix math-based workloads find a home outside of the traditional scientific supercomputing center.

This widened attention has been good for HPC as well since it has

HPC is Great for AI, But What Does Supercomputing Stand to Gain? was written by Nicole Hemsoth at The Next Platform.

HPC Spending Outpaces The IT Market, And Will Continue To

Sales of HPC systems were a lot brisker in 2015 than anticipated, and according to the latest prognostications from the market researchers at IDC, presented at the International Supercomputing Conference in Frankfurt, Germany this week, growth in the HPC sector will continue to outpace that of the overall IT market for many years to come.

In a sense, the good numbers that the HPC market turned in last year are perhaps a little undercounted. In his traditional early morning breakfast briefing at the conference, Earl Joseph, program vice president for high performance computing at IDC, said that he had been

HPC Spending Outpaces The IT Market, And Will Continue To was written by Timothy Prickett Morgan at The Next Platform.

Measuring Top Supercomputer Performance in the Real World

When we cover the bi-annual listing of the world’s most powerful supercomputers, the high performance Linpack benchmark – the gold standard for over two decades – is the metric at the heart of those results. However, many have argued the benchmark is getting long in the tooth, with its myopic focus on sheer floating point performance over other important factors that determine a supercomputer’s value for real-world applications.
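The benchmark itself is simple to state: solve a dense n×n linear system and credit the nominal 2/3·n³ + 2n² floating point operations against the wall-clock time, which is exactly why it captures only floating point throughput. A minimal sketch of that accounting, using NumPy's dense solver as a stand-in for the real HPL code:

```python
# Linpack-style flop accounting, sketched with NumPy as a stand-in for
# the real HPL benchmark: solve a dense n x n system and credit the
# nominal 2/3*n^3 + 2*n^2 operations against wall-clock time.
import time
import numpy as np

def linpack_gflops(n: int, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    start = time.perf_counter()
    x = np.linalg.solve(a, b)          # LU factorization + triangular solves
    elapsed = time.perf_counter() - start
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2  # HPL's nominal operation count
    assert np.allclose(a @ x, b)       # sanity check: the solve worked
    return flops / elapsed / 1e9

print(f"{linpack_gflops(1000):.1f} GFLOP/s on a 1000x1000 system")
```

Nothing in that figure rewards memory capacity, interconnect latency, or I/O bandwidth, which is the critics' point.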

This shift in value stands to reason, since larger machines mean more data coursing through the system, thus an increased reliance on memory and the I/O subsystem, among other factors. While raw floating

Measuring Top Supercomputer Performance in the Real World was written by Nicole Hemsoth at The Next Platform.

Knights Landing Proves Solid Ground for Intel’s Stake in Deep Learning

Intel has finally opened the first public discussions of its investment in the future of machine learning and deep learning. While some might argue it is a bit late in the game, with rivals dominating the training market for such workloads, the company had to wait for the official rollout of Knights Landing and extensions to its scalable system framework to make it official – and meaty enough to capture real share from the few players doing deep learning at scale.

Yesterday, we detailed the announcement of the first volume shipments of Knights Landing, which already is finding a home

Knights Landing Proves Solid Ground for Intel’s Stake in Deep Learning was written by Nicole Hemsoth at The Next Platform.

Intel Knights Landing Yields Big Bang For The Buck Jump

The long wait for volume shipments of Intel’s “Knights Landing” parallel X86 processors is over, and at the International Supercomputing Conference in Frankfurt, Germany, the company is unveiling the official lineup of the Xeon Phi chips that are aimed at high performance computing and machine learning workloads alike.

The lineup is uncharacteristically simple for a Xeon product line, which tends to have a lot of different options turned on and off to meet the myriad requirements of features and price points that a diverse customer base usually compels Intel to support. Over time, the Xeon Phi lineup will become more complex, with

Intel Knights Landing Yields Big Bang For The Buck Jump was written by Timothy Prickett Morgan at The Next Platform.

China Topples United States As Top Supercomputer User

For the first time since the Top 500 ranking of the most powerful supercomputers in the world was started 23 years ago, the United States is not home to the largest number of machines on the list – and China, after decades of intense investment and engineering, is.

Supercomputing is not just an academic or government endeavor, but it is an intensely nationalistic one given the enormous sums that are required to create the components of these massive machines, write software for them, and keep them running until some new approach comes along. And given that the machines support the

China Topples United States As Top Supercomputer User was written by Timothy Prickett Morgan at The Next Platform.

A Look Inside China’s Chart-Topping New Supercomputer

Much to the surprise of the supercomputing community, which is gathered in Germany for the International Supercomputing Conference this morning, news arrived that a new system has dramatically topped the Top 500 list of the world’s fastest and largest machines. And like the last one that took this group by surprise a few years ago, the new system is also in China.

Recall that the reigning supercomputer in China, the Tianhe-2 machine, has stood firmly at the top of that list for three years, outpacing the U.S. “Titan” system at Oak Ridge National Laboratory. We have a more detailed analysis

A Look Inside China’s Chart-Topping New Supercomputer was written by Nicole Hemsoth at The Next Platform.

Nvidia Rounds Out Pascal Tesla Accelerator Lineup

Nvidia wants its latest “Pascal” GP100 generation of GPUs to be broadly adopted in the market, not just used in capability-class supercomputers that push the limits of performance for traditional HPC workloads as well as for emerging machine learning systems. To accomplish this, Nvidia needs to put Pascal GPUs into a number of distinct devices that fit into different system form factors and offer various capabilities at multiple price points.

At the International Supercomputing Conference in Frankfurt, Germany, Nvidia is therefore taking the wraps off two new Tesla accelerators based on the Pascal GPUs that plug into systems

Nvidia Rounds Out Pascal Tesla Accelerator Lineup was written by Timothy Prickett Morgan at The Next Platform.

What Will GPU Accelerated AI Lend to Traditional Supercomputing?

This week at the International Supercomputing Conference (ISC ’16) we are expecting a wave of vendors and high performance computing pros to blur the borders between traditional supercomputing and what is around the corner on the application front—artificial intelligence and machine learning.

For some, merging those two areas is a stretch, but for others – particularly GPU maker Nvidia, which just extended its supercomputing/deep learning roadmap this morning – the story is far more direct, since much of the recent deep learning work has hinged on GPUs for training neural networks and machine learning algorithms.

We have written extensively over

What Will GPU Accelerated AI Lend to Traditional Supercomputing? was written by Nicole Hemsoth at The Next Platform.

Cavium Buys Access To Enterprise With QLogic Deal

Might doesn’t make right, but it sure does help. One of the recurring bothers for any technology upstart is that it is a smaller David up against vastly larger Goliaths, usually with broader and deeper sets of technologies covering multiple markets. The best way to get traction in one market, then, seems to be to have significant footing in several markets.

This is the strategy that ARM server chip and switch ASIC maker Cavium is taking as it shells out approximately $1.36 billion to acquire network and storage switch chip maker QLogic. The combination of the two companies will

Cavium Buys Access To Enterprise With QLogic Deal was written by Timothy Prickett Morgan at The Next Platform.