
Category Archives for "IT Industry"

Nvidia Tesla Compute Business Quadruples In Q4

If Nvidia’s Datacenter business unit was a startup and separate from the company, we would all be talking about the long investment it has made in GPU-based computing and how the company has moved from the blade of the hockey stick and rounded the bend and is moving rapidly up the handle with triple-digit revenue growth and an initial public offering on the horizon.

But the part of Nvidia’s business that is driven by its Tesla compute engines and GRID visualization engines is not a separate company and it is not going public. Still, that business is sure making things

Nvidia Tesla Compute Business Quadruples In Q4 was written by Timothy Prickett Morgan at The Next Platform.

ARM Gains Stronger Foothold In China With AI And IoT

China represents a huge opportunity for chip designer ARM as it looks to extend its low-power system-on-a-chip (SoC) architecture beyond the mobile and embedded devices spaces and into new areas, such as the datacenter and emerging markets like autonomous vehicles, drones and the Internet of Things. China is a massive, fast-growing market with tech companies – including such giants as Baidu, Alibaba, and Tencent – looking to leverage such technologies as artificial intelligence to help expand their businesses deeper into the global market and turning to vendors like ARM that can help them fuel that growth.

ARM Holdings, which designs

ARM Gains Stronger Foothold In China With AI And IoT was written by Jeffrey Burt at The Next Platform.

Top Chinese Supercomputer Blazes Real-World Application Trail

China’s massive Sunway TaihuLight supercomputer sent ripples through the computing world last year when it debuted in the number-one spot on the Top500 list of the world’s fastest supercomputers. Delivering 93,000 teraflops of sustained performance – and a peak of more than 125,000 teraflops – the system is nearly three times faster than the second supercomputer on the list (the Tianhe-2, also a Chinese system) and dwarfs the Titan system at Oak Ridge National Laboratory, a Cray-based machine that is the world’s third-fastest system, and the fastest in the United States.

However, it wasn’t only the system’s performance that garnered a lot

Top Chinese Supercomputer Blazes Real-World Application Trail was written by Jeffrey Burt at The Next Platform.

Intel Gets Serious About Neuromorphic, Cognitive Computing Future

Like all hardware device makers eager to meet the newest market opportunity, Intel is placing multiple bets on the future of machine learning hardware. The chipmaker has already cast its Xeon Phi and future integrated Nervana Systems chips into the deep learning pool while touting regular Xeons to do the heavy lifting on the inference side.

However, a recent conversation we had with Intel turned up a surprising new addition to the machine learning conversation—an emphasis on neuromorphic devices and what Intel is openly calling “cognitive computing” (a term used primarily—and heavily—for IBM’s Watson-driven AI technologies). This is the first

Intel Gets Serious About Neuromorphic, Cognitive Computing Future was written by Nicole Hemsoth at The Next Platform.

Locking Down Docker To Open Up Enterprise Adoption

It happens time and time again with any new technology. Coders create this new thing, it gets deployed as an experiment and, if it is an open source project, shared with the world. As its utility is realized, adoption suddenly spikes with the do-it-yourself crowd that is eager to solve a particular problem. And then, as more mainstream enterprises take an interest, the talk turns to security.

It’s like being told to grow up by a grownup, to eat your vegetables. In fact, it isn’t like that at all. It is precisely that, and it is healthy for any technology

Locking Down Docker To Open Up Enterprise Adoption was written by Timothy Prickett Morgan at The Next Platform.

Getting Down To Bare Metal On The Cloud

When you think of the public cloud, the tendency is to focus on the big ones, like Amazon Web Services, Microsoft Azure, or Google Cloud Platform. They’re massive, dominating the public cloud skyline with huge datacenters filled with thousands of highly virtualized servers, not to mention virtualized storage and networking. Capacity is divvied up among corporate customers that are increasingly looking to run and store their workloads on someone else’s infrastructure, hardware that they don’t have to set up, deploy, manage or maintain themselves.

But as we’ve talked about before here at The Next Platform, not all workloads run

Getting Down To Bare Metal On The Cloud was written by Jeffrey Burt at The Next Platform.

Cray Outpaces HPC Market, Books Historic Quarter

It is hard to tell which part of the systems market is lumpier – that for traditional HPC systems like supercomputers or that for massive cluster deployments for the hyperscalers that run public clouds and public-facing applications on a massive scale. But what we do know for sure is that the HPC market is slowing down, and that the bellwether for that market, Cray, is doing better than the market at large, according to its latest financial results.

Despite the softness in the traditional HPC market for clusters to run simulations and models (partly driven by the political climates around the

Cray Outpaces HPC Market, Books Historic Quarter was written by Timothy Prickett Morgan at The Next Platform.

Making the Connections in Disparate Data

Enterprises are awash in data, and the number of sources of that data is only increasing. For some of the larger companies, data sources can rise into the thousands – from databases, files and tables to ERP and CRM programs – and the data itself can come in different formats, making it difficult to bring together and integrate into a unified pool. This can create a variety of challenges for businesses in everything from securing the data they have to analyzing it.

The problem isn’t going to go away. The rise of mobile and cloud computing and the Internet of

Making the Connections in Disparate Data was written by Nicole Hemsoth at The Next Platform.

Inside That Big Silicon Valley Hyperscale Supermicro Deal

Among the major companies that design and sell servers with their own brands, which are called original equipment manufacturers or OEMs, and those that co-design machines with customers and then make them, which are called original design manufacturers or ODMs, Supermicro stands apart. It does not fall precisely into either category. The company makes system components, like motherboards and enclosures, for those who want to build their own systems or those who want to sell systems to others, and it also makes complete systems, sold in onesies or twosies or sold by the hundreds of racks.

Supermicro is also a

Inside That Big Silicon Valley Hyperscale Supermicro Deal was written by Timothy Prickett Morgan at The Next Platform.

Putting ARM-Based Microservers Through The Paces

When ARM officials and partners several years ago began talking about pushing the low-power chip architecture from our phones and tablets and into the datacenter, the initial target was the emerging field of microservers – small, highly dense and highly efficient systems aimed at the growing number of cloud providers and hyperscale environments where power efficiency was as important as performance.

The thinking was that the low-power ARM architecture that was found in almost all consumer devices would fit into the energy-conscious parts of the server space that Intel was having trouble reaching with its more power-hungry Xeon processors. It

Putting ARM-Based Microservers Through The Paces was written by Timothy Prickett Morgan at The Next Platform.

Unwinding Moore’s Law from Genomics with Co-Design

More than almost any other market or research segment, genomics is vastly outpacing Moore’s Law.

The continued march of new sequencing and other instruments has created a flood of data, and the development of the DNA analysis software stack has created a tsunami. For some, high performance genomic research can only move at the pace of innovation with custom hardware and software, co-designed and tuned for the task.

We have described efforts to build custom ASICs for sequence alignment, as well as using reprogrammable hardware for genomics research, but for centers that have defined workloads and are limited by performance constraints
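
For context on the alignment workload mentioned above (this is background, not the article’s hardware or software), the kernel those custom ASICs and FPGAs accelerate is a dynamic-programming table fill; a minimal pure-Python Smith-Waterman scorer sketches it:

```python
# Minimal Smith-Waterman local alignment score, for illustration only.
# Accelerators map this O(len(a) * len(b)) table fill onto systolic arrays
# or FPGA pipelines; the scoring values here are arbitrary choices.
def smith_waterman_score(a: str, b: str, match: int = 2,
                         mismatch: int = -1, gap: int = -2) -> int:
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman_score("GATTACA", "GCATGCT"))  # toy sequences
```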

Unwinding Moore’s Law from Genomics with Co-Design was written by Nicole Hemsoth at The Next Platform.

The Case For IBM Buying Nvidia, Xilinx, And Mellanox

We spend a lot of time contemplating what technologies will be deployed at the heart of servers, storage, and networks and thereby form the foundation of the next successive generations of platforms in the datacenter for running applications old and new. While technology is inherently interesting, we are cognizant of the fact that the companies producing technology need global reach and a certain critical mass.

It is with this in mind, and as more of a thought experiment than a desire, that we consider the fate of International Business Machines in the datacenter. In many ways, other companies have long

The Case For IBM Buying Nvidia, Xilinx, And Mellanox was written by Timothy Prickett Morgan at The Next Platform.

Pushing MPI into the Deep Learning Training Stack

We have written much about large-scale deep learning implementations over the last couple of years, but one question that is being posed with increasing frequency is how these workloads (training in particular) will scale to many nodes. While different companies, including Baidu and others, have managed to get their deep learning training clusters to scale across many GPU-laden nodes, for the non-hyperscale companies with their own development teams, this scalability is a sticking point.

The answer to deep learning framework scalability can be found in the world of supercomputing. For the many nodes required for large-scale jobs, the de facto
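
The headline names the de facto standard in question: MPI. As a rough, illustrative sketch of what MPI-based data-parallel training boils down to (assuming mpi4py and NumPy; this is not code from any framework discussed in the article), each rank computes gradients on its own shard of data and an allreduce keeps the model replicas in sync:

```python
# Minimal data-parallel gradient averaging with MPI (mpi4py); a sketch only.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Stand-in for the gradient computed locally on this rank's mini-batch.
local_grad = np.random.rand(1024)

# Sum gradients across all ranks, then divide to get the global average.
global_grad = np.empty_like(local_grad)
comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
global_grad /= size

if rank == 0:
    print(f"Averaged gradients across {size} ranks")
```

Launched with something like mpirun -np 16 python train_step.py, the same collective scales from a workstation to many GPU-laden nodes, which is exactly the property the supercomputing world brings to the table.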

Pushing MPI into the Deep Learning Training Stack was written by Nicole Hemsoth at The Next Platform.

Rise of China, Real-World Benchmarks Top Supercomputing Agenda

The United States for years was the dominant player in the high-performance computing world, with more than half of the systems on the Top500 list of the world’s fastest supercomputers being housed in the country. At the same time, most HPC systems around the globe were powered by technologies from such major US tech companies as Intel, IBM, AMD, Cray and Nvidia.

That has changed rapidly over the last several years, as the Chinese government has invested tens of billions of dollars to expand the capabilities of the country’s own technology community and with a promise to spend even more

Rise of China, Real-World Benchmarks Top Supercomputing Agenda was written by Nicole Hemsoth at The Next Platform.

One Small Shop, One Extreme HPC Storage Challenge

Being at the bleeding edge of computing in the life sciences does not always mean operating at extreme scale. For some shops, advancements in new data-generating scientific tools require forward thinking at the infrastructure level—even if it doesn’t require a massive cluster with exotic architectures. We tend to cover much of what happens at the extreme scale of computing here, but it’s worth stepping back and observing how dramatic problems in HPC are addressed in much smaller environments.

This “small shop, big problem” situation is familiar to the Van Andel Research Institute (VARI), which recently moved from a genomics and

One Small Shop, One Extreme HPC Storage Challenge was written by Nicole Hemsoth at The Next Platform.

A New Twist On Adding Data Persistence To Containers

Containers continue to gain momentum as organizations look for greater efficiencies and lower costs to run distributed applications in their increasingly virtualized datacenters as well as for improving their application development environments. As we have noted before, containers are becoming more common in the enterprise, though they still have a way to go before being fully embraced in high performance computing circles.

There are myriad advantages to containers, from being able to spin them up much faster than virtual machine instances on hypervisors and pack more containers than virtual machines on a host system to gaining efficiencies
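
As background on the persistence problem itself (the stock Docker approach, not the new twist the article goes on to describe), state normally outlives a container through a named volume; the sketch below assumes the docker Python SDK and uses an illustrative volume name:

```python
# Sketch of container data persistence with a named volume, using the
# docker Python SDK (docker-py); "appdata" is an illustrative name.
import docker

client = docker.from_env()

# Create the named volume; data in it lives on the host, independent of
# any single container's lifecycle.
client.volumes.create(name="appdata")

# Mount the volume at /data; the file written here survives after this
# container exits and is removed, and can be attached to a replacement.
client.containers.run(
    "alpine:3.18",
    command="sh -c 'echo hello > /data/state.txt'",
    volumes={"appdata": {"bind": "/data", "mode": "rw"}},
    remove=True,
)
```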

A New Twist On Adding Data Persistence To Containers was written by Timothy Prickett Morgan at The Next Platform.

Solving the Challenges of Scientific Clouds

In distributed computing, there are two choices: move the data to the computation or move the computation to the data. Public and off-site private clouds present a third option: move them both. In any case, something is moving somewhere. The “right” choice depends on a variety of factors – including performance, quantity, and cost – but data seems to have more inertia in many cases, especially when the volume approaches the scale of terabytes.

For the modern cloud, adding compute power is trivial. Moving the data to that compute power is less so. With a 10 gigabit connection, the
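
The back-of-the-envelope math behind that point is worth spelling out (the figures below are illustrative, not taken from the article): even at a full 10 Gb/s with zero protocol overhead, a terabyte takes on the order of a quarter of an hour to move.

```python
# Rough lower bound on dataset transfer time over a given link speed;
# real transfers see protocol and storage overheads on top of this.
def transfer_time_seconds(dataset_bytes: float, link_gbps: float = 10.0) -> float:
    bytes_per_second = link_gbps * 1e9 / 8  # convert bits/s to bytes/s
    return dataset_bytes / bytes_per_second

one_terabyte = 1e12  # decimal terabyte, in bytes
seconds = transfer_time_seconds(one_terabyte)
print(f"1 TB over 10 Gb/s: {seconds:.0f} s (~{seconds / 60:.1f} minutes)")
# Roughly 800 seconds, or a bit over 13 minutes, at line rate.
```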

Solving the Challenges of Scientific Clouds was written by Nicole Hemsoth at The Next Platform.

Data, Analytics, Probabilities and the Super Bowl

Data is quickly becoming the coin of the realm in most aspects of the business world, and analytics the best way for organizations to cash in on it. It’s easy to be taken in by the systems and devices – much of the discussion around the Internet of Things tends to be around the things themselves, whether small sensors, mobile devices, self-driving cars or huge manufacturing systems. But the real value is in the data generated by these machines, and the ability to extract that data, analyze it and make decisions based on it in as close to real-time as

Data, Analytics, Probabilities and the Super Bowl was written by Nicole Hemsoth at The Next Platform.

When Will AWS Move Up The Stack To Real Applications?

Imagine how little fun online retailer Amazon would be having on its quarterly calls if it had not launched its Amazon Web Services cloud almost eleven years ago. The very premise of Amazon was to eliminate brick and mortar retailing, cutting out capital expenses as much as possible, to deliver books and then myriad other things to our doorsteps.

How ironic is it that Amazon pivoted to one of the most capital intensive businesses on earth – running datacenters – and has been able to extract predictable and sizable profits from it to prop up its other businesses and strengthen

When Will AWS Move Up The Stack To Real Applications? was written by Timothy Prickett Morgan at The Next Platform.

Chip Makers and the China Challenge

China represents a big and growing market opportunity for IT vendors around the world. Its huge population and market upside compared with the more mature regions across the globe are hugely attractive to system and component makers, and the Chinese government’s willingness to spend money to help build up the country’s compute capabilities only adds to the allure. In addition, it is home to such hyperscale players as Baidu, Alibaba and Tencent, which, like US counterparts Google, Facebook and eBay, are building out massive datacenters that are housing tens of thousands of servers.

However, those same Chinese government officials aren’t

Chip Makers and the China Challenge was written by Nicole Hemsoth at The Next Platform.