Archive

Category Archives for "IT Industry"

What Is SMP Without Shared Memory?

This is the second in the series on the essentials of multiprocessor programming. This time around we look at some of the seldom considered effects of having memory shared by a lot of processors and by the work executing concurrently on them.
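The excerpt stops short of naming those effects, but one classic example of what shared memory can do to concurrent work is false sharing: two threads update logically unrelated counters that happen to sit on the same cache line and end up fighting over that line anyway. The sketch below is purely illustrative and not drawn from the article; the struct names, the 64-byte cache line assumption, and the iteration count are ours.

```cpp
// Illustrative sketch: two threads bump adjacent counters. When the counters
// share a cache line (typically 64 bytes), the cores contend for that line
// ("false sharing"); padding them onto separate lines removes the contention.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

struct Packed { std::atomic<long> a{0}, b{0}; };        // usually the same cache line
struct Padded { alignas(64) std::atomic<long> a{0};
                alignas(64) std::atomic<long> b{0}; };  // forced onto separate lines

template <typename Counters>
static double run(Counters &c, long iters) {
    auto t0 = std::chrono::steady_clock::now();
    std::thread t1([&] { for (long i = 0; i < iters; ++i) c.a.fetch_add(1, std::memory_order_relaxed); });
    std::thread t2([&] { for (long i = 0; i < iters; ++i) c.b.fetch_add(1, std::memory_order_relaxed); });
    t1.join(); t2.join();
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
}

int main() {
    const long iters = 50'000'000;
    Packed p; Padded q;
    std::printf("shared cache line: %.2fs\n", run(p, iters));
    std::printf("padded counters:   %.2fs\n", run(q, iters));
}
```

On typical multicore hardware the padded version tends to run noticeably faster, even though both variants do exactly the same arithmetic.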

We start, though, by observing that there has been quite a market for parallelizing applications even when they do not share data. There has been remarkable growth in applications capable of using multiple distributed memory systems for parallelism, and that growth nicely demonstrates the opportunity that exists for using massive compute capacity

What Is SMP Without Shared Memory? was written by Timothy Prickett Morgan at The Next Platform.

Why Google Is Driving Compute Diversity

In the ideal hyperscaler and cloud world, there would be one processor type with one server configuration and it would run any workload that could be thrown at it. Earth is not an ideal world, though, and it takes different machines to run different kinds of workloads.

In fact, if Google is any measure – and we believe that it is – then the number of different types of compute that need to be deployed in the datacenter to run an increasingly diverse application stack is growing, not shrinking. It is the end of the General Purpose Era, which began

Why Google Is Driving Compute Diversity was written by Timothy Prickett Morgan at The Next Platform.

Molecular Dynamics a Next Frontier for FPGA Acceleration

Molecular dynamics codes have a wide range of uses across scientific research and represent a target base for a variety of accelerators and approaches, from GPUs to custom ASICs. Run on a CPU alone, the iterative nature of these codes can require massive amounts of compute time for relatively simple simulations, so the push to find ways to bolster performance is strong.
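To see why the iterative nature bites, consider what a naive molecular dynamics step looks like: every timestep walks all pairs of atoms to accumulate forces, and a simulation strings together millions of such timesteps. The sketch below is a deliberately simplified illustration (reduced Lennard-Jones units, no cutoffs or neighbor lists, crude Euler integration), not code from any of the packages discussed.

```cpp
// Illustrative sketch of a naive MD step: O(N^2) pairwise forces per timestep,
// repeated for enormous numbers of timesteps.
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };

void md_step(std::vector<Vec3> &pos, std::vector<Vec3> &vel, double dt) {
    const std::size_t n = pos.size();
    std::vector<Vec3> force(n, Vec3{0.0, 0.0, 0.0});
    for (std::size_t i = 0; i < n; ++i) {
        for (std::size_t j = i + 1; j < n; ++j) {            // O(N^2) pair loop
            double dx = pos[i].x - pos[j].x;
            double dy = pos[i].y - pos[j].y;
            double dz = pos[i].z - pos[j].z;
            double r2 = dx * dx + dy * dy + dz * dz + 1e-12;  // avoid divide-by-zero
            double inv_r6 = 1.0 / (r2 * r2 * r2);
            double f = 24.0 * (2.0 * inv_r6 * inv_r6 - inv_r6) / r2; // Lennard-Jones, reduced units
            force[i].x += f * dx; force[i].y += f * dy; force[i].z += f * dz;
            force[j].x -= f * dx; force[j].y -= f * dy; force[j].z -= f * dz;
        }
    }
    for (std::size_t i = 0; i < n; ++i) {                     // crude Euler update, unit mass
        vel[i].x += force[i].x * dt; pos[i].x += vel[i].x * dt;
        vel[i].y += force[i].y * dt; pos[i].y += vel[i].y * dt;
        vel[i].z += force[i].z * dt; pos[i].z += vel[i].z * dt;
    }
}

int main() {
    std::vector<Vec3> pos = {{0.0, 0.0, 0.0}, {1.2, 0.0, 0.0}, {0.0, 1.2, 0.0}};
    std::vector<Vec3> vel(pos.size(), Vec3{0.0, 0.0, 0.0});
    for (int step = 0; step < 1000; ++step)   // production runs use millions of steps
        md_step(pos, vel, 0.001);
    std::printf("x of atom 0 after 1000 steps: %f\n", pos[0].x);
}
```

Real codes add cutoffs, cell lists, and long-range electrostatics, but the step-after-step dependency that makes acceleration attractive is already visible here.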

It is not practical for all users to make use of a custom ASIC (like those in domain-specific machines such as the ones from D.E. Shaw, for instance). Accordingly, this community has looked to a mid-way step between general

Molecular Dynamics a Next Frontier for FPGA Acceleration was written by Nicole Hemsoth at The Next Platform.

HPE Powers Up The Machine Architecture

Hewlett Packard Enterprise is not just a manufacturer that takes components from Intel and assembles them into systems. The company also has a heritage of innovating, and it was showing off its new datacenter architecture research and development testbed, dubbed The Machine, as 2016 came to a close.

While The Machine had originally attracted considerable attention as a vehicle for HPE to commercialize memristors, it is a much broader architectural testbed. This first generation of hardware can use any standard DDR4 DIMM-based memory, volatile or non-volatile. And while large, non-volatile memory pools are interesting research targets, HPE realizes that it

HPE Powers Up The Machine Architecture was written by Timothy Prickett Morgan at The Next Platform.

The Road Ahead for Deep Learning in Healthcare

While there are some sectors of the tech-driven economy that thrive on the rapid adoption of new innovations, other areas become rooted in traditional approaches due to regulatory and other constraints. Despite great advances toward precision medicine goals, the healthcare industry, like other important segments of the economy, is bound by several specific constraints that make it slower to adopt potentially higher performing tools and techniques.

Although deep learning is nothing new, its application set is expanding. There is promise for the more mature variants of traditional deep learning (convolutional and recurrent neural networks are the prime examples) to morph

The Road Ahead for Deep Learning in Healthcare was written by Nicole Hemsoth at The Next Platform.

The Essentials Of Multiprocessor Programming

One near constant that you have been seeing in the pages of The Next Platform is that the downside of the slowing rate at which new processors get faster is offset by the upside of having a lot more processing elements in a device. While the performance of programs running on individual processors might not be speeding up the way we would like, today’s systems instead deliver huge capacity increases by having tens to many thousands of processors.
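As a concrete, if toy, illustration of capacity through parallelism rather than clock speed, the sketch below splits one summation across however many hardware threads the machine reports. The array size and the reduction itself are arbitrary choices of ours, not anything drawn from the article.

```cpp
// Illustrative sketch: each thread sums a slice of the array, and the partial
// sums are combined at the end -- capacity through more processors rather than
// a faster single core.
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<double> data(50'000'000, 1.0);
    unsigned nthreads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<double> partial(nthreads, 0.0);
    std::vector<std::thread> workers;

    std::size_t chunk = data.size() / nthreads;
    for (unsigned t = 0; t < nthreads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t + 1 == nthreads) ? data.size() : begin + chunk;
        workers.emplace_back([&, t, begin, end] {
            partial[t] = std::accumulate(data.begin() + begin, data.begin() + end, 0.0);
        });
    }
    for (auto &w : workers) w.join();

    double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::printf("sum = %.1f using %u threads\n", total, nthreads);
}
```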

You have also seen in these pages that our usual vanilla view of a processor or

The Essentials Of Multiprocessor Programming was written by Timothy Prickett Morgan at The Next Platform.

From Monolith to Microservices

Microservices are big in the tech world these days. The evolutionary heir to service-oriented architecture, microservice-based design is the ultimate manifestation of everything you learned about good application design.

Loosely coupled with high cohesion, microservices are the application embodiment of DevOps principles. So why isn’t everything a microservice now? At the LISA Conference, Anders Wallgren and Avantika Mathur from Electric Cloud gave some insight with their talk “The Hard Truths about Microservices and Software Delivery”.

Perhaps the biggest impediment to the adoption of microservices-based application architecture is an organizational culture that is not supportive. Microservices proponents recommend a team

From Monolith to Microservices was written by Nicole Hemsoth at The Next Platform.

Let There Be Light: The Year in Silicon Photonics

Computing historians may look back on 2016 as the Year of Silicon Photonics. Not because the technology has become ubiquitous – that may yet be years away – but because the long-awaited silicon photonics offerings are now commercially available in networking hardware. While the advancements in networking provided by silicon photonics are indisputable, the real game changer is in the CPU.

For over half a century, Moore’s Law has been the name of the game. The transistor density on chips has been remarkably cooperative in doubling on schedule since Gordon Moore first made his observation in 1965. But Moore’s Law

Let There Be Light: The Year in Silicon Photonics was written by Nicole Hemsoth at The Next Platform.

Arista Gives Tomahawk 25G Ethernet Some XPliant Competition

Server compute has become more general purpose over the past two decades, but it is now seeing a resurgence of built-for-purpose chips. Network equipment makers have made their own specialized chips as well as buying merchant chips of varying kinds to meet very specific switching and routing needs.

Of the chip upstarts that are competing against industry juggernaut Cisco Systems, Arista Networks stands out as the company that decided from its founding in 2009 to rely only on merchant silicon for switches and to differentiate on speed to market and software functionality and commonality across many different switch ASICs with its

Arista Gives Tomahawk 25G Ethernet Some XPliant Competition was written by Timothy Prickett Morgan at The Next Platform.

All Flash No Longer Has To Compete With Disk

All-flash arrays are still new enough to be somewhat exotic but are – finally – becoming mainstream. As we have been saying for years, there will come a point where enterprises, which have much more complex and unpredictable workloads than hyperscalers and do not need the exabytes of capacity of cloud builders, just throw in the towel with disk drives and move to all flash and be done with it.

To be sure, disk drives will persist for many years to come, but perhaps not for as long as many disk drive makers and flash naysayers had expected –

All Flash No Longer Has To Compete With Disk was written by Timothy Prickett Morgan at The Next Platform.

U.S. Bumps Exascale Timeline, Focuses on Novel Architectures for 2021

It has just been announced that a shift in thinking is underway among the exascale computing leads in the U.S. government, one that offers the potential of the United States installing an exascale-capable machine in 2021 and, of even more interest, a system based on a novel architecture.

As Paul Messina, Argonne National Lab Distinguished Fellow and head of the Exascale Computing Project (ECP) tells The Next Platform, the roadmap to an exascale-capable machine (meaning one capable of 50X the current 20 petaflop machines on the Top 500 supercomputer list now; 50 times 20 petaflops works out to 1,000 petaflops, or one exaflop) is on a seven-year,

U.S. Bumps Exascale Timeline, Focuses on Novel Architectures for 2021 was written by Nicole Hemsoth at The Next Platform.

Buffers Sometimes Beat Bandwidth For Networks

Just as every kind of compute job cannot be handled by a single type of microprocessor, the diversity of networking tasks in the datacenters of the world requires a variety of different switch and router ASICs to best manage those tasks.

As the volume leader in the switching arena, Broadcom comes under intense competitive pressure and has to keep on its toes to provide enough variety in its switch chips to keep its rivals at bay. One way that Broadcom does this is by having two distinct switch ASIC lines.

The StrataXGS line of chips has the famous and ubiquitous

Buffers Sometimes Beat Bandwidth For Networks was written by Timothy Prickett Morgan at The Next Platform.

Configuring the Future for FPGAs in Genomics

With the announcement of FPGA instances hitting the Amazon cloud, and with similar news expected from FPGA experts Microsoft via Azure, among others, the lens has swung back to reconfigurable hardware and the path ahead. This has certainly been a year-plus of refocusing for the two main makers of such hardware, Altera and Xilinx, with the former being acquired by Intel and the latter picking up a range of new users, including AWS.

In addition to exploring what having a high-end Xilinx FPGA available in the cloud means for adoption, we talked to a couple of companies that have carved

Configuring the Future for FPGAs in Genomics was written by Nicole Hemsoth at The Next Platform.

Qualcomm Fires ARM Server Salvo, Broadcom Silences Guns

Depending on how you want to look at it, the half dozen companies that have aspired to bring ARM architecture to the datacenter through chips designed specifically to run server workloads are either very late to market or very early. The opportunity to take on Intel was arguably many years ago, when the world’s largest chip maker was weaker, and yet despite all of the excitement and hype, no one could get an ARM chip into the field that clearly and cleanly competed against Intel’s Xeons and did so publicly with design wins that generated real volumes that took a

Qualcomm Fires ARM Server Salvo, Broadcom Silences Guns was written by Timothy Prickett Morgan at The Next Platform.

Building Intelligence into Machine Learning Hardware

Machine learning is a rising star in the compute constellation, and for good reason. It has the ability to not only make life more convenient – think email spam filtering, shopping recommendations, and the like – but also to save lives by powering the intelligence behind autonomous vehicles, heart attack prediction, etc. While the applications of machine learning are bounded only by imagination, the execution of those applications is bounded by the available compute resources. Machine learning is compute-intensive and it turns out that traditional compute hardware is not well-suited for the task.

Many machine learning shops have approached the

Building Intelligence into Machine Learning Hardware was written by Nicole Hemsoth at The Next Platform.

Is This A Server Slowdown, Or Increasing Efficiency?

If you happen to believe that spending on core IT infrastructure is a leading indicator of the robustness of national economies, and of the global economy that is stitched together from them somewhat piecemeal like a patchwork quilt, then the third quarter sales and shipments of servers are probably sounding a note of caution for you.

It certainly does for us here at The Next Platform. But it is important, particularly if we have in fact hit the peak of the X86 server market as we mused about three months ago, to not get carried away. A slowdown in spending

Is This A Server Slowdown, Or Increasing Efficiency? was written by Timothy Prickett Morgan at The Next Platform.

What it Takes to Build True FPGA as a Service

Amazon Web Services might be offering FPGAs in an EC2 cloud environment, but this is still a far cry from the FPGA-as-a-service vision many hold for the future. Nonetheless, it is a remarkable offering, given the bleeding-edge Xilinx accelerator behind it. The real success of these FPGA (F1) instances now depends on pulling in the right partnerships and tools to snap a larger user base together, one that would ideally include non-FPGA experts.

In its F1 instance announcement this week, AWS made it clear that for the developer preview, there are only VHDL and Verilog programmer tools, which are very

What it Takes to Build True FPGA as a Service was written by Nicole Hemsoth at The Next Platform.

BSC Keeps Its HPC Options Open With MareNostrum 4

When it comes to supercomputing, you not only have to strike while the iron is hot, you also have to spend while the money is available. And that fact is what often determines the technologies that HPC centers deploy as they expand the processing and storage capacity of their systems.

A good case in point is the MareNostrum 4 hybrid cluster that the Barcelona Supercomputing Center, one of the flagship research and computing institutions in Europe, has just commissioned IBM to build with the help of partners Lenovo and Fujitsu. The system balances the pressing need for more general purpose computing

BSC Keeps Its HPC Options Open With MareNostrum 4 was written by Timothy Prickett Morgan at The Next Platform.

The FPGA Accelerated Cloud Push Just Got Stronger

FPGAs have been an emerging topic on the application acceleration front over the last couple of years, but despite increased attention around use cases in machine learning and other hot areas, hands have been tied by the sheer difficulty of on-boarding with both the hardware and the software.

As we have covered here, this is changing, especially with the addition of OpenCL and other higher-level interfaces that let developers talk to FPGAs pulling double duty on both network and application work. For that matter, getting systems with integrated capabilities to handle FPGAs just as they do GPUs (over PCIe) takes extra footwork as well.
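For a sense of what that higher-level path looks like in practice, the sketch below shows a bare-bones OpenCL host program driving an FPGA: find an accelerator device, load a precompiled bitstream, and enqueue a kernel much as one would for a GPU. It is illustrative only; the bitstream file name, the kernel name, and its argument list are placeholders, error checking is omitted for brevity, and the details vary by vendor toolchain.

```cpp
// Illustrative OpenCL host flow for an FPGA. Unlike GPU flows that compile
// OpenCL C source at runtime, FPGA flows load a precompiled bitstream via
// clCreateProgramWithBinary. File and kernel names here are placeholders.
#include <CL/cl.h>
#include <cstdio>
#include <fstream>
#include <iterator>
#include <vector>

int main() {
    // 1. Find a platform and an accelerator-class device (the FPGA board).
    cl_platform_id platform; cl_device_id device; cl_int err;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ACCELERATOR, 1, &device, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    // 2. Load the precompiled bitstream and wrap it as an OpenCL program.
    std::ifstream bin("kernel.aocx", std::ios::binary);          // placeholder file name
    std::vector<unsigned char> image((std::istreambuf_iterator<char>(bin)),
                                     std::istreambuf_iterator<char>());
    size_t len = image.size();
    const unsigned char *ptr = image.data();
    cl_program prog = clCreateProgramWithBinary(ctx, 1, &device, &len, &ptr, nullptr, &err);
    clBuildProgram(prog, 1, &device, "", nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(prog, "vector_add", &err);  // placeholder kernel name

    // 3. Move data, set arguments, and launch, exactly as one would for a GPU.
    const size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  n * sizeof(float), nullptr, &err);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  n * sizeof(float), nullptr, &err);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, n * sizeof(float), nullptr, &err);
    clEnqueueWriteBuffer(queue, da, CL_TRUE, 0, n * sizeof(float), a.data(), 0, nullptr, nullptr);
    clEnqueueWriteBuffer(queue, db, CL_TRUE, 0, n * sizeof(float), b.data(), 0, nullptr, nullptr);
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &da);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &db);
    clSetKernelArg(kernel, 2, sizeof(cl_mem), &dc);
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(queue, dc, CL_TRUE, 0, n * sizeof(float), c.data(), 0, nullptr, nullptr);
    clFinish(queue);
    std::printf("c[0] = %f\n", c[0]);
}
```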

The FPGA Accelerated Cloud Push Just Got Stronger was written by Nicole Hemsoth at The Next Platform.

Lenovo Drives HPC From The Middle Ground

While we all spend a lot of time talking about the massive supercomputers that cultivate new architectures – and precisely for that reason – it is the more modest clusters that make use of those technologies many years hence that actually sustain a healthy and vibrant HPC market.

Lenovo picked up a substantial HPC business when it acquired IBM’s System x server division two years ago and also licensed key software, such as the Platform Computing stack and the GPFS file system, to drive its own HPC agenda. The Sino-American system maker has been buoyed by higher volume manufacturing thanks to the

Lenovo Drives HPC From The Middle Ground was written by Timothy Prickett Morgan at The Next Platform.