
Category Archives for "IT Industry"

Let There Be Light: The Year in Silicon Photonics

Computing historians may look back on 2016 as the Year of Silicon Photonics. Not because the technology has become ubiquitous – that may yet be years away – but because the long-awaited silicon photonics offerings are now commercially available in networking hardware. While the advancements in networking provided by silicon photonics are indisputable, the real game changer is in the CPU.

For over half a century, Moore’s Law has been the name of the game. The transistor density on chips has been remarkably cooperative in doubling on schedule since Gordon Moore first made his observation in 1965. But Moore’s Law

Let There Be Light: The Year in Silicon Photonics was written by Nicole Hemsoth at The Next Platform.

Arista Gives Tomahawk 25G Ethernet Some XPliant Competition

Processing for server compute has gotten more general purpose over the past two decades, but the industry is now seeing a resurgence of built-for-purpose chips. Network equipment makers have made their own specialized chips as well as buying merchant chips of varying kinds to meet very specific switching and routing needs.

Of the upstarts that are competing against industry juggernaut Cisco Systems, Arista Networks stands out as the company that decided from its founding in 2009 to rely only on merchant silicon for switches and to differentiate on speed to market and on software functionality and commonality across many different switch ASICs with its

Arista Gives Tomahawk 25G Ethernet Some XPliant Competition was written by Timothy Prickett Morgan at The Next Platform.

All Flash No Longer Has To Compete With Disk

All-flash arrays are still new enough to be somewhat exotic but are – finally – becoming mainstream. As we have been saying for years, there will come a point where enterprises, which have much more complex and unpredictable workloads than hyperscalers and do not need the exabytes of capacity of cloud builders, just throw in the towel on disk drives, move to all flash, and be done with it.

To be sure, disk drives will persist for many years to come, but perhaps not for as long as many disk drive makers and flash naysayers had expected –

All Flash No Longer Has To Compete With Disk was written by Timothy Prickett Morgan at The Next Platform.

U.S. Bumps Exascale Timeline, Focuses on Novel Architectures for 2021

It has just been announced that a shift in thinking is underway among the exascale computing leads in the U.S. government – one that offers the potential of the United States installing an exascale-capable machine in 2021 and, of even more interest, a system based on a novel architecture.

As Paul Messina, Argonne National Lab Distinguished Fellow and head of the Exascale Computing Project (ECP), tells The Next Platform, the roadmap to an exascale-capable machine (meaning one with roughly 50 times the performance of the 20-petaflops-class machines at the top of the Top 500 supercomputer list now, or about one exaflops) is on a seven-year,

U.S. Bumps Exascale Timeline, Focuses on Novel Architectures for 2021 was written by Nicole Hemsoth at The Next Platform.

Buffers Sometimes Beat Bandwidth For Networks

Just as no single type of microprocessor can handle every kind of compute job, the diversity of networking tasks in the datacenters of the world requires a variety of different switch and router ASICs to manage those tasks well.

As the volume leader in the switching arena, Broadcom comes under intense competitive pressure and has to keep on its toes to provide enough variety in its switch chips to keep its rivals at bay. One way that Broadcom does this is by having two distinct switch ASIC lines.

The StrataXGS line of chips has the famous and ubiquitous

Buffers Sometimes Beat Bandwidth For Networks was written by Timothy Prickett Morgan at The Next Platform.

Configuring the Future for FPGAs in Genomics

With the announcement of FPGA instances hitting the Amazon cloud, and with similar news expected from FPGA expert Microsoft via Azure, among others, the lens has centered back on reconfigurable hardware and the path ahead. This has certainly been a year-plus of refocusing for the two main makers of such hardware, Altera and Xilinx, with the former being acquired by Intel and the latter picking up a range of new users, including AWS.

In addition to exploring what having a high-end Xilinx FPGA available in the cloud means for adoption, we talked to a couple of companies that have carved

Configuring the Future for FPGAs in Genomics was written by Nicole Hemsoth at The Next Platform.

Qualcomm Fires ARM Server Salvo, Broadcom Silences Guns

Depending on how you want to look at it, the half dozen companies that have aspired to bring ARM architecture to the datacenter through chips designed specifically to run server workloads are either very late to market or very early. The opportunity to take on Intel was arguably many years ago, when the world’s largest chip maker was weaker, and yet despite all of the excitement and hype, no one could get an ARM chip into the field that clearly and cleanly competed against Intel’s Xeons and did so publicly with design wins that generated real volumes that took a

Qualcomm Fires ARM Server Salvo, Broadcom Silences Guns was written by Timothy Prickett Morgan at The Next Platform.

Building Intelligence into Machine Learning Hardware

Machine learning is a rising star in the compute constellation, and for good reason. It has the ability to not only make life more convenient – think email spam filtering, shopping recommendations, and the like – but also to save lives by powering the intelligence behind autonomous vehicles, heart attack prediction, etc. While the applications of machine learning are bounded only by imagination, the execution of those applications is bounded by the available compute resources. Machine learning is compute-intensive and it turns out that traditional compute hardware is not well-suited for the task.
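To put "compute-intensive" in rough, back-of-the-envelope terms (the figures here are illustrative assumptions, not numbers from the article): a single fully connected layer with N inputs and M outputs, evaluated over a batch of B samples, costs on the order of

FLOPs ≈ 2 × B × N × M

(each multiply-accumulate counted as two floating point operations). For B = 128 and N = M = 4,096, that works out to roughly 4.3 billion operations for one layer of one forward pass, before the backward pass and the millions of training iterations are counted.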

Many machine learning shops have approached the

Building Intelligence into Machine Learning Hardware was written by Nicole Hemsoth at The Next Platform.

Is This A Server Slowdown, Or Increasing Efficiency?

If you happen to believe that spending on core IT infrastructure is a leading indicator of the robustness of national economies, and of the global economy that is stitched together from them, somewhat piecemeal like a patchwork quilt, then the third quarter sales and shipments of servers are probably sounding a note of caution for you.

It certainly does for us here at The Next Platform. But it is important, particularly if we have in fact hit the peak of the X86 server market as we mused about three months ago, to not get carried away. A slowdown in spending

Is This A Server Slowdown, Or Increasing Efficiency? was written by Timothy Prickett Morgan at The Next Platform.

What it Takes to Build True FPGA as a Service

Amazon Web Services might be offering FPGAs in an EC2 cloud environment, but this is still a far cry from the FPGA-as-a-service vision many hold for the future. Nonetheless, it is a remarkable offering, not least because of the bleeding-edge Xilinx accelerator behind it. The real success of these FPGA (F1) instances now depends on pulling in the right partnerships and tools to bring a larger user base together – one that would ideally include non-FPGA experts.

In its F1 instance announcement this week, AWS made it clear that for the developer preview, there are only VHDL and Verilog programmer tools, which are very

What it Takes to Build True FPGA as a Service was written by Nicole Hemsoth at The Next Platform.

BSC Keeps Its HPC Options Open With MareNostrum 4

When it comes to supercomputing, you don’t only have to strike while the iron is hot, you have to spend while the money is available. And that fact is what often determines the technologies that HPC centers deploy as they expand the processing and storage capacity of their systems.

A good case in point is the MareNostrum 4 hybrid cluster that the Barcelona Supercomputing Center, one of the flagship research and computing institutions in Europe, has just commissioned IBM to build with the help of partners Lenovo and Fujitsu. The system balances the pressing need for more general purpose computing

BSC Keeps Its HPC Options Open With MareNostrum 4 was written by Timothy Prickett Morgan at The Next Platform.

The FPGA Accelerated Cloud Push Just Got Stronger

FPGAs have been an emerging topic on the application acceleration front over the last couple of years, but despite increased attention around use cases in machine learning and other hot areas, hands have been tied by the difficulty of simply onboarding with both the hardware and the software.

As we have covered here, this is changing, especially with the addition of OpenCL and other higher-level interfaces that let developers talk to FPGAs for both network and application double-duty. For that matter, getting systems with integrated capabilities to handle FPGAs just as they do GPUs (over PCIe) takes extra footwork as well.
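To make that on-ramp concrete, here is a minimal sketch of what OpenCL host code targeting an FPGA accelerator can look like. The bitstream file name (kernel.bin) and the kernel name (vector_add) are hypothetical placeholders, and real vendor flows add their own offline compilers and binary formats, but the host-side calls themselves are standard OpenCL.

/* Minimal OpenCL host-side sketch for an FPGA accelerator (illustrative only). */
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    /* Ask for an accelerator device, which is how FPGA boards typically
       present themselves, rather than a CPU or GPU. */
    err = clGetPlatformIDs(1, &platform, NULL);
    if (err == CL_SUCCESS)
        err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_ACCELERATOR, 1, &device, NULL);
    if (err != CL_SUCCESS) {
        fprintf(stderr, "no OpenCL accelerator (FPGA) device found\n");
        return 1;
    }

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);

    /* FPGA OpenCL flows normally load a bitstream compiled offline via
       clCreateProgramWithBinary instead of compiling source at runtime.
       "kernel.bin" is a hypothetical file name. */
    FILE *f = fopen("kernel.bin", "rb");
    if (f == NULL) {
        fprintf(stderr, "missing precompiled FPGA binary\n");
        return 1;
    }
    fseek(f, 0, SEEK_END);
    size_t len = (size_t)ftell(f);
    rewind(f);
    unsigned char *bin = malloc(len);
    if (fread(bin, 1, len, f) != len) {
        fprintf(stderr, "short read on FPGA binary\n");
        return 1;
    }
    fclose(f);

    cl_program prog = clCreateProgramWithBinary(ctx, 1, &device, &len,
                          (const unsigned char **)&bin, NULL, &err);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);

    /* "vector_add" is a placeholder kernel name. From here on, buffers,
       kernel arguments, and command queues work just as they do on a GPU. */
    cl_kernel kernel = clCreateKernel(prog, "vector_add", &err);
    printf("FPGA kernel ready: %s\n", err == CL_SUCCESS ? "yes" : "no");

    clReleaseKernel(kernel);
    clReleaseProgram(prog);
    clReleaseContext(ctx);
    free(bin);
    return 0;
}

The design point the higher-level tooling trades on is visible here: everything except the offline-compiled binary loaded by clCreateProgramWithBinary looks exactly like ordinary GPU-style OpenCL host code.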

The FPGA Accelerated Cloud Push Just Got Stronger was written by Nicole Hemsoth at The Next Platform.

Lenovo Drives HPC From The Middle Ground

While we all spend a lot of time talking about the massive supercomputers that cultivate new architectures – and precisely for that reason – it is the more modest clusters that make use of these technologies many years hence that actually cultivate a healthy and vibrant HPC market.

Lenovo picked up a substantial HPC business when it acquired IBM’s System x server division two years ago and also licensed key software, such as the Platform Computing stack and the GPFS file system, to drive its own HPC agenda. The Sino-American system maker has been buoyed by higher volume manufacturing thanks to the

Lenovo Drives HPC From The Middle Ground was written by Timothy Prickett Morgan at The Next Platform.

Pushing Back Against Cheap and Deep Storage

It is not always easy, but several companies dedicated to the supercomputing market have managed to retune their wares to fit more mainstream market niches. This has been true across both the hardware and software subsets of high performance computing, and such efforts have been aided by a well-timed shift in the needs of enterprises toward more robust, compute- and data-intensive workhorses as new workloads, most of which are driven by dramatic increases in data volumes and analytical capabilities, keep emerging.

For supercomputer makers, the story is a clear one. However, on the storage side, especially for those select few

Pushing Back Against Cheap and Deep Storage was written by Nicole Hemsoth at The Next Platform.

HPE Takes On The High End With SGI Expertise

SGI has always had scalable technology that should have been deployed more broadly in the academic, government, and enterprise datacenters of the world. But fighting for those budget dollars at the high end of the market always came down to needing more feet on the street, a larger global footprint for service and support, and broader certification of software stacks to exploit that SGI iron.

Now that Hewlett Packard Enterprise owns SGI – or more precisely, owns its operations in the United States and will finish off its acquisition, announced in August, probably sometime in the middle of next

HPE Takes On The High End With SGI Expertise was written by Timothy Prickett Morgan at The Next Platform.

Nvidia CEO’s “Hyper-Moore’s Law” Vision for Future Supercomputers

Over the last year in particular, we have documented the merger between high performance computing and deep learning and its various shared hardware and software ties. This next year promises far more on both horizons, and while GPU maker Nvidia might not have seen it coming to this extent when it was outfitting the former top "Titan" supercomputer with its first GPUs, the company sensed a meshing on the horizon when the first hyperscale deep learning shops were deploying CUDA and GPUs to train neural networks.

All of this portends an exciting year ahead and for once, the mighty CPU

Nvidia CEO’s “Hyper-Moore’s Law” Vision for Future Supercomputers was written by Nicole Hemsoth at The Next Platform.

OpenHPC Pedal Put To The Compute Metal

The ultimate success of any platform depends on the seamless integration of diverse components into a synergistic whole – well, as much as is possible in the real world – while at the same time being flexible enough to allow for components to be swapped out and replaced by others to suit personal preferences.

Is OpenHPC, the open source software stack aimed at simulation and modeling workloads that was spearheaded by Intel a year ago, going to be the dominant and unifying platform for high performance computing? Will OpenHPC be analogous to the Linux distributions that grew up around

OpenHPC Pedal Put To The Compute Metal was written by Timothy Prickett Morgan at The Next Platform.

As The Server Goes, So Goes The World

At this point in the 21st Century, a surprisingly large portion of the manufacturing, warehousing, distribution, marketing, and retailing of every good and service known to humankind is dependent on a piece of circuit board with two Xeon processors welded to it, wrapped in some bent sheet metal with a few blinky lights peeking out of the darkness.

Back in 1885, as the United States was beginning its rise to power, Reverend Josiah Strong declared in his populist book, Our Country: “As America goes, so goes the world.” National borders and national interests still exist, but networks cross boundaries

As The Server Goes, So Goes The World was written by Timothy Prickett Morgan at The Next Platform.

FPGAs Give Microsoft a “Von Neumann Tax” Break

At the annual Supercomputing Conference (SC16) last week, the emphasis was on deep learning and its future role as part of supercomputing applications and systems. Even before that focus took hold, however, the rise of novel architectures and reconfigurable accelerators (as alternatives to building a custom ASIC) was swift.

Feeding on that trend, a panel on non-Von Neumann architectures looked at the different ways the high performance computing set might consider non-stored-program machines and what the many burgeoning options might mean for energy efficiency and performance.

Among the presenters was Gagan Gupta, a computer architect with Microsoft Research, who detailed the

FPGAs Give Microsoft a “Von Neumann Tax” Break was written by Nicole Hemsoth at The Next Platform.

Inside Intel’s Strategy to Integrate Nervana Deep Learning Assets

There is little doubt that 2017 will be a dense year for deep learning. With a sudden new wave of applications that integrate neural networks into existing workflows (not to mention entirely new uses) and a fresh array of hardware architectures to meet them, we expect the space to start shaking out its early winners and losers and show a clearer path ahead.

As we described earlier this week, Intel has big plans to integrate the Nervana ASIC and software stack with its Knights family of processors in the next several years. This effort, codenamed Knights Crest, is a long-term

Inside Intel’s Strategy to Integrate Nervana Deep Learning Assets was written by Nicole Hemsoth at The Next Platform.