Object storage is not a new concept, but this type of storage architecture is beginning to garner more attention from large organisations as they grapple with the difficulties of managing increasingly large volumes of unstructured data gathered from applications, social media, and myriad other sources.
The properties of object-based storage systems mean that they can scale easily to handle hundreds or even thousands of petabytes of capacity if required. Throw in the fact that object storage can be less costly in terms of management overhead (somewhere around 20 percent, which means needing to buy 20 percent less capacity …
On Premises Object Storage Mimics Big Public Clouds was written by Timothy Prickett Morgan at The Next Platform.
There is little doubt that this is a new era for FPGAs.
While it is not news that FPGAs have been deployed in many different environments, particularly on the storage and networking side, there are fresh use cases emerging in part due to much larger datacenter trends. Energy efficiency, scalability, and the ability to handle vast volumes of streaming data are more important now than ever before. At a time when traditional CPUs are facing a future where Moore’s Law is less certain and other accelerators and custom ASICs are potential solutions with their own sets of expenses and hurdles, …
FPGA Frontiers: New Applications in Reconfigurable Computing was written by Nicole Hemsoth at The Next Platform.
The old Chinese saying, “May you live in interesting times,” is supposed to be a curse, uttered perhaps with a wry smile and a glint in the eye. But it is also, we think, a blessing, particularly in an IT sector that could use some constructive change.
Considering how much change the tech market, and therefore the companies, governments, and educational institutions of the world, have had to endure in the past three decades – and the accelerating pace of change over that period – you might be thinking it would be good to take a breather, to coast for …
Looking Ahead To The Next Platforms That Will Define 2017 was written by Nicole Hemsoth at The Next Platform.
Choosing the right interconnect for high-performance compute and storage platforms is critical for achieving the highest possible system performance and overall return on investment.
Over time, interconnect technologies have become more sophisticated and include more intelligent capabilities (offload engines), which enable the interconnect to do more than just transfer data. An intelligent interconnect can increase system efficiency; an interconnect with offload engines (an offload interconnect) dramatically reduces CPU overhead, allowing more CPU cycles to be dedicated to applications and therefore enabling higher application performance and user productivity.
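To make that idea concrete, here is a minimal sketch of our own (not from the article) using mpi4py: the non-blocking sends and receives below are the kind of transfers an offload-capable interconnect can progress in the background while the CPU keeps doing useful work.

```python
# Minimal sketch (illustrative, assumes mpi4py and exactly two ranks):
# overlapping communication with computation, the pattern that offload
# engines accelerate by moving data without consuming CPU cycles.
# Run with something like: mpirun -np 2 python overlap.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
peer = 1 - rank  # toy example: assumes exactly two ranks

send_buf = np.full(1_000_000, rank, dtype=np.float64)
recv_buf = np.empty(1_000_000, dtype=np.float64)

# Post non-blocking transfers; the network can progress these in the
# background while the CPU stays busy below.
requests = [
    comm.Isend(send_buf, dest=peer, tag=0),
    comm.Irecv(recv_buf, source=peer, tag=0),
]

# Computation that does not depend on the in-flight data.
local_result = np.sin(send_buf).sum()

# Only block once the overlapping work is done.
MPI.Request.Waitall(requests)
print(f"rank {rank}: local={local_result:.3f}, received mean={recv_buf.mean():.3f}")
```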
Today, interconnect technology has become more critical than ever before, due to a number …
The Interplay Of HPC Interconnects And CPU Utilization was written by Timothy Prickett Morgan at The Next Platform.
The Zettabyte File System (ZFS) has been supported as a back-end file system for Lustre for a long time. But in the last few years it has gained greater importance, likely due to Lustre's push into the enterprise and the increasing demands by both enterprise and non-enterprise IT to add more reliability and flexibility features to Lustre. So ZFS has seen significant traction in recent Lustre deployments.
However, over the last 18 months, a few challenges have been the focus of several open source projects in the Lustre developer community to improve performance, align the enterprise-grade features in ZFS …
Bolstering Lustre on ZFS: Highlights of Continuing Work was written by Nicole Hemsoth at The Next Platform.
This is the second article in the series on the essentials of multiprocessor programming. This time around we are going to look at some of the normally little-considered effects of having memory shared by a lot of processors and by the work concurrently executing there.
We start, though, by observing that there has been quite a market for parallelizing applications even when they do not share data. There has been remarkable growth in applications capable of using multiple distributed-memory systems for parallelism, and interestingly, that very nicely demonstrates the opportunity that exists for using massive compute capacity …
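As a toy illustration of that kind of no-shared-data parallelism, the sketch below (our own, using Python's multiprocessing module rather than anything from the article) gives each worker process its own partition of the data; the only communication is the result each worker sends back.

```python
# A toy sketch of parallelism without shared memory: each worker process
# gets its own copy of a data partition, computes independently, and the
# only "sharing" is the result message returned to the parent.
from multiprocessing import Pool
import math

def partial_sum(chunk):
    # Runs in a separate address space; nothing here touches shared state.
    return sum(math.sqrt(x) for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    chunk_size = len(data) // n_workers
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    with Pool(processes=n_workers) as pool:
        partials = pool.map(partial_sum, chunks)  # results travel back as messages

    print("total:", sum(partials))
```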
What Is SMP Without Shared Memory? was written by Timothy Prickett Morgan at The Next Platform.
In the ideal hyperscaler and cloud world, there would be one processor type with one server configuration and it would run any workload that could be thrown at it. Earth is not an ideal world, though, and it takes different machines to run different kinds of workloads.
In fact, if Google is any measure – and we believe that it is – then the number of different types of compute that need to be deployed in the datacenter to run an increasingly diverse application stack is growing, not shrinking. It is the end of the General Purpose Era, which began …
Why Google Is Driving Compute Diversity was written by Timothy Prickett Morgan at The Next Platform.
Molecular dynamics codes have a wide range of uses across scientific research and represent a target base for a variety of accelerators and approaches, from GPUs to custom ASICs. The iterative nature of these codes means that, on a CPU alone, even relatively simple simulations can require massive amounts of compute time, so the push to find ways to bolster performance is strong.
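To give a rough sense of why the compute bill gets so large, here is an illustrative sketch of our own (not drawn from any particular MD package) of the naive pairwise force loop at the heart of a timestep, using a Lennard-Jones potential as the assumed example: it touches every pair of particles, so cost grows as O(N²), and the whole evaluation repeats every step for potentially millions of steps.

```python
# Illustrative only: the O(N^2) pairwise force evaluation of a naive
# molecular dynamics step, repeated every timestep, which is why
# accelerators are so attractive for these codes.
import numpy as np

def lennard_jones_forces(positions, epsilon=1.0, sigma=1.0):
    n = len(positions)
    forces = np.zeros_like(positions)
    for i in range(n):
        for j in range(i + 1, n):            # every unique particle pair
            r_vec = positions[i] - positions[j]
            r2 = np.dot(r_vec, r_vec)
            inv_r6 = (sigma * sigma / r2) ** 3
            # Lennard-Jones force along the pair vector
            f = 24.0 * epsilon * (2.0 * inv_r6 ** 2 - inv_r6) / r2 * r_vec
            forces[i] += f
            forces[j] -= f
    return forces

positions = np.random.rand(200, 3) * 10.0    # 200 particles in a 10x10x10 box
forces = lennard_jones_forces(positions)
print("max force magnitude:", np.linalg.norm(forces, axis=1).max())
```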
It is not practical for all users to make use of a custom ASIC (as exists on domain-specific machines like those from D.E. Shaw, for instance). Accordingly, this community has looked to a midway step between general …
Molecular Dynamics a Next Frontier for FPGA Acceleration was written by Nicole Hemsoth at The Next Platform.
Hewlett Packard Enterprise is not just a manufacturer that takes components from Intel and assembles them into systems. The company also has a heritage of innovating, and it was showing off its new datacenter architecture research and development testbed, dubbed The Machine, as 2016 came to a close.
While The Machine had originally attracted considerable attention as a vehicle for HPE to commercialize memristors, it is a much broader architectural testbed. This first generation of hardware can use any standard DDR4 DIMM-based memory, volatile or non-volatile. And while large, non-volatile memory pools are interesting research targets, HPE realizes that it …
HPE Powers Up The Machine Architecture was written by Timothy Prickett Morgan at The Next Platform.
While there are some sectors of the tech-driven economy that thrive on rapid adoption of new innovations, other areas become rooted in traditional approaches due to regulatory and other constraints. Despite great advances toward precision medicine goals, the healthcare industry, like other important segments of the economy, is tied down by several specific limits that make it slower to adopt potentially higher-performing tools and techniques.
Although deep learning is nothing new, its application set is expanding. There is promise for the more mature variants of traditional deep learning (convolutional and recurrent neural networks are the prime examples) to morph …
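For readers unfamiliar with the building block involved, the generic sketch below (our own illustration, with a synthetic signal standing in for any real clinical data) shows the convolution-plus-nonlinearity operation that a convolutional network stacks into layers.

```python
# A minimal, generic sketch of the core operation in a convolutional
# network: sliding a small filter across an input signal. The signal and
# filter here are synthetic stand-ins, not from the article.
import numpy as np

signal = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.1 * np.random.randn(200)
kernel = np.array([0.25, 0.5, 0.25])          # a tiny (here fixed) filter

feature_map = np.convolve(signal, kernel, mode="valid")  # one layer's core op
activated = np.maximum(feature_map, 0.0)                 # ReLU non-linearity

print(activated.shape)   # (198,) feature values fed to downstream layers
```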
The Road Ahead for Deep Learning in Healthcare was written by Nicole Hemsoth at The Next Platform.
One near constant that you have been seeing in the pages of The Next Platform is that the downside of the slowing rate at which new processors get faster is offset by the upside of having a lot more processing elements in a device. While the performance of programs running on individual processors might not be speeding up as much as we would like, today's systems instead give us huge capacity increases by having tens through many thousands of processors.
You have also seen in these pages that our usual vanilla view of a processor or …
The Essentials Of Multiprocessor Programming was written by Timothy Prickett Morgan at The Next Platform.
Microservices are big in the tech world these days. The evolutionary heir to service-oriented architecture, microservice-based design is the ultimate manifestation of everything you learned about good application design.
Loosely coupled with high cohesion, microservices are the application embodiment of DevOps principles. So why isn’t everything a microservice now? At the LISA Conference, Anders Wallgren and Avantika Mathur from Electric Cloud gave some insight with their talk “The Hard Truths about Microservices and Software Delivery”.
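As a concrete, if deliberately toy, illustration of what "small and loosely coupled" means in practice, the sketch below (our own, with a made-up "recommendations" endpoint) implements a single-purpose service using only the Python standard library; other services would talk to it solely through its HTTP/JSON contract.

```python
# A hypothetical, deliberately tiny "recommendations" microservice: one
# bounded responsibility, its own process, and a plain HTTP/JSON contract
# so that other services stay loosely coupled to it.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class RecommendationHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = {"status": "ok"}
        elif self.path.startswith("/recommendations/"):
            user_id = self.path.rsplit("/", 1)[-1]
            body = {"user": user_id, "items": ["sku-123", "sku-456"]}  # stub data
        else:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(body).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), RecommendationHandler).serve_forever()
```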
Perhaps the biggest impediment to the adoption of microservices-based application architecture is an organizational culture that is not supportive. Microservices proponents recommend a team …
From Monolith to Microservices was written by Nicole Hemsoth at The Next Platform.
Computing historians may look back on 2016 as the Year of Silicon Photonics. Not because the technology has become ubiquitous – that may yet be years away – but because the long-awaited silicon photonics offerings are now commercially available in networking hardware. While the advancements in networking provided by silicon photonics are indisputable, the real game changer is in the CPU.
For over half a century, Moore’s Law has been the name of the game. The transistor density on chips has been remarkably cooperative in doubling on schedule since Gordon Moore first made his observation in 1965. But Moore’s Law …
Let There Be Light: The Year in Silicon Photonics was written by Nicole Hemsoth at The Next Platform.
Processing for server compute has gotten more general purpose over the past two decades, but it is now seeing a resurgence of built-for-purpose chips. Network equipment makers have made their own specialized chips as well as bought merchant chips of varying kinds to meet very specific switching and routing needs.
Of the chip upstarts that are competing against industry juggernaut Cisco Systems, Arista Networks stands out as the company that decided from its founding in 2009 to rely only on merchant silicon for switches and to differentiate on speed to market and software functionality and commonality across many different switch ASICs with its …
Arista Gives Tomahawk 25G Ethernet Some XPliant Competition was written by Timothy Prickett Morgan at The Next Platform.
All-flash arrays are still new enough to be somewhat exotic but are – finally – becoming mainstream. As we have been saying for years, there will come a point where enterprises, which have much more complex and unpredictable workloads than hyperscalers and do not need the exabytes of capacity of cloud builders, will just throw in the towel on disk drives, move to all flash, and be done with it.
To be sure, disk drives will persist for many years to come, but perhaps not for as long as many disk drive makers and flash naysayers had expected – …
All Flash No Longer Has To Compete With Disk was written by Timothy Prickett Morgan at The Next Platform.
It has just been announced that a shift in thinking is underway among the exascale computing leads in the U.S. government, one that offers the potential of the United States installing an exascale-capable machine in 2021 and, of even more interest, a system based on a novel architecture.
As Paul Messina, Argonne National Lab Distinguished Fellow and head of the Exascale Computing Project (ECP), tells The Next Platform, the roadmap to an exascale-capable machine (meaning one capable of 50X the performance of the current 20 petaflops-class machines on the Top 500 supercomputer list) is on a seven-year, …
U.S. Bumps Exascale Timeline, Focuses on Novel Architectures for 2021 was written by Nicole Hemsoth at The Next Platform.
Just as every kind of compute job cannot be handled by a single type of microprocessor, the diversity of networking tasks in the datacenters of the world requires a variety of different switch and router ASICs to best manage those tasks.
As the volume leader in the switching arena, Broadcom comes under intense competitive pressure and has to keep on its toes to provide enough variety in its switch chips to keep its rivals at bay. One way that Broadcom does this is by having two distinct switch ASIC lines.
The StrataXGS line of chips has the famous and ubiquitous …
Buffers Sometimes Beat Bandwidth For Networks was written by Timothy Prickett Morgan at The Next Platform.
With the announcement of FPGA instances hitting the Amazon cloud, and similar news expected from FPGA experts Microsoft via Azure, among others, the lens has centered back on reconfigurable hardware and the path ahead. This has certainly been a year-plus of refocusing for the two main makers of such hardware, Altera and Xilinx, with the former being acquired by Intel and the latter picking up a range of new users, including AWS.
In addition to exploring what having a high-end Xilinx FPGA available in the cloud means for adoption, we talked to a couple of companies that have carved …
Configuring the Future for FPGAs in Genomics was written by Nicole Hemsoth at The Next Platform.
Depending on how you want to look at it, the half dozen companies that have aspired to bring the ARM architecture to the datacenter through chips designed specifically to run server workloads are either very late to market or very early. The opportunity to take on Intel was arguably many years ago, when the world's largest chip maker was weaker. And yet, despite all of the excitement and hype, no one could get an ARM chip into the field that clearly and cleanly competed against Intel's Xeons and did so publicly, with design wins that generated real volumes that took a …
Qualcomm Fires ARM Server Salvo, Broadcom Silences Guns was written by Timothy Prickett Morgan at The Next Platform.
Machine learning is a rising star in the compute constellation, and for good reason. It has the ability to not only make life more convenient – think email spam filtering, shopping recommendations, and the like – but also to save lives by powering the intelligence behind autonomous vehicles, heart attack prediction, etc. While the applications of machine learning are bounded only by imagination, the execution of those applications is bounded by the available compute resources. Machine learning is compute-intensive and it turns out that traditional compute hardware is not well-suited for the task.
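A quick back-of-the-envelope sketch (our own numbers, not from the article) hints at why: the forward pass of even a single fully connected layer is one large matrix multiply, and the arithmetic adds up to billions of floating point operations before the first prediction comes out.

```python
# Back-of-the-envelope sketch: a single fully connected layer's forward
# pass is one big matrix multiply, and the operation count grows fast.
import numpy as np

batch, n_in, n_out = 256, 4096, 4096           # illustrative layer sizes
x = np.random.rand(batch, n_in).astype(np.float32)   # a batch of inputs
w = np.random.rand(n_in, n_out).astype(np.float32)   # learned weights

y = x @ w                                      # the dominant operation
flops = 2 * batch * n_in * n_out               # one multiply + one add per term
print(f"~{flops / 1e9:.1f} GFLOPs for one layer, one forward pass")
```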
Many machine learning shops have approached the …
Building Intelligence into Machine Learning Hardware was written by Nicole Hemsoth at The Next Platform.