Over the last couple of years, we have been watching how burst buffers might be deployed at some of the world’s largest supercomputer sites. For some background on how these SSD devices boost throughput on large machines and aid in both checkpoint and application acceleration, you can read here, but the real question is how these might penetrate the market outside of the leading supercomputing sites.
There is a clear need for burst buffer technology in other areas, where users are pairing a parallel file system with SSDs. While that is still an improvement over the disk days, a lot …
How Long Before Burst Buffers Push Past Supercomputing? was written by Nicole Hemsoth at The Next Platform.
Systems built from commodity hardware such as servers, desktops, and laptops often contain so-called general-purpose processors (CPUs), processors that specialize in doing many different things reasonably well. This is driven by the fact that users perform many different types of computations; the processor is expected to run an operating system, browse the internet, and even run video games.
Because general-purpose processors target such a broad set of applications, they must include hardware that supports all of those application areas. Since hardware occupies silicon area, there is a limit to how many of these processor “cores” can be placed, typically between 4 and …
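To make that tradeoff concrete, below is a minimal sketch, not taken from the article, of the kind of OpenMP loop that high-level synthesis tools aim to turn into parallel hardware. Because each iteration is independent, a compiler is free to spread the work across processor cores, or, in the hardware case, across replicated circuit pipelines.

```c
#include <stdio.h>

#define N 1024

int main(void) {
    static float a[N], b[N];
    float sum = 0.0f;

    for (int i = 0; i < N; i++)
        a[i] = (float)i;

    /* Each iteration is independent; the reduction clause tells the
     * compiler how to combine the per-worker partial sums safely. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        b[i] = 2.0f * a[i];
        sum += b[i];
    }

    printf("sum = %f\n", sum);
    return 0;
}
```

Built with gcc -fopenmp, the same pragma that fans the loop out across CPU cores is the annotation an OpenMP-to-hardware flow can use to decide how many parallel functional units to instantiate.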
Turning OpenMP Programs into Parallel Hardware was written by Nicole Hemsoth at The Next Platform.
In the absence of Power8+ processor upgrades this year, and with sales of midrange systems taking a bit of a hit in the third quarter, IBM has to do something to push its iron against Xeon E7 and Sparc M7 systems between now and when the Power9 machines are available in the second half of 2017. It also needs to entice customers who are on older Power7 and Power7+ machinery to upgrade now rather than wait the better part of a year to spend money.
To that end, IBM has launched the Power 850C four-socket server, a companion to …
IBM Overclocks Power8 To Take On “Broadwell” Xeon E7 was written by Timothy Prickett Morgan at The Next Platform.
Clustering together commodity servers has enabled the economies of scale that make large-scale cloud computing possible, but as we look to the future of big infrastructure beyond Moore’s Law, how might bleeding-edge technologies achieve similar scale and mass production?
To say that quantum computing is a success simply because a few machines manufactured by quantum device maker D-Wave have been sold would not necessarily be accurate. However, what the few purchases of such machines by Los Alamos National Lab, Google, and Lockheed Martin do show is that there is enough interest and potential to get the technology off the ground and …
Future Economies of Scale for Quantum Computing was written by Nicole Hemsoth at The Next Platform.
Microsoft’s embrace of programmable chips known as FPGAs is well documented. But in a paper released Monday, the software and cloud company provided a look into how it has fundamentally changed the economics of delivering hardware as a service thanks to these once-specialty pieces of silicon.
Field programmable gate arrays, or FPGAs, are chips whose logic and networking functions can be reconfigured after they have been manufactured. They are typically larger than fixed-function chips that do the same work, and traditionally they were made for small jobs where the performance advantage outweighed the higher engineering cost associated with designing them.
But thanks to the massive …
How Microsoft Fell Hard for FPGAs was written by Nicole Hemsoth at The Next Platform.
It is the job of the chief financial officer and the rest of the top brass of every public company in the world to present the financial results of their firms in the best possible light every thirteen weeks when the numbers are compiled and presented to Wall Street for grading. Money is how we all keep score, and how we decide to invest and therefore live in our old age, hopefully with a certain amount of peace.
Starting this year, IBM has been presenting its financial results in a new format, which helps it emphasize its cognitive computing …
A Snapshot Of Big Blue’s Systems Business was written by Timothy Prickett Morgan at The Next Platform.
What happens when the world’s largest public cloud and the biggest peddler of server virtualization in the enterprise team up to create a hybrid cloud?
A few things. First, the many VMware partners who have built clouds based on the ESXi hypervisor get nervous. And second, VMware very delicately and carefully prices its software low enough that it can have a scalable public cloud play but not so low that Amazon Web Services ends up having the pricing leverage that its parent company had with the book business a decade ago. And third, AWS uses the might of a …
AWS And VMware Acquaint As Strange Cloudfellows was written by Timothy Prickett Morgan at The Next Platform.
When IBM started to use the word “open” in conjunction with its Power architecture more than three years ago with the formation of the OpenPower Foundation, Big Blue was not confused about what that term meant. If the Power architecture was to survive, it would do so by having open specifications that would allow third parties to not only make peripherals, but also to license the technology and make clones of Power8 or Power9 processors.
One of the key technologies that IBM wove into the Power8 chip, and that differentiates it from Xeon, Opteron, ARM, and Sparc processors, is …
Opening Up The Server Bus For Coherent Acceleration was written by Timothy Prickett Morgan at The Next Platform.
The computing industry is facing a number of challenges as Moore’s Law improvements in circuitry slow down, and they don’t all have to do with transistor counts and memory bandwidth and such. Another problem is that it has gotten progressively more costly to design chips at the same time that mass customization seems to be the way to provide unique processing capability tailored to specific workloads.
Over the past few decades, the US Defense Advanced Research Projects Agency pumped huge sums of money into designing and manufacturing gigascale, terascale, and petascale systems, but in recent years this development arm of the US …
Adapteva Joins The Kilocore Club With Epiphany-V was written by Timothy Prickett Morgan at The Next Platform.
If any organization on the planet has had a close-up view of the coming demise of Moore’s Law, it is the Institute of Electrical and Electronics Engineers (IEEE). Since the organization’s inception in the 1960s, its wide range of industry professionals has been able to trace a steady trajectory for semiconductors, but given the limitations ahead, it is time to look to a new path, or several forks, to be more accurate.
This realization about the state of computing for the next decade and beyond has spurred action from a subgroup, led by Georgia Tech professor Tom Conte and …
IEEE Reboots, Scans for Future Architectures was written by Nicole Hemsoth at The Next Platform.
People tend to obsess about processing when it comes to system design, but ultimately an application and its data live in memory, and anything that improves the capacity, throughput, and latency of memory will make all the processing you throw at it result in useful work rather than wasted clock cycles.
This is why flash has been such a boon for systems. But we can do better, and the Gen-Z consortium, announced this week, is going to create a new memory fabric standard that it hopes will break down the barriers between main memory and other storage-class memories on …
Raising The Standard For Storage Memory Fabrics was written by Timothy Prickett Morgan at The Next Platform.
Although the timeline for reaching exascale-class computing continues to stretch farther into the future, research teams are keeping an eye on what technologies will shape the machines of the post-exascale era, which falls in the 2022-2030 timeframe.
While many of the technologies cited in a comprehensive report about post-exascale supercomputers are already in progress, albeit in various stages of development and adoption, there is little consensus about which mode of computing will lead us into an era of unprecedented data and simulation potential. Still, the effort on behalf of the EuroLab-4-HPC program is notable in its divisions between where …
Disruptive Technologies on the Post Exascale Horizon was written by Nicole Hemsoth at The Next Platform.
It is not news that offloading work from CPUs to GPUs can grant radical speedups, but what can come as a surprise is that the scaling of these workloads doesn’t change just because they run faster. Moving beyond a single node means hitting a performance wall, unless, that is, something can glue everything together so it can scale at will.
There are already technologies from supercomputing and other areas that can take multiple units of compute and have them share work (consider ScaleMP, for instance), but there are limitations to these approaches, and thus far they haven’t extended to meet the …
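As a hedged illustration of the single-node offload model that is the starting point here, the sketch below uses OpenMP’s target directives, one common way to push a loop onto a GPU; it is generic code, not tied to any of the glue technologies discussed, and the directive’s reach ends at the node boundary, which is exactly where the wall described above appears.

```c
#include <stdio.h>

#define N (1 << 20)

int main(void) {
    static double x[N], y[N];

    for (int i = 0; i < N; i++) {
        x[i] = 1.0;
        y[i] = 2.0;
    }

    /* Run a daxpy-style update on an attached accelerator. The map
     * clauses copy x to the device and y both ways; the construct
     * cannot see past this node to devices elsewhere in a cluster. */
    #pragma omp target teams distribute parallel for map(to: x) map(tofrom: y)
    for (int i = 0; i < N; i++)
        y[i] += 3.0 * x[i];

    printf("y[0] = %f\n", y[0]);
    return 0;
}
```

With a device-aware compiler (for example, clang with an offload target), the loop runs on the GPU; scaling it across many nodes still requires MPI or the kind of gluing layer at issue here.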
Ganging up Accelerators to Beat Scale Limits was written by Nicole Hemsoth at The Next Platform.
The only companies that want – and expect – all compute and storage to move to the public cloud are those public clouds that do not have a compelling private cloud story to tell. But the fact remains that for many enterprises, their most sensitive data and workloads cannot – and will not – move to the public cloud.
This almost demands, as we have discussed before, the creation of private versions of public cloud infrastructure, which, interestingly enough, is not as easy as it might seem. Scaling infrastructure down so it is still cost effective and usable by …
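One way to see what “private S3” buys is that object storage clients need only a different endpoint, not a different protocol. The sketch below is a hypothetical example using libcurl (7.75 or newer for its built-in AWS SigV4 request signing); the endpoint, bucket, object, and credentials are placeholders, and nothing in it is specific to Igneous.

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void) {
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    /* A placeholder on-premises endpoint standing in for Amazon's own
     * s3.amazonaws.com; the request format is identical either way. */
    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://s3.example.internal/mybucket/object.dat");

    /* Sign the request with AWS Signature Version 4, as the S3
     * protocol requires; credentials here are demo values. */
    curl_easy_setopt(curl, CURLOPT_AWS_SIGV4, "aws:amz:us-east-1:s3");
    curl_easy_setopt(curl, CURLOPT_USERPWD, "ACCESS_KEY:SECRET_KEY");

    CURLcode rc = curl_easy_perform(curl);  /* object body to stdout */
    if (rc != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

    curl_easy_cleanup(curl);
    return rc == CURLE_OK ? 0 : 1;
}
```

An application written against the S3 API, whether through libcurl, an SDK, or anything else, can be pointed at such an appliance by changing the endpoint alone, which is what makes a private version of a public cloud service plausible.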
Igneous Melds ARM Chips And Disks For Private S3 Storage was written by Timothy Prickett Morgan at The Next Platform.
A new crop of applications is driving the market along some unexpected routes, in some cases bypassing the processor as the landmark for performance and efficiency. While there is no end in sight for the CPU’s dominant role, at least not until Moore’s Law has been buried along the roadside, there is another path, this time down memory lane.
Just as machine learning-oriented applications represent the next development platform, memory appears to be the next platform for compute. While this won’t extend to all application areas, given the thrust of machine learning and of memory bandwidth- and capacity-strained applications, the more …
Memory is the Next Platform was written by Nicole Hemsoth at The Next Platform.
As data grows, a shift in computing paradigm is underway. I started my professional career in the 1990s, during the massive shift from mainframe computing to the heyday of client/server computing and enterprise applications such as ERP, CRM, and human resources software. Relational databases like Oracle, DB2, SQL Server, and Informix offered improvements to managing data, and the technique of combining a new class of midrange servers from Sun Microsystems, Digital Equipment Corporation, IBM, and Hewlett-Packard with storage tiers from EMC and IBM reduced costs and complexity compared with traditional mainframes.
However, what remained was that these new applications continued to operate …
The Emergence Of Data-Centric Computing was written by Timothy Prickett Morgan at The Next Platform.
Todd Mostak, the creator of the MapD GPU-accelerated database and visualization system, built that database because he was a frustrated user of other database technologies, and as a user, he is adamant that accelerating databases and visualizing queried data is about more than just being a speed freak.
“Analytics is ultimately a creative exercise,” Mostak tells The Next Platform during a conversation that was supposed to be about benchmark results but that, as often happens here, wandered far and wide. “Analysts start from some place, and where they go is a function of the resources that are …
Accelerating Slow Databases That Wear People Down was written by Timothy Prickett Morgan at The Next Platform.
One of the frustrating facts about peddling any new technology is that the early adopters that discover a strategic advantage in that technology want to keep that secret all to themselves. Word of mouth and real-world use cases are big factors in the adoption of any new technology, and anything that hampers them causes adoption to move more slowly than it otherwise might.
But eventually, despite all of the secrecy, there comes a time when the critical mass is reached and adoption proceeds apace. We have been waiting for that moment for a long time now for 64-bit ARM …
Applied Micro Finds ARM Server Footing, Reaches Higher was written by Timothy Prickett Morgan at The Next Platform.
Linux container technology is IT’s shiny new thing. Containers promise to ease application development and deployment, a necessity in a business environment where getting ahead of application demand can mean the difference between staying in business or not. Containers offer many benefits, but they are not a panacea, and it’s important to understand why, where and when to use them.
Most IT pros recognize that application containers can provide a technological edge, one that translates into a clear business advantage. Containers unify and streamline application components – including the libraries and binaries upon which individual applications depend. Combining isolation with …
Making The Case For Containers was written by Timothy Prickett Morgan at The Next Platform.
Everyone talks about securing infrastructure, but security comes at a heavy cost. While datacenters have been securing their perimeters with firewalls for decades, that is far from sufficient for modern applications.
Back in the early days of the Internet, all traffic flowed from the client in through the web and application servers to the back-end database that fed the applications – what is known as north-south traffic in datacenter lingo. But these days, an application is a collection of multiple services that are assembled on the fly from all over the datacenter, across untold server nodes, in what …
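For a sense of the work being offloaded, the sketch below encrypts one buffer with AES-256-GCM through OpenSSL’s EVP interface; it is a generic software illustration, not the product’s method, and the all-zero key and nonce are for demonstration only. Multiply this per-message cipher-and-authenticate step across every east-west hop between services and the appeal of pushing it onto an FPGA becomes clear.

```c
#include <stdio.h>
#include <openssl/evp.h>

int main(void) {
    unsigned char key[32] = {0};   /* demo key: all zeros            */
    unsigned char iv[12]  = {0};   /* 96-bit GCM nonce, demo only    */
    unsigned char msg[]   = "east-west service-to-service payload";
    unsigned char out[sizeof(msg) + 16], tag[16];
    int len = 0, total = 0;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, key, iv);
    EVP_EncryptUpdate(ctx, out, &len, msg, sizeof(msg));
    total = len;
    EVP_EncryptFinal_ex(ctx, out + total, &len);
    total += len;

    /* The authentication tag lets the receiving service detect any
     * tampering with the ciphertext in transit. */
    EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, sizeof(tag), tag);
    EVP_CIPHER_CTX_free(ctx);

    printf("ciphertext bytes: %d\n", total);
    return 0;
}
```

Compiled with -lcrypto, this runs entirely on the CPU; an offload approach aims to move the AES rounds and GCM math into hardware while the application keeps a similar stream of encrypt calls.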
Server Encryption With An FPGA Offload Boost was written by Timothy Prickett Morgan at The Next Platform.