Compress objects, not cache lines: an object-based compressed memory hierarchy
Tsai & Sanchez, ASPLOS’19
Last time out we saw how Google have been able to save millions of dollars through memory compression enabled via zswap. One of the important attributes of their design was easy and rapid deployment across an existing fleet. Today’s paper introduces Zippads, which achieves a 1.63x higher compression ratio than a state-of-the-art compressed memory hierarchy and improves performance by 17%. The big idea behind Zippads is simple and elegant, but the ramifications go deep: all the way down to a modified instruction set architecture (ISA)! So while you probably won’t be using Zippads in practice anytime soon, it’s a wonderful example of what’s possible when you’re prepared to take a fresh look at “the way we’ve always done things.”
The big idea
Existing cache and main memory compression techniques compress data in small fixed-size blocks, typically cache lines. Moreover, they use simple compression algorithms that focus on exploiting redundancy within a block. These techniques work well for scientific programs that are dominated by arrays. However, they are ineffective on object-based programs because objects do not fall neatly into fixed-size blocks.
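The mismatch is easy to see with a little arithmetic. Arrays of power-of-two-sized elements tile cache lines exactly, but heap objects with odd sizes frequently straddle line boundaries, so a line-granularity compressor sees fragments of several unrelated objects in each block. A hypothetical sketch (not code from the paper), assuming a 64-byte cache line and a 24-byte object as the example sizes:

```python
LINE = 64  # cache-line size in bytes (typical on x86)

def straddlers(obj_size, n, line=LINE):
    """Count how many of n contiguously packed objects of obj_size
    bytes span two cache lines."""
    count = 0
    for i in range(n):
        start = i * obj_size
        end = start + obj_size - 1
        if start // line != end // line:
            count += 1
    return count

# 8-byte array elements tile lines exactly: none straddle.
print(straddlers(8, 100))
# 24-byte objects (e.g. a small list node): a quarter straddle.
print(straddlers(24, 100))
```

With these example sizes, every fourth 24-byte object crosses a line boundary, while 8-byte array elements never do, which is one way to see why line-based compression favours array-dominated scientific codes over object-based programs.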