Revolutionary data compression technique could slash compute costs
There’s a major problem with today’s money-saving memory compression, which packs more data into less space: computers store and move data in fixed-size blocks, yet many modern programs allocate and work with data in variably sized chunks.

The way it’s done now is highly inefficient. Compressed programs are built from objects rather than evenly sized slabs of data, so they don’t match the fixed blocks used to store and run them, explain the scientists behind a revolutionary new compression system called Zippads.

The answer, they say, is to compress those varied objects rather than the cache lines, as is done today. Cache lines are the fixed-size blocks of memory that are transferred into the memory cache. If it works, the approach would drastically reduce those inefficiencies, speed things up and, importantly, cut compute costs.
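To make the mismatch concrete, here is a small, hypothetical sketch, not Zippads code: it assumes a 64-byte cache line, a few fixed storage granules, and heap objects that shrink to roughly half their size when compressed, then compares how much space is lost when every compressed object is rounded up to a fixed block versus packed object by object. All sizes and the compression ratio are illustrative assumptions.

```python
import random

# Toy model of the granularity mismatch described above: programs allocate
# variably sized objects, but a line-based compressed memory manages space
# in fixed-size granules derived from the cache line.
# All values below are assumptions for illustration, not Zippads parameters.

CACHE_LINE = 64             # bytes per cache line (common value, assumed)
FIXED_SLOTS = (16, 32, 64)  # fixed granules a line-based scheme can allocate

random.seed(0)
# 1,000 hypothetical heap objects of 12-60 bytes, each assumed to compress
# to about half its original size.
compressed_sizes = [random.randint(12, 60) // 2 for _ in range(1000)]

def line_based_footprint(sizes):
    """Round each compressed object up to the smallest fixed granule that
    holds it, modelling cache-line-granularity allocation."""
    return sum(next(slot for slot in FIXED_SLOTS if s <= slot) for s in sizes)

def object_based_footprint(sizes):
    """Pack compressed objects back to back, modelling object-granularity
    compression with no rounding to a fixed block size."""
    return sum(sizes)

line_total = line_based_footprint(compressed_sizes)
obj_total = object_based_footprint(compressed_sizes)
print(f"fixed-block storage:  {line_total} bytes")
print(f"per-object storage:   {obj_total} bytes")
print(f"space lost to the granularity mismatch: {line_total - obj_total} bytes")
```

The gap printed at the end is the internal fragmentation that compressing at object rather than cache-line granularity is meant to avoid.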