Stanford’s TETRIS Clears Blocks for 3D Memory Based Deep Learning
The need for speed in processing neural networks is far less a matter of processor capability and much more a function of memory bandwidth. As compute capability rises, so too does the need to keep the chips fed with data, which often means going off chip to memory. That trip carries both a performance penalty and an efficiency hit, which explains why so many efforts aim either to speed the connection to off-chip memory or, more efficiently, to do as much in memory as possible.
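To make the bandwidth point concrete, here is a minimal roofline-style sketch in Python. The peak-compute and bandwidth figures (PEAK_TFLOPS, DRAM_BW_GBS) and the layer size are illustrative assumptions for the sake of the arithmetic, not numbers from the Stanford work.

```python
# Roofline-style sketch: attainable throughput is capped by
# min(peak compute, memory bandwidth * arithmetic intensity).
# All hardware numbers below are illustrative assumptions,
# not the specs of any particular accelerator.

PEAK_TFLOPS = 20.0    # assumed peak compute, TFLOP/s
DRAM_BW_GBS = 400.0   # assumed off-chip bandwidth, GB/s

def attainable_tflops(flops, bytes_moved):
    """Attainable TFLOP/s for a kernel, given its FLOP count and DRAM traffic."""
    intensity = flops / bytes_moved              # FLOPs per byte of DRAM traffic
    bw_limit = DRAM_BW_GBS * intensity / 1000.0  # GB/s * FLOP/B -> TFLOP/s
    return min(PEAK_TFLOPS, bw_limit)

# A fully connected layer y = W @ x with fp16 weights (2 bytes each):
m, n = 4096, 4096
flops = 2 * m * n        # one multiply-add per weight
bytes_moved = 2 * m * n  # every weight streamed once from DRAM

print(f"arithmetic intensity: {flops / bytes_moved:.1f} FLOP/byte")
print(f"attainable: {attainable_tflops(flops, bytes_moved):.2f} TFLOP/s "
      f"of {PEAK_TFLOPS} TFLOP/s peak")
# At ~1 FLOP/byte the layer sustains ~0.4 TFLOP/s, about 2% of peak,
# which is why a wider or shorter path to memory pays off far more
# than adding raw compute for workloads like this.
```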
The advent of 3D or stacked memory opens new doors, …
Stanford’s TETRIS Clears Blocks for 3D Memory Based Deep Learning was written by Nicole Hemsoth at The Next Platform.