Escher Erases Batching Lines for Efficient FPGA Deep Learning
Aside from the massive parallelism available in modern FPGAs, there are two other key reasons why reconfigurable hardware is finding a fit in neural network processing, in both training and inference.
First is the energy efficiency of these devices relative to their performance, and second is the flexibility of an architecture that can be recast to the framework at hand. In the past we’ve described how FPGAs can be a better fit than GPUs, and in some cases even custom ASICs, and what the future might hold for novel architectures based on reconfigurable hardware for these workloads. But there is still …
Escher Erases Batching Lines for Efficient FPGA Deep Learning was written by Nicole Hemsoth at The Next Platform.