Beyond data and model parallelism for deep neural networks, Jia et al., SysML’2019
I’m guessing the authors of this paper were spared some of the XML excesses of the late nineties and early noughties, since they have no qualms putting SOAP at the core of their work! To me that means the “simple” object access protocol, but not here:
We introduce SOAP, a more comprehensive search space of parallelization strategies for DNNs that includes strategies to parallelize a DNN in the Sample, Operator, Attribute, and Parameter dimensions.
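To make that a little more concrete, here’s a minimal sketch (mine, not the authors’ code) of what a single point in the SOAP search space might look like: every operator gets its own degree of parallelism along each applicable dimension, and the Operator dimension falls out of having a per-operator configuration. All class and field names here are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class OpConfig:
    """Degree of parallelism along each SOAP dimension for one operator."""
    sample: int = 1     # partition the training batch across devices (data parallelism)
    attribute: int = 1  # partition tensor attributes, e.g. image height/width
    parameter: int = 1  # partition model parameters (model parallelism)

@dataclass
class Strategy:
    """The Operator dimension: each operator carries its own configuration."""
    configs: Dict[str, OpConfig] = field(default_factory=dict)

# Example: a purely data-parallel conv layer on 4 devices, alongside a
# fully-connected layer that mixes data parallelism with a 2-way parameter split.
strategy = Strategy({
    "conv1": OpConfig(sample=4),
    "fc1":   OpConfig(sample=2, parameter=2),
})
```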
The goal here is to reduce DNN training times by finding efficient parallel execution strategies. Even including its search time, FlexFlow increases training throughput by up to 3.3x compared to state-of-the-art approaches.
There are two key ideas behind FlexFlow. The first is to expand the set of possible solutions (and hence also the search space!) in the hope of covering more interesting potential solutions. The second is an efficient execution simulator that makes searching that space possible by giving a quick evaluation of the potential performance of a given parallelisation strategy. Combine those with an off-the-shelf Metropolis-Hastings MCMC search strategy and Bob’s your uncle.
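Putting the pieces together, the search loop itself is pleasingly simple. Here’s a minimal sketch, assuming a `propose` function that perturbs the current strategy (e.g. re-partitioning one operator) and a `simulate_cost` function standing in for FlexFlow’s execution simulator; both are hypothetical stand-ins, but the acceptance rule below is the standard Metropolis-Hastings one the paper uses, so cheaper strategies are always accepted and more expensive ones only occasionally.

```python
import math
import random

def mcmc_search(initial, propose, simulate_cost, beta=1.0, steps=10_000):
    """Metropolis-Hastings search over parallelization strategies.

    Accepts a candidate with probability
    min(1, exp(beta * (cost(current) - cost(candidate)))).
    """
    current, current_cost = initial, simulate_cost(initial)
    best, best_cost = current, current_cost
    for _ in range(steps):
        candidate = propose(current)          # perturb the current strategy
        cand_cost = simulate_cost(candidate)  # simulated runtime, not a real run
        if random.random() < min(1.0, math.exp(beta * (current_cost - cand_cost))):
            current, current_cost = candidate, cand_cost
            if cand_cost < best_cost:
                best, best_cost = candidate, cand_cost
    return best, best_cost
```

The reason this works at all is the simulator: evaluating a candidate takes a simulated pass rather than an actual training run, so the loop can afford thousands of steps.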
"We have to rethink how we deliver security in this cloud-first world," Cisco's David Goeckeler...
The enhanced performance now matches that of its hyperscaler-focused Tomahawk line, though the...
This is because Cisco can "help [network operators] monetize their 5G infrastructure," he told...