Training a machine learning algorithm to accurately solve complex problems requires large amounts of data. The previous article discussed how scalable distributed parallel computing over a high-performance communications fabric such as Intel Omni-Path Architecture (Intel OPA) is an essential part of what makes training deep learning models on large, complex datasets tractable, both in the data center and in the cloud. Preparing large unstructured datasets for machine learning can be as intensive a task as the training process itself, especially for the file system and storage subsystems. Starting (and restarting) big data training jobs using tens of thousands of clients …
Lustre to DAOS: Machine Learning on Intel’s Platform was written by Nicole Hemsoth at The Next Platform.
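To make the idea of distributed training over a shared file system and fast fabric concrete, here is a minimal sketch using PyTorch's DistributedDataParallel. Everything in it is illustrative rather than taken from the article: the `ShardedFileDataset` class, the `/mnt/shared/train` path, the toy model, and the hyperparameters are placeholder assumptions, and the `gloo` backend simply stands in for whatever transport runs over the interconnect.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, Dataset, DistributedSampler


class ShardedFileDataset(Dataset):
    """Hypothetical dataset reading pre-saved (sample, label) tensor pairs from a shared mount."""

    def __init__(self, root):
        self.files = sorted(os.path.join(root, f) for f in os.listdir(root))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        # Every rank (and every restart) touches the shared file system here,
        # which is why data preparation and loading can stress storage as
        # much as the training computation itself.
        return torch.load(self.files[idx])


def main():
    # One process per rank, launched e.g. with `torchrun`; the fabric
    # (Intel OPA, Ethernet, ...) sits beneath the communication backend.
    dist.init_process_group("gloo")

    dataset = ShardedFileDataset("/mnt/shared/train")  # placeholder path
    sampler = DistributedSampler(dataset)              # each rank reads a distinct shard
    loader = DataLoader(dataset, batch_size=32, sampler=sampler, num_workers=4)

    model = DDP(torch.nn.Linear(1024, 10))             # toy model for illustration
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(3):
        sampler.set_epoch(epoch)                       # reshuffle shards each epoch
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()                            # gradients are all-reduced over the fabric
            opt.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

The point of the sketch is the division of labor: the sampler shards the dataset so every client reads a different slice of the shared storage, while the backward pass all-reduces gradients across ranks over the interconnect, which is where a high-performance fabric earns its keep.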