Continuum: a platform for cost-aware low-latency continual learning
Tian et al., SoCC'18
Let’s start with some broad approximations. Batching yields higher throughput at the cost of higher latency; processing items one at a time yields lower latency but often reduced throughput. We can recover throughput to a degree by throwing horizontally scalable resources at the problem, but latency is much harder to recover. In many business scenarios latency matters, which is why we’ve seen a movement over time from batching, through micro-batching, to online streaming.
Continuum looks at the same issues from the perspective of machine learning models. Offline (batch-trained) models can suffer from concept drift — a loss of accuracy over time — because they don’t incorporate the latest data. In other words, there’s a business cost to a high latency of update incorporation. Online models, by contrast, support incremental updates. Continuum determines the optimum time to retrain models as new data arrives, based on a user-selected policy (best-effort, cost-aware, or user-defined). There’s some great data here on the need for and benefit of continual learning, and a surprising twist in the tale: it turns out that even when you can afford it, updating the model on every incoming data point isn’t necessarily the best strategy.
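To make the cost-aware idea concrete, here is a minimal illustrative sketch (not Continuum’s actual implementation — the class and method names are invented for this example). It applies a ski-rental-style break-even rule: buffer incoming data, track how long each pending item has waited outside the model, and trigger a retrain once that accumulated waiting cost matches the estimated cost of a training run.

```python
class CostAwareRetrainer:
    """Hypothetical sketch of a cost-aware retraining trigger.

    Retrains when the accumulated latency cost of data not yet in the
    model reaches the estimated cost of one retraining run — the classic
    break-even point from the ski-rental problem.
    """

    def __init__(self, est_training_cost: float):
        self.est_training_cost = est_training_cost  # estimated retrain time (seconds)
        self.pending = []  # (arrival_time, item) pairs awaiting incorporation

    def add(self, item, now: float) -> None:
        """Record a newly arrived data item with its arrival timestamp."""
        self.pending.append((now, item))

    def accumulated_wait(self, now: float) -> float:
        """Total time pending items have spent waiting outside the model."""
        return sum(now - t for t, _ in self.pending)

    def should_retrain(self, now: float) -> bool:
        """True once waiting cost has caught up with retraining cost."""
        return self.accumulated_wait(now) >= self.est_training_cost

    def retrain(self, now: float):
        """Flush the buffer; a real system would kick off training here."""
        items = [item for _, item in self.pending]
        self.pending.clear()
        return items
```

With an estimated training cost of 10 seconds, two items arriving at t=0 and t=2 accumulate 4 seconds of total wait by t=3 (no retrain), and 10 seconds by t=6 (retrain fires). The appeal of this kind of rule is that it adapts automatically: heavy data arrival pulls retrains forward, while a trickle of updates delays them, keeping total cost bounded relative to an optimal offline schedule.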