Don’t Use the Opera Browser Because of Privacy Threats

Lack of transparency, false incentives, and lack of trust.
The post Don’t Use the Opera Browser Because of Privacy Threats appeared first on EtherealMind.

In my quest to understand how much buffer space we really need in high-speed switches, I encountered an interesting phenomenon: we no longer have a gut feeling for what makes sense, sometimes going as far as assuming that 16 MB (or 32 MB) of buffer space per 10GE/25GE data center ToR switch is another $vendor shenanigan focused on cutting costs. Time for another set of Fermi estimates.
Let’s take a recent data center switch using the Trident II+ chipset with 16 MB of buffer space (source: the awesome packet buffers page by Jim Warner). Most switches using this chipset have 48 10GE ports and 4-6 uplinks (40GE or 100GE).
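To make the estimate concrete, here’s a back-of-the-envelope calculation in Python. It assumes, purely for illustration, that the shared buffer is split evenly across the 48 front-panel ports, which is not how a real shared-buffer ASIC allocates memory, but it’s good enough for an order-of-magnitude check:

```python
# Fermi estimate of per-port buffering on a Trident II+ ToR switch,
# using the numbers from the paragraph above (16 MB shared buffer,
# 48 x 10GE ports). Even split assumed for simplicity.

BUFFER_BYTES = 16 * 2**20        # 16 MB of packet buffer
PORTS = 48                       # 10GE front-panel ports
PORT_SPEED_BPS = 10 * 10**9      # 10 Gbps per port

per_port_bytes = BUFFER_BYTES / PORTS
drain_rate_Bps = PORT_SPEED_BPS / 8              # bytes per second on the wire
drain_time_us = per_port_bytes / drain_rate_Bps * 1e6

print(f"Buffer per port: {per_port_bytes / 1024:.0f} KB")
print(f"Time to drain at line rate: {drain_time_us:.0f} microseconds")
# -> roughly 340 KB per port, draining in about 280 microseconds
```

In other words, even a "small" 16 MB buffer gives each port hundreds of microseconds of line-rate burst absorption, which puts the $vendor-shenanigans theory in perspective.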
Read more ...

Hello my friend,
This article is kind of a special one for me. That doesn’t mean everything I have written before mattered little: everything I have written about the Data Centre Fabric project was a step towards fully automated data centre operation, and today we take the final step towards closed-loop automation based on real-time data analytics with InfluxData Kapacitor.

No part of this blogpost may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical or photocopying, recording, or otherwise, for commercial purposes without the prior permission of the author.
According to the official website, InfluxData Kapacitor is an alerting system following the publish-subscribe design pattern, which supports both stream and batch data processing. Translated from the geeks’ language, it means that Kapacitor can subscribe to certain topics in a data producer (e.g., the time-series database InfluxDB or the collector Telegraf) and start getting information out of it.
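To make the publish-subscribe idea concrete, here is a minimal, purely illustrative Python sketch. This is not Kapacitor’s actual API (Kapacitor is configured through TICKscripts); it only reduces the stream-versus-batch distinction to a few lines:

```python
from collections import defaultdict
from statistics import mean

# Toy publish-subscribe broker: a producer (think Telegraf/InfluxDB)
# publishes points to named topics; handlers subscribed to a topic
# receive every point published to it.
class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, point):
        for handler in self.subscribers[topic]:
            handler(point)

# Batch-style subscriber: buffers points and processes them per window.
class BatchSubscriber:
    def __init__(self, window_size, on_window):
        self.window_size = window_size
        self.on_window = on_window
        self.buffer = []

    def __call__(self, point):
        self.buffer.append(point)
        if len(self.buffer) >= self.window_size:
            self.on_window(self.buffer)
            self.buffer = []

broker = Broker()

# Stream processing: react to every single point (alert on a threshold).
broker.subscribe("cpu", lambda p: p > 90 and print(f"ALERT: cpu={p}%"))

# Batch processing: report the mean of every 3-point window.
broker.subscribe("cpu", BatchSubscriber(3, lambda w: print(f"window mean={mean(w):.1f}%")))

for sample in (42, 95, 61, 97, 55, 73):
    broker.publish("cpu", sample)
```

The stream subscriber fires an alert the instant a point crosses the threshold, while the batch subscriber only speaks once a full window has accumulated; Kapacitor supports both modes against the same data.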
Beyond data and model parallelism for deep neural networks Jia et al., SysML’2019
I’m guessing the authors of this paper were spared some of the XML excesses of the late nineties and early noughties, since they have no qualms putting SOAP at the core of their work! To me that means the “simple” object access protocol, but not here:
We introduce SOAP, a more comprehensive search space of parallelization strategies for DNNs that includes strategies to parallelize a DNN in the Sample, Operator, Attribute, and Parameter dimensions.
The goal here is to reduce the training times of DNNs by finding efficient parallel execution strategies, and even including its search time, FlexFlow is able to increase training throughput by up to 3.3x compared to state-of-the-art approaches.
There are two key ideas behind FlexFlow. The first is to expand the set of possible solutions (and hence also the search space!) in the hope of covering more interesting potential solutions. The second is an efficient execution simulator that makes searching that space possible by giving a quick evaluation of the potential performance of a given parallelisation strategy. Combine those with an off-the-shelf Metropolis-Hastings MCMC search strategy and Bob’s your uncle.
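As a rough sketch of how those two pieces fit together, here is a toy Metropolis-Hastings search in Python. The operator names, candidate parallelism degrees, and cost model below are all made up for illustration; FlexFlow’s real execution simulator models compute, memory, and communication in detail, and its strategies are richer than a single degree per SOAP dimension:

```python
import math
import random

DIMENSIONS = ("sample", "operator", "attribute", "parameter")  # SOAP
OPERATORS = ["conv1", "conv2", "dense"]   # hypothetical DNN operators
DEGREES = (1, 2, 4)                       # candidate parallelism degrees

def random_strategy():
    # A strategy assigns each operator a degree in each SOAP dimension.
    return {op: {d: random.choice(DEGREES) for d in DIMENSIONS}
            for op in OPERATORS}

def simulated_cost(strategy):
    """Stand-in for the execution simulator: predicted step time.
    More parallelism shrinks compute but adds synchronization cost."""
    cost = 0.0
    for degrees in strategy.values():
        total = math.prod(degrees.values())
        cost += 100.0 / total + 5.0 * (total - 1)
    return cost

def mcmc_search(steps=5000, temperature=10.0):
    current = random_strategy()
    current_cost = simulated_cost(current)
    best, best_cost = current, current_cost
    for _ in range(steps):
        # Propose: re-randomize one dimension of one operator.
        proposal = {op: dict(d) for op, d in current.items()}
        proposal[random.choice(OPERATORS)][random.choice(DIMENSIONS)] = \
            random.choice(DEGREES)
        cost = simulated_cost(proposal)
        # Metropolis acceptance: always take improvements,
        # occasionally accept regressions to escape local minima.
        if cost < current_cost or \
                random.random() < math.exp((current_cost - cost) / temperature):
            current, current_cost = proposal, cost
            if cost < best_cost:
                best, best_cost = proposal, cost
    return best, best_cost

strategy, cost = mcmc_search()
print(f"best predicted step time: {cost:.1f}")
```

The key design point is that the proposal loop only ever calls the simulator, never runs real training, which is what makes exploring the much larger SOAP space tractable.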
"We have to rethink how we deliver security in this cloud-first world," Cisco's David Goeckeler...
The enhanced performance now matches that of its hyperscaler-focused Tomahawk line, though the...