A research team from Nvidia has provided interesting insight into using mixed precision for deep learning training on very large training sets, and into how performance and scalability are affected when training recurrent neural networks with a batch size of 32,000. …
Nvidia DGX1-V Appliance Crushes NLP Training Baselines was written by Nicole Hemsoth at .
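For readers who haven't used mixed precision before, here is a minimal sketch of the general technique in PyTorch-style Python (a generic illustration, not the Nvidia team's code or framework): compute under autocast in reduced precision and use dynamic loss scaling so small gradients don't underflow. The model, data, and hyperparameters below are placeholders.

```python
# Generic mixed-precision training sketch (assumes PyTorch with a CUDA GPU).
import torch
import torch.nn as nn

class TinyRNN(nn.Module):  # hypothetical stand-in for a real RNN workload
    def __init__(self, vocab=1000, hidden=256, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, classes)

    def forward(self, x):
        out, _ = self.rnn(self.embed(x))
        return self.head(out[:, -1])

model = TinyRNN().cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()               # dynamic loss scaling

for step in range(100):
    x = torch.randint(0, 1000, (64, 32)).cuda()    # fake token batch
    y = torch.randint(0, 2, (64,)).cuda()          # fake labels
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():                # mixed FP16/FP32 compute
        loss = loss_fn(model(x), y)
    scaler.scale(loss).backward()                  # scale loss, then backprop
    scaler.step(optimizer)                         # unscale grads and update
    scaler.update()                                # adapt the scale factor
```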
The Knative GitHub page begins with a pronunciation guide because no one understands how to pronounce the platform's name.
There is a lot of work required to virtualize the RAN, and standards groups like the xRAN/O-RAN Alliance and the 3GPP can’t do it all, says Cisco.
More users need access to APM tools because of the growing complexity of application architectures, so Instana has developed a tool that personalizes APM for specific users.
When asked to rank U.S. election security preparedness, Cisco’s director of threat management and incident response said “little to none.”
The Resource Public Key Infrastructure (RPKI) system is designed to prevent hijacking of routes at their origin AS. If you don’t know how this system works (and it is likely you don’t, because there are only a few deployments in the world), you can review it by reading through this post on rule11.tech.
The paper under review today examines how widely Route Origin Validation (ROV) based on the RPKI has been deployed. The authors began by determining which Autonomous Systems (ASes) are definitely not deploying route origin validation. They did this by comparing the routes in the global RPKI database, which is synchronized among all the ASes deploying the RPKI, to the routes in the global Default Free Zone (DFZ), as seen from 44 different route servers located throughout the world. In comparing these two, they found a set of routes which the RPKI system indicated should be originated from one AS but were actually being originated from another AS in the default-free zone.
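As a rough illustration of the comparison the authors performed (not their actual tooling), here is a minimal sketch of RPKI origin validation: given a table of ROAs and an announcement observed in the DFZ, classify the announcement as valid, invalid, or not-found. The ROA entries, prefixes, and AS numbers below are made up for the example.

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class ROA:
    prefix: ipaddress.IPv4Network  # authorized prefix
    max_length: int                # longest prefix length the AS may announce
    origin_as: int                 # AS authorized to originate the prefix

def rov_state(announced_prefix, origin_as, roas):
    """Classify an observed (prefix, origin AS) pair against a set of ROAs."""
    prefix = ipaddress.ip_network(announced_prefix)
    covering = [r for r in roas if prefix.subnet_of(r.prefix)]
    if not covering:
        return "not-found"   # no ROA covers this prefix at all
    for roa in covering:
        if prefix.prefixlen <= roa.max_length and origin_as == roa.origin_as:
            return "valid"   # covered, within maxLength, and origin matches
    return "invalid"         # covered by a ROA, but origin or length is wrong

# Hypothetical data: AS 64500 is authorized for 192.0.2.0/24 down to /25.
roas = [ROA(ipaddress.ip_network("192.0.2.0/24"), 25, 64500)]

print(rov_state("192.0.2.0/25", 64500, roas))     # valid
print(rov_state("192.0.2.0/25", 64511, roas))     # invalid (wrong origin AS)
print(rov_state("198.51.100.0/24", 64500, roas))  # not-found
```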
Decentralized systems will continue to lose to centralized systems until there's a driver requiring decentralization to deliver a clearly superior consumer experience. Unfortunately, that may not happen for quite some time.
I say unfortunately because ten years ago, even five years ago, I still believed decentralization would win. Why? For all the idealistic technical reasons I laid out long ago in Building Super Scalable Systems: Blade Runner Meets Autonomic Computing In The Ambient Cloud.
While the internet and the web are inherently decentralized, mainstream applications built on top do not have to be. Typically, applications today—Facebook, Salesforce, Google, Spotify, etc.—are all centralized.
That wasn't always the case. In its early days, the internet was protocol driven, decentralized, and often distributed—FTP (1971), Telnet (<1973), FINGER (1971/1977), TCP/IP (1974), UUCP (late 1970s), NNTP (1986), DNS (1983), SMTP (1982), IRC (1988), HTTP (1990), Tor (2002), Napster (1999), and XMPP (1999).
We do have new decentralized services: Bitcoin (2009), Minecraft (2009), Ethereum (2015), IPFS (2015), Mastodon (2016), and PeerTube (2018). We're still waiting on Pied Piper to deliver the decentralized internet.
On an evolutionary timeline, decentralized systems are Neanderthals; centralized systems are the humans. Neanderthals came first. Humans may have interbred with Neanderthals, humans may have even killed off the Neanderthals, but Continue reading
Encryption for us, not for you: The U.S. Department of Homeland Security is researching ways to improve mobile encryption for federal users, even as the FBI continues to fight against encrypted data on the smartphones of ordinary users. FifthDomain has a story on the DHS effort.
Safety labels for the IoT: The Internet of Things needs food-safety-style labels detailing the safety and privacy controls on IoT devices, suggests a story at Motherboard.
Consumer Reports and other groups have begun working on a new open source standard intended to help make Internet-connected hardware safer, the story says.
Let’s Encrypt gains support: Let’s Encrypt, the Internet Society-supported secure certificate authority, has picked up endorsements from the major root programs, including Microsoft, Google, Apple, Mozilla, Oracle, and BlackBerry, Packt Hub reports. The service allows website operators to obtain SSL certificates at no cost.
That’s a lot of AI: Intel sold $1 billion worth of Artificial Intelligence chips in 2017, Reuters reports. That’s even a conservative estimate, Intel says. Prepare now for the smart robot takeover!
Hired by AI: Meanwhile, AI is coming to the hiring process, Bloomberg reports, and that may not be such a bad thing. AI may actually be Continue reading
The hottest networking startups are leveraging trends like artificial intelligence and interoperability as well as building on tried and true technologies of the past.
Network device and data center connections often described as "active-active" might be better termed "pseudo active-active."
A while ago I stumbled upon Schneier’s law (must-read):
Any person can invent a security system so clever that she or he can't think of how to break it.
I’m pretty sure there’s a networking equivalent:
Any person can create a clever network design that is so complex that she or he can't figure out how it will fail in production.
I know I’ve been there with my early OSPF network designs.
Delayed impact of fair machine learning Liu et al., ICML’18
“Delayed impact of fair machine learning” won a best paper award at ICML this year. It’s not an easy read (at least it wasn’t for me), but fortunately it’s possible to appreciate the main results without following all of the proof details. The central question is how to ensure fair treatment across demographic groups in a population when using a score-based machine learning model to decide who gets an opportunity (e.g. is offered a loan) and who doesn’t. Most recently we looked at the equal opportunity and equalized odds models.
The underlying assumption for the fairness models studied is, of course, that the fairness criteria promote the long-term well-being of the groups they aim to protect. The big result in this paper is that you can easily end up ‘killing them with kindness’ instead. The potential for this to happen exists when there is a feedback loop in place in the overall system. By overall system here, I mean the human system of which the machine learning model is just a small part. Using the loan/no-loan decision that is a popular study vehicle in fairness papers, we need to Continue reading
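As a toy illustration of the feedback loop in question (all numbers made up, and a much-simplified version of the paper's model, not the authors' data or code): approved loans that are repaid raise a group's mean credit score, defaults lower it, and a lending threshold pushed too low—as a constraint on selection rates can do—drives the expected change negative. The helper `expected_score_change` is hypothetical.

```python
import numpy as np

# Toy setup: scores x with group distribution pi(x), repayment probability
# rho(x), score gain c_plus on repayment, score loss c_minus on default.
scores = np.array([550, 600, 650, 700, 750])
pi     = np.array([0.25, 0.30, 0.25, 0.15, 0.05])   # group score distribution
rho    = np.array([0.45, 0.60, 0.75, 0.85, 0.95])   # P(repay | score)
c_plus, c_minus = 20.0, 60.0                        # score change on repay / default

def expected_score_change(threshold):
    """Expected change in the group's mean score when lending above `threshold`."""
    selected = scores >= threshold
    per_person = rho * c_plus - (1 - rho) * c_minus  # expected change if given a loan
    return float(np.sum(pi * selected * per_person))

for t in [760, 700, 650, 600, 550]:
    print(f"threshold {t}: expected mean-score change {expected_score_change(t):+.2f}")
# Lowering the threshold first helps the group, then starts to hurt it as more
# likely-to-default applicants are approved -- the delayed impact the paper
# formalizes with its outcome curve.
```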
Nearly eight years ago, I wrote an article about configuring the ASA to permit Traceroute and how to make the device show up in the output. That article is still relevant and gets quite a few hits every day. I wanted to put together a similar How-To article for those using Firepower Threat Defense.
This article examines the configuration required to allow proper traceroute functionality in an FTD environment. The examples shown here leverage Firepower Management Center to manage Firepower Threat Defense. As with any configuration, please assess the security impact and applicability to your environment before implementing.
Before we get started, it is important to understand that there are two basic types of traceroute implementations. I am using OS X for testing, and it defaults to using UDP packets for the probes. However, I can also test with ICMP using the -I option. I am already permitting all outbound traffic, so allowing the UDP or ICMP probes toward the destination is not the problem.
Both types of traceroute depend on sending packets with an incrementing TTL field. This is the field that each router decrements as a packet is forwarded. Any router that decrements the TTL to zero should drop the Continue reading
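To make the UDP-flavored mechanics concrete (a generic sketch, nothing FTD- or FMC-specific), here is a minimal traceroute in Python: send UDP probes to a high port with an increasing IP TTL, and read the ICMP Time Exceeded replies that routers along the path return. The `mini_traceroute` helper is hypothetical, needs root for the raw ICMP socket, sends one probe per hop, and doesn't inspect ICMP types the way a real traceroute does.

```python
import socket

def mini_traceroute(dest_name, max_hops=30, port=33434):
    """UDP-based traceroute sketch: one probe per TTL, 2-second timeout, no retries."""
    dest_addr = socket.gethostbyname(dest_name)
    for ttl in range(1, max_hops + 1):
        # Raw ICMP socket to catch Time Exceeded / Port Unreachable (needs root).
        recv_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
        recv_sock.settimeout(2.0)
        # UDP socket for the probe, with the IP TTL set for this hop.
        send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        send_sock.sendto(b"", (dest_addr, port))
        try:
            _, (hop_addr, _) = recv_sock.recvfrom(512)
        except socket.timeout:
            hop_addr = "*"
        finally:
            send_sock.close()
            recv_sock.close()
        print(f"{ttl:2d}  {hop_addr}")
        if hop_addr == dest_addr:   # destination answered (ICMP port unreachable)
            break

if __name__ == "__main__":
    mini_traceroute("example.com")
```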