Parameter Encoding on FPGAs Boosts Neural Network Efficiency

The key to creating more efficient neural network models is rooted in trimming and refining the many parameters in deep learning models without losing accuracy. Much of this work is happening on the software side, but devices like FPGAs that can be tuned for trimmed parameters are offering promising early results for implementation.

A team from UC San Diego has created a reconfigurable clustering approach to deep neural networks that encodes the parameters of the network according to the accuracy requirements and limitations of the platform—which are often bound by memory access bandwidth. Encoding the trimmed parameters in an FPGA resulted in

Parameter Encoding on FPGAs Boosts Neural Network Efficiency was written by Nicole Hemsoth at The Next Platform.

Using The Network To Break Down Server Silos

Virtual machines and virtual network functions, or VMs and VNFs for short, are the standard compute units in modern enterprise, cloud, and telecommunications datacenters. But varying VM and VNF resource needs as well as networking and security requirements often force IT departments to manage servers in separate silos, each with their own respective capabilities.

For example, some VMs or VNFs may require a moderate number of vCPU cores and lower I/O bandwidth, while VMs and VNFs associated with real-time voice and video, IoT, and telco applications require a moderate-to-high number of vCPU cores, rich networking services, and high I/O bandwidth,

Using The Network To Break Down Server Silos was written by Timothy Prickett Morgan at The Next Platform.

Real-time DDoS mitigation using sFlow and BGP FlowSpec

Remotely Triggered Black Hole (RTBH) Routing describes how native BGP support in the sFlow-RT real-time sFlow analytics engine can be used to blackhole traffic in order to mitigate a distributed denial of service (DDoS) attack. Black hole routing is effective, but there is significant potential for collateral damage since ALL traffic to the IP address targeted by the attack is dropped.

The BGP FlowSpec extension (RFC 5575: Dissemination of Flow Specification Rules) provides a method of transmitting traffic filters that selectively block the attack traffic while allowing normal traffic to pass. BGP FlowSpec support has recently been added to sFlow-RT and this article demonstrates the new capability.

This demonstration uses the test network described in Remotely Triggered Black Hole (RTBH) Routing. The network was constructed using free components: VirtualBox, Cumulus VX, and Ubuntu Linux. BGP FlowSpec on white box switch describes how to implement basic FlowSpec support on Cumulus Linux.

The following flowspec.js sFlow-RT script detects and blocks UDP-Based Amplification attacks:
// BGP peer and session parameters (lab addresses from the RTBH test network)
var router = '10.0.0.141';   // FlowSpec-capable router that will receive the filter rules
var id = '10.0.0.70';        // BGP identifier used by the sFlow-RT controller
var as = 65141;              // autonomous system number for the BGP session
var thresh = 1000;           // frames per second per flow before a flow is treated as an attack
var block_minutes = 1;       // how long a filter should stay in place

// Track UDP traffic by destination address and UDP source port
// (amplification attacks share a well-known source port, e.g. DNS or NTP)
setFlow('udp_target',{keys:'ipdestination,udpsourceport',value:'frames'});

// Raise an 'attack' event whenever a single flow exceeds the threshold
setThreshold('attack',{metric:'udp_target', value:thresh, byFlow:true});

// Open the BGP session to the router with the FlowSpec address family enabled
bgpAddNeighbor(router,as,id,{flowspec:true});

Continue reading
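The excerpt cuts off before the event handler that actually announces the filter. The following is a rough sketch only, not the article's script: it assumes sFlow-RT's setEventHandler() reacts to the 'attack' threshold event and that a bgpAddFlowspec() call exists to push a rule to the peer; that call and the exact match/then object layout are assumptions, so check the sFlow-RT FlowSpec documentation for the real API.

// Sketch: on an 'attack' event, announce a FlowSpec rule that drops the offending flow.
setEventHandler(function(evt) {
  var parts = evt.flowKey.split(',');       // flow key is 'ipdestination,udpsourceport'
  var flowspec = {
    match: {
      destination: parts[0] + '/32',        // victim address
      protocol: '=17',                      // UDP
      'source-port': '=' + parts[1]         // amplification source port (e.g. 53 or 123)
    },
    then: { 'traffic-rate': 0 }             // rate-limit to zero, i.e. drop
  };
  bgpAddFlowspec(router, evt.flowKey, flowspec);  // assumed call; block_minutes expiry logic omitted
}, ['attack']);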

48% off Anker 15W Dual USB Solar Charger – Deal Alert

This solar charger from Anker delivers the fastest possible charge, up to 2.1 amps, under direct sunlight. The 15-watt SunPower solar array provides enough power to charge two devices simultaneously. Industrial-strength PET polymer-faced solar panels are sewn into a rugged polyester canvas for weather-resistant outdoor durability. Clip it to your backpack, or attach it to your tent or a tree. The charger currently averages 4.3 out of 5 stars from over 340 people on Amazon (read reviews), where its typical list price of $79.99 has been reduced 48% to $41.99. See this deal on Amazon. To read this article in full or to leave a comment, please click here

NEC claims new vector processor speeds data processing 50-fold

It seems more vendors are looking beyond the x86 architecture for the big leaps in performance needed to power things like artificial intelligence (AI) and machine learning. Google and IBM have their processor projects, Nvidia and AMD are positioning their GPUs as an alternative, and now Japan’s NEC has announced a vector processor that accelerates data processing by more than a factor of 50 compared to the Apache Spark cluster-computing framework.

The company said its vector processor, called the Aurora Vector Engine, leverages “sparse matrix” data structures to accelerate processor performance in executing machine learning tasks. Vector-based computers are basically supercomputers built specifically to handle large scientific and engineering calculations. Cray used to build them in previous decades before shifting to x86 processors. To read this article in full or to leave a comment, please click here

High-reliability OCSP stapling and why it matters

At Cloudflare our focus is making the internet faster and more secure. Today we are announcing a new enhancement to our HTTPS service: High-Reliability OCSP stapling. This feature is a step towards enabling an important security feature on the web: certificate revocation checking. Reliable OCSP stapling also improves connection times by up to 30% in some cases. In this post, we’ll explore the importance of certificate revocation checking in HTTPS, the challenges involved in making it reliable, and how we built a robust OCSP stapling service.

Why revocation is hard

Digital certificates are the cornerstone of trust on the web. A digital certificate is like an identification card for a website. It contains identity information including the website’s hostname along with a cryptographic public key. In public key cryptography, each public key has an associated private key. This private key is kept secret by the site owner. For a browser to trust an HTTPS site, the site’s server must provide a certificate that is valid for the site’s hostname and a proof of control of the certificate’s private key. If someone gets access to a certificate’s private key, they can impersonate the site. Private key compromise is a serious risk Continue reading
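For readers who want to see stapling in action, here is a minimal sketch, assuming a Node.js environment with the built-in tls module (the hostname is just a placeholder): it asks the server to staple an OCSP response during the TLS handshake and reports whether one arrived.

// Minimal OCSP stapling check (sketch; Node.js 'tls' module, placeholder hostname).
const tls = require('tls');

const host = 'example.com';   // placeholder; substitute the site you want to test
const socket = tls.connect(
  { host: host, port: 443, servername: host, requestOCSP: true },
  function () {
    console.log('Handshake complete with', host);
    socket.end();
  }
);

// Emitted during the handshake, but only if the server stapled an OCSP response.
socket.on('OCSPResponse', function (response) {
  console.log('Stapled OCSP response received:', response.length, 'bytes (DER-encoded)');
});

socket.on('error', function (err) {
  console.error('TLS error:', err.message);
});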

Encryption is Crucial to a Trusted Internet

The Five Eyes – Canada, the United States, United Kingdom, Australia, and New Zealand – recently met in Ottawa to discuss national security challenges. The resulting joint communiqué noted that “encryption can severely undermine public safety efforts by impeding lawful access to the content of communications during investigations into serious crimes, including terrorism.” The Internet Society believes that this view of encryption is misleading and bodes badly for a trusted Internet. Any weakening of encryption will hurt cybersecurity and individual rights and freedoms.

Mark Buell

Progress update – 10/07/2017

Hello folks,

I’m currently going through the INE DC videos and learning a lot about fabrics and how they work, along with a fair bit of UCS information on top of that!

I’m spending an average of 2.5 hours on weekdays studying, and a bit more on weekends when time permits.

I still have no firm commitment to the CCIE DC track, but at some point I need to commit to it and really get behind it. One of these days…

I mentioned it to the wife-to-be a couple of days ago, and while she didn’t applaud the idea, at least she wasn’t firmly against it, which is always something I guess! It’s very important for me to have my family behind me in these endeavours!

I’m still a bit concerned about the lack of rack rentals for DCv2 from INE, which is something I need to have in place before I order a bootcamp or more training materials from them. As people know by now, I really do my best learning in front of the “system”, trying out what works and what doesn’t.

Now to spin up a few N9Ks in the lab and play around Continue reading

Cisco Datacenter: Default Cisco OTV Configurations

Today I am going to talk about Cisco OTV configuration and the components that need to be configured when you extend L2 traffic over an L3 interface between two datacenters.

What is Cisco OTV?
Cisco OTV stands for Overlay Transport Virtualization. OTV is a Cisco proprietary protocol used in Cisco datacenter environments, primarily on Cisco Nexus 7K devices, to extend L2 traffic over an L3 routed network between two different datacenters.

OTV provides a native, built-in multi-homing capability with automatic detection, which is critical to increasing the high availability of the overall solution. Cisco OTV uses dynamic encapsulation for the Layer 2 flows that need to be sent to remote locations.

Each Ethernet frame is individually encapsulated into an IP packet and delivered across the transport network. Cisco OTV eliminates the need to establish virtual circuits, called pseudowires, between the data center locations, and it is one of the most in-demand technologies in datacenter environments built on Cisco Nexus devices.

Cisco OTV requires its own dedicated VDC to work; this means that if you have a single Cisco Nexus 7K switch, you need to have a separate Continue reading
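The post stops before the actual configuration, so here is a bare-bones sketch of what a typical NX-OS OTV overlay might look like. The site identifier, site VLAN, join interface, multicast groups, and VLAN range are placeholder values, not taken from the article, and a real deployment would need matching configuration at the remote site plus the dedicated OTV VDC mentioned above.

feature otv
otv site-identifier 0x1
otv site-vlan 10
!
interface Overlay1
  otv join-interface Ethernet2/1
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  otv extend-vlan 100-110
  no shutdown

Here the join interface is the routed uplink toward the transport network, the control group carries OTV’s control plane over multicast, the data group range carries extended multicast traffic, and extend-vlan lists the L2 VLANs stretched between the sites.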

Unikernels are secure. Here is why.

Per Buer is the CEO of IncludeOS. IncludeOS is a clean-slate unikernel written in C++ with performance and security in mind. Per Buer is the founder and former CEO/CTO of Varnish Software.

We’ve created a video that explains this in 7 minutes, so you’ll have the option of watching it instead of reading it.

Various arguments have been put forth for why unikernels are the better choice security-wise, along with some contradictory opinions on why they are a disaster. I believe that, from a security perspective, unikernels can offer a level of security that is unprecedented in mainstream computing.

A smaller codebase

Classic operating systems are nothing if not generic. They support everything and the kitchen sink. Since they ship in compiled form, and since users cannot be expected to compile functionality as it is needed, everything needs to come prebuilt and activated. Case in point: your Windows laptop might come with various services activated (Bluetooth, file sharing, name resolution, and similar services). You might not use them, but they are there. Go to some random security conference and these services will likely be the attack vector that is used to break into your laptop — even Continue reading

Everything Has a Cost

Everything comes at a cost: steak dinners and pre-sales engineering have to get paid for somehow. That should be obvious to most. Feature requests also come at a cost, both upfront and ongoing. Those ongoing costs are not always understood.

It’s easy to look at vendor gross margins and assume that there is plenty of fat. But remember that gross margin is just revenue minus cost of goods sold. It’s not profit. It doesn’t include sales & marketing costs, or R&D costs. Those costs affect net income, which is ‘real’ income. Companies need to recoup those costs somehow if they want to make money. Gross margin alone doesn’t pay the bills.
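To make the gap concrete, here is a quick hypothetical with made-up numbers (none of these figures come from any vendor):

Revenue on a deal:              $100,000
Cost of goods sold:              $40,000
Gross margin:                    $60,000  (60%)
Sales & marketing allocation:    $30,000
R&D allocation:                  $20,000
Net income on the deal:          $10,000  (10%)

A 60% gross margin sounds generous, but once the sales calls, the steak dinners, and the engineering behind the product are allocated, the ‘real’ income is a fraction of it.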

Four-Legged SalesDroids, and Steak Dinners

A “four-legged sales call” is when two people show up for sales calls. The usual pattern is an Account Manager for the ‘relationship’ stuff, with a Sales Engineer acting as truth police. These calls can be very useful. It’s a good way to talk about the current business challenges, discuss product roadmaps, provide feedback on what’s working, and what’s not. The Sales Engineer can offer implementation advice, maybe help with some configuration issues.

Often a sales call includes lunch or dinner. Breaking bread together Continue reading

The Linux Migration: July 2017 Progress Report

I’m now roughly six months into using Linux as my primary laptop OS, and it’s been a few months since my last progress report. If you’re just now picking up this thread, I encourage you to go back and read my initial progress report, see which Linux distribution I selected, or check how I chose to handle corporate collaboration (see here, here, and here). In this post, I’ll share where things currently stand.

My configuration is unchanged from the last progress report. I’m still running Fedora 25, and may consider upgrading to Fedora 26 when it releases (due to be released tomorrow, I believe). I’m still using the Dell Latitude E7370, which continues—from a hardware perspective—to perform admirably. CPU power is a bit limited, but that’s to be expected from a mobile-focused chip. My line-up of applications remains largely unchanged as well.

Some things are working really well:

  • Sublime Text runs really well and is quite fast, making it easy to continue using Markdown as my primary content format. Sublime Text’s performance and stability have been unparalleled.
  • I’ve had no performance or stability issues with Firefox (for browsing) or Enpass (for password management).
  • ODrive, Continue reading
