Custom metrics with Cumulus Linux

Cumulus Networks, sFlow and data center automation describes how Cumulus Linux is monitored using the open source Host sFlow agent. The agent supports the Linux, Windows, FreeBSD, Solaris, and AIX operating systems and the KVM, Xen, XCP, XenServer, and Hyper-V hypervisors, delivering a standard set of performance metrics from switches, servers, hypervisors, virtual switches, and virtual machines.

Host sFlow version 1.28.3 adds support for Custom Metrics. This article demonstrates how the extensive set of standard sFlow measurements can be augmented using custom metrics.

Recent releases of Cumulus Linux simplify the task by making machine-readable JSON a supported output format in command-line tools. For example, the cl-bgp tool can be used to dump BGP summary statistics:
cumulus@leaf1$ sudo cl-bgp summary show json
{ "router-id": "192.168.0.80", "as": 65080, "table-version": 5, "rib-count": 9, "rib-memory": 1080, "peer-count": 2, "peer-memory": 34240, "peer-group-count": 1, "peer-group-memory": 56, "peers": { "swp1": { "remote-as": 65082, "version": 4, "msgrcvd": 52082, "msgsent": 52084, "table-version": 0, "outq": 0, "inq": 0, "uptime": "05w1d04h", "prefix-received-count": 2, "prefix-advertised-count": 5, "state": "Established", "id-type": "interface" }, "swp2": { "remote-as": 65083, "version": 4, "msgrcvd": 52082, "msgsent": 52083, "table-version": 0, "outq": 0, "inq": 0, "uptime": "05w1d04h", "prefix-received-count": 2, "prefix-advertised-count": 5, "state": "Established", "id-type": "interface" } }, Continue reading

I use SNMP SETs and I’m not afraid to admit it.

Do you remember back in CCNA school when we learned all sorts of great things that we very rarely followed? One of the favourites was that we're supposed to put meaningful descriptions on all of our interfaces so we know what the other side is connected to.

How many people actually follow that advice?

Yeah, I never do it either. There are always just too many things on the list that need to get done, and the extra five seconds it would take to update the interface description just doesn't seem worth the effort. Of course, then I later check the port, end up knocking out my XYZ services, and cause myself an outage.

This is where a little Python and a decent NMS can help solve the problem.

Understanding ifIndex

Before we get into the code, we need to understand a little about ifIndex values and how they relate to the physical interfaces of a device. If you're REALLY interested, you can do some reading in RFC 2863. But in a nutshell, each interface on a device, whether physical or logical, has a specific numeric value assigned to it…
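To make the idea concrete, here's a minimal Python sketch using the pysnmp library to SET the interface description (IF-MIB::ifAlias, OID 1.3.6.1.2.1.31.1.1.1.18) for a given ifIndex. The device address, write community, ifIndex, and description below are all placeholders:

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity,
                          OctetString, setCmd)

def set_if_description(host, community, if_index, description):
    # IF-MIB::ifAlias is the writable "description" column, indexed by ifIndex
    oid = "1.3.6.1.2.1.31.1.1.1.18." + str(if_index)
    error_indication, error_status, _, _ = next(setCmd(
        SnmpEngine(),
        CommunityData(community),            # SNMPv2c write community
        UdpTransportTarget((host, 161)),
        ContextData(),
        ObjectType(ObjectIdentity(oid), OctetString(description))))
    if error_indication or error_status:
        raise RuntimeError(str(error_indication or error_status))

# Placeholder values for illustration
set_if_description("192.0.2.1", "private", 10001, "uplink to core1 Gi0/1")

Pair this with inventory data from your NMS (CDP/LLDP neighbor tables, for example) and a loop over ifIndex values, and the descriptions keep themselves up to date.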

Policy wonks aren’t computer experts

This Politico story polls "cybersecurity experts" on a range of issues. But they weren't experts; they were mostly policy wonks and politicians. Almost none of them has ever configured a firewall, written code, exploited a SQL injection, analyzed a compromise, or in any other way acquired technical expertise in cybersecurity. It's like polling a group of "medical experts", none of whom has a degree in medicine, or having a "council of economic advisers" consisting of nobody with an economics degree, but instead representatives from labor unions and corporations.

As an expert, a real expert, I thought I'd answer the questions in the poll. After each question, I'll post my answer (yes/no), the percentage of Politico's poll respondents who agreed with me, and then a discussion.

Should the government mandate minimum cybersecurity requirements for private-sector firms?

No (39%). This question is biased because they asked policy wonks, most of whom will answer "yes" to any question of the form "should government mandate". It's also biased because if you ask anybody involved in X whether we need more X, they'll say "yes", regardless of the subject you are talking about.

But the best answer is "no", for three reasons.

Firstly, we experts don't know…

The latest IoT network builds on a power-grid foundation

New networks being built with far less fanfare than cell towers will connect objects that in some cases have never been linked before, like street lights and traffic signals. The latest, called Starfish, is now debuting in Silicon Valley.

The many new dedicated networks for the Internet of Things aren't as fast as LTE or Wi-Fi, but they're designed to reach devices across an entire region with lower cost and power consumption. That's part of the equation that's supposed to make IoT work.

But as a new kind of network, these LPWA (low-power wide-area) technologies are still a Wild West of competing vendors and approaches. Take your pick: Ingenu, SigFox, LoRaWAN, NB-LTE and more.

This Japanese security drone will chase intruders

Security guards in Japan have a new tool to deter intruders: a drone that will chase down and follow people without human intervention.

Made by Secom, Japan's biggest security company, the drone goes on sale Friday to organizations that need to protect large parcels of land. It will launch whenever suspicious cars or people are detected on the property by other security equipment.

The drone will snap pictures and send them to a Secom monitoring center, where the threat can be assessed. Today, the company sends security guards to investigate potential intrusions, so a drone could reduce its response time considerably.

Using SSH Multiplexing

In this post, I’m going to discuss how to configure and use SSH multiplexing. This is yet another aspect of Secure Shell (SSH), the “Swiss Army knife” for administering and managing Linux (and other UNIX-like) workloads.

Generally speaking, multiplexing is the ability to carry multiple signals over a single connection (see this Wikipedia article for a more in-depth discussion). Similarly, SSH multiplexing is the ability to carry multiple SSH sessions over a single TCP connection. This Wikibook article goes into more detail on SSH multiplexing; in particular, I would call your attention to the table under the "Advantages of Multiplexing" section to better understand the idea of multiple SSH sessions over a single TCP connection.

One of the primary advantages of using SSH multiplexing is that it speeds up certain operations that rely on or occur over SSH. For example, let's say that you're using SSH to regularly execute a command on a remote host. Without multiplexing, every time that command is executed your SSH client must establish a new TCP connection and a new SSH session with the remote host. With multiplexing, you can configure SSH to establish a single TCP connection that is kept alive for a specific period…
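If you want to try it, the client-side configuration is just a few lines in ~/.ssh/config; a minimal sketch (the socket directory and the 10-minute persistence window are illustrative choices):

Host *
    ControlMaster auto
    ControlPath ~/.ssh/sockets/%r@%h-%p
    ControlPersist 10m

Create the socket directory first (mkdir -p ~/.ssh/sockets). The first ssh to a host becomes the master connection; subsequent sessions to the same host reuse its TCP connection, and ssh -O check <host> reports whether a master is currently running.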

Technology Short Take #57

Welcome to Technology Short Take #57. I hope you find something useful here!

Networking

Block yourself from being tracked and profiled online  

I don't often write about technology products aimed at the home user, but this is one I definitely want for my home. Small offices might find this product useful as well, and there is an enterprise version in development, so it's worth telling you about what's on my wish list this time of year.

I'm talking about eBlocker, a small device that protects your personal privacy when you are surfing the web. It's from a German engineering company of the same name, eBlocker.

Google’s new Data Loss Prevention tools could drive enterprise adoption of Gmail

Enterprises that do not have an extremely large IT operating scale or unique compliance requirements have little reason to operate internal email systems. Yesterday, Google announced Data Loss Prevention (DLP) for its enterprise Gmail service, eliminating one more compliance reason for running custom email services within the enterprise. DLP checks email messages and attachments for sensitive data to prevent disclosure to unauthorized personnel. Sensitive data includes trade secrets, intellectual property, and data regulated in industries like healthcare and financial services.

Innovation often takes a back seat to compliance; the more regulated the business, the more compliance becomes a roadblock to innovation. Before Google released DLP, the burden of data loss compliance standards prevented some enterprises from taking advantage of Gmail's 900-million-mailbox scale. Few enterprises can operate email services with the redundancy, resilience, and security of Google's Gmail. DLP means that many enterprises running less efficient email services for compliance reasons now have a Gmail option.

If only this abandoned AT&T microwave tower could talk …

Oh, the stories it might tell. But since even the tower's days of facilitating talk are behind it, a writer for The Atlantic has taken up the task of telling its story … or at least piecing one together as best as possible. From that account, which has better pictures:

We were somewhere in Kansas when we found the second microwave tower. We'd found the ruins of one somewhere else in Kansas earlier during that day. This other one still had its pyramidal horn-reflector antennae intact. One abandoned microwave tower is a coincidence; two is probably an omen. Especially when that second one has AT&T Long Lines signage out front.

FBI director renews push for back doors, urging vendors to change business models

The FBI still wants backdoors into encrypted communications; it just doesn't want to call them backdoors, and it doesn't want to dictate what they should look like.

FBI Director James Comey told the Senate Judiciary Committee that he'd been in talks with unspecified tech leaders about his need to crack encrypted communications in order to track down terrorists, and that these leaders understood the need.

In order to comply, tech companies would need to change their business model, he says: selling only communications gear that enables law enforcement to access communications in unencrypted form, rather than products that only the parties participating in the communication can decrypt.

When APIs and DevOps Meet Cybersecurity

Cybersecurity professionals often complain about the number of disparate tools they've deployed on their networks. Ask any enterprise CISO and he or she will come up with a list of around 60 to 80 security tools from a myriad of distinct vendors.

This has become a nagging problem, as an enterprise cybersecurity architecture based upon point tools can't scale and requires far too much operational overhead to maintain. Thus, CISOs are moving in another direction: a tightly coupled cybersecurity technology architecture based upon software integration.

I've been following this transition for years and always thought it would look something like the departmental-application-to-ERP migration of the 1990s. Oracle, SAP, and lots of professional services built an interoperable software infrastructure connecting applications across the enterprise and soon dominated the market. This is happening in cybersecurity to some extent as ecosystems form around the biggest vendors like Blue Coat, Cisco, IBM, Intel Security, Raytheon, Splunk, Symantec, and Trend Micro.

SHA-1 cutoff could block millions of users from encrypted websites

Millions of Web users could be left unable to access websites over the HTTPS protocol if those websites use only digital certificates signed with the SHA-2 hashing algorithm.

The warning comes from Facebook and CloudFlare as browser makers consider an accelerated retirement of the older and increasingly vulnerable SHA-1 function.

The two companies have put mechanisms in place to serve SHA-1 certificates from their websites to old browsers and operating systems that don't support SHA-2 but are still widely used in some regions of the world. These include Windows versions older than Windows XP with Service Pack 3, Android versions older than 2.3 (Gingerbread), and any applications that rely on OpenSSL 0.9.8 for encrypted communications.
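To check which hash algorithm a given site's certificate is signed with, here's a quick Python sketch using the standard ssl module and the third-party cryptography package (an assumed dependency); the hostname is a placeholder:

import ssl
from cryptography import x509

# Fetch the server's certificate in PEM form via a TLS handshake
pem = ssl.get_server_certificate(("example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())

# Prints e.g. "sha256" for a SHA-2 signed certificate, "sha1" for SHA-1
print(cert.signature_hash_algorithm.name)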

HTTP/2 For Web Developers

HTTP/2 changes the way web developers optimize their websites. In HTTP/1.1, it's become common practice to eke out an extra 5% of page load speed by hacking away at your TCP connections and HTTP requests with techniques like spriting, inlining, domain sharding, and concatenation.

Life’s a little bit easier in HTTP/2. It gives the typical website a 30% performance gain without a complicated build and deploy process. In this article, we’ll discuss the new best practices for website optimization in HTTP/2.

Web Optimization in HTTP/1.1

Most of the website optimization techniques in HTTP/1.1 revolved around minimizing the number of HTTP requests to an origin server. A browser can open only a limited number of simultaneous TCP connections to an origin, and downloading assets over each of those connections is a serial process: the response for one asset has to be returned before the request for the next one can be sent. This is called head-of-line blocking.

As a result, web developers began squeezing as many assets as they could into a single connection and finding other ways to trick browsers into avoiding head-of-line blocking. In HTTP/2, some of these practices can actually hurt page load times.
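To see the difference in practice, here's a small Python sketch using the third-party httpx client (an assumed dependency, installed with its http2 extra) to fetch several assets over a single HTTP/2 connection; the URLs are placeholders:

import httpx

# One client holds one TCP connection; with http2=True, requests to the
# same origin share that connection as separate streams instead of
# queuing behind earlier responses. Concurrent requests (for example,
# via httpx.AsyncClient) would be multiplexed in parallel.
with httpx.Client(http2=True) as client:
    for path in ("/style.css", "/app.js", "/logo.png"):
        response = client.get("https://example.com" + path)
        print(response.http_version, path, response.status_code)

Checking response.http_version confirms whether the server actually negotiated HTTP/2.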

The New Web Optimization …