BrandPost: Edge computing is in most industries’ future

The growth of edge computing is about to take a huge leap. Right now, companies are generating about 10% of their data outside a traditional data center or cloud. But within the next six years, that will increase to 75%, according to Gartner. That’s largely down to the need to process data emanating from devices, such as Internet of Things (IoT) sensors. Early adopters include:

  • Manufacturers: Devices and sensors seem endemic to this industry, so it’s no surprise to see the need to find faster processing methods for the data produced. A recent Automation World survey found that 43% of manufacturers have deployed edge projects. The most popular use cases have included production/manufacturing data analysis and equipment data analytics.
  • Retailers: Like most industries deeply affected by the need to digitize operations, retailers are being forced to innovate their customer experiences. To that end, these organizations are “investing aggressively in compute power located closer to the buyer,” writes Dave Johnson, executive vice president of the IT division at Schneider Electric. He cites examples such as augmented-reality mirrors in fitting rooms that offer different clothing options without the consumer having to try on the items, and beacon-based heat maps that show Continue reading

Prometheus exporter

Prometheus is an open source time series database optimized to collect large numbers of metrics from cloud infrastructure. This article will explore how industry standard sFlow telemetry streaming supported by network devices (Arista, Aruba, Cisco, Dell, Huawei, Juniper, etc.) and Host sFlow agents (Linux, Windows, FreeBSD, AIX, Solaris, Docker, Systemd, Hyper-V, KVM, Nutanix AHV, Xen) can be integrated with Prometheus to extend visibility into the network.

The diagram above shows the elements of the solution: sFlow telemetry streams from hosts and switches to an instance of sFlow-RT. The sFlow-RT analytics software converts the raw measurements into metrics that are accessible through a REST API. The sflow-rt/prometheus application extends the REST API to include native Prometheus exporter functionality, allowing Prometheus to retrieve metrics. Prometheus stores metrics in a time series database that can be queried by Grafana to build dashboards.

Update, 19 October 2019: native support for Prometheus export has been added to sFlow-RT, so the Prometheus application is no longer needed to run this example; use the URL /prometheus/metrics/ALL/ALL/txt. The Prometheus application is still needed for exporting traffic flows; see Flow metrics with Prometheus and Grafana.
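As a quick sanity check (my example, not from the original post), you can confirm the endpoint is serving metrics with curl, assuming sFlow-RT is running locally on its default REST port 8008:

# Fetch all metrics in Prometheus exposition format from a local sFlow-RT instance
curl http://localhost:8008/prometheus/metrics/ALL/ALL/txt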

The Docker sflow/prometheus image provides a simple way to run the application:
docker run --name sflow-rt -p 8008:8008 -p  Continue reading

How to identify same-content files on Linux

In a recent post, we looked at how to identify and locate files that are hard links (i.e., that point to the same disk content and share inodes). In this post, we'll check out commands for finding files that have the same content, but are not otherwise connected.

Hard links are helpful because they allow files to exist in multiple places in the file system while not taking up any additional disk space. Copies of files, on the other hand, sometimes represent a big waste of disk space and run some risk of causing confusion if you want to make updates. In this post, we're going to look at multiple ways to identify these files.
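The article walks through several approaches; as a rough sketch of the general idea (my example, not the article's), you can hash every file and report any checksum that appears more than once:

# Hash all regular files under the current directory, then print groups of
# lines whose first 32 characters (the MD5 digest) match, i.e. identical content.
find . -type f -exec md5sum {} + | sort | uniq -w32 -D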

Amazon CloudFront with WordPress as Infrastructure as Code

There are roughly a GAJILLION articles, blogs, and documents out there that explain how to set up Amazon CloudFront to work with WordPress.

Most of them are wrong in one or more ways.

  • They advise a type of cache behavior that is incorrect for one or more WordPress assets.
  • They fail to provide any advice for WordPress assets that need specific cache behavior.
  • The article/blog/document is stale and hasn’t been updated to reflect changes in newer versions of WordPress.

Rather than fall into the trap of writing yet another article for whatever the “now current” version of WordPress is that will likely fall victim to one or more of the conditions listed above, I’m going to take a different approach.

I’m going to codify the CloudFront configuration, version it on GitHub, and adopt an “infrastructure-as-code” (IaC) mentality. This blog post will describe the overall architecture and provide some context, but the actual mechanics of setting up CloudFront to work with WordPress will live (and evolve!) in the IaC files themselves which will be under version control.
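To make that concrete, whichever IaC tool the repository ends up using, the deployment step looks roughly like this sketch with AWS CloudFormation (the template file and stack name below are placeholders, not the actual repo contents):

# Deploy (or update) the versioned CloudFront template from the repo.
aws cloudformation deploy \
  --template-file cloudfront-wordpress.yml \
  --stack-name wordpress-cloudfront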

Let’s do it!

The Architecture

I’ll say this up front: this architecture may not be for everyone (but I have a sneaky Continue reading

Cumulus NetQ Reinvented

When it comes to visibility into the health of your network, telemetry is all the rage these days. Even so, many customers are still plowing through old SNMP and Flow data to try to piece together what went wrong in their network, with no easy way to wind the clock back to a time before something broke your spine or leaf! Network downtime is usually costly and, for many customers large and small, can be mission critical.

With these and many other reasons in mind, I’m really excited to introduce Cumulus NetQ. With NetQ, Cumulus Networks has reinvented, from the ground up, our original NetQ product to include a long list of very useful features that are sure to make NetQ a network operator's best friend.

Figure 1: Cumulus NetQ Benefits

NetQ is a highly scalable, modern network operations toolset that provides visibility into and troubleshooting of your overlay and underlay networks in real time. NetQ delivers actionable insights and operational intelligence about the health of your network and your Linux-based data center — from the container, virtual machine, or host, all the way to the switch and port. In short, NetQ provides holistic, real-time intelligence about your modern network.

So, what Continue reading

IDG Contributor Network: Open architecture and open source – The new wave for SD-WAN?

I recently shared my thoughts about the role of open source in networking. I discussed two significant technological changes that we have witnessed. I call them waves, and these waves will redefine how we think about networking and security. The first wave signifies that networking is moving into software so that it can run on commodity off-the-shelf hardware. The second wave is the use of open source technologies, thereby removing the barriers to entry for new product innovation and rapid market access. This is especially evident in the SD-WAN market rush.

Text Files or Relational Database?

This blog post was initially sent to subscribers of my SDN and Network Automation mailing list. Subscribe here.

One of the common questions I get once the networking engineers progress from Ansible 101 to large-scale deployments (example: generating configurations for 1000 devices) is “Can Ansible use a relational database? Text files don’t scale…”

TL&DR answer: Not directly, but there are tons of database Ansible plugins or custom Jinja2 filters out there.

Read more ...

Expanding the DNS Root: Hyperlocal vs NSEC Caching

The root zone of the DNS has been the focal point of many DNS conversations for decades. One set of conversations, which is a major preoccupation of ICANN meetings, concerns what labels are contained in the root zone. A separate set of conversations concern how this root zone is served in the context of the DNS resolution protocol. In this article I'd like to look at the second topic, and, in particular, look at two proposals to augment the way the root zone is served to the DNS.
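As a rough illustration of the mechanism the NSEC-caching proposal builds on (my example, not from the article), a DNSSEC query for a name that doesn't exist in the root zone returns signed NSEC records proving that an entire span of names is absent, which a resolver can cache and reuse to answer later queries without asking the root again:

# Ask a root server for a nonexistent TLD, requesting DNSSEC records.
# The NXDOMAIN answer includes NSEC records covering the gap in the zone.
# "not-a-real-tld" is a placeholder label.
dig +dnssec @a.root-servers.net not-a-real-tld. A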

Summary of Authentication Methods in Red Hat Ansible Tower

Ansible-Blog-Tower-Authentication-Overview-Screen

Red Hat Ansible Tower 3.4.0 has added token authentication as a new authentication method, so I wanted to use this post to summarize the numerous enterprise authentication methods and the best use case for each. Ansible Tower is designed for organizations to centralize and control their automation, with a visual dashboard for out-of-the-box control and a REST API to integrate with your other tooling on a deeper level. We support a number of authentication methods to make it easy to embed Ansible Tower into existing tools and processes and to help ensure the right people can access Ansible Tower resources. For this blog post I will go over four of Ansible Tower’s authentication methods: Session, Basic, OAuth2 Token, and Single Sign-On (SSO). For each method I will provide some quick examples and links to the relevant supporting documentation, so you can easily integrate Ansible Tower into your environment.
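To give a feel for what such examples look like (a hedged sketch of my own, not taken from the post), the simplest method, Basic authentication, just sends credentials with each request to the Tower REST API; the hostname and credentials below are placeholders:

# HTTP Basic authentication against the Tower API; returns the current user.
curl -u admin:password https://tower.example.com/api/v2/me/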

1. Session Authentication

Session authentication is what’s used when logging in directly to Ansible Tower’s API or UI. It is used when a user wants to remain logged in for a prolonged period of time, not just for that HTTP request, i.e. when browsing the UI or Continue reading

Practical Simplification

Simplification is a constant theme not only here, and in my talks, but across the network engineering world right now. But what does this mean practically? Looking at a complex network, how do you begin simplifying?

The first option is to abstract, abstract again, and abstract some more. But before diving into deep abstraction, remember that abstraction is both a good and bad thing. Abstraction can reduce the amount of state in a network, and reduce the speed at which that state changes. Abstraction can cover a multitude of sins in the legacy part of the network, but abstractions also leak!!! In fact, all nontrivial abstractions leak. Following this logic through: all non-trivial abstractions leak; the more non-trivial the abstraction, the more it will leak; the more complexity an abstraction is covering, the less trivial the abstraction will be. Hence: the more complexity you are covering with an abstraction, the more it will leak.

Abstraction, then, is only one part of the solution. You must not only abstract, but you must also simplify the underlying bits of the system you are covering with the abstraction. This is a point we often miss.

Which returns us to our original question. The Continue reading

10 Years of Auditing Online Trust – What’s Changed?

Last week we released the 10th Online Trust Audit & Honor Roll, which is a comprehensive evaluation of an organization’s consumer protection, data security, and privacy practices. If you want to learn more about this year’s results, please join us for our webinar on Wednesday, 24 April, at 1PM EDT / 5PM UTC. Today, though, we thought it would be interesting to see how the Audit and results have evolved over time. Here are some quick highlights over the years:

  • 2005 – The Online Trust Alliance issued “scorecards” tracking adoption of email authentication (SPF) in Fortune 500 companies.
  • 2008 – Added DKIM tracking to the scorecards, and extended the sectors to include the US federal government, banks, and Internet retailers.
  • 2009 – Shifted from scorecard to “Audit” because criteria were expanded to include Extended Validation (EV) certificates and elements of site security (e.g., website malware).
  • 2010 – Introduced the Honor Roll concept, highlighting organizations following best practices. Only 8% made the Honor Roll.
  • 2012 – Expanded criteria to include DMARC, Qualys SSL Labs website assessment, and scoring of privacy statements and trackers. Shifted overall sector focus to consumer-facing organizations, so dropped the Fortune 500 and added Continue reading

Network Break 231: CloudGenix Loads Up On VC Cash For SD-WAN Fight; Apple Settles With Qualcomm

Today's Network Break examines a $65 million investment in CloudGenix and what it says about the SD-WAN market, a new cloud-based packet processor from Nubeva, why Apple settled with Qualcomm just before trial, and more tech news.

The post Network Break 231: CloudGenix Loads Up On VC Cash For SD-WAN Fight; Apple Settles With Qualcomm appeared first on Packet Pushers.