When It Comes to Security Architecture, Edge Is Where It’s At
There are billions of reasons why network security needs to be pushed to the edge, and Netskope is...
Recently, while troubleshooting a separate issue, I needed to get more information about the token used by Kubernetes Service Accounts. In this post, I'll share a quick command line that can fully decode a Service Account token.
Service Account tokens are stored as Secrets in the “kube-system” namespace of a Kubernetes cluster. To retrieve just the token portion of the Secret, use -o jsonpath like this (replace “sa-token” with the appropriate name for your environment):
kubectl -n kube-system get secret sa-token \
-o jsonpath='{.data.token}'
The output is Base64-encoded, so pipe it through base64 to decode it:
kubectl -n kube-system get secret sa-token \
-o jsonpath='{.data.token}' | base64 --decode
The result is a JSON Web Token (JWT). You could use the JWT web site to decode the token, but given that I'm a fan of the CLI, I decided to use this JWT CLI utility instead:
kubectl -n kube-system get secret sa-token \
-o jsonpath='{.data.token}' | base64 --decode | \
jwt decode -
The final -, for those who may not be familiar, is the syntax to tell the jwt utility to look at STDIN for the JWT it needs to Continue reading
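If the jwt utility isn't handy, the payload portion of a JWT can also be decoded with a few lines of Python. This is a minimal sketch that only Base64url-decodes the payload; it does not verify the signature, and the token below is a made-up example, not a real Service Account token:

```python
import base64
import json


def decode_jwt_payload(token: str) -> dict:
    """Decode the payload (second dot-separated segment) of a JWT.

    Note: this does NOT verify the token's signature.
    """
    payload_b64 = token.split(".")[1]
    # JWTs use unpadded base64url; restore the padding before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))


# A made-up example token (header.payload.signature) for illustration only.
example = (
    "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9."
    + base64.urlsafe_b64encode(
        json.dumps({"iss": "kubernetes/serviceaccount"}).encode()
    ).rstrip(b"=").decode()
    + ".signature"
)

print(decode_jwt_payload(example)["iss"])  # kubernetes/serviceaccount
```

In practice you'd pipe the decoded Secret data into a script like this instead of constructing a token by hand.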
The first public beta of its open source SD-WAN platform was released alongside the announcement of...
We all know what OpenWrt is: the amazing Linux distro built specifically for embedded devices.
What you can achieve with a rather cheap router running OpenWrt is mind-boggling.
OpenWrt also gives you great control over its build system. For normal cases, you probably don’t need to build OpenWrt from source yourself. That has already been done for you, and all you need to do is download the appropriate compiled firmware image and then upload it to your router1.
But for more advanced usages, you may find yourself needing to build OpenWrt images yourself. This could be due to wanting to make some changes to the code, add some device-specific options, etc.
Building OpenWrt from source is easy, well-documented, and works great. That is, until you start using opkg to install some new packages.
opkg will by default fetch new packages from the official repository (as one might expect), but depending on the package, the installation may fail — most commonly because the kernel in a self-built image doesn’t match the kernel the official packages were built against.
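For reference, the official feeds opkg pulls from are listed in /etc/opkg/distfeeds.conf on the device; the entries look roughly like this (the release, target, and architecture below are placeholders that vary per build):

```
src/gz openwrt_core https://downloads.openwrt.org/releases/19.07.3/targets/ath79/generic/packages
src/gz openwrt_base https://downloads.openwrt.org/releases/19.07.3/packages/mips_24kc/base
src/gz openwrt_packages https://downloads.openwrt.org/releases/19.07.3/packages/mips_24kc/packages
```

A self-built image still points at these official feeds, which is why kernel-dependent packages (the kmod-* family in particular) fetched from them can refuse to install on an image built from source.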

In this post, I’m going to walk you through how to add a name (specifically, a Subject Alternative Name) to the TLS certificate used by the Kubernetes API server. Updating the certificate to include a name that wasn’t originally present is useful in a few scenarios: adding a load balancer in front of the control plane, for example, or accessing the API server via a new or different URL/hostname (both situations arising after the cluster was bootstrapped).
This process does assume that the cluster was bootstrapped using kubeadm. This could’ve been a simple kubeadm init with no customization, or it could’ve used a configuration file to modify the behavior of kubeadm when bootstrapping the cluster. This process also assumes your Kubernetes cluster is using the default certificate authority (CA) created by kubeadm when bootstrapping a cluster. Finally, this process assumes you are using a non-HA (single control plane node) configuration.
Before getting into the details of how to update the certificate, I’d like to first provide a bit of background on why this is important.
The Kubernetes API server uses digital certificates to both Continue reading
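For readers who want the short version: with kubeadm, the extra names are typically supplied via the apiServer.certSANs field of a kubeadm configuration file. This is a minimal sketch, where the DNS name and IP address are placeholders for your environment:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs:
  - "lb.example.com"   # placeholder: load balancer DNS name
  - "192.0.2.10"       # placeholder: load balancer virtual IP
```

After moving the existing apiserver.crt and apiserver.key out of /etc/kubernetes/pki, running `kubeadm init phase certs apiserver --config=kubeadm.yaml` regenerates the API server certificate with the additional SANs included.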
Starting this week, we will do a series of four blog posts on the intersection of Spark with Kubernetes. The first blog post will delve into the reasons why both platforms should be integrated. The second will deep-dive into Spark/K8s integration. The third will discuss use cases for Serverless and Big Data Analytics. The last post will round off with insights on best practices.
Most Cloud Native Architectures are designed in response to Digital Business initiatives, where it is important to personalize and to track minute customer interactions. The main components of a Cloud Native Platform inevitably leverage a microservices-based design. At the same time, Big Data architectures based on Apache Spark have been implemented at thousands of enterprises; they support multiple data ingest capabilities (real-time, streaming, and interactive SQL) and perform many kinds of data processing (batch, analytical, in-memory, and graph-based), while also providing search, messaging, and governance capabilities.
The RDBMS has been a fixture of the monolithic application architecture. Cloud Native applications, however, need to work with loosely structured data formats as well as regularly structured data. This implies the need to support data streams that are Continue reading
The topic of testing in continuous integration pipelines is something we at Cumulus discuss almost daily, whether internally or with customers. While our approach mainly centers around doing this type of testing in a virtual simulated environment, the moment I heard about a project called Batfish taking a different approach to testing, it had my attention. Better yet, once Batfish announced initial support for Cumulus earlier this year, there were no excuses left not to start digging in and understanding how it can fit into pipelines and replace or complement existing testing strategies.
While there are various testing frameworks out there that help in building and organizing an approach to testing changes, the ugly truth is that the majority of this process occurs after a change has actually been pushed to a device. Techniques like linting provide some level of aid in the mostly empty pre-change testing area, but control and data plane validation checks are forced to occur after a change has been pushed, when it’s generally “too late”. Even though there’s no argument that some testing is better than none, the pre-change test area is desperate for any type of visibility Continue reading