Archive

Category Archives for "Systems"

Revisiting X.509 Certificates in Kubeconfig Files

In 2018, I wrote an article on examining X.509 certificates embedded in Kubeconfig files. In that article, I showed one way of extracting client certificate data from a Kubeconfig file and looking at the properties of the client certificate data. While there’s nothing technically wrong with that article, since then I’ve found another tool that makes the process a tad easier. In this post, I’ll revisit the topic of examining embedded X.509v3 certificates in Kubeconfig files.

The tool that I’ve found is yq, which is an incredibly useful tool when it comes to parsing YAML (much in the same way that jq is an incredibly useful tool when it comes to parsing JSON). I should probably write some sort of introductory post on yq.

In any case, you can use yq to replace the grep plus awk combo outlined in my earlier article on examining certificate data in Kubeconfig files. Instead, to pull out only the client certificate data, just use this yq command (you did know that Kubeconfig files are YAML, right?):

yq '.users[0].user.client-certificate-data' < ~/.kube/config

(Of course, this command assumes your Kubeconfig file is named config in the ~/.kube Continue reading
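
For reference, the path expression in that yq command mirrors the layout of a typical Kubeconfig file, which (with the Base64 values truncated) looks roughly like this generic sketch:

apiVersion: v1
kind: Config
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTi...
    client-key-data: LS0tLS1CRUdJTi...

The client-certificate-data value is Base64-encoded PEM data, so decoding it (for example, with base64 -d) is the first step before inspecting the certificate with something like openssl x509 -text.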

Posts from the Past, August 2022

I thought I might start highlighting some older posts here on the site through a semi-regular “Posts from the Past” series. I’ll start with posts published in the month of August through the years. Here’s hoping you find something that is useful (or perhaps entertaining, at least)!

August 2021

Last year, I had a couple of posts that I think are still relevant today. First, I talked about using Pulumi with Go to create a VPC Peering relationship on AWS. Second, I showed readers how to have Wireguard interfaces start automatically (using launchd) on macOS.

August 2020

I didn’t write too much in August 2020; my wife and I took a big road trip around the US to visit family and such. However, I did publish a post on some behavior changes in version 0.5.5 of the Cluster API tool clusterawsadm.

August 2019

This was a busy month for the blog! In addition to two Technology Short Takes, I also published posts on converting Kubernetes to an HA control plane, reconstructing the kubeadm join command (in the event you didn’t write down the output of kubeadm init), and one introducing Cluster API.

August 2018

In Continue reading

Site Category Changes

This weekend I made a couple of small changes to the categories on the site, in an effort to make navigation a bit more intuitive. In the past, readers had expressed some confusion over the “Education” and “Explanation” categories, and—to be frank—their confusion was warranted. I also wasn’t clear on the distinction between those categories, so this post explains the changes I’ve made.

The following category changes are now in effect on the site:

  • First, the “Education” category has been completely removed. I try to make almost everything on this site educational in nature, so why have an “Education” category? This really only affects you if you’d subscribed to that category’s RSS feed. (Did you know that every category and every tag has its own RSS feed?)
  • A lot of the content from the “Education” category has been moved into the “Explanation” category. This is the category that will contain posts where I provide some level of explanation around a concept, technology, product, or project.
  • The “Tutorial” category also picked up some new articles from the now-eliminated “Education” category. The “Tutorial” category contains walkthroughs or step-by-step instructions for doing something. There’s most likely going to be explanation along the Continue reading

Peeling back the layers and understanding automation mesh

Red Hat Ansible Automation Platform 2 features an awesome new way to scale out your automation workloads: automation mesh.  If you are unfamiliar with automation mesh, I highly recommend reading Craig Brandt’s blog post What's new: an introduction to automation mesh which outlines how automation mesh can simplify your operations and scale your automation globally.  For this blog post, I want to focus on the technical implementation of automation mesh, what network ports it is using and how you can secure it. 

To quickly summarize both Craig’s blog post and our documentation, we separated the control plane (which includes the web UI and API) from the execution plane (where an Ansible Playbook is executed) in Ansible Automation Platform 2. This allows you to choose where jobs run across execution nodes, so you can deliver and run automation closer to the devices that need it. In our implementation, there are four different types of nodes:

  • Control plane nodes: These are accessed via the web UI or the API. Execution capabilities are disabled on these nodes.
  • Execution nodes: This is where Ansible Playbooks are actually executed.  This node will run an automation execution environment which in Continue reading
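
The list above is cut off in this excerpt, but to give a feel for what sits underneath: the nodes in an automation mesh are linked together by Receptor, and each node carries a small YAML configuration describing its identity, its peers, and the work it accepts. The snippet below is only a rough, illustrative sketch (the node names are made up, and 27199 is shown simply as a commonly referenced mesh listener port); consult the Receptor and Ansible Automation Platform documentation for the authoritative format and options:

# receptor.conf on a hypothetical execution node (illustrative only)
- node:
    id: exec-node-01
- log-level: info
- tcp-peer:
    address: control-node-01:27199
- work-command:
    worktype: ansible-runner
    command: ansible-runner
    params: worker
    allowruntimeparams: true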

Using Default AWS Resources with Pulumi

Per the AWS documentation (although I’m sure there are exceptions), when you start using AWS you are given some automatically-created resources: a default VPC that contains public subnets in each availability zone in the region along with an Internet gateway and settings to enable DNS resolution. Most of the infrastructure-as-code tutorials that I’ve seen start with creating a VPC and subnets and gateway, but what if you wanted to use these default resources instead? I wasn’t really able to find a good walkthrough on how to do this, so this post provides some sample Go code you can use with Pulumi to identify these default AWS resources and use them.

I’ll approach this from the perspective of wanting to launch an EC2 instance in the default infrastructure that AWS provides for you in a region. To launch an EC2 instance using Pulumi (and most other infrastructure-as-code tools), there are several pieces of information you need:

  1. An AMI ID
  2. The instance type
  3. The name of an SSH keypair that’s been uploaded to/created in AWS
  4. A subnet ID
  5. A security group ID

The first three are probably things you’ll want to parameterize (i.e., make it possible for you to pass Continue reading

Using Red Hat Ansible Automation Platform to Enable a Policy as Code Solution


Nelson, the director of operations of a large manufacturing company, told me that he has a highly leveraged staff. That is, most of the work on the company's critical cloud project is being done by consultants and vendors, and not by their team members. Unfortunately, much of the staff’s time is spent making sure that the infrastructure as code (IaC) implementation is in compliance with the standards and policies that his company has for cloud resources. Nelson said, “Oftentimes the code works, but doesn’t conform to naming conventions, or is missing labels, or has security issues, which impacts downstream workflows and applications. We need the environment to conform to our policies, and I don’t want my staff burning cycles to ensure that this is the case.” This was the reason why he brought me in to run a proof of concept (POC). The POC would validate what would become a Policy as Code solution based on one of the common IaC products.  

When the technical team and I reviewed the proof points for the POC, which amounted to a standard demonstration of Policy as Code capabilities, it was determined that Red Hat Ansible Automation Platform would satisfy all Continue reading

Technology Short Take 157

Welcome to Technology Short Take 157! I hope that this collection of links I’ve gathered is useful to someone out there. In particular, the “Career/Soft Skills” section is a bit bigger than usual this time around, as is the “Security” section.

Networking

  • Interested in understanding how NAT Traversal works? David Anderson’s post on how NAT Traversal works should help.
  • This happened a couple of months ago, but I don’t think I’ve linked to it in a Technology Short Take: the Envoy Proxy open source project announced Envoy Gateway, a “new member of the Envoy Proxy family aimed at significantly decreasing the barrier to entry when using Envoy for API Gateway (sometimes known as ’north-south’) use cases”.
  • This is a slightly older article from Ivan Pepelnjak on using netsim-tools to build Vagrant boxes, but let’s be real—his stuff is kind of timeless anyway, right?

Security

What’s New: Cloud Automation with amazon.aws 4.0.0

When it comes to Amazon Web Services (AWS) infrastructure automation, the latest release of the amazon.aws Collection brings a number of enhancements to improve the overall user experience and speed up the process from development to production.

This blog post goes through changes and highlights on what’s new in the 4.0.0 release of this Ansible Content Collection.

 

Forward-looking Changes

With the recent release, we have included numerous bug fixes and features that further solidify the amazon.aws Collection. Let's go through some of them!

 

New Features Highlights

Some of the new features available in this Collection release are listed below.

 

EC2 Subnets in AWS Outposts

AWS Outposts is a fully managed service that extends AWS infrastructure to on-premises locations for workloads that need low latency or local data processing. EC2 subnets can be created on AWS Outposts by specifying the Amazon Resource Name (ARN) of the AWS Outpost during subnet creation.

The new outpost_arn option of the ec2_vpc_subnet module allows you to do that.

- name: Create an EC2 subnet on an AWS Outpost
  amazon.aws.ec2_vpc_subnet:
    state: present
    vpc_id: vpc-123456
    cidr: 10.1.100.0/24
    outpost_arn: "{{ outpost_arn }}"
    tags:
      "Environment": "production"

 

New EC2 Instance Continue reading

When localhost isn’t what it seems in Red Hat Ansible Automation Platform 2


With Red Hat Ansible Automation Platform 2 and the advent of automation execution environments, some behaviors are now different. This blog explains the use case around using localhost and options for sharing data and persistent data storage for VM-based Ansible Automation Platform 2 deployments.

With Ansible Automation Platform 2  and its containerised execution environments, the concept of localhost has altered. Before Ansible Automation Platform 2, you could run a job against localhost, which translated into running on the underlying tower host. You could use this to store data and persistent artifacts, although this was not always a good idea or best practice.

Now with Ansible Automation Platform 2, localhost means you’re running inside a container, which is ephemeral in nature. This means we must do things differently to achieve the same goal. If you consider this a backwards move, think again. In fact, localhost is no longer tied to a particular host, and with portable execution environments, it can run anywhere, with the right environment and software prerequisites already embedded in the execution environment container.
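
As a quick illustration of the point (a minimal sketch, not taken from the post), running a trivial job against localhost from automation controller reports the hostname of the ephemeral execution environment container, not the hostname of the underlying controller node:

- name: Demonstrate where localhost really runs
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Capture the hostname seen inside the execution environment
      ansible.builtin.command: hostname
      register: result
      changed_when: false

    - name: Print the container hostname
      ansible.builtin.debug:
        var: result.stdout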

So, if we now have a temporal runtime container and we want to use existing data or persistent data, Continue reading

Terraforming clouds with Ansible

 

The wheel was invented in the 4th millennium BC. Back then, I am sure the wheel was the hottest thing on the block, and only the most popular Neolithic cool cats had wheels. Fast forward to the present day, and we can all agree that the wheel is nothing really to write home about. It is part of our daily lives. The wheel is not sexy. If we want the wheel to become sexy again, we just need to slap a sports car together with all the latest gadgets and flux capacitors in a nice Ansible red, and voilà! We have something we want to talk about.

Like the sports car, Red Hat Ansible Automation Platform has the same ability to turn existing resources into something a bit more intriguing. It can enhance toolsets and extend them further into an automation workflow. 

Let's take Terraform. Terraform is a tool used often for infrastructure-as-code. It is a great tool to use when provisioning infrastructure in a repeatable way across multiple large public cloud providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Many organizations use Terraform for quick infrastructure provisioning every Continue reading

Network Programmability and Automation, Second Edition

In late 2015, I was lucky enough to be part of a small crew of authors who launched a new book project targeting “next-generation network engineering skills.” That book, Network Programmability and Automation, was published by O’Reilly and has garnered praise and accolades for tackling head-on the topics that network engineers should consider mastering as the field of network engineering continues to grow and evolve. I was excited about that announcement, and I’m even more excited to announce that the early release of the second edition of Network Programmability and Automation is now available!

2nd edition book cover

The original team of authors—Jason Edelman, Matt Oswalt, and myself—are joined this time around by Christian Adell. Christian works with Jason at Network to Code, and it has been a tremendous pleasure to get to know Christian (a little bit, at least!) as part of this project so far. I am impressed with his knowledge and experience, and I think it really adds to the book. Jason and Matt, of course, need no introductions; they are both industry leaders and are well-known in the network automation space.

Check out Jason and Christian’s announcement blog post here.

I am, once again, humbled and honored Continue reading

Taking Automation to the Next Level: Using Ansible + GitOps to Manage the Lifecycle of a Containerized Application


One of the great advantages of combining GitOps with Ansible is that you get to streamline the automation delivery and the lifecycle of a containerized application.

With the abilities of GitOps we get to:

  • Standardize configurations of our applications.
  • Inherit the benefits of version control of our configurations.
  • Easily track changes to configuration settings, making it easier to fix issues.
  • Have one source of truth for our applications.

Combine the above with Ansible and you have everything you need to accomplish configuration consistency for a containerized app anywhere that you automate. 

That leads us to, “how do we combine Ansible and GitOps to manage the lifecycle of a containerized application?”

Simple. By creating an Ansible workflow that is associated with a Git webhook that is part of my application’s repository.

What is a Git webhook you ask?

Git webhooks are defined as a method to deliver notifications to an external web server whenever certain actions occur on a repository.

For example, when a repository is updated, this could trigger an event that kicks off CI builds, deploys an environment, or, in our case, modifies the configuration of our containerized application. 

A webhook provides the ability to execute specified Continue reading

What’s new: network automation with ansible.netcommon 3.0.0


With the recent release of the ansible.netcommon Collection version 3.0.0, we have promoted two features as standard: libssh transport and import_modules. These features provide increased performance and resiliency for your network automation. These two features, which have been available since July 2020 (libssh) and March 2021 (import_modules), are now turned on by default for playbooks running the latest version of the netcommon Collection, so let's take a look at what makes these changes so exciting!

 

The road to libssh

Libssh support was formally announced in November 2020 for FIPS mode compatibility and speed. This blog goes into great detail about why we started this change, and it's worth a read if you want to know more about how we use paramiko or libssh in the network_cli connection plugin. I'm going to try not to rehash everything from that post, but I do want to take a little time to revisit security and speed to show what libssh brings to the experience of using Ansible with network devices.
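
For context, even with libssh as the new default, the transport used by the network_cli connection plugin can still be selected explicitly. A minimal sketch of inventory group variables (the group and platform names are just examples):

# group_vars/routers.yml (illustrative)
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: cisco.ios.ios
ansible_network_cli_ssh_type: libssh   # set to "paramiko" to fall back to the older transport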

 

FIPS Mode

One of the earliest issues we identified with paramiko, our earlier SSH transport plugin, is that it was not FIPS 140 compliant. This meant that environments Continue reading

Technology Short Take 156

Welcome to Technology Short Take #156! It’s been about a month since the last Technology Short Take, and in that time I’ve been gathering links that I wanted to share with my readers. (I still have quite the backlog of links to read!) Hopefully something I share here will prove useful to someone. Enjoy the links below, and enjoy your weekend!

Networking

  • I’d never heard of Pipy before seeing it in this article, but it looks like it could be quite useful for a number of use cases.
  • William Morgan, one of the creators of Linkerd, has a lengthy treatise on eBPF, sidecars, and the future of the service mesh. As a (relative) layperson—meaning I’m not an eBPF expert—I don’t know if I should believe the eBPF cheerleaders (some of whom I know personally and are familiar with their technical expertise) or folks like William who have clearly “been there, done that” with service mesh. I certainly think there’s a place for eBPF in service meshes, but I’m not yet on board with sidecar-less service meshes (or per-node proxy models).

Security

  • BPFDoor, as it is known, is a passive backdoor that allows threat actors to remotely connect Continue reading

Digitally signing Ansible Content Collections using private automation hub


Red Hat Ansible Automation Platform can manage and execute automation from many different origins: Red Hat product teams, ISV partners, and community and private contributors.

Here is a typical makeup of an automation play that is launched from automation controller:

  1. Automation controller executes a job template, which runs a playbook.
  2. Automation controller runs the playbook inside an automation execution environment.
  3. The automation execution environment is built using the execution environment builder (the ansible-builder tool); a minimal example of its definition file follows this list.
  4. When ansible-builder creates the execution environment, it includes dependencies.
  5. The dependencies are Ansible Content Collections and their requirements.
  6. Collections and their dependencies can be private, community-based, or supplied by Red Hat or its ISV partners.
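
To make steps 3 through 5 a bit more concrete, here is the kind of minimal execution environment definition ansible-builder consumes (the file names and the single Collection listed are purely illustrative):

# execution-environment.yml (consumed by ansible-builder)
---
version: 1
dependencies:
  galaxy: requirements.yml   # Ansible Content Collections to include
  python: requirements.txt   # Python requirements for those Collections
  system: bindep.txt         # system-level packages

# requirements.yml (the Collections pulled into the image)
collections:
  - name: amazon.aws
    version: 4.0.0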

Previously, there was no way to verify that a Collection downloaded from either Ansible automation hub (console.redhat.com) or private automation hub was developed and released by its original Collection maintainer. This is a potential security issue and breaks the supply chain from creator to consumer.

Providing security-focused features in Ansible Automation Platform 2 continues to be a priority, to enable the execution of certified and supported automation anywhere in your hybrid cloud environment. New in Ansible Automation Platform 2.2  is Continue reading

Scaling Automation Controller for API Driven Workloads


When scaling automation controller in an enterprise organization, administrators are faced with more clients automating their interactions with its REST API. As with any web application, automation controller has a finite capacity to serve web requests, and web clients can experience degraded service if that capacity is met or exceeded.

In this blog, we will explore methods to:

  1. Increase the number of web requests a Red Hat Ansible Automation Platform cluster can serve.
  2. Implement best practices on the client side to reduce the load on the automation controller API and improve performance and uptime.

We will use automation controller 4.2 in our examples, but many of the best practices  and solutions described in this blog apply to most versions, including Ansible Tower 3.8.z.

 

Use cases that cause high-volume API requests

In this section, we will outline some of the use cases that can drive a high volume of API requests. In the recommendations section, we will address options to improve the quality of service at the client, load balancer, and controller levels.

 

External inventory management

In some use cases, organizations maintain their inventory in an external system. This practice can lead to a pattern Continue reading

Red Hat Ansible Automation Platform on Microsoft Azure – Network Access – blog #2

Thank you to Hicham Mourad and Scott Harwell for co-authoring this blog.

Introduction

In this blog series, we will continue discussing the deployment of Red Hat Ansible Automation Platform on Microsoft Azure.

The first blog covered the deployment process as well as how to access a Red Hat Ansible Automation Platform on Azure deployment that uses the “Public” access option.

In this blog, we’ll cover how to access the managed application when it’s deployed using the “Private” access option.

 

Connecting to Red Hat Ansible Automation Platform on Microsoft Azure

There are three ways you can access Red Hat Ansible Automation Platform on Azure if you selected “Private” access.

  • An Azure hosted virtual machine (VM)
  • Azure VPN or ExpressRoute
  • SSH Tunnel

Let’s assume that you have already configured network peering between the Red Hat Ansible Automation Platform on Azure deployment’s virtual network and your existing Azure Virtual Networks. Network peering is an Azure feature for connecting two or more networks on Azure so that traffic can be routed to resources across those networks. See the Microsoft Azure documentation for more information on network peering types.

 

Access Details

Regardless of whether you selected public or private Continue reading

Red Hat Ansible Automation Platform on Microsoft Azure – Network Access – blog #1

Introduction

In this blog series we will discuss the deployment of Red Hat Ansible Automation Platform on Microsoft Azure, specifically focusing on the deployment access types and what that means for accessing Red Hat Ansible Automation Platform on Azure after deployment completion.

 

Deployment Access Types

First let’s start by discussing the different deployment access types.

During deployment, Red Hat Ansible Automation Platform on Azure will present an option called “Access” that determines how you will access the user interfaces.

 

Access Selection at Deploy Time

Deployment Type: Public

Details: Public deployments allow ingress to the user interfaces over the public internet. Upon deployment, a domain name is issued to the Red Hat Ansible Automation Platform on Azure instance, and users will be able to navigate to the domain to log in. This is the easiest approach to deploy because there is no configuration required to access Red Hat Ansible Automation Platform on Azure.


Public Access Architecture Diagram below

 

Public Access Architecture Diagram

Deployment Type: Private

Details: Private deployments omit access from the public internet. When deployed, Red Hat Ansible Automation Platform on Azure will reside in an isolated Azure VNET with no access from external sources or even other Continue reading

Pump-Up your ITIL with Automation


In the world of automation and agility, it seems that Information Technology Infrastructure Library (ITIL) doesn’t have a role to play anymore, being marked as an “old school” framework. Can it be the end of the methodology after it served numerous IT organizations for so long as a guideline and blueprint for their processes?

This series of articles shows how automation, and more specifically Red Hat Ansible Automation Platform and the principles of Infrastructure as Code (IaC), can help bring some of the ITIL topics into the agile and automated bliss:

  • Configuration management
  • Change and release management
  • Incident and problem management

So let’s step into the topic of configuration management and what everybody still knows as the CMDB (Configuration Management Database), even though ITIL has long since renamed it the CMS (Configuration Management System). This name change was meant to highlight the fact that the function can be fulfilled by a combination of multiple databases and tools, but that distinction won’t matter here, so we’ll stick to the infamous CMDB term.

Do you love your CMDB? Probably not, according to my experience with numerous customers. The data is generally outdated and wrong, considered useless, which means that its maintenance is considered a Continue reading

8 private automation hub features about automation execution environments


Red Hat Ansible Automation Platform 2.1 introduced automation execution environments, a new way to package automation into a container runtime environment. Private automation hub also joined the party by adding significant support for execution environments. 

Let's dive into those features:

 

Feature 1 - The registry

Private automation hub now ships with the pulp container registry. This means it can store and serve up container images. 

We only support the Ansible private automation hub registry serving execution environment images.

 

Feature 2 - Remote registries

The Ansible private automation hub user interface allows the administrator to define remote registries. This allows the registry to mirror container images from their source. A good example is adding a remote registry for the base execution environment images available from Red Hat.

To access the Red Hat registry, visit registry.redhat.io and use the same username and password that you use for access.redhat.com.

 

Upon adding the registry, you will see a new remote registry definition.

 

Feature 3 - Indexing a remote registry

This capability is available after you have added a remote registry as per Feature 2; click the menu on the registry Continue reading
