Note: This blog refers to Red Hat Insights as used with Ansible Automation Platform 2.1. Automation controller is the control plane for Ansible Automation Platform, formerly known as Red Hat Ansible Tower.
An indispensable but sometimes overlooked tool included with an Ansible Automation Platform subscription is the cloud-based service, Red Hat Insights for Ansible Automation Platform.
Insights is a suite of reporting and analytics tools to help you identify, troubleshoot, and resolve operational, business, and security issues across your entire ecosystem. You can also use Insights to track the ROI of your automation investment and plan future automation projects to prioritize your efforts where they will have the biggest impact on your business.
Before you can start using Insights to better understand your automation estate and make data-driven decisions, you need to set up the flow of information from your enterprise into the Red Hat Hybrid Cloud Console.
In order to turn on Insights data collection, you’ll need:
The ansible.utils collection provides a variety of plugins we can use for operational state assessment of network devices. I gave an overview of the ansible.utils collection in part one of this two-part blog series. If you have not read part one, I recommend you do so, since this post builds on that information. Here we will look at operational state assessment as an example use case for the ansible.utils collection.
In general, a state assessment workflow has the following steps:
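As a rough sketch of what such a workflow can look like in a playbook (the host group, the cisco.ios.ios_facts module, and the criteria file are illustrative assumptions, not taken from the original series), the pattern is to gather structured state and then validate it against expected criteria with ansible.utils.validate:

---
# Hedged sketch: gather structured interface facts, then validate them
# against expected criteria using the ansible.utils.validate module with
# the jsonschema criteria engine. Host group and criteria file are placeholders.
- name: Operational state assessment (sketch)
  hosts: routers
  gather_facts: false
  tasks:
    - name: Gather structured interface facts
      cisco.ios.ios_facts:
        gather_network_resources:
          - interfaces
      register: net_facts

    - name: Validate the gathered state against expected criteria
      ansible.utils.validate:
        data: "{{ net_facts.ansible_facts.ansible_network_resources.interfaces }}"
        criteria: "{{ lookup('file', 'criteria/interfaces.json') | from_json }}"
        engine: ansible.utils.jsonschema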
The Ansible ansible.utils collection includes a variety of plugins that aid in the management, manipulation and visibility of data for the Ansible playbook developer. The most common use case for this collection is when you want to work with the complex data structures present in an Ansible playbook, inventory, or returned from modules. See each plugin documentation page for detailed examples for how these utilities can be used in tasks. In this two-part blog we will overview this collection in part one and see an example use case of using the utils collection in detail in part two.
Plugins are pieces of code that augment Ansible core functionality. This code executes on the control node and provides options and extensions for the core features of Red Hat Ansible Automation Platform. The ansible.utils collection includes:
Filter plugins manipulate data. With the right filter you can extract a particular value, transform data types and formats, perform mathematical calculations, split and concatenate strings, insert dates and times, and do much more. Ansible Automation Platform uses the standard filters shipped with Jinja2 and adds some specialized filter Continue reading
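For instance (a minimal sketch with made-up data, not an example taken from the documentation), the ansible.utils.get_path filter can pull a single value out of a nested data structure:

---
# Minimal sketch: extract one value from nested data using the
# ansible.utils.get_path filter. The device_data variable is made up.
- name: Filter plugin example (sketch)
  hosts: localhost
  gather_facts: false
  vars:
    device_data:
      system:
        hostname: edge-router-01
        version: "17.3"
  tasks:
    - name: Pull the software version out of the nested structure
      ansible.builtin.debug:
        msg: "{{ device_data | ansible.utils.get_path('system.version') }}"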
Seven years ago, I wrote a quick post on bootstrapping servers into Ansible. The basic gist of the post was that you can use variables on the Ansible command-line to specify hosts that aren’t part of your inventory or log in via a different user (useful if the host doesn’t yet have a dedicated Ansible user account because you want to use Ansible to create that account). Recently, though, I encountered a situation where this approach doesn’t work, and in this post I’ll describe the workaround.
In one of the Slack communities I frequent, someone asked about using the approach described in the original blog post. However, they were having issues connecting. Specifically, this error was cropping up in the Ansible output (names have been changed to protect the innocent):
fatal: [new-server.int.domain.test]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: [email protected]: Permission denied (publickey,password).", "unreachable": true}
Now, this is odd, because the Ansible command-line being executed included the parameters I mentioned in the original blog post:
ansible-playbook bootstrap.yml -i inventory/hosts -K --extra-vars "hosts=new-server.int.domain.test user=john"
For some reason, though, it was ignoring that parameter and Continue reading
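For reference, the pattern from the original post relies on a play header roughly like the following (a sketch; the hosts and user variable names match the command line above, and the task shown is only a placeholder):

---
# Sketch of a bootstrap play that takes its target hosts and login user
# from --extra-vars, as in the command above. The task body is a placeholder.
- name: Bootstrap a new server into Ansible management
  hosts: "{{ hosts }}"
  remote_user: "{{ user }}"
  become: true
  tasks:
    - name: Ensure a dedicated Ansible service account exists
      ansible.builtin.user:
        name: ansible
        state: present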
Welcome to Technology Short Take #151, the first Technology Short Take of 2022. I hope everyone had a great holiday season and that 2022 is off to a wonderful start! I have a few more links than normal this time around, although I didn’t find articles in a couple categories. Don’t worry—I’ll keep my eyes peeled and my RSS reader ready to pull in new articles in those categories for next time. And now for the content!
It seems there are lots of tutorials on setting up a PKI (public key infrastructure) using HashiCorp Vault. What I’ve found missing from most of these tutorials, however, is how to get details on certificates issued by a Vault-driven PKI after the initial creation. For example, someone other than you issued a certificate, but now you need to get the details for said certificate. How is that done? In this post, I’ll show you a couple ways to get details on certificates issued and stored in HashiCorp Vault.
For the commands and API calls I’ve shared below, I’m using “pki” as the name/path you (or someone else) assigned to a PKI secrets engine within Vault. If you’re using a different name/path, then be sure to substitute the correct name/path as appropriate.
To use the Vault CLI to see the list of certificates issued by Vault, you can use this command:
vault list pki/certs
This will return a list of the serial numbers of the certificates issued by this PKI. Looking at just serial numbers isn’t terribly helpful, though. To get more details, you first need to read the certificate details (note singular “cert” here versus plural “certs” in the previous Continue reading
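As a hedged sketch of that next step (the serial number shown is only a placeholder), you can read a single certificate by its serial number and pipe the PEM into openssl to see the full details:

vault read -field=certificate pki/cert/1a:2b:3c:4d | openssl x509 -noout -text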
I am by no means a developer (not by a long shot!), but I have been learning lots of development-related things over the last several years and trying to incorporate those into my workflows. One of these is the idea of test-driven development (see Wikipedia for a definition and some additional information), in which one writes tests to validate functionality before writing the code to implement said functionality (pardon the paraphrasing). In this post, I’ll discuss how to use conftest to (loosely) implement test-driven development for Kustomize overlays.
If you’re unfamiliar with Kustomize, then this introductory article I wrote will probably be useful.
For the discussion around using the principles of test-driven development for Kustomize overlays, I’ll pull in a recent post I did on creating reusable YAML for installing Kuma. In that post, I pointed out four changes that needed to be made to the output of kumactl install control-plane to make it reusable, among them changes involving the caBundle value for all webhooks.
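As a rough sketch of how that testing loop can work (the overlay path and policy directory here are hypothetical, not from the original post), the rendered overlay is piped into conftest for evaluation against Rego policies:

kustomize build overlays/kuma | conftest test --policy policy/ -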
Welcome to Technology Short Take #150! This is the last Technology Short Take of 2021, so hopefully I’ll close the year out “with a bang” with this collection of links and articles on various technology areas. Bring on the content!
About six months ago I purchased an OWC Thunderbolt 3 Dock to replace my Anker PowerElite Thunderbolt 3 Dock (see my review here). While there was nothing necessarily wrong with the Anker PowerElite, it lacked a digital audio port that I could use to send audio to a soundbar positioned under my monitor. (I’d grown accustomed to using a soundbar when my 2012 Mac Pro was my primary workstation.) In this post, I’ll provide a brief review of the OWC Thunderbolt 3 Dock.
Note that I’m posting this as a customer. I paid for the dock with my own money, and I have not received any compensation of any kind from anyone for this review.
The OWC Thunderbolt 3 Dock feels well-built, but is larger than the Anker PowerElite. To be frank, I think I prefer the smaller footprint of the Anker PowerElite, but the added ports available on the OWC Thunderbolt Dock sealed the deal for me. Your priorities may be different, of course.
As one might expect, setup was truly “plug-and-play.” I connected all my peripherals to the dock—see below for the list of what I use on a regular basis—and then plugged Continue reading
Year-end recaps have a way of encapsulating the most significant topics of the year, ones that sparked a curiosity to learn more, excitement to incorporate into our work, and inspiration to put our core takeaways into practice. As a newer member of the Ansible Automation Platform team, I’m always interested to learn which blogs resonate with our customers. To that end, we’re sharing our top five most read blogs so you can catch up on what you missed as well as gain some insight into what your peers are reading. For our Ansible blog aficionados, we welcome you to read alongside us for a refresher of what was most meaningful to your work this year.
A common thread running through these posts: Red Hat Ansible Certified Content Collections. As you look to expand your automation in 2022, remember that there are over 100 Certified Content Collections from more than 40 partners and Red Hat to help you jump-start your next automation project with consistent and reusable modules, plugins, and roles.
Let’s dive in:
Even expert developers can make mistakes, so if you’re a busy content creator, you should always test your own Collections Continue reading
A financial customer explained his first automation priority in the most visual and understandable way: “I want to paint all of my network devices with the color of the company.” What I like about that analogy is that it clearly describes the first rule for automation: customers must define their golden configurations (the color to paint) to be able to automate configurations and later assess compliance, and remediate any issues accordingly.
A “golden configuration” usually refers to a Day 1 configuration, and covers the minimal settings needed for a network device to be configured after a fresh network operating system installation. This usually includes common services such as NTP, DNS, AAA, Syslog, SNMP, and ACLs for management connectivity.
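As a hedged illustration of that idea (the host group and NTP server addresses are placeholders, and cisco.ios.ios_ntp_global is just one example of a resource module), enforcing a piece of golden Day 1 configuration can look like this:

---
# Sketch: enforce an approved NTP server list with a resource module,
# using state: replaced so the device ends up matching the declared intent.
# Host group and server addresses are placeholders.
- name: Enforce golden NTP configuration
  hosts: ios_routers
  gather_facts: false
  tasks:
    - name: Ensure only the approved NTP servers are configured
      cisco.ios.ios_ntp_global:
        config:
          servers:
            - server: 10.0.0.10
            - server: 10.0.0.11
        state: replaced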
As part of this blog, I will provide an overview of new automation capabilities available to achieve some of these Day 1 configuration activities. In addition to the enhancements for network configuration management, I will cover new Ansible Automation Platform capabilities that are frequently required by our network customers, such as:
With the release of Red Hat Ansible Automation Platform 2.1, we are proud to deliver the latest reference architecture on the best practices for deploying a highly available Ansible Automation Platform environment.
Why are you going to love it? This reference architecture focuses on providing a step-by-step deployment procedure to install and configure a highly available Ansible Automation Platform environment from start to finish.
But there’s more!
Aside from the key steps to install Ansible Automation Platform, it incorporates additional building blocks to optimize your Ansible Automation Platform environments, including:
The reference architecture consists of two environments of Ansible Automation Platform: Ansible Site 1 and Ansible Site 2 for high availability. Site 1 is an active environment while Site 2 is a passive environment. Each site consists of the following:
Welcome to Technology Short Take #149! I’ll have one more Technology Short Take in 2021, scheduled for three weeks from now (on the last day of the year!). For now, though, I have a small collection of articles and links for your reading pleasure—not as many as I usually include in a Technology Short Take, but better than nothing at all (I hope!). Enjoy!
As part of the most recent Ansible Automation Platform 2.1 release announced December 2, 2021, we are excited to debut one of the most long-awaited features of the release: automation mesh.
Automation mesh enables you to reliably and consistently automate at scale, across on-premises environments, the hybrid cloud, and to the edge. It delivers flexible design options, from single-site deployments to platform installations spanning the globe, wherever you are in your automation journey.
This blog details the benefits of automation mesh, a high-level overview of how it works, and how it helps you simplify scaling your automation across your enterprise environments. We are planning more detailed technical deep dive blogs with automation mesh use cases in the future, so stay tuned!
Scaling automation across different platforms and locations is challenging. How do you ensure your automation executes consistently while still managing your platform centrally? How do you automate endpoints in remote areas with limited connectivity?
The best practice to overcome these challenges is delivering and running automation closer to the devices that need it. This design limits execution interruptions, which can lead to inconsistent states and possible downtime for IT services.
Enterprises, however, have multiple Continue reading
We are thrilled to announce the general availability of Red Hat Ansible Automation Platform 2.1. This is the follow-on to the Ansible Automation Platform 2.0 Early Access released this summer and announced at AnsibleFest 2021. Red Hat Ansible Automation Platform 2.1 introduces major features that allow customers to onboard more easily, with even more flexible automation architectures and use cases. Ansible Automation Platform 2.1 is the culmination of many years of reimagining how enterprise automators automate for today and tomorrow.
You can download the latest version directly from the Red Hat Customer Portal, or sign up for a free trial at red.ht/try_ansible. Ansible Automation Platform is the Ansible you know and love, designed for the enterprise. I am going to summarize Andrius Benokraitis’ blog post from September, when Ansible Automation Platform 2 was announced, and expand on some key developments from 2.0 to 2.1.
First, some general information:
subscription-manager repos --enable=ansible-automation-platform-2.1-for-rhel-8-x86_64-rpms
Seamlessly, every single day, we wake up and check our health statistics on smart watches, scan QR codes to validate information, pay with credit cards in different locations, use surveillance cameras to record our neighborhoods, and connect our smartphones to distributed WiFi access points in our restaurants and coffee shops. According to Statista’s Forecast number of mobile users worldwide 2020-2025 [1] report, the number of mobile users worldwide reached 7.1 billion in 2021, and this number is projected to grow. This creates a new set of use cases for edge devices due to the explosive growth of network-connected entry points.
Edge computing and networking are not specific to any industry; these scenarios span many different types of organizations. However, all edge scenarios have one common factor: creating and consuming data resources that are geographically distributed. The final objective is to analyze, consume, or react to that data to fulfill our customer and business needs.
Twelve years ago, I was the network administrator for a bank. We had a branch office connected through a satellite link, which was easily impacted by the constant heavy rains. In the Continue reading
Welcome to Technology Short Take #148, aka the Thanksgiving Edition (at least, for US readers). I’ve been scouring RSS feeds and various social media sites, collecting as many useful links and articles as I can find: from networking hardware and networking CI/CD pipelines to Kernel TLS and tricks for improving your working memory. That’s quite the range! I hope that you find something useful here.
The pwru tool aims to help with tracing network packets in the Linux kernel. It seems like it may be a bit too debug-level to be useful to the average person, but I have yet to lay hands on it myself and find out for sure. Another link covers using the requests module to work with REST APIs. Good stuff here.
I recently had the opportunity to present our Red Hat Ansible Automation Platform cloud strategy at Cloud Field Day 12.
Cloud Field Day 12 was a three-day event that focused on the impact of cloud on enterprise IT. As a presenter, you can use any combination of slides and live demos to foster a discussion with a group of thought leaders. This roundtable included people from many different companies, skill sets, backgrounds and favorite tools. Check out the Cloud Field Day website to see the delegate panel, their backgrounds and Twitter handles. I quite enjoyed, and preferred, the conversational tone of Cloud Field Day, and the delegates who asked questions during the demo made it a lot more interactive.
Red Hat presented three products at Cloud Field Day: Red Hat OpenShift, which is our enterprise-ready Kubernetes container platform, Ansible Automation Platform, which I co-presented with Richard Henshall, our Head of Product and Strategy for Ansible Automation Platform, and finally Red Hat Advanced Cluster Management for Kubernetes, which extends the value of Red Hat OpenShift by deploying apps, managing multiple clusters and enforcing policies across multiple clusters at scale. I will list all three videos below.
The Automation Controller Collection allows Ansible Playbooks to automate interaction with automation controller. For example, actions previously performed manually via the web-based UI or the API can now be automated, just like the targets automation controller manages.
This Collection provides a programmatic way to create, update or delete automation controller objects as well as perform tasks such as run jobs, change configurations and much more. This article discusses new updates to this Collection, as well as an example playbook and details on how to run it successfully.
The ansible.controller Ansible Collection is the downstream supported distribution available on Ansible automation hub, made to work with Red Hat Ansible Automation Platform 2. The awx.awx Collection is the upstream community distribution available on Ansible Galaxy. For more details on the difference between Ansible Galaxy and Ansible automation hub, please refer to Ajay Chenampara’s blog post.
In this post, we use the ansible.controller Collection, but it can be replaced with the legacy ansible.tower or the awx.awx Collection depending on the user’s needs.
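As a hedged sketch of what such a playbook can look like (the object names are placeholders, and the connection details are assumed to come from CONTROLLER_HOST and related environment variables rather than module arguments), creating and launching a job template might look like this:

---
# Sketch: create a job template and launch it via the ansible.controller
# Collection. Object names are placeholders; connection details are expected
# in CONTROLLER_HOST / CONTROLLER_USERNAME / CONTROLLER_PASSWORD env vars.
- name: Manage automation controller objects (sketch)
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Ensure a job template exists
      ansible.controller.job_template:
        name: Deploy web servers
        organization: Default
        project: Demo Project
        inventory: Demo Inventory
        playbook: deploy.yml
        state: present

    - name: Launch the job template
      ansible.controller.job_launch:
        job_template: Deploy web servers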
One of the goals of the Automation Controller Collection is to allow users to Continue reading
Ansible Automation Platform 2 leverages containers, dubbed automation execution environments, which bundle Collection, Python, and platform dependencies to provide predictable, self-contained automation spaces that can be easily distributed across an organization.
In addition, Red Hat Ansible Automation Platform introduced tools such as execution environment builder, used to create execution environments, and automation content navigator, used to inspect images and execute automation within execution environments. These tools themselves are also highly automatable and can be included in workflows to automatically generate environments to support the execution of automation throughout the organization.
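As a hedged sketch (the dependency file names, image tag, and playbook are placeholders, not from the demo that follows), an execution environment definition for ansible-builder and the commands to build an image and run automation against it might look like this:

---
# execution-environment.yml (sketch): the referenced files are placeholders
# that would list Collections, Python packages, and system packages to
# bake into the image.
version: 1
dependencies:
  galaxy: requirements.yml
  python: requirements.txt
  system: bindep.txt

Then, roughly:

ansible-builder build --tag my_ee:latest
ansible-navigator run site.yml --execution-environment-image my_ee:latest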
For this demonstration, let's cut to film where I’ll walk through a demo scenario and verify along the way that we’re on the right track. Additionally, you can fork the repository for your own proof of concept.