Using CLI tools—instead of a “wall of YAML”—to install things onto Kubernetes is a growing trend, it seems. Istio and Cilium, for example, each have a CLI tool for installing their respective project. I get the reasons why; you can build logic into a CLI tool that you can’t build into a YAML file. Kuma, the open source service mesh maintained largely by Kong and a CNCF Sandbox project, takes a similar approach with its kumactl tool. In this post, however, I’d like to take a look at creating reusable YAML to install Kuma, instead of using the CLI tool every time you install.
You might be wondering, “Why?” That’s a fair question. Currently, the kumactl tool, unless configured otherwise, generates a set of TLS assets to be used by Kuma (and embeds some of those assets in the YAML regardless of the configuration). Every time you run kumactl, it generates a new set of TLS assets. This means the command is not declarative, even if its output is. Unfortunately, you can’t simply reuse the output, as that would result in duplicate TLS assets across installations. That brings me to the point of this post.
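To make the behavior concrete, here’s a minimal sketch of the flow (file names are illustrative). Running kumactl install control-plane twice produces two different sets of TLS assets, so the rendered YAML is not reusable as-is:

```sh
# Render the Kuma control plane manifests to a file; kumactl embeds
# freshly generated TLS assets in the output on every run
kumactl install control-plane > kuma-cp.yaml

# Applying the saved output is declarative...
kubectl apply -f kuma-cp.yaml

# ...but a second run generates new TLS material, so these two
# files will not match
kumactl install control-plane > kuma-cp-2.yaml
diff kuma-cp.yaml kuma-cp-2.yaml
```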
In 2018, after finding a dearth of information on setting up Kubernetes with AWS integration/support, I set out to try to establish some level of documentation on this topic. That effort resulted in a few different blog posts, but ultimately culminated in this post on setting up an AWS-integrated Kubernetes cluster using kubeadm. Although originally written for Kubernetes 1.15, the process described in that post is still accurate for newer versions of Kubernetes. With the release of Kubernetes 1.22, though, the in-tree AWS cloud provider—the provider used and described in the post linked above—has been deprecated in favor of the external cloud provider. In this post, I’ll show how to set up an AWS-integrated Kubernetes cluster using the external AWS cloud provider.
In addition to the post I linked above, there were a number of other articles I published on this topic:
Most of the information in these posts, if not all of it, is found in the latest iteration, but I wanted to include these links here for some additional context.
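To give a sense of where this post is headed, the key change with the external provider is telling the kubelet that the cloud provider is “external.” A minimal sketch of the relevant kubeadm configuration might look like this (the API version shown matches Kubernetes 1.22; treat the exact values as illustrative):

```yaml
# Sketch of a kubeadm configuration for use with the external AWS
# cloud provider; the kubelet is started with cloud-provider=external
# so the in-tree AWS logic is bypassed
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.22.2
```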
Red Hat Ansible Automation Platform 2 introduces an updated architecture, new tools and an improved but familiar experience to automation teams. However, there are multiple considerations for your planning and strategy to migrate your current deployment to Ansible Automation Platform 2.
This document provides guidance to all of the stakeholders responsible for planning and executing an Ansible Automation Platform migration, along with factors to address in your migration strategy.
This document does not provide a one-size-fits-all approach for migration. Various factors unique to your organization will impact the effort required, stakeholders involved and delivery plan.
We understand that many factors specific to your needs affect your migration assessment and planning. This section highlights critical factors to determine your migration readiness and what approach will best suit your organization.
Assess your current environment
There will be configurations unique to your environment, and it’s crucial to perform a thorough technical assessment. We recommend including the following:
Combining kustomize with Cluster API (CAPI) is a topic I’ve touched on several times over the last 18-24 months, starting in November 2019 with a post on using kustomize with CAPI manifests. A short while later, I discovered a way to change the configurations for the kustomize transformers to make them easier to use with CAPI. That resulted in two posts on changing the kustomize transformers: one for v1alpha2 and one for v1alpha3 (since there were changes to the API between versions). In this post, I’ll revisit the kustomize transformer configurations again, this time for CAPI v1beta1 (the API version corresponding to the CAPI 1.0 release).
In the v1alpha2 post (the first post on modifying kustomize transformer configurations), I mentioned that changes were needed to the NameReference and CommonLabel transformers. In the v1alpha3 post, I mentioned that the changes to the CommonLabel transformer became largely optional; if you are planning on adding additional labels to MachineDeployments, then the change to the CommonLabels transformer is required, but otherwise you could probably get by without it.
For v1beta1, the necessary changes are very similar to those for v1alpha3.
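As a refresher on the pattern, here’s a sketch of what a modified NameReference transformer configuration can look like for v1beta1 (the field paths shown illustrate the pattern rather than covering every CAPI object type):

```yaml
# Custom NameReference transformer configuration telling kustomize
# that MachineDeployments refer to a Cluster by name, so a renamed
# Cluster has its references updated as well
nameReference:
- kind: Cluster
  group: cluster.x-k8s.io
  version: v1beta1
  fieldSpecs:
  - path: spec/clusterName
    kind: MachineDeployment
  - path: spec/template/spec/clusterName
    kind: MachineDeployment
```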
Welcome to Technology Short Take #146! Over the last couple of weeks, I’ve gathered a few technology-related links for you all. There’s some networking stuff, a few security links, and even a hardware-related article. But enough with the introduction—let’s get into the content!
dnspeep, a tool to help folks understand how DNS works.

Are you trying to manage private clouds easily and efficiently using Ansible Automation Platform? When it comes to VMware infrastructure automation, the latest release of the vmware.vmware_rest Collection and new lookup plugins bring a set of fresh features to build, manage and govern various VMware use cases and accelerate the process from development to production.
The modules in the vmware.vmware_rest Collection rely on the resource MOID a lot. This is a design decision that we covered in an earlier blog. Consequently, when the users want to modify a VMware resource, they need to first write Ansible tasks to identify its MOID.
The new 2.1.0 release of the vmware.vmware_rest Collection comes with a series of lookup plugins dedicated to gathering resource MOIDs. In this blog post, we will help you keep your VMware automation playbooks concise.
Internally, VMware vSphere manages resources in the form of objects. Every object has a type and an ID; what we are calling the MOID stands for Managed Object ID. The vSphere UI hides the MOID logic from users and presents the objects in a visual hierarchy, potentially at several different locations.
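As a quick sketch of how this looks in a playbook, the example below uses one of the collection’s *_moid lookup plugins to resolve a VM’s MOID from its inventory path in a single expression (the path /my_dc/vm/my_vm and the variable name are assumptions for illustration):

```yaml
# Resolve a VM's MOID directly from its inventory path, rather than
# writing separate tasks to walk the datacenter/folder hierarchy
- name: Look up the MOID of a virtual machine
  ansible.builtin.set_fact:
    my_vm_moid: "{{ lookup('vmware.vmware_rest.vm_moid', '/my_dc/vm/my_vm') }}"
```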
In this post, I’m going to walk you through how to install Cilium onto a Cluster API-managed workload cluster using a ClusterResourceSet. It’s reasonable to consider this post a follow-up to my earlier post that walked you through using a ClusterResourceSet to install Calico. There’s no need to read the earlier post, though, as this post includes all the information (or links to the information) you need. Ready? Let’s jump in!
If you aren’t already familiar with Cluster API—hereafter just referred to as CAPI—then I would encourage you to read my introduction to Cluster API post. Although it is a bit dated (it was written in the very early days of the project, which recently released version 1.0) and some of the commands referenced in that post have changed, the underlying concepts remain valid. If you’re not familiar with Cilium, check out their introduction to the project for more information. Finally, if you’re not familiar at all with the idea of ClusterResourceSets, you can read my earlier post or check out the ClusterResourceSet CAEP document.
If you want to install Cilium via a ClusterResourceSet, the process looks something like this:
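At the heart of that process is the ClusterResourceSet object itself, which maps a bundle of resources (here, a ConfigMap holding the Cilium manifest) onto clusters matching a label selector. A minimal sketch, with illustrative names and labels:

```yaml
# Deliver the Cilium manifest stored in the ConfigMap "cilium-manifest"
# to every workload cluster labeled cni=cilium
apiVersion: addons.cluster.x-k8s.io/v1beta1
kind: ClusterResourceSet
metadata:
  name: cilium-crs
  namespace: default
spec:
  clusterSelector:
    matchLabels:
      cni: cilium
  resources:
  - name: cilium-manifest
    kind: ConfigMap
```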

With the introduction of Red Hat Ansible Automation Platform 2, several new key components are being introduced as a part of the overall developer experience. This includes automation execution environments, introduced to provide predictable environments during automation runtime. All collection dependencies are contained within the execution environment to make sure that automation created in development environments runs the same as in production environments.
Building execution environments is now easier with the introduction of the execution environment builder (ansible-builder) tool included with Ansible Automation Platform, while the updates to automation controller help IT teams leverage execution environments.
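As a quick illustration (the image tag and definition file name are hypothetical), building an execution environment boils down to a single command:

```sh
# Build an execution environment image from a definition file that
# lists the collections and dependencies to bake in
ansible-builder build --tag my-ee:1.0 --file execution-environment.yml
```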
Considering the shift towards containerized execution of automation, the automation development workflow and pre-existing tooling must be reimagined. In short, ansible-navigator replaces ansible-playbook and other ansible-* command line utilities.
However, ansible-navigator serves a much greater purpose than just a drop-in replacement or alias for ansible-playbook. With this release, ansible-navigator is introduced as an application exclusively for developing and executing automation. Ansible-playbook has long been one of the first utilities leveraged in an Ansible automated environment. Building on that ease of use, the automation content navigator exposes automation runs in greater detail.
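For a taste of the new workflow, here’s a hedged example of running a playbook inside an execution environment (the playbook name and image tag are illustrative):

```sh
# Run a playbook inside an execution environment image instead of
# invoking ansible-playbook directly on the local machine
ansible-navigator run site.yml --execution-environment-image my-ee:1.0
```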

We are excited to announce that the Ansible Automation Platform 2 release includes private automation hub 4.3. Private automation hub provides automation developers the ability to collaborate and publish their own automation content and streamline delivery of Ansible code within their organization.
Private automation hub in Ansible Automation Platform 2 primarily delivers support for automation execution environments. Execution environments are a standardized way to define, build and distribute the environments that the automation runs in. In a nutshell, automation execution environments are container images that allow for easier administration of Ansible by the platform administrator. If you are unfamiliar with execution environments, please refer to this blog written by Technical Marketing manager Anshul Behl.
Private automation hub will serve as the on-premises container image repository for execution environments, aimed at customers who run the platform on physical or virtual environments. Ansible Automation Platform will seamlessly integrate with private automation hub for publishing and pulling execution environment container images.
Private automation hub is intended for curating automation content from creators and making it seamlessly accessible to operators. It makes it easy to share these execution environments.
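Because private automation hub acts as a standard container registry for execution environment images, interacting with it can be sketched with ordinary container tooling (the hostname and image name below are assumptions for illustration):

```sh
# Authenticate to the private automation hub container registry
podman login hub.example.com

# Tag a locally built execution environment image and push it
podman tag my-ee:1.0 hub.example.com/my-ee:1.0
podman push hub.example.com/my-ee:1.0
```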

Red Hat Ansible Automation Platform 2 is the next generation automation platform from Red Hat’s trusted enterprise technology experts. We are excited to announce that the Ansible Automation Platform 2 release includes automation controller 4.0, the improved and renamed Red Hat Ansible Tower.
Automation controller continues to provide a standardized way to define, operate and delegate automation across the enterprise. It also introduces new, exciting technologies and an enhanced architecture that enables automation teams to scale and deliver automation rapidly to meet ever-growing business demand.
As Ansible Automation Platform 2 continues to evolve, certain functionality has been decoupled (and will continue to be decoupled in 2.1) from what was formerly known as Ansible Tower. The naming change better reflects these enhancements and the overall position within the Ansible Automation Platform suite.
All automation team members interact with or rely on automation controller, either directly or indirectly.
These roles are not necessarily dedicated…

Red Hat Ansible Automation Platform 2 is now available to customers. This release expands the possibilities of automation across your organization, with a more secure, flexible foundation to build and deploy automation with greater acceleration, orchestration and innovation.
As automation practices spread throughout an organization, managing multiple automation environments for different teams and use cases becomes challenging. This is even more true as automation starts to scale across the IT organization. As automation continues to be part of critical workflows, the following enhancements have been made in Ansible Automation Platform 2:
For the last two years, the Red Hat Ansible Automation Platform product team has been hard at work developing the next major release. We are incredibly excited to introduce Red Hat Ansible Automation Platform 2, which was just announced at AnsibleFest 2021.
What’s new in Ansible Automation Platform 2?
The main focus was to enhance the foundational pieces of the Ansible Automation Platform and to enable automators to automate at enterprise scale more easily and flexibly. This means everything you know and love about writing Ansible Playbooks is largely unchanged, but what is evolving is the underlying implementation of how automation is developed, managed, and operated in large complex environments. In the end, enterprise automation platforms must be designed, packaged, and supported with container native and hybrid cloud environments in mind.
So how did we get here? It’s been years in the making, which included the following changes:
1. Ansible content was separated from the Ansible executable in the Ansible Project, creating a new construct called Ansible Content Collections to house Ansible modules, plugins, roles and more in a discrete and atomic form.
The vast majority of time recently has been spent relocating the majority of Ansible content into these Collections.

Welcome to Technology Short Take #145! What will you find in this Tech Short Take? Well, let’s see…stuff on Envoy, network automation, network designs, M1 chips (and potential open source variants!), a bevy of security articles (including a couple on very severe vulnerabilities), Kubernetes, AWS IAM, and so much more! I hope that you find something useful here. Enjoy!
This year, we are adapting our signature automation event, AnsibleFest, into a free virtual experience. Seasoned pros and new Ansible enthusiasts alike can find answers and learn more about Red Hat Ansible Automation Platform, the platform for building and operating automation at scale and creating an enterprise automation strategy. We are excited to offer a track designed specifically for IT leaders.
Let’s take a closer look at this track for AnsibleFest 2021.
The role of automation for IT leaders has certainly evolved in recent years. Automation that was once confined to domain-specific, task-bound practices has grown to encompass full enterprises, connecting teams and areas that were not connected before. The prospect of beginning an automation practice, or even unifying separate automation efforts, can often seem daunting. No matter where you are on your automation journey, we have a session for you.
Attendees will hear from like-minded companies about their targeted use cases and experiences with automation, sharing how they navigated their digital transformation journeys and lessons learned. We are also excited to be joined by several analysts who will share their insights and perspectives.
This year, we are adapting our signature automation event, AnsibleFest, into a free virtual experience. Seasoned pros and new Ansible enthusiasts alike can find answers and learn more about Red Hat Ansible Automation Platform, the platform for building and operating automation at scale and creating an enterprise automation strategy. This year, we have a content topic designed specifically for attendees from telecommunications and media companies with our Telco topic within the Network track.
Let’s take a closer look at this content topic for AnsibleFest 2021.
Telecommunication service providers have extremely critical and complex workflows that require specialized attention for automation. Attendees can expect to learn about targeted use cases, especially for telecommunications customers, partners and vendors. You will hear from like-minded companies about their experiences with automation, and understand advanced automation use cases like AIOps, pair programming, provisioning for Red Hat OpenShift, NetOps with continuous integration/continuous delivery (CI/CD) and more.
In addition, you can take advantage of the Telco content topic, where Telco customers can learn how to get started with network automation, how to expand their network automation use cases, and what’s new for those developing automation for network projects.
It has been almost half a year since the XLAB Steampunk and Red Hat Ansible Automation Platform teams developed the first version of the Red Hat Ansible Certified Content Collection for ServiceNow IT Service Management (ITSM). You may also want to read our ServiceNow introduction if you are not familiar with this already. So, let’s take a look at what is new since the last release.
We will skip most of the technical details in this post because, let us face it, talking about interactions between API pagination and filtering is not something many people enjoy. Instead, we will focus on the new things users can do using the current 1.2.0 version of this collection.
ServiceNow records (incidents, problems, change requests, etc.) may contain attachments, but the first version of the ServiceNow Collection did not expose this capability to Ansible Automation Platform users. We changed this in version 1.2.0 when we added the attachments parameter to all non-info modules.
Ansible users can now upload error logs and other artifacts when creating new incidents and other ServiceNow records. They can also add them later to existing ServiceNow records.
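Here’s a sketch of what that can look like in a playbook (the incident details and file path are illustrative):

```yaml
# Create a new incident and attach a local error log to it using the
# attachments parameter added in version 1.2.0
- name: Open an incident with an attached log file
  servicenow.itsm.incident:
    state: new
    caller: admin
    short_description: Web service is failing on host web01
    attachments:
      - path: /tmp/web01-error.log
```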
This year, we are adapting our signature automation event, AnsibleFest, into a free virtual experience to connect our communities with a wider audience and to collaborate to solve problems. Seasoned pros and new Ansible enthusiasts alike can find answers and learn more about Red Hat Ansible Automation Platform, the platform for building and operating automation at scale and creating an enterprise automation strategy. Have you already automated some type of server or infrastructure management? Use the network automation track to understand the benefits that come with automating network management the Ansible way.
Let’s take a closer look at this track for AnsibleFest 2021.
Gone are the days of hand-typing commands into network devices one by one, because you simply can’t keep up. Manage your network infrastructure using Ansible Automation Platform throughout the entire development and production life cycle, freeing time to focus on your top-priority network engineering challenges. This AnsibleFest track focuses on network automation topics for automation content developers as well as network and cloud engineers and operators.
Attendees will learn how network automation can no longer be a point tool, but must instead be part of a holistic automation strategy.
Welcome to Technology Short Take #144! I have a fairly diverse set of links for readers this time around, covering topics from microchips to improving your writing, with stops along the way in Kubernetes, virtualization, Linux, and the popular JSON-parsing tool jq. I hope you find something useful!
I use Pulumi to manage my lab infrastructure on AWS (I shared some of the details in this April 2020 blog post published on the Pulumi site). Originally I started with TypeScript, but later switched to Go. Recently I had a need to add some VPC peering relationships to my lab configuration. I was concerned that this might pose some problems—due entirely to the way I structure my Pulumi projects and stacks—but as it turned out, it was more straightforward than I expected. In this post, I’ll share some example code and explain what I learned in the process of writing it.
First, let me share some background on how I structure my Pulumi projects and stacks.
It all starts with a Pulumi project that manages my base AWS infrastructure—VPC, subnets, route tables and routes, Internet gateways, NAT gateways, etc. I use a separate stack in this project for each region where I need base infrastructure.
All other projects build “on top” of this base project, referencing the resources created by the base project in order to create their own resources. This cross-project referencing is accomplished via a Pulumi StackReference.
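To illustrate the pattern, here’s a minimal Go sketch (the stack name org/base-infra/us-west-2 and the vpcId output name are assumptions about how the base project is organized):

```go
package main

import (
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Reference the base infrastructure stack for this region;
		// the stack name here is illustrative
		base, err := pulumi.NewStackReference(ctx, "org/base-infra/us-west-2", nil)
		if err != nil {
			return err
		}

		// Pull the VPC ID the base stack exports so resources in
		// this project (subnets, peerings, etc.) can attach to it
		vpcID := base.GetStringOutput(pulumi.String("vpcId"))
		ctx.Export("vpcId", vpcID)
		return nil
	})
}
```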
To conduct some testing, I recently needed to spin up a group of Kubernetes clusters on AWS. Generally speaking, my “weapon of choice” for something like this is Cluster API (CAPI) with the AWS provider. Normally this would be enormously simple. In this particular case—for reasons that I won’t bother going into here—I needed to spin up all these clusters in a single VPC. This presents a problem for the Cluster API Provider for AWS (CAPA), as it currently doesn’t add some required tags to existing AWS infrastructure (see this issue). The fix is to add the tags manually, so in this post I’ll share how I used the AWS CLI to add the necessary tags.
Without the necessary tags, the AWS cloud provider—which is responsible for the integration that creates Elastic Load Balancers (ELBs) in response to the creation of a Service of type LoadBalancer, for example—won’t work properly. Specifically, the following tags are needed:
kubernetes.io/cluster/<cluster-name>
kubernetes.io/role/elb
kubernetes.io/role/internal-elb
The latter two tags are mutually exclusive: the former should be assigned to public subnets to tell the AWS cloud provider where to place public-facing ELBs, while the latter should be assigned to private subnets to indicate where internal-facing ELBs should go.
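A sketch of the tagging commands (subnet IDs and the cluster name are placeholders):

```sh
# Tag a public subnet so the AWS cloud provider places public-facing
# ELBs there; "shared" indicates the subnet isn't owned by the cluster
aws ec2 create-tags --resources subnet-0abc1234 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=shared \
         Key=kubernetes.io/role/elb,Value=1

# Tag a private subnet for internal-facing ELBs
aws ec2 create-tags --resources subnet-0def5678 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=shared \
         Key=kubernetes.io/role/internal-elb,Value=1
```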