Lately I’ve been spending some time building Pulumi programs to assist with standing up Azure Kubernetes Service (AKS) clusters. I’ve learned a fair amount about Azure and AKS along the way, as expected, but I was taken aback by the poor user experience (in my opinion) when it came to accessing the AKS clusters once they’d been established. In this post, I’ll share a small tweak you can make that will, in most cases, make accessing your AKS clusters a great deal smoother.
What do I mean by “poor user experience”? In the same vein as comparable offerings from AWS (EKS) and Google Cloud (GKE), AKS leverages Azure’s identity and access management (IAM) functionality, so that users have a single place to manage user and group entities. This makes perfect sense! What doesn’t make sense to me, though, is the requirement that users perform a separate login process to gain access to the cluster, even if the user is already authenticated via the Azure CLI. This stands in contrast to both EKS and GKE, where, if you are already authenticated via their CLI tools, no additional steps are necessary to access appropriately configured managed Kubernetes clusters on their Continue reading
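The excerpt is truncated, but the usual tweak for smoothing out Azure AD-enabled AKS access is kubelogin's Azure CLI login mode (for example, running `kubelogin convert-kubeconfig -l azurecli` after `az aks get-credentials`), which reuses your existing `az login` session instead of prompting for a separate device-code login. A sketch of the kubeconfig user entry that conversion produces is below; the user name and `<server-app-id>` placeholder are illustrative, and the exact fields come from kubelogin itself:

```yaml
# Sketch of a kubeconfig user entry after `kubelogin convert-kubeconfig -l azurecli`
# (user name and <server-app-id> are placeholders, not from the original post)
users:
- name: clusterUser_myResourceGroup_myAKSCluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubelogin
      args:
        - get-token
        - --login
        - azurecli
        - --server-id
        - <server-app-id>
```

With this in place, kubectl obtains tokens from the existing Azure CLI session rather than starting its own interactive login.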
Red Hat Ansible Automation Platform can help you orchestrate, operationalize, and govern your hybrid cloud deployments. In my last public cloud blog, I talked about Two Simple Ways Automation Can Save You Money on Your AWS Bill; similarly to Ashton’s blog Bringing Order to the Cloud: Day 2 Operations in AWS with Ansible, we both wanted to look beyond the common public cloud use case of provisioning and deprovisioning resources and instead look at automating common operational tasks. For this blog post I want to cover how the Technical Marketing team for Ansible orchestrates a pipeline for demos and workshops with Ansible, and how we integrate that with custom AMIs (Amazon Machine Images) created with Packer. Packer is an open source tool that allows IT operators to standardize and automate the process of building system images.
For some of our self-paced interactive hands-on labs on Ansible.com, we can spin up images in seconds. In an example automation pipeline we will:
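The pipeline steps are truncated above, but the Packer build stage of such a pipeline can be driven from Ansible with a task along these lines (the template path, variable, and working directory are illustrative assumptions, not the team's actual pipeline code):

```yaml
# Illustrative task: kick off a Packer AMI build from an Ansible play
# (template file name and variables are hypothetical)
- name: Build custom AMI with Packer
  ansible.builtin.command:
    cmd: packer build -var "ami_name=demo-workshop" aws-ami.pkr.hcl
    chdir: "{{ playbook_dir }}/packer"
  register: packer_build
```

The resulting AMI ID can then be parsed from `packer_build.stdout` and passed to later provisioning tasks in the same play.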
One common challenge our customers face is the need to track hosts from multiple sources: LDAP, cloud providers, and enterprise CMDB systems. Using a dynamic inventory allows users to integrate with these systems and update the Ansible inventory as it varies over time, with hosts spinning up and shutting down in response to business demands.
Ansible supports two ways to connect with external inventory sources: inventory plugins and inventory scripts.
Today we are going to cover dynamic inventory plugins, delivered as a Collection, for network device management backed by an /etc/hosts file. The same type of setup can be used to create a dynamic inventory from a variety of sources, from /etc/hosts files to INI files or even CSVs.
We are going to start by identifying the source of truth for the inventory we want to import.
If you want to test and use this inventory plugin, you can find the code in this GitHub repository:
https://github.com/jmcleroy/inventoryplugin.git
In this case, it will be an /etc/hosts file stored externally in the GitHub/GitLab inventory plugin repo as a test; in a similar fashion this file Continue reading
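As a sketch of what consuming such a plugin looks like, the inventory source is itself a small YAML file that names the plugin and points at the /etc/hosts-style source. The plugin FQCN and option names below are assumptions for illustration; check the linked repository for the real ones:

```yaml
# inventory.yml - fed to `ansible-inventory -i inventory.yml --list`
# (plugin name and options are illustrative, not taken from the repo)
plugin: jmcleroy.inventoryplugin.etc_hosts
source: https://raw.githubusercontent.com/jmcleroy/inventoryplugin/main/hosts
```

Ansible matches the `plugin:` key against enabled inventory plugins, so the source file's name and this key are what wire the external data into the inventory.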
Azure Arc is becoming the default Microsoft Azure service for connecting non-Azure infrastructure to Azure monitoring and administration. Azure has also issued a deprecation notice for the Azure Log Analytics agents: the Microsoft Monitoring Agent and the Log Analytics (OMS) agent. Azure Monitor Agent replaces these agents and introduces a simplified, flexible method of configuring data collection called Data Collection Rules. To leverage Azure Monitor Agent with their non-Azure servers, customers will need to onboard their machines to Azure Arc-enabled servers.
This article covers how to use Red Hat Ansible Automation Platform to migrate servers currently using the Azure Log Analytics Agent to the Azure Monitor Agent on Azure Arc. When you have completed the configuration in this blog, you will be able to run a workflow against an automation controller inventory that performs the following tasks:
Since the example workflow in this blog post is modular, you may also implement the Continue reading
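One building block of such a workflow, sketched here as an assumption rather than the post's actual tasks, is installing the Azure Monitor Agent extension on an Arc-enabled Linux server via the generic azure.azcollection.azure_rm_resource module (resource names, API version, and the variables are illustrative):

```yaml
# Illustrative task: add the Azure Monitor Agent extension to an Arc-enabled server
# (variables, API version, and extension details are assumptions)
- name: Install Azure Monitor Agent on an Arc-enabled Linux server
  azure.azcollection.azure_rm_resource:
    resource_group: "{{ resource_group }}"
    provider: HybridCompute
    resource_type: machines
    resource_name: "{{ arc_server_name }}"
    subresource:
      - type: extensions
        name: AzureMonitorLinuxAgent
    api_version: "2022-03-10"
    body:
      location: "{{ location }}"
      properties:
        publisher: Microsoft.Azure.Monitor
        type: AzureMonitorLinuxAgent
        enableAutomaticUpgrade: true
```

A companion task in the same style would remove the old Log Analytics (OMS) extension before or after this one runs.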
Welcome to Technology Short Take #160! This time around, my list of links and articles is a tad skewed toward cloud computing/cloud management, but I’ve still managed to pull together some links on other topics that readers will hopefully find useful. For example, did you know about the secret macOS network quality tool? You didn’t? Lucky for you there’s a link to an article about it below. Read on to get all the details!
Lately I’ve been doing a fair amount of work with Pulumi’s YAML support (see this blog post announcing it), and I recently ran into a situation where I wanted to read in and use a configuration value (set via pulumi config). When using one of Pulumi’s supported programming languages, like TypeScript or Python or Go, this is pretty easy. It’s also easy in YAML, but not as intuitive as I originally expected. In this post, I’ll share how to read in and use a configuration value when using Pulumi YAML.
Configuration values are how you parameterize a Pulumi program in order to make it more flexible and reusable (see this page on configuration from Pulumi’s architecture and concepts documentation). That same page also has examples of using config.Get or config.Require to pull configuration values into a program (the difference between these two, by the way, is that the latter will prevent a program from running if the configuration value isn’t supplied).
In YAML, it’s (currently) handled a bit differently. As outlined in the Pulumi YAML reference, a Pulumi YAML document has four main sections: configuration, resources, variables, and outputs. At first, I thought Continue reading
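To make the truncated explanation concrete, here is a minimal sketch of reading a config value in Pulumi YAML: the value is declared under the configuration section and then referenced with ${} interpolation. The resource shown is an illustrative example using the Pulumi random provider, not code from the post:

```yaml
# Pulumi.yaml - a minimal sketch of reading a config value in a Pulumi YAML program
name: config-example
runtime: yaml
configuration:
  prefix:                 # set with: pulumi config set prefix myvalue
    type: String
    default: demo
resources:
  pet:
    type: random:RandomPet
    properties:
      prefix: ${prefix}   # the config value, referenced by interpolation
outputs:
  petName: ${pet.id}
```

Unlike config.Get in the SDK languages, there is no explicit "read" call; declaring the key under configuration makes `${prefix}` available throughout the document.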
Life comes down to moments. These events are often how we define our achievements, successes, and failures throughout life. Just like our daily lives, IT organizations and teams can also have these defining moments; you will often hear phrases like the "great database crash of '98." Many of these memorable IT moments stem from limiting ourselves to a reactive approach to managing our IT assets. This is where event-driven automation can help us move from reactive to proactive IT management, well before we have the next great issue or moment in our IT teams.
In an IT context, events come from monitoring or other tools to tell us when something needs attention. With this event data, we are able to respond faster with automated tasks, resolving issues or enhancing observation where needed, often so we can identify and address festering issues before they become full-blown problems. A byproduct of this is that teams can spend more time innovating and can realize greater work-life balance, because troubleshooting patterns and remediation approaches are automatically initiated based on an initial event in your environments.
Consider Continue reading
Since connectivity is critical to all types of business applications, including cloud apps, managing and maintaining the network often falls into the overnight hours. This is great for the business, but it puts a large wrinkle in your work-life balance. Luckily, AnsibleFest is here to help you get more sleep!
Continued use of an overnight network management work model can leave you wondering what other technology career options are available, which may mean that your team is smaller than it used to be due to turnover. Search LinkedIn jobs for “Ansible engineer” and you will find as many as 166,000 roles that ask for some form of Ansible skills. Udemy’s 2020 Workplace Learning Trends Report: The Skills of the Future describes increased enthusiasm for learning technologies such as automation (page 20), and Cisco cites market research showing nearly 23% growth in network automation from 2022 to 2028. If you are in networking, automation can be an important boost to your career.
Across networking domains, automation plays a key role in helping to balance working hours, so Ansible skills can be good to develop. Red Hat Ansible Automation Platform makes management and other tasks faster and Continue reading
Ansible and Terraform are two very powerful but distinct open source IT tools that are often compared in competitive discussions. We often see comparisons of the two, but many times these are purely “spec sheet” comparisons. That type of comparison, while an interesting read, doesn’t take into account using the products at scale, or whether a binary, all-or-nothing choice between them is even realistic. We at Red Hat have been helping enterprises for over 20 years and have a good idea of how most IT administrators are using these two tools in production. Although both tools can generally do most things, we typically see each leveraged for its biggest strengths rather than organizations having to choose one or the other.
Spoiler: The two tools are better together and can work in harmony to create a better experience for developers and operations teams.
Both Ansible and Terraform are open source tools with huge user bases, which often leads to devoted followings and the classic “hammer” approach: if my only tool is a hammer, every problem will start resembling a nail. This ends up trying to solve new Continue reading
Red Hat Ansible Automation Platform 2 introduced major architectural changes, like automation mesh and automation execution environments, that help extend Ansible automation across your organization in a flexible manner, providing a single solution to all your organizational and hybrid cloud automation needs.
Automation execution environments are container images that act as Ansible runtimes for automation controller jobs. Ansible Automation Platform also includes a command-line tool called ansible-builder (execution environment builder) that lets you create automation execution environments by specifying Ansible Content Collections and Python dependencies.
In general, an automation execution environment includes:
In this blog, I will take you through the inner workings of ansible-builder and how all the above requirements are packaged inside automation execution environments and delivered as part of Ansible Automation Platform.
As with all projects at Red Hat, ansible-builder follows an open development model and an upstream-first approach. The upstream project for ansible-builder is distributed as a Python package, which is then packaged into an RPM for the Ansible Automation Platform downstream. This also means that there are different ways to install the upstream package and the downstream ansible-builder.
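To make the definition format concrete, here is a minimal version 1 execution environment file of the kind ansible-builder consumes. The dependency file names shown are the conventional ones, not mandated by the tool:

```yaml
# execution-environment.yml - built with: ansible-builder build -t my-ee .
version: 1
dependencies:
  galaxy: requirements.yml   # Ansible Content Collections to install
  python: requirements.txt   # Python dependencies
  system: bindep.txt         # OS-level packages, in bindep format
```

ansible-builder reads this file, generates a Containerfile, and drives the container build that layers the listed Collections and dependencies onto a base image.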
NOTE: Continue reading
It is almost time for our favorite event of the year! AnsibleFest is back as an in-person event, and it’s the only place to feel the full power of automation. Join us October 18 and 19 in Chicago, and immerse yourself in the movement that’s made Ansible an industry-leading automation technology.
The AnsibleFest 2022 session catalog and agenda builder are now available. That means you can see all this year’s session content and build your own custom experience. Let’s take a closer look at all that AnsibleFest 2022 has to offer.
This year we will be featuring six content tracks: Getting Started with Automation, the Ansible Community, Automation Applied, Ansible Automation Platform, Advanced Ansible, and Automation Adoption. Sessions across these tracks will cover a range of focus areas, including network automation, security automation, and automation at the edge, with content aimed at IT leaders, developers, and more. We also have a wide range of customer, partner, and Ansible contributor talks to join. You can see a list of all the sessions being offered in our session catalog.
Get hands-on experience at AnsibleFest in our workshops and labs. We will have a mixture of self-paced Continue reading
As I was winding down things at Kong and getting ready to transition to Pulumi (more information on why I moved to Pulumi here), I casually made the comment on Twitter that I needed to start managing my AWS key pairs using Pulumi. When the opportunity arose last week, I started doing exactly that! In this post, I’ll show you a quick example of how to use Pulumi and Go to declaratively manage AWS key pairs.
This is a pretty simple example, so let’s just jump straight to the code:
package main

import (
	"github.com/pulumi/pulumi-aws/sdk/v5/go/aws/ec2"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		_, err := ec2.NewKeyPair(ctx, "aws-rsa-keypair", &ec2.KeyPairArgs{
			KeyName:   pulumi.String("key-pair-name"),
			PublicKey: pulumi.String("<ssh-key-material-here>"),
			Tags: pulumi.StringMap{
				"Owner":   pulumi.String("User Name"),
				"Team":    pulumi.String("Team Name"),
				"Purpose": pulumi.String("Public key for authenticating to AWS EC2 instances"),
			},
		})
		if err != nil {
			return err
		}
		return nil
	})
}
This code is, by and large, pretty self-explanatory. For PublicKey, you just need to supply the contents of the public key file (use cat or similar to get the contents of the file) where Continue reading
Welcome to Technology Short Take #159! If you’re interested in finding some links to articles around the web on topics like WASM, Git, Sigstore, or EKS—among other things—then you’ve come to the right place. I’ve spent the last few weeks collecting articles I think you’ll find useful, gleaning them from the depths of Twitter, RSS feeds, Reddit, and Slack. Enjoy, and never stop learning!
As Red Hat Ansible Automation Platform enables teams and organizations to drive their automation across the cloud and on-premises, keeping Ansible Automation Platform healthy, with the ability to monitor key metrics, becomes paramount.
This blog post demonstrates how to monitor the API metrics provided by an Ansible Automation Platform environment when deployed within Red Hat OpenShift.
Prometheus is an open source monitoring solution for collecting and aggregating metrics. Pair Prometheus’ monitoring capabilities with Grafana, an open source solution for running data analytics and surfacing metrics in customizable dashboards, and you get real-time visualization of metrics to track the status and health of your Ansible Automation Platform environment.
Expect to be fast-tracked to a deployment of Ansible Automation Platform that is monitored by Prometheus paired with a Grafana Ansible Automation Platform dashboard showing those metrics in real time.
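As a sketch of what the Prometheus side of that pairing can look like, here is a minimal scrape job against the automation controller's metrics API. The job name, target host, and token file path are assumptions; the metrics path is the controller API's metrics endpoint:

```yaml
# prometheus.yml fragment - scrape automation controller metrics (illustrative)
scrape_configs:
  - job_name: automation-controller
    metrics_path: /api/v2/metrics
    scheme: https
    bearer_token_file: /etc/prometheus/controller-token   # OAuth2 token for a controller user
    static_configs:
      - targets:
          - controller.example.com
```

With this in place, the Grafana dashboard simply queries Prometheus for the controller metrics it has collected.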
This blog will guide you through:
DevOps strives to improve service delivery by bringing teams together, streamlining processes, automating tasks and making these available in a self-service manner.
Many organisations don’t realise the full benefits of DevOps for multiple reasons, including unintegrated tools, manual handovers, and lack of a unified automation solution, leading to islands of automation.
“If we develop automation and pipelines that aren’t cohesive and don’t interoperate, there’s a lot of risk of chaos.”
Jayne Groll, CEO, DevOps Institute.
Red Hat Ansible Automation Platform offers an easy-to-understand automation language, a vast array of IT ecosystem integrations, and enterprise features, such as an API and Role-Based Access Control (RBAC). This blog demonstrates how these capabilities can help accelerate your DevOps practices using simple, practical examples.
This blog covers:
Note
The examples shared in this blog are based on the “DevOps and CI/CD with automation controller” self-paced lab. Feel free to get hands-on and try it out!
Let’s explore the tools Continue reading
When we manage a large number of virtual machines (VMs), we want to reduce the differences between them and create a standard template. By using a standard template, it becomes easier to manage and propagate the same operation on the different nodes. When using VMware vSphere, it is a common practice to share a standardized VM template and use it to create new VMs. This template is often called a golden image. Its creation involves a series of steps that can be automated with Ansible. In this blog, we will see how one can create and use a new golden image.
We use Image Builder to prepare a new image. The tool provides a user interface that allows users to define custom images; in this example, we include the SSH server and tmux. The resulting image is a file in the VMDK4 format, which is not fully supported by VMware vSphere 7; this is why we use a .vmdk-4 suffix.
We upload the image using the uri module. Uploading large files using this method is rather slow; if you can, you may want to drop the file on the datastore directly (e. Continue reading
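The upload step reads roughly like the following task. This is a sketch: the datastore URL format follows the vSphere folder upload API, and the variables and file paths are assumptions rather than the post's exact code:

```yaml
# Illustrative task: upload the golden image to a vSphere datastore
# (variables and paths are assumptions)
- name: Upload VMDK to the datastore
  ansible.builtin.uri:
    url: "https://{{ vcenter_hostname }}/folder/golden-image.vmdk-4?dcPath={{ datacenter }}&dsName={{ datastore }}"
    method: PUT
    src: /tmp/golden-image.vmdk-4
    url_username: "{{ vcenter_username }}"
    url_password: "{{ vcenter_password }}"
    validate_certs: false
    status_code: [200, 201]
```

Each PUT streams the whole file through the Ansible control node, which is why dropping the file onto the datastore directly is faster for large images.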
Reproducibility and consistency are two key traits that are the driving forces behind the popularity of containers. Consistency is also a principle found within Red Hat Ansible Automation Platform. With only a few lines of YAML-formatted manifests, thousands of instances can be set up and configured uniformly. While the management of target instances is simple, it is the control node, where Ansible execution is initiated, that can be the most challenging aspect. As Ansible is written in Python, does the machine have the correct version of Python installed? Are the necessary Python modules installed? Are there any operating system dependencies needed? The list goes on and on.
These concerns, along with many others, led to a desire to leverage the benefits of containers to perform the control node’s role and eventually ushered in the concept of automation execution environments in Ansible Automation Platform 2. Running Ansible within containers is not a new concept and has been used quite successfully for some time now. However, there was no consistent process for building the container or executing Ansible from within the container. It seemed like everyone and anyone had their own version of running Ansible in a container. Ansible Continue reading
For quite a few years, I’ve had this desktop wallpaper that I really love. I don’t even remember where I got it or where it came from, so I can’t properly attribute it to anyone. I use this wallpaper from time to time when I want to be reminded to challenge myself, to learn new things, and to step outside of what is comfortable in order to explore the as-yet-unknown. Looking at this wallpaper on my desktop a little while ago, I realized that I may have started taking the inspirational phrase on this wallpaper for granted, instead of truly applying it to my life.
Here’s the wallpaper I’m talking about:
To me, this phrase—illustrated so well by the wallpaper—means taking a leap into the unknown. It means putting yourself into a position where you are forced to grow and adapt in order to survive. It’s going to be scary, and possibly even a bit painful at times. In the end, though, you will emerge different than when you started.
It’s been a while since I did that, at least from a career perspective. Yes, I did change jobs a little over a year ago when I left VMware to Continue reading
Welcome to Technology Short Take #158! What do I have in store for you this time around? Well, you’ll have to read the whole article to find out for sure, but I have links to articles on…well, lots of different topics! DNS, BGP, hardware-based security, Kubernetes, Linux—they’re all in here. Hopefully I’ve managed to find something useful for someone.
When managing infrastructure, there are times when a dynamic inventory is essential. Kubernetes is a perfect example of this: you may create multiple applications within a namespace, but you will not be able to create a static inventory because Kubernetes appends a system-generated string to uniquely identify objects.
Recently, I decided to play with using a Kubernetes dynamic inventory to manage pods, but details on how to use and apply it were a bit scarce. As such, I wanted to write a quick start guide on how you can create an Ansible Playbook to retrieve your pods within a namespace and generate a Kubernetes dynamic inventory.
This is much easier to do when you take advantage of the kubernetes.core.k8s_info module.
In my example, I’m going to use my existing ansible-automation-platform namespace, which has a list of pods, to create my dynamic inventory. In your scenario, you’d apply this to any namespace you wish to capture a pod inventory from.
When creating your inventory, the first step is to register the pods found within a particular namespace. Here’s an example of a task creating an inventory within the ansible-automation-platform Continue reading
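The excerpt cuts off, but the register-then-build pattern it describes typically looks like the following pair of tasks. This is a sketch: the namespace matches the post, while the group name and connection settings are assumptions:

```yaml
# Illustrative: register pods in a namespace, then build an in-memory inventory
- name: Get a list of pods in the namespace
  kubernetes.core.k8s_info:
    kind: Pod
    namespace: ansible-automation-platform
  register: pod_list

- name: Add each pod to an in-memory inventory group
  ansible.builtin.add_host:
    name: "{{ item.metadata.name }}"
    groups: aap_pods
    ansible_connection: kubernetes.core.kubectl
    ansible_kubectl_namespace: ansible-automation-platform
  loop: "{{ pod_list.resources }}"
  loop_control:
    label: "{{ item.metadata.name }}"
```

Subsequent plays can then target `hosts: aap_pods` and execute tasks inside the pods over the kubectl connection plugin.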