Archive

Category Archives for "Ansible Blog"

It takes a community: how partners play a key role in event driven automation

Event-driven automation is increasingly being adopted because of the strong benefits it delivers in managing huge amounts of complexity across multiple clouds, a multi-device remote workforce, and growing edge implementations. In a digital world, maintaining resilience and reliability is essential, and event-driven automation helps teams meet these needs while working around resource and skills gaps.

This advanced automation technique can be used to address festering problems before there is a full-blown outage, improve agility and resilience to meet the demands of the business, and maintain consistency to avoid downtime and meet governance requirements. It also frees time spent on routine tasks so IT teams can focus on the innovations that matter.  

 

Partners benefit from enabling end-to-end event-driven automation

For independent software vendors (ISVs), solution providers and service partners, this is a great opportunity to create easy-to-implement solutions for your customers and help them work with modern automation techniques that will truly make an operational impact. Event-driven technologies – including network, security, monitoring tools, observability solutions and workload optimization tools – must be cooperative players in a larger ecosystem. 

Today, we invite ISVs and consulting/service partners to create event-driven automation content that makes it easy for Continue reading

Using Ansible and Packer, From Provisioning to Orchestration

Red Hat Ansible Automation Platform can help you orchestrate, operationalize and govern your hybrid cloud deployments.  In my last public cloud blog, I talked about Two Simple Ways Automation Can Save You Money on Your AWS Bill and similarly to Ashton’s blog Bringing Order to the Cloud: Day 2 Operations in AWS with Ansible, we both wanted to look outside the common public cloud use-case of provisioning and deprovisioning resources and instead look at automating common operational tasks.  For this blog post I want to cover how the Technical Marketing team for Ansible orchestrates a pipeline for demos and workshops with Ansible and how we integrate that with custom AMIs (Amazon Machine Images) created with Packer.  Packer is an open source tool that allows IT operators to standardize and automate the process of building system images.

For some of our self-paced interactive hands-on labs on Ansible.com, we can quickly spin up images in seconds.  In an example automation pipeline we will:

  1. Provision a virtual instance.
  2. Use Ansible Automation Platform to install an application; in my case, I am literally installing our product Ansible Automation Platform (is that too meta?).
  3. After the application Continue reading
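The provisioning step of the pipeline above could be sketched with the amazon.aws.ec2_instance module. This is a hedged illustration only; the AMI variable, instance names, and sizes are placeholders, not values from the post:

```yaml
# Hypothetical first step of the pipeline: launch an instance
# from the custom Packer-built AMI. All IDs and names are placeholders.
- name: Provision a virtual instance from the custom AMI
  amazon.aws.ec2_instance:
    name: aap-demo-node
    image_id: "{{ packer_ami_id }}"      # AMI produced by the Packer build
    instance_type: t3.large
    key_name: demo-keypair
    vpc_subnet_id: "{{ demo_subnet_id }}"
    state: running
  register: demo_instance
```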

Dynamic inventory plugin collection for network device management


Tackling the complexities of enterprise inventories

One common challenge our customers face is the need to track hosts from multiple sources: LDAP, cloud providers, and enterprise CMDB systems. Using a dynamic inventory allows users to integrate with these systems and update the Ansible inventory as it varies over time, with hosts spinning up and shutting down in response to business demands.

Ansible supports two ways to connect with external inventory: Inventory plugins and inventory scripts. 

Today we are going to cover dynamic inventory plugins as a Collection for network device management through an /etc/hosts file. The same type of setup can be used to create any dynamic inventory from different sources, from /etc/hosts files to INI files or even CSVs. 
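To illustrate the idea outside the Collection, here is a minimal Python sketch (a simplification, not the plugin's actual code) that parses /etc/hosts-style text into the JSON structure Ansible expects from a dynamic inventory's `--list` output:

```python
import json


def hosts_to_inventory(hosts_text: str, group: str = "network") -> dict:
    """Parse /etc/hosts-style text into Ansible's --list inventory JSON."""
    hosts = {}
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if not line:
            continue
        addr, *names = line.split()
        if names:  # first hostname on the line becomes the inventory name
            hosts[names[0]] = {"ansible_host": addr}
    return {
        group: {"hosts": sorted(hosts)},
        "_meta": {"hostvars": hosts},
    }


if __name__ == "__main__":
    sample = "10.0.0.1 rtr1\n10.0.0.2 rtr2 rtr2.example.com\n# comment\n"
    print(json.dumps(hosts_to_inventory(sample), indent=2))
```

A real inventory plugin wraps this parsing logic in the plugin API so that hosts refresh automatically each time the inventory is loaded.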

 

The first mission: Where is the source of truth?

We are going to start by figuring out the source of truth of the inventory we want to import. 

If you want to test and use this inventory plugin you can find the code in this Github repository: 

https://github.com/jmcleroy/inventoryplugin.git

In this case, it will be an /etc/hosts file externally stored in the Github/Gitlab inventory plugin repo as a test, in a similar fashion this file Continue reading

Migrate to Azure Monitor Agent on Azure Arc using Red Hat Ansible Automation Platform


Azure Arc is becoming the default Microsoft Azure service for connecting non-Azure infrastructure into Azure monitoring and administration. Azure has also issued a deprecation notice for the Azure Log Analytics agents: Microsoft Monitoring Agent and Log Analytics (OMS). Azure Monitor Agent replaces these agents, introducing a simplified, flexible method of configuring data collection called Data Collection Rules. To leverage Azure Monitor Agent with their non-Azure servers, customers will need to onboard their machines to Azure Arc-enabled servers. 

This article covers how to use Red Hat Ansible Automation Platform to migrate servers that are currently using the Azure Log Analytics Agent to the Azure Monitor Agent on Azure Arc. When you have completed the configuration in this blog, you will be able to run a workflow against an automation controller inventory that performs the following tasks:

  1. Ensure the Azure Arc agent is installed on each machine; if it is not, install it.
  2. Enable the Azure Monitor Agent on Arc enabled machines.
  3. Disable the Log Analytics Agent.
  4. Uninstall the Log Analytics Agent.
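The first two steps above could be sketched as Ansible tasks like the following. This is a hedged illustration only; the package name, Azure CLI flags, and extension names are assumptions for the Linux case, not taken from the post:

```yaml
# Step 1: ensure the Azure Connected Machine (Arc) agent is present.
# The package name "azcmagent" is an assumption.
- name: Ensure the Azure Arc agent is installed
  ansible.builtin.package:
    name: azcmagent
    state: present

# Step 2: enable the Azure Monitor Agent extension on the Arc-enabled
# machine, delegating the Azure CLI call to the control node.
- name: Enable the Azure Monitor Agent extension
  ansible.builtin.command: >-
    az connectedmachine extension create
    --machine-name {{ inventory_hostname }}
    --resource-group {{ resource_group }}
    --name AzureMonitorLinuxAgent
    --publisher Microsoft.Azure.Monitor
    --type AzureMonitorLinuxAgent
  delegate_to: localhost
  changed_when: true
```

Steps 3 and 4 would follow the same pattern, disabling and then removing the Log Analytics Agent on each host.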

Since the example workflow in this blog post is modular, you may also implement the Continue reading

Why Event-Driven Matters


Life comes down to moments. These events are often how we define our achievements, successes, and failures throughout life. Just like our daily lives, IT organizations and teams can also have these defining moments, where you will often hear phrases like the "great database crash of '98." Many of these memorable IT moments stem from limiting ourselves to a reactive approach when it comes to managing our IT assets. This is where event-driven automation can help us move from reactive to proactive IT management – well before we have the next great issue or moment in our IT teams. 

In an IT context, events come from monitoring or other tools to tell us when something needs attention. With this event data, we are able to respond faster with automated tasks, resolving issues or enhancing observation where needed, often so we can identify and address festering issues before they become full-blown problems. A byproduct of this is that teams are now able to spend more time innovating, and can realize greater work-life balance, because troubleshooting patterns and remediation approaches are automatically initiated based on an initial event in your environments. 

 

Events on the Infrastructure

Consider Continue reading

How network engineers can get more sleep using Red Hat Ansible Automation Platform: Your guide to AnsibleFest 2022


Since connectivity is critical to all types of business applications including cloud apps, managing and maintaining the network often falls into the overnight hours. This is great for the business, but it puts a large wrinkle in your work-life balance. Luckily, AnsibleFest is here to help you get more sleep! 

Continued use of an overnight network management work model can leave you wondering what other technology career options are available… which may mean that your team is smaller than it used to be due to turnover. Search LinkedIn jobs for “Ansible engineer” and you will find as many as 166,000 roles that ask for some form of Ansible skills. Udemy published the 2020 Workplace Learning Trends Report: The Skills of the Future that describes increased enthusiasm for learning technologies such as automation (page 20) and Cisco cites market research showing nearly 23% growth in network automation from 2022 to 2028. If you are in networking, automation can be very important to boost your career. 

Across networking domains, automation plays a key role in helping to balance working hours, so Ansible skills can be good to develop. Red Hat Ansible Automation Platform makes management and other tasks faster and Continue reading

Ansible vs. Terraform Demystified

 

Ansible and Terraform are two very powerful but unique open source IT tools that are often compared in competitive discussions. We often see comparisons of the two tools - but many times, these comparisons are done purely from a "spec sheet" perspective. This type of comparison, while an interesting read, doesn't take into account how the products are used at scale, or whether a binary all-or-nothing choice between them is even realistic. We at Red Hat have been helping enterprises for over 20 years and have a good idea how most IT administrators are using these two tools in production. Although both tools can generally do most things, we typically see that they are each leveraged by means of their biggest strengths as opposed to having to choose one or the other.

Spoiler:  The two tools are better together and can work in harmony to create a better experience for developers and operations teams.

Both Ansible and Terraform are open source tools with huge user bases, which often leads to cult followings because of the classical “hammer” approach.  That is, if my only tool is a hammer, every problem will start resembling a nail. This ends up trying to solve new Continue reading

The anatomy of automation execution environments


Red Hat Ansible Automation Platform 2 introduced major architectural changes, like automation mesh and automation execution environments, that help extend Ansible automation across your organization in a flexible manner, providing a single solution to all your organizational and hybrid cloud automation needs.

Automation execution environments are container images that act as Ansible runtimes for automation controller jobs. Ansible Automation Platform also includes a command-line tool called ansible-builder (execution environment builder) that lets you create automation execution environments by specifying Ansible Content Collections and Python dependencies.

In general, an automation execution environment includes:

  • A version of Python.
  • A version of ansible-core.
  • Python modules/dependencies.
  • Ansible Content Collections (optional).
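These requirements are declared in an execution-environment definition file that ansible-builder consumes. A minimal sketch, in which the referenced dependency file names are assumptions:

```yaml
---
version: 1
dependencies:
  galaxy: requirements.yml   # Ansible Content Collections to include
  python: requirements.txt   # Python modules/dependencies
  system: bindep.txt         # operating system packages
additional_build_steps:
  prepend: |
    RUN pip3 install --upgrade pip setuptools
```

Running ansible-builder against a definition like this produces a container image that bundles Python, ansible-core, and the listed dependencies.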

In this blog, I will take you through the inner workings of ansible-builder and how all the above requirements are packaged inside automation execution environments and delivered as part of Ansible Automation Platform.

 

A tale of two ansible-builder packages

Like all projects at Red Hat, ansible-builder follows an open development model and an upstream-first approach. The upstream project for ansible-builder is distributed as a Python package, which is then packaged into an RPM for the Ansible Automation Platform downstream. This also means that there are different ways to install the upstream package and the downstream ansible-builder.

NOTE: Continue reading

The Automation Experience: AnsibleFest 2022


It is almost time for our favorite event of the year! AnsibleFest is back as an in-person event, and it’s the only place to feel the full power of automation. Join us October 18 and 19 in Chicago, and immerse yourself in the movement that’s made Ansible an industry-leading automation technology.

The AnsibleFest 2022 session catalog and agenda builder are now available. That means you can see all this year’s session content and build your own custom experience. Let’s take a closer look at all that AnsibleFest 2022 has to offer.

 

Breakout Sessions

This year we will be featuring six content tracks. These tracks include: Getting Started with Automation, the Ansible Community, Automation Applied, Ansible Automation Platform, Advanced Ansible, and Automation Adoption. Sessions across these tracks will cover a range of focus areas, including network automation, security automation, automation at the edge, IT leaders, developers, and several more. We also have a wide range of customer, partner, and Ansible contributor talks to join. You can see a list of all the sessions being offered in our session catalog.

 

Workshops and Labs

Get hands-on experience at AnsibleFest in our workshops and labs. We will have a mixture of self-paced Continue reading

Monitoring Red Hat Ansible Automation Platform on Red Hat OpenShift – The Easy Way


As Red Hat Ansible Automation Platform enables teams and organizations to drive their automation from across the cloud and on-premise, keeping Ansible Automation Platform healthy with the ability to monitor key metrics becomes paramount.

This blog post demonstrates how to monitor the API metrics provided by an Ansible Automation Platform environment when deployed within Red Hat OpenShift.

 

What will we use to monitor the API metrics?

Prometheus and Grafana. 

Prometheus is an open source monitoring solution for collecting and aggregating metrics. Partner Prometheus’ monitoring capabilities with Grafana, an open source solution for running data analytics and pulling up metrics in customizable dashboards, and you get a real-time visualization of metrics to track the status and health of your Ansible Automation Platform.

 

What can we expect?

Expect to be fast-tracked to a deployment of Ansible Automation Platform that is monitored by Prometheus paired with a Grafana Ansible Automation Platform dashboard showing those metrics in real time.

This blog will guide you through:

  • The deployment of Prometheus using an operator.
  • Configuring your Prometheus deployment to capture Ansible Automation Platform metrics.
  • The deployment of Grafana using an operator.
  • Configuring Grafana with a pre-built dashboard that displays the Ansible Automation Platform Continue reading
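As one hedged example of the second bullet, a Prometheus Operator ServiceMonitor could point Prometheus at the controller's metrics API. The namespace, labels, and secret names here are assumptions for illustration:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: aap-metrics
  namespace: aap
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: automation-controller   # assumed service label
  endpoints:
    - port: http
      path: /api/v2/metrics/
      bearerTokenSecret:
        name: controller-metrics-token   # secret holding an API token
        key: token
```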

DevOps and CI/CD with automation controller

 

DevOps strives to improve service delivery by bringing teams together, streamlining processes, automating tasks and making these available in a self-service manner.

Many organisations don’t realise the full benefits of DevOps for multiple reasons, including unintegrated tools, manual handovers, and lack of a unified automation solution, leading to islands of automation.

 

“If we develop automation and pipelines that aren’t cohesive and don’t interoperate, there’s a lot of risk of chaos.”

Jayne Groll, CEO, DevOps Institute.

 

Red Hat Ansible Automation Platform offers an easy-to-understand automation language, a vast array of IT ecosystem integrations, and enterprise features, such as an API and Role-Based Access Control (RBAC). This blog demonstrates how these capabilities can help accelerate your DevOps practices using simple, practical examples. 

This blog covers:

  • Using Ansible Automation Platform to automate DevOps tooling configurations.
  • Integration of Ansible Automation Platform into existing DevOps environments.
  • Orchestrating DevOps workflows using automation controller.
  • Using controller approvals to allow for final sign-off of services before production deployment.

Note

The examples shared in this blog are based on the “DevOps and CI/CD with automation controller” self-paced lab. Feel free to get hands-on and try it out!

 

Environment overview

Let’s explore the tools Continue reading

Managing a VMware Template Lifecycle with Ansible

When we manage a large number of virtual machines (VMs), we want to reduce the differences between them and create a standard template. By using a standard template, it becomes easier to manage and propagate the same operation on the different nodes. When using VMware vSphere, it is a common practice to share a standardized VM template and use it to create new VMs. This template is often called a golden image. Its creation involves a series of steps that can be automated with Ansible. In this blog, we will see how one can create and use a new golden image.

 

Prepare the golden image

We use Image Builder to prepare a new image. The tool provides a user interface that allows users to define custom images. In this example, we include the SSH server and tmux. The resulting image is a file in the VMDK4 format, which is not fully supported by VMware vSphere 7; this is why we use a .vmdk-4 suffix.

We upload the image using the uri module. Uploading large files using this method is rather slow. If you can,  you may want to drop the file on the datastore directly (e. Continue reading
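The upload step described above can be sketched with the ansible.builtin.uri module; the vSphere folder URL, file path, and credential variables are illustrative assumptions:

```yaml
- name: Upload the golden image to the datastore
  ansible.builtin.uri:
    url: "https://{{ vcenter_hostname }}/folder/golden-image.vmdk-4?dsName={{ datastore }}"
    method: PUT
    src: /tmp/golden-image.vmdk-4   # local VMDK produced by Image Builder
    headers:
      Content-Type: application/octet-stream
    url_username: "{{ vcenter_username }}"
    url_password: "{{ vcenter_password }}"
    validate_certs: false
    status_code: [200, 201]
```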

Creating automation execution environments using ansible-builder and Shipwright

Reproducibility and consistency are two key traits that are the driving forces behind the popularity of containers. Consistency is also a principle found within Red Hat Ansible Automation Platform. With only a few lines of YAML-formatted manifests, thousands of instances can be set up and configured uniformly. While the management of target instances is simple, it is the control node, or where the Ansible execution is initiated, that can  be the most challenging aspect. As Ansible is written in Python, does the machine have the correct version of Python installed?  Are the necessary Python modules installed? Are there any operating system dependencies needed? The list goes on and on.

These concerns, along with many others, led to a desire to leverage the benefits of containers to perform the control node’s role and eventually ushered in the concept of automation execution environments in Ansible Automation Platform 2. Running Ansible within containers is not a new concept and has been used quite successfully for some time now. However, there was no consistent process for building the container or executing Ansible from within the container. It seemed like everyone and anyone had their own version of running Ansible in a container. Ansible Continue reading

Creating Kubernetes Dynamic Inventories with kubernetes.core Modules


When managing infrastructure, there are times when a dynamic inventory is essential. Kubernetes is a perfect example of this where you may create multiple applications within a namespace but you will not be able to create a static inventory due to Kubernetes appending a systems-generated string to uniquely identify objects. 

Recently, I decided to play with using a Kubernetes dynamic inventory to manage pods, but finding the details on how to use and apply it was a bit scarce. As such, I wanted to write a quick start guide on how you can create an Ansible Playbook to retrieve your pods within a namespace and generate a Kubernetes dynamic inventory. 

This is much easier to do when you take advantage of the kubernetes.core.k8s_info module.

In my example, I’m going to take advantage of using my existing ansible-automation-platform namespace that has a list of pods to create my dynamic inventory. In your scenario, you’d apply this to any namespace you wish to capture a pod inventory from. 

When creating your inventory, the first step is to register the pods found within a particular namespace. Here’s an example of a task creating an inventory within the ansible-automation-platform Continue reading
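The register-then-add pattern described above can be sketched as follows. The namespace and group name follow the post; the connection setting is an assumption:

```yaml
- name: Get the pods in the ansible-automation-platform namespace
  kubernetes.core.k8s_info:
    kind: Pod
    namespace: ansible-automation-platform
  register: pod_list

- name: Build an in-memory dynamic inventory from the pods
  ansible.builtin.add_host:
    name: "{{ item.metadata.name }}"
    groups: aap_pods
    ansible_connection: kubernetes.core.kubectl   # connect via kubectl exec
  loop: "{{ pod_list.resources }}"
```

Subsequent plays can then target the aap_pods group, even though the pod names include Kubernetes' generated suffixes.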

Peeling back the layers and understanding automation mesh

Red Hat Ansible Automation Platform 2 features an awesome new way to scale out your automation workloads: automation mesh.  If you are unfamiliar with automation mesh, I highly recommend reading Craig Brandt’s blog post What's new: an introduction to automation mesh which outlines how automation mesh can simplify your operations and scale your automation globally.  For this blog post, I want to focus on the technical implementation of automation mesh, what network ports it is using and how you can secure it. 

To quickly summarize both Craig's blog post and our documentation, we separated the control plane (which includes the webUI and API) from the execution plane (where an Ansible Playbook is executed) in Ansible Automation Platform 2. This allows you to choose where jobs run across execution nodes, so you can deliver and run automation closer to the devices that need it. In our implementation, there are four different types of nodes:

  • Control plane nodes: These are accessed via the webUI and API. Execution capabilities are disabled on these nodes. 
  • Execution nodes: This is where Ansible Playbooks are actually executed.  This node will run an automation execution environment which in Continue reading

Using Red Hat Ansible Automation Platform to Enable a Policy as Code Solution


Nelson, the director of operations of a large manufacturing company, told me that he has a highly leveraged staff. That is, most of the work on the company's critical cloud project is being done by consultants and vendors, and not by their team members. Unfortunately, much of the staff’s time is spent making sure that the infrastructure as code (IaC) implementation is in compliance with the standards and policies that his company has for cloud resources. Nelson said, “Oftentimes the code works, but doesn’t conform to naming conventions, or is missing labels, or has security issues, which impacts downstream workflows and applications. We need the environment to conform to our policies, and I don’t want my staff burning cycles to ensure that this is the case.” This was the reason why he brought me in to run a proof of concept (POC). The POC would validate what would become a Policy as Code solution based on one of the common IaC products.  

When the technical team and I reviewed the proof points for the POC, and it was a standard demonstration of Policy as Code capabilities, it was determined that Red Hat Ansible Automation Platform would satisfy all Continue reading

What’s New: Cloud Automation with amazon.aws 4.0.0

When it comes to Amazon Web Services (AWS) infrastructure automation, the latest release of the amazon.aws Collection brings a number of enhancements to improve the overall user experience and speed up the process from development to production.

This blog post goes through changes and highlights on what’s new in the 4.0.0 release of this Ansible Content Collection.

 

Forward-looking Changes

With the recent release, we have included numerous bug fixes and features that further solidify the amazon.aws Collection. Let's go through some of them!

 

New Features Highlights

Some of the new features available in this Collection release are listed below.

 

EC2 Subnets in AWS Outposts

AWS Outposts is a fully managed service that extends AWS infrastructure to on-premises locations, reducing latency and data processing needs. EC2 subnets can be created on AWS Outposts by specifying the Amazon Resource Name (ARN) of the AWS Outpost during creation.

The new outpost_arn option of the ec2_vpc_subnet module allows you to do that.

- name: Create an EC2 subnet on an AWS Outpost
  amazon.aws.ec2_vpc_subnet:
    state: present
    vpc_id: vpc-123456
    cidr: 10.1.100.0/24
    outpost_arn: "{{ outpost_arn }}"
    tags:
      "Environment": "production"

 

New EC2 Instance Continue reading

When localhost isn’t what it seems in Red Hat Ansible Automation Platform 2


With Red Hat Ansible Automation Platform 2 and the advent of automation execution environments, some behaviors are now different. This blog explains the use case around using localhost and options for sharing data and persistent data storage for VM-based Ansible Automation Platform 2 deployments.

With Ansible Automation Platform 2  and its containerised execution environments, the concept of localhost has altered. Before Ansible Automation Platform 2, you could run a job against localhost, which translated into running on the underlying tower host. You could use this to store data and persistent artifacts, although this was not always a good idea or best practice.

Now with Ansible Automation Platform 2, localhost means you’re running inside a container, which is ephemeral in nature. This means we must do things differently to achieve the same goal. If you consider this a backwards move, think again. In fact, localhost is now no longer tied to a particular host, and with portable execution environments, this means it can run anywhere, with the right environment and software prerequisites already embedded into the execution environment container.

So, if we now have a temporal runtime container and we want to use existing data or persistent data, Continue reading

Terraforming clouds with Ansible

 

The wheel was invented in the 4th millennium BC. Back then, I am sure the wheel was the hottest thing on the block, and only the most popular Neolithic cool cats had wheels. Fast forward to the present day, and we can all agree that the wheel is nothing really to write home about. It is part of our daily lives. The wheel is not sexy. If we want the wheel to become sexy again we just need to slap a sports car together with all the latest gadgets and flux capacitors in a nice Ansible red, and voilà! We have something we want to talk about. 

Like the sports car, Red Hat Ansible Automation Platform has the same ability to turn existing resources into something a bit more intriguing. It can enhance toolsets and extend them further into an automation workflow. 

Let's take Terraform. Terraform is a tool used often for infrastructure-as-code. It is a great tool to use when provisioning infrastructure in a repeatable way across multiple large public cloud providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Many organizations use Terraform for quick infrastructure provisioning every Continue reading
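For example, an existing Terraform project can be pulled into an Ansible workflow with the community.general.terraform module; the project path below is a placeholder:

```yaml
- name: Apply an existing Terraform plan from Ansible
  community.general.terraform:
    project_path: /path/to/terraform/project
    state: present
  register: tf_result

- name: Show the Terraform outputs
  ansible.builtin.debug:
    var: tf_result.outputs
```

From there, the Terraform outputs (instance IPs, resource IDs) can feed later plays, which is where the two tools complement each other.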

Taking Automation to the Next Level: Using Ansible + GitOps to Manage the Lifecycle of a Containerized Application


One of the great advantages of combining GitOps with Ansible is that you get to streamline the automation delivery and the lifecycle of a containerized application.

With the abilities of GitOps we get to:

  • Standardize configurations of our applications.
  • Inherit the benefits of version control of our configurations.
  • Easily track changes of the configuration settings making fixing issues easier.
  • Have one source of truth for our applications.

Combine the above with Ansible and you have everything you need to accomplish configuration consistency for a containerized app anywhere that you automate. 

That leads us to, “how do we combine Ansible and GitOps to manage the lifecycle of a containerized application?”

Simple. By creating an Ansible workflow that is associated with a Git webhook that is part of my application’s repository.

What is a Git webhook you ask?

Git webhooks are defined as a method to deliver notifications to an external web server whenever certain actions occur on a repository.

For example, when a repository is updated, this could trigger an event that could trigger CI builds, deploy an environment, or in our case, modify the configuration of our containerized application. 

A webhook provides the ability to execute specified Continue reading
