Category Archives for "Ansible Blog"

Red Hat Ansible Automation Platform Product Status Update

The Red Hat Ansible Product Team wanted to provide an update on the status and progress of Ansible’s foundational role within the product, specifically as the deliverable for implementing automation as a language: that is, Ansible as provided by aggregated low-level command line executables leveraging Python, with a YAML-based user abstraction. The packaged deliverable is currently named Ansible Base, but will be renamed Ansible Core later this year. When people refer to “Ansible,” this largely describes what they use directly as part of their day-to-day development efforts.

As an Ansible Automation Platform user, you may have noticed changes over the last year and a half to the Ansible open source project and downstream product in order to provide targeted solutions for each customer persona, focusing on enhancements to packaging, release cadence, and content development.

We’ve seen the community and enterprise user bases of Ansible continue to grow as different groups adopt Ansible due to its strengths and its ability to automate a broad set of IT domains (such as Linux, Windows, network, cloud, security, storage, etc.).  But with this success it became apparent that there is no one size Continue reading

Announcing the Community Ansible 3.0.0 Package

Version 3.0.0 of the Ansible community package marks the end of the restructuring of the Ansible ecosystem. This work is the culmination of an effort that began in 2019 to restructure the Ansible project and reshape how Ansible content is delivered. Starting with Ansible 3.0.0, the versioning and naming reflect the new structure of the project in the following ways:

  1. The Ansible community package now adopts semantic versioning, and its version numbers begin to diverge from those of the Ansible Core package (which contains the Ansible language and runtime).
  2. The package published as ansible-base in version 2.10 will be renamed ansible-core in version 2.11 for consistency.

First, a little history. In Ansible 2.9 and prior, every plugin and module was in the Ansible project (https://github.com/ansible/ansible) itself. When you installed the "ansible" package, you got the language, runtime, and all content (modules and other plugins). Over time, the overwhelming popularity of Ansible created scalability concerns. Users had to wait many months for updated content. Developers had to rely on Ansible maintainers to review and merge their content. These obvious bottlenecks needed to be addressed. 
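
To make the split concrete, here is a minimal, hedged sketch of how content is referenced once it is delivered through collections: modules shipped with the core package and modules from a separately installed collection are both addressed by fully qualified collection names. The community.general module shown is an assumption chosen for illustration and would be installed with ansible-galaxy.

```yaml
---
# Illustrative playbook: core content and collection content side by side.
# Assumes "ansible-galaxy collection install community.general" has been run.
- name: Reference content by fully qualified collection name (FQCN)
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Module shipped with the core package
      ansible.builtin.ping:

    - name: Module delivered by a separate community collection
      community.general.python_requirements_info:
```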

During the Ansible 2.10 development Continue reading

Fast vs Easy: Benchmarking Ansible Operators for Kubernetes

With Kubernetes, you get a lot of powerful functionality that makes it relatively easy to manage and scale simple applications and API services right out of the box. These simple apps are generally stateless, so Kubernetes can deploy, scale and recover from failures without any application-specific knowledge. But what if Kubernetes' native capabilities are not enough?

Operators in Kubernetes and Red Hat OpenShift clusters are a common means for controlling the complete application lifecycle (deployment, updates, and integrations) for complex container-native deployments.

Initially, building and maintaining an Operator required deep knowledge of Kubernetes' internals. They were usually written in Go, the same language as Kubernetes itself. 

The Operator SDK, which is a Cloud Native Computing Foundation (CNCF) incubator project, makes managing Operators much easier by providing the tools to build, test, and package Operators. The SDK currently incorporates three options for building an Operator:

  • Go
  • Ansible
  • Helm

Go-based Operators are the most customizable, since you're working close to the underlying Kubernetes APIs with a full programming language. But they are also the most complex, because the plumbing is directly exposed. You have to know the Go language and Kubernetes internals to be able to maintain these operators.
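
By contrast, an Ansible-based Operator maps Kubernetes resources to Ansible content through a small watches.yaml file. The sketch below is a hedged illustration; the group, kind and role names are assumptions, not taken from the benchmark in this post.

```yaml
---
# watches.yaml: tells the Ansible Operator which custom resource to watch
# and which Ansible role to run to reconcile it (names are hypothetical).
- group: cache.example.com
  version: v1alpha1
  kind: Memcached
  role: memcached
```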

Continue reading

Automating mixed Red Hat Enterprise Linux and Windows Environments

For a system administrator, a perfect world would consist of just one type of server to support and just one tool to do that work. Unfortunately, we don’t live in an ideal world. Many system admins are required to manage the day-to-day operations of very different servers with different operating systems. The complexity gets magnified when you start looking for tools to manage these distinct systems. Looking at how to automate them could lead you down a path of one automation tool per OS type. But why do that when you can have one central automation platform for all servers? In this example, we are going to look at managing Red Hat Enterprise Linux (RHEL) and Windows servers in one data center with the same group of system administrators. While we are going to cover the use case of managing web servers on both RHEL and Windows in some technical detail, be aware that this method can be used for almost any typical operational task.

 

Scenario: Managing the web service on RHEL and Windows

In this scenario, we have a system administrator that is tired of getting calls from the network Continue reading
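
As a hedged sketch of the idea, a single playbook can target both platforms. The group names and the choice of web services below are assumptions for illustration.

```yaml
---
# One playbook, two operating systems: RHEL hosts use the builtin service
# module, Windows hosts use ansible.windows.win_service.
- name: Manage the web service on RHEL
  hosts: rhel_webservers
  become: true
  tasks:
    - name: Ensure httpd is running and enabled
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true

- name: Manage the web service on Windows
  hosts: windows_webservers
  tasks:
    - name: Ensure the IIS service (W3SVC) is running
      ansible.windows.win_service:
        name: W3SVC
        state: started
        start_mode: auto
```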

Network Functions Virtualisation (NFV) Automation

Red Hat Ansible Automation Platform acts as a single pane of glass to automate manual tasks in heterogeneous cloud and virtualization environments, be it Red Hat OpenStack Platform, VMware vSphere, bare metal or the next-generation Telco cloud-native platform.

To manage cloud infrastructures like Red Hat OpenStack Platform, we need to manage not just the individual cloud services (configuration), but also the interactions and relationships between them (orchestration). Ansible Content Collections for Red Hat OpenStack Platform allow automation and management of various OpenStack offerings powered by prominent Telco network vendors such as Ericsson, Huawei and Nokia.
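
As a minimal, hedged sketch of what this looks like with the community openstack.cloud collection (the cloud name comes from a local clouds.yaml and is an assumption):

```yaml
---
# Hedged example: list servers in one OpenStack cloud defined in clouds.yaml.
# The cloud name "overcloud1" is illustrative.
- name: Query an OpenStack cloud
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Gather information about existing servers
      openstack.cloud.server_info:
        cloud: overcloud1
      register: servers

    - name: Show the result
      ansible.builtin.debug:
        var: servers
```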

Bringing the value and benefits of Ansible automation to the daily jobs of Telco NFV operations and deployments avoids a lot of manual tasks, saves time, improves consistency, and frees existing resources from repetitive work so they can focus on innovation.

The following is an example of an Asia Pacific Telco using Ansible to implement NFV automation.



Background

The Telco customer has multiple Red Hat OpenStack Platform clouds from different vendors, i.e. Ericsson, Huawei, Nokia, VMware, and would like to have an automation tool that acts as a single pane of glass to fill some gaps in Continue reading

Using New Ansible Utilities for Operational State Management and Remediation

Comparing the current operational state of your IT infrastructure to your desired state is a common use case for IT automation. This allows automation users to identify drift or problem scenarios, take corrective action, and even proactively identify and solve problems. This blog post will walk through the automation workflow for validating operational state and even automatically remediating issues.

We will demonstrate how the Red Hat supported and certified Ansible content can be used to:

  • Collect the current operational state from the remote host and convert it into normalised structured data.
  • Define the desired state criteria in a standards-based format that can be used across enterprise infrastructure teams.
  • Validate the current state data against the pre-defined criteria to identify any deviation.
  • Take corrective remediation action as required.
  • Validate input data against the data model schema.

 

Gathering state data from a remote host:

The recently released ansible.utils version 1.0.0 Collection has added support for the ansible.utils.cli_parse module, which converts text data into structured JSON format. The module can either execute the command on the remote endpoint and fetch the text response, or Continue reading
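
A minimal sketch of that workflow, assuming a Cisco IOS device reached over network_cli and the pyATS parser libraries installed on the control node (the inventory group and command are assumptions):

```yaml
---
# Hedged example: run a show command, parse the text output into
# structured data, and store it as a fact for later validation.
- name: Gather and normalise operational state
  hosts: ios_routers
  gather_facts: false
  tasks:
    - name: Run "show interfaces" and parse the response
      ansible.utils.cli_parse:
        command: show interfaces
        parser:
          name: ansible.netcommon.pyats
        set_fact: interfaces_state

    - name: Inspect the structured result
      ansible.builtin.debug:
        var: interfaces_state
```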

Ansible Network Resource Modules: Deep Dive on Return Values

The Red Hat Ansible Network Automation engineering team is continually adding new resource modules to its supported network platforms.  Ansible Network Automation resource modules are opinionated network modules that make network automation easier to manage and more consistent for those automating various network platforms in production. The goal for resource modules is to avoid creating and maintaining overly complex jinja2 templates for rendering and pushing network configuration, as well as having to maintain complex fact gathering and parsing methodologies.  For this blog post, we will cover standard return values that are the same across all supported network platforms (e.g. Arista EOS, Cisco IOS, NXOS, IOS-XR, and Juniper Junos) and all resource modules. 
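
As a hedged illustration (the platform, interface and description are assumptions), registering the result of any resource module exposes the same standard keys, such as before, after, commands and changed:

```yaml
---
# Hedged example: a Cisco IOS resource module run whose registered result
# contains the standard return values discussed in this post.
- name: Update an interface description
  hosts: ios_switches
  gather_facts: false
  tasks:
    - name: Merge the intended interface configuration
      cisco.ios.ios_interfaces:
        config:
          - name: GigabitEthernet0/1
            description: Managed by Ansible
        state: merged
      register: result

    - name: Show before/after state, generated commands and the changed flag
      ansible.builtin.debug:
        var: result
```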

Before we get started, I wanted to call out three previous blog posts covering resource modules. If you are unfamiliar with resource modules, check any of these out:

Migrating to Ansible Collections (Webinar Q&A)

Sean Cavanaugh, Anshul Behl and I recently hosted a webinar entitled “Migrating to Ansible Collections” (link to YouTube on-demand webinar replay and link to PDF slides download). This webinar focused on enabling Ansible Playbook writers looking to move to the wonderful world of Ansible Collections in existing Ansible 2.9 environments and beyond.

The webinar was far more popular than we expected, and we didn’t have enough time to answer all the questions, so we have collected them in this blog post to make the answers available to everyone.

 

I would like to use Ansible to automate an application using a REST API (for example, creating a new Bitbucket project). Should I be writing a role or a custom module? And should I then publish that as a Collection?

It depends on how much investment you’d like to make in the module or role that you develop. For example, creating a role that references the built-in Ansible URI module can be evaluated against creating an Ansible module written in Python. If you create a module, it can be utilized via a role developed by you or by the playbook author. Continue reading
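
For the role-plus-URI approach, a hedged sketch might look like the task below; the endpoint, payload and token variable are hypothetical rather than a documented Bitbucket call.

```yaml
---
# Hedged example: drive a REST API with the built-in uri module from a role
# or playbook. URL, body and authentication details are illustrative.
- name: Create a project through a REST API
  hosts: localhost
  gather_facts: false
  tasks:
    - name: POST the new project definition
      ansible.builtin.uri:
        url: "https://bitbucket.example.com/rest/api/1.0/projects"
        method: POST
        headers:
          Authorization: "Bearer {{ bitbucket_token }}"
        body_format: json
        body:
          key: DEMO
          name: Demo project
        status_code: 201
```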

Introduction to Ansible Builder

Hello and welcome to another introductory Ansible blog post, where we'll be covering a new command-line interface (CLI) tool, Ansible Builder. Please note that this article will cover some intermediate-level topics such as containers (Ansible Builder uses Podman by default), virtual environments, and Ansible Content Collections. If you have some familiarity with those topics, then read on to find out what Ansible Builder is, why it was developed, and how to use it. 

This project is currently in development upstream on GitHub and is not yet part of the Red Hat Ansible Automation Platform product.  As with all Red Hat software, our code is open and we have an open source development model for our enterprise software.  The goal of this blog post is to show the current status of this initiative, and start getting the community and customers comfortable with our methodologies, thought process, and concept of Execution Environments.  Feedback on this upstream project can be provided on GitHub via comments and issues, or provided via the various methods listed on our website.  There is also a great talk on AnsibleFest.com, titled “Creating and Using Ansible Execution Environments,” available on-demand, which Continue reading
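
To give a feel for the tool, a minimal execution environment definition in the early (version 1) format looks roughly like the sketch below; the referenced dependency files are assumptions, and details may change as the upstream project evolves. Running ansible-builder against a definition like this produces the build context and container image for the execution environment.

```yaml
---
# execution-environment.yml (hedged sketch, version 1 format)
version: 1

dependencies:
  galaxy: requirements.yml   # Ansible Content Collections to bake in
  python: requirements.txt   # Python packages required by that content
  system: bindep.txt         # system packages, in bindep format
```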

Using NetBox for Ansible Source of Truth

Here you will learn about NetBox at a high level, how it works as a Source of Truth (SoT), and how to use the NetBox Ansible Content Collection, which is available on Ansible Galaxy. The goal is to show some of the capabilities that make NetBox a terrific tool and why you will want to use NetBox as your network Source of Truth for automation!

Source of Truth

Why a Source of Truth? The Source of Truth is where you go to get the intended state of the device. There does not need to be a single Source of Truth, but you should have a single Source of Truth per data domain, often referred to as the System of Record (SoR). For example, if you have a database that maintains your physical sites and is used by teams outside of the IT domain, it should be the Source of Truth for physical sites. You can aggregate data from the physical site Source of Truth into other data sources for automation; just be aware that when it comes time to collect that data, it should come from that authoritative tool.

The first step in creating a network automation Continue reading
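
As a hedged sketch of how the Collection ties NetBox into automation, a dynamic inventory source file for the netbox.netbox inventory plugin might look like this; the URL and grouping keys are assumptions, and the API token is typically supplied via the NETBOX_TOKEN environment variable or a vaulted variable.

```yaml
---
# netbox_inventory.yml (illustrative): build Ansible inventory directly
# from NetBox, the Source of Truth.
plugin: netbox.netbox.nb_inventory
api_endpoint: https://netbox.example.com
validate_certs: true
group_by:
  - sites
  - device_roles
```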

New LibSSH Connection Plugin for Ansible Network Replaces Paramiko, Adds FIPS Mode Enablement

As Red Hat Ansible Automation Platform expands its footprint with a growing customer base, security continues to be an important aspect of organizations’ overall strategy. Red Hat regularly reviews and enhances the foundational codebase to follow better security practices. As part of this effort, we are introducing FIPS 140-2 readiness enablement by means of a newly developed Ansible SSH connection plugin that uses the libssh library. 

 

Ansible Network SSH Connection Basics

Since most network appliances don't support, or have limited capability for, the local execution of third-party software, Ansible network modules are not copied to the remote host as they are for Linux hosts; instead, they run on the control node itself. Hence, Ansible network connections can’t use the typical Ansible SSH connection plugin that is used with Linux hosts. Furthermore, because of this behavior, the performance of the underlying SSH subsystem is critical. Not only does the new LibSSH connection plugin enable FIPS readiness, it was also designed to be more performant than the existing Paramiko SSH subsystem.

The top-level network_cli connection plugin, provided by the ansible.netcommon Collection (specifically ansible.netcommon.network_cli), provides an SSH-based connection to the network appliance. It in turn calls the Continue reading
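
A hedged sketch of opting into the new subsystem: with the ansible-pylibssh Python package installed on the control node, group variables like the following select libssh for network_cli connections (the group and platform values are illustrative).

```yaml
---
# group_vars/eos.yml (illustrative)
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: arista.eos.eos
ansible_network_cli_ssh_type: libssh   # switch from paramiko to libssh
```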

Now Available: Red Hat Ansible Automation Platform 1.2

Red Hat Ansible Automation Platform 1.2 is now generally available, with an increased focus on improving efficiency, increasing productivity, and controlling risk and expense. While many IT infrastructure engineers are familiar with automating compute platforms, Ansible Automation Platform is the first holistic automation platform to help manage, automate and orchestrate everything in your IT infrastructure, from edge to datacenter. To download the newest release or get a trial license, please sign up at http://red.ht/try_ansible.

An automation platform for mission critical workloads

The Ansible project is a remarkable open source project with hundreds of thousands of users encompassing a large community. Red Hat extends this community and open source developer model to innovate, experiment and incorporate feedback to address our customers’ challenges and use cases. Red Hat Ansible Automation Platform transforms Ansible and many related open source projects into an enterprise-grade, multi-organizational automation platform for mission-critical workloads. In modern IT infrastructure, automation is no longer a nice-to-have; it is now often a requirement to run, operate and scale how everything is managed, including network, security, Linux, Windows, cloud and more.

Ansible Automation Platform includes a RESTful API for seamless integration with existing IT tools Continue reading

Introducing the Ansible Content Collection for Red Hat OpenShift

Increasing business demands are driving the need for automation to support rapid, yet stable and reliable deployments of applications and supporting infrastructure.  Kubernetes and cloud-native tools have quickly emerged as the enabling technologies essential for organizations to build the scalable open hybrid cloud solutions of tomorrow. This is why Red Hat has developed the Red Hat OpenShift Container Platform (OCP) to enable enterprises to meet these emerging business and technical challenges. Red Hat OpenShift brings together Kubernetes and other cloud-native technologies into a single, consistent platform that has been fine-tuned and enhanced for the enterprise. 

There are many similarities in how Red Hat OpenShift and Red Hat Ansible Automation Platform approach their individual problem domains, which makes them a natural fit when brought together to help make hard things easier through automation and orchestration.

We’ve released the Ansible Content Collection for Red Hat OpenShift (redhat.openshift) to enable the automation and management of Red Hat OpenShift clusters. This is the latest addition to the certified content available to subscribers of Red Hat Ansible Automation Platform in Ansible Automation Hub.

In this blog post, we will go over what you’ll find in redhat.openshift Continue reading
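
As a minimal, hedged sketch of the Collection in use (the namespace name is an assumption, and cluster credentials are taken from the active kubeconfig):

```yaml
---
# Hedged example: ensure a namespace exists on an OpenShift cluster using
# the redhat.openshift collection.
- name: Manage an OpenShift cluster
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Ensure the application namespace exists
      redhat.openshift.k8s:
        api_version: v1
        kind: Namespace
        name: example-app
        state: present
```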
