Author Archives: James Cammarata

Kubernetes Operators with Ansible Deep Dive: Part 2

In part 1 of this series, we looked at operators overall, and what they do in OpenShift/Kubernetes. We peeked at the Operator SDK, and why you'd want to use an Ansible Operator rather than other kinds of operators provided by the SDK. We also explored how Ansible Operators are structured and the relevant files created by the Operator SDK when building Kubernetes Operators with Ansible.

In the second part of this deep dive series, we'll:

  1. Create an OpenShift project and deploy a Galera Operator
  2. Check the MySQL cluster, then set up and test a Galera cluster
  3. Test scaling down and disaster recovery, and demonstrate cleaning up

Creating the project and deploying the operator

We start by creating a new project in OpenShift, which we'll simply call test:

$ oc new-project test --display-name="Testing Ansible Operator"
Now using project "test" on server "https://ec2-xx-yy-zz-1.us-east-2.compute.amazonaws.com:8443".

We won't delve too deeply into this role; however, its basic operation is:

  1. Use set_fact to generate variables using the k8s lookup plugin or other variables defined in defaults/main.yml (a minimal sketch follows this list).
  2. Determine if any corrective action needs to be taken based on the above variables. For example, one Continue reading
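
To make that concrete, here is a minimal, hypothetical sketch of what such tasks might look like inside the role. The label selector, the meta.namespace and size variables, and the file path are illustrative assumptions, not code from the actual Galera role:

---
# Hypothetical excerpt from roles/galera/tasks/main.yml
- name: Look up the Galera pods currently running in this namespace
  set_fact:
    galera_pods: "{{ lookup('k8s', kind='Pod', namespace=meta.namespace, label_selector='app=galera', wantlist=True) }}"

- name: Record how many cluster members are actually running
  set_fact:
    running_members: "{{ galera_pods | length }}"

# 'size' is assumed to come from defaults/main.yml or the custom resource spec
- name: Decide whether corrective action is needed
  debug:
    msg: "Cluster needs {{ (size | int) - (running_members | int) }} more member(s)"
  when: running_members | int < size | int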

Kubernetes Operators with Ansible Deep Dive: Part 1

This deep dive series assumes the reader has access to a Kubernetes test environment. A tool like minikube is an acceptable platform for the purposes of this article. If you are an existing Red Hat customer, another option is spinning up an OpenShift cluster through cloud.redhat.com. This SaaS portal makes trying OpenShift a turnkey operation.

In this part of this deep dive series, we'll:

  1. Take a look at operators overall, and what they do in OpenShift/Kubernetes.
  2. Take a quick look at the Operator SDK, and why you'd want to use an Ansible operator rather than other kinds of operators provided by the SDK.
  3. Finally, explore how Ansible Operators are structured and the relevant files created by the Operator SDK.

What Are Operators?

For those who may not be very familiar with Kubernetes: at its simplest, it is a resource manager. Users specify how much of a given resource they want, and Kubernetes manages those resources to achieve the state the user specified. These resources can be pods (which contain one or more containers), persistent volumes, or even custom resources defined by users.
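
As a concrete illustration (not taken from the original article), a user might declare the desired state of a resource with a manifest like this minimal Deployment; Kubernetes then works continuously to keep three replicas of the pod running:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-web
spec:
  replicas: 3          # desired state: how many copies Kubernetes should maintain
  selector:
    matchLabels:
      app: example-web
  template:
    metadata:
      labels:
        app: example-web
    spec:
      containers:
        - name: web
          image: nginx:1.25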

This makes Kubernetes useful for managing resources that don't contain any state (like Continue reading

Kubernetes Operators with Ansible Deep Dive: Part 1

Deploying applications on Red Hat OpenShift or Kubernetes has come a long way. These days, it's relatively easy to use OpenShift's GUI or something like Helm to deploy applications with minimal effort. Unfortunately, these tools don't typically address the needs of operations teams tasked with maintaining the health or scalability of the application - especially if the deployed application is something stateful like a database. This is where Operators come in.

An Operator is a method of packaging, deploying, and managing a Kubernetes application. Kubernetes Operators with Ansible exist to help you encode the operational knowledge of your application in Ansible.

What can we do with Ansible in a Kubernetes Operator? Because Ansible is now part of the Operator SDK, anything an Operator can do should also be possible with Ansible. It’s now possible to write an Operator as an Ansible Playbook or Role to manage components in Kubernetes clusters. In this blog, we're going to be diving into an example Operator.
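
As a rough illustration of how that wiring works, an Ansible-based Operator maps each watched custom resource kind to a role or playbook through a watches.yaml file. The group, version, kind, and role path below are placeholder values, not ones from the example Operator in this series:

---
# watches.yaml (illustrative values): when a resource of this kind changes,
# the Ansible Operator runs the mapped role to reconcile it.
- version: v1alpha1
  group: example.com
  kind: Galera
  role: /opt/ansible/roles/galera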

For more information on Kubernetes Operators with Ansible, please refer to the following resources:

Ansible Is 5 Years Old Today!

Five years ago today, Michael DeHaan created this commit:


commit f31421576b00f0b167cdbe61217c31c21a41ac02
Author: Michael DeHaan
Date:   Thu Feb 23 14:17:24 2012 -0500

    Genesis.

When you create something you intend to release as open-source software, you never know if it will be something others are actually interested in using (much less contributing to).

Michael invited me to join Ansible when it was just over a year old as a project, and I have seen it grow from an already wildly popular project into something used by people around the world. What makes Ansible the strongest, though, by far, is its community of users and contributors. So join us in wishing Ansible a happy birthday by sharing how you innovate with Ansible!

Tweet #AnsibleBirthday


How to Extend Ansible Through Plugins

Did you know that a large portion of Ansible’s functionality comes from the Ansible plugin system? These important pieces of code augment Ansible’s core functionality, such as parsing and loading inventory and Playbooks, running Playbooks, and reading the results. Essentially, Ansible uses plugins to extend what the system is doing under the hood.

In this blog, I’ll review each of these plugins and offer a high-level overview on how to write your own plugin to extend Ansible functionality.

Action Plugins

One of the most critical types of plugins used by Ansible is the action plugin. Any time you run a module, Ansible first runs an action plugin.

Action plugins are a layer between the executor engine and the module, and they allow controller-side actions to be taken before the module is executed. A good example of this is the template module. If you look at template.py in the modules directory, it’s basically a Python stub with documentation strings; everything is done by the action plugin. The template action plugin itself creates the template file locally as a temporary file, and then uses the copy or file modules to push it out to the target system.
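
To show the general shape of an action plugin, here is a minimal, hypothetical sketch (not the actual template plugin): a controller-side class that subclasses ActionBase and does its work in run() before any module executes. The plugin name and its 'name' argument are made up for illustration:

# plugins/action/hello.py - a minimal, hypothetical action plugin
from ansible.plugins.action import ActionBase

class ActionModule(ActionBase):

    def run(self, tmp=None, task_vars=None):
        # Let the base class set up the shared result dictionary.
        result = super(ActionModule, self).run(tmp, task_vars)
        # Controller-side work happens here, before any module execution.
        name = self._task.args.get('name', 'world')
        result.update(changed=False, msg='Hello, %s!' % name)
        return result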

If Ansible finds an action plugin with the Continue reading

Ansible 2.0 Has Arrived

After a year of work, we are extremely proud to announce that Ansible 2.0 ("Over the Hills and Far Away") has been released and is now generally available. This is by far one of the most ambitious Ansible releases to date, and it reflects an enormous amount of work by the community, which continues to amaze me. Approximately 300 users have contributed code to what has been known as “v2” for some time, and 500 users have contributed code to modules since the last major Ansible release.

Why Did We Start V2?

There are many pitfalls to refactoring software, so why did we decide to tackle such a major project? At the time we started the work on v2, Ansible was approximately three years old and had recently crossed the 1,000-contributor mark. This huge rate of growth also resulted in a degree of technical debt in the code, which was beginning to show as we continued to add features.

Ultimately, we decided it was worth it to take a step back and rework some aspects of the codebase which had been prone to having features bolted on without a clear-cut architectural vision. We also rewrote from scratch much of the code which was responsible Continue reading
