Following the previous article, where we saw how to build multi-arch images using GitHub Actions, we will now show how to do the same thing using another CI. In this article, we'll show how to use GitLab CI, which is part of GitLab.
To start building your image with GitLab CI, you will first need to create a .gitlab-ci.yml file at the root of your repository, commit it, and push it.
image: docker:stable

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

build:
  stage: build
  script:
    - docker version
This should result in a build output that shows the version of the Docker CLI and Engine:
We will now install Docker buildx. Because GitLab CI runs everything in containers and lets you choose the image used to start those containers, we can use one with buildx preinstalled, like the one we used for CircleCI. And as with CircleCI, we need to start a builder instance.
image: jdrouet/docker-with-buildx:stable

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

build:
  stage: build
  script:
    - docker buildx create --use
    - docker Continue reading
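The excerpt stops mid-command, but the script section typically continues by logging in to a registry and running a multi-platform build. Here is a minimal sketch, assuming GitLab's built-in registry and its predefined CI variables; the platform list and tag are illustrative, not the article's exact example:

    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 --push -t "$CI_REGISTRY_IMAGE:latest" .

Note that building for non-native platforms through the docker:dind service may additionally require QEMU binfmt handlers to be registered on the runner.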
The Red Hat Ansible Automation Platform makes IT automation simple and powerful. In line with the fast-growing adoption and community, we want Red Hat's business partners and customers to be familiar with the Red Hat Ansible Automation Platform. Of course, there are lots of resources for learning about Ansible out there: books, blogs, tutorials and training. But the people at Red Hat working behind the scenes on Ansible created something especially useful: the Red Hat Ansible Automation Platform workshops!
As a Red Hat partner, whether you are planning to run an Ansible demo, train your internal staff or deliver a workshop to get your customers started with Ansible, the Ansible workshops are the way to go! Instead of creating your own workshop framework and content, you can focus on delivering Ansible enablement with consistent messaging through tested and curated exercises created by Red Hat. Using consistent, scalable content following best practices allows you to concentrate on your main business: building solutions for your customers and enabling the customer teams on the corresponding technology.
The Ansible workshops provide you with everything you need to successfully run workshops, including presentations, guided exercises and dedicated Continue reading
Over the last couple of weeks or so, I’ve been using my 2017 MacBook Pro (running macOS “Mojave” 10.14.6) more frequently as my daily driver/primary workstation. Along with it, I’ve been using the Anker PowerExpand Elite 13-in-1 Thunderbolt 3 Dock. In this post, I’d like to share my experience with this dock and provide a quick review of the Anker PowerExpand Elite.
Note that I’m posting this as a customer of Anker. I paid for the PowerExpand Elite out of my own pocket, and haven’t received any compensation of any kind from anyone in return for my review. This is just me sharing my experience in the event it will help others.
The dock is both smaller than I expected (it measures 5 inches by 3.5 inches by 1.5 inches) and yet heavier than I expected. It feels solid and well-built. It comes with a (rather large) power brick and a Thunderbolt 3 cable to connect to the MacBook Pro. Setup was insanely easy; plug it in, connect it to the laptop, and you’re off to the races. (I did need to reboot my MacBook Pro for macOS to recognize the network interface in Continue reading
One of the crucial pieces of the Red Hat Ansible Automation Platform is Ansible Tower. Ansible Tower helps scale IT automation, manage complex deployments and speed up productivity. A strength of Ansible Tower is its simplicity, which also extends to the installation routine: when the non-container version is installed, a simple script reads in variables from an initial configuration file and deploys Ansible Tower. The same script and initial configuration can even be re-used to extend the setup and add, for example, more cluster nodes.
However, this initial configuration includes passwords for the database, Ansible Tower itself and so on. In many online examples, these passwords are stored in plain text. One question I frequently get as a Red Hat Consultant is how to protect this information. A common solution is to simply remove the file after you complete the installation of Ansible Tower. But there are reasons you may want to keep the file around. In this article, I will present another way to protect the passwords in your installation files.
For some quick background, setup.sh is the script used to install Ansible Tower and is provided in Continue reading
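The excerpt ends here, but for context, one common way to keep installer passwords out of plain text is Ansible Vault. The following is only a rough sketch, not necessarily the approach the article goes on to describe; the variable names match the standard Tower inventory, while the file name and the exact setup.sh invocation (passing extra options through to ansible-playbook) are assumptions and should be checked against your Tower version:

$ cat passwords.yml
admin_password: 'S0m3S3cret!'
pg_password: 'An0therS3cret!'
$ ansible-vault encrypt passwords.yml
New Vault password:
Confirm New Vault password:
Encryption successful
$ ./setup.sh -e @passwords.yml -- --ask-vault-pass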
This is the second part of the blog post series on how to containerize our Python development. In part 1, we showed how to containerize a Python service and the best practices for doing so. In this part, we discuss how to set up and wire other components to a containerized Python service. We show a good way to organize project files and data and how to manage the overall project configuration with Docker Compose. We also cover best practices for writing Compose files that speed up our containerized development process.
Let's take as an example an application whose functionality we separate into three tiers, following a microservice architecture. This is a pretty common architecture for multi-service applications. Our example application consists of:
The reason for splitting an application into tiers is that we can easily modify or add new ones without having to rework the entire project.
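As a rough illustration of what such a three-tier split can look like in a Compose file, here is a minimal sketch; the service names, images and ports are assumptions for illustration only, not the article's actual example:

version: "3.8"
services:
  frontend:
    build: ./frontend
    ports:
      - "80:8000"
    depends_on:
      - backend
  backend:
    build: ./backend
    depends_on:
      - db
  db:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data: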
A good way to Continue reading
Yes. Docker is available for Windows, macOS and Linux. Here are the download links:
This is a great question and I get this one a lot. The simplest way I can explain the differences between Virtual Machines and Containers is that a VM virtualizes the hardware and a Container “virtualizes” the OS.
If you take a look at the image above, you can see that there are multiple Operating Systems running when using Virtual Machine technology. This produces a huge difference in startup times, along with various other constraints and the overhead of installing and maintaining a full-blown operating system. Also, with VMs, you can run different flavors of operating systems. For example, I can run Windows 10 and a Linux distribution on the same hardware at the same time. Now let's take a look at the image for Docker Containers.
As you can see in this image, we only have one Host Operating System installed on our infrastructure. Docker sits “on top” of the host operating system. Each application is then bundled in an Continue reading
Welcome to Technology Short Take #129, where I’ve collected a bunch of links and references to technology-centric resources around the Internet. This collection is (mostly) data center- and cloud-focused, and hopefully I’ve managed to curate a list that has some useful information for readers. Sorry this got published so late; it was supposed to go live this morning!
Note there is a slight format change debuting in this Tech Short Take. Moving forward, I won’t include sections where I have no content to share, and I’ll add sections for content that may not typically appear. This will make the list of sections a bit more dynamic between Tech Short Takes. Let me know if you like this new approach—feel free to contact me on Twitter and provide your feedback.
Now, on to the good stuff!
Last week we announced that Docker and AWS have created an integrated and frictionless experience for developers to leverage Docker Compose, Docker Desktop, and Docker Hub to deploy their apps on Amazon Elastic Container Service (Amazon ECS) and Amazon ECS on AWS Fargate. On the heels of that announcement, we continue the latest series of blog articles focusing on developer content that we are curating from DockerCon LIVE 2020, this time with a focus on AWS. If you are running your apps on AWS, bookmark this post for relevant insights, all in one place.
As more developers adopt and learn Docker, and as more organizations jump head-first into containerizing their applications, AWS continues to be the cloud of choice for deployment. Earlier this year Docker and AWS collaborated on the Compose-spec.io open specification and, as mentioned on the Docker blog by my colleague Chad Metcalf, deploying straight from Docker to AWS has never been easier. It's just another step in constantly putting ourselves in the shoes of you, our customer, the developer.
The replay of these three sessions on AWS is where you can learn more about container trends for developers, adopting microservices and building and deploying multi-container Continue reading
Running IT environments means facing many challenges at the same time: security, performance, availability and stability are critical for the successful operation of today’s data centers. IT managers and their teams of administrators, operators and architects are well advised to move from a reactive, “fire-fighting” mode to a proactive approach where systems are continuously scanned and improvements are applied before critical situations come up. Red Hat Insights routinely analyzes Red Hat Enterprise Linux systems for security/vulnerability, compliance, performance, availability and stability threats, and based on the results, can provide guidance on how to improve daily operations. Insights is included with your Red Hat Enterprise Linux subscription and located at cloud.redhat.com.
We recently announced a new Red Hat Ansible Content Collection for Insights, an integration designed to make it easier for Insights users to manage Red Hat Enterprise Linux and to automate tasks on those systems using Ansible. The Ansible Content Collection for Insights is ideal for customers that have large Red Hat Enterprise Linux estates that require initial deployment and ongoing management of the Insights client.
In this blog, we will look at how this integration with Ansible takes care of key tasks via included Ansible Continue reading
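The excerpt ends before the included content is shown, but the key tasks the Collection addresses are deploying the Insights client and registering systems. The certified Collection ships roles for this; as a rough, hedged illustration of the underlying steps only (the host group name is an assumption, and the role names in the Collection may differ), a plain playbook doing the same by hand could look like:

---
- name: Deploy and register the Insights client
  hosts: rhel_servers
  become: true
  tasks:
    - name: Install the Insights client
      yum:
        name: insights-client
        state: present

    - name: Register the system with Red Hat Insights
      command: insights-client --register
      args:
        creates: /etc/insights-client/.registered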
On June 30, 2020, a security vulnerability affecting multiple BIG-IP platforms from F5 Networks was made public with a CVSS score of 10 (Critical). Due to the significance of the vulnerability, network administrators are advised to mitigate this issue in a timely manner. Doing so manually is tricky, especially if many devices are involved. Because F5 BIG-IP and BIG-IQ are certified with the Red Hat Ansible Automation Platform, we can use it to tackle the issue.
This post provides one way of temporarily mitigating CVE-2020-5902 via Ansible Tower without upgrading the BIG-IP platform. Upgrading is the proper fix, but larger customers like service providers might struggle to upgrade on short notice, as they may have to go through a lengthy internal validation process. For those situations, an automated mitigation may be a reasonable workaround until an upgrade can be performed.
The vulnerability is described in K52145254 of the F5 Networks support knowledgebase:
The Traffic Management User Interface (TMUI), also referred to as the Configuration utility, has a Remote Code Execution (RCE) vulnerability in undisclosed pages.
And it describes the impact as serious:
This vulnerability allows for unauthenticated attackers, or authenticated users, with network access to the Configuration Continue reading
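The excerpt cuts off in the middle of the impact statement. To give an idea of what an automated mitigation can look like, here is a heavily hedged sketch using the bigip_command module from the certified F5 Collection to push the httpd include workaround via tmsh; this is not necessarily the playbook the article goes on to build, the variable names are placeholders, and the actual include content must be taken verbatim from F5's K52145254 article:

---
- name: Apply the temporary TMUI mitigation from K52145254
  hosts: bigip
  connection: local
  gather_facts: false
  tasks:
    - name: Update the httpd include statement via tmsh
      f5networks.f5_modules.bigip_command:
        commands:
          - "modify /sys httpd include '{{ k52145254_include }}'"
          - "save /sys config"
        provider:
          server: "{{ ansible_host }}"
          user: "{{ bigip_user }}"
          password: "{{ bigip_password }}"
          validate_certs: false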
Developing Python projects in local environments can get pretty challenging if more than one project is being developed at the same time. Bootstrapping a project may take time, as we need to manage versions and set up dependencies and configurations for it. Before, we used to install all project requirements directly in our local environment and then focus on writing the code. But having several projects in progress in the same environment quickly becomes a problem, as we may run into configuration or dependency conflicts. Moreover, when sharing a project with teammates, we would also need to coordinate our environments. For this, we have to define our project environment in a way that makes it easily shareable.
A good way to do this is to create isolated development environments for each project. This can be easily done by using containers and Docker Compose to manage them. We cover this in a series of blog posts, each one with a specific focus.
This first part covers how to containerize a Python service/tool and the best practices for it.
Requirements
To easily exercise what we discuss in this blog post series, we need to install a minimal set Continue reading
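The excerpt ends before the requirements are listed, but given the topic they almost certainly include Docker and Docker Compose. A quick, hedged way to confirm both are installed and working (the exact versions the series requires are not stated in the excerpt):

$ docker --version
$ docker-compose --version
$ docker run --rm hello-world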
Cloud environments do not lend themselves to manual management or interference, and only thrive in well-automated environments. Many cloud environments are created and deployed from a known definition/template, but what do you do on day 2? In this blog post, we will cover some of the top day 2 operations use cases available through our Red Hat Certified Ansible Content Collection for AWS (requires a Red Hat Ansible Automation Platform subscription) or from Ansible Galaxy (community supported).
No matter the road that led you to managing a cloud environment, you'll likely have run into the ever-scaling challenge of maintaining cloud-based services over time. Cloud environments do not operate the same way the old datacenter-based infrastructures did. Coupled with the ease with which just about anyone can deploy services, you have a potential recipe for years of unlimited maintenance headaches.
The good news is that there is one way to bring order to all the cloud-based chaos: Ansible. In this blog post we will explore common day 2 operations use cases for Amazon Web Services using the amazon.aws Ansible Certified Content Collection. For more information on how to use Ansible Content Collections, check out Continue reading
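To make the idea of day 2 operations concrete before the excerpt cuts off, here is a minimal sketch of two common housekeeping tasks using modules from the amazon.aws Collection; the region, resource IDs and tag values are placeholders for illustration, not examples from the article:

---
- name: Example day 2 housekeeping with the amazon.aws collection
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Ensure ownership tags are present on a known instance
      amazon.aws.ec2_tag:
        region: us-east-1
        resource: i-0123456789abcdef0
        state: present
        tags:
          Owner: platform-team
          Environment: production

    - name: Snapshot a data volume before maintenance
      amazon.aws.ec2_snapshot:
        region: us-east-1
        volume_id: vol-0123456789abcdef0
        description: Pre-maintenance snapshot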
Running containers in the cloud can be hard and confusing. There are so many options to choose from, and then you have to understand how all the different clouds work, from virtual networks to security, not to mention orchestrators. It's a learning curve, to say the least.
At Docker we are making the Developer Experience (DX) simpler. As an extension of that, we want to provide the same beloved Docker experience that developers use daily and integrate it with the cloud. Microsoft's Azure ACI provides an awesome platform to do just that.
In this tutorial, we take a look at running single containers and multiple containers with Compose in Azure ACI. We’ll walk you through setting up your docker context and even simplifying logging into Azure. At the end of this tutorial, you will be able to use familiar Docker commands to deploy your applications into your own Azure ACI account.
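For a feel of the workflow the tutorial walks through, the core commands of the Docker ACI integration look roughly like this; the context name is arbitrary and the nginx container is only an illustrative example:

$ docker login azure
$ docker context create aci myacicontext
$ docker context use myacicontext
$ docker run -d -p 80:80 nginx
$ docker ps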
To complete this tutorial, you will need:
For many IT teams, automation is a core component these days. But automation is not something on its own - it is part of a puzzle and needs to interact with the surrounding IT. So one way to grade automation is how well it integrates with other tooling in the IT ecosystem - like the central logging infrastructure. After all, through central logging the IT team can quickly survey what is happening, where, and in what state it is.
The Red Hat Ansible Automation Platform is a solution to build and operate automation at scale. As part of the platform, Ansible Tower integrates well with external logging solutions, such as Splunk, and it is easy to set that up. In this blog post we will demonstrate how to perform the necessary configurations in both Splunk and Ansible Tower to let them work well together.
The first step is to get Splunk up and running. You can download a Splunk RPM after you register yourself at the Splunk home page.
After the registration, download the RPM and perform the installation:
$ rpm -ivh splunk-8.0.3-a6754d8441bf-linux-2.6-x86_64.rpm
warning: splunk-8.0.3-a6754d8441bf-linux-2.6-x86_64.rpm: Continue reading
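The excerpt cuts off in the middle of the installation output. For context on the Ansible Tower side of the integration, logging is configured under Settings > System > Logging, or via the Tower API at /api/v2/settings/logging/. The sketch below is an assumption-laden illustration rather than the article's exact steps; the hostnames are placeholders, and with the "splunk" aggregator type the HTTP Event Collector token typically goes into LOG_AGGREGATOR_PASSWORD:

$ curl -k -u admin -X PATCH https://tower.example.com/api/v2/settings/logging/ \
    -H "Content-Type: application/json" \
    -d '{
          "LOG_AGGREGATOR_ENABLED": true,
          "LOG_AGGREGATOR_TYPE": "splunk",
          "LOG_AGGREGATOR_HOST": "https://splunk.example.com:8088/services/collector/event",
          "LOG_AGGREGATOR_PORT": 8088
        }'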
Just about six years ago to the day, Docker hit the first milestone for Docker Compose, a simple way to lay out your containers and their connections. A talks to B, B talks to C, and C is a database. Fast forward six years and the container ecosystem has become complex. New managed container services have arrived, bringing their own runtime environments, CLIs, and configuration languages. This complexity serves the needs of the operations teams who require fine-grained control, but carries a high price for developers.
One thing that has remained constant over this time is that developers love the simplicity of Docker and Compose. This led us to ask: why do developers now have to choose between simple and powerful? Today, I am excited to finally be able to talk about the result of what we have been working on for over a year to provide developers power and simplicity from desktop to the cloud using Compose. Docker is expanding our strategic partnership with Amazon and integrating the Docker experience you already know and love with Amazon Elastic Container Service (ECS) with AWS Fargate. Deploying straight from Docker to AWS has never been easier.
Today this functionality is Continue reading
IBM Security QRadar is a Security Information and Event Management (SIEM) solution that can help security teams accurately detect and prioritize threats across the organization, providing intelligent insights that enable organizations to respond quickly and reduce the impact of incidents. By consolidating log events and network flow data from thousands of devices, endpoints, users and applications distributed throughout your network, QRadar correlates all this different information and aggregates related events into single alerts to accelerate incident analysis and remediation.
Ansible is the open and powerful language security teams can use to interoperate across the various security technologies involved in their day-to-day activities.
Customers can take advantage of the IBM QRadar Content Collection to create sophisticated security workflows through the automation of the following functionalities:
Ansible allows security organizations to integrate QRadar into automated security processes, enabling them to automate QRadar configuration deployments in recurring situations like automated test environments, but also in large scale deployments where similar tasks have to be rolled out and managed across multiple nodes.
Security practitioners can automate investigation activities, enabling QRadar to programmatically access new data sources. Also, they now have Continue reading
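The excerpt is cut short here, but to give a flavor of what such an automated investigation step can look like, below is a hedged sketch that uses the log_source_management module from the ibm.qradar Collection to add a new syslog log source; the host group, log source name and identifier are placeholders, connection details (httpapi plugin, credentials) are expected in the inventory, and module parameters should be verified against the Collection documentation for your version:

---
- name: Add a new log source to QRadar
  hosts: qradar
  connection: httpapi
  gather_facts: false
  tasks:
    - name: Create a syslog log source for a host under investigation
      ibm.qradar.log_source_management:
        name: "Webserver syslog (investigation)"
        type_name: "Linux OS"
        state: present
        description: "Log source added by Ansible for investigation"
        identifier: "webserver01.example.com"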
As of the time that I published this blog post in early July 2020, Docker Desktop for macOS was at version 2.2.0.4 (for the “stable” channel). That version includes a relatively recent version of the Docker engine (19.03.8, compared to 19.03.12 on my Fedora 31 box), but a quite outdated version of Kubernetes (1.15.5, which isn’t supported by upstream). Now, this may not be a problem for users who only use Kubernetes via Docker Desktop. For me, however, the old version of Kubernetes—specifically the old version of kubectl—causes problems. Here’s how I worked around the old version that Docker Desktop supplies.
First, you’ll note that Docker Desktop automatically symlinks its version of kubectl into your system path at /usr/local/bin. You can verify the version of Docker Desktop’s kubectl by running this command:
/usr/local/bin/kubectl version --client=true
On my macOS 10.14.6-based system, this returned a version of 1.15.5. According to GitHub, v1.15.5 was released in October of 2019. Per the Kubernetes version skew policy, this version of kubectl would work with 1.14, 1.15, and 1.16. What if I need to Continue reading
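The excerpt ends mid-question, but the gist is needing a newer kubectl than the one Docker Desktop symlinks in. One common way to work around this (not necessarily the exact approach the article goes on to describe) is to download the desired kubectl release and place it in a directory that comes before /usr/local/bin in the PATH; the version and target directory below are only examples:

$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.18.5/bin/darwin/amd64/kubectl
$ chmod +x kubectl
$ mv kubectl ~/bin/kubectl    # any directory earlier in your PATH than /usr/local/bin
$ kubectl version --client=true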
Following the previous article, where we saw how to build multi-arch images using GitHub Actions, we will now show how to do the same thing using another CI. In this article, we'll consider Travis, which is one of the trickiest to use for this use case.
To start building your image with Travis, you will first need to create a .travis.yml file at the root of your repository.
language: bash
dist: bionic

services:
  - docker

script:
  - docker version
You may notice that we specified using "bionic" to have the latest version of Ubuntu available – Ubuntu 18.04 (Bionic Beaver). As of today (May 2020), if you run this script, you'll see that the Docker Engine version it provides is 18.06.0-ce, which is too old to be able to use buildx. So we'll have to install Docker manually.
language: bash
dist: bionic

before_install:
  - sudo rm -rf /var/lib/apt/lists/*
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) edge"
  - sudo apt-get update
  - sudo apt-get -y -o Dpkg::Options::="--force-confnew" install docker-ce

script:
Continue reading
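The excerpt ends before the script section is filled in. As a rough sketch of how it typically continues, and not necessarily the article's exact steps, the buildx CLI plugin is installed manually, QEMU emulation is registered, and a multi-platform build is run; the buildx version, the binfmt image and the platform list are assumptions:

  - mkdir -p ~/.docker/cli-plugins
  - curl -sSL https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-amd64 -o ~/.docker/cli-plugins/docker-buildx
  - chmod +x ~/.docker/cli-plugins/docker-buildx
  - export DOCKER_CLI_EXPERIMENTAL=enabled
  - docker run --rm --privileged tonistiigi/binfmt --install all
  - docker buildx create --use
  - docker buildx build --platform linux/amd64,linux/arm64 .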
As we navigate through unprecedented times, the spotlight is on enhancing IT resilience and ensuring business continuity. We see that enterprises are experiencing shifts in market conditions and automation can be a key to rapidly responding to changes. With many enterprises having hybrid IT and multiple operating system environments, each with its own tooling and processes, implementing a consistent automation strategy to help scale and maximize impact has been a challenge. This is where Red Hat Ansible Automation Platform can help, by more easily enabling automation across different IT environments.
Red Hat Ansible Automation Platform provides automation in areas that span across development, DevOps, compute, network, storage, applications, security, and Internet of Things (IoT). A common request we at IBM had been getting from our users was for Ansible Automation support of AIX and IBM i operating systems. Red Hat and IBM are pleased to announce the general availability of Red Hat Ansible Certified Content for IBM Power Systems. Red Hat Ansible certification involves Red Hat testing the Collections developed by IBM and a commitment to provide enterprise support. The Collections for AIX and IBM i are maintained and supported by IBM.
Ansible content for AIX and IBM i helps Continue reading