
Category Archives for "Systems"

Ansible Linting with GitHub Actions


Want to trigger linting of your Ansible deployment on every pull request?

In this blog, I will show you how to add some great automation into your Ansible code pipeline. 

CI/CD is currently a pretty hot topic for developers, and operations teams can get started with some automated linting using GitHub Actions. If you use GitHub, you can lint your playbooks at different stages, including on pushes or pull requests.

If you’re following good git flow practices and have an approval committee reviewing pull requests, this type of automated testing can save you a lot of time and keep your Ansible code nice and clean.

 

What is Ansible Lint?

Ansible Lint is an open source project that lints your Ansible code. The docs state that it checks playbooks for practices and behavior that could potentially be improved. It can be installed with pip and run manually on playbooks, or set up as a pre-commit hook that runs when you attempt a commit on your repo from the CLI.
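
If you go the pre-commit route, a minimal .pre-commit-config.yaml might look like the sketch below; the rev shown is a placeholder, so pin whichever ansible-lint release you actually use:

```yaml
# .pre-commit-config.yaml -- runs ansible-lint every time you commit
repos:
  - repo: https://github.com/ansible/ansible-lint
    rev: v4.2.0   # placeholder -- pin the release you use
    hooks:
      - id: ansible-lint
```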

The project can be found under the Ansible org on GitHub.

 

What are GitHub Actions?

From the GitHub documentation: GitHub Actions enables you to create custom workflows to automate Continue reading
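
Before clicking through, here is a minimal sketch of the kind of workflow the post builds toward: a job that installs ansible-lint with pip and lints playbooks on every push and pull request. The playbook path is a placeholder:

```yaml
# .github/workflows/ansible-lint.yml
name: Ansible Lint
on: [push, pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: "3.8"
      - name: Install ansible-lint
        run: pip install ansible-lint
      - name: Lint playbooks
        run: ansible-lint site.yml   # placeholder -- point at your playbooks
```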

Up Your Productivity with these DockerCon LIVE 2020 Sessions

DockerCon LIVE 2020 is the opportunity for the Docker Community to connect while socially distancing, have fun, and learn from each other. From beginner to expert, DockerCon speakers are getting ready to share their tips and tricks for becoming more productive and finding joy while building apps.

From the Docker team, Engineer Dave Scott will share how recent updates to Docker Desktop help deliver quicker edit-compile-test cycles to developers. He’ll dive into the New Docker Desktop Filesharing Features that enabled this change and how to use them effectively to speed up your dev workflow.

Docker Engineer Anca Lordache will cover Best Practices for Compose-managed Python Applications, from bootstrapping your project to how to reproduce your builds and make them more optimized.

If you are a Node.js developer, Digital Ocean’s Kathleen Juell will share How to Build and Run Node Apps with Docker and Compose.

And for PHP devs, Erika Heidi from Digital Ocean will demonstrate How to Create PHP Development Environments with Docker Compose, using a Laravel 6 application as a case study. She’ll demonstrate how to define and integrate services, share files between containers, and manage your environment with Docker Compose commands.

Or if it’s Continue reading

Speed Up Your Development Flow With These Dockerfile Best Practices

The Dockerfile is the starting point for creating a Docker image. The file format provides a well-defined set of directives that allow you to copy files or folders, run commands, set environment variables, and do other tasks required to create a container image. It’s really important to craft your Dockerfile well to keep the resulting image secure, small, quick to build, and quick to update.

In this post, we’ll see how to write good Dockerfiles to speed up your development flow, ensure build reproducibility, and produce images that can be confidently deployed to production.

Note: for this blog post we’ll base our Dockerfile examples on the react-java-mysql sample from the awesome-compose repository.

Development flow

As developers, we want to match our development environment to the target production context as closely as possible to ensure that what we build will work when deployed.


We also want to be able to develop quickly, which means we want fast builds and the ability to use developer tools like debuggers. Containers are a great way to codify our development environment, but we need to define our Dockerfile correctly to be able to interact quickly with our containers.
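
A Compose file is one common way to codify that fast inner loop. The sketch below is illustrative rather than taken from the react-java-mysql sample: it assumes the Dockerfile defines a development stage, and it bind-mounts the sources so edits show up without a rebuild:

```yaml
# docker-compose.dev.yml -- illustrative development setup
version: "3.8"
services:
  frontend:
    build:
      context: ./frontend
      target: development   # assumes a dev stage exists in the Dockerfile
    volumes:
      - ./frontend/src:/app/src   # bind-mount sources for fast edit/test cycles
    ports:
      - "3000:3000"
```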

Incremental Continue reading

Technology Short Take 126

Welcome to Technology Short Take #126! I meant to get this published last Friday, but completely forgot. So, I added a couple more links and instead have it ready for you today. I don’t have any links for servers/hardware or security in today’s Short Take, but hopefully there’s enough linked content in the other sections that you’ll still find something useful. Enjoy!

Networking

Servers/Hardware

Nothing this time around!

Security

I don’t have anything to include this time, but I’ll stay alert for content I can Continue reading

Setting up etcd with etcdadm

I’ve written a few different posts on setting up etcd. There’s this one on bootstrapping a TLS-secured etcd cluster with kubeadm, and there’s this one about using kubeadm to run an etcd cluster as static Pods. There’s also this one about using kubeadm to run etcd with containerd. In this article, I’ll provide yet another way of setting up a “best practices” etcd cluster, this time using a tool named etcdadm.

etcdadm is an open source project, originally started by Platform9 (here’s the blog post announcing the project being open sourced). As the README in the GitHub repository mentions, the user experience for etcdadm “is inspired by kubeadm.”

Getting etcdadm

The instructions in the repository indicate that you can use go get -u sigs.k8s.io/etcdadm, but I ran into problems with that approach (using Go 1.14). At the suggestion of one of the maintainers, I also tried Go 1.12, but it failed both on my main Ubuntu laptop and on a clean Ubuntu VM. However, running make etcdadm in a clone of the repository worked, and one of the maintainers indicated the documentation will be updated to reflect this approach Continue reading

Announcing the DockerCon LIVE 2020 Speakers

After receiving many excellent CFP submissions, we are thrilled to finally announce the first round of speakers for DockerCon LIVE on May 28th starting at 9am PT / GMT-7. Check out the agenda here.

In order to maximize the opportunity to connect with speakers and learn from their experience, talks are pre-recorded and speakers are available for live Q&A for their whole session. From best practices and how-tos to new product features and use cases, from technical deep dives to open source projects in action, there are a lot of great sessions to choose from, like:

Docker Desktop + WSL 2 Integration Deep Dive

Simon Ferquel, Docker

Dev and Test Agility for Your Database with Docker

Julie Lerman, The Data Farm

Build & Deploy Multi-Container Applications to AWS

Lukonde Mwila, Entelect

COVID-19 in Italy: How Docker is Helping the Biggest Italian IT Company Continue Business Operations

Clemente Biondo, Engineering Ingegneria Informatica

How to Create PHP Development Environments with Docker Compose

Erika Heidi, Digital Ocean

From Fortran on the Desktop to Kubernetes in the Cloud: A Windows Migration Story

Elton Stoneman, Container Consultant and Trainer

How to Use Mirroring and Caching to Optimize your Container Registry

Brandon Mitchell, Boxboat 


In Continue reading

Using External Etcd with Cluster API on AWS

If you’ve used Cluster API (CAPI), you may have noticed that workload clusters created by CAPI use, by default, a “stacked master” configuration—that is, the etcd cluster is running co-located on the control plane node(s) alongside the Kubernetes control plane components. This is a very common configuration and is well-suited for most deployments, so it makes perfect sense that this is the default. There may be cases, however, where you’ll want to use a dedicated, external etcd cluster for your Kubernetes clusters. In this post, I’ll show you how to use an external etcd cluster with CAPI on AWS.

The information in this blog post is based on this upstream document. I’ll be adding a little bit of AWS-specific information, since I primarily use the AWS provider for CAPI. This post is written with CAPI v1alpha3 in mind.

The key to this solution is building upon the fact that CAPI leverages kubeadm for bootstrapping cluster nodes. This puts the full power of the kubeadm API at your fingertips—which in turn means you have a great deal of flexibility. This is the mechanism whereby you can tell CAPI to use an external etcd cluster instead of creating a co-located etcd Continue reading
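
To make that concrete, here is a hedged sketch of how an external etcd cluster is expressed through the kubeadm API in a v1alpha3 KubeadmControlPlane; the endpoints, names, and version are illustrative, and the certificate paths follow kubeadm's conventions:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: workload-cluster-control-plane   # illustrative name
spec:
  replicas: 3
  version: v1.18.2                       # illustrative Kubernetes version
  infrastructureTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSMachineTemplate
    name: workload-cluster-control-plane
  kubeadmConfigSpec:
    clusterConfiguration:
      etcd:
        external:                        # point kubeadm at an existing etcd cluster
          endpoints:
            - https://10.0.0.11:2379     # illustrative etcd endpoints
            - https://10.0.0.12:2379
            - https://10.0.0.13:2379
          caFile: /etc/kubernetes/pki/etcd/ca.crt
          certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
          keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```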

Using Existing AWS Security Groups with Cluster API

I’ve written before about how to use existing AWS infrastructure with Cluster API (CAPI), and I was recently able to help update the upstream documentation on this topic (the upstream documentation should now be considered the authoritative source). These instructions are perfect for placing a Kubernetes cluster into an existing VPC and associated subnets, but there’s one scenario that they don’t yet address: what if you need your CAPI workload cluster to be able to communicate with other EC2 instances or other AWS services in the same VPC? In this post, I’ll show you the CAPI functionality that makes this possible.

One of the primary mechanisms used in AWS to control communications among instances and services is the security group. I won’t go into any detail on security groups, but this page from AWS provides an explanation and overview of how security groups work.

In order to make a CAPI workload cluster able to communicate with other EC2 instances or other AWS services, you’ll need to somehow use security groups to make that happen. There are at least two—possibly more—ways to accomplish this:

  1. You could add other instances or services to the CAPI-created security groups. The Cluster API Provider Continue reading
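
For the other direction, attaching one of your existing security groups to the instances CAPI creates, the AWS provider exposes an additionalSecurityGroups field on its machine templates. Here is a hedged v1alpha3 sketch, with the names and group ID as placeholders:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AWSMachineTemplate
metadata:
  name: workload-cluster-md-0   # illustrative name
spec:
  template:
    spec:
      instanceType: t3.large
      additionalSecurityGroups:
        - id: sg-0123456789abcdef0   # placeholder ID of an existing security group
```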

Using Docker Desktop and Docker Hub Together – Part 1

Introduction

In today’s fast-paced development world, CTOs, dev managers, and product managers demand quicker turnarounds for features and defect fixes. “No problem, boss,” you say. “We’ll just use containers.” And you would be right, but once you start digging in and looking at ways to get started with containers, well, quite frankly, it’s complex.

One of the biggest challenges is getting a toolset installed and set up where you can build images, run containers, and duplicate a production Kubernetes cluster locally. And then shipping containers to the cloud, well, that’s a whole ‘nother story.

Docker Desktop and Docker Hub are two of the foundational toolsets to get your images built and shipped to the cloud. In this two-part series, we’ll get Docker Desktop set up and installed, build some images and run them using Docker Compose. Then we’ll take a look at how we can ship those images to the cloud, set up automated builds, and deploy our code into production using Docker Hub.

Docker Desktop

Docker Desktop is the easiest way to get started with containers on your development machine. Docker Desktop comes with the Docker Engine, the Docker CLI, Docker Compose, and Kubernetes. With Docker Desktop there Continue reading

The Next IBM Platform, Revisited

When IBM announced that it was acquiring Red Hat for $34 billion eighteen months ago, one of the things we said that Big Blue needed most and would get from taking over – but not messing with – the world’s largest commercial open source software company was a coherent story that it could tell to its customers about how IBM, which more than any other company helped define data processing, was still relevant to the future.

The Next IBM Platform, Revisited was written by Timothy Prickett Morgan at The Next Platform.

Advanced Dockerfiles: Faster Builds and Smaller Images Using BuildKit and Multistage Builds

The multistage builds feature in Dockerfiles enables you to create smaller container images with better caching and a smaller security footprint. In this blog post, I’ll show some more advanced patterns that go beyond copying files between a build and a runtime stage, allowing you to get the most out of the feature. If you are new to multistage builds you probably want to start by reading the usage guide first.

Note on BuildKit

The latest Docker versions come with a new opt-in builder backend, BuildKit. While all the patterns here work with the older builder as well, many of them run much more efficiently when the BuildKit backend is enabled. For example, BuildKit efficiently skips unused stages and builds stages concurrently when possible. I’ve marked these cases under the individual examples. If you use these patterns, enabling BuildKit is strongly recommended. All other BuildKit-based builders support these patterns as well.


Inheriting from a stage

Multistage builds added a couple of new syntax concepts. First of all, you can name a stage that starts with a FROM command with AS stagename, and then use the --from=stagename option in a COPY command to copy files from that stage. In fact, the FROM command and the --from flag Continue reading

A New Way to Get Started with Docker!

One of the most common challenges we hear from developers is how getting started with containers can sometimes feel daunting. It’s one of the needs Docker is focusing on in its commitment to developers and dev teams. Our two aims: teach developers and help accelerate their onboarding.

With the benefits of Docker so appealing, many developers are eager to get something up and running quickly. That’s why, with the Docker Desktop Edge 2.2.3 release, we have launched a brand new “Quick Start” guide, which displays after first installation and shows users the Docker basics: how to quickly clone, build, run, and share an image to Docker Hub directly in Docker Desktop.

To keep everything in one place, we’ve crafted the guide with a built-in terminal so that you can paste commands directly — or type them out yourself. It’s a light-touch and integrated way to get something up and running.

Continue learning in an in-depth tutorial

You might expect that this new container you’ve spun up would be just a run-of-the-mill “hello world”. Instead, we’re providing you with a resource for further hands-on learning that you can do at your own pace.

This Docker tutorial, accessible on Continue reading

How we test Docker Desktop with WSL 2

Recently we released a new Edge version, 2.2.3.0, of Docker Desktop for Windows. This can be considered a release candidate for the next Stable version that will officially support WSL 2. With Windows 10 version 2004 in sight, we are giving the next version of Docker Desktop the final touches to give you the best experience running Linux containers on Windows 10.

One of the great benefits is that with the next update of Windows 10 we will also support running Docker Desktop on Windows 10 Home. We worked closely with Microsoft during the last few months to make Docker Desktop and WSL 2 fit together.

In this blog post we look behind the scenes at how we set up new WSL 2 capable test machines to run automated tests in our CI pipeline.

It started with a laptop

Let’s keep in mind that all automation somehow starts with manual steps, and you evolve from there to get better and more automated. At the beginning of this project, back at KubeCon 2019, we were given a laptop with an early version of WSL 2.

With that single laptop our development team could start getting their Continue reading

Getting started with Ansible and Check Point


The scale and complexity of modern infrastructures require not only that you be able to define a security policy for your systems, but also that you be able to apply that security policy programmatically or make changes in response to external events. As such, the proper automation tooling is a necessary building block to allow you to apply the appropriate actions in a fast, simple, and consistent manner.

Check Point has a certified Ansible Content Collection of modules to help enable organizations to automate their response and remediation practices, and to embrace the DevOps model to accelerate application deployment with operational efficiency. The modules, based on the Check Point security management APIs, are also available on Ansible Galaxy in the upstream version of the Check Point Collection for the Management Server.

The operational flow is exactly the same for the API as it is for the Check Point security management GUI, SmartConsole, i.e., Login > Get Session > Make Changes > Publish > Logout.
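
That flow maps directly onto a playbook. The following is a hedged sketch using the collection's cp_mgmt_* modules; the object name and address are illustrative, and the httpapi connection details are assumed to live in your inventory:

```yaml
# Illustrative playbook: create a host object, then publish the session
- hosts: checkpoint          # a group of Check Point management servers
  connection: httpapi
  tasks:
    - name: Create a host object
      check_point.mgmt.cp_mgmt_host:
        name: web-srv-01         # illustrative object name
        ip_address: 192.0.2.10   # illustrative address (TEST-NET range)
        state: present

    - name: Publish the changes
      check_point.mgmt.cp_mgmt_publish: {}
```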

Security professionals can leverage these modules to automate various identification, search, and response tasks for security events. Additionally, in combination with other modules that are part of Ansible security automation, existing Continue reading

Using Paw to Launch an EC2 Instance via API Calls

Last week I wrote a post on using Postman to launch an EC2 instance via API calls. Postman is a cross-platform application, so while my post was centered around Postman on Linux (Ubuntu, specifically) the steps should be very similar—if not exactly the same—when using Postman on other platforms. Users of macOS, however, have another option: a macOS-specific peer to Postman named Paw. In this post, I’ll walk through using Paw to issue API requests to AWS to launch an EC2 instance.

I’ll structure this post as a “diff,” if you will, that outlines the differences between using Paw to launch an EC2 instance via API calls and using Postman to do the same thing. Therefore, if you haven’t already read the Postman post from last week, I strongly recommend reviewing it before proceeding.

Prerequisites

This post assumes you’ve already installed Paw on your macOS system. It also assumes you are somewhat familiar with Paw; refer to the Paw documentation if not. Also, to support AWS authentication, please be sure to install the “AWS Signature 4 Auth Dynamic value” extension (see here or here). This extension is necessary in order to have the API requests sent Continue reading

#mydockerbday Recap + Community Stories


Although March has come and gone, you can still take part in the awesome activities put together by the community to celebrate Docker’s 7th birthday. 

Birthday Challenge

Denise Rey and Captains Łukasz Lach, Marcos Nils, Elton Stoneman, Nicholas Dille, and Brandon Mitchell put together an amazing birthday challenge for the community to complete, and it is still available. If you haven’t checked out the hands-on learning content yet, go to the birthday page and earn your seven badges (and don’t forget to share them on Twitter).

Live Show

Captain Bret Fisher hosted a 3-hour live Birthday Show with the Docker team and Captains. You can check out the whole thing on Docker’s YouTube channel, or skip ahead using the timestamps below:

02:00 Pre-show pics and games

07:43 Kickoff with Captains

29:00 Docker Roadmap

1:15:47 Docker Desktop: What’s New

1:53:45 Docker Hub with GitHub Actions

2:20:15 Using Docker with Kubernetes

2:55:00 #myDockerBday Stories

Community Stories

And while many Community Leaders had to cancel in-person meetups due to the evolving COVID-19 situation, they and their communities still showed up and shared their #mydockerbday stories. There Continue reading

Using Postman to Launch an EC2 Instance via API Calls

As I mentioned in this post on region and endpoint match in AWS API requests, exploring the AWS APIs is something I’ve been doing off and on for several months. There are a couple of reasons for this; I’ll go into those in a bit more detail shortly. In any case, I’ve been exploring the APIs using Postman (when on Linux) and Paw (when on macOS), and in this post I’ll share how to use Postman to launch an EC2 instance via API calls.

Before I get into the technical details, let me lay out a couple of reasons for spending some time on this. I’m pretty familiar with tools like Terraform and Pulumi (my current favorite), and I’m reasonably familiar with the AWS CLI itself. In looking at working directly with the APIs, I see this as adding a new perspective on how these other tools work. (I’ve found, in fact, that exploring the APIs has improved my usage of the AWS CLI.) Finally, as I try to deepen my knowledge of programming languages, I wanted to have a reasonable knowledge of the APIs before trying to program around the APIs (hopefully this will make the learning curve a bit less Continue reading

Deploy Stateful Docker Containers with Amazon ECS and Amazon EFS

At Docker, we are always looking for ways to make developers’ lives easier either directly or by working with our partners. Improving developer productivity is a core benefit of using Docker products and recently one of our partners made an announcement that makes developing cloud-native apps easier.

AWS announced that its customers can now configure their Amazon Elastic Container Service (ECS) applications deployed in Amazon Elastic Compute Cloud (EC2) mode to access Amazon Elastic File System (EFS) file systems. This is good news for Docker developers who use Amazon ECS. It means that Amazon ECS now natively integrates with Amazon EFS to automatically mount shared file systems into Docker containers. This allows you to deploy workloads that require access to shared storage, such as machine learning workloads, containerized legacy apps, or internal DevOps workloads such as GitLab, Jenkins, or Elasticsearch.
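
In infrastructure-as-code terms, the integration boils down to a task definition volume with an EFS configuration. Here is a hedged CloudFormation sketch; the IDs, names, and sizes are placeholders, and you should check the ECS documentation for the exact property names:

```yaml
# Illustrative: an ECS task definition mounting a shared EFS volume
Resources:
  AppTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: stateful-app                       # placeholder family name
      RequiresCompatibilities: [EC2]
      Volumes:
        - Name: shared-data
          EFSVolumeConfiguration:
            FilesystemId: fs-0123456789abcdef0   # placeholder EFS file system ID
      ContainerDefinitions:
        - Name: app
          Image: myorg/stateful-app:latest       # placeholder image
          Memory: 512
          MountPoints:
            - SourceVolume: shared-data
              ContainerPath: /data               # shared storage visible in the container
```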

The beauty of containerizing your applications is that it provides a better way to create, package, and deploy software across different computing environments in a predictable and easy-to-manage way. Containers were originally designed to be stateless and ephemeral (temporary). A stateless application is one that neither reads nor stores information about its state from one time that it is run Continue reading

Migrating Existing Content Into a Dedicated Ansible Collection

Today, we will demonstrate how to migrate part of the existing Ansible content (modules and plugins) into a dedicated Ansible Collection. We will be using modules for managing DigitalOcean's resources as an example so you can follow along and test your development setup. But first, let us get the big question out of the way: Why would we want to do that? 

 

Ansible on a Diet 

In late March 2020, Ansible's main development branch lost almost all of its modules and plugins. Where did they go? Many of them moved to the ansible-collections GitHub organization. More specifically, the vast majority landed in the community.general GitHub repository that serves as their temporary home (refer to the Community overview README for more information). 

The ultimate goal is to get as much content in the community.general Ansible Collection "adopted" by a caring team of developers and moved into a dedicated upstream location, with a dedicated Galaxy namespace. Maintainers of the newly migrated Ansible Collection can then set up the development and release processes as they see fit, (almost) free from the requirements of the community.general collection. For more information about the future of Ansible content delivery, please Continue reading
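
For playbook authors, the visible effect of a migration like this is that modules are addressed by their fully qualified collection name (FQCN). A hedged sketch, assuming the DigitalOcean modules land in a community.digitalocean namespace, with all values illustrative:

```yaml
# Illustrative: calling a migrated module by its FQCN
- hosts: localhost
  tasks:
    - name: Create a DigitalOcean droplet
      community.digitalocean.digital_ocean_droplet:
        name: demo-droplet        # illustrative values throughout
        size: s-1vcpu-1gb
        image: ubuntu-18-04-x64
        region: nyc1
        state: present            # assumes the API token is provided via the environment
```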

Announcing the Compose Specification

Docker is pleased to announce that we have created a new open community to develop the Compose Specification. This new community will be run with open governance, with input from all interested parties, allowing us together to create a new standard for defining multi-container apps that can be run from the desktop to the cloud.

Docker is working with Amazon Web Services (AWS), Microsoft and others in the open source community to extend the Compose Specification to more flexibly support cloud-native platforms like Kubernetes and Amazon Elastic Container Service (Amazon ECS) in addition to the existing Compose platforms. Opening the specification will allow innovation to flourish and deliver more choices to developers, accelerating how development teams build and ship applications.

Currently used by millions of developers and with over 650,000 Compose files on GitHub, Compose has been widely embraced by developers because it is a simple cloud and platform-agnostic way of defining multi-container based applications. Compose dramatically simplifies the code to cloud process and toolchain for developers by allowing them to define a complex stack in a single file and run it with a single command. This eliminates the need to build and start every container manually, saving development Continue reading
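
"A complex stack in a single file, run with a single command" looks like the following minimal, illustrative example; a single docker-compose up -d brings up both services together:

```yaml
# docker-compose.yml -- a small two-service stack
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    depends_on:
      - api
  api:
    image: myorg/api:latest   # illustrative application image
```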
