Category Archives for "Systems"

Deploying WordPress to the Cloud

I was curious the other day how hard it would be to actually set up my own blog, or rather, how easy it has become to do this with containers. There are plenty of platforms that will host a blog for you, but is it really just as easy now to run one yourself?

To get started, you can sign up for a Docker ID, or use your existing Docker ID, to download the latest version of Docker Desktop Edge, which includes the new Compose on ECS experience.

Start with the local experience

To start, I set up a local WordPress instance on my machine, grabbing an example Compose file from the awesome-compose repo.
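
For reference, a minimal Compose file in the spirit of the awesome-compose WordPress example looks something like the sketch below; the image tags and credentials are illustrative, not necessarily the exact contents of the repo's file.

# Illustrative sketch of a WordPress + MySQL Compose file
version: "3.7"
services:
  db:
    image: mysql:8.0
    command: --default-authentication-plugin=mysql_native_password
    volumes:
      - db_data:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=somewordpress   # illustrative credentials only
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=wordpress
  wordpress:
    image: wordpress:latest
    ports:
      - "80:80"
    depends_on:
      - db
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=wordpress
      - WORDPRESS_DB_NAME=wordpress
volumes:
  db_data: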

Initially I had a go at running this locally with Docker Compose:

$ docker-compose up -d

Then I can get the list of running containers:

$ docker-compose ps
           Name                          Command               State          Ports
--------------------------------------------------------------------------------------
deploywptocloud_db_1          docker-entrypoint.sh --def ...   Up      3306/tcp, 33060/tcp
deploywptocloud_wordpress_1   docker-entrypoint.sh apach ...   Up      0.0.0.0:80->80/tcp

And then lastly, I had a look to confirm that everything was running correctly.

Deploy to the Cloud

Great! Now I needed to look at the contents of the Compose file Continue reading

Docker’s sessions at KubeCon 2020

In a few weeks, August 17-20, lots of us at Docker in Europe had been looking forward to hopping on the train down to Amsterdam for KubeCon CloudNativeCon Europe. But like every other event since March, this one is virtual, so we will all be joining remotely from home. Most of the sessions are pre-recorded with live Q&A, the format we used at DockerCon 2020. As a speaker I really enjoyed this format at DockerCon, as it gave us an opportunity to clarify points and answer extra questions during the talk. It will be rather different from the normal KubeCon experience with thousands of people at the venue, though!

Our talks

Chris Crone has been closely involved with the CNAB (Cloud Native Application Bundle) project since its launch in late 2018. He will be talking about how to Simplify Your Cloud Native Application Packaging and Deployments, and will explain why CNAB is a great tool for developers. Packaging entire applications into self-contained artifacts is really useful, an extension of packaging up a single container. The tooling, especially Porter, has been making a lot of progress recently, so if you heard about CNAB before and are wondering what Continue reading

Docker Talks Live Stream Monthly Recap

Here at Docker, we have a deep love for developers, and with more and more of the community working remotely, we thought it would be a great time to start live streaming and connecting with the community virtually.

To that end, Chad Metcalf (@metcalfc) and I (@pmckee) have started to live stream every Wednesday at 10am Pacific Time on YouTube. You can find all of the past streams and subscribe to get notifications when we go live on our YouTube channel.

Every week we will cover a new topic focusing on developers and developer productivity using the Docker platform. We will have guest speakers, demo a bunch of code and answer any questions that you might have. 

Below I’ve compiled a list of past live streams that you can watch at your leisure, and we look forward to seeing you on the next one.

Docker ♥ AWS – A match made in heaven

Cloud container runtimes are complex, and the learning curve can be steep for some developers. Not all development teams have DevOps teams to partner with, which shifts the burden of understanding runtime environments, CLIs, and configuration for the cloud to the Continue reading

How To Setup Your Local Node.js Development Environment Using Docker – Part 2

In part I of this series, we took a look at creating Docker images and running containers for Node.js applications. We also took a look at setting up a database in a container and how volumes and networks play a part in setting up your local development environment.

In this article we’ll take a look at creating and running a development image where we can compile, add modules and debug our application all inside of a container. This helps speed up the developer setup time when moving to a new application or project. 

We’ll also take a quick look at using Docker Compose to help streamline the processes of setting up and running a full microservices application locally on your development machine.
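
To make this more concrete, here is a hypothetical development-oriented Compose sketch for one of the services; the service name matches the project structure shown below, but the npm script, ports and paths are illustrative assumptions rather than the project's actual configuration.

# Hypothetical development Compose sketch; script, ports and paths are illustrative
version: "3.7"
services:
  notes-service:
    build: ./notes-service
    command: npm run dev          # assumes a package.json script that starts the app with a file watcher
    ports:
      - "8080:8080"               # application port (assumed)
      - "9229:9229"               # Node.js inspector for debugging
    volumes:
      - ./notes-service:/app      # bind-mount source for live reload
      - /app/node_modules         # keep modules installed inside the container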

Fork the Code Repository

The first thing we want to do is download the code to our local development machine. Let’s do this using the following git command:

git clone git@github.com:pmckeetx/memphis.git

Now that we have the code local, let’s take a look at the project structure. Open the code in your favorite IDE and expand the root level directories. You’ll see the following file structure.

├── docker-compose.yml
├── notes-service
│   ├── config
│    Continue reading

Using an Inventory Plugin from a Collection in Ansible Tower

Many IT environments grow more and more complex. It is more important than ever that an automation solution always has the most up-to-date information about which nodes are present and need to be automated. To answer this challenge, the Red Hat Ansible Automation Platform uses inventories: lists of managed nodes.

In its simplest form, inventories can be static files. This is ideal when getting started with Ansible, but as the automation is scaled up, a static inventory file is no longer enough:

  1. How do we update and maintain the list of all of our managed nodes if something changes, if workloads are spun up or torn down?
  2. How do we classify our infrastructure so that we can be more selective in what managed nodes we automate against?

The answer to both of these questions is to use a dynamic inventory: a script or a plugin that will go to a source of truth and discover the nodes that need to be managed. It will also automatically classify the nodes by putting them into groups, which can be used to more selectively target devices when automating with Ansible.
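
To make this concrete, here is an illustrative configuration for one such plugin, the amazon.aws.aws_ec2 inventory plugin; the region and tag key are placeholders.

# Illustrative aws_ec2 inventory plugin configuration (file name must end in aws_ec2.yml)
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
keyed_groups:
  # put each host into a group derived from its "env" tag, e.g. env_prod
  - key: tags.env
    prefix: env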

Inventory plugins allow Ansible users to use external platforms to Continue reading

Automating Security with CyberArk and Red Hat Ansible Automation Platform

Proper privilege management is crucial with automation. Automation has the power to perform multiple functions across many different systems. When automation is deployed enterprise-wide, across sometimes siloed teams and functions, enterprise credential management can simplify adoption of automation — even complex authentication processes can be integrated into the setup seamlessly, while adding additional security in managing and handling those credentials.

Depending on how they are defined, Ansible Playbooks may require access to credentials and secrets that grant wide access to organizational systems. These credentials are necessary for reaching the systems and IT resources involved in automation tasks, but they’re also a very attractive target for bad actors. In particular, they are tempting targets for advanced persistent threat (APT) intruders. Gaining access to these credentials could give an attacker the keys to the entire organization.

Most breaches involve stolen credentials, and APT intruders prefer to leverage privileged accounts like administrators, service accounts with domain privileges, and even local admin or privileged user accounts.

You’re probably familiar with the traditional attack flow: compromise an environment, escalate privilege, move laterally, continue to escalate, then own and exfiltrate. It works, but it also requires a lot of work and a lot of Continue reading

Docker Index: Dramatic Growth in Docker Usage Affirms the Continued Rising Power of Developers

Developers have always been an integral part of business innovation and transformation. With the massive increase in Docker usage, we can see the continued rising importance of developers as they create the next generation of cloud native applications. 

You may recall in February we introduced the Docker Index, which gives a snapshot and analysis of developer and dev team preferences and trends based on anonymized data from 5 million Docker Hub users, 2 million Docker Desktop users and countless other developers engaging with content on Docker Hub.

According to a newly updated Docker Index, the eight months between November 2019 and July 2020 have seen a dramatic swell in consumption across the Docker community and ecosystem. How exactly is usage expanding? Let us count the ways.

Last November, there were 130 billion pulls on Docker Hub. That seemed worth talking about, so we shared this data in a blog in February. But since then consumption of the world’s most popular repository for application components (Docker Hub, lest there be any doubt) has skyrocketed; in July, total pulls on Docker Hub reached 242 billion. That’s almost a doubling of pulls in about eight months. (To be Continue reading

Creating an AWS ELB using Pulumi and Go

In case you hadn’t noticed, I’ve been on a bit of a kick with Pulumi and Go recently. There are two reasons for this. First, I have a number of “learning projects” (things that I decide I’d like to try or test) that would benefit greatly from the use of infrastructure as code. Second, I’ve been working on getting more familiar with Go. The idea of combining both those reasons by using Pulumi with Go seemed natural. Unfortunately, examples of using Pulumi with Go seem to be more limited than examples of using Pulumi with other languages, so in this post I’d like to share how to create an AWS ELB using Pulumi and Go.

Here’s the example code:

// Create an ELB for the Kubernetes control-plane nodes; ctx, baseName,
// azNames, cpNodeIds, and k8sTag are all defined earlier in the program.
elb, err := elb.NewLoadBalancer(ctx, "elb", &elb.LoadBalancerArgs{
	NamePrefix:             pulumi.String(baseName),
	CrossZoneLoadBalancing: pulumi.Bool(true),
	AvailabilityZones:      pulumi.StringArray(azNames),
	Instances:              pulumi.StringArray(cpNodeIds),
	// Health-check the Kubernetes API server over SSL on port 6443.
	HealthCheck: &elb.LoadBalancerHealthCheckArgs{
		HealthyThreshold:   pulumi.Int(3),
		Interval:           pulumi.Int(30),
		Target:             pulumi.String("SSL:6443"),
		UnhealthyThreshold: pulumi.Int(3),
		Timeout:            pulumi.Int(30),
	},
	// Pass TCP 6443 straight through to the instances.
	Listeners: &elb.LoadBalancerListenerArray{
		&elb.LoadBalancerListenerArgs{
			InstancePort:     pulumi.Int(6443),
			InstanceProtocol: pulumi.String("TCP"),
			LbPort:           pulumi.Int(6443),
			LbProtocol:       pulumi.String("TCP"),
		},
	},
	Tags: pulumi.StringMap{
		"Name": pulumi.String(fmt.Sprintf("cp-elb-%s", baseName)),
		k8sTag: pulumi.String("shared"),
	},
})

You can probably infer from the code above that this Continue reading

Containerized Python Development – Part 3

This is the last part in the series of blog posts showing how to set up and optimize a containerized Python development environment. The first part covered how to containerize a Python service and the best development practices for it. The second part showed how to easily set up different components that our Python application needs and how to easily manage the lifecycle of the overall project with Docker Compose.

In this final part, we review the development cycle of the project and discuss in more detail how to apply code updates and debug failures of the containerized Python services. The goal is to analyze how to speed up these recurring phases of the development process so that we get an experience similar to local development.

Applying Code Updates

In general, our containerized development cycle consists of writing/updating code, building, running and debugging it.

Since the building and running phases are mostly spent waiting, we want them to go quickly so that we can focus on coding and debugging.
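
One common way to shorten the loop for code changes, sketched below, is to bind-mount the source into the running container so edits apply without rebuilding the image; the service name and paths here are placeholders, not the series' actual project layout.

# Hypothetical sketch: bind-mounting source to avoid image rebuilds on code changes
version: "3.7"
services:
  app:
    build: ./app
    volumes:
      - ./app/src:/code/src     # local edits are visible in the container immediately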

We now analyze how to optimize the build phase during development. The build phase corresponds to image build time when we change Continue reading

Multi-arch build, what about GitLab CI?

Following the previous article, where we saw how to build multi-arch images using GitHub Actions, we will now show how to do the same thing using another CI. In this article, we’ll use GitLab CI, which is part of GitLab.

To start building your image with GitLab CI, you will first need to create a .gitlab-ci.yml file at the root of your repository, commit it and push it.

image: docker:stable

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

build:
  stage: build
  script:
    - docker version

This should result in a build output showing the versions of the Docker CLI and Engine.

We will now install Docker buildx. Because GitLab CI runs everything in containers and lets you choose the image each job starts from, we can use an image with buildx preinstalled, like the one we used for CircleCI. And as with CircleCI, we need to start a builder instance.

image: jdrouet/docker-with-buildx:stable

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

build:
  stage: build
  script:
    - docker buildx create --use
    - docker Continue reading

Ansible Workshops, Value for partners

The Red Hat Ansible Automation Platform makes IT automation simple and powerful. In line with the fast-growing adoption and community, we want Red Hat’s business partners and customers to be familiar with the Red Hat Ansible Automation Platform. Of course, there are lots of resources for learning about Ansible out there: books, blogs, tutorials and training. But the people at Red Hat working behind the scenes on Ansible created something especially useful: the Red Hat Ansible Automation Platform workshops!

As a Red Hat partner, whether you are planning to run an Ansible demo, train your internal staff or deliver a workshop to get your customers started with Ansible, the Ansible workshops are the way to go! Instead of creating your own workshop framework and content, you can focus on delivering Ansible enablement with consistent messaging through tested and curated exercises created by Red Hat. Using consistent, scalable content following best practices allows you to concentrate on your main business: building solutions for your customers and enabling the customer teams on the corresponding technology.

 

The Ansible Workshops

The Ansible workshops provide you with everything you need to successfully run workshops, including presentations, guided exercises and dedicated Continue reading

Review: Anker PowerExpand Elite Thunderbolt 3 Dock

Over the last couple of weeks or so, I’ve been using my 2017 MacBook Pro (running macOS “Mojave” 10.14.6) more frequently as my daily driver/primary workstation. Along with it, I’ve been using the Anker PowerExpand Elite 13-in-1 Thunderbolt 3 Dock. In this post, I’d like to share my experience with this dock and provide a quick review of the Anker PowerExpand Elite.

Note that I’m posting this as a customer of Anker. I paid for the PowerExpand Elite out of my own pocket, and haven’t received any compensation of any kind from anyone in return for my review. This is just me sharing my experience in the event it will help others.

First Impressions

The dock is both smaller than I expected (it measures 5 inches by 3.5 inches by 1.5 inches) and yet heavier than I expected. It feels solid and well-built. It comes with a (rather large) power brick and a Thunderbolt 3 cable to connect to the MacBook Pro. Setup was insanely easy; plug it in, connect it to the laptop, and you’re off to the races. (I did need to reboot my MacBook Pro for macOS to recognize the network interface in Continue reading

Securing Tower Installer Passwords

One of the crucial pieces of the Red Hat Ansible Automation Platform is Ansible Tower. Ansible Tower helps scale IT automation, manage complex deployments and speed up productivity. A strength of Ansible Tower is its simplicity, which also extends to the installation routine: when installed as a non-container version, a simple script is used to read in variables from an initial configuration to deploy Ansible Tower. The same script and initial configuration can even be re-used to extend the setup and add, for example, more cluster nodes.

However, this initial configuration includes passwords for the database, Ansible Tower itself and so on. In many online examples, these passwords are stored in plain text. One question I frequently get as a Red Hat Consultant is how to protect this information. A common solution is to simply remove the file after you complete the installation of Ansible Tower. But there are reasons you may want to keep the file around. In this article, I will present another way to protect the passwords in your installation files.
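
The excerpt is cut off before the method itself, but one widely used technique is to encrypt individual values with ansible-vault encrypt_string, so that a variables file carries ciphertext instead of clear text; the variable name below is illustrative and the ciphertext is elided.

# Illustrative shape of a vault-encrypted value in a variables file
admin_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  ...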

 

Ansible Tower’s setup.sh

For some quick background, setup.sh is the script used to install Ansible Tower and is provided in Continue reading

Containerized Python Development – Part 2

This is the second part of the blog post series on how to containerize our Python development. In part 1, we showed how to containerize a Python service and the best practices for it. In this part, we discuss how to set up and wire other components to a containerized Python service. We show a good way to organize project files and data, and how to manage the overall project configuration with Docker Compose. We also cover best practices for writing Compose files that speed up our containerized development process.

Managing Project Configuration with Docker Compose

Let’s take as an example an application whose functionality we separate into three tiers, following a microservice architecture. This is a pretty common architecture for multi-service applications. Our example application consists of:

  • a UI tier – running on an nginx service
  • a logic tier – the Python component we focus on
  • a data tier – we use a MySQL database to store some data we need in the logic tier

The reason for splitting an application into tiers is that we can easily modify or add new ones without having to rework the entire project.
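
Put together, a Compose file for these three tiers could look roughly like the sketch below; the service names, images and credentials are illustrative rather than the article's exact file.

# Illustrative three-tier Compose sketch (UI, logic, data)
version: "3.7"
services:
  ui:
    image: nginx:alpine
    ports:
      - "80:80"
    depends_on:
      - app
  app:
    build: ./app                      # the Python logic tier
    depends_on:
      - db
  db:
    image: mysql:8.0
    environment:
      - MYSQL_ROOT_PASSWORD=example   # illustrative credentials only
    volumes:
      - db_data:/var/lib/mysql
volumes:
  db_data: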

A good way to Continue reading

Top Questions for Getting Started with Docker

Does Docker run on Windows?

Yes. Docker is available for Windows, macOS and Linux.

What is the difference between Virtual Machines (VM) and Containers?

This is a great question and I get this one a lot. The simplest way I can explain the differences between Virtual Machines and Containers is that a VM virtualizes the hardware and a Container “virtualizes” the OS. 

If you take a look at the image above, you can see that there are multiple operating systems running when using virtual machine technology. This produces a huge difference in startup times, along with various other constraints and the overhead of installing and maintaining a full-blown operating system. Also, with VMs, you can run different flavors of operating systems: for example, I can run Windows 10 and a Linux distribution on the same hardware at the same time. Now let’s take a look at the image for Docker containers.

As you can see in this image, we only have one Host Operating System installed on our infrastructure. Docker sits “on top” of the host operating system. Each application is then bundled in an Continue reading

Technology Short Take 129

Welcome to Technology Short Take #129, where I’ve collected a bunch of links and references to technology-centric resources around the Internet. This collection is (mostly) data center- and cloud-focused, and hopefully I’ve managed to curate a list that has some useful information for readers. Sorry this got published so late; it was supposed to go live this morning!

Note there is a slight format change debuting in this Tech Short Take. Moving forward, I won’t include sections where I have no content to share, and I’ll add sections for content that may not typically appear. This will make the list of sections a bit more dynamic between Tech Short Takes. Let me know if you like this new approach—feel free to contact me on Twitter and provide your feedback.

Now, on to the good stuff!

DockerCon 2020: The AWS Sessions

Last week we announced that Docker and AWS have created an integrated and frictionless experience for developers to leverage Docker Compose, Docker Desktop, and Docker Hub to deploy their apps on Amazon Elastic Container Service (Amazon ECS) and Amazon ECS on AWS Fargate. On the heels of that announcement, we continue our series of blog articles curating developer content from DockerCon LIVE 2020, this time with a focus on AWS. If you are running your apps on AWS, bookmark this post for easy access to relevant insights in one place.

As more developers adopt and learn Docker, and as more organizations jump head-first into containerizing their applications, AWS continues to be the cloud of choice for deployment. Earlier this year Docker and AWS collaborated on the compose-spec.io open specification and, as my colleague Chad Metcalf mentioned on the Docker blog, deploying straight from Docker to AWS has never been easier. It’s just another step in constantly putting ourselves in the shoes of you, our customer, the developer.

The replay of these three sessions on AWS is where you can learn more about container trends for developers, adopting microservices and building and deploying multi-container Continue reading

Manage Red Hat Enterprise Linux like a Boss with Red Hat Ansible Content Collection for Red Hat Insights

Running IT environments means facing many challenges at the same time: security, performance, availability and stability are critical for the successful operation of today’s data centers. IT managers and their teams of administrators, operators and architects are well advised to move from a reactive, “fire-fighting” mode to a proactive approach where systems are continuously scanned and improvements are applied before critical situations come up. Red Hat Insights routinely analyzes Red Hat Enterprise Linux systems for security/vulnerability, compliance, performance, availability and stability threats and, based on the results, can provide guidance on how to improve daily operations. Insights is included with your Red Hat Enterprise Linux subscription and is located at cloud.redhat.com.

We recently announced a new Red Hat Ansible Content Collection for Insights, an integration designed to make it easier for Insights users to manage Red Hat Enterprise Linux and to automate tasks on those systems using Ansible. The Ansible Content Collection for Insights is ideal for customers that have large Red Hat Enterprise Linux estates that require initial deployment and ongoing management of the Insights client. 
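
To give a taste of what that looks like in practice, a playbook for rolling the client out across an estate might be as small as the following sketch; the role name follows the collection's documentation, but treat it and the inventory group as assumptions to verify against the installed collection.

# Hypothetical sketch: deploying and registering the Insights client via the collection
- name: Deploy the Red Hat Insights client
  hosts: rhel_servers              # assumed inventory group
  become: true
  roles:
    - role: redhat.insights.insights_client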

In this blog, we will look at how this integration with Ansible takes care of key tasks via included Ansible Continue reading

Automating Mitigation of the F5 BIG-IP TMUI RCE Security Vulnerability Using Ansible Tower (CVE-2020-5902)

On June 30, 2020, a security vulnerability affecting multiple BIG-IP platforms from F5 Networks was made public with a CVSS score of 10 (Critical). Due to the significance of the vulnerability, network administrators are advised to mitigate this issue in a timely manner. Doing so manually is tricky, especially if many devices are involved. Because F5 BIG-IP and BIG-IQ are certified with the Red Hat Ansible Automation Platform, we can use it to tackle the issue.

This post provides one way of temporarily mitigating CVE-2020-5902 via Ansible Tower without upgrading the BIG-IP platform. Upgrading is still the proper fix, but larger customers like service providers might struggle to upgrade on short notice, as they may have to go through a lengthy internal validation process. For those situations, an automated mitigation may be a reasonable workaround until the upgrade can be performed.
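
To give a flavor of what such an automated mitigation can look like, here is a hypothetical playbook sketch using the certified f5networks.f5_modules collection; the exact tmsh include string must be taken from the F5 advisory (K52145254), and the connection details are placeholders.

# Hypothetical mitigation sketch; consult K52145254 for the exact configuration string
- name: Temporarily mitigate CVE-2020-5902 on BIG-IP
  hosts: bigip_devices             # assumed inventory group
  connection: local
  gather_facts: false
  tasks:
    - name: Apply the TMUI httpd mitigation via tmsh
      f5networks.f5_modules.bigip_command:
        commands:
          - modify /sys httpd include '...'   # exact string elided; see K52145254
          - save /sys config
        provider:
          server: "{{ ansible_host }}"
          user: "{{ bigip_user }}"            # placeholder credentials
          password: "{{ bigip_password }}"
          validate_certs: true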

 

Background of the vulnerability

The vulnerability is described in K52145254 of the F5 Networks support knowledgebase:

The Traffic Management User Interface (TMUI), also referred to as the Configuration utility, has a Remote Code Execution (RCE) vulnerability in undisclosed pages.

The article also describes the serious impact:

This vulnerability allows for unauthenticated attackers, or authenticated users, with network access to the Configuration Continue reading

Containerized Python Development – Part 1

Developing Python projects in local environments can get pretty challenging if more than one project is being developed at the same time. Bootstrapping a project may take time, as we need to manage versions, set up dependencies and configure the project. Previously, we would install all project requirements directly in our local environment and then focus on writing the code. But having several projects in progress in the same environment quickly becomes a problem, as we may run into configuration or dependency conflicts. Moreover, when sharing a project with teammates, we would also need to coordinate our environments. To avoid this, we have to define our project environment in a way that makes it easily shareable.

A good way to do this is to create isolated development environments for each project. This can be easily done by using containers and Docker Compose to manage them. We cover this in a series of blog posts, each one with a specific focus.

This first part covers how to containerize a Python service/tool and the best practices for it.

Requirements

To easily exercise what we discuss in this blog post series, we need to install a minimal set Continue reading
