Archive

Category Archives for "Systems"

Automating Red Hat Satellite with Ansible

Red Hat Satellite is a great tool to automate deployment, provisioning, patching and configuration of your infrastructure, but how can you automate Satellite itself?

Using the Red Hat Ansible Automation Platform and the Satellite Ansible Content Collection, of course!

Since you’re already tuning in, you probably don’t need convincing that automation is great; it enables easier collaboration, better accountability, and easier reproducibility. But have you heard about Collections yet?

We’ll show you how you can use the Satellite Ansible Content Collection to manage your Satellite installations via Ansible.

What is the Satellite Ansible Content Collection?

The Satellite Ansible Content Collection is, as you might have guessed already, a set of Ansible modules and plugins to interact with Red Hat Satellite.

These modules are an evolution of the foreman and katello modules previously available in Ansible itself; those are deprecated since Ansible 2.8 and scheduled for removal in 2.12. Because they used a Satellite-specific library, the old modules would not work properly in plain Foreman setups and often lacked features that were not present in Red Hat Satellite. At the same time, using the modules together with Satellite wasn’t easy either, as the used Continue reading
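
For the curious, getting the collection onto a control node is a one-liner. A sketch (the redhat.satellite name assumes Automation Hub is configured as a Galaxy server; theforeman.foreman is the upstream equivalent on Galaxy):

$ ansible-galaxy collection install redhat.satellite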

Adopting the Default Route Table of an AWS VPC using Pulumi and Go

Up until now, when I used Pulumi to create infrastructure on AWS, my code would create all-new infrastructure: a new VPC, new subnets, new route tables, new Internet gateway, etc. One thing bothered me, though: when I created a new VPC, that new VPC automatically came with a default route table. My code, however, would create a new route table and then explicitly associate the subnets with that new route table. This seemed less than ideal. (What can I say? I’m a stickler for details.) While building a Go-based replacement for my existing TypeScript code, I found a way to resolve this duplication of resources. In this post, I’ll show you how to “adopt” the default route table of an AWS VPC so that you can manage it in your Pulumi code.

Let’s assume you are creating a new VPC using code that looks something like this:

vpc, err := ec2.NewVpc(ctx, "testvpc", &ec2.VpcArgs{
	CidrBlock: pulumi.String("10.100.0.0/16"),
	Tags: pulumi.Map{
		"Name": pulumi.String("testvpc"),
		k8sTag: pulumi.String("shared"),
	},
})

(Note that this snippet of code doesn’t show anything happening with the return values of the ec2.NewVpc function, which Go will complain about. Make Continue reading
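
The adoption itself then boils down to one extra resource. Here’s a minimal sketch continuing the snippet above, assuming the pulumi-aws SDK’s ec2.NewDefaultRouteTable (resource names are illustrative, and the same unused-return-value caveat applies):

// Adopt the route table that AWS created with the VPC, rather than
// creating a new route table and associating subnets with it.
defrt, err := ec2.NewDefaultRouteTable(ctx, "default-rt", &ec2.DefaultRouteTableArgs{
	DefaultRouteTableId: vpc.DefaultRouteTableId,
	Tags: pulumi.Map{
		"Name": pulumi.String("default-rt"),
	},
})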

Running a container in Microsoft Azure Container Instances (ACI) with Docker Desktop Edge

Earlier this month Docker announced our partnership with Microsoft to shorten the developer commute between the desktop and running containers in the cloud. We are excited to announce the first release of the new Docker Azure Container Instances (ACI) experience today and wanted to give you an overview of how you can get started using it.

The new Docker and Microsoft ACI experience allows developers to easily move between working locally and in the cloud with ACI, using the same Docker CLI experience they use today! We have done this by expanding the existing docker context command to support ACI as a new backend. We worked with Microsoft to target ACI because we felt its performance and ‘zero cost when nothing is running’ model made it a great place to jump into running containers in the cloud.

ACI is a Microsoft serverless container solution for running a single Docker container or a service composed of a group of multiple containers defined with a Docker Compose file. Developers can run their containers in the cloud without needing to set up any infrastructure and take advantage of features such as mounting Azure Storage and GitHub repositories as volumes. For production cases, you can Continue reading
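
As a rough sketch of that workflow (the context name is a placeholder; docker context create aci prompts for your Azure subscription and resource group):

$ docker login azure
$ docker context create aci myacicontext
$ docker context use myacicontext
$ docker run -d -p 80:80 nginx

From there, familiar commands like docker ps run against ACI instead of your local engine.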

Adding integration tests to Ansible Content Collections

In the previous installment of our "let us create the best Ansible Content Collection ever" saga, we covered the DigitalOcean-related content migration process. What we ended up with was a fully functioning Ansible Content Collection that unfortunately had no tests. But not for long; we will be adding an integration test for the droplet module.


We do not need tests, right?

If we were able to write perfect code all of the time, there would be no need for tests. But unfortunately, this is not how things work in real life. Any modestly useful software has deadlines attached, which usually means that developers need to strike a compromise between polish and delivery speed.

For us, the Ansible Content Collections authors, having a semi-decent Collection of integration tests has two main benefits:

  1. We know that the tested code paths function as expected and produce desired results.
  2. We can catch the breaking changes in the upstream product that we are trying to automate.

The second point is especially crucial in the Ansible world, where one team of developers is usually responsible for the upstream product, and a separate group maintains Ansible content.

With the "why integration tests" behind us, we can Continue reading
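
Once an integration test target exists, running it is a single command. A sketch (the droplet target name is illustrative, and the exact invocation may differ):

$ ansible-test integration --docker default droplet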

Getting AWS Availability Zones using Pulumi and Go

I’ve written several different articles on Pulumi (take a look at all articles tagged “Pulumi”), the infrastructure-as-code tool that allows users to define their infrastructure using a general-purpose programming language instead of a domain-specific language (DSL). Thus far, my work with Pulumi has leveraged TypeScript, but moving forward I’m going to start sharing more Pulumi code written using Go. In this post, I’ll share how to use Pulumi and Go to get a list of Availability Zones (AZs) from a particular region in AWS.

Before I proceed, I feel like it is important to provide the disclaimer that I’m new to Go (and therefore still learning). There are probably better ways of doing what I’m doing here, and so I welcome all constructive feedback on how I can improve.

With that disclaimer out of the way, allow me to first provide a small bit of context around this code. When I’m using Pulumi to manage infrastructure on AWS, I like to try to keep things as region-independent as possible. Therefore, I try to avoid hard-coding things like the number of AZs or the AZ names, and prefer to gather that information dynamically—which is what this code does.

Here’s the Continue reading
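
The heart of it is a single invoke; a minimal sketch, assuming the pulumi-aws Go SDK’s aws.GetAvailabilityZones function (the surrounding pulumi.Run scaffolding is omitted):

azs, err := aws.GetAvailabilityZones(ctx, &aws.GetAvailabilityZonesArgs{})
if err != nil {
    return err
}
// azs.Names holds the AZ names for the configured region; ranging over
// it lets you create one subnet per AZ without hard-coding anything.
for _, azName := range azs.Names {
    fmt.Println(azName)
}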

Top 5 Questions from “How to become a Docker Power User” session at DockerCon 2020

This is a guest post from Brian Christner. Brian has been a Docker Captain since 2016, hosts The Byte podcast, and is Co-Founder & Site Reliability Engineer at 56K.Cloud. At 56K.Cloud, he helps companies adopt technologies and concepts like Cloud, Containers, and DevOps. 56K.Cloud is a technology company from Switzerland focusing on Automation, IoT, Containerization, and DevOps.

It was a fantastic experience hosting my first ever virtual conference session. The commute to my home office was great, and I even picked up a coffee on the way before my session started. No more waiting in lines, queueing for food, or sitting on the conference floor somewhere in a corner to check emails. 

The “DockerCon 2020 that’s a wrap” blog post highlighted that my session, “How to Become a Docker Power User using VS Code,” was one of the most popular sessions from DockerCon. Docker asked if I could write a recap and summarize some of the top questions that appeared in the chat. Absolutely.

Honestly, I liked the presenter/audience interaction more than at an in-person conference. Typically, a presenter broadcasts their content to a room full of participants, and if you are lucky and Continue reading

Containerize Your Go Developer Environment – Part 3

In this series of blog posts, we show how to put in place an optimized containerized Go development environment. In part 1, we explained how to start a containerized development environment for local Go development, building an example CLI tool for different platforms. Part 2 covered how to add Go dependencies, caching for faster builds and unit tests. This third and final part is going to show you how to add a code linter, a GitHub Action CI, and some extra build optimizations.

Adding a linter

We’d like to automate checking for good programming practices as much as possible, so let’s add a linter to our setup. The first step is to modify the Dockerfile:

# syntax = docker/dockerfile:1-experimental

FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS base
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download
COPY . .


FROM base AS build
ARG TARGETOS
ARG TARGETARCH
RUN --mount=type=cache,target=/root/.cache/go-build \
  GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .


FROM base AS unit-test
RUN --mount=type=cache,target=/root/.cache/go-build \
  go test -v .


FROM golangci/golangci-lint:v1.27-alpine AS lint-base

FROM base AS lint
COPY --from=lint-base /usr/bin/golangci-lint /usr/bin/golangci-lint
RUN --mount=type=cache,target=/root/.cache/go-build \
  --mount=type=cache,target=/root/.cache/golangci-lint \
  golangci-lint run --timeout 10m0s ./...


FROM scratch AS bin-unix
COPY Continue reading
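
As with the unit-test stage from part 2, the new lint stage can be exercised on its own by targeting it in the build (following the same pattern as the other stages):

$ docker build . --target lint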

Now Available: Red Hat-Maintained Content Collections on Automation Hub

Today marks an important milestone for Red Hat Ansible Automation Platform subscribers: the initial release of Red Hat-maintained Ansible Content Collections has been published to Automation Hub, for automating select platforms from Arista, AWS, Cisco, IBM, Juniper, Splunk and more. The addition of these 17 Red Hat-maintained Collections on Automation Hub brings the total to 47 Collections certified and published since September 2019. Finally, we are thrilled to have Ansible Collections for automating Red Hat Insights and Red Hat Satellite included as part of this release as well.

Why is this significant? First, it is important to understand that the Ansible project has recently completed an effort to decouple the Ansible executable from most of the content, and all migrated content now resides in new upstream repositories on GitHub. This change has had a ripple effect on backend development, testing, publishing, and maintenance of Ansible content. The good news is that high-quality features can now be delivered more quickly, asynchronously from Ansible releases.

Today’s announcement highlights the successful culmination of the following: 

  1. Migration of Ansible-maintained content from Ansible project to Collections. 
  2. Releasing new features and functionality since Ansible 2.9, without having to wait Continue reading

Fixes for Some Vagrant Issues on Fedora

Yesterday I needed to perform some testing of an updated version of some software that I use. (I was conducting the testing because this upgrade contained some breaking changes, and needed to understand how to mitigate the breaking changes.) So, I broke out Vagrant (with the Libvirt provider) on my Fedora laptop—and promptly ran into a couple issues. Fortunately, these issues were relatively easy to work around, but since the workarounds were non-intuitive I wanted to share them here for the benefit of others.

If you’re unfamiliar with Vagrant, have a look at my quick introduction to Vagrant. The “TL;DR” is that Vagrant can offer users a consistent workflow for creating and destroying VMs across a fairly wide number of platforms, including both local providers (like VirtualBox or VMware Fusion/VMware Workstation) and cloud providers (such as AWS and Azure). I’ve written a fair amount on Vagrant, so feel free to browse all the “Vagrant”-tagged posts on the site for more information.

Likewise, if you’re unfamiliar with the Libvirt provider, check out this post from 2017 on using Vagrant with Libvirt on Fedora 27.

In my testing yesterday, I ran into two networking-related issues. The first of them was Continue reading

DockerCon 2020: The Microsoft Sessions

This is the second post of our series of blog articles focusing on the key developer content that we are curating from DockerCon LIVE 2020. Increasingly, we are seeing more and more developers targeting Microsoft architectures and Azure for their containerized application deployments. Microsoft has always had a rich set of developer tools including VS Code and GitHub that work with Docker tools. 

One of the biggest developments for developers using Windows 10 is the release of WSL 2 (Windows Subsystem for Linux). Instead of using a translation layer to convert Linux kernel calls into Windows calls, WSL 2 now offers its own isolated Linux kernel running on a thin version of the Hyper-V hypervisor. Check out Simon Ferquel’s session on WSL 2 as well as Paul Yuknewicz’s session on apps running in Azure; both are valuable sessions on using Docker with Microsoft tools and technologies.

Docker Desktop + WSL 2 Integration Deep Dive

Simon Ferquel – Docker

Simon’s session provides a deep dive on how Docker Desktop on Windows works with WSL 2 to provide a better developer experience. This presentation will give you a better understanding of how Docker Desktop and WSL 2 Continue reading

Technology Short Take 128

Welcome to Technology Short Take #128! It looks like I’m settling into a roughly monthly cadence with the Technology Short Takes. This time around, I’ve got a (hopefully) interesting collection of links. The collection seems a tad heavier than normal in the hardware and security sections, probably due to new exploits discovered in Intel’s speculative execution functionality. In any case, here’s what I’ve gathered for you. Enjoy!

Networking

Servers/Hardware

  • This article from Carlos Fenollosa talks about his experience with a new 2020 MacBook Pro compared to his 2013-era MacBook Air. While there is some Continue reading

Containerize your Go Developer Environment – Part 2

This is the second part in a series of posts where we show how to use Docker to define your Go development environment in code. The goal of this is to make sure that you, your team, and the CI are all using the same environment. In part 1, we explained how to start a containerized development environment for local Go development, building an example CLI tool for different platforms and shrinking the build context to speed up builds. Now we are going to go one step further and learn how to add dependencies to make the project more realistic, caching to make the builds faster, and unit tests.

Adding dependencies

The Go program from part 1 is very simple and doesn’t have any Go dependencies yet. Let’s add one: the commonly used github.com/pkg/errors package:

package main

import (
    "fmt"
    "os"
    "strings"

    "github.com/pkg/errors"
)

func echo(args []string) error {
    if len(args) < 2 {
        return errors.New("no message to echo")
    }
    _, err := fmt.Println(strings.Join(args[1:], " "))
    return err
}

func main() {
    if err := echo(os.Args); err != nil {
        fmt.Fprintf(os.Stderr, "%+v\n", err)
        os.Exit(1)
    }
}

Our example program is now a simple echo program that writes out the arguments the user provides, or prints “no message to echo” and a stack trace if nothing is specified.

We will use Go modules to handle this dependency. Running the following commands will create the go.mod and go.sum files:

$ go mod init
$ go mod tidy

Now when we run the build, we will see that the dependencies are downloaded each time we build:

$ make
[+] Building 8.2s (7/9)
 => [internal] load build definition from Dockerfile                      0.0s
...
 => [build 3/4] COPY . .                                                  0.1s
 => [build 4/4] RUN GOOS=darwin GOARCH=amd64 go build -o /out/example .   7.9s
 => => # go: downloading github.com/pkg/errors v0.9.1

This is clearly inefficient and slows things down. We can fix this by downloading our dependencies as a separate step in our Dockerfile:

FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS build
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download
COPY . .
ARG TARGETOS
ARG TARGETARCH
RUN GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .


FROM scratch AS bin-unix
COPY --from=build /out/example /
...

Notice that we’ve added the go.* files and downloaded the modules before adding the rest of the source. This allows Docker to cache the modules, as it will only rerun these steps if the go.* files change.

Caching

Separating the downloading of our dependencies from our build is a great improvement but each time we run the build, we are starting the compile from scratch. For small projects this might not be a problem but as your project gets bigger you will want to leverage Go’s compiler cache.

To do this, you will need to use BuildKit’s Dockerfile frontend (https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md). Our updated Dockerfile is as follows:

# syntax = docker/dockerfile:1-experimental

FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS build
ARG TARGETOS
ARG TARGETARCH
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download
COPY . .
RUN --mount=type=cache,target=/root/.cache/go-build \
  GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .


FROM scratch AS bin-unix
COPY --from=build /out/example /
...

Notice the # syntax line at the top of the Dockerfile that selects the experimental Dockerfile frontend, and the --mount option attached to the RUN command. This mount option means that each time the go build command is run, the container will have the cache mounted to Go’s compiler cache folder.

Benchmarking this change for the example binary on a 2017 MacBook Pro 13”, I see that a small code change takes 11 seconds to build without the cache and less than 2 seconds with it. This is a huge improvement!

Adding unit tests

All projects need tests! We’ll add a simple test for our echo function in a main_test.go file:

package main

import (
    "testing"

    "github.com/stretchr/testify/require"
)

func TestEcho(t *testing.T) {
    // Test happy path
    err := echo([]string{"bin-name", "hello", "world!"})
    require.NoError(t, err)
}

func TestEchoErrorNoArgs(t *testing.T) {
    // Test empty arguments
    err := echo([]string{})
    require.Error(t, err)
}

The second test ensures that we get an error if the echo function is passed an empty list of arguments.

We will now want another build target for our Dockerfile so that we can run the tests and build the binary separately. This will require a refactor into a base stage and then unit-test and build stages:

# syntax = docker/dockerfile:1-experimental

FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS base
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download
COPY . .


FROM base AS build
ARG TARGETOS
ARG TARGETARCH
RUN --mount=type=cache,target=/root/.cache/go-build \
  GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .


FROM base AS unit-test
RUN --mount=type=cache,target=/root/.cache/go-build \
  go test -v .


FROM scratch AS bin-unix
COPY --from=build /out/example /
...

Note that go test uses the same cache as the build, so we mount the cache for this stage too. This allows Go to only rerun tests when there have been code changes, which makes the test runs quicker.

We can also update our Makefile to add a test target:

all: bin/example
test: unit-test

PLATFORM=local

.PHONY: bin/example
bin/example:
    @docker build . --target bin \
    --output bin/ \
    --platform ${PLATFORM}

.PHONY: unit-test
unit-test:
    @docker build . --target unit-test
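
With those targets in place, day-to-day usage stays simple:

$ make        # build bin/example for your local platform
$ make test   # run the unit tests in a container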

What’s next?

In this post we have seen how to add Go dependencies efficiently, caching to make builds faster, and unit tests to our containerized Go development environment. In the next and final post of the series, we are going to complete our journey and learn how to add a linter, set up GitHub Actions CI, and apply some extra build optimizations.

You can find the finalized source for this example on my GitHub: https://github.com/chris-crone/containerized-go-dev

You can read more about the experimental Dockerfile syntax here: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md

If you’re interested in build tooling at Docker, take a look at the Buildx repository: https://github.com/docker/buildx

Read the whole blog post series here.


Tolerable Ansible

Ansible Playbooks are very easy to read and their linear execution makes it simple to understand what will happen while a playbook is executing. Unfortunately, in some circumstances, the things you need to automate may not function in a linear fashion. For example, I was once asked to perform the following tasks with Ansible:

  • Notify an external patching system to patch a Windows target
  • Wait until the patching process was completed before moving on with the remaining playbook tasks

While the request sounded simple, upon further investigation it would prove more challenging for the following reasons:

  • The system patched the server asynchronously from the call; i.e., the call into the patching system would simply put the target node into a queue to be patched
  • The patching process itself could last for several hours
  • As part of the patching process the system would reboot no fewer than two times, with an unspecified maximum depending on the patches that needed to be applied
  • Due to the specific implementation of the patching system the only reliable way to tell if patching was completed was by interrogating a registry entry on the client
  • If the patching took too long to complete additional Continue reading

How to Develop Inside a Container Using Visual Studio Code Remote Containers

This is a guest post from Jochen Zehnder. Jochen is a Docker Community Leader and works as a Site Reliability Engineer for 56K.Cloud. He started his career as a Software Developer, where he learned the ins and outs of creating software. He is not only focused on development but also on automation to bridge the gap to the operations side. At 56K.Cloud he helps companies adopt technologies and concepts like Cloud, Containers, and DevOps. 56K.Cloud is a technology company from Switzerland focusing on Automation, IoT, Containerization, and DevOps.

Jochen Zehnder joined 56K.Cloud in February, after working as a software developer for several years. He always tries to make life easier for everybody involved in the development process. One VS Code feature that excels at this is the Visual Studio Code Remote – Containers extension. It is one of many extensions of the Visual Studio Remote Development feature.

This post is based on the work Jochen did for the 56K.Cloud internal handbook. It uses Jekyll to generate a static website out of markdown files. This is a perfect example of how to make lives easier for everybody. Nobody should know how to install, Continue reading

Using kubectl via an SSH Tunnel

In this post, I’d like to share one way (not the only way!) to use kubectl to access your Kubernetes cluster via an SSH tunnel. In the future, I may explore some other ways (hit me up on Twitter if you’re interested). I’m sharing this information because I suspect it is not uncommon for folks deploying Kubernetes on the public cloud to want to deploy their clusters in a way that does not expose them to the Internet. Given that the use of SSH bastion hosts is not uncommon, it seemed reasonable to show how one could use an SSH tunnel to reach a Kubernetes cluster behind an SSH bastion host.

If you’re unfamiliar with SSH bastion hosts, see this post for an overview.

To use kubectl via an SSH tunnel through a bastion host to a Kubernetes cluster, there are two steps required:

  1. The Kubernetes API server needs an appropriate Subject Alternative Name (SAN) on its certificate.
  2. The Kubeconfig file needs to be updated to reflect the tunnel details.

Ensuring an Appropriate SAN for the API Server

As is the case with just about any TLS-secured connection, if the destination to which you’re connecting with kubectl doesn’t match any of Continue reading
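
Mechanically, the tunnel step is a single command (hosts and addresses here are placeholders; the API server’s private address must be reachable from the bastion):

$ ssh -L 6443:10.100.0.10:6443 user@bastion.example.com

With the tunnel up, the cluster’s server line in the Kubeconfig file points at https://localhost:6443, which is exactly why the API server’s certificate needs a SAN covering the name you connect to.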

DockerCon 2020: Top Rated Sessions – The Fundamentals

Of all the sessions from DockerCon LIVE 2020, the Best Practices + How To’s track sessions received the most live and on-demand views. Not only were these sessions highly viewed, they were also highly rated. We thought this would be the case, given that many developers are learning Docker for the first time as application containerization experiences broad adoption within IT shops. In the recently released 2020 Stack Overflow Developer Survey, Docker ranked as the #1 most wanted platform. The data is clear…developers love Docker!

This post begins our series of blog articles focusing on the key developer content that we are curating from DockerCon. What better place to start than with the fundamentals? Developers are looking for the best content by the top experts to get started with Docker. These are the top sessions from the Best Practices + How To’s track.

How to Get Started with Docker
Peter McKee – Docker

Peter’s session was the top session based on views across all of the tracks. He does an excellent job focusing on the fundamentals of containers and how to go from code to cloud. This session covers getting Docker installed, writing Continue reading

Containerize Your Go Developer Environment – Part 1

When joining a development team, it takes some time to become productive. This is usually a combination of learning the code base and getting your environment set up. Often there will be an onboarding document of some sort for setting up your environment, but in my experience it is never up to date and you always have to ask someone for help with what tools are needed.

This problem continues as you spend more time in the team. You’ll find issues because the version of the tool you’re using is different from that used by someone on your team, or, worse, the CI. I’ve been on more than one team where “works on my machine” has been exclaimed or written in all caps on Slack, and I’ve spent a lot of time debugging things on the CI, which is incredibly painful.

Many people use Docker as a way to run application dependencies, like databases, while they’re developing locally and for containerizing their production applications. Docker is also a great tool for defining your development environment in code to ensure that your team members and the CI are all using the same set of tools.

We do a lot of Go development Continue reading

Making it Easier to Get Started with Cluster API on AWS

I’ve written a few articles about Cluster API (you can see a list of the articles here), but even though I strive to make my articles easy to understand and easy to follow along with, many of those articles make an implicit assumption: that readers are perhaps already somewhat familiar with Linux, Docker, tools like kind, and perhaps even Kubernetes. Today I was thinking, “What about folks who are new to this? What can I do to make it easier?” In this post, I’ll talk about the first idea I had: creating a “bootstrapper” AMI that enables new users to quickly and easily jump into the Cluster API Quick Start.

Normally, in order to use the Quick Start, there are some prerequisites that are needed first (these are all clearly listed on the Quick Start page):

  • You need kubectl installed
  • You need kind (which in turn requires Docker) or an existing Kubernetes cluster up and running

For Linux users (like myself), these prerequisites are pretty easy/simple to handle. But what if you’re a Windows or Mac user? Yes, you could use Docker Desktop and then install kind (or use docker-machine, if you’re feeling adventurous). Then you’d Continue reading
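
For reference, once Docker is present, the remaining prerequisites amount to only a couple of commands (a sketch; exact flags depend on the Cluster API release you’re following):

$ kind create cluster
$ clusterctl init --infrastructure aws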

How To Manage Docker Hub Organizations and Teams

Docker Hub has two major constructs to help with managing users’ access to your repository images: Organizations and Teams. Organizations are a collection of Teams, and Teams are a collection of Docker IDs.

There are a variety of ways of configuring your Teams within your Organization. In this blog post we’ll use a fictitious software company named Stark Industries, which has a couple of development teams: one that works on the front-end of the application and another that works on the back-end. They also have a QA team and a DevOps team.

We’ll want to set up our Teams so that each engineering team can push and pull the images that they create. We’ll give the DevOps team privileges to pull images from the dev teams’ repos and the ability to push images to the repos that they own. We’ll also give the QA team read-only access to all the repos.

Organizations

In Docker Hub, an organization is a collection of teams. Image repositories can be created at the organization level. We are also able to configure notifications and link to source code repositories.

Let’s set up our Organization.

Open your favorite browser and navigate Continue reading

Creating a Multi-AZ NAT Gateway with Pulumi

I recently had a need to test a configuration involving the use of a single NAT Gateway servicing multiple private subnets across multiple availability zones (AZs) within a single VPC. While there are notable caveats with such a design (see the “Caveats” section at the bottom of this article), it could make sense in some use cases. In this post, I’ll show you how I used TypeScript with Pulumi to automate the creation of this design.

For the most part, if you’re familiar with Pulumi and using TypeScript with Pulumi, this will be pretty straightforward. The code I’ll show you makes a couple assumptions:

  1. It assumes you’ve already created the VPC and the subnets earlier in the code. I’ll reference the VPC object as vpc.
  2. I’ll assume you’ve already created subnets in said VPC, and that the subnet-to-AZ ratio is 1:1 (exactly one subnet of each type—public or private—in each AZ). The code will reference the subnet IDs as pubSubnetIds (for public subnets) or privSubnetIds (for private subnets). (How to create the subnets and capture the list of IDs is left as an exercise for the reader. If you’d be interested in seeing how I do it, let me know. Continue reading
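
To ground assumption #2, here’s a minimal TypeScript sketch of the NAT gateway piece itself (names are illustrative; vpc and pubSubnetIds come from the earlier code):

import * as aws from "@pulumi/aws";

// One EIP and a single NAT gateway, placed in the first public subnet.
const natEip = new aws.ec2.Eip("nat-eip", { vpc: true });
const natGateway = new aws.ec2.NatGateway("natgw", {
    subnetId: pubSubnetIds[0],
    allocationId: natEip.id,
});

// One private route table whose default route exits via the NAT gateway;
// the private subnets in every AZ then associate with this single table.
const privRouteTable = new aws.ec2.RouteTable("priv-rt", {
    vpcId: vpc.id,
    routes: [{ cidrBlock: "0.0.0.0/0", natGatewayId: natGateway.id }],
});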