Earlier in August, we hosted a series of virtual events to introduce Docker Enterprise 3.0. Thousands of you registered and joined us, and many of you asked great questions. This blog contains the top questions and answers from the event series.
Q: Can Docker Enterprise be used on AWS and other cloud providers?
A: Yes! Docker Enterprise, including the Docker Universal Control Plane (UCP) and Docker Trusted Registry (DTR), can be deployed to any of the leading cloud environments, including AWS, Azure and GCP. With Docker Enterprise 3.0, we also launched the Docker Cluster CLI plugin for use with Docker Certified Infrastructure. The plugin (now supporting AWS and Azure) allows for simple installation and upgrading of Docker Enterprise on selected cloud providers.
Q: Is Docker Cluster only available in the public cloud, or is it possible to add local machines or VMs?
A: Additional support for VMware vSphere environments is coming shortly. If you have other platforms that need to be supported, please engage with your account team to provide that feedback!
Q: Does Docker Kubernetes Service (DKS) work with both on-premises and other Kubernetes environments such as EKS, AKS, Continue reading
With AnsibleFest less than a month away we wanted to take a closer look at each of the session tracks to help you make your experience as personalized as possible. We talked with Track Lead Bill Nottingham and asked him a few questions about the Ansible Integrations Track and sessions within the track.
Who is this track best for?
In Ansible Integrations, we’re highlighting integrations of Ansible with other technologies. This track is best for people who manage a wide variety of infrastructure and are interested in how Ansible can help them manage in new areas. It’s also useful for those interested in building integrations with Ansible for their own platforms.
What topics will this track cover?
In Ansible Integrations, we’ll highlight the impact of Ansible combined with a variety of technologies and use cases. We will highlight how Ansible allows easy management of application lifecycles, how Ansible helps enable management of containers in the public cloud, how XLAB worked to build certified collections for Ansible, how to customize your base operating system image and much more!
What should attendees expect to learn from this track?
Attendees should expect to learn Continue reading
We had the chance recently to sit down with the Citizens Bank mortgage division and ask them how they’ve incorporated innovation into a regulated and traditional business that is still very much paper-based.
The most important lesson they’ve learned: you have to be willing to “fail fearlessly,” but to do that, you also have to minimize the consequences and cost of failure so you can constantly try new ideas. With Docker Enterprise, the team has been able to take ideas from concept to production in as little as a day.
Here’s what they told us. You can also catch the highlights in this 2-minute video:
Matt Rider, CIO Mortgage Division: Our focus is changing the mortgage technology experience at the front end with the borrower and on the back end for the loan officers and the processors. How do we bring those two together? How do we reduce the aggravation that comes with obtaining a mortgage?
Matt: When I came here I recognized that we were never going to achieve our vision if we kept doing things the same way. We wanted to reduce the aggravation that comes with obtaining a mortgage. Continue reading
Security Automation seems to be a growing topic of interest. This year at AnsibleFest we will have a track for Security Automation. We talked with Track Lead Massimo Ferrari to learn more about the Security Automation track and the sessions within it.
Who is this track best for?
This track is intended for professionals in security operations and vulnerability management who want to learn how Ansible can support and simplify their activities, and automation experts tasked to expand the footprint of their automation practice and support security teams in their organization.
What topics will this track cover?
Sessions included in this track cover how to introduce and consume Ansible Automation at different stages of maturity of a security or cross-functional organization. They include guidance from Red Hat subject matter experts, customer stories, and technical deep dives from partners that are suitable for both automation veterans and security professionals looking at automation for the first time.
What should attendees expect to learn from this track?
People attending the sessions in this track will learn how Ansible can be leveraged in security environments to support activities like incident investigation and response, compliance enforcement and Continue reading
Yesterday I published a high-level overview of Cluster API (CAPI) that provides an introduction to some of the concepts and terminology in CAPI. In this post, I’d like to walk readers through actually using CAPI to bootstrap a Kubernetes cluster on AWS.
It’s important to note that all of the information shared here is also found in the “Getting Started” guide in the AWS provider’s GitHub repository. My purpose here is to provide an additional walkthrough that supplements the official documentation, not supplants it, and to spread the word about how the process works.
Four basic steps are involved in bootstrapping a Kubernetes cluster on AWS using CAPI:
The following sections take a look at each of these steps in a bit more detail. First, though, I think it’s important to mention that CAPI is still in its early days (it’s currently at v1alpha1). As such, it’s possible that commands may (will) change, and API specifications may (will) change as further development Continue reading
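As a rough sketch of what that workflow looked like in the v1alpha1 timeframe, the commands below are heavily hedged: the subcommands, flags, and file names (cluster.yaml, machines.yaml, provider-components.yaml, addons.yaml) are assumptions drawn from the CAPA “Getting Started” guide of that era and may well have changed since.
# sketch only: clusterctl spins up a temporary kind bootstrap cluster, creates the
# target cluster on AWS from the supplied manifests, then pivots the CAPI components
# into it; the manifest file names are placeholders generated per the guide
clusterctl create cluster \
  --provider aws \
  --bootstrap-type kind \
  -c cluster.yaml \
  -m machines.yaml \
  -p provider-components.yaml \
  -a addons.yaml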
If you’ve worked in IT for a few years, you’ve seen it happen. You select an application framework, operating system, database platform, or other infrastructure because it meets the checklist, the price is right, or sometimes because of internal politics. You quickly discover that it doesn’t play well with other solutions or across platforms — except of course it’s “easy and seamless” when used with offerings from the same vendor.
But try telling your developers that they can’t use their favorite framework or development toolset, or that they have to use a specific operating system for everything they do. If developers feel like they don’t have flexibility, they quickly adopt their own tools, creating a second wave of shadow IT.
And it doesn’t just affect developers. IT operations and security get bogged down in managing multiple systems and software sprawl. The business suffers because efficiency and innovation lag when teams get caught up in fighting fires.
Below are 5 things that can go wrong when you get locked in to an infrastructure platform:
Will the platform you pick work with any combination of public and private clouds? Will you get cornered into Continue reading
In this post, I’d like to provide a high-level introduction to the Kubernetes Cluster API. The aim of Cluster API (CAPI, for short) is, as outlined in the project’s GitHub repository, “a Kubernetes project to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management”. This high-level introduction serves to establish some core terminology and concepts upon which I’ll build in future posts about CAPI.
First, let’s start with some terminology:
Bootstrap cluster: The bootstrap cluster is a temporary cluster used by CAPI. It’s used to create a more permanent cluster that is CAPI-enabled (the management cluster). Typically, the bootstrap cluster is created locally using kind (other options are possible; see the short sketch after these definitions), and is destroyed once the management cluster is up and running.
Management cluster: The CAPI-enabled cluster created by the temporary bootstrap cluster is the management cluster. The management cluster is long-lived, is running the CAPI provider components, and understands the CAPI Custom Resource Definitions (CRDs). Typically, users would use the management cluster to create and manage the lifecycle of one or more workload clusters.
Workload cluster: This is a cluster whose lifecycle is managed by CAPI via the management cluster, but isn’t actually CAPI-enabled itself and it doesn’t manage Continue reading
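To make the bootstrap cluster idea a little more concrete, here is a minimal sketch of creating (and later discarding) one locally with kind; the cluster name is arbitrary:
# create a temporary local bootstrap cluster (the name is just an example)
kind create cluster --name capi-bootstrap
# delete it once the management cluster is up and running
kind delete cluster --name capi-bootstrap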
This is the liveblog from the day 1 general session at VMworld 2019. This year the event is back at Moscone Center in San Francisco, and VMware has already released some juicy news (see here, here, and here) in advance of the keynote this morning, foreshadowing what Pat is expected to talk about.
The keynote kicks off with the usual inspirational video, this one incorporating themes and references from a number of high-tech movies, including “The Matrix” and “Inception,” among others. As the video concludes, Pat Gelsinger takes the stage promptly at 9am.
Gelsinger speaks briefly of his 7 years at VMware (this is his 8th VMworld), then jumps into the content of his presentation with the theme of this morning’s session: “Tech in the Age of Any”. Along those lines, Gelsinger talks about the diversity of the VMworld audience, welcomes the attendees in Klingon, and speaks very quickly to the Pivotal and Carbon Black acquisitions that were announced only a few days ago.
Shifting gears, Gelsinger talks about “digital life” and how that translates into millions of applications and billions of devices and billions of users. He talks about how 5G, Edge, and AI are going Continue reading
On Wednesday we took a closer look at the Networking Automation track. Soon you will be able to start building out your schedule for AnsibleFest, so we want to help you figure out what tracks and sessions will be best for you! We talked with Track Lead Jake Jackson to learn more about the Getting Started track and the sessions within it.
Who is this track best for?
This track is best for people who are new to Ansible, whether that is in application or in concept. Many of these breakout sessions are introductory in nature for people who want to learn more about Ansible and how it works.
What topics will this track cover?
This track will cover several topics. It includes introductions to Ansible and Ansible Tower, and a deeper dive into Ansible inventories. It also discusses bite-size ways to automate and manage Windows the same way you would Linux. There will also be a session that introduces using Ansible in CI and analyzing roles.
What should attendees expect to learn from this track?
Attendees can expect to learn the basics of Ansible and Ansible Tower from this track. They Continue reading
Welcome to Technology Short Take #118! Next week is VMworld US in San Francisco, CA, and I’ll be there live-blogging and meeting up with folks to discuss all things Kubernetes. If you’re going to be there, look me up! Otherwise, I leave you with this list of links and articles from around the Internet to keep you busy. Enjoy!
In all of the excitement and buzz around Kubernetes, one important factor in the conversation that seems to be glossed over is how and where containerized applications are built. Going back to Docker’s roots, it was developers who were the first ones to adopt Docker containers. It solved their own local development issues and made it easier and faster to get applications out the door.
Fast forward 5 years, and developers are more important than ever. They build modern apps and modernize existing apps that are the backbone of organizations. If you’re in IT operations and selecting application platforms, one of the biggest mistakes you can make is making this decision in isolation, without development buy-in.
In the early days of public cloud, developers started going around IT to get fast access to computing resources, creating the first round of “Shadow IT”. Today, most large enterprises have embraced cloud applications and infrastructure, and work collaboratively across application development and operations teams to serve their needs.
But there’s a risk we’ll invite the same thing to happen again by making a container platform decision that doesn’t involve your developers. Here are 3 reasons to Continue reading
Now that the agenda for AnsibleFest is live, we wanted to take a closer look at each of the tracks that we will offer. Soon you will be able to start building out your schedule for AnsibleFest, so we want to help you figure out what tracks and sessions will be best for you! We talked with Track Lead Andrius Benokraitis to learn more about the Network Automation track and the sessions within it.
Who is this track best for?
This track is best for Network Operators, Network Engineers, Cloud Operators, and DevOps Engineers. It is great for people who are looking to learn more about automating the configuration, management and operations of a computer network.
What topics will this track cover?
This track will cover topics that include the operational application of Red Hat Ansible Automation for network use cases, including devices such as switches, routers, load balancers, and firewalls. We will also discuss different points of view: developers of modules vs. users and implementers of modules and roles. There will also be a discussion around how enterprises are using Ansible Automation as a platform for large-scale network deployments.
What should attendees expect Continue reading
As I mentioned back in May in this post on creating a sandbox for learning Pulumi, I’ve started using Pulumi more and more for my infrastructure-as-code needs. I did switch from JavaScript to TypeScript (which I know compiles to JavaScript on the back-end, but the strong typing helps a new programmer like me). Recently I had a need to create some resources in AWS using Pulumi, and—for reasons I’ll explain shortly—many of the “canned” Pulumi examples didn’t cut it for my use case. In this post, I’ll share how I created tagged subnets across AWS availability zones (AZs) using Pulumi.
In this particular case, I was using Pulumi to create all the infrastructure necessary to spin up an AWS-integrated Kubernetes cluster. That included a new VPC, subnets in the different AZs for that region, an Internet gateway, route tables and route table associations, security groups, an ELB for the control plane, and EC2 instances. As I’ve outlined in my latest post on setting up an AWS-integrated Kubernetes 1.15 cluster using kubeadm, these resources on AWS require specific AWS tags to be assigned in order for the AWS cloud provider to work.
As I started working on this, Continue reading
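For context, the surrounding workflow is just the usual Pulumi CLI loop; the stack name and region below are examples, not anything specific to this project:
# create a stack and set the target AWS region (stack name and region are examples)
pulumi stack init dev
pulumi config set aws:region us-west-2
# preview the planned changes, then create the VPC, subnets, and related resources
pulumi preview
pulumi up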
LuCi is a very popular OpenWrt web interface. For an average user, LuCi is probably one of the main deciding factors between giving OpenWrt a try in the first place or moving on to another, more user-friendly firmware like DD-WRT.
If you’re an advanced user, however, most of the time you may find yourself adjusting settings either through UCI or by editing the config files manually. In fact, at some point you may realize you’re not using LuCi at all and that it’s just sitting there idle: a component that’s not only using resources, but also providing an extra attack surface.
Now, one could just disable uHTTPd to address some of these concerns, but LuCi installs too many dependencies, and cluttering a router with things that you’ll hardly ever use is not the best use of the very limited storage space available in most routers.
Another method that some use to “remove” LuCi is by issuing:
opkg --autoremove remove luci
This may seem to work, but in reality LuCi packages are not really removed this way and the related files will only be masked by OverlayFS. This is because the packages are built into the firmware itself.
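A quick way to see this for yourself is to compare the installed package list and overlay usage before and after the “removal”; roughly:
# list the LuCi-related packages opkg reports as installed
opkg list-installed | grep luci
# check overlay usage; for packages built into the firmware, removing them
# this way does not actually reclaim any flash space
df -h /overlay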
While OpenWrt Continue reading
If you’ve used kubeadm to bootstrap a Kubernetes cluster, you probably know that at the end of the kubeadm init command to bootstrap the first node in the cluster, kubeadm prints out a bunch of information: how to copy over the admin Kubeconfig file, and how to join both control plane nodes and worker nodes to the cluster you just created. But what if you didn’t write these values down after the first kubeadm init command? How does one go about reconstructing the proper kubeadm join command?
Fortunately, the values needed for a kubeadm join command are relatively easy to find or recreate. First, let’s look at the values that are needed.
Here’s the skeleton of a kubeadm join command for a control plane node:
kubeadm join <endpoint-ip-or-dns>:<port> \
--token <valid-bootstrap-token> \
--discovery-token-ca-cert-hash <ca-cert-sha256-hash> \
--control-plane \
--certificate-key <certificate-key>
And here’s the skeleton of a kubeadm join command for a worker node:
kubeadm join <endpoint-ip-or-dns>:<port> \
--token <valid-bootstrap-token> \
--discovery-token-ca-cert-hash <ca-cert-sha256-hash>
As you can see, the information needed for the worker node is a subset of the information needed for a control plane node.
Here’s how to find or recreate all the various pieces of information you need:
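As a minimal sketch of the kinds of commands involved (run on an existing control plane node, assuming the default kubeadm certificate paths):
# print a fresh bootstrap token together with a complete worker join command
kubeadm token create --print-join-command
# recompute the CA certificate hash used for --discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
# re-upload the control plane certificates and print a new value for --certificate-key
kubeadm init phase upload-certs --upload-certs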
AnsibleFest Atlanta is September 24th - 26th at the Hilton Atlanta, a few short blocks from Centennial Olympic Park. This year is going to be bigger and better than ever. As AnsibleFest continues to grow, so does its offerings. We are excited to offer more breakout sessions, more hands-on workshops, and more Ask an Expert sessions. This year we have expanded our AnsibleFest programming to offer 10 different tracks. We are also introducing the Open Lounge this year, which is a place to network, relax and recharge. It provides a great opportunity to meet and connect with passionate Ansible users, developers, and industry partners.
The AnsibleFest Agenda is live. Thank you to everyone who answered the call for submissions. It was a challenge to narrow down the sessions from the record-setting number of submissions we received. We love our community, customers, and partners, and appreciate everyone who contributed.
For those who are not familiar with AnsibleFest, or have not attended the event before, below are a few highlights of AnsibleFest that you won’t want to miss.
General Sessions
We have some amazing general sessions planned this year. The opening keynote at AnsibleFest will feature talks from Red Hat Ansible Automation Continue reading
In this post, I’d like to walk through setting up an AWS-integrated Kubernetes 1.15 cluster using kubeadm. Over the last year or so, the power and utility of kubeadm has vastly improved (thank you to all the contributors who have spent countless hours!), and it is now—in my opinion, at least—at a point where setting up a well-configured, highly available Kubernetes cluster is pretty straightforward.
This post builds on the official documentation for setting up a highly available Kubernetes 1.15 cluster. This post also builds upon previous posts I’ve written about setting up Kubernetes clusters with the AWS cloud provider:
All of these posts are focused on Kubernetes releases prior to 1.15, and given the changes in kubeadm in the 1.14 and 1.15 releases, I felt it would be helpful to revisit the process again for 1.15. For now, I’m focusing on the in-tree AWS cloud provider; however, in the very near future I’ll look at using the new external AWS cloud provider.
As pointed out in the “original” Continue reading
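For orientation, the control plane bootstrap step in this flow ultimately boils down to running kubeadm init with a configuration file; the file name kubeadm-config.yaml below is just a placeholder for a config that carries the AWS cloud provider settings:
# bootstrap the first control plane node from a kubeadm configuration file
# (kubeadm-config.yaml is a placeholder name) and upload the certificates so
# additional control plane nodes can join later
kubeadm init --config kubeadm-config.yaml --upload-certs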