Archive

Category Archives for "Systems"

Exploring Cluster API v1alpha2 Manifests

The Kubernetes community recently released v1alpha2 of Cluster API (a monumental effort, congrats to everyone involved!), and with it comes a number of fairly significant changes. Aside from the new Quick Start, there isn’t (yet) a great deal of documentation on Cluster API (hereafter just called CAPI) v1alpha2, so in this post I’d like to explore the structure of the CAPI v1alpha2 YAML manifests, along with links back to the files that define the fields for the manifests. I’ll focus on the CAPI provider for AWS (affectionately known as CAPA).

As a general note, any links back to the source code on GitHub will reference the v0.2.1 release for CAPI and the v0.4.0 release for CAPA, which are the first v1alpha2 releases for these projects.

Let’s start by looking at a YAML manifest that defines a Cluster in CAPA (this is taken directly from the Quick Start):

apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: capi-quickstart
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSCluster
    name: capi-quickstart
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSCluster
metadata:
  name: capi-quickstart
spec:
  region: us-east-1
  sshKeyName: default

Right off the bat, Continue reading
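
The same paired structure appears one layer down as well: a provider-neutral Machine references a provider-specific AWSMachine for infrastructure, plus a KubeadmConfig for bootstrapping. Here's a sketch along the lines of the control plane machine in the Quick Start (treat the names, labels, and Kubernetes version as illustrative):

apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
  name: capi-quickstart-controlplane-0
  labels:
    cluster.x-k8s.io/control-plane: "true"
    cluster.x-k8s.io/cluster-name: capi-quickstart
spec:
  version: v1.15.3
  bootstrap:
    configRef:
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
      kind: KubeadmConfig
      name: capi-quickstart-controlplane-0
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSMachine
    name: capi-quickstart-controlplane-0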

Designing Your First Application in Kubernetes, Part 3: Communicating via Services

I reviewed the basic setup for building applications in Kubernetes in part 1 of this blog series, and discussed processes as pods and controllers in part 2. In this post, I’ll explain how to configure networking services in Kubernetes to allow pods to communicate reliably with each other.

Setting up Communication via Services 

At this point, we’ve deployed our workloads as pods managed by controllers, but there’s no reliable, practical way for pods to communicate with each other, nor is there any way for us to reach a network-facing pod from outside the cluster. The Kubernetes networking model says that any pod can reach any other pod at the target pod’s IP by default, but discovering those IPs and maintaining that list by hand — while pods are potentially being rescheduled and given entirely new IPs — would be a lot of tedious, fragile work.

Instead, we need to think about Kubernetes services when we’re ready to start building the networking part of our application. Kubernetes services provide reliable, simple networking endpoints for routing traffic to pods via the fixed metadata defined in the controller that created them, rather than via unreliable pod IPs. For simple applications, Continue reading
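
To make that concrete, here's a minimal sketch of a Service, assuming a controller whose pod template carries the label app: api and whose containers listen on port 8080 (both assumptions for illustration):

apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api            # route to any pod carrying this label
  ports:
  - port: 80            # stable port clients connect to
    targetPort: 8080    # port the containers actually listen on

Because the Service matches pods by label rather than by IP, it keeps working even as the controller reschedules or replaces the pods behind it.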

Designing Your First App in Kubernetes, Part 2: Setting up Processes

I reviewed the basic setup for building applications in Kubernetes in part 1 of this blog series. In this post, I’ll explain how to use pods and controllers to create scalable processes for managing your applications.

Processes as Pods & Controllers in Kubernetes

The heart of any application is its running processes, and in Kubernetes we fundamentally create processes as pods. Pods are a bit fancier than individual containers, in that they can schedule whole groups of containers, co-located on a single host, which brings us to our first decision point:

Decision #1: How should our processes be arranged into pods?

The original idea behind a pod was to emulate a logical host – not unlike a VM. The containers in a pod will always be scheduled on the same Kubernetes node, and they’ll be able to communicate with each other via localhost, making pods a good representation of clusters of processes that need to work together closely. 

A pod can contain one or more containers, but containers in the pod must scale together.
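
As a minimal sketch of that idea (the images and the helper's role are hypothetical), here's a pod whose two containers are always scheduled together and can talk over localhost:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-helper
spec:
  containers:
  - name: web
    image: nginx:1.17    # serves on port 80
  - name: helper         # co-located sidecar; reaches web via localhost
    image: busybox:1.31
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 30; done"]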

But there’s an important consideration: it’s not possible to scale individual containers in a pod separately from each other. If you need to scale Continue reading

Introducing Red Hat Ansible Automation Platform

We are excited to introduce Red Hat Ansible Automation Platform, a new offering that combines the simple and powerful Ansible solutions with new capabilities for cross-team collaboration, governance and analytics, resulting in a platform for building and operating automation at scale. 

Over the past several years, we’ve listened closely to the needs of the community, customers and partners. We’ve also looked carefully at how the market is changing and where we see automation headed. One of the most common requests we’ve heard from customers is the need to bring together separate teams using automation. Today’s organizations are often automating different areas of their business (such as on-premises IT vs. cloud services vs. networks), each with its own set of Ansible Playbooks and little collaboration between the different domains. While this may still get the task accomplished, it can be a barrier to realizing the full value of automation.

We’ve also found that even within a single organization, teams are often at different stages of automation maturity. Organizations are often reinventing the wheel, automating processes that have already been automated elsewhere.

Organizations need a solution they can use across teams and domains, and a solution they can Continue reading

Designing Your First App in Kubernetes, Part 1: Getting Started

Image credit: Evan Lovely

Kubernetes: Always Powerful, Occasionally Unwieldy

Kubernetes’s gravity as the container orchestrator of choice continues to grow, and for good reason: It has the broadest capabilities of any container orchestrator available today. But all that power comes with a price; jumping into the cockpit of a state-of-the-art jet puts a lot of power under you, but how to actually fly the thing is not obvious. 

Kubernetes’s complexity is overwhelming for a lot of people jumping in for the first time. In this blog series, I’m going to walk you through the basics of architecting an application for Kubernetes, with a tactical focus on the actual Kubernetes objects you’re going to need. I’m not, however, going to spend much time reviewing 12-factor design principles and microservice architecture; there are some excellent ideas in those sorts of strategic discussions with which anyone designing an application should be familiar, but here on the Docker Training Team I like to keep the focus on concrete, hands-on-keyboard implementation as much as possible.

Furthermore, while my focus is on application architecture, I would strongly encourage devops engineers and developers building to Kubernetes to follow along, in addition to readers in application architecture Continue reading

Ansible Security Automation is our answer to the lack of integration across the IT industry

In 2019, CISOs struggle more than ever to contain and counter cyberattacks despite an apparently flourishing IT security market and hundreds of millions of dollars in venture capital fueling yearly waves of new startups. Why?

If you review the IT security landscape today, you’ll find it crowded with startups and mainstream vendors offering solutions against cybersecurity threats that have fundamentally remained unchanged for the last two decades. Yes, a small minority of those solutions focus on protecting new infrastructures and platforms (like container-based ones) and new application architectures (like serverless computing), but for the most part, the threats and attack methods against these targets have remained largely the same as in the past.

This crowded market, propelled by increasing venture capital investments, is challenging to assess, and can make it difficult for a CISO to identify and select the best possible solution to protect an enterprise IT environment. On top of this, none of the solutions on the market solve all security problems, and so the average security portfolio of a large end user organization can often comprise dozens of products, sometimes from as many as 50 different vendors, with overlap in multiple areas.

Despite the choices, and more than Continue reading

Docker + Arm Virtual Meetup Recap: Building Multi-arch Apps with Buildx

Docker support for cross-platform applications is better than ever. At this month’s Docker Virtual Meetup, we featured Docker Architect Elton Stoneman showing how to build and run truly cross-platform apps using Docker’s buildx functionality. 

With Docker Desktop, you can now describe all the compilation and packaging steps for your app in a single Dockerfile, and use it to build an image that will run on Linux, Windows, Intel and Arm – 32-bit and 64-bit. In the video, Elton covers the Docker runtime and its understanding of OS and CPU architecture, together with the concept of multi-architecture images and manifests.
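
The basic workflow looks something like this (a sketch: the builder name and image tag are placeholders, and because buildx was experimental at the time, the CLI's experimental features must be enabled):

# create and select a BuildKit-backed builder
docker buildx create --name mybuilder --use

# build for several platforms in one pass and push the multi-arch image
docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm/v7 \
  -t <your-repo>/<image>:latest \
  --push .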

The key takeaways from the meetup on using buildx:

  • Everything should be multi-platform
  • Always use multi-stage Dockerfiles 
  • buildx is experimental but solid (based on BuildKit)
  • Alternatively use docker manifest — also experimental

Not a Docker Desktop user? Jason Andrews, a Solutions Director at Arm, posted this great article on how to set up buildx using Docker Community Engine on Linux.

Check out the full meetup on Docker’s YouTube Channel:

You can also access the demo repo here. The sample code for this meetup is from Elton’s latest book, Learn Docker in a Month of Lunches, an accessible task-focused Continue reading

New in Docker Hub: Personal Access Tokens

The Docker Hub access token list view.

On the heels of our recent update on image tag details, the Docker Hub team is excited to share the availability of personal access tokens (PATs) as an alternative way to authenticate into Docker Hub.

Already available as part of Docker Trusted Registry, personal access tokens can now be used as a substitute for your password in Docker Hub, especially for integrating your Hub account with other tools. You’ll be able to leverage these tokens for authenticating your Hub account from the Docker CLI – either from Docker Desktop or Docker Engine:

docker login --username <username>

When you’re prompted for a password, enter your token instead.
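
For non-interactive logins (in CI jobs or scripts, for example), the token can be piped in instead; a sketch, assuming the token is stored in an environment variable named DOCKER_HUB_PAT:

echo $DOCKER_HUB_PAT | docker login --username <username> --password-stdin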

The advantage of using tokens is the ability to create and manage multiple tokens at once so you can generate different tokens for each integration – and revoke them independently at any time.

Create and Manage Personal Access Tokens in Docker Hub 

Personal access tokens are created and managed in your Account Settings.

From here, you can:

  • Create new access tokens
  • Modify existing tokens
  • Delete access tokens

Creating an access token in Docker Hub.

Note that the actual token is only shown once, at the time Continue reading

How Wiley Education Services Empowers Students with Docker Enterprise

We sat down recently with our customer, Wiley Education Services, to find out how Docker Enterprise helps them connect with and empower higher education students. Wiley Education Services (WES) is a division of Wiley Publishing that delivers online services to over 60 higher education institutions.

We spoke with Blaine Helmick, Senior Manager of Systems Engineering about innovation and technology in education. Read on to learn more about Wiley, or watch the short video interview with Blaine:

On Wiley’s Mission…

Our mission at Wiley Education Services is empowering people, connecting them to their futures. We serve over 60 higher education partners around the world, and our role is to connect you to our higher education partners when you’re looking for a degree and you’re frankly looking to change your life.

On Innovation at a 200-Year-Old Company…

Wiley has been around for over 200 years. One of the really amazing things about being in an organization that’s been around that long is that you have to have a culture of innovation at your core.

Technology like Docker has really empowered our business because it allows us to innovate, and it allows us to experiment. That’s critical because Continue reading

An Introduction to Kustomize

kustomize is a tool designed to let users “customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is” (wording taken directly from the kustomize GitHub repository). Users can run kustomize directly, or—starting with Kubernetes 1.14—use kubectl -k to access the functionality (although the standalone binary is newer than the functionality built into kubectl as of the Kubernetes 1.15 release). In this post, I’d like to provide an introduction to kustomize.

In its simplest form/usage, the input to kustomize is simply a set of resources (YAML files that define Kubernetes objects like Deployments, Services, etc.) plus a set of instructions on the changes to be made to these resources. Similar to the way make leverages a file named Makefile to define its function or the way Docker uses a Dockerfile to build a container, kustomize uses a file named kustomization.yaml to store the instructions on the changes the user wants made to a set of resources.

Here’s a simple kustomization.yaml file:

resources:
- deployment.yaml
- service.yaml
namePrefix: dev-
namespace: development
commonLabels:
  environment: development
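
Given a directory containing this kustomization.yaml alongside the referenced deployment.yaml and service.yaml, rendering and applying the customized resources looks like this:

# render the customized manifests to stdout
kustomize build .

# or apply them directly to a cluster
kustomize build . | kubectl apply -f -

# roughly equivalent, using the functionality built into kubectl 1.14+
kubectl apply -k .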

This article won’t attempt to explain all the various fields that could be Continue reading

AnsibleFest Atlanta – Tech Deep Dives

Only one more week until AnsibleFest 2019 comes to Atlanta! We talked with Track Lead Sean Cavanaugh to learn more about the Technical Deep Dives track and the sessions within it. 

 

Who is this track best for?

You've written playbooks. You've automated deployments. But you want to go deeper: learn new ways to use Ansible that you haven't thought of before. Extend Ansible with new functionality. Dig deep into new use cases. Then Tech Deep Dives is for you. This track is best suited for someone with existing Ansible knowledge and experience who already knows the nomenclature. It is best for engineers who want to learn how to take their automation journey to the next level. This track includes multiple talks from Ansible Automation developers, and it is your chance to ask them direct questions or provide feedback.

What topics will this track cover?

This track is about automation proficiency. Talks range from development and testing of modules and content to building and operationalizing automation at scale for your enterprise. Think about best practices, but then use those takeaways to leverage automation across your entire organization.

What should Continue reading

How InterSystems Builds an Enterprise Database at Scale with Docker Enterprise

We sat down recently with InterSystems, our partner and customer, to talk about how they deliver an enterprise database at scale to their customers. InterSystems’s software powers mission-critical applications at hospitals, banks, government agencies and other organizations.

We spoke with Joe Carroll, Product Specialist, and Todd Winey, Director of Partner Programs at InterSystems about how containerization and Docker are helping transform their business.

Here’s what they told us. You can also catch the highlights in this 2 minute video:

On InterSystems and Enterprise Databases…

Joe Carroll: InterSystems is a 41-year-old database and data platform company. We’ve been in data storage for a very long time and our customers tend to be traditional enterprises — healthcare, finance, shipping and logistics as well as government agencies. Anywhere there’s mission-critical data, we tend to be around. Our customers have really important systems that impact people’s lives, and the mission-critical nature of that data characterizes who our customers are and who we are.

On Digital Transformation in Established Industries…

Todd Winey: Many of those organizations and industries have been traditionally seen as laggards in terms of their technology adoption in the past, so the speed with which they’re moving Continue reading

Powering Docker App: Next Steps for Cloud Native Application Bundles (CNAB)

Last year at DockerCon and Microsoft Connect, we announced the Cloud Native Application Bundle (CNAB) specification in partnership with Microsoft, HashiCorp, and Bitnami. Since then the CNAB community has grown to include Pivotal, Intel, DataDog, and others, and we are all happy to announce that the CNAB core specification has reached 1.0.

We are also announcing the formation of the CNAB project under the Joint Development Foundation, a part of the Linux Foundation that’s chartered with driving adoption of open source and standards. The CNAB specification is available at cnab.io. Docker is working hard with our partners and friends in the open source community to improve software development and operations for everyone.

Docker’s Implementation of CNAB — Docker App

Docker was one of the first to implement the CNAB specification with Docker App, our reference implementation available on GitHub. Docker App can be used both to build CNAB bundles from Docker Compose files (which can then be used with any other CNAB client) and to install, upgrade, and uninstall any other CNAB bundle.
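
As a rough sketch of what that looks like from the CLI (command names are from the docker-app reference implementation of the era; treat the exact flags as assumptions):

# scaffold a .dockerapp application package
docker app init myapp

# install the bundle as a named CNAB installation
docker app install myapp.dockerapp --name myapp-prod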

It also forms the underpinnings of application templates in Docker Desktop Enterprise. With Docker App, we are making CNAB-compliant applications as easy to use Continue reading

AnsibleFest Atlanta – Ansible Automation

AnsibleFest is right around the corner and we are excited to go to Atlanta! We talked with Track Lead Colin McNaughton to learn more about the Ansible Automation track and the sessions within it. 

 

Who is this track best for? 

This track is best for existing users, storytellers, curious adopters, and enterprise architects.

What topics will this track cover? 

This track will include conversations and presentations guided by existing Ansible Automation customers. Sessions in this track will expand on how the application of key components of Ansible changes along the road to enterprise adoption of Ansible Automation. Attend sessions in this track to learn how others manage inventories, create cloud infrastructure defined as code, and apply other lessons learned from real-world deployments.

What should attendees expect to learn from this track? 

Attendees can expect to hear stories from real-world experience automating in diverse ecosystems, along with discussions around applying and scaling core tenets of Ansible Automation.

Where would you expect attendees of this track to hang out online?

If attendees are looking to learn more or have questions after AnsibleFest, online communities like message-board-style communities, Continue reading

Consuming Pre-Existing AWS Infrastructure with Cluster API

All the posts I’ve published so far about Kubernetes Cluster API (CAPI) assume that the underlying infrastructure needs to be created. This is fine, because generally speaking that’s part of the value of CAPI—it will create new cloud infrastructure for every Kubernetes cluster it instantiates. In the case of AWS, this includes VPCs, subnets, route tables, Internet gateways, NAT gateways, Elastic IPs, security groups, load balancers, and (of course) EC2 instances. But what if you didn’t want CAPA to create AWS infrastructure? In this post, I’ll show you how to consume pre-existing AWS infrastructure with Cluster API for AWS (CAPA).

Why would one not want CAPA to create the necessary AWS infrastructure? There are a variety of reasons, but the one that jumps to my mind immediately is that an organization may have established/proven expertise and a process around the use of infrastructure-as-code (IaC) tooling like Terraform, CloudFormation, or Pulumi. In cases like this, such organizations would very likely prefer to continue to use the tooling they already know and with which they are already familiar, instead of relying on CAPA. Further, the use of third-party IaC tooling may allow for greater customization of the infrastructure than CAPA Continue reading
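
The general approach is to describe the existing network in the AWSCluster specification rather than letting CAPA create one. A sketch (the IDs are placeholders, and the field names are from the CAPA v0.4.x API types):

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSCluster
metadata:
  name: existing-infra-cluster
spec:
  region: us-west-2
  sshKeyName: default
  networkSpec:
    vpc:
      id: vpc-0123456789abcdef0       # pre-existing VPC to consume
    subnets:
    - id: subnet-0123456789abcdef0    # pre-existing subnets, referenced by ID
    - id: subnet-0123456789abcdef1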

Introducing Docker Hub’s New & Improved Tag User Experience

One of Docker’s core missions is delivering choice and flexibility across different application languages and frameworks, operating systems, and infrastructure. When it comes to modern applications, the choice of infrastructure is not just whether the application is run on-premises, on virtual machines or bare metal, or in the cloud. It can also be a choice of which architecture – x86, Arm, or GPU. 

Today, we’re happy to share some updates in Docker Hub that make it easier to access multi-architecture images and scanning results through the Tag UX. 

Navigating to Image Tags

In this example, we’re looking at a listing for a Docker Official Image that supports x86, PowerPC and IBMz as listed in the labels. When you land on the image page on Docker Hub, you can quickly identify if an image supports multiple architectures in the labels underneath the image name. For further details, you can click on ‘Tags’:

Docker Hub tags overview

In this section, you can now view the different architectures separately to easily identify the right image for the architecture you need, complete with image size and operating system information:

Docker Hub tags system info view.

If you click on the digest for a particular architecture, you will now also be able to Continue reading
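
The same per-architecture detail is also available from the CLI via the (experimental, at the time) manifest command; a sketch with a placeholder image name:

# list the platform-specific digests behind a multi-arch tag
docker manifest inspect <image>:<tag>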

AnsibleFest Atlanta – Culture and Collaboration

Now that AnsibleFest is right around the corner, we wanted to take a closer look at each of the tracks that we will offer. We talked with Track Lead Brian Coursen and asked him a few questions about the Culture and Collaboration Track and the sessions within it.

Who is this track best for? 

This track is best for attendees who want to see how Ansible is used and how it brings people and teams together in the workplace.

What topics will this track cover? 

Topics will include how to create an automation culture as well as highlight some automation use cases. These will include minimizing business-unit conflict with patch automation, building an open source network service orchestrator with Ansible at its core, and making the case that automation isn't just a passing fad but a necessity in today's enterprise.

What should attendees expect to learn from this track? 

Attendees will learn about DevOps culture and automation. They will also learn how organizations like the National Weather Service Southern Region, Datacom, and the British financial institution RBS are all using Ansible to create a culture of collaboration.

Where would you Continue reading

Highly Available Kubernetes Clusters on AWS with Cluster API

In my previous post on Kubernetes Cluster API, I showed readers how to use the Cluster API provider for AWS (referred to as CAPA) to instantiate a Kubernetes cluster on AWS. Readers who followed the instructions in that post may note that CAPA places all the nodes for a given cluster in a single AWS availability zone (AZ) by default. While multi-AZ Kubernetes deployments are not without their own considerations, it’s generally considered beneficial to deploy across multiple AZs for higher availability. In this post, I’ll share how to deploy highly-available Kubernetes clusters—defined as having multiple control plane nodes distributed across multiple AZs—using Cluster API for AWS (CAPA).

This post assumes that you have already deployed a management cluster, so the examples may mention using kubectl to apply CAPA manifests against the management cluster to deploy a highly-available workload cluster. However, the information needed in the CAPA manifests would also work with clusterctl in order to deploy a highly-available management cluster. (Not familiar with what I mean when I say “management cluster” or “workload cluster”? Be sure to go read the introduction to Cluster API post first.)
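
The heart of the approach is pinning each control plane machine's infrastructure to a different AZ. As a sketch in v1alpha2 terms (the availabilityZone field and all names here are assumptions for illustration, since the post itself targets an earlier API version):

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSMachine
metadata:
  name: ha-controlplane-0
spec:
  instanceType: t3.large
  sshKeyName: default
  availabilityZone: us-west-2a    # repeat with us-west-2b/us-west-2c for -1 and -2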

Also, this post was written with CAPA v1alpha1 in mind; a new Continue reading

AnsibleFest Atlanta – Infrastructure Automation

AnsibleFest is only a few short weeks away and we are excited to share with you all the great content and sessions we have lined up! On the Ansible blog, we have been taking a closer look at each of the breakout session tracks so that attendees can better personalize their AnsibleFest experience. We sat down with Track Lead Dylan Silva to find out more about the Infrastructure Automation Track and sessions within the track.  

 

Who is this track best for? 

This track is best for sysadmins who are looking for information related to general infrastructure automation with Ansible.

What topics will this track cover? 

Sessions in this track will cover bare-metal, server administration, and inventory management, among other related topics. There will be a session covering the automation of VMware infrastructure using REST APIs, how to use Ansible against your vSphere environment, how to use Ansible to pull approved firewall change requests from our change management system, and much more. 

 

What should attendees expect to learn from this track? 

Attendees should expect to learn best practices related to infrastructure management. This includes scaling Ansible for IoT deployments, taking a closer Continue reading

VMworld 2019 Vendor Meeting: Lightbits Labs

Last week at VMworld, I had the opportunity to meet with Lightbits Labs, a relatively new startup working on what they called “disaggregated storage.” As it turns out, their product is actually quite interesting, and has relevance not only in “traditional” VMware vSphere environments but also in environments more focused on cloud-native technologies like Kubernetes.

So what is “disaggregated storage”? It’s one of the first questions I asked the Lightbits team. The basic premise behind Lightbits’ solution is that by taking the storage out of nodes—by decoupling storage from compute and memory—they can provide more efficient scaling. Frankly, it’s the same basic premise behind storage area networks (SANs), although I think Lightbits wants to distance themselves from that terminology.

Instead of Fibre Channel, Fibre Channel over Ethernet (FCoE), or iSCSI, Lightbits uses NVMe over TCP. This provides good performance over 25, 50, or 100Gbps links with low latency (typically less than 300 microseconds). Disks appear “local” to the node, which allows for some interesting concepts when used in conjunction with hyperconverged platforms (more on that in a moment).

Lightbits has their own operating system, LightOS, which runs on industry-standard x64 servers from Dell, HP, Lenovo, etc. To Continue reading
