Archive

Category Archives for "Systems"

Docker Enterprise Edition Now on G-Cloud 9 Framework

G-Cloud 9

Docker Enterprise Edition (EE) has been accepted to G-Cloud 9, further exemplifying Docker’s commitment to delivering tools for application modernization and innovation across the UK public sector.

G-Cloud 9 is the UK government’s latest framework that is designed to simplify and accelerate adoption of cloud-based services within the public sector. The inclusion of Docker Enterprise Edition subscriptions, training and Professional Services Organization (PSO) within HM Government Crown Commercial Service’s (CCS) G-Cloud 9 Framework gives UK public sector organizations the opportunity to procure the de facto container solution through the online store known as the “Digital Marketplace” without needing to run a full tender, competition or lengthy procurement process.

Docker’s meteoric rise within enterprise-class business has been built on its ability to be agnostic, agile and secure – whether for hybrid cloud migration, modernizing the application stack or adopting a DevOps methodology.

Bringing application modernization to the public sector

As the UK government shifts to cloud and DevOps, and moves away from locked-down IT contracts in favor of smaller suppliers, Docker directly addresses these needs by giving UK public sector organizations the ability to innovate, transform, define, select and control their infrastructure. Additionally, these organizations can retain staff who now feel engaged as they can run their programs Continue reading

Webinar Q&A: Docker Enterprise Edition Demo

Docker Enterprise Edition (EE) is designed for enterprise development and IT teams who build, ship and run business-critical applications in production at scale. Docker EE provides a fully integrated solution that includes the container engine, built-in orchestration, a private registry, and container lifecycle management to help you build a secure software supply chain. As an enterprise-grade offering with access to SLA-backed technical support and validated integrations with leading third-party images, plug-ins and infrastructure, Docker EE can help organizations deliver Containers as a Service (CaaS) to improve IT efficiency, make applications more portable to the public cloud, and make them more secure through a smaller attack surface, image signing and image scanning.
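
As a small aside on the image-signing point (a hedged sketch, not taken from the webinar itself), the Docker CLI can be made to refuse unsigned images by enabling Docker Content Trust; the image tag below is an arbitrary example:

# Require signed images for pull and run operations in this shell session
export DOCKER_CONTENT_TRUST=1
# This pull now succeeds only if the tag has valid signature (trust) data
docker pull alpine:3.5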

Docker EE

Watch the following webinar as Moni Sallam and I highlight some key use cases for Docker Enterprise Edition and how it differs from Community Edition. Moni also provides a demo of how end-to-end container lifecycle management can be securely controlled through Docker EE.

Here are some of the top questions from the live session:

Q: Can we Dockerize Windows apps?

A: Yes! Docker has partnered with Microsoft to deliver a native Docker container platform with Windows Server 2016. Docker containers can also be run on Windows Server and Windows Continue reading
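
For a quick taste (a minimal sketch, not part of the webinar itself), on a Windows Server 2016 host with the Docker engine installed you can run a Windows-based container directly; the image name below reflects the Microsoft base images published at the time:

# Run a throwaway Nano Server container and print a message from inside it
docker run --rm microsoft/nanoserver cmd /c "echo Hello from a Windows container"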

Docker for AWS and Azure: Secure By Default Container Platform

Docker for AWS and Docker for Azure are much more than a simple way to set up Docker in the cloud. By default, they provision infrastructure with security in mind, giving you a secure platform to build, ship and run Docker apps in the cloud. Available for free in Community Edition, and as a subscription with support and integrated management in Enterprise Edition, Docker for AWS and Docker for Azure let you leverage pre-configured security features for your apps today, without having to be a cloud infrastructure expert.

You don’t have to take our word for it: in February 2017, we engaged NCC Group, an independent security firm, to conduct a security assessment of Docker for AWS and Docker for Azure. The assessment, which ran from February 6-17, covered the Community Edition and Enterprise Edition Basic of both products. NCC Group was tasked with assessing whether these Docker editions not only provision secure infrastructure with sensible defaults, but also leverage and integrate the best security features of each cloud. We’d like to openly share their findings with you today.

NCC Group evaluated our security model and defaults, including:

Online meetup recap: Introduction to LinuxKit

At DockerCon 2017 we introduced LinuxKit: A toolkit for building secure, lean and portable Linux subsystems. Here are the key principles and motivations behind the project:

  • Secure defaults without compromising usability
  • Everything is replaceable and customizable
  • Immutable infrastructure applied to building Linux distributions
  • Completely stateless, but persistent storage can be attached
  • Easy tooling, with easy iteration
  • Built with containers, for running containers
  • Designed for building and running clustered applications, including but not limited to container orchestration such as Docker or Kubernetes
  • Designed from the experience of building Docker Editions, but redesigned as a general-purpose toolkit
  • Designed to be managed by external tooling, such as Infrakit or similar tools
  • Includes a set of longer-term collaborative projects in various stages of development to innovate on kernel and userspace changes, particularly around security
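
To make the “built with containers, for running containers” principle concrete, here is a minimal sketch of the YAML format a LinuxKit image is built from; the image tags are placeholders (real configurations pin specific versions), and the resulting definition is assembled into a bootable image with the project’s build tooling:

kernel:
  image: linuxkit/kernel:4.9.x      # kernel packaged as a container image (tag is a placeholder)
  cmdline: "console=tty0"
init:
  - linuxkit/init:<tag>             # minimal init system and container runtime
  - linuxkit/runc:<tag>
onboot:
  - name: dhcpcd                    # one-shot containers, run sequentially at boot
    image: linuxkit/dhcpcd:<tag>
services:
  - name: getty                     # long-running service containers
    image: linuxkit/getty:<tag>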

For this Online Meetup, Docker Technical Staff member Rolf Neugebauer gave an introduction to LinuxKit, explained the rationale behind its development, and demonstrated how to get started using it.

LinuxKit

Watch the recording and slides

You’ll find below a list of additional questions asked by attendees at the end of the online meetup:

You said the ONBOOT containers are run sequentially, does it wait for one to finish before it Continue reading

CentOS Atomic Host Customization Using cloud-init

Back in early March of this year, I wrote a post on customizing the Docker Engine on CentOS Atomic Host. In that post, I showed how you could use systemd constructs like drop-in units to customize the behavior of the Docker Engine when running on CentOS Atomic Host. In this post, I’m going to build on that information to show how this can be done using cloud-init on a public cloud provider (AWS, in this case).

Although I haven’t really blogged about it, I’d already taken the information in that first post and written some Ansible playbooks to do the same thing (see here for more information). Thus, one could use Ansible to do this when running CentOS Atomic Host on a public cloud provider. However, much like the original post, I wanted to find a very “cloud-native” way of doing this, and cloud-init seemed like a pretty good candidate.

All in all, it was pretty straightforward—with one significant exception. As I was testing this, I ran into an issue where the Docker daemon wouldn’t start after cloud-init had finished. Convinced I’d done something wrong, I kept going over the files, testing and re-testing (I’ve been working on this, off Continue reading
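
For readers who want to try something similar, here is a minimal sketch of cloud-init user data that recreates the systemd drop-in approach from the earlier post; the drop-in file name and its contents are hypothetical placeholders, not the exact values from that post:

#cloud-config
write_files:
  # Drop-in unit that customizes the Docker Engine; replace the contents
  # with the settings from the earlier post
  - path: /etc/systemd/system/docker.service.d/custom.conf
    content: |
      [Service]
      Environment="OPTIONS=--insecure-registry=192.168.1.100:5000"
runcmd:
  # Pick up the drop-in and restart the Docker daemon
  - systemctl daemon-reload
  - systemctl restart docker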

Announcing the Docker Student Developer Kit & Campus Ambassador Program!

For quite some time now we have been receiving daily requests from students all over the world, asking for our help learning Docker, using Docker and teaching their peers how to use Docker. We love their enthusiasm, so we decided it was time to reach out to the student community and give them the helping hand they need!

Docker Education

Understanding how to use Docker is now a must-have skill for students. Here are five reasons why:

  1. Understanding how to use Docker is one of the most important skills to learn if you want to advance in a career in tech, according to Business Insider.
  2. You can just start coding instead of spending time setting up your environment (see the example after this list).
  3. You can collaborate easily with your peers and enable seamless group work: Docker eliminates any ‘works on my machine’ issues.
  4. Docker allows you to easily build applications with a modern microservices architecture.
  5. Using Docker will greatly enhance the security of your applications.
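
To illustrate point 2, a single command (a minimal, hypothetical example) drops you into a ready-to-use interpreter without installing anything on your machine beyond Docker itself:

# Start an interactive Python 3 shell inside a disposable container
docker run --rm -it python:3 python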

Getting Started with Docker

Are you a student who is excited about the prospect of using Docker but still don’t know exactly what Docker is or where to start learning? Now that your finals are over and you have all Continue reading

Docker Security at PyCon: Threat Modeling & State Machines

The Docker Security Team was out in force at PyCon 2017 in Portland, OR, giving two talks focused on helping the Python community achieve better security. First up were David Lawrence and Ying Li with their “Introduction to Threat Modeling” talk.

Threat modeling is a structured process that aids an engineer in uncovering security vulnerabilities in an application design or implemented software. The great majority of software grows organically, gaining new features as some critical mass of users requests them. These features are often implemented without full consideration of how they may impact every facet of the system they are augmenting.

Threat modeling aims to increase awareness of how a system operates, and in doing so, identify potential vulnerabilities. The process is broken up into three steps: data collection, analysis, and remediation. An effective way to run the process is to have a security engineer sit with the engineers responsible for design or implementation and guide a structured discussion through the three steps.

For the purposes of this article, we’re going to consider how we would threat model a house, since the process applies to real-world scenarios as well as to software.

Threat Modeling

Data Collection

Five categories of Continue reading

Getting Started: Tower Projects and Inventories

Welcome to another post in our Getting Started series. In our previous post, we discussed how you can equip your Ansible Tower instance with users and credentials.

In this post, we will discuss how to set up projects and inventories in your Ansible Tower instance.

What Is A Tower Project?

A Tower project is a logical collection of Ansible Playbooks, grouped together based on what they do or which hosts they interact with.

Playbooks can be managed within Tower projects either by adding them manually to the project base path on your Tower server (/var/lib/awx/projects) or by importing them from a source control management (SCM) system supported by Tower. Examples of SCMs supported by Tower are Git, Subversion and Mercurial. Managing your projects with an SCM is recommended: it ensures that only users with assigned access to the repository can change a Playbook before execution, and it provides an extra layer of accountability and change control. If your Playbooks are managed by an SCM, update options such as “update on launch”, “delete on update” and “clean” can be selected.
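
As a minimal sketch of the manual option (the project directory and playbook file below are hypothetical examples), a playbook is simply placed in a subdirectory of the project base path on the Tower server:

# Create a project directory under the Tower project base path and add a playbook to it
sudo mkdir -p /var/lib/awx/projects/my-manual-project
sudo cp site.yml /var/lib/awx/projects/my-manual-project/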

If you select “update on launch", Tower will sync each Continue reading

Technology Short Take #83

Welcome to Technology Short Take #83! This is a slightly shorter TST than usual, which might be a nice break from the typical information overload. In any case, enjoy!

Networking

  • I enjoyed Dave McCrory’s series on the future of the network (see part 1, part 2, part 3, and part 4—part 5 hadn’t gone live yet when I published this). In my humble opinion, he’s spot on in his viewpoint that network equipment is increasingly becoming more like servers, so why not embed services and functions in the network equipment? However, this isn’t enough; you also need a strong control plane to help manage and coordinate these services. Perhaps Istio will help provide that control plane, though I suspect something more will be needed.
  • Michael Kashin has a handy little tool that functions like ssh-copy-id on servers, but for network devices (leveraging Netmiko). Check out the GitHub repository.
  • Anthony Shaw has a good comparison of Ansible, StackStorm, and Salt (with a particular view at applicability in a networking context). This one is definitely worth a read, in my opinion.
  • Miguel Gómez of Telefónica Engineering discusses maximizing performance in VXLAN overlay networks.
  • Nicolas Michel has a good Continue reading

Bastion Hosts and Custom SSH Configurations

The idea of an SSH bastion host is something I discussed here about 18 months ago. For the most part, it’s a pretty simple concept (yes, things can get quite complex in some situations, but I think these are largely corner cases). For the last few months, though, I’ve been trying to use an SSH bastion host and failing, and I could not figure out why it wouldn’t work. The answer, it turns out, lies in custom SSH configurations.

In my introduction on using SSH bastion hosts (linked above)—or in just about any tutorial out there on using SSH bastion hosts—brief mention is made of adding configuration information to SSH to use the bastion host. Borrowing from my original post, if you had an instance named “private1” that you wanted to access via a bastion named “bastion”, the SSH configuration information might look like this:

Host private1
  IdentityFile ~/.ssh/rsa_private_key
  ProxyCommand ssh user@bastion -W %h:%p

Host bastion
  IdentityFile ~/.ssh/rsa_private_key

Normally, that information would go into ~/.ssh/config, which is the default SSH configuration file.
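
With that configuration in place, a plain ssh command to the private instance is transparently proxied through the bastion; and if the configuration lives somewhere other than the default file, the -F flag points the SSH client at it (the custom path below is just an example):

# Connect to the private instance; SSH applies the "Host private1" block automatically
ssh private1

# Use a custom SSH configuration file instead of ~/.ssh/config
ssh -F ~/.ssh/custom_config private1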

In my case, I only allow public key authentication to “trusted” systems (I vaguely recall an article I read a while ago about a Continue reading

Get involved with the Moby Project by attending upcoming Moby Summits!

Last month at DockerCon, we introduced the Moby Project: an open-source project sponsored by Docker to advance the software containerization movement. The idea behind the project is to help the ecosystem take containers mainstream by providing a library of components, a framework for assembling them into custom container-based systems, and a place for all container enthusiasts to experiment and exchange ideas. Going forward, Docker will be assembled using Moby; see Moby and Docker or the diagram below for more details.

Moby Project

Moby Summit at DockerCon 2017

Knowing that a good number of maintainers, contributors and advanced Docker users would be attending DockerCon, we decided to organize the first Moby Summit in collaboration with the Cloud Native Computing Foundation (CNCF). The summit was a small collaborative event for container hackers who are actively maintaining, contributing to, or generally involved or interested in the design and development of components of the Moby project library, in particular LinuxKit, containerd, Infrakit, SwarmKit, libnetwork and Notary.

Here’s what we covered during the first part of the summit:

  • 0:05 – Opening words by Patrick Chanezon
  • 9:05 – Moby Project Q&A with Solomon Hykes and Justin Cormack
  • 60:14 – Quick update on containerd by Michael Continue reading

Container Deployment Demos from Interop ITX

At Interop ITX 2017 in Las Vegas, I had the privilege to lead a half-day workshop on options for deploying containers to cloud providers. As part of that workshop, I gave four live demos of using different deployment options. Those demos—along with the slides I used for my presentation along the way—are now available to anyone who might like to try them on their own.

The slides and all the resources for the demos are available in this GitHub repository. The four demos are:

  1. Docker Swarm on EC2: This demo leverages Terraform and Ansible to stand up and configure a Docker Swarm cluster on AWS.

  2. Amazon EC2 Container Service (ECS): This demo uses AWS CloudFormation to create an EC2 Container Service cluster with 3 instances and an Amazon RDS instance for backend database storage.

  3. Kubernetes on AWS using kops: Using the kops CLI tool, this demo turns up a Kubernetes cluster on AWS to show how to deploy containerized applications on Kubernetes (a brief sketch of this step follows the list).

  4. Google Container Engine: The final demo shows using Google Container Engine—which is Kubernetes—to deploy an application.
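
As a rough, hedged sketch of what the kops step in demo 3 involves (the cluster name, state bucket and zone below are hypothetical; the actual demo uses the resources in the GitHub repository):

# Tell kops where to keep cluster state, then create and bring up a small cluster
export KOPS_STATE_STORE=s3://my-kops-state-bucket
kops create cluster --name=demo.example.com --zones=us-west-2a
kops update cluster demo.example.com --yes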

In the coming weeks, I plan to recreate the demos, record them, and publish them via YouTube, so that Continue reading
