Docker Enterprise Edition (EE) has been accepted onto G-Cloud 9, further demonstrating Docker’s commitment to delivering tools for application modernization and innovation across the UK public sector.
G-Cloud 9 is the latest iteration of the UK government’s framework for simplifying and accelerating the adoption of cloud-based services across the public sector. The inclusion of Docker Enterprise Edition subscriptions, training and Professional Services Organization (PSO) offerings within HM Government Crown Commercial Service’s (CCS) G-Cloud 9 Framework gives UK public sector organizations the opportunity to procure the de facto container solution through the online store known as the “Digital Marketplace” without running a full tender, competition or lengthy procurement process.
Docker’s meteoric rise within enterprise-class business has been built on its ability to be agnostic, agile and secure – whether for hybrid cloud migration, modernizing the application stack or adopting a DevOps methodology.
Bringing application modernization to the public sector
With the UK government shifting to cloud and DevOps and moving away from locked-down IT contracts in favor of smaller suppliers, Docker addresses these needs by giving UK public sector organizations the ability to innovate, transform, define, select and control their infrastructure. Additionally, these organizations can retain staff who now feel engaged because they can run their programs Continue reading
Docker Enterprise Edition (EE) is designed for enterprise development and IT teams who build, ship and run business-critical applications in production at scale. Docker EE provides a fully integrated solution that includes the container engine, built-in orchestration, a private registry, and container lifecycle management to help you build a secure software supply chain. As an enterprise-grade offering with SLA-backed technical support and validated integrations with leading third-party images, plug-ins and infrastructure, Docker EE helps organizations deliver Containers as a Service (CaaS) to improve IT efficiency, make applications more portable to the public cloud, and make them more secure through a smaller attack surface, image signing and image scanning.
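As a concrete illustration of the image signing mentioned above, Docker Content Trust is enabled client-side with a single environment variable. Here is a minimal sketch; the registry and repository names are placeholders, and signing against a private registry assumes a Notary service is available (as with Docker Trusted Registry):

export DOCKER_CONTENT_TRUST=1
docker push registry.example.com/myorg/myapp:1.0   # push signs the tag (first use prompts to create signing keys)
docker pull registry.example.com/myorg/myapp:1.0   # pull refuses any tag without a valid signature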
Watch the following webinar as Moni Sallam and I highlight some key use cases for Docker Enterprise Edition and how it differs from Community Edition. Moni also provides a demo of how end-to-end container lifecycle management can be securely controlled through Docker EE.
Here are some of the top questions from the live session:
Q: Can we Dockerize Windows apps?
A: Yes! Docker has partnered with Microsoft to deliver a native Docker container platform with Windows Server 2016. Docker containers can also be run on Windows Server and Windows Continue reading
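As a rough sketch of what Dockerizing a Windows app can look like, here is a minimal Dockerfile for a static site served by IIS on Windows Server 2016; the base image reflects the microsoft/iis image of this era, and the local site/ directory is an assumption for illustration:

# Dockerfile (Windows) - copy a static site into the IIS web root
FROM microsoft/iis
COPY site/ /inetpub/wwwroot/
EXPOSE 80

You would build it with docker build -t my-iis-site . and run it with docker run -d -p 80:80 my-iis-site on a Windows Server 2016 host running the Docker engine.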
Docker for AWS and Docker for Azure are much more than a simple way to set up Docker in the cloud. In fact, they provision by default an infrastructure designed with security in mind, giving you a secure platform to build, ship and run Docker apps in the cloud. Available for free in Community Edition and as a subscription with support and integrated management in Enterprise Edition, Docker for AWS and Docker for Azure let you leverage pre-configured security features for your apps today – without having to be a cloud infrastructure expert.
You don’t have to take our word for it – in February 2017, we engaged NCC Group, an independent security firm, to conduct a security assessment of Docker for AWS and Docker for Azure. The assessment, which took place February 6-17, covered Docker for AWS and Docker for Azure in both Community Edition and Enterprise Edition Basic. NCC Group was tasked with assessing whether these Docker Editions not only provisioned secure infrastructure with sensible defaults, but also leveraged and integrated the best security features of each cloud. We’d like to openly share their findings with you today.
NCC Group evaluated our security model and defaults, including:
At DockerCon 2017 we introduced LinuxKit: A toolkit for building secure, lean and portable Linux subsystems. Here are the key principles and motivations behind the project:
For this Online Meetup, Docker Technical Staff member Rolf Neugebauer gave an introduction to LinuxKit, explained the rationale behind its development and gave a demo on how to get started using it.
You’ll find below a list of additional questions asked by attendees at the end of the online meetup:
You said the ONBOOT containers are run sequentially; does it wait for one to finish before it Continue reading
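For context on that question: in LinuxKit, onboot containers are started sequentially, and each must exit before the next one begins; services are started afterwards and kept running. Below is a minimal sketch of a LinuxKit YAML definition showing the two sections, with <tag> standing in for real image tags:

kernel:
  image: linuxkit/kernel:4.9.x
  cmdline: "console=tty0"
init:
  - linuxkit/init:<tag>
  - linuxkit/runc:<tag>
onboot:
  # run once, in order; each container must exit before the next starts
  - name: dhcpcd
    image: linuxkit/dhcpcd:<tag>
    command: ["/sbin/dhcpcd", "--nobackground", "-1"]
services:
  # started after onboot completes; supervised as long-running daemons
  - name: sshd
    image: linuxkit/sshd:<tag>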
Back in early March of this year, I wrote a post on customizing the Docker Engine on CentOS Atomic Host. In that post, I showed how you could use systemd constructs like drop-in units to customize the behavior of the Docker Engine when running on CentOS Atomic Host. In this post, I’m going to build on that information to show how this can be done using cloud-init on a public cloud provider (AWS, in this case).

Although I haven’t really blogged about it, I’d already taken the information in that first post and written some Ansible playbooks to do the same thing (see here for more information). Thus, one could use Ansible to do this when running CentOS Atomic Host on a public cloud provider. However, much like the original post, I wanted to find a very “cloud-native” way of doing this, and cloud-init seemed like a pretty good candidate.

All in all, it was pretty straightforward—with one significant exception. As I was testing this, I ran into an issue where the Docker daemon wouldn’t start after cloud-init had finished. Convinced I’d done something wrong, I kept going over the files, testing and re-testing (I’ve been working on this, off Continue reading
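To give a flavor of the approach up front, here is a minimal sketch of cloud-init user data that writes a systemd drop-in for the Docker service and then reloads systemd. The daemon flags are illustrative, and the dockerd-current binary path is what CentOS Atomic Host used at the time (verify on your image):

#cloud-config
write_files:
  - path: /etc/systemd/system/docker.service.d/custom.conf
    permissions: '0644'
    content: |
      [Service]
      # an empty ExecStart clears the packaged definition before replacing it
      ExecStart=
      ExecStart=/usr/bin/dockerd-current --log-driver=journald
runcmd:
  - systemctl daemon-reload
  - systemctl restart docker.service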
For quite some time now we have been receiving daily requests from students all over the world, asking for our help learning Docker, using Docker and teaching their peers how to use Docker. We love their enthusiasm, so we decided it was time to reach out to the student community and give them the helping hand they need!
Understanding how to use Docker is now a must-have skill for students. Here are 5 reasons why:
Are you a student who’s excited about the prospect of using Docker but still doesn’t know exactly what Docker is or where to start learning? Now that your finals are over and you have all Continue reading
The Docker Security Team was out in force at PyCon 2017 in Portland, OR, giving two talks focused on helping the Python community achieve better security. First up were David Lawrence and Ying Li with their “Introduction to Threat Modelling” talk.
Threat Modelling is a structured process that aids an engineer in uncovering security vulnerabilities in an application design or implemented software. The great majority of software grows organically, gaining new features as some critical mass of users requests them. These features are often implemented without full consideration of how they may impact every facet of the system they are augmenting.
Threat modelling aims to increase awareness of how a system operates, and in doing so, identify potential vulnerabilities. The process is broken up into three steps: data collection, analysis, and remediation. An effective way to run the process is to have a security engineer sit with the engineers responsible for design or implementation and guide a structured discussion through the three steps.
For the purpose of this article, we’re going to consider how we would threat model a house, as the process applies to real-world scenarios as well as to software.
Five categories of Continue reading
Welcome to another post in our Getting Started series. In our previous post, we discussed how you can equip your Ansible Tower instance with users and credentials.
In this post, we will discuss how to set up projects and inventories in your Ansible Tower instance.
Tower projects are logical collections of Ansible Playbooks, grouped together based on what they do or which hosts they interact with.
Playbooks can be managed within Tower projects either by adding them manually to the project base path on your Tower server (/var/lib/awx/projects) or by importing them from a source control management system (SCM) supported by Tower, such as Git, Subversion or Mercurial. Managing your projects with an SCM is recommended: it ensures that only users with assigned access to the repository can change a Playbook before execution, and it provides an extra layer of accountability and change control. If your Playbooks are managed by an SCM, you can select the update options “update on launch”, “delete on update” and “clean”.
If you select “update on launch”, Tower will sync each Continue reading
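If you prefer to script project creation rather than use the web UI, the tower-cli utility can create an SCM-backed project. Here is a rough sketch; the names and repository URL are placeholders, and the exact flag spellings should be confirmed with tower-cli project create --help:

tower-cli project create \
  --name "Example Playbooks" \
  --organization "Default" \
  --scm-type git \
  --scm-url https://github.com/example/playbooks.git \
  --scm-update-on-launch true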
Welcome to Technology Short Take #83! This is a slightly shorter TST than usual, which might be a nice break from the typical information overload. In any case, enjoy!
…a tool that works like ssh-copy-id on servers, but for network devices (leveraging Netmiko). Check out the GitHub repository.

The idea of an SSH bastion host is something I discussed here about 18 months ago. For the most part, it’s a pretty simple concept (yes, things can get quite complex in some situations, but I think these are largely corner cases). For the last few months, though, I’ve been trying to use an SSH bastion host and failing, and I could not figure out why it wouldn’t work. The answer, it turns out, lies in custom SSH configurations.
In my introduction on using SSH bastion hosts (linked above)—or in just about any tutorial out there on using SSH bastion hosts—brief mention is made of adding configuration information to SSH to use the bastion host. Borrowing from my original post, if you had an instance named “private1” that you wanted to access via a bastion named “bastion”, the SSH configuration information might look like this:
Host private1
IdentityFile ~/.ssh/rsa_private_key
ProxyCommand ssh user@bastion -W %h:%p
Host bastion
IdentityFile ~/.ssh/rsa_private_key
Normally, that information would go into ~/.ssh/config, which is the default SSH configuration file.
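With that configuration in place, the bastion hop is transparent. And if you keep the settings in a custom file rather than the default, ssh can be pointed at it explicitly with -F (the filename below is just an example):

ssh private1                            # reads ~/.ssh/config; hops through the bastion automatically
ssh -F ~/.ssh/custom_config private1    # same connection, but using a non-default config file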
In my case, I only allow public key authentication to “trusted” systems (I vaguely recall an article I read a while ago about a Continue reading
Last month at DockerCon, we introduced the Moby Project: an open-source project sponsored by Docker to advance the software containerization movement. The idea behind the project is to help the ecosystem take containers mainstream by providing a library of components, a framework for assembling them into custom container-based systems and a place for all container enthusiasts to experiment and exchange ideas. Going forward, Docker will be assembled using Moby; see Moby and Docker or the diagram below for more details.
Knowing that a good number of maintainers, contributors and advanced Docker users would be attending DockerCon, we decided to organize the first Moby Summit in collaboration with the Cloud Native Computing Foundation (CNCF). The summit was a small collaborative event for container hackers who are actively maintaining, contributing to or otherwise involved or interested in the design and development of components of the Moby project library, in particular: LinuxKit, containerd, InfraKit, SwarmKit, libnetwork and Notary.
Here’s what we covered during the first part of the summit:
At Interop ITX 2017 in Las Vegas, I had the privilege to lead a half-day workshop on options for deploying containers to cloud providers. As part of that workshop, I gave four live demos of using different deployment options. Those demos—along with the slides I used for my presentation along the way—are now available to anyone who might like to try them on their own.
The slides and all the resources for the demos are available in this GitHub repository. The four demos are:
Docker Swarm on EC2: This demo leverages Terraform and Ansible to stand up and configure a Docker Swarm cluster on AWS.
Amazon EC2 Container Service (ECS): This demo uses AWS CloudFormation to create an EC2 Container Service cluster with 3 instances and an Amazon RDS instance for backend database storage.
Kubernetes on AWS using kops: Using the kops CLI tool, this demo turns up a Kubernetes cluster on AWS to show how to deploy containerized applications on Kubernetes (a sketch of the kops commands follows this list).
Google Container Engine: The final demo shows using Google Container Engine—which is Kubernetes—to deploy an application.
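To give a flavor of the kops demo mentioned above, standing up the cluster boils down to a couple of commands along these lines; the cluster name, availability zone, and S3 state-store bucket are placeholders:

export KOPS_STATE_STORE=s3://example-kops-state-store
kops create cluster --name demo.example.com --zones us-west-2a   # writes the cluster spec to the state store
kops update cluster demo.example.com --yes                       # provisions the actual AWS resources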
In the coming weeks, I plan to recreate the demos, record them, and publish them via YouTube, so that Continue reading