A few weeks ago we announced Docker Enterprise Edition (EE), the trusted, certified and supported container platform. Docker EE enables IT teams to establish a Containers as a Service (CaaS) environment that converges legacy, ISV and microservices apps into a single software supply chain that is flexible, secure and infrastructure-independent. With a built-in orchestration architecture (swarm mode), Docker EE allows app teams to compose and schedule everything from simple to complex apps to drive their digital transformation initiatives.
On March 14th we hosted a live webinar to provide an overview and demonstration of Docker EE. View the recorded session below and read through some of the most popular questions.
Frequently Asked Questions
Q: How is Docker EE licensed?
A: Docker EE is licensed per node. A node is an instance running on a bare metal or virtual server. For more details visit www.docker.com/pricing
Q: Is Google Cloud also one of your certified infrastructure partners?
A: Docker EE is available today for both Azure and AWS. Google Cloud is currently offered as a private beta with Docker Community Edition. Learn more in this blog post and sign up at https://beta.docker.com
Q: What technology Continue reading
The last weekend in February, Holberton School and Docker held a joint Docker Hackathon where current students spent 24 hours making cool Docker hacks. Students were joined by Docker mentors who helped them along the way in addition to serving as judges for the final products.
Here are some highlights from the hackathon.
In their own words:
After discussing a few ideas, we settled on a Docker/Alexa integration that would abstract away repetitive command-line interactions, allowing the user/developer to check the state of her Docker containers and easily deploy them to production using only voice commands. Hands free, we would prompt Alexa to interact with our Docker images and containers in various ways (e.g., “spin up image file x on server y”, “list all running containers on server z”, “deploy image a from server x to server y”) and Alexa would do it.
The main technical hurdle of the project was communicating securely between Alexa and our running VMs. To do this we used the Java JSch library, which gave us the ability to programmatically shell into Continue reading
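As an illustration of the pattern described above — translating a voice intent into a Docker command executed over SSH — here is a minimal shell sketch. The host names, image name and the DRY_RUN switch are hypothetical placeholders, not part of the students' actual project (which used the Java JSch library from an Alexa skill backend):

```shell
#!/bin/sh
# Sketch: map a voice intent to a Docker command run on a remote host over SSH.
# Host/image names (server-z, image-x) are hypothetical; DRY_RUN=1 just prints
# the command instead of executing it, so the mapping can be inspected locally.
run_remote() {
  host="$1"; shift
  cmd="docker $*"
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "ssh $host '$cmd'"
  else
    ssh "$host" "$cmd"
  fi
}

# "list all running containers on server z"
run_remote server-z ps
# "spin up image x on server y"
run_remote server-y run -d image-x
```

A real skill backend would call something like this from its intent handler; key-based SSH authentication keeps the exchange secure without embedding passwords in the skill.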
This post is part of a series of posts sharing other users’ stories about their migration to Linux as their primary desktop OS. As I mentioned in part 1 of the series, there seemed to be quite a bit of pent-up interest in using Linux as your primary desktop OS. I thought it might be helpful to readers to hear not just about my migration, but also about others’ migrations. You may also find it interesting/helpful to read part 2 and part 3 of this series for more migration stories.
This time around I’ll share with you some information from Ajay Chenampara about his Linux migration. Note that although these stories are all structured in a “question-and-answer” format, the information is unique—just as each person’s migration and the reasons for the migration are unique.
Q: Why did you switch to Linux?
I have been a long-time Linux user, but I have only really used it as a media server or for casual browsing. Recently, I inherited a 7-year-old laptop from my wife and decided to focus on making it my primary system for writing my blog and for OSS efforts. Plus, I kept hearing about Debian “Jessie” Continue reading
One major aspect of my migration to Linux as my primary desktop OS is how well it integrates with corporate communication and collaboration systems. Based on the feedback I’ve gotten from others on Twitter, this is a major concern for a lot of folks out there. In fact, a number of folks have indicated that this is the only thing keeping them from migrating to Linux. There are a number of different aspects to “corporate communication and collaboration,” so I’m breaking this down into multiple posts (each post will discuss one particular aspect). In this post, I’ll discuss integration with corporate e-mail.
Because corporate e-mail is such an important part of how people communicate these days, it’s a fairly significant concern when thinking of migrating to Linux. Fortunately, it’s actually pretty easy to solve.
My employer, like many companies out there, uses Office 365 for corporate e-mail. Many people think that this locks them into Outlook on the desktop side, but that’s not accurate. (Now, you may be locked into Outlook for other reasons, like calendaring—a topic I’ll touch on in part 2 of this series.) For Office 365 users, there are three paths open for accessing corporate e-mail:
Green as it's cool, green as it's quiet, just like the trees. You'd think it's all good and perfect. It's also supposed to consume far less power. Yay, greener planet… Except…
When I started buying them in bulk, 500GB was a lot, and a 32MB cache seemed preferable to the 16MB that the Caviar Blue offered at the time. From time to time, when I was in a hurry and couldn't find a WD Green HDD, I'd settle for a Blue. After a couple of months a pattern started to emerge. Client after client started complaining about low performance. Their PCs would freeze, sometimes for as long as a couple of minutes, and then continue working again like nothing had happened. At the time I couldn't quite figure out why, after a couple of months of usage, WD Green HDDs would start acting up like that.
The strange thing was that nothing was reported anywhere. Not a single suspicious entry in the system logs or the so-called SMART log. Even on a couple of clients using Intel RAID, the “Intel RAID Chipset” seemed to be perfectly happy with minutes-long interruptions caused by the HDDs. And in a single case, one HDD suddenly died. Out Continue reading
We’re excited to announce that DockerCon 2017 will feature a comprehensive set of hands-on labs. We first introduced hands-on labs at DockerCon EU in 2015, and they were also part of DockerCon 2016 last year in Seattle. This year we’re offering a broader range of topics that cover the interests of both developers and operations personnel on both Windows and Linux (see below for a full list).
These hands-on labs are designed to be self-paced and are run from the attendee’s laptop. But don’t worry: all the infrastructure will again be hosted on Microsoft Azure, so all you will need is a laptop capable of initiating a remote session over SSH (for Linux) or RDP (for Windows).
We’ll have a nice space set up in between the ecosystem expo and breakout rooms for you to work on the labs. There will be tables and stools along with power and wireless Internet access as well as lab proctors to answer questions. But, because of the way the labs are set up, you could also stop by, sign up, and take your laptop to a quiet spot and work on your own.
As you can tell, we’re pretty stoked on Continue reading
Today, Docker announced its intention to donate the containerd project to the Cloud Native Computing Foundation (CNCF). Back in December 2016, Docker spun out its core container runtime functionality into a standalone component, incorporating it into a separate project called containerd, and announced we would be donating it to a neutral foundation early this year. Today we took a major step forward towards delivering on our commitment to the community by following the Cloud Native Computing Foundation process and presenting a proposal to the CNCF Technical Oversight Committee (TOC) for containerd to become a CNCF project. Given the consensus we have been building with the community, we are hopeful to get a positive affirmation from the TOC before CloudNativeCon/KubeCon later this month.
Over the past 4 years, the adoption of containers with Docker has triggered an unprecedented wave of innovation in our industry: we believe that donating containerd to the CNCF will unlock a whole new phase of innovation and growth across the entire container ecosystem. containerd is designed as an independent component that can be embedded in a higher level system, to provide core container capabilities. Since our December announcement, we have focused efforts on identifying the Continue reading
In case you missed it, this week we’re celebrating Docker’s 4th Birthday with meetups all over the world (check out #dockerbday on Twitter). This feels like the right time to look back at the past 4 years and reflect on what makes the Docker Community so unique and vibrant: people, values, mentorship and learning opportunities. You can read our own Jérôme Petazzoni’s blog post for a more technical retrospective.
Managing an open source project at that scale and preserving a healthy community doesn’t come without challenges. Last year, Arnaud Porterie wrote a very interesting three-part blog series on open source at Docker, covering the different challenges associated with the People, the Process, and the Tooling and Automation. The most important aspect of all is the people.
Respect, fairness and openness are essential values required to create a welcoming environment for professionals and hobbyists alike. In that spirit, we’ve launched a scholarship program and partnerships in an attempt to improve opportunities for underrepresented groups in the tech industry while helping the Docker Community become more diverse. If you’re interested in this topic, we’re fortunate enough to have Austin area high school student Kate Hirschfeld presenting at DockerCon on Diversity Continue reading
Last week, we announced Docker Enterprise Edition (EE) and Docker Community Edition (CE), new and renamed versions of the Docker platform. Docker EE, supported by Docker Inc., is available on certified operating systems and cloud providers and runs certified Containers and Plugins from Docker Store. The Docker open source products are now Docker CE, and we have adopted a new lifecycle and time-based versioning scheme for both Docker EE and CE.
We asked product manager and release captain Michael Friis to introduce Docker CE and EE to our online community. The meetup took place on Wednesday, March 8th, and over 600 people RSVPed to hear Michael’s presentation live. He gave an overview of both editions and highlighted the big enhancements to the lifecycle, maintainability and upgradability of Docker.
In case you missed it, you can watch the recording and access Michael’s slides below.
Here are additional resources:
DockerCon 2017 is only a few weeks away, and the schedule is available now on the DockerCon Agenda Builder. This will be the first DockerCon since Windows Server 2016 was released, bringing native support for Docker containers to Windows. There will be plenty of content for Windows developers and admins – here are some of the standouts.
On the main stages, there will be hours of content dedicated to Windows and .NET.
Michele Bustamante, CIO of Solliance, looks at what Docker can do for .NET applications. Michele will start with a full .NET Framework application and show how to run it in a Windows container. Then Michele will move on to .NET Core and show how the new cross-platform framework can build apps which run in Windows or Linux containers, making for true portability throughout the data center and the cloud.
Escape From Your VMs with Image2Docker
I’ll be presenting with Docker Captain Jeff Nickoloff, covering the Image2Docker tool, which automates app migration from virtual machines to Docker images. There’s Image2Docker for Linux, and Image2Docker for Windows. We’ll demonstrate both, porting an app with a Linux front end and a Continue reading
Over the last few weeks, I’ve been sharing various users’ stories about their own personal migration to Linux. If you’ve not read them already, I encourage you to check out part 1 and part 2 of this multi-part series to get a feel for why folks are deciding to switch to Linux, the challenges they faced, and the benefits they’ve seen (so far). Obviously, Linux isn’t the right fit for everyone, but at least by sharing these stories you’ll get a better feel whether it’s a right fit for you.
This is Brian Hall’s story of switching to Linux.
Q: Why did you switch to Linux?
I’ve been an OS X user since 2010. It was a huge change coming from Windows, especially since the laptop I bought had the first SSD that I’ve had in my primary machine. I didn’t think it could get any better. Over the years that feeling started to wear off.
OS X started to feel bloated. It seemed like OS X started to get in my way more and more often. I ended up formatting and reinstalling OS X like I used to do with Windows (maybe not quite as often). Setting up Mail to Continue reading
Last week, Cisco and Docker jointly announced a strategic alliance between our organizations. Based on customer feedback, one of the initial joint initiatives is the validation of Docker Enterprise Edition (which includes Docker Datacenter) against Cisco UCS and Nexus infrastructure. We are excited to announce that Cisco Validated Designs (CVDs) for Cisco UCS and FlexPod with Docker Enterprise Edition (EE) are immediately available.
CVDs represent the gold-standard reference architecture methodology for enterprise customers looking to deploy an end-to-end solution. The CVDs follow defined processes and cover not only provisioning and configuration of the solution, but also testing and documenting the solution for performance, scale and availability/failure – something that requires a lab setup with a significant amount of hardware reflecting actual production deployments. This enables our customers to achieve faster, more reliable and predictable implementations.
The two new CVDs published for container management offer enterprises a well-designed, end-to-end lab-tested configuration for Docker EE on Cisco UCS and FlexPod Datacenter. The collaborative engineering effort between Cisco, NetApp and Docker provides enterprises with best-of-breed solutions for Docker Datacenter on Cisco infrastructure and NetApp enterprise storage to run stateless or stateful containers.
The first CVD includes two configurations:
Yesterday marked International Women’s Day, a global day celebrating the social, cultural, economic and political achievements of women. In that spirit, we’re thrilled to announce that we’re partnering with Girl Develop It, a national 501(c)3 nonprofit that provides affordable and judgment-free opportunities for adult women interested in learning web and software development through accessible in-person programs. Through welcoming, low-cost classes, GDI helps women of diverse backgrounds achieve their technology goals and build confidence in their careers and their everyday lives.
Girl Develop It deeply values community and supportive learning for women regardless of race, education level, income and upbringing, and those are values we share. The Docker team is committed to ensuring that we create welcoming spaces for all members of the tech community. To proactively work towards this goal, we have launched several initiatives to strengthen the Docker community and promote diversity in the larger tech community, including our DockerCon Diversity Scholarship Program, which provides mentorship and a financial scholarship to attend DockerCon. PS — Are you a woman in tech who wants to attend DockerCon in Austin, April 17th-20th? Use code womenintech for 50% off your ticket!
In collaboration with the Continue reading
Today we’re excited to announce the beta of Docker Community Edition (CE) for Google Cloud Platform (GCP). Users interested in helping test and improve Docker CE for GCP should sign up at beta.docker.com. We’ll let users into the beta as the product matures and stabilizes, and we’re looking forward to your input and suggestions.
Docker CE for GCP is built on the same principle as Docker CE for AWS and Docker CE for Azure and provides a Docker setup on GCP that is:
Docker CE for GCP is the first Docker edition to launch using the InfraKit project. InfraKit helps us configure cloud infrastructure quickly, design upgrade processes and self-healing tailored to Docker’s built-in orchestration, and smooth out infrastructure differences between cloud providers to give Docker users a consistent container platform that maximises portability.
Welcome to the first in our series of blog posts for Getting Started with Ansible Tower. This series covers basic installation and functions of Tower and an overview of how to use Tower to implement IT automation.
To get started with Tower, you must first learn to install and stand up a single host. Future posts will cover other types of configurations, such as a redundant installation with an external database. For this post, we’ll be highlighting RHEL 7 and Ubuntu LTS.
1. Download the latest Tower edition
If you haven’t already, visit the trial page to have a download link sent to you. Our AMIs for AWS and our Vagrant image can be found there as well. If you have network restrictions, contact Ansible Sales and they can send you the bundled installer.
Note: We are currently working on a bundled installer for Ubuntu LTS; until then, use the standard installer for Ubuntu.
2. Unpack the file (tar xzvf towerlatest)
$ tar xzvf towerlatest
ansible-tower-setup-3.1.0/
ansible-tower-setup-3.1.0/group_vars/
ansible-tower-setup-3.1.0/group_vars/all
...

Or, for the bundled installer (towerbundlelatest):

$ tar xzvf Continue reading
Back in October 2016, Docker released InfraKit, an open source toolkit for creating and managing declarative, self-healing infrastructure. This is the second in a two-part series that dives more deeply into the internals of InfraKit.
In the first installment of this two-part series about the internals of InfraKit, we presented InfraKit’s design, architecture, and approach to high availability. We also discussed how it can be combined with other systems to give distributed computing clusters self-healing and self-managing properties. In this installment, we present an example of leveraging Docker Engine in Swarm Mode to achieve high availability for InfraKit, which in turn enhances the Docker Swarm cluster by making it self-healing.
One of the key architectural features of Docker in Swarm Mode is the manager quorum, powered by SwarmKit. The manager quorum stores information about the cluster, and consistency is achieved through consensus via the Raft consensus algorithm, which is also at the heart of other systems like etcd. This guide gives an overview of the architecture of Docker Swarm Mode and how the manager quorum maintains the state of the cluster.
One aspect of the cluster state Continue reading
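To make the quorum math above concrete: Raft requires a strict majority of the N managers to remain reachable, so a swarm tolerates the loss of (N-1)/2 managers. A quick sketch (plain shell arithmetic, not a Docker command):

```shell
# Raft majority: a cluster of N managers stays writable while at least
# (N/2)+1 managers are up, i.e. it tolerates (N-1)/2 manager failures.
tolerated_failures() {
  echo $(( ($1 - 1) / 2 ))
}

for n in 1 3 5 7; do
  echo "$n manager(s) -> tolerates $(tolerated_failures "$n") failure(s)"
done
```

This is also why odd manager counts (3, 5, 7) are the usual recommendation: going from 3 to 4 managers adds no additional fault tolerance, only more Raft traffic.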
Long-time readers of my site know that I’m a fan of Markdown, and I use it extensively. (This blog, in fact, is written entirely in Markdown and converted to HTML using Jekyll on GitHub Pages.) Since migrating to Linux as my primary desktop OS, I’ve also made the transition to doing almost all my presentations in Markdown as well. Here are the details on how I’m using Markdown for creating presentations on Linux.
There are a number of tools involved in my workflow for creating Markdown-based presentations on Linux:
This post is part of a series of posts sharing the stories of other users who have decided to migrate to Linux as their primary desktop OS. Each person’s migration (and their accompanying story) is unique; some people have embraced Linux only on their home computer; others are using it at work as well. I believe that sharing this information will help readers who may be considering a migration of their own, and who have questions about whether this is right for them and their particular needs.
For more information about other migrations, see part 1 or part 2 of the series.
This time around we’re sharing the story of Rynardt Spies.
Q: Why did you switch to Linux?
In short, I’ve always been at least a part-time Linux desktop user and a heavy RHEL server user. My main work machine is Windows. However, because of my work with AWS, Docker, etc., I find that being on a Linux machine with all the Linux tools at hand (especially OpenSSL and simple built-in tools like SSH) is invaluable when working in a Linux world. However, I’ve always used Linux Mint, or Ubuntu (basically Debian-derived distributions) for my desktop Continue reading
With the introduction of swarm mode in Docker 1.12, we showed the world how simple it can be to provision a secure and fully-distributed Docker cluster on which to deploy highly available and scalable applications. The latest Docker 1.13 builds on and improves these capabilities with new features, such as secrets management.
Continuing with the trend that simplicity is paramount to empowering individuals and teams to achieve their goals, today we are bringing swarm mode support to Docker Cloud, with a number of new cloud-enabled capabilities. All of this is in addition to the continuous integration (CI) features of Docker Cloud, including automatic builds, tests, security scans and the world’s largest hosted registry of public and private Docker image repositories.
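For readers who haven't yet tried the secrets management feature mentioned above, the basic Docker 1.13 workflow looks roughly like this. The secret, service and image names here are illustrative, and the commands must run against a swarm-mode manager:

```shell
# Create a secret from stdin; it is stored encrypted in the managers' Raft log.
echo "s3cret-db-password" | docker secret create db_password -

# Grant a service access to the secret; inside the service's containers it is
# surfaced as an in-memory file at /run/secrets/db_password.
docker service create --name api --secret db_password my-api-image:latest
```

Secrets are delivered only to containers of services explicitly granted access, which is what makes them preferable to passing credentials through environment variables.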
Fleet Management using Docker ID
Keeping track of many swarms sprawled across multiple regions or cloud providers can be a challenge. And securely connecting to remote swarms with TLS means teams must also spend time configuring and maintaining a public key infrastructure. By registering your new or existing swarms with Docker Cloud, teams can now easily manage a large number of swarms running anywhere, and need only their Docker ID to authenticate and securely access any of them.
Docker Continue reading
Welcome to Technology Short Take #79! There are lots of interesting links for you this time around.

I don’t believe I’ve mentioned Skydive here before (a grep of all my blog posts found nothing), so let me rectify that first. Skydive is (in the project’s own words) an “open source real-time network topology and protocols analyzer.” The project’s GitHub repository is here, and documentation for Skydive is here.

Nothing this time around. Should I keep this section, or ditch it? Feel free to give me your feedback on Twitter.