With industry analysts reporting that Windows Server holds more than 60% of the x86 server market, and citing Microsoft Azure as the fastest-growing public cloud, it comes as no surprise that Microsoft, even at its current scale, continues to extend its leadership as a strategic, trusted partner to enterprise IT.
It is this industry leadership that catalyzed our technical collaboration on the Docker open source project back in October 2014, to jointly bring the agility, portability, and security benefits of the Docker platform to Windows Server. After two years of joint engineering, we are excited to unveil a new commercial partnership that extends these benefits to both Windows developers targeting Windows Server and enterprise IT professionals.
Today, Microsoft is announcing general availability of Windows Server 2016 at the Ignite conference in Atlanta. For Windows developers and IT pros, the most exciting new Windows feature is containers, and containers on Windows Server 2016 are powered by Docker.
The first version of Docker was released in 2013, and in the three years since launch, Docker has completely transformed how Linux developers and ops teams build, ship, and run apps. With Docker Engine and containers now available natively on Windows, developers and IT pros can begin the same transformation for Windows-based apps and infrastructure and start reaping the same benefits: better security, more agility, and improved portability, including the freedom to move on-prem apps to the cloud.
For developers and IT pros who build and maintain heterogeneous deployments spanning both Linux and Windows infrastructure, Docker on Windows holds even greater significance: the Docker platform now represents a single set of tools, APIs, and image formats for managing both Linux and Windows apps. As Linux and Windows apps and servers are dockerized, developers and IT pros can bridge the operating system divide with shared Docker terminology and interfaces for managing and evolving complex microservices deployments, both on-prem and in the cloud.
It’s time for your weekly roundup! Get caught up on the top Docker news, including how to maintain dev environments for Java web apps, scale with Swarm, and make your CI/CD pipeline work for you. As we begin a new week, let’s recap our top five most-read stories of the week of September 18, 2016:
The post Docker Weekly Roundup | September 18, 2016 appeared first on Docker Blog.
I’ve spent most of the summer traveling to and speaking at a lot of different trade shows: EMC World, Cisco Live!, VMworld, HP Discover, Dockercon, and LinuxCon (as well as some meetups and smaller gatherings). A lot of the time, I’m speaking to people who are just getting familiar with Docker. They may have read an article or have had someone walk into their office and say “This Docker thing, so hot right now. Go figure it out”.
Certainly there are a number of companies running Docker in production, but there are still many who are asking fundamental questions about what Docker is, and how it can benefit their organization. To help folks out in that regard, I wrote an eBook.
After someone gets a grasp on what Docker is, they tend to want to dive in and start exploring, but oftentimes they aren’t sure how to get started.
My advice (based on the approach I took when I joined Docker last year) is to walk, jog, and then run:
Walk: Decide where you want to run Docker, and install it. This could be Docker for Mac, Docker for Windows, or just installing Docker on Linux.
By John Mulhausen
The documentation team at Docker is excited to announce that we are consolidating all of our documentation into a single GitHub Pages-based repository.
In this post, I’m going to show you how to install a specific version of the Docker Engine package on Ubuntu. While working on a side project (one that will hopefully bear fruit soon), I found myself needing to install a slightly older version of Docker Engine (1.11 instead of 1.12, to be specific). While this task isn’t hard, it also isn’t clearly spelled out anywhere, and this post aims to address that shortcoming.
If you’ve followed the instructions to add the Docker Apt repos to your system as outlined here, then installing the latest version of the Docker Engine is done like this:
apt-get install docker-engine
If you do an apt-cache search docker-engine, though, you’ll find that the “docker-engine” package is a metapackage that refers to a variety of different versions of the Docker Engine. To install a specific version of the Docker Engine, then, simply append the version (as reported by that same apt-cache search command) to the package name, like this:
apt-get install docker-engine=1.11.2-0~trusty
This will install version 1.11.2 of the Docker Engine.
You’ll use the same syntax when you need to install a specific…
Docker Captain is a distinction that Docker awards to select members of the community who are both experts in their field and passionate about sharing their Docker knowledge with others.
This week we are highlighting three of our outstanding Captains who are filling September with Docker learning and events. Read on to learn more about how they got started, what they love most about Docker, and why they recommend it.
Alex is a Principal Application Developer with expertise in the full Microsoft .NET stack, Node.js, and Ruby. He enjoys making robots and IoT-connected projects with Linux and the Raspberry Pi. He writes for Linux User and Developer magazine and also produces tutorials on Docker, coding, and IoT for his tech blog at alexellis.io.
As a Docker Captain, how do you share that learning with the community?
I started out by sharing tutorials and code on my blog alexellis.io and on GitHub. More recently I’ve attended local meetup groups, conferences, and tech events to speak and tell a story about Docker and cool hacks. I joined Twitter in March, and it’s definitely a must-have for reaching people.
Why do you like Docker?
Docker…
Next week Microsoft will host over 20,000 IT executives, architects, engineers, partners and thought-leaders from around the world at Microsoft Ignite, September 25th-30th at the Georgia World Congress Center in Atlanta, Georgia.
Visit the Docker booth (#758) to learn how developers and IT pros can build, ship, and run any application, anywhere, across both Windows and Linux operating systems with Docker. By transforming modern application architectures for Linux and Windows applications, Docker allows businesses to benefit from a more agile development environment with a single journey for all their applications.
Don’t miss out! Docker experts will be on hand for in-booth demos to help you:
Calling all Microsoft MVPs!
Attend our daily in-booth theater session, “Docker Containers for Linux and Windows,” with Docker evangelist Mike Coleman at 2 PM in the Docker booth. Session attendees will receive exclusive Docker and Microsoft swag.
To learn more about how Docker powers Windows containers, add these key Docker sessions to your Ignite agenda:
GS05: Reinvent IT infrastructure for business agility
Microsoft’s strategy…
Thanks to everyone who joined us last Thursday. We were really excited to participate in the first Cloud Field Day event and to host it at Docker HQ in San Francisco. Watching the trend toward cloud and the changing dynamics of application development, Tech Field Day organizers Stephen Foskett and Tom Hollingsworth started Cloud Field Day to create a forum for companies to share and for the delegates to discuss. The delegates came from backgrounds in software development, containers, networking, virtualization, storage, data, and of course, cloud. As always, the delegates asked a lot of questions, kicked off some great discussions, and even had some spirited debates, both in the room and online, always with the end user in mind. We are looking forward to doing this again.
ICYMI: The videos and event details are now available online, and you can also follow the conversation with the delegates on Twitter.
#containers are really about applications, not infrastructure #cfd1 @docker https://t.co/BAabGfwKIm pic.twitter.com/S8YrLDLd92
— Karen Lopez (@datachick) September 15, 2016
It’s staggering how far apart many traditional IT departments are from where the leading edge currently is… #CFD1
— Jason Nash (@TheJasonNash) September 15, 2016
There is NO way…
Developing Java web applications often requires that they be deployable on multiple technology stacks. These typically include an application server and a database, but the components can vary from deployment to deployment. Building and managing multiple stacks in a development environment can be a time-consuming task, often requiring a unique configuration for each stack.
Docker can simplify the process of building and maintaining development environments for Java web applications by building custom images that application developers can create on demand and use for developing, testing, and debugging applications. We have recently published a tutorial for building a Java web application using containers and three popular Java IDEs. Docker enables developers to debug their code as it runs in containers. The tutorial covers setting up a debug session with an application server in Docker using the IDEs developers typically use, such as Eclipse, IntelliJ IDEA, and NetBeans. Developers can build the application, change code, and set breakpoints while the application is running in the container. The tutorials use a simple Spring MVC application to illustrate how to use containers when developing Java applications.
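A rough sketch of what such a debug-enabled container launch can look like (the image name, ports, and environment variable below are illustrative assumptions, not taken from the tutorial itself):

```shell
# Standard JDWP agent string: listen for a debugger on port 8000 without
# suspending the JVM at startup
JDWP="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000"

# Publish the app port (8080) and the debug port (8000), then attach
# Eclipse, IntelliJ IDEA, or NetBeans to localhost:8000.
# "myorg/spring-mvc-app" is a placeholder for your own application image.
docker run -d --name webapp-debug \
  -p 8080:8080 -p 8000:8000 \
  -e JAVA_OPTS="$JDWP" \
  myorg/spring-mvc-app
```

With suspend=y instead, the JVM waits for the debugger to attach before starting, which is handy for debugging application startup.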
The tutorial is available on GitHub in our Docker Labs repository.
If you’re going to be in Barcelona for either VMworld EMEA (running the week of October 17) or the fall 2016 OpenStack Summit (running the week of October 24), then I recommend you plan for your spouse/partner/girlfriend/boyfriend/whatever to join you for what I believe are some pretty spectacular Spousetivities.
First, let’s have a quick look at the activities planned around VMworld EMEA. What’s in store? Here’s a quick sneak peek (check out the registration page for full details):
Tickets for all these events are available now. These events are sponsored by VMware NSX, Veeam, VMUG, and TVP Strategy.
If you’re coming to Barcelona for the OpenStack Summit instead (or perhaps staying over…)
In July, we released Ansible Tower 3. This blog series is a deep dive into some of the new aspects of Tower. We've reworked Tower to make it simpler and easier to automate your environments and share your automation solutions. For a complete overview of the Tower 3 updates, check out this post by Bill Nottingham, Director of Product.
Before we look at what's new, let’s recall the pre-3.0 installer, referred to hereafter as the legacy installer. The legacy installer was designed to be run by users without Ansible knowledge.
This requirement led to a two-step process:
Step 1:
./configure prompts the user for the configuration information needed to set up Tower. This includes things like the Tower mode (i.e., single machine, remote database, or HA), SSH connection information, and service passwords. The Ansible variable file, tower_setup_conf.yml, is generated to be consumed by the ./setup.sh script.
tower_setup_conf.yml:

admin_password: password
database: internal
pg_password: BQgA2Z43jv86dzjDEswH7K75LAwufzSXbE7jUztq
primary_machine: localhost
redis_password: S3tab7QfWe2e92JEB9hNNFUunV4ircg3EdRdjpxP
Step 2:
./setup.sh wraps the Ansible install.yml, backup.yml, and restore.yml playbooks, passing in the appropriate run-time flags to include the previously generated configuration variable file and to manage the generated logs. …
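Conceptually, the wrapped install step amounts to something like this (the inventory name and exact flags are assumptions on my part; only the playbook and variable-file names come from the post):

```shell
# What setup.sh does, roughly: run the install playbook and feed it the
# variable file that ./configure generated
ansible-playbook -i inventory install.yml -e @tower_setup_conf.yml
```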
As we arrive at the conclusion of another week, the team at Docker wanted to take a moment to reflect on a few of the top posts you might have missed, while also highlighting a few other Docker stories from around the web. Here’s the weekly roundup for the week of September 11, 2016:
It’s here! HPE Docker ready servers are now available. These servers are pre-configured, integrated and validated with commercially supported Docker Engine out of the box. Enterprises can ease the adoption of Docker through a trusted hardware platform.
Announced in June, the Docker and Hewlett Packard Enterprise (HPE) partnership has been called one of “The 10 Most Important Tech Partnerships In 2016 (so far)” by CRN, as a way to bring infrastructure-optimized Docker technology to enable a modern application platform for the enterprise.
Integrated, Validated and Supported
Docker ready servers are available for the HPE ProLiant, Cloudline, and Hyper Converged Systems lines. These servers ship with the commercially supported Docker Engine (CS Engine) pre-installed and include enterprise-class support direct from HPE, backed by Docker. Whether deploying new servers or facing a hardware refresh, enterprises looking to adopt containerization can benefit from a simplified and repeatable deployment option on hardware they trust.
HPE Docker ready servers accelerate businesses’ time to value with everything needed in a single server to scale and support Docker environments, combining the hardware and OS you already use in your environment with the Docker CS Engine. Docker CS Engine is a commercially supported container runtime and native…
Today Docker is proud to announce that we are a founding member of the Vendor Security Alliance (VSA), a coalition formed to help organizations streamline their vendor evaluation processes by establishing a standardized questionnaire for appraising a vendor’s security and compliance practices. The VSA was established to solve a fundamental problem: how can IT teams conform to their existing security practices when procuring and deploying third-party components and platforms?
The VSA solves this problem by developing a required set of security questions that will allow vendors to demonstrate to their prospective customers that they are doing a good job with security and data handling. Good security is built on great technology paired with processes and policies. Until today, there was no consistent way to discern if all these things were in place. Doing a proper security evaluation today tends to be a hard, manual process. A large number of key questions come to mind when gauging how well a third-party company manages security.
As an example, these are the types of things that IT teams must be aware of when assessing a vendor’s security posture:
In this post, I’d like to describe how to use Vagrant with AWS, as well as provide a brief description of why this combination of technologies may make sense for some use cases. In some respects, this post is similar to my posts on using Docker Machine with OpenStack and using Vagrant with OpenStack in that combining Vagrant with AWS creates another clean “provider/consumer” model that makes it easy for users to consume infrastructure.
If you aren’t already familiar with Vagrant, I’d highly recommend first taking a look at my introduction to Vagrant, which provides an overview of the tool and how it’s used.
Naturally, you’ll need to first ensure that you have Vagrant installed. This is really well-documented already, so I won’t go over it here. Next, you’ll need to install the AWS provider for Vagrant, which you can handle using this command:
vagrant plugin install vagrant-aws
Once you’ve installed the vagrant-aws plugin, you’ll next need to install a box that Vagrant can use. Here, the use of Vagrant with AWS is a bit different than the use of Vagrant with a provider like VirtualBox or VMware Fusion/VMware Workstation. In those cases, the box…
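To make the box step concrete, here is a minimal sketch assuming the “dummy box” convention from the vagrant-aws project (the box URL comes from that project’s README; with AWS, the real machine details live in the Vagrantfile rather than in the box):

```shell
# Install the AWS provider plugin
vagrant plugin install vagrant-aws

# Add the provider's placeholder "dummy" box; the AMI, instance type, and
# credentials are supplied in the Vagrantfile, not baked into the box
vagrant box add aws-dummy https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box

# Bring the instance up using the AWS provider
vagrant up --provider=aws
```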
This is a short collection of tips and tricks showing how Docker can be useful when working with Go code. For instance, I’ll show you how to compile Go code with different versions of the Go toolchain, how to cross-compile to a different platform (and test the result!), or how to produce really small container images.
The following article assumes that you have Docker installed on your system. It doesn’t have to be a recent version (we’re not going to use any fancy feature here).
… And by that, we mean “Go without installing go”.
If you write Go code, or if you have even the slightest interest in the Go language, you almost certainly have the Go compiler and toolchain installed, so you might be wondering “what’s the point?” But there are a few scenarios where you want to compile Go without installing Go.
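For instance, here is one way to compile a program with no local Go toolchain at all, by mounting the source into the official golang image (the image tag is just an example from this era; any recent tag works):

```shell
# Write a trivial Go program to a scratch directory
mkdir -p /tmp/hello-go
cat > /tmp/hello-go/hello.go <<'EOF'
package main

import "fmt"

func main() {
	fmt.Println("hello from a containerized toolchain")
}
EOF

# Compile it inside the golang image; because the source directory is
# volume-mounted, the resulting binary lands back on the host
docker run --rm -v /tmp/hello-go:/src -w /src golang:1.7 go build -o hello .
```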
Today the Docker team is excited to announce a new tiered Docker Partner Program to address the growing demand by companies to adopt Containers as a Service environments with Docker Datacenter. This enhanced program provides end-to end support for a community of Resellers, Regional Consulting partners, Global Systems Integrators and Federal Systems Integrators.
Since the launch of Docker over three years ago, there has been tremendous adoption of Docker container technology by the developer community to accelerate development and CI workflows. Companies of all sizes, from startups to the Fortune 500, in healthcare, financial services, mobile apps, and more, are leaning on Docker to transform their application pipelines by containerizing legacy and new microservices applications. As companies embark on their Docker journey, they are looking to their business partners to assist them in developing a business case, understanding the functionality, architecting use cases, and deploying the environments.
New Program, Training and Resources
Containerization is the next catalyst for transformation in application infrastructure. The Docker Partner Program is designed to help partners build a successful “Docker practice” rooted in best practices and technical expertise, as an extension of their expertise in cloud technologies, the software-defined datacenter, and converged infrastructure.
The new tiered…
In case you missed it, we recently launched Dockercast, the official Docker podcast, which includes all of the DockerCon 2016 sessions as podcast episodes.
In this podcast I catch up with Nirmal Mehta at Booz Allen Hamilton. Nirmal has been a big part of the Docker community and is also a Docker Captain.
Nirmal works with some large government organizations, and we discussed why these types of institutions seem to be early adopters of Docker. As most would guess, speed was an obvious driver; however, security was also an early one. It turns out that, because of the tighter boundaries of Docker containers, some of these organizations felt the potential security benefits were greater than with virtualization. We discuss these ideas as well as what it is like to be a Docker Captain.
You can find the latest #Dockercast episodes on the iTunes Store or via the SoundCloud RSS feed.
The post New Dockercast Episode with Docker Captain, Nirmal Mehta appeared first on Docker Blog.
This post is a follow-up on my earlier post on using an SSH bastion host. Since that article was published, I’ve gotten some additional information that I wanted to be sure to share with my readers. It’s possible that this additional information may not affect you, but I’ll allow you to make that determination based on your use case and your specific environment.
You may recall that my original article said that you needed to enable agent forwarding, either via the -A command-line switch or via a ForwardAgent line in your SSH configuration file. This is unnecessary. (Thank you to several readers who contacted me about this issue.) I tested this several times using AWS instances, and was able to transparently connect to private instances (instances without a public IP address) via a bastion host without enabling agent forwarding. This is odd because almost every other tutorial I’ve seen or read instructs readers to enable agent forwarding. I’ve not yet determined why this is the case, but I’m going to do some additional testing and I’ll keep readers posted as I learn more.
Note that I’ve updated the original article accordingly.
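One way to get that transparent behavior without agent forwarding is a ProxyCommand-based setup in ~/.ssh/config, which tunnels the connection through the bastion using only locally held keys (the host names, user, and key path below are placeholders):

```
Host bastion
    HostName bastion.example.com
    User ec2-user
    IdentityFile ~/.ssh/id_rsa

# Private instances reached through the bastion; no ForwardAgent needed,
# because "ssh -W" tunnels the TCP connection rather than forwarding keys
Host 10.0.1.*
    User ec2-user
    IdentityFile ~/.ssh/id_rsa
    ProxyCommand ssh -W %h:%p bastion
```

Because both hops authenticate with keys read directly from the local machine, nothing sensitive is ever exposed on the bastion itself.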