Kubernetes’ gravity as the container orchestrator of choice continues to grow, and for good reason: it has the broadest capabilities of any container orchestrator available today. But all that power comes at a price; jumping into the cockpit of a state-of-the-art jet puts a lot of power under you, but how to actually fly the thing is not obvious.
Kubernetes’ complexity is overwhelming for a lot of people jumping in for the first time. In this blog series, I’m going to walk you through the basics of architecting an application for Kubernetes, with a tactical focus on the actual Kubernetes objects you’re going to need. I’m not, however, going to spend much time reviewing 12-factor design principles and microservice architecture; there are some excellent ideas in those sorts of strategic discussions with which anyone designing an application should be familiar, but here on the Docker Training Team I like to keep the focus on concrete, hands-on-keyboard implementation as much as possible.
Furthermore, while my focus is on application architecture, I would strongly encourage DevOps engineers and developers building for Kubernetes to follow along, in addition to readers in application architecture Continue reading
Docker support for cross-platform applications is better than ever. At this month’s Docker Virtual Meetup, we featured Docker Architect Elton Stoneman showing how to build and run truly cross-platform apps using Docker’s buildx functionality.
With Docker Desktop, you can now describe all the compilation and packaging steps for your app in a single Dockerfile, and use it to build an image that will run on Linux and Windows, on Intel and Arm, in both 32-bit and 64-bit variants. In the video, Elton covers the Docker runtime and its understanding of OS and CPU architecture, together with the concept of multi-architecture images and manifests.
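As a rough sketch of what that looks like in practice (the repository name and platform list below are placeholders, and buildx must be enabled in your Docker CLI):

docker buildx create --use
docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t <your-repo>/<your-app>:latest --push .

The --platform flag tells buildx which OS and CPU combinations to build for, and --push publishes the resulting multi-architecture manifest to the registry in a single step.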
The key takeaways from the meetup all center on putting buildx to work in practice.
Not a Docker Desktop user? Jason Andrews, a Solutions Director at Arm, posted this great article on how to set up buildx using Docker Community Engine on Linux.
Check out the full meetup on Docker’s YouTube Channel:
You can also access the demo repo here. The sample code for this meetup is from Elton’s latest book, Learn Docker in a Month of Lunches, an accessible task-focused Continue reading
On the heels of our recent update on image tag details, the Docker Hub team is excited to share the availability of personal access tokens (PATs) as an alternative way to authenticate to Docker Hub.
Already available as part of Docker Trusted Registry, personal access tokens can now be used as a substitute for your password in Docker Hub, especially for integrating your Hub account with other tools. You can use these tokens to authenticate your Hub account from the Docker CLI – either from Docker Desktop or Docker Engine:
docker login --username <username>
When you’re prompted for a password, enter your token instead.
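For scripts and CI systems, you can also pipe the token in on standard input instead of typing it at the prompt (the environment variable name here is just an example):

echo $HUB_TOKEN | docker login --username <username> --password-stdin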
The advantage of using tokens is that you can create and manage multiple tokens, generating a different token for each integration – and revoking each one independently at any time.
Personal access tokens are created and managed in your Account Settings.
From here, you can create new tokens and revoke existing ones.
Note that the actual token is only shown once, at the time Continue reading
We sat down recently with our customer, Wiley Education Services, to find out how Docker Enterprise helps them connect with and empower higher education students. Wiley Education Services (WES) is a division of Wiley Publishing that delivers online services to over 60 higher education institutions.
We spoke with Blaine Helmick, Senior Manager of Systems Engineering about innovation and technology in education. Read on to learn more about Wiley, or watch the short video interview with Blaine:
Our mission at Wiley Education Services is empowering people: connecting them to their futures. We serve over 60 higher education partners around the world, and our role is to connect you to our higher education partners when you’re looking for a degree and, frankly, looking to change your life.
Wiley has been around for over 200 years. One of the really amazing things about being in an organization that’s been around that long is that you have to have a culture of innovation at your core.
Technology like Docker has really empowered our business because it allows us to innovate, and it allows us to experiment. That’s critical because Continue reading
We sat down recently with InterSystems, our partner and customer, to talk about how they deliver an enterprise database at scale to their customers. InterSystems’ software powers mission-critical applications at hospitals, banks, government agencies and other organizations.
We spoke with Joe Carroll, Product Specialist, and Todd Winey, Director of Partner Programs at InterSystems about how containerization and Docker are helping transform their business.
Here’s what they told us. You can also catch the highlights in this 2-minute video:
Joe Carroll: InterSystems is a 41 year old database and data platform company. We’ve been in data storage for a very long time and our customers tend to be traditional enterprises — healthcare, finance, shipping and logistics as well as government agencies. Anywhere that there’s mission critical data we tend to be around. Our customers have really important systems that impact people’s lives, and the mission critical nature of that data characterizes who our customers are and who we are.
Todd Winey: Many of those organizations and industries have been traditionally seen as laggards in terms of their technology adoption in the past, so the speed with which they’re moving Continue reading
Last year at DockerCon and Microsoft Connect, we announced the Cloud Native Application Bundle (CNAB) specification in partnership with Microsoft, HashiCorp, and Bitnami. Since then the CNAB community has grown to include Pivotal, Intel, DataDog, and others, and we are all happy to announce that the CNAB core specification has reached 1.0.
We are also announcing the formation of the CNAB project under the Joint Development Foundation, a part of the Linux Foundation that’s chartered with driving adoption of open source and standards. The CNAB specification is available at cnab.io. Docker is working hard with our partners and friends in the open source community to improve software development and operations for everyone.
Docker was one of the first to implement the CNAB specification with Docker App, our reference implementation available on GitHub. Docker App can be used both to build CNAB bundles from Docker Compose files (which can then be used with any other CNAB client) and to install, upgrade, and uninstall any other CNAB bundle.
It also forms the underpinnings of application templates in Docker Desktop Enterprise. With Docker App, we are making CNAB-compliant applications as easy to use Continue reading
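As a rough sketch of that workflow (the installation name below is hypothetical, and the exact subcommands and flags can vary between Docker App releases, so check docker app --help for your version):

# package a Compose-based Docker App as a CNAB bundle
docker app bundle
# install, upgrade, and remove a CNAB bundle
docker app install bundle.json --name myapp
docker app upgrade myapp
docker app uninstall myapp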
One of Docker’s core missions is delivering choice and flexibility across different application languages and frameworks, operating systems, and infrastructure. When it comes to modern applications, the choice of infrastructure is not just whether the application is run on-premises, on virtual machines or bare metal, or in the cloud. It can also be a choice of which architecture – x86, Arm, or GPU.
Today, we’re happy to share some updates in Docker Hub that make it easier to access multi-architecture images and scanning results through the Tag UX.
In this example, we’re looking at a listing for a Docker Official Image that supports x86, PowerPC and IBM Z, as listed in the labels. When you land on the image page on Docker Hub, you can quickly identify whether an image supports multiple architectures from the labels underneath the image name. For further details, you can click on ‘Tags’:
In this section, you can now view the different architectures separately to easily identify the right image for the architecture you need, complete with image size and operating system information:
If you click on the digest for a particular architecture, you will now also be able to Continue reading
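The same per-architecture information is also available from the command line. As an example (depending on your Docker version, you may need to enable experimental CLI features first), docker manifest inspect lists the digest, operating system, and CPU architecture for every image referenced by a multi-architecture tag:

docker manifest inspect ubuntu:latest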
Earlier in August, we hosted a series of virtual events to introduce Docker Enterprise 3.0. Thousands of you registered and joined us, and many of you asked great questions. This blog contains the top questions and answers from the event series.
Q: Can Docker Enterprise be used on AWS and other cloud providers?
A: Yes! Docker Enterprise, including the Docker Universal Control Plane (UCP) and Docker Trusted Registry (DTR), can be deployed to any of the leading cloud environments, including AWS, Azure and GCP. With Docker Enterprise 3.0, we also launched the Docker Cluster CLI plugin for use with Docker Certified Infrastructure. The plugin (now supporting AWS and Azure) allows for simple installation and upgrading of Docker Enterprise on selected cloud providers.
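As a rough sketch of how the plugin is used (the file name below is a placeholder, and the exact flags and YAML schema may differ by release, so consult the Docker Cluster documentation for the current syntax):

docker cluster create --file cluster.yml
docker cluster ls
docker cluster rm <cluster-name>

Here, cluster.yml would declare the target cloud provider, node counts, and the Docker Enterprise components to install.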
Q: Is Docker Cluster only available in the public cloud, or is it possible to add local machines or VMs?
A: Additional support for VMware vSphere environments is coming shortly. If you have other platforms that need to be supported, please engage with your account team to provide that feedback!
Q: Does Docker Kubernetes Service (DKS) work with both on-premises and other Kubernetes environments such as EKS, AKS, Continue reading
We had the chance recently to sit down with the Citizens Bank mortgage division and ask them how they’ve incorporated innovation into a regulated and traditional business that is still very much paper-based.
The most important lesson they’ve learned: you have to be willing to “fail fearlessly,” but to do that, you also have to minimize the consequences and cost of failure so you can constantly try new ideas. With Docker Enterprise, the team has been able to take ideas from concept to production in as little as a day.
Here’s what they told us. You can also catch the highlights in this 2-minute video:
Matt Rider, CIO Mortgage Division: Our focus is changing the mortgage technology experience at the front end with the borrower and on the back end for the loan officers and the processors. How do we bring those two together? How do we reduce the aggravation that comes with obtaining a mortgage?
Matt: When I came here I recognized that we were never going to achieve our vision if we kept doing things the same way. We wanted to reduce the aggravation that comes with obtaining a mortgage. Continue reading
If you’ve worked in IT for a few years, you’ve seen it happen. You select an application framework, operating system, database platform, or other infrastructure because it meets the checklist, the price is right, or sometimes because of internal politics. You quickly discover that it doesn’t play well with other solutions or across platforms — except of course it’s “easy and seamless” when used with offerings from the same vendor.
But try telling your developers that they can’t use their favorite framework or development toolset, or that they have to use a specific operating system for everything they do. If developers feel like they don’t have flexibility, they quickly adopt their own tools, creating a second wave of shadow IT.
And it doesn’t just affect developers. IT operations and security get bogged down in managing multiple systems and software sprawl. The business suffers because efficiency and innovation lag when teams get caught up in fighting fires.
Below are 5 things that can go wrong when you get locked into an infrastructure platform:
Will the platform you pick work with any combination of public and private clouds? Will you get cornered into Continue reading
In all of the excitement and buzz around Kubernetes, one important part of the conversation seems to get glossed over: how and where containerized applications are built. Going back to Docker’s roots, it was developers who were the first to adopt Docker containers. Containers solved their local development problems and made it easier and faster to get applications out the door.
Fast forward 5 years, and developers are more important than ever. They build modern apps and modernize existing apps that are the backbone of organizations. If you’re in IT operations and selecting application platforms, one of the biggest mistakes you can make is making this decision in isolation, without development buy-in.
In the early days of public cloud, developers started going around IT to get fast access to computing resources, creating the first round of “Shadow IT”. Today, most large enterprises have embraced cloud applications and infrastructure, and work collaboratively across application development and operations teams to serve their needs.
But there’s a risk we’ll invite the same thing to happen again by making a container platform decision that doesn’t involve your developers. Here are 3 reasons to Continue reading