From October through December, Docker User Groups all over the world are hosting workshops for their local communities! Join us for an Introduction to Docker for Developers, a hands-on workshop we run on Play with Docker.
This Docker 101 workshop for developers is designed to get you up and running with containers. You’ll learn how to build images, run containers, use volumes to persist data and mount in source code, and define your application using Docker Compose. We’ll even mix in a few advanced topics, such as networking and image-building best practices. There is definitely something for everyone!
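To give a flavor of what those topics look like together, here is a minimal Compose file sketch; the service names, image, ports, and paths are illustrative placeholders, not taken from the workshop material itself:

```yaml
# docker-compose.yml — illustrative sketch only
version: "3.7"
services:
  app:
    build: .                     # build the image from the local Dockerfile
    ports:
      - "3000:3000"
    volumes:
      - ./src:/app/src           # bind-mount source code for live editing
  db:
    image: mysql:5.7
    volumes:
      - db-data:/var/lib/mysql   # named volume persists data across restarts
volumes:
  db-data:
```

Running `docker-compose up` against a file like this brings up both services and wires the volumes, which is exactly the workflow the workshop walks through.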
Visit your local User Group page to see if there is a workshop scheduled in your area. Don’t see an event listed? Email the team by scrolling to the bottom of the chapter page and clicking the contact us button. Let them know you want to join in on the workshop fun!
Don’t see a user group in your area? Never fear, join the virtual meetup group for monthly meetups on all things Docker.
The #LearnDocker for #developers workshop series is coming to Continue reading
Last week I had a crazy idea: if kustomize can be used to modify YAML files like Kubernetes manifests, then could one use kustomize to modify a kubeadm configuration file, which is also a YAML manifest? So I asked about it in one of the Kubernetes-related channels in Slack at work, and as it turns out it’s not such a crazy idea after all! So, in this post, I’ll show you how to use kustomize to modify kubeadm configuration files.
If you aren’t already familiar with kustomize, I recommend having a look at this blog post, which provides an overview of the tool. For the base kubeadm configuration files to modify, I’ll use the kubeadm configuration files from this post on setting up a Kubernetes 1.15 cluster with the AWS cloud provider.
While the blog post linked above provides an overview of kustomize, it certainly doesn’t cover all the functionality kustomize provides. In this particular use case—modifying kubeadm configuration files—the functionality described in the linked blog post doesn’t get you where you need to go. Instead, you’ll have to use the patching functionality of kustomize, which allows you to overwrite specific fields within the YAML definition Continue reading
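As a rough sketch of what that patching approach can look like (the file names, resource name, and field values here are illustrative assumptions, not taken from the linked posts), a kustomization.yaml can target the kubeadm ClusterConfiguration with a JSON 6902 patch:

```yaml
# kustomization.yaml — illustrative sketch
resources:
  - kubeadm.yaml                  # base kubeadm configuration file
patchesJson6902:
  - target:
      group: kubeadm.k8s.io
      version: v1beta2
      kind: ClusterConfiguration
      name: kubeadm-config        # hypothetical; kustomize matches on
                                  # group/version/kind/name, so the base file
                                  # may need a metadata.name added
    path: set-cloud-provider.yaml
---
# set-cloud-provider.yaml — a separate patch file overwriting one field
- op: add
  path: /apiServer/extraArgs/cloud-provider
  value: aws
```

Running `kustomize build .` would then emit the base kubeadm configuration with only that field overwritten.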
We’re continuing our celebration of Women in Tech Week into this week with another profile of one of many of the amazing women who make a tremendous impact at Docker – this week, and every week – helping developers build modern apps.
What is your job?
Product Designer.
How long have you worked at Docker?
11 months.
Is your current role one that you always intended on your career path?
The designer part, yes. But the software product part, not necessarily. My background is in architecture and industrial design, and I imagined I would do physical product design. But I enjoy UX; the speed at which you can iterate is great for design.
What is your advice for someone entering the field?
To embrace discomfort. I don’t mean that in a bad way. A mentor once told me that the only time your brain is actually growing is when you’re uncomfortable. It has something to do with the dendrites being forced to grow because you’re forced to learn new things.
Kubernetes is a powerful container orchestrator and has been establishing itself as IT architects’ container orchestrator of choice. But Kubernetes’ power comes at a price; jumping into the cockpit of a state-of-the-art jet puts a lot of power under you, but knowing how to actually fly it is not so simple. That complexity can overwhelm a lot of people approaching the system for the first time.
I wrote a blog series recently where I walk you through the basics of architecting an application for Kubernetes, with a tactical focus on the actual Kubernetes objects you’re going to need. The posts go into quite a bit of detail, so I’ve provided an abbreviated version here, with links to the original posts.
With a machine as powerful as Kubernetes, I like to identify the absolute minimum set of things we’ll need to understand in order to be successful; there’ll be time to learn about all the other bells and whistles another day, after we master the core ideas. No matter where your application runs, in Kubernetes or anywhere else, there are four concerns we are going to have to address:
The upcoming Red Hat Ansible Engine 2.9 release has some really exciting improvements, and the following blog highlights just a few of the notable additions. In typical Ansible fashion, development of Ansible Network enhancements is done in the open with the help of the community. You can follow along by watching the GitHub project board, as well as the roadmap for the Red Hat Ansible Engine 2.9 release via the Ansible Network wiki page.
As was recently announced, Red Hat Ansible Automation Platform now includes Ansible Tower, Ansible Engine, and all Ansible Network content. To date, many of the most popular network platforms are enabled via Ansible Modules. Here are just a few:
A full list of the platforms that are fully supported by Red Hat via an Ansible Automation subscription can be found at the following location: https://docs.ansible.com/ansible/2.9/modules/network_maintained.html#network-supported
In the last four years we’ve learned a lot about developing a platform for network automation. We’ve also learned a lot about how users apply these platform artifacts as consumed in end-user Ansible Playbooks and Roles. In the Continue reading
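For a flavor of what automating one of these network platforms looks like in an end-user playbook (the platform, host group, and module choice here are illustrative examples, not drawn from the post):

```yaml
# Illustrative playbook using one of the network platform modules (ios_facts);
# the inventory group name and platform are assumptions for the example.
---
- name: Gather facts from Cisco IOS devices
  hosts: routers
  gather_facts: no
  connection: network_cli

  tasks:
    - name: Collect device facts
      ios_facts:
        gather_subset: all

    - name: Show the OS version reported by the device
      debug:
        msg: "{{ ansible_net_version }}"
```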
We’re continuing our celebration of Women in Tech Week with another profile of one of many of the amazing women who make a tremendous impact at Docker – this week, and every week – helping developers build modern apps.
What is your job?
SEG Engineer (Support Escalation Engineer).
How long have you worked at Docker?
4 months.
Is your current role one that you always intended on your career path?
The SEG role is a combination that probably doesn’t exist as a general rule. I’ve always liked to support other engineers and work cross-functionally, as well as unravel hard problems, so it’s a great fit for me.
What is your advice for someone entering the field?
The only thing constant about a career in tech is change. When in doubt, keep moving. By that, I mean keep learning, keep weighing new ideas, keep trying new things.
In my first month at Docker, we hosted a summer cohort of students from Historically Black Colleges who were participating in a summer internship. As part of their visit Continue reading
We’re continuing our celebration of Women in Tech Week with another profile of one of many of the amazing women who make a tremendous impact at Docker – this week, and every week – helping developers build modern apps.
What is your job?
I work as a data engineer – building and maintaining data pipelines and delivery tools for the entire company.
How long have you worked at Docker?
2 years.
Is your current role one that you always intended on your career path?
Not quite! As a teenager, I wanted to become a cryptographer and spent most of my time in undergrad and grad school on research in privacy and security. I eventually realized I liked working with data and was pretty good at dealing with databases, which pushed me into my current role.
What is your advice for someone entering the field?
Become acquainted with the entire data journey and try to pick up one tool or language for each phase. For example, you may choose to use Python to fetch and transform data from an API and load it Continue reading
We’re continuing our celebration of Women in Tech Week with another profile of one of many of the amazing women who make a tremendous impact at Docker – this week, and every week – helping developers build modern apps.
What is your job?
Senior Director of Product Marketing.
How long have you worked at Docker?
2 ½ years.
Is your current role one that you always intended on your career path?
Nope! I studied engineering and started in a technical role at a semiconductor company. I realized there that I really enjoyed helping others understand how technology works, and that led me to Product Marketing! What I love about the role is that it’s extremely cross-functional. You work closely with engineering, product management, sales and marketing, and it requires both left brain and right brain skills. My technical background helps me to understand our products, while my creative side helps me communicate our products’ core value propositions.
What is your advice for someone entering the field?
It’s always good to be self-aware. Know your strengths and weaknesses, and look Continue reading
Red Hat Ansible Tower offers value by allowing automation to scale in a controlled manner: users can run playbooks for only the processes and targets they need access to, and no further.
Not only does Ansible Tower provide automation at scale, but it also integrates with several external platforms. In many cases, this means that users can use the interface they are accustomed to while launching Ansible Tower templates in the background.
One of the most ubiquitous self-service platforms in use today is ServiceNow, and many of the enterprise conversations with Ansible Tower customers focus on ServiceNow integration. With this in mind, this blog entry walks through the steps to set up your ServiceNow instance to make outbound RESTful API calls into Ansible Tower, using OAuth2 authentication.
This is part 3 in a multi-part series; feel free to refer to part 1 and part 2 for more context.
The following software versions are used:
If you sign up for a ServiceNow Developer account, ServiceNow offers a free instance that can be used for replicating and testing this functionality. Your ServiceNow instance needs to be able Continue reading
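To make the OAuth2 handshake concrete, here is a small Python sketch of the token request a ServiceNow instance would send to Ansible Tower. The `/api/o/token/` endpoint and the password-grant form fields follow Tower’s documented OAuth2 token API; the hostname, client ID/secret, and user credentials are all placeholders, and this is one of several grant types Tower supports:

```python
# Sketch of the OAuth2 token request made against Ansible Tower's token
# endpoint. All hostnames and credentials below are placeholder values.

def build_token_request(tower_host, client_id, client_secret, username, password):
    """Assemble the URL, auth, and form body for a POST to Tower's token endpoint."""
    return {
        "url": f"https://{tower_host}/api/o/token/",
        # The OAuth2 application's client ID and secret go in HTTP Basic auth
        "auth": (client_id, client_secret),
        # Password grant: exchange user credentials for an access token
        "data": {
            "grant_type": "password",
            "username": username,
            "password": password,
            "scope": "write",
        },
    }

req = build_token_request("tower.example.com", "my-client-id",
                          "my-client-secret", "snow-user", "snow-pass")
print(req["url"])
```

ServiceNow’s outbound REST messages would then POST this form body and use the returned access token as a Bearer token when launching Tower job templates.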
Docker Enterprise was built to be secure by default. When you build a secure-by-default platform, you need to consider security validation and governmental use. Docker Enterprise has become the first container platform to complete the Security Technical Implementation Guide (STIG) certification process. Thanks to the Defense Information Systems Agency (DISA) for its support and sponsorship. Being the first container platform to complete the STIG process through DISA means a great deal to the entire Docker team.
The STIG took months of work around writing and validating the controls. What does it really mean? Having a STIG allows government agencies to ensure they are running Docker Enterprise in the most secure manner. The STIG also provides validation for the private sector. One of the great concepts with any compliance framework, like STIGs, is the idea of inherited controls. Adopting a STIG recommendation helps improve an organization’s security posture. Here is a great blurb from DISA’s site:
The Security Technical Implementation Guides (STIGs) are the configuration standards for DOD IA and IA-enabled devices/systems. Since 1998, DISA has played a critical role enhancing the security posture of DoD’s security systems by providing the Security Technical Implementation Guides (STIGs). The STIGs Continue reading
It’s Women in Tech Week, and we want to take the opportunity to celebrate some of the amazing women who make a tremendous impact at Docker – this week, and every week – helping developers build modern apps.
What is your job?
Software Engineer. I build systems, write and review code, test and analyze software. I’ve always worked on infrastructure software, both at and before Docker. I participate in the Moby and Kubernetes OSS projects. My current work is on persistent storage for Kubernetes workloads and integrating it with Docker’s Universal Control Plane. I also enjoy speaking at technical conferences and writing blogs about my work.
How long have you worked at Docker?
4 years and 1 month!
Is your current role one that you always intended on your career path?
Yes, I’ve always been on this path. In my high school, we had the option to take biological sciences or Computer Science (CS). I chose CS, and since then that has been my path. I earned both my bachelor’s and master’s degrees in CS.
Whether you’re a security professional looking at automation for the first time, or an ITops veteran tasked to support corporate secops teams, the following blog provides an overview of how Red Hat Ansible Automation can support your security automation program throughout all the different stages of its evolution.
Automation is becoming more and more pervasive across the entire IT stack.
Initially introduced to support ITOps, automation has been a well-established practice for years.
Today, thanks to modern automation platforms like Red Hat Ansible Automation, IT organizations are more capable of coping with the unprecedented scale and complexity of modern infrastructures, and finally have access to a level of flexibility that allows for extending automation practices to entirely new areas.
As an example, Ansible Network Automation enabled network operators to be the next group approaching automation in a structured fashion, to help simplify both maintenance and operations of their ever-growing, multi-vendor, brownfield infrastructures.
The security space started looking at automation in relatively recent times to support the already overwhelmed security teams against modern cyberattacks that are reaching an unparalleled level of speed and intricacy.
In fact, if we factor in the aforementioned scale Continue reading
Last week, we covered some of the questions about container infrastructure from our recent webinar “Demystifying VMs, Containers, and Kubernetes in the Hybrid Cloud Era.” This week, we’ll tackle the questions about Kubernetes, Docker and the software supply chain. One common misperception we heard in the webinar is that Docker and Kubernetes are competitors. In fact, Kubernetes is better with Docker. And Docker is better with Kubernetes.
We hear questions along this line all the time. Here are some quick answers:
With the rise of the Internet of Things (IoT) combined with the global rollout of 5G (fifth-generation wireless network technology), a perfect storm is brewing that will bring higher speeds, extremely low latency, and greater network capacity, delivering on the hype of IoT connectivity.
And industry experts are bullish on the future. For example, Arpit Joshipura, The Linux Foundation’s general manager of networking, predicts edge computing will overtake cloud computing by 2025. According to Santhosh Rao, senior research director at Gartner, around 10% of enterprise-generated data is created and processed outside a traditional centralized data center or cloud today. He predicts this will reach 75% by 2025.
Back in April 2019, Docker and Arm announced a strategic partnership enabling cloud developers to build applications for cloud, edge, and IoT environments seamlessly on the Arm® architecture. We carried the momentum with Arm from that announcement into DockerCon 2019 in our joint Techtalk, where we showcased cloud native development on Arm and how multi-architecture containers with Docker can be used to accelerate Arm development.
As part of our strategic partnership, Docker will Continue reading
When you get on a cruise ship or go to a major resort, there’s a lot happening behind the scenes. Thousands of people work to create amazing, memorable experiences, often out of sight. And increasingly, technology helps them make those experiences even better.
We sat down recently with Todd Heard, VP of Infrastructure at Carnival Corporation, to find out how technology like Docker helps them create memorable experiences for their guests. Todd and some of his colleagues worked at Disney in the past, so they know a thing or two about memorable experiences.
Here’s what he told us. You can also catch the highlights in this 2 minute video:
Our goal at Carnival Corporation is to provide a very personalized, seamless, and customized experience for each and every guest on their vacation. Our people and technology investments are what make that possible. But we also need to keep up with changes in the industry and people’s lifestyles.
One of the ironies in the travel industry is that everybody talks about technology, but the technology should be invisible Continue reading
We had a great turnout to our recent webinar “Demystifying VMs, Containers, and Kubernetes in the Hybrid Cloud Era” and tons of questions came in via the chat — so many that we weren’t able to answer all of them in real-time or in the Q&A at the end. We’ll cover the answers to the top questions in two posts (yes, there were a lot of questions!).
First up, we’ll take a look at IT infrastructure and operations topics, including whether you should deploy containers in VMs or make the leap to containers on bare metal.
Among the top questions was whether users should just run a container platform on bare metal or run it on top of their virtual infrastructure — not surprising, given the webinar topic.
This is the first in a series of guest blog posts by Docker Captain Ajeet Raina diving in to how to run Kubernetes on Docker Enterprise. You can follow Ajeet on Twitter @ajeetsraina and read his blog at http://www.collabnix.com.
There are now a number of options for running certified Kubernetes in the cloud. But let’s say you’re looking to adopt and operationalize Kubernetes for production workloads on-premises. What then? For an on-premises certified Kubernetes distribution, you need an enterprise container platform that allows you to leverage your existing team and processes.
At DockerCon 2019, Docker announced the Docker Kubernetes Service (DKS). It is a certified Kubernetes distribution that is included with Docker Enterprise 3.0 and is designed to solve this fundamental challenge.
In this blog series, I’ll explain Kubernetes support and capabilities under Docker Enterprise 3.0, covering these topics:
In this blog series on Kubernetes, we’ve already covered:
In this series’ final installment, I’ll explain how to provision storage to a Kubernetes application.
The final component we want to think about when we build applications for Kubernetes is storage. Remember, a container’s filesystem is transient, and any data kept there is at risk of being deleted along with your container if that container ever exits or is rescheduled. If we want to guarantee that data lives beyond the short lifecycle of a container, we must write it out to external storage.
Any container that generates or collects valuable data should be pushing that data out to stable external storage. In our web app example, the database tier should be pushing its on-disk contents out to external storage so they can survive a catastrophic failure of our database pods.
Similarly, any container that requires the provisioning of a lot of data should be getting Continue reading
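As a sketch of what requesting that external storage looks like in practice (the names, image, and size below are illustrative, not taken from the series), a workload claims durable storage through a PersistentVolumeClaim and mounts it into the container:

```yaml
# Illustrative PersistentVolumeClaim plus a pod that mounts it;
# all names, the image, and the requested size are example values.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
    - name: mysql
      image: mysql:5.7
      volumeMounts:
        - name: data
          mountPath: /var/lib/mysql   # data here survives pod rescheduling
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: db-data
```

Because the claim is a separate object from the pod, the data outlives any individual container or pod that mounts it.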
Welcome to Technology Short Take #119! As usual, I’ve collected some articles and links from around the Internet pertaining to various data center- and cloud-related topics. This installment in the Tech Short Takes series is much shorter than usual, but hopefully I’ve managed to find something that proves to be helpful or informative! Now, on to the content!
One linked article discusses bypassing the iptables layer involved in most Kubernetes implementations to load balance traffic directly to Pods in the cluster. Unfortunately, this appears to be GKE-specific.
Nothing this time around. I’ll stay tuned for content to include next time!