This is a guest post by Javier Ramírez, Docker Captain and IT Architect at Hopla Software. You can follow him on Twitter @frjaraur or on GitHub.
Docker began including Kubernetes with Docker Enterprise 2.0 last year. The recent 3.0 release includes CNCF Certified Kubernetes 1.14, which has many additional security features. In this blog post, I will review Pod Security Policies and Admission Controllers.
Pod Security Policies are rules created in Kubernetes to control security in pods. A pod will only be scheduled on a Kubernetes cluster if it passes these rules. These rules are defined in the “PodSecurityPolicy” resource and allow us to manage host namespace and filesystem usage, as well as privileged pod features. We can use the PodSecurityPolicy resource to make fine-grained security configurations, including:
The Docker Universal Control Plane (UCP) 3.2 provides two Pod Security Policies by default – which is helpful Continue reading
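To make the rules concrete, here is a sketch of a restrictive PodSecurityPolicy. The policy name and settings are illustrative, not UCP's built-in defaults; a pod is only admitted if it satisfies every rule in the policy:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example
spec:
  privileged: false                 # no privileged containers
  hostNetwork: false                # no access to the host network namespace
  hostPID: false
  hostIPC: false
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot          # containers must not run as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                          # restrict host filesystem usage
    - configMap
    - secret
    - emptyDir
    - persistentVolumeClaim
```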
We recognize the central role that Docker Hub plays in modern application development and are working on many enhancements around security and content. In this blog post we will share how we are implementing two-factor authentication (2FA).
Two-factor authentication increases the security of your accounts by requiring two different forms of validation. This helps ensure that you are the rightful account owner. For Docker Hub, that means providing something you know (your username and a strong password) and something you have in your possession. Since Docker Hub is used by millions of developers and organizations for storing and sharing content – sometimes company intellectual property – we chose to use one of the more secure models for 2FA: software token (TOTP) authentication.
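To make the mechanics of a software token concrete: both the server and your authenticator app hold a shared secret and independently derive a short-lived code from it and the current time, per RFC 6238 (TOTP). The sketch below is an illustrative standard-library implementation of that algorithm, not Docker Hub's actual code:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, period=30):
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    now = time.time() if for_time is None else for_time
    counter = int(now // period)                      # time step since the Unix epoch
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both sides compute the same function, a code intercepted now is useless after the 30-second window passes, which is what makes TOTP stronger than a static or SMS-delivered code.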
TOTP authentication is more secure than SMS-based 2FA, which has many attack vectors and vulnerabilities. TOTP requires a little more upfront setup, but once enabled, it is just as simple as (if not simpler than) text message-based verification. It requires the use of an authenticator application, of which there are many available. These can be apps downloaded to your mobile device (e.g. Google Authenticator or Microsoft Authenticator) or it can Continue reading
From October through December, Docker User Groups all over the world are hosting a workshop for their local community! Join us for an Introduction to Docker for Developers, a hands-on workshop we run on Play with Docker.
This Docker 101 workshop for developers is designed to get you up and running with containers. You’ll learn how to build images, run containers, use volumes to persist data and mount in source code, and define your application using Docker Compose. We’ll even mix in a few advanced topics, such as networking and image-building best practices. There is definitely something for everyone!
Visit your local User Group page to see if there is a workshop scheduled in your area. Don’t see an event listed? Email the team by scrolling to the bottom of the chapter page and clicking the contact us button. Let them know you want to join in on the workshop fun!
Don’t see a user group in your area? Never fear, join the virtual meetup group for monthly meetups on all things Docker.
The #LearnDocker for #developers workshop series is coming to Continue reading
We’re continuing our celebration of Women in Tech Week into this week with another profile of one of many of the amazing women who make a tremendous impact at Docker – this week, and every week – helping developers build modern apps.
What is your job?
Product Designer.
How long have you worked at Docker?
11 months.
Is your current role one that you always intended on your career path?
The designer part, yes. But the software product part, not necessarily. My background is in architecture and industrial design, and I imagined I would do physical product design. But I enjoy UX; the speed at which you can iterate is great for design.
What is your advice for someone entering the field?
To embrace discomfort. I don’t mean that in a bad way. A mentor once told me that the only time your brain is actually growing is when you’re uncomfortable. It has something to do with the dendrites being forced to grow because you’re forced to learn new things.
Kubernetes is a powerful container orchestrator and has been establishing itself as IT architects’ container orchestrator of choice. But Kubernetes’ power comes at a price; jumping into the cockpit of a state-of-the-art jet puts a lot of power under you, but knowing how to actually fly it is not so simple. That complexity can overwhelm a lot of people approaching the system for the first time.
I wrote a blog series recently where I walk you through the basics of architecting an application for Kubernetes, with a tactical focus on the actual Kubernetes objects you’re going to need. The posts go into quite a bit of detail, so I’ve provided an abbreviated version here, with links to the original posts.
With a machine as powerful as Kubernetes, I like to identify the absolute minimum set of things we’ll need to understand in order to be successful; there’ll be time to learn about all the other bells and whistles another day, after we master the core ideas. No matter where your application runs, in Kubernetes or anywhere else, there are four concerns we are going to have to address: running our application’s processes, networking them together reliably, managing environment-specific configuration, and provisioning persistent storage.
We’re continuing our celebration of Women in Tech Week with another profile of one of many of the amazing women who make a tremendous impact at Docker – this week, and every week – helping developers build modern apps.
What is your job?
SEG Engineer (Support Escalation Engineer).
How long have you worked at Docker?
4 months.
Is your current role one that you always intended on your career path?
The SEG role is a combination that probably doesn’t exist as a general rule. I’ve always liked to support other engineers and work cross-functionally, as well as unravel hard problems, so it’s a great fit for me.
What is your advice for someone entering the field?
The only thing constant about a career in tech is change. When in doubt, keep moving. By that, I mean keep learning, keep weighing new ideas, keep trying new things.
In my first month at Docker, we hosted a summer cohort of students from Historical Black Colleges who were participating in a summer internship. As part of their visit Continue reading
We’re continuing our celebration of Women in Tech Week with another profile of one of many of the amazing women who make a tremendous impact at Docker – this week, and every week – helping developers build modern apps.
What is your job?
I work as a data engineer – building and maintaining data pipelines and delivery tools for the entire company.
How long have you worked at Docker?
2 years.
Is your current role one that you always intended on your career path?
Not quite! As a teenager, I wanted to become a cryptographer and spent most of my time in undergrad and grad school on research in privacy and security. I eventually realized I liked working with data and was pretty good at dealing with databases, which pushed me into my current role.
What is your advice for someone entering the field?
Become acquainted with the entire data journey and try to pick up one tool or language for each phase. For example, you may choose to use Python to fetch and transform data from an API and load it Continue reading
We’re continuing our celebration of Women in Tech Week with another profile of one of many of the amazing women who make a tremendous impact at Docker – this week, and every week – helping developers build modern apps.
What is your job?
Senior Director of Product Marketing.
How long have you worked at Docker?
2 ½ years.
Is your current role one that you always intended on your career path?
Nope! I studied engineering and started in a technical role at a semiconductor company. I realized there that I really enjoyed helping others understand how technology works, and that led me to Product Marketing! What I love about the role is that it’s extremely cross-functional. You work closely with engineering, product management, sales and marketing, and it requires both left brain and right brain skills. My technical background helps me to understand our products, while my creative side helps me communicate our products’ core value propositions.
What is your advice for someone entering the field?
It’s always good to be self-aware. Know your strengths and weaknesses, and look Continue reading
Docker Enterprise was built to be secure by default. When you build a secure-by-default platform, you need to consider security validation and governmental use. Docker Enterprise has become the first container platform to complete the Security Technical Implementation Guide (STIG) certification process. Thanks to the Defense Information Systems Agency (DISA) for its support and sponsorship. Being the first container platform to complete the STIG process through DISA means a great deal to the entire Docker team.
The STIG took months of work writing and validating the controls. What does it really mean? Having a STIG allows government agencies to ensure they are running Docker Enterprise in the most secure manner. The STIG also provides validation for the private sector. One of the great concepts in any compliance framework, like the STIGs, is the idea of inherited controls. Adopting a STIG recommendation helps improve an organization’s security posture. Here is a great blurb from DISA’s site:
The Security Technical Implementation Guides (STIGs) are the configuration standards for DOD IA and IA-enabled devices/systems. Since 1998, DISA has played a critical role enhancing the security posture of DoD’s security systems by providing the Security Technical Implementation Guides (STIGs). The STIGs Continue reading
It’s Women in Tech Week, and we want to take the opportunity to celebrate some of the amazing women who make a tremendous impact at Docker – this week, and every week – helping developers build modern apps.
What is your job?
Software Engineer. I build systems, write and review code, test and analyze software. I’ve always worked on infrastructure software, both at and before Docker. I participate in the Moby and Kubernetes OSS projects. My current work is on persistent storage for Kubernetes workloads and integrating it with Docker’s Universal Control Plane. I also enjoy speaking at technical conferences and writing blogs about my work.
How long have you worked at Docker?
4 years and 1 month!
Is your current role one that you always intended on your career path?
Yes, I’ve always been on this path. In my high school, we had the option to take biological sciences or computer science (CS). I chose CS, and that has been my path ever since. I earned both my bachelor’s and master’s degrees in CS.
Last week, we covered some of the questions about container infrastructure from our recent webinar “Demystifying VMs, Containers, and Kubernetes in the Hybrid Cloud Era.” This week, we’ll tackle the questions about Kubernetes, Docker, and the software supply chain. One common misperception we heard in the webinar is that Docker and Kubernetes are competitors. In fact, Kubernetes is better with Docker. And Docker is better with Kubernetes.
We hear questions along this line all the time. Here are some quick answers:
With the rise of the Internet of Things (IoT), combined with the global rollout of 5G (fifth-generation wireless network technology), a perfect storm is brewing that will bring higher speeds, extremely low latency, and greater network capacity, delivering on the hype of IoT connectivity.
And industry experts are bullish on the future. For example, Arpit Joshipura, The Linux Foundation’s general manager of networking, predicts edge computing will overtake cloud computing by 2025. According to Santhosh Rao, senior research director at Gartner, around 10% of enterprise-generated data is created and processed outside a traditional centralized data center or cloud today. He predicts this will reach 75% by 2025.
Back in April 2019, Docker and Arm announced a strategic partnership enabling cloud developers to build applications for cloud, edge, and IoT environments seamlessly on the Arm® architecture. We carried the momentum with Arm from that announcement into DockerCon 2019 in our joint Techtalk, where we showcased cloud native development on Arm and how multi-architecture containers with Docker can be used to accelerate Arm development.
As part of our strategic partnership, Docker will Continue reading
When you get on a cruise ship or go to a major resort, there’s a lot happening behind the scenes. Thousands of people work to create amazing, memorable experiences, often out of sight. And increasingly, technology helps them make those experiences even better.
We sat down recently with Todd Heard, VP of Infrastructure at Carnival Corporation, to find out how technology like Docker helps them create memorable experiences for their guests. Todd and some of his colleagues worked at Disney in the past, so they know a thing or two about memorable experiences.
Here’s what he told us. You can also catch the highlights in this 2 minute video:
Our goal at Carnival Corporation is to provide a very personalized, seamless, and customized experience for each and every guest on their vacation. Our people and technology investments are what make that possible. But we also need to keep up with changes in the industry and people’s lifestyles.
One of the ironies in the travel industry is that everybody talks about technology, but the technology should be invisible Continue reading
We had a great turnout to our recent webinar “Demystifying VMs, Containers, and Kubernetes in the Hybrid Cloud Era” and tons of questions came in via the chat — so many that we weren’t able to answer all of them in real-time or in the Q&A at the end. We’ll cover the answers to the top questions in two posts (yes, there were a lot of questions!).
First up, we’ll take a look at IT infrastructure and operations topics, including whether you should deploy containers in VMs or make the leap to containers on bare metal.
Among the top questions was whether users should just run a container platform on bare metal or run it on top of their virtual infrastructure — not surprising, given the webinar topic.
This is the first in a series of guest blog posts by Docker Captain Ajeet Raina diving in to how to run Kubernetes on Docker Enterprise. You can follow Ajeet on Twitter @ajeetsraina and read his blog at http://www.collabnix.com.
There are now a number of options for running certified Kubernetes in the cloud. But let’s say you’re looking to adopt and operationalize Kubernetes for production workloads on-premises. What then? For an on-premises certified Kubernetes distribution, you need an enterprise container platform that allows you to leverage your existing team and processes.
At DockerCon 2019, Docker announced the Docker Kubernetes Service (DKS). It is a certified Kubernetes distribution that is included with Docker Enterprise 3.0 and is designed to solve this fundamental challenge.
In this blog series, I’ll explain Kubernetes support and capabilities under Docker Enterprise 3.0, covering these topics:
In this blog series on Kubernetes, we’ve already covered:
In this series’ final installment, I’ll explain how to provision storage to a Kubernetes application.
The final component we want to think about when we build applications for Kubernetes is storage. Remember, a container’s filesystem is transient, and any data kept there is at risk of being deleted along with your container if that container ever exits or is rescheduled. If we want to guarantee that data lives beyond the short lifecycle of a container, we must write it out to external storage.
Any container that generates or collects valuable data should be pushing that data out to stable external storage. In our web app example, the database tier should be pushing its on-disk contents out to external storage so they can survive a catastrophic failure of our database pods.
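In Kubernetes, that external storage is requested through a PersistentVolumeClaim. As a sketch (the name and size below are illustrative), the database tier from the web app example might claim its volume like this:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce        # mounted read-write by a single node at a time
  resources:
    requests:
      storage: 10Gi        # capacity request; the size here is illustrative
```

The database pod then mounts the claim as a volume, so its on-disk contents live in the backing storage rather than in the container's transient filesystem, and survive rescheduling.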
Similarly, any container that requires the provisioning of a lot of data should be getting Continue reading
Lisa Dethmers-Pope and Amn Rahman at Docker also contributed to this blog post.
As a Technical Recruiter at Docker, I am excited to be a part of Grace Hopper Celebration. It is a marvelous opportunity to speak with many talented women in tech and to continue pursuing one of Docker’s most valued ambitions: further diversifying our team. The Docker team will be on the show floor at the Grace Hopper Celebration, the world’s largest gathering of women technologists the week of October 1st in Orlando, Florida.
Our Vice President of Human Resources and our Senior Director of Product Management, along with representatives from our Talent Acquisition and Engineering teams, will be there to connect with attendees. We will be showing how to easily build, run, and share an application using the Docker platform, and talking about what it’s like to work in tech today.
While we’ve made strides in diversity within tech, the 2019 Stack Overflow Developer Survey shows we have work to do. According to the survey, only 7.5 percent of professional developers are women worldwide (it’s 11 percent of all developers in the U. Continue reading
I reviewed the basic setup for building applications in Kubernetes in part 1 of this blog series, and discussed processes as pods and controllers in part 2. In part 3, I explained how to configure networking services in Kubernetes to allow pods to communicate reliably with each other. In this installment, I’ll explain how to identify and manage the environment-specific configurations expected by your application to ensure its portability between environments.
One of the core design principles of any containerized app must be portability. We absolutely do not want to reengineer our containers, or even the controllers that manage them, for every environment. One very common reason an application works in one place but not another is a problem with the environment-specific configuration expected by that app.
A well-designed application should treat configuration like an independent object, separate from the containers themselves, that’s provisioned to them at runtime. That way, when you move your app from one environment to another, you don’t need to rewrite any of your containers or controllers; you simply provide a configuration object appropriate to this new environment, leaving everything else untouched.
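Kubernetes models that independent configuration object as a ConfigMap (or a Secret, for sensitive values). A minimal sketch, with hypothetical keys:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_HOST: db.staging.internal   # value differs per environment
  LOG_LEVEL: debug
```

A pod consumes these values as environment variables or a mounted file, so moving between environments means swapping only this one object, leaving containers and controllers untouched.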
When we design applications, we need to identify what Continue reading
I reviewed the basic setup for building applications in Kubernetes in part 1 of this blog series, and discussed processes as pods and controllers in part 2. In this post, I’ll explain how to configure networking services in Kubernetes to allow pods to communicate reliably with each other.
At this point, we’ve deployed our workloads as pods managed by controllers, but there’s no reliable, practical way for pods to communicate with each other, nor is there any way for us to visit any network-facing pod from outside the cluster. The Kubernetes networking model says that any pod can reach any other pod at the target pod’s IP by default, but discovering those IPs and maintaining that list by hand, while pods are rescheduled and assigned entirely new IPs, would be tedious, fragile work.
Instead, we need to think about Kubernetes services when we’re ready to start building the networking part of our application. Kubernetes services provide reliable, simple networking endpoints for routing traffic to pods via the fixed metadata defined in the controller that created them, rather than via unreliable pod IPs. For simple applications, Continue reading
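A service of the kind described above might be sketched like this (the `app: web` label and port numbers are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web            # routes to any pod carrying this label
  ports:
    - port: 80          # stable port exposed by the service
      targetPort: 8080  # port the containers actually listen on
```

Pods matching the selector come and go, but the service's cluster IP and DNS name (`web`) stay fixed, so other components address the service rather than individual pod IPs.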
I reviewed the basic setup for building applications in Kubernetes in part 1 of this blog series. In this post, I’ll explain how to use pods and controllers to create scalable processes for managing your applications.
The heart of any application is its running processes, and in Kubernetes we fundamentally create processes as pods. Pods are a bit fancier than individual containers, in that they can schedule whole groups of containers, co-located on a single host, which brings us to our first decision point:
The original idea behind a pod was to emulate a logical host – not unlike a VM. The containers in a pod will always be scheduled on the same Kubernetes node, and they’ll be able to communicate with each other via localhost, making pods a good representation of clusters of processes that need to work together closely.
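A pod grouping two closely cooperating containers might be sketched like this (image names are hypothetical; in practice you would usually create pods via a controller such as a Deployment rather than directly):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: app                 # main application process
      image: myorg/web:1.0
    - name: log-shipper         # sidecar; shares the pod's network, reachable via localhost
      image: myorg/shipper:1.0
```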
But there’s an important consideration: it’s not possible to scale individual containers in a pod separately from each other. If you need to scale Continue reading