No Postfix installation is complete without OpenDKIM and OpenDMARC.
While some people go for all-in-one solutions that do all of this for them with a single command or two (and then cry to their gods as soon as the system fails, since they have no idea how to debug it), the rest of us would rather be our own boss and set things up manually and carefully, based on our needs, so we can troubleshoot when things go wrong.
This, however, is easier said than done. In this post, rather than trying to explain what they are and how they can be set up (which can be found everywhere on the web), I am mainly going to address the issues that you might encounter when running Postfix and these milters on the same Ubuntu system.
OpenDKIM and OpenDMARC are designed to be used as milters. They are two different programs for two different - and yet related - tasks.
They show a lot of similarities in their configuration files, and both suffer from the same limitations when running alongside a chrooted Postfix instance.
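The usual workaround for the chroot limitation is to have the milters listen on TCP sockets instead of UNIX domain sockets, since a chrooted Postfix cannot reach socket files that live outside its jail. Here is a minimal sketch, assuming OpenDKIM and OpenDMARC listen on their customary local ports (8891 and 8893 respectively; adjust these to match the Socket lines in your own opendkim.conf and opendmarc.conf):

# Point Postfix at the milters over TCP so the chroot cannot get in the way
sudo postconf -e 'smtpd_milters = inet:localhost:8891, inet:localhost:8893'
sudo postconf -e 'non_smtpd_milters = $smtpd_milters'
# Deliver mail anyway if a milter is down, rather than rejecting it
sudo postconf -e 'milter_default_action = accept'
sudo systemctl restart postfix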
While in a recent enough version of Postfix, daemons are Continue reading
With KubeCon EU happening in Copenhagen, we looked back at the most popular posts with our readers on Docker and Kubernetes. For those of you that have yet to try Docker EE 2.0, this blog highlights how in Docker for Desktops you can use Docker compose to directly deploy an application onto a Kubernetes cluster.
If you’re running an edge version of Docker on your desktop (Docker for Mac or Docker for Windows Desktop), you can now stand up a single-node Kubernetes cluster with the click of a button. While I’m not a developer, I think this is great news for the millions of developers who have already been using Docker on their MacBook or Windows laptop, because they now have a fully compliant Kubernetes cluster at their fingertips without installing any other tools.
Developers using Docker to build containerized applications often build Docker Compose files to deploy them. With the integration of Kubernetes into the Docker product line, some developers may want to leverage their existing Compose files but deploy these applications in Kubernetes.
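As a rough sketch of what that looks like in practice (assuming a docker-compose.yml in the current directory, Kubernetes enabled in Docker for Mac or Docker for Windows, and a hypothetical stack name; exact flag support varies by Docker version):

# Deploy the Compose file as a stack onto the local Kubernetes cluster
docker stack deploy --orchestrator kubernetes -c docker-compose.yml mystack
# Confirm that the services were translated into running pods
kubectl get pods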
With Docker on the desktop (as well as Docker Enterprise Edition) you can use Docker compose to directly deploy an application Continue reading
With less than six weeks until DockerCon 2018, we can barely contain our excitement! From favorite tips and tricks for using Docker in production to leveraging Docker for machine learning, Docker Captains come together at DockerCon to share their knowledge and collaborate with the broader community. We’ve asked Docker Captains to share what they are most looking forward to at DockerCon. Here are some of their responses.
“I’m looking forward to meeting the many other Docker enthusiasts and champions and listening to other cool things that Docker makes possible” – Kinnary Jangla, Pinterest
“In 2015, I attended DockerCon for the first time. I was sitting in a chair and listening to the amazing stories and ideas presented by speakers at the conference, which set off a chain of events that led to today. I feel privileged, and am really looking forward to being on stage and sharing our transformational journey to inspire the people who would sit in that chair. I am also looking forward to hearing the keynotes and the exciting new announcements that I am sure are being lined up for the big event.” – Alexandre Iankoulski, Baker Hughes
“Learning about the Continue reading
Highly-regulated industries like financial services, insurance and government have their own set of complex and challenging regulatory IT requirements that must be constantly maintained. For this reason, the introduction of new technology can sometimes be difficult. Docker Enterprise Edition provides these types of organizations with both a secure platform on which containers are the foundation for building compliant applications and a workflow for operational governance at scale.
The problem remains that even with the technology innovation of containers, cloud and other new tools, the area of IT compliance has remained relatively unchanged with security standards that lag far behind, creating mismatches of traditional controls to modern systems. Organizations are still dependent on the same mundane, paperwork-heavy audit and reporting processes of previous decades. The time and cost to build a PCI, FISMA or HIPAA compliant system is no small feat, even for large enterprises, due to the resources required to develop and maintain the documentation and artifacts that must be continuously audited by a third party.
To address these requirements, Docker has collaborated with the National Institute of Standards and Technology (NIST), and today, we are excited to announce that Docker is fully embracing Continue reading
With KubeCon EU happening in Copenhagen, we looked back at the most popular posts with our readers on Docker and Kubernetes. For those of you that have yet to try Docker EE 2.0, this blog highlights how Docker EE 2.0 provides a secure supply chain for Kubernetes.
The GA release of the Docker Enterprise Edition (Docker EE) container platform last month integrates Kubernetes orchestration, running alongside Swarm, to provide a single container platform that supports both legacy and new applications running on-premises or in the cloud. For organizations that are exploring Kubernetes or deploying it in production, Docker EE offers integrated security for the entire lifecycle of a containerized application, providing an additional layer of security before the workload is deployed by Kubernetes and continuing to secure the application while it is running.
Mike Coleman previously discussed access controls for Kubernetes. This week we’ll begin discussing how Docker EE secures the Kubernetes supply chain.
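As a small, hedged preview of one link in that chain - image signing with Docker Content Trust, shown here against a hypothetical Docker Trusted Registry address:

# Enable Docker Content Trust so that pushes are signed and pulls are verified
export DOCKER_CONTENT_TRUST=1
docker tag myapp:1.0 dtr.example.com/engineering/myapp:1.0
docker push dtr.example.com/engineering/myapp:1.0   # signed with your Notary keys on push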
When you purchase something from a retail store, there is an entire supply chain that gets the product from raw materials to the manufacturer to you. Similarly, there is a software supply chain that takes an application from Continue reading
The countdown is on! It’s just a few short days until Red Hat Summit. I’m Kaete Piccirilli and I do all things Ansible Marketing. While it’s not my first year at Red Hat, it’s the first Summit I’ll be attending, and I couldn’t be more excited to finally be in the mix of our customers, partners and open source communities.
Red Hat Summit has an action-packed few days planned, and I have picked a few Ansible Automation sessions that you won’t want to miss.
Managing 15,000 network devices with Ansible
Ansible allows network management across virtually any device platform. Any network device can be managed via SSH or an API. We took this cutting-edge network automation to scale with a customer’s global network infrastructure, giving them the ability to manage nearly all of their network devices at one time.
In this session, we'll discuss the architecture and strategies involved in network automation.
Manage Windows like Linux with Ansible
Few questions strike more fear into the heart of a Linux admin than, "Hey, can you manage these Windows servers?"
In this session, we'll show how Ansible does simple, secure, and agentless Windows management with the exact Continue reading
GitKraken is a full-featured graphical Git client with support for multiple platforms. Given that I’m trying to live a multi-platform life, it made sense for me to give this a try and see whether it is worth making part of my (evolving and updated) multi-platform toolbelt. Along the way, though, I found that GitKraken doesn’t provide an RPM package for Fedora, and that the installation isn’t as straightforward as one might hope. I’m documenting the procedure here in the hope of helping others.
First, download the latest release of GitKraken. You can do this via the terminal with this command:
curl -LO https://release.gitkraken.com/linux/gitkraken-amd64.tar.gz
Extract the contents of the GitKraken download into its own directory under /opt using this command (you can use a different directory if you like, but I prefer to install third-party applications like this under /opt):
sudo tar -C /opt -xvf gitkraken-amd64.tar.gz
This will extract everything into /opt/gitkraken.
Next, you’ll create a symbolic link to an existing library to fix an error with GitKraken when running on Fedora (this is documented here):
sudo ln -s /usr/lib64/libcurl.so.4 /usr/lib64/libcurl-gnutls.so.4
Once this is done, you could just run Continue reading
In early 2017 I posted about my (evolving) multi-platform toolbelt, describing some of the applications, standards, and services that I use across my Linux and macOS systems. In this post, I’d like to provide an updated review of that toolbelt.
Visual Studio Code: I switched from Sublime Text to Visual Studio Code during my latest migration to Fedora 27 on a Lenovo ThinkPad X1 Carbon. Since I’m also planning on expanding my coding skills with Golang, I felt that Visual Studio Code would be a better choice than Sublime Text. I’m still generating the majority of my content in Markdown (MultiMarkdown is the flavor that I generally use), and I’ve found Visual Studio Code to be pretty decent as a Markdown editor.
IMAP/SMTP: I’ve standardized on using IMAP/SMTP for all my e-mail accounts, which gives me quite a bit of flexibility in clients and OSes. As far as clients go, I’ve pretty much standardized on Thunderbird (which supports OS X, Linux, and Windows).
Unison: This cross-platform file synchronization tool helps keep my files in sync across my macOS and Linux systems.
Dropbox: Dropbox gives me access to non-confidential files from any of my devices or platforms (macOS, iOS, and Linux).
This year’s summit reflected what is top of mind for government organizations, namely IT modernization and what that means for infrastructure, applications, data and the workforce. As mentioned in the keynote address, the line between government IT and private sector IT is blurring now more than ever. From the priorities outlined in the White House IT Modernization Report to the discussions at the recent IT modernization summit, the themes focus on results of better customer service and better stewardship of tax dollars.
Better customer service translates into improving existing services, delivering new services and increasing transparency. To that end, government organizations are taking cues from industry to see how the latest technology and best practices can be applied and adapted to meet the added requirements of government. The agenda featured speakers from government agencies, higher ed, system integrators and industry partners providing practical insight from their own transformation initiatives and deep dives into the modern technology stack.
Watch these featured videos from the event:
Welcome to Technology Short Take #98! Now that I’m starting to get settled into my new role at Heptio, I’ve managed to find some time to pull together another collection of links and articles pertaining to various data center technologies. Feedback is always welcome!
According to a recent Stack Overflow report, the Docker platform is in the top 10 skills to learn if you want to advance your career in tech. So where do I go to start learning Docker, you may ask? Well, the good news is that we now have free workshops and hands-on labs included as part of your DockerCon 2018 ticket.
The conference workshops will focus on a range of subjects, from migrating .NET or Java apps to the Docker platform to deep dives on container monitoring and logging, networking, storage and security. Each workshop is designed to give you hands-on instruction and guidance on key container concepts, with mentoring by Docker engineers and Docker Captains. The workshops are a great opportunity to zoom in on specific aspects of the Docker platform. Here is the list of free workshops available (click on the links to see the full abstracts):
Roles are an essential part of Ansible and help in structuring your automation content. The idea is to have clearly defined roles for dedicated tasks. In your automation code, the roles are then called from your Ansible Playbooks.
Since roles usually have a well-defined purpose, they make it easy to reuse your code - not only for yourself, but also within your team. And you can even share roles with the global community. In fact, the Ansible community created Ansible Galaxy as a central place to display, search and view Ansible roles from thousands of people.
So what does a role look like? Basically it is a predefined structure of folders and files to hold your automation code. There is a folder for your templates, a folder to keep files with tasks, one for handlers, another one for your default variables, and so on:
tasks/
handlers/
files/
templates/
vars/
defaults/
meta/
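You don’t have to create this skeleton by hand: ansible-galaxy can generate it for you. A quick sketch, using a hypothetical role name:

# Scaffold the standard role layout shown above
ansible-galaxy init my_role
# Inspect the generated folders and their main.yml files
find my_role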
In folders which contain Ansible code - like tasks, handlers, vars, defaults - there are main.yml files. Those contain the relevant Ansible bits. In the case of the tasks directory, they often include other YAML files within the same directory. Roles even provide ways to test your automation code - in Continue reading
“You are now Certified Kubernetes.” With this comment, Docker for Windows and Docker for Mac passed the Kubernetes conformance tests. Kubernetes has been available in Docker for Mac and Docker for Windows since January, having first been announced at DockerCon EU last year. But why is this important to the many of you who are using Docker for Windows and Docker for Mac?
Kubernetes is designed to be a platform that others can build upon. As with any similar project, the risk is that different distributions vary enough that applications aren’t really portable. The Kubernetes project has always been aware of that risk – and this led directly to forming the Conformance Working Group. The group owns a test suite that anyone distributing Kubernetes can run, submitting the results to attain official certification. This test suite checks that Kubernetes behaves like, well, Kubernetes; that the various APIs are exposed correctly and that applications built using the core APIs will run successfully. In fact, our enterprise container platform, Docker Enterprise Edition, achieved certification using the same test suite. You can find more about the test suite at https://github.com/cncf/k8s-conformance.
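If you are curious what running the suite looks like, it is typically driven with Heptio’s Sonobuoy tool. A rough sketch, assuming Sonobuoy is installed and kubectl already points at the cluster under test (command names and flags vary by Sonobuoy version):

# Launch the conformance tests against the current kubectl context
sonobuoy run --wait
# Retrieve the results tarball that gets submitted for certification
sonobuoy retrieve .
# Tear down the test resources afterwards
sonobuoy delete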
This is important for Docker for Windows and Docker for Continue reading
Welcome to the first installment of our Windows-specific Getting Started series!
Would you like to automate some of your Windows hosts with Red Hat Ansible Tower, but don’t know how to set everything up? Are you worried that Red Hat Ansible Engine won’t be able to communicate with your Windows servers without installing a bunch of extra software? Do you want to easily automate everyone’s best friend, Clippy?
We can’t help with the last thing, but if you said yes to the other two questions, you've come to the right place. In this post, we’ll walk you through all the steps you need to take in order to set up and connect to your Windows hosts with Ansible Engine.
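To give a taste of where we’re headed, here is a minimal sketch with a hypothetical Windows host (WinRM must already be enabled on the Windows side, and real credentials belong in Ansible Vault rather than plain text):

# A tiny inventory describing one Windows host reached over WinRM
cat > inventory.ini <<'EOF'
[windows]
win-host.example.com
[windows:vars]
ansible_user=Administrator
ansible_connection=winrm
ansible_winrm_transport=ntlm
EOF
# Agentless connectivity check - the Windows equivalent of the ping module
ansible windows -i inventory.ini -m win_ping --ask-pass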
A few of the many things you can do for your Windows hosts with Ansible Engine include:
In addition to connecting to and automating Windows hosts using local or domain users, you’ll also be able to use runas to execute actions as the Administrator (the Windows alternative to Linux’s sudo or su), so Continue reading
Last month the Linux Foundation announced the 2018 Open Container Initiative (OCI) election results of the Technical Oversight Board (TOB). Members of the TOB then voted to elect our very own Michael Crosby as the new Chairman. The result of the election should not come as a surprise to anyone in the community given Michael’s extensive contributions to the container ecosystem.
Back in February 2014, Michael led the development of libcontainer, a Go library that was developed to access the kernel’s container APIs directly, without any other dependencies. If you look at this first commit of libcontainer, you’ll see that the JSON spec is very similar to the latest version of the 1.0 runtime specification.
In the interview below, we take a closer look at Michael’s contributions to OCI, his vision for the future and how this benefits all Docker users.
I think that it is important to be part of the TOB to ensure that the specifications that have been created are generally useful and not specific to any one use case. I also feel it is important to ensure that the specifications are stable so that Continue reading
The Ansible Networking Team is excited about the release of Ansible 2.5. Back in February, I wrote about new Networking Features in Ansible 2.5, and one of the biggest areas of feedback was around the network_cli connection plugin. For more background on this connection plugin, please refer to the previous blog post.
In this post, I convert existing networking playbooks that use connection: local to use connection: network_cli. Please note that the passwords are in plain text for demonstration purposes only. Refer to the Ansible Networking documentation for recommendations on using Ansible Vault for secure password storage and usage.
To demonstrate, let’s use an existing GitHub repository with working playbooks using the legacy connection local method. NOTE: The connection local method will continue to be supported for quite some time, and has not been announced as deprecated yet. This repository has several examples using Ansible and NAPALM but we are highlighting the Ansible Playbooks in this post. The GitHub repository can be found here.
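As an illustrative before/after in miniature - a hedged sketch with a hypothetical group of IOS routers (connection: network_cli also expects ansible_network_os, e.g. ios, to be set for those hosts in the inventory):

# A converted play: connection: network_cli replaces connection: local
cat > backup.yml <<'EOF'
---
- name: Back up IOS configs over network_cli
  hosts: routers
  connection: network_cli
  tasks:
    - name: Save the running config into the backup/ directory
      ios_config:
        backup: yes
EOF
# Credentials come from standard CLI options instead of a provider dictionary
ansible-playbook backup.yml -u admin -k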
Networking platforms use their specific *_config platform module for easy backups within Ansible. For this playbook we are running the Ansible Playbook Continue reading
Docker believes in making technology easy to use and accessible, and that approach also extends to our enterprise-ready container platform. That means providing out-of-the-box integrations with key extensions of the platform that enterprise organizations require, but also making it possible to swap these built-in solutions for other tools as desired.
Docker Enterprise Edition 2.0 integrates Kubernetes into our platform and delivers the only Kubernetes platform that can be deployed across multiple clouds and multiple operating systems. As part of this release, we have included Project Calico by Tigera as the “batteries included” Kubernetes CNI plug-in for a highly scalable, industry-leading networking and routing solution.
While we support our customers using their preferred CNI plug-in, we chose to integrate Project Calico for our built-in solution because it aligns well with our design objectives for Docker EE 2.0:
Earlier this morning, I asked on Twitter about good individuals to follow on Twitter for Kubernetes information. I received quite a few good responses (thank you!), and I thought it might be useful to share the list of the folks that were recommended across all those responses.
The list I’ve compiled is clearly incomplete! If you think someone should be added to this list, feel free to hit me up on Twitter and let me know. Alternately, feel free to submit a pull request (PR) that adds them to this list. I’m not going to “vet” the list, so I’ll add any and all recommendations (unless they are clearly not related to Kubernetes, such as a news anchorman someone recommended to me—not sure about that one!).
Without further ado, here is the list I compiled from the responses to my tweet, in no particular order (I’ve included full name and employer, where that information is available):
Did you know that Docker Hub has millions of users pulling roughly one billion container images every two weeks — and it all runs on Docker Enterprise Edition?
Docker Enterprise Edition 2.0 may now be available to commercial customers who require an enterprise-ready container platform, but the Docker operations team has already been using it in production for some time. As part of our commitment to delivering high quality software that is ready to support your mission-critical applications, we leverage Docker Enterprise Edition 2.0 as the platform behind Docker Hub and our other SaaS services, Docker Store, and Docker Cloud.
Some organizations call it “dogfooding;” some call it “drinking your own champagne.” Whatever you call it, the importance of this program is to be fully invested in our own container platform and share in the same operational experiences as our customers.
One of the main features of this latest release is the integration of Kubernetes, so we wanted to make sure we are leveraging this capability. Working closely with our SaaS team leads, we chose a few services to migrate to Kubernetes while keeping others on Swarm.
For people already running Docker EE, Continue reading
We are excited to announce Docker Enterprise Edition 2.0 – a significant leap forward in our enterprise-ready container platform. Docker Enterprise Edition (EE) 2.0 is the only platform that manages and secures applications on Kubernetes in multi-Linux, multi-OS and multi-cloud customer environments. As a complete platform that integrates and scales with your organization, Docker EE 2.0 gives you the most flexibility and choice over the types of applications supported, orchestrators used, and where it’s deployed. It also enables organizations to operationalize Kubernetes more rapidly with streamlined workflows and helps you deliver safer applications through integrated security solutions. In this blog post, we’ll walk through some of the key new capabilities of Docker EE 2.0.
As containerization becomes core to your IT strategy, the importance of having a platform that supports choice becomes even more important. Being able to address a broad set of applications across multiple lines of business, built on different technology stacks and deployed to different infrastructures means that you have the flexibility needed to make changes as business requirements evolve. In Docker EE 2.0 we are expanding our customers’ choices in a few ways: