The following blog post contains answers to all of the questions asked during the Automating F5 BIG-IP using Ansible webinar.
Interested in exploring other Ansible webinars? Register for one of our upcoming webinars or watch an on-demand webinar.
Q: Can you pass the BIG-IP username and password by variable? Also, is there a way to mask the password in the Playbooks or manually feed the credentials as the Playbooks run? How can we ensure security here given that administrative passwords are clear text in the Playbooks themselves?
Yes, the BIG-IP username and password can be passed as variables by referencing them from the inventory file, or they can even be provided at runtime on the CLI -- although that would expose them in the process list if you ran 'ps'. You can also specify them in a vars_prompt, which prevents them from being shown in 'ps'. The downside is that this limits the amount of automation you can achieve, because running the Playbook would then require that the credentials either be typed in or specified with '-e' ('-e' auto-fills vars_prompts that match). The recommended way is to get the vars from a secure location. Ansible provides Vault, Continue reading
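To make the vars_prompt approach concrete, here is a minimal sketch of a playbook that prompts for the BIG-IP credentials at runtime instead of hard-coding them. The host group, module choice (bigip_facts) and its parameters are illustrative assumptions, not details from the webinar; adjust them for the F5 modules you actually use:

    ---
    # Prompt for credentials so they never appear in the playbook or in 'ps'
    - name: Gather facts from a BIG-IP without hard-coded credentials
      hosts: bigip
      connection: local
      gather_facts: no

      vars_prompt:
        - name: bigip_user
          prompt: "BIG-IP username"
          private: no
        - name: bigip_pass
          prompt: "BIG-IP password"
          private: yes    # input is not echoed to the terminal

      tasks:
        - name: Query system facts using the prompted credentials
          bigip_facts:
            server: "{{ inventory_hostname }}"
            user: "{{ bigip_user }}"
            password: "{{ bigip_pass }}"
            include: system_info
          delegate_to: localhost

For unattended runs, the same variables can instead live in a vars file encrypted with ansible-vault (for example, ansible-vault encrypt vars/bigip-creds.yml, a placeholder path) and be pulled in with vars_files.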
It’s no secret that I’m a big fan of using Markdown (specifically, MultiMarkdown) for the vast majority of the text-based content that I create. Over the last few years, I’ve used various tools and created scripts to help “reduce the friction” involved with outputting Markdown source files into a variety of destination formats (HTML, RTF, or DOCX, for example). Recently, thanks to Cody Bunch, I was pointed toward the use of a Makefile to assist in this area. After a short period of experimentation, I’m finding that I really like this workflow, and I wanted to share some details here with my readers.
First, if you’re not familiar with make and its use of a Makefile, check out this introduction. There’s a ton of power and flexibility here, of which I’ve only scratched the surface so far. The basic gist behind a Makefile is that it provides a set of instructions to the make command. Each set of instructions is tied to a target, which has one or more dependencies. In the “traditional” use cases for make, this is to allow programmers to define how a set of files should be compiled as well Continue reading
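As a concrete (if simplified) sketch of what this looks like for a Markdown workflow, the Makefile below rebuilds an HTML file from each Markdown source whenever the source changes. It assumes the multimarkdown binary is on your PATH and that sources use the .md extension; neither detail comes from the post itself. Note that each recipe line must begin with a tab:

    # Every .md file in the directory becomes an .html target
    SOURCES := $(wildcard *.md)
    TARGETS := $(SOURCES:.md=.html)

    all: $(TARGETS)

    # Pattern rule: each .html target depends on its .md source, so
    # 'make' only rebuilds files whose source has actually changed
    %.html: %.md
    	multimarkdown -o $@ $<

    clean:
    	rm -f $(TARGETS)

    .PHONY: all clean

With this in place, running make converts any changed files, and make clean removes the generated output.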
I remember the first AnsibleFest I attended – it was San Francisco 2014. I had been with Ansible for a week and had flown out to meet some of my new colleagues.
As a user of Ansible for the past year, I'd discovered how cheery and helpful the community was. "Newbies" dropping by the IRC channel on Freenode were always helped out, no matter how simple the question. The community spirit is something many people comment on when first using Ansible.
I remember meeting core engineer Brian Coca for the first time at that AnsibleFest too, also a recent joiner to the company. Brian was asked that morning if he'd give a talk, a request he calmly accepted as if he'd been asked to make a cup of tea. Top tip – never miss a talk given by Brian, you will learn something new!
Later, during the happy hour, I talked with lots of attendees, many just wanting to tell us how much they'd enjoyed the day. It was great to see the open source community feel extending to our full day conferences.
Two and a half years later, I still see that community spirit day in, day out. Only now it's Continue reading
Recently I needed to set up some Windows clients to connect to my OpenVPN server. This server, running on Linux, uses a specific MTU value (let’s say 1400) to ensure maximum compatibility with different clients over different links.
In addition to the OpenVPN process itself, the kernel must also know the correct MTU so that packet sizes can be adjusted before packets reach the tun/tap interface.
This is very easy to do on Linux. In fact, you most likely do not need to do anything at all: OpenVPN adjusts the MTU of the tun/tap interface while creating it. You can check the interface’s effective MTU with the ip link show or ifconfig commands.
The same, however, cannot be said about Windows. In a typical scenario, OpenVPN is not even directly responsible for creating the interface. Instead, it requires the interface to already be in place (which is achieved by calling tapinstall.exe during the initial setup) and then connects to it.
So even though you have specified your MTU settings in the OpenVPN profile, at least at the time of writing, this does not change the MTU of the interface that the Windows kernel would Continue reading
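For completeness, here is one way to set the TAP adapter’s MTU manually on Windows with netsh, run from an elevated command prompt. The interface name "Ethernet 2" is a placeholder for whatever your TAP adapter is actually called; this is a generic workaround on my part, not a step taken from the post:

    rem Find the TAP adapter's name and current MTU
    netsh interface ipv4 show subinterfaces

    rem Persistently set the MTU to match the OpenVPN profile (e.g. 1400)
    netsh interface ipv4 set subinterface "Ethernet 2" mtu=1400 store=persistent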
Recent Docker releases (17.04 CE Edge onwards) bring significant performance improvements to bind-mounted directories on macOS. (Docker users on the stable channel will see the improvements in the forthcoming 17.06 release.) Commands for bind-mounting directories have new options to selectively enable caching.
Containers that perform large numbers of read operations in mounted directories are the main beneficiaries. Here’s an illustration of the improvements in a few tools and applications in common use among Docker for Mac users: go list is 2.5× faster, symfony is 2.7× faster, and rake is 3.5× faster, as measured by the following benchmarks:

- go list ./... in the moby/moby repository
- curl of the main page of the Symfony demo app
- rake -T in @hirowatari’s benchmark
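The new behavior is opted into per mount, via a suffix on the bind-mount specification. A minimal sketch, with placeholder paths and image (the exact flags available depend on your Docker for Mac version):

    # Default: full consistency between host and container views
    docker run -v /Users/me/project:/project:consistent alpine true

    # 'cached' relaxes the guarantees so container reads can be served
    # from a cache, which speeds up read-heavy workloads like the above
    docker run -v /Users/me/project:/project:cached alpine true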
For more details about how and when to enable caching, and what’s going on under the hood, read on.
A defining characteristic of containers is isolation: by default, many parts of the execution environment of a container are isolated both from other containers and from the host system. In the filesystem, isolation shows up as layering: the filesystem Continue reading
Welcome to Technology Short Take #82! This issue is a bit behind schedule; I’ve been pretty heads-down on some projects. That work will come to fruition in a couple weeks, so I should be able to come up for air soon. In the meantime, here are a few links and articles for your reading pleasure.
ovs-dpctl command to “program” the Open vSwitch (OVS) kernel module. It’s a bit geeky, but does provide some insight into Continue reading

MetLife, the global provider of insurance, annuities, and employee benefit programs, will be celebrating its 150th birthday next year. Survival and success in their space depend on being agile and able to respond to changing market requirements. During the Day 2 General Session at DockerCon 2017, MetLife shared how they’re inspiring new innovation in their organization with Docker Enterprise Edition (EE).
MetLife offers auto, home, dental, life, disability, vision, and health insurance to over 100 million customers across 50 countries. Their business relies on information – about policyholders, risk assessments, financial and market data, etc. Aaron Ades, AVP of Solutions Engineering at MetLife, notes that they’ve been in the information management business for 150 years and have accumulated over 400 systems of record – some apps are over 30 years old.
The challenge for MetLife is that they still have a lot of legacy technology that they must work with. Aaron shared that there is code running today that was first written in 1982, yet they need to deliver a modern experience on top of those legacy systems.
To hear more about how MetLife is staying ahead of their competition using Docker, Continue reading
Docker has celebrated a number of important milestones lately. March 20th was the fourth anniversary of the launch of the Docker project at PyCon in 2013. April 10th was the fourth anniversary of the day that I joined Solomon and a team of 14 other believers to help build this remarkable company. And, on April 18th, we brought the community, customers, and partners together in Austin for the fourth US-based DockerCon.
(Photos: the Docker project launch on March 20th, 2013, and the Docker team in 2013.)
DockerCon was a great opportunity to reflect on the progress we’ve seen in the past four years. Docker the company has grown from 15 to over 330 talented individuals. The number of contributors to Docker has grown from 10 to over 3,300. Docker is used by millions of developers and is running on millions of servers. There are now over 900K dockerized apps that have been downloaded over 13 billion times. Docker is being used to cure diseases, to keep planes in the air, to keep soldiers safe from landmines, to power the world’s largest financial networks and institutions, to process billions in transactions, to help create new companies, and to help revitalize existing companies. Docker has rapidly scaled revenues, building a sustainable Continue reading
Weren’t able to attend DockerCon 2017 or looking for a refresher? Check out the recording and slides from the DockerCon 2017 Online Meetup highlights recap of all the announcements and highlights from DockerCon by Patrick Chanezon and Betty Junod.
The videos and slides from general session day 1 and day 2, as well as the top rated sessions, are already available. The rest of the DockerCon slides and videos will soon be published on our SlideShare account, and all the breakout session video recordings will be available on our DockerCon 2017 YouTube playlist.
The Moby Project is a new open-source project to advance the software containerization movement and help the ecosystem take containers mainstream. Learn more here.
LinuxKit is a toolkit for building secure, portable and lean operating systems for containers. Read more about LinuxKit.
The Modernize Traditional Applications (MTA) Program aims to help enterprises make their existing legacy apps more secure, more efficient and portable to hybrid cloud infrastructure. Read more about the Modernize Traditional Apps Program.
Continue reading
DockerCon 2017 was an opportunity to hear from customers across multiple industries and segments on how they are leveraging Docker technology to accelerate their business. In the keynote on Day 2 and also in a breakout session that afternoon, Visa shared how Docker Enterprise Edition is empowering them on their mission to make global economies safer by digitizing currency and making electronic payments available to everyone, everywhere.
Visa is the world’s largest retail electronic payment network, handling 130 billion transactions a year and processing $5.8 trillion annually. Swamy Kocherlakota, Global Head of Infrastructure and Operations, shared that Visa got here by expanding their global footprint, which has put pressure on his organization, whose headcount has remained mostly flat during that time. Since going into production with their Docker Containers-as-a-Service architecture 6 months ago, Mr. Kocherlakota has seen a 10x increase in scalability, ensuring that his organization will be able to support their overall mission and growth objectives well into the future.
In aligning his organization to the company mission, Swamy decided to focus on two primary metrics: Speed and Efficiency.
One of my favorite technology catch phrases is “all technology fails”, but when thinking about the network, that thought becomes a very scary one. Yes, all technology does fail, but you will always do your best to keep the network from being one of those failures. The concept of self-healing networks (the desire to detect an issue and fix it as soon as possible) is not a new one, as network monitoring is always at the front of any network engineer's mind. We are just fortunate in this day and age to be able to take advantage of newer tools that provide better solutions. Ansible to the rescue!
If you are attending the Red Hat Summit, please make sure not to miss the Discovery Zone session entitled “Self-Healing Networks with Ansible” on Thursday, May 4th at 10:15AM.
In this presentation we will cover topics such as:
At the end of this session, Continue reading
Following yesterday’s general session videos from DockerCon Day 1 and Day 2, we’re happy to share with you the video recordings of the sessions rated highest by DockerCon attendees. All the slides will soon be published on our SlideShare account, and all the breakout session video recordings will be available on our DockerCon 2017 YouTube playlist.
If you’ve been following the Full Stack Journey podcast, you know that the podcast has been silent for a few months. Some of that was due to some adverse situations in life (it happens to all of us from time to time), but some of it was due to the coordination of a major transition in the podcast. And that’s the big news I’m here to share—read on for the full details!
If you’ve been in the IT industry for any reasonable length of time, especially in the networking space, you’ve probably heard of the Packet Pushers Podcast. It’s a hugely popular podcast created by Greg Ferro and Ethan Banks. In recent years, Packet Pushers has expanded from the “main show” to include other shows, including the Datanauts podcast (led by Chris Wahl and Ethan Banks). They’ve also been looking to expand their stable of podcasts to include additional relevant content.
This brings me to the big news: the Full Stack Journey podcast is joining the Packet Pushers network of podcasts! That’s right—the Full Stack Journey will be part of Packet Pushers’ growing network of podcasts. In talking with Greg and Ethan and the rest of the Packet Pushers team, Continue reading
In our previous Getting Started blog post, we discussed how to install Ansible Tower in your environment.
Now we’ll discuss how you can equip your Tower host with users and credentials.
To begin, let’s cover the essentials: setting up your user base and creating credentials for appropriate delegation of tasks.
Building your user base will be the first thing you’ll need to do to get started with Tower. The user base can be broken into three easily-defined parts (a command-line sketch follows the list):
1. User: Someone who has access to Tower with associated permissions and credentials.
2. Organization: The top level of the user base - a logical collection of users, teams, projects and inventories.
3. Team: A subdivision of an organization - provides the means to set up and implement role-based access schemes as well as to delegate work across organizations.
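As a rough sketch of how these three objects relate, here is how they might be created with the tower-cli utility. The names are placeholders, tower-cli must be installed and pointed at your Tower host first, and everything below can equally be done in the Tower UI:

    # Tell tower-cli where Tower lives and how to authenticate
    tower-cli config host tower.example.com
    tower-cli config username admin
    tower-cli config password REDACTED

    # Create the hierarchy top-down: organization, then team, then user
    tower-cli organization create --name "Engineering"
    tower-cli team create --name "Network Ops" --organization "Engineering"
    tower-cli user create --username jdoe --email jdoe@example.com \
        --first-name Jane --last-name Doe --password changeme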
There are three types of users that can be defined within Tower:
Following the general session highlights from DockerCon Day 1, we’re happy to share with you the video recording from general session day 2. All the slides will soon be published on our SlideShare account, and all the breakout session video recordings will be available on our DockerCon 2017 YouTube playlist.
Here’s what we covered during the day 2 general session:
Ben started off his DockerCon Day 2 keynote with key facts and figures around Docker commercial adoption. To illustrate his points, Ben invited on stage Swamy Kocherlakota, Global Head of Infrastructure and Operations at Visa, to talk about their journey adopting Docker Enterprise Edition to run their critical applications at scale in a very diverse environment.
During the day 2 keynote, Lily and Vivek reprise their 2016 roles as dedicated burners, finally returning from Burning Man to get back to their jobs in enterprise dev and ops. Ben returns as the clueless business guy and decides to add value by hiring a contractor, who Continue reading
What an incredible DockerCon 2017 we had last week. A big thank you to all of the 150+ confirmed speakers, 100+ sponsors and over 5,500 attendees for contributing to the success of these amazing 3 days in Austin. You’ll find below the videos and slides from general session day 1. All the slides will soon be published on our SlideShare account, and all the breakout session video recordings will be available on our DockerCon 2017 YouTube playlist.
Here’s what we covered during the day 1 general session:
Solomon’s keynote started by introducing new Docker features to improve the development workflows of Docker users: multi-stage builds and desktop-to-cloud integration. With multi-stage builds you can now easily separate your build-time and runtime container images, allowing development teams to ship minimal and efficient images. It’s time to say goodbye to those custom and non-portable build scripts! With desktop-to-cloud you can easily connect to a remote swarm cluster using your Docker ID for authentication, without having to worry Continue reading
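To give a flavor of multi-stage builds, here is a minimal sketch of a Dockerfile that compiles a small Go program in a full build image and ships only the resulting binary in a slim runtime image; the image tags and paths are illustrative:

    # Build stage: carries the whole Go toolchain, used only at build time
    FROM golang:1.8 AS build
    WORKDIR /src
    COPY main.go .
    RUN CGO_ENABLED=0 go build -o /app main.go

    # Runtime stage: the final image contains just the compiled binary
    FROM alpine:3.5
    COPY --from=build /app /app
    ENTRYPOINT ["/app"]

A single docker build produces only the final image; the intermediate build stage is discarded automatically.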
A couple years ago, I wrote an article about how I was choosing CoreOS over Project Atomic based on some initial testing with CentOS Atomic Host builds. As it turns out—and as I pointed out in the “Update” section of that article—the Atomic Host builds I was using were pre-release builds, and therefore it wasn’t really appropriate to form an assessment based on them. Now that CentOS Atomic Host and CoreOS Container Linux have both grown and matured, I thought I’d revisit the topic and see how—if at all—things have changed.
In my original post, there were 4 major issues I identified (not necessarily in the same order as the original post):
So how do these areas look now, 2 years later?
Container-specific cloud-init extensions: Upon a closer examination of this issue, I realized that the cloud-init extensions were actually specific to CoreOS projects, like etcd and fleet. Thus, it wouldn’t make sense for these sorts of cloud-init extensions to exist on Atomic Hosts. What would make sense would be extensions that help configure Atomic Host-specific functionality, though (to be honest) Continue reading
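For reference, those CoreOS-specific extensions look roughly like this in a cloud-config file. This is an illustrative sketch of the old syntax from memory, with a placeholder discovery token, configuring the etcd and fleet projects mentioned above:

    #cloud-config
    coreos:
      etcd2:
        # <token> is a placeholder for a real discovery token
        discovery: https://discovery.etcd.io/<token>
        advertise-client-urls: http://$private_ipv4:2379
        listen-client-urls: http://0.0.0.0:2379
      fleet:
        public-ip: $private_ipv4
      units:
        - name: etcd2.service
          command: start
        - name: fleet.service
          command: start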
Docker is excited to announce that we are returning to the Newseum in Washington, DC on May 2nd to host the second annual Docker Federal Summit, a one-day event packed with breakout sessions, discussions, hands-on labs and technology deep dives from Docker and our ecosystem partners.
This event is designed for federal agency developers and IT ops personnel looking to learn more about how to approach Docker containers, cloud and devops to accelerate agency IT initiatives to support critical civilian and defense missions.
Technology leaders from agencies like GSA, USCIS and JIDO will share their experiences in deploying containers to production and provide pragmatic guidance in how to approach this change from technology to process and culture.
Featured Breakout Sessions: A wide variety of breakout sessions feature technical deep dives, demonstrations and discussions around compliance and security.
In the previous post, I talked about OpenVPN TCP and UDP tunnels and why you should not be using TCP. In this post, I’m going to talk about optimizing those tunnels to get the most out of them.
Believe it or not, the default OpenVPN configuration is likely not optimized for your link. It probably works, but its throughput could likely be improved if you take the time to optimize it.
A tunnel has two ends! Optimizing one end does not necessarily optimize the other. For proper optimization of the link, both ends of the tunnel should be in your control. That means when you are using OpenVPN in server mode, serving different clients that you do not have control over, the best you can do is optimize your own end of the tunnel and use appropriate defaults suitable for most clients.
Below are some techniques that could be used to optimize your OpenVPN tunnels.
In today’s world, where most connections are either encrypted or pre-compressed (and commonly both), you should probably think twice before setting up compression on top of your VPN tunnel.
While it still could be an effective way Continue reading
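To give a sense of what this tuning looks like in practice, here is a sketch of directives that commonly come up in OpenVPN optimization. The values are illustrative starting points rather than recommendations from the post, and settings such as the tunnel MTU must match on both ends:

    # Fragment of an OpenVPN config (illustrative values only)

    # Larger socket buffers can help on fast links where the OS
    # defaults are too small
    sndbuf 393216
    rcvbuf 393216

    # Clamp the TCP MSS so encapsulated packets fit within the link MTU
    mssfix 1400

    # Tunnel device MTU; must agree with the far end
    tun-mtu 1400

    # Encrypted or pre-compressed traffic rarely benefits from compression
    comp-lzo no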
Every year at DockerCon, we expand the bounds of what Docker can do with new features and products. And every day, we see great new apps that are built on top of Docker. And yet, there are always a few that stand out not just for being cool apps, but for pushing the bounds of what you can do with Docker.
This year we had two great apps that we featured in the Docker Cool Hacks closing keynote. Both hacks came from members of our Docker Captains program, a group of people from the community who are recognized by Docker as very knowledgeable about Docker and who contribute quite a bit to the community.
The first Cool Hack was Play with Docker by Marcos Nils and Jonathan Leibiusky. Marcos and Jonathan actually were featured in the Cool Hacks session at DockerCon EU in 2015 for their work on a Container Migration Tool.
Play with Docker is a Docker playground that you can run in your browser.
Play with Docker’s architecture is a Swarm of Swarms, running Docker in Docker instances.
Running on pretty beefy hosts (r3.4xlarge on AWS), Play with Docker is able to run Continue reading
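For a taste of the Docker-in-Docker building block behind that architecture, here is a minimal sketch using the official docker:dind image; the container name is a placeholder, this plain-TCP setup reflects releases current at the time of writing, and the real Play with Docker setup is considerably more elaborate:

    # Start a Docker daemon inside a container (dind requires --privileged)
    docker run -d --privileged --name inner-daemon docker:dind

    # Run a Docker client container pointed at that inner daemon
    docker run --rm --link inner-daemon:docker \
        -e DOCKER_HOST=tcp://docker:2375 docker:latest docker info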