This blog is part two in a series covering how Red Hat Ansible Automation can integrate with ticket automation. This time we’ll cover dynamically adding a set of network facts from your switches and routers into your ServiceNow tickets. If you missed Part 1 of this blog series, you can refer to it via the following link: Ansible + ServiceNow Part 1: Opening and Closing Tickets.
Suppose there was a certain network operating system software version that contained an issue you knew was always causing problems and making your uptime SLA suffer. How could you convince your management to finance an upgrade project? How could you justify to them that the fix would be well worth the cost? Better yet, how would you even know?
A great start would be having metrics that you could track. The ability to data mine against your tickets would prove just how many tickets were involved with hardware running that buggy software version. In this blog, I’ll show you how to automate adding a set of facts to all of your tickets going forward. Indisputable facts can then be pulled directly from the device, with no chance of mistakes or of anything accidentally being overlooked. Continue reading
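To make the idea concrete, here is a minimal sketch of what such a playbook could look like, assuming Ansible 2.8's snow_record module and a Cisco IOS device; the ServiceNow instance, credentials, and incident number shown are hypothetical, not the article's actual playbook:

```bash
# Hedged sketch: gather facts from a Cisco IOS device and attach them
# to an existing ServiceNow incident. Instance name, credentials, and
# incident number are hypothetical placeholders.
cat > add_facts_to_ticket.yml <<'EOF'
---
- name: Add network facts to a ServiceNow ticket
  hosts: ios_routers
  gather_facts: no
  tasks:
    - name: Gather facts from the device
      ios_facts:

    - name: Update the incident with the gathered facts
      snow_record:
        instance: dev12345                # hypothetical instance
        username: admin
        password: "{{ snow_password }}"
        table: incident
        state: present
        number: INC0010001                # hypothetical ticket number
        data:
          description: >-
            Platform: {{ ansible_net_model | default('unknown') }},
            Version: {{ ansible_net_version | default('unknown') }},
            Serial: {{ ansible_net_serialnum | default('unknown') }}
      delegate_to: localhost
EOF
ansible-playbook add_facts_to_ticket.yml
```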
This year VMworld—VMware’s annual user conference—moves back to San Francisco from Las Vegas. Returning to the Bay Area along with VMworld is Spousetivities, now in its 11th year at the conference. Better get your tickets sooner rather than later; there’s quite a good chance these activities will sell out!
Registration is open right now.
This year’s activities are funded in part by the generous and community-minded support of Veeam and VMUG, who are “putting their money where their mouth is” when it comes to promoting strong work/life balance at events like VMworld.
Here’s a quick look at what’s planned for VMworld 2019 in San Francisco:
Monday, August 26: Spousetivities kicks off the week with a walking food tour. This tour, like all the others, will depart from the Marriott Marquis.
Tuesday, August 27: This full-day event will take participants up to Wine Country for several wine tastings. Transportation is provided, of course, and participants will enjoy lunch on the tour as well.
Wednesday, August 28: Nature, shopping, tranquility, and quaint towns—this tour has it all! Participants will visit the Golden Gate Bridge, the Marin Headlands, Muir Woods, and Sausalito. Transportation and Continue reading
Now that Red Hat is a part of IBM, some people may wonder about the future of the Ansible project. Here is the good news: the Ansible community strategy has not changed.
As always, we want to make it as easy as possible to work with any projects and communities who want to work with Ansible. With the resources of IBM behind us, we plan to accelerate these efforts. We want to do more integrations with more open source communities and more technologies.
One of the reasons we are excited for the merger is that IBM understands the importance of a broad and diverse community. Search for “Ansible plus <open source project>” and you can find Ansible information, such as playbooks, modules, blog posts, videos, and slide decks, intended to make working with that project easier. We have thousands of people attending Ansible meetups and events all over the world. We have millions of downloads. We have had this momentum because we provide users flexibility and freedom. IBM is committed to our independence as a community so that we can continue this work.
We’ve worked hard to be good open source citizens. We value the trust Continue reading
When using kubeadm to set up a new Kubernetes cluster, the output of the kubeadm init command that sets up the control plane for the first time contains some important information on joining additional nodes to the cluster. One piece of information in there that (until now) I hadn’t figured out how to replicate was the CA certificate hash. (Primarily I hadn’t figured it out because I hadn’t tried.) In this post, I’ll share how to calculate the CA certificate hash for kubeadm to use when joining additional nodes to an existing cluster.
When looking to figure this out, I first started with the kubeadm documentation. My searches led me here, which states:
The hash is calculated over the bytes of the Subject Public Key Info (SPKI) object (as in RFC7469). This value is available in the output of “kubeadm init” or can be calculated using standard tools.
That’s useful information, but what are the “standard tools” being referenced? I knew that a lot of work had been put into kubeadm init phase (for breaking down the kubeadm init workflow), but a quick review of that documentation didn’t reveal anything. Reviewing the referenced RFC also didn’t provide any Continue reading
Developers ranked Docker as the #1 most wanted platform, #2 most loved platform, and #3 most broadly used platform in the 2019 Stack Overflow Developer Survey. Nearly 90,000 developers from around the world responded to the survey. So we asked the community why they love Docker, and here are 10 of the reasons they shared:
“I love docker because it takes environment specific issues out of the equation – making the developer’s life easier and improving productivity by reducing time wasted debugging issues that ultimately don’t add value to the application.” @pamstr_
“Docker completely changed my life as a developer! I can spin up my project dependencies like databases for my application in a second in a clean state on any machine on our team! I can‘t not imagine the whole ci/cd-approach without docker. Automate all the stuff? Dockerize it!” @Dennis65560555
I recently decided to start working with jsonnet, a data templating language and associated command-line interface (CLI) tool for manipulating and/or generating various data formats (like JSON, YAML, or other formats; see the Jsonnet web site for more information). However, I found that there are no prebuilt binaries for jsonnet (at least, not that I could find), and so I thought I’d share here the process for building jsonnet from source. It’s not hard or complicated, but hopefully sharing this information will streamline the process for others.
As some readers may already know, my primary OS is Fedora. Thus, the process I share here will be specific to Fedora (and/or CentOS and possibly RHEL).
To keep my Fedora installation clean of any unnecessary packages, I decided to use a CentOS 7 VM—instantiated and managed by Vagrant—for the build process. If you don’t want to use a build VM, you can omit the steps involving Vagrant. You’ll also need to modify the commands used to install the necessary packages (on Fedora, you’d use dnf instead of yum, for example). Different distributions may also use different package names for some of the dependencies, so keep that in mind.
Run Continue reading
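As a rough sketch of the build itself (the package list is an assumption; adjust it for your distribution), the process looks something like this:

```bash
# Bring up and enter the CentOS 7 build VM (omit if building directly)
vagrant up && vagrant ssh

# Install build dependencies (use dnf instead of yum on Fedora)
sudo yum install -y git gcc-c++ make

# Fetch the jsonnet source and build it
git clone https://github.com/google/jsonnet.git
cd jsonnet
make

# Put the resulting binary somewhere in your PATH
sudo cp jsonnet /usr/local/bin/
```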
Welcome to Technology Short Take #116! This one is a bit shorter than usual, due to holidays in the US and my life being busy. Nevertheless, I hope that I managed to capture something you find useful or helpful. As always, your feedback is welcome, so if you have suggestions, corrections, or comments, you’re welcome to contact me via Twitter.
In the previous post, I talked about why we might need a SOCKS proxy at all, and how we can properly set up a secure one using only stunnel.
That approach is fine and all, but it still suffers from some limitations, the most important of which is that the BIND and UDP ASSOCIATE commands are not available.

Compared to stunnel’s limited SOCKS functionality, Dante (which is one of the most popular SOCKS servers available) comes with pretty much every feature one can imagine from a SOCKS server.
From advanced authentication and access control to server chaining, traffic monitoring, and even bandwidth control, Dante has got them all.
While it might be okay to use a non-encrypted SOCKS proxy in your local network, it is definitely not a good idea to do so over the internet.
For this, RFC 1961 added the GSS-API authentication protocol for SOCKS Version 5. GSS-API provides integrity, authentication, and confidentiality. Dante, of course, completely supports GSS-API authentication and encryption. But GSS-API (which is typically used with Kerberos Continue reading
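For illustration, here is a minimal sockd.conf sketch with GSS-API enabled; the interface names are hypothetical, and the exact directives should be verified against the documentation for your Dante version:

```bash
# Minimal Dante server configuration sketch with GSS-API enabled.
# Interface names are hypothetical; directives may vary by version.
cat > /etc/sockd.conf <<'EOF'
logoutput: syslog
user.privileged: root
user.notprivileged: nobody

internal: eth0 port = 1080
external: eth0

# Require GSS-API (e.g. Kerberos) for the SOCKS negotiation
socksmethod: gssapi
clientmethod: none

client pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
    log: error
}

socks pass {
    from: 0.0.0.0/0 to: 0.0.0.0/0
    log: error
}
EOF
```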
There are over one million Dockerfiles on GitHub today, but not all Dockerfiles are created equally. Efficiency is critical, and this blog series will cover five areas for Dockerfile best practices to help you write better Dockerfiles: incremental build time, image size, maintainability, security and repeatability. If you’re just beginning with Docker, this first blog post is for you! The next posts in the series will be more advanced.
Important note: the tips below follow the journey of ever-improving Dockerfiles for an example Java project based on Maven. The last Dockerfile is thus the recommended Dockerfile, while all intermediate ones are there only to illustrate specific best practices.
In a development cycle, when building a Docker image, making code changes, then rebuilding, it is important to leverage caching. Caching helps to avoid running build steps again when they don’t need to run.

However, the order of the build steps (Dockerfile instructions) matters, because when a step’s cache is invalidated by changing files or modifying lines in the Dockerfile, the cache for all subsequent steps is invalidated as well. Order your steps from least to most frequently changing to optimize caching.
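As an illustration (a hypothetical Maven project, not necessarily the article's exact example), copying the dependency manifest and resolving dependencies before copying the source keeps the slow dependency layer cached across code changes:

```bash
cat > Dockerfile <<'EOF'
FROM maven:3-jdk-8
WORKDIR /app

# Dependencies change rarely: copy only the POM and resolve them first,
# so this layer stays cached when just the source code changes
COPY pom.xml .
RUN mvn dependency:go-offline

# Source changes often: copy it last so the steps above remain cached
COPY src ./src
RUN mvn package
EOF
docker build -t my-app .
```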
Docker Hub is home to the world’s largest library of container images. Millions of individual developers rely on Docker Hub for official and certified container images provided by independent software vendors (ISV) and the countless contributions shared by community developers and open source projects. Large enterprises can benefit from the curated content in Docker Hub by building on top of previous innovations, but these organizations often require greater control over what images are used and where they ultimately live (typically behind a firewall in a data center or cloud-based infrastructure). For these companies, building a secure content engine between Docker Hub and Docker Trusted Registry (DTR) provides the best of both worlds – an automated way to access and “download” fresh, approved content to a trusted registry that they control.
Ultimately, the Hub-to-DTR workflow gives developers a fresh source of validated and secure content to support a diverse set of application stacks and infrastructures, all while staying compliant with corporate standards. Here is an example of how this is executed in Docker Enterprise 3.0:
DTR allows customers to set up a mirror to grab content from a Hub repository by constantly polling it and pulling new image Continue reading
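Conceptually, the mirroring flow amounts to pulling fresh content from Hub and pushing it into the trusted registry. A rough illustration using plain Docker CLI commands (the DTR hostname is hypothetical, and DTR automates this via its mirroring policies rather than manual commands):

```bash
# Pull a fresh image from Docker Hub, retag it for the trusted
# registry behind the firewall, and push it there
docker pull nginx:1.17
docker tag nginx:1.17 dtr.example.com/approved/nginx:1.17
docker push dtr.example.com/approved/nginx:1.17
```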
Deploying applications on Red Hat OpenShift or Kubernetes has come a long way. These days, it's relatively easy to use OpenShift's GUI or something like Helm to deploy applications with minimal effort. Unfortunately, these tools don't typically address the needs of operations teams tasked with maintaining the health or scalability of the application - especially if the deployed application is something stateful like a database. This is where Operators come in.
An Operator is a method of packaging, deploying and managing a Kubernetes application. Kubernetes Operators with Ansible exists to help you encode the operational knowledge of your application in Ansible.
What can we do with Ansible in a Kubernetes Operator? Because Ansible is now part of the Operator SDK, anything an Operator can do should be achievable with Ansible. It’s now possible to write an Operator as an Ansible Playbook or Role to manage components in Kubernetes clusters. In this blog, we're going to be diving into an example Operator.
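With the Operator SDK of that era, scaffolding an Ansible-based Operator looked roughly like this (a sketch; the project and resource names are hypothetical):

```bash
# Scaffold a new Ansible-based Operator; the SDK generates an Ansible
# role plus a watches.yaml mapping the custom resource to that role
operator-sdk new memcached-operator \
    --api-version=cache.example.com/v1alpha1 \
    --kind=Memcached \
    --type=ansible
```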
For more information on Kubernetes Operators with Ansible please refer to the following resources:
Welcome to Technology Short Take #115! I’m back from my much-needed vacation in Bali, and getting settled back into work and my daily routine (which, for the last few weeks, was mostly swimming in the pool and sitting on the beach). Here’s a fresh new collection of links and articles from around the web to propel myself back into blogging. I hope you find something useful here!
Mircea Ulinic’s salt-sproxy takes a different approach to network automation using Salt. Normally this would require the use of Proxy Minions, but Ulinic’s post on salt-sproxy shows how this can be done without any Proxy Minions.

Modern applications can come in many flavors, consisting of different technology stacks and architectures, from n-tier to microservices and everything in between. Regardless of the application architecture, the focus is shifting from individual containers to a new unit of measurement which defines a set of containers working together – the Docker Application. We first introduced Docker Application packages a few months ago. In this blog post, we look at what’s driving the need for these higher-level objects and how Docker Enterprise 3.0 begins to shift the focus to applications.
Since our founding in 2013, Docker – and the ecosystem that has thrived around it – has been built around the core workflow of a Dockerfile that creates a container image that in turn becomes a running container. Docker containers, in turn, helped to drive the growth and popularity of microservices architectures by allowing independent parts of an application to be turned on and off rapidly and scaled independently and efficiently. The challenge is that as microservices adoption grows, a single application is no longer based on a handful of machines but dozens of containers that can be divided amongst different development teams. Continue reading
Over the past two years Docker has worked closely with customers to modernize portfolios of traditional applications with Docker container technology and Docker Enterprise, the industry-leading container platform. Such applications are typically monolithic in nature, run atop older operating systems such as Windows Server 2008 or Windows Server 2003, and are difficult to transition from on-premises data centers to the public cloud.
The Docker platform alleviates each of these pain points by decoupling an application from a particular operating system, enabling microservice architecture patterns, and fostering portability across on-premises, cloud, and hybrid environments.
As the Modernizing Traditional Applications (MTA) program has matured, Docker has invested in tooling and methodologies that accelerate the transition to containers and decrease the time necessary to experience value from the Docker Enterprise platform. From the initial application assessment process to running containerized applications on a cluster, Docker is committed to improving the experience for customers on the MTA journey.
Enterprises develop and maintain exhaustive portfolios of applications. Such apps come in a myriad of languages, frameworks, and architectures developed by both first and third party development teams. The first step in the containerization journey is to determine which applications are Continue reading
In this post, you will learn advanced applications of Ansible facts to configure Linux networking. Instead of hard-coding device names, you will find out how to specify network devices by PCI addresses. This prepares your configuration to work on different Red Hat Enterprise Linux releases with different network naming schemes.
The RHEL System Roles provide a uniform configuration interface across multiple RHEL releases. However, the names of network devices in modern Linux distributions are often not stable across releases. In the past, the kernel named the devices after their order of appearance: the first device got the name eth0, the next eth1, and so on.
To make the device names more reliable, developers introduced other naming methods. This interferes with creating a release-independent network configuration based on interface names. An initial solution to this problem is to address network cards by MAC address, but this requires an up-to-date inventory with the MAC addresses of all network cards, and it requires updating the inventory after replacing broken hardware. This results in extra work. To avoid this effort, it would be great to be able to specify network cards by their PCI address. Continue reading
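One way to see what Ansible already knows about an interface, including its PCI ID on Linux, is an ad-hoc fact query; the host and interface names here are hypothetical:

```bash
# Show the facts gathered for a single interface; on Linux the output
# typically includes a "pciid" key that can identify the card
ansible webserver1 -m setup -a 'filter=ansible_eth0'
```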
Last week, Docker hosted our 4th annual Mid-Atlantic and Government Docker Summit, a one-day technology conference held on Wednesday, May 29 near Washington, DC. Over 425 attendees from the public and private sectors came together to share and learn about the trends driving change in IT, from containers and cloud to DevOps. Specifically, the presenters shared content on topics including Docker Enterprise, our industry-leading container platform, Docker’s Kubernetes Service, Container Security and more.
Attendees were a mix of technology users and IT decision makers: everyone from developers, systems admins and architects to Sr. leaders and CTOs.
Highlights included a keynote by Docker’s EVP of Customer Success, Iain Gray, and a fireside chat between Nick Sinai, former US CTO and now a partner at Insight Venture Partners, and Suzette Kent, the current US Federal CIO.
The fireside chat highlighted top-of-mind issues for Kent and how they align with the White House IT Modernization Report, specifically the modernization of current federal IT infrastructure and preparing and scaling the workforce. Kent mentioned, “The magic of IT modernization is marrying the technology with the people and the Continue reading
As a Network Engineer, I hated filling out tickets. Anytime a router rebooted or a power outage took place at a remote site, the resulting ticket generation took up about 50% of my day. If there had been a way to automate ticket creation, I would have saved a lot of time. The only unique input needed would have been the case-specific comments providing additional information about the issue.
While ticket creation was a vital activity, automating this was not an option at the time. This is surprising because my management was always asking me to include more information in my tickets. Tickets were often reviewed months later and sometimes never got created or did not have much relevant information included.
Fast forward to today, companies are now data mining from tickets with a standard set of facts that are pulled directly from the device during ticket creation, such as network platform, software version, uptime, etc. Network operations (NetOps) teams now use massive amounts of ticket data to make budget decisions.
For example, if there were 400 network outages due to power issues, NetOps could then make a case to spend $40,000 on battery backups, having proved Continue reading