Archive

Category Archives for "Systems"

Looking Ahead: My 2019 Projects

It’s been a little while now since I published my 2018 project report card, which assessed my progress against my 2018 project goals. I’ve been giving a fair amount of thought to the areas where I’d like to focus my professional (technical) development this coming year, and I think I’ve come up with some project goals that align both with where I am professionally right now and where I want to be technically as I grow and evolve. This is a really difficult balance to strike, and we’ll see at the end of the year how well I did.

Without further ado, here’s my list of 2019 project goals, along with an optional stretch goal (where it makes sense).

  1. Make at least one code contribution to an open source project. For the last few years, I’ve listed various programming- and development-related project goals. In all such cases, I haven’t done well with those goals because they were too vague, and—as I pointed out in previous project report cards—these less-than-ideal results are probably due to the way programming skills tend to be learned (by solving a problem/challenge instead of just learning language semantics and syntax). So, in an effort to Continue reading

Docker Pals Program 2019

At DockerCon Copenhagen we launched the Docker Pals program to connect attendees and help them make the most of their trip. Attending a conference for the first time or by yourself can be intimidating, and we don’t want anyone to feel that way at DockerCon! Attendees get matched with a few others who are new (the “Pals”) and with someone who knows their way around (the “Guide”), so you will have a familiar group before you arrive at the conference. Guides help Pals figure out which talks and activities to attend, and are available for questions.

This year we are excited to grow the program, matching more groups and adding Meet-and-Greets throughout the week. You won’t want to miss the best version of Docker Pals yet!

        

Here’s what Pals had to say about DockerCon Barcelona:

“Docker Pals made my DockerCon experience ten times better and I’ve made friends I hope to see again!”

“Our Guide was very helpful and I really enjoyed meeting other Pals at the conference.”

 

“[I enjoyed] the fact that even though I was there alone I always had a place to turn for help and fellowship.”

“[Our Continue reading

Happy Pi Day with Docker and Raspberry Pi

What better way to say “Happy Pi Day” than by installing Docker Engine – Community (CE) 18.09 on a Raspberry Pi? This article will walk you through the process of installing Docker Engine 18.09 on a Raspberry Pi. There are many articles out there that show this process, but many fail due to older Engine versions and syntax issues.

Special thanks to Docker Solutions Engineer Stefan Scherer for his monitoring image (stefanscherer/monitor) and whoami image (stefanscherer/whoami), which allow Pimoroni Blinkt! LEDs to turn on and off when scaling an application within a Swarm cluster.

Instructions

For this demo, I used seven Raspberry Pi 3 devices (Model B+) and one Pimoroni Blinkt! LED strip for each Pi.

1. Download the following Raspbian image ‘2018-11-13-raspbian-stretch-full.img’ from https://www.raspberrypi.org/downloads/raspbian/

2. Use balenaEtcher to write the image to each of your microSD cards.

3. To make DNS hostname resolution a little easier, I set up local hostnames on each Pi device. Below is an example.

192.168.93.231 pi-mgr1 pi-mgr1.docker.cafe
192.168.93.232 pi-mgr2 pi-mgr2.docker.cafe
192.168.93.233 pi-mgr3 pi-mgr3.docker.cafe
192.168.93.241 pi-node1 pi-node1.docker.cafe
192. Continue reading
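The hostname entries follow a simple pattern, so a short script can generate them for a whole group of nodes (a sketch; the 192.168.93.x addresses and the docker.cafe domain match the example list, but adjust them for your own network):

```shell
# Generate /etc/hosts-style entries for the manager nodes.
# In practice you would append this output to /etc/hosts on each Pi.
base=192.168.93
i=1
for host in pi-mgr1 pi-mgr2 pi-mgr3; do
  echo "$base.$((230 + i)) $host $host.docker.cafe"
  i=$((i + 1))
done
```

The worker nodes (pi-node1 and so on, in the .241 range) can be generated the same way with a different offset.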

Split Tunneling with vpnc

vpnc is a fairly well-known VPN connectivity package available for most Linux distributions. Although the vpnc web site describes it as a client for the Cisco VPN Concentrator, it works with a wide variety of IPSec VPN solutions. I’m using it to connect to a Palo Alto Networks-based solution, for example. In this post, I’d like to share how to set up split tunneling for vpnc.

Split tunneling, as explained in this Wikipedia article, allows remote users to access corporate resources over the VPN while still accessing non-corporate resources directly (as opposed to having all traffic routed across the VPN connection). Among other things, split tunneling allows users to access things on their home LAN—like printers—while still having access to corporate resources. For users who work 100% remotely, this can make daily operations much easier.

vpnc does support split tunneling, but setting it up doesn’t seem to be very well documented. I’m publishing this post in an effort to help spread information on how it can be done.

First, go ahead and create a configuration file for vpnc. For example, here’s a fictional configuration file:

IPSec gateway vpn.company.com
IPSec ID VPNGroup
IPSec secret donttellanyone
Xauth username bobsmith

Continue reading
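To give a sense of where this is headed: split tunneling with vpnc is typically enabled by pointing the configuration at a wrapper script (via a `Script` line) that defines the `CISCO_SPLIT_INC` variables honored by the stock vpnc-script. The sketch below is illustrative, not the post's exact file; the path and the 10.0.0.0/8 corporate network are assumptions:

```shell
#!/bin/sh
# Hypothetical /etc/vpnc/custom-script, referenced from the vpnc
# configuration with a line like:  Script /etc/vpnc/custom-script
#
# Route only 10.0.0.0/8 across the tunnel; all other traffic keeps
# using the local default route.
CISCO_SPLIT_INC=1
CISCO_SPLIT_INC_0_ADDR=10.0.0.0
CISCO_SPLIT_INC_0_MASK=255.0.0.0
CISCO_SPLIT_INC_0_MASKLEN=8
export CISCO_SPLIT_INC CISCO_SPLIT_INC_0_ADDR \
  CISCO_SPLIT_INC_0_MASK CISCO_SPLIT_INC_0_MASKLEN

# Hand off to the distribution's stock vpnc-script, which reads the
# variables above and installs the corresponding routes. vpnc sets
# $reason when it invokes the script, so only chain when it is present.
if [ -n "${reason:-}" ] && [ -r /etc/vpnc/vpnc-script ]; then
  . /etc/vpnc/vpnc-script
fi
```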

Advanced AMI Filtering with JMESPath

I recently had a need to do some “advanced” filtering of AMIs returned by the AWS CLI. I’d already mastered the use of the --filters parameter, which let me greatly reduce the number of AMIs returned by aws ec2 describe-images. In many cases, using filters alone got me what I needed. In one case, however, I needed to be even more selective in returning results, and this led me to some (slightly more) complex JMESPath queries than I’d used before. I wanted to share them here for the benefit of my readers.

What I’d been using before was a command that looked something like this:

aws ec2 describe-images --owners 099720109477 \
--filters Name=name,Values="*ubuntu-xenial-16.04*" \
Name=virtualization-type,Values=hvm \
Name=root-device-type,Values=ebs \
Name=architecture,Values=x86_64 \
--query 'sort_by(Images,&CreationDate)[-1].ImageId'

The part after --query is a JMESPath query that sorts the results, returning only the ImageId attribute of the most recent result (sorted by creation date). In this particular case, this works just fine—it returns the most recent Ubuntu Xenial 16.04 LTS AMI.

Turning to Ubuntu Bionic 18.04, though, I found that the same query didn’t return the result I needed. In addition to the regular builds of 18.04, Canonical apparently also builds EKS Continue reading
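One way to be more selective (a sketch, not necessarily the original post's exact query) is to exclude the EKS builds by name with a JMESPath filter expression before sorting:

```shell
# Same filters as before, but for Bionic; the JMESPath query now drops
# any image whose Name contains "eks" before sorting by creation date.
aws ec2 describe-images --owners 099720109477 \
  --filters Name=name,Values="*ubuntu-bionic-18.04*" \
    Name=virtualization-type,Values=hvm \
    Name=root-device-type,Values=ebs \
    Name=architecture,Values=x86_64 \
  --query 'sort_by(Images[?!contains(Name, `eks`)], &CreationDate)[-1].ImageId'
```

Running this requires AWS credentials; the owner ID is Canonical's, as in the earlier command.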

Technology Short Take 111

Welcome to Technology Short Take #111! I’m a couple weeks late on this one; wanted to publish it earlier but work has been keeping me busy (lots and lots of interest in Kubernetes and cloud-native technologies out there!). In any event, here you are—I hope you find something useful for you!

Networking

Servers/Hardware

containerd Graduates Within the CNCF

We are happy to announce that as of today, containerd, an industry-standard runtime for building container solutions, graduates within the CNCF. The successful graduation demonstrates containerd has achieved the maturity, stability and community acceptance required for broad ecosystem adoption. containerd has already been deployed in tens of millions of production systems today, making it the most widely adopted runtime and an essential upstream component of the Docker platform. containerd was donated to the CNCF as a top-level project because of its strong alignment with Kubernetes, gRPC and Prometheus and is the fifth project to make it to this tier. Built to address the needs of modern container platforms like Docker Enterprise and orchestration systems like Kubernetes, containerd ensures users have a consistent dev to ops experience.

From Docker’s initial announcement that it was spinning out its core runtime to its donation to the CNCF in March 2017, the containerd project has experienced significant growth and progress over the last two years. The primary goal of Docker’s donation was to foster further innovation in the container ecosystem by providing a core container runtime that could be leveraged by container system vendors and orchestration projects such as Kubernetes, Swarm, Continue reading

Thoughts on VPNs for Road Warriors

A few days ago I was talking with a few folks on Twitter and the topic of using VPNs while traveling came up. For those that travel regularly, using a VPN to bypass traffic restrictions is not uncommon. Prompted by my former manager Martin Casado, I thought I might share a few thoughts on VPN options for road warriors. This is by no means a comprehensive list, but hopefully something I share here will be helpful.

There were a few things I wanted to share with readers:

  • I found commercial VPN services too unreliable (it’s not uncommon for commercial VPN services to get blocked and thus defeat the purpose of using a VPN).
  • Instead, I’ve found more success using something like AutoVPN. AutoVPN helps you stand up an on-demand OpenVPN endpoint on AWS. I used this successfully in Beijing, setting up endpoints in Seoul, Singapore, Tokyo, and sometimes Sydney. Because these IP addresses are “ephemeral,” they’re far less likely to be blocked. (Here’s another example of using AWS as a personal VPN service.)
  • I also had success using AutoVPN not to get around traffic restrictions, but to change my source IP. My wife needed to access some real Continue reading

Docker’s 6th Birthday: How do you #Docker?

 

 

Docker is turning 6 years old! Over the years, Docker Community members have found some amazing and innovative ways of using Docker technology and we’ve been blown away by all the use-cases we’ve seen from the community at DockerCon. From Docker for Space where NASA used Docker technology to build software to deflect asteroids to using “gloo” to glue together traditional apps, microservices and serverless, you all continue to amaze us year over year.

So this year, we want to celebrate you! From March 18th to the 31st, Docker User Groups all over the world will be hosting local birthday show-and-tell celebrations. Participants will each have 10-15 minutes of stage time to present how they’ve been using Docker. Think of these as lightning talks – your show-and-tell doesn’t need to be polished and it can absolutely be a fun hack and/or personal project. Everyone who presents their work will get a Docker Birthday #6 t-shirt and have the opportunity to submit their Docker Birthday Show-and-tell to present at DockerCon.

Are you new to Docker? Not sure you’d like to present? No worries! Join in the fun and come along to listen, learn, add to your sticker collection and Continue reading

Docker Desktop Enterprise Preview: Version Packs

This is the first in a series of articles we are publishing to provide more details on Docker Desktop Enterprise, which we announced at DockerCon Barcelona. Keep up with the latest Docker Desktop Enterprise news and release updates by signing up for the Docker Desktop Enterprise announcement list.

Docker’s engineers have been hard at work completing features and getting everything in ship-shape (pun intended) following our announcement of Docker Desktop Enterprise, a new desktop product that is the easiest, fastest and most secure way to develop production-ready containerized applications and the easiest way for developers to get Kubernetes running on their own machine.

In the first post of this series I want to highlight how we are working to bridge the gap between development and production with Docker Desktop Enterprise using our new Version Packs feature. Version Packs let you easily swap your Docker Engine and Kubernetes orchestrator versions to match the versions running in production on your Docker Enterprise clusters. For example, imagine you have a production environment running Docker Enterprise 2.0. As a developer, in order to make sure you don’t use any APIs or incompatible features that will break when you push an application to production Continue reading

Kubernetes, Kubeadm, and the AWS Cloud Provider

Over the last few weeks, I’ve noticed quite a few questions appearing in the Kubernetes Slack channels about how to use kubeadm to configure Kubernetes with the AWS cloud provider. You may recall that I wrote a post about setting up Kubernetes with the AWS cloud provider last September, and that post included a few snippets of YAML for kubeadm config files. Since I wrote that post, the kubeadm API has gone from v1alpha2 (Kubernetes 1.11) to v1alpha3 (Kubernetes 1.12) and now v1beta1 (Kubernetes 1.13). The changes in the kubeadm API result in changes in the configuration files, and so I wanted to write this post to explain how to use kubeadm 1.13 to set up a Kubernetes cluster with the AWS cloud provider.

I’d recommend reading the previous post from last September first. In that post, I listed four key configuration items that are necessary to make the AWS cloud provider work:

  1. Correct hostname (must match the EC2 Private DNS entry for the instance)
  2. Proper IAM role and policy for Kubernetes control plane nodes and worker nodes
  3. Kubernetes-specific tags on resources needed by the cluster
  4. Correct command-line flags added to the Kubernetes API server, controller Continue reading
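As a rough sketch of how item 4 is handled with the newer API, the cloud-provider flags are supplied through kubeadm's v1beta1 config file along these lines (field names per the v1beta1 API; treat this as illustrative rather than the post's exact configuration):

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: aws
controllerManager:
  extraArgs:
    cloud-provider: aws
```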

Ansible Community Update — February 2019

Ansible is a popular project by many metrics, including over 42,000 commits on GitHub. Our community contributes a lot of pull requests (PRs) every month. Unfortunately, the volume of incoming PRs means contributors often have to wait days, weeks, or months for PRs to be merged. Sometimes it takes that long for a cursory review. We want to change that, but we need your help!

The Core team and community at large are kicking off new initiatives under the contributor experience umbrella. The idea is to help address causes that slow down quality PRs from being merged into Ansible's codebase.

To help with this, we are dedicating one day a month to doing a community review. The goals we are setting for these meetings are:

  • Give potential new community members a place to learn and experiment with Ansible's review process and exchange feedback

  • Identify process and documentation improvements via feedback provided from the Ansible community

  • Give PRs needed attention; remove blockers where necessary

  • Identify PRs that could be merged or closed

We’re particularly interested in feedback from people starting their journey with open source as it helps us to improve our processes and documentation. It’s helpful to have new contributors Continue reading

We’ve Got ❤️ For Our First Batch of DockerCon Speakers

As the world celebrates Valentine’s Day, at Docker, we are celebrating what makes our heart all aflutter – gearing up for an amazing DockerCon with the individuals and organizations that make up the Docker community. With that, we are thrilled to announce our first speakers for DockerCon San Francisco, April 29 – May 2.

DockerCon fan favorites like Liz Rice, Bret Fisher and Don Bauer are returning to the conference to share new insights and experiences to help you better learn how to containerize.

And we are excited to welcome new speakers to the DockerCon family including Ana Medina, Tommy Hamilton and Ian Coldwater to talk chaos engineering, building your production container platform stack and orchestration with Docker Swarm and Kubernetes. 

And we’re just getting started! This year DockerCon is going to bring more technical deep dives, practical how-to’s, customer case studies and inspirational stories. Stay tuned as we announce the full speaker line up this month.

Register Now

 



Docker Security Update: CVE-2019-5736 and Container Security Best Practices

On Monday, February 11, Docker released an update to fix a privilege escalation vulnerability (CVE-2019-5736) in runC, the Open Container Initiative (OCI) container runtime used by Docker Engine and containerd. This vulnerability makes it possible for a malicious actor that has created a specially-crafted container image to gain administrative privileges on the host. Docker engineering worked with the runC maintainers to issue a patch for this vulnerability.

Docker recommends immediately applying the update to avoid any potential security threats. For Docker Engine Community, this means updating to 18.09.2 or 18.06.2. For Docker Engine Enterprise, this means updating to 18.09.2, 18.03.1-ee-6, or 17.06.2-ee-19. Read the release notes before applying the update, as there are specific instructions for Ubuntu and RHEL operating systems.

Summary of the Docker Engine versions that address the vulnerability:

 

  • Docker Engine Community: 18.09.2, 18.06.2
  • Docker Engine Enterprise: 18.09.2, 18.03.1-ee-6, 17.06.2-ee-19
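As a quick sanity check, you can compare a running Engine version against the patched release with a version-aware sort (a sketch assuming GNU `sort -V`; on a live host you would get the installed version from `docker version --format '{{.Server.Version}}'` rather than hard-coding it):

```shell
# Compare an Engine version (hard-coded here for illustration) against
# the Community release that contains the runC fix.
installed=18.09.1
required=18.09.2
# sort -V orders version strings; if the lowest of the two is the
# required version, the installed one is already at or above it.
if [ "$(printf '%s\n' "$installed" "$required" | sort -V | head -n1)" = "$required" ]; then
  echo "patched"
else
  echo "update required"
fi
```

With installed=18.09.1 as above, this prints "update required".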

To better protect the container images run by Docker Engine, here are some additional recommendations and best practices:

Use Docker Official Images

Official Images are a curated set of Docker repositories hosted on Docker Hub that are designed to:

How to run stunnel on your Android device

Overview

In this post we’re going to talk about how to run the amazing stunnel program on your Android device, and do so properly!

Later, this will allow us to set up a lot of cool things, like:

  • Wrapping OpenVPN traffic with it
  • Using it as a SOCKS VPN
  • Adding proper IMAPS/SMTPS support to our old email apps

For this, we’re not going to use the old and very limited SSLDroid. It’s a bad idea, and I don’t know why various sites still keep recommending it. It almost certainly has unpatched vulnerabilities. Please don’t use it.

Instead, we are going to use the official stunnel program, with the help of a proper wrapper.

stunnel Android binary

stunnel already supports Android devices, and a compiled version is available on its download page.

This file is compiled for the ARM architecture. Even though most Android devices run on ARM, this is particularly important to note for devices that do not (e.g., Android-x86).

Since we’ll be using the compiled binary, you may need to compile stunnel yourself for your specific Android architecture before continuing. Chances are, though, that your device is running on ARM and you are ready Continue reading
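As a preview of the IMAPS use case, a minimal client-mode stunnel.conf looks something like the following (the hostname and ports are hypothetical; see the stunnel documentation for the full set of options):

```ini
; Run as a TLS client: accept plaintext locally, wrap it in TLS upstream.
client = yes

[imaps]
; The email app connects to this local port over plain IMAP...
accept = 127.0.0.1:1430
; ...and stunnel forwards it over TLS to the real IMAPS server.
connect = imap.example.com:993
```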

Ansible Operator: What is it? Why it Matters? What can you do with it?

The Red Hat Ansible Automation and Red Hat OpenShift teams have been collaborating to build a new way to package, deploy, and maintain Kubernetes native applications: Ansible Operator. Given the interest in moving workloads to Kubernetes, we are happy to introduce a new tool that can help ease the move toward cloud native infrastructure.

What is Kubernetes? The simplest definition of Kubernetes I’ve ever used is, “Kubernetes is a container orchestrator.” But that is a simplified definition.

What is OpenShift? Red Hat OpenShift Container Platform is an enterprise-grade Kubernetes distribution. It enables management of container applications across hybrid cloud and multicloud infrastructure.

First, let’s identify the problem operators can help us solve. Operators simplify the deployment, management, and operation of stateful applications in Kubernetes. But writing an operator today can be difficult because of the knowledge of Kubernetes components required to do so. The Operator SDK is a framework that uses the controller-runtime library to make writing operators simpler. The SDK enables Operator development in Go, Helm, or Ansible.

Why It Matters

What can an Ansible Operator give us that a generic operator doesn’t? The same things Ansible can give its users: a lower barrier to entry, faster iterations, Continue reading

Deep Dive on cli_command for Network Automation

In October, Ansible 2.7 was released and brought us two powerful agnostic network modules, cli_command and cli_config. Do you have two or more network vendors within your environment? The goal of agnostic modules is to simplify Ansible Playbooks for network engineers who deal with a variety of network platforms. Rather than dealing with platform-specific modules (e.g. eos_config, ios_config, junos_config), you can now use cli_command or cli_config to reduce the number of tasks and conditionals within a playbook and make the playbook easier to use. This post will demonstrate how to use these modules and contrast them with platform-specific modules. I’ll show some playbook examples and common use cases to help illustrate how you can use these new platform-agnostic modules.

Both cli_command and cli_config work only with the network_cli connection plugin. For those unfamiliar with the network_cli connection plugin, check out this blog post I wrote last April. The goal of network_cli is to make playbooks look, feel, and operate on network devices the same way Ansible works on Linux hosts.

What can you do with the cli_command?

The cli_command module allows you to run arbitrary commands on network devices. Let’s show a simple Continue reading
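A minimal playbook using cli_command looks something like this (a sketch; the routers inventory group and the registered variable name are illustrative, and the hosts must use the network_cli connection):

```yaml
---
- name: Run an arbitrary command with cli_command
  hosts: routers
  gather_facts: no
  connection: network_cli

  tasks:
    - name: Gather version information
      cli_command:
        command: show version
      register: version_output

    - name: Display the command output
      debug:
        var: version_output.stdout_lines
```

The same task works unchanged across vendors, which is exactly the point of the agnostic modules: only the command string (and the inventory's platform settings) varies per device type.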

Scraping Envoy Metrics Using the Prometheus Operator

On a recent customer project, I recommended the use of Heptio Contour for ingress on their Kubernetes cluster. For this particular customer, Contour’s support of the IngressRoute CRD and the ability to delegate paths via IngressRoutes made a lot of sense. Of course, the customer wanted to be able to scrape metrics using Prometheus, which meant I not only needed to scrape metrics from Contour but also from Envoy (which provides the data plane for Contour). In this post, I’ll show you how to scrape metrics from Envoy using the Prometheus Operator.

First, I’ll assume that you’ve already installed and configured Prometheus using the Prometheus Operator, a task which is already fairly well-documented and well-understood. If this is something you think would be helpful for me to write a blog post on, please contact me on Twitter and let me know.

The overall process looks something like this:

  1. Modify the Envoy DaemonSet (or Deployment, depending on your preference) to add a sidecar container and expose additional ports.
  2. Modify the Service for the Envoy DaemonSet/Deployment to expose the ports you added in step 1.
  3. Add a ServiceMonitor object (a CRD added by the Prometheus Operator) to tell Prometheus to scrape Continue reading
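The ServiceMonitor from step 3 ends up looking roughly like this (a sketch; the names, namespaces, and labels must match your Envoy Service and your Prometheus Operator's serviceMonitorSelector, so treat these values as placeholders):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: contour-envoy
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: envoy
  namespaceSelector:
    matchNames:
      - heptio-contour
  endpoints:
    # must match the name of the metrics port exposed on the Service in step 2
    - port: metrics
      interval: 30s
```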

Technology Short Take 110

Welcome to Technology Short Take #110! Here’s a look at a few of the articles and posts that have caught my attention over the last few weeks. I hope something I’ve included here is useful for you also!

Networking

  • Via Kirk Byers (who is himself a fantastic resource), I read a couple of articles on network automation that I think readers may find helpful. First up is a treatise from Mircea Ulinic on whether network automation is needed. Next is an older article from Patrick Ogenstad that provides an introduction to ZTP (Zero Touch Provisioning).
  • The folks over at Cilium took a look at a recent CNI benchmark comparison and unpacked it a bit. There’s some good information in their article.
  • I first ran into Forward Networks a few years ago at Fall ONUG in New York. At the time, I was recommending that they explore integration with NSX. Fast-forward to this year, and the company announces support for NSX and (more recently) support for Cisco ACI. The recent announcement of their GraphQL-based Network Query Engine (NQE)—more information is available in this blog post—is also pretty interesting to me.

Servers/Hardware

How I lost my data trying to back it up

My data, my precious data… is gone.

Overview

This is a story about how I lost my data when trying to prevent it by backing it up.

Even though there were numerous other factors in play, I take full responsibility for what happened.

I have written this article hoping that it can save someone else from the same situation. No one should ever experience the loss of their data.

7 days and 10 hours ago:

Alright, it’s time for the offline backup routine again.

When it comes to backing up the full OS, I don’t believe in online backup solutions. This is especially true when I’m dealing with Windows servers. These kinds of backups should serve as a disaster recovery solution, and I’m not taking any chances.

So I always take them offline, outside of the OS, using a third-party program to boot up the servers.

For years I have been using Paragon Software. Their products are great and reliable, and I have nothing bad to say about them.

This time, however, I thought maybe it’s time to ditch commercial software and use the solid ntfs-3g suite instead. Back when I started using Paragon, I didn’t even know what Linux was, Continue reading
