Archive

Category Archives for "blog.scottlowe.org"

Liveblog: DockerCon 2015 Day 1 General Session

This is a liveblog for the day 1 general session at DockerCon 2015, taking place this week (today and tomorrow, anyway) at the Marriott Marquis in San Francisco, CA. This is my first DockerCon, and I’m looking forward to picking up lots of new knowledge.

The general session starts with a video (cartoon) about something working in development but not in production, and how Solomon Hykes came up with the idea for containers and Docker. It’s a humorous, tongue-in-cheek production. As the video wraps up, Docker CEO Ben Golub takes the stage.

Golub starts with a personal story about the various startups for which he’s worked, and the importance of his “two fold test” (that it has global significance and that it is easy to explain when you go home for Thanksgiving). Maybe the Thanksgiving test didn’t quite make it, but Golub does think (naturally) that Docker has global significance. Golub says that Docker has become a fundamental part of how companies build, ship, and run distributed applications, and that Docker is a key part of how industries and cultures are being transformed. He attributes this success to the Docker community and the Docker ecosystem. Rightfully so, Golub credits the Continue reading

Automatic Ansible Inventory with Vagrant

Yesterday, I posted about using Vagrant to learn Ansible, in which I showed you one way to combine these two tools to make it easier to learn Ansible. This is a combination I’m currently using as I continue to explore Ansible. Today, I’m going to expand on yesterday’s post by showing you how to make Vagrant automatically build an Ansible inventory for a particular Vagrant environment.

As you may already know, the Vagrantfile that Vagrant uses to instantiate and configure the VMs in a particular Vagrant environment is just Ruby. As such, it can be extended in a lot of different ways to do a lot of different things. In my case, I’ve settled on a design pattern that involves a separate YAML file with all the VM-specific data, which is read by the Vagrantfile when the user runs vagrant up. The data in the YAML file determines how many VMs are instantiated, what box is used for each VM, and the resources that are allocated to each VM. This is a design pattern I’ve used repeatedly in my GitHub “learning-tools” repository, and it seems to work pretty well (for me, at least).
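
To make the pattern a bit more concrete, here is a rough sketch of what such a data file might look like; the file name and keys below are illustrative rather than the exact schema from the repository:

cat > machines.yml <<'EOF'
- name: ansible-test-01
  box: ubuntu/trusty64
  ram: 1024
  vcpus: 1
- name: ansible-test-02
  box: ubuntu/trusty64
  ram: 512
  vcpus: 1
EOF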

Using this arrangement, since I Continue reading

Using Vagrant to Help Learn Ansible

I’ve been spending some time with Ansible recently, and I have to say that it’s really growing on me. While Ansible doesn’t have a steep learning curve, there is still a learning curve—albeit a smaller/less steep curve—so I wanted to share here a “trick” that I found for using Vagrant to help with learning Ansible. (I say “trick” here because it isn’t that this is complicated or undocumented, but rather that it may not be immediately obvious how to combine these two.)

Note that this is not to be confused with using Ansible from within Vagrant as a provisioner; that’s something different (see the Vagrant docs for more information on that use case). What I’m talking about is having a setup where you can easily explore how Ansible works and iterate through your playbooks using a Vagrant-managed VM.

Here are the key components:

  1. You’ll need a Vagrant environment (you know, a working Vagrantfile and any associated support files).
  2. You’ll need Ansible installed on the system where you’ll be running Vagrant and the appropriate back-end virtualization platform (I tested this with VMware Fusion, but there’s nothing VMware-specific here).
  3. In the same directory as the Vagrantfile, you’ll need an Continue reading
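
To give a rough idea of where this ends up, once the Vagrant VM is up you can point Ansible at it using the SSH details that Vagrant itself reports. The values below are placeholders; pull the real port, user, and key path from the vagrant ssh-config output:

vagrant ssh-config

cat > inventory <<'EOF'
[vagrant]
127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant ansible_ssh_private_key_file=<key-path-from-ssh-config>
EOF

ansible -i inventory all -m ping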

Some Cumulus Linux Networking Concepts

As I’ve recently had the opportunity to start working with Cumulus Linux (running on a Dell S6000-ON switch), in this post I wanted to share a few concepts I’ve learned about networking with Cumulus Linux.

I’m not a networking guru, but I’m also not new to configuring network equipment—I’ve configured GRE tunnels on a Cisco router, set up link-state tracking, and enabled jumbo frames on a Nexus 5000 (to name a few examples). I’ve worked with Cisco gear, HP equipment, Dell PowerConnect switches, and Arista EOS-powered switches. However, as a full distribution of Linux, networking with Cumulus Linux is definitely different from your typical network switch. To help make the transition easier, I’ll share here a few things I’ve learned so far.

It’s important to understand that Cumulus Linux isn’t just a “Linux-based network OS”—it’s actually a full Linux distribution (based on Debian). Lots of products are Linux-based these days, but often hide the full power of Linux behind some sort of custom command-line interface (CLI) or shell. Not so in this case! I think this fact is perhaps a bit easy to overlook, but it shapes everything that happens in Cumulus Linux:
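
To give a flavor of what that means in practice, the same tooling you’d use on any Debian system works directly on the switch. Here’s a hedged sketch (swp1 is a typical Cumulus front-panel port name):

ip link show swp1             # inspect a front-panel port like any other Linux interface
sudo ip link set swp1 up      # bring it up with plain iproute2
cat /etc/network/interfaces   # persistent configuration lives in the usual Debian file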

Rubrik and Converged Data Management

Rubrik today announced a new Series B investment (of $41 million) and introduced their r300 Series Hybrid Cloud Appliance, powered by what they’re touting as a “Converged Data Management” platform. Wow—that’s a mouthful, isn’t it? It sounds a bit like buzzword bingo, but after having spent a bit of time talking to Rubrik last week, there are some interesting (in my opinion) things going on here.

So what exactly is Rubrik doing? Here’s the “TL;DR” for those of you that don’t have the patience (or the time) for anything more in-depth: Rubrik is targeting the secondary storage and backup/recovery market with a solution that combines a distributed file system, a distributed metadata service, clustering, and a distributed task scheduler to provide a scale-out backup/recovery solution that also seamlessly integrates cloud storage platforms for long-term retention. The catch-phrase they’re using is “Time Machine for cloud infrastructure” (I wonder how our good friends in Cupertino will react to the use of that phrase?).

Here’s a bit more detail on the various components of the solution:

  • Rubrik has its own distributed file system (imaginatively named the Rubrik Cloud-Scale File System) that was designed from scratch to store and manage versioned data. The Continue reading

Bootstrapping Servers into Ansible

As part of a lab rebuild I’ve been doing over the last few weeks (funny how hardware failures can lead to a lab rebuild), I’ve been expanding the use of Ansible for configuration automation. In this post, I’m going to share the process I’ve created for bootstrapping newly-built servers into Ansible.

I developed this Ansible bootstrapping process to work in conjunction with the fully automated Ubuntu installation method that I described in an earlier post. The idea is that I would be able to boot a new server (virtual or physical), choose a configuration from the PXE menu, and a few minutes later have a built Ubuntu system. Then, with a single command, I could “bootstrap” the server into an Ansible configuration automation system. This latter part—configuring systems to work with Ansible—is what I’ll be describing here.

First, a (very) brief overview of Ansible. Ansible is a configuration automation tool that leverages standard SSH connections to remote devices in order to perform its work. Ansible is agentless, so no software has to be pre-installed on the managed servers, but this means Ansible has to authenticate against remote systems in order to establish these SSH connections. This authentication should, in ideal Continue reading
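
The general shape of the bootstrap is pushing an SSH public key out to the newly built server and then confirming that Ansible can authenticate with it. This is only a sketch of the idea, not the exact commands from my process; the user name, host name, and key paths are placeholders:

ssh-copy-id -i ~/.ssh/ansible_key.pub ubuntu@newserver.example.com

echo 'newserver.example.com' > bootstrap-inventory
ansible -i bootstrap-inventory all -u ubuntu --private-key ~/.ssh/ansible_key -m ping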

Technology Short Take #51

Welcome to Technology Short Take #51, another collection of posts and links about key data center technologies like networking, virtualization, cloud management, and applications/operating systems. Here’s hoping you find something useful in this collection!

Networking

  • I’m not sure if this falls here or into the “Cloud Computing/Cloud Management” category, but Shannon McFarland—fellow co-conspirator with the Denver OpenStack Meetup group—has a nice article describing some design and deployment considerations for IPv6 in the OpenStack Kilo release.
  • I’m pretty sure I’ve mentioned Open Virtual Network (OVN) here before, as I’m pretty jazzed about the work going on with this project. If you’re unfamiliar with OVN, Gal Sagie has a couple of articles that might help. I’d start with the later of the two articles, which provides an introduction to OVN, before moving on to Gal’s discussion of OVN and the distributed controller and his article on OVN and containers.
  • Speaking of OVN, Russell Bryant has a detailed description of using OVN with OpenStack Neutron (via DevStack).
  • Using Jinja2 templates for automating network device configuration is a topic that’s getting a fair amount of attention (there were at least two sessions discussing this technique while I was at Interop). Rick Sherman has Continue reading

Using an Apt Proxy

In this post I’ll show you how to use apt-cacher-ng as an Apt proxy for Ubuntu systems on your network. I’m sure there are a lot of other resources that also provide this information, but I’m including it here for the sake of completeness and making it as easy as possible for others. Using an Apt proxy will help reduce the traffic coming from your network to external repositories, and it is simpler and easier than running your own internal repository or mirror.

This isn’t the first time I’ve discussed apt-cacher-ng; almost two years ago I showed you how to use Puppet to configure Ubuntu to use apt-cacher-ng. This post focuses on the manual configuration of an Apt proxy.

On the server side, setting up an Apt proxy is as simple as one command:

apt-get install apt-cacher-ng

I’m sure there are some optimizations or advanced configurations, but this is enough to get the Apt proxy up and running.

On the client side, there are a couple of ways to configure the system. You could use a tool like Puppet (as described here), or manually configure the system. If you choose manual configuration, you can place the configuration in either /etc/apt/apt. Continue reading
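
For the manual route, the client-side change boils down to a single Apt configuration directive; apt-cacher-ng listens on port 3142 by default, and the snippet file name and proxy hostname below are just examples:

echo 'Acquire::http::Proxy "http://apt-proxy.example.com:3142";' | sudo tee /etc/apt/apt.conf.d/01proxy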

Building a Fully Automated Ubuntu Installation Process

Recently on Twitter, I mentioned that I had managed to successfully create a fully automated process for installing Ubuntu Server 14.04.2, along with a method for bootstrapping Ansible. In this post, I’m going to describe the installation process I built and the components that went into making it work. I’ll discuss the Ansible bootstrap process in a separate post. I seriously doubt that there is anything new or unique here, but hopefully this information will prove helpful to others facing similar challenges.

Before I continue, allow me to briefly discuss why I didn’t use a system like Cobbler instead of putting together my own system. Cobbler is a great tool. For me, though, this was also about deepening my own knowledge. I wanted to better understand the various components involved and how they interacted, and I didn’t feel I would really be able to do that with a “prebuilt” system like Cobbler. If you are more interested in getting something up and running as opposed to learning more about how it works (and that’s OK), then I’d recommend you skip this post and go download Cobbler. If, on the other hand, you want to make this into more Continue reading

Using PXE with virt-install

In this post, I’ll just share a quick command that can be used to build and install a KVM guest using PXE instead of an ISO image. There’s nothing new here; this is just me documenting a command so that it’s easier for me (and potentially others) to find next time I need it.

I shared how to use the virt-install command to build KVM guest domains in a blog post talking about working with KVM guests. In that post, I used an ISO image with the virt-install command to build the guest domain.

However, there may be times when you would prefer to use PXE instead of an ISO image. To build a KVM guest domain and instruct the guest domain to boot via PXE, you would use this command (I’ve inserted backslashes and line returns to improve readability):

sudo virt-install --name=guest-name --ram=2048 --vcpus=1 \
--disk path=/var/lib/libvirt/images/guest-disk.qcow2,bus=virtio \
--pxe --noautoconsole --graphics=vnc --hvm \
--network network=net-name,model=virtio \
--os-variant=ubuntuprecise

The key here is the --pxe parameter, which virt-install uses to instruct the guest domain to PXE boot instead of booting from a virtual CD-ROM backed by an ISO image.

Naturally, you’d want to substitute the desired values for the KVM Continue reading
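
Once the command returns, it’s easy enough to confirm that the guest domain exists and watch it boot over the network. As a quick sketch:

virsh list --all          # confirm the new guest domain was created
virsh console guest-name  # attach to the serial console (if one is configured) to watch the PXE boot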

A Quick Introduction to LXD

With the recent release of Ubuntu 15.04, aka “Vivid Vervet”, the Ubuntu community has also unveiled an early release of LXD (pronounced “lex-dee”), a new project aimed at revitalizing the use of LXC and LXC-based containers in the face of application container efforts such as Docker and rkt. In this post, I’ll provide a quick introduction to LXD.

To make it easier to follow along with some of the examples of using LXD, I’ve created an lxd directory in my GitHub “learning-tools” repository. In that directory, you’ll find a Vagrantfile that will allow you to quickly and easily spin up one or more VMs with LXD.

Relationship between LXD and LXC

LXD works in conjunction with LXC and is not designed to replace or supplant LXC. Instead, it’s intended to make LXC-based containers easier to use through the addition of a back-end daemon supporting a REST API and a straightforward CLI client that works with both the local daemon and remote daemons via the REST API. You can get more information about LXD via the LXD web site. Also, if you’re unfamiliar with LXC, check out this brief introduction to LXC. Once you’ve read that, you can browse some Continue reading
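
As a rough sketch of what the client workflow looks like (exact subcommands and image aliases may differ in this early release of LXD):

lxc launch images:ubuntu/trusty first   # create and start an LXC-based container
lxc list                                # list containers known to the daemon
lxc exec first -- /bin/bash             # run a command inside the container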

Interop Liveblog: The Post-Cloud

This session is titled “The Post-Cloud,” and the speaker is Nick Weaver, Director of SDI-X at Intel.

Nick starts his presentation with a summary of our society: some people produce goods through an effort, and others consume what is produced. Things have changed over the years that have affected this production-consumption model, but Nick quickly turns his focus to the use of machines in the production portion of this cycle. As production efficiency increased, the level of consumption also increased. This is especially true for computing machines, and how people consume the services/information produced by the computing machines.

This brings Nick around to a discussion of Jevons’ Paradox, which basically states that the increased efficiency of producing something actually leads to an increase in consumption, not a decrease of consumption.

So what does efficiency in technology look like? Technology enables things; by itself, it doesn’t really add value. Therefore, efficiency in technology means enabling more (or more powerful) things. Nick starts his discussion on technology efficiency with a discussion of DevOps, and what DevOps means. Although a number of technologies are involved to deal with the ever-increasing complexity and density that has emerged, DevOps is really about a culture change. Continue reading

Interop Liveblog: Thursday Cloud Connect Keynote

This is a liveblog of the Thursday morning Cloud Connect keynote at Interop 2015 in Las Vegas. The title of the presentation is “Doing it Live,” and the speaker is Jared Wray (@jaredwray on Twitter; he’s Cloud CTO and SVP of Platform at CenturyLink).

As the session kicks off, Wray shares that his presentation was drastically altered, a nod to the drastic changes that he is seeing at CenturyLink. He then shares a bit of background on him, his history in IT, and the events that brought him to CenturyLink. Wray then spends a few minutes talking about CenturyLink and CenturyLink’s services, which he insists “isn’t a product pitch” (it feels like one). The key tenets of CenturyLink’s offerings are that they are fully automated; they are programmable; and they are self service.

Wray points out that CenturyLink’s transformation to next generation platform services and containers requires that they also transform their operations (and people, though that is called out separately).

According to Wray, the blanket “move everything to the cloud” doesn’t work; enterprises must embrace a “cap and grow” strategy. This means not moving applications if there is no benefit (and also moving applications to maintenance mode until Continue reading

Techniques of a Network Detective

This is “Techniques of a Network Detective,” led by Denise “Fish” Fishburne (@DeniseFishburne on Twitter). Denise starts the session with a quick introduction, in which she discloses that she is a “troubleshooting junkie.” She follows up with a short description of what life looks like in her role in the customer proof-of-concept lab at Cisco.

Denise kicks off the main content of the session by drawing an analogy between solving crimes and solving network performance/behavior problems. The key is technique and methodology, which may sound boring but really have a huge payoff in the end.

When a network error occurs, the network is the crime scene. This crime scene is filled with facts, clues, evidence, and potential witnesses—or even potential suspects. How does one get from receiving notification of the problem, to asking the right questions, to solving the problem? Basically it boils down to these major areas:

  • First, identify the suspects (even if the problem seems immediately obvious). This involves gathering facts, collecting clues, following the evidence, and interviewing witnesses.
  • Next, question the suspects. Although you may not be an SME (subject matter expert), you can still work logically through gathering facts from the suspects.
  • After you Continue reading

Interop Liveblog: IPv6 Microsegmentation

This session was titled “IPv6 Microsegmentation,” and the speaker was Ivan Pepelnjak. Ivan is, of course, a well-known figure in the networking space, and publishes content at http://ipspace.net.

The session starts with a discussion of the problems found in Layer 2 IPv6 networks. Some of the problems include spoofing RA (Router Advertisement) messages, NA (Neighbor Advertisement) messages, DHCPv6 spoofing, DAD (Duplicate Address Detection) DoS attacks, and ND (Neighbor Discovery) DoS attacks. All of these attacks derive from the assumption that one subnet = one security zone, and therefore intra-subnet communications are not secured.

Note that some of these attacks are also common to IPv4 and are not necessarily unique to IPv6. The difference is that these problems are well understood in IPv4 and therefore many vendors have implemented solutions to mitigate the risks.

According to Ivan, the root cause of all these problems is that all LAN infrastructure today emulates 40-year-old thick coax cable.

The traditional fix is to add kludges….er, new features—like RA guard (prevents non-routers from sending RA messages), DHCPv6 guard (same sort of functionality), IPv6 ND inspection (same idea), and SAVI (Source Address Verification Inspection; complex idea where all these Continue reading

Running vSphere on AWS or GCE

By now you’ve probably seen or heard the news about Ravello Systems launching Inception—the ability to run nested VMware ESXi on AWS or GCE, including the ability to run VMs on these nested ESXi instances. (Here’s Ravello’s press release.)

In my opinion, this is pretty cool, and it opens the door to a lot of different possibilities: upgrade testing, automation testing, new feature testing, hosted home labs (aka “Lab as a Service”). Lots of folks are interested in using this new Ravello functionality for “Lab as a Service.” Here’s Andrea Mauro’s take on this topic.

As part of the pre-launch activities, a number of bloggers and community advocates were able to work with Ravello on some very interesting projects:

  • William Lam built both a 32-node VSAN cluster (running vSphere 5.5) as well as a 64-node VSAN cluster (running vSphere 6.0). He posted details here, along with a great walkthrough of setting up vSphere on Ravello.
  • Mike Preston built out an environment that allowed him to perform a vMotion from AWS to GCE.

I was also engaged with Ravello on a project: building a (reasonably) large-scale vSphere environment on Ravello. The original goal was to Continue reading

Technology Short Take #50

Welcome to Technology Short Take #50, the latest in my series of posts sharing various links and articles pertaining to key data center technologies. I hope that you find something useful here!

Networking

  • Tyler Christiansen recently published a post on a network automation workflow that was based on a presentation he gave at the SF Network Automation meetup. The workflow incorporates Ansible, git, Jenkins, and Gerrit. If you’re looking for more examples of how to incorporate these sorts of tools into your own network automation workflow, I’d recommend having a look at this article.
  • This post contains a link to a useful presentation on the essential parts of EVPN. It’s quite useful if you (like me) need an introduction to this technology.
  • Need to reset the CLI privileged mode password on your NSX Manager instance? Here’s a walkthrough. (Warning: as pointed out in the article, this is most likely not supported. Use at your own risk.)
  • This article by Russell Bryant is a great overview and update of the work going on with Open Virtual Network (OVN). I’m really excited about OVN and looking forward to seeing it develop and grow.
  • This is kind of cool, and (in my Continue reading

Ubuntu, cloud-init, and OpenStack Heat

In this post I’d like to share a couple of things I recently learned about the interaction between cloud-init and OpenStack Orchestration (aka “Heat”). This may be stuff that you already know, but in the interest of helping others who may not know I’m posting it here.

One issue that I’d been repeatedly running into was an apparent “failure” on the part of Heat to properly apply cloud-init configurations to deployed Ubuntu instances. So, using a Heat template with an OS::Nova::Server resource defined like this would result in an instance that apparently wasn’t reachable via SSH (I’d get back Permission denied (publickey)):

resources:
  instance0:
    type: OS::Nova::Server
    properties:
      name: cloud-init-test-01
      image: { get_param: image_id }
      flavor: m1.xsmall
      networks:
        - port: { get_resource: instance0_port0 }
      key_name: lab
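
For reference, a stack built from a template like this one would be launched with something along these lines (the stack name is arbitrary and the image UUID is a placeholder):

heat stack-create cloud-init-test -f cloud-init-test.yaml -P "image_id=<ubuntu-image-uuid>"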

Deploying an instance manually from the same image worked perfectly. So what was the deal?

The first thing I learned was that, in some circumstances (more on this in a moment), cloud-init defaults to injecting SSH keys (like the key named lab specified in the template) into a user account named “ec2-user”. Ah! I’d been using the default “ubuntu” account specified in Continue reading

Running Photon on Fusion via Vagrant

Most of you have probably picked up on the news of VMware’s new container-optimized Linux distribution, code-named “Photon”. (More details here.) In this post, I’m going to provide a very quick walkthrough on running Photon on VMware Fusion via Vagrant. This walkthrough will leverage a Vagrant box for Photon that has already been created.

To make things easier, I’ve added a photon directory to my GitHub “learning-tools” repository. Feel free to pull those files down to make it easier to follow along.

I assume that you’ve already installed Vagrant, VMware Fusion, and the Vagrant plugin for VMware. If you haven’t, you’ll want to complete those tasks—and verify that everything is working as expected—before proceeding.

  1. Install an additional Vagrant plugin that enables Vagrant to better detect and interact with Photon using this command:

     vagrant plugin install vagrant-guests-photon
    

    If you don’t install this plugin, you’ll likely get a non-fatal error about Vagrant being unable to perform the networking configuration. (Review the GitHub repository for this plugin if you want/need more details. Also, note that a PR against Vagrant to eliminate the need for this plugin was opened and merged; this fix should show up in a future release of Vagrant.)

  2. Continue reading
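
The remaining steps boil down to adding a Photon box and bringing up the VM with the VMware provider. As a hedged sketch (the box name here is illustrative; use whatever box the repository’s Vagrantfile references):

vagrant box add vmware/photon --provider vmware_fusion
vagrant up --provider vmware_fusion
vagrant ssh    # log into the running Photon VM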

Running an etcd-Backed Docker Swarm Cluster

In this post, I’ll build on a couple of previous posts and show you how to build your own Docker Swarm cluster that leverages etcd for cluster node discovery. This post builds on the information presented on how to run an etcd 2.0 cluster on Ubuntu as well as the information found in this post on running a Consul-backed Docker Swarm cluster.

To help you follow along, I’ve created a Vagrant environment that you can use to turn up the configuration described in this blog post. These files are found in the “docker-swarm-etcd” directory of my GitHub learning-tools repository. Feel free to use the files in this directory/repository to help with the learning process.

There are 3 major components to this configuration:

  1. A cluster of three Ubuntu 14.04 systems running etcd 2.0 (specifically, etcd 2.0.9, the last release as of this writing). This etcd cluster serves as a discovery back-end for Docker Swarm.
  2. A set of hosts running the Docker daemon (version 1.4.0 or higher, as required by Swarm). In this particular instance, I’m using CoreOS Linux (version 557.2.0 from the stable branch).
  3. A few containers running on the CoreOS hosts; Continue reading
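
To sketch how these pieces fit together (IP addresses and paths below are placeholders, and flag spellings may vary across Swarm releases):

# On each Docker host, register the node in etcd for Swarm to discover:
docker run -d swarm join --addr=<node-ip>:2375 etcd://10.1.1.11:2379/swarm

# On a management node, start a Swarm manager that reads cluster membership from etcd:
docker run -d -p 3375:2375 swarm manage etcd://10.1.1.11:2379/swarm

# Point a Docker client at the Swarm manager to work with the cluster as a whole:
docker -H tcp://<manager-ip>:3375 info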