Technology Short Take #81

Welcome to Technology Short Take #81! I have another collection of links, articles, and thoughts about key data center technologies, and hopefully I’ve managed to include something here that will prove useful or thought-provoking. Enjoy!

Networking

The Linux Migration: Corporate Collaboration, Part 3

In discussing support for corporate communication and collaboration systems as part of my Linux migration, I’ve so far covered e-mail in part 1 and calendaring in part 2. In this post, I’m going to discuss the last few remaining aspects of corporate collaboration: instant messaging/chat, meetings and teleconferences, and document sharing.

Teleconferences and meetings

The topic of teleconferences and meetings is closely related to calendaring—it’s often necessary to access your calendar or others’ calendars when coordinating meetings or teleconferences—so I encourage you to read part 2 to get a better feel for the challenges around calendaring/scheduling. All the same challenges from that post apply here. GNOME Calendar, although it offers basic Exchange Web Services (EWS) support, does not support meeting invitations, looking up attendees, free/busy information, etc. This makes it completely unusable for setting up meetings. Evolution, which supplies the EWS backend that GNOME Calendar uses, may fare better as a frontend, but I haven’t tested this functionality so I can’t say for certain. The EWS provider for Lightning does support free/busy information, inviting attendees, etc., so it may be a good option (I’m still testing it).

The second aspect of teleconferences/meetings is the actual conduct of the meeting itself. Hosting Continue reading

Easily Finding the Latest CoreOS AMI ID

It seems as if finding the right Amazon Machine Image (AMI) ID for the workload you’d like to deploy can sometimes be a bit of a challenge. Each combination of region and AMI produces a unique ID, so you have to look up the AMI for the particular region where you’re going to deploy the workload. This in and of itself wouldn’t be so bad, but then you have to wade through multiple versions of the same AMI in each region. Fortunately, if you’re using CoreOS Container Linux on AWS, there’s an easy way to find the right AMI ID. Here’s how it works.

CoreOS publishes a JSON feed of the latest AMI for each of its channels (stable, beta, and alpha). You can find links to these JSON feeds on this page. This is powerful for two reasons:

  1. Because it’s available via HTTP, you can use curl to retrieve it anytime you need it.

  2. Because it’s in JSON, you can use jq (see my post on jq for more information) to easily parse it to find the information you need. (Not super comfortable with JSON? Check out my introductory post.)

Putting these two reasons together, you end up Continue reading
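As an illustration, combining the two might look something like the sketch below. The feed URL and the JSON field names (amis, name, hvm) are assumptions based on the CoreOS stable-channel feed, so verify them against the page linked above before relying on this:

# Fetch the stable-channel feed and extract the HVM AMI ID for one region.
# The URL and the .amis[]/.hvm field names are assumptions; check them
# against the JSON feed linked from the CoreOS page mentioned above.
REGION=us-east-1
curl -s https://coreos.com/dist/aws/aws-stable.json | \
  jq -r --arg region "$REGION" '.amis[] | select(.name == $region) | .hvm'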

Canceling the OVS Cookbook Project

In my list of proposed 2017 projects, I mentioned that I wanted to launch an open source book project. In late February, I launched The Open vSwitch Cookbook, an unofficial—as in not formally affiliated with the Open vSwitch (OVS) project—effort to gather together OVS “recipes” into an open source book. Today, I’m shutting down that project, and here’s why.

It really comes down to wanting to be a better member of the OVS community. I honestly hadn’t anticipated that the OVS community might prefer that the information I was going to gather in these “recipes” be collected in the OVS documentation (which has undergone a tremendous transformation). Instead of creating yet another source of information for OVS, I’ll focus my efforts on expanding the upstream documentation. This will take some effort on my part—I’ll need to learn reStructuredText and spend some time understanding how the docs are organized now—but I think that it’s the better long-term option for the OVS community as a whole.

And what about my goal for launching an open source book project? I’ll continue to evaluate options on that front to see if it makes sense, and I’ll post here if and when something happens.

The Linux Migration: Corporate Collaboration, Part 2

This post is part 2 in a series of posts describing how I’ve integrated my Fedora Linux laptop into my employer’s corporate communication and collaboration systems. Part 1 tackled e-mail; this post tackles the topic of calendaring and scheduling. Unlike e-mail, which was solved relatively easily, this issue is one that I don’t consider fully solved.

As I mentioned in part 1, my employer uses Office 365 (O365). While O365 supports standard protocols like IMAP and SMTP for mail, it does not support standard protocols like CalDAV for calendaring. This means that Linux users like me are left with only a few options:

  1. You can use Mozilla Thunderbird with the Lightning add-on, but you’ll also need an Exchange provider. (The paid Exquilla add-on only handles mail and contacts, not calendaring. There’s a Lightning provider available here, but I haven’t tested it.)
  2. You can use Evolution.
  3. You can use GNOME Calendar (which leverages the Evolution back-end along with Evolution’s support for Exchange Web Services [EWS]).
  4. You can use Microsoft Outlook, either in a VM or possibly via WINE (though I haven’t tested the latter approach).

I’d already ruled out Evolution for e-mail, so it didn’t make a Continue reading

Technology Short Take #80

Welcome to Technology Short Take #80! This post is a week late (I try to publish these every other Friday), so my apologies for the delay. However, hopefully I’ve managed to gather together some articles with useful information for you. Enjoy!

Networking

  • Biruk Mekonnen has an introductory article on using Netmiko for network automation. It’s short and light on details, but it does provide an example snippet of Python code to illustrate what can be done with Netmiko.
  • Gabriele Gerbino has a nice write-up about Cisco’s efforts with APIs; his article includes a brief description of YANG data models and a comparison of working with network devices via SSH or via API.
  • Giuliano Bertello shares why it’s important to RTFM; or, how he fixed an issue with a Cross-vCenter NSX 6.2 installation caused by duplicate NSX Manager UUIDs.
  • Andrius Benokraitis provides a preview of some of the networking features coming soon in Ansible 2.3. From my perspective, Ansible has jumped out in front in the race among tools for network automation; I’m seeing more coverage and more interest in using Ansible for network automation.
  • Need to locate duplicate MAC addresses in your environment, possibly caused by cloning Continue reading

The Linux Migration: Other Users’ Stories, Part 4

This post is part of a series of posts sharing other users’ stories about their migration to Linux as their primary desktop OS. As I mentioned in part 1 of the series, there seemed to be quite a bit of pent-up interest in using Linux as your primary desktop OS. I thought it might be helpful to readers to hear not just about my migration, but also about others’ migrations. You may also find it interesting/helpful to read part 2 and part 3 of this series for more migration stories.

This time around I’ll share with you some information from Ajay Chenampara about his Linux migration. Note that although these stories are all structured in a “question-and-answer” format, the information is unique—just as each person’s migration and the reasons for the migration are unique.

Q: Why did you switch to Linux?

I have been a long-time Linux user, but I have only really used it as a media server or for casual browsing. Recently, I inherited a 7-year-old laptop from my wife, and decided to focus on making it my primary system for writing my blog and for OSS efforts. Plus, I kept hearing about Debian “Jessie” Continue reading

The Linux Migration: Corporate Collaboration, Part 1

One major aspect of my migration to Linux as my primary desktop OS is how well it integrates with corporate communication and collaboration systems. Based on the feedback I’ve gotten from others on Twitter, this is a major concern for a lot of folks out there. In fact, a number of folks have indicated that this is the only thing keeping them from migrating to Linux. There are a number of different aspects to “corporate communication and collaboration,” so I’m breaking this down into multiple posts (each post will discuss one particular aspect). In this post, I’ll discuss integration with corporate e-mail.

Because corporate e-mail is such an important part of how people communicate these days, it’s a fairly significant concern when thinking of migrating to Linux. Fortunately, it’s actually pretty easy to solve.

My employer, like many companies out there, uses Office 365 for corporate e-mail. Many people think that this locks them into Outlook on the desktop side, but that’s not accurate. (Now, you may be locked into Outlook for other reasons, like calendaring—a topic I’ll touch on in part 2 of this series.) For Office 365 users, there are three paths open for accessing corporate e-mail:

  1. Continue reading

The Linux Migration: Other Users’ Stories, Part 3

Over the last few weeks, I’ve been sharing various users’ stories about their own personal migration to Linux. If you’ve not read them already, I encourage you to check out part 1 and part 2 of this multi-part series to get a feel for why folks are deciding to switch to Linux, the challenges they faced, and the benefits they’ve seen (so far). Obviously, Linux isn’t the right fit for everyone, but at least by sharing these stories you’ll get a better feel for whether it’s the right fit for you.

This is Brian Hall’s story of switching to Linux.

Q: Why did you switch to Linux?

I’ve been an OS X user since 2010. It was a huge change coming from Windows, especially since the laptop I bought had the first SSD that I’ve had in my primary machine. I didn’t think it could get any better. Over the years that feeling started to wear off.

OS X started to feel bloated. It seemed like OS X started to get in my way more and more often. I ended up formatting and reinstalling OS X like I used to do with Windows (maybe not quite as often). Setting up Mail to Continue reading

The Linux Migration: Creating Presentations

Long-time readers of my site know that I’m a fan of Markdown, and I use it extensively. (This blog, in fact, is written entirely in Markdown and converted to HTML using Jekyll on GitHub Pages.) Since migrating to Linux as my primary desktop OS, I’ve also made the transition to doing almost all my presentations in Markdown as well. Here are the details on how I’m using Markdown for creating presentations on Linux.

There are a number of tools involved in my workflow for creating Markdown-based presentations on Linux:

  • Sublime Text 3 (with the Markdown Extended and Monokai Extended packages) is used for editing the “source” files for a presentation. Three “source” files are involved: a Markdown file, an HTML file, and a Cascading Style Sheet (CSS) file.
  • Remarkjs takes the Markdown-formatted content and converts it into a dynamic HTML-based presentation, formatting it according to the styles defined in the CSS file. This gives tremendous flexibility in formatting the presentation. (Check it out on GitHub. A minimal wrapper-page sketch appears after this list.)
  • I use a web browser to display the HTML output generated by Remarkjs (in my case, I’m using Firefox on my Fedora laptop).
  • To help with creating a PDF version of Continue reading
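As promised above, here’s a minimal sketch of the HTML wrapper page that ties these pieces together. The file names (presentation.html, slides.md, style.css) are placeholders, and the remark-latest.min.js URL should be double-checked against the Remarkjs project before you rely on it:

# Write a minimal Remarkjs wrapper page. slides.md and style.css are
# placeholder names; remark.create({ sourceUrl: ... }) loads the external
# Markdown file and renders it as slides in the browser.
cat > presentation.html <<'EOF'
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <link rel="stylesheet" href="style.css">
  </head>
  <body>
    <script src="https://remarkjs.com/downloads/remark-latest.min.js"></script>
    <script>
      var slideshow = remark.create({ sourceUrl: 'slides.md' });
    </script>
  </body>
</html>
EOF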

The Linux Migration: Other Users’ Stories, Part 2

This post is part of a series of posts sharing the stories of other users who have decided to migrate to Linux as their primary desktop OS. Each person’s migration (and their accompanying story) is unique; some people have embraced Linux only on their home computer; others are using it at work as well. I believe that sharing this information will help readers who may be considering a migration of their own, and who have questions about whether this is right for them and their particular needs.

For more information about other migrations, see part 1 of the series.

This time around we’re sharing the story of Rynardt Spies.

Q: Why did you switch to Linux?

In short, I’ve always been at least a part-time Linux desktop user and a heavy RHEL server user. My main work machine is Windows. However, because of my work with AWS, Docker, etc., I find that being on a Linux machine with all the Linux tools at hand (especially OpenSSL and simple built-in tools like SSH) is invaluable when working in a Linux world. However, I’ve always used Linux Mint, or Ubuntu (basically Debian-derived distributions) for my desktop Continue reading

Technology Short Take #79

Welcome to Technology Short Take #79! There are lots of interesting links for you this time around.

Networking

  • I was sure I had mentioned Skydive before, but apparently not (a grep of all my blog posts found nothing), so let me rectify that first. Skydive is (in the project’s own words) an “open source real-time network topology and protocols analyzer.” The project’s GitHub repository is here, and documentation for Skydive is here.
  • OK, now that I’ve mentioned Skydive, I can talk about this article that provides an example of functional SDN testing with Terraform and Skydive. Terraform is used to turn up OpenStack infrastructure, and Skydive (via connections into Neutron and OpenContrail, in this example) is used to validate SDN functionality.
  • Tony Sangha took PowerNSX (a set of PowerShell cmdlets for interacting with NSX) and created a tool to help document the NSX Distributed Firewall configuration. This tool exports the DFW configuration and then converts it into Excel format, and is available on GitHub. (What’s that? You haven’t heard of PowerNSX before? See here.)

Servers/Hardware

Nothing this time around. Should I keep this section, or ditch it? Feel free to give me your feedback on Twitter.

Security

Customizing Docker Engine on CentOS Atomic Host

I’ve been spending some time recently with CentOS Atomic Host, the container-optimized version of CentOS (part of Project Atomic). By default, the Docker Engine on CentOS Atomic Host listens only to a local UNIX socket, and is not accessible over the network. While CentOS has its own particular way of configuring the Docker Engine, I wanted to see if I could—in a very “systemd-like” fashion—make Docker Engine on CentOS listen on a network socket as well as a local UNIX socket. So, I set out with an instance of CentOS Atomic Host and the Docker systemd docs to see what I could do.

The default configuration of Docker Engine on CentOS Atomic Host uses a systemd unit file that references an external environment file; specifically, it references values set in /etc/sysconfig/docker, as you can see from this snippet of the docker.service unit file:

ExecStart=/usr/bin/dockerd-current \
          --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
          --default-runtime=docker-runc \
          --exec-opt native.cgroupdriver=systemd \
          --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY

The $OPTIONS variable, along with the other variables at the end of the ExecStart line, is defined in /etc/sysconfig/docker. By default, that file looks like this:

OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'

I Continue reading
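One way to approach that, shown purely as a sketch since the exact flags and file layout should be verified against the Docker and CentOS Atomic Host documentation, is to prepend a pair of -H listeners to OPTIONS in /etc/sysconfig/docker and then restart the service:

# Sketch only: make dockerd listen on a TCP socket in addition to the
# default UNIX socket by extending OPTIONS in /etc/sysconfig/docker.
# An unencrypted TCP listener is unsafe outside a lab; treat this as
# an illustration, not a recommendation.
sudo sed -i \
  "s|^OPTIONS='|OPTIONS='-H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375 |" \
  /etc/sysconfig/docker
sudo systemctl restart docker.service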

The Linux Migration: Other Users’ Stories, Part 1

Shortly after I announced my intention to migrate to Linux as my primary desktop OS, a number of other folks contacted me and said they had made the same choice, or that my decision had encouraged them to try it themselves. It seems that there is a fair amount of pent-up interest—at least in the IT community—in embracing Linux as a primary desktop OS. Given the level of interest, I thought it might be helpful for readers to hear from others who are also switching to Linux as their primary desktop OS, and so this post kicks off a series of posts where I’ll share other users’ stories about their Linux migration.

In this first post of the series, you’ll get a chance to hear from Roddy Strachan. I’ve structured the information in a “question-and-answer” format to make it a bit easier to follow.

Q: Why did you switch to Linux?

I was a heavy Windows user due to corporate requirements. It was just easy to run Windows. I never ran the standard corporate build, but instead ran my own managed version of Windows 10; this worked well. I switched because I wanted to experiment with Linux Continue reading

Adding Metadata to the Arista vEOS Vagrant Box

This post addresses a (mostly) cosmetic issue with the current way that Arista distributes its Vagrant box for vEOS. I say “mostly cosmetic” because while the Vagrant box for vEOS is perfectly functional if you use it via Arista’s instructions, adding metadata as I explain here provides a small bit of additional flexibility should you need multiple versions of the vEOS box on your system.

If you follow Arista’s instructions, then you’ll end up with something like this when you run vagrant box list:

arista-veos-4.18.0    (virtualbox, 0)
bento/ubuntu-16.04    (virtualbox, 2.3.1)
centos/6              (virtualbox, 1611.01)
centos/7              (virtualbox, 1611.01)
centos/atomic-host    (virtualbox, 7.20170131)
coreos-stable         (virtualbox, 1235.9.0)
debian/jessie64       (virtualbox, 8.7.0)

Note that the version of the vEOS box is embedded in the name. You could leave the version out of the name, but because there’s no metadata—which is why that line shows (virtualbox, 0)—you wouldn’t have any way of knowing which version you had. Further, what happens when you want to have multiple versions of the vEOS box?

Fortunately, there’s an easy fix (inspired by the way CoreOS distributes their Vagrant box). Just create a file with the Continue reading
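For context, the general shape of a Vagrant box metadata file is fairly standard; here’s a hedged sketch in which the box name, version, and .box path are placeholders you’d replace with the details of the image you downloaded from Arista:

# Sketch of a Vagrant box metadata file; the name, version, and local
# .box path below are placeholders for the vEOS image you downloaded.
cat > veos.json <<'EOF'
{
  "name": "arista/veos",
  "versions": [
    {
      "version": "4.18.0",
      "providers": [
        {
          "name": "virtualbox",
          "url": "file:///path/to/vEOS-lab-4.18.0-virtualbox.box"
        }
      ]
    }
  ]
}
EOF

# Point "vagrant box add" at the metadata file instead of the raw .box;
# afterward, "vagrant box list" shows a real version instead of 0.
vagrant box add veos.json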

Launching an Open Source Book Project

In my list of planned 2017 projects, I mentioned that one thing I’d like to do this year is launch an open source book project. Well, I’m excited to announce The Open vSwitch Cookbook, an Apache 2.0-licensed book project aimed at providing “how to” recipes for Open vSwitch (OVS).

Portions of the book are already available, with more content being added soon (more on that in a moment).

I’m using GitBook as the publishing platform; this allows me to write in Markdown and publish to a variety of formats. I’ll only be publishing to HTML at first; other formats may come down the road. I chose GitBook for a few reasons:

  1. It’s free for open source projects. Both this book and the software that is its focus are open source projects.
  2. As I mentioned already, I can use Markdown for all the content.
  3. It allows me to store the book in a Git repository and use standard Git workflows.

I decided against using GitBook to host the Git repository for the book. Instead, the book’s source is found on GitHub. This enables collaboration on the book’s content—an aspect of this project that I think Continue reading

Technology Short Take #78

Welcome to Technology Short Take #78! Here’s another collection of links and articles from around the Internet discussing various data center-focused technologies.

Networking

Servers/Hardware

Nothing this time around, sorry!

Security

Correlating OVS and Guest Domain Interfaces

I’ve written a fair amount about Open vSwitch (OVS), including some articles on using it with KVM and Libvirt. One thing I haven’t discussed in such environments, though, is the potential challenge of mapping network interfaces in a guest domain to the corresponding OVS interface (for the purposes of troubleshooting, for example). There is no single command that will provide a guest-to-OVS interface map (as far as I know), but this information is easily gathered using a couple of commands.

Gathering Information About the Guest Interface

First, we’ll need to gather some information about the interface from the guest domain’s perspective. There are two ways we can do this: from within the guest OS itself, or by interrogating Libvirt.

Working from Within the Guest OS

Inside the guest domain (I’m assuming you’re using a relatively recent Linux distribution), you only need to use standard commands like ip link list or ip addr list. The goal is to obtain the MAC address assigned to the particular guest interface. So, for example, if you wanted to get the MAC address for the guest “eth0” interface, you’d run:

    ip link list eth0

To isolate only the MAC address from the output of that Continue reading
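As an example, isolating the MAC address inside the guest and then finding the matching OVS interface on the host might look like the sketch below. It assumes the vNIC was attached to OVS by Libvirt, which normally records the guest MAC in the OVS interface’s external_ids; the external_ids:attached-mac key is an assumption worth verifying in your environment:

# Inside the guest: isolate the MAC address assigned to eth0.
MAC=$(ip link show eth0 | awk '/link\/ether/ {print $2}')
echo "$MAC"

# On the KVM host: find the OVS interface whose attached-mac matches.
# Libvirt typically stores the guest MAC in external_ids:attached-mac;
# verify that key exists in your environment before relying on it.
ovs-vsctl --columns=name find interface external_ids:attached-mac="$MAC"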

Fixing Double Sublime Text Icons on Fedora 25

In my previous post on how to install Sublime Text 3 (ST3) on Fedora 25, I mentioned that I have observed instances where launching ST3 via the subl command creates an additional icon in the Dash. While searching for a solution to an issue with LibreOffice icons, I found a fix for this problem.

The fix is to add this line to the sublime-text.desktop file (typically found in /usr/share/applications):

StartupWMClass=subl

This tells Fedora and GNOME that when a window with the WMClass of “subl” appears, it should be considered a Sublime Text window. Once you add this line to the sublime-text.desktop file, then launching ST3 either via the GUI or via the subl command should create only a single ST3 icon in the Dash.
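If you’d like to confirm the window class for yourself, something along these lines works; xprop waits for you to click a window and then reports its WM_CLASS:

# Append the hint to the launcher (same file mentioned above), then
# click the running ST3 window when prompted to confirm its WM_CLASS.
echo 'StartupWMClass=subl' | sudo tee -a /usr/share/applications/sublime-text.desktop
xprop WM_CLASS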

Now, back to trying to figure out this LibreOffice icon issue…

Installing Sublime Text 3 on Fedora 25

Sublime Text is my current text editor of choice. I won’t go into why I chose it over other tools; instead, I encourage you to have a look for yourself. Installing Sublime Text 3 (ST3) on Fedora 25, though, isn’t as simple as running a dnf install. Fortunately, it’s not a difficult process, but it is a process I wanted to document here for the sake of others.

Here’s the process I followed:

  1. Download the latest tarball of ST3. As of this writing, it was build 3126, so this cURL command accomplishes what you need:

     curl -LO https://download.sublimetext.com/sublime_text_3_build_3126_x64.tar.bz2
    

    As build numbers change, though, you’ll want to verify the correct URL for the latest build. (A lot of sites I saw provide hard-coded scripts that help perform this process for you, but don’t account for changes in the download URL.)

  2. Extract the contents of the tarball with tar xvjf sublime_text_3_build_3126_x64.tar.bz2. This will create a directory called “sublime_text_3” with the contents of the tarball.

  3. Install the desktop launcher for ST3 by copying over the .desktop file in the tarball:

     sudo cp -rf sublime_text_3/sublime_text.desktop /usr/share/applications/sublime_text.desktop
    
  4. Edit the desktop launcher to specify the full path Continue reading
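As a sketch of how those remaining edits might look, assuming you move the extracted directory to /opt/sublime_text_3 (a common convention, not a requirement):

# Move the extracted directory somewhere system-wide, point the
# launcher's Exec line at it, and add a "subl" symlink for the CLI.
# The /opt location and the %F field code are assumptions.
sudo mv sublime_text_3 /opt/
sudo sed -i 's|^Exec=.*|Exec=/opt/sublime_text_3/sublime_text %F|' \
  /usr/share/applications/sublime_text.desktop
sudo ln -sf /opt/sublime_text_3/sublime_text /usr/local/bin/subl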
