Archive

Category Archives for "Das Blinken Lichten"

Python and IPython

I recently came across IPython while reading some Python development blogs.  IPython is an alternative to the standard Python shell that offers some additional features.  When I first read about IPython, I was a little confused because many people refer to it as the ‘Python interactive shell’.  While IPython is an interactive shell, it is not the Python interactive shell.  For instance, we can enter the Python interactive shell just by typing ‘python’ on our Python development box…

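That looks something like this (a rough sketch of the session – the exact version banner depends on your install; CentOS 7 ships Python 2.7.x)…

$ python
Python 2.7.5 (default, ...)
Type "help", "copyright", "credits" or "license" for more information.
>>>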
So, what we really did here was invoke the Python interpreter in interactive mode.  In this mode, commands can be read from the TTY and directly interpreted.  So for example, we can do something like this…

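For instance, here’s a quick sketch of my own (any valid Python works here)…

>>> hostname = 'router1'
>>> print 'Connecting to ' + hostname
Connecting to router1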
The Python code we type is directly interpreted and we get the output we would expect.  So instead of using the Python interpreter to run a .py script, we can do it all directly from the interpreter.  For instance, the example from our Python up and running post works just as well in interactive mode as it did when run as a script…

So that’s Python interactive mode.  Now, let’s talk about IPython.  The first thing we Continue reading

Python up and running

Python has certainly become one of the top languages of the day.  In this post, I want to spend some time getting you up and running with it.  We’ll start with a base install of Python and then walk through an example to introduce some basic Python concepts.  If you’re in infrastructure, particularly networking, then Python is a language you should be putting some time towards learning.  Most of the vendors out there are coming out with some level of Python integration for their products.  Matt Oswalt spends some time in one of his recent posts talking about how important this integration is, as well as giving a couple of examples.  Bottom line – all of us in infrastructure should be finding better ways to do things, and Python is a good place to start.

Note: If you’re interested in the future of networking as a whole, check out this other post from Matt Oswalt where he talks about next gen networking skills.  Good stuff.

I always like to start from the beginning, so let’s start from absolute scratch.  I’m going to begin with a CentOS 7 host that has Continue reading

Networking Field Day 10 – Intel

Let me start this out by saying that I was thrilled to see Intel present at an NFD event!  While Intel is well known in the network space for their NICs, they are best known for their powerful line of processors, boards, and controllers.  Most would argue that this doesn’t make them a ‘traditional’ network vendor but, as we all know, things are rapidly changing in the network space.  As more and more network processing moves from hardware to software, the Intels of the world will have an increasingly large role to play in the network market.

Check out the following presentations they gave at the recent NFD10 event…

Intel Intro and Strategy

Intel Open Network Platform Solutions for NFV

Intel Software Defined Infrastructure: Tips, Tricks and Tools for Network Design and Optimization

Here are some of my thoughts on the presentations that I thought were worth highlighting…

The impact of software and NFV
Intel made some interesting observations comparing telco companies using big hardware to Google using SDN and NFV.  Most telco companies are still heavily reliant on big, high-performance, hardware-driven switches that can cost into the tens of millions of dollars. Continue reading

Networking Field Day 10 – Nuage Networks

I just got done watching all the Nuage Networks videos from Networking Field Day 10 (NFD10) and I’m quite impressed with the presentation they gave.  If you haven’t watched them yet, I would recommend you do…

Nuage Networks Intro

Nuage Networks Evolution of Wide Area Networking

Nuage Networks Onboarding the Branch Demo

Nuage Networks Application Flexibility Demo

Nuage Networks Boundary-less Wide Area Networking

Here are some things I thought were worth highlighting…

A consistent model
What I find interesting about Nuage is their approach.  Most startup networking companies these days limit their focus to one area of the network.  The data center is certainly a popular area but others are focusing on the WAN as well.  Nuage is tackling both. 

A couple of times in the presentation I heard statements like “users are stuck in the past” or “the network model has to be consistent”.  The problem with any overlay-based network solution is that, at some point, you need to connect it back to the ‘normal’ network.  Whether that entails bridging a physical appliance into the overlay, or actually peering the physical network into the overlay, the story usually starts to get messy. Continue reading

golang – some constructs part 1

Since starting to play with golang I’ve run into a couple of interesting items I thought were worth writing about.  For those of you that are seasoned developers, I assure you, this won’t be interesting.  But for those of us just getting started, it might be worth reading.

Pointers
Nothing super exciting here if you’ve used them in other languages, but it’s worth talking about since it can be confusing.  Pointers are really just a way for us to gain access to the ‘real’ variable when we aren’t in the function that defines it.  Put another way, when you call a function that takes a variable, you are only giving that function a copy of the variable, not the real variable.  Pointers allow us to reference the actual location in memory where the value is stored rather than the value itself.  Examples always make this clearer, so take for instance this bit of code…

package main

import "fmt"

func main() {
        //Define myname and set it to 'jonlangemak'
        myname := "jonlangemak"
        //Rename without using pointers
        rename(myname)
        fmt.Println(myname)
        //Rename using pointers
        pointerrename(&myname)
        fmt.Println(myname)
}

//Function without pointers
func rename(myname string) {
        //Only changes the local copy - the caller's variable is untouched
        //(function bodies here are my assumed completion; the excerpt cuts off)
        myname = "newname"
}

//Function with pointers
func pointerrename(myname *string) {
        //Writing through the pointer changes the caller's variable
        *myname = "newname"
}
Continue reading

golang up and running on CentOS7 – take two

After some great feedback and some additional learning/fixes on my end, I wanted to make an updated version of this post. 

This go around, I’ve added some plugins I found helpful as well as made a couple of tweaks that I think (not sure yet) will be helpful to me going forward.  So here is the brand new build script I came up with…

#Install dependencies and necessary packages
yum -y install golang git vim wget python-devel cmake
yum -y groupinstall "Development Tools"

#Modify your bash_profile...
vim ~/.bash_profile
#Add this config...
export GOPATH=$HOME/go
#Source the file
source ~/.bash_profile

#Make the golang workspace
mkdir ~/go
mkdir ~/go/bin
mkdir ~/go/pkg
mkdir ~/go/src

#Install and configure Vundle...
#Pull down Vundle
git clone https://github.com/gmarik/Vundle.vim.git ~/.vim/bundle/Vundle.vim
#Edit your .vimrc file...
vim ~/.vimrc
#Add this config...
set nocompatible
filetype off
colorscheme molokai
set rtp+=~/.vim/bundle/Vundle.vim
call vundle#rc()
Plugin 'gmarik/Vundle.vim'
Plugin 'nsf/gocode', {'rtp': 'vim/'}
Plugin 'fatih/vim-go'
Plugin 'Valloric/YouCompleteMe'
Plugin 'scrooloose/nerdtree.git'
filetype plugin indent on
"Prevent autocomplete help from staying visisble
autocmd CursorMovedI * if pumvisible() == 0|pclose|endif
autocmd InsertLeave * if pumvisible() == 0|pclose|endif
"Quit VIM if NERDTree is last open Window
autocmd bufenter *  Continue reading

golang up and running on CentOS7

I’ve decided recently to get serious about learning golang.  I’ve had a great time playing around with other people’s code (namely Docker and Kubernetes) and it’s time for me to learn the language so I can contribute more than bash scripts.  For better or for worse, I’ve decided to start coding on a CentOS box.  I have a couple of reasons for doing this…

-It’s the Linux distro I’m most familiar with currently
-I need to get better at working in Linux.  More stick time on straight CLI can’t hurt.  I feel like jumping into a full blown IDE might be a bit premature for me and possibly allow me to miss some of the basics as well.
-I plan to run the code on Linux servers (I think…?)

Disclaimer: I’m just getting started in golang.  If something I suggest below is wrong, please tell me!  Still learning here.

Note: I really struggle with the language being called ‘go’, so I’m trying to call it golang everywhere I can.  It can seem like a bit much at times…

So let’s get started.  The goal of this post is to end Continue reading

The basics – MTU, MSS, GRE, and PMTU

One of the truly fascinating things about networking is how much of it ‘just works’.  There are so many low-level pieces of a network stack that you don’t really have to know (although you should) to be an expert at something like OSPF, BGP, or any other higher-level networking protocol.  One of the pieces that often gets overlooked is MTU (Maximum Transmission Unit), MSS (Maximum Segment Size), and all of the fun stuff that comes along with them.  So let’s start with the basics…

Here’s your average looking IP packet encapsulated in an Ethernet header.  For the sake of conversation, I’ll assume going forward that we are referring to TCP only, but I did put the UDP header length in there just for reference.  So a standard IP packet is 1500 bytes long.  There’s 20 bytes for the IP header and 20 bytes for the TCP header, leaving 1460 bytes for the data payload.  This does not include the 18 bytes of Ethernet header and FCS that surround the IP packet.
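Since the byte math is the part that trips people up, here’s a quick sanity check in Python (a throwaway sketch of my own – it assumes no IP or TCP options, which would eat into the payload)…

mtu = 1500                  # Standard Ethernet MTU
ip_header = 20              # IP header with no options
tcp_header = 20             # TCP header with no options
payload = mtu - ip_header - tcp_header
print payload               # 1460 bytes left for data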

When we look at this frame layout, we can further categorize components of the frame by MTU and MSS…

The MTU is defined Continue reading

Dynamic Kubernetes installation/configuration with SaltStack

I’ve been playing more with SaltStack recently and I realized that my first attempt at using Salt to provision my cluster was a little shortsighted.  The problem was, it only worked for my exact lab configuration.  After playing with Salt some more, I realized that the Salt configuration could be MUCH more dynamic than what I had initially deployed.  That being said, I developed a set of Salt states that I believe can be consumed by anyone wanting to deploy a Kubernetes lab on bare metal.  To do this, I used a few more of the features that SaltStack has to offer.  Namely, pillars and the built-in Jinja templating language.

My goal was to let anyone with some Salt experience be able to quickly deploy a fully working Kubernetes cluster.  Better yet, the Salt configuration can be tuned to your specific environment.  Have 3 servers you want to try Kubernetes on?  Have 10?  All you need to do is have some servers that meet the following prerequisites and tune the Salt config to your environment.

Environment Prerequisites
-You need at least 2 servers, one for the master and one for Continue reading

Saltstack – Using Pillars and starting to template

In our last post about SaltStack, we introduced the concept of grains.  Grains are bits of information that the Salt minion can pull off the system it’s running on.  SaltStack also has the concept of pillars.  Pillars are sets of data that we can push to the minions and then consume in state or managed files.  When you couple this with the ability to template with Jinja, it becomes VERY powerful.  Let’s take a quick look at how we can start using pillars and templates. 
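Salt renders the templates for you as part of a state run, but if you want to see the Jinja substitution mechanics on their own, here’s a standalone Python sketch using the jinja2 library with a made-up pillar value (an illustration of the templating only – not an actual Salt run)…

# Requires the jinja2 package (pip install jinja2)
from jinja2 import Template

# Pretend this dictionary came down from the Salt master as pillar data
pillar = {'etcd_server': '10.20.30.61'}

# A managed file might embed a pillar lookup like this
tmpl = Template("etcd_servers=http://{{ pillar['etcd_server'] }}:4001")

print tmpl.render(pillar=pillar)
# etcd_servers=http://10.20.30.61:4001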

Prep the Salt Master
The first thing we need to do is tell Salt that we want to use pillars.  To do this, we just tell the Salt master where the pillar state files are.  Let’s edit the Salt master config file…

vi /etc/salt/master

Now find the ‘Pillar Settings’ section and uncomment the pillar_roots configuration so that it points at the ‘/srv/pillar’ directory.  The uncommented lines look something like this (a sketch based on the stock config – double check your own /etc/salt/master)…

pillar_roots:
  base:
    - /srv/pillar
Then restart the salt-master service…

systemctl restart salt-master

So we just told Salt that it should use the ‘/srv/pillar/’ directory for pillar info, so we now need to go and create it…

mkdir /srv/pillar/

Now we’re all set.  Pillar information is exported to the Continue reading

Logging in Kubernetes with Fluentd and Elasticsearch

In previous posts, we talked about running SkyDNS and Heapster on your Kubernetes cluster.  In this post, I want to talk about the last of the cluster ‘addons’ available today in the Kubernetes repository.  This add-on is a combination of Fluentd, Elasticsearch, and Kibana that makes a pretty powerful log aggregation system on top of your Kubernetes cluster.  One of the major struggles with any large deployment is logging.  Having a central place to aggregate logs makes troubleshooting and analysis considerably easier to do.  That being said, let’s jump right into the configuration.

Note: I have an open PR on this addon to make it a little more flexible from a configuration perspective.  Namely, I want to be able to specify the port and protocol used by the API server to access the backend service when using the API server as a service proxy.  That being said, some of my pod/controller definitions will be different from what you see on GitHub.  I’ll point out the differences below when we come across them.

The first step is to have the Kubernetes nodes collect the logs.  This is done with a local Fluentd Continue reading

How to update Docker on CentOS 7

I recently noticed that the Kubernetes guys are moving their container images from the Docker hub registry to their own repository…

A quick look tells me that Google now has its own image repository (gcr.io), so it seems to make sense that the Kubernetes team would be using that rather than the Docker hub registry.  That being said, I thought all I’d have to do was update my YAML files to point to the new location.  Unfortunately, that wasn’t the case.  After pushing the controller definitions to the Kubernetes cluster, it became apparent that the containers were stuck in a pending state.  When I logged into one of the hosts and checked the Docker logs, I saw the issue, and after some digging I found the cause…

Since the container image name had a ‘-‘ in it, Docker didn’t know what to do with it.  So the fix is to update Docker to the latest stable code, which happens to be version 1.5.  In my case, the repositories I was using with YUM didn’t have 1.5, so we need to pull the latest binaries from Docker and use those.  To update, Continue reading

Salt – The basics

In my last post, I showed you how I automated my Kubernetes lab build out by using Salt.  This took the build time and cut it by more than 70% (I’m guessing here, but you get the point).  In addition, I’ve been making all of my changes for the cluster in Salt rather than applying them directly to the host.  Not only does this give me better documentation, it allows me to apply changes across multiple nodes very quickly.  You might be wondering why I chose Salt since I’ve blogged about Chef in the past.  The answer isn’t cut and dry, but Salt just made sense to me.  On top of that, there is VERY good documentation out there about all of the states and state functions, so it’s pretty easily consumable.  As I walk through the process I used to create the lab build scripts, I hope you’ll start to catch on to some of the reasons that made me decide to learn Salt.

Let’s start by taking a look at my GitHub repo…

While there’s a lot here, the pieces we really want to talk about are the files that end Continue reading

Kubernetes – Notable changes since my first build

During my latest Kubernetes lab rebuild, I noticed some significant differences in some functions of the Kubernetes cluster.  I’ve done my best to go back and update the previous blog posts with notes and pointers to make this clear.  However – going forward, please consider my GitHub Salt repo the ‘source of truth’ for the Kubernetes configuration files and current build process.  I’ll be updating that regularly as I continue to optimize the Salt config and add onto the Kubernetes configuration.  A couple of big hitters I want to call out as differences between my initial build and this one…

cAdvisor is now part of kubelet
That’s right!  We no longer need to have a separate manifest and container for cAdvisor.  Sure enough, any host running the kubelet process now exposes the cAdvisor interface on port 4194.

kube-proxy and kubelet no longer use etcd
In my first build, both the kubelet and kube-proxy services relied on talking directly to etcd to interact with the cluster, and the associated service configs pointed straight at the etcd servers.  The newest systemd service configurations no longer do.

So what’s happened here is that the cluster communication has moved to Continue reading

How to update a GitHub pull request (PR)

So what happens when you submit a PR, but then you want to change it?  After reviewing my proposed changes from my last post, it was decided that I should take a different approach.  The changes I needed to make weren’t substantial, and were in the same spirit as the initial PR, so I decided that updating made more sense than starting over.  All you have to do is update the code and push another commit to the branch.  Let’s assume we’ve made the changes we want to our code again.  Let’s verify that Git sees these updates…

Yep – Git sees the changes, so we’re good so far.  Now we need to add these files to the commit just like last time…

git add .

Now the files are ready to be committed, so let’s go ahead and make a commit…

git commit -m "Updated the ENV variables and the way they are set"

Perfect – So now let’s check and see if our remote (GitHub) is still defined…

All looking good – so now all we need to do is push the commit…

git push -u origin fluentd-elasticsearch-kibanafix


Let’s go check out our PR Continue reading

How to create a GitHub pull request (PR)

Being a network engineer, Git is not something I used very frequently before I started messing around with Kubernetes.  It can be a frustrating tool to work with if you don’t know exactly what you’re doing.  And while it tries to keep you from cutting yourself, it’s pretty easy to lose code you’ve worked on if you aren’t careful.  On the flip side, once you learn the basics, it’s an awesome tool for all kinds of revision tracking.

While playing around with the newest Kubernetes binaries I noticed an issue with the ‘fluentd-elasticsearch’ add-on in my lab.  After some debugging, I think I found the issue, so I’d like to suggest a change to the code to fix it.  This is what’s called a ‘pull request’, or often just a ‘PR’.  A PR means you are submitting a request to ‘pull’ new code into the active repository.  Once your PR is submitted, people have a chance to review and comment on your suggested changes, and if everything looks good, it will get pulled into the repository.  So I thought it would be good to document this PR so Continue reading

Deploying Kubernetes with SaltStack

The more I play around with Docker and Kubernetes, the more I find myself needing to rebuild my lab.  Config file changes are made all over the place, permissions change, some binaries are added or updated, and things get out of sync.  I always like to figure out how things work and then rebuild ‘the right way’ to make sure I know what I’m talking about.  The process of rebuilding the lab takes quite a bit of time and was generally annoying.  So I was looking for a way to automate some of the rebuild.  Having some previous experience with Chef, I thought I might give that a try, but I never got around to it.  Then one day I was browsing the Kubernetes GitHub repo and noticed that there were already a fair number of SaltStack files out in the repo.  I had heard about SaltStack, but had no idea what it was, so I thought I’d give it a try and see if it could help me with my lab rebuilds.

To make a long story short, it helps – A LOT.  While I know I’ve only scratched the surface, the Continue reading

Installing cAdvisor and Heapster on bare metal Kubernetes

If you’ve spent some time with Kubernetes, or Docker in general, you’ve probably started to wonder about performance.  Moreover, you’re probably wondering how to gauge performance of the overall host as well as the containers running on it.  This is where cAdvisor comes in.  cAdvisor is an open source tool for monitoring Docker and its running containers.  The best part about cAdvisor is that it has native Docker support and is super easy to integrate into an existing Kubernetes cluster.  Additionally, cAdvisor runs in a container (starting to see why Docker is awesome?) so the configuration changes required on the host are super minimal.  AKA – you just need to tell the host to run the container.

In addition to installing cAdvisor on our bare metal Kubernetes cluster, we’re also going to install another awesome open source Google tool called Heapster.  Heapster gathers all of the data from each Kubernetes node via cAdvisor and puts it all together for you in one spot.

So let’s get started with installing cAdvisor…

The cAdvisor container needs to run on each host you want cAdvisor to monitor.  While we could do this through the Continue reading

Kubernetes DNS config on bare metal

One of the ‘newer’ functions of Kubernetes is the ability to register service names in DNS.  More specifically, to register them in a DNS server running in the Kubernetes cluster.  To do this, the clever folks at Google came up with a solution that leverages SkyDNS and another container (called kube2sky) to read the service entries and insert them as DNS entries.  Pretty slick, huh?

Beyond the containers that run the DNS service, we also need to tell the pods to use this particular DNS server for DNS resolution.  This is done by adding a couple of lines of config to the kubernetes-kubelet service.  Once that’s done, we can configure the Kubernetes service and the replication controller for the SkyDNS pod.  So let’s start with the kubelet service configuration.  Let’s edit our service definition located here…

/usr/lib/systemd/system/kubernetes-kubelet.service

Our new config will look like this…

[Unit]
Description=Kubernetes Kubelet
After=etcd.service
After=docker.service
Wants=etcd.service
Wants=docker.service

[Service]
ExecStart=/opt/kubernetes/kubelet \
--address=10.20.30.62 \
--port=10250 \
--hostname_override=10.20.30.62 \
--etcd_servers=http://10.20.30.61:4001 \
--logtostderr=true \
--cluster_dns=10.100.0.10 \
--cluster_domain=kubdomain.local
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Notice that Continue reading

Kubernetes 101 – External access into the cluster

In our last post, we looked at how Kubernetes handles the bulk of its networking.  What we didn’t cover yet was how to access services deployed in the Kubernetes cluster from outside the cluster.  Obviously, services that live in pods can be accessed directly, as each pod has its own routable IP address.  But what if we want something a little more dynamic?  What if we used a replication controller to scale our web front end?  We have the Kubernetes service, but what I would call its VIP range (the portal net) isn’t routable on the network.  There are a couple of ways to solve this problem.  Let’s walk through the problem and talk about a couple of ways to solve it.  I’ll demonstrate the way I chose to solve it, but that doesn’t imply that there aren’t other better ways as well.

As we’ve seen, Kubernetes has a built-in load balancer, which it refers to as a service.  A service is a group of pods that all provide the same function.  Services are accessible by other pods through an IP address which is allocated out of the cluster’s portal net allocation.  Continue reading