Author Archives: Jon Langemak
In our last post we covered the basics of getting an RSVP LSP set up. This was a tedious process, at least when compared to what we saw with LDP setting up LSPs. So I think it’s worthwhile to spend some time talking about RSVP and what it offers that merits its consideration as a label distribution protocol. First off – let’s talk about naming. When talking about MPLS, RSVP is typically just called RSVP – but without the MPLS context that might be a little confusing. That’s because RSVP itself initially had nothing to do with MPLS; it was originally a means to reserve resources for flows across a network. The protocol was later extended to support setting up MPLS LSPs, and in that use case it is often referred to as “RSVP-TE” or “MPLS RSVP-TE”. For the sake of my sanity, when I reference RSVP going forward I’ll be referring to the RSVP that’s used to set up MPLS LSPs.
So now let’s talk about some differences between LDP and RSVP. The first thing that everyone points out is that LDP is tightly bound to the underlying IGP. While this is an accurate statement, it doesn’t mean that RSVP Continue reading
We’ve spent a great deal of time in the last few posts talking about MPLS, both with LDP and with static LSP configurations. While these approaches certainly work, they aren’t the only options for label distribution and LSP creation. If we take static LSPs off the table (since they’re the equivalent of static routes and not scalable) we really have two main choices for label distribution – LDP and RSVP. LDP is typically considered the easier of the two options but also offers a limited set of functionality. RSVP offers many more features but also requires more configuration. When you’re designing an MPLS network you’ll have to decide which protocol to use based on your requirements; in many cases a network may leverage both in different areas. In this post the aim is just to get RSVP up and running – we’ll dig into the specifics in a later post.
So let’s get right into a simple lab where we leverage RSVP instead of LDP. As in the previous post, we’ll leverage the same lab environment…
Let’s assume as before that the base configuration of the lab includes all of the routers’ interfaces configured, OSPF enabled on all Continue reading
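As a rough preview of what that configuration ends up looking like, here is a minimal sketch of the RSVP pieces on an ingress router in set-command form – the interface name, LSP name, and loopback address are placeholders for whatever your topology uses:

# sketch only – interface, LSP name, and 4.4.4.4 are placeholders
set interfaces ge-0/0/1 unit 0 family mpls
set protocols rsvp interface ge-0/0/1.0
set protocols mpls interface ge-0/0/1.0
set protocols mpls label-switched-path lsp-to-vmx4 to 4.4.4.4

Transit and egress routers only need the rsvp and mpls interface statements; the label-switched-path definition lives on the ingress router.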
In one of our last posts on MPLS – we showed how LDP can be used to dynamically exchange labels between MPLS enabled routers. This was a huge improvement over statically configured LSPs. We saw that LDP was tightly coupled to the underlying IGP, but we didn’t spend a lot of time showing examples of that. In this post, I want to extend our lab a little bit to show you exactly how tight that coupling is. To do so, we’ll use part of the extended lab that we created at the end of our last post on the JunOS routing table. For the sake of being thorough, we’ll provide the entire configuration of each device. The lab will look like what’s shown below…
For a base configuration – we’re going to start with mostly everything in place, with the exception of MPLS and LDP. We’ll then add that in stages so we can see how things look before and after we have multiple IGP paths to the same destination…
vMX1 Configuration…
system {
    host-name vmx1.lab;
}
interfaces {
    ge-0/0/0 {
        unit 0 {
            family inet {
                address 10.2.2.0/31;
            }
        }
    }
    ge-0/0/1 {
        unit 0 { Continue reading
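Once the MPLS and LDP stages are applied, the coupling is easiest to see with a handful of verification commands – a sketch of the standard Junos operational commands this kind of walkthrough leans on:

# standard Junos operational commands
show ospf neighbor
show ldp neighbor
show ldp database
show route table inet.3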
I was just about to finish another blog post on MPLS when I got a question from a colleague about Junos routing tables. He was confused as to how to interpret the output of a basic Juniper routing table. I spent some time trying to find some resource to point him at – and was amazed at how hard it was to find anything that answered his questions specifically. Sure, there are lots of blogs and articles that explain RIB/FIB separation, but I couldn’t find any that backed it up with examples and the level of detail he was looking for. So while this is not meant to be exhaustive – I hope it might provide you some details about how to interpret the output of some of the more popular show
commands. This might be especially relevant for those of you coming from more of a Cisco background (like myself a number of years ago), as there are significant differences between the two vendors in this area.
Let’s start with a basic lab that looks a lot like the one I’ve been using in the previous MPLS posts…
For the sake of focusing on the real Continue reading
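To set expectations, these are the sorts of commands the walkthrough is built around (all standard Junos commands; the prefix is just an example drawn from the lab addressing):

# the prefix is an example from the lab addressing
show route 10.2.2.0/31
show route table inet.0
show route table inet.3
show route 10.2.2.0/31 extensive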
In our last post, we removed our last piece of static configuration and replaced static routes with BGP. We’re going to pick up right where we left off and discuss another use case for MPLS – MPLS VPNs. Really – we’re talking about two different things here. The first is the BGP VPNv4 address family used for route advertisement. The second is using MPLS as a data plane to reach the prefixes being announced by the VPNv4 address family. If that doesn’t make sense yet – don’t worry – it will be pretty clear by the end of the post. So as usual – let’s jump right into this and talk about our lab setup.
As I mentioned in the last post, setting up BGP was a prerequisite to this post – so since that’s the case – I’m going to pick up right where I left off. So I’ll post the lab topology picture here for the sake of posting a lab topology – but if you want to get your configuration prepped – take a look at the last post. At the end of the last post we had our Continue reading
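To give a flavor of the two halves described above, here’s a hedged sketch of the building blocks in set-command form – the group name, routing-instance name, interface, route distinguisher, and target are made up for illustration:

# sketch only – names, RD, and target values are placeholders
set protocols bgp group internal family inet-vpn unicast
set routing-instances customer-a instance-type vrf
set routing-instances customer-a interface ge-0/0/2.0
set routing-instances customer-a route-distinguisher 1.1.1.1:100
set routing-instances customer-a vrf-target target:65000:100

The first line is the VPNv4 (inet-vpn) address family piece; the routing-instance lines are where MPLS takes over as the data plane for the customer prefixes.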
In our last post we talked about how to make the MPLS control plane more dynamic by getting rid of static LSPs and adding in LDP to advertise and distribute labels to all of the MPLS-speaking routers. However – even once we got LDP up and running, we still had to tell the routers to use a given LSP. In the last post, we accomplished this by adding recursive static routes in the inet.0
table to force the routers to recurse to the inet.3
table where the MPLS LSPs lived. In this post, we’re going to tackle getting rid of the static routes and focus on replacing them with a dynamic routing protocol – BGP.
So to start off with, let’s get our lab back to a place where we can start. To do that, we’re going to load the following configuration on each router shown in the following lab topology…
interfaces {
    ge-0/0/0 {
        enable;
        unit 0 {
            family inet {
                address 10.2.2.0/31;
            }
        }
    }
    ge-0/0/1 {
        enable;
        unit 0 {
            family inet {
                address 10.1.1.0/31;
            }
            family mpls;
        }
    }
    lo0 {
        unit 0 {
            family Continue reading
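For a sense of where we’re headed, the BGP portion ends up looking roughly like the following on each router (a sketch in set-command form; the AS number, group name, and addresses are placeholders drawn loosely from the lab):

# sketch only – AS number, group name, and addresses are placeholders
set routing-options autonomous-system 65000
set protocols bgp group internal type internal
set protocols bgp group internal local-address 1.1.1.1
set protocols bgp group internal neighbor 4.4.4.4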
In our last post, we saw a glimpse of what MPLS was capable of. We demonstrated how routers could forward traffic to IP endpoints without looking at the IP header. Rather, the routers performed label operations by adding (pushing), swapping, or removing (popping) labels on and off the packet. This worked well and meant that the core routers didn’t need to have IP reachability information for all destinations. However – setting this up was time consuming. We had to configure static paths and operations on each MPLS-enabled router, and even in our small example that was tedious. So in this post we’ll look at leveraging the Label Distribution Protocol (LDP) to do some of the work for us. For the sake of clarity, we’re going to once again start with a blank slate. So back to our base lab that looked like this…
Note: I refer to the devices as routers 1-4 but you’ll notice in the CLI output that their names are vMX1-4.
Each device had the following base configuration…
interfaces {
    ge-0/0/0 {
        enable;
        unit 0 {
            family inet {
                address 10.2. Continue reading
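The LDP piece we’ll be layering on top of that base is short – roughly the following per MPLS-facing interface (a sketch in set-command form; the interface name is just an example from this lab):

# sketch only – interface name is an example
set interfaces ge-0/0/1 unit 0 family mpls
set protocols mpls interface ge-0/0/1.0
set protocols ldp interface ge-0/0/1.0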
Here we are – the first day of 2018 and I’m anxious and excited to get 2018 off to a good start. Looking back – it just occurred to me that I didn’t write one of these for last year. Not sure what happened there, but I’m glad to be getting back on track. So let’s start with 2017…
2017 was a great year for me. I started the year continuing my work at IBM with the Watson group. About halfway through the year (I think) I was offered the opportunity to transition to a role in the Cloud Networking group. It was an opportunity I couldn’t pass up to work with folks for whom I had an incredible amount of respect. So I began the transition and within 3 months had fully transitioned to the new team. Since then, I’ve been heads down working (the reason for the lack of blog posts recently (sorry!)). But being busy at work is a good thing for me. For those of you that know me well you know that “bored Jon” is “not happy Jon” so I’m in my own Continue reading
In this series of posts, I want to spend some time reviewing MPLS fundamentals. This has been covered in many places many times before – but I’ve noticed lately that oftentimes the basics are missed or skipped when looking at MPLS. How many “Introduction to MPLS” articles have you read where the first step is “Enable LDP and MPLS on the interface” and they don’t actually explain what’s happening? I disagree with that being a valid starting point, so in this post I’d like to start with the basics. Subsequent posts will build from here as we get more and more advanced with the configuration.
Warning: In order to get up and running with even a basic configuration we’ll need to introduce ourselves to some MPLS terminology and concepts in a very brief fashion. The descriptions of these terms and concepts are intentionally kept brief in this post and will be covered in much greater depth in a future post.
Enough rambling from me, let’s get right into it…
So what is MPLS? MPLS stands for Multi-Protocol Label Switching and it provides a means to forward multiple different protocols across a network. To see what it’s capable Continue reading
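As a taste of the manual approach this post works through, a static LSP on an ingress router looks roughly like this in set-command form (a sketch – the LSP name, next-hop, destination, and label value are all made up):

# sketch only – name, next-hop, destination, and label value are placeholders
set protocols mpls static-label-switched-path to-vmx4 ingress to 4.4.4.4
set protocols mpls static-label-switched-path to-vmx4 ingress next-hop 10.2.2.1
set protocols mpls static-label-switched-path to-vmx4 ingress push 1000000

Each transit router then needs a matching transit entry that swaps or pops the label – exactly the per-hop tedium that LDP removes later in the series.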
Over the last several years I’ve made a couple of efforts to become better at Python. As it stands now – I’d consider myself comfortable with Python, but more in terms of scripting rather than actual Python development. What does Python development mean? To me – at this point – it means writing more than a single Python file that does one thing. So why have my previous efforts failed? I think there have been a few reasons I never progressed much past scripting…
I’ve been spending more time on the MX recently and I thought it would be worthwhile to document some of the basics around interface configuration. If you’re like me, and come from more of a Cisco background, some of the configuration options when working with the MX aren’t as intuitive as you might expect. In this post, I want to walk through the bare-bones basics of configuring interfaces on an MX router.
ge-0/0/0 {
    unit 0 {
        family inet {
            address 10.20.20.16/24;
        }
    }
}
The most basic interface configuration possible is a simple routed interface. You’ll note that the interface address is configured under a unit
. To understand what a unit is you need to understand some basic terminology that Juniper uses. Juniper describes a physical interface as an IFD (Interface Device). In our example above the IFD would be the physical interface ge-0/0/0
. We can then layer one or more IFL (Interface Logical) on top of the IFD. In our example the IFL would be the unit configuration, in this case ge-0/0/0.0
. Depending on the configuration of the IFD you may be able to provision additional units. These additional units (Logical interfaces (IFLs)) Continue reading
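For example, once you enable VLAN tagging on the IFD you can stack several IFLs (units) on it – a sketch in set-command form, where the unit numbers, VLAN IDs, and addresses are made up for illustration:

# sketch only – unit numbers, VLAN IDs, and addresses are placeholders
set interfaces ge-0/0/0 vlan-tagging
set interfaces ge-0/0/0 unit 10 vlan-id 10
set interfaces ge-0/0/0 unit 10 family inet address 10.10.10.1/24
set interfaces ge-0/0/0 unit 20 vlan-id 20
set interfaces ge-0/0/0 unit 20 family inet address 10.10.20.1/24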
In the last 4 posts we’ve examined the fundamentals of Kubernetes networking…
Kubernetes networking 101 – Pods
Kubernetes networking 101 – Services
Kubernetes networking 101 – (Basic) External access into the cluster
Kubernetes networking 101 – Ingress resources
My goal with these posts has been to focus on the primitives and to show how a Kubernetes cluster handles networking internally as well as how it interacts with the upstream or external network. Now that we’ve seen that, I want to dig into a networking plugin for Kubernetes – Calico. Calico is interesting to me as a network engineer because of the wide variety of functionality that it offers. To start with though, we’re going to focus on a basic installation. To do that, I’ve updated my Ansible playbook for deploying Kubernetes to incorporate Calico. The playbook can be found here. If you’ve been following along up until this point, you have a couple of options.
I called my last post ‘basic’ external access into the cluster because I didn’t get a chance to talk about the ingress object. Ingress resources are interesting in that they allow you to use one object to load balance to different back-end objects. This could be handy for several reasons and gives you a more fine-grained means of load balancing traffic. Let’s take a look at an example of using the Nginx ingress controller in our Kubernetes cluster.
To demonstrate this we’re going to continue using the same lab that we used in previous posts, but for the sake of level setting we’re going to start by clearing the slate. Let’s delete all of the objects in the cluster and then build them back from scratch so you can see, every step of the way, how we set up and use the ingress.
kubectl delete deployments --all
kubectl delete pods --all
kubectl delete services --all
Since this will kill our net-test
pod, let’s start that again…
kubectl run net-test --image=jonlangemak/net_tools
Recall that we used this pod as a testing endpoint so we could simulate traffic originating from a pod, so it’s worth keeping around.
Alright – now that we Continue reading
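To give a sense of the rebuild, the first steps look something like this sketch – the deployment name is made up, the image is the same web server used elsewhere in the series, and the ports are examples:

# sketch only – deployment name and ports are placeholders
kubectl run web-front-end --image=jonlangemak/web_server_1 --port=80
kubectl expose deployment web-front-end --port=80
kubectl get services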
In our last post we talked about an important Kubernetes networking construct – the service. Services provide a means for pods running within the cluster to find other pods and also provide rudimentary load balancing capabilities. We saw that services can create DNS entries within Kube-DNS, which makes a service accessible by name as well. So now that we know how you can use services to access pods within the cluster, it seems prudent to talk about how things outside of the cluster can access these same services. It might make sense to use the same service construct to provide this functionality, but recall that the services are assigned IP addresses that are only known to the cluster. In reality, the service CIDR isn’t actually routed anywhere, but the Kubernetes nodes know how to interact with service IPs because of the netfilter rules programmed by the kube-proxy. The service network just needs to be unique so that the containers running in the pod will follow their default route out to the host, where the netfilter rules will come into play. So really the service network is sort of non-existent from a routing perspective as it’s only locally significant to each host. Continue reading
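As a preview of one of the simpler options covered, exposing an existing deployment outside the cluster with a NodePort looks roughly like this (a sketch; the deployment name and port are placeholders reused from earlier posts):

# sketch only – deployment name and port are placeholders
kubectl expose deployment pod-test-1 --port=80 --type=NodePort
kubectl get service pod-test-1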
Every once in a great while there is a need to simulate bad network behavior. Simulating things like packet loss and high latency is often harder to do than you’d expect. However – if you’re dealing with a Linux host – there’s a simple solution. The ‘tc’ command which comes along with the ‘iproute2’ toolset can help you simulate symptoms of a bad network quite easily.
The tc command offers a lot of functionality, but in this post we’re just going to walk through a couple of quick examples of using it in conjunction with netem (the network emulator) included in the Linux kernel. To do this, we’ll just use two hosts…
To start with – let’s make sure that ‘tc’ is installed and that it’s working…
user@ubuntu-1:~$ sudo tc qdisc show dev ens32
qdisc pfifo_fast 0: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
user@ubuntu-1:~$
So what did we just do here? Well, we used the tc command to return the current qdisc configuration for our server’s physical network interface named ‘ens32’. So what’s a qdisc? Qdisc is shorthand for ‘queueing discipline’ and defines the queuing Continue reading
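To give a flavor of where this goes, adding, changing, and removing netem impairments looks roughly like this (a sketch; the interface name and the delay/loss values are just examples):

# sketch only – interface name and values are examples
sudo tc qdisc add dev ens32 root netem delay 100ms
sudo tc qdisc change dev ens32 root netem loss 5%
sudo tc qdisc del dev ens32 root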
In our last post we talked about how Kubernetes handles pod networking. Pods are an important networking construct in Kubernetes but by themselves they have certain limitations. Consider for instance how pods are allocated. The cluster takes care of running the pods on nodes – but how do we know which nodes it chose? Put another way – if I want to consume a service in a pod, how do I know how to get to it? We saw at the very end of the last post that the pods themselves could be reached directly by their allocated pod IP address (an anti-pattern for sure but it still works) but what happens when you have 3 or 4 replicas? Services aim to solve these problems for us by providing a means to talk to one or more pods grouped by labels. Let’s dive right in…
To start with, let’s look at our lab where we left off at the end of our last post…
If you’ve been following along with me there are some pods currently running. Let’s clear the slate and delete the two existing test deployments we had out there…
user@ubuntu-1:~$ kubectl delete deployment pod-test-1
deployment "pod-test-1" Continue reading
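From the clean slate, the post builds back up with a deployment and a service in front of it – roughly this sketch, where the name, image, replica count, and port are placeholders consistent with the earlier examples:

# sketch only – name, image, replica count, and port are placeholders
kubectl run pod-test-1 --image=jonlangemak/web_server_1 --replicas=3 --port=80
kubectl expose deployment pod-test-1 --port=80
kubectl get endpoints pod-test-1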
Some time ago I wrote a post entitled ‘Kubernetes networking 101’. Looking at the published date I see that I wrote that more than 2 years ago! That being said – I think it deserves a refresher. This time around, I’m going to split the topic into smaller posts in the hopes that I can more easily maintain them as things change. In today’s post we’re going to cover how networking works for Kubernetes pods. So let’s dive right in!
In my last post – I described a means by which you can quickly deploy a small Kubernetes cluster using Ansible. I’ll be using that environment for all of the examples shown in these posts. To refresh our memory – let’s take another quick look at the topology…
The lab consists of 5 hosts with ubuntu-1 acting as the Kubernetes master and the remaining nodes acting as Kubernetes minions (often called nodes now, but I can’t break the habit). At the end of our last post we had what looked like a working Kubernetes cluster and had deployed our first service and pods to it. Prior to deploying to the cluster we had to add some routing in the form Continue reading
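A quick way to see the pod networking pieces we’ll be looking at is to schedule a pod and check which node and pod IP it landed on – a sketch using the same test image from earlier posts:

# sketch – the pod name and image are reused from earlier posts
kubectl run net-test --image=jonlangemak/net_tools
kubectl get pods -o wide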
Some of you will recall that I had previously written a set of SaltStack states to provision a bare metal Kubernetes cluster. The idea was that you could use it to quickly deploy (and redeploy) a Kubernetes lab so you could get more familiar with the project and do some lab work on a real cluster. Kubernetes is a fast moving project and I think you’ll find that those states likely no longer work with all of the changes that have been introduced into Kubernetes. As I looked to refresh the posts I found that I was now much more comfortable with Ansible than I was with SaltStack so this time around I decided to write the automation using Ansible (I did also update the SaltStack version but I’ll be focusing on Ansible going forward).
However – before I could automate the configuration I had to remind myself how to do the install manually. To do this, I leaned heavily on Kelsey Hightower’s ‘Kubernetes the hard way‘ instructions. These are a great resource and if you haven’t installed a cluster manually before I suggest you do that before attempting to automate an install. You’ll find that the Ansible role Continue reading
In the first post of this series we talked about some of the CNI basics. We then followed that up with a second post showing a more real world example of how you could use CNI to network a container. We’ve covered IPAM lightly at this point since CNI relies on it for IP allocation but we haven’t talked about what it’s doing or how it works. In addition – DNS was discussed from a parameter perspective in the first post where we talked about the CNI spec but that’s about it. The reason for that is that CNI doesn’t actually configure container DNS. Confused? I was too. I mean why is it in the spec if I can’t configure it?
To answer these questions, and see how IPAM and DNS work with CNI, I think a deep dive into an actual CNI implementation would be helpful. That is – let’s look at a tool that actually implements CNI to see how it uses it. To do that we’re going to look at the container runtime from the folks at CoreOS – Rocket (rkt). Rkt can be installed fairly easily using this set of commands…
wget https://github.com/coreos/rkt/releases/download/v1.25.0/rkt_1. Continue reading
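Once rkt is installed, pointing a pod at a CNI-defined network is a one-liner along these lines (a sketch; ‘mynetwork’ is a placeholder for whatever network you define under /etc/rkt/net.d):

# sketch only – 'mynetwork' is a placeholder CNI network name
sudo rkt run --insecure-options=image --interactive --net=mynetwork docker://busybox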
In our last post we introduced ourselves to CNI (if you haven’t read that yet, I suggest you start there) as we worked through a simple example of connecting a network namespace to a bridge. CNI managed both the creation of the bridge as well as connecting the namespace to the bridge using a VETH pair. In this post we’ll explore how to do this same thing but with a container created by Docker. As you’ll see, the process is largely the same. Let’s jump right in.
This post assumes that you followed the steps in the first post (Understanding CNI) and have a ‘cni’ directory (~/cni) that contains the CNI binaries. If you don’t have that – head back to the first post and follow the steps to download the pre-compiled CNI binaries. It also assumes that you have a default Docker installation. In my case, I’m using Docker version 1.12.
The first thing we need to do is to create a Docker container. To do that we’ll run this command…
user@ubuntu-2:~/cni$ sudo docker run --name cnitest --net=none -d jonlangemak/web_server_1
835583cdf382520283c709b5a5ee866b9dccf4861672b95eccbc7b7688109b56
user@ubuntu-2:~/cni$
Notice that when we ran the command we told Docker to use a network of ‘none’. Continue reading
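From here, the general shape of wiring the container up is to grab its network namespace from Docker and hand it to the CNI plugin – a sketch under the assumption that your bridge network config from the first post is saved as mybridge.conf in the cni directory:

# sketch – 'mybridge.conf' is assumed to be the network config created in the first post
pid=$(sudo docker inspect --format '{{ .State.Pid }}' cnitest)
sudo CNI_COMMAND=ADD CNI_CONTAINERID=cnitest CNI_NETNS=/proc/$pid/ns/net \
     CNI_IFNAME=eth0 CNI_PATH=$(pwd) ./bridge < mybridge.conf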