This is the story of studying Kubernetes basics from the perspective of a network engineer. I had a basic Linux background, some free time, and the willingness to discover this brave new world of containers, pods and microservices.
I think one of the best ways to do this kind of studying is to follow the blueprint of a recognized industry certification. It gives you a concrete study plan and brings structure to your knowledge from the very beginning.
There are such certifications in the Kubernetes world – CKA and CKAD from CNCF/The Linux Foundation. These certifications are quite popular (as is k8s in general), so there is a LOT of study material out there on the Internet. Below is the list of sources I’ve used.
In another part of his never-ending EVPN/BGP saga Ivan Pepelnjak once again argued with Juniper fanboys about the sanity of iBGP-over-eBGP and eBGP-over-eBGP designs and all that fun stuff. I’ve already written my opinion on that topic in my previous post and in numerous comments to Ivan’s posts (TL;DR: the iBGP-over-eBGP design has its advantages, just implement it wisely – don’t place the RRs on spine switches).
But there is one thing that worries me. In almost every one of his posts Ivan talks about some mythical Junos limitations that don’t allow Juniper to support an eBGP-only (over a single session) design. So let’s find out what these limitations are.
Juniper has a freely available version of vQFX for Vagrant, and there are a few lab topologies available on GitHub. I will be using the full-2qfx-4srv-evpnvxlan topology in this post.
This topology comes with an Ansible playbook that configures the vQFX switches with iBGP-over-OSPF EVPN. The standard Juniper configuration, just for reference:
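In case you have never seen it, an iBGP-over-OSPF EVPN-VXLAN configuration on a vQFX boils down to something like the sketch below. This is a condensed illustration rather than the playbook output – the AS number, loopback addresses, interface names and VNI values are illustrative.

set interfaces lo0 unit 0 family inet address 10.0.0.1/32
set routing-options router-id 10.0.0.1
set routing-options autonomous-system 65000

# underlay: OSPF only provides loopback-to-loopback reachability
set protocols ospf area 0.0.0.0 interface xe-0/0/0.0
set protocols ospf area 0.0.0.0 interface lo0.0 passive

# overlay: iBGP session between loopbacks carrying EVPN routes
set protocols bgp group EVPN type internal
set protocols bgp group EVPN local-address 10.0.0.1
set protocols bgp group EVPN family evpn signaling
set protocols bgp group EVPN neighbor 10.0.0.2

# EVPN-VXLAN service in the default switch instance
set protocols evpn encapsulation vxlan
set protocols evpn extended-vni-list all
set switch-options vtep-source-interface lo0.0
set switch-options route-distinguisher 10.0.0.1:1
set switch-options vrf-target target:65000:1
set vlans v100 vlan-id 100
set vlans v100 vxlan vni 10100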
Did you know that Juniper vQFX images are available in Vagrant Cloud? There is a vQFX RE image and a vQFX PFE one. You can use only the RE image to build simple topologies, or pair every RE with a PFE to use more complex protocols. There is also a bunch of examples in Juniper’s GitHub repository.
What is Vagrant? Let me quote the official website: “Vagrant is a tool for building and managing virtual machine environments in a single workflow. Vagrant gives you a disposable environment and consistent workflow for developing and testing infrastructure management scripts.” I hope you already knew that. And I also hope there is no need to introduce Ansible.
I played with this stuff a little bit and created a couple of new examples using the full vQFX option (i.e. RE+PFE for every box) – an IP fabric and an EVPN-VXLAN fabric.
This is a really easy way to get familiar with the configuration of an IP fabric and EVPN-VXLAN on QFX (and some Ansible as well) if you don’t want to spend time figuring out how to set everything up in GNS3 or EVE-NG.
Just a few simple steps:
The Ethernet Virtual Private Network (EVPN) solution is becoming pervasive for Network Virtualization Overlay (NVO) services in data center (DC) networks and as the next-generation VPN service in service provider (SP) networks. As a result of this popularity, a lot of work is going on in the IETF in this area. In this post I collect links to some of the interesting IETF drafts.
All this information is relevant for the 1H 2018 timeframe. Pay attention to which version of a draft you’re reading – these drafts are frequently updated.
Service Chaining using Virtual Networks with BGP VPNs
This document describes how service function chains (SFC) can be applied to traffic flows using routing in a virtual (overlay) network to steer traffic between service nodes. Chains can include services running in routers, on physical appliances or in virtual machines. Two techniques are described: in one the service chain is implemented as a sequence of distinct VPNs between sets of service nodes that apply each service function; in the other, the routes within a VPN are modified through the use of special route targets and modified next-hop resolution to achieve the desired result.
Ivan Pepelnjak described his views on BGP design options for EVPN-based data center fabrics in this article. In the comments to the following blog post we briefly discussed the sanity of the eBGP underlay + iBGP overlay design option and came to the conclusion that we disagree on this subject. In this blog post I try to summarize my thoughts about this design option.
Let’s start with the basics – what’s the idea behind an underlay/overlay design? It’s quite simple – it is a logical separation of duties between the “underlying infrastructure” (providing simple IP transport, in the case of DC fabrics) and some “service” overlaid on top of it (be it L3VPN, EVPN or any hypervisor-based SDN solution). The key word in the previous sentence is “separation”. In a good design the overlay and underlay should be as separate and independent from each other as possible. This provides a lot of benefits, the most important of which is the ability to implement a “smart edge – simple core” network design, where your core devices (= spines in the DC fabric case) don’t need to understand all the complex VPN-related protocols or hold customer-related state.
We have been using this design option for a long time – OSPF for the underlay and iBGP for the overlay is the de-facto standard …
In this post I try to collect links to all interesting RFCs and drafts related to DC, EVPN and network overlays.
Some of these documents are established industry standards, some are drafts aiming to become such standards, and others are just informational documents, often already outdated and forgotten but still interesting and useful. So pay attention to the time frame in which each particular document was created and updated.
RFC 7209 – Requirements for Ethernet VPN (EVPN)
This document specifies the requirements for an Ethernet VPN (EVPN) solution, which addresses various VPLS issues.
RFC 7432 – BGP MPLS-Based Ethernet VPN
This document describes procedures for BGP MPLS-based Ethernet VPNs (EVPN-MPLS).
RFC 7938 – Use of BGP for Routing in Large-Scale Data Centers
This document summarizes operational experience in designing and operating large-scale data centers using BGP as the only routing protocol.
RFC 7348 – Virtual eXtensible Local Area Network (VXLAN)
This document describes Virtual eXtensible Local Area Network (VXLAN), which is used to address the need for overlay networks within virtualized data centers accommodating multiple tenants.
RFC 7364 – Problem Statement: Overlays for Network Virtualization
This document describes issues associated with providing multi-tenancy in large data center networks and how these issues may be addressed using an overlay-based network virtualization approach.
Brand new book – This Week: Data Center Deployment with EVPN/VXLAN.
The author did a great job explaining and showing various examples of real-world implementations of EVPN-VXLAN and DCI.
Definitely a must-read for anybody aiming for the JNCIE-DC lab. I wish I had read it before my lab attempt – this book really helps to refresh and systematize all EVPN-VXLAN related knowledge.
But it’s definitely not the first book to read if you don’t know anything about EVPN-VXLAN. I recommend first reading all the materials from this post; only after that will this book be really useful to you.
After a week of waiting (why does it take so long? it wasn’t a particularly pleasant week), I finally got my number.
Brand new JNCIE-DC #31 !!!
The main note about the lab – time management is the most important thing on the exam. Don’t rush to the keyboard; read and understand all the tasks and their interdependencies. Have a plan for the order of tasks – not all tasks can be completed in the order in which they are written. Don’t be afraid to skip a task if it is taking too long.
I am quite pleased with the level of my preparation for the lab – there were no unexpected or incomprehensible tasks. My general feeling about the JNCIE-DC lab: it is an interesting, pretty complex, but fair exam. There are a lot of tasks on various themes; I think all the themes from the blueprint are covered in the lab in some way.
As the proctor told me, the main difficulty of this exam is that it’s something new, and people are afraid of the new and unexpected. I want to tell you – don’t be afraid! If you’re interested in learning the Juniper way of building data center networks, and also want to earn one more pretty …
How fast time flies! Tomorrow (August 10) is my JNCIE-DC lab day.
I spent the last couple of days reviewing my notes and labbing small optional topics like CoPP, ZTP, etc., as well as familiar ones like CoS and MPLS L3VPN.
Today is a rest day. Fly to Amsterdam, drink a couple of beers and go to bed. Fortunately I’ve been there before, so there are no worries about finding the Juniper office or being late for the exam.
Plan for tomorrow: go to the lab and get the job done.
The last big topic I need to practice is Data Center Interconnect (DCI).
Fortunately I’m pretty confident in my skills in the MPLS L3VPN area, so I don’t think I’ll need to spend much time on that part.
That leaves the most complex DCI option – EVPN stitching. In this post I will show you my example of EVPN-VXLAN to EVPN-MPLS stitching (there is also the option of EVPN-VXLAN to EVPN-VXLAN stitching, but its configuration should be similar to my example).
The EVPN stitching concept is pretty simple – you just need to configure two EVPN instances on each of the DC gateway devices (MX routers) and connect them to each other using logical tunnel (lt-) interfaces.
Diagram of my EVPN stitching lab:
Due to time constraints I’ll show you only the upper part of the topology – stitching on the vMX1 and vMX3 routers. The configurations of vMX2 and vMX4 should be exactly the same as these.
So let’s see the vMX1 routing-instances configuration:
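In essence it comes down to the sketch below: one EVPN-VXLAN virtual-switch facing the fabric, one EVPN-MPLS virtual-switch facing the WAN, glued together by an lt- pair. This is a condensed illustration rather than my exact lab config – the instance names, lt- port, route distinguishers, route targets and VNI are illustrative.

# tunnel services are needed for the lt- (logical tunnel) interfaces
set chassis fpc 0 pic 0 tunnel-services bandwidth 1g

# back-to-back lt- pair connecting the two EVPN instances
set interfaces lt-0/0/0 unit 0 encapsulation ethernet-bridge peer-unit 1
set interfaces lt-0/0/0 unit 0 family bridge
set interfaces lt-0/0/0 unit 1 encapsulation ethernet-bridge peer-unit 0
set interfaces lt-0/0/0 unit 1 family bridge

# EVPN-VXLAN instance facing the DC fabric
set routing-instances EVPN-VXLAN instance-type virtual-switch
set routing-instances EVPN-VXLAN vtep-source-interface lo0.0
set routing-instances EVPN-VXLAN route-distinguisher 10.0.0.11:100
set routing-instances EVPN-VXLAN vrf-target target:65000:100
set routing-instances EVPN-VXLAN protocols evpn encapsulation vxlan
set routing-instances EVPN-VXLAN protocols evpn extended-vni-list 10100
set routing-instances EVPN-VXLAN bridge-domains bd100 vlan-id 100
set routing-instances EVPN-VXLAN bridge-domains bd100 vxlan vni 10100
set routing-instances EVPN-VXLAN bridge-domains bd100 interface lt-0/0/0.0

# EVPN-MPLS instance facing the WAN / remote DC
set routing-instances EVPN-MPLS instance-type virtual-switch
set routing-instances EVPN-MPLS route-distinguisher 10.0.0.11:200
set routing-instances EVPN-MPLS vrf-target target:65000:200
set routing-instances EVPN-MPLS protocols evpn extended-vlan-list 100
set routing-instances EVPN-MPLS bridge-domains bd100 vlan-id 100
set routing-instances EVPN-MPLS bridge-domains bd100 interface lt-0/0/0.1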
First of all, the QFX5100 series doesn’t support inter-VXLAN routing in EVPN-VXLAN, so I practice all IRB-related topics on the vMX devices. The vQFXs act as simple L2 EVPN gateways.
This post continues the EVPN-VXLAN lab from the previous ones.
Full vMX IRB interfaces configuration:
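In essence, an EVPN anycast-gateway IRB setup on vMX boils down to something like this sketch – the addresses, unit numbers and the instance/VRF names (EVPN, TENANT1) are illustrative, not my exact lab values:

# anycast gateway: same virtual-gateway-address on every vMX, unique interface address per box
set interfaces irb unit 100 family inet address 192.168.100.2/24 virtual-gateway-address 192.168.100.1
set interfaces irb unit 200 family inet address 192.168.200.2/24 virtual-gateway-address 192.168.200.1

# attach the IRBs to their bridge domains in the EVPN virtual-switch
set routing-instances EVPN bridge-domains bd100 routing-interface irb.100
set routing-instances EVPN bridge-domains bd200 routing-interface irb.200

# put both IRBs into a tenant VRF to get inter-VLAN (inter-VNI) routing
set routing-instances TENANT1 instance-type vrf
set routing-instances TENANT1 interface irb.100
set routing-instances TENANT1 interface irb.200
set routing-instances TENANT1 route-distinguisher 10.0.0.11:10
set routing-instances TENANT1 vrf-target target:65000:10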
This post continues the EVPN-VXLAN lab from the previous one.
For now I have configured the simplest possible variant of RT assignment – one vrf-target for all ES and VNI routes (vrf-target target:65000:1):
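On the vQFX that is a single line under switch-options; on a vMX virtual-switch instance the equivalent goes under the routing instance (the instance name below is illustrative):

# vQFX (EVPN-VXLAN in the default switch instance)
set switch-options vrf-target target:65000:1

# vMX (EVPN-VXLAN in a virtual-switch routing instance)
set routing-instances EVPN vrf-target target:65000:1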
My EVPN-VXLAN lab topology:
There is an IP fabric in DC1 (2 vMX and 2 vQFX), and 2 vMX_v14 to emulate CE devices. Each CE device is connected to the EVPN via LACP LAG ae0 (an EVPN active-active Ethernet segment on the service side). vMX_old-1 also has a single-homed interface ge-0/0/4 (just to show you the difference).
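On the fabric side an active-active Ethernet segment means the ae0 towards a given CE carries the same ESI and LACP system-id on every PE it attaches to, roughly like this (ESI and system-id values are illustrative):

set interfaces ae0 esi 00:11:11:11:11:11:11:11:11:11
set interfaces ae0 esi all-active
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 aggregated-ether-options lacp system-id 00:00:00:11:11:11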
Each CE device is split into two logical systems for more convenient testing of routing functionality (the global device context for Vlan100 and logical system “second” for Vlan200). You could also use virtual-router routing instances for that if you prefer. The rest of the CE config is pretty self-explanatory:
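Roughly, the CE side is just an LACP bundle with one VLAN unit in the main context and one in the logical system; a sketch (member ports and addresses are illustrative):

set chassis aggregated-devices ethernet device-count 1
set interfaces ge-0/0/1 gigether-options 802.3ad ae0
set interfaces ge-0/0/2 gigether-options 802.3ad ae0
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 vlan-tagging

# Vlan100 lives in the global context
set interfaces ae0 unit 100 vlan-id 100
set interfaces ae0 unit 100 family inet address 192.168.100.11/24

# Vlan200 lives in logical system "second"
set logical-systems second interfaces ae0 unit 200 vlan-id 200
set logical-systems second interfaces ae0 unit 200 family inet address 192.168.200.11/24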
For simplified Layer 3 gateway services, where Layer 3 routing protocols are not run on the MC-LAG peers, you simply configure the same Layer 3 gateway IP address on both MC-LAG peers and enable IRB MAC address synchronization. This IP address is used as the default gateway for the MC-LAG servers or hosts.
For more advanced Layer 3 gateway services, where Layer 3 routing protocols and Layer 3 multicast operations are required on the MC-LAG peers, you configure unique IRB interface addresses on each MC-LAG peer and then configure the Virtual Router Redundancy Protocol (VRRP) between the peers in an active/standby role.
To help with some forwarding operations, the IRB MAC address of each peer is replicated on the other peer and is installed as a MAC address with the forwarding next hop of the ICL-PL. This is achieved by configuring a static ARP entry for the remote peer.
IRB interface configurations:
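For the VRRP variant, one peer’s side of it looks roughly like this sketch – the addresses, the peer’s IRB MAC and the ICL interface name (ae1 here) are illustrative, and the second peer mirrors it with a lower priority:

# unique IRB address per peer, shared virtual gateway via VRRP
set interfaces irb unit 100 family inet address 10.10.10.2/24 vrrp-group 1 virtual-address 10.10.10.1
set interfaces irb unit 100 family inet address 10.10.10.2/24 vrrp-group 1 priority 200
set interfaces irb unit 100 family inet address 10.10.10.2/24 vrrp-group 1 accept-data

# static ARP for the peer's IRB address, pointing its (illustrative) MAC at the ICL
set interfaces irb unit 100 family inet address 10.10.10.2/24 arp 10.10.10.3 l2-interface ae1.100 mac 00:11:22:33:44:02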