
Author Archives: Alex

my CKA/CKAD study plan

This is the story of studying Kubernetes basics from the perspective of a network engineer. I had a basic Linux background, some free time, and the willingness to discover this brave new world of containers, pods and microservices.

I think one of the best ways to approach this kind of studying is to follow the blueprint of a recognized industry certification. This gives you a concrete study plan and brings structure to your knowledge from the very beginning.

There are such certifications in the Kubernetes world – CKA and CKAD from CNCF/The Linux Foundation. These are quite popular certifications (as is k8s in general), so there is a LOT of study material out there on the Internet. Below is the list of sources I've used.

  1. KubeAcademy from VMware. A collection of short courses covering the 101 of containers, Docker and Kubernetes. I found it useful as a first quick dive into this area of knowledge.
  2. Kubernetes: Up and Running. The must-read book about Kubernetes architecture and concepts. It explains everything in great detail.
  3. Great courses from Mumshad Mannambeth on Udemy – CKA and CKAD. These courses contain almost everything you need to know to pass the exams, and also have a lot of practice labs to consolidate theoretical Continue reading

Juniper EVPN BGP options – eBGP-only design

In another part of his never-ending EVPN/BGP saga Ivan Pepelnjak argued with Juniper fanboys once again about the sanity of iBGP-over-eBGP and eBGP-over-eBGP designs and all that fun stuff. I've already written my opinion on that topic in my previous post and in numerous comments to Ivan's posts (TL;DR: the iBGP-over-eBGP design has its advantages, just implement it wisely – don't place RRs on spine switches).

But there is one thing that worries me. In almost every one of his posts Ivan talks about some mythical Junos limitations that don't allow Juniper to support an eBGP-only (over a single session) design. So let's find out what these limitations are.
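For reference, the design in question runs a single eBGP session per fabric link and carries both the underlay and the EVPN overlay over it. A minimal sketch of what the leaf side of such a session might look like (the ASN and peer address are assumptions for illustration):

protocols {
    bgp {
        group fabric {
            type external;
            family inet {
                unicast;              ### underlay prefixes
            }
            family evpn {
                signaling;            ### overlay routes on the same session
            }
            neighbor 10.0.0.0 {
                peer-as 65000;        ### spine AS
            }
        }
    }
}

The crux of the debate is what happens to the protocol next hop of the EVPN routes when the spines re-advertise them – it has to keep pointing at the egress VTEP.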

Juniper has a freely available version of vQFX for Vagrant. There are a few lab topologies available on GitHub. I will be using the full-2qfx-4srv-evpnvxlan topology in this post.

This topology comes with an Ansible playbook that configures the vQFX switches with iBGP-over-OSPF EVPN. Standard Juniper configuration, just for reference:

protocols {
    ospf {
        area 0.0.0.0 {
            interface lo0.0 {
                passive;
            }
            Continue reading

Vagrant + vQFX + Ansible = EVPN-VXLAN Fabric

Did you know that Juniper vQFX images are available in Vagrant Cloud? There is a vQFX RE image and a vQFX PFE one. You can use only the RE image to build simple topologies, or pair every RE with a PFE to use more complex protocols. There is also a bunch of examples in Juniper's GitHub repository.

What is Vagrant? Let me quote the official website: “Vagrant is a tool for building and managing virtual machine environments in a single workflow. Vagrant gives you a disposable environment and consistent workflow for developing and testing infrastructure management scripts.” I hope you already knew that. And I also hope there is no need to introduce Ansible.

I played with this stuff a little bit and created a couple of new examples using the full vQFX option (i.e. RE+PFE for every box) – an IP fabric and an EVPN-VXLAN fabric.

This is a really easy way to get yourself familiar with the configuration of an IP fabric and EVPN-VXLAN on QFX (and some Ansible as well) if you don't want to spend time figuring out how to set everything up in GNS3 or EVE-NG.

Just a few simple steps:

List of EVPN and DC-related IETF drafts (as of 1H 2018)

The Ethernet Virtual Private Network (EVPN) solution is becoming pervasive for Network Virtualization Overlay (NVO) services in data center (DC) networks and as the next-generation VPN service in service provider (SP) networks. As a result of this popularity, a lot of work is going on in the IETF in this area. In this post I collect links to some of the interesting IETF drafts in this area.
All this information is relevant for the 1H 2018 timeframe. Pay attention to which version of a draft you're reading – these drafts are frequently updated.


Service Chaining using Virtual Networks with BGP VPNs

This document describes how service function chains (SFC) can be applied to traffic flows using routing in a virtual (overlay) network to steer traffic between service nodes. Chains can include services running in routers, on physical appliances or in virtual machines. Two techniques are described: in one the service chain is implemented as a sequence of distinct VPNs between sets of service nodes that apply each service function; in the other, the routes within a VPN are modified through the use of special route targets and modified next-hop resolution to achieve the desired result.


Applicability of Continue reading

BGP design options for EVPN in Data Center Fabrics

Ivan Pepelnjak described his views on BGP design options for EVPN-based Data Center fabrics in this article. In the comments to a subsequent blog post we briefly discussed the sanity of the eBGP underlay + iBGP overlay design option and came to the conclusion that we disagree on this subject. In this blog post I'll try to summarize my thoughts about these design options.

Let’s start with the basics – what’s the idea behind an underlay/overlay design? It’s quite simple – it is a logical separation of duties between the “underlying infrastructure” (providing simple IP transport, in the case of DC fabrics) and some “service” overlaid on top of it (be it L3VPN or EVPN or any hypervisor-based SDN solution). The key word in the previous sentence is “separation”. In a good design, overlay and underlay should be as separate and independent from each other as possible. This provides a lot of benefits – the most important of which is the ability to implement a “smart edge – simple core” network design, where your core devices (= spines in the DC fabric case) don’t need to understand all the complex VPN-related protocols or hold customer-related state.
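On Junos, the overlay half of such a design might look like the following minimal sketch (the loopback and route-reflector addresses are assumptions for illustration; per the advice above, the RR sits off the spines):

protocols {
    bgp {
        group overlay {
            type internal;
            local-address 1.1.1.1;        ### lo0.0 of this leaf
            family evpn {
                signaling;
            }
            neighbor 10.10.10.10;         ### route reflector, not on a spine
        }
    }
}

The spines stay out of this group entirely – they only need to provide IP reachability between the loopbacks.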

We have been using this design option for a long time – OSPF for the underlay and iBGP for the overlay is the de-facto Continue reading

List of EVPN and DC-related RFCs

In this post I try to collect links to all interesting RFCs and drafts related to DC, EVPN and network overlays.

Some of these documents are complete industry standards, some are drafts aiming to become such standards, and others are just informational documents, often already outdated and forgotten, but still interesting and useful despite that. So pay attention to the time frame in which each particular document was created and updated.


RFC 7209 – Requirements for Ethernet VPN (EVPN)

This document specifies the requirements for an Ethernet VPN (EVPN) solution, which addresses various VPLS issues.


RFC 7432 – BGP MPLS-Based Ethernet VPN

This document describes procedures for BGP MPLS-based Ethernet VPNs (EVPN-MPLS).


RFC 7938 – Use of BGP for Routing in Large-Scale Data Centers

This document summarizes operational experience in designing and operating large-scale data centers using BGP as the only routing protocol.


RFC 7348 – Virtual eXtensible Local Area Network (VXLAN)

This document describes Virtual eXtensible Local Area Network (VXLAN), which is used to address the need for overlay networks within virtualized data centers accommodating multiple tenants.


RFC 7364 – Problem Statement: Overlays for Network Virtualization

This document describes issues associated with providing multi-tenancy Continue reading

This Week: Data Center Deployment with EVPN/VXLAN

Brand new book – This Week: Data Center Deployment with EVPN/VXLAN.

The author did a great job explaining EVPN-VXLAN and DCI and showing various examples of real-world implementations.
Definitely a must-read for anybody aiming for the JNCIE-DC lab. I wish I'd read it before my lab attempt – this book really helps to refresh and systematize all EVPN-VXLAN-related knowledge.
But it's certainly not the first book to read if you don't know anything about EVPN-VXLAN. I recommend reading all the materials from this post first – only after that will this book be really useful to you.

Got my number!

After a week of waiting (why does it take so long? it wasn't a particularly pleasant week), I finally got my number.

Brand new JNCIE-DC #31 !!!

The main note about the lab – time management is the most important thing on the exam. Don't rush to the keyboard; read and understand all the tasks and their interdependencies. Have a plan for the order of tasks – not all tasks can be completed in the order in which they are written. Don't be afraid to skip a task if it is taking too long.

I am quite pleased with the level of my preparation for the lab – there were no unexpected or incomprehensible tasks. My general feeling about the JNCIE-DC lab – it is an interesting, pretty complex, but fair exam. There are a lot of tasks on various themes; I think all themes from the blueprint are covered in the lab in some way.

As the proctor told me, the main difficulty of this exam is that it's something new, and people are afraid of the new and unexpected. I want to tell you – don't be afraid! If you're interested in learning the Juniper way of building Data Center networks, and also want to earn one more pretty Continue reading

Tomorrow is the big day!

How fast time flies! Tomorrow (August 10) is my JNCIE-DC lab day.

I spent the last couple of days going over my notes and labbing small optional topics like CoPP, ZTP, etc., as well as familiar ones like CoS and MPLS L3VPN.

Today is the rest day. Fly to Amsterdam, drink a couple of beers and go to bed. Fortunately I've been there before, so no worries about finding the Juniper office or being late for the exam.

Plan for tomorrow: go to the lab and get the job done.

EVPN lab – EVPN-VXLAN to EVPN-MPLS stitching

The last big topic that I need to practice is Data Center Interconnect (DCI).
Fortunately I'm pretty confident in my skills in the MPLS L3VPN area, so I shouldn't need to spend much time on this topic.
The most complex DCI option remains – EVPN stitching. In this post I will show you my example of EVPN-VXLAN to EVPN-MPLS stitching (there is also the option of EVPN-VXLAN to EVPN-VXLAN stitching, but the configuration should be similar to my example).
The EVPN stitching concept is pretty simple – you just need to configure two EVPN instances on each of the DC gateway devices (MX routers) and connect them to each other using logical tunnel (lt-) interfaces, as sketched below.
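A minimal sketch of the lt- cross-connect (the unit numbers match the instance configuration below; the encapsulation choice is an assumption on my part):

interfaces {
    lt-0/0/10 {
        unit 1 {
            encapsulation ethernet-bridge;    ### faces the EVPN-VXLAN instance
            peer-unit 2;
        }
        unit 2 {
            encapsulation ethernet-bridge;    ### faces the EVPN-MPLS instance
            peer-unit 1;
        }
    }
}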
Scheme of my EVPN stitching lab:

Due to time constraints I'll show you only the upper part of the topology – stitching on the vMX1 and vMX3 routers. The configurations of vMX2 and vMX4 should be exactly the same as these.
So let's see the vMX1 routing-instances configuration:

alex@vMX1# show routing-instances
evpn {
    vtep-source-interface lo0.0;
    instance-type virtual-switch;
    interface lt-0/0/10.1;
    route-distinguisher 1.1.1.1:1;
    vrf-import Continue reading

EVPN-VXLAN lab – IRB functionality

First of all, the QFX5100 series doesn't support EVPN-VXLAN inter-VXLAN routing, so I practice all IRB-related topics on the vMX devices. The vQFXs act as simple L2 EVPN gateways.
This post continues the EVPN-VXLAN lab from the previous ones.

Full vMX IRB interfaces configuration:

alex@vMX1# show interfaces irb
unit 100 {
    proxy-macip-advertisement;
    family inet {
        address 172.16.0.251/24 {
            virtual-gateway-address 172.16.0.254;
        }
    }
    family inet6 {
        address 2001:dead:beef:100::1/64 {
            virtual-gateway-address 2001:dead:beef:100::a;
        }
    }
}
unit 200 {
    proxy-macip-advertisement;
    family inet {
        address 172.16.1.251/24 {
            virtual-gateway-address 172.16.1.254;
        }
    }
    family inet6 {
        address 2001:dead:beef:200::1/64 {
            virtual-gateway-address 2001:dead:beef: Continue reading

EVPN-VXLAN lab – RT assignment methods

This post continues the EVPN-VXLAN lab from the previous one.

For now I have configured the simplest possible variant of RT assignment – one vrf-target for all ES and VNI routes (vrf-target target:65000:1):

alex@vQFX1# show switch-options
service-id 1;
vtep-source-interface lo0.0;
route-distinguisher 11.11.11.11:1;
vrf-target target:65000:1;    ### This RT applies to ALL EVPN routes


alex@vMX1# show routing-instances
evpn {
    vtep-source-interface lo0.0;
    instance-type virtual-switch;
    interface ge-0/0/4.0;
    interface ae0.0;
    route-distinguisher 1.1.1.1:1;
    vrf-target target:65000:1;   ### This RT applies to ALL EVPN routes
    protocols {
        evpn {
            encapsulation vxlan;
            extended-vni-list [ 100 200 ];
            multicast-mode ingress-replication;
            default-gateway no-gateway-community;
        }
    }
    bridge-domains {
        v100 Continue reading
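For contrast with the single shared RT above, route targets can also be assigned per VNI under vni-options. A minimal sketch of what that might look like (the RT values are assumptions for illustration; the instance-level vrf-target is still needed for the ES routes):

protocols {
    evpn {
        encapsulation vxlan;
        extended-vni-list [ 100 200 ];
        vni-options {
            vni 100 {
                vrf-target target:65000:100;    ### applies only to VNI 100 routes
            }
            vni 200 {
                vrf-target target:65000:200;    ### applies only to VNI 200 routes
            }
        }
    }
}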

EVPN-VXLAN lab – basic L2 switching

My EVPN-VXLAN lab topology:

There is an IP fabric in DC1 (2 vMX and 2 vQFX), and 2 vMX_v14 to emulate CE devices. Each CE device is connected to the EVPN via LACP LAG ae0 (an EVPN Active-Active Ethernet segment on the service side). vMX_old-1 also has a single-homed interface ge-0/0/4 (just to show you the difference).
Each CE device is split into two logical systems for more convenient testing of routing functionality (the global device context for Vlan100 and logical-system second for Vlan200). You could also use virtual-router routing instances for that, if you prefer. The rest of the CE config is pretty self-explanatory:

alex@MX1# show interfaces
ge-0/0/0 {
    description vMX1;
    gigether-options {
        802.3ad ae0;
    }
}
ge-0/0/1 {
    description vMX2;
    gigether-options {
        802.3ad ae0;
    }
}
ge-0/0/4 {
    description vMX1_second;
    flexible-vlan-tagging;
    encapsulation flexible-ethernet-services;
    mac 00:46:d3:04:fe:06;
}
ae0 {
    description to_MC-LAG_vMX;
Continue reading
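On the EVPN side, the Active-Active Ethernet segment for ae0 might look like the following minimal sketch (the ESI value and the LACP system-id are assumptions for illustration):

interfaces {
    ae0 {
        esi {
            00:01:01:01:01:01:01:01:01:01;    ### same ESI on both PEs
            all-active;
        }
        aggregated-ether-options {
            lacp {
                active;
                system-id 00:00:00:00:00:01;  ### must match on both PEs
            }
        }
        unit 0 {
            family bridge {
                interface-mode trunk;
                vlan-id-list [ 100 200 ];
            }
        }
    }
}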

MC-LAG lab – advanced IRB functionality

For simplified Layer 3 gateway services, where Layer 3 routing protocols are not run on the MC-LAG peers, you simply configure the same Layer 3 gateway IP address on both MC-LAG peers and enable IRB MAC address synchronization. This IP address is used as the default gateway for the MC-LAG servers or hosts.
For more advanced Layer 3 gateway services, where Layer 3 routing protocols and Layer 3 multicast operations are required on the MC-LAG peers, you configure unique IRB interface addresses on each MC-LAG peer and then configure the Virtual Router Redundancy Protocol (VRRP) between the peers in an active/standby role.
To help with some forwarding operations, the IRB MAC address of each peer is replicated on the other peer and is installed as a MAC address with the forwarding next hop of the ICL-PL. This is achieved by configuring a static ARP entry for the remote peer.
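The configuration below shows the unique per-peer addresses and the static ARP entry; the VRRP part of the second variant might look something like this minimal sketch (the group number, priority and virtual address are assumptions for illustration):

irb {
    unit 100 {
        family inet {
            address 172.16.0.1/24 {
                vrrp-group 1 {
                    virtual-address 172.16.0.254;
                    priority 200;             ### higher priority wins mastership
                    accept-data;
                }
            }
        }
    }
}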

IRB interface configurations:

vQFX1:
irb {
    unit 100 {
        family inet {
            address 172.16.0.1/24 {
                arp 172.16.0.2 l2-interface xe-0/ Continue reading