In this post, we’re going to explore a technique for steering Layer 2 Circuit traffic onto a dedicated MPLS-TE LSP using JUNOS. The use case is fairly popular amongst Service Providers, where special treatment is required for certain Layer 2 Circuits. This special treatment could be the need for the traffic to follow a specific explicit path through the network, or other traffic-engineering constraints that must be satisfied. A good example is creating deterministic state through the network in order to guarantee path diversity or a low-latency path. This technique can be used alongside LDP, RSVP or SR.
Requirements
– Layer 2 Circuit traffic between CE4 and CE1 must use a dedicated traffic-engineered LSP via the P routers.
– No other traffic is permitted to use the LSP.
– All other traffic must continue to use LDP to reach the egress PE.
Lab Overview
The IGP is based on OSPF and LDP is used as the default label distribution protocol.
PE1 vSRX1 (Ingress PE): 20.1R1.11
PE2 CSR1000V1 (Egress PE): 16.11.01b
Layer 2 Circuit
Firstly, let’s create Layer 2 Circuits between PE1 and PE2 and observe the normal default behaviour.
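For reference, a baseline Layer 2 Circuit on the ingress PE looks something along these lines (a minimal sketch; the neighbour loopback, CE-facing interface and virtual-circuit-id are assumptions for this lab):

set interfaces ge-0/0/1 encapsulation ethernet-ccc
set interfaces ge-0/0/1 unit 0
set protocols l2circuit neighbor 10.255.0.2 interface ge-0/0/1.0 virtual-circuit-id 100

By default, the LDP-signalled pseudowire simply rides whatever transport path exists towards the egress PE loopback, which is the behaviour we’ll observe before steering it onto the dedicated LSP.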
Alright, so Filter-Based Forwarding is nothing new. The technology has been around for a while and is relatively well documented. However, I wanted to share a specific use case where Filter-Based Forwarding can be extremely useful. In this scenario, we’re going to use Filter-Based Forwarding to forward traffic to a dedicated VRF, where it is then pushed through a DDoS appliance and back to the router via a different VRF.
This construct is very useful when you only need to pass specific ingress traffic through the DDoS appliance: for example, traffic destined for customer prefixes that are subscribed to a DDoS service, or traffic from source prefixes that are known to be malicious. Return traffic in either scenario is not passed via the appliance and is routed directly back to the source.
Challenge Statement
Specific ingress traffic received from transit & peering providers, via the TRANSIT VRF, must be pushed to the DIRTY VRF. The traffic must then be forwarded back towards the TRANSIT VRF via an appliance for inspection. Once the traffic is received back into the TRANSIT VRF, it is routed onward as normal.
Solution
The solution involves defining the prefixes that should be considered within the Filter-Based Forwarding Continue reading
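The core of the construct is a firewall filter that matches the protected prefixes and redirects matching traffic into the DIRTY instance. A minimal sketch (the prefix-list name, prefix and interface are assumptions; the TRANSIT and DIRTY names come from the scenario above, and the DIRTY instance itself is omitted for brevity):

set policy-options prefix-list DDOS-PROTECTED 203.0.113.0/24
set firewall family inet filter FBF-DDOS term PROTECTED from destination-prefix-list DDOS-PROTECTED
set firewall family inet filter FBF-DDOS term PROTECTED then routing-instance DIRTY
set firewall family inet filter FBF-DDOS term DEFAULT then accept
set interfaces ge-0/0/0 unit 0 family inet filter input FBF-DDOS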
In this post, I want to discuss how to verify Virtual Gateway forwarding behaviour on Broadcom-based Juniper QFX switches.
The general assumption with EVPN Anycast Gateway is that gateway flows are load-balanced across all gateway devices. And whilst EVPN provides the mechanism to support this behaviour, there is a requirement for the forwarding hardware to also support it.
The mechanism for an EVPN device to load-balance gateway flows is to install the virtual gateway ESI as a next-hop for the virtual gateway MAC address. However, Broadcom-based QFX switches do not support this behaviour and can only install a single VTEP as a next-hop. This means that traffic flows heading towards the virtual gateway will only ever traverse a single gateway device. This behaviour is well documented, and there has been some talk of Broadcom working with vendors to improve gateway load-balancing with ESI functionality.
Now that we understand the characteristics, let’s look at the steps to verify forwarding behaviour on a Broadcom-based QFX switch. Here we’ll look at how to identify which VTEP is being used to reach the virtual gateway MAC address and how the underlay is transporting the traffic.
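The sort of commands involved look like this (a sketch; the virtual gateway MAC and VTEP loopback are placeholders):

show ethernet-switching table | match 00:00:5e:00:01:01
show ethernet-switching vxlan-tunnel-end-point remote
show route 10.255.0.11

The first command reveals which next-hop the virtual gateway MAC resolves to, the second lists the remote VTEPs, and the third shows how the underlay reaches the chosen VTEP.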
The lab setup Continue reading
I often get asked about EVPN Layer 3 gateway options, and more specifically, what the differences are between IRB with Virtual Gateway Address (VGA) and IRB without VGA. There are many different options and configuration knobs available when configuring an EVPN L3 gateway, but I’ve focused on the three most popular options that I see with my customers in EVPN-VXLAN environments using a centralised model. I’m also only providing the very basic configuration required.
Each IRB option can be considered an Anycast gateway solution, since duplicate IPs are used across all IRB gateways. However, there are some subtle, yet significant, differences between each option.
Regardless of the transport technology used, whether it be MPLS or VXLAN, a layer 3 gateway is required to route beyond a given segment.
This Week: Data Center Deployment with EVPN/VXLAN by Deepti Chandra provides in-depth analysis and examples of EVPN gateway scenarios. I highly recommend reading this book!
Duplicate IP | Unique MAC | No VGA
Duplicate IPs are configured on all gateway IRBs and unique MAC addresses are used (manually configured or IRB default). Virtual Gateway Address is not used.
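A minimal sketch of this option on one gateway, assuming the VLAN, subnet and MAC (the second gateway would carry the same IP address but its own unique MAC):

set vlans V100 vlan-id 100 l3-interface irb.100
set interfaces irb unit 100 mac 00:aa:bb:cc:dd:01
set interfaces irb unit 100 family inet address 10.1.100.1/24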
EVPN provides the capability to automatically synchronise gateways Continue reading
IRB Routing over EVPN-VXLAN on Juniper QFX10K is now officially supported by Juniper! EVPN master Tom Dwyer informed me that support had been added. A quick check on the VXLAN Constraints page confirms the same. Note, though, that IS-IS is still not officially supported.
So, what’s the use case? Let’s say there’s a vSRX, hosted in a blade chassis environment, which requires dynamic route updates from the core network. The blade chassis is connected to an EVPN-VXLAN fabric via multi-homed layer 2 uplinks utilising EVPN ESI. In order to establish a BGP peering with the core network, the vSRX must peer with the SPINE devices via the EVPN-VXLAN fabric.
This article explains how to establish an EBGP peering between a QFX10K IRB interface and a vSRX in an EVPN-VXLAN environment. The vSRX is hosted in an ESXi hypervisor, which is connected via LEAF1 and LEAF2. A /29 network is used on VNI300 and an EBGP peering is established between SPINE1 & vSRX1 and between SPINE2 & vSRX1. A default route is advertised by both QFX10K SPINEs towards vSRX1.
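On each spine, the IRB interface and EBGP peering look roughly like this (a sketch; the addressing, AS number and export policy name are assumptions, where EXPORT-DEFAULT would be a policy matching 0.0.0.0/0 exact and accepting it):

set interfaces irb unit 300 family inet address 192.168.30.1/29
set protocols bgp group VSRX type external
set protocols bgp group VSRX peer-as 65010
set protocols bgp group VSRX export EXPORT-DEFAULT
set protocols bgp group VSRX neighbor 192.168.30.6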
SPINE – QFX10K – 17.3R3
LEAF – QFX5110 – 17.3R3
vSRX – 15. Continue reading
This post follows on from a previous article which detailed how to establish a BGP peering session between Juniper QFX and VMware NSX Edge Gateway. This time we’ll take a look at how to configure BGP route policy and BGP filters.
When working with BGP, it’s important to consider how BGP routes are imported and exported. In certain scenarios, you may find that the default BGP import and export behaviour is sufficient. But more often than not, you will want to implement an import and export policy in order to control how traffic flows through your network. As a quick reminder of the defaults, Junos accepts all routes received from BGP peers, and exports all active BGP-learned routes (with IBGP-learned routes not re-advertised to other IBGP peers).
In the following scenario, we’re going to configure BGP import and export policies on Juniper QFX Switches and VMware NSX Edge Gateways. The Juniper QFX switches will be configured to export a default route (0.0.0.0/0) towards the Continue reading
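A default-route export policy on the QFX side can be sketched like this (the policy and group names are assumptions, and a local static or generated 0.0.0.0/0 is assumed to exist):

set policy-options policy-statement EXPORT-DEFAULT term DEFAULT from route-filter 0.0.0.0/0 exact
set policy-options policy-statement EXPORT-DEFAULT term DEFAULT then accept
set policy-options policy-statement EXPORT-DEFAULT term FINAL then reject
set protocols bgp group NSX-ESG export EXPORT-DEFAULT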
In this post, I’m going to explain how to establish a BGP peering session between Juniper QFX Series Switches and VMware NSX Edge Service Gateway. VMware NSX provides many features and services, one of which is dynamic routing via the use of an ESG. Typically, ESGs are placed at the edge of your virtual infrastructure to act as a gateway. There are two primary deployment options, stateful HA or non-stateful ECMP. In this example, we’re looking at the ECMP deployment option.
We have a pair of Juniper QFX5110 switches that we will configure to enable EBGP peering with each NSX Edge Gateway. We also have a pair of NSX Edge Gateway devices that are placed at the edge of a virtualized infrastructure. Each QFX has a /31 point-to-point network to each ESG. These networks are enabled via 802.1q subinterfaces which provide connectivity across the underlying blade chassis interconnect modules.
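On the QFX side, a peering of this shape can be sketched as follows (the interface, VLAN, addressing and AS numbers are assumptions; multipath reflects the ECMP deployment model):

set interfaces xe-0/0/10 vlan-tagging
set interfaces xe-0/0/10 unit 101 vlan-id 101
set interfaces xe-0/0/10 unit 101 family inet address 10.0.0.0/31
set protocols bgp group NSX-ESG type external
set protocols bgp group NSX-ESG peer-as 65100
set protocols bgp group NSX-ESG multipath
set protocols bgp group NSX-ESG neighbor 10.0.0.1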
We’ll start by configuring BGP on our NSX Edge Gateways.
Via the global settings for ESG1, we need to set a Router ID. The router ID uniquely identifies the router to its BGP peers.
ESG1 > Manage > Continue reading
The Junos firewall filter feature can be a really useful tool for troubleshooting and verification scenarios. I was recently troubleshooting a packet-loss fault and was fairly sure it was an asymmetric routing issue, but I needed a quick way of verifying it. And then a colleague said, “hey, how about a firewall filter?”. Of course, assuming IP traffic, we can use a Junos firewall filter to capture specific traffic flows.
In this scenario, we have a pair of Juniper QFX5110 switches that are both connected to an upstream IP transit provider. They are also connected to a local network via a VMware NSX edge. We’re going to use a firewall filter on QFX1 and QFX2 to identify which QFX is being used for egress traffic and which QFX is being used for ingress traffic. More specifically, the flow is an ICMP flow between a host on 212.134.79.62 and Cloudflare’s DNS service.
So we’re essentially going to create a firewall filter to count specific egress traffic and apply it as an input filter on both QFX switches. We’re also going to create another firewall filter to count the return traffic and apply Continue reading
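A counting filter for the egress leg can be sketched as follows (assuming Cloudflare’s DNS at 1.1.1.1; the filter and counter names are made up, and the final accept term ensures no other traffic is dropped):

set firewall family inet filter COUNT-EGRESS term ICMP from source-address 212.134.79.62/32
set firewall family inet filter COUNT-EGRESS term ICMP from destination-address 1.1.1.1/32
set firewall family inet filter COUNT-EGRESS term ICMP from protocol icmp
set firewall family inet filter COUNT-EGRESS term ICMP then count ICMP-OUT
set firewall family inet filter COUNT-EGRESS term ICMP then accept
set firewall family inet filter COUNT-EGRESS term DEFAULT then accept

The counters can then be read with show firewall filter COUNT-EGRESS on each switch.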
This article is the second post in a series that is all about EVPN-VXLAN and Juniper QFX technology. This particular post focuses specifically on EVPN Anycast Gateway and how to verify the control plane and data plane on Juniper QFX10K series switches.
In my first post, I explained how to verify MAC learning behaviour in a single-homed host scenario. This time we’re going to look at how to verify the control plane and data plane when using EVPN Anycast Gateway. As explained in my previous post, verifying and troubleshooting EVPN-VXLAN can be very difficult, especially when you consider all the various elements that make up the control plane and data plane.
So, what is EVPN Anycast Gateway?
During the initial conception of EVPN L3 gateway, it was assumed that all PE devices would be configured with a Layer 3 interface (IRB) for a given Virtual Network. It was also intended that all IRB interfaces would be configured with the same IP address thus creating a redundant gateway mechanism.
This worked great until EVPN-VXLAN came along and, crucially, the hardware being deployed at the leaf layer no longer provided support for VXLAN L3 gateway (IRB). As a result, Anycast Gateway, or Virtual Gateway Continue reading
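On platforms that do support it, the virtual gateway is configured alongside a unique IRB address, roughly like this (a sketch; the addressing is assumed, with .2 as this gateway’s unique address and .1 as the shared virtual gateway):

set interfaces irb unit 100 family inet address 10.1.100.2/24 virtual-gateway-address 10.1.100.1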
This article is all about EVPN-VXLAN and Juniper QFX technology. I’ve been working with this tech quite a lot over the past few months and figured it would be useful to share some of my experiences. This particular article is probably going to be released in 2 or 3 parts and is focused specifically on the MAC learning process and how to verify behaviour. The first post focuses on a single-homed endpoint connected to the fabric via a single leaf switch. The second part will look at a multihomed endpoint connected via two leaf switches that are utilising the EVPN multihoming feature. And, lastly, the third part will focus on Layer 3 Virtual Gateway at the QFX10k Spine switches. The setup I’m using is based on Juniper vQFX for spine and leaf functions with a vSRX acting as a VR device. I also have a Linux host that is connected to a single leaf switch.
When verifying and troubleshooting EVPN-VXLAN it can become pretty difficult to figure out exactly how the control plane and data plane are programmed and how to verify behaviours. You’ll find yourself looking at various elements such as the MAC table, EVPN database, EVPN routing Continue reading
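On vQFX/QFX, those elements map onto commands along these lines (a sketch):

show ethernet-switching table
show evpn database
show route table bgp.evpn.0
show ethernet-switching vxlan-tunnel-end-point remote

These show the MAC table, the EVPN database, the EVPN routes received via BGP, and the remote VTEPs respectively.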
In this post, I’m going to explain how we can use an SDN controller to provision traffic-engineered MPLS LSPs between PE nodes situated in different traffic-engineering domains.
The SDN controller that we’re going to use is NorthStar from Juniper Networks running version 3.0. For more information regarding NorthStar check here. There is also a great Day One book available: NorthStar Controller Up and Running.
Typically, in order to provision traffic-engineered MPLS LSPs between PE nodes, it is necessary for the PEs to be situated in a common TE domain. There are some exceptions to this, such as inter-domain LSPs or LSP stitching. However, these options are limited and do not support end-to-end traffic engineering, as the ingress PE does not have a complete traffic-engineering view. I personally haven’t seen many deployments utilising these features.
So, what’s the use case? Many service providers and network operators are using RSVP to signal traffic-engineered LSPs in order to control how traffic flows through their environments. There are many reasons for doing this, such as steering certain types of traffic via optimal paths or achieving better utilisation of network bandwidth. With many RSVP deployments, you will see RSVP used in the core of Continue reading
I’ve recently started working on a project focused on EVPN-VXLAN based on Juniper technology. I figured I’d take the opportunity to share some experiences specifically around inter-VXLAN routing. Inter-VXLAN routing can be useful when passing traffic between different tenants. For example, you may have a shared-services tenant that needs to be accessed by a number of different customer tenants whilst not allowing reachability between customer tenants. By enabling inter-VXLAN routing on the MX we can use various route-leaking techniques and policy to provide a technical point of control.
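As a flavour of what that looks like on the MX, a customer VRF can be given an import policy that accepts the shared-services route target as well as its own, while everything else, including other customers’ targets, is rejected (a sketch; the names, targets and communities are assumptions, and the RD and vrf-export side are omitted for brevity):

set policy-options community CUST-A-RT members target:65000:201
set policy-options community SHARED-RT members target:65000:100
set policy-options policy-statement CUST-A-IMPORT term OWN from community CUST-A-RT
set policy-options policy-statement CUST-A-IMPORT term OWN then accept
set policy-options policy-statement CUST-A-IMPORT term SHARED from community SHARED-RT
set policy-options policy-statement CUST-A-IMPORT term SHARED then accept
set policy-options policy-statement CUST-A-IMPORT term FINAL then reject
set routing-instances CUST-A instance-type vrf
set routing-instances CUST-A vrf-import CUST-A-IMPORT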
To read the article, please head over to the iNET ZERO blog.
In this post, I’m quickly going to describe how to upgrade NorthStar 2.1 to 3.0. For a detailed installation and user guide refer to the 3.0 release notes here.
Firstly, let’s start off by verifying the current host OS and NorthStar versions. Note: NorthStar 3.0 requires CentOS 6.7 or later.
[root@northstar ~]# cat /etc/redhat-release
CentOS release 6.9 (Final)
To check the current version of NorthStar, navigate to the About section via the drop-down menu located at the top right of the GUI.
Download the NorthStar 3.0 application from the Juniper.net NorthStar download page. Once downloaded, extract the RPM and copy it to your host machine. Below, I have copied NorthStar-Bundle-3.0.0-20170630_141113_70366_586.x86_64.rpm to the /root/rpms/ directory.
[root@northstar ~]# ls /root/rpms/ -l
total 3843976
-rw-r--r--. 1 root root 881371892 Mar 11 2016 NorthStar-Bundle-2.0.0-20160311_005355.x86_64.rpm
-rw-r--r-- 1 root root 856402720 Jul 11 2016 NorthStar-Bundle-2.1.0-20160710_201437_67989_360.x86_64.rpm
-rw-r--r-- 1 root root 2148942508 Jun 30 19:24 NorthStar-Bundle-3.0.0-20170630_141113_70366_586.x86_64.rpm
-rw-r--r-- 1 root root 21878016 Dec 28 2016 NorthStar-Patch-2.1.0-sp1.x86_64.rpm
-rw-r--r--. 1 root root 27610536 Mar 11 Continue reading
I was recently faced with a challenge whereby I needed to inject 30,000 BGP routes into a test environment for a DOCSIS 3.1 POC. Typically, I would use IXIA to form the BGP session and inject the routes. However, all of our IXIA testers were in use, so I needed a quick alternative.
I was already aware of BIRD and its use as a route server in a number of IXP environments, so I figured it would be a good fit. The following steps detail how to install BIRD on Ubuntu and how to instantiate a BGP session with a Juniper MX router.
For more info on BIRD check here
For this install, I’m using a VM running Ubuntu Server 12.04 LTS:
lab@ubuntu-server1:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 12.04.3 LTS
Release:        12.04
Codename:       precise
First up, we need to enable IPv4 forwarding in Linux. There are two options here: we can use the sysctl net.ipv4.ip_forward=1 command, although this setting will reset when the server is rebooted; alternatively, we can modify /etc/sysctl.conf to make the change permanent. Edit /etc/sysctl.conf and Continue reading
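Both options look like this on the lab host (a sketch mirroring the setup above):

# one-off change, lost on reboot
lab@ubuntu-server1:~$ sudo sysctl -w net.ipv4.ip_forward=1

# permanent change: persist the setting, then reload
lab@ubuntu-server1:~$ echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf
lab@ubuntu-server1:~$ sudo sysctl -p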