Nornir tasks are run against all or a subset of inventory members, with the results returned in a structured format that shows what was run against which hosts and what came back. Tasks can be custom-built Python code or pre-built plugins that have been installed and imported.
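As a rough illustration (assuming Nornir 3.x with the nornir_utils plugin installed, and an existing config.yaml pointing at an inventory; names are made up for the example), a custom task is just a Python function that receives the task object:

```python
from nornir import InitNornir
from nornir.core.task import Task, Result
from nornir_utils.plugins.functions import print_result


def say_hello(task: Task) -> Result:
    # runs once per host; whatever is returned is collected per host
    return Result(host=task.host, result=f"hello from {task.host.name}")


nr = InitNornir(config_file="config.yaml")   # illustrative config file name
result = nr.run(task=say_hello)              # run against the whole inventory
print_result(result)                         # formatted per-host output
```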
The inventory is at the core of Nornir, holding all the hosts that tasks will be run against and the variables those tasks will use. Before any tasks can be run, the inventory has to be initialised.
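For example, a minimal initialisation using the built-in SimpleInventory plugin might look like this (file names and the filter value are purely illustrative):

```python
from nornir import InitNornir

# the inventory plugin and runner can also be defined in a config.yaml instead
nr = InitNornir(
    runner={"plugin": "threaded", "options": {"num_workers": 10}},
    inventory={
        "plugin": "SimpleInventory",
        "options": {"host_file": "hosts.yaml", "group_file": "groups.yaml"},
    },
)

print(nr.inventory.hosts)                 # all hosts loaded from the inventory
ios_devices = nr.filter(platform="ios")   # tasks can be run on a filtered subset
```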
If you have an understanding of Python and have been working with Ansible, it is likely that at some point you will reach the stage where you ask yourself ‘there has to be something better’. For network automation, that something better could well be Nornir.
Over the years I have built numerous IPsec VPNs on ASAs using crypto maps and an ACL for the interesting traffic. For a simple solution joining small sites with no need for routing these work great and keep complexity to a minimum. For more complex environments or cloud connectivity you are probably going to need VTIs; this post goes through the process of building VTI VPNs between an ASR and an ASA.
This post explains how to configure EVE-NG as a DHCP server (isc-dhcp-server), assigning IPs to lab devices that are then dynamically NATed behind the primary EVE management IP address (iptables masquerade) to provide Internet breakout.
A trip down memory lane on how labbing has changed, from prehistoric switches bought on eBay, through emulators that took longer to configure than the labs themselves, to the present-day solutions that can programmatically build a multi-vendor lab in minutes. Kids today don’t know they are born…
The 6th post in the ‘Automate Leaf and Spine Deployment’ series goes through the validation of the fabric once deployment has been completed. A desired state validation file is built from the contents of the variable files and compared against the devices’ actual state to determine whether the fabric, and all the services that run on top of it, comply.
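As a rough sketch of the desired-state idea, one way to implement this kind of check is NAPALM’s validate feature (the hostname, credentials and file contents below are illustrative and not necessarily how the series’ playbooks implement it):

```python
from napalm import get_network_driver

# desired_state.yml would be generated from the variable files, e.g.
# - get_bgp_neighbors:
#     global:
#       peers:
#         192.168.100.11:
#           is_up: true
driver = get_network_driver("nxos")
device = driver(hostname="leaf01", username="admin", password="password")
device.open()
report = device.compliance_report("desired_state.yml")
device.close()

print(report["complies"])   # True only if the actual state matches the desired state
```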
The 5th post in the ‘Automate Leaf and Spine Deployment’ series goes through the deployment of the services that run on top of the fabric. These services are grouped into three categories: tenant, interface and routing. Services are configured only on the leaf and border switches; the spines have no need for them as they just route the VXLAN-encapsulated packets with no knowledge or care of what is inside them.
The 4th post in the ‘Automate Leaf and Spine Deployment’ series goes through the creation of the base and fabric config snippets and their deployment to devices. Loopbacks, NVE and intra-fabric interfaces are configured and both the underlay and overlay routing protocol peerings are formed, leaving the fabric ready for services to be added.
The 3rd post in the ‘Automate Leaf and Spine Deployment’ series goes through the variables from which the core fabric declaration is made and how these transpose into a dynamic inventory. This uses only the base and fabric roles to create the fabric, ready for the service sub-roles (tenant, interface and route) to be deployed on top of it at a later stage.
The 2nd post in the ‘Automate Leaf and Spine Deployment’ series describes the process used for validating the format and content of the variable files. The idea behind this offline pre-validation is to catch any errors in the variable files before device configuration is attempted: fail fast based on logic rather than failing halfway through a build. It won’t catch everything, but it will eliminate a lot of the needless errors that would break a fabric build.
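To illustrate the fail-fast idea in the most generic terms (this is not the playbooks’ actual validation logic, just a sketch; the file name and variable names are hypothetical):

```python
# load a variable file and check its content before any device config is attempted
import ipaddress
import sys

import yaml

with open("fabric_vars.yml") as f:          # hypothetical variable file name
    fabric = yaml.safe_load(f) or {}

errors = []
if not 2 <= fabric.get("num_spines", 0) <= 4:
    errors.append("num_spines must be between 2 and 4")
try:
    ipaddress.ip_network(fabric.get("loopback_subnet", ""))
except ValueError:
    errors.append("loopback_subnet is not a valid network")

if errors:
    # abort before anything touches a device
    sys.exit("Variable file validation failed:\n" + "\n".join(errors))
```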
This series of posts describes the process of deploying an NXOS leaf and spine fabric in a declarative manner using Ansible. It came out of my project for the IPSpace Building Network Automation Solutions course and was used in part when we were deploying leaf and spine fabrics in our Data Centers. I originally only planned to build tenants and do fabric validation, but over time this has morphed into a full-blown fabric deployment.
Napalm offers an easy way to configure and gather information from network devices using a unified API. No matter which vendor it is used against, the task inputs and returned output will be the same; the only things that are not vendor-neutral are the actual commands run and the configuration being applied. This post documents my experiences of trying to replace the whole configuration on NXOS using Napalm with Ansible.
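For context, a minimal standalone sketch of a NAPALM config replace on NXOS, outside of Ansible, might look like this (hostname, credentials and file name are placeholders):

```python
from napalm import get_network_driver

driver = get_network_driver("nxos")
device = driver(hostname="10.10.10.1", username="admin", password="password")
device.open()

# stage the full intended configuration, then review the diff before committing
device.load_replace_candidate(filename="full_intended_config.txt")
diff = device.compare_config()
print(diff)

if diff:
    device.commit_config()     # replaces the running config with the candidate
else:
    device.discard_config()    # nothing to change

device.close()
```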
Jinja template inheritance uses the concept of block statements to define sections of a base parent template that can be overridden by a child template. An extends statement links the child template to the parent so that when the child is rendered the parent template is rendered as well, with the child’s block contents filling the blocks defined in the parent.
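A small self-contained illustration, rendered with the jinja2 Python library (the template contents are made up for the example):

```python
from jinja2 import Environment, DictLoader

templates = {
    # parent template defines a block that children can override
    "base.j2": "hostname {{ hostname }}\n{% block intf %}{% endblock %}",
    # child extends the parent and fills in the block
    "child.j2": (
        "{% extends 'base.j2' %}"
        "{% block intf %}interface Loopback0\n ip address {{ lo0 }}/32{% endblock %}"
    ),
}

env = Environment(loader=DictLoader(templates))
print(env.get_template("child.j2").render(hostname="R1", lo0="192.168.1.1"))
# hostname R1
# interface Loopback0
#  ip address 192.168.1.1/32
```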
Link-state advertisements (LSAs) are used to communicate a router’s local routing topology to all other routers in the same OSPF area. There are 11 types of LSAs, although only the 6 most commonly used ones are described in this post.
I am currently studying to recertify my CCIE and it is at times like these that I realise there is so much I have studied and learnt but forgotten. There are many cool things I come across that I think at the time are useful features I need to remember, but unfortunately if you don’t have a real-world use for them they are soon put to the back of the brain and over time forgotten. The same applies to taking for granted the way things work, be that ARP, DHCP or the process a switch or router goes through when moving traffic. I came across some of my old notes on CEF which I thought were worth sharing.
Dual-active Detection (DAD) is designed to prevent a split-brain scenario where both VSS supervisors become active in the event of a VSL link failure. It uses a secondary communication link, separate from the VSL, to communicate each device’s state.
When the VSL link fails the standby switch becomes active; the previously active switch is informed of this over the DAD links and goes into recovery mode to stop a split-brain situation from occurring.
The 3 main elements that run identity awareness under the hood are Active Directory Query (ADQ), the Policy Decision Point (PDP) and the Policy Enforcement Point (PEP). They all intertwine in some way to allow the different blades of the Checkpoint to track and restrict access based on AD user and machine name. I tested these features as part of a POC and personally would not consider them fit for purpose in a production environment; see the caveats at the end of the post for more details.