New year, new role, new strategy… 2023 is officially the year when I return to my roots. Back in 2014, I officially became part of the Ansible community. Admittedly, back then my focus was solely on figuring out how to best demonstrate to my customers the power of having an OpenStack private cloud. Anyone who has ever stood up or experimented with OpenStack knows that this is a tall order. Imagine having to stand up that platform over and over again on a daily basis. My focus was to find a way—a tool—that could help me do that, so I could focus on helping solve the customers' true challenges. Fast forward to now, and the decision to do it with Ansible still stands as the best choice, hands down.
Many of you have stories just like mine. You are seeking out a way to simplify your daily tasks, so you can focus on the business. Just like me, you have decided that Ansible is the tool to do it. Before I started in this new role, I did some reflecting on my experience as part of the community. I have so many encouraging, positive, and fun stories I could share. Our Continue reading
Last month I described how you can simplify your VLAN or VRF lab topologies with VRF and VLAN links, automatically setting the vlan.access or vrf attribute on a set of links. Link groups allow you to do the same for any set of link attributes.
Imagine you have a small network with three PE-routers connected to a central P-router:
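The topology listing from the original article isn't part of this excerpt, but a minimal sketch of how a link group could describe that setup in a netlab topology file might look something like this (the device type, node names, and the shared mtu value are illustrative assumptions, not taken from the article):

```yaml
defaults.device: eos            # illustrative device type
nodes: [ p, pe1, pe2, pe3 ]     # one P-router and three PE-routers

links:
- group: core                       # a single link group...
  mtu: 1500                         # ...with attributes applied to every member link
  members: [ p-pe1, p-pe2, p-pe3 ]  # the three P-to-PE point-to-point links
```

Every attribute specified on the group (here, the mtu) is copied to each member link, so you set it once instead of repeating it on every P-to-PE link.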
Special Thanks: Adrian vifino Pistol for writing this code and for the wonderful collaboration!
Ever since I first saw VPP (the Vector Packet Processor), I have been deeply impressed with its performance and versatility. For those of us who have used Cisco IOS/XR devices, like the classic ASR (Aggregation Services Router), VPP will look and feel quite familiar, as many of the approaches are shared between the two.
In the [first article] of this series, I took a look at MPLS in general and at how static Label Switched Paths can be set up in VPP. A few details on special-case labels (such as Implicit Null, which enables the fabled Penultimate Hop Popping) were missing, so I took a good look at them in the [second article] of the series.
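As a rough reminder of what that looked like, a static label swap in VPP is configured with a handful of vppctl commands along these lines (a sketch from memory; the interface name, labels, and next hop are placeholders, and the exact CLI syntax may vary between VPP releases):

```
# create the default MPLS FIB table and enable MPLS on an interface (placeholder name)
vppctl mpls table add 0
vppctl set interface mpls GigabitEthernet10/0/0 enable

# swap incoming label 100 for outgoing label 200 towards a placeholder next hop
vppctl mpls local-label add 100 eos via 192.0.2.2 GigabitEthernet10/0/0 out-labels 200
```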
This was all just good fun, but it also allowed me to buy some time for @vifino, who has been implementing MPLS handling within the Linux Control Plane plugin for VPP! This final article in the series shows the engineering considerations that went into writing the plugin, which is currently under review but reasonably complete. Considering the VPP Continue reading
Michele Chubirka published a must-read article on technology fallacies including this gem:
Technologists often assume that all problems can be beaten into submission with a technology hammer.
As I’ve been saying for ages (not that anyone would listen): all the technology in the world won’t save you unless you change the mentality and rearchitect broken processes.
This chapter is the first part of a series on Azure's highly available Network Virtual Appliance (NVA) solutions. It explains how we can use load balancers to achieve active/active NVA redundancy for connections initiated from the Internet.
In Figure 4-1, the application (an SSH service) running on Virtual Machine (VM) vm-prod-1 is published to the Internet via the load balancer's Frontend IP address 20.240.9.27. The VM sits behind an active/active NVA firewall cluster, and both vm-prod-1 and the NVAs have vNICs attached to the subnet 10.0.2.0/24.
Both NVAs have identical Pre- and Post-routing policies. If an ingress packet's destination IP address is 20.240.9.27 (the load balancer's Frontend IP) and the transport-layer protocol is TCP, the Pre-routing policy changes the destination IP address to 10.0.2.6 (vm-prod-1). Additionally, before the packet is routed out of the Eth1 interface, the Post-routing policy replaces the original source IP address with the IP address of that egress interface.
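On a Linux-based NVA, such Pre- and Post-routing policies would typically be expressed as NAT rules, for example with iptables. The sketch below only illustrates the behavior described above, assuming eth1 is the egress interface facing the 10.0.2.0/24 subnet (the interface name is an assumption, not taken from the figure):

```
# Pre-routing: DNAT inbound TCP traffic addressed to the Frontend IP to vm-prod-1
iptables -t nat -A PREROUTING -p tcp -d 20.240.9.27 -j DNAT --to-destination 10.0.2.6

# Post-routing: rewrite the source IP to the egress interface (eth1) address
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
```

The source NAT step is what keeps the return traffic from vm-prod-1 flowing back through the same NVA that handled the inbound packet.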
The second vNICs of the NVAs are connected to the subnet 10.0.1.0/24, and we have associated these vNICs with the load balancer's Backend pool. The Inbound rule binds the Frontend IP address to the Backend pool and defines the load-sharing policies; in our example, the packets of SSH connections from the remote host to the Frontend IP are distributed between NVA1 and NVA2. The Inbound rule also specifies the Health Probe associated with it.
Note! Using a single-VNet design eliminates the need to define static routes in the subnet-specific route table and in the VM's Linux kernel. This solution is suitable for small-scale implementations. However, the Hub-and-Spoke VNet topology offers simplified network management, enhanced security, scalability, performance, and hybrid connectivity. I will explain how to achieve NVA redundancy in a Hub-and-Spoke VNet topology in upcoming chapters.
I mentioned IP source address validation (SAV) as one of the MANRS-recommended actions in the Internet Routing Security webinar but did not go into any details (as the webinar deals with routing security, not data-plane security)… but I stumbled upon a wonderful companion article published by RIPE Labs: Why Is Source Address Validation Still a Problem?.
The article goes through the basics of SAV, best practices, and (most interesting) using free testing tools to detect non-compliant networks. Definitely worth reading!
It's common for SD-WAN vendors to offer monitoring as part of the solution, but that leaves the question: how do I monitor the rest of the network? Today's sponsor, Broadcom, offers digital experience monitoring that is independent of the underlying WAN infrastructure. We explore how it works with guest Jeremy Rossbach, Chief Technical Evangelist, NetOps by Broadcom.
The post Heavy Networking 680: Speed Up Mean Time To WAN Innocence With Broadcom NetOps (Sponsored) appeared first on Packet Pushers.