Network CI/CD – Configuration Management with Napalm and Nornir

Hi all, welcome back to part 4 of the Network CI/CD blog series. So far, we've covered the purpose of a Network CI/CD pipeline, the problems it solves for Network Engineers, and how to set up GitLab, including creating projects, installing runners, and understanding GitLab executors. We also looked at how to use GitLab variables to securely hide secrets.

In this part, we'll explore how to manage a campus network using Nornir and Napalm and deploy configurations through a CI/CD pipeline. Let's get to it!
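To give a flavour of what that looks like, here is a minimal Nornir + NAPALM sketch for pushing a config change to every device in an inventory. It assumes a standard Nornir config.yaml/hosts.yaml setup, and the SNMP community line is purely illustrative:

```python
# Minimal Nornir + NAPALM sketch: push one config line to all inventory devices.
# Assumes a standard Nornir config.yaml/hosts.yaml inventory; the SNMP line
# below is an illustrative placeholder, not part of the series' actual config.
from nornir import InitNornir
from nornir_napalm.plugins.tasks import napalm_configure
from nornir_utils.plugins.functions import print_result

nr = InitNornir(config_file="config.yaml")

result = nr.run(
    task=napalm_configure,
    configuration="snmp-server community lab-ro RO",
    dry_run=True,  # show the diff only; set to False to actually commit
)
print_result(result)
```

Running with dry_run=True first fits naturally into a merge-request pipeline stage, with the real commit reserved for the main branch.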

As I mentioned previously, I'm not a CI/CD expert at all and I'm still learning. The reason for creating this series is to share what I learn with the community. The pipeline we are building is far from perfect, but that's okay. The goal here is to create a simple pipeline that works and then build upon it as we go. This way, you can start small and gradually Continue reading

HN748: How AI and HPC Are Changing Data Center Networks

On today’s episode of Heavy Networking, Rob Sherwood joins us to discuss the impact that High Performance Computing (HPC) and artificial intelligence computing are having on data center network design. It’s not just a story about leaf/spine architecture. That’s the boring part. There are also power and cooling issues, massive bandwidth requirements, and changes in how we... Read more »

TNO002: How Simplification Helps MSPs Scale

Simplification is the theme of today’s episode. Host Scott Robohn and guest Jack Maxfield explore the operational impacts of simplification for a Managed Service Provider (MSP). They discuss the challenges of managing multi-vendor environments and how to use templating and tools to simplify the management process. Proactive client communication and the integration of network and... Read more »

Technology Short Take 182

Welcome to Technology Short Take #182! I have a slightly bulkier list of links for you today, bolstered by some recent additions to my RSS feeds and supplemented by some articles I found through social media. There should be enough here to keep folks entertained this weekend—enjoy!

Networking

Servers/Hardware

  • I thought this write-up of Andy Bechtolsheim’s keynote at Hot Interconnects 2024 was an interesting summary of where we could see hardware development go in the next 4 years.
  • It turns out that YubiKeys—hardware security keys—are subject to a potential cloning vulnerability, although it does require physical access Continue reading

NOAA Gets $100 Million Windfall For “Rhea” Research Supercomputer

All of the weather and climate simulation centers on Earth are trying to figure out how to use a mixture of traditional HPC simulation and modeling with various kinds of AI prediction to create forecasts for both near-term weather and long-term climate that have higher fidelity and go out further into the future.

NOAA Gets $100 Million Windfall For “Rhea” Research Supercomputer was written by Timothy Prickett Morgan at The Next Platform.

Network CI/CD Pipeline – GitLab Variables

Hi all, welcome back to our Network CI/CD blog series. In the previous posts, we covered what CI/CD is and why you need it for Network Automation. We also covered GitLab basics and how to set up your first pipeline. In this post, we’ll look into how to keep your credentials secure by hiding them from the repository and using GitLab variables. Let’s get to it!

GitLab Variables

In GitLab CI/CD, variables play an important role in managing dynamic values throughout your pipeline. These variables can store anything from environment-specific settings to sensitive information like credentials. By using variables, you can easily manage and change values without hardcoding them in your scripts or playbooks.

GitLab provides a secure way to store sensitive data such as passwords or API tokens. You can define these variables in your project’s Settings > CI/CD > Variables section, and they will be securely injected into your pipeline during runtime.
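Since GitLab injects these variables into the job as environment variables, a script in the pipeline can read credentials at runtime instead of hardcoding them. A minimal sketch, where NET_USERNAME and NET_PASSWORD are illustrative names rather than GitLab built-ins:

```python
# GitLab CI/CD variables arrive in the job as environment variables.
# NET_USERNAME and NET_PASSWORD are illustrative names defined under
# Settings > CI/CD > Variables, not GitLab built-ins.
import os

username = os.environ["NET_USERNAME"]  # raises KeyError (failing the job) if unset
password = os.environ["NET_PASSWORD"]
```

Marking a variable as "Masked" in the GitLab UI additionally keeps its value out of the job logs.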

If you recall, in our previous examples, we had the username and password hardcoded in the Ansible variables file. This is not secure at all, and you should never expose sensitive information like credentials directly in your repository. By using GitLab variables, you can securely Continue reading

NetPicker – A Great Network Configuration Backup Tool

Hi everyone, welcome back to the Packetswitch blog. Today, we're going to look into NetPicker, a tool that not only performs Network Compliance Tests but also takes backups of your network devices. In this post, we'll walk you through downloading and installing NetPicker, adding devices, taking backups, and setting up backup schedules.

Is It Free?

As of September 2024, according to NetPicker’s pricing page, there’s a ‘Free for Life’ plan that allows unlimited backup of your device configurations and unlimited automated tests for up to 10 devices. This means you can manage backups for all of your devices without spending a penny. If you need to run tests on more than 10 devices, you’ll likely need to consider purchasing a license.

💡
Disclaimer - NetPicker sponsors my blog as of the time of writing. However, the opinions expressed here are entirely my own, and the sponsorship has not influenced the content of this article.

Download and Installation

To get started with NetPicker, navigate to their website and fill out the form with your name and email. After you complete this step, you'll receive an email with detailed installation instructions. You have two main options for installation.

  1. Download an OVA Continue reading

A global assessment of third-party connection tampering

Have you ever made a phone call, only to have the call cut as soon as it is answered, with no obvious reason or explanation? This analogy is the starting point for understanding connection tampering on the Internet and its impact. 

We have found that 20 percent of all Internet connections are abruptly closed before any useful data can be exchanged. Essentially, every fifth call is cut before the conversation can begin. As with a phone call, it can be challenging for one or both parties to know what happened. Was it a faulty connection? Did the person on the other end of the line hang up? Did a third party intervene to stop the call?

On the Internet, Cloudflare is in a unique position to help figure out when a third party may have played a role. Our global network allows us to identify patterns that suggest that an external party may have intentionally tampered with a connection to prevent content from being accessed. Although they are often hard to decipher, the ways connections are abruptly closed give clues to what might have happened. Sources of tampering generally do not try to hide their actions, which leaves hints of Continue reading

Bringing insights into TCP resets and timeouts to Cloudflare Radar

Cloudflare handles over 60 million HTTP requests per second globally, with approximately 70% received over TCP connections (the remainder arrive over QUIC/UDP). Ideally, every new TCP connection to Cloudflare would carry at least one request that results in a successful data exchange, but that is far from the truth. In reality, we find that, globally, approximately 20% of new TCP connections to Cloudflare’s servers time out or are closed with a TCP “abort” message either before any request can be completed or immediately after an initial request.

This post explores those connections that, for various reasons, appear to our servers to have been halted unexpectedly before any useful data exchange occurs. Our work reveals that while connections are normally ended by clients, they can also be closed due to third-party interference. Today we’re excited to launch a new dashboard and API endpoint on Cloudflare Radar that shows a near real-time view of TCP connections to Cloudflare’s network that terminate within the first 10 ingress packets due to resets or timeouts, which we’ll refer to as anomalous TCP connections in this post. Analyzing this anomalous behavior provides insights into scanning, connection tampering, DoS attacks, connectivity issues, and other behaviors.
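As a rough sketch of how you might pull this data programmatically, here is a hedged example using Python's requests library against the Radar API. The endpoint path, parameters, and response shape are assumptions based on the post; confirm them against the Cloudflare API documentation before relying on them:

```python
# Hedged sketch: query Radar's TCP resets/timeouts data over the Cloudflare API.
# The endpoint path and parameters below are assumptions based on this post;
# check the Cloudflare API docs for the authoritative definitions.
import os
import requests

resp = requests.get(
    "https://api.cloudflare.com/client/v4/radar/tcp_resets_timeouts/summary",
    headers={"Authorization": f"Bearer {os.environ['CF_API_TOKEN']}"},
    params={"dateRange": "7d"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```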

Continue reading

MSS, MSS Clamping, PMTUD, and MTU

Maximum Segment Size (MSS) and MSS clamping are concepts that can be confusing. How do they relate to the MTU (Maximum Transmission Unit)? Before we set up a lab to demonstrate these concepts, let’s cover some background. Note that this entire post assumes a maximum frame size of 1518 bytes, the original Ethernet definition, and does not cover jumbo frames.

Ethernet frame

Almost all interfaces today are Ethernet. The original 802.3 standard from 1985 defined the minimum and maximum frame sizes as follows:

  • minFrameSize – 64 octets.
  • maxFrameSize – 1518 octets.

With a maximum frame size of 1518 octets (bytes), that leaves 1500 bytes of payload, as the Ethernet frame adds 18 bytes of overhead: 14 bytes of header and 4 bytes of trailer.

IP header

An IPv4 header adds at least 20 bytes to the frame.

Note that more than 20 bytes can be used if the header carries IP options. With no options in the IP header, there are 1480 bytes remaining for the L4 protocol, such as UDP or TCP.

TCP header

TCP also adds a minimum of 20 bytes, meaning that the maximum payload Continue reading
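Putting the arithmetic together, a tiny Python snippet makes the MTU-to-MSS relationship explicit, assuming no IP options and no TCP options:

```python
# Worked arithmetic from the post: deriving the default IPv4/TCP MSS from the
# Ethernet maximums, assuming no IP options and no TCP options.
ETHERNET_MAX_FRAME = 1518  # octets, per the original 802.3 definition
ETHERNET_OVERHEAD = 18     # 14-byte header + 4-byte trailer

mtu = ETHERNET_MAX_FRAME - ETHERNET_OVERHEAD  # 1500-byte Ethernet payload (the MTU)
ip_payload = mtu - 20                         # minimal IPv4 header -> 1480 bytes for L4
mss = ip_payload - 20                         # minimal TCP header  -> 1460-byte MSS

print(mtu, ip_payload, mss)  # 1500 1480 1460
```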

TACC Fires Up “Vista” Bridge To Future “Horizon” Supercomputer

The Texas Advanced Computing Center at the University of Texas at Austin is the flagship datacenter for supercomputing for the US National Science Foundation, and so what TACC does – and doesn’t do – is a kind of bellwether for academic supercomputing.

TACC Fires Up “Vista” Bridge To Future “Horizon” Supercomputer was written by Timothy Prickett Morgan at The Next Platform.

D2DO250: The Realities of Responsible Disclosure in the Cloud

Cloud security and responsible disclosure are the focus of today’s conversation with guest Kat Traxler. Kat shares her insights on identifying vulnerabilities in cloud services, particularly Google Cloud, and the importance of curiosity in her research. The episode explores the role of bug bounty programs and the shift towards issuing CVEs for cloud vulnerabilities. Lastly,... Read more »