Archive

Category Archives for "Networking"

Replay pcap files using sflowtool


It can be very useful to capture sFlow telemetry from production networks so that it can be replayed later to perform off-line analysis, or to develop or evaluate sFlow collection tools.
sudo tcpdump -i any -s 0 -w sflow.pcap udp port 6343
Run the command above on the system you are using to collect sFlow data (if you aren't yet collecting sFlow, see Agents for suggested configuration settings). Type Control-C to end the capture after 5 to 10 minutes.  Copy the resulting sflow.pcap file to your laptop.
docker run --rm -it -v $PWD/sflow.pcap:/sflow.pcap sflow/sflowtool \
  -r /sflow.pcap -P 1
Either compile the latest version of sflowtool or, as shown above, use Docker to run the pre-built sflow/sflowtool image. The -P (Playback) option replays the trace in real-time and displays the contents of each sFlow message. Running sflowtool using Docker provides additional examples, including converting the sFlow messages into JSON format for processing by a Python script. 
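To illustrate the JSON workflow mentioned above, here is a minimal sketch of a Python helper that consumes one JSON object per sFlow datagram, as emitted by sflowtool's -j option. The field names used here ("agent", "samples") are assumptions for illustration; check the keys your sflowtool version actually emits.

```python
import json

def summarize_datagram(line: str) -> dict:
    """Summarize one JSON object emitted per datagram by `sflowtool -j`.

    The field names ("agent", "samples") are illustrative assumptions;
    verify them against your sflowtool version's actual output.
    """
    msg = json.loads(line)
    return {
        "agent": msg.get("agent"),
        "sample_count": len(msg.get("samples", [])),
    }
```

You would feed each line of `sflowtool -r /sflow.pcap -j` output to a function like this, for example by piping the Docker command's stdout into a small script.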
docker run --rm -it -v $PWD/sflow.pcap:/sflow.pcap sflow/sflowtool \
  -r /sflow.pcap -f 192.168.4.198/6343 -P 1
The -f (forwarding) option takes an IP address and UDP port number as arguments, in this Continue reading

Using the Continue VSCode Extension and Local LLMs for Improved Coding

Welcome back to another post on local LLMs. In this post, we’ll look at setting up a fully local coding assistant inside VSCode using the Continue extension and Ollama. Let’s get started.

As always, if you find this post helpful, press the ‘clap’ button on the left. It means a lot to me and helps me know you enjoy this type of content.

Overview

We’ve covered Ollama and Local LLMs in previous blog posts (linked below), but here’s a quick summary.

Ollama is a tool that lets you run large language models (LLMs) directly on your local machine. Local LLMs are language models that run on your computer instead of relying on cloud-based services like ChatGPT. This means you can use them without sending your data to external servers, which is great for privacy. They also work offline, so you’re not dependent on an Internet connection.

That said, it’s important to note that local models, especially on smaller setups, won’t match the speed or performance of cloud-based models like ChatGPT. These cloud models are powered by massive infrastructure, so they’re faster and often more accurate. However, the trade-off is privacy and offline access, which local LLMs provide.

In Continue reading
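For context, Continue is typically pointed at Ollama through its config.json file. The exact schema varies by Continue version, so treat the following as an illustrative sketch rather than a copy-paste configuration (the model names are examples):

```json
{
  "models": [
    {
      "title": "Llama 3 (local)",
      "provider": "ollama",
      "model": "llama3"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Qwen Coder (local)",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b"
  }
}
```

With an entry like this, chat requests and tab completions are served entirely by the local Ollama instance.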

Using Ollama with a Web-Based GUI

When I first started using local LLMs with Ollama, I quickly realised it relies on a command-line interface to interact with the models. It also comes with an API, but let’s be honest, most of us, myself included, prefer a GUI, much like the one ChatGPT provides. There are plenty of options available, but I decided to try Open WebUI. In this blog post, we’ll explore what Open WebUI is and how simple it is to set up a web-based interface for your local LLMs.

As always, if you find this post helpful, press the ‘clap’ button on the left. It means a lot to me and helps me know you enjoy this type of content.

Overview

Ollama is a tool for running local LLMs, offering privacy and control over your data. Out of the box, it lets you interact with models via the terminal or through its API. Installing Ollama is straightforward, and if you’d like a detailed guide, check out my other blog post which is linked below.

This blog post assumes you already have Ollama set up and running. For reference, I’m running this on my MacBook (M3 Pro with 18GB of RAM).

open-webui

Open Continue reading
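For reference, Open WebUI's documented quick start at the time of writing is a single Docker command along these lines; the image tag, ports, and flags may have changed, so verify against the project's README before running it:

```shell
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
```

Once the container is up, the UI is served on http://localhost:3000 and can connect to an Ollama instance running on the host.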

Running Large Language Models (LLM) on Your Own Machine Using Ollama

I’m going to start by saying I’m totally new to LLMs and running them locally, so I’m not going to pretend I know what I’m doing. I’ve been learning about Ollama for some time now and thought I would share it with my readers as always. This is such an interesting topic and I’m ready to go down the rabbit hole.

As always, if you find the content useful, don’t forget to press the ‘clap’ button to your left. This is one way for me to know that you like this type of content, which means a lot to me. So, let's get started.

Large Language Models (LLMs)

LLMs, or Large Language Models, are a type of artificial intelligence designed to process and generate natural language. They are trained on vast amounts of text data, enabling them to understand context, identify patterns, and produce human-like responses. These models can perform various tasks such as answering questions, translating languages, summarising text, generating creative content, and assisting with coding. LLMs have gained significant attention in recent years due to their impressive performance and versatility.

N4N011: What’s the Difference Between LAG, MLAG, MC-LAG, and Stacking?

In today’s episode, we address listener Kieren’s question about the differences between LAG, MLAG, MC-LAG, and stacking. We tackle the nuances of Link Aggregation (LAG) and the Link Aggregation Control Protocol (LACP), and explain their roles in redundancy and bandwidth efficiency. We also discuss the complexities and differences among vendors and overall benefits of Multi-Chassis... Read more »

The Curious Case of the BGP Connect State

I got this question from Paul:

Have you ever seen a BGP peer in the “Connect” state? In 20 years, I have never been able to see or reproduce this state, nor any mention in a debug/log. I am starting to believe that all the documentation is BS, and this does not exist.

The BGP Finite State Machine (FSM) (at least the one defined in RFC 4271 and amended in RFC 9687) is “a bit” hard to grasp but the basics haven’t changed from the ancient days of RFC 1771:
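To make the Connect state concrete, here is a simplified sketch of the RFC 4271 session FSM in Python. It models only a handful of happy-path events, not the full event set of Section 8, and the event names are shortened for readability:

```python
from enum import Enum, auto

class BgpState(Enum):
    # The six session states from RFC 4271, Section 8.
    IDLE = auto()
    CONNECT = auto()       # a TCP connection attempt is in progress
    ACTIVE = auto()        # listening after a failed connection attempt
    OPEN_SENT = auto()
    OPEN_CONFIRM = auto()
    ESTABLISHED = auto()

# A small subset of the happy-path transitions, keyed by (state, event).
TRANSITIONS = {
    (BgpState.IDLE, "ManualStart"): BgpState.CONNECT,
    (BgpState.CONNECT, "TcpConnectionConfirmed"): BgpState.OPEN_SENT,
    (BgpState.CONNECT, "ConnectRetryTimerExpires"): BgpState.CONNECT,
    (BgpState.OPEN_SENT, "BGPOpen"): BgpState.OPEN_CONFIRM,
    (BgpState.OPEN_CONFIRM, "KeepAliveMsg"): BgpState.ESTABLISHED,
}

def step(state: BgpState, event: str) -> BgpState:
    # Unhandled (state, event) pairs fall back to Idle, mirroring the
    # RFC's catch-all error handling.
    return TRANSITIONS.get((state, event), BgpState.IDLE)
```

Note that Connect lasts only as long as the outgoing TCP connection attempt itself, which may be why it is so rarely caught in the wild.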

NAN084: From GitOps Zero to Hero

Are you ready to go from zero to hero in GitOps? On today’s podcast, we talk with Tom McGonagle, who explains Git, CI/CD, and DevOps, and how they all fit into network engineering. The conversation also covers the evolution of containerization and Kubernetes, highlighting their roles in modern network automation. Tom also encourages... Read more »

ByteDance to Network a Million Containers with Netkit

Engineers from the Chinese social media conglomerate ByteDance are taking early advantage of a recently released feature of the Linux kernel called netkit that provides a faster way for containers to communicate with each other across a cluster. First released in Linux kernel 6.7 in December 2023, netkit (not to be confused with the now-discontinued Netkit project, which was used to create virtual networks on a single server) has been touted as a way to streamline container networking. Like the rest of the cloud native world, ByteDance had been using Virtual Ethernet (veth) devices, which had a number of bottlenecks that slowed communication rates across containers. In fact, veth requires each packet to traverse two network stacks, one on the sender and one on the recipient, even if the two communicating containers are on the same Continue reading

Cisco Modeling Labs and Infrastructure-as-Code

Dalton Ortega, Cisco Modeling Labs Product Manager, sent me the following email as a response to my Configuring IP Addresses Won't Make You an Expert blog post:

First, your statement on Autonetkit is indeed correct. We had removed that from the product due to lack of popularity. That being said, in our roadmap we are looking at methods to reintroduce on-the-fly configuration as well as enhancing our sample labs library to make getting started with CML easier.

Secondly, CML can be run in full IaC mode because of the API-first build. In fact, many of our customers are using CML as an automated test/validation bed for their CI/CD pipelines. Tools like Ansible and Terraform are available to facilitate this inside CML too. For more details, read:

It seems it should be relatively easy to create a cml provider to generate a Terraform file from the netlab topology and use it to start a lab in CML. Any volunteers?

PP047: Why Packet Analysis (and Wireshark) Should Be In Your Security Toolkit

Don’t underestimate the value of packet analysis in your security strategy. And if you’re analyzing packets, the open-source Wireshark software is a go-to tool. On today’s episode, we talk with Chris Greer, a Wireshark trainer and consultant specializing in packet analysis. Chris explains the critical role of packet analysis in cybersecurity, particularly in threat hunting... Read more »

NB511: Cisco Sells Security Blanket for AI Nightmares; Stratoshark Captures System Calls

Take a Network Break! We start with critical vulnerabilities affecting the Android OS, Cisco Meeting Management, and SonicWall, and then discuss a report that tens of thousands of Fortinet security appliances still haven’t been patched despite active exploits. Palo Alto Networks releases an open API to make it easier for developers to access Quantum Random... Read more »

HS093: Strategic Trust-Building Among Ops, Engineering, Architecture – and Leadership

Billy Joel had it right: It’s a matter of trust. Too often Operations, Engineering, and Architecture teams don’t trust one another–and nobody trusts leadership (and vice versa!). Special guest (and PacketPushers host) Scott Robohn joins us to talk about how to build trust, and the special role of an Operations Architect. Episode Guest: Scott Robohn, ... Read more »

HW044: Unpacking NETGEAR’s Enterprise Wireless and Wired Portfolio (Sponsored)

NETGEAR is known for consumer networking products, but it also offers a robust portfolio of wireless and wired networking products designed for the enterprise. On today’s Heavy Wireless, sponsored by NETGEAR, we take a close look at the hardware, software, and services that NETGEAR offers to enterprise customers. That includes Wi-Fi 7 APs, a full... Read more »

A diversity of downtime: the Q4 2024 Internet disruption summary

Cloudflare’s network spans more than 330 cities in over 120 countries, where we interconnect with over 13,000 network providers in order to provide a broad range of services to millions of customers. The breadth of both our network and our customer base provides us with a unique perspective on Internet resilience, enabling us to observe the impact of Internet disruptions at both a local and national level, as well as at a network level.

As we have noted in the past, this post is intended as a summary overview of observed and confirmed disruptions, and is not an exhaustive or complete list of issues that have occurred during the quarter. A larger list of detected traffic anomalies is available in the Cloudflare Radar Outage Center.

In the third quarter we covered quite a few government-directed Internet shutdowns, including many intended to prevent cheating on exams. In the fourth quarter, however, we only observed a single government-directed shutdown, this one related to protests. Terrestrial cable cuts impacted connectivity in two African countries. As we have seen multiple times before, both unexpected power outages and rolling power outages following military action resulted in Internet disruptions. Violent storms and an earthquake Continue reading

DeepSeek-R1 – Reasoning and Chain of Thought for Network Engineers

https://www.deepseek.com/ – DeepSeek has taken the AI world by storm. Their new reasoning model, which is open source, achieves results comparable to OpenAI’s O1 model but at a fraction of the cost. Many AI companies are now studying DeepSeek’s white paper to understand how they achieved this.

This post analyses reasoning capabilities from a Network Engineer’s perspective, using a simple BGP message scenario. Whether you’re new to networking or looking to refresh your reasoning skills for building networking code, DeepSeek’s model is worth exploring. The model is highly accessible – it can run on Google Colab or even a decent GPU/MacBook, thanks to DeepSeek’s focus on efficiency.

For newcomers: The model is accessed through a local endpoint, with queries and responses handled through a Python program. Think of it as a programmatic way to interact with a chat interface.

Code block

Simple code. One function sets a prompt instructing the LLM to act as an expert network engineer; we are more interested in the thought process than the final answer. The function’s input is sample BGP output from an industry-standard device, nothing fancy here.

import requests
import json

def analyze_bgp_output(device_output: str) -> str:
    url = "http://localhost:11434/api/chat"
    
    # Craft prompt  Continue reading
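The excerpt ends mid-function. As an independent, hedged reconstruction (not the author's original code), a complete call to Ollama's /api/chat endpoint could look like the following; the model tag and prompt wording are illustrative assumptions:

```python
import json
import urllib.request

def build_chat_payload(device_output: str) -> dict:
    # Payload for Ollama's /api/chat endpoint; stream=False requests a
    # single JSON reply instead of a token stream.
    return {
        "model": "deepseek-r1:7b",  # illustrative model tag
        "stream": False,
        "messages": [
            {"role": "system",
             "content": "You are an expert network engineer. Analyse this BGP output."},
            {"role": "user", "content": device_output},
        ],
    }

def analyze_bgp_output(device_output: str) -> str:
    url = "http://localhost:11434/api/chat"
    req = urllib.request.Request(
        url,
        data=json.dumps(build_chat_payload(device_output)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

Calling `analyze_bgp_output("show ip bgp summary output here")` against a running Ollama instance returns the model's analysis, including its chain-of-thought section.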

Worth Reading: Drunken Plagiarists

George V. Neville-Neil published a fantastic, must-read summary of the various code copilots’ usefulness on ACM Queue: The Drunken Plagiarists.

It pretty much mirrors my experience (plus, I got annoyed when the semi-relevant suggestions kept kicking me out of the flow) and reminds me of the early days of OpenFlow, when nobody wanted to listen to old grunts like myself telling the world it was all hype and little substance.

Cloudflare meets new Global Cross-Border Privacy standards

Cloudflare proudly leads the way with our approach to data privacy and the protection of personal information, and we’ve been an ardent supporter of the need for the free flow of data across jurisdictional borders. So today, on Data Privacy Day (also known internationally as Data Protection Day), we’re happy to announce that we’re adding our fourth and fifth privacy validations, and this time, they are global firsts! Cloudflare is the first organisation to announce that we have been successfully audited against the brand new Global Cross-Border Privacy Rules (Global CBPRs) for data controllers and the Global Privacy Rules for Processors (Global PRP). These validations demonstrate our support and adherence to global standards that provide for privacy-respecting data flows across jurisdictions. Organizations that have been successfully audited will be formally certified when the certifications officially launch, which we expect to happen later in 2025. 

Our participation in the Global CBPRs and Global PRP joins our roster of privacy validations: we were one of the first cybersecurity organizations to certify to the international privacy standard ISO 27701:2019 when it was published, and in 2022 we also certified to the cloud privacy certification, ISO 27018:2019. In 2023, we added our third Continue reading

Cloudflare thwarts over 47 million cyberthreats against Jewish and Holocaust educational websites

January 27 marks the International Holocaust Remembrance Day — a solemn occasion to honor the memory of the six million Jews who perished in the Holocaust, along with countless others who fell victim to the Nazi regime's campaign of hatred and intolerance. This tragic chapter in human history serves as a stark reminder of the catastrophic consequences of prejudice and extremism. 

The United Nations General Assembly designated January 27 — the anniversary of the liberation of Auschwitz-Birkenau —  as International Holocaust Remembrance Day. This year, we commemorate the 80th anniversary of the liberation of this infamous extermination camp.

As the world reflects on this dark period, a troubling resurgence of antisemitism underscores the importance of vigilance. This growing hatred has spilled into the digital realm, with cyberattacks increasingly targeting Jewish and Holocaust remembrance and educational websites — spaces dedicated to preserving historical truth and fostering awareness.

For this reason, here at Cloudflare, we began to publish annual reports covering cyberattacks that target these organizations. These cyberattacks include DDoS attacks as well as bot and application attacks. The insights and trends are based on websites protected by Cloudflare. This is our fourth report, and you can view our previous Holocaust Continue reading

Running Containerlab in macOS (Cisco IOL/cEOS)

Let me start by saying that I usually run Containerlab on a dedicated Ubuntu 22.04 VM, which sits on top of Proxmox. All my labs run on this setup. However, I recently wanted to try running Containerlab directly on my MacBook (M3 Pro with 18GB of RAM) for a few reasons. For example, I might need to run labs while I’m away, work offline, or use a MacBook at work where I can’t access my home network. So, I decided to test whether I could run Cisco IOL and Arista EOS on macOS. The answer is yes, and here’s how you can do it.

As always, if you find this post helpful, press the ‘clap’ button on the left. It means a lot to me and helps me know you enjoy this type of content.

If you’re new to Containerlab and trying to understand what it is, I highly recommend checking out my introductory post, which is linked below. It covers the basics and will help you get started.