The Containerlab project currently has limited support for macOS, stating "ARM-based Macs (M1/2) are not supported, and no binaries are generated for this platform. This is mainly due to the lack of network images built for arm64 architecture as of now." However, this argument doesn't apply to the Linux-based images used in these examples.
First, install Docker Desktop on your Apple silicon Mac (select the Apple Chip option).
mkdir clab
cd clab
docker run --rm -it --privileged \
--network host --pid="host" \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /run/netns:/run/netns \
-v $(pwd):$(pwd) -w $(pwd) \
sflow/clab bash
Run Containerlab by typing the above commands in a terminal. The commands use the pre-built multi-architecture sflow/clab image, so they work on ARM as well as x86 platforms.
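As a quick sanity check, one of the example topologies from the sflow-rt/containerlab project can be downloaded and deployed from within the Containerlab shell; this sketch reuses the clos5.yml example that appears later in this collection:
curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/clos5.yml
containerlab deploy -t clos5.yml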
vyos@vyos:~$ add container image sflow/ddos-protect
First, download the sflow/ddos-protect image.
vyos@vyos:~$ mkdir -m 777 /config/sflow-rt
Create a directory to store persistent container state.
set container network sflowrt prefix 192.168.1.0/24
Define an internal network to connect to the container. Currently, VyOS BGP does not allow direct connections to local addresses (e.g. 127.0.0.1), so the controller must be placed on its own network so that the router can connect to it and receive DDoS mitigation BGP RTBH / Flowspec controls.
set container name sflow-rt image sflow/ddos-protect
set container name sflow-rt host-name sflow-rt
set container name sflow-rt arguments '-Dddos_protect.router=192.168.1.1 -Dddos_protect.enable.flowspec=yes'
set container name sflow-rt environment RTMEM value 200M
set container name sflow-rt memory 0
set container name sflow-rt volume store source /config/sflow-rt
set container name sflow-rt volume store destination /sflow-rt/store
set container name sflow-rt network sflowrt address 192.168.1.2
Configure a container to run the image. The RTMEM environment variable setting limits the amount of memory that the container will use to 200M bytes.
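To confirm that the container has been created and is running, VyOS operational mode can be queried; a minimal sketch, assuming the show container command available in recent VyOS 1.4 releases:
vyos@vyos:~$ show container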
vyos@vyos:~$ add container image sflow/ddos-protect
First, download the sflow/ddos-protect image.
vyos@vyos:~$ mkdir -m 777 /config/sflow-rt
Create a directory to store persistent container state.
set container name sflow-rt image sflow/ddos-protect
set container name sflow-rt allow-host-networks
set container name sflow-rt arguments '-Dhttp.hostname=10.0.0.240'
set container name sflow-rt environment RTMEM value 200M
set container name sflow-rt memory 0
set container name sflow-rt volume store source /config/sflow-rt
set container name sflow-rt volume store destination /sflow-rt/store
Configure a container to run the image. The RTMEM environment variable setting limits the amount of memory that the container will use to 200M bytes. The -Dhttp.hostname argument sets the internal web server to listen on the management address, 10.0.0.240, assigned to eth0 on this router. The container has no built-in authentication, so access needs to be limited using an ACL or through a reverse proxy - see Download and install.
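Once the configuration is committed, a quick way to check that the web server is reachable on the management address is to query it from a host on the management network; a sketch assuming sFlow-RT's default /version REST endpoint on port 8008:
curl http://10.0.0.240:8008/version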
set system sflow interface eth0
set system sflow interface eth1
...
vyos@vyos:~$ uname -r
6.1.22-amd64-vyos
The latest VyOS rolling release runs on a Linux 6.1 kernel and now provides enhanced visibility into dropped packets using kernel drop reason codes.
vyos@vyos:~$ show version
Version: VyOS 1.4-rolling-202303310716
Release train: current
Built by: autobuild@vyos.net
Built on: Fri 31 Mar 2023 07:16 UTC
Build UUID: 1a7448d9-d53c-48a0-8644-ed1970c1abb8
Build commit ID: 75c9311fba375e
Architecture: x86_64
Boot via: installed image
System type: guest
Hardware vendor: innotek GmbH
Hardware model: VirtualBox
Hardware S/N: 0
Hardware UUID: da75808d-ff60-1d4c-babd-84a7fa341053
Copyright: VyOS maintainers and contributors
Verify that the version of VyOS is 1.4-rolling-202303310716 or later.
In the previous article, VyOS dropped packet notifications, two tests were performed: the first, a failed attempt to connect to the VyOS router using telnet (telnet has been disabled on the router) ...
The recent addition of dropreason.h in Linux 6+ kernels provides detailed reasons for packet drops. The netlink drop_monitor API has been extended to include the NET_DM_ATTR_REASON attribute to report the drop reason, see net_dropmon.h.
The following example illustrates the value of the reason code in explaining Linux packet drops.
tcp_v4_rcv+0x7c/0xef0
The value of NET_DM_ATTR_SYMBOL shown above indicates that the packet was dropped in the tcp_v4_rcv function in the Linux kernel, at offset 0x7c into the function (whose length is 0xef0). While this information is helpful, there are many reasons why a TCP packet may be dropped.
NO_SOCKET
In this case, the value of NET_DM_ATTR_REASON shown above indicates that the TCP packet was dropped because no application had opened a socket and so there was nowhere to deliver the packet.
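The same drop reasons can also be observed directly from the skb:kfree_skb tracepoint; the following sketch assumes perf is installed and that the running 6.1 kernel exposes the reason field in the tracepoint:
# record dropped packet events on all CPUs for 10 seconds
sudo perf record -e skb:kfree_skb -a -- sleep 10
# print the recorded events; each record includes the drop reason (e.g. NO_SOCKET)
sudo perf script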
In the case of Linux-based hardware switches or smart network adapters, where packet processing is offloaded to hardware, the netlink drop_monitor events include NET_DM_ATTR_HW_TRAP_GROUP_NAME and NET_DM_ATTR_HW_TRAP_NAME attributes along with packet header information supplied by the hardware.
Dropped packets have a profound impact on network performance and availability. Packet discards due to congestion can significantly impact application performance. Dropped packets due to black hole routes, expired TTLs, MTU mismatches, etc. can result in insidious connection failures that are time consuming and difficult to diagnose. Visibility into dropped packets offers significant benefits for network troubleshooting, providing real-time network-wide visibility into the specific packets that were dropped as well as the reason each packet was dropped. This visibility instantly reveals the root cause of drops and the impacted connections.
vyos@vyos:~$ show version
Version: VyOS 1.4-rolling-202303260914
Release train: current
Built by: autobuild@vyos.net
Built on: Sun 26 Mar 2023 09:14 UTC
Build UUID: 72b34f74-bfcd-4b51-9b95-544319c2dac5
Build commit ID: d68bda6a295ba9
Architecture: x86_64
Boot via: installed image
System type: guest
Hardware vendor: innotek GmbH
Hardware model: VirtualBox
Hardware S/N: 0
Hardware UUID: df0a2b79-b8c4-8342-a27f-76aa3e52ad6d
Copyright: VyOS maintainers and contributors
Verify that the version of VyOS is 1.4-rolling-202303260914 or later.
On VyOS dropped packet monitoring ...
vyos@vyos:~$ show version
Verify that the version of VyOS is 1.4-rolling-202303170317 or later.
Version: VyOS 1.4-rolling-202303170317
Release train: current
Built by: autobuild@vyos.net
Built on: Fri 17 Mar 2023 03:17 UTC
Build UUID: 45391302-1240-4cc7-95a8-da8ee6390765
Build commit ID: e887f582cfd7de
Architecture: x86_64
Boot via: installed image
System type: guest
Hardware vendor: innotek GmbH
Hardware model: VirtualBox
Hardware S/N: 0
Hardware UUID: 871dd0f0-c4ec-f147-b1a7-ed536511f141
Copyright: VyOS maintainers and contributors
set system sflow interface eth0
set system sflow interface eth1
set system sflow interface eth2
set system sflow polling 30
set system sflow sampling-rate 1000
set system sflow server 10.0.0.30 port 6343
The above commands configure sFlow export in the VyOS CLI using the embedded Host sFlow agent.
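Before starting the analytics software, a simple way to confirm that the router is exporting sFlow is to capture a few datagrams on the collector host (10.0.0.30 in this example); a sketch using tcpdump:
# capture five sFlow datagrams arriving on UDP port 6343
sudo tcpdump -ni any -c 5 udp port 6343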
docker run --name sflow-rt -p 8008:8008 -p 6343:6343/udp -d sflow/prometheus
A quick way to experiment with sFlow is to run the pre-built sflow/prometheus image.
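Once the analytics container is running, its REST API can be used to confirm that telemetry is arriving; a sketch assuming the standard sFlow-RT /version and /agents/json endpoints:
curl http://localhost:8008/version
curl http://localhost:8008/agents/json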
VyOS claims sFlow support, so why is it necessary to install an alternative sFlow agent? The following experiment demonstrates that there are significant issues with the VyOS sFlow implementation.
vyos@vyos:~$ show version
Install a recent version of VyOS under VirtualBox and configure routing between two Linux virtual machines connected to eth1 and eth2 on the router. Out-of-band management is configured on eth0.
Version: VyOS 1.4-rolling-202301260317
Release train: current
Built by: autobuild@vyos.net
Built on: Thu 26 Jan 2023 03:17 UTC
Build UUID: a95385b7-12f9-438d-b49c-b91f47ea7ab7
Build commit ID: d5ea780295ef8e
Architecture: x86_64
Boot via: installed image
System type: KVM guest
Hardware vendor: innotek GmbH
Hardware model: VirtualBox
Hardware S/N: 0
Hardware UUID: 6988d219-49a6-0a4a-9413-756b0395a73d
Copyright: VyOS maintainers and contributors
set system flow-accounting disable-imt
set system flow-accounting sflow agent-address 10.0.0.50
set system flow-accounting sflow sampling-rate 1000
set system flow-accounting sflow server 10.0.0.30 port 6343
set system flow-accounting interface eth0
set system flow-accounting interface eth1
set system flow-accounting interface eth2
The above commands configure sFlow monitoring on interfaces eth0, eth1, and eth2, sampling packets at 1-in-1000 and exporting to the collector at 10.0.0.30.
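With flow-accounting configured, test traffic can be generated between the two Linux virtual machines to produce sFlow samples; a sketch using iperf3 with placeholder addresses (substitute the addresses used in your lab):
# on the virtual machine attached to eth2
iperf3 -s
# on the virtual machine attached to eth1 (placeholder server address)
iperf3 -c 192.168.2.10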
This article describes an experiment with Containerlab's advanced Generated topologies capability, taking the 3 stage Clos topology shown above and creating a template that can be used to generate topologies with any number of leaf and spine switches.
The clos3.yml topology file specifies the 2 leaf 2 spine topology shown above:
name: clos3
mgmt:
  network: fixedips
  ipv4_subnet: 172.100.100.0/24
  ipv6_subnet: 2001:172:100:100::/80
topology:
  defaults:
    env:
      COLLECTOR: 172.100.100.8
  nodes:
    leaf1:
      kind: linux
      image: sflow/clab-frr
      mgmt_ipv4: 172.100.100.2
      mgmt_ipv6: 2001:172:100:100::2
      env:
        LOCAL_AS: 65001
        NEIGHBORS: eth1 eth2
        HOSTPORT: eth3
        HOSTNET: "172.16.1.1/24"
        HOSTNET6: "2001:172:16:1::1/64"
      exec:
        - touch /tmp/initialized
    leaf2:
      kind: linux
      image: sflow/clab-frr
      mgmt_ipv4: 172.100.100.3
      mgmt_ipv6: 2001:172:100:100::3
      env:
        LOCAL_AS: 65002
        NEIGHBORS: ...
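Once the complete clos3.yml file has been downloaded, the topology is deployed in the same way as the other labs in this collection:
containerlab deploy -t clos3.yml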
Containerlab is a Docker orchestration tool for creating virtual network topologies. The sflow-rt/containerlab project contains a number of labs demonstrating industry standard streaming sFlow telemetry with realistic data center topologies. This article extends the examples in Real-time telemetry from a 5 stage Clos fabric and Real-time EVPN fabric visibility to demonstrate visibility into IPv6 traffic flows.
docker run --rm -it --privileged --network host --pid="host" \
-v /var/run/docker.sock:/var/run/docker.sock -v /run/netns:/run/netns \
-v $(pwd):$(pwd) -w $(pwd) \
ghcr.io/srl-labs/clab bash
Run the above command to start Containerlab if you already have Docker installed. Otherwise, Installation provides detailed instructions for a variety of platforms.
curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/clos5.yml
Download the topology file for the 5 stage Clos fabric shown above.
containerlab deploy -t clos5.yml
Finally, deploy the topology.
The screen capture shows a real-time view of traffic flowing across the network during an iperf3 test. Click on the sFlow-RT Apps menu and select the browse-flows application, or click here for a direct link to a chart with the settings shown above.
docker exec -it clab-clos5-h1 iperf3 -c 2001:172:16:4::2
Each of the hosts in the network has an iperf3 server, so running the above command will test bandwidth between h1 and the host at 2001:172:16:4::2; the same approach can be used to test any pair of hosts.
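The same flow data can be queried programmatically; the following sketch assumes sFlow-RT's /flow and /activeflows REST endpoints and the ip6source / ip6destination flow keys (the flow name pair6 is arbitrary):
# define a flow tracking IPv6 source/destination pairs by bits per second
curl -X PUT -H "Content-Type: application/json" \
-d '{"keys":"ip6source,ip6destination","value":"bps"}' \
http://localhost:8008/flow/pair6/json
# query the largest currently active flows
curl "http://localhost:8008/activeflows/ALL/pair6/json?maxFlows=5"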
Scientific network tags (scitags) is an initiative promoting identification of the science domains and their high-level activities at the network level. Participants include dCache, ESnet, GÉANT, Internet2, Jisc, NORDUnet, OFTS, OSG, RNP, RUCIO, StarLight, and XRootD.
This article will demonstrate how industry standard sFlow telemetry streaming from switches and routers can be used to report on science domain activity in real-time using the sFlow-RT analytics engine.
The scitags initiative makes use of the IPv6 packet header to mark traffic. Experiment and activity identifiers are encoded in the IPv6 Flow label field. Identifiers are published in an online registry in the form of a JSON document, https://www.scitags.org/api.json.
One might expect IPFIX / NetFlow to be a ...
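The registry is straightforward to inspect; a minimal sketch using curl and jq to pretty-print the published experiment and activity identifiers:
curl -s https://www.scitags.org/api.json | jq .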
Sonification presents data as sounds instead of visual charts. One of the best known examples of sonification is the representation of radiation level as a click rate in a Geiger counter. This article describes ddos-sonify, an experiment to see if sound can be usefully employed to represent information about Distributed Denial of Service (DDoS) attacks. The DDoS attacks and BGP Flowspec responses testbed was used to create the video demonstration at the top of this page in which a series of simulated DDoS attacks are detected and mitigated. Play the video to hear the results.
The software uses the Tone.js library to control Web Audio sound generation functionality in a web browser.
var voices = {};
var loop;
var loopInterval = '4n';
$('#sonify').click(function() {
  if($(this).prop("checked")) {
    voices.synth = new Tone.PolySynth(Tone.Synth).toDestination();
    voices.metal = new Tone.PolySynth(Tone.MetalSynth).toDestination();
    voices.pluck = new Tone.PolySynth(Tone.PluckSynth).toDestination();
    voices.membrane = new Tone.PolySynth(Tone.MembraneSynth).toDestination();
    voices.am = new Tone.PolySynth(Tone.AMSynth).toDestination();
    voices.fm = new Tone.PolySynth(Tone.FMSynth).toDestination();
    voices.duo = new Tone.PolySynth(Tone.DuoSynth).toDestination();
    Tone.Transport.bpm.value=80;
    loop = new Tone.Loop((now) => {
      sonify(now);
    },loopInterval).start(0);
    ...
This article describes how to use the instrumentation built into ConnectX SmartNICs for data center wide network visibility. Real-time network telemetry for automation provides some background, giving an overview of the sFlow industry standard with an example of troubleshooting a high performance GPU compute cluster.
Linux as a network operating system describes how standard Linux APIs are used in NVIDIA Spectrum switches to monitor data center network performance. Linux Kernel Upstream Release Notes v5.19 describes recent driver enhancements for ConnectX SmartNICs that extend visibility to servers for end-to-end visibility into the performance of high performance distributed compute infrastructure.
The open source Host sFlow agent uses standard Linux APIs to configure instrumentation in switches and hosts, streaming the resulting measurements to analytics software in real-time for comprehensive data center wide visibility.
Packet sampling provides detailed visibility into traffic flowing across the network. Hardware packet sampling makes it possible to monitor 400 gigabit per second interfaces on the server at line rate with minimal CPU/memory overhead.
psample {
  ...
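To illustrate how sampled packets reach a psample channel, the following sketch shows an equivalent software sampling rule using the tc sample action; the interface name and sampling rate are illustrative, and on ConnectX hardware the Host sFlow agent configures the offloaded equivalent:
# attach a classifier hook to the interface
sudo tc qdisc add dev eth0 clsact
# sample 1-in-1000 ingress packets into psample group 1
sudo tc filter add dev eth0 ingress matchall action sample rate 1000 group 1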
This article builds on the Docker testbed to demonstrate how advanced flow analytics can be used to separate the two types of traffic and detect the DDoS attack.
docker run --rm -d -e "COLLECTOR=host.docker.internal" -e "SAMPLING=100" \
--net=host -v /var/run/docker.sock:/var/run/docker.sock:ro \
--name=host-sflow sflow/host-sflow
First, start a Host sFlow agent using the pre-built sflow/host-sflow image to generate the sFlow telemetry that would stream from the switches and routers in a production deployment.
setFlow('ddos_amplification', {
  keys:'ipdestination,udpsourceport',
  value: 'frames',
  values: ['count:ipsource']
});
setThreshold('ddos_amplification', {
  metric:'ddos_amplification',
  value: 10000,
  byFlow:true,
  timeout: 2
});
setEventHandler(function(event) {
  var [ipdestination,udpsourceport] = event.flowKey.split(',');
  var [sourcecount] = event.values;
  ...
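A sketch of how a script like this might be loaded into sFlow-RT, assuming the script.file system property and the sflow/sflow-rt image (the file name ddos_amplification.js is illustrative):
docker run --rm -p 8008:8008 -p 6343:6343/udp \
-v $(pwd)/ddos_amplification.js:/sflow-rt/ddos_amplification.js \
sflow/sflow-rt -Dscript.file=ddos_amplification.js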
This article uses Containerlab to emulate a simple network and experiment with Nokia SR Linux and sFlow telemetry. Containerlab provides a convenient method of emulating network topologies and configurations before deploying into production on physical switches.
curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/srlinux.yml
Download the Containerlab topology file.
containerlab deploy -t srlinux.yml
Deploy the topology.
docker exec -it clab-srlinux-h1 traceroute 172.16.2.2
Run traceroute on h1 to verify path to h2.
traceroute to 172.16.2.2 (172.16.2.2), 30 hops max, 46 byte packets
1 172.16.1.1 (172.16.1.1) 2.234 ms * 1.673 ms
2 172.16.2.2 (172.16.2.2) 0.944 ms 0.253 ms 0.152 ms
Results show path to h2 (172.16.2.2) via router interface (172.16.1.1).
docker exec -it clab-srlinux-switch sr_cli
Access SR Linux command line on switch.
Using configuration file(s): []
Welcome to the srlinux CLI.
Type 'help' (and press <ENTER>) if you need any help using this.
--{ + running }--[ ]--
A:switch#
SR Linux CLI describes how to use the interface.
A:switch# show system sflow status
Get status of sFlow telemetry.
-------------------------------------------------------------------------
Admin State ...
Remote Triggered Black Hole Scenario describes how to use the Ixia-c traffic generator to simulate a DDoS flood attack. Ixia-c supports the Open Traffic Generator API that is used in the article to program two traffic flows: the first representing normal user traffic (shown in blue) and the second representing attack traffic (shown in red).
The article goes on to demonstrate the use of remotely triggered black hole (RTBH) routing to automatically mitigate the simulated attack. The chart above shows traffic levels during two simulated attacks. The DDoS mitigation controller is disabled during the first attack. Enabling the controller for the second attack causes the attack traffic to be dropped the instant it crosses the threshold.
The diagram shows the Containerlab topology used in the Remote Triggered Black Hole Scenario lab (which can run on a laptop). The Ixia traffic generator's eth1 interface represents the Internet and its eth2 interface represents the Customer Network being attacked. Industry standard sFlow telemetry from the Customer router, ce-router, streams to the DDoS mitigation controller (running an instance of DDoS Protect). When the controller detects a denial of service attack, it pushes a control via BGP to the ce-router to drop the attack traffic.
docker run --rm -it --privileged --network host --pid="host" \
-v /var/run/docker.sock:/var/run/docker.sock -v /run/netns:/run/netns \
-v ~/clab:/home/clab -w /home/clab \
ghcr.io/srl-labs/clab bash
Start Containerlab.
curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/ddos.yml
Download the Containerlab topology file.
sed -i "s/\\.ip_flood\\.action=filter/\\.ip_flood\\.action=drop/g" ddos.yml
Change the mitigation policy for IP Flood attacks from Flowspec filter to RTBH.
containerlab deploy -t ddos.yml
Deploy the topology. Access the DDoS Protect screen at http://localhost:8008/app/ddos-protect/html/
docker exec -it clab-ddos-attacker hping3 \
--flood --rawip -H 47 192.0.2.129
Launch an IP Flood attack. The DDoS Protect dashboard shows that as soon as the ip_flood attack traffic reaches the threshold, a control is implemented and the attack traffic is immediately dropped. The entire process of launching, detecting, and mitigating the attack happens within a second, ensuring minimal impact on network capacity and services.
docker exec -it clab-ddos-sp-router vtysh -c "show running-config"
See ...
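When the experiment is finished, the emulated network can be removed; a minimal cleanup step using Containerlab:
containerlab destroy -t ddos.yml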