Category Archives for "sFlow"

VyOS

VyOS is an open source router operating system based on Linux. This article discusses how to improve network traffic visibility on VyOS-based routers using the open source Host sFlow agent.
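
As a preview of the approach, a minimal Host sFlow configuration (/etc/hsflowd.conf) might look like the sketch below; the collector address, sampling rate, and interface names are placeholders, and the pcap module is just one of the sampling mechanisms the agent supports:

sflow {
  polling = 20
  sampling = 1000
  collector { ip = 10.0.0.30 udpport = 6343 }
  pcap { dev = eth1 }
  pcap { dev = eth2 }
}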

VyOS claims sFlow support, so why is it necessary to install an alternative sFlow agent? The following experiment demonstrates that there are significant issues with the VyOS sFlow implementation.

vyos@vyos:~$ show version
Version: VyOS 1.4-rolling-202301260317
Release train: current

Built by: [email protected]
Built on: Thu 26 Jan 2023 03:17 UTC
Build UUID: a95385b7-12f9-438d-b49c-b91f47ea7ab7
Build commit ID: d5ea780295ef8e

Architecture: x86_64
Boot via: installed image
System type: KVM guest

Hardware vendor: innotek GmbH
Hardware model: VirtualBox
Hardware S/N: 0
Hardware UUID: 6988d219-49a6-0a4a-9413-756b0395a73d

Copyright: VyOS maintainers and contributors
Install a recent version of VyOS under VirtualBox and configure routing between two Linux virtual machines connected to eth1 and eth2 on the router. Out-of-band management is configured on eth0.
set system flow-accounting disable-imt
set system flow-accounting sflow agent-address 10.0.0.50
set system flow-accounting sflow sampling-rate 1000
set system flow-accounting sflow server 10.0.0.30 port 6343
set system flow-accounting interface eth0
set system flow-accounting interface eth1
set system flow-accounting interface eth2
The above commands configure sFlow monitoring Continue reading

Real-time flow analytics with Containerlab templates

The GitHub sflow-rt/containerlab project contains example network topologies for the Containerlab network emulation tool that demonstrate real-time streaming telemetry in realistic data center topologies and network configurations. The examples use the same FRRouting (FRR) engine that is part of SONiC, NVIDIA Cumulus Linux, and DENT. Containerlab can be used to experiment before deploying solutions into production. Examples include: tracing ECMP flows in leaf and spine topologies, EVPN visibility, and automated DDoS mitigation using BGP Flowspec and RTBH controls.

This article describes an experiment with Containerlab's advanced Generated topologies capability, taking the 3 stage Clos topology shown above and creating a template that can be used to generate topologies with any number of leaf and spine switches.

The clos3.yml topology file specifies the 2 leaf 2 spine topology shown above:

name: clos3
mgmt:
  network: fixedips
  ipv4_subnet: 172.100.100.0/24
  ipv6_subnet: 2001:172:100:100::/80

topology:
  defaults:
    env:
      COLLECTOR: 172.100.100.8
  nodes:
    leaf1:
      kind: linux
      image: sflow/clab-frr
      mgmt_ipv4: 172.100.100.2
      mgmt_ipv6: 2001:172:100:100::2
      env:
        LOCAL_AS: 65001
        NEIGHBORS: eth1 eth2
        HOSTPORT: eth3
        HOSTNET: "172.16.1.1/24"
        HOSTNET6: "2001:172:16:1::1/64"
      exec:
        - touch /tmp/initialized
    leaf2:
      kind: linux
      image: sflow/clab-frr
      mgmt_ipv4: 172.100.100.3
      mgmt_ipv6: 2001:172:100:100::3
      env:
        LOCAL_AS: 65002
        NEIGHBORS: Continue reading

IPv6 flow analytics with Containerlab

Containerlab is a Docker orchestration tool for creating virtual network topologies. The sflow-rt/containerlab project contains a number of topologies demonstrating industry standard streaming sFlow telemetry with realistic data center topologies. This article extends the examples in Real-time telemetry from a 5 stage Clos fabric and Real-time EVPN fabric visibility to demonstrate visibility into IPv6 traffic flows.

docker run --rm -it --privileged --network host --pid="host" \
-v /var/run/docker.sock:/var/run/docker.sock -v /run/netns:/run/netns \
-v $(pwd):$(pwd) -w $(pwd) \
ghcr.io/srl-labs/clab bash

Run the above command to start Containerlab if you already have Docker installed. Otherwise, Installation provides detailed instructions for a variety of platforms.

curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/clos5.yml

Download the topology file for the 5 stage Clos fabric shown above.

containerlab deploy -t clos5.yml

Finally, deploy the topology.

The screen capture shows a real-time view of traffic flowing across the network during an iperf3 test. Click on the sFlow-RT Apps menu and select the browse-flows application, or click here for a direct link to a chart with the settings shown above.
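
The flow definition behind the chart can also be programmed directly using sFlow-RT's REST API. The sketch below assumes the chart tracks bytes per second between IPv6 source and destination addresses; adjust the keys and flow name to match the settings in the screen capture:

curl -X PUT -H "Content-Type: application/json" \
-d '{"keys":"ip6source,ip6destination","value":"bps"}' \
http://localhost:8008/flow/ipv6/json
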
docker exec -it clab-clos5-h1 iperf3 -c 2001:172:16:4::2

Each of the hosts in the network has an iperf3 server, so running the above command will test bandwidth between Continue reading

SC22 SCinet network monitoring

The data shown in the chart was gathered from The International Conference for High Performance Computing, Networking, Storage, and Analysis (SC22) being held this week in Dallas. The conference network, SCinet, is described as the fastest and most powerful network on Earth, connecting the SC community to the world. The chart provides an up-to-the-second view of overall SCinet traffic, with the lower chart showing total traffic hitting a sustained 8Tbps.
The poster shows the topology of the SCinet network. Constructing the charts requires monitoring flow data from 5,852 switch/router ports, representing 162Tbps of total bandwidth, with sub-second latency.
The chart was generated using industry standard streaming sFlow telemetry from switches and routers in the SCinet network. An instance of the sFlow-RT real-time analytics engine computes the flow metrics shown in the charts.
Most of the load was due to large 400Gbit/s, 200Gbit/s and 100Gbit/s flows that were part of the Network Research Exhibition. The chart above shows that 10 large flows are responsible for 1.5Tbps of traffic.
Scientific network tags (scitags) describes how IPv6 flow labels allow network flow analytics to identify network traffic associated with bulk scientific data transfers.
RDMA network visibility shows how bulk Continue reading

RDMA network visibility

The Remote Direct Memory Access (RDMA) data shown in the chart was gathered from The International Conference for High Performance Computing, Networking, Storage, and Analysis (SC22) being held this week in Dallas. The conference network, SCinet, is described as the fastest and most powerful network on Earth, connecting the SC community to the world.
Resilient Distributed Processing and Reconfigurable Networks is one of the demonstrations using SCinet - Location: Booth 2847 (StarLight). The planned SC22 focus is on RDMA-enabled data movement and dynamic network control.
  1. RDMA Tbps performance over global distance for timely Terabyte bulk data transfers (goal << 1 min Tbyte transfer on N by 400G network).
  2. Dynamic shifting of processing and network resources from on location/path/system to another (in response to demand and availability).
The real-time chart at the top of this page shows an up-to-the-second view of RDMA traffic (broken out by source, destination, and RDMA operation).
The chart was generated using industry standard streaming sFlow telemetry from switches and routers in the SCinet network. An instance of the sFlow-RT analytics engine computes the RDMA flow metrics shown in the chart. RESTflow describes how sFlow disaggregates the traditional NetFlow / IPFIX analytics pipeline Continue reading

Scientific network tags (scitags)

The data shown in the chart was gathered from The International Conference for High Performance Computing, Networking, Storage, and Analysis (SC22) being held this week in Dallas. The conference network, SCinet, is described as the fastest and most powerful network on Earth, connecting the SC community to the world. The chart shows data generated as part of the Packet Marking for Networked Scientific Workflows demonstration using SCinet - Booth 2847 (StarLight).

Scientific network tags (scitags) is an initiative promoting identification of the science domains and their high-level activities at the network level. Participants include dCache, ESnet, GÉANT, Internet2, Jisc, NORDUnet, OFTS, OSG, RNP, RUCIO, StarLight, and XRootD.

This article will demonstrate how industry standard sFlow telemetry streaming from switches and routers can be used to report on science domain activity in real-time using the sFlow-RT analytics engine.

The scitags initiative makes use of the IPv6 packet header to mark traffic. Experiment and activity identifiers are encoded in the IPv6 Flow label field. Identifiers are published in an online registry in the form of a JSON document, https://www.scitags.org/api.json.
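
As an illustration, the following Node.js sketch retrieves the registry and lists the registered experiments; the field names (experiments, expName, expId) are assumptions about the registry schema and should be checked against the live document:

// Sketch: list experiments from the scitags registry (Node.js 18+, run as an ES module).
// Field names are assumptions -- verify against https://www.scitags.org/api.json
const res = await fetch('https://www.scitags.org/api.json');
const registry = await res.json();
for (const exp of registry.experiments || []) {
  console.log(exp.expId, exp.expName);
}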

One might expect IPFIX / NetFlow to be a Continue reading

Low latency flow analytics


Real-time analytics on network flow data with Apache Pinot describes LinkedIn's flow ingestion and analytics pipeline for sFlow and IPFIX exports from network devices. The solution uses Apache Kafka message queues to connect LinkedIn's InFlow flow analyzer with the Apache Pinot datastore to support low latency queries. The article describes the scale of the monitoring system, "InFlow receives 50k flows per second from over 100 different network devices on the LinkedIn backbone and edge devices," and states, "InFlow requires storage of tens of TBs of data with a retention of 30 days." The article concludes, "Following the successful onboarding of flow data to a real-time table on Pinot, freshness of data improved from 15 mins to 1 minute and query latencies were reduced by as much as 95%."
The sFlow-RT real-time analytics engine provides a faster, simpler, more scalable alternative for flow monitoring. sFlow-RT radically simplifies the measurement pipeline, combining flow collection, enrichment, and analytics in a single programmable stage. Removing pipeline stages improves data freshness — flow measurements represent an up-to-the-second view of traffic flowing through the monitored network devices. The improvement from minute to sub-second data freshness enhances automation use cases such as automated DDoS Continue reading

DDoS Sonification

Sonification presents data as sounds instead of visual charts. One of the best known examples of sonification is the representation of radiation level as a click rate in a Geiger counter. This article describes ddos-sonify, an experiment to see if sound can be usefully employed to represent information about Distributed Denial of Service (DDoS) attacks. The DDoS attacks and BGP Flowspec responses testbed was used to create the video demonstration at the top of this page in which a series of simulated DDoS attacks are detected and mitigated. Play the video to hear the results.

The software uses the Tone.js library to control Web Audio sound generation functionality in a web browser.

var voices = {};
var loop;
var loopInterval = '4n';
$('#sonify').click(function() {
  if($(this).prop("checked")) {
    // create a palette of synthesizer voices, one per type of sound
    voices.synth = new Tone.PolySynth(Tone.Synth).toDestination();
    voices.metal = new Tone.PolySynth(Tone.MetalSynth).toDestination();
    voices.pluck = new Tone.PolySynth(Tone.PluckSynth).toDestination();
    voices.membrane = new Tone.PolySynth(Tone.MembraneSynth).toDestination();
    voices.am = new Tone.PolySynth(Tone.AMSynth).toDestination();
    voices.fm = new Tone.PolySynth(Tone.FMSynth).toDestination();
    voices.duo = new Tone.PolySynth(Tone.DuoSynth).toDestination();
    Tone.Transport.bpm.value = 80;
    // schedule the sonify() callback on a repeating quarter-note loop
    loop = new Tone.Loop((now) => {
      sonify(now);
    }, loopInterval).start(0);
Continue reading

NVIDIA ConnectX SmartNICs

NVIDIA ConnectX SmartNICs offer best-in-class network performance, serving low-latency, high-throughput applications with one, two, or four ports at 10, 25, 40, 50, 100, 200, and up to 400 gigabits per second (Gb/s) Ethernet speeds.

This article describes how to use the instrumentation built into ConnectX SmartNICs for data center wide network visibility. Real-time network telemetry for automation provides some background, giving an overview of the sFlow industry standard with an example of troubleshooting a high performance GPU compute cluster.

Linux as a network operating system describes how standard Linux APIs are used in NVIDIA Spectrum switches to monitor data center network performance. Linux Kernel Upstream Release Notes v5.19 describes recent driver enhancements for ConnectX SmartNICs that extend this visibility to servers, providing end-to-end insight into the performance of high performance distributed compute infrastructure.

The open source Host sFlow agent uses standard Linux APIs to configure instrumentation in switches and hosts, streaming the resulting measurements to analytics software in real-time for comprehensive data center wide visibility.

Packet sampling provides detailed visibility into traffic flowing across the network. Hardware packet sampling makes it possible to monitor 400 gigabits per second interfaces on the server at line rate with minimal CPU/memory overhead.
psample {  Continue reading

DDoS detection with advanced real-time flow analytics

The diagram shows two high bandwidth flows of traffic to the Customer Network, the first (shown in blue) is a bulk transfer of data to a big data application, and the second (shown in red) is a distributed denial of service (DDoS) attack in which large numbers of compromised hosts attempt to flood the link connecting the Customer Network to the upstream Transit Provider. Industry standard sFlow telemetry from the customer router streams to an instance of the sFlow-RT real-time analytics engine which is programmed to detect (and mitigate) the DDoS attack.

This article builds on the Docker testbed to demonstrate how advanced flow analytics can be used to separate the two types of traffic and detect the DDoS attack.

docker run --rm -d -e "COLLECTOR=host.docker.internal" -e "SAMPLING=100" \
--net=host -v /var/run/docker.sock:/var/run/docker.sock:ro \
--name=host-sflow sflow/host-sflow
First, start a Host sFlow agent using the pre-built sflow/host-sflow image to generate the sFlow telemetry that would stream from the switches and routers in a production deployment. 
setFlow('ddos_amplification', {
  keys: 'ipdestination,udpsourceport',
  value: 'frames',
  values: ['count:ipsource']  // count distinct source addresses per destination/port
});
setThreshold('ddos_amplification', {
  metric: 'ddos_amplification',
  value: 10000,
  byFlow: true,  // apply the threshold to each flow individually
  timeout: 2
});
setEventHandler(function(event) {
  var [ipdestination,udpsourceport] = event.flowKey.split(',');
  var [sourcecount] = event.values;
Continue reading

SR Linux in Containerlab

This article uses Containerlab to emulate a simple network and experiment with Nokia SR Linux and sFlow telemetry. Containerlab provides a convenient method of emulating network topologies and configurations before deploying into production on physical switches.

curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/srlinux.yml

Download the Containerlab topology file.

containerlab deploy -t srlinux.yml

Deploy the topology.

docker exec -it clab-srlinux-h1 traceroute 172.16.2.2

Run traceroute on h1 to verify path to h2.

traceroute to 172.16.2.2 (172.16.2.2), 30 hops max, 46 byte packets
1 172.16.1.1 (172.16.1.1) 2.234 ms * 1.673 ms
2 172.16.2.2 (172.16.2.2) 0.944 ms 0.253 ms 0.152 ms

Results show path to h2 (172.16.2.2) via router interface (172.16.1.1).

docker exec -it clab-srlinux-switch sr_cli

Access SR Linux command line on switch.

Using configuration file(s): []
Welcome to the srlinux CLI.
Type 'help' (and press <ENTER>) if you need any help using this.
--{ + running }--[ ]--
A:switch#

SR Linux CLI describes how to use the interface.
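
If sFlow is not already enabled by the topology file, configuration looks roughly like the sketch below. The paths are based on the SR Linux documentation; the collector address, source address, and network-instance are placeholders to adapt for this lab:

enter candidate
set / system sflow admin-state enable
set / system sflow sample-rate 1000
set / system sflow collector 1 collector-address 172.100.100.8
set / system sflow collector 1 source-address 172.100.100.11
set / system sflow collector 1 network-instance mgmt
set / system sflow collector 1 port 6343
commit now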

A:switch# show system sflow status

Get status of sFlow telemetry.

-------------------------------------------------------------------------
Admin State Continue reading

Using Ixia-c to test RTBH DDoS mitigation

Remote Triggered Black Hole Scenario describes how to use the Ixia-c traffic generator to simulate a DDoS flood attack. Ixia-c supports the Open Traffic Generator API that is used in the article to program two traffic flows: the first representing normal user traffic (shown in blue) and the second representing attack traffic (shown in red).

The article goes on to demonstrate the use of remotely triggered black hole (RTBH) routing to automatically mitigate the simulated attack. The chart above shows traffic levels during two simulated attacks. The DDoS mitigation controller is disabled during the first attack. Enabling the controller for the second attack causes the attack traffic to be dropped the instant it crosses the threshold.

The diagram shows the Containerlab topology used in the Remote Triggered Black Hole Scenario lab (which can run on a laptop). The Ixia traffic generator's eth1 interface represents the Internet and its eth2 interface represents the Customer Network being attacked. Industry standard sFlow telemetry from the Customer router, ce-router, streams to the DDoS mitigation controller (running an instance of DDoS Protect). When the controller detects a denial of service attack it pushes a control via BGP to the ce-router, Continue reading

BGP Remotely Triggered Blackhole (RTBH)

DDoS attacks and BGP Flowspec responses describes how to simulate and mitigate common DDoS attacks. This article builds on the previous examples to show how BGP Remotely Triggered Blackhole (RTBH) controls can be applied in situations where BGP Flowspec is not available, or is unsuitable as a mitigation response.
docker run --rm -it --privileged --network host --pid="host" \
-v /var/run/docker.sock:/var/run/docker.sock -v /run/netns:/run/netns \
-v ~/clab:/home/clab -w /home/clab \
ghcr.io/srl-labs/clab bash
Start Containerlab.
curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/ddos.yml
Download the Containerlab topology file.
sed -i "s/\\.ip_flood\\.action=filter/\\.ip_flood\\.action=drop/g" ddos.yml
Change mitigation policy for IP Flood attacks from Flowspec filter to RTBH.
containerlab deploy -t ddos.yml
Deploy the topology.
Access the DDoS Protect screen at http://localhost:8008/app/ddos-protect/html/
docker exec -it clab-ddos-attacker hping3 \
--flood --rawip -H 47 192.0.2.129
Launch an IP Flood attack. The DDoS Protect dashboard shows that as soon as the ip_flood attack traffic reaches the threshold a control is implemented and the attack traffic is immediately dropped. The entire sequence, from the attack being launched to it being detected and mitigated, happens within a second, ensuring minimal impact on network capacity and services.
docker exec -it clab-ddos-sp-router vtysh -c "show running-config"
See Continue reading

Real-time flow telemetry for routers

The last few years have seen leading router vendors add support for sFlow, the monitoring technology that has long been the industry standard for switch monitoring. Router implementations of sFlow include:
  • Arista 7020R Series Routers, 7280R Series Routers, 7500R Series Routers, 7800R3 Series Routers
  • Cisco 8000 Series Routers, ASR 9000 Series Routers, NCS 5500 Series Routers
  • Juniper ACX Series Routers, MX Series Routers, PTX Series Routers
  • Huawei NetEngine 8000 Series Routers
Broad support of sFlow in both switching and routing platforms ensures comprehensive end-to-end monitoring of traffic, see sFlow.org Network Equipment for a list of vendors and products.
Note: Most routers also support Cisco Netflow/IPFIX. Rapidly detecting large flows, sFlow vs. NetFlow/IPFIX describes why you should choose sFlow if you are interested in real-time monitoring and control applications.
DDoS mitigation is a popular use case for sFlow telemetry in routers. The combination of sFlow for real-time DDoS detection with BGP RTBH / Flowspec mitigation on routing platforms makes for a compelling solution.
DDoS protection quickstart guide describes how to deploy sFlow along with BGP RTBH/Flowspec to automatically detect and mitigate DDoS flood attacks. The use of sFlow provides sub-second visibility into network traffic during the periods of high packet loss Continue reading

DDoS attacks and BGP Flowspec responses

This article describes how to use the Containerlab DDoS testbed to simulate a variety of flood attacks and observe the automated mitigation actions designed to eliminate the attack traffic.

docker run --rm -it --privileged --network host --pid="host" \
-v /var/run/docker.sock:/var/run/docker.sock -v /run/netns:/run/netns \
-v ~/clab:/home/clab -w /home/clab \
ghcr.io/srl-labs/clab bash
Start Containerlab.
curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/ddos.yml
Download the Containerlab topology file.
containerlab deploy -t ddos.yml
Deploy the topology and access the DDoS Protect screen at http://localhost:8008/app/ddos-protect/html/
docker exec -it clab-ddos-sp-router vtysh -c "show bgp ipv4 flowspec detail"

At any time, run the command above to see the BGP Flowspec rules installed on the sp-router. Simulate the volumetric attacks using hping3.

Note: While the hping3 --rand-source option to generate packets with random source addresses would create a more authentic DDoS attack simulation, the option is not used in these examples because the victim's responses to the attack packets (ICMP Port Unreachable) will be sent back to the random addresses and may leak out of the Containerlab test network. Instead, varying source / destination ports are used to create entropy in the attacks.
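
For example, a DNS amplification attack can be simulated by fixing the UDP source port at 53 (-k -s 53), the same command used in the Containerlab DDoS testbed article below:

docker exec -it clab-ddos-attacker hping3 --flood --udp -k -s 53 192.0.2.129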

When you are finished trying the examples below, run the following command Continue reading

Containerlab DDoS testbed

Real-time telemetry from a 5 stage Clos fabric describes lightweight emulation of realistic data center switch topologies using Containerlab. This article extends the testbed to experiment with distributed denial of service (DDoS) detection and mitigation techniques described in Real-time DDoS mitigation using BGP RTBH and FlowSpec.
docker run --rm -it --privileged --network host --pid="host" \
-v /var/run/docker.sock:/var/run/docker.sock -v /run/netns:/run/netns \
-v ~/clab:/home/clab -w /home/clab \
ghcr.io/srl-labs/clab bash
Start Containerlab.
curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/ddos.yml
Download the Containerlab topology file.
containerlab deploy -t ddos.yml
Finally, deploy the topology.
Connect to the web interface, http://localhost:8008. The sFlow-RT dashboard verifies that telemetry is being received from 1 agent (the Customer Network, ce-router, in the diagram above). See the sFlow-RT Quickstart guide for more information.
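The agents streaming telemetry can also be listed programmatically; the query below uses sFlow-RT's REST API:
curl http://localhost:8008/agents/json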
Now access the DDoS Protect application at http://localhost:8008/app/ddos-protect/html/. The BGP chart at the bottom right verifies that a BGP connection has been established so that controls can be sent to the Customer Router, ce-router.
docker exec -it clab-ddos-attacker hping3 --flood --udp -k -s 53 192.0.2.129
Start a simulated DNS amplification attack using hping3.
The udp_amplification chart shows that traffic matching the attack signature has crossed the threshold. The Controls chart shows Continue reading

Real-time EVPN fabric visibility

Real-time telemetry from a 5 stage Clos fabric describes lightweight emulation of realistic data center switch topologies using Containerlab. This article builds on the example to demonstrate visibility into Ethernet Virtual Private Network (EVPN) traffic as it crosses a routed leaf and spine fabric.
docker run --rm -it --privileged --network host --pid="host" \
-v /var/run/docker.sock:/var/run/docker.sock -v /run/netns:/run/netns \
-v ~/clab:/home/clab -w /home/clab \
ghcr.io/srl-labs/clab bash
Start Containerlab.
curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/evpn3.yml
Download the Containerlab topology file.
containerlab deploy -t evpn3.yml
Finally, deploy the topology.
docker exec -it clab-evpn3-leaf1 vtysh -c "show running-config"
See configuration of leaf1 switch.
Building configuration...

Current configuration:
!
frr version 8.1_git
frr defaults datacenter
hostname leaf1
no ipv6 forwarding
log stdout
!
router bgp 65001
 bgp bestpath as-path multipath-relax
 bgp bestpath compare-routerid
 neighbor fabric peer-group
 neighbor fabric remote-as external
 neighbor fabric description Internal Fabric Network
 neighbor fabric capability extended-nexthop
 neighbor eth1 interface peer-group fabric
 neighbor eth2 interface peer-group fabric
 !
 address-family ipv4 unicast
  network 192.168.1.1/32
 exit-address-family
 !
 address-family l2vpn evpn
  neighbor fabric activate
  advertise-all-vni
 exit-address-family
exit
!
ip nht resolve-via-default
!
end
The loopback address on the switch, 192.168.1.1/32, is advertised to neighbors so that the VxLAN tunnel endpoint Continue reading

Who monitors the monitoring systems?

Adrian Cockcroft poses an interesting question in Who monitors the monitoring systems? He states, "The first thing that would be useful is to have a monitoring system that has failure modes which are uncorrelated with the infrastructure it is monitoring. ... I don't know of a specialized monitor-of-monitors product, which is one reason I wrote this blog post."

This article offers a response, describing how to introduce an uncorrelated monitor-of-monitors into the data center to provide real-time visibility that survives when the primary monitoring systems fail.

Summary of the AWS Service Event in the Northern Virginia (US-EAST-1) Region: "This congestion immediately impacted the availability of real-time monitoring data for our internal operations teams, which impaired their ability to find the source of congestion and resolve it." (December 10th, 2021)

Standardizing on a small set of communication primitives (gRPC, Thrift, Kafka, etc.) simplifies the creation of large scale distributed services. The communication primitives abstract the physical network to provide reliable communication to support distributed services running on compute nodes. Monitoring is typically regarded as a distributed service that is part of the compute infrastructure, relying on agents on compute nodes to transmit measurements to scale out analysis, storage, automation, and Continue reading

DDoS Mitigation with Cisco, sFlow, and BGP Flowspec

DDoS protection quickstart guide shows how sFlow streaming telemetry and BGP RTBH/Flowspec are combined by the DDoS Protect application running on the sFlow-RT real-time analytics engine to automatically detect and block DDoS attacks.

This article discusses how to deploy the solution in a Cisco environment. Cisco has a long history of supporting BGP Flowspec on their routing platforms and has recently added support for sFlow, see Cisco 8000 Series Routers, Cisco ASR 9000 Series Routers, and Cisco NCS 5500 Series Routers.

First, IOS-XR doesn't provide a way to connect to the non-standard BGP port (1179) that sFlow-RT uses by default. Allowing sFlow-RT to open the standard BGP port (179) requires that the service be given additional Linux capabilities.

docker run --rm --net=host --sysctl net.ipv4.ip_unprivileged_port_start=0 \
sflow/ddos-protect -Dbgp.port=179

The above command launches the prebuilt sflow/ddos-protect Docker image. Alternatively, if sFlow-RT has been installed as a deb / rpm package, then the required permissions can be added to the service.

sudo systemctl edit sflow-rt.service

Type the above command to edit the service configuration and add the following lines:

[Service]
AmbientCapabilities=CAP_NET_BIND_SERVICE
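
After saving the override, restart the service so the added capability takes effect (standard systemd practice; the unit name matches the one edited above):

sudo systemctl restart sflow-rt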

Next, edit the sFlow-RT configuration file for the DDoS Protect application:

sudo vi /usr/local/sflow-rt/conf.d/ddos-protect.conf

Continue reading

Topology aware fabric analytics

Real-time telemetry from a 5 stage Clos fabric describes how to emulate and monitor the topology shown above using Containerlab and sFlow-RT. This article extends the example to demonstrate how topology awareness enhances analytics.
docker run --rm -it --privileged --network host --pid="host" \
-v /var/run/docker.sock:/var/run/docker.sock -v /run/netns:/run/netns \
-v ~/clab:/home/clab -w /home/clab \
ghcr.io/srl-labs/clab bash
Start Containerlab.
curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/clos5.yml
Download the Containerlab topology file.
sed -i "s/prometheus/topology/g" clos5.yml
Change the sFlow-RT image from sflow/prometheus to sflow/topology in the Containerlab topology. The sflow/topology image packages sFlow-RT with useful applications that combine topology awareness with analytics.
containerlab deploy -t clos5.yml
Deploy the topology.
curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/clos5.json
Download the sFlow-RT topology file.
curl -X PUT -H "Content-Type: application/json" -d @clos5.json \
http://localhost:8008/topology/json
Post the topology to sFlow-RT.
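To verify that the topology was accepted, read it back from the same REST endpoint:
curl http://localhost:8008/topology/json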
Connect to the sFlow-RT Topology application, http://localhost:8008/app/topology/html/. The dashboard confirms that all the links and nodes in the topology are streaming telemetry. There is currently no traffic on the network, so none of the nodes in the topology are sending flow data.
docker exec -it clab-clos5-h1 iperf3 -c 172.16.4.2
Generate traffic. You should see the Nodes No Flows number drop Continue reading