The DDoS protection quickstart guide shows how sFlow streaming telemetry and BGP RTBH/Flowspec are combined by the DDoS Protect application, running on the sFlow-RT real-time analytics engine, to automatically detect and block DDoS attacks.
This article discusses how to deploy the solution in a Cisco environment. Cisco has a long history of supporting BGP Flowspec on its routing platforms and has recently added support for sFlow; see the Cisco 8000 Series Routers, Cisco ASR 9000 Series Routers, and Cisco NCS 5500 Series Routers.
First, IOS-XR doesn't provide a way to connect to the non-standard BGP port (1179) that sFlow-RT uses by default. Allowing sFlow-RT to open the standard BGP port (179) requires that the service be given additional Linux capabilities.
docker run --rm --net=host --sysctl net.ipv4.ip_unprivileged_port_start=0 \
sflow/ddos-protect -Dbgp.port=179
The above command launches the prebuilt sflow/ddos-protect Docker image. Alternatively, if sFlow-RT has been installed as a deb / rpm package, then the required permissions can be added to the service.
sudo systemctl edit sflow-rt.service
Type the above command to edit the service configuration and add the following lines:
[Service]
AmbientCapabilities=CAP_NET_BIND_SERVICE
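After saving the override, reload systemd and restart the service so the new capability takes effect. A quick way to confirm the ports are listening (assuming the default sFlow port of 6343/UDP) is to check with ss:
sudo systemctl daemon-reload
sudo systemctl restart sflow-rt
sudo ss -tlnp | grep ':179'
sudo ss -ulnp | grep ':6343'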
Next, edit the sFlow-RT configuration file for the DDoS Protect application:
sudo vi /usr/local/sflow-rt/conf.d/ddos-protect.conf
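The settings are site specific. As an illustrative sketch only (the property names below are assumptions based on the DDoS Protect quickstart guide, so verify them against the application documentation), the file takes simple name=value entries that move BGP to the standard port and identify the Flowspec peer:
bgp.port=179
ddos_protect.router=192.0.2.1
ddos_protect.as=65000
ddos_protect.enable.flowspec=yes
Here 192.0.2.1 and 65000 are placeholders for the router's address and the local autonomous system number.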
If there’s an IPv6 netblock you’d like your host to stop responding to, one tactic is to blackhole the traffic. That is, send any traffic from your host destined to the troublesome IPv6 netblock into a blackhole. Blackholes are also called null routes.
Let’s say I’m getting repeated SQL injection attacks from various hosts in the IPv6 block 2a09:8700:1::/48. Just a totally random example with no basis in reality whatsoever, whoever you are in Belize. There are various ways I can defend against this, but one (sorta ugly) option that I don’t actually recommend (read to the bottom to see my logic) is to create a blackhole, aka a null route.
On many flavors of Linux, including Ubuntu 18.04, 20.04, and 22.04, I can accomplish this task with the ip route utility. Let’s take a look at our existing host routing table.
user@host:~$ ip route
default via 123.94.146.1 dev enp1s0 proto dhcp src 123.94.146.227 metric 100
169.254.169.254 via 123.94.146.1 dev enp1s0 proto dhcp src 123.94.146.227 metric 100
123.94.146.0/23 dev enp1s0 proto kernel scope link src 123.94.146.227
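For reference, the null route itself is a one-liner with the same ip utility. A minimal sketch using the netblock from the example above:
sudo ip -6 route add blackhole 2a09:8700:1::/48
ip -6 route show type blackhole
sudo ip -6 route del blackhole 2a09:8700:1::/48
The first command drops anything the host tries to send to that netblock, the second lists the blackhole routes to confirm it took, and the third removes it again.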
Minh Ha left this comment on the Packet Forwarding 101 blog post. As is usually the case, it’s fun reading and it would be a shame not to repost it as a standalone blog post (even though I don’t necessarily agree with all his conclusions).
I always enjoy Bela’s great insights, esp. on hardware and transport networks, but this time I beg to differ. LISP is a false economy. It was twisted from the start, unscalable right from the get-go. In Networking and OS, to name (ID) something is to locate it, and vice versa. So the name LISP itself reflects a false distinction. Due to this misconception, LISP proponents are unable to establish the right boundary conditions, leading to the size of xTRs' RIB diverging (going unbounded). In a word, it has come full circle back to BGP, an exemplary manifestation of RFC 1925 rule 6.
The next installment of Michael Levan’s series on networking in public clouds walks through how to set up a VPC (Virtual Private Cloud) in AWS and a VNet (Virtual Network) in Microsoft Azure. You can subscribe to the Packet Pushers’ YouTube channel for more videos as they are published. It’s a diverse mix of […]
The post Cloud Engineering For The Network Pro: Part 3 – VPCs And Virtual Networks (Video) appeared first on Packet Pushers.
Today’s Tech Bytes podcast explores threat intelligence with sponsor Fortinet and its FortiGuard Labs. FortiGuard Labs analyzes billions of global security events daily and distills them into actionable information for network and security teams. Fortinet also uses those events to inform security updates to its products.
The post Tech Bytes: How Fortinet’s FortiGuard Labs Turns Billions Of Security Events Into Intelligence (Sponsored) appeared first on Packet Pushers.
This post originally appeared on the Packet Pushers’ Ignition site on October 19, 2020. The network edge is desirable new territory for software and hardware vendors. The objective is to get compute, networking, storage, and security features as close as possible to data sources and data-hungry applications. VMware and NVIDIA have launched new initiatives to […]
The post VMware And NVIDIA Focus On The Far Edge To Host Network Services On SmartNICs appeared first on Packet Pushers.
docker run --rm -it --privileged --network host --pid="host" \
    -v /var/run/docker.sock:/var/run/docker.sock -v /run/netns:/run/netns \
    -v ~/clab:/home/clab -w /home/clab \
    ghcr.io/srl-labs/clab bash
Start Containerlab.
curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/clos5.yml
Download the Containerlab topology file.
sed -i "s/prometheus/topology/g" clos5.yml
Change the sFlow-RT image from sflow/prometheus to sflow/topology in the Containerlab topology. The sflow/topology image packages sFlow-RT with useful applications that combine topology awareness with analytics.
containerlab deploy -t clos5.yml
Deploy the topology.
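Once the deploy finishes, the running nodes can be listed to confirm the lab came up:
containerlab inspect -t clos5.yml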
curl -O https://raw.githubusercontent.com/sflow-rt/containerlab/master/clos5.json
Download the sFlow-RT topology file.
curl -X PUT -H "Content-Type: application/json" -d @clos5.json \
    http://localhost:8008/topology/json
Post the topology to sFlow-RT. Connect to the sFlow-RT Topology application, http://localhost:8008/app/topology/html/. The dashboard confirms that all the links and nodes in the topology are streaming telemetry. There is currently no traffic on the network, so none of the nodes in the topology are sending flow data.
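To double-check that the topology was accepted, it can be read back from the same endpoint (assuming the path also supports GET, as the other sFlow-RT REST APIs do):
curl http://localhost:8008/topology/json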
docker exec -it clab-clos5-h1 iperf3 -c 172.16.4.2
Generate traffic. You should see the Nodes No Flows number drop as the hosts start sending flow data.
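Traffic can also be examined programmatically. As a sketch using the standard sFlow-RT flow API (the flow name pair is arbitrary), define a flow keyed on source and destination address and then list the active flows across all agents:
curl -X PUT -H "Content-Type: application/json" \
    -d '{"keys":"ipsource,ipdestination","value":"bytes"}' \
    http://localhost:8008/flow/pair/json
curl http://localhost:8008/activeflows/ALL/pair/json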
If you’re brand-new to Python and Ansible, you might be a bit reluctant to install a bunch of packages and Ansible collections on your production laptop to start building your automation skills. The usual recommendation I make to get past that hurdle is to create an Ubuntu virtual machine that can be destroyed and recreated every time you mess it up.
Creating a virtual machine is trivial on Linux and on Intel-based Macs (install VirtualBox and Vagrant). The same toolset no longer works on newer Macs with M1 CPUs (VMware Fusion is in tech preview, so we’re getting there), but there’s an amazingly simple alternative: Multipass by Canonical.
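As a minimal sketch (the VM name and sizing are arbitrary, and flag names vary a bit between Multipass releases), a throwaway Ubuntu VM takes three commands to create, enter, and destroy:
multipass launch --name ansible-lab --cpus 2 --disk 10G
multipass shell ansible-lab
multipass delete --purge ansible-lab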