Category Archives for "sFlow"

SONiC

SONiC is part of the Open Compute Project (OCP), creating "an open source network operating system based on Linux that runs on switches from multiple vendors and ASICs." The latest SONiC.201911 release of the open source SONiC network operating system adds sFlow support.
SONiC: sFlow High Level Design
The diagram shows the elements of the implementation.
  1. The open source Host sFlow agent running in the sFlow container monitors the Redis database (in the Database container) for sFlow related configuration changes.
  2. The syncd container monitors the configuration database and pushes hardware settings (packet sampling) to the ASIC using the SAI (Switch Abstraction Interface) driver (see SAI 1.5).
  3. The ASIC driver hands sampled packet headers and associated metadata captured by the ASIC to user space via the Linux PSAMPLE netlink channel (see Linux 4.11 kernel extends packet sampling support).
  4. The Host sFlow agent receives the PSAMPLE messages and forwards them to configured sFlow collector(s) as standard sFlow packet samples.
  5. In addition, the Host sFlow agent streams telemetry (interface counters and host metrics gathered from the Redis database and Linux kernel) to the collector(s) as standard sFlow counter records.
The following CLI commands enable sFlow Continue reading

Real-time DDoS mitigation using BGP RTBH and FlowSpec

DDoS Protect is a recently released open source application running on the sFlow-RT real-time analytics engine. The software uses streaming analytics to rapidly detect and characterize DDoS flood attacks and automatically applies BGP remote triggered black hole (RTBH) and/or FlowSpec controls to mitigate their impact. The total time to detect and mitigate an attack is on the order of a second.
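While DDoS Protect packages detection and mitigation as a turnkey application, the basic detection loop can be sketched against the sFlow-RT REST API described in the RESTflow article. The flow definition, threshold value, and addresses below are illustrative only, and the final step simply prints where DDoS Protect would apply an RTBH or FlowSpec control:
import requests

rt = 'http://10.0.0.70:8008'   # sFlow-RT REST API (illustrative address)

# track UDP packets per destination address
requests.put(rt + '/flow/udp_target/json',
             json={'keys': 'ipdestination', 'value': 'frames',
                   'filter': 'ipprotocol=17'})

# raise an event when any destination exceeds 100,000 packets per second
requests.put(rt + '/threshold/udp_target/json',
             json={'metric': 'udp_target', 'value': 100000, 'byFlow': True})

eventID = -1
while True:
    # long poll for threshold events (field names as shown in the RESTflow examples)
    events = requests.get(rt + '/events/json',
                          params={'eventID': eventID, 'maxEvents': 10,
                                  'timeout': 60}).json()
    if events:
        eventID = events[0]['eventID']
    for evt in reversed(events):
        if evt['thresholdID'] == 'udp_target':
            print('DDoS flood targeting %s, apply RTBH/FlowSpec control'
                  % evt['flowKey'])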

The combination of multi-vendor standard telemetry (sFlow) and control (BGP FlowSpec) provides the real-time visibility and control needed to quickly and automatically adapt the network to address a range of challenging problems, including DDoS, traffic engineering, and security.

Solutions are deployable today: Arista BGP FlowSpec describes the recent addition of BGP FlowSpec support to Arista EOS (EOS has long supported sFlow), and sFlow available on Juniper MX series routers describes the release of sFlow support on Juniper MX routers (which have long had BGP FlowSpec support). This article demonstrates DDoS mitigation using Arista EOS. Similar configurations should work with any router that supports sFlow and BGP FlowSpec.
The diagram shows a typical deployment scenario in which an instance of sFlow-RT (running the DDoS Protect application) receives sFlow from the site router (ce-router). A  Continue reading

SAI 1.5

The Open Compute Project (OCP), "is a rapidly growing community of engineers around the world whose mission is to design and enable the delivery of the most efficient server, storage and data center hardware designs available for scalable computing."

The OCP SAI (Switch Abstraction Interface) Project is an important part of the networking effort, defining "a vendor-independent way of controlling forwarding elements, such as a switching ASIC, an NPU or a software switch in a uniform manner." SAI 1.5 Release Notes describe enhancements to the existing sFlow API, in particular adding support for the Linux psample netlink channel, see Linux 4.11 kernel extends packet sampling support. Supporting the standard Linux interface for packet sampling simplifies the implementation of sFlow agents (e.g. Host sFlow) and ensures consistent behavior across hardware platforms, delivering real-time network-wide visibility using the industry standard sFlow protocol.

Real-time monitoring at terabit speeds

The Flow Trend chart above shows a real-time, up to the second, view of nearly 3 terabits per second of traffic flowing across the SCinet network, described as the fastest, most powerful volunteer-built network in the world. The network is built each year to support The International Conference for High Performance Computing, Networking, Storage, and Analysis. The SC19 conference is currently underway in Denver, Colorado.
The diagram shows the Joint Big Data Testbed generating the traffic in the chart. The Caltech demonstration is described in NRE-19: SC19 Network Research Exhibition: Caltech Booth 543 Demonstrations Hosting NRE-13, NRE-19, NRE-20, NRE-22, NRE-23, NRE-24, NRE-35:
400GE First Data Networks: Caltech, Starlight/NRL, USC, SCinet/XNET, Ciena, Mellanox, Arista, Dell, 2CRSI, Echostreams, DDN and Pavilion Data, as well as other supporting optical, switch and server vendor partners will demonstrate the first fully functional 3 X400GE local ring network as well as 400GE wide area network ring, linking the Starlight and Caltech booths and Starlight in Chicago. This network will integrate storage using NVMe over Fabric, the latest high throughput methods, in-depth monitoring and realtime flow steering. As part of these demonstrations, we will make use of the latest DWDM, Waveserver Ai, and 400GE as Continue reading

SC19 SCinet: Grafana network traffic dashboard

The Grafana sFlow-RT Countries and Networks dashboard above shows traffic on the SCinet network, described as the fastest, most powerful volunteer-built network in the world. The network is built each year to support The International Conference for High Performance Computing, Networking, Storage, and Analysis. The SC19 conference is currently underway in Denver, Colorado, and the screen capture shows live data from the conference network.
The high speed switches and routers used to construct the SCinet network support industry standard sFlow streaming telemetry. In this case an instance of the sFlow-RT analytics engine receives the telemetry stream and generates flow analytics that are scraped every 15 seconds by an instance of the Prometheus time series database. The Prometheus database is in turn queried by an instance of Grafana, which generates the dashboard shown at the top of the page.
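The Grafana panels are built from queries against the Prometheus database. As a rough illustration of the kind of query involved, the sketch below uses Prometheus's HTTP API directly; the Prometheus address is hypothetical, and the ip_src_dst_bps metric with src/dst labels is borrowed from the Flow metrics with Prometheus and Grafana article later in this archive:
import requests

prometheus = 'http://10.0.0.70:9090'   # hypothetical Prometheus server
query = 'topk(10, ip_src_dst_bps)'     # ten largest flows in bits per second

result = requests.get(prometheus + '/api/v1/query',
                      params={'query': query}).json()
for series in result['data']['result']:
    labels = series['metric']
    timestamp, value = series['value']
    print('%s -> %s : %s bps' % (labels.get('src'), labels.get('dst'), value))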
In addition, sFlow-RT is running an embedded application that generates a real-time, up to the second, view of the traffic over the last 5 minutes.
This solution is extremely scalable. A single sFlow-RT instance, allocated only 1G of memory, easily monitors 158 network devices, while supporting 11 different applications (including the real-time dashboard and Prometheus export applications shown above).

Observability in Data Center Networks


Observability in Data Center Networks: In this session, you’ll learn how the sFlow protocol provides broad visibility in modern data center environments as they migrate to highly meshed topologies. Our data center workloads are shifting to take advantage of higher speeds and bandwidth, so visibility to east-west traffic within the data center is becoming more important. Join Peter Phaal—one of the inventors of sFlow—and Joe Reves from SolarWinds product management as they discuss how sFlow differs from other flow instrumentation to deliver visibility in the switching fabric.
THWACKcamp is SolarWinds’ free, annual, worldwide virtual IT learning event connecting thousands of skilled IT professionals with industry experts and SolarWinds technical staff. This video was one of the sessions.

InfluxDB 2.0

Introducing the Next-Generation InfluxDB 2.0 Platform mentions that InfluxDB 2.0 will be able to scrape Prometheus exporters. Get started with InfluxDB provides instructions for running an alpha version of the new software using Docker:
docker run --name influxdb -p 9999:9999 quay.io/influxdb/influxdb:2.0.0-alpha
Prometheus exporter describes an application that runs on the sFlow-RT analytics platform and converts real-time streaming telemetry from industry standard sFlow agents into metrics that can be scraped by Prometheus. Host, Docker, Swarm and Kubernetes monitoring describes how to deploy agents on popular container orchestration platforms.
The screen capture above shows three scrapers configured in InfluxDB 2.0:
  1. sflow-rt-analyzer,
    URL: http://10.0.0.70:8008/prometheus/analyzer/txt
  2. sflow-rt-dump,
    URL: http://10.0.0.70:8008/prometheus/metrics/ALL/ALL/txt
  3. sflow-rt-flow-src-dst,
    URL: http://10.0.0.70:8008/app/prometheus/scripts/export.js/flows/ALL/txt?metric=flow_src_dst_bps&key=ipsource,ipdestination&value=bytes&aggMode=max&maxFlows=100&minValue=1000&scale=8
The first collects metrics about the performance of the sFlow-RT analytics engine, the second collects all the metrics exported by the sFlow agents, and the third collects a flow metric (see Flow metrics with Prometheus and Grafana).

Updated 19 October 2019: native support for Prometheus export has been added to sFlow-RT, and URLs 1 and 2 have been modified to reflect the new API.
InfluxDB 2.0 now includes the data exploration and dashboard building capabilities that were previously in the separate Chronograf application. The screen Continue reading

Flow metrics with Prometheus and Grafana

The Grafana dashboard above shows real-time network traffic flow metrics. This article describes how to define and collect flow metrics using the Prometheus time series database and build Grafana dashboards using those metrics.
Prometheus exporter describes an application that runs on the sFlow-RT analytics platform and converts real-time streaming telemetry from industry standard sFlow agents into metrics that can be scraped by Prometheus. Host, Docker, Swarm and Kubernetes monitoring describes how to deploy agents on popular container orchestration platforms.

The latest version of the Prometheus exporter application adds flow export.
global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  # - "first.rules"
  # - "second.rules"

scrape_configs:
  - job_name: 'sflow-rt-metrics'
    metrics_path: /prometheus/metrics/ALL/ALL/txt
    static_configs:
      - targets: ['10.0.0.70:8008']
  - job_name: 'sflow-rt-src-dst-bps'
    metrics_path: /app/prometheus/scripts/export.js/flows/ALL/txt
    static_configs:
      - targets: ['10.0.0.70:8008']
    params:
      metric: ['ip_src_dst_bps']
      key: ['ipsource','ipdestination']
      label: ['src','dst']
      value: ['bytes']
      scale: ['8']
      minValue: ['1000']
      maxFlows: ['100']
  - job_name: 'sflow-rt-countries-bps'
    metrics_path: /app/prometheus/scripts/export.js/flows/ALL/txt
    static_configs:
      - targets: ['10.0.0.70:8008']
    params:
      metric: ['ip_countries_bps']
      key: ['null:[country:ipsource]:unknown','null:[country:ipdestination]:unknown']
      label: ['src','dst']
      value: ['bytes']
      scale: ['8']
      aggMode: ['sum']
      minValue: ['1000']
      maxFlows: ['100']
The above prometheus.yml file extends the previous example to add two additional scrape jobs, sflow-rt-src-dst-bps and sflow-rt-countries-bps, that return flow metrics. Defining flows describes the attributes and settings available to build Continue reading
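Since Prometheus simply issues HTTP GET requests against these paths, a flow query can be previewed directly before wiring it into a dashboard. The sketch below reproduces the sflow-rt-src-dst-bps job as a request, assuming sFlow-RT is reachable at 10.0.0.70:8008 as in the configuration above:
import requests

url = 'http://10.0.0.70:8008/app/prometheus/scripts/export.js/flows/ALL/txt'
params = {
    'metric': 'ip_src_dst_bps',
    'key': ['ipsource', 'ipdestination'],   # repeated parameters, as Prometheus sends them
    'label': ['src', 'dst'],
    'value': 'bytes',
    'scale': '8',
    'minValue': '1000',
    'maxFlows': '100'
}
# print the Prometheus exposition format returned by the exporter
print(requests.get(url, params=params).text)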

Host, Docker, Swarm and Kubernetes monitoring

The open source Host sFlow agent incorporates technologies that address the challenges of microservice monitoring: leveraging recent enhancements to Berkeley Packet Filter (BPF) in the Linux kernel to randomly sample packets, and Asynchronous Docker metrics to track rapidly changing workloads. The continuous stream of real-time telemetry from all compute nodes, transported using the industry standard sFlow protocol, provides comprehensive real-time cluster-wide visibility into all services and the traffic flowing between them.

The Host sFlow agent is available as pre-packaged rpm/deb files that can be downloaded and installed on each node in a cluster.
sflow {
  collector { ip=10.0.0.70 }
  docker { }
  pcap { dev=docker0 }
  pcap { dev=docker_gwbridge }
}
The above /etc/hsflowd.conf file, see Configuring Host sFlow for Linux via /etc/hsflowd.conf, enables the docker {} and pcap {} modules for detailed visibility into container metrics and network traffic flows, and streams telemetry to an sFlow collector (10.0.0.70). The configuration is the same for every node making it simple to install and configure Host sFlow on all nodes using orchestration software such as Puppet, Chef, Ansible, etc.
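Before installing a full analyzer, a few lines of Python on the collector can confirm that datagrams are actually arriving from the agents (sFlow is sent to UDP port 6343 by default):
import socket

# count sFlow datagrams arriving on the standard UDP port
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('0.0.0.0', 6343))
count = 0
while True:
    data, addr = sock.recvfrom(65535)
    count += 1
    print('%6d datagrams, last from %s (%d bytes)' % (count, addr[0], len(data)))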

The agent is also available as the pre-built sflow/host-sflow image, Continue reading

Packet analysis using Docker

Why use sFlow for packet analysis? To rephrase the Heineken slogan, sFlow reaches the parts of the network that other technologies cannot reach. Industry standard sFlow is widely supported by switch vendors, embedding wire-speed packet monitoring throughout the network. With sFlow, any link or group of links can be remotely monitored. The alternative approach of physically attaching a probe to a SPAN/Mirror port is becoming much less feasible with increasing network sizes (tens of thousands of switch ports) and link speeds (10, 100, and 400 Gigabits per second). Using sFlow for packet capture doesn't replace traditional packet analysis; instead, sFlow extends the capabilities of existing packet capture tools into the high speed switched network.

This article describes the sflow/tcpdump  and sflow/tshark Docker images, which provide a convenient way to analyze packets captured using sFlow.

Run the following command to analyze packets using tcpdump:
$ docker run -p 6343:6343/udp -p 8008:8008 sflow/tcpdump

19:06:42.000000 ARP, Reply 10.0.0.254 is-at c0:ea:e4:89:b0:98 (oui Unknown), length 64
19:06:42.000000 IP 10.0.0.236.548 > 10.0.0.70.61719: Flags [P.], seq 3380015689:3380015713, ack 515038158, win 41992, options [nop,nop,TS val 1720029042 ecr 904769627], length 24
19:06:42.000000 Continue reading

Running sflowtool using Docker

The sflowtool command line utility is used to convert standard sFlow records into a variety of different formats. While there are a large number of native sFlow analysis applications, familiarity with sflowtool is worthwhile since it provides a simple way to verify receipt of sFlow, understand the contents of the sFlow telemetry stream, and build simple applications through custom scripting.

The sflow/sflowtool Docker image provides a simple way to run sflowtool. Run the following command to print the contents of sFlow packets:
$ docker run -p 6343:6343/udp sflow/sflowtool
startDatagram =================================
datagramSourceIP 10.0.0.111
datagramSize 144
unixSecondsUTC 1321922602
datagramVersion 5
agentSubId 0
agent 10.0.0.20
packetSequenceNo 3535127
sysUpTime 270660704
samplesInPacket 1
startSample ----------------------
sampleType_tag 0:2
sampleType COUNTERSSAMPLE
sampleSequenceNo 228282
sourceId 0:14
counterBlock_tag 0:1
ifIndex 14
networkType 6
ifSpeed 100000000
ifDirection 0
ifStatus 3
ifInOctets 4839078
ifInUcastPkts 15205
ifInMulticastPkts 0
ifInBroadcastPkts 4294967295
ifInDiscards 0
ifInErrors 0
ifInUnknownProtos 4294967295
ifOutOctets 149581962744
ifOutUcastPkts 158884229
ifOutMulticastPkts 4294967295
ifOutBroadcastPkts 4294967295
ifOutDiscards 101
ifOutErrors 0
ifPromiscuousMode 0
endSample ----------------------
endDatagram =================================
The -g option flattens the output so that it is more easily filtered using grep:
$ docker run -p 6343:6343/udp sflow/sflowtool -g | grep ifInOctets
2019-09-03T22:37:21+0000 10.0.0.231 Continue reading
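The flattened -g output is also a convenient starting point for the simple custom scripting mentioned earlier. A minimal sketch, assuming sflowtool is installed locally rather than run in a container:
import subprocess

# stream sflowtool's grep-friendly output and pick out interface octet counters
proc = subprocess.Popen(['sflowtool', '-g'], stdout=subprocess.PIPE,
                        universal_newlines=True)
for line in proc.stdout:
    if 'ifInOctets' in line or 'ifOutOctets' in line:
        print(line.strip())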

Forwarding using sFlow-RT

The diagrams show two different configurations for sFlow monitoring:
  1. Without Forwarding Each sFlow agent is configured to stream sFlow telemetry to each of the analysis applications. This configuration is appropriate when a small number of applications is being used to continuously monitor performance. However, the overhead on the network and agents increases as additional analyzers are added. Often it is not possible to increase the number of analyzers since many embedded sFlow agents have limited resources and only support a small number of sFlow streams. In addition, the complexity of configuring each agent to add or remove an analysis application can be significant since agents may reside in Ethernet switches, routers, servers, hypervisors and applications on many different platforms from a variety of vendors.
  2. With Forwarding In this case all the agents are configured to send sFlow to a forwarding module which resends the data to the analysis applications. Analyzers can then be added and removed simply by reconfiguring the forwarder, without any changes required to the agent configurations.
There are many variations between these two extremes. Typically there will be one or two analyzers used for continuous monitoring and additional tools, like Wireshark, might be deployed Continue reading
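The forwarding pattern itself is straightforward. The sketch below is a generic UDP relay, not the sFlow-RT implementation, and the analyzer addresses are hypothetical. Because each sFlow datagram carries the agent's address in its header, analyzers still attribute the data to the original agent even though the packets arrive from the forwarder:
import socket

ANALYZERS = [('10.0.0.50', 6343), ('10.0.0.51', 6343)]   # hypothetical analyzers

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('0.0.0.0', 6343))      # receive sFlow from the agents
out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    datagram, addr = sock.recvfrom(65535)
    for analyzer in ANALYZERS:    # re-send a copy to each analyzer
        out.sendto(datagram, analyzer)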

sFlow-RT 3.0 released

The sFlow-RT 3.0 release has a simplified user interface that focuses on metrics needed to manage the performance of the sFlow-RT analytics software and installed applications.

Applications are available that replace features from the previous 2.3 release. The following instructions show how to install sFlow-RT 3.0 along with basic data exploration applications.

On a system with Java 1.8+ installed:
wget https://inmon.com/products/sFlow-RT/sflow-rt.tar.gz
tar -xvzf sflow-rt.tar.gz
./sflow-rt/get-app.sh sflow-rt flow-trend
./sflow-rt/get-app.sh sflow-rt browse-metrics
./sflow-rt/start.sh
On a system with Docker installed:
mkdir app
docker run -v $PWD/app:/sflow-rt/app --entrypoint /sflow-rt/get-app.sh sflow/sflow-rt sflow-rt flow-trend
docker run -v $PWD/app:/sflow-rt/app --entrypoint /sflow-rt/get-app.sh sflow/sflow-rt sflow-rt browse-metrics
docker run -v $PWD/app:/sflow-rt/app -p 6343:6343/udp -p 8008:8008 sflow/sflow-rt
The product user interface can be accessed on port 8008. The Status page, shown at the top of this article, displays key metrics about the performance of the software.
The Apps tab lists the two applications we installed, browse-metrics and flow-trend, and the green color of the buttons indicates both applications are healthy.

Click on the flow-trend button to open the application and trend traffic flows in real-time. The RESTflow article describes the flow analytics capabilities of sFlow-RT in Continue reading

Arista BGP FlowSpec


The video of a talk by Peter Lundqvist from DKNOG9 describes BGP FlowSpec, use cases, and details of Arista's implementation.

FlowSpec for real-time control and sFlow telemetry for real-time visibility is a powerful combination that can be used to automate DDoS mitigation and traffic engineering. The article, Real-time DDoS mitigation using sFlow and BGP FlowSpec, gives an example using the sFlow-RT analytics software.

EOS 4.22 includes support for BGP FlowSpec. This article uses a virtual machine running vEOS-4.22 to demonstrate how to configure FlowSpec and sFlow so that the switch can be controlled by an sFlow-RT application (such as the DDoS mitigation application referenced earlier).

The following output shows the EOS configuration statements related to sFlow and FlowSpec:
!
service routing protocols model multi-agent
!
sflow sample 16384
sflow polling-interval 30
sflow destination 10.0.0.70
sflow run
!
interface Ethernet1
   flow-spec ipv4 ipv6
!
interface Management1
   ip address 10.0.0.96/24
!
ip routing
!
router bgp 65096
   router-id 10.0.0.96
   neighbor 10.0.0.70 remote-as 65070
   neighbor 10.0.0.70 transport remote-port 1179
   neighbor 10.0.0.70 send-community extended
   neighbor 10.0.0.70 maximum-routes 12000
   !
   address-family flow-spec ipv4
      neighbor 10.0.0.70 Continue reading

Mininet flow analytics with custom scripts

Mininet flow analytics describes how to use the sflow.py helper script that ships with the sFlow-RT analytics engine to enable sFlow telemetry, e.g.
sudo mn --custom sflow-rt/extras/sflow.py --link tc,bw=10 \
--topo tree,depth=2,fanout=2
Mininet, ONOS, and segment routing provides an example using a Custom Topology, e.g.
sudo env ONOS=10.0.0.73 mn --custom sr.py,sflow-rt/extras/sflow.py \
--link tc,bw=10 --topo=sr '--controller=remote,ip=$ONOS,port=6653'
This article describes how to incorporate sFlow monitoring in a fully custom Mininet script. Consider the following simpletest.py script based on Working with Mininet:
#!/usr/bin/python

from mininet.topo import Topo
from mininet.net import Mininet
from mininet.util import dumpNodeConnections
from mininet.log import setLogLevel

class SingleSwitchTopo(Topo):
    "Single switch connected to n hosts."
    def build(self, n=2):
        switch = self.addSwitch('s1')
        # Python's range(N) generates 0..N-1
        for h in range(n):
            host = self.addHost('h%s' % (h + 1))
            self.addLink(host, switch)

def simpleTest():
    "Create and test a simple network"
    topo = SingleSwitchTopo(n=4)
    net = Mininet(topo)
    net.start()
    print "Dumping host connections"
    dumpNodeConnections(net.hosts)
    print "Testing bandwidth between h1 and h4"
    h1, h4 = net.get( 'h1', 'h4' )
    net.iperf( (h1, h4) )
    net.stop()

if __name__ == '__main__':
    # Continue reading
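To add sFlow monitoring to a fully custom script like this, one option is to configure sFlow directly on the Open vSwitch bridge after net.start(). The helper below is only a sketch of that approach (the sflow.py helper script automates agent and bridge discovery); the collector address, sampling rate, and polling interval are illustrative:
import subprocess

def enable_sflow(bridge='s1', collector='10.0.0.70', agent='eth0'):
    "Configure sFlow on an Open vSwitch bridge (illustrative settings)"
    subprocess.call(['ovs-vsctl', '--', '--id=@sflow', 'create', 'sflow',
                     'agent=%s' % agent,
                     'target="%s:6343"' % collector,
                     'sampling=64', 'polling=20',
                     '--', 'set', 'bridge', bridge, 'sflow=@sflow'])
Calling enable_sflow() immediately after net.start() in simpleTest() streams packet samples and interface counters from switch s1 to the collector while the iperf test runs.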

Secure forwarding of sFlow using ssh

Typically sFlow datagrams are sent unencrypted from agents embedded in switches and routers to a local collector/analyzer. Sending sFlow datagrams over the management VLAN or out of band management network generally provides adequate isolation and security within the site. Inter-site traffic within an organization is typically carried over a virtual private network (VPN) which encrypts the data and protects it from eavesdropping.

This article describes a simple method of carrying sFlow datagrams over an encrypted ssh connection which can be useful in situations where a VPN is not available, for example, sending sFlow to an analyzer in the public cloud, or to an external consultant.

The diagram shows the elements of the solution. A collector on the site receives sFlow datagrams from the network devices and uses the sflow_fwd.py script to convert the datagrams into line delimited hexadecimal strings that are sent over an ssh connection to another instance of sflow_fwd.py running on the analyzer that converts the hexadecimal strings back to sFlow datagrams.

The following sflow_fwd.py Python script accomplishes the task:
#!/usr/bin/python

import socket
import sys
import argparse

parser = argparse.ArgumentParser(description='Serialize/deserialize sFlow')
parser.add_argument('-c', '--collector', default='')
parser.add_argument('-s', '--server')
parser.add_argument('-p', '--port', type=int, default=6343)
Continue reading

Prometheus exporter

Prometheus is an open source time series database optimized to collect large numbers of metrics from cloud infrastructure. This article will explore how industry standard sFlow telemetry streaming supported by network devices (Arista, Aruba, Cisco, Dell, Huawei, Juniper, etc.) and Host sFlow agents (Linux, Windows, FreeBSD, AIX, Solaris, Docker, Systemd, Hyper-V, KVM, Nutanix AHV, Xen) can be integrated with Prometheus to extend visibility into the network.

The diagram above shows the elements of the solution: sFlow telemetry streams from hosts and switches to an instance of sFlow-RT. The sFlow-RT analytics software converts the raw measurements into metrics that are accessible through a REST API. The sflow-rt/prometheus application extends the REST API to include native Prometheus exporter functionality, allowing Prometheus to retrieve metrics. Prometheus stores metrics in a time series database that can be queried by Grafana to build dashboards.

Update 19 October 2019: native support for Prometheus export has been added to sFlow-RT, so the Prometheus application is no longer needed to run this example; use the URL /prometheus/metrics/ALL/ALL/txt. The Prometheus application is still needed for exporting traffic flows, see Flow metrics with Prometheus and Grafana.

The Docker sflow/prometheus image provides a simple way to run the application:
docker run --name sflow-rt -p 8008:8008 -p  Continue reading

Loggly

Loggly is a cloud logging and analysis platform. This article will demonstrate how to integrate network events generated by the industry standard sFlow instrumentation built into network switches with the Loggly service.
Loggly offers a free 14 day evaluation, so you can try this example at no cost.
ICMP unreachable describes how monitoring ICMP destination unreachable messages can help identify misconfigured hosts and scanning behavior. The article uses the sFlow-RT real-time analytics software to process the raw sFlow and report on unreachable messages.

The following script, loggly.js, modifies the sFlow-RT script from the article to send events to the Loggly HTTP/S Event Endpoint:
var token = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx';

var url = 'https://logs-01.loggly.com/inputs/'+token+'/tag/http/';

var keys = [
  'icmpunreachablenet',
  'icmpunreachablehost',
  'icmpunreachableprotocol',
  'icmpunreachableport'
];

for (var i = 0; i < keys.length; i++) {
  var key = keys[i];
  setFlow(key, {
    keys:'macsource,ipsource,macdestination,ipdestination,' + key,
    value:'frames',
    log:true,
    flowStart:true
  });
}

setFlowHandler(function(rec) {
  var keys = rec.flowKeys.split(',');
  var msg = {
    flow_type:rec.name,
    src_mac:keys[0],
    src_ip:keys[1],
    dst_mac:keys[2],
    dst_ip:keys[3],
    unreachable:keys[4]
  };

  try { http(url,'post','application/json',JSON.stringify(msg)); }
  catch(e) { logWarning(e); };
}, keys);
Some notes on the script:
  1. Modify the script to use the correct token for your Loggly account.
  2. Including MAC addresses can help identify Continue reading

sFlow to JSON

The latest version of sflowtool can convert sFlow datagrams into JSON, making it easy to write scripts to process the standard sFlow telemetry streaming from devices in the network.

Download and compile the latest version of sflowtool:
git clone https://github.com/sflow/sflowtool.git
cd sflowtool/
./boot.sh
./configure
make
sudo make install
The -J option formats the JSON output to be human readable:
$ sflowtool -J
{
  "datagramSourceIP":"10.0.0.162",
  "datagramSize":"396",
  "unixSecondsUTC":"1544241239",
  "localtime":"2018-12-07T19:53:59-0800",
  "datagramVersion":"5",
  "agentSubId":"0",
  "agent":"10.0.0.231",
  "packetSequenceNo":"1068783",
  "sysUpTime":"1338417874",
  "samplesInPacket":"2",
  "samples":[
    {
      "sampleType_tag":"0:2",
      "sampleType":"COUNTERSSAMPLE",
      "sampleSequenceNo":"148239",
      "sourceId":"0:3",
      "elements":[
        {
          "counterBlock_tag":"0:1",
          "ifIndex":"3",
          "networkType":"6",
          "ifSpeed":"1000000000",
          "ifDirection":"1",
          "ifStatus":"3",
          "ifInOctets":"4162076356",
          "ifInUcastPkts":"16312256",
          "ifInMulticastPkts":"187789",
          "ifInBroadcastPkts":"2566",
          "ifInDiscards":"0",
          "ifInErrors":"0",
          "ifInUnknownProtos":"0",
          "ifOutOctets":"2115351089",
          "ifOutUcastPkts":"7087570",
          "ifOutMulticastPkts":"4453258",
          "ifOutBroadcastPkts":"6141715",
          "ifOutDiscards":"0",
          "ifOutErrors":"0",
          "ifPromiscuousMode":"0"
        },
        {
          "counterBlock_tag":"0:2",
          "dot3StatsAlignmentErrors":"0",
          "dot3StatsFCSErrors":"0",
          "dot3StatsSingleCollisionFrames":"0",
          "dot3StatsMultipleCollisionFrames":"0",
          "dot3StatsSQETestErrors":"0",
          "dot3StatsDeferredTransmissions":"0",
          "dot3StatsLateCollisions":"0",
          "dot3StatsExcessiveCollisions":"0",
          "dot3StatsInternalMacTransmitErrors":"0",
          "dot3StatsCarrierSenseErrors":"0",
          "dot3StatsFrameTooLongs":"0",
          "dot3StatsInternalMacReceiveErrors":"0",
          "dot3StatsSymbolErrors":"0"
        }
      ]
    },
    {
      "sampleType_tag":"0:1",
      "sampleType":"FLOWSAMPLE",
      "sampleSequenceNo":"11791",
      "sourceId":"0:3",
      "meanSkipCount":"2000",
      "samplePool":"34185160",
      "dropEvents":"0",
      "inputPort":"3",
      "outputPort":"10",
      "elements":[
        {
          "flowBlock_tag":"0:1",
          "flowSampleType":"HEADER",
          "headerProtocol":"1",
          "sampledPacketSize":"102",
          "strippedBytes":"0",
          "headerLen":"104",
          "headerBytes":"0C-AE-4E-98-0B-89-05-B6-D8-D9-A2-66-80-00-54-00-00-45-08-12-04-00-04-10-4A-FB-A0-00-00-BC-A0-00-00-EF-80-00-DE-B1-E7-26-00-20-75-04-B0-C5-00-00-00-00-96-01-20-00-00-00-00-00-01-11-21-31-41-51-61-71-81-91-A1-B1-C1-D1-E1-F1-02-12-22-32-42-52-62-72-82-92-A2-B2-C2-D2-E2-F2-03-13-23-33-43-53-63-73-1A-1D-4D-76-00-00",
          "dstMAC":"0cae4e980b89",
          "srcMAC":"05b6d8d9a266",
          "IPSize":"88",
          "ip.tot_len":"84",
          "srcIP":"10.0.0.203",
          "dstIP":"10.0.0.254",
          "IPProtocol":"1",
          "IPTOS":"0",
          "IPTTL":"64",
          "IPID":"8576",
          "ICMPType":"8",
          "ICMPCode":"0"
        },
        {
          "flowBlock_tag":"0:1001",
          "extendedType":"SWITCH",
          "in_vlan":"1",
          "in_priority":"0",
          "out_vlan":"1",
          "out_priority":"0"
        }
      ]
    }
  ]
}
The output shows the JSON representation of a single sFlow datagram containing one counter sample and one flow sample.
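Because each datagram is emitted as a complete JSON object, a short script can consume the stream directly. A rough sketch, assuming sflowtool is on the local PATH as installed above:
import json
import subprocess

# run sflowtool and decode the stream of concatenated JSON datagrams
proc = subprocess.Popen(['sflowtool', '-J'], stdout=subprocess.PIPE,
                        universal_newlines=True)
decoder = json.JSONDecoder()
buf = ''
for line in proc.stdout:
    buf += line
    while True:
        buf = buf.lstrip()
        if not buf:
            break
        try:
            datagram, end = decoder.raw_decode(buf)
        except ValueError:
            break                      # datagram not yet complete
        buf = buf[end:]
        for sample in datagram.get('samples', []):
            if sample.get('sampleType') != 'FLOWSAMPLE':
                continue
            for element in sample.get('elements', []):
                if 'srcIP' in element:
                    print('%s -> %s proto %s' % (element['srcIP'],
                          element['dstIP'], element['IPProtocol']))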

The Continue reading