Tuesday, December 31, 2013

Suricata cocktails (handy one-liners)




Some of my favorite cocktails (one-liners) :)

Suricata cocktails with git master. Tested on Ubuntu and Debian.
You can just copy/paste.

Before you start, make sure you have the packages below installed.

General packages needed:
apt-get -y install libpcre3 libpcre3-dbg libpcre3-dev \
build-essential autoconf automake libtool libpcap-dev libnet1-dev \
libyaml-0-2 libyaml-dev zlib1g zlib1g-dev libcap-ng-dev libcap-ng0 \
make flex bison git git-core subversion libmagic-dev


For MD5 support (file extraction):
apt-get install libnss3-dev libnspr4-dev


For GeoIP:
apt-get install libgeoip1 libgeoip-dev


For the first three (3) cocktails/recipes you would need:
  1. PF_RING as explained HERE 
  2. luajit as explained HERE
and use Suricata's git master - latest dev edition.


Cocktail 1

Suricata - latest dev edition plus enabled:
  1. pf_ring
  2. Luajit scripting
  3. GeoIP
  4. Filemagic/MD5

In case you get the pfring error:
    checking for pfring_open in -lpfring... no

       ERROR! --enable-pfring was passed but the library was not found
       or version is >4, go get it
       from http://www.ntop.org/PF_RING.html

The "LIBS=-lrt" infront of "./configure" below addresses that problem (err message above)


git clone git://phalanx.openinfosecfoundation.org/oisf.git && cd oisf/ && \
git clone https://github.com/ironbee/libhtp.git -b 0.5.x && \
./autogen.sh && LIBS=-lrt ./configure  --enable-pfring --enable-luajit --enable-geoip \
--with-libpfring-includes=/usr/local/pfring/include/ \
--with-libpfring-libraries=/usr/local/pfring/lib/ \
--with-libpcap-includes=/usr/local/pfring/include/ \
--with-libpcap-libraries=/usr/local/pfring/lib/ \
--with-libnss-libraries=/usr/lib \
--with-libnss-includes=/usr/include/nss/ \
--with-libnspr-libraries=/usr/lib \
--with-libnspr-includes=/usr/include/nspr \
--with-libluajit-includes=/usr/local/include/luajit-2.0/ \
--with-libluajit-libraries=/usr/lib/x86_64-linux-gnu/ \
&& sudo make clean && sudo make && sudo make install \
&& sudo ldconfig


Cocktail 2

Suricata - latest dev edition plus enabled:
  1. pf_ring
  2. Luajit scripting
  3. GeoIP
  4. Filemagic/MD5
  5. Debugging

git clone git://phalanx.openinfosecfoundation.org/oisf.git && cd oisf/ && \
git clone https://github.com/ironbee/libhtp.git -b 0.5.x && \
./autogen.sh && CFLAGS="-O0 -ggdb"  \
./configure  \
--enable-pfring --enable-luajit --enable-geoip \
--with-libpfring-includes=/usr/local/pfring/include/ \
--with-libpfring-libraries=/usr/local/pfring/lib/ \
--with-libpcap-includes=/usr/local/pfring/include/ \
--with-libpcap-libraries=/usr/local/pfring/lib/ \
--with-libnss-libraries=/usr/lib \
--with-libnss-includes=/usr/include/nss/ \
--with-libnspr-libraries=/usr/lib \
--with-libnspr-includes=/usr/include/nspr \
--with-libluajit-includes=/usr/local/include/luajit-2.0/ \
--with-libluajit-libraries=/usr/lib/x86_64-linux-gnu/ \
&& sudo make clean && sudo make && sudo make install \
&& sudo ldconfig


Cocktail 3

Suricata - latest dev edition plus enabled:
  1. pf_ring
  2. Luajit scripting
  3. GeoIP
  4. Filemagic/MD5


git clone git://phalanx.openinfosecfoundation.org/oisf.git && cd oisf/ && \
git clone https://github.com/ironbee/libhtp.git -b 0.5.x && \
./autogen.sh && ./configure  \
--enable-pfring --enable-luajit --enable-geoip \
--with-libpfring-includes=/usr/local/pfring/include/ \
--with-libpfring-libraries=/usr/local/pfring/lib/ \
--with-libpcap-includes=/usr/local/pfring/include/ \
--with-libpcap-libraries=/usr/local/pfring/lib/ \
--with-libnss-libraries=/usr/lib \
--with-libnss-includes=/usr/include/nss/ \
--with-libnspr-libraries=/usr/lib \
--with-libnspr-includes=/usr/include/nspr \
--with-libluajit-includes=/usr/local/include/luajit-2.0/ \
--with-libluajit-libraries=/usr/lib/x86_64-linux-gnu/ \
&& sudo make clean && sudo make && sudo make install \
&& sudo ldconfig

Cocktail 4

Suricata - latest dev edition plus enabled:
  1. GeoIP
  2. Filemagic/MD5

git clone git://phalanx.openinfosecfoundation.org/oisf.git && cd oisf/ && \
git clone https://github.com/ironbee/libhtp.git -b 0.5.x && \
./autogen.sh && ./configure --enable-geoip \
--with-libnss-libraries=/usr/lib \
--with-libnss-includes=/usr/include/nss/ \
--with-libnspr-libraries=/usr/lib \
--with-libnspr-includes=/usr/include/nspr \
&& sudo make clean && sudo make && sudo make install \
&& sudo ldconfig

Cocktail 5

Suricata - latest dev edition plus enabled:
  1. GeoIP

git clone git://phalanx.openinfosecfoundation.org/oisf.git && cd oisf/ && \
git clone https://github.com/ironbee/libhtp.git -b 0.5.x && \
./autogen.sh && ./configure --enable-geoip \
&& sudo make clean && sudo make && sudo make install \
&& sudo ldconfig






Cocktail 6

Suricata - latest dev edition plus enabled:
  1. Filemagic/MD5

git clone git://phalanx.openinfosecfoundation.org/oisf.git && cd oisf/ && \
git clone https://github.com/ironbee/libhtp.git -b 0.5.x && \
./autogen.sh && ./configure \
--with-libnss-libraries=/usr/lib \
--with-libnss-includes=/usr/include/nss/ \
--with-libnspr-libraries=/usr/lib \
--with-libnspr-includes=/usr/include/nspr \
&& sudo make clean && sudo make && sudo make install \
&& sudo ldconfig



Cocktail 7

Suricata - latest dev edition - default

git clone git://phalanx.openinfosecfoundation.org/oisf.git && cd oisf/ && \
git clone https://github.com/ironbee/libhtp.git -b 0.5.x && \
./autogen.sh && ./configure \
&& sudo make clean \
&& sudo make \
&& sudo make install \
&& sudo ldconfig



Issue "suricata --build-info" to verify the features after compiling and installing.
You can twist the recipes any way you want, depending on your library locations and the features you want enabled in Suricata.
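For example, a quick way to filter the build-info output for the relevant features (a sketch; the exact feature labels can vary slightly between Suricata versions):

suricata --build-info
suricata --build-info | grep -i -E "pf_ring|geoip|luajit|magic|nss"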




Monday, December 30, 2013

Suricata - setting up flows





So looking at the suricata.log file (after starting suricata):


root@suricata:/var/data/log/suricata# more suricata.log                                                       
 [1372] 17/12/2013 -- 17:47:35 - (suricata.c:962) <Notice> (SCPrintVersion) -- This is Suricata version 2.0dev (rev e7f6107)
[1372] 17/12/2013 -- 17:47:35 - (util-cpu.c:170) <Info> (UtilCpuPrintSummary) -- CPUs/cores online: 16
[1372] 17/12/2013 -- 17:47:35 - (app-layer-dns-udp.c:315) <Info> (DNSUDPConfigure) -- DNS request flood protection level: 500
[1372] 17/12/2013 -- 17:47:35 - (defrag-hash.c:212) <Info> (DefragInitConfig) -- allocated 3670016 bytes of memory for the defrag hash... 65536 buckets of size 56
[1372] 17/12/2013 -- 17:47:35 - (defrag-hash.c:237) <Info> (DefragInitConfig) -- preallocated 65535 defrag trackers of size 152
[1372] 17/12/2013 -- 17:47:35 - (defrag-hash.c:244) <Info> (DefragInitConfig) -- defrag memory usage: 13631336 bytes, maximum: 536870912
[1372] 17/12/2013 -- 17:47:35 - (tmqh-flow.c:76) <Info> (TmqhFlowRegister) -- AutoFP mode using default "Active Packets" flow load balancer
[1373] 17/12/2013 -- 17:47:35 - (tmqh-packetpool.c:142) <Info> (PacketPoolInit) -- preallocated 65534 packets. Total memory 229106864
[1373] 17/12/2013 -- 17:47:35 - (host.c:205) <Info> (HostInitConfig) -- allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64
[1373] 17/12/2013 -- 17:47:35 - (host.c:228) <Info> (HostInitConfig) -- preallocated 1000 hosts of size 112
[1373] 17/12/2013 -- 17:47:35 - (host.c:230) <Info> (HostInitConfig) -- host memory usage: 390144 bytes, maximum: 16777216

[1373] 17/12/2013 -- 17:47:36 - (flow.c:386) <Info> (FlowInitConfig) -- allocated 1006632960 bytes of memory for the flow hash... 15728640 buckets of size 64
[1373] 17/12/2013 -- 17:47:37 - (flow.c:410) <Info> (FlowInitConfig) -- preallocated 8000000 flows of size 280

[1373] 17/12/2013 -- 17:47:37 - (flow.c:412) <Info> (FlowInitConfig) -- flow memory usage: 3310632960 bytes, maximum: 6442450944
[1373] 17/12/2013 -- 17:47:37 - (reputation.c:459) <Info> (SRepInit) -- IP reputation disabled
[1373] 17/12/2013 -- 17:47:37 - (util-magic.c:62) <Info> (MagicInit) -- using magic-file /usr/share/file/magic
[1373] 17/12/2013 -- 17:47:37 - (suricata.c:1769) <Info> (SetupDelayedDetect) -- Delayed detect disabled



We see:

[1373] 17/12/2013 -- 17:47:36 - (flow.c:386) <Info> (FlowInitConfig) -- allocated 1006632960 bytes of memory for the flow hash... 15728640 buckets of size 64
[1373] 17/12/2013 -- 17:47:37 - (flow.c:410) <Info> (FlowInitConfig) -- preallocated 8000000 flows of size 280

-> This is approximately 3GB of RAM
How did we get to this number? Well, I have custom defined it in suricata.yaml under the flow section:
  hash-size: 15728640
  prealloc: 8000000

So we need to sum up ->
15728640 x 64 ("15728640 buckets of size 64" = 1006632960 bytes)
+
8000000 x 280 ("preallocated 8000000 flows of size 280" = 2240000000 bytes)
=
a total of 3246632960 bytes, which is about 3096.23 MB

(15728640 x 64) + (8000000 x 280) =  3246632960 bytes
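The same arithmetic can be done quickly in the shell (a minimal sketch, using the hash-size and prealloc values from above):

# bytes = (hash-size x 64) + (prealloc x 280)
echo $(( (15728640 * 64) + (8000000 * 280) ))                    # 3246632960 bytes
echo "scale=2; ((15728640*64)+(8000000*280))/1024/1024" | bc     # ~3096.23 MB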


That gives us the minimum our flow memcap value in suricata.yaml needs to cover.
So this would work ->

flow:
  memcap: 4gb
  hash-size: 15728640
  prealloc: 8000000
  emergency-recovery: 30


This would work as well ->

flow:
  memcap: 3500mb
  hash-size: 15728640
  prealloc: 8000000
  emergency-recovery: 30


That's it :)

Saturday, December 21, 2013

Playing with segfaults, core dumps and such.

How to determine when (time-wise) you had a segfault with/from any tool, software or program:

dmesg | gawk -v uptime=$( grep btime /proc/stat | cut -d ' ' -f 2 ) '/^[[ 0-9.]*]/ { print strftime("[%Y/%m/%d %H:%M:%S]", substr($0,2,index($0,".")-2)+uptime) substr($0,index($0,"]")+1) }' |grep segf

Like so:
root@suricata:/root# dmesg | gawk -v uptime=$( grep btime /proc/stat | cut -d ' ' -f 2 ) '/^[[ 0-9.]*]/ { print strftime("[%Y/%m/%d %H:%M:%S]", substr($0,2,index($0,".")-2)+uptime) substr($0,index($0,"]")+1) }' |grep segf
[2013/12/19 02:21:49] AFPacketeth38[8874]: segfault at 17c ip 00007f1831f919a0 sp 00007f181c85b5d0 error 4 in libhtp-0.5.8.so.1.0.0[7f1831f83000+1d000]
root@suricata:/root#


The command above gives you the exact time when Suricata segfaulted -
[2013/12/19 02:21:49] AFPacketeth38[8874]: segfault...


You could also speed things up :)
[forcing Suricata (or many other software products) to core dump/crash is ...part of my job description :) ]
If this is what you want -

1) Start Suricata.
2) Kill it with an abort signal.
sudo kill -n ABRT `pidof suricata`


Suricata will now abort and dump core. Then issue the following command:
gdb /usr/bin/suricata /var/data/peter/crashes/suricata/core

/usr/bin/suricata - the location of the suricata binary (if not sure, issue the command "which suricata")
/var/data/peter/crashes/suricata/core - the location/name of the core file


The location of the core dump file could be specified in suricata.yaml:
# Daemon working directory
# Suricata will change directory to this one if provided
# Default: "/"
daemon-directory: "/var/data/peter/crashes/suricata"


Once in gdb:
thread apply all bt


NOTE: To be able to get any useful info out of the core dump file, you should compile Suricata with debug CFLAGS, like so:
CFLAGS="-O0 -ggdb"  ./configure

instead of just
./configure
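One more thing worth checking before forcing a crash: if the shell's core file size limit is 0, no core file will be written regardless of the signal. A quick check and session-wide fix (a sketch):

ulimit -c             # show the current core file size limit
ulimit -c unlimited   # allow full core dumps for this shell session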



Suri 2.0beta2 very informative - when you need it

With the release of Suricata 2.0beta2, one can notice a few of the many changes right away.

 root@LTS-64-1:~# suricata -c /etc/suricata/suricata.yaml -i eth0 -v
19/12/2013 -- 08:57:48 - <Notice> - This is Suricata version 2.0beta2 RELEASE
19/12/2013 -- 08:57:48 - <Info> - CPUs/cores online: 2
19/12/2013 -- 08:57:48 - <Info> - 'default' server has 'request-body-minimal-inspect-size' set to 33882 and 'request-body-inspect-window' set to 4053 after randomization.
19/12/2013 -- 08:57:48 - <Info> - 'default' server has 'response-body-minimal-inspect-size' set to 33695 and 'response-body-inspect-window' set to 4218 after randomization.
19/12/2013 -- 08:57:48 - <Info> - 'apache' server has 'request-body-minimal-inspect-size' set to 34116 and 'request-body-inspect-window' set to 3973 after randomization.
19/12/2013 -- 08:57:48 - <Info> - 'apache' server has 'response-body-minimal-inspect-size' set to 32229 and 'response-body-inspect-window' set to 4205 after randomization.
19/12/2013 -- 08:57:48 - <Info> - 'iis7' server has 'request-body-minimal-inspect-size' set to 32040 and 'request-body-inspect-window' set to 4118 after randomization.
19/12/2013 -- 08:57:48 - <Info> - 'iis7' server has 'response-body-minimal-inspect-size' set to 32694 and 'response-body-inspect-window' set to 4148 after randomization.
19/12/2013 -- 08:57:48 - <Info> - DNS request flood protection level: 500
19/12/2013 -- 08:57:48 - <Info> - Found an MTU of 1500 for 'eth0'
19/12/2013 -- 08:57:48 - <Info> - allocated 3670016 bytes of memory for the defrag hash... 65536 buckets of size 56
19/12/2013 -- 08:57:48 - <Info> - preallocated 65535 defrag trackers of size 152
19/12/2013 -- 08:57:48 - <Info> - defrag memory usage: 13631336 bytes, maximum: 33554432
19/12/2013 -- 08:57:48 - <Info> - AutoFP mode using default "Active Packets" flow load balancer
19/12/2013 -- 08:57:48 - <Info> - preallocated 1024 packets. Total memory 3567616
19/12/2013 -- 08:57:48 - <Info> - allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64
19/12/2013 -- 08:57:48 - <Info> - preallocated 1000 hosts of size 112
19/12/2013 -- 08:57:48 - <Info> - host memory usage: 390144 bytes, maximum: 16777216
19/12/2013 -- 08:57:48 - <Info> - allocated 4194304 bytes of memory for the flow hash... 65536 buckets of size 64
19/12/2013 -- 08:57:48 - <Info> - preallocated 10000 flows of size 280
19/12/2013 -- 08:57:48 - <Info> - flow memory usage: 7074304 bytes, maximum: 134217728
19/12/2013 -- 08:57:48 - <Info> - IP reputation disabled
19/12/2013 -- 08:57:48 - <Info> - using magic-file /usr/share/file/magic
19/12/2013 -- 08:57:48 - <Info> - Delayed detect disabled
19/12/2013 -- 08:57:53 - <Info> - 48 rule files processed. 14045 rules successfully loaded, 0 rules failed
19/12/2013 -- 08:57:53 - <Info> - 14053 signatures processed. 1136 are IP-only rules, 4310 are inspecting packet payload, 10513 inspect application layer, 72 are decoder event only
19/12/2013 -- 08:57:53 - <Info> - building signature grouping structure, stage 1: preprocessing rules... complete
19/12/2013 -- 08:57:54 - <Info> - building signature grouping structure, stage 2: building source address list... complete
19/12/2013 -- 08:58:00 - <Info> - building signature grouping structure, stage 3: building destination address lists... complete
19/12/2013 -- 08:58:03 - <Info> - Threshold config parsed: 0 rule(s) found
19/12/2013 -- 08:58:03 - <Info> - Core dump size set to unlimited.
19/12/2013 -- 08:58:03 - <Info> - fast output device (regular) initialized: fast.log
19/12/2013 -- 08:58:03 - <Info> - http-log output device (regular) initialized: http.log
19/12/2013 -- 08:58:03 - <Info> - dns-log output device (regular) initialized: dns.log
19/12/2013 -- 08:58:03 - <Info> - file-log output device (regular) initialized: files-json.log
19/12/2013 -- 08:58:03 - <Info> - forcing magic lookup for logged files
19/12/2013 -- 08:58:03 - <Info> - forcing md5 calculation for logged files
19/12/2013 -- 08:58:03 - <Info> - Using 1 live device(s).
19/12/2013 -- 08:58:03 - <Info> - using interface eth0
19/12/2013 -- 08:58:03 - <Info> - Running in 'auto' checksum mode. Detection of interface state will require 1000 packets.
19/12/2013 -- 08:58:03 - <Info> - Found an MTU of 1500 for 'eth0'
19/12/2013 -- 08:58:03 - <Info> - Set snaplen to 1516 for 'eth0'
19/12/2013 -- 08:58:03 - <Info> - Generic Receive Offload is set on eth0
19/12/2013 -- 08:58:03 - <Info> - Large Receive Offload is unset on eth0
19/12/2013 -- 08:58:03 - <Warning> - [ERRCODE: SC_ERR_PCAP_CREATE(21)] - Using Pcap capture with GRO or LRO activated can lead to capture problems.
19/12/2013 -- 08:58:03 - <Info> - RunModeIdsPcapAutoFp initialised
19/12/2013 -- 08:58:03 - <Info> - stream "prealloc-sessions": 2048 (per thread)
19/12/2013 -- 08:58:03 - <Info> - stream "memcap": 536870912
19/12/2013 -- 08:58:03 - <Info> - stream "midstream" session pickups: disabled
19/12/2013 -- 08:58:03 - <Info> - stream "async-oneside": disabled
19/12/2013 -- 08:58:03 - <Info> - stream "checksum-validation": disabled
19/12/2013 -- 08:58:03 - <Info> - stream."inline": disabled
19/12/2013 -- 08:58:03 - <Info> - stream "max-synack-queued": 5
19/12/2013 -- 08:58:03 - <Info> - stream.reassembly "memcap": 1073741824
19/12/2013 -- 08:58:03 - <Info> - stream.reassembly "depth": 8388608
19/12/2013 -- 08:58:03 - <Info> - stream.reassembly "toserver-chunk-size": 2447
19/12/2013 -- 08:58:03 - <Info> - stream.reassembly "toclient-chunk-size": 2489
19/12/2013 -- 08:58:03 - <Info> - stream.reassembly.raw: enabled
19/12/2013 -- 08:58:03 - <Notice> - all 4 packet processing threads, 3 management threads initialized, engine started.
19/12/2013 -- 08:58:32 - <Info> - No packets with invalid checksum, assuming checksum offloading is NOT used




Note 1)
19/12/2013 -- 08:58:03 - <Info> - Generic Receive Offload is set on eth0
19/12/2013 -- 08:58:03 - <Info> - Large Receive Offload is unset on eth0
19/12/2013 -- 08:58:03 - <Warning> - [ERRCODE: SC_ERR_PCAP_CREATE(21)] - Using Pcap capture with GRO or LRO activated can lead to capture problems.

Note 2) ...after some packets
19/12/2013 -- 08:58:32 - <Info> - No packets with invalid checksum, assuming checksum offloading is NOT used

So for Note 1), if we check our interface using ethtool (if you do not have it,
apt-get install ethtool on Ubuntu/Debian-like systems):

root@LTS-64-1:~# ethtool -k eth0
Offload parameters for eth0:
rx-checksumming: off
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: off
root@LTS-64-1:~#

We see that:
generic-receive-offload: on
large-receive-offload: off

exactly as Suricata reports.
(do not forget to run it with the -v option:
suricata -c /etc/suricata/suricata.yaml -i eth0 -v)
Do not forget - all offloading and checksumming features should be OFF (disabled) on the network interface, so that Suricata correctly processes all the traffic!
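A sketch of how that is typically done with ethtool (the interface name and the exact set of supported flags depend on your NIC/driver; eth0 matches the example above):

ethtool -K eth0 gro off lro off tso off gso off sg off rx off tx off
ethtool -k eth0   # verify the new settings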
 

Saturday, December 14, 2013

Suricata - per host/network fragmentation timeouts


Suricata can apply IP fragmentation timeout values on a configurable per-network/host basis in suricata.yaml.

Through some light but time-consuming research, the fragment timeout appears to differ between the various OSs. It does not matter whether the system is 32- or 64-bit, but it does matter whether it handles IPv4 or IPv6 addresses. On most Linux/Unix systems you can find the value under /proc/sys/net/ipv4/ipfrag_time (or using sysctl -a) - the amount of time a fragment is kept in memory before it is discarded.

All (default) values are in seconds ->
IPv4:
Suse - 20
CentOS - 30
Ubuntu - 30
Debian - 30
Fedora - 30
Windows (all) - hard coded, can not be changed – 60

IPv6:
Suse - 60
CentOS - 60
Ubuntu - 60
Debian - 60
Fedora - 60
Windows (all) – hardcoded, can not be changed – 60

There are other IP fragmentation values that differ between OSs as well. Some hosts may not be running the default value due to network/OS/application-specific tuning or other reasons.

However, for those hosts and networks where you are sure you know what the timeouts are (in seconds), you can set the defrag timeout values in the corresponding suricata.yaml section accordingly. That way Suricata will inspect the IP fragments with the same timeouts as the receiving hosts.
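To confirm the current values on a Linux host you manage, the fragment timeouts can be read directly (a quick sketch; these are the standard proc/sysctl locations mentioned above):

cat /proc/sys/net/ipv4/ipfrag_time
cat /proc/sys/net/ipv6/ip6frag_time
# or
sysctl net.ipv4.ipfrag_time net.ipv6.ip6frag_time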


Set up defrag timeouts on a per-network/host basis:
# Enable defrag per host settings
  host-config:

    - dmz:
        timeout: 30
        address: [192.168.1.0/24, 127.0.0.0/8, 1.1.1.0/24, 2.2.2.0/24, "1.1.1.1", "2.2.2.2", "::1"]

    - lan:
        timeout: 45
        address:
          - 192.168.0.0/24
          - 192.168.10.0/24
          - 172.16.14.0/24





Sunday, December 8, 2013

Suricata (and the grand slam of) Open Source IDPS - Chapter III - AF_PACKET

Introduction


NOTE: An updated article is available here.

This is Chapter III - AF_PACKET of a series of articles about high performance and advanced tuning of Suricata IDPS.

This article consists of a series of instructions on setting up and configuring Suricata IDPS with AF_PACKET for monitoring a 10Gbps traffic interface.



Chapter III - AF_PACKET

AF_PACKET works "out of the box" with Suricata. Please make sure your kernel is at least version 3.2 in order to get the best results.
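A quick way to check the running kernel version and that AF_PACKET support is compiled in (a sketch; the kernel config path below is the usual Debian/Ubuntu location):

uname -r
grep CONFIG_PACKET= /boot/config-$(uname -r)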

Once you have followed all the steps in Chapter I - Preparation, the only thing left to do is adjust the suricata.yaml settings.


AF_PACKET - suricata.yaml tune up and configuration




NOTE:
AF_PACKET - Which kernel version not to use with Suricata in AF_PACKET mode
(thanks to Regit)


We make sure we use runmode workers (feel free to try other modes and experiment with what is best for your specific setup):
#runmode: autofp
runmode: workers


Adjust the packet size:
# Preallocated size for packet. Default is 1514 which is the classical
# size for pcap on ethernet. You should adjust this value to the highest
# packet size (MTU + hardware header) on your system.
default-packet-size: 1520
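To confirm the MTU of the capture interface before settling on default-packet-size (a quick sketch; eth3 is the interface used later in this guide):

ip link show eth3 | grep -o "mtu [0-9]*"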


Use a custom profile in detect-engine with a lot more groups (high gives you about 15 groups per variable, but you can customize as needed depending on the network ranges you monitor):
detect-engine:
  - profile: high
  - custom-values:
      toclient-src-groups: 200
      toclient-dst-groups: 200
      toclient-sp-groups: 200
      toclient-dp-groups: 300
      toserver-src-groups: 200
      toserver-dst-groups: 400
      toserver-sp-groups: 200
      toserver-dp-groups: 250
  - sgh-mpm-context: full
  - inspection-recursion-limit: 3000


Adjust your defrag settings:
# Defrag settings:
defrag:
  memcap: 512mb
  hash-size: 65536
  trackers: 65535 # number of defragmented flows to follow
  max-frags: 65535 # number of fragments to keep
  prealloc: yes
  timeout: 30



Adjust your flow settings:
flow:
  memcap: 1gb
  hash-size: 1048576
  prealloc: 1048576
  emergency-recovery: 30


Adjust your per protocol timeout values:
flow-timeouts:

  default:
    new: 3
    established: 30
    closed: 0
    emergency-new: 10
    emergency-established: 10
    emergency-closed: 0
  tcp:
    new: 6
    established: 100
    closed: 12
    emergency-new: 1
    emergency-established: 5
    emergency-closed: 2
  udp:
    new: 3
    established: 30
    emergency-new: 3
    emergency-established: 10
  icmp:
    new: 3
    established: 30
    emergency-new: 1
    emergency-established: 10



Adjust your stream engine settings:
stream:
  memcap: 16gb
  checksum-validation: no      # reject wrong csums
  prealloc-sessions: 500000     # per thread
  midstream: true
  async-oneside: true
  inline: no                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 20gb
    depth: 12mb                  # reassemble 12mb into a stream
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes
    #randomize-chunk-range: 10


Make sure you enable suricata.log for troubleshooting if something goes wrong:
  outputs:
  - console:
      enabled: yes
  - file:
      enabled: yes
      filename: /var/log/suricata/suricata.log



The AF_PACKET section:
af-packet:
  - interface: eth3
    # Number of receive threads (>1 will enable experimental flow pinned
    # runmode)
    threads: 16
    # Default clusterid.  AF_PACKET will load balance packets based on flow.
    # All threads/processes that will participate need to have the same
    # clusterid.
    cluster-id: 98
    # Default AF_PACKET cluster type. AF_PACKET can load balance per flow or per hash.
    # This is only supported for Linux kernel > 3.1
    # possible value are:
    #  * cluster_round_robin: round robin load balancing
    #  * cluster_flow: all packets of a given flow are send to the same socket
    #  * cluster_cpu: all packets treated in kernel by a CPU are send to the same socket
    cluster-type: cluster_cpu
    # In some fragmentation case, the hash can not be computed. If "defrag" is set
    # to yes, the kernel will do the needed defragmentation before sending the packets.
    defrag: no
    # To use the ring feature of AF_PACKET, set 'use-mmap' to yes
    use-mmap: yes
    # Ring size will be computed with respect to max_pending_packets and number
    # of threads. You can set manually the ring size in number of packets by setting
    # the following value. If you are using flow cluster-type and have really network
    # intensive single-flow you could want to set the ring-size independantly of the number
    # of threads:
    ring-size: 200000
    # On busy system, this could help to set it to yes to recover from a packet drop
    # phase. This will result in some packets (at max a ring flush) being non treated.
    #use-emergency-flush: yes
    # recv buffer size, increase value could improve performance
    # buffer-size: 100000
    # Set to yes to disable promiscuous mode
    # disable-promisc: no
    # Choose checksum verification mode for the interface. At the moment
    # of the capture, some packets may be with an invalid checksum due to
    # offloading to the network card of the checksum computation.
    # Possible values are:
    #  - kernel: use indication sent by kernel for each packet (default)
    #  - yes: checksum validation is forced
    #  - no: checksum validation is disabled
    #  - auto: suricata uses a statistical approach to detect when
    #  checksum off-loading is used.
    # Warning: 'checksum-validation' must be set to yes to have any validation
    checksum-checks: kernel
    # BPF filter to apply to this interface. The pcap filter syntax apply here.
    #bpf-filter: port 80 or udp
   



We had these rules enabled:
rule-files:
   - trojan.rules
   - md5.rules # 134 000 specially selected file md5s
   - dns.rules
   - malware.rules
   - local.rules
   - current_events.rules
   - mobile_malware.rules
   - user_agents.rules



Make sure you adjust your Network and Port variables:
  # Holds the address group vars that would be passed in a Signature.
  # These would be retrieved during the Signature address parsing stage.
  address-groups:

    HOME_NET: "[ HOME NET HERE ]"

    EXTERNAL_NET: "!$HOME_NET"

    HTTP_SERVERS: "$HOME_NET"

    SMTP_SERVERS: "$HOME_NET"

    SQL_SERVERS: "$HOME_NET"

    DNS_SERVERS: "$HOME_NET"

    TELNET_SERVERS: "$HOME_NET"

    AIM_SERVERS: "$EXTERNAL_NET"

    DNP3_SERVER: "$HOME_NET"

    DNP3_CLIENT: "$HOME_NET"

    MODBUS_CLIENT: "$HOME_NET"

    MODBUS_SERVER: "$HOME_NET"

    ENIP_CLIENT: "$HOME_NET"

    ENIP_SERVER: "$HOME_NET"

  # Holds the port group vars that would be passed in a Signature.
  # These would be retrieved during the Signature port parsing stage.
  port-groups:

    HTTP_PORTS: "80"

    SHELLCODE_PORTS: "!80"

    ORACLE_PORTS: 1521

    SSH_PORTS: 22

    DNP3_PORTS: 20000


Your app parsers:
# Holds details on the app-layer. The protocols section details each protocol.
# Under each protocol, the default value for detection-enabled and "
# parsed-enabled is yes, unless specified otherwise.
# Each protocol covers enabling/disabling parsers for all ipprotos
# the app-layer protocol runs on.  For example "dcerpc" refers to the tcp
# version of the protocol as well as the udp version of the protocol.
# The option "enabled" takes 3 values - "yes", "no", "detection-only".
# "yes" enables both detection and the parser, "no" disables both, and
# "detection-only" enables detection only(parser disabled).
app-layer:
  protocols:
    tls:
      enabled: yes
      detection-ports:
        tcp:
          toserver: 443

      #no-reassemble: yes
    dcerpc:
      enabled: yes
    ftp:
      enabled: yes
    ssh:
      enabled: yes
    smtp:
      enabled: yes
    imap:
      enabled: detection-only
    msn:
      enabled: detection-only
    smb:
      enabled: yes
      detection-ports:
        tcp:
          toserver: 139
    # smb2 detection is disabled internally inside the engine.
    #smb2:
    #  enabled: yes
    dnstcp:
       enabled: yes
       detection-ports:
         tcp:
           toserver: 53
    dnsudp:
       enabled: yes
       detection-ports:
         udp:
           toserver: 53
    http:
      enabled: yes


Libhtp body limits:
      libhtp:

         default-config:
           personality: IDS

           # Can be specified in kb, mb, gb.  Just a number indicates
           # it's in bytes.
           request-body-limit: 12mb
           response-body-limit: 12mb

           # inspection limits
           request-body-minimal-inspect-size: 32kb
           request-body-inspect-window: 4kb
           response-body-minimal-inspect-size: 32kb
           response-body-inspect-window: 4kb



Run it

 /usr/local/bin/suricata -c /etc/suricata/suricata.yaml --af-packet=eth3 -D -v
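To confirm the daemonized process came up cleanly (a sketch; the log path matches the outputs section above):

pidof suricata
tail -n 50 /var/log/suricata/suricata.log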



Results


We take a look at the suricata.log file:
[13915] 4/12/2013 -- 15:38:15 - (suricata.c:962) <Notice> (SCPrintVersion) -- This is Suricata version 2.0dev (rev e7f6107)
[13915] 4/12/2013 -- 15:38:15 - (util-cpu.c:170) <Info> (UtilCpuPrintSummary) -- CPUs/cores online: 16
[13915] 4/12/2013 -- 15:38:15 - (app-layer-dns-udp.c:315) <Info> (DNSUDPConfigure) -- DNS request flood protection level: 500
[13915] 4/12/2013 -- 15:38:15 - (util-ioctl.c:99) <Info> (GetIfaceMTU) -- Found an MTU of 1500 for 'eth3'
[13915] 4/12/2013 -- 15:38:15 - (defrag-hash.c:212) <Info> (DefragInitConfig) -- allocated 3670016 bytes of memory for the defrag hash... 65536 buckets of size 56
[13915] 4/12/2013 -- 15:38:15 - (defrag-hash.c:237) <Info> (DefragInitConfig) -- preallocated 65535 defrag trackers of size 152
[13915] 4/12/2013 -- 15:38:15 - (defrag-hash.c:244) <Info> (DefragInitConfig) -- defrag memory usage: 13631336 bytes, maximum: 536870912
[13915] 4/12/2013 -- 15:38:15 - (tmqh-flow.c:76) <Info> (TmqhFlowRegister) -- AutoFP mode using default "Active Packets" flow load balancer
[13916] 4/12/2013 -- 15:38:15 - (tmqh-packetpool.c:142) <Info> (PacketPoolInit) -- preallocated 2048 packets. Total memory 7151616
[13916] 4/12/2013 -- 15:38:15 - (host.c:205) <Info> (HostInitConfig) -- allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64
[13916] 4/12/2013 -- 15:38:15 - (host.c:228) <Info> (HostInitConfig) -- preallocated 1000 hosts of size 112
[13916] 4/12/2013 -- 15:38:15 - (host.c:230) <Info> (HostInitConfig) -- host memory usage: 390144 bytes, maximum: 16777216
[13916] 4/12/2013 -- 15:38:15 - (flow.c:386) <Info> (FlowInitConfig) -- allocated 67108864 bytes of memory for the flow hash... 1048576 buckets of size 64
[13916] 4/12/2013 -- 15:38:15 - (flow.c:410) <Info> (FlowInitConfig) -- preallocated 1048576 flows of size 280
[13916] 4/12/2013 -- 15:38:15 - (flow.c:412) <Info> (FlowInitConfig) -- flow memory usage: 369098752 bytes, maximum: 1073741824
[13916] 4/12/2013 -- 15:38:15 - (reputation.c:459) <Info> (SRepInit) -- IP reputation disabled
[13916] 4/12/2013 -- 15:38:15 - (util-magic.c:62) <Info> (MagicInit) -- using magic-file /usr/share/file/magic
[13916] 4/12/2013 -- 15:38:15 - (suricata.c:1769) <Info> (SetupDelayedDetect) -- Delayed detect disabled
[13916] 4/12/2013 -- 15:38:17 - (detect-filemd5.c:275) <Info> (DetectFileMd5Parse) -- MD5 hash size 2143616 bytes


...8 rule files, 7947 rules loaded
[13916] 4/12/2013 -- 15:38:17 - (detect.c:453) <Info> (SigLoadSignatures) -- 8 rule files processed. 7947 rules successfully loaded, 0 rules failed
[13916] 4/12/2013 -- 15:38:17 - (detect.c:2568) <Info> (SigAddressPrepareStage1) -- 7947 signatures processed. 1 are IP-only rules, 1976 are inspecting packet payload, 6714 inspect application laye
r, 0 are decoder event only
[13916] 4/12/2013 -- 15:38:17 - (detect.c:2571) <Info> (SigAddressPrepareStage1) -- building signature grouping structure, stage 1: preprocessing rules... complete
[13916] 4/12/2013 -- 15:38:17 - (detect.c:3194) <Info> (SigAddressPrepareStage2) -- building signature grouping structure, stage 2: building source address list... complete
[13916] 4/12/2013 -- 15:39:51 - (detect.c:3836) <Info> (SigAddressPrepareStage3) -- building signature grouping structure, stage 3: building destination address lists... complete
[13916] 4/12/2013 -- 15:39:51 - (util-threshold-config.c:1186) <Info> (SCThresholdConfParseFile) -- Threshold config parsed: 0 rule(s) found
[13916] 4/12/2013 -- 15:39:51 - (util-coredump-config.c:122) <Info> (CoredumpLoadConfig) -- Core dump size set to unlimited.
[13916] 4/12/2013 -- 15:39:51 - (util-logopenfile.c:168) <Info> (SCConfLogOpenGeneric) -- fast output device (regular) initialized: fast.log
[13916] 4/12/2013 -- 15:39:51 - (util-logopenfile.c:168) <Info> (SCConfLogOpenGeneric) -- http-log output device (regular) initialized: http.log
[13916] 4/12/2013 -- 15:39:51 - (util-logopenfile.c:168) <Info> (SCConfLogOpenGeneric) -- tls-log output device (regular) initialized: tls.log
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "management-cpu-set"
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'low'
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "receive-cpu-set"
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "decode-cpu-set"
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "stream-cpu-set"
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "detect-cpu-set"
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'high'
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "verdict-cpu-set"
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'high'
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "reject-cpu-set"
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'low'
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "output-cpu-set"
[13916] 4/12/2013 -- 15:39:51 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'medium'
[13916] 4/12/2013 -- 15:39:51 - (runmode-af-packet.c:200) <Info> (ParseAFPConfig) -- Enabling mmaped capture on iface eth3
[13916] 4/12/2013 -- 15:39:51 - (runmode-af-packet.c:268) <Info> (ParseAFPConfig) -- Using cpu cluster mode for AF_PACKET (iface eth3)
[13916] 4/12/2013 -- 15:39:51 - (util-runmodes.c:545) <Info>


...going to use 16 threads:
(RunModeSetLiveCaptureWorkersForDevice) -- Going to use 16 thread(s)
[13918] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 0
[13918] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth31" Module to cpu/core 0, thread id 13918
[13918] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13918] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13919] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 1
[13919] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth32" Module to cpu/core 1, thread id 13919
[13919] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13919] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13920] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 2
[13920] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth33" Module to cpu/core 2, thread id 13920
[13920] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13920] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13921] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 3
[13921] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth34" Module to cpu/core 3, thread id 13921
[13921] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13921] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13922] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 4
[13922] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth35" Module to cpu/core 4, thread id 13922
[13922] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13922] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13923] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 5
[13923] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth36" Module to cpu/core 5, thread id 13923
[13923] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13923] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13924] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 6
[13924] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth37" Module to cpu/core 6, thread id 13924
[13924] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13924] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13925] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 7
[13925] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth38" Module to cpu/core 7, thread id 13925
[13925] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13925] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13926] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 8
[13926] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth39" Module to cpu/core 8, thread id 13926
[13926] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13926] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13927] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 9
[13927] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth310" Module to cpu/core 9, thread id 13927
[13927] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13927] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13928] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 10
[13928] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth311" Module to cpu/core 10, thread id 13928
[13928] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13928] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13929] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 11
[13929] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth312" Module to cpu/core 11, thread id 13929
[13929] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13929] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13930] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 12
[13930] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth313" Module to cpu/core 12, thread id 13930
[13930] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13930] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13931] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 13
[13931] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth314" Module to cpu/core 13, thread id 13931
[13931] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13931] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13932] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 14
[13932] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth315" Module to cpu/core 14, thread id 13932
[13932] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13932] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[13933] 4/12/2013 -- 15:39:51 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 15
[13933] 4/12/2013 -- 15:39:51 - (tm-threads.c:1332) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth316" Module to cpu/core 15, thread id 13933
[13933] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1554) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[13933] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1564) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call


...reading in some memory settings from yaml:
[13916] 4/12/2013 -- 15:39:51 - (runmode-af-packet.c:529) <Info> (RunModeIdsAFPWorkers) -- RunModeIdsAFPWorkers initialised
[13934] 4/12/2013 -- 15:39:51 - (tm-threads.c:1338) <Info> (TmThreadSetupOptions) -- Setting prio 2 for "FlowManagerThread" thread , thread id 13934
[13916] 4/12/2013 -- 15:39:51 - (stream-tcp.c:376) <Info> (StreamTcpInitConfig) -- stream "prealloc-sessions": 375000 (per thread)
[13916] 4/12/2013 -- 15:39:51 - (stream-tcp.c:392) <Info> (StreamTcpInitConfig) -- stream "memcap": 17179869184
[13916] 4/12/2013 -- 15:39:51 - (stream-tcp.c:398) <Info> (StreamTcpInitConfig) -- stream "midstream" session pickups: enabled
[13916] 4/12/2013 -- 15:39:51 - (stream-tcp.c:404) <Info> (StreamTcpInitConfig) -- stream "async-oneside": disabled
[13916] 4/12/2013 -- 15:39:51 - (stream-tcp.c:421) <Info> (StreamTcpInitConfig) -- stream "checksum-validation": disabled
[13916] 4/12/2013 -- 15:39:51 - (stream-tcp.c:443) <Info> (StreamTcpInitConfig) -- stream."inline": disabled
[13916] 4/12/2013 -- 15:39:51 - (stream-tcp.c:456) <Info> (StreamTcpInitConfig) -- stream "max-synack-queued": 5
[13916] 4/12/2013 -- 15:39:51 - (stream-tcp.c:474) <Info> (StreamTcpInitConfig) -- stream.reassembly "memcap": 21474836480
[13916] 4/12/2013 -- 15:39:51 - (stream-tcp.c:492) <Info> (StreamTcpInitConfig) -- stream.reassembly "depth": 12582912
[13916] 4/12/2013 -- 15:39:51 - (stream-tcp.c:575) <Info> (StreamTcpInitConfig) -- stream.reassembly "toserver-chunk-size": 2671
[13916] 4/12/2013 -- 15:39:51 - (stream-tcp.c:577) <Info> (StreamTcpInitConfig) -- stream.reassembly "toclient-chunk-size": 2582
[13935] 4/12/2013 -- 15:39:51 - (tm-threads.c:1338) <Info> (TmThreadSetupOptions) -- Setting prio 2 for "SCPerfWakeupThread" thread , thread id 13935
[13936] 4/12/2013 -- 15:39:51 - (tm-threads.c:1338) <Info> (TmThreadSetupOptions) -- Setting prio 2 for "SCPerfMgmtThread" thread , thread id 13936
[13916] 4/12/2013 -- 15:39:51 - (tm-threads.c:2191) <Notice> (TmThreadWaitOnThreadInit) -- all 16 packet processing threads, 3 management threads initialized, engine started.


....have a look - Suricata detects if offloading (discussed in Chapter I - Preparation) is used on the network interface:
[13918] 4/12/2013 -- 15:39:51 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13918] 4/12/2013 -- 15:39:51 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13918] 4/12/2013 -- 15:39:51 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13918] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 8
[13918] 4/12/2013 -- 15:39:52 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth31 using socket 8
[13919] 4/12/2013 -- 15:39:52 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13919] 4/12/2013 -- 15:39:52 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13919] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13919] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 9
[13919] 4/12/2013 -- 15:39:52 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth32 using socket 9
[13920] 4/12/2013 -- 15:39:52 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13920] 4/12/2013 -- 15:39:52 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13920] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13920] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 10
[13920] 4/12/2013 -- 15:39:52 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth33 using socket 10
[13921] 4/12/2013 -- 15:39:52 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13921] 4/12/2013 -- 15:39:52 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13921] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13921] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 11
[13921] 4/12/2013 -- 15:39:52 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth34 using socket 11
[13922] 4/12/2013 -- 15:39:52 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13922] 4/12/2013 -- 15:39:52 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13922] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13922] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 12
[13922] 4/12/2013 -- 15:39:52 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth35 using socket 12
[13923] 4/12/2013 -- 15:39:52 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13923] 4/12/2013 -- 15:39:52 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13923] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13923] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 13
[13923] 4/12/2013 -- 15:39:52 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth36 using socket 13
[13924] 4/12/2013 -- 15:39:52 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13924] 4/12/2013 -- 15:39:52 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13924] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13924] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 14
[13924] 4/12/2013 -- 15:39:52 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth37 using socket 14
[13925] 4/12/2013 -- 15:39:52 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13925] 4/12/2013 -- 15:39:52 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13925] 4/12/2013 -- 15:39:52 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13925] 4/12/2013 -- 15:39:53 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 15
[13925] 4/12/2013 -- 15:39:53 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth38 using socket 15
[13926] 4/12/2013 -- 15:39:53 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13926] 4/12/2013 -- 15:39:53 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13926] 4/12/2013 -- 15:39:53 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13926] 4/12/2013 -- 15:39:53 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 16
[13926] 4/12/2013 -- 15:39:53 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth39 using socket 16
[13927] 4/12/2013 -- 15:39:53 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13927] 4/12/2013 -- 15:39:53 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13927] 4/12/2013 -- 15:39:53 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13927] 4/12/2013 -- 15:39:53 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 17
[13927] 4/12/2013 -- 15:39:53 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth310 using socket 17
[13928] 4/12/2013 -- 15:39:53 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13928] 4/12/2013 -- 15:39:53 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13928] 4/12/2013 -- 15:39:53 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13928] 4/12/2013 -- 15:39:53 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 18
[13928] 4/12/2013 -- 15:39:53 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth311 using socket 18
[13929] 4/12/2013 -- 15:39:53 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13929] 4/12/2013 -- 15:39:53 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13929] 4/12/2013 -- 15:39:53 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13929] 4/12/2013 -- 15:39:53 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 19
[13929] 4/12/2013 -- 15:39:53 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth312 using socket 19
[13930] 4/12/2013 -- 15:39:53 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13930] 4/12/2013 -- 15:39:53 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13930] 4/12/2013 -- 15:39:53 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13930] 4/12/2013 -- 15:39:53 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 20
[13930] 4/12/2013 -- 15:39:53 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth313 using socket 20
[13931] 4/12/2013 -- 15:39:53 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13931] 4/12/2013 -- 15:39:53 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13931] 4/12/2013 -- 15:39:53 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13931] 4/12/2013 -- 15:39:54 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 21
[13931] 4/12/2013 -- 15:39:54 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth314 using socket 21
[13932] 4/12/2013 -- 15:39:54 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13932] 4/12/2013 -- 15:39:54 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13932] 4/12/2013 -- 15:39:54 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13932] 4/12/2013 -- 15:39:54 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 22
[13932] 4/12/2013 -- 15:39:54 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth315 using socket 22
[13933] 4/12/2013 -- 15:39:54 - (util-ioctl.c:175) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[13933] 4/12/2013 -- 15:39:54 - (util-ioctl.c:194) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[13933] 4/12/2013 -- 15:39:54 - (source-af-packet.c:1189) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=10001 frame_size=1584 frame_nr=200020
[13933] 4/12/2013 -- 15:39:54 - (source-af-packet.c:1380) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 23
[13933] 4/12/2013 -- 15:39:54 - (source-af-packet.c:439) <Info> (AFPPeersListReachedInc) -- All AFP capture threads are running.
[13933] 4/12/2013 -- 15:39:54 - (source-af-packet.c:988) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth316 using socket 23



htop - now that we have been up and running for a while (6-7 hrs) on a 10Gbps link (9.3 Gbps traffic - to be precise - at the moment of these statistics):




we have about 1-2% drops in total (on 7947 rules):





and then after 13 hrs:





we still have 1-2% drops
(1.897% to be precise: 1 337 487 757 kernel drops out of 70 491 114 835 total packets is 1.897%):
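
The arithmetic, if you want to reproduce it (a one-off awk calculation using the two totals above):

awk 'BEGIN {printf "%.3f%%\n", 1337487757 * 100 / 70491114835}'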




And that is just half the job of Suricata's high-performance tuning. Before you arrive at this point there is much more work to be done - pre-study, hardware choice, rule selection and tuning, traffic analysis, office/organization needs analysis, network location design and deployment, testing/PoCs and more...

Next - Chapter IV - Logstash / Kibana / Elasticsearch





Wednesday, December 4, 2013

Suricata (and the grand slam of) Open Source IDPS - Chapter II - PF_RING / DNA , Part Two - DNA

Introduction


NOTE: An updated article is available here.

This is Chapter II (Part Two - DNA) of a series of articles about high performance and advanced tuning of Suricata IDPS

This chapter consists of two parts covering the setup and configuration of PF_RING and PF_RING DNA for monitoring a 10Gbps interface.
This is Part Two - DNA, describing the setup and tuning of PF_RING™ DNA (Direct NIC Access)

Many thanks to Luca Deri and Alfredo Cardigliano from ntop for providing the license and support that made this article/guide for a 10Gbps deployment scenario possible.


Part Two - DNA

NOTE: PF_RING is open source and free, but for DNA you need a license. However, the DNA license is free for non-profit organizations and educational institutions (universities, colleges etc.).
In general PF_RING DNA is much faster than standard PF_RING.

If you do not have PF_RING installed on your system, you should follow all of the
Part One - PF_RING
guide except the section "Run it". After that, come back and continue from here onwards.

If you already have PF_RING installed, you should follow this article to get PF_RING DNA set up and installed.

NOTE: Know your network card. This setup uses an Intel 82599EB 10-Gigabit SFI/SFP+

NOTE: When one application is using a DNA interface, no other application can use that same interface. For example, if you have Suricata running as in this guide and you want to run "./pfcount", you will not be able to, since the DNA interface is already in use. For cases where you would like multiple applications to use the same DNA interface, you should consider Libzero.

Compile

Once you have acquired your DNA license (instructions on how to do so are included with the license), cd to the src directory of your latest pfring pull:

cd /home/pevman/pfring-svn-latest/drivers/DNA/ixgbe-3.18.7-DNA/src
make
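
If the build goes through you should now have a DNA-enabled ixgbe.ko sitting in the current directory. A quick sanity check (just a sketch; paths as used in this tutorial):

ls -l ./ixgbe.ko
modinfo ./ixgbe.ko | grep -i version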



Configure

Elevate as root. Edit the script load_dna_driver.sh found in the directory below
(/pfring-svn-latest/drivers/DNA/ixgbe-3.18.7-DNA/src/load_dna_driver.sh).
Make the following changes in load_dna_driver.sh (we use only one DNA interface):
# Configure here the network interfaces to activate
IF[0]=dna0
#IF[1]=dna1
#IF[2]=dna2
#IF[3]=dna3



Leave rmmod like so (default):
# Remove old modules (if loaded)
rmmod ixgbe
rmmod pf_ring


Leave only two insmod lines uncommented:
# We assume that you have compiled PF_RING
insmod ../../../../kernel/pf_ring.ko


Adjust the queues, use your own MAC address, increase the buffers, up the laser on the SFP:
# As many queues as the number of processors
#insmod ./ixgbe.ko RSS=0,0,0,0
insmod ./ixgbe.ko RSS=0 mtu=1522 adapters_to_enable=00:e0:ed:19:e3:e1 num_rx_slots=32768 FdirPballoc=3

Above, we have 16 CPUs and we want 16 queues, so we enable only the adapter with this MAC address, bump up the rx slots, and comment out all the other insmod lines (besides the two shown above for pf_ring.ko and ixgbe.ko).

In the case above, RSS=0 enables 16 queues (because we have 16 CPUs) for the first port of the 10Gbps Intel network card.
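
If you want to double-check that RSS=0 will really translate into one queue per CPU, you can count the CPUs now and, once the driver is loaded (next step), look at the interrupt lines for the interface - a small sketch (the exact interrupt names depend on the driver build):

grep -c ^processor /proc/cpuinfo    # should print 16 on this box
grep dna0 /proc/interrupts          # expect one RX queue interrupt per CPU once the driver is loaded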


+++++ CORNER CASE +++++
(the bonus round!! - with the help of Alfredo Cardigliano from ntop)

Question:
What should you do if you have this scenario: a 32-core system with a 10Gbps network card and DNA. The card has 4 ports, each port getting 1, 2, 6 and 1 Gbps of traffic, respectively.

You would like to get 4, 8, 16 and 4 queues - dedicated CPUs - per port, as written below. In other words:
Gbps of traffic (port 0,1,2,3)      ->  1,2,6,1
Number of CPUs/queues dedicated     ->  4,8,16,4

Answer:
Simple -> You should use
insmod ./ixgbe.ko RSS=4,8,16,4 ....

instead of:
insmod ./ixgbe.ko RSS=0 ....

+++++ END of the CORNER CASE +++++


Execute load_dna_driver.sh from the same directory it resides in
(for this tutorial - /home/pevman/pfring-svn-latest/drivers/DNA/ixgbe-3.18.7-DNA/src):
./load_dna_driver.sh
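
After the script has run, you can verify that the modules are in place and that the dna0 interface showed up (a quick check; the dmesg output will vary with your driver build):

lsmod | grep -E "pf_ring|ixgbe"
ifconfig dna0
dmesg | tail -20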

Make sure offloading is disabled (substitute the correct interface name below):
ethtool -K dna0 tso off
ethtool -K dna0 gro off
ethtool -K dna0 lro off
ethtool -K dna0 gso off
ethtool -K dna0 rx off
ethtool -K dna0 tx off
ethtool -K dna0 sg off
ethtool -K dna0 rxvlan off
ethtool -K dna0 txvlan off
ethtool -N dna0 rx-flow-hash udp4 sdfn
ethtool -N dna0 rx-flow-hash udp6 sdfn
ethtool -n dna0 rx-flow-hash udp6
ethtool -n dna0 rx-flow-hash udp4
ethtool -C dna0 rx-usecs 1000
ethtool -C dna0 adaptive-rx off
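
You can confirm that the offloading features are really off with ethtool's lowercase -k switch (substitute your interface name as above); the features we just disabled should now read "off":

ethtool -k dna0 | grep -E "offload|segmentation|scatter-gather|checksum"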



Configuration in suricata.yaml

In suricata.yaml, make sure your pfring section looks like this:

# PF_RING configuration. for use with native PF_RING support
# for more info see http://www.ntop.org/PF_RING.html  #dna0@0
pfring:
  - interface: dna0@0
    # Number of receive threads (>1 will enable experimental flow pinned
    # runmode)
    #threads: 1

    # Default clusterid.  PF_RING will load balance packets based on flow.
    # All threads/processes that will participate need to have the same
    # clusterid.
    #cluster-id: 1

    # Default PF_RING cluster type. PF_RING can load balance per flow or per hash.
    # This is only supported in versions of PF_RING > 4.1.1.
    cluster-type: cluster_flow
    # bpf filter for this interface
    #bpf-filter: tcp
    # Choose checksum verification mode for the interface. At the moment
    # of the capture, some packets may be with an invalid checksum due to
    # offloading to the network card of the checksum computation.
    # Possible values are:
    #  - rxonly: only compute checksum for packets received by network card.
    #  - yes: checksum validation is forced
    #  - no: checksum validation is disabled
    #  - auto: suricata uses a statistical approach to detect when
    #  checksum off-loading is used. (default)
    # Warning: 'checksum-validation' must be set to yes to have any validation
    #checksum-checks: auto
  # Second interface
  - interface: dna0@1
    threads: 1
  - interface: dna0@2
    threads: 1
  - interface: dna0@3
    threads: 1
  - interface: dna0@4
    threads: 1
  - interface: dna0@5
    threads: 1
  - interface: dna0@6
    threads: 1
  - interface: dna0@7
    threads: 1
  - interface: dna0@8
    threads: 1
  - interface: dna0@9
    threads: 1
  - interface: dna0@10
    threads: 1
  - interface: dna0@11
    threads: 1
  - interface: dna0@12
    threads: 1
  - interface: dna0@13
    threads: 1
  - interface: dna0@14
    threads: 1
  - interface: dna0@15
    threads: 1
  # Put default values here
  #- interface: default
    #threads: 2


Rules enabled in suricata.yaml:

default-rule-path: /etc/suricata/et-config/
rule-files:
 - trojan.rules
 - dns.rules
 - malware.rules
 - local.rules
 - jonkman.rules
 - worm.rules
 - current_events.rules
 - mobile_malware.rules
 - user_agents.rules



The rest of the suricata.yaml configuration - Suricata-specific settings such as timeouts, memory settings, fragmentation and reassembly limits and so on - you can take from Part One - PF_RING.


Notice the DNA driver loaded:
 lshw -c Network
  *-network:1
       description: Ethernet interface
       product: 82599EB 10-Gigabit SFI/SFP+ Network Connection
       vendor: Intel Corporation
       physical id: 0.1
       bus info: pci@0000:04:00.1
       logical name: dna0
       version: 01
       serial: 00:e0:ed:19:e3:e1
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi msix pciexpress vpd bus_master cap_list ethernet physical fibre
       configuration: autonegotiation=off broadcast=yes driver=ixgbe driverversion=3.18.7-DNA duplex=full firmware=0x800000cb latency=0 link=yes multicast=yes port=fibre promiscuous=yes
       resources: irq:37 memory:fbc00000-fbc1ffff ioport:e000(size=32) memory:fbc40000-fbc43fff memory:fa700000-fa7fffff memory:fa600000-fa6fffff



Start Suricata with DNA

(make sure you adjust your directories in the command below)
suricata --pfring -c /etc/suricata/peter-yaml/suricata-pfring-dna.yaml -v -D
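
Since Suricata was started as a daemon (-D), the quickest way to see whether it came up cleanly is to follow its log and confirm the process is running:

tail -f /var/log/suricata/suricata.log
ps aux | grep [s]uricata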


Some stats from suricata.log:

root@suricata:/home/pevman/pfring-svn-latest/userland/examples# more /var/log/suricata/suricata.log
[32055] 27/11/2013 -- 13:31:38 - (suricata.c:932) <Notice> (SCPrintVersion) -- This is Suricata version 2.0dev (rev 77b09fc)
[32055] 27/11/2013 -- 13:31:38 - (util-cpu.c:170) <Info> (UtilCpuPrintSummary) -- CPUs/cores online: 16
[32055] 27/11/2013 -- 13:31:38 - (app-layer-dns-udp.c:315) <Info> (DNSUDPConfigure) -- DNS request flood protection level: 500
[32055] 27/11/2013 -- 13:31:38 - (defrag-hash.c:209) <Info> (DefragInitConfig) -- allocated 3670016 bytes of memory for the defrag hash... 65536 buckets of size 56
[32055] 27/11/2013 -- 13:31:38 - (defrag-hash.c:234) <Info> (DefragInitConfig) -- preallocated 65535 defrag trackers of size 152
[32055] 27/11/2013 -- 13:31:38 - (defrag-hash.c:241) <Info> (DefragInitConfig) -- defrag memory usage: 13631336 bytes, maximum: 536870912
[32055] 27/11/2013 -- 13:31:38 - (tmqh-flow.c:76) <Info> (TmqhFlowRegister) -- AutoFP mode using default "Active Packets" flow load balancer
[32056] 27/11/2013 -- 13:31:38 - (tmqh-packetpool.c:141) <Info> (PacketPoolInit) -- preallocated 65534 packets. Total memory 288873872
[32056] 27/11/2013 -- 13:31:38 - (host.c:205) <Info> (HostInitConfig) -- allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64
[32056] 27/11/2013 -- 13:31:38 - (host.c:228) <Info> (HostInitConfig) -- preallocated 1000 hosts of size 112
[32056] 27/11/2013 -- 13:31:38 - (host.c:230) <Info> (HostInitConfig) -- host memory usage: 390144 bytes, maximum: 16777216
[32056] 27/11/2013 -- 13:31:38 - (flow.c:386) <Info> (FlowInitConfig) -- allocated 67108864 bytes of memory for the flow hash... 1048576 buckets of size 64
[32056] 27/11/2013 -- 13:31:38 - (flow.c:410) <Info> (FlowInitConfig) -- preallocated 1048576 flows of size 376
[32056] 27/11/2013 -- 13:31:38 - (flow.c:412) <Info> (FlowInitConfig) -- flow memory usage: 469762048 bytes, maximum: 1073741824
[32056] 27/11/2013 -- 13:31:38 - (reputation.c:459) <Info> (SRepInit) -- IP reputation disabled
[32056] 27/11/2013 -- 13:31:38 - (util-magic.c:62) <Info> (MagicInit) -- using magic-file /usr/share/file/magic
[32056] 27/11/2013 -- 13:31:38 - (suricata.c:1725) <Info> (SetupDelayedDetect) -- Delayed detect disabled

.....rules loaded - 8010:

[32056] 27/11/2013 -- 13:31:40 - (detect.c:453) <Info> (SigLoadSignatures) -- 9 rule files processed. 8010 rules successfully loaded, 0 rules failed
[32056] 27/11/2013 -- 13:31:40 - (detect.c:2589) <Info> (SigAddressPrepareStage1) -- 8017 signatures processed. 1 are IP-only rules, 2147 are inspecting packet payload, 6625 inspect application layer, 0 are decoder event only
[32056] 27/11/2013 -- 13:31:40 - (detect.c:2592) <Info> (SigAddressPrepareStage1) -- building signature grouping structure, stage 1: adding signatures to signature source addresses... complete
[32056] 27/11/2013 -- 13:31:40 - (detect.c:3218) <Info> (SigAddressPrepareStage2) -- building signature grouping structure, stage 2: building source address list... complete
[32056] 27/11/2013 -- 13:35:28 - (detect.c:3860) <Info> (SigAddressPrepareStage3) -- building signature grouping structure, stage 3: building destination address lists... complete
[32056] 27/11/2013 -- 13:35:28 - (util-threshold-config.c:1186) <Info> (SCThresholdConfParseFile) -- Threshold config parsed: 0 rule(s) found
[32056] 27/11/2013 -- 13:35:28 - (util-coredump-config.c:122) <Info> (CoredumpLoadConfig) -- Core dump size set to unlimited.
[32056] 27/11/2013 -- 13:35:28 - (util-logopenfile.c:168) <Info> (SCConfLogOpenGeneric) -- fast output device (regular) initialized: fast.log
[32056] 27/11/2013 -- 13:35:28 - (util-logopenfile.c:168) <Info> (SCConfLogOpenGeneric) -- http-log output device (regular) initialized: http.log
[32056] 27/11/2013 -- 13:35:28 - (util-logopenfile.c:168) <Info> (SCConfLogOpenGeneric) -- tls-log output device (regular) initialized: tls.log
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@0 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@1 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@2 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@3 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@4 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@5 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@6 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@7 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@8 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@9 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@10 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@11 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@12 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@13 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@14 from config file
[32056] 27/11/2013 -- 13:35:28 - (util-device.c:147) <Info> (LiveBuildDeviceList) -- Adding interface dna0@15 from config file
........
......
[32056] 27/11/2013 -- 13:35:28 - (runmode-pfring.c:555) <Info> (RunModeIdsPfringWorkers) -- RunModeIdsPfringWorkers initialised
[32056] 27/11/2013 -- 13:35:28 - (stream-tcp.c:374) <Info> (StreamTcpInitConfig) -- stream "prealloc-sessions": 2048 (per thread)
[32056] 27/11/2013 -- 13:35:28 - (stream-tcp.c:390) <Info> (StreamTcpInitConfig) -- stream "memcap": 17179869184
[32056] 27/11/2013 -- 13:35:28 - (stream-tcp.c:396) <Info> (StreamTcpInitConfig) -- stream "midstream" session pickups: enabled
[32056] 27/11/2013 -- 13:35:28 - (stream-tcp.c:402) <Info> (StreamTcpInitConfig) -- stream "async-oneside": disabled
[32056] 27/11/2013 -- 13:35:28 - (stream-tcp.c:419) <Info> (StreamTcpInitConfig) -- stream "checksum-validation": disabled
[32056] 27/11/2013 -- 13:35:28 - (stream-tcp.c:441) <Info> (StreamTcpInitConfig) -- stream."inline": disabled
[32056] 27/11/2013 -- 13:35:28 - (stream-tcp.c:454) <Info> (StreamTcpInitConfig) -- stream "max-synack-queued": 5
[32056] 27/11/2013 -- 13:35:28 - (stream-tcp.c:472) <Info> (StreamTcpInitConfig) -- stream.reassembly "memcap": 25769803776
[32056] 27/11/2013 -- 13:35:28 - (stream-tcp.c:490) <Info> (StreamTcpInitConfig) -- stream.reassembly "depth": 12582912
[32056] 27/11/2013 -- 13:35:28 - (stream-tcp.c:573) <Info> (StreamTcpInitConfig) -- stream.reassembly "toserver-chunk-size": 2509
[32056] 27/11/2013 -- 13:35:28 - (stream-tcp.c:575) <Info> (StreamTcpInitConfig) -- stream.reassembly "toclient-chunk-size": 2459
[32056] 27/11/2013 -- 13:35:28 - (tm-threads.c:2191) <Notice> (TmThreadWaitOnThreadInit) -- all 16 packet processing threads, 3 management threads initialized, engine started.




Results: after 45 min of running (and counting) on 10Gbps with 8010 rules (impressive) ->
root@suricata:/var/log/suricata# grep  kernel /var/log/suricata/stats.log | tail -32
capture.kernel_packets    | RxPFRdna0@01              | 467567844
capture.kernel_drops      | RxPFRdna0@01              | 0
capture.kernel_packets    | RxPFRdna0@11              | 440973548
capture.kernel_drops      | RxPFRdna0@11              | 0
capture.kernel_packets    | RxPFRdna0@21              | 435088258
capture.kernel_drops      | RxPFRdna0@21              | 0
capture.kernel_packets    | RxPFRdna0@31              | 453131090
capture.kernel_drops      | RxPFRdna0@31              | 0
capture.kernel_packets    | RxPFRdna0@41              | 469334903
capture.kernel_drops      | RxPFRdna0@41              | 0
capture.kernel_packets    | RxPFRdna0@51              | 430412652
capture.kernel_drops      | RxPFRdna0@51              | 0
capture.kernel_packets    | RxPFRdna0@61              | 438056484
capture.kernel_drops      | RxPFRdna0@61              | 0
capture.kernel_packets    | RxPFRdna0@71              | 428234219
capture.kernel_drops      | RxPFRdna0@71              | 0
capture.kernel_packets    | RxPFRdna0@81              | 452883734
capture.kernel_drops      | RxPFRdna0@81              | 0
capture.kernel_packets    | RxPFRdna0@91              | 469565553
capture.kernel_drops      | RxPFRdna0@91              | 0
capture.kernel_packets    | RxPFRdna0@101             | 442010263
capture.kernel_drops      | RxPFRdna0@101             | 0
capture.kernel_packets    | RxPFRdna0@111             | 451989862
capture.kernel_drops      | RxPFRdna0@111             | 0
capture.kernel_packets    | RxPFRdna0@121             | 452650397
capture.kernel_drops      | RxPFRdna0@121             | 0
capture.kernel_packets    | RxPFRdna0@131             | 464907229
capture.kernel_drops      | RxPFRdna0@131             | 0
capture.kernel_packets    | RxPFRdna0@141             | 443403243
capture.kernel_drops      | RxPFRdna0@141             | 0
capture.kernel_packets    | RxPFRdna0@151             | 432499371
capture.kernel_drops      | RxPFRdna0@151             | 0
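
To get an aggregate out of those 32 counter lines, you can pipe the same grep through awk - a small sketch, assuming 16 capture threads (two counter lines each, hence tail -32) and the counter names shown above:

grep kernel /var/log/suricata/stats.log | tail -32 | awk '
  /capture\.kernel_packets/ {pkts  += $NF}
  /capture\.kernel_drops/   {drops += $NF}
  END {printf "total packets: %d  total drops: %d  (%.3f%% dropped)\n", pkts, drops, pkts ? drops*100/pkts : 0}'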

Some htop stats




In the examples directory of your PF_RING sources (/pfring-svn-latest/userland/examples) you have some tools you can use to look at packet stats and such - for example:

root@suricata:/home/pevman/pfring-svn-latest/userland/examples# ./pfcount_multichannel -i dna0
Capturing from dna0
Found 16 channels
Using PF_RING v.5.6.2
=========================
Absolute Stats: [channel=0][181016 pkts rcvd][0 pkts dropped]
Total Pkts=181016/Dropped=0.0 %
181016 pkts - 151335257 bytes [181011.8 pkt/sec - 1210.65 Mbit/sec]
=========================
Absolute Stats: [channel=1][179532 pkts rcvd][0 pkts dropped]
Total Pkts=179532/Dropped=0.0 %
179532 pkts - 145662057 bytes [179527.9 pkt/sec - 1165.27 Mbit/sec]
=========================
Absolute Stats: [channel=2][165293 pkts rcvd][0 pkts dropped]
Total Pkts=165293/Dropped=0.0 %
165293 pkts - 136544046 bytes [165289.2 pkt/sec - 1092.33 Mbit/sec]
=========================
Absolute Stats: [channel=3][170460 pkts rcvd][0 pkts dropped]
Total Pkts=170460/Dropped=0.0 %
170460 pkts - 140635250 bytes [170456.1 pkt/sec - 1125.06 Mbit/sec]
=========================
Absolute Stats: [channel=4][175195 pkts rcvd][0 pkts dropped]
Total Pkts=175195/Dropped=0.0 %
175274 pkts - 152625282 bytes [175270.0 pkt/sec - 1220.46 Mbit/sec]
=========================
Absolute Stats: [channel=5][183791 pkts rcvd][0 pkts dropped]
Total Pkts=183791/Dropped=0.0 %
183885 pkts - 160108632 bytes [183880.8 pkt/sec - 1280.29 Mbit/sec]
=========================
Absolute Stats: [channel=6][195090 pkts rcvd][0 pkts dropped]
Total Pkts=195090/Dropped=0.0 %
195090 pkts - 151078761 bytes [195085.5 pkt/sec - 1208.60 Mbit/sec]
=========================
Absolute Stats: [channel=7][176625 pkts rcvd][0 pkts dropped]
Total Pkts=176625/Dropped=0.0 %
176625 pkts - 149183724 bytes [176620.9 pkt/sec - 1193.44 Mbit/sec]
=========================
Absolute Stats: [channel=8][226365 pkts rcvd][0 pkts dropped]
Total Pkts=226365/Dropped=0.0 %
226365 pkts - 214464585 bytes [226359.8 pkt/sec - 1715.68 Mbit/sec]
=========================
Absolute Stats: [channel=9][183973 pkts rcvd][0 pkts dropped]
Total Pkts=183973/Dropped=0.0 %
183973 pkts - 154206146 bytes [183968.8 pkt/sec - 1233.62 Mbit/sec]
=========================
Absolute Stats: [channel=10][193904 pkts rcvd][0 pkts dropped]
Total Pkts=193904/Dropped=0.0 %
193904 pkts - 170982720 bytes [193899.5 pkt/sec - 1367.83 Mbit/sec]
=========================
Absolute Stats: [channel=11][159307 pkts rcvd][0 pkts dropped]
Total Pkts=159307/Dropped=0.0 %
159307 pkts - 130492164 bytes [159303.3 pkt/sec - 1043.91 Mbit/sec]
=========================
Absolute Stats: [channel=12][198685 pkts rcvd][0 pkts dropped]
Total Pkts=198685/Dropped=0.0 %
198685 pkts - 173157408 bytes [198680.4 pkt/sec - 1385.23 Mbit/sec]
=========================
Absolute Stats: [channel=13][196712 pkts rcvd][0 pkts dropped]
Total Pkts=196712/Dropped=0.0 %
196712 pkts - 172714889 bytes [196707.5 pkt/sec - 1381.69 Mbit/sec]
=========================
Absolute Stats: [channel=14][180239 pkts rcvd][0 pkts dropped]
Total Pkts=180239/Dropped=0.0 %
180239 pkts - 153796845 bytes [180234.9 pkt/sec - 1230.35 Mbit/sec]
=========================
Absolute Stats: [channel=15][174886 pkts rcvd][0 pkts dropped]
Total Pkts=174886/Dropped=0.0 %
174886 pkts - 149870888 bytes [174882.0 pkt/sec - 1198.94 Mbit/sec]
=========================
Aggregate stats (all channels): [0.0 pkt/sec][0.00 Mbit/sec][0 pkts dropped]
=========================

=========================
Absolute Stats: [channel=0][280911 pkts rcvd][0 pkts dropped]
Total Pkts=280911/Dropped=0.0 %
280911 pkts - 238246030 bytes [140327.9 pkt/sec - 952.12 Mbit/sec]
=========================
Actual Stats: [channel=0][99895 pkts][1001.8 ms][99715.9 pkt/sec]
=========================
Absolute Stats: [channel=1][271128 pkts rcvd][0 pkts dropped]
Total Pkts=271128/Dropped=0.0 %
271128 pkts - 220184576 bytes [135440.8 pkt/sec - 879.94 Mbit/sec]
=========================
Actual Stats: [channel=1][91540 pkts][1001.8 ms][91375.9 pkt/sec]
=========================
Absolute Stats: [channel=2][251004 pkts rcvd][0 pkts dropped]
Total Pkts=251004/Dropped=0.0 %
251090 pkts - 210457632 bytes [125430.9 pkt/sec - 840.91 Mbit/sec]
=========================
Actual Stats: [channel=2][85799 pkts][1001.8 ms][85645.2 pkt/sec]
=========================
Absolute Stats: [channel=3][256648 pkts rcvd][0 pkts dropped]
Total Pkts=256648/Dropped=0.0 %
256648 pkts - 213116218 bytes [128207.4 pkt/sec - 851.69 Mbit/sec]
=========================
Actual Stats: [channel=3][86188 pkts][1001.8 ms][86033.5 pkt/sec]
=========================
Absolute Stats: [channel=4][261802 pkts rcvd][0 pkts dropped]
Total Pkts=261802/Dropped=0.0 %
261802 pkts - 225272589 bytes [130782.1 pkt/sec - 900.27 Mbit/sec]
=========================
Actual Stats: [channel=4][86528 pkts][1001.8 ms][86372.9 pkt/sec]
=========================
Absolute Stats: [channel=5][275665 pkts rcvd][0 pkts dropped]
Total Pkts=275665/Dropped=0.0 %
275665 pkts - 239259529 bytes [137707.3 pkt/sec - 956.17 Mbit/sec]
=========================
Actual Stats: [channel=5][91780 pkts][1001.8 ms][91615.5 pkt/sec]
=========================
Absolute Stats: [channel=6][295611 pkts rcvd][0 pkts dropped]
Total Pkts=295611/Dropped=0.0 %
295611 pkts - 231543496 bytes [147671.2 pkt/sec - 925.33 Mbit/sec]
=========================
Actual Stats: [channel=6][100521 pkts][1001.8 ms][100340.8 pkt/sec]
=========================
Absolute Stats: [channel=7][268374 pkts rcvd][0 pkts dropped]
Total Pkts=268374/Dropped=0.0 %
268374 pkts - 230010930 bytes [134065.1 pkt/sec - 919.21 Mbit/sec]
=========================
Actual Stats: [channel=7][91749 pkts][1001.8 ms][91584.5 pkt/sec]
=========================
Absolute Stats: [channel=8][312726 pkts rcvd][0 pkts dropped]
Total Pkts=312726/Dropped=0.0 %
312726 pkts - 286419690 bytes [156220.9 pkt/sec - 1144.64 Mbit/sec]
=========================
Actual Stats: [channel=8][86361 pkts][1001.8 ms][86206.2 pkt/sec]
=========================
Absolute Stats: [channel=9][275091 pkts rcvd][0 pkts dropped]
Total Pkts=275091/Dropped=0.0 %
275091 pkts - 229807313 bytes [137420.5 pkt/sec - 918.39 Mbit/sec]
=========================
Actual Stats: [channel=9][91118 pkts][1001.8 ms][90954.6 pkt/sec]
=========================
Absolute Stats: [channel=10][289441 pkts rcvd][0 pkts dropped]
Total Pkts=289441/Dropped=0.0 %
289441 pkts - 254843198 bytes [144589.0 pkt/sec - 1018.45 Mbit/sec]
=========================
Actual Stats: [channel=10][95537 pkts][1001.8 ms][95365.7 pkt/sec]
=========================
Absolute Stats: [channel=11][241318 pkts rcvd][0 pkts dropped]
Total Pkts=241318/Dropped=0.0 %
241318 pkts - 200442927 bytes [120549.4 pkt/sec - 801.04 Mbit/sec]
=========================
Actual Stats: [channel=11][82011 pkts][1001.8 ms][81864.0 pkt/sec]
=========================
Absolute Stats: [channel=12][300209 pkts rcvd][0 pkts dropped]
Total Pkts=300209/Dropped=0.0 %
300209 pkts - 261259342 bytes [149968.1 pkt/sec - 1044.09 Mbit/sec]
=========================
Actual Stats: [channel=12][101524 pkts][1001.8 ms][101342.0 pkt/sec]
=========================
Absolute Stats: [channel=13][293733 pkts rcvd][0 pkts dropped]
Total Pkts=293733/Dropped=0.0 %
293733 pkts - 259477621 bytes [146733.0 pkt/sec - 1036.97 Mbit/sec]
=========================
Actual Stats: [channel=13][97021 pkts][1001.8 ms][96847.1 pkt/sec]
=========================
Absolute Stats: [channel=14][267101 pkts rcvd][0 pkts dropped]
Total Pkts=267101/Dropped=0.0 %
267101 pkts - 226064969 bytes [133429.1 pkt/sec - 903.44 Mbit/sec]
=========================
Actual Stats: [channel=14][86862 pkts][1001.8 ms][86706.3 pkt/sec]
=========================
Absolute Stats: [channel=15][266323 pkts rcvd][0 pkts dropped]
Total Pkts=266323/Dropped=0.0 %
266323 pkts - 232926529 bytes [133040.5 pkt/sec - 930.86 Mbit/sec]
=========================
Actual Stats: [channel=15][91437 pkts][1001.8 ms][91273.1 pkt/sec]
=========================
Aggregate stats (all channels): [1463243.0 pkt/sec][15023.51 Mbit/sec][0 pkts dropped]
=========================

=========================
Absolute Stats: [channel=0][373933 pkts rcvd][0 pkts dropped]
Total Pkts=373933/Dropped=0.0 %
374021 pkts - 319447715 bytes [124511.0 pkt/sec - 850.55 Mbit/sec]
=========================
Actual Stats: [channel=0][93110 pkts][1002.1 ms][92914.8 pkt/sec]
=========================
Absolute Stats: [channel=1][364673 pkts rcvd][0 pkts dropped]
Total Pkts=364673/Dropped=0.0 %
364673 pkts - 297909054 bytes [121399.0 pkt/sec - 793.39 Mbit/sec]
=========================
Actual Stats: [channel=1][93545 pkts][1002.1 ms][93348.9 pkt/sec]
=========================
Absolute Stats: [channel=2][340006 pkts rcvd][0 pkts dropped]
Total Pkts=340006/Dropped=0.0 %
340006 pkts - 286127223 bytes [113187.4 pkt/sec - 762.01 Mbit/sec]
=========================
Actual Stats: [channel=2][88914 pkts][1002.1 ms][88727.6 pkt/sec]
=========================
Absolute Stats: [channel=3][345742 pkts rcvd][0 pkts dropped]
Total Pkts=345742/Dropped=0.0 %
345744 pkts - 291400583 bytes [115097.6 pkt/sec - 776.05 Mbit/sec]
=========================
Actual Stats: [channel=3][89096 pkts][1002.1 ms][88909.2 pkt/sec]
=========================
Absolute Stats: [channel=4][347349 pkts rcvd][0 pkts dropped]
Total Pkts=347349/Dropped=0.0 %
347349 pkts - 298935146 bytes [115631.9 pkt/sec - 796.12 Mbit/sec]
=========================
Actual Stats: [channel=4][85547 pkts][1002.1 ms][85367.6 pkt/sec]
=========================
Absolute Stats: [channel=5][364298 pkts rcvd][0 pkts dropped]
Total Pkts=364298/Dropped=0.0 %
364298 pkts - 316328192 bytes [121274.2 pkt/sec - 842.44 Mbit/sec]
=========================
Actual Stats: [channel=5][88755 pkts][1002.1 ms][88568.9 pkt/sec]
=========================
Absolute Stats: [channel=6][389332 pkts rcvd][0 pkts dropped]
Total Pkts=389332/Dropped=0.0 %
389332 pkts - 304943539 bytes [129608.0 pkt/sec - 812.12 Mbit/sec]
=========================
Actual Stats: [channel=6][93721 pkts][1002.1 ms][93524.5 pkt/sec]
=========================
Absolute Stats: [channel=7][358297 pkts rcvd][0 pkts dropped]
Total Pkts=358297/Dropped=0.0 %
358297 pkts - 306416899 bytes [119276.5 pkt/sec - 816.05 Mbit/sec]
=========================
Actual Stats: [channel=7][89923 pkts][1002.1 ms][89734.5 pkt/sec]
=========================
Absolute Stats: [channel=8][401267 pkts rcvd][0 pkts dropped]
Total Pkts=401267/Dropped=0.0 %
401267 pkts - 360814291 bytes [133581.1 pkt/sec - 960.92 Mbit/sec]
=========================
Actual Stats: [channel=8][88541 pkts][1002.1 ms][88355.4 pkt/sec]
=========================
Absolute Stats: [channel=9][367106 pkts rcvd][0 pkts dropped]
Total Pkts=367106/Dropped=0.0 %
367106 pkts - 308110795 bytes [122209.0 pkt/sec - 820.56 Mbit/sec]
=========================
Actual Stats: [channel=9][92015 pkts][1002.1 ms][91822.1 pkt/sec]
=========================
Absolute Stats: [channel=10][379460 pkts rcvd][0 pkts dropped]
Total Pkts=379460/Dropped=0.0 %
379460 pkts - 333159086 bytes [126321.6 pkt/sec - 887.26 Mbit/sec]
=========================
Actual Stats: [channel=10][90019 pkts][1002.1 ms][89830.3 pkt/sec]
=========================
Absolute Stats: [channel=11][325694 pkts rcvd][0 pkts dropped]
Total Pkts=325694/Dropped=0.0 %
325694 pkts - 275299638 bytes [108423.0 pkt/sec - 733.17 Mbit/sec]
=========================
Actual Stats: [channel=11][84376 pkts][1002.1 ms][84199.1 pkt/sec]
=========================
Absolute Stats: [channel=12][404043 pkts rcvd][0 pkts dropped]
Total Pkts=404043/Dropped=0.0 %
404043 pkts - 354268267 bytes [134505.2 pkt/sec - 943.48 Mbit/sec]
=========================
Actual Stats: [channel=12][103834 pkts][1002.1 ms][103616.3 pkt/sec]
=========================
Absolute Stats: [channel=13][387853 pkts rcvd][0 pkts dropped]
Total Pkts=387853/Dropped=0.0 %
387853 pkts - 341947698 bytes [129115.6 pkt/sec - 910.67 Mbit/sec]
=========================
Actual Stats: [channel=13][94120 pkts][1002.1 ms][93922.7 pkt/sec]
=========================
Absolute Stats: [channel=14][355203 pkts rcvd][0 pkts dropped]
Total Pkts=355203/Dropped=0.0 %
355203 pkts - 299561170 bytes [118246.5 pkt/sec - 797.79 Mbit/sec]
=========================
Actual Stats: [channel=14][88102 pkts][1002.1 ms][87917.3 pkt/sec]
=========================
Absolute Stats: [channel=15][358170 pkts rcvd][0 pkts dropped]
Total Pkts=358170/Dropped=0.0 %
358170 pkts - 317357718 bytes [119234.2 pkt/sec - 845.18 Mbit/sec]
=========================
Actual Stats: [channel=15][91847 pkts][1002.1 ms][91654.4 pkt/sec]
=========================
Aggregate stats (all channels): [1452413.5 pkt/sec][13347.76 Mbit/sec][0 pkts dropped]
=========================

=========================
Absolute Stats: [channel=0][468626 pkts rcvd][0 pkts dropped]
Total Pkts=468626/Dropped=0.0 %
468626 pkts - 400765367 bytes [116978.6 pkt/sec - 800.31 Mbit/sec]
=========================
Actual Stats: [channel=0][94605 pkts][1002.2 ms][94400.9 pkt/sec]
=========================
Absolute Stats: [channel=1][459038 pkts rcvd][0 pkts dropped]
Total Pkts=459038/Dropped=0.0 %
459038 pkts - 375498207 bytes [114585.3 pkt/sec - 749.86 Mbit/sec]
=========================
Actual Stats: [channel=1][94365 pkts][1002.2 ms][94161.4 pkt/sec]
=========================
Absolute Stats: [channel=2][427693 pkts rcvd][0 pkts dropped]
Total Pkts=427693/Dropped=0.0 %
427756 pkts - 360740091 bytes [106776.6 pkt/sec - 720.30 Mbit/sec]
=========================
Actual Stats: [channel=2][87750 pkts][1002.2 ms][87560.7 pkt/sec]
=========================
Absolute Stats: [channel=3][430086 pkts rcvd][0 pkts dropped]
Total Pkts=430086/Dropped=0.0 %
430086 pkts - 360783155 bytes [107358.3 pkt/sec - 720.47 Mbit/sec]
=========================
Actual Stats: [channel=3][84342 pkts][1002.2 ms][84160.0 pkt/sec]
=========================
Absolute Stats: [channel=4][441175 pkts rcvd][0 pkts dropped]
Total Pkts=441175/Dropped=0.0 %
441175 pkts - 381517772 bytes [110126.3 pkt/sec - 761.88 Mbit/sec]
=========================
Actual Stats: [channel=4][93826 pkts][1002.2 ms][93623.6 pkt/sec]
=========================
Absolute Stats: [channel=5][452388 pkts rcvd][0 pkts dropped]
Total Pkts=452388/Dropped=0.0 %
452388 pkts - 392565040 bytes [112925.3 pkt/sec - 783.94 Mbit/sec]
=========================
Actual Stats: [channel=5][87966 pkts][1002.2 ms][87776.2 pkt/sec]
=========================
Absolute Stats: [channel=6][484619 pkts rcvd][0 pkts dropped]
Total Pkts=484619/Dropped=0.0 %
484619 pkts - 380513369 bytes [120970.8 pkt/sec - 759.87 Mbit/sec]
=========================
Actual Stats: [channel=6][95287 pkts][1002.2 ms][95081.4 pkt/sec]
=========================
Absolute Stats: [channel=7][444354 pkts rcvd][0 pkts dropped]
Total Pkts=444354/Dropped=0.0 %
444354 pkts - 380437307 bytes [110919.8 pkt/sec - 759.72 Mbit/sec]
=========================
Actual Stats: [channel=7][86057 pkts][1002.2 ms][85871.3 pkt/sec]
=========================
Absolute Stats: [channel=8][492232 pkts rcvd][0 pkts dropped]
Total Pkts=492232/Dropped=0.0 %
492232 pkts - 439080930 bytes [122871.2 pkt/sec - 876.83 Mbit/sec]
=========================
Actual Stats: [channel=8][90965 pkts][1002.2 ms][90768.8 pkt/sec]
=========================
Absolute Stats: [channel=9][456986 pkts rcvd][0 pkts dropped]
Total Pkts=456986/Dropped=0.0 %
456986 pkts - 384635529 bytes [114073.1 pkt/sec - 768.10 Mbit/sec]
=========================
Actual Stats: [channel=9][89880 pkts][1002.2 ms][89686.1 pkt/sec]
=========================
Absolute Stats: [channel=10][465784 pkts rcvd][0 pkts dropped]
Total Pkts=465784/Dropped=0.0 %
465784 pkts - 406987442 bytes [116269.2 pkt/sec - 812.74 Mbit/sec]
=========================
Actual Stats: [channel=10][86324 pkts][1002.2 ms][86137.8 pkt/sec]
=========================
Absolute Stats: [channel=11][414559 pkts rcvd][0 pkts dropped]
Total Pkts=414559/Dropped=0.0 %
414559 pkts - 356117478 bytes [103482.4 pkt/sec - 711.15 Mbit/sec]
=========================
Actual Stats: [channel=11][88865 pkts][1002.2 ms][88673.3 pkt/sec]
=========================
Absolute Stats: [channel=12][505441 pkts rcvd][0 pkts dropped]
Total Pkts=505441/Dropped=0.0 %
505441 pkts - 445085395 bytes [126168.4 pkt/sec - 888.82 Mbit/sec]
=========================
Actual Stats: [channel=12][101398 pkts][1002.2 ms][101179.3 pkt/sec]
=========================
Absolute Stats: [channel=13][484235 pkts rcvd][0 pkts dropped]
Total Pkts=484235/Dropped=0.0 %
484235 pkts - 428890010 bytes [120875.0 pkt/sec - 856.48 Mbit/sec]
=========================
Actual Stats: [channel=13][96382 pkts][1002.2 ms][96174.1 pkt/sec]
=========================
Absolute Stats: [channel=14][441791 pkts rcvd][0 pkts dropped]
Total Pkts=441791/Dropped=0.0 %
441791 pkts - 370987385 bytes [110280.1 pkt/sec - 740.85 Mbit/sec]
=========================
Actual Stats: [channel=14][86588 pkts][1002.2 ms][86401.2 pkt/sec]
=========================
Absolute Stats: [channel=15][447444 pkts rcvd][0 pkts dropped]
Total Pkts=447444/Dropped=0.0 %
447444 pkts - 400157776 bytes [111691.2 pkt/sec - 799.10 Mbit/sec]
=========================
Actual Stats: [channel=15][89274 pkts][1002.2 ms][89081.4 pkt/sec]
=========================
Aggregate stats (all channels): [1450737.5 pkt/sec][12510.42 Mbit/sec][0 pkts dropped]
=========================

^CLeaving...
=========================
Absolute Stats: [channel=0][526704 pkts rcvd][0 pkts dropped]
Total Pkts=526704/Dropped=0.0 %
526704 pkts - 449622006 bytes [112996.9 pkt/sec - 771.68 Mbit/sec]
=========================
Actual Stats: [channel=0][58078 pkts][655.1 ms][88649.3 pkt/sec]
=========================
Absolute Stats: [channel=1][518742 pkts rcvd][0 pkts dropped]
Total Pkts=518742/Dropped=0.0 %
518742 pkts - 423173503 bytes [111288.8 pkt/sec - 726.29 Mbit/sec]
=========================
Actual Stats: [channel=1][59704 pkts][655.1 ms][91131.2 pkt/sec]
=========================
Absolute Stats: [channel=2][482833 pkts rcvd][0 pkts dropped]
Total Pkts=482833/Dropped=0.0 %
482833 pkts - 408272765 bytes [103585.0 pkt/sec - 700.71 Mbit/sec]
=========================
Actual Stats: [channel=2][55077 pkts][655.1 ms][84068.7 pkt/sec]
=========================
Absolute Stats: [channel=3][484505 pkts rcvd][0 pkts dropped]
Total Pkts=484505/Dropped=0.0 %
484505 pkts - 407952853 bytes [103943.7 pkt/sec - 700.16 Mbit/sec]
=========================
Actual Stats: [channel=3][54419 pkts][655.1 ms][83064.3 pkt/sec]
=========================
Absolute Stats: [channel=4][497847 pkts rcvd][0 pkts dropped]
Total Pkts=497847/Dropped=0.0 %
497847 pkts - 430545046 bytes [106806.0 pkt/sec - 738.94 Mbit/sec]
=========================
Actual Stats: [channel=4][56672 pkts][655.1 ms][86503.3 pkt/sec]
=========================
Absolute Stats: [channel=5][509084 pkts rcvd][0 pkts dropped]
Total Pkts=509084/Dropped=0.0 %
509084 pkts - 442684546 bytes [109216.8 pkt/sec - 759.77 Mbit/sec]
=========================
Actual Stats: [channel=5][56696 pkts][655.1 ms][86539.9 pkt/sec]
=========================
Absolute Stats: [channel=6][590352 pkts rcvd][0 pkts dropped]
Total Pkts=590352/Dropped=0.0 %
590352 pkts - 488140796 bytes [126651.7 pkt/sec - 837.79 Mbit/sec]
=========================
Actual Stats: [channel=6][105733 pkts][655.1 ms][161389.2 pkt/sec]
=========================
Absolute Stats: [channel=7][498739 pkts rcvd][0 pkts dropped]
Total Pkts=498739/Dropped=0.0 %
498739 pkts - 426878095 bytes [106997.4 pkt/sec - 732.65 Mbit/sec]
=========================
Actual Stats: [channel=7][54385 pkts][655.1 ms][83012.4 pkt/sec]
=========================
Absolute Stats: [channel=8][545746 pkts rcvd][0 pkts dropped]
Total Pkts=545746/Dropped=0.0 %
545746 pkts - 483616307 bytes [117082.1 pkt/sec - 830.02 Mbit/sec]
=========================
Actual Stats: [channel=8][53514 pkts][655.1 ms][81682.9 pkt/sec]
=========================
Absolute Stats: [channel=9][513518 pkts rcvd][0 pkts dropped]
Total Pkts=513518/Dropped=0.0 %
513518 pkts - 433042435 bytes [110168.0 pkt/sec - 743.23 Mbit/sec]
=========================
Actual Stats: [channel=9][56532 pkts][655.1 ms][86289.6 pkt/sec]
=========================
Absolute Stats: [channel=10][518808 pkts rcvd][0 pkts dropped]
Total Pkts=518808/Dropped=0.0 %
518808 pkts - 451299621 bytes [111302.9 pkt/sec - 774.56 Mbit/sec]
=========================
Actual Stats: [channel=10][53024 pkts][655.1 ms][80935.0 pkt/sec]
=========================
Absolute Stats: [channel=11][463009 pkts rcvd][0 pkts dropped]
Total Pkts=463009/Dropped=0.0 %
463009 pkts - 396962614 bytes [99332.0 pkt/sec - 681.30 Mbit/sec]
=========================
Actual Stats: [channel=11][48372 pkts][655.1 ms][73834.3 pkt/sec]
=========================
Absolute Stats: [channel=12][568457 pkts rcvd][0 pkts dropped]
Total Pkts=568457/Dropped=0.0 %
568457 pkts - 501652006 bytes [121954.4 pkt/sec - 860.98 Mbit/sec]
=========================
Actual Stats: [channel=12][63016 pkts][655.1 ms][96186.6 pkt/sec]
=========================
Absolute Stats: [channel=13][540529 pkts rcvd][0 pkts dropped]
Total Pkts=540529/Dropped=0.0 %
540529 pkts - 477373633 bytes [115962.9 pkt/sec - 819.31 Mbit/sec]
=========================
Actual Stats: [channel=13][56294 pkts][655.1 ms][85926.3 pkt/sec]
=========================
Absolute Stats: [channel=14][493059 pkts rcvd][0 pkts dropped]
Total Pkts=493059/Dropped=0.0 %
493059 pkts - 413762408 bytes [105778.8 pkt/sec - 710.14 Mbit/sec]
=========================
Actual Stats: [channel=14][51268 pkts][655.1 ms][78254.7 pkt/sec]
=========================
Absolute Stats: [channel=15][500543 pkts rcvd][0 pkts dropped]
Total Pkts=500543/Dropped=0.0 %
500543 pkts - 447149624 bytes [107384.4 pkt/sec - 767.44 Mbit/sec]
=========================
Actual Stats: [channel=15][53099 pkts][655.1 ms][81049.5 pkt/sec]
=========================
Aggregate stats (all channels): [1428517.1 pkt/sec][12154.96 Mbit/sec][0 pkts dropped]
=========================

Shutting down sockets...
        0...
        1...
        2...
        3...
        4...
        5...
        6...
        7...
        8...
        9...
        10...
        11...
        12...
        13...
        14...
        15...
root@suricata:/home/pevman/pfring-svn-latest/userland/examples#



That is it for the DNA configuration and setup/installation part.
The next chapter - Chapter III - AF_PACKET - deals with the configuration, setup and tuning of AF_PACKET mode for the Suricata IDPS.