IT Security through Open Source - IT infrastructure and network security, Suricata and such... (Pevma)<br />
<br />
Suricata with afpacket - the memory of it all<br />
Suricata IDS/IPS/NSM is a highly scalable, modular and flexible platform. There are numerous configuration options available, giving you a great deal of control.<br />
<br />
This blog post aims to give you an overview of the suricata.yaml settings that have an impact on memory consumption, and of how they affect the memory usage of Suricata and the system it runs on.<br />
<br />
One of the ever-relevant questions with regards to performance tuning and production deployment is: what is the total memory consumption of Suricata? Or, to be precise: what is the total memory that Suricata will consume/use, and how can that be calculated and configured more precisely?<br />
<br />
The details of the answer are very relevant, since they will most certainly affect the deployment set up. Getting the configuration wrong can lead to RAM starvation, which in turn forces the use of swap, which would most likely make your particular set up suboptimal (to be frank - useless).<br />
<br />
In this blog post we will try to walk through the relevant settings in a suricata.yaml configuration example and come up with an equation for the total memory consumption.<br />
<br />
For this particular set up I use:<br />
<ul>
<li>af-packet running mode with 16 threads configuration</li>
<li>runmode: workers</li>
<li>the latest dev edition (git - 2.1dev (rev dcbbda5)) of Suricata at the time of this writing.</li>
<li>IDS mode is used in this example</li>
<li>Debian Jessie/Ubuntu LTS (the OS should not matter)</li>
</ul>
<br />
Let's dive into it...<br />
<br />
<h2>
MTU size does matter</h2>
How so?<br />
<br />
If you look into the setting for max-pending-packets in suricata.yaml<br />
<br />
<blockquote class="tr_bq">
max-pending-packets: 1024</blockquote>
<br />
that will lead to the following output into suricata.log:<br />
<br />
<blockquote class="tr_bq">
.....<br />
(tmqh-packetpool.c:398) <Info> (PacketPoolInit) -- preallocated 1024 packets. Total memory 3321856<br />
.....</blockquote>
which works out to 3244 bytes per packet, per thread (pool).<br />
<br />
<br />
The size each packet takes in memory is sizeof(struct<br />
Packet_) + DEFAULT_PACKET_SIZE - roughly 1.7K plus 1.5K, or about 3.2K.<br />
Total memory used would be:<br />
<br />
<number_of_threads>*<(sizeof(struct Packet_) + DEFAULT_PACKET_SIZE)>*<max-pending-packets> - for example, with max-pending-packets set to 65534: 16 * 65534 * 3.2K = 3.2GB.<br />
<br />
<div class="a3s" id=":15s">
<b>NOTE</b>: That much RAM will be reserved right away<br />
<b>NOTE</b>: The number of threads does matter as well :)<br />
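To make the arithmetic above easy to repeat, here is a minimal sketch (not Suricata's own code) of the packet pool calculation, assuming the per-packet cost observed in this log - roughly 1730 bytes for the Packet structure plus 1514 bytes of packet data (the struct size varies by Suricata version):

```python
# Packet pool memory estimate: every worker thread preallocates
# max-pending-packets packets of (struct + data) bytes each.
SIZEOF_PACKET = 1730        # ~1.7K, derived from 3321856 / 1024 - 1514
DEFAULT_PACKET_SIZE = 1514  # classic Ethernet MTU + hardware header

def packet_pool_bytes(threads, max_pending_packets,
                      struct_size=SIZEOF_PACKET,
                      packet_size=DEFAULT_PACKET_SIZE):
    return threads * max_pending_packets * (struct_size + packet_size)

print(packet_pool_bytes(16, 1024))   # 53149696 bytes with the yaml above
print(packet_pool_bytes(16, 65534))  # 3401476736 bytes (~3.2GB) at 65534
```

Note that one thread's share, 1024 * 3244 = 3321856 bytes, matches the suricata.log line quoted above exactly.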
<br />
So why is the NIC MTU important?<br />
<br />
The MTU setting on the NIC (IDS) interface is used by af-packet as the default packet size (aka DEFAULT_PACKET_SIZE) if no explicit default packet size is specified in the suricata.yaml:<br />
<blockquote class="tr_bq">
<span class="im"># Preallocated size for packet. Default is 1514 which is the classical<br />
# size for pcap on ethernet. You should adjust this value to the highest<br />
# packet size (MTU + hardware header) on your system.<br />
#default-packet-size: 1514</span></blockquote>
<span class="im">Note above that "</span><span class="im"><i><b><span class="im">default-packet-size</span></b></i>" is commented out/unset. In that case af-packet will use the MTU set on the NIC, plus the hardware header, as the default packet size - which in this particular set up (an MTU of 1500 as shown by "<i>ifconfig</i>", plus the 14-byte Ethernet header) is 1514.</span> <br />
<br />
So when you would like to play "big" and enable those 9KB jumbo frames as MTU on your NIC - without actually needing them - you may, at the very least, end up with an unwanted side effect :)<br />
<br />
<br />
<h2>
Defrag memory settings and consumption</h2>
</div>
<blockquote class="tr_bq">
<div class="a3s" id=":15s">
defrag:<br />
memcap: 512mb<br />
hash-size: 65536<br />
trackers: 65535 # number of defragmented flows to follow<br />
max-frags: 65535 # number of fragments to keep (higher than trackers)<br />
prealloc: yes<br />
timeout: 60</div>
</blockquote>
<div class="a3s" id=":15s">
<br />
The setting above from the defrag section of the suricata.yaml will result in the following if you check your suricata.log:<br />
<br />
<blockquote class="tr_bq">
defrag-hash.c:220) <Info> (DefragInitConfig) -- allocated 3670016<br />
bytes of memory for the defrag hash... 65536 buckets of size 56<br />
defrag-hash.c:245) <Info> (DefragInitConfig) -- preallocated 65535<br />
defrag trackers of size 168<br />
(defrag-hash.c:252) <Info> (DefragInitConfig) -- <b>defrag memory usage: 14679896 bytes, maximum: 536870912</b></blockquote>
<br />
Here we have(in bytes) -<br />
(defrag hash size * 56) + (prealloc defrag trackers * 168). In this case that would be a total of<br />
(65536 * 56) + (65535 * 168) = 13.99MB<br />
which is "<i><b>defrag memory usage: 14679896 bytes</b></i>" from the above output.<br />
<br />
That much memory is immediately allocated/reserved.<br />
The maximum memory usage allowed to be used by defrag will be 512MB.<br />
<br />
<b>NOTE</b>: The defrag settings you configure for preallocation must sum up to less than the maximum allowed (defrag.memcap) <br />
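The same math can be checked with a few lines of Python (a sketch; the 56- and 168-byte sizes are as printed by this particular build and may differ across versions):

```python
# Defrag preallocation: hash buckets of 56 bytes plus trackers of 168 bytes.
def defrag_prealloc_bytes(hash_size, trackers,
                          bucket_size=56, tracker_size=168):
    return hash_size * bucket_size + trackers * tracker_size

used = defrag_prealloc_bytes(65536, 65535)
print(used)                # 14679896, matching the log line above
print(used < 512 * 2**20)  # True - safely under the 512mb defrag.memcap
```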
<br />
<br />
<h2>
Host memory settings and consumption</h2>
</div>
<blockquote class="tr_bq">
<div class="a3s" id=":15s">
host:<br />
hash-size: 4096<br />
prealloc: 10000<br />
memcap: 16777216</div>
</blockquote>
<div class="a3s" id=":15s">
<br />
The setting above (host memory settings have an effect on the IP reputation usage) from the host section of the suricata.yaml will result in the following if you check your suricata.log:<br />
<br />
<blockquote class="tr_bq">
(host.c:212) <Info> (HostInitConfig) -- allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64<br />
(host.c:235) <Info> (HostInitConfig) -- preallocated 10000 hosts of size 136<br />
(host.c:237) <Info> (HostInitConfig) -- host memory usage: <b>1622144</b> bytes, maximum: 1622144 </blockquote>
<br />
Pretty simple (in bytes) -<br />
(hash-size*64) + (prealloc_hosts * 136) =<br />
(4096*64) + (10000 * 136) = <b>1622144</b> = 1.54MB are allocated/reserved right away at start.<br />
The maximum memory allowed is 16MB (<b>16777216</b> bytes)<br />
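The host table (and, as the next section shows, the ippair table) follows the same bucket-plus-prealloc pattern; a small sketch with the sizes printed by this build (64-byte buckets, 136-byte entries):

```python
# Hash-table preallocation shared by the host and ippair sections:
# hash buckets of 64 bytes plus preallocated entries of 136 bytes.
def table_prealloc_bytes(hash_size, prealloc,
                         bucket_size=64, entry_size=136):
    return hash_size * bucket_size + prealloc * entry_size

print(table_prealloc_bytes(4096, 10000))  # host:   1622144 bytes
print(table_prealloc_bytes(4096, 1000))   # ippair:  398144 bytes
```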
<br />
<br />
<h2>
Ippair memory settings and consumption</h2>
<br />
<blockquote class="tr_bq">
<div class="a3s" id=":15s">
ippair:<br />
hash-size: 4096<br />
prealloc: 1000<br />
memcap: 16777216</div>
</blockquote>
<br />
The setting above (ippair memory settings have an effect on the <a href="http://blog.inliniac.net/tag/xbits/" target="_blank">xbits usage</a>) from the ippair section of the suricata.yaml will result in the following if you check your suricata.log:<br />
<br />
<blockquote class="tr_bq">
(ippair.c:207) <Info> (IPPairInitConfig) -- allocated 262144 bytes of memory for the ippair hash... 4096 buckets of size 64<br />
(ippair.c:230) <Info> (IPPairInitConfig) -- preallocated 1000 ippairs of size 136<br />
(ippair.c:232) <Info> (IPPairInitConfig) -- ippair memory usage: <b>398144 bytes</b>, maximum: <b>16777216</b></blockquote>
<br />
Pretty simple as well (in bytes) -<br />
(hash-size*64) + (prealloc_ippair * 136) =<br />
(4096*64) + (1000 * 136) = <b>398144</b> = 0.38MB will be allocated/reserved immediately upon start.<br />
The maximum memory allowed is 16MB (<b>16777216</b> bytes)<br />
<br />
<h2>
Flow memory settings and consumption</h2>
<br />
<blockquote class="tr_bq">
flow:<br />
memcap: 1gb<br />
hash-size: 1048576<br />
prealloc: 1048576<br />
emergency-recovery: 30<br />
#managers: 1 # default to one flow manager<br />
#recyclers: 1 # default to one flow recycler thread</blockquote>
<br />
The setting above from the flow config section of the suricata.yaml will result in the following in your suricata.log:<br />
<br />
<blockquote class="tr_bq">
[393] 7/6/2015 -- 15:37:55 - (flow.c:441) <Info> (FlowInitConfig) --<br />
allocated 67108864 bytes of memory for the flow hash... 1048576<br />
buckets of size 64<br />
[393] 7/6/2015 -- 15:37:55 - (flow.c:465) <Info> (FlowInitConfig) --<br />
preallocated 1048576 flows of size 280<br />
[393] 7/6/2015 -- 15:37:55 - (flow.c:467) <Info> (FlowInitConfig) --<br />
<b>flow memory usage: 369098752 bytes, maximum: 1073741824</b></blockquote>
<br />
Here we have (in bytes) -<br />
(flow hash * 64) + (prealloc flows * 280) which in this case would be<br />
(1048576 * 64) + (1048576 * 280) = 360710144 bytes = 344MB (the log reports 369098752 bytes, which includes some extra overhead)<br />
The above is what is going to be immediately used/reserved at start up.<br />
The max allowed usage will be 1024MB <br />
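The flow table math can be sketched the same way (bucket and flow sizes taken from the log above; the live log reports a slightly larger figure due to extra overhead):

```python
# Flow table preallocation: 64-byte hash buckets plus 280-byte flows.
def flow_prealloc_bytes(hash_size, prealloc_flows,
                        bucket_size=64, flow_size=280):
    return hash_size * bucket_size + prealloc_flows * flow_size

b = flow_prealloc_bytes(1048576, 1048576)
print(b)            # 360710144 bytes - exactly 344 MiB
print(b / 2**20)    # 344.0
```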
<br />
A piece of advice, if I may - don't ever add zeros here if you do not need to. By "do not need to" I mean: if you do not see the flow emergency mode counters increasing in your stats.log.<br />
<br />
<h2>
Prealloc-sessions settings and consumption</h2>
<blockquote class="tr_bq">
stream:<br />
memcap: 32mb<br />
checksum-validation: no # reject wrong csums<br />
<b>prealloc-sessions: 20000</b><br />
inline: auto </blockquote>
<br />
The setting above from the prealloc sessions config section of the suricata.yaml will result in the following in your suricata.log:<br />
<br />
<blockquote class="tr_bq">
(stream-tcp.c:377) <Info> (StreamTcpInitConfig) -- stream "prealloc-sessions": 20000 (per thread)</blockquote>
<br />
This translates into bytes as follows (TcpSession structure is <span class="il">192</span> bytes, PoolBucket is <span class="il">24 bytes</span>):<br />
<blockquote class="tr_bq">
(192 + 24) * prealloc_sessions * number of threads = memory use in bytes</blockquote>
In our case we have - (192 + 24) * 20000 * 16 = 65.91MB. This amount will be immediately allocated upon start up.<br />
<b>NOTE</b>: The number of threads does matter as well :)<br />
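The per-thread session preallocation can be sketched the same way (struct sizes as quoted above for this build):

```python
# Preallocated TCP session memory: (sizeof(TcpSession) + PoolBucket)
# per session, per worker thread (192 + 24 bytes in this build).
def session_prealloc_bytes(threads, prealloc_sessions,
                           session_size=192, bucket_size=24):
    return threads * prealloc_sessions * (session_size + bucket_size)

print(session_prealloc_bytes(16, 20000))  # 69120000 bytes, ~65.9 MiB
```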
<br />
<h2>
af-packet ring size memory settings and consumption</h2>
<blockquote class="tr_bq">
<br />
use-mmap: yes<br />
# Ring size will be computed with respect to max_pending_packets and number<br />
# of threads. You can set manually the ring size in number of packets by setting<br />
# the following value. If you are using flow cluster-type and have really network<br />
# intensive single-flow you could want to set the ring-size independently of the number<br />
# of threads:<br />
<b>ring-size: 2048</b></blockquote>
<br />
The setting above from the af-packet config section of the suricata.yaml will result in the following in your suricata.log. The ring-size setting actually controls the size of the buffer for each<br />
ring (per thread) - the buffer for af-packet:<br />
<br />
<blockquote class="tr_bq">
[7636] 31/8/2015 -- 22:50:51 - (source-af-packet.c:1365) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=103 frame_size=1584 frame_nr=2060<br />
[7636] 31/8/2015 -- 22:50:51 - (source-af-packet.c:1573) <Info> (AFPCreateSocket) -- Using interface 'eth0' via socket 7<br />
[7636] 31/8/2015 -- 22:50:51 - (source-af-packet.c:1157) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth01 using socket 7<br />
[7637] 31/8/2015 -- 22:50:51 - (source-af-packet.c:1365) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=103 frame_size=1584 frame_nr=2060<br />
[7637] 31/8/2015 -- 22:50:51 - (source-af-packet.c:1573) <Info> (AFPCreateSocket) -- Using interface 'eth0' via socket 8</blockquote>
<br />
<br />
In general - that would mean -<br />
<number of threads> * <ringsize> * <(sizeof(struct Packet_) + DEFAULT_PACKET_SIZE)><br />
or in our case - 16*2048*3514=109MB<br />
This is memory allocated/reserved immediately.<br />
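As a sketch of the approximation above (using this article's ~3514-byte per-packet figure; note the kernel's actual per-ring allocation is block_size * block_nr, as seen in the log):

```python
# Rough af-packet ring memory, per the article's per-packet approximation.
def afpacket_ring_bytes(threads, ring_size, per_packet=3514):
    return threads * ring_size * per_packet

print(afpacket_ring_bytes(16, 2048))  # 115146752 bytes, ~109.8 MiB
```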
<br />
Above I say "in general". You might wonder where this comes from:<br />
<blockquote class="tr_bq">
(source-af-packet.c:1365) <Info> (AFPComputeRingParams) --
AF_PACKET RX Ring params:<b><i> block_size=32768 block_nr=103 frame_size=1584
frame_nr=2060</i></b></blockquote>
<br />
Why 2060 frames when we specified 2048? Why <i>block_size/frame_size</i>, and what is their relation? A full detailed description can be found here - <a href="https://www.kernel.org/doc/Documentation/networking/packet_mmap.txt" target="_blank">https://www.kernel.org/doc/Documentation/networking/packet_mmap.txt </a>(thanks regit) <br />
<br />
<br />
<h2>
Stream and reassembly memory settings and consumption</h2>
<br />
<blockquote class="tr_bq">
stream:<br />
memcap: 14gb<br />
reassembly:<br />
memcap: 20gb</blockquote>
<br />
<br />
The setting above from the stream and reassembly config section of the<br />
suricata.yaml will result in the following in your suricata.log:</div>
<blockquote class="tr_bq">
<div class="a3s" id=":15s">
...... </div>
<div class="a3s" id=":15s">
(stream-tcp.c:393) <Info> (StreamTcpInitConfig) -- stream "memcap": 15032385536</div>
<div class="a3s" id=":15s">
......</div>
<div class="a3s" id=":15s">
(stream-tcp.c:475) <Info> (StreamTcpInitConfig) -- stream.reassembly "memcap": 21474836480</div>
</blockquote>
<div class="a3s" id=":15s">
<blockquote>
...... </blockquote>
<br />
The above is very straightforward. The stream and reassembly memcaps<br />
are in total 14GB+20GB=34GB.<br />
This is the maximum memory allowed. It will not be allocated immediately.<br />
<br />
Further below in the config section we have :<br />
<br />
<blockquote class="tr_bq">
#raw: yes<br />
#chunk-prealloc: 250</blockquote>
<br />
<b>Q:</b> What does raw mean and what is chunk-prealloc?<br />
<b>A:</b> The 'raw' stream inspection (content keywords w/o http_uri etc) uses<br />
'chunks'. This is again a preallocated memory block that lives in a pool.<br />
<br />
<b>Q:</b> So what is the size of "chunks " ?<br />
<b>A:</b> 4kb/4096bytes<br />
<br />
So in this case above we have -<br />
250*4096 = 0.97MB<br />
This is deducted/taken from the memory allocated by the <b>stream.reassembly.memcap</b> value. <br />
<br />
We also have preallocated segments (values in bytes):<br />
<blockquote class="tr_bq">
#randomize-chunk-range: 10<br />
#raw: yes<br />
#chunk-prealloc: 250<br />
#segments:<br />
# - size: 4<br />
# prealloc: 256<br />
# - size: 16<br />
# prealloc: 512<br />
# - size: 112<br />
# prealloc: 512<br />
# - size: 248<br />
# prealloc: 512<br />
# - size: 512<br />
# prealloc: 512<br />
# - size: 768<br />
# prealloc: 1024<br />
# - size: 1448<br />
# prealloc: 1024<br />
# - size: 65535<br />
# prealloc: 128<br />
#zero-copy-size: 128</blockquote>
<br />
More detailed info about the above can be found in my other blog post here - <a href="http://pevma.blogspot.se/2014/06/suricata-idsips-tcp-segment-pool-size.html">http://pevma.blogspot.se/2014/06/suricata-idsips-tcp-segment-pool-size.html</a><br />
<br />
<b>NOTE:</b> Do not forget that these settings (segment preallocation) are deducted/taken from the memory allocated by the <b>stream.reassembly.memcap</b> value. <br />
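Summing the default (commented-out) segment pools together with the chunk pool gives a feel for how much of stream.reassembly.memcap is claimed up front - a sketch using the values shown above:

```python
# Default segment pools as (segment_size, prealloc_count) pairs, plus the
# raw-stream chunk pool; all of this comes out of stream.reassembly.memcap.
segment_pools = [(4, 256), (16, 512), (112, 512), (248, 512),
                 (512, 512), (768, 1024), (1448, 1024), (65535, 128)]
chunk_prealloc, chunk_size = 250, 4096

total = sum(size * count for size, count in segment_pools)
total += chunk_prealloc * chunk_size
print(total)  # 12137344 bytes (~11.6 MiB), mostly from the 65535-byte pool
```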
<br />
<h2>
App layer memory settings and consumption</h2>
<blockquote class="tr_bq">
<br />
app-layer:<br />
protocols:<br />
dns:<br />
# memcaps. Globally and per flow/state.<br />
global-memcap: 2gb<br />
state-memcap: 512kb<br />
...<br />
....<br />
http:<br />
enabled: yes<br />
memcap: 2gb</blockquote>
<br />
<br />
Here we have - app-layer dns + http or in this case - 2GB + 2GB = 4GB<br />
<br />
<h2>
Other settings that affect the memory consumption</h2>
<br />
<blockquote class="tr_bq">
...<br />
detect-engine:<br />
- profile: medium<br />
- custom-values:</blockquote>
</div>
<blockquote>
<div class="a3s" id=":15s">
... </div>
</blockquote>
<div class="a3s" id=":15s">
<br />
Some more information:<br />
<a href="https://redmine.openinfosecfoundation.org/projects/suricata/wiki/High_Performance_Configuration" rel="noreferrer" target="_blank">https://redmine.openinfosecfoundation.org/projects/suricata/wiki/High_Performance_Configuration</a><br />
<br />
The more rules you load, the heavier the effect of a switch in this setting will be. For example, a switch from <i>profile: medium</i> to <i>profile: high</i> would be most evident with more than 10000 rules loaded.<br />
<br />
<blockquote class="tr_bq">
mpm-algo: ac</blockquote>
<br />
The pattern-matching algorithm is of importance as well, of course. ac and ac-bs are the most common choices, with ac-bs being less memory intensive but also less performant.<br />
<br />
<br />
<h2>
Grand total generic memory consumption equation</h2>
</div>
So, if we sum up all the config options that have an effect on the total memory consumption by Suricata - with the set up referred to here in mind (af-packet with 16 threads) - we have (in bytes, or MB/GB depending on how you have your yaml memcap settings):<br />
<br />
<number_of_total_detection_threads>*<((1728)+(default_packet_size))>*<max-pending-packets><br />
+<br />
<defrag.memcap><br />
+<br />
< host.memcap><br />
+<br />
< ippair.memcap><br />
+<br />
< flow.memcap><br />
+<br />
<number_of_threads>*<216>* <prealloc-sessions><br />
+<br />
[per af-packet interface enabled]<af-packet_number_of_threads> * <ringsize> * <((1728)+(default_packet_size))><br />
+<br />
<stream.memcap>+<stream.reassembly.memcap><br />
+<br />
<app-layer.protocols.dns.global-memcap><br />
+<br />
<app-layer.protocols.http.memcap><br />
=<br />
Total memory that is configured and should be available to be used by Suricata <br />
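As a rough sanity check, the grand-total equation can be coded up with the example values used throughout this post (a sketch only - swap in your own yaml values; sizeof(struct Packet_) here is the pre-3.0 value of 1728, per the note below):

```python
# Grand-total memory sketch for this article's example yaml (16 threads).
GB, MB = 2**30, 2**20
PACKET_COST = 1728 + 1514  # sizeof(struct Packet_) + default-packet-size

threads = 16
total = (
    threads * PACKET_COST * 1024      # packet pool (max-pending-packets)
    + 512 * MB                        # defrag.memcap
    + 16777216                        # host.memcap
    + 16777216                        # ippair.memcap
    + 1 * GB                          # flow.memcap
    + threads * 216 * 20000           # prealloc-sessions
    + threads * 2048 * PACKET_COST    # af-packet ring (per interface)
    + 14 * GB + 20 * GB               # stream + reassembly memcaps
    + 2 * GB + 2 * GB                 # app-layer dns + http memcaps
)
print(total / GB)  # ~39.7 GB that should be available to Suricata
```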
<br />
<br />
Thank you<br />
<br />
<b>NOTE:</b><br />
As of the developments in Suricata 3.0 - sizeof(struct Packet_) is 936 bytes, not 1728<br />
<br />
<br />
<br />
Failed to open ethX: pfring_open error<br />
This is a blogpost about getting around the following error when using Suricata with pfring:<br />
<br />
<blockquote class="tr_bq">
(source-pfring.c:444) <Error> (ReceivePfringThreadInit) -- [ERRCODE: SC_ERR_PF_RING_OPEN(34)] -<b> Failed to open eth2: pfring_open error. Check if eth2 exists and pf_ring module is loaded.</b><br />
(tmqh-packetpool.c:394) <Info> (PacketPoolInit) -- preallocated 65534 packets. Total memory 230679680<br />
pfring_set_channel_id() failed: -1</blockquote>
<br />
However, in my case eth2 existed, was up and running, and the pf_ring module was loaded. So what happened? In a bit more detail below:<br />
<br />
I experienced this after a git pull update/upgrade of Suricata (latest at the moment of this writing) and after I recompiled pfring, using the latest pfring from git (<a href="https://github.com/ntop/PF_RING.git">https://github.com/ntop/PF_RING.git</a>).<br />
<br />
My set up (linux Debian/Ubuntu like systems):<br />
<br />
<blockquote class="tr_bq">
root@suricata:/var/data/log/suricata# ifconfig eth2<br />
eth2 Link encap:Ethernet HWaddr 00:e0:ed:19:e3:e0<br />
inet6 addr: fe80::2e0:edff:fe19:e3e0/64 Scope:Link<br />
<b>UP</b> BROADCAST RUNNING MULTICAST MTU:1500 Metric:1<br />
RX packets:2962266192 errors:0 dropped:5527381 overruns:0 frame:0<br />
TX packets:19 errors:0 dropped:0 overruns:0 carrier:0<br />
collisions:0 txqueuelen:1000<br />
RX bytes:2867936692537 (2.8 TB) TX bytes:3345 (3.3 KB)</blockquote>
The pfring set up I had was configured like this below:<br />
<blockquote class="tr_bq">
root@suricata:/var/data/log/suricata# modprobe pf_ring transparent_mode=0 min_num_slots=65534</blockquote>
<br />
A regular check reveals nothing abnormal: <br />
<blockquote class="tr_bq">
root@suricata:/var/data/log/suricata# modinfo pf_ring && cat /proc/net/pf_ring/info<br />
filename: /lib/modules/3.14.0-031400-generic/kernel/net/pf_ring/pf_ring.ko<br />
alias: net-pf-27<br />
description: Packet capture acceleration and analysis<br />
author: ntop.org<br />
license: GPL<br />
srcversion: E344EB01757B55E97A93D0C<br />
depends: <br />
vermagic: 3.14.0-031400-generic SMP mod_unload modversions<br />
parm: min_num_slots:Min number of ring slots (uint)<br />
parm: perfect_rules_hash_size:Perfect rules hash size (uint)<br />
parm: transparent_mode:(deprecated) (uint)<br />
parm: enable_debug:Set to 1 to enable PF_RING debug tracing into the syslog (uint)<br />
parm: enable_tx_capture:Set to 1 to capture outgoing packets (uint)<br />
parm: enable_frag_coherence:Set to 1 to handle fragments (flow coherence) in clusters (uint)<br />
parm: enable_ip_defrag:Set to 1 to enable IP defragmentation(only rx traffic is defragmentead) (uint)<br />
parm: quick_mode:Set to 1 to run at full speed but with upto one socket per interface (uint)<br />
PF_RING Version : 6.1.1 (dev:250a67fe1082121ac511a19ebc3fe1fc5f494bfe)<br />
Total rings : 0<br />
<br />
Standard (non DNA/ZC) Options<br />
Ring slots : 65534<br />
Slot version : 16<br />
Capture TX : Yes [RX+TX]<br />
IP Defragment : No<br />
Socket Mode : Standard<br />
Total plugins : 0<br />
Cluster Fragment Queue : 0<br />
Cluster Fragment Discard : 0</blockquote>
Suricata and pfring have been installed as explained here - on the <a href="https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Installation_from_GIT_with_PF_RING_on_Ubuntu_server_LTS_" target="_blank">Suricata redmine wiki.</a><br />
<blockquote class="tr_bq">
root@suricata:~# ldd /usr/local/bin/suricata<br />
linux-vdso.so.1 => (0x00007fff419fe000)<br />
libhtp-0.5.17.so.1 => /usr/local/lib/libhtp-0.5.17.so.1 (0x00007f32af5a1000)<br />
libGeoIP.so.1 => /usr/lib/x86_64-linux-gnu/libGeoIP.so.1 (0x00007f32af372000)<br />
libluajit-5.1.so.2 => /usr/local/lib/libluajit-5.1.so.2 (0x00007f32af103000)<br />
libmagic.so.1 => /usr/lib/x86_64-linux-gnu/libmagic.so.1 (0x00007f32aeee7000)<br />
libcap-ng.so.0 => /usr/local/lib/libcap-ng.so.0 (0x00007f32aece2000)<br />
<b>libpfring.so => /usr/local/lib/libpfring.so</b> (0x00007f32aeaa3000)<br />
<b>libpcap.so.1 => /usr/local/pfring/lib/libpcap.so</b>.1 (0x00007f32ae80e000)<br />
libnet.so.1 => /usr/lib/x86_64-linux-gnu/libnet.so.1 (0x00007f32ae5f5000)<br />
libjansson.so.4 => /usr/lib/x86_64-linux-gnu/libjansson.so.4 (0x00007f32ae3e8000)<br />
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f32ae1ca000)<br />
libyaml-0.so.2 => /usr/lib/x86_64-linux-gnu/libyaml-0.so.2 (0x00007f32adfaa000)<br />
libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f32add6b000)<br />
libnss3.so => /usr/lib/x86_64-linux-gnu/libnss3.so (0x00007f32ada31000)<br />
libnspr4.so => /usr/lib/x86_64-linux-gnu/libnspr4.so (0x00007f32ad7f4000)<br />
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f32ad42e000)<br />
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f32ad215000)<br />
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f32acf0f000)<br />
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f32acd0a000)<br />
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f32acaf4000)<br />
/lib64/ld-linux-x86-64.so.2 (0x00007f32af7d4000)<br />
libnuma.so.1 => /usr/lib/x86_64-linux-gnu/libnuma.so.1 (0x00007f32ac8e9000)<br />
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f32ac6e0000)<br />
libnssutil3.so => /usr/lib/x86_64-linux-gnu/libnssutil3.so (0x00007f32ac4b5000)<br />
libplc4.so => /usr/lib/x86_64-linux-gnu/libplc4.so (0x00007f32ac2b0000)<br />
libplds4.so => /usr/lib/x86_64-linux-gnu/libplds4.so (0x00007f32ac0ab000)</blockquote>
<br />
<br />
Further more my Suricata start line was like this:<br />
<br />
<blockquote class="tr_bq">
suricata --pfring-int=eth2 --pfring-cluster-id=99 --pfring-cluster-type=cluster_flow -c /etc/suricata/peter-yaml/suricata-pfring.yaml --pidfile /var/run/suricata.pid -v</blockquote>
<br />
Even though everything seems fine - I could not start Suricata with pfring:<br />
<br />
<blockquote class="tr_bq">
[31591] 5/8/2015 -- 17:10:31 - (tmqh-packetpool.c:394) <Info> (PacketPoolInit) -- preallocated 65534 packets. Total memory 230679680<br />
<b>pfring_set_channel_id() failed: -1</b><br />
[31591] 5/8/2015 -- 17:10:31 - (source-pfring.c:444) <Error> (ReceivePfringThreadInit) -- [ERRCODE: SC_ERR_PF_RING_OPEN(34)] - <b>Failed to open eth2: pfring_open error. Check if eth2 exists and pf_ring module is loaded.</b><br />
[31592] 5/8/2015 -- 17:10:31 - (tmqh-packetpool.c:394) <Info> (PacketPoolInit) -- preallocated 65534 packets. Total memory 230679680<br />
pfring_set_channel_id() failed: -1<br />
[31592] 5/8/2015 -- 17:10:31 - (source-pfring.c:444) <Error> (ReceivePfringThreadInit) -- [ERRCODE: SC_ERR_PF_RING_OPEN(34)] - Failed to open eth2: pfring_open error. Check if eth2 exists and pf_ring module is loaded.<br />
[31593] 5/8/2015 -- 17:10:32 - (tmqh-packetpool.c:394) <Info> (PacketPoolInit) -- preallocated 65534 packets. Total memory 230679680<br />
pfring_set_channel_id() failed: -1<br />
[31593] 5/8/2015 -- 17:10:32 - (source-pfring.c:444) <Error> (ReceivePfringThreadInit) -- [ERRCODE: SC_ERR_PF_RING_OPEN(34)] - Failed to open eth2: pfring_open error. Check if eth2 exists and pf_ring module is loaded.<br />
[31594] 5/8/2015 -- 17:10:32 - (tmqh-packetpool.c:394) <Info> (PacketPoolInit) -- preallocated 65534 packets. Total memory 230679680<br />
pfring_set_channel_id() failed: -1<br />
[31594] 5/8/2015 -- 17:10:32 - (source-pfring.c:444) <Error> (ReceivePfringThreadInit) -- [ERRCODE: SC_ERR_PF_RING_OPEN(34)] - Failed to open eth2: pfring_open error. Check if eth2 exists and pf_ring module is loaded.<br />
....</blockquote>
<br />
I was getting that error even though I reloaded the pfring module:<br />
<blockquote class="tr_bq">
rmmod pf_ring<br />
modprobe pf_ring transparent_mode=0 min_num_slots=65534</blockquote>
the way I usually do...<br />
<br />
In short - this is the fix:<br />
<br />
<blockquote class="tr_bq">
LD_LIBRARY_PATH=/usr/local/pfring/lib suricata --pfring-int=eth2 --pfring-cluster-id=99 --pfring-cluster-type=cluster_flow -c /etc/suricata/peter-yaml/suricata-pfring.yaml --pidfile /var/run/suricata.pid -v</blockquote>
<br />
Notice the use of:<br />
<blockquote class="tr_bq">
<b>LD_LIBRARY_PATH=/usr/local/pfring/lib</b> suricata </blockquote>
<br />
More information about <a href="http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html" target="_blank">what is LD_LIBRARY_PATH</a> <br />
<br />
To get rid of LD_LIBRARY_PATH you can create a pfring.conf file in <i><b>/etc/ld.so.conf.d/</b></i> containing:<br />
<blockquote class="tr_bq">
<pre><b><i>/usr/local/pfring/lib</i></b>
</pre>
</blockquote>
and run<br />
<blockquote class="tr_bq">
sudo ldconfig
</blockquote>
<br />
<br />
<br />
<br />
Suricata - wildcard rule loading<br />
Recently (a few hours ago, as of writing this blog) a new feature was introduced (<a href="https://github.com/gozzy" target="_blank">thanks to gozzy</a>) in Suricata IDS/IPS/NSM - wildcard rule loading capability.<br />
<br />
At the moment the feature is available in our git master. If you are wondering how to get that up and running, or do not have the latest Suricata from git master - here is a quick tutorial (Debian/Ubuntu):<br />
<br />
<b>1)</b><br />
<blockquote class="tr_bq">
apt-get -y install libpcre3 libpcre3-dbg libpcre3-dev build-essential \<br />
autoconf automake libtool libpcap-dev libnet1-dev libyaml-0-2 \<br />
libyaml-dev zlib1g zlib1g-dev libmagic-dev libcap-ng-dev \<br />
libjansson-dev pkg-config libnss3-dev libnspr4-dev git-core</blockquote>
<br />
<b>2)</b><br />
<blockquote class="tr_bq">
git clone git://phalanx.openinfosecfoundation.org/oisf.git && cd oisf/ && git clone https://github.com/ironbee/libhtp.git -b 0.5.x</blockquote>
<br />
<b>3)</b><br />
<blockquote class="tr_bq">
./autogen.sh && \<br />
./configure --prefix=/usr/ --sysconfdir=/etc/ --localstatedir=/var/ \<br />
--enable-geoip --enable-unix-socket \<br />
--with-libnss-libraries=/usr/lib --with-libnss-includes=/usr/include/nss/ \<br />
--with-libnspr-libraries=/usr/lib --with-libnspr-includes=/usr/include/nspr \<br />
&& make clean && make && make install-full && ldconfig</blockquote>
<br />
To confirm - <br />
<blockquote class="tr_bq">
suricata --build-info</blockquote>
<br />
Now that you have the latest Suricata up and running - here is what this blog post is all about: wildcard rule loading for Suricata IDPS. Some possible scenarios of use are loading wildcarded rules from the:<br />
<br />
<h2>
Command line </h2>
<br />
Please note the <b>"quotes"</b> ! <br />
<blockquote class="tr_bq">
suricata -c /etc/suricata/suricata.yaml -v -i eth0 -S "/etc/suricata/rules/*.rules"</blockquote>
<br />
Pretty self explanatory. The command above will load all .rules files from <i>/etc/suricata/rules/</i><br />
<blockquote class="tr_bq">
suricata -c /etc/suricata/suricata.yaml -v -i eth0 -S "/etc/suricata/rules/emerging*"</blockquote>
The command above will load all <i>emerging*</i> rule files from /etc/suricata/rules/<br />
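The wildcard matching follows ordinary shell-glob semantics; a quick sketch (the file names here are just examples) of which files a pattern like "emerging*" would select:

```python
# Preview which rule files a shell-glob pattern selects, fnmatch-style.
from fnmatch import fnmatch

rule_files = ["emerging-dns.rules", "emerging-exploit.rules",
              "botcc.rules", "local.rules"]
selected = [f for f in rule_files if fnmatch(f, "emerging*")]
print(selected)  # ['emerging-dns.rules', 'emerging-exploit.rules']
```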
<br />
<h2>
Config file</h2>
<br />
You can also set that up in the suricata.yaml config file. Here is how (please note the <b>"quotes"</b>).<br />
<br />
In your rules section in the suricata.yaml:<br />
<br />
<blockquote class="tr_bq">
# Set the default rule path here to search for the files.<br />
# if not set, it will look at the current working dir<br />
default-rule-path: /etc/suricata/rules<br />
rule-files:<br />
#- "*.rules"<br />
- "emerging*"<br />
#- botcc.rules<br />
#- ciarmy.rules<br />
#- compromised.rules<br />
#- drop.rules<br />
#- dshield.rules<br />
#- emerging-activex.rules<br />
#- emerging-attack_response.rules </blockquote>
The set up above will load all emerging* files and the rules residing in them. Then you can start Suricata any way you would like, for example:<br />
<br />
<blockquote class="tr_bq">
suricata -c /etc/suricata/suricata.yaml -v -i eth0</blockquote>
<blockquote class="tr_bq">
suricata -c /etc/suricata/suricata.yaml -v --af-packet=eth0 </blockquote>
<br />
and in suricata.log you should see all emerging* rule files being loaded:<br />
<blockquote class="tr_bq">
<br />
......<br />
[13558] 22/5/2015 -- 17:19:39 - (reputation.c:620) <Info> (SRepInit) -- IP reputation disabled<br />
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-activex.rules<br />
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-attack_response.rules<br />
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-chat.rules<br />
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-current_events.rules<br />
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-deleted.rules<br />
[13558] 22/5/2015 -- 17:19:39 - (detect.c:420) <Warning> (ProcessSigFiles) -- [ERRCODE: SC_ERR_NO_RULES(42)] - No rules loaded from /etc/suricata/rules/emerging-deleted.rules<br />
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-dns.rules<br />
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-dos.rules<br />
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-exploit.rules<br />
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-ftp.rules<br />
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-games.rules<br />
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-icmp.rules<br />
[13558] 22/5/2015 -- 17:19:39 - (detect.c:420) <Warning> (ProcessSigFiles) -- [ERRCODE: SC_ERR_NO_RULES(42)] - No rules loaded from /etc/suricata/rules/emerging-icmp.rules<br />
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-icmp_info.rules<br />
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-imap.rules<br />
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-inappropriate.rules<br />
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-info.rules<br />
[13558] 22/5/2015 -- 17:19:39 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-malware.rules<br />
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-misc.rules<br />
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-mobile_malware.rules<br />
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-netbios.rules<br />
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-p2p.rules<br />
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-policy.rules<br />
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-pop3.rules<br />
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-rpc.rules<br />
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-scada.rules<br />
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-scan.rules<br />
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-shellcode.rules<br />
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-smtp.rules<br />
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-snmp.rules<br />
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-sql.rules<br />
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-telnet.rules<br />
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-tftp.rules<br />
[13558] 22/5/2015 -- 17:19:40 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-trojan.rules<br />
[13558] 22/5/2015 -- 17:19:41 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-user_agents.rules<br />
[13558] 22/5/2015 -- 17:19:41 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-voip.rules<br />
[13558] 22/5/2015 -- 17:19:41 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-web_client.rules<br />
[13558] 22/5/2015 -- 17:19:41 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-web_server.rules<br />
[13558] 22/5/2015 -- 17:19:41 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-web_specific_apps.rules<br />
[13558] 22/5/2015 -- 17:19:43 - (detect.c:410) <Info> (ProcessSigFiles) -- Loading rule file: /etc/suricata/rules/emerging-worm.rules<br />
.......</blockquote>
<br />
You can also use it like this:<br />
<br />
<blockquote class="tr_bq">
# Set the default rule path here to search for the files.<br />
# if not set, it will look at the current working dir<br />
default-rule-path: /etc/suricata/rules<br />
rule-files:<br />
#- "*.rules"<br />
- "*web*"<br />
#- "emerging*"<br />
#- botcc.rules<br />
#- ciarmy.rules<br />
#- compromised.rules<br />
#- drop.rules </blockquote>
<br />
<br />
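If you are unsure what a pattern like <i>"*web*"</i> will actually pick up, you can expand the same glob in the shell first. A quick sketch on a throwaway directory - the /tmp path and file names below are just illustrative; on a real sensor run the same glob under your <i>default-rule-path</i>:<br />

```shell
# throwaway directory standing in for /etc/suricata/rules
rm -rf /tmp/rules-demo && mkdir -p /tmp/rules-demo
touch /tmp/rules-demo/emerging-web_client.rules \
      /tmp/rules-demo/emerging-web_server.rules \
      /tmp/rules-demo/emerging-dns.rules
# expand the same pattern the yaml rule-files entry uses
ls /tmp/rules-demo/*web*
```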
That's it.<br />
<br />
<br />Pevmahttp://www.blogger.com/profile/07698265905172078652noreply@blogger.com2tag:blogger.com,1999:blog-6352560843819453131.post-48037050454427410552015-05-21T09:50:00.002-07:002015-05-21T10:51:04.728-07:00Suricata - multiple interface configuration with af-packet<br />
<br />
Suricata is a very flexible and powerful multithreaded IDS/IPS/NSM. <br />
<br />
Here is a simple tutorial (tested on Debian/Ubuntu) on how to configure multiple interfaces for af-packet mode with Suricata (af-packet mode works by default/out of the box on kernels 3.2 and above). Let's say you would like to start simple IDSing with Suricata on eth1, eth2 and eth3 on a particular machine/server.<br />
<br />
<br />
In your suricata.yaml config (usually located in <i>/etc/suricata/</i>) find the af-packet section and do the following:<br />
<br />
<br />
<blockquote class="tr_bq">
af-packet:<br />
- interface: eth2<br />
threads: 16<br />
<b> cluster-id: 98</b><br />
cluster-type: cluster_cpu<br />
defrag: no<br />
use-mmap: yes<br />
ring-size: 200000<br />
checksum-checks: kernel<br />
- interface: eth1<br />
threads: 2<br />
<b>cluster-id: 97</b><br />
cluster-type: cluster_flow<br />
defrag: no<br />
use-mmap: yes<br />
ring-size: 30000<br />
- interface: eth3<br />
threads: 2<br />
<b>cluster-id: 96</b><br />
cluster-type: cluster_flow<br />
defrag: no<br />
use-mmap: yes<br />
ring-size: 20000</blockquote>
Of course feel free to adjust the <i>ring-sizes</i> (packet buffers) as you see fit for your particular set up.<br />
<b>NOTE:</b> do not forget to use a different cluster-id for each interface<br />
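Since a duplicated cluster-id is an easy mistake to make, a quick grep can catch it before startup. A sketch on a throwaway snippet - on a real sensor point the grep at /etc/suricata/suricata.yaml; the sample below intentionally repeats an id so there is something to find:<br />

```shell
# write a throwaway af-packet snippet (note cluster-id 98 appears twice)
cat > /tmp/afpacket-check.yaml <<'EOF'
  - interface: eth2
    cluster-id: 98
  - interface: eth1
    cluster-id: 97
  - interface: eth3
    cluster-id: 98
EOF
# print any cluster-id used more than once (empty output means the config is fine)
grep -oE 'cluster-id: *[0-9]+' /tmp/afpacket-check.yaml | sort | uniq -d
```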
<br />
So now you can start Suricata like so:<br />
<br />
<blockquote class="tr_bq">
suricata -c /etc/suricata/suricata.yaml -v --af-packet </blockquote>
<br />
The above will start Suricata listening on eth2 with 16 threads and <i>cluster-type: cluster_cpu</i>, and on eth1 and eth3 with 2 threads each and <i>cluster-type: cluster_flow</i>. Have a look in your suricata.log file for more info.<br />
<br />
If you would like to just test and see how it goes for eth2 only: <br />
<blockquote class="tr_bq">
suricata -c /etc/suricata/suricata.yaml -v --af-packet=eth2 </blockquote>
<br />
...easy and flexible. <br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />Pevmahttp://www.blogger.com/profile/07698265905172078652noreply@blogger.com0tag:blogger.com,1999:blog-6352560843819453131.post-16063964675204857472015-04-25T00:44:00.000-07:002015-04-25T00:46:58.401-07:00Suricata - check loaded yaml config settings with --dump-config<br />
<br />
There is a very useful command available to Suricata IDS/IPS/NSM :<br />
<blockquote class="tr_bq">
suricata --dump-config</blockquote>
<br />
The command above will dump all the config parameters and their respective values that are loaded by Suricata from the config file. You can run the command in any case - it does not matter if Suricata is running or not.<br />
<br />
There is a peculiarity however. People sometimes assume that the command above dumps the config values currently loaded by a running Suricata... in some cases it will and in some cases it will not.<br />
<br />
So what does it depend on? Simple:<br />
<blockquote class="tr_bq">
suricata --dump-config</blockquote>
<br />
will dump the config settings that are loaded (or will be loaded) by Suricata by default from<br />
<i><b>/etc/suricata/suricata.yaml</b></i><br />
<br />
So if you are running Suricata with a config file called <b><i>suricata-test.yaml</i></b> (or a suricata.yaml located in a different directory) - you will not see those settings... unless you specify that config file explicitly:<br />
<blockquote class="tr_bq">
suricata --dump-config -c /etc/suricata/suricata-test.yaml</blockquote>
Here is a real case example.<br />
I ran Suricata for a specific test where I had specified the defrag memcap to be 512mb:<br />
<blockquote class="tr_bq">
defrag:<br />
<b> memcap: 512mb</b><br />
hash-size: 65536<br />
trackers: 65535 # number of defragmented flows to follow<br />
max-frags: 65535 # number of fragments to keep (higher than trackers)<br />
prealloc: yes<br />
timeout: 60</blockquote>
<br />
Suricata up and running:<br />
<blockquote class="tr_bq">
root@LTS-64-1:~/Work # ps aux |grep suricata<br />
root 8109 2.3 7.6 878444 308372 pts/6 Sl+ 12:45 1:02 suricata -c /etc/suricata/suricata-test.yaml --af-packet=eth0 -v<br />
root@LTS-64-1:~/Work #</blockquote>
<br />
And here is the peculiarity that this blog post is trying to emphasize:<br />
<blockquote class="tr_bq">
root@LTS-64-1:~/Work # suricata --dump-config |grep defrag.memcap<br />
<b>defrag.memcap = 32mb</b><br />
root@LTS-64-1:~/Work # suricata --dump-config <b><i>-c /etc/suricata/suricata-test.yaml</i></b> |grep defrag.memcap<br />
<b>defrag.memcap = 512mb</b><br />
root@LTS-64-1:~/Work # </blockquote>
<br />
<br />
<br />
<b><i>suricata --dump-config</i></b> dumps the settings loaded (or to be loaded) from the default location <i><b>/etc/suricata/suricata.yaml</b></i>. If you are running Suricata with a yaml config with a different name or in a different location than the default, then in order to get those settings you need to specify that particular yaml location, like so:<br />
<br />
<b><i>suricata --dump-config -c /etc/local/some_test_dir/suricata/suricata-test.yaml</i></b><br />
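Because the dump is plain "key = value" text, diffing the output of the two runs shows every setting your custom yaml overrides. A sketch of the idea - the two printf lines below just stand in for the saved output of <i>suricata --dump-config</i> and <i>suricata --dump-config -c &lt;your yaml&gt;</i> on a real sensor:<br />

```shell
# stand-ins for: suricata --dump-config              > /tmp/dump-default.txt
#                suricata --dump-config -c <yaml>    > /tmp/dump-test.txt
printf 'defrag.memcap = 32mb\ndefrag.hash-size = 65536\n'  > /tmp/dump-default.txt
printf 'defrag.memcap = 512mb\ndefrag.hash-size = 65536\n' > /tmp/dump-test.txt
# every differing line is a setting the custom yaml overrides
diff /tmp/dump-default.txt /tmp/dump-test.txt || true
```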
<br />
<br />
Thanks<br />
<br />
related article:<br />
<a href="http://pevma.blogspot.se/2014/02/suricata-override-config-parameters-on.html" target="_blank">http://pevma.blogspot.se/2014/02/suricata-override-config-parameters-on.html</a><br />
<br />
<br />Pevmahttp://www.blogger.com/profile/07698265905172078652noreply@blogger.com1tag:blogger.com,1999:blog-6352560843819453131.post-9937292827867967142015-04-06T04:01:00.000-07:002015-04-06T04:01:16.863-07:00Suricata IDPS - Application layer anomalies protocol detection<br />
<br />
<br />
Suricata IDS/IPS/NSM also allows you to do application layer anomaly detection.<br />
I started talking to <a href="https://twitter.com/inliniac" target="_blank">inliniac</a> about protocol anomaly detection rules one day in the Suricata IRC chat room... which evolved into a discussion that resulted in us updating the rule sets with some examples of how to do that.<br />
<br />
Below are a few example rules:<br />
<br />
<h3>
HTTP </h3>
<blockquote class="tr_bq">
alert tcp any any -> any ![80,8080] (msg:"SURICATA HTTP not tcp port 80, 8080"; flow:to_server; app-layer-protocol:http; sid:2271001; rev:1;)</blockquote>
The above rule matches HTTP traffic that does not use destination port 80 or 8080.<br />
<br />
<br />
<blockquote class="tr_bq">
alert tcp any any -> any 80 (msg:"SURICATA Port 80 but not HTTP"; flow:to_server; app-layer-protocol:!http; sid:2271002; rev:1;)</blockquote>
The above rule is the reverse of the previous one - it will alert if TCP traffic to destination port 80 is not HTTP. <br />
<br />
Here is another example<br />
<br />
<h3>
TLS</h3>
<blockquote class="tr_bq">
alert tcp any any -> any 443 (msg:"SURICATA Port 443 but not TLS"; flow:to_server; app-layer-protocol:!tls; sid:2271003; rev:1;)</blockquote>
<br />
<h3>
HTTPS</h3>
Detecting clear-text HTTP traffic over the HTTPS port -<br />
<br /><blockquote class="tr_bq">
alert http any any -> any 443 (msg:"SURICATA HTTP clear text on port 443"; flow:to_server; app-layer-protocol:http; sid:2271019; rev:1;)</blockquote>
<br />
You can find the full ruleset (open source and free to use) with examples for HTTP, HTTPS, TLS, FTP, SMTP, SSH, IMAP, SMB, DCERPC, DNS, MODBUS application layer anomaly detection here:<br />
<br />
<a href="https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Protocol_Anomalies_Detection" target="_blank">https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Protocol_Anomalies_Detection</a><br />
<br /><br />
<br />
<br />
<br />
<br />
<br />Pevmahttp://www.blogger.com/profile/07698265905172078652noreply@blogger.com0tag:blogger.com,1999:blog-6352560843819453131.post-58258233236244646622015-02-19T13:11:00.002-08:002015-02-21T06:34:30.794-08:00Chasing MTUs<br />
Setting up (configuring) the right MTU (maximum transmission unit) size when running Suricata IDS/IPS.<br />
<br />
Sometimes you can end up in a situation as follows :<br />
<br />
<br />
<blockquote class="tr_bq">
capture.kernel_packets | AFPacketeth12 | 1143428204<br />
decoder.pkts | AFPacketeth12 | 1143428143<br />
decoder.invalid | AFPacketeth12 | 416889536</blockquote>
<br />
a whole lot of <b>decoder.invalid</b>. Not good. What could be the reason for that? One thing you should check right away is the MTU of the traffic that is being mirrored.<br />
<br />
What does it mean? Well there is the MTU that you set up on the server that you run Suricata on and there is the MTU that is present in the "mirrored" traffic.<br />
<br />
What is the difference? Why does it matter?<br />
It matters because if not set correctly it will result in a lot of <b>decoder.invalid</b>s (dropped by Suricata) and you will be missing out on a lot of traffic inspection.<br />
Example: if the sniffing interface that Suricata runs on has an MTU of 1500 and the traffic that you mirror contains jumbo frames (MTU 9000) - most likely your decoder.invalid counter will show a <i>whole lotta love</i> in your stats.log.<br />
<br />
How can you adjust the MTU on the interface (NIC)? An example:<br />
First have a look at the current value:<br />
<blockquote class="tr_bq">
ifconfig eth0</blockquote>
then adjust it<br />
<blockquote class="tr_bq">
ifconfig eth0 mtu 1514</blockquote>
<br />
By the way - <a href="http://en.wikipedia.org/wiki/Maximum_transmission_unit" target="_blank">what could be the max size of the MTU</a> (and what sizes are there in general)?<br />
(short answer - 9216) <br />
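Note that ifconfig changes do not survive a reboot. To make the MTU boot persistent on Debian/Ubuntu you can set it in /etc/network/interfaces - a sketch, assuming the interface is eth0 and that a larger MTU is actually needed (see the NOTE below about not blindly jumping to 9216):<br />

```
# /etc/network/interfaces (fragment) - adjust interface name and MTU value
auto eth0
iface eth0 inet manual
    up ip link set dev eth0 mtu 9216
```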
<br />
<br />
That is the easy part :). There are situations where you do not know the MTU of the "mirrored" traffic. There are a few ways to find out - ask the network team/guy, make a phone call or two, or manually test and set values on the NIC until you find a middle ground... however, you can also make use of the procedure shown below (in order to get the byte size of the MTU):<br />
<br />
<br />
On your Server/Sensor<br />
1)<br />
Stop Suricata.<br />
<br />
2)<br />
Change the MTU to 9216<br />
(the interface that Suri is sniffing on)<br />
<br />
<blockquote class="tr_bq">
example - ifconfig eth0 mtu 9216</blockquote>
(non boot persistent)<br />
<br />
3)<br />
Install <span class="il">tcpstat</span> if you do not have it:<br />
<blockquote class="tr_bq">
apt-get install <span class="il">tcpstat</span></blockquote>
<br />
4)<br />
Run the following (substitute the interface name with the one Suricata is sniffing on):<br />
<blockquote class="tr_bq">
<span class="il">tcpstat</span> -i eth0 -l -o "Time:%S\tn=%n\tavg=%a\tstddev=%d\tbps=%b\tMaxPacketSize=%M\n" 5</blockquote>
<div class="a3s" id=":1bo">
5)<br />
Give it a minute or two.<br />
If there are jumbo frames you should see that in the output (something like<br />
"MaxPacketSize=9000"); if not, you should see whatever the actual max size is.<br />
<br />
6)<br />
Adjust your interface MTU accordingly - the one that Suricata is sniffing<br />
on - and start Suricata.</div>
<br />
7)<br />
Let it run for a while - say 1 hr - and have a look at the <b>decoder.invalid</b> stats in stats.log.<br />
<br />
<b>NOTE:</b> Do <u><b>NOT</b></u> just set the MTU to 9216 directly ("<i>just to be on the safe side</i>"). Only set it that high if needed!!<br />
<br />
<b>NOTE:</b> The example below does not use the <b><i>"-l"</i></b> option of tcpstat denoted in <b><i>point 4)</i></b> above - look at <i><b>man tcpstat</b></i> for more info<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKAnUKg7CjbssyJx0fmHFMHYYJUFb2QFy8lSAl7Q0k3IuiB0adfv8KcizMrP2kSYjxZw9jMFaWMC0sg0s2VmIy7D3bAiEpBxcbzloA1EdZtx0zS2DM3oN4cYJSOwMewQMt5FpbE5lMwXU/s1600/MaxSize.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKAnUKg7CjbssyJx0fmHFMHYYJUFb2QFy8lSAl7Q0k3IuiB0adfv8KcizMrP2kSYjxZw9jMFaWMC0sg0s2VmIy7D3bAiEpBxcbzloA1EdZtx0zS2DM3oN4cYJSOwMewQMt5FpbE5lMwXU/s1600/MaxSize.PNG" height="186" width="400" /></a></div>
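After the restart you will want to keep an eye on that counter. The decoder.invalid value can be pulled out of stats.log with a grep/awk one-liner - a sketch shown here against a small sample of stats.log output (on a live sensor read /var/log/suricata/stats.log instead, and wrap the last line in watch(1) to follow it):<br />

```shell
# sample of the relevant stats.log lines (illustrative values)
cat > /tmp/stats-sample.log <<'EOF'
capture.kernel_packets    | AFPacketeth12 | 1143428204
decoder.pkts              | AFPacketeth12 | 1143428143
decoder.invalid           | AFPacketeth12 | 416889536
EOF
# last reported invalid-packet count
grep 'decoder.invalid' /tmp/stats-sample.log | tail -n 1 | awk -F'|' '{gsub(/ /,"",$3); print $3}'
```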
<br />
<br />
(tested on Ubuntu/Debian)<br />
That's all ....feedback welcome.<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />Pevmahttp://www.blogger.com/profile/07698265905172078652noreply@blogger.com0tag:blogger.com,1999:blog-6352560843819453131.post-84756164663230475892014-12-07T04:48:00.001-08:002014-12-07T04:53:42.353-08:00Suricata - disable detection mode<br />
There is a trick for using Suricata IDS/IPS that has come in handy in my experience, and I thought I would share it as it might be useful to others.<br />
<br />
My HW was a 1x E5-2680 CPU with 64 GB RAM and an 82599EB 10-Gigabit SFI/SFP+ NIC, mirroring about 9.5Gbps:<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2EGIcGVTCKGstMaw93vZ3sySMv-nhrNFayLzPeBhTMxxfR3utMFSukKNm_oAUEGmSOHHBd1lEbdk2W0nIxHvJHjRiNoIKD9psbeDP3ZcaHdA7zwCLMW_tAYWyjRJq-xqNaOWNy2kV-pU/s1600/tcpstat.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2EGIcGVTCKGstMaw93vZ3sySMv-nhrNFayLzPeBhTMxxfR3utMFSukKNm_oAUEGmSOHHBd1lEbdk2W0nIxHvJHjRiNoIKD9psbeDP3ZcaHdA7zwCLMW_tAYWyjRJq-xqNaOWNy2kV-pU/s1600/tcpstat.PNG" height="97" width="400" /></a></div>
<br />
<br />
<br />
So I wanted to get a better understanding of what the max log output from Suricata would be in that particular environment, without taking detection into consideration. Just pure event logging and profiling - DNS, HTTP, TLS, SSH, file transactions, SMTP - all of these.<br />
<br />
Suricata offers just that - with really low HW requirements (keeping in mind we are looking at 9.5Gbps). What I did was compile Suricata with the "<b>--disable-detection</b>" switch. It simply disables detection (alerts) in Suricata - however, every other logging/parsing capability is preserved (DNS, HTTP, TLS, SSH, file transactions, SMTP). So I downloaded a fresh copy:<br />
<br />
<blockquote class="tr_bq">
git clone git://phalanx.openinfosecfoundation.org/oisf.git && cd oisf/ && git clone https://github.com/ironbee/libhtp.git -b 0.5.x</blockquote>
<br />
then<br />
<br />
<blockquote class="tr_bq">
./autogen.sh && ./configure <b>--disable-detection</b> && make clean && make && make install && ldconfig</blockquote>
<br />
enabled all JSON outputs in the eve-log section in suricata.yaml, and confirmed that detection was disabled:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5KJDml1MbVR-LpTes_UKqhajvqdb-hohaRAgH51ADPeqVy7fYS5luVTp6eYlAOdZBIfcgEyAHjh9d5EksCAqZ1jAAblH5VostJQe5gi_K8jyuRhs164p-c6uYHtvUe8yJ687uZsNy8Zc/s1600/build-info.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5KJDml1MbVR-LpTes_UKqhajvqdb-hohaRAgH51ADPeqVy7fYS5luVTp6eYlAOdZBIfcgEyAHjh9d5EksCAqZ1jAAblH5VostJQe5gi_K8jyuRhs164p-c6uYHtvUe8yJ687uZsNy8Zc/s1600/build-info.PNG" height="245" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
and started Suricata.<br />
<br />
Careful what you ask for :) - I was getting 10-15K logs per second:<br />
<br />
<blockquote class="tr_bq">
root@suricata:/var/log/suricata# tail -f eve.json |perl -e 'while (<>) {$l++;if (time > $e) {$e=time;print "$l\n";$l=0}}'<br />
1<br />
6531<br />
13860<br />
10704<br />
10877<br />
10389<br />
10664<br />
10205<br />
9996<br />
14798<br />
15427<br />
14223</blockquote>
<br />
And the HW(CPU/RAM/HDD) usage was not much at all:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyXrfuEzFymoBRqhwh78Wr0KIunF41SH2_aBbfC0fJXB3lSjQiDe78hGWp6DU3w0KjkOGWh0TokTxfCLh1NaYUII39oxV8yoMgsfIGcWj9giDWdb5utlFpSRigxe0ZivIUmgxSYfQA_Yw/s1600/htop.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyXrfuEzFymoBRqhwh78Wr0KIunF41SH2_aBbfC0fJXB3lSjQiDe78hGWp6DU3w0KjkOGWh0TokTxfCLh1NaYUII39oxV8yoMgsfIGcWj9giDWdb5utlFpSRigxe0ZivIUmgxSYfQA_Yw/s1600/htop.PNG" height="152" width="400" /></a></div>
<br />
As you can see - 30-40% CPU with a third of the RAM used. <br />
<br />
I used this command to count logs per second -<br />
<i><b>tail -f eve.json |perl -e 'while (<>) {$l++;if (time > $e) {$e=time;print "$l\n";$l=0}}'</b></i><br />
more about similar commands and counting Suricata logs can be found here:<br />
<br />
<ul>
<li><a href="http://pevma.blogspot.se/2014/06/24-hr-full-log-run-with-suricata-idps.html">http://pevma.blogspot.se/2014/06/24-hr-full-log-run-with-suricata-idps.html</a></li>
<li><a href="http://pevma.blogspot.se/2014/05/logs-per-second-on-evejson-good-and-bad.html" target="_blank">http://pevma.blogspot.se/2014/05/logs-per-second-on-evejson-good-and-bad.html </a></li>
</ul>
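If you prefer not to keep a tail -f running, a similar per-second count can be derived after the fact from the timestamp field of each eve.json record. A sketch against a tiny sample file - the /tmp path and records are just illustrative, and the sed assumes the standard eve timestamp format:<br />

```shell
# a small eve.json sample (on a sensor use /var/log/suricata/eve.json)
cat > /tmp/eve-sample.json <<'EOF'
{"timestamp":"2014-12-07T04:48:01.000001","event_type":"dns"}
{"timestamp":"2014-12-07T04:48:01.000002","event_type":"http"}
{"timestamp":"2014-12-07T04:48:02.000003","event_type":"tls"}
EOF
# group events by the second in their timestamp field: count, then second
sed 's/.*"timestamp":"\([^.]*\)\..*/\1/' /tmp/eve-sample.json | sort | uniq -c
```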
<br />
So this can come in handy in a few scenarios:<br />
<ul>
<li>you can actually do a "profiling" run on that particular set up in order to size up a SIEM specification </li>
<li>and/or you can size up your prod IDS/IPS deployment needs</li>
<li>you can also just feed all those logs info to an existing log analysis system</li>
</ul>
<br />
Thanks<br />
<br />Pevmahttp://www.blogger.com/profile/07698265905172078652noreply@blogger.com0tag:blogger.com,1999:blog-6352560843819453131.post-85257960686688484642014-12-07T04:47:00.000-08:002014-12-07T11:34:46.241-08:00Suricatasc unix socket interaction for Suricata IDS/IPS<br />
<br />
Suricatasc is a unix socket interaction script that is automatically installed when one compiles/installs Suricata IDS/IPS. An in depth description, prerequisites and how to documentation is located here - <a href="https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Interacting_via_Unix_Socket">https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Interacting_via_Unix_Socket</a><br />
<br />
However, let's look at a quick usage example - it can come in very handy in certain situations.<br />
<br />
Once you have the unix-command socket enabled in suricata.yaml:<br />
<br />
<blockquote class="tr_bq">
unix-command:<br />
enabled: yes<br />
#filename: custom.socket # use this to specify an alternate file</blockquote>
<br />
the traditional way to use the script would be to type <b><i>suricatasc</i></b> and hit Enter (on the machine running Suricata):<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNsfUKfbgRUxuoj2L2RHI8c4LJkJJEyswC-6NMwcvAHkAKZSnZwrXLtD1w-UOa6bLNCcbvbsy3ufkJwZCMgWCc8J-mfcrbFK2fM2UYMC_T_NVyH0E126hkz6YkPvbE7flwd5TSSZQlTlU/s1600/suricatasc-1.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNsfUKfbgRUxuoj2L2RHI8c4LJkJJEyswC-6NMwcvAHkAKZSnZwrXLtD1w-UOa6bLNCcbvbsy3ufkJwZCMgWCc8J-mfcrbFK2fM2UYMC_T_NVyH0E126hkz6YkPvbE7flwd5TSSZQlTlU/s1600/suricatasc-1.PNG" height="131" width="400" /></a></div>
<br />
<br />
<br />
<br />
However, you can also pass a command directly as a command line parameter, for example:<br />
<blockquote class="tr_bq">
<i><b>root@suricata:~# suricatasc -c version</b></i></blockquote>
<br />
like so:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijlgAYW9gqyzDmQw4FqmKpMlty_KpywyiihBh6jW9s2Y8FikDGz-EHn4RyAznZai_WlNFFZATclInVoPqokmNbrEMQlEfuNwqTyzNRkSOLy8wvFhl_LVGqZboupAJR6cUhn12jbsx3Bi4/s1600/suricatasc-2.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijlgAYW9gqyzDmQw4FqmKpMlty_KpywyiihBh6jW9s2Y8FikDGz-EHn4RyAznZai_WlNFFZATclInVoPqokmNbrEMQlEfuNwqTyzNRkSOLy8wvFhl_LVGqZboupAJR6cUhn12jbsx3Bi4/s1600/suricatasc-2.PNG" height="191" width="400" /></a></div>
<br />
<br />
<b>NOTE:</b><br />
You need to quote commands involving interfaces:<blockquote class="tr_bq">
root@debian64:~# suricatasc -c "iface-stat eth0"</blockquote>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgpNk9gr_I3eHQ16Y_2CjTUdIaiaxgYsj6ldc3cRkpX5RmA-zwlzwSJT-Qm8cfuNWuBWgoVSPOrGmOo1XxhJtpzC8Gu2rXVbdU5BRmQ6TwAcMTf2_2vSOlQRjHsPkxpnW1LQ9S5klyPalk/s1600/suricatasc.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgpNk9gr_I3eHQ16Y_2CjTUdIaiaxgYsj6ldc3cRkpX5RmA-zwlzwSJT-Qm8cfuNWuBWgoVSPOrGmOo1XxhJtpzC8Gu2rXVbdU5BRmQ6TwAcMTf2_2vSOlQRjHsPkxpnW1LQ9S5klyPalk/s1600/suricatasc.PNG" height="42" width="320" /></a></div>
<br />
<br />
Very handy when you want quick interaction and info from the currently running Suricata IDS/IPS.<br />
<br />
<br />
<br />
<br />Pevmahttp://www.blogger.com/profile/07698265905172078652noreply@blogger.com0tag:blogger.com,1999:blog-6352560843819453131.post-58744321400790940052014-10-26T08:16:00.005-07:002014-10-26T08:38:36.780-07:00Suricata ids/ips - dropping privileges<br />
<br />
This tutorial is intended for Linux (Debian/Ubuntu).<br />
<br />
Install the prerequisite packages in order to compile Suricata. I add/enable some optional features so in my case I usually do:<br />
<blockquote class="tr_bq">
apt-get -y install libpcre3 libpcre3-dbg libpcre3-dev \<br />
build-essential autoconf automake libtool libpcap-dev libnet1-dev \<br />
libyaml-0-2 libyaml-dev zlib1g zlib1g-dev make flex bison \<br />
libmagic-dev </blockquote>
<br />
For Eve (all JSON output):<br />
<blockquote class="tr_bq">
apt-get install libjansson-dev libjansson4</blockquote>
For MD5 support(file extraction):<br />
<blockquote class="tr_bq">
apt-get install libnss3-dev libnspr4-dev</blockquote>
For GeoIP:<br />
<blockquote class="tr_bq">
apt-get install libgeoip1 libgeoip-dev</blockquote>
For nfqueue(ips mode):<br />
<blockquote class="tr_bq">
<span class="s2">apt-get install libnetfilter-queue-dev libnetfilter-queue1 libnfnetlink-dev libnfnetlink0</span></blockquote>
For the dropping privileges part you can simply do:<br />
<blockquote class="tr_bq">
apt-get install libcap-ng0 libcap-ng-dev</blockquote>
<br />
<b>OR</b> get the latest <i><b>libcap-ng</b></i> version from here:<br />
<a href="http://people.redhat.com/sgrubb/libcap-ng/">http://people.redhat.com/sgrubb/libcap-ng/</a><br />
like so:<br />
<br />
<blockquote class="tr_bq">
wget http://people.redhat.com/sgrubb/libcap-ng/libcap-ng-0.7.4.tar.gz</blockquote>
<blockquote>
tar -zxf libcap-ng-0.7.4.tar.gz<br />
cd libcap-ng-0.7.4<br />
./configure && make && make install<br />
cd ..</blockquote>
<br />
Let's fetch and compile Suricata:<br />
<blockquote class="tr_bq">
wget http://www.openinfosecfoundation.org/download/suricata-2.0.4.tar.gz<br />
tar -xzf suricata-2.0.4.tar.gz </blockquote>
<blockquote>
cd suricata-2.0.4</blockquote>
One-liner... one of my favorites:<br />
<br />
<i>./configure --prefix=/usr/ --sysconfdir=/etc/ --localstatedir=/var/ --disable-gccmarch-native \</i><br />
<i>--enable-geoip --with-libnss-libraries=/usr/lib --with-libnss-includes=/usr/include/nss/ \</i><br />
<i>--enable-nfqueue \</i><br />
<i>--with-libcap_ng-libraries=/usr/local/lib --with-libcap_ng-includes=/usr/local/include \</i><br />
<i>--with-libnspr-libraries=/usr/lib --with-libnspr-includes=/usr/include/nspr && \</i><br />
<i> make clean && make && make install-full && ldconfig</i> <br />
<br />
<br />
Above we enable some other features like :<br />
<ul>
<li>GeoIP</li>
<li>MD5(libnspr/libnss)</li>
<li>nfqueue</li>
<li>we also install the necessary config file in /etc/suricata (<a href="https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Basic_Setup#Auto-setup" target="_blank">make install-full</a>)</li>
<li>download a full ET Open ruleset (<a href="https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Basic_Setup#Auto-setup" target="_blank">make install-full</a>)</li>
</ul>
(you can run<br />
<blockquote class="tr_bq">
root@IDS:~/suricata-2.0.4#<i><b> ./configure --help</b></i></blockquote>
to see what each option is for)<br />
<br />
but this line -<br />
<b><i>--with-libcap_ng-libraries=/usr/local/lib --with-libcap_ng-includes=/usr/local/include</i></b><br />
is the one you need in order to compile in and enable privilege dropping in Suricata. <br />
<br />
<br />
Then you can run Suricata like so:<br />
<blockquote class="tr_bq">
/usr/bin/suricata -c /etc/suricata/suricata.yaml --pidfile /var/run/suricata.pid --af-packet -D -v <i><b>--user=logstash</b></i></blockquote>
<br />
Make sure the log directory has the right permissions to allow the user <i>"logstash"</i> to write to it.<br />
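A minimal sketch of that permissions step, assuming you run it as root and that <i>"logstash"</i> is the user you will pass to --user (the useradd line only creates the user if it does not already exist):<br />

```shell
# create the drop-privileges user if missing and hand over the log directory
id logstash >/dev/null 2>&1 || useradd -r -s /usr/sbin/nologin logstash
mkdir -p /var/log/suricata
chown -R logstash:logstash /var/log/suricata
chmod 750 /var/log/suricata
stat -c '%U %G %a' /var/log/suricata
```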
After you start Suricata - you should see something similar:<br />
<blockquote class="tr_bq">
root@IDS:~# ls -lh /var/log/suricata/<br />
total 77M<br />
drwxr-xr-x 2 logstash logstash 4.0K Oct 15 13:06 certs<br />
drwxr-xr-x 2 logstash logstash 4.0K Oct 15 13:06 core<br />
-rw-r----- 1 logstash logstash 18M Oct 26 10:48 eve.json<br />
-rw-r----- 1 logstash logstash 806K Oct 26 10:48 fast.log<br />
drwxr-xr-x 2 logstash logstash 4.0K Oct 15 13:06 files<br />
drwxr-xr-x 2 logstash logstash 4.0K Oct 26 06:26 StatsByDate<br />
-rw-r--r-- 1 root root 58M Oct 26 10:48 stats.log<br />
-rw-r--r-- 1 root root 1.1K Oct 26 09:15 suricata-start.log<br />
root@IDS:~# </blockquote>
Notice the user logstash ownership.<br />
<br />
<blockquote class="tr_bq">
root@IDS:~# ps aux |grep suricata<br />
<i><b>logstash</b></i> 2189 11.0 10.6 420448 219972 ? Ssl 09:15 13:04 /usr/bin/suricata -c /etc/suricata/suricata.yaml --pidfile /var/run/suricata.pid --af-packet -D -v --user=logstash<br />
root@IDS:~# </blockquote>
Now you have the user <i><b>logstash</b></i> running (not as root) Suricata IDS/IPS.<br />
<br />
<br />
<br />
<br />Pevmahttp://www.blogger.com/profile/07698265905172078652noreply@blogger.com0tag:blogger.com,1999:blog-6352560843819453131.post-90320317455966763292014-08-24T03:03:00.002-07:002014-08-24T03:03:40.336-07:00Suricata - more data for your alerts<br />
As of Suricata 2.1beta1, Suricata IDS/IPS can include packet data and related information in its
standard JSON output, further supplementing the alert logging output.<br />
<br />
This guide makes use of
Suricata and ELK - <a href="http://www.elasticsearch.org/overview/" target="_blank">Elasticsearch, Logstash, Kibana</a>.<br />
You can install all of them following the guide <a href="https://redmine.openinfosecfoundation.org/projects/suricata/wiki/_Logstash_Kibana_and_Suricata_JSON_output" target="_blank">HERE</a> <br />
...or you can download <a href="https://www.stamus-networks.com/open-source/" target="_blank">SELKS</a> and try it out directly.<br />
<br />
<br />
After everything is in place, we need to open the suricata.yaml and make the following editions in the eve.json section:<br />
<br />
<blockquote class="tr_bq">
# "United" event log in JSON format<br />
- eve-log:<br />
enabled: yes<br />
type: file #file|syslog|unix_dgram|unix_stream<br />
filename: eve.json<br />
# the following are valid when type: syslog above<br />
#identity: "suricata"<br />
#facility: local5<br />
#level: Info ## possible levels: Emergency, Alert, Critical,<br />
## Error, Warning, Notice, Info, Debug<br />
<b> types:<br /> - alert:</b><br />
<b> payload: yes</b> # enable dumping payload in Base64<br />
<b> payload-printable: yes</b> # enable dumping payload in printable (lossy) format<br />
<b> packet: yes</b> # enable dumping of packet (without stream segments)<br />
<b>http: yes </b> # enable dumping of http fields</blockquote>
<br />
Start Suricata and let it inspect traffic for some time in order to generate alert log data.<br />
Then navigate to your Kibana web interface, find an alert record and you will see the usefulness of the extra data for yourself.<br />
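If you prefer to sanity-check the raw eve.json from the command line before going to Kibana, a few lines of Python will do. This is a minimal sketch - the sample events and the signature name below are made up for illustration, not taken from a real sensor:

```python
import json

def find_alerts(lines, needle):
    """Return the signatures of alert events whose printable payload contains needle."""
    hits = []
    for line in lines:
        event = json.loads(line)
        if event.get("event_type") == "alert" and needle in event.get("payload_printable", ""):
            hits.append(event["alert"]["signature"])
    return hits

# Two fake eve.json lines standing in for open("eve.json") output
sample = [
    json.dumps({"event_type": "alert",
                "alert": {"signature": "GPL ATTACK_RESPONSE id check returned root"},
                "payload_printable": "uid=0(root) gid=0(root)"}),
    json.dumps({"event_type": "flow"}),
]
print(find_alerts(sample, "uid=0(root)"))  # ['GPL ATTACK_RESPONSE id check returned root']
```

On a real box you would iterate over `open("/var/log/suricata/eve.json")` instead of the sample list.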
<br />
Here are some examples though :) :<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOQqyrrAUWv71SVSlONrjUIvEnYLQB1RaQbB0adJ1PrhMUiUmY8XnYOd7rlhxC9GLxco2t8A9yAWdbdYm3eAiPoqcE0gifNQzGtT7LaUa7jHLGM-gi2HLHBTvZ4akQmv-roB_zozzCs8A/s1600/Alert_Payload.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOQqyrrAUWv71SVSlONrjUIvEnYLQB1RaQbB0adJ1PrhMUiUmY8XnYOd7rlhxC9GLxco2t8A9yAWdbdYm3eAiPoqcE0gifNQzGtT7LaUa7jHLGM-gi2HLHBTvZ4akQmv-roB_zozzCs8A/s1600/Alert_Payload.PNG" height="335" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZ8tEtuKukN7bwkMR_dcg42aWusoMHF1dH5andjL_R-5GP1zSVc3dV2TAiHsN7C3f-77Q7vqKrcDjsHBZKc_aG6JyGkNNBD2VmMRp0fd_cIUzUY5qYDdq0LpSsyyTPiS9wyNqVmle0uQs/s1600/Payload_Printable2.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZ8tEtuKukN7bwkMR_dcg42aWusoMHF1dH5andjL_R-5GP1zSVc3dV2TAiHsN7C3f-77Q7vqKrcDjsHBZKc_aG6JyGkNNBD2VmMRp0fd_cIUzUY5qYDdq0LpSsyyTPiS9wyNqVmle0uQs/s1600/Payload_Printable2.PNG" height="333" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFTScq5EH8GtLNN1jEfNU4I_lrjCZqMRzmNIENExvYnknOSEZpWfcOEwfjz2MHS0AXc0mbhsv9Q4qAuyRVWl1PWKbyDEE7ZiMjbOtfowhRmRmFsrN6syJhXul096YgXxK33A0VPXphfDk/s1600/Payload_Printable.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFTScq5EH8GtLNN1jEfNU4I_lrjCZqMRzmNIENExvYnknOSEZpWfcOEwfjz2MHS0AXc0mbhsv9Q4qAuyRVWl1PWKbyDEE7ZiMjbOtfowhRmRmFsrN6syJhXul096YgXxK33A0VPXphfDk/s1600/Payload_Printable.PNG" height="113" width="400" /></a></div>
<br />
<br />
Let's kick it up a notch...<br />
<br />
We want to search for: <br />
<ol>
<li>all the generated alerts </li>
<li>that contain printable payload data </li>
<li>which includes the string: <b>uid=0(root)</b></li>
</ol>
Easy, here is the query: <br />
<b> </b> <br />
<blockquote class="tr_bq">
payload_printable:"uid=0\(root\)"</blockquote>
You should enter it like this in Kibana:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEitIEttQIkGSL2Ne-asCEewgge51zVB_8Yxnd1By2ZHCLvCgmPSMzvNtBD2W9K3c3jyAApaDsaIenHz5rqQD8r8WEjPdo23o_XhPnP5wLxZ5bW52aaXI_a98xaJSvsdF8rTDGzsp4Zaico/s1600/Alert_data_query1.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEitIEttQIkGSL2Ne-asCEewgge51zVB_8Yxnd1By2ZHCLvCgmPSMzvNtBD2W9K3c3jyAApaDsaIenHz5rqQD8r8WEjPdo23o_XhPnP5wLxZ5bW52aaXI_a98xaJSvsdF8rTDGzsp4Zaico/s1600/Alert_data_query1.PNG" height="188" width="400" /></a></div>
<br />
<br />
<br />
Well what do you know - we got what we were looking for:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZF1bl0RMS3F9Ouy6r7mMGEtPpoRg_NpU2sQDwfRYDKqWsjCsSZUV-QL4STuroMjtKVRXMdQyr3tDUypsdNWt8J4tLDWurCnxYmfU0AfM7BaBEISCIYOaT6-dXRZ0hy6o52hyU3ErY1w0/s1600/Alert_data1_For_query1.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZF1bl0RMS3F9Ouy6r7mMGEtPpoRg_NpU2sQDwfRYDKqWsjCsSZUV-QL4STuroMjtKVRXMdQyr3tDUypsdNWt8J4tLDWurCnxYmfU0AfM7BaBEISCIYOaT6-dXRZ0hy6o52hyU3ErY1w0/s1600/Alert_data1_For_query1.PNG" height="108" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgK-XVeKnlV291x9_TKqcHfcWHxEE55NFOTgnFqwxKYRhinOrMK23jCw25AWz8Ren47ZbsahyphenhyphenEqWCUIaAn-NyjZ15HBkdsUKTA5mU7ttAdDk7mvpgttbPUsO4XqjMcFJqxnY7SVEsnb58w/s1600/Alert_data2_For_query1.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgK-XVeKnlV291x9_TKqcHfcWHxEE55NFOTgnFqwxKYRhinOrMK23jCw25AWz8Ren47ZbsahyphenhyphenEqWCUIaAn-NyjZ15HBkdsUKTA5mU7ttAdDk7mvpgttbPUsO4XqjMcFJqxnY7SVEsnb58w/s1600/Alert_data2_For_query1.PNG" height="150" width="400" /></a></div>
<br />
<br />
Some more useful reading on the Lucene Query Syntax (you should at least have a look :) ):<br />
<a href="http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html" target="_blank">http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html</a><br />
<br />
<a href="http://www.solrtutorial.com/solr-query-syntax.html" target="_blank">http://www.solrtutorial.com/solr-query-syntax.html</a><br />
<br />
<a href="http://lucene.apache.org/core/2_9_4/queryparsersyntax.html" target="_blank">http://lucene.apache.org/core/2_9_4/queryparsersyntax.html</a><br />
<br />
<br />
<br />
<br />
<br />
<br />Pevmahttp://www.blogger.com/profile/07698265905172078652noreply@blogger.com0tag:blogger.com,1999:blog-6352560843819453131.post-45561370303241080642014-08-24T02:55:00.000-07:002014-08-24T03:01:16.184-07:00Suricata - Flows, Flow Managers and effect on performance<br />
<br />
As of Suricata 2.1beta1, Suricata IDS/IPS allows advanced, high-performance tuning through custom thread configuration for the engine's management threads.<br />
<br />
A.k.a. these:<br />
<blockquote class="tr_bq">
[27521] 20/7/2014 -- 01:46:19 - (tm-threads.c:2206) <Notice> (TmThreadWaitOnThreadInit) -- all 16 packet processing threads, <u><b>3 management threads initialized</b></u>, engine started.</blockquote>
<br />
<br />
These <u><b>3 management threads initialized</b></u> above are the flow manager (1 thread) and the counter/stats related threads (2 threads).<br />
<br />
So ... in the default suricata.yaml setting we have:<br />
<blockquote class="tr_bq">
<br />
flow:<br />
memcap: 64mb<br />
hash-size: 65536<br />
prealloc: 10000<br />
emergency-recovery: 30<br />
#managers: 1 # default to one flow manager<br />
#recyclers: 1 # default to one flow recycler thread<br />
<br /></blockquote>
and we can choose how many threads we would like to dedicate to the management tasks within the engine itself.<br />
The recycler threads offload part of the flow manager's work and, if enabled, do the flow/netflow logging. <br />
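For example, to run two flow managers and two recyclers you would uncomment and adjust those lines. The thread counts here are purely illustrative - pick values based on your own traffic:

```yaml
flow:
  memcap: 64mb
  hash-size: 65536
  prealloc: 10000
  emergency-recovery: 30
  managers: 2    # two dedicated flow manager threads
  recyclers: 2   # two recycler threads (also handle flow/netflow logging)
```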
<br />
Good!<br />
What does this have to do with performance?<br />
<br />
Suricata IDS/IPS is powerful, flexible and scalable - so be careful what you wish for.<br />
The examples below demonstrate the effect on a 10Gbps Suricata IDS sensor.<br />
<br />
<h2>
Example 1</h2>
<br />
suricata.yaml config - ><br />
<blockquote class="tr_bq">
<div class="de2">
<span class="co1">flow:</span></div>
<div class="de1">
<span class="co1"> memcap: 1gb</span></div>
<div class="de2">
<span class="co1"> hash-size: 1048576</span></div>
<div class="de1">
<span class="co1"> prealloc: 1048576</span></div>
<div class="de2">
<span class="co1"> emergency-recovery: 30</span></div>
<div class="de1">
<span class="co1"> prune-flows: 50000</span></div>
<div class="de2">
<span class="co1"> managers: 2 # default is 1</span></div>
</blockquote>
<br />
<span class="co1">CPU usage -></span><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhedfKW9zL5brba5zxmuWR901j5LoDjbFHttZ8tDZRv-5-x5__cnE1LIwmhPaQ0nshxxHHqgcMUaCYDxwtAV2wn4SwlDSR9R4-3ZPjJ-gt0JUtRM3sBzBYLidiIz2h8NN-vHeQ3ydZf_SY/s1600/FlowMngr1.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhedfKW9zL5brba5zxmuWR901j5LoDjbFHttZ8tDZRv-5-x5__cnE1LIwmhPaQ0nshxxHHqgcMUaCYDxwtAV2wn4SwlDSR9R4-3ZPjJ-gt0JUtRM3sBzBYLidiIz2h8NN-vHeQ3ydZf_SY/s1600/FlowMngr1.PNG" height="263" width="400" /></a></div>
<br />
<div class="de2">
<span class="co1"> 2 flow management threads use 8% CPU each</span><br />
<br />
<h2>
<span class="co1"> Example 2</span></h2>
</div>
<div class="de2">
<br /></div>
<div class="de2">
<br />
suricata.yaml config - ><br />
<blockquote class="tr_bq">
<div class="de1">
<span class="co4">flow</span>:</div>
<div class="de2">
<span class="co3"> memcap</span><span class="sy2">: </span>4gb</div>
<div class="de1">
<span class="co3"> hash-size</span><span class="sy2">: </span><span class="nu0">15728640</span></div>
<div class="de2">
<span class="co3"> prealloc</span><span class="sy2">: </span><span class="nu0">8000000</span></div>
<div class="de1">
<span class="co3"> emergency-recovery</span><span class="sy2">: </span><span class="nu0">30</span></div>
</blockquote>
<blockquote>
<span class="co3"> managers</span><span class="sy2">: </span><span class="nu0">2</span> <span class="co1"># default is 1</span></blockquote>
<br />
<span class="co1"> CPU usage -></span></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzsiFSZO3FOE_6uwuGhug-Bu9pj-RHNH2W_WaFpO70eHDzHqgzLVobN_yXN-gGqDIRM8B8Onbt0YQV5_ZeuHl1r-2MbW3BolM9f3qkcmC31a8Ei5WFFrym3HV1Xx_ABWh-KxhKMLbopkE/s1600/FlowMngr2.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzsiFSZO3FOE_6uwuGhug-Bu9pj-RHNH2W_WaFpO70eHDzHqgzLVobN_yXN-gGqDIRM8B8Onbt0YQV5_ZeuHl1r-2MbW3BolM9f3qkcmC31a8Ei5WFFrym3HV1Xx_ABWh-KxhKMLbopkE/s1600/FlowMngr2.PNG" height="205" width="320" /></a></div>
<div class="de2">
<br /></div>
<div class="de2">
<span class="co1">2 flow management threads use 39% CPU each as compared to Example 1 !!</span></div>
<div class="de2">
<span class="co1"><br /></span></div>
<br />
So a 4-fold increase in memcap, an 8-fold increase in prealloc and a 15-fold increase in the hash-size settings leads to about a 3-fold increase in RAM consumption and a 5-fold increase in CPU consumption - in terms of <b>flow management thread</b> usage.<br />
<br />
It would be very rare to need the settings in Example 2 - you would need huge traffic for that...<br />
<br />
So how would you know when to tune/adjust those settings in suricata.yaml? It is recommended that you always keep an eye on your <u><i><b>stats.log</b></i></u> and make sure you do not enter emergency clean-up mode:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPq0aIGl0ooTe5cLfNZ0iQ2n5e1S_SCgF9Z6y_VY1brku1bz4J5bvM9Spn3Qql2-eTi7kiSXR7YcxM00SOUQS7e40aj6CM5WhsgahLRiMcbEob8ZGLZSTSxSsSmm6y5WW5ax-boILL2Qk/s1600/FlowEmergMode.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPq0aIGl0ooTe5cLfNZ0iQ2n5e1S_SCgF9Z6y_VY1brku1bz4J5bvM9Spn3Qql2-eTi7kiSXR7YcxM00SOUQS7e40aj6CM5WhsgahLRiMcbEob8ZGLZSTSxSsSmm6y5WW5ax-boILL2Qk/s1600/FlowEmergMode.PNG" height="175" width="400" /></a></div>
<br />
<br />
It should always be 0.<br />
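A quick back-of-the-envelope check also helps before restarting: the preallocated flows have to fit inside the flow memcap. Assuming roughly 300 bytes per flow record - an assumption, since the exact size varies by Suricata version and build - a sketch:

```python
# Rough sanity check: preallocated flows must fit under flow.memcap.
# FLOW_RECORD_BYTES is an assumed size (~300 bytes); the real figure
# depends on the Suricata version and platform.
FLOW_RECORD_BYTES = 300

def prealloc_fits(prealloc, memcap_bytes):
    return prealloc * FLOW_RECORD_BYTES <= memcap_bytes

GB = 1024 ** 3
print(prealloc_fits(1048576, 1 * GB))   # Example 1: prealloc 1048576, memcap 1gb
print(prealloc_fits(8000000, 4 * GB))   # Example 2: prealloc 8000000, memcap 4gb
```

Both examples above fit; if the check fails, Suricata will hit the memcap and fall back on the emergency mechanisms much sooner.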
<br />
Some additional reading on flows and flow managers -<br />
<a href="http://blog.inliniac.net/2014/07/28/suricata-flow-logging/" target="_blank">http://blog.inliniac.net/2014/07/28/suricata-flow-logging/</a><br />
<br />
<br />
<br />
<br />Pevmahttp://www.blogger.com/profile/07698265905172078652noreply@blogger.com1tag:blogger.com,1999:blog-6352560843819453131.post-80858229755197289692014-08-24T02:47:00.000-07:002014-08-24T02:47:06.965-07:00Suricata - filtering tricks for the fileinfo output with eve.json<br />
As of Suricata 2.0 - Suricata IDS/IPS provides a standard JSON output logging capability. This guide makes use of Suricata and ELK - <a href="http://www.elasticsearch.org/overview/" target="_blank">Elasticsearch, Logstash, Kibana</a>.<br />
<br />
You can install all of them following the guide <a href="https://redmine.openinfosecfoundation.org/projects/suricata/wiki/_Logstash_Kibana_and_Suricata_JSON_output" target="_blank">HERE</a> <br />
...or you can download and try out <a href="https://www.stamus-networks.com/open-source/" target="_blank">SELKS</a> and use it directly.<br />
<br />
Once you have the installation in place and the Kibana web interface up and running, you can make use of the following fileinfo filters (tricks :) ).<br />
You can enter the queries like so:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiceIb0MkDZzTxL2TuIEw8RjTRr3XtRuWI0Sf8nobeZH7Roi0XteOEElubK3GgksNteJFwlwJ8Mb_dVCw723efMbGxnnGKGXDHJz8qAJy9cMFluQq6x-FtmCuW7GsGlQMRJUlqL1_FH2FE/s1600/fileinfo3.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiceIb0MkDZzTxL2TuIEw8RjTRr3XtRuWI0Sf8nobeZH7Roi0XteOEElubK3GgksNteJFwlwJ8Mb_dVCw723efMbGxnnGKGXDHJz8qAJy9cMFluQq6x-FtmCuW7GsGlQMRJUlqL1_FH2FE/s1600/fileinfo3.PNG" height="82" width="400" /></a></div>
<br />
<br />
<blockquote class="tr_bq">
fileinfo.magic:"PE32" -fileinfo.filename:*exe</blockquote>
will show you all "PE32 executable" executables that were seen transferred that have no exe extension in their file name:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcgeHsbiqHDA-J6HgiG756ukFKPdvwvQM4g1GcqG376MZK48DnJHWKsSAshZXIVare7sMzN4RbhW0WeXX0GcLNeBbrEqiPhLE7rqv7Sh-SCd7DzCjA_1H-LMUhfWnnNdWJC49LTD29AIM/s1600/fileinfo2.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcgeHsbiqHDA-J6HgiG756ukFKPdvwvQM4g1GcqG376MZK48DnJHWKsSAshZXIVare7sMzN4RbhW0WeXX0GcLNeBbrEqiPhLE7rqv7Sh-SCd7DzCjA_1H-LMUhfWnnNdWJC49LTD29AIM/s1600/fileinfo2.PNG" height="133" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<br />
<br />
Alternatively <br />
<blockquote class="tr_bq">
fileinfo.magic:"pdf" -fileinfo.filename:*pdf</blockquote>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgXSwKdZlmQZ3mv4upzeSySRfWhyphenhyphengohgsXwOImO5xRag_3pZpw_mKoR96CidLaLZwmCYhddLArwWuOP95iEKe5vtleUlvu4ya6hRy1wJYDJThbcW5g0FxDca31AfEjaKlC-aXlvUYmAers/s1600/fileinfo4.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgXSwKdZlmQZ3mv4upzeSySRfWhyphenhyphengohgsXwOImO5xRag_3pZpw_mKoR96CidLaLZwmCYhddLArwWuOP95iEKe5vtleUlvu4ya6hRy1wJYDJThbcW5g0FxDca31AfEjaKlC-aXlvUYmAers/s1600/fileinfo4.PNG" height="76" width="320" /></a></div>
<br />
<br />
will show you all "PDF document version......" files that were transferred that have no PDF extension in their file name.<br />
<br />
You can explore further :)<br />
<br />
<br />
<br />
<br />
<br />
<br />Pevmahttp://www.blogger.com/profile/07698265905172078652noreply@blogger.com0tag:blogger.com,1999:blog-6352560843819453131.post-62432966199872198642014-08-23T05:11:00.001-07:002014-08-23T05:11:26.967-07:00Suricata IDS/IPS - HTTP custom header logging<br />
As a continuation of the article <a href="http://www.pevma.blogspot.se/2014/06/http-header-fields-extended-logging.html" target="_blank">HERE</a>- some more screenshots from the ready to use template....<br />
<br />
For the <a href="http://www.elasticsearch.org/overview/elkdownloads/" target="_blank">Elasticsearch/Logstash/Kibana</a> users there is a ready to use template that you could download from here - "<b>HTTP-Extended-Custom</b>"<br />
<a href="https://github.com/pevma/Suricata-Logstash-Templates" target="_blank">https://github.com/pevma/Suricata-Logstash-Templates</a><br />
<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYt2IFIcv1KjwJ7dEr0lcKZOcQ4I-Ikbi6KtXMMKF_vEsoXKQI-2D6TmmIATSVoELSiBeuXUs81ZDBROkXujMAOBPKH00aHR5524SyMC8JFQtC52HCL3PEIQgXHHnd4oo4RJKvvK40iwc/s1600/Custom1.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYt2IFIcv1KjwJ7dEr0lcKZOcQ4I-Ikbi6KtXMMKF_vEsoXKQI-2D6TmmIATSVoELSiBeuXUs81ZDBROkXujMAOBPKH00aHR5524SyMC8JFQtC52HCL3PEIQgXHHnd4oo4RJKvvK40iwc/s1600/Custom1.PNG" height="141" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZCfuPWZsaSrIaqyaPiufI2MEZEFjjImMVrwj_2QdhcDqU64x5RXvBHuqJMXd6AnXXKwFjTSh8MVM3UUr3rGDTe99MCWeEg6tbaoC18JPWrhdlb33ykmXL_YR7WDYavXr6e3DBjvQCC0o/s1600/Custom2.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZCfuPWZsaSrIaqyaPiufI2MEZEFjjImMVrwj_2QdhcDqU64x5RXvBHuqJMXd6AnXXKwFjTSh8MVM3UUr3rGDTe99MCWeEg6tbaoC18JPWrhdlb33ykmXL_YR7WDYavXr6e3DBjvQCC0o/s1600/Custom2.PNG" height="164" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiotEBICQ8kpyAjUIeb06ixgHNPpFDg6Ci0c2CZgg-9iYxRn__Lm8LH9XO0pTyY-GvyxFgog2Gr-QHfMhgM8gOfVO4ISh90mKBftwMwWd6LAPFeqA0_6kaekDMt0hfJIE1o3Tc1Mn4225U/s1600/Custom3.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiotEBICQ8kpyAjUIeb06ixgHNPpFDg6Ci0c2CZgg-9iYxRn__Lm8LH9XO0pTyY-GvyxFgog2Gr-QHfMhgM8gOfVO4ISh90mKBftwMwWd6LAPFeqA0_6kaekDMt0hfJIE1o3Tc1Mn4225U/s1600/Custom3.PNG" height="164" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwJnTq1Ps6FH9VTC6KpNUcooBIw-ajjpTGojrhccbCi1qkF8B3rG28zvrMjcLpVFRy84DxkL1i-RX8XXN6uLRGMamB9aNFnyTSTqP6ruU-ske3CFXhTjrIOj2QnQzvsqxto7jLXsrObhw/s1600/Custom4.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwJnTq1Ps6FH9VTC6KpNUcooBIw-ajjpTGojrhccbCi1qkF8B3rG28zvrMjcLpVFRy84DxkL1i-RX8XXN6uLRGMamB9aNFnyTSTqP6ruU-ske3CFXhTjrIOj2QnQzvsqxto7jLXsrObhw/s1600/Custom4.PNG" height="165" width="320" /></a></div>
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />Pevmahttp://www.blogger.com/profile/07698265905172078652noreply@blogger.com12tag:blogger.com,1999:blog-6352560843819453131.post-19334305676627697012014-06-24T10:09:00.000-07:002014-06-24T10:26:48.875-07:00Suricata IDPS - getting the best out of undersized servers with BPFs on heavy traffic links<br />
How to inspect 3-4 Gbps with Suricata IDPS and 20K rules loaded, on a server with 4 CPUs (2.5 GHz) and 16 GB RAM, while having minimal drops - less than 1% <br />
<br />
Impossible? ... definitely not...<br />
Improbable? ...not really<br />
<br />
<h2>
Setup</h2>
<br />
<b>3.2-4 Gbps</b> of mirrored traffic<br />
<b>4 x CPU</b> - E5420 @ 2.50GHz (4, NOT 8 with hyper-threading - just 4)<br />
<b>16GB RAM</b><br />
<b>Kernel Level - </b> <b>3.5.0-23-generic</b> #35~precise1-Ubuntu SMP Fri Jan 25 17:13:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux<br />
<b>Network Card</b> - 82599EB 10-Gigabit SFI/SFP+ Network Connection<br />
with driver=ixgbe driverversion=<b>3.17.3</b> <br />
<b>Suricata version 2.0dev (rev 896b614)</b> - some commits above 2.0.1<br />
<b>20K rules - ETPro ruleset</b> <br />
<br />
<br />
<br />
<br />
between 400K and 700K pps<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEip9cR-pkon7NbshV0dWKQ1Zlf498rS0Itk_aY5yV898hpH7tmDfb8mblEoOrda7uwjleapBxkNidCqbHx23yfgcYlP_4VbV2eKKWmNCsS-pJBs6mBqn4oWj5r0TJ8UlBN1wq1Pw80k4Lg/s1600/tcpstat.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEip9cR-pkon7NbshV0dWKQ1Zlf498rS0Itk_aY5yV898hpH7tmDfb8mblEoOrda7uwjleapBxkNidCqbHx23yfgcYlP_4VbV2eKKWmNCsS-pJBs6mBqn4oWj5r0TJ8UlBN1wq1Pw80k4Lg/s1600/tcpstat.PNG" height="80" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiz63wnV5llN_mDylxOWfB-J_Td67Z0VMMlqk76E22V10A9Y8NMxOQ6Z3YDRpDYkb9cFti6qWxg9hY91p8cjbno-nfLemoMiGKtsoggbS9-IJfakVcW39vMaReunukY2uEGrh8gPnxuupY/s1600/bwm-ng.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiz63wnV5llN_mDylxOWfB-J_Td67Z0VMMlqk76E22V10A9Y8NMxOQ6Z3YDRpDYkb9cFti6qWxg9hY91p8cjbno-nfLemoMiGKtsoggbS9-IJfakVcW39vMaReunukY2uEGrh8gPnxuupY/s1600/bwm-ng.PNG" height="117" width="400" /></a></div>
<br />
<br />
<br />
<br />
<br />
If you want to run Suricata on that HW with about 20,000 rules inspecting 3-4 Gbps of traffic with minimal drops - it is just not possible. There are not enough CPUs, not enough RAM....<br />
<br />
Sometimes funding is tight, convincing management to buy new/more HW can be difficult for a particular endeavor/test - and a number of other reasons... <br />
<br />
So what can you do?<br />
<br />
<h2>
BPF</h2>
<br />
Suricata can utilize a BPF (Berkeley Packet Filter) expression when running inspection. It lets you select and filter the type of traffic you want Suricata to inspect.<br />
<br />
<br />
There are three ways you can use a BPF filter with Suricata:<br />
<br />
<ul>
<li>On the command line</li>
</ul>
suricata -c /etc/suricata/suricata.yaml -i eth0 -v <u><b>dst port 80</b></u><br />
<br />
<ul>
<li>From suricata.yaml</li>
</ul>
Under each respective runmode section in suricata.yaml (af-packet, pfring, pcap) - <u><b>bpf-filter: port 80 or udp</b></u><br />
<br />
<ul>
<li>From a file</li>
</ul>
suricata -c /etc/suricata/suricata.yaml -i eth0 -v <b>-F bpf.file</b><br />
<br />
Inside the bpf.file you would have your BPF filter.<br />
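The file-based approach is the most convenient once filters get long. A minimal sketch - the filename, interface and filter expression here are only examples:

```shell
# Keep a longer BPF expression in its own file and point Suricata at it
# with -F. The filter below is just a placeholder example.
cat > bpf.file <<'EOF'
tcp dst port 80 or (tcp src port 80 and tcp[tcpflags] & (tcp-syn|tcp-fin) != 0)
EOF

# You would then start Suricata like this (not executed here):
#   suricata -c /etc/suricata/suricata.yaml -i eth0 -v -F bpf.file
cat bpf.file
```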
<br />
<br />
The examples above would filter only the traffic that has destination port 80 and pass it to Suricata for inspection.<br />
<br />
<h2>
BPF - The tricky part</h2>
<br />
<br />
It <u><b>DOES</b></u> make a difference when using BPF if you have VLANs in the mirrored traffic.<br />
<br />
Please read here before you continue further - <a href="http://taosecurity.blogspot.se/2008/12/bpf-for-ip-or-vlan-traffic.html" target="_blank">http://taosecurity.blogspot.se/2008/12/bpf-for-ip-or-vlan-traffic.html</a><br />
<br />
<br />
<h2>
The magic</h2>
<br />
If you want to extract all client data, the TCP SYN|FIN flags (to preserve session state) and the server response headers,<br />
the BPF filter (thanks to Cooper Nelson (UCSD), who shared it on our (OISF) mailing list) would look like this:<br />
<br />
<br />
<blockquote class="tr_bq">
(port 53 or 443 or 6667) or (tcp dst port 80 or (tcp src port 80 and<br />
(tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 or tcp[((tcp[12:1] & 0xf0) >><br />
2):4] = 0x48545450)))</blockquote>
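A quick way to demystify that filter: <b>tcp[((tcp[12:1] & 0xf0) >> 2):4]</b> computes the TCP data offset and reads the first four payload bytes, and the constant <b>0x48545450</b> is simply the ASCII string "HTTP" - so src-port-80 packets only pass when the payload starts with an HTTP response header. You can verify the constant in Python:

```python
# 0x48545450 packed big-endian is the ASCII bytes of "HTTP" -
# the start of an HTTP response line such as "HTTP/1.1 200 OK".
magic = 0x48545450
print(magic.to_bytes(4, "big").decode("ascii"))  # HTTP
```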
<br />
<br />
<br />
That would inspect traffic on ports 53 (DNS), 443 (HTTPS), 6667 (IRC) and 80 (HTTP).<br />
<br />
<u><b>NOTE:</b></u> the filter above is for <b>non-VLAN</b> traffic!<br />
<br />
The same filter for traffic <b>with VLANs present</b> would look like this:<br />
<br />
<blockquote class="tr_bq">
((ip and port 53 or 443 or 6667) or ( ip and tcp dst port 80 or (ip<br />
and tcp src port 80 and (tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 or<br />
tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450))))<br />
or<br />
((vlan and port 53 or 443 or 6667) or ( vlan and tcp dst port 80 or<br />
(vlan and tcp src port 80 and (tcp[tcpflags] & (tcp-syn|tcp-fin) != 0<br />
or tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450))))</blockquote>
<br />
<h2>
BPF - my particular case</h2>
I did some traffic profiling on the sensor and it could be summed up like this (using <i><b>iptraf</b></i>):<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi6fj_8E3axdKbaTXjpwjvWWcl51hHZK652Bi21iKzCq7goC0TSzPGeiqdhr7W8qfTsowNvAK61TGu7l0Te5jsUwlDY96_NZr7XmFjTkSwdGuOqvHd-NhE0OfpMhM923i_O9e47x3-RaR4/s1600/iptraf.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi6fj_8E3axdKbaTXjpwjvWWcl51hHZK652Bi21iKzCq7goC0TSzPGeiqdhr7W8qfTsowNvAK61TGu7l0Te5jsUwlDY96_NZr7XmFjTkSwdGuOqvHd-NhE0OfpMhM923i_O9e47x3-RaR4/s1600/iptraf.PNG" height="296" width="400" /></a></div>
<br />
<br />
you can see that there is just as much traffic on port 53 (DNS) as on HTTP. I was facing some tough choices...<br />
<br />
<b>The BPF filter that I made for this particular case was</b>:<br />
<br />
<blockquote class="tr_bq">
(<br />
(ip and port 20 or 21 or 22 or 25 or 110 or 161 or 443 or 445 or 587 or 6667) <br />
or ( ip and tcp dst port 80 or (ip and tcp src port 80 and <br />
(tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 or<br />
tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450))))<br />
or<br />
((vlan and port 20 or 21 or 22 or 25 or 110 or 161 or 443 or 445 or 587 or 6667) <br />
or ( vlan and tcp dst port 80 or (vlan and tcp src port 80 and <br />
(tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 or <br />
tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450)))<br />
)</blockquote>
<br />
That would filter MIXED (both VLAN and NON VLAN) traffic on ports<br />
<ul>
<li>20/21 (FTP) </li>
<li>25 (SMTP) </li>
<li>80 (HTTP) </li>
<li>110 (POP3)</li>
<li>161 (SNMP)</li>
<li>443(HTTPS)</li>
<li>445 (Microsoft-DS Active Directory, Windows shares) </li>
<li>587 (MSA - mail submission)</li>
<li>6667 (IRC) </li>
</ul>
<br />
and pass it to Suricata for inspection.<br />
<br />
I had to drop the DNS - I am not saying this is the right thing to do, but tough times call for tough measures. I had a seriously undersized server (4 CPUs at 2.5 GHz, 16 GB RAM) and traffic between 3 and 4 Gbps.<br />
<br />
<h2>
How it is actually done</h2>
<br />
Suricata<br />
<blockquote class="tr_bq">
root@snif01:/home/pmanev# suricata --build-info<br />
This is Suricata version 2.0dev (rev 896b614)<br />
Features: PCAP_SET_BUFF LIBPCAP_VERSION_MAJOR=1 PF_RING AF_PACKET HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1 HAVE_HTP_URI_NORMALIZE_HOOK HAVE_NSS HAVE_LIBJANSSON <br />
SIMD support: SSE_4_1 SSE_3 <br />
Atomic intrisics: 1 2 4 8 16 byte(s)<br />
64-bits, Little-endian architecture<br />
GCC version 4.6.3, C version 199901<br />
compiled with -fstack-protector<br />
compiled with _FORTIFY_SOURCE=2<br />
L1 cache line size (CLS)=64<br />
compiled with LibHTP v0.5.11, linked against LibHTP v0.5.11<br />
Suricata Configuration:<br />
AF_PACKET support: yes<br />
PF_RING support: yes<br />
NFQueue support: no<br />
NFLOG support: no<br />
IPFW support: no<br />
DAG enabled: no<br />
Napatech enabled: no<br />
Unix socket enabled: yes<br />
Detection enabled: yes<br />
<br />
libnss support: yes<br />
libnspr support: yes<br />
libjansson support: yes<br />
Prelude support: no<br />
PCRE jit: no<br />
LUA support: no<br />
libluajit: no<br />
libgeoip: yes<br />
Non-bundled htp: no<br />
Old barnyard2 support: no<br />
CUDA enabled: no<br />
<br />
Suricatasc install: yes<br />
<br />
Unit tests enabled: no<br />
Debug output enabled: no<br />
Debug validation enabled: no<br />
Profiling enabled: no<br />
Profiling locks enabled: no<br />
Coccinelle / spatch: no<br />
<br />
Generic build parameters:<br />
Installation prefix (--prefix): /usr/local<br />
Configuration directory (--sysconfdir): /usr/local/etc/suricata/<br />
Log directory (--localstatedir) : /usr/local/var/log/suricata/<br />
<br />
Host: x86_64-unknown-linux-gnu<br />
GCC binary: gcc<br />
GCC Protect enabled: no<br />
GCC march native enabled: yes<br />
GCC Profile enabled: no</blockquote>
<br />
In suricata.yaml:<br />
<br />
<blockquote class="tr_bq">
#max-pending-packets: 1024<br />
max-pending-packets: 65534<br />
...<br />
# Runmode the engine should use. Please check --list-runmodes to get the available<br />
# runmodes for each packet acquisition method. Defaults to "autofp" (auto flow pinned<br />
# load balancing).<br />
#runmode: autofp<br />
runmode: workers<br />
.....<br />
.....<br />
af-packet:<br />
- interface: eth2<br />
# Number of receive threads (>1 will enable experimental flow pinned<br />
# runmode)<br />
<b>threads: 4</b><br />
# Default clusterid. AF_PACKET will load balance packets based on flow.<br />
# All threads/processes that will participate need to have the same<br />
# clusterid.<br />
cluster-id: 98<br />
# Default AF_PACKET cluster type. AF_PACKET can load balance per flow or per hash.<br />
# This is only supported for Linux kernel > 3.1<br />
# possible value are:<br />
# * cluster_round_robin: round robin load balancing<br />
# * cluster_flow: all packets of a given flow are sent to the same socket<br />
# * cluster_cpu: all packets treated in kernel by a CPU are sent to the same socket<br />
<b>cluster-type: cluster_cpu</b><br />
# In some fragmentation case, the hash can not be computed. If "defrag" is set<br />
# to yes, the kernel will do the needed defragmentation before sending the packets.<br />
defrag: yes<br />
# To use the ring feature of AF_PACKET, set 'use-mmap' to yes<br />
<b>use-mmap: yes</b><br />
# Ring size will be computed with respect to max_pending_packets and number<br />
# of threads. You can set manually the ring size in number of packets by setting<br />
# the following value. If you are using flow cluster-type and have really network<br />
# intensive single-flow you may want to set the ring-size independently of the number<br />
# of threads:<br />
<b>ring-size: 200000</b><br />
# On a busy system, setting this to yes could help recover from a packet drop<br />
# phase. This will result in some packets (at most a ring flush) not being treated.<br />
#use-emergency-flush: yes<br />
# recv buffer size, increase value could improve performance<br />
# buffer-size: 100000<br />
# Set to yes to disable promiscuous mode<br />
# disable-promisc: no<br />
# Choose checksum verification mode for the interface. At the moment<br />
# of the capture, some packets may be with an invalid checksum due to<br />
# offloading to the network card of the checksum computation.<br />
# Possible values are:<br />
# - kernel: use indication sent by kernel for each packet (default)<br />
# - yes: checksum validation is forced<br />
# - no: checksum validation is disabled<br />
# - auto: suricata uses a statistical approach to detect when<br />
# checksum off-loading is used.<br />
# Warning: 'checksum-validation' must be set to yes to have any validation<br />
#checksum-checks: kernel<br />
# BPF filter to apply to this interface. The pcap filter syntax applies here.<br />
#bpf-filter: port 80 or udp<br />
....<br />
.... </blockquote>
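Since this post is about memory, it is worth noting how that ring-size value translates into RAM for the capture rings. Below is a back-of-the-envelope sketch assuming roughly 1600 bytes per ring slot (an assumption for illustration; the actual per-frame size depends on snaplen and kernel alignment):

```python
# Rough estimate of AF_PACKET mmap ring memory.
# FRAME_SIZE is an assumed value; the real tp_frame_size depends on
# snaplen and alignment, so treat this as an order-of-magnitude sketch.
FRAME_SIZE = 1600          # assumed bytes per ring slot
ring_size = 200000         # frames per thread, from the config above
threads = 16               # af-packet threads in this set up

ring_bytes = FRAME_SIZE * ring_size * threads
print(f"~{ring_bytes / 1024**3:.1f} GB for the AF_PACKET rings")  # ~4.8 GB
```

With these numbers the rings alone reserve close to 5 GB, which is why ring-size has to be part of any total-memory calculation.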
<blockquote class="tr_bq">
detect-engine:<br />
- profile: high<br />
- custom-values:<br />
toclient-src-groups: 2<br />
toclient-dst-groups: 2<br />
toclient-sp-groups: 2<br />
toclient-dp-groups: 3<br />
toserver-src-groups: 2<br />
toserver-dst-groups: 4<br />
toserver-sp-groups: 2<br />
toserver-dp-groups: 25<br />
- sgh-mpm-context: auto<br />
... </blockquote>
<blockquote class="tr_bq">
flow-timeouts:<br />
<br />
default:<br />
new: 5 #30<br />
established: 30 #300<br />
closed: 0<br />
emergency-new: 1 #10<br />
emergency-established: 2 #100<br />
emergency-closed: 0<br />
tcp:<br />
new: 5 #60<br />
established: 60 # 3600<br />
closed: 1 #30<br />
emergency-new: 1 # 10<br />
emergency-established: 5 # 300<br />
emergency-closed: 0 #20<br />
udp:<br />
new: 5 #30<br />
established: 60 # 300<br />
emergency-new: 5 #10<br />
emergency-established: 5 # 100<br />
icmp:<br />
new: 5 #30<br />
established: 60 # 300<br />
emergency-new: 5 #10<br />
emergency-established: 5 # 100<br />
....<br />
....<br />
stream:<br />
memcap: 4gb<br />
checksum-validation: no # reject wrong csums<br />
midstream: false<br />
prealloc-sessions: 50000<br />
inline: no # auto will use inline mode in IPS mode, yes or no set it statically<br />
reassembly:<br />
memcap: 8gb<br />
depth: 12mb # reassemble 12mb into a stream<br />
toserver-chunk-size: 2560<br />
toclient-chunk-size: 2560<br />
randomize-chunk-size: yes<br />
#randomize-chunk-range: 10<br />
...<br />
...<br />
default-rule-path: /etc/suricata/et-config/<br />
rule-files:<br />
- trojan.rules <br />
- malware.rules<br />
- local.rules<br />
- activex.rules<br />
- attack_response.rules<br />
- botcc.rules<br />
- chat.rules<br />
- ciarmy.rules<br />
- compromised.rules<br />
- current_events.rules<br />
- dos.rules<br />
- dshield.rules<br />
- exploit.rules<br />
- ftp.rules<br />
- games.rules<br />
- icmp_info.rules<br />
- icmp.rules<br />
- imap.rules<br />
- inappropriate.rules<br />
- info.rules<br />
- misc.rules<br />
- mobile_malware.rules ##<br />
- netbios.rules<br />
- p2p.rules<br />
- policy.rules<br />
- pop3.rules<br />
- rbn-malvertisers.rules<br />
- rbn.rules<br />
- rpc.rules<br />
- scada.rules<br />
- scada_special.rules<br />
- scan.rules<br />
- shellcode.rules<br />
- smtp.rules<br />
- snmp.rules<br />
<br />
...<br />
....<br />
libhtp:<br />
<br />
default-config:<br />
personality: IDS<br />
<br />
# Can be specified in kb, mb, gb. Just a number indicates<br />
# it's in bytes.<br />
request-body-limit: 12mb<br />
response-body-limit: 12mb<br />
<br />
# inspection limits<br />
request-body-minimal-inspect-size: 32kb<br />
request-body-inspect-window: 4kb<br />
response-body-minimal-inspect-size: 32kb<br />
response-body-inspect-window: 4kb<br />
<br />
# decoding<br />
double-decode-path: no<br />
double-decode-query: no</blockquote>
<br />
<br />
Create the BPF filter file (you can put it anywhere)<br />
<blockquote class="tr_bq">
touch /home/pmanev/test/bpf-filter</blockquote>
<br />
<br />
The bpf-filter should look like this:<br />
<br />
<br />
<blockquote>
root@snif01:/var/log/suricata# cat /home/pmanev/test/bpf-filter <br />
(<br />
(ip and port 20 or 21 or 22 or 25 or 110 or 161 or 443 or 445 or 587 or 6667) <br />
or ( ip and tcp dst port 80 or (ip and tcp src port 80 and <br />
(tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 or<br />
tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450))))<br />
or<br />
((vlan and port 20 or 21 or 22 or 25 or 110 or 161 or 443 or 445 or 587 or 6667) <br />
or ( vlan and tcp dst port 80 or (vlan and tcp src port 80 and <br />
(tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 or <br />
tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450)))<br />
)<br />
root@snif01:/var/log/suricata# </blockquote>
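A note on the trickiest expression in that filter: <b>tcp[12:1]</b> holds the TCP data offset in its high nibble, so <b>((tcp[12:1] & 0xf0) >> 2)</b> yields the header length in bytes; <b>tcp[offset:4]</b> then reads the first four payload bytes, and <b>0x48545450</b> is simply the ASCII string "HTTP". A quick Python sanity check of those two facts:

```python
# 0x48545450 is the four-byte value the filter compares the start of the
# TCP payload against: the ASCII string "HTTP" (as in "HTTP/1.1 200 OK").
magic = 0x48545450
assert magic.to_bytes(4, "big") == b"HTTP"

# The offset expression: the data-offset nibble sits in the high 4 bits of
# byte 12 of the TCP header; (x & 0xf0) >> 2 equals (x >> 4) * 4, i.e. the
# header length in bytes. 0x50 is the typical value (5 words, 20 bytes).
data_offset_byte = 0x50
header_len = (data_offset_byte & 0xf0) >> 2
print(header_len)  # 20
```

So the filter only passes src-port-80 packets that either carry SYN/FIN flags or whose payload actually starts with "HTTP", cutting down the inspected volume considerably.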
<br />
<br />
Start Suricata like this:<br />
<blockquote class="tr_bq">
suricata -c /etc/suricata/peter-yaml/suricata-afpacket.yaml --af-packet=eth2 -D -v -F /home/pmanev/test/bpf-filter</blockquote>
<br />
With this set up I was able to inspect 3.2-4 Gbps of traffic with about 20K rules at about 1% drops.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQjLckurb1JwPCOZLjSma2XqwF_c_3zr4gljU6PmslsJgWxnjLSRH28e6AMFxO4jXZ6m5yqwdcKsidi7t97SbQBQQGThLctHaQYcihHmRk9ogQgFKQHkeNuOe_g-ZsdpLg1QJJqYokTjE/s1600/htop-bpf.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQjLckurb1JwPCOZLjSma2XqwF_c_3zr4gljU6PmslsJgWxnjLSRH28e6AMFxO4jXZ6m5yqwdcKsidi7t97SbQBQQGThLctHaQYcihHmRk9ogQgFKQHkeNuOe_g-ZsdpLg1QJJqYokTjE/s1600/htop-bpf.PNG" height="68" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<br />
<br />
In the suricata.log:<br />
<blockquote class="tr_bq">
root@snif01:/var/log/suricata# more suricata.log <br />
[1274] 21/6/2014 -- 19:36:35 - (suricata.c:1034) <Notice> (SCPrintVersion) -- This is Suricata version 2.0dev (rev 896b614)<br />
[1274] 21/6/2014 -- 19:36:35 - (util-cpu.c:170) <Info> (UtilCpuPrintSummary) --<u><b> CPUs/cores online: 4</b></u><br />
......<br />
[1275] 21/6/2014 -- 19:36:46 - (detect.c:452) <Info> (SigLoadSignatures) -- 46 rule files processed. 20591 rules successfully loaded, 8 rules failed<br />
[1275] 21/6/2014 -- 19:36:47 - (detect.c:2591) <Info> (SigAddressPrepareStage1) -- <u><b>20599 signatures processed</b></u>. 827 are IP-only rules, 6510 are inspecting packet payload, 15650 inspect ap<br />
plication layer, 0 are decoder event only<br />
.....<br />
.....<br />
[1275] 21/6/2014 -- 19:37:17 - (runmode-af-packet.c:150) <Info> (ParseAFPConfig) -- Going to use command-line provided bpf filter <b>'( (ip and port 20 or 21 or 22 or 25 or 110 or 161 or 44<br />3 or 445 or 587 or 6667) or ( ip and tcp dst port 80 or (ip and tcp src port 80 and (tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 or tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450)))) or ((vl<br />an and port 20 or 21 or 22 or 25 or 110 or 161 or 443 or 445 or 587 or 6667) or ( vlan and tcp dst port 80 or (vlan and tcp src port 80 and (tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 or <br />tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450))) ) '</b><br />
.....<br />
.....</blockquote>
<blockquote>
[1275] 22/6/2014 -- 01:45:34 - (stream.c:182) <Info> (StreamMsgQueuesDeinit) -- TCP segment chunk pool had a peak use of 6674 chunks, more than the prealloc setting of 250<br />
[1275] 22/6/2014 -- 01:45:34 - (host.c:245) <Info> (HostPrintStats) -- host memory usage: 825856 bytes, maximum: 16777216<br />
[1275] 22/6/2014 -- 01:45:35 - (detect.c:3890) <Info> (SigAddressCleanupStage1) -- cleaning up signature grouping structure... complete<br />
[1275] 22/6/2014 -- 01:45:35 - (util-device.c:190) <Notice> (LiveDeviceListClean) -- Stats for 'eth2': pkts: 2820563520, <b>drop: 244696588 (8.68%)</b>, invalid chksum: 0</blockquote>
<br />
That gave me about 9% drops, so I further adjusted the filter (after realizing I could, for the moment, exclude port 445 Windows shares from inspection).<br />
<br />
The new filter was like so:<br />
<br />
<blockquote class="tr_bq">
root@snif01:/var/log/suricata# cat /home/pmanev/test/bpf-filter<br />
(<br />
(ip and port 20 or 21 or 22 or 25 or 110 or 161 or 443 or 587 or 6667) <br />
or ( ip and tcp dst port 80 or (ip and tcp src port 80 and <br />
(tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 or<br />
tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450))))<br />
or<br />
((vlan and port 20 or 21 or 22 or 25 or 110 or 161 or 443 or 587 or 6667) <br />
or ( vlan and tcp dst port 80 or (vlan and tcp src port 80 and <br />
(tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 or <br />
tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x48545450)))<br />
)<br />
root@snif01:/var/log/suricata# </blockquote>
Notice - I removed port 445.<br />
<br />
With that filter I was able to get down to 0.95% drops with 20K rules:<br />
<br />
[16494] 22/6/2014 -- 10:13:10 - (suricata.c:1034) <Notice> (SCPrintVersion) -- <b>This is Suricata version 2.0dev (rev 896b614)</b><br />
[16494] 22/6/2014 -- 10:13:10 - (util-cpu.c:170) <Info> (UtilCpuPrintSummary) -- CPUs/cores online: 4<br />
...<br />
...<br />
[16495] 22/6/2014 -- 10:13:20 - (detect.c:452) <Info> (SigLoadSignatures) -- 46 rule files processed. 20591 rules successfully loaded, 8 rules failed<br />
[16495] 22/6/2014 -- 10:13:21 - (detect.c:2591) <Info> (SigAddressPrepareStage1) --<b> <u>20599 signatures processed.</u></b> 827 are IP-only rules, 6510 are inspecting packet payload, 15650 inspect application layer, 0 are decoder event only<br />
...<br />
...<br />
[16495] 23/6/2014 -- 01:45:32 - (host.c:245) <Info> (HostPrintStats) -- host memory usage: 1035520 bytes, maximum: 16777216<br />
[16495] 23/6/2014 -- 01:45:32 - (detect.c:3890) <Info> (SigAddressCleanupStage1) -- cleaning up signature grouping structure... complete<br />
[16495] 23/6/2014 -- 01:45:32 - (util-device.c:190) <Notice> (LiveDeviceListClean) -- Stats for 'eth2': <u><b>pkts: 6550734692, drop: 62158315 (0.95%)</b></u>, invalid chksum: 0<br />
<br />
<br />
<br />
<br />
So with that BPF filter we have -><br />
<h2>
Pros </h2>
<br />
I was able to inspect a lot of traffic (4 Gbps peak) with a lot of rules (20K) on undersized, minimal hardware (4 CPUs, 16 GB RAM), sustained with less than 1% drops.<br />
<br />
<h2>
Cons</h2>
<ul>
<li>Not inspecting DNS </li>
<li>Making an assumption that all HTTP traffic is using port 80. (Though in my case 99.9% of the http traffic was on port 80)</li>
<li>This is an advanced BPF filter; it requires a good chunk of knowledge to understand, implement and re-edit</li>
</ul>
<br />
<br />
<h2>
Simple and efficient</h2>
<br />
If you have a network or a device that generates a lot of false positives and you are sure you can disregard any traffic from that device, you could use a filter like this:<br />
<br />
<blockquote class="tr_bq">
(ip and not host 1.1.1.1 ) or (vlan and not host 1.1.1.1)</blockquote>
<br />
for mixed VLAN and non-VLAN traffic. If you are sure there is no VLAN traffic, you could simply use:<br />
<blockquote class="tr_bq">
ip and not host 1.1.1.1</blockquote>
<br />
Then you can simply start Suricata like so:<br />
<blockquote class="tr_bq">
suricata -c /etc/suricata/peter-yaml/suricata-afpacket.yaml --af-packet=eth2 -D -v \(ip and not host 1.1.1.1 \) or \(vlan and not host 1.1.1.1\)</blockquote>
<br />
or like this for the second, VLAN-free filter:<br />
<blockquote class="tr_bq">
suricata -c /etc/suricata/peter-yaml/suricata-afpacket.yaml --af-packet=eth2 -D -v ip and not host 1.1.1.1 </blockquote>
<br />
<br />
<br />
<br />
<br />
<br /><h2>HTTP Header fields extended logging with Suricata IDPS (2014-06-17)</h2>
With the release of Suricata 2.0.1 it is possible to do extended, custom HTTP header field logging through the JSON output module.<br />
<br />
For the <a href="http://www.elasticsearch.org/overview/elkdownloads/" target="_blank">Elasticsearch/Logstash/Kibana</a> users there is a ready to use template that you could download from here -<br />
<a href="https://github.com/pevma/Suricata-Logstash-Templates" target="_blank">https://github.com/pevma/Suricata-Logstash-Templates</a><br />
<h2>
So what does this mean? </h2>
Well, besides the standard HTTP logging in eve.json, you also get <b>47 additional HTTP fields logged</b>, namely these:<br />
<blockquote class="tr_bq">
accept<br />
accept-charset<br />
accept-encoding<br />
accept-language<br />
accept-datetime<br />
authorization<br />
cache-control<br />
cookie<br />
from<br />
max-forwards<br />
origin<br />
pragma<br />
proxy-authorization<br />
range<br />
te<br />
via<br />
x-requested-with<br />
dnt<br />
x-forwarded-proto<br />
accept-range<br />
age<br />
allow<br />
connection<br />
content-encoding<br />
content-language<br />
content-length<br />
content-location<br />
content-md5<br />
content-range<br />
content-type<br />
date<br />
etags<br />
last-modified<br />
link<br />
location<br />
proxy-authenticate<br />
referrer<br />
refresh<br />
retry-after<br />
server<br />
set-cookie<br />
trailer<br />
transfer-encoding<br />
upgrade<br />
vary<br />
warning<br />
www-authenticate</blockquote>
<br />
What they are and what they mean/affect you could read more about here:<br />
<a href="http://en.wikipedia.org/wiki/List_of_HTTP_header_fields" target="_blank">http://en.wikipedia.org/wiki/List_of_HTTP_header_fields</a><br />
<br />
You can choose any combination of those fields above or all of them. What you need to do is simply add those to the existing logging in suricata.yaml's eve section. To add all of them if found in the HTTP traffic you could do like so:<br />
<blockquote class="tr_bq">
- eve-log:<br />
enabled: yes<br />
type: file #file|syslog|unix_dgram|unix_stream<br />
filename: eve.json<br />
# the following are valid when type: syslog above<br />
#identity: "suricata"<br />
#facility: local5<br />
#level: Info ## possible levels: Emergency, Alert, Critical,<br />
## Error, Warning, Notice, Info, Debug<br />
types:<br />
- alert<br />
- http:<br />
extended: yes # enable this for extended logging information<br />
# custom allows additional http fields to be included in eve-log<br />
# the example below adds three additional fields when uncommented<br />
#custom: [Accept-Encoding, Accept-Language, Authorization]<br />
<b> custom: [accept, accept-charset, accept-encoding, accept-language, </b><br />
<b> accept-datetime, authorization, cache-control, cookie, from, </b><br />
<b> max-forwards, origin, pragma, proxy-authorization, range, te, via, </b><br />
<b> x-requested-with, dnt, x-forwarded-proto, accept-range, age, </b><br />
<b> allow, connection, content-encoding, content-language, </b><br />
<b> content-length, content-location, content-md5, content-range, </b><br />
<b> content-type, date, etags, last-modified, link, location, </b><br />
<b> proxy-authenticate, referrer, refresh, retry-after, server, </b><br />
<b> set-cookie, trailer, transfer-encoding, upgrade, vary, warning, </b><br />
<b> www-authenticate]</b><br />
- dns<br />
- tls:<br />
extended: yes # enable this for extended logging information<br />
- files:<br />
force-magic: yes # force logging magic on all logged files<br />
force-md5: yes # force logging of md5 checksums<br />
#- drop<br />
- ssh</blockquote>
Then you just start Suricata.<br />
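Once Suricata is running with these custom fields enabled, each HTTP event in eve.json carries them inside the http object. A minimal sketch of pulling them out with Python's standard library follows; the sample record is hypothetical and the exact on-disk field layout may differ between Suricata versions:

```python
import json

# Hypothetical, trimmed eve.json line; real records carry many more fields.
line = ('{"event_type":"http","http":{"hostname":"example.com",'
        '"server":"nginx","content-type":"text/html"}}')

event = json.loads(line)
if event.get("event_type") == "http":
    http = event["http"]
    # pick out a couple of the custom header fields, if present
    print(http.get("server"), http.get("content-type"))
```

In practice you would iterate over the file line by line (eve.json is one JSON object per line) and feed the results into whatever DB or search engine you use.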
<br />
<h2>
What is the benefit?</h2>
You can log and search/filter/select on any or all of those HTTP header fields. JSON is a standard format, so depending on what you are using for a DB and/or search engine, you can very easily get interesting and helpful statistics for your security teams.<br />
<br />
Some possible stats using Elasticsearch and Kibana<br />
(<a href="https://redmine.openinfosecfoundation.org/projects/suricata/wiki/_Logstash_Kibana_and_Suricata_JSON_output" target="_blank">how to set up Elasticsearch, Logstash and Kibana with Suricata</a>)- <br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIAWjvZSJZEL_YZM-MCx34EV74Gu6vht9vyQDi1m2u-Mmo-gmWbdttH60j2YJScS_bwadko7TDq68ESYZSuSPs-bPXzMvL-qUffcHFx6erp7qhjAY2xnzrw3OgHUo_EormPtXZkahqZXk/s1600/Fields.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIAWjvZSJZEL_YZM-MCx34EV74Gu6vht9vyQDi1m2u-Mmo-gmWbdttH60j2YJScS_bwadko7TDq68ESYZSuSPs-bPXzMvL-qUffcHFx6erp7qhjAY2xnzrw3OgHUo_EormPtXZkahqZXk/s1600/Fields.PNG" height="400" width="121" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNgrt8DJFPKQb_3uDh632jVAAd-0hWxwFGPAVVvaTZvb8t296eiWggCwyc3AK2XpKUAYs-Sw_8u1LZmunuM4F2iToKVnmKpoII8373P9x3fFTejxpukSafQqlJtbD4-V76I6b6qinwKic/s1600/ContentType.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNgrt8DJFPKQb_3uDh632jVAAd-0hWxwFGPAVVvaTZvb8t296eiWggCwyc3AK2XpKUAYs-Sw_8u1LZmunuM4F2iToKVnmKpoII8373P9x3fFTejxpukSafQqlJtbD4-V76I6b6qinwKic/s1600/ContentType.PNG" height="290" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWdxBJnCd49U_konOYI6MKPFKvgU7oFTt3hX-wQ4r7DIP5JYfE9o9UJP_cIdxaiELTs5-jL2SN8-qVPJw7XjbQcfmyFA4hHsB9fCnhyE7qXdrgSghcnRuTuyLFr1YgV04cTgUm1cQMib4/s1600/Connection.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWdxBJnCd49U_konOYI6MKPFKvgU7oFTt3hX-wQ4r7DIP5JYfE9o9UJP_cIdxaiELTs5-jL2SN8-qVPJw7XjbQcfmyFA4hHsB9fCnhyE7qXdrgSghcnRuTuyLFr1YgV04cTgUm1cQMib4/s1600/Connection.PNG" height="210" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHIub1Kl3cZZXcfq7_S-LjZsoBVzGOiL6pytnXAAAnsBBX5y9ih-oMughtQLI9c0iLw-z2eleSE1mnD-Kwjh5OtaYOSboVTsDJzzezokyY-abxGnhIPK5mKF3rCokNhz589KX9Jx-Kpa4/s1600/http-server-types.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHIub1Kl3cZZXcfq7_S-LjZsoBVzGOiL6pytnXAAAnsBBX5y9ih-oMughtQLI9c0iLw-z2eleSE1mnD-Kwjh5OtaYOSboVTsDJzzezokyY-abxGnhIPK5mKF3rCokNhz589KX9Jx-Kpa4/s1600/http-server-types.PNG" height="286" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDIigyj2pChEh0L4Ymz_XoEfQgs74mAgbJZqSWHQkmWny2qkmWDDb5EhTPiNT9Xim-EG0WRWzLiFgdlpQ7GoCi0FcnBGrGPn6ElwefHzDeGyUIR_XbVFJlCfDvgpkpmqt7NdQeC4Cvn7w/s1600/Server.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDIigyj2pChEh0L4Ymz_XoEfQgs74mAgbJZqSWHQkmWny2qkmWDDb5EhTPiNT9Xim-EG0WRWzLiFgdlpQ7GoCi0FcnBGrGPn6ElwefHzDeGyUIR_XbVFJlCfDvgpkpmqt7NdQeC4Cvn7w/s1600/Server.PNG" height="303" width="400" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLSMrxyoVpQRrLkZPhtxaMZQLuLgLGtQuemHLhh9o8w0h50IlnG9TG3g6liZ9-LfmMpJID2rXQAkezWYHbftOs9lLePT8b234J-0E9NvQ8yWeQG9oZCHI54fE8qMjv3TR40WJkrf_1Yns/s1600/Vary.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLSMrxyoVpQRrLkZPhtxaMZQLuLgLGtQuemHLhh9o8w0h50IlnG9TG3g6liZ9-LfmMpJID2rXQAkezWYHbftOs9lLePT8b234J-0E9NvQ8yWeQG9oZCHI54fE8qMjv3TR40WJkrf_1Yns/s1600/Vary.PNG" height="203" width="400" /></a></div>
<br />
<br />
<br />
<br />
<br /><h2>Suricata IDS/IPS - TCP segment pool size preallocation (2014-06-14)</h2>
In the default suricata.yaml stream section we have:<br />
<blockquote class="tr_bq">
stream:<br />
memcap: 32mb<br />
checksum-validation: no # reject wrong csums<br />
async-oneside: true<br />
midstream: true<br />
inline: no # auto will use inline mode in IPS mode, yes or no set it statically<br />
reassembly:<br />
memcap: 64mb<br />
depth: 1mb # reassemble 1mb into a stream<br />
toserver-chunk-size: 2560<br />
toclient-chunk-size: 2560<br />
randomize-chunk-size: yes<br />
#randomize-chunk-range: 10<br />
<i><b> #raw: yes<br /> #chunk-prealloc: 250<br /> #segments:<br /> # - size: 4<br /> # prealloc: 256<br /> # - size: 16<br /> # prealloc: 512<br /> # - size: 112<br /> # prealloc: 512<br /> # - size: 248<br /> # prealloc: 512<br /> # - size: 512<br /> # prealloc: 512<br /> # - size: 768<br /> # prealloc: 1024<br /> # - size: 1448<br /> # prealloc: 1024<br /> # - size: 65535<br /> # prealloc: 128</b></i></blockquote>
<br />
<br />
So what are these segment preallocations for?<br />
Let's have a look. When Suricata exits (or receives kill -15 PidOfSuricata) it writes a lot of useful statistics to the suricata.log file (you can enable that in suricata.yaml and use the "<b>-v</b>" (verbose) switch when starting Suricata).<br />
The example below shows the exit stats.<br />
<br />
<blockquote class="tr_bq">
<b>tail -20 StatsByDate/suricata-2014-06-01.log </b><br />
[24344] 1/6/2014 -- 01:45:52 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth314) Packets 7317661624, bytes 6132661347126<br />
[24344] 1/6/2014 -- 01:45:52 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3382528539 TCP packets<br />
[24345] 1/6/2014 -- 01:45:52 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth315) Kernel: Packets 8049357450, dropped 352658715<br />
[24345] 1/6/2014 -- 01:45:52 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth315) Packets 7696486934, bytes 6666577738944<br />
[24345] 1/6/2014 -- 01:45:52 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3357321803 TCP packets<br />
[24346] 1/6/2014 -- 01:45:52 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth316) Kernel: Packets 7573051188, dropped 292897219<br />
[24346] 1/6/2014 -- 01:45:52 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth316) Packets 7279948375, bytes 6046562324948<br />
[24346] 1/6/2014 -- 01:45:52 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3454330660 TCP packets<br />
[24329] 1/6/2014 -- 01:45:53 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- <b>TCP segment pool of size 4 had a peak use of 60778 segments, more than the prealloc setting of 256</b><br />
[24329] 1/6/2014 -- 01:45:53 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- <b>TCP segment pool of size 16 had a peak use of 314953 segments, more than the prealloc setting of 512</b><br />
[24329] 1/6/2014 -- 01:45:53 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- <b>TCP segment pool of size 112 had a peak use of 113739 segments, more than the prealloc setting of 512</b><br />
[24329] 1/6/2014 -- 01:45:53 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- <b>TCP segment pool of size 248 had a peak use of 17893 segments, more than the prealloc setting of 512</b><br />
[24329] 1/6/2014 -- 01:45:53 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- <b>TCP segment pool of size 512 had a peak use of 31787 segments, more than the prealloc setting of 512</b><br />
[24329] 1/6/2014 -- 01:45:53 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- <b>TCP segment pool of size 768 had a peak use of 30769 segments, more than the prealloc setting of 1024</b><br />
[24329] 1/6/2014 -- 01:45:53 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- <b>TCP segment pool of size 1448 had a peak use of 89446 segments, more than the prealloc setting of 1024</b><br />
[24329] 1/6/2014 -- 01:45:53 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- <b>TCP segment pool of size 65535 had a peak use of 81214 segments, more than the prealloc setting of 128</b><br />
[24329] 1/6/2014 -- 01:45:53 - (stream.c:182) <Info> (StreamMsgQueuesDeinit) -- <b>TCP segment chunk pool had a peak use of 20306 chunks, more than the prealloc setting of 250</b><br />
[24329] 1/6/2014 -- 01:45:53 - (host.c:245) <Info> (HostPrintStats) -- host memory usage: 390144 bytes, maximum: 16777216<br />
[24329] 1/6/2014 -- 01:45:55 - (detect.c:3890) <Info> (SigAddressCleanupStage1) -- cleaning up signature grouping structure... complete<br />
[24329] 1/6/2014 -- 01:45:55 - (util-device.c:185) <Notice> (LiveDeviceListClean) -- Stats for 'eth3': pkts: 124068935209, drop: 5245430626 (4.23%), invalid chksum: 0</blockquote>
<br />
<br />
Notice all the "<i><u><b>TCP segment pool</b></u></i>" messages. These are the actual TCP segment pool reassembly stats for the period of time that Suricata was running. We could adjust suricata.yaml accordingly (as compared to the default settings above):<br />
<br />
<blockquote class="tr_bq">
stream:<br />
memcap: 14gb<br />
checksum-validation: no # reject wrong csums<br />
midstream: false<br />
prealloc-sessions: 375000<br />
inline: no # auto will use inline mode in IPS mode, yes or no set it statically<br />
reassembly:<br />
memcap: 20gb<br />
depth: 12mb # reassemble 12mb into a stream<br />
toserver-chunk-size: 2560<br />
toclient-chunk-size: 2560<br />
randomize-chunk-size: yes<br />
#randomize-chunk-range: 10<br />
raw: yes<br />
chunk-prealloc: 20556</blockquote>
<blockquote>
segments:<br />
- size: 4<br />
<b>prealloc: 61034</b><br />
- size: 16<br />
<b>prealloc: 315465</b><br />
- size: 112<br />
<b>prealloc: 114251</b><br />
- size: 248<br />
<b>prealloc: 18405</b><br />
- size: 512<br />
<b>prealloc: 30769</b><br />
- size: 768<br />
<b>prealloc: 31793</b><br />
- size: 1448<br />
<b> prealloc: 90470</b><br />
- size: 65535<br />
<b>prealloc: 81342</b></blockquote>
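The prealloc values above can be lifted straight out of the exit log with a small script. The sketch below parses the "TCP segment pool" lines shown earlier and prints a matching segments: section; the log format is the one from this build and may differ in other versions, and the 1000-segment headroom added on top of each peak is an arbitrary choice:

```python
import re

# Two of the exit-stats lines quoted earlier in this post.
log = """
... -- TCP segment pool of size 4 had a peak use of 60778 segments, more than the prealloc setting of 256
... -- TCP segment pool of size 16 had a peak use of 314953 segments, more than the prealloc setting of 512
"""

pattern = re.compile(r"TCP segment pool of size (\d+) had a peak use of (\d+)")
for size, peak in pattern.findall(log):
    # round the observed peak up a little to leave headroom (arbitrary +1000)
    print(f"      - size: {size}\n        prealloc: {int(peak) + 1000}")
```

Feeding it the full log reproduces a segments: block like the one above, ready to paste into suricata.yaml.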
<br />
<br />
<br />
<br />
The total RAM (reserved) consumption for these preallocations (from the <b>stream.reassembly.memcap</b> value ) would be:<br />
<br />
<blockquote class="tr_bq">
<b>4*61034 + 16*315465 + 112*114251 + 248*18405 + 512*30769 + 768*31793 + 1448*90470 + 65535*81342 </b></blockquote>
<blockquote>
<b>= 5524571410 bytes</b><br />
<b>= 5.14 GB of RAM</b></blockquote>
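That arithmetic is easy to double-check programmatically, and the same few lines also verify that the preallocations stay inside the reassembly memcap used in this set up:

```python
# Per-segment-size preallocations from the suricata.yaml above.
prealloc = {
    4: 61034, 16: 315465, 112: 114251, 248: 18405,
    512: 30769, 768: 31793, 1448: 90470, 65535: 81342,
}

# Total bytes reserved out of stream.reassembly.memcap.
total = sum(size * count for size, count in prealloc.items())
print(total)  # 5524571410, i.e. roughly 5.14 GB

# Sanity check: stay well under the 20gb reassembly memcap of this set up.
assert total < 20 * 1024**3
```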
<br />
So we could preallocate the TCP segments and take the Suricata tuning a step further, improving performance as well.<br />
<br />
So now, when you start Suricata with the "-v" switch using the specific set up described above, you should see something like this in your suricata.log:<br />
<blockquote class="tr_bq">
...<br />
...<br />
[30709] 1/6/2014 -- 12:17:34 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- <b>segment pool: pktsize 4, prealloc 61034</b><br />
[30709] 1/6/2014 -- 12:17:34 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- <b>segment pool: pktsize 16, prealloc 315465</b><br />
[30709] 1/6/2014 -- 12:17:34 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- <b>segment pool: pktsize 112, prealloc 114251</b><br />
[30709] 1/6/2014 -- 12:17:34 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- <b>segment pool: pktsize 248, prealloc 18405</b><br />
[30709] 1/6/2014 -- 12:17:34 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- <b>segment pool: pktsize 512, prealloc 30769</b><br />
[30709] 1/6/2014 -- 12:17:34 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- <b>segment pool: pktsize 768, prealloc 31793</b><br />
[30709] 1/6/2014 -- 12:17:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- <b>segment pool: pktsize 1448, prealloc 90470</b><br />
[30709] 1/6/2014 -- 12:17:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) --<b> segment pool: pktsize 65535, prealloc 81342</b><br />
[30709] 1/6/2014 -- 12:17:35 - (stream-tcp-reassemble.c:461) <Info> (StreamTcpReassemblyConfig) -- <b>stream.reassembly "chunk-prealloc": 20556</b> <br />
...<br />
...</blockquote>
<br />
<u><b>NOTE:</b></u><br />
Those 5.14 GB of RAM in the example here will be preallocated (taken) from the <b>stream.reassembly.memcap</b> value. In other words, Suricata will not consume an additional 5.14 GB of RAM. <br />
<br />
So be careful when setting up preallocation in order not to preallocate more than what you have.<br />
In my case of 10Gbps suricata.yaml config I had:<br />
<br />
<blockquote class="tr_bq">
<b>stream:</b><br />
memcap: 14gb<br />
checksum-validation: no # reject wrong csums<br />
midstream: false<br />
prealloc-sessions: 375000<br />
inline: no # auto will use inline mode in IPS mode, yes or no set it statically<br />
<b>reassembly:</b><br />
<b>memcap: 20gb</b><br />
depth: 12mb # reassemble 12mb into a stream</blockquote>
<br />
<br />
What this helps with is lowering CPU usage and contention for TCP segment allocation during reassembly: the memory is already preallocated and Suricata simply uses it instead of creating it every time it is needed. It also helps minimize the initial drops during startup.<br />
<br />
Highly adaptable and flexible. <br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br /><h2>Coalesce parameters and RX ring size (2014-06-10)</h2>
Please read through this very useful article:<br />
<a href="http://netoptimizer.blogspot.dk/2014/06/pktgen-for-network-overload-testing.html" target="_blank">http://netoptimizer.blogspot.dk/2014/06/pktgen-for-network-overload-testing.html</a><br />
<br />
Coalesce parameters and RX ring size can have an impact on your IDS.<br />
To see the current coalesce parameters on the sniffing interface:<br />
<br />
<blockquote class="tr_bq">
root@suricata:/var/log/suricata# <b>ethtool -c eth3</b><br />
Coalesce parameters for eth3:<br />
Adaptive RX: off TX: off<br />
stats-block-usecs: 0<br />
sample-interval: 0<br />
pkt-rate-low: 0<br />
pkt-rate-high: 0<br />
<br />
<b>rx-usecs: 1000</b><br />
rx-frames: 0<br />
rx-usecs-irq: 0<br />
rx-frames-irq: 0<br />
<br />
tx-usecs: 0<br />
tx-frames: 0<br />
tx-usecs-irq: 0<br />
tx-frames-irq: 0<br />
<br />
rx-usecs-low: 0<br />
rx-frame-low: 0<br />
tx-usecs-low: 0<br />
tx-frame-low: 0<br />
<br />
rx-usecs-high: 0<br />
rx-frame-high: 0<br />
tx-usecs-high: 0<br />
tx-frame-high: 0</blockquote>
<br />
To change the coalesce parameter (try different values):<br />
<br />
<blockquote class="tr_bq">
root@suricata:/var/log/suricata# <b>ethtool -C eth3 rx-usecs 1</b><br />
root@suricata:/var/log/suricata# <b>ethtool -c eth3</b><br />
Coalesce parameters for eth3:<br />
Adaptive RX: off TX: off<br />
stats-block-usecs: 0<br />
sample-interval: 0<br />
pkt-rate-low: 0<br />
pkt-rate-high: 0<br />
<br />
<b>rx-usecs: 1</b><br />
rx-frames: 0<br />
rx-usecs-irq: 0<br />
rx-frames-irq: 0<br />
<br />
tx-usecs: 0<br />
tx-frames: 0<br />
tx-usecs-irq: 0<br />
tx-frames-irq: 0<br />
<br />
rx-usecs-low: 0<br />
rx-frame-low: 0<br />
tx-usecs-low: 0<br />
tx-frame-low: 0<br />
<br />
rx-usecs-high: 0<br />
rx-frame-high: 0<br />
tx-usecs-high: 0<br />
tx-frame-high: 0</blockquote>
<br />
The RX ring parameters on the network card play a role too:<br />
<br />
<br />
<blockquote class="tr_bq">
root@suricata:~# ethtool -g eth3<br />
Ring parameters for eth3:<br />
Pre-set maximums:<br />
RX: 4096<br />
RX Mini: 0<br />
RX Jumbo: 0<br />
TX: 4096</blockquote>
<blockquote>
Current hardware settings:<br />
<b>RX: 512</b><br />
RX Mini: 0<br />
RX Jumbo: 0<br />
TX: 512</blockquote>
<br />
<br />
To increase that to the pre-set maximum RX:<br />
<br />
<blockquote class="tr_bq">
root@suricata:~# ethtool -G eth3 rx 4096</blockquote>
<br />
To confirm:<br />
<br />
<blockquote class="tr_bq">
root@suricata:~# ethtool -g eth3<br />
Ring parameters for eth3:<br />
Pre-set maximums:<br />
RX: 4096<br />
RX Mini: 0<br />
RX Jumbo: 0<br />
TX: 4096<br />
Current hardware settings:<br />
<b>RX: 4096</b><br />
RX Mini: 0<br />
RX Jumbo: 0<br />
TX: 512</blockquote>
<br />
The suggested approach - the one that worked best in my particular set up - for a Suricata IDS/IPS deployment is to set the coalesce parameter to 1 and to increase the RX ring size to the maximum available for that particular interface/card.<br />
<br />
It is suggested that you try a few different scenarios with regards to the coalesce parameters in order to find the best combination that suits your needs.<br />
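As a sketch of that approach, the pre-set maximum RX can be parsed out of the `ethtool -g` output shown above and then applied together with `rx-usecs 1`. The here-string below is a stand-in for a live `ethtool` call, and the interface name `eth3` is from this particular set up - adjust both for yours:

```shell
# Sample `ethtool -g` output; on a live box replace the here-string
# with: sample=$(ethtool -g eth3)
sample='Ring parameters for eth3:
Pre-set maximums:
RX: 4096
RX Mini: 0
RX Jumbo: 0
TX: 4096
Current hardware settings:
RX: 512'

# The first "RX:" line belongs to the "Pre-set maximums" section.
max_rx=$(printf '%s\n' "$sample" | awk '/^RX:/ {print $2; exit}')
echo "max RX ring size: $max_rx"

# Then apply it (requires root on a real interface):
#   ethtool -G eth3 rx "$max_rx"
#   ethtool -C eth3 rx-usecs 1
```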
<br />
<br />
<br />
<br />
<br />Pevmahttp://www.blogger.com/profile/07698265905172078652noreply@blogger.com0tag:blogger.com,1999:blog-6352560843819453131.post-76484391619696440352014-06-07T04:25:00.000-07:002014-06-07T04:28:00.736-07:00Suricata - Counting enabled rules in the rules directory<br />
<br />
One liner:<br />
<b>grep -c ^alert /etc/suricata/rules/*.rules </b><br />
<br />
<blockquote class="tr_bq">
root@LTS-64-1:~/Downloads/oisf#<b> grep -c ^alert /etc/suricata/rules/*.rules</b> <br />
/etc/suricata/rules/botcc.portgrouped.rules:69<br />
/etc/suricata/rules/botcc.rules:108<br />
/etc/suricata/rules/ciarmy.rules:34<br />
/etc/suricata/rules/compromised.rules:44<br />
/etc/suricata/rules/decoder-events.rules:83<br />
/etc/suricata/rules/dns-events.rules:8<br />
/etc/suricata/rules/drop.rules:26<br />
/etc/suricata/rules/dshield.rules:1<br />
/etc/suricata/rules/emerging-activex.rules:218<br />
/etc/suricata/rules/emerging-attack_response.rules:52<br />
/etc/suricata/rules/emerging-chat.rules:80<br />
/etc/suricata/rules/emerging-current_events.rules:1736<br />
/etc/suricata/rules/emerging-deleted.rules:0<br />
/etc/suricata/rules/emerging-dns.rules:56<br />
/etc/suricata/rules/emerging-dos.rules:37<br />
/etc/suricata/rules/emerging-exploit.rules:218<br />
/etc/suricata/rules/emerging-ftp.rules:60<br />
/etc/suricata/rules/emerging-games.rules:73<br />
/etc/suricata/rules/emerging-icmp_info.rules:14<br />
/etc/suricata/rules/emerging-icmp.rules:0<br />
/etc/suricata/rules/emerging-imap.rules:17<br />
/etc/suricata/rules/emerging-inappropriate.rules:1<br />
/etc/suricata/rules/emerging-info.rules:232<br />
/etc/suricata/rules/emerging-malware.rules:909<br />
/etc/suricata/rules/emerging-misc.rules:26<br />
/etc/suricata/rules/emerging-mobile_malware.rules:98<br />
/etc/suricata/rules/emerging-netbios.rules:421<br />
/etc/suricata/rules/emerging-p2p.rules:117<br />
/etc/suricata/rules/emerging-policy.rules:307<br />
/etc/suricata/rules/emerging-pop3.rules:9<br />
/etc/suricata/rules/emerging-rpc.rules:83<br />
/etc/suricata/rules/emerging-scada.rules:14<br />
/etc/suricata/rules/emerging-scan.rules:196<br />
/etc/suricata/rules/emerging-shellcode.rules:71<br />
/etc/suricata/rules/emerging-smtp.rules:12<br />
/etc/suricata/rules/emerging-snmp.rules:24<br />
/etc/suricata/rules/emerging-sql.rules:191<br />
/etc/suricata/rules/emerging-telnet.rules:5<br />
/etc/suricata/rules/emerging-tftp.rules:13<br />
/etc/suricata/rules/emerging-trojan.rules:2305<br />
/etc/suricata/rules/emerging-user_agents.rules:61<br />
/etc/suricata/rules/emerging-voip.rules:17<br />
/etc/suricata/rules/emerging-web_client.rules:164<br />
/etc/suricata/rules/emerging-web_server.rules:418<br />
/etc/suricata/rules/emerging-web_specific_apps.rules:5406<br />
/etc/suricata/rules/emerging-worm.rules:14<br />
/etc/suricata/rules/files.rules:0<br />
/etc/suricata/rules/http-events.rules:19<br />
/etc/suricata/rules/rbn-malvertisers.rules:0<br />
/etc/suricata/rules/rbn.rules:0<br />
/etc/suricata/rules/smtp-events.rules:6<br />
/etc/suricata/rules/stream-events.rules:45<br />
/etc/suricata/rules/tls-events.rules:10<br />
/etc/suricata/rules/tor.rules:590<br />
root@LTS-64-1:~/Downloads/oisf# </blockquote>
<br />
<br />
<br />
Total rules enabled:<br />
<blockquote class="tr_bq">
root@LTS-64-1:~/Downloads/oisf# <b>grep ^alert /etc/suricata/rules/*.rules | wc -l</b><br />
14718<br />
root@LTS-64-1:~/Downloads/oisf# </blockquote>
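The same grep approach can be extended to count disabled (commented-out) rules as well. A minimal sketch on a small hypothetical rules file; in practice, point the greps at /etc/suricata/rules/*.rules:

```shell
# Build a tiny hypothetical rules file just to demonstrate the counting.
cat > /tmp/sample.rules <<'EOF'
alert tcp any any -> any any (msg:"enabled rule 1"; sid:1;)
#alert tcp any any -> any any (msg:"disabled rule"; sid:2;)
alert udp any any -> any any (msg:"enabled rule 2"; sid:3;)
EOF

# Enabled rules start with "alert"; disabled ones are commented out.
enabled=$(grep -c '^alert' /tmp/sample.rules)
disabled=$(grep -c '^#[[:space:]]*alert' /tmp/sample.rules)
echo "enabled=$enabled disabled=$disabled"
```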
<br />Pevmahttp://www.blogger.com/profile/07698265905172078652noreply@blogger.com0tag:blogger.com,1999:blog-6352560843819453131.post-29643696441258163572014-06-04T11:43:00.003-07:002014-06-04T11:51:49.485-07:0024 hr full log run with Suricata IDPS on a 10Gbps ISP line <br />
<br />
This is going to be quick :)<br />
<br />
<ul>
<li>9K rules (standard ET-Pro, not changed or edited)</li>
<li>Suricata 2.0.1 with AF_PACKET, 16 threads </li>
<li>number of hosts in HOME_NET - /21 /19 /19 /18 = about 34K hosts </li>
<li><u><b>24 hour run eve.json with all outputs enabled.</b></u></li>
</ul>
<br />
<br />
<br />
I used the following command (<b>it took a while</b> on a 54 GB log file :) ) - as suggested by @Packet Inspector (Twitter):<br />
<i><b>cat eve.json-20140604 | perl -ne 'print "$1\n" if /\"event_type\":\"(.*?)\"/' | sort | uniq -c </b></i><br />
<br />
<blockquote class="tr_bq">
root@suricata:/var/log/suricata/tmp# cat eve.json-20140604 | perl -ne 'print "$1\n" if /\"event_type\":\"(.*?)\"/' | sort | uniq -c <br />
384426 alert<br />
219594091 dns<br />
1384214 fileinfo<br />
3460078 http<br />
10304 ssh<br />
280184 tls<br />
root@suricata:/var/log/suricata/tmp# ls -lh<br />
total 54G<br />
-rw-r----- 1 root root 54G Jun 4 16:49 eve.json-20140604<br />
root@suricata:/var/log/suricata/tmp# </blockquote>
<br />
So basically we got (descending order) :<br />
<ul>
<li>219 594 091 - DNS</li>
<li> 3 460 078 - HTTP </li>
<li> 1 384 214 - FILEINFO</li>
<li> 384 426 - ALERTS </li>
<li> 280 184 - TLS</li>
<li> 10 304 - SSH</li>
</ul>
about 2600 logs per second on that particular day for that particular test run - yesterday.<br />
Tomorrow .... who knows :) <br />
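The same per-event-type tally can also be produced without perl, for example with sed. A sketch on a few hypothetical eve.json records (point it at the real log file in practice):

```shell
# A few hypothetical eve.json records for demonstration.
cat > /tmp/eve-sample.json <<'EOF'
{"timestamp":"t1","event_type":"dns","dns":{}}
{"timestamp":"t2","event_type":"http","http":{}}
{"timestamp":"t3","event_type":"dns","dns":{}}
{"timestamp":"t4","event_type":"alert","alert":{}}
EOF

# Extract the event_type value from each line, then tally.
sed -n 's/.*"event_type":"\([^"]*\)".*/\1/p' /tmp/eve-sample.json \
  | sort | uniq -c
```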
<br />
With these 8 rule files enabled:<br />
<blockquote class="tr_bq">
rule-files:<br />
- trojan.rules<br />
- dns.rules<br />
- malware.rules<br />
- md5.rules <br />
- local.rules<br />
- current_events.rules<br />
- mobile_malware.rules<br />
- user_agents.rules</blockquote>
<br />
<br />
<br />
<br />Pevmahttp://www.blogger.com/profile/07698265905172078652noreply@blogger.com4tag:blogger.com,1999:blog-6352560843819453131.post-59629558587054131972014-05-31T05:32:00.003-07:002014-05-31T05:32:40.625-07:00Logs per second on eve.json - the good and the bad news on a 10Gbps IDPS line inspection<br />
<br />
I found this one-liner, which gives the number of logs per second logged in eve.json:<br />
<blockquote class="tr_bq">
tail -f eve.json |perl -e 'while (<>) {$l++;if (time > $e) {$e=time;print "$l\n";$l=0}}'</blockquote>
I take no credit for it - I got it from <a href="http://www.commandlinefu.com/commands/view/5490/lines-per-second-in-a-log-file" target="_blank">commandlinefu</a> <br />
<br />
<br />
<blockquote class="tr_bq">
tail -f eve.json |perl -e 'while (<>) {$l++;if (time > $e) {$e=time;print "$l\n";$l=0}}'<br />1<br />193<br />3301<br />3402<br />3862<br />3411<br />3719<br />3467<br />3522<br />3127<br />3354<br />^C<br /></blockquote>
<br />
Bearing in mind this is at Saturday lunch time... 3-3.5K logs per second, which translates to a minimum of 4-4.5K logs per second on a working day.<br />
I had "only" these logs enabled in suricata.yaml in the eve log section - dns, http, alert and ssh - on a 10Gbps Suricata 2.0.1 IDS sensor:<br />
<br />
<blockquote class="tr_bq">
# "United" event log in JSON format<br /> - eve-log:<br /> enabled: yes<br /> type: file #file|syslog|unix_dgram|unix_stream<br /> filename: eve.json<br /> # the following are valid when type: syslog above<br /> #identity: "suricata"<br /> #facility: local5<br /> #level: Info ## possible levels: Emergency, Alert, Critical,<br /> ## Error, Warning, Notice, Info, Debug<br /> types:<br /> - alert<br /> - http:<br /> extended: yes # enable this for extended logging information<br /> - dns<br /> #- tls:<br /> #extended: yes # enable this for extended logging information<br /> #- files:<br /> #force-magic: yes # force logging magic on all logged files<br /> #force-md5: yes # force logging of md5 checksums<br /> #- drop<br /> - ssh<br /> append: yes</blockquote>
<br />
If you enable "files" and "tls" as well, that will probably increase it to about 5-6K logs per second (maybe even more, depending on the type of traffic) with that set up.<br />
<br />
<h2>
The good news:</h2>
eve.json logs in standard JSON format (JavaScript Object Notation). So there are A LOT of log analysis solutions and software both open source, free and/or commercial that can digest and run analysis on JSON logs.<br />
<br />
<h2>
The bad news:</h2>
How many log analysis solutions can "really" handle 5K logs per second -<br />
<ul>
<li>indexing, </li>
<li>query, </li>
<li>search, </li>
<li>report generation, </li>
<li>log correlation, </li>
<li>filter searches by key fields,</li>
<li>nice graphs - "eye candy" for the management and/or customer,</li>
</ul>
all that while being fast?<br />
(and like that on at least 20 days of data from a 10Gbps IDPS Suricata sensor)<br />
<br />
...aka 18 mil per hour ...or 432 mil log records per day<br />
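The arithmetic behind those totals is plain shell math, taking the 5K logs per second figure from above as the assumed sustained rate:

```shell
# Assumed sustained rate from the measurement above.
logs_per_sec=5000

per_hour=$((logs_per_sec * 3600))   # logs per hour
per_day=$((per_hour * 24))          # logs per day
echo "per hour: $per_hour, per day: $per_day"
```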
<br />
Another perspective -> 54-70GB of logs a day... <br />
<br />
<br />
<h2>
Conclusion</h2>
Deploying and tuning Suricata IDS/IPS is the first important step. Then you need to handle all the data that comes out of the sensor.<br />
You should very carefully consider your goals, requirements and design and do Proof of Concept and test runs before you end up in a production situation in which you can't handle what you asked for :)<br />
<br />
<br />
<br />
<br />Pevmahttp://www.blogger.com/profile/07698265905172078652noreply@blogger.com2tag:blogger.com,1999:blog-6352560843819453131.post-23018438162689423792014-05-24T11:26:00.000-07:002014-05-25T07:34:33.014-07:00Playing with memory consumption, algorithms and af_packet ring-size in Suricata IDPS<br />
<br />
<br />
How selecting the correct pattern-matcher algorithm can make the difference between 40% and 4% packet drops when inspecting a 10Gbps traffic line.<br />
<br />
In this article I describe the specifics through which I was able to tune Suricata in IDS mode down to only 4.04% drops on a 10Gbps mirror port (ISP traffic) with 9K rules.<br />
<br />
At the bottom of the post you will find the relevant configuration along with the suricata.log output. It is highly inadvisable to just copy/paste it, since every set up is unique - you will have to test what best suits your needs.<br />
<br />
<h2>
Set up</h2>
<br />
<ul>
<li>Suricata (from git, but basically 2.0.1) with AF_PACKET, 16 threads</li>
<li>16 (2.7 GhZ) cores with Hyper-threading enabled </li>
<li>64G RAM</li>
<li>Ubuntu LTS Precise (with upgraded kernel 3.14) -> Linux suricata 3.14.0-031400-generic #201403310035 SMP Mon Mar 31 04:36:23 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux</li>
<li>Intel 82599EB 10-Gigabit SFI/SFP+ with CPU affinity (<a href="http://pevma.blogspot.se/2014/03/suricata-prepearing-10gbps-network.html" target="_blank">as described here</a>)</li>
<li>9K rules (standard ET-Pro, not changed or edited)</li>
<li>number of hosts in HOME_NET - /21 /19 /19 /18 = about 34K hosts</li>
<li>MTU 1522 on the IDS/listening interface</li>
</ul>
<br />
Bummer ... why is the MTU mentioned here ... for a good reason!!<br />
Bear with me for a sec and you will see why.<br />
<br />
<h2>
Tuning Stage I - af_packet and ring-size</h2>
<br />
Let's start with the af-packet section in suricata.yaml -> the ring-size variable.<br />
With it you can define how many packets can be buffered, on a per thread basis.<br />
<br />
Example:<br />
ring-size: 100000<br />
would mean that Suricata will create a buffer of 100K packets per thread.<br />
<br />
In other words if you have (in your suricata.yaml's af-packet section)<br />
threads: 16<br />
ring-size: 100000<br />
that would mean 16x100K buffers = 1,6 mil packets in total.<br />
<br />
So what does this mean for memory consumption?<br />
Well here is where the MTU comes into play.<br />
<br />
<i><b>MTU size * ring_size * 16 threads</b></i> <br />
<br />
<br />
or<br />
<i><b>1522 * 100 000 * 16 = 2435200000 bytes = 2,3 GBytes</b></i><br />
So with that set up, Suricata will reserve 2,3 GB RAM right away at start up.<br />
<br />
FYI - With the current set up we have about 1,5 mil incoming pps (packets per second) <br />
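That buffer reservation is plain multiplication and can be sanity-checked with shell arithmetic, using the values from this set up:

```shell
# Values from this particular set up.
mtu=1522
ring_size=100000
threads=16

# MTU size * ring_size * threads
total_bytes=$((mtu * ring_size * threads))
echo "af_packet buffer memory: $total_bytes bytes"
# i.e. roughly 2.3 GB reserved at start up
```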
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEggmYIVvwBtaaM61xdI_OnVBWWGdX2rz356f42Lji1R1O7NHNBOOcQExbu94gKbTmHIr_cTkVhQdHYu8Q9ZDkSx4Rk7IcY0IoiDzT-_5l4vzw29PcNCt8dlbBEokrQCjt7UGzI7x26zgls/s1600/Paksets-per-second.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEggmYIVvwBtaaM61xdI_OnVBWWGdX2rz356f42Lji1R1O7NHNBOOcQExbu94gKbTmHIr_cTkVhQdHYu8Q9ZDkSx4Rk7IcY0IoiDzT-_5l4vzw29PcNCt8dlbBEokrQCjt7UGzI7x26zgls/s1600/Paksets-per-second.PNG" height="105" width="400" /></a></div>
<br />
<br />
<br />
<h2>
Tuning Stage II - pattern-matcher algorithm (mpm-algo)</h2>
<br />
The <i><b>mpm-algo</b></i> variable in suricata.yaml selects which multi-pattern-matcher (MPM) algorithm Suricata will use for signature (rule) group matching. It is very important and has a huge performance impact, especially in combination with these settings:<br />
<br />
<i><b>sgh-mpm-context: single<br />sgh-mpm-context: full<br />profile: custom<br />profile: low<br />profile: medium<br />profile: high</b></i><br />
More on this can be found <a href="https://redmine.openinfosecfoundation.org/projects/suricata/wiki/High_Performance_Configuration" target="_blank">HERE</a>,<br />
where <i><b>profile: custom</b></i> means you can specify the group values yourself.<br />
<br />
The algorithm selected throughout this article is:<br />
<i><b>mpm-algo: ac</b></i><br />
<br />
Below you will find some test cases for memory consumption at Suricata start up time.<br />
(Only the values shown in each particular Case are changed; the rest of the suricata.yaml config<br />
is the same and not touched during these test cases.)<br />
<br />
<h2>
Case 1</h2>
24GB RAM at start up<br />
<br />
<blockquote class="tr_bq">
detect-engine:<br />
<b> - profile: custom</b><br />
<b>- custom-values:</b><br />
toclient-src-groups: 200<br />
toclient-dst-groups: 200<br />
toclient-sp-groups: 200<br />
toclient-dp-groups: 300<br />
toserver-src-groups: 200<br />
toserver-dst-groups: 400<br />
toserver-sp-groups: 200<br />
toserver-dp-groups: 250<br />
<b> - sgh-mpm-context: single</b></blockquote>
<br />
af_packet ring-size: <b>1000000</b><br />
16 threads <br />
<br />
<br />
<u><b>Notice:</b></u> 1 mil ring size with sgh-mpm-context: single, that gave me 19% drops:<br />
<blockquote class="tr_bq">
(util-device.c:185) <Notice> (LiveDeviceListClean) -- Stats for 'eth3': pkts: 4997993133, drop: 949059741 (18.99%), invalid chksum: 0</blockquote>
<br />
<h2>
Case 2</h2>
10GB RAM at start up<br />
<br />
<blockquote class="tr_bq">
detect-engine:<br />
<b> - profile: low</b><br />
- custom-values:<br />
toclient-src-groups: 200<br />
toclient-dst-groups: 200<br />
toclient-sp-groups: 200<br />
toclient-dp-groups: 300<br />
toserver-src-groups: 200<br />
toserver-dst-groups: 400<br />
toserver-sp-groups: 200<br />
toserver-dp-groups: 250<br />
<b> - sgh-mpm-context: full</b></blockquote>
<br />
af_packet ring-size: <b>50000</b><br />
16 threads <br />
<br />
<br />
<h2>
Case 3</h2>
26GB RAM at start up<br />
<br />
<blockquote class="tr_bq">
detect-engine:<br />
<b>- profile: high</b><br />
- custom-values:<br />
toclient-src-groups: 200<br />
toclient-dst-groups: 200<br />
toclient-sp-groups: 200<br />
toclient-dp-groups: 300<br />
toserver-src-groups: 200<br />
toserver-dst-groups: 400<br />
toserver-sp-groups: 200<br />
toserver-dp-groups: 250<br />
<b> - sgh-mpm-context: full</b></blockquote>
<br />
af_packet ring-size: <b>50000</b><br />
16 threads <br />
<br />
<br />
<h2>
Case 4</h2>
38GB RAM at start up<br />
<br />
<blockquote>
detect-engine:<br />
<b>- profile: high</b><br />
- custom-values:<br />
toclient-src-groups: 200<br />
toclient-dst-groups: 200<br />
toclient-sp-groups: 200<br />
toclient-dp-groups: 300<br />
toserver-src-groups: 200<br />
toserver-dst-groups: 400<br />
toserver-sp-groups: 200<br />
toserver-dp-groups: 250<br />
<b>- sgh-mpm-context: full</b></blockquote>
<br />
af_packet ring-size: <b>500000</b><br />
16 threads <br />
<br />
<u><b>Notice:</b></u> 500K ring size as compared to 50K in Case 3 and Case 2<br />
<br />
<br />
<b>The best config that worked for me was Case 4 !!</b><br />
<b>4.04% drops</b><br />
<br />
NOTE: depending on the number of rules, <b>sgh-mpm-context: full</b> can push Suricata's start up time to a few minutes...<br />
<br />
<br />
I also tested a different algorithm -> <b>ac-gfbs</b>:<br />
<blockquote class="tr_bq">
detect-engine:<br />
<b>- profile: custom</b><br />
- custom-values:<br />
toclient-src-groups: 200<br />
toclient-dst-groups: 200<br />
toclient-sp-groups: 200<br />
toclient-dp-groups: 300<br />
toserver-src-groups: 200<br />
toserver-dst-groups: 400<br />
toserver-sp-groups: 200<br />
toserver-dp-groups: 250<br />
<b> - sgh-mpm-context: full</b><br />
<br />
....<br />
<b>mpm-algo: ac-gfbs</b></blockquote>
<br />
with an af-packet ring size of 200K, but that gave me 45% drops...<br />
<blockquote class="tr_bq">
Stats for 'eth3': pkts: 496407325, drop: 227155539 (45.76%), invalid chksum: 0</blockquote>
<br />
<br />
<u><b>Bottom line:</b></u><br />
testing and selecting the correct mpm-algo and ring-size buffers can have a huge performance impact on your configuration!<br />
<br />
Below you will find the specifics of the suricata.yaml configuration alongside the output and evidence of suricata.log<br />
<br />
<br />
<h2>
Configuration </h2>
<br />
<blockquote class="tr_bq">
suricata --build-info<br />
This is Suricata version 2.0dev (rev 7e8f80b)<br />
Features: PCAP_SET_BUFF LIBPCAP_VERSION_MAJOR=1 PF_RING AF_PACKET HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1 HAVE_HTP_URI_NORMALIZE_HOOK HAVE_NSS HAVE_LIBJANSSON <br />
SIMD support: SSE_4_2 SSE_4_1 SSE_3 <br />
Atomic intrisics: 1 2 4 8 16 byte(s)<br />
64-bits, Little-endian architecture<br />
GCC version 4.6.3, C version 199901<br />
compiled with -fstack-protector<br />
compiled with _FORTIFY_SOURCE=2<br />
L1 cache line size (CLS)=64<br />
compiled with LibHTP v0.5.11, linked against LibHTP v0.5.11<br />
Suricata Configuration:<br />
AF_PACKET support: yes<br />
PF_RING support: yes<br />
NFQueue support: no<br />
IPFW support: no<br />
DAG enabled: no<br />
Napatech enabled: no<br />
Unix socket enabled: yes<br />
Detection enabled: yes<br />
<br />
libnss support: yes<br />
libnspr support: yes<br />
libjansson support: yes<br />
Prelude support: no<br />
PCRE jit: no<br />
libluajit: no<br />
libgeoip: no<br />
Non-bundled htp: no<br />
Old barnyard2 support: no<br />
CUDA enabled: no<br />
<br />
Suricatasc install: yes<br />
<br />
Unit tests enabled: no<br />
Debug output enabled: no<br />
Debug validation enabled: no<br />
Profiling enabled: no<br />
Profiling locks enabled: no<br />
Coccinelle / spatch: yes<br />
<br />
Generic build parameters:<br />
Installation prefix (--prefix): /usr/local<br />
Configuration directory (--sysconfdir): /usr/local/etc/suricata/<br />
Log directory (--localstatedir) : /usr/local/var/log/suricata/<br />
<br />
Host: x86_64-unknown-linux-gnu<br />
GCC binary: gcc<br />
GCC Protect enabled: no<br />
GCC march native enabled: yes<br />
GCC Profile enabled: no</blockquote>
<br />
<br />
In suricata .yaml:<br />
<br />
<br />
<blockquote>
# If you are using the CUDA pattern matcher (mpm-algo: ac-cuda), different rules<br />
# apply. In that case try something like 60000 or more. This is because the CUDA<br />
# pattern matcher buffers and scans as many packets as possible in parallel.<br />
#max-pending-packets: 1024<br />
<b>max-pending-packets: 65534</b><br />
<br />
# Runmode the engine should use. Please check --list-runmodes to get the available<br />
# runmodes for each packet acquisition method. Defaults to "autofp" (auto flow pinned<br />
# load balancing).<br />
#runmode: autofp<br />
<b>runmode: workers</b><br />
<br />
...<br />
...<br />
<br />
# af-packet support<br />
# Set threads to > 1 to use PACKET_FANOUT support<br />
<b>af-packet:</b><br />
- interface: eth3<br />
# Number of receive threads (>1 will enable experimental flow pinned<br />
# runmode)<br />
<b> threads: 16</b><br />
# Default clusterid. AF_PACKET will load balance packets based on flow.<br />
# All threads/processes that will participate need to have the same<br />
# clusterid.<br />
cluster-id: 98<br />
# Default AF_PACKET cluster type. AF_PACKET can load balance per flow or per hash.<br />
# This is only supported for Linux kernel > 3.1<br />
# possible value are:<br />
# * cluster_round_robin: round robin load balancing<br />
# * cluster_flow: all packets of a given flow are send to the same socket<br />
# * cluster_cpu: all packets treated in kernel by a CPU are send to the same socket<br />
<b> cluster-type: cluster_cpu</b><br />
# In some fragmentation case, the hash can not be computed. If "defrag" is set<br />
# to yes, the kernel will do the needed defragmentation before sending the packets.<br />
defrag: no<br />
# To use the ring feature of AF_PACKET, set 'use-mmap' to yes<br />
<b>use-mmap: yes</b><br />
# Ring size will be computed with respect to max_pending_packets and number<br />
# of threads. You can set manually the ring size in number of packets by setting<br />
# the following value. If you are using flow cluster-type and have really network<br />
# intensive single-flow you could want to set the ring-size independantly of the number<br />
# of threads:<br />
<b> ring-size: 500000</b><br />
# On busy system, this could help to set it to yes to recover from a packet drop<br />
# phase. This will result in some packets (at max a ring flush) being non treated.<br />
#use-emergency-flush: yes<br />
# recv buffer size, increase value could improve performance<br />
# buffer-size: 100000<br />
# Set to yes to disable promiscuous mode<br />
# disable-promisc: no<br />
# Choose checksum verification mode for the interface. At the moment<br />
# of the capture, some packets may be with an invalid checksum due to<br />
# offloading to the network card of the checksum computation.<br />
# Possible values are:<br />
# - kernel: use indication sent by kernel for each packet (default)<br />
# - yes: checksum validation is forced<br />
# - no: checksum validation is disabled<br />
# - auto: suricata uses a statistical approach to detect when<br />
# checksum off-loading is used.<br />
# Warning: 'checksum-validation' must be set to yes to have any validation<br />
checksum-checks: kernel<br />
# BPF filter to apply to this interface. The pcap filter syntax apply here.<br />
#bpf-filter: port 80 or udp<br />
# You can use the following variables to activate AF_PACKET tap od IPS mode.<br />
# If copy-mode is set to ips or tap, the traffic coming to the current<br />
# interface will be copied to the copy-iface interface. If 'tap' is set, the<br />
# copy is complete. If 'ips' is set, the packet matching a 'drop' action<br />
# will not be copied.<br />
<br />
...</blockquote>
<blockquote>
... <br />
<br />
detect-engine:<br />
<b> - profile: high</b><br />
- custom-values:<br />
toclient-src-groups: 200<br />
toclient-dst-groups: 200<br />
toclient-sp-groups: 200<br />
toclient-dp-groups: 300<br />
toserver-src-groups: 200<br />
toserver-dst-groups: 400<br />
toserver-sp-groups: 200<br />
toserver-dp-groups: 250<br />
<b> - sgh-mpm-context: full</b><br />
- inspection-recursion-limit: 1500<br />
<br />
...<br />
<br />
....<br />
<b>mpm-algo: ac</b><br />
....</blockquote>
<blockquote>
....<br />
<br />
rule-files:<br />
- trojan.rules<br />
- dns.rules<br />
- malware.rules<br />
- md5.rules <br />
- local.rules<br />
- current_events.rules<br />
- mobile_malware.rules<br />
- user_agents.rules</blockquote>
<br />
and the Suricata.log - 24 hour run inspecting a 10Gbps line with 9K rules.<br />
(at the bottom you will find the final stats<br />
<u><b>Stats for 'eth3': pkts: 125740002178, drop: 5075326318 (4.04%)</b></u> ):<br />
<br />
<blockquote class="tr_bq">
cat StatsByDate/suricata-2014-05-25.log <br />
[26428] 24/5/2014 -- 01:46:01 - (suricata.c:1003) <Notice> (SCPrintVersion) -- This is <u><b>Suricata version 2.0dev (rev 7e8f80b)</b></u><br />
[26428] 24/5/2014 -- 01:46:01 - (util-cpu.c:170) <Info> (UtilCpuPrintSummary) -- CPUs/cores online: 16<br />
[26428] 24/5/2014 -- 01:46:01 - (app-layer-htp.c:2218) <Info> (HTPConfigSetDefaultsPhase2) -- 'default' server has 'request-body-minimal-inspect-size' set to 33882 and 'request-body-inspect-window' set to 4053 after randomization.<br />
[26428] 24/5/2014 -- 01:46:01 - (app-layer-htp.c:2233) <Info> (HTPConfigSetDefaultsPhase2) -- 'default' server has 'response-body-minimal-inspect-size' set to 33695 and 'response-body-inspect-window' set to 4218 after randomization.<br />
[26428] 24/5/2014 -- 01:46:01 - (app-layer-htp-mem.c:59) <Info> (HTPParseMemcap) -- HTTP memcap: 6442450944<br />
[26428] 24/5/2014 -- 01:46:01 - (app-layer-htp.c:2218) <Info> (HTPConfigSetDefaultsPhase2) -- 'apache' server has 'request-body-minimal-inspect-size' set to 34116 and 'request-body-inspect-window' set to 3973 after randomization.<br />
[26428] 24/5/2014 -- 01:46:01 - (app-layer-htp.c:2233) <Info> (HTPConfigSetDefaultsPhase2) -- 'apache' server has 'response-body-minimal-inspect-size' set to 32229 and 'response-body-inspect-window' set to 4205 after randomization.<br />
[26428] 24/5/2014 -- 01:46:01 - (app-layer-htp.c:2218) <Info> (HTPConfigSetDefaultsPhase2) -- 'iis7' server has 'request-body-minimal-inspect-size' set to 32040 and 'request-body-inspect-window' set to 4118 after randomization.<br />
[26428] 24/5/2014 -- 01:46:01 - (app-layer-htp.c:2233) <Info> (HTPConfigSetDefaultsPhase2) -- 'iis7' server has 'response-body-minimal-inspect-size' set to 32694 and 'response-body-inspect-window' set to 4148 after randomization.<br />
[26428] 24/5/2014 -- 01:46:01 - (app-layer-dns-udp.c:324) <Info> (DNSUDPConfigure) -- DNS request flood protection level: 500<br />
[26428] 24/5/2014 -- 01:46:01 - (app-layer-dns-udp.c:336) <Info> (DNSUDPConfigure) -- DNS per flow memcap (state-memcap): 524288<br />
[26428] 24/5/2014 -- 01:46:01 - (app-layer-dns-udp.c:348) <Info> (DNSUDPConfigure) -- DNS global memcap: 4294967296<br />
[26428] 24/5/2014 -- 01:46:01 - (util-ioctl.c:99) <Info> (GetIfaceMTU) -- Found an MTU of 1500 for 'eth3'<br />
[26428] 24/5/2014 -- 01:46:01 - (defrag-hash.c:212) <Info> (DefragInitConfig) -- allocated 3670016 bytes of memory for the defrag hash... 65536 buckets of size 56<br />
[26428] 24/5/2014 -- 01:46:02 - (defrag-hash.c:237) <Info> (DefragInitConfig) -- preallocated 65535 defrag trackers of size 152<br />
[26428] 24/5/2014 -- 01:46:02 - (defrag-hash.c:244) <Info> (DefragInitConfig) -- defrag memory usage: 13631336 bytes, maximum: 536870912<br />
[26428] 24/5/2014 -- 01:46:02 - (tmqh-flow.c:76) <Info> (TmqhFlowRegister) -- AutoFP mode using default "Active Packets" flow load balancer<br />
[26429] 24/5/2014 -- 01:46:02 - (tmqh-packetpool.c:142) <Info> (PacketPoolInit) -- preallocated 65534 packets. Total memory 228320456<br />
[26429] 24/5/2014 -- 01:46:02 - (host.c:205) <Info> (HostInitConfig) -- allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64<br />
[26429] 24/5/2014 -- 01:46:02 - (host.c:228) <Info> (HostInitConfig) -- preallocated 1000 hosts of size 112<br />
[26429] 24/5/2014 -- 01:46:02 - (host.c:230) <Info> (HostInitConfig) -- host memory usage: 390144 bytes, maximum: 16777216<br />
[26429] 24/5/2014 -- 01:46:02 - (flow.c:391) <Info> (FlowInitConfig) -- allocated 67108864 bytes of memory for the flow hash... 1048576 buckets of size 64<br />
[26429] 24/5/2014 -- 01:46:02 - (flow.c:415) <Info> (FlowInitConfig) -- preallocated 1048576 flows of size 280<br />
[26429] 24/5/2014 -- 01:46:02 - (flow.c:417) <Info> (FlowInitConfig) -- flow memory usage: 369098752 bytes, maximum: 1073741824<br />
[26429] 24/5/2014 -- 01:46:02 - (reputation.c:459) <Info> (SRepInit) -- IP reputation disabled<br />
[26429] 24/5/2014 -- 01:46:02 - (util-magic.c:62) <Info> (MagicInit) -- using magic-file /usr/share/file/magic<br />
[26429] 24/5/2014 -- 01:46:02 - (suricata.c:1835) <Info> (SetupDelayedDetect) -- Delayed detect disabled<br />
[26429] 24/5/2014 -- 01:46:04 - (detect-filemd5.c:275) <Info> (DetectFileMd5Parse) -- MD5 hash size 2143616 bytes<br />
[26429] 24/5/2014 -- 01:46:05 - (detect.c:452) <Info> (SigLoadSignatures) -- 8 rule files processed. 9055 rules successfully loaded, 0 rules failed<br />
[26429] 24/5/2014 -- 01:46:05 - (detect.c:2591) <Info> (SigAddressPrepareStage1) -- <u><b>9055 signatures processed. 1 are IP-only rules, 2299 are inspecting packet payload, 7541 inspect application layer, 0 are decoder event only</b></u><br />
[26429] 24/5/2014 -- 01:46:05 - (detect.c:2594) <Info> (SigAddressPrepareStage1) -- building signature grouping structure, stage 1: preprocessing rules... complete<br />
[26429] 24/5/2014 -- 01:46:05 - (detect.c:3217) <Info> (SigAddressPrepareStage2) -- building signature grouping structure, stage 2: building source address list... complete<br />
[26429] 24/5/2014 -- 01:48:35 - (detect.c:3859) <Info> (SigAddressPrepareStage3) -- building signature grouping structure, stage 3: building destination address lists... complete<br />
[26429] 24/5/2014 -- 01:48:35 - (util-threshold-config.c:1202) <Info> (SCThresholdConfParseFile) -- Threshold config parsed: 0 rule(s) found<br />
[26429] 24/5/2014 -- 01:48:35 - (util-coredump-config.c:122) <Info> (CoredumpLoadConfig) -- Core dump size set to unlimited.<br />
[26429] 24/5/2014 -- 01:48:35 - (util-logopenfile.c:209) <Info> (SCConfLogOpenGeneric) -- eve-log output device (regular) initialized: eve.json<br />
[26429] 24/5/2014 -- 01:48:35 - (output-json.c:471) <Info> (OutputJsonInitCtx) -- returning output_ctx 0x5b418d90<br />
[26429] 24/5/2014 -- 01:48:35 - (runmodes.c:672) <Info> (RunModeInitializeOutputs) -- enabling 'eve-log' module 'alert'<br />
[26429] 24/5/2014 -- 01:48:35 - (runmodes.c:672) <Info> (RunModeInitializeOutputs) -- enabling 'eve-log' module 'http'<br />
[26429] 24/5/2014 -- 01:48:35 - (runmodes.c:672) <Info> (RunModeInitializeOutputs) -- enabling 'eve-log' module 'dns'<br />
[26429] 24/5/2014 -- 01:48:35 - (runmodes.c:672) <Info> (RunModeInitializeOutputs) -- enabling 'eve-log' module 'ssh'<br />
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "management-cpu-set"<br />
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'low'<br />
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "receive-cpu-set"<br />
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "decode-cpu-set"<br />
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "stream-cpu-set"<br />
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "detect-cpu-set"<br />
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'high'<br />
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "verdict-cpu-set"<br />
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'high'<br />
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "reject-cpu-set"<br />
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'low'<br />
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "output-cpu-set"<br />
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'medium'<br />
[26429] 24/5/2014 -- 01:48:35 - (runmode-af-packet.c:198) <Info> (ParseAFPConfig) -- Enabling mmaped capture on iface eth3<br />
[26429] 24/5/2014 -- 01:48:35 - (runmode-af-packet.c:266) <Info> (ParseAFPConfig) -- Using cpu cluster mode for AF_PACKET (iface eth3)<br />
[26429] 24/5/2014 -- 01:48:35 - (util-runmodes.c:558) <Info> (RunModeSetLiveCaptureWorkersForDevice) -- Going to use 16 thread(s)<br />
[26431] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 0<br />
[26431] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth31" Module to cpu/core 0, thread id 26431<br />
[26431] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode<br />
[26431] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call<br />
[26432] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 1<br />
[26432] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth32" Module to cpu/core 1, thread id 26432<br />
[26432] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode<br />
[26432] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call<br />
[26433] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 2<br />
[26433] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth33" Module to cpu/core 2, thread id 26433<br />
[26433] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode<br />
[26433] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call<br />
[26434] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 3<br />
[26434] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth34" Module to cpu/core 3, thread id 26434<br />
[26434] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode<br />
[26434] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call<br />
[26435] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 4<br />
[26435] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth35" Module to cpu/core 4, thread id 26435<br />
[26435] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode<br />
[26435] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call<br />
[26436] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 5<br />
[26436] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth36" Module to cpu/core 5, thread id 26436<br />
[26436] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode<br />
[26436] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call<br />
[26437] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 6<br />
[26437] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth37" Module to cpu/core 6, thread id 26437<br />
[26437] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode<br />
[26437] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call<br />
[26438] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 7<br />
[26438] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth38" Module to cpu/core 7, thread id 26438<br />
[26438] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode<br />
[26438] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call<br />
[26439] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 8<br />
[26439] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth39" Module to cpu/core 8, thread id 26439<br />
[26439] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode<br />
[26439] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call<br />
[26440] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 9<br />
[26440] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth310" Module to cpu/core 9, thread id 26440<br />
[26440] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode<br />
[26440] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call<br />
[26441] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 10<br />
[26441] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth311" Module to cpu/core 10, thread id 26441<br />
[26441] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode<br />
[26441] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call<br />
[26442] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 11<br />
[26442] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth312" Module to cpu/core 11, thread id 26442<br />
[26442] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode<br />
[26442] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call<br />
[26443] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 12<br />
[26443] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth313" Module to cpu/core 12, thread id 26443<br />
[26443] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode<br />
[26443] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call<br />
[26444] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 13<br />
[26444] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth314" Module to cpu/core 13, thread id 26444<br />
[26444] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode<br />
[26444] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call<br />
[26445] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 14<br />
[26445] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth315" Module to cpu/core 14, thread id 26445<br />
[26445] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode<br />
[26445] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call<br />
[26446] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 15<br />
[26446] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth316" Module to cpu/core 15, thread id 26446<br />
[26446] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode<br />
[26446] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call<br />
[26429] 24/5/2014 -- 01:48:35 - (runmode-af-packet.c:527) <Info> (RunModeIdsAFPWorkers) -- RunModeIdsAFPWorkers initialised<br />
[26447] 24/5/2014 -- 01:48:35 - (tm-threads.c:1343) <Info> (TmThreadSetupOptions) -- Setting prio 2 for "FlowManagerThread" thread , thread id 26447<br />
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:371) <Info> (StreamTcpInitConfig) -- stream "prealloc-sessions": 375000 (per thread)<br />
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:387) <Info> (StreamTcpInitConfig) -- stream "memcap": 15032385536<br />
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:393) <Info> (StreamTcpInitConfig) -- stream "midstream" session pickups: disabled<br />
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:399) <Info> (StreamTcpInitConfig) -- stream "async-oneside": disabled<br />
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:416) <Info> (StreamTcpInitConfig) -- stream "checksum-validation": disabled<br />
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:438) <Info> (StreamTcpInitConfig) -- stream."inline": disabled<br />
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:451) <Info> (StreamTcpInitConfig) -- stream "max-synack-queued": 5<br />
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:469) <Info> (StreamTcpInitConfig) -- stream.reassembly "memcap": 32212254720<br />
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:487) <Info> (StreamTcpInitConfig) -- stream.reassembly "depth": 12582912<br />
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:570) <Info> (StreamTcpInitConfig) -- stream.reassembly "toserver-chunk-size": 2585<br />
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:572) <Info> (StreamTcpInitConfig) -- stream.reassembly "toclient-chunk-size": 2680<br />
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:585) <Info> (StreamTcpInitConfig) -- stream.reassembly.raw: enabled<br />
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 4, prealloc 256<br />
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 16, prealloc 512<br />
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 112, prealloc 512<br />
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 248, prealloc 512<br />
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 512, prealloc 512<br />
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 768, prealloc 1024<br />
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 1448, prealloc 1024<br />
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 65535, prealloc 128<br />
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:461) <Info> (StreamTcpReassemblyConfig) -- stream.reassembly "chunk-prealloc": 250<br />
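The segment-pool lines above already let us do a first piece of the memory arithmetic. A minimal sketch in Python, using the pktsize/prealloc pairs from this log and assuming one preallocated pool set per packet-processing thread (an assumption for this particular setup, not a general rule):

```python
# Preallocated TCP segment-pool memory, taken from the
# StreamTcpReassemblyConfig lines in the log above.
# (pktsize, prealloc) pairs as logged:
pools = [
    (4, 256), (16, 512), (112, 512), (248, 512),
    (512, 512), (768, 1024), (1448, 1024), (65535, 128),
]

per_thread = sum(size * count for size, count in pools)
threads = 16  # af-packet worker threads in this setup
total = per_thread * threads

print(f"per thread: {per_thread / 2**20:.1f} MiB")    # ~10.6 MiB
print(f"{threads} threads: {total / 2**20:.1f} MiB")  # ~169.6 MiB
```

The dominant term is the 128 preallocated 65535-byte segments; the seven smaller pools together contribute under 3 MiB per thread.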
[26448] 24/5/2014 -- 01:48:35 - (tm-threads.c:1343) <Info> (TmThreadSetupOptions) -- Setting prio 2 for "SCPerfWakeupThread" thread , thread id 26448<br />
[26449] 24/5/2014 -- 01:48:35 - (tm-threads.c:1343) <Info> (TmThreadSetupOptions) -- Setting prio 2 for "SCPerfMgmtThread" thread , thread id 26449<br />
[26429] 24/5/2014 -- 01:48:35 - (tm-threads.c:2196) <Notice> (TmThreadWaitOnThreadInit) -- all 16 packet processing threads, 3 management threads initialized, engine started.<br />
[26431] 24/5/2014 -- 01:48:35 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3<br />
[26431] 24/5/2014 -- 01:48:35 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3<br />
[26431] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020<br />
[26431] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 6<br />
[26431] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth31 using socket 6<br />
[26432] 24/5/2014 -- 01:48:36 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3<br />
[26432] 24/5/2014 -- 01:48:36 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3<br />
[26432] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020<br />
[26432] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 7<br />
[26432] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth32 using socket 7<br />
[26433] 24/5/2014 -- 01:48:36 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3<br />
[26433] 24/5/2014 -- 01:48:36 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3<br />
[26433] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020<br />
[26433] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 8<br />
[26433] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth33 using socket 8<br />
[26434] 24/5/2014 -- 01:48:36 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3<br />
[26434] 24/5/2014 -- 01:48:36 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3<br />
[26434] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020<br />
[26434] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 9<br />
[26434] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth34 using socket 9<br />
[26435] 24/5/2014 -- 01:48:36 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3<br />
[26435] 24/5/2014 -- 01:48:36 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3<br />
[26435] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020<br />
[26435] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 10<br />
[26435] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth35 using socket 10<br />
[26436] 24/5/2014 -- 01:48:36 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3<br />
[26436] 24/5/2014 -- 01:48:36 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3<br />
[26436] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020<br />
[26436] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 11<br />
[26436] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth36 using socket 11<br />
[26437] 24/5/2014 -- 01:48:36 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3<br />
[26437] 24/5/2014 -- 01:48:36 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3<br />
[26437] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020<br />
[26437] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 12<br />
[26437] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth37 using socket 12<br />
[26438] 24/5/2014 -- 01:48:37 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3<br />
[26438] 24/5/2014 -- 01:48:37 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3<br />
[26438] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020<br />
[26438] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 13<br />
[26438] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth38 using socket 13<br />
[26439] 24/5/2014 -- 01:48:37 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3<br />
[26439] 24/5/2014 -- 01:48:37 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3<br />
[26439] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020<br />
[26439] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 14<br />
[26439] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth39 using socket 14<br />
[26440] 24/5/2014 -- 01:48:37 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3<br />
[26440] 24/5/2014 -- 01:48:37 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3<br />
[26440] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020<br />
[26440] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 15<br />
[26440] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth310 using socket 15<br />
[26441] 24/5/2014 -- 01:48:37 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3<br />
[26441] 24/5/2014 -- 01:48:37 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3<br />
[26441] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020<br />
[26441] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 16<br />
[26441] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth311 using socket 16<br />
[26442] 24/5/2014 -- 01:48:37 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3<br />
[26442] 24/5/2014 -- 01:48:37 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3<br />
[26442] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020<br />
[26442] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 17<br />
[26442] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth312 using socket 17<br />
[26443] 24/5/2014 -- 01:48:37 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3<br />
[26443] 24/5/2014 -- 01:48:37 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3<br />
[26443] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020<br />
[26443] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 18<br />
[26443] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth313 using socket 18<br />
[26444] 24/5/2014 -- 01:48:37 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3<br />
[26444] 24/5/2014 -- 01:48:37 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3<br />
[26444] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020<br />
[26444] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 19<br />
[26444] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth314 using socket 19<br />
[26445] 24/5/2014 -- 01:48:38 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3<br />
[26445] 24/5/2014 -- 01:48:38 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3<br />
[26445] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020<br />
[26445] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 20<br />
[26445] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth315 using socket 20<br />
[26446] 24/5/2014 -- 01:48:38 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3<br />
[26446] 24/5/2014 -- 01:48:38 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3<br />
[26446] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020<br />
[26446] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 21<br />
[26446] 24/5/2014 -- 01:48:38 - (source-af-packet.c:452) <Info> (AFPPeersListReachedInc) -- All AFP capture threads are running.<br />
[26446] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth316 using socket 21<br />
[26444] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth314<br />
[26445] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth315<br />
[26437] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth37<br />
[26432] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth32<br />
[26440] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth310<br />
[26434] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth34<br />
[26435] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth35<br />
[26443] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth313<br />
[26431] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth31<br />
[26441] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth311<br />
[26433] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth33<br />
[26442] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth312<br />
[26438] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth38<br />
[26436] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth36<br />
[26439] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth39<br />
[26446] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth316<br />
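With all sixteen capture threads up, the AFPComputeRingParams lines give us the AF_PACKET share of the memory budget: each socket mmaps block_nr blocks of block_size bytes. A quick sketch with the values from this run:

```python
# AF_PACKET RX ring memory, from the AFPComputeRingParams log lines:
# block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
block_size = 32768
block_nr = 25001
threads = 16  # one ring-mapped socket per worker thread

per_socket = block_size * block_nr  # bytes mmapped per socket
total = per_socket * threads

print(f"per socket: {per_socket / 2**30:.2f} GiB")    # ~0.76 GiB
print(f"{threads} sockets: {total / 2**30:.2f} GiB")  # ~12.21 GiB
```

Note that frame_size × frame_nr (1584 × 500020, roughly 755 MiB per socket) has to fit inside that mmapped block area, which is why the two sets of numbers track each other.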
[26429] 25/5/2014 -- 01:45:29 - (suricata.c:2300) <Notice> (main) -- Signal Received. Stopping engine.<br />
[26447] 25/5/2014 -- 01:45:30 - (flow-manager.c:561) <Info> (FlowManagerThread) -- 0 new flows, 0 established flows were timed out, 0 flows in closed state<br />
[26429] 25/5/2014 -- 01:45:30 - (suricata.c:1025) <Info> (SCPrintElapsedTime) -- time elapsed 86215.055s<br />
[26431] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth31) Kernel: Packets 8091169139, dropped 548918377<br />
[26431] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth31) Packets 7541009393, bytes 5856264226024<br />
[26431] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3174701772 TCP packets<br />
[26432] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth32) Kernel: Packets 7523006674, dropped 129092719<br />
[26432] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth32) Packets 7392869856, bytes 6039480366879<br />
[26432] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3273049553 TCP packets<br />
[26433] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth33) Kernel: Packets 7857365876, dropped 457724034<br />
[26433] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth33) Packets 7398849607, bytes 6186600745188<br />
[26433] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3254753683 TCP packets<br />
[26434] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth34) Kernel: Packets 7939368989, dropped 328011859<br />
[26434] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth34) Packets 7610498359, bytes 6023159311914<br />
[26434] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3294782895 TCP packets<br />
[26435] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth35) Kernel: Packets 7886105626, dropped 424755524<br />
[26435] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth35) Packets 7460672617, bytes 6304951058805<br />
[26435] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3301812001 TCP packets<br />
[26436] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth36) Kernel: Packets 7807382993, dropped 258291463<br />
[26436] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth36) Packets 7548467033, bytes 6347986611584<br />
[26436] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3359138126 TCP packets<br />
[26437] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth37) Kernel: Packets 7898330279, dropped 305037112<br />
[26437] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth37) Packets 7592601391, bytes 6136634057356<br />
[26437] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3263120334 TCP packets<br />
[26438] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth38) Kernel: Packets 7653871283, dropped 193628126<br />
[26438] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth38) Packets 7459608346, bytes 6164536552610<br />
[26438] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3337037621 TCP packets<br />
[26439] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth39) Kernel: Packets 7717771534, dropped 302582507<br />
[26439] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth39) Packets 7414991895, bytes 6068675614996<br />
[26439] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3256006501 TCP packets<br />
[26440] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth310) Kernel: Packets 7955692240, dropped 339489700<br />
[26440] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth310) Packets 7616019954, bytes 6170760218068<br />
[26440] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3309626387 TCP packets<br />
[26441] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth311) Kernel: Packets 8004841803, dropped 416027860<br />
[26441] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth311) Packets 7588633565, bytes 6152477758719<br />
[26441] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3229276967 TCP packets<br />
[26442] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth312) Kernel: Packets 7908991181, dropped 282658592<br />
[26442] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth312) Packets 7626056429, bytes 6374830613882<br />
[26442] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3289082310 TCP packets<br />
[26443] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth313) Kernel: Packets 7823655146, dropped 277468333<br />
[26443] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth313) Packets 7546046278, bytes 6174538196484<br />
[26443] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3264661076 TCP packets<br />
[26444] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth314) Kernel: Packets 7661949338, dropped 161041160<br />
[26444] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth314) Packets 7500367073, bytes 6191365130344<br />
[26444] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3299756326 TCP packets<br />
[26445] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth315) Kernel: Packets 8203393412, dropped 272996993<br />
[26445] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth315) Packets 7930265587, bytes 6802539594416<br />
[26445] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3258257071 TCP packets<br />
[26446] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth316) Kernel: Packets 7807106665, dropped 377601959<br />
[26446] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth316) Packets 7428994197, bytes 6140231305309<br />
[26446] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3337023147 TCP packets<br />
[26429] 25/5/2014 -- 01:45:31 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 4 had a peak use of 11396 segments, more than the prealloc setting of 256<br />
[26429] 25/5/2014 -- 01:45:31 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 16 had a peak use of 17178 segments, more than the prealloc setting of 512<br />
[26429] 25/5/2014 -- 01:45:31 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 112 had a peak use of 45436 segments, more than the prealloc setting of 512<br />
[26429] 25/5/2014 -- 01:45:31 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 248 had a peak use of 12049 segments, more than the prealloc setting of 512<br />
[26429] 25/5/2014 -- 01:45:31 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 512 had a peak use of 26386 segments, more than the prealloc setting of 512<br />
[26429] 25/5/2014 -- 01:45:31 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 768 had a peak use of 23371 segments, more than the prealloc setting of 1024<br />
[26429] 25/5/2014 -- 01:45:31 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 1448 had a peak use of 67781 segments, more than the prealloc setting of 1024<br />
[26429] 25/5/2014 -- 01:45:31 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 65535 had a peak use of 67333 segments, more than the prealloc setting of 128<br />
[26429] 25/5/2014 -- 01:45:31 - (stream.c:182) <Info> (StreamMsgQueuesDeinit) -- TCP segment chunk pool had a peak use of 13327 chunks, more than the prealloc setting of 250<br />
[26429] 25/5/2014 -- 01:45:31 - (host.c:245) <Info> (HostPrintStats) -- host memory usage: 390144 bytes, maximum: 16777216<br />
[26429] 25/5/2014 -- 01:45:44 - (detect.c:3890) <Info> (SigAddressCleanupStage1) -- cleaning up signature grouping structure... complete<br />
[26429] 25/5/2014 -- 01:45:44 - (util-device.c:185) <Notice> (LiveDeviceListClean) -- <u><b>Stats for 'eth3': pkts: 125740002178, drop: 5075326318 (4.04%)</b></u>, invalid chksum: 0</blockquote>
<br />
<br />
<br />
<br />
<br />
<br />Pevmahttp://www.blogger.com/profile/07698265905172078652noreply@blogger.com0tag:blogger.com,1999:blog-6352560843819453131.post-89654746945266750622014-05-04T06:51:00.000-07:002014-05-04T06:54:54.857-07:00Elasticsearch - err failed to connect to master - when changing/using a different IP address<br />
<br />
It is a general rule of thumb to first check your<br />
<blockquote class="tr_bq">
<b>/var/log/elasticsearch/elasticsearch.log </b></blockquote>
and<br />
<blockquote class="tr_bq">
<b>/var/log/logstash/logstash.log</b></blockquote>
whenever you experience issues with Kibana.<br />
<br />
I stumbled upon this when I changed the IP/Network of the interface of my test virtual machine holding an ELK (<a href="http://www.elasticsearch.org/overview/elkdownloads/" target="_blank">Elasticsearch/Logstash/Kibana</a>) installation to do log analysis for <a href="http://suricata-ids.org/" target="_blank">Suricata IDPS</a>.<br />
<br />
I managed to solve the issue based on those two sources:<br />
<a href="https://github.com/elasticsearch/elasticsearch/issues/4194">https://github.com/elasticsearch/elasticsearch/issues/4194</a><br />
<a href="http://www.concept47.com/austin_web_developer_blog/errors/elasticsearch-error-failed-to-connect-to-master/">http://www.concept47.com/austin_web_developer_blog/errors/elasticsearch-error-failed-to-connect-to-master/</a><br />
<br />
The new IP address is 192.168.1.166; the old one was 10.0.2.15<br />
(notice the errors in the logs below - Elasticsearch was still trying to connect to the old address):<br />
<br />
<blockquote class="tr_bq">
root@debian64:~/Work/# more /var/log/elasticsearch/elasticsearch.log <br />
[2014-05-04 07:17:24,960][INFO ][node ] [Jamal Afari] version[1.1.0], pid[7178], build[2181e11/2014-03-25T15:59:51Z]<br />
[2014-05-04 07:17:24,960][INFO ][node ] [Jamal Afari] initializing ...<br />
[2014-05-04 07:17:24,964][INFO ][plugins ] [Jamal Afari] loaded [], sites []<br />
[2014-05-04 07:17:27,828][INFO ][node ] [Jamal Afari] initialized<br />
[2014-05-04 07:17:27,828][INFO ][node ] [Jamal Afari] starting ...<br />
[2014-05-04 07:17:27,959][INFO ][transport ] [Jamal Afari] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/<b>192.168.1.166:9300</b>]}<br />
[2014-05-04 07:17:57,977][WARN ][discovery ] [Jamal Afari] waited for 30s and no initial state was set by the discovery<br />
[2014-05-04 07:17:57,978][INFO ][discovery ] [Jamal Afari] elasticsearch/F9HgSmYJQcS6bxdgdeurAA<br />
[2014-05-04 07:17:57,986][INFO ][http ] [Jamal Afari] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/<b>192.168.1.166:9200</b>]}<br />
[2014-05-04 07:17:58,017][INFO ][node ] [Jamal Afari] started<br />
[2014-05-04 07:18:01,026][WARN ][discovery.zen ] [Jamal Afari] <b>failed to connect to master [[Hellion][zcx2fIF2SrmwSYQ08la6PQ][LTS-64-1][inet[/10.0.2.15:9300]]]</b>, retrying...<br />
org.elasticsearch.transport.ConnectTransportException: [Hellion][inet[/10.0.2.15:9300]] connect_timeout[30s]<br />
at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:718)<br />
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:647)<br />
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:615)<br />
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:129)<br />
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:338)<br />
at org.elasticsearch.discovery.zen.ZenDiscovery.access$500(ZenDiscovery.java:79)<br />
at org.elasticsearch.discovery.zen.ZenDiscovery$1.run(ZenDiscovery.java:286)<br />
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)<br />
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)<br />
at java.lang.Thread.run(Thread.java:701)<br />
Caused by: org.elasticsearch.common.netty.channel.ConnectTimeoutException: connection timed out: /10.0.2.15:9300<br />
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.processConnectTimeout(NioClientBoss.java:137)<br />
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:83)<br />
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)<br />
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)<br />
.......<br />
.......<br />
.......<br />
[2014-05-04 07:37:05,783][WARN ][discovery.zen ] [Vivisector] failed to connect to master [[Hellion][zcx2fIF2SrmwSYQ08la6PQ][LTS-64-1][inet[/10.0.2.15:9300]]], retrying...<br />
org.elasticsearch.transport.ConnectTransportException: [Hellion][inet[/10.0.2.15:9300]] connect_timeout[30s]<br />
at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:718)<br />
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:647)<br />
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:615)<br />
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:129)<br />
at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:338)<br />
at org.elasticsearch.discovery.zen.ZenDiscovery.access$500(ZenDiscovery.java:79)<br />
at org.elasticsearch.discovery.zen.ZenDiscovery$1.run(ZenDiscovery.java:286)<br />
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)<br />
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)<br />
at java.lang.Thread.run(Thread.java:701)<br />
Caused by: org.elasticsearch.common.netty.channel.ConnectTimeoutException: connection timed out: /10.0.2.15:9300<br />
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.processConnectTimeout(NioClientBoss.java:137)<br />
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:83)<br />
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)<br />
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)<br />
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)<br />
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)<br />
... 3 more<br />
<br />
</blockquote>
That was causing all sorts of weird errors and failed queries in Kibana. The root of the problem was that I had changed the IP address of the ELK server.<br />
<br />
The solution is simple.<br />
Find the Discovery section in <b>/etc/elasticsearch/elasticsearch.yml</b><br />
and change this line from:<br />
<blockquote class="tr_bq">
# 1. Disable multicast discovery (enabled by default):<br />
#<br />
# discovery.zen.ping.multicast.enabled: false</blockquote>
<br />
to<br />
<br />
<blockquote class="tr_bq">
# 1. Disable multicast discovery (enabled by default):<br />
#<br />
discovery.zen.ping.multicast.enabled: false</blockquote>
<br />
Only remove the "<b> # </b>" in front of "<b>discovery.zen.ping.multicast.enabled: false</b>". <br />
Save and restart the service.<br />
<blockquote class="tr_bq">
service elasticsearch restart</blockquote>
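If you manage several nodes, the same one-line edit can be scripted. A minimal sketch using sed, demonstrated here on a throwaway copy of the file (on a real host you would point it at /etc/elasticsearch/elasticsearch.yml instead):

```shell
# Demonstrated on a throwaway copy, not the real config file:
cat > /tmp/es-discovery-test.yml <<'EOF'
# 1. Disable multicast discovery (enabled by default):
#
# discovery.zen.ping.multicast.enabled: false
EOF
# Strip the leading "# " only from the multicast line, leaving other comments alone:
sed -i 's/^# *\(discovery\.zen\.ping\.multicast\.enabled: false\)/\1/' /tmp/es-discovery-test.yml
grep '^discovery' /tmp/es-discovery-test.yml
```

The capture group restricts the edit to the exact setting, so the surrounding comment block in the Discovery section is left untouched.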
<br />
Then everything went back to normal.<br />
In /var/log/elasticsearch/elasticsearch.log:<br />
<br />
<blockquote class="tr_bq">
[2014-05-04 07:37:07,936][INFO ][node ] [Vivisector] stopping ...<br />
[2014-05-04 07:37:07,970][INFO ][node ] [Vivisector] stopped<br />
[2014-05-04 07:37:07,971][INFO ][node ] [Vivisector] closing ...<br />
[2014-05-04 07:37:07,979][INFO ][node ] [Vivisector] closed<br />
[2014-05-04 07:37:09,685][INFO ][node ] [Vibraxas] version[1.1.0], pid[5291], build[2181e11/2014-03-25T15:59:51Z]<br />
[2014-05-04 07:37:09,686][INFO ][node ] [Vibraxas] initializing ...<br />
[2014-05-04 07:37:09,689][INFO ][plugins ] [Vibraxas] loaded [], sites []<br />
[2014-05-04 07:37:12,597][INFO ][node ] [Vibraxas] initialized<br />
[2014-05-04 07:37:12,597][INFO ][node ] [Vibraxas] starting ...<br />
[2014-05-04 07:37:12,751][INFO ][transport ] [Vibraxas] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.1.166:9300]}<br />
[2014-05-04 07:37:15,777][INFO ][cluster.service ] [Vibraxas] new_master [Vibraxas][esQHE1EtTuWVK9MVNiQ5jA][debian64][inet[/192.168.1.166:9300]], reason: zen-disco-join (elected_as_master)<br />
[2014-05-04 07:37:15,806][INFO ][discovery ] [Vibraxas] elasticsearch/esQHE1EtTuWVK9MVNiQ5jA<br />
[2014-05-04 07:37:15,877][INFO ][http ] [Vibraxas] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.1.166:9200]}<br />
[2014-05-04 07:37:16,893][INFO ][gateway ] [Vibraxas] recovered [16] indices into cluster_state<br />
[2014-05-04 07:37:16,898][INFO ][node ] [Vibraxas] started<br />
<b>[2014-05-04 07:37:17,547][INFO ][cluster.service ] [Vibraxas] added {[logstash-debian64-3408-4020][dTsgT1H9Srq6mUr_w5rpXQ][debian64][inet[/192.168.1.166:9301]]{client=true, data=false},}, reason: zen-disco-receive(join from node[[logstash-debian64-3408-4020][dTsgT1H9Srq6mUr_w5rpXQ][debian64][inet[/192.168.1.166:9301]]{client=true, data=false}])</b></blockquote>
<br />
It is also <u><b>highly recommended</b></u> that you read the whole Discovery section in your elasticsearch.yml:<br />
<blockquote class="tr_bq">
############################# Discovery #############################<br />
<br />
# Discovery infrastructure ensures nodes can be found within a cluster<br />
# and master node is elected. Multicast discovery is the default.<br />
<br />
.....</blockquote>
<br />
<br />Pevmahttp://www.blogger.com/profile/07698265905172078652noreply@blogger.com0tag:blogger.com,1999:blog-6352560843819453131.post-3987444668646472282014-03-26T13:31:00.000-07:002014-03-29T06:43:08.846-07:00Suricata (and the grand slam of) Open Source IDPS - Chapter IV - Logstash / Kibana / Elasticsearch, Part One - Updated<br />
<h2>
<i>Introduction </i></h2>
<b><span style="font-size: small;">This is an updated article of the original post</span></b> - <a href="http://pevma.blogspot.se/2014/03/suricata-and-grand-slam-of-open-source.html" target="_blank">http://pevma.blogspot.se/2014/03/suricata-and-grand-slam-of-open-source.html</a><br />
<br />
<b>This article covers the new (at the time of this writing) 1.4.0 Logstash release.</b><br />
<br />
This is Chapter IV of <a href="http://pevma.blogspot.se/2013/12/suricata-and-grand-slam-of-open-source.html" target="_blank">a series of 4 articles</a> aiming to give a general guideline on how to deploy the <a href="http://suricata-ids.org/" target="_blank">Open Source Suricata IDPS</a> on high speed networks (10Gbps) in IDS mode using AF_PACKET, PF_RING or DNA, and Logstash / Kibana / Elasticsearch<br />
<br />
This chapter consists of two parts:<br />
Chapter IV Part One - installation and set up of logstash.<br />
Chapter IV Part Two - showing some configuration of the different Kibana web interface widgets.<br />
<br />
The end result should be a set of varied widgets for analyzing the Suricata IDPS logs, something like:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKFadvlt_ACGh0JjE4K8Hb5p3txjiwLLRtV7Sa94ju3nzODe9geSbtFc8BINiouEKUJh52yJ7QGO0Q6tbc3sQuvXyGf-uVWji83WmCzBDluJrl9IsiF4w8dLEhzxnGg4SIbAAoZVIYTao/s1600/ALL1.PNG" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKFadvlt_ACGh0JjE4K8Hb5p3txjiwLLRtV7Sa94ju3nzODe9geSbtFc8BINiouEKUJh52yJ7QGO0Q6tbc3sQuvXyGf-uVWji83WmCzBDluJrl9IsiF4w8dLEhzxnGg4SIbAAoZVIYTao/s1600/ALL1.PNG" height="161" width="400" /></a></div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjriSKCUXkvloYtenbZEiKoKvmD9k-F4bhdLw3o5uU7J_OFvItDMpMcBnougSFg2xXaETDIeMcdU_6vEwqqPljNdn2LpqT5erWBXsdAcQQZXde6Mdb9fFKUznide_cWFk9_nzW83HfPMgU/s1600/Alerts1.PNG" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjriSKCUXkvloYtenbZEiKoKvmD9k-F4bhdLw3o5uU7J_OFvItDMpMcBnougSFg2xXaETDIeMcdU_6vEwqqPljNdn2LpqT5erWBXsdAcQQZXde6Mdb9fFKUznide_cWFk9_nzW83HfPMgU/s1600/Alerts1.PNG" height="137" width="400" /></a></div>
<br />
<br />
<br />
This chapter describes a quick and easy set up of <a href="http://logstash.net/" target="_blank">Logstash</a> / <a href="http://www.elasticsearch.org/overview/kibana/" target="_blank">Kibana</a> / <a href="http://www.elasticsearch.org/" target="_blank">Elasticsearch</a> <br />
The set up described in this chapter is not intended for a huge deployment, but rather serves as a proof of concept in a working environment, as pictured below:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEifY_K2KgDK7LUBb4-nos5H5-JRzj7MDVdwVnUvtXGHDYASnmnAQY8ia4N1CafUjq31TIDBbM1yUbsRDmk5RRp-Zcw1nSxIGjEhzLAf6zer8uiDwC_8BhtY3dLah0cYu5eX3oeo6PvTP2c/s1600/Logstash-forwarder.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEifY_K2KgDK7LUBb4-nos5H5-JRzj7MDVdwVnUvtXGHDYASnmnAQY8ia4N1CafUjq31TIDBbM1yUbsRDmk5RRp-Zcw1nSxIGjEhzLAf6zer8uiDwC_8BhtY3dLah0cYu5eX3oeo6PvTP2c/s1600/Logstash-forwarder.png" /></a></div>
<br />
<br />
<br />
<br />
<br />
We have two Suricata IDS deployed - IDS1 and IDS2<br />
<ul>
<li><b>IDS2</b> uses <a href="https://github.com/elasticsearch/logstash-forwarder" target="_blank">logstash-forwarder</a>
(former lumberjack) to securely forward (SSL encrypted) its eve.json
logs (configured in suricata.yaml) to IDS1, main Logstash/Kibana
deployment.</li>
<li><b>IDS1</b> has its own logging (eve.json as well) that is also digested by Logstash.</li>
</ul>
<br />
In other words, the IDS1 and IDS2 logs are both digested by the Logstash platform deployed on IDS1 in the picture.<br />
<br />
<h2>
Prerequisites</h2>
Both IDS1 and IDS2 should be set up and tuned with Suricata IDPS. This article will not cover that; if you have not done it yet, you can start <a href="http://pevma.blogspot.se/2013/12/suricata-and-grand-slam-of-open-source.html" target="_blank">HERE</a>.<br />
<br />
Make sure you have installed Suricata with JSON support. The following two packages must be present on your system prior to installation/compilation:<br />
<blockquote class="tr_bq">
root@LTS-64-1:~# apt-cache search libjansson<br />
libjansson-dev - C library for encoding, decoding and manipulating JSON data (dev)<br />
libjansson4 - C library for encoding, decoding and manipulating JSON data</blockquote>
If they are not present on the system - install them:<br />
<blockquote class="tr_bq">
apt-get install libjansson4 libjansson-dev</blockquote>
<br />
In both IDS1 and IDS2 you should have in your suricata.yaml:<br />
<blockquote class="tr_bq">
# "United" event log in JSON format<br />
- eve-log:<br />
enabled: yes<br />
type: file #file|syslog|unix_dgram|unix_stream<br />
filename: eve.json<br />
# the following are valid when type: syslog above<br />
#identity: "suricata"<br />
#facility: local5<br />
#level: Info ## possible levels: Emergency, Alert, Critical,<br />
## Error, Warning, Notice, Info, Debug<br />
types:<br />
- alert<br />
- http:<br />
extended: yes # enable this for extended logging information<br />
- dns<br />
- tls:<br />
extended: yes # enable this for extended logging information<br />
- files:<br />
force-magic: yes # force logging magic on all logged files<br />
force-md5: yes # force logging of md5 checksums<br />
#- drop<br />
- ssh</blockquote>
This tutorial uses <b>/var/log/suricata</b> as a default logging directory.<br />
<br />
You can do a few dry runs to confirm log generation on both systems.<br />
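A quick sanity check for those dry runs: every line of eve.json should be a standalone JSON object. A sketch of a per-line validation, shown here on a fabricated sample line (on a sensor you would read /var/log/suricata/eve.json instead; the sample field values are made up for illustration):

```shell
# Fabricated sample standing in for /var/log/suricata/eve.json:
cat > /tmp/eve-sample.json <<'EOF'
{"timestamp":"2014-03-26T13:31:00.000000","event_type":"alert","src_ip":"10.0.0.1"}
EOF
# Each line must parse as JSON on its own, or downstream tools will choke:
while IFS= read -r line; do
    printf '%s' "$line" | python3 -m json.tool > /dev/null || exit 1
done < /tmp/eve-sample.json
echo "all lines are valid JSON"
```

If a line fails to parse, the loop exits non-zero and you know the log output is malformed before Logstash ever sees it.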
After you have confirmed general operation of the Suricata IDPS on both systems, you can continue as described just below.<br />
<br />
<h2>
Installation</h2>
<h2>
IDS2</h2>
For the logstash-forwarder we need <a href="http://golang.org/doc/install" target="_blank">Go</a> installed.<br />
<br />
<blockquote class="tr_bq">
cd /opt<br />
apt-get install hg-fast-export<br />
hg clone -u release https://code.google.com/p/go<br />
cd go/src<br />
./all.bash</blockquote>
<br />
<br />
If everything goes ok you should see at the end:<br />
<blockquote class="tr_bq">
<pre>ALL TESTS PASSED</pre>
</blockquote>
<br />
Update your $PATH variable; make sure it includes:<br />
<blockquote class="tr_bq">
PATH=$PATH:/opt/go/bin<br />
export PATH</blockquote>
<br />
<blockquote class="tr_bq">
root@debian64:~# nano ~/.bashrc</blockquote>
<br />
<br />
edit the file (.bashrc), add at the bottom:<br />
<br />
<blockquote class="tr_bq">
PATH=$PATH:/opt/go/bin<br />
export PATH</blockquote>
<br />
then:<br />
<br />
<blockquote class="tr_bq">
root@debian64:~# source ~/.bashrc<br />
root@debian64:~# echo $PATH<br />
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:<b>/opt/go/bin</b></blockquote>
<br />
<br />
Install logstash-forwarder:<br />
<blockquote class="tr_bq">
cd /opt <br />
git clone git://github.com/elasticsearch/logstash-forwarder.git<br />
cd logstash-forwarder<br />
go build</blockquote>
<br />
Build a debian package:<br />
<blockquote class="tr_bq">
apt-get install ruby ruby-dev<br />
gem install fpm<br />
make deb</blockquote>
That will produce a Debian package in the same directory (something like):<br />
<blockquote class="tr_bq">
logstash-forwarder_0.3.1_amd64.deb</blockquote>
<br />
Install the Debian package:<br />
<blockquote class="tr_bq">
root@debian64:/opt# dpkg -i logstash-forwarder_0.3.1_amd64.deb</blockquote>
<br />
<u><b>NOTE:</b></u>
You can use the same Debian package to install logstash-forwarder (dependency
free) on other machines/servers. Once you have the deb package you
can install it on any other server the same way - no need to rebuild
everything (Go and Ruby) again.<br />
<br />
Create SSL certificates that will be used to securely encrypt and transport the logs:<br />
<blockquote class="tr_bq">
cd /opt<br />
openssl req -x509 -batch -nodes -newkey rsa:2048 -keyout logfor.key -out logfor.crt</blockquote>
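Before copying the files around, it is worth confirming that the certificate and key actually belong together. A sketch, using a throwaway pair generated here for demonstration (on a real host you would point the last two commands at logfor.crt and logfor.key):

```shell
# Generate a throwaway self-signed pair, mirroring the command above:
openssl req -x509 -batch -nodes -newkey rsa:2048 \
    -subj "/CN=logstash-test" \
    -keyout /tmp/logfor-test.key -out /tmp/logfor-test.crt 2>/dev/null
# The RSA moduli of cert and key must match, otherwise the TLS handshake
# between logstash-forwarder and Logstash will fail:
openssl x509 -noout -modulus -in /tmp/logfor-test.crt > /tmp/crt.mod
openssl rsa  -noout -modulus -in /tmp/logfor-test.key > /tmp/key.mod
cmp -s /tmp/crt.mod /tmp/key.mod && echo "certificate and key match"
```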
<br />
Copy on <b>IDS2</b>:<br />
<b>logfor.key</b> in <b>/etc/ssl/private/</b><br />
<b>logfor.crt</b> in <b>/etc/ssl/certs/</b><br />
<br />
Copy the same files to <b>IDS1</b>:<br />
<b>logfor.key</b> in <b>/etc/logstash/pki/</b><br />
<b>logfor.crt</b> in <b>/etc/logstash/pki/</b><br />
<br />
<br />
Now you can try to start/restart/stop the logstash-forwarder service:<br />
<blockquote class="tr_bq">
root@debian64:/opt# /etc/init.d/logstash-forwarder start<br />
root@debian64:/opt# /etc/init.d/logstash-forwarder status<br />
[ ok ] logstash-forwarder is running.<br />
root@debian64:/opt# /etc/init.d/logstash-forwarder stop<br />
root@debian64:/opt# /etc/init.d/logstash-forwarder status<br />
[FAIL] logstash-forwarder is not running ... failed!</blockquote>
<blockquote class="tr_bq">
root@debian64:/opt# /etc/init.d/logstash-forwarder start<br />
root@debian64:/opt# /etc/init.d/logstash-forwarder status<br />
[ ok ] logstash-forwarder is running. </blockquote>
<blockquote class="tr_bq">
root@debian64:/opt# /etc/init.d/logstash-forwarder stop </blockquote>
<blockquote class="tr_bq">
root@debian64:/opt# </blockquote>
Good to go.<br />
<br />
Create on IDS2 your logstash-forwarder config:<br />
<blockquote class="tr_bq">
touch /etc/logstash-forwarder</blockquote>
Make sure the file looks like this (in this tutorial - copy/paste):<br />
<br />
<blockquote class="tr_bq">
{<br />
"network": {<br />
"servers": [ "192.168.1.158:5043" ],<br />
"ssl certificate": "/etc/ssl/certs/logfor.crt",<br />
"ssl key": "/etc/ssl/private/logfor.key",<br />
"ssl ca": "/etc/ssl/certs/logfor.crt"<br />
},<br />
"files": [<br />
{<br />
"paths": [ "/var/log/suricata/eve.json" ],<br />
"codec": { "type": "json" }<br />
}<br />
]<br />
}</blockquote>
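Since a stray comma in this file will otherwise only surface as a startup failure, it is worth validating the JSON before starting the service. A sketch, shown here on an inline copy of the config (on a real host you would validate /etc/logstash-forwarder itself):

```shell
# Inline copy of the forwarder config used for demonstration:
cat > /tmp/logstash-forwarder-test <<'EOF'
{
  "network": {
    "servers": [ "192.168.1.158:5043" ],
    "ssl certificate": "/etc/ssl/certs/logfor.crt",
    "ssl key": "/etc/ssl/private/logfor.key",
    "ssl ca": "/etc/ssl/certs/logfor.crt"
  },
  "files": [
    { "paths": [ "/var/log/suricata/eve.json" ], "codec": { "type": "json" } }
  ]
}
EOF
# json.tool exits non-zero on malformed JSON:
python3 -m json.tool < /tmp/logstash-forwarder-test > /dev/null && echo "config is valid JSON"
```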
Some more info:<br />
<blockquote class="tr_bq">
Usage of ./logstash-forwarder:<br />
-config="": The config file to load<br />
-cpuprofile="": write cpu profile to file<br />
-from-beginning=false: Read new files from the beginning, instead of the end<br />
-idle-flush-time=5s: Maximum time to wait for a full spool before flushing anyway<br />
-log-to-syslog=false: Log to syslog instead of stdout<br />
-spool-size=1024: Maximum number of events to spool before a flush is forced.</blockquote>
<br />
These can be adjusted in:<br />
<blockquote class="tr_bq">
/etc/init.d/logstash-forwarder </blockquote>
<br />
<br />
This is as far as the set up on IDS2 goes....<br />
<br />
<h2>
IDS1 - indexer</h2>
<b>NOTE:</b> Each Logstash version has a corresponding Elasticsearch version to be used with it!<br />
<a href="http://logstash.net/docs/1.4.0/tutorials/getting-started-with-logstash">http://logstash.net/docs/1.4.0/tutorials/getting-started-with-logstash</a><br />
<br />
<br />
Packages needed: <br />
<blockquote class="tr_bq">
apt-get install apache2 openjdk-7-jdk openjdk-7-jre-headless </blockquote>
<br />
Downloads:<br />
<a href="http://www.elasticsearch.org/overview/elkdownloads/">http://www.elasticsearch.org/overview/elkdownloads/</a><br />
<br />
<blockquote class="tr_bq">
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.1.0.deb<br />
<br />
wget https://download.elasticsearch.org/logstash/logstash/packages/debian/logstash_1.4.0-1-c82dc09_all.deb<br />
<br />
wget https://download.elasticsearch.org/kibana/kibana/kibana-3.0.0.tar.gz</blockquote>
<br />
<blockquote class="tr_bq">
mkdir /var/log/logstash/</blockquote>
Installation:<br />
<blockquote class="tr_bq">
dpkg -i elasticsearch-1.1.0.deb<br />
dpkg -i logstash_1.4.0-1-c82dc09_all.deb<br />
tar -C /var/www/ -xzf kibana-3.0.0.tar.gz</blockquote>
<blockquote class="tr_bq">
update-rc.d elasticsearch defaults 95 10<br />
update-rc.d logstash defaults</blockquote>
<br />
elasticsearch configs are located here (nothing needs to be done): <br />
<blockquote class="tr_bq">
ls /etc/default/elasticsearch <br />
/etc/default/elasticsearch</blockquote>
<blockquote class="tr_bq">
ls /etc/elasticsearch/<br />
elasticsearch.yml logging.yml</blockquote>
the elasticsearch data is located here:<br />
<blockquote class="tr_bq">
/var/lib/elasticsearch/</blockquote>
<br />
You should have your logstash config file in <b>/etc/default/logstash</b>:<br />
<br />
Make sure the config and log directory settings in it are correct:<br />
<br />
<blockquote>
###############################<br />
# Default settings for logstash<br />
###############################<br />
<br />
# Override Java location<br />
#JAVACMD=/usr/bin/java<br />
<br />
# Set a home directory<br />
#LS_HOME=/var/lib/logstash<br />
<br />
# Arguments to pass to logstash agent<br />
#LS_OPTS=""<br />
<br />
# Arguments to pass to java<br />
#LS_HEAP_SIZE="500m"<br />
#LS_JAVA_OPTS="-Djava.io.tmpdir=$HOME"<br />
<br />
# pidfiles aren't used for upstart; this is for sysv users.<br />
#LS_PIDFILE=/var/run/logstash.pid<br />
<br />
# user id to be invoked as; for upstart: edit /etc/init/logstash.conf<br />
#LS_USER=logstash<br />
<br />
# logstash logging<br />
<b>LS_LOG_FILE=/var/log/logstash/logstash.log</b><br />
#LS_USE_GC_LOGGING="true"<br />
<br />
# logstash configuration directory<br />
<b>LS_CONF_DIR=/etc/logstash/conf.d</b><br />
<br />
# Open file limit; cannot be overridden in upstart<br />
#LS_OPEN_FILES=16384<br />
<br />
# Nice level<br />
#LS_NICE=19</blockquote>
<br />
<br />
GeoIP Lite is shipped with Logstash by default!<br />
<a href="http://logstash.net/docs/1.4.0/filters/geoip">http://logstash.net/docs/1.4.0/filters/geoip</a><br />
<br />
and it is located here (on the system, after installation):<br />
<blockquote class="tr_bq">
/opt/logstash/vendor/geoip/GeoLiteCity.dat</blockquote>
<br />
Create your logstash.conf<br />
<br />
<blockquote class="tr_bq">
touch logstash.conf</blockquote>
<br />
make sure it looks like this:<br />
<br />
<blockquote class="tr_bq">
input {<br />
lumberjack {<br />
port => 5043<br />
type => "<b>IDS2-logs</b>"<br />
codec => json<br />
ssl_certificate => "/etc/logstash/pki/logfor.crt"<br />
ssl_key => "/etc/logstash/pki/logfor.key"<br />
}<br />
<br />
file { <br />
path => ["/var/log/suricata/eve.json"]<br />
codec => json <br />
type => "<b>IDS1-logs</b>"<br />
}<br />
<br />
}<br />
<br />
filter {<br />
if [src_ip] {<br />
geoip {<br />
source => "src_ip"<br />
target => "geoip"<br />
database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"<br />
add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]<br />
add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]<br />
}<br />
mutate {<br />
convert => [ "[geoip][coordinates]", "float" ]<br />
}<br />
}<br />
}<br />
<br />
output { <br />
elasticsearch {<br />
host => localhost<br />
}<br />
}</blockquote>
<br />
The <b>/etc/logstash/pki/logfor.crt</b> and <b>/etc/logstash/pki/logfor.key</b> are the same ones we created earlier on <b>IDS2</b> and copied here to <b>IDS1</b>.<br />
<br />
The purpose of <b>type => "IDS1-logs"</b> and <b>type => "IDS2-logs"</b> above is to let you differentiate between the two sensors' logs later in the Kibana widgets, if needed:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjRVyAXhnoCfF3Mq4PveWM0Rcxvnv-qleKNLpAFFSkF9P19LT0QL02OgQfWD14xCQ7t4OztYrdiOV7hFX_-9mHwK9mWFyxf9hqbZhmWFyI_lrIQKDOeabsLrQn_anTawnyZNZqDJpfHS_c/s1600/logstash-forwarder2.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjRVyAXhnoCfF3Mq4PveWM0Rcxvnv-qleKNLpAFFSkF9P19LT0QL02OgQfWD14xCQ7t4OztYrdiOV7hFX_-9mHwK9mWFyxf9hqbZhmWFyI_lrIQKDOeabsLrQn_anTawnyZNZqDJpfHS_c/s1600/logstash-forwarder2.PNG" height="157" width="400" /></a></div>
<br />
<br />
<br />
Then copy the file we just created to:<br />
<blockquote class="tr_bq">
cp logstash.conf /etc/logstash/conf.d/</blockquote>
<br />
<br />
Kibana:<br />
<br />
We have already installed Kibana during the first step :). All that is left to do now is restart apache:<br />
<br />
<blockquote class="tr_bq">
service apache2 restart</blockquote>
<br />
<br />
<h2>
Rolling it out</h2>
<br />
On IDS1 and IDS2 - start the Suricata IDPS and generate some logs.<br />
On IDS2:<br />
<blockquote class="tr_bq">
/etc/init.d/logstash-forwarder start</blockquote>
<br />
On IDS1:<br />
<blockquote class="tr_bq">
service elasticsearch start<br />
service logstash start</blockquote>
You can verify that logstash-forwarder (on <b>IDS2</b>) is working properly like so -&gt;<br />
<i><b>tail -f /var/log/syslog</b></i> :<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcofxgd9jHQOmnzR61O97kL-Zxt1tArVKTgilZ1Kj_2M06M3NdoPh6oogVFeINOQ78PwidA8amWHWnolJGB314bLVa-e1fk3i5qq3uMQUiBT19HgbPXLFaSOfvzC6C4ivexX14bmZYJNc/s1600/logstash-forwarder1.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgcofxgd9jHQOmnzR61O97kL-Zxt1tArVKTgilZ1Kj_2M06M3NdoPh6oogVFeINOQ78PwidA8amWHWnolJGB314bLVa-e1fk3i5qq3uMQUiBT19HgbPXLFaSOfvzC6C4ivexX14bmZYJNc/s1600/logstash-forwarder1.PNG" height="142" width="320" /></a></div>
<br />
<br />
<br />
Go to your browser and navigate to (in this case <b>IDS1</b>)<br />
<blockquote class="tr_bq">
http://192.168.1.158/kibana-3.0.0</blockquote>
<u><b>NOTE:</b></u> This is plain HTTP (as this is just a simple tutorial); you should configure it to use <b>HTTPS</b> and a reverse proxy with authentication<b>...</b><br />
<br />
The Kibana web interface should come up.<br />
<br />
That is it. From here on it is up to you to configure the web interface with your own widgets.<br />
<br />
Chapter IV Part Two will follow with details on that subject.<br />
However, something like this is easily achievable with a few clicks in under 5 minutes:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjR479j_NCU-w9hEQv5CLhKtDCXMVY2OYCcI1Wdbg_4AIrlt1bDwyuERjBjGXMyQTQd8ZnrrbmXUwN_wMXu3R5vkm7mTSpLcoOfGObtIah9vW6Jy9E8M5hi6wrFSZoMwzZIFUGkw_6EvtI/s1600/ALL3.PNG" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjR479j_NCU-w9hEQv5CLhKtDCXMVY2OYCcI1Wdbg_4AIrlt1bDwyuERjBjGXMyQTQd8ZnrrbmXUwN_wMXu3R5vkm7mTSpLcoOfGObtIah9vW6Jy9E8M5hi6wrFSZoMwzZIFUGkw_6EvtI/s1600/ALL3.PNG" height="176" width="400" /></a></div>
<br />
<br />
<br />
<br />
<h2>
Troubleshooting:</h2>
You should keep an eye on <b>/var/log/logstash/logstash.log</b> - any trouble should be visible there. <br />
<br />
A great article explaining Elasticsearch cluster status (relevant if you deploy a proper Elasticsearch cluster of 2 or more nodes):<br />
<a href="http://chrissimpson.co.uk/elasticsearch-yellow-cluster-status-explained.html">http://chrissimpson.co.uk/elasticsearch-yellow-cluster-status-explained.html</a><br />
<br />
If you get "too many open files" errors in logstash-indexer.out:<br />
<a href="http://www.elasticsearch.org/tutorials/too-many-open-files/">http://www.elasticsearch.org/tutorials/too-many-open-files/</a> <br />
<br />
Setting ulimit parameters on Ubuntu (in case you need to raise the limit on open file descriptors; you can check inode usage on the system with "df -ih"):<br />
<a href="http://posidev.com/blog/2009/06/04/set-ulimit-parameters-on-ubuntu/">http://posidev.com/blog/2009/06/04/set-ulimit-parameters-on-ubuntu/</a><br />
<br />
This is an advanced topic - Cluster status and settings commands:<br />
<blockquote class="tr_bq">
curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'<br />
<br />
curl -XGET 'http://localhost:9200/_status?pretty=true'<br />
<br />
curl -XGET 'http://localhost:9200/_nodes?os=true&process=true&pretty=true'</blockquote>
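If you want to script around these, the status field can be pulled out of the cluster health JSON with plain shell tools. A minimal sketch (the sample response below is illustrative; in practice pipe in the output of curl -s 'http://localhost:9200/_cluster/health'):

```shell
# Sample cluster health response standing in for the live curl output
health='{"cluster_name":"elasticsearch","status":"green","number_of_nodes":1}'
# Extract the value of the "status" field (green/yellow/red)
status=$(printf '%s' "$health" | sed 's/.*"status":"\([a-z]*\)".*/\1/')
echo "$status"
```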
<br />
<br />
<h2>
Very useful links:</h2>
Logstash 1.4.0 GA released: <br />
<a href="http://www.elasticsearch.org/blog/logstash-1-4-0-ga-unleashed/">http://www.elasticsearch.org/blog/logstash-1-4-0-ga-unleashed/</a> <br />
<br />
A MUST READ (explaining the usage of ".raw" fields so that terms are not broken up by the space delimiter):<br />
<a href="http://www.elasticsearch.org/blog/logstash-1-3-1-released/">http://www.elasticsearch.org/blog/logstash-1-3-1-released/</a><br />
<br />
Article explaining how to set up a 2 node cluster:<br />
<a href="http://techhari.blogspot.se/2013/03/elasticsearch-cluster-setup-in-2-minutes.html">http://techhari.blogspot.se/2013/03/elasticsearch-cluster-setup-in-2-minutes.html</a><br />
<br />
Installing Logstash Central Server (using rsyslog):<br />
<a href="https://support.shotgunsoftware.com/entries/23163863-Installing-Logstash-Central-Server">https://support.shotgunsoftware.com/entries/23163863-Installing-Logstash-Central-Server</a><br />
<br />
ElasticSearch cluster setup in 2 minutes:<br />
<a href="http://techhari.blogspot.com/2013/03/elasticsearch-cluster-setup-in-2-minutes.html">http://techhari.blogspot.com/2013/03/elasticsearch-cluster-setup-in-2-minutes.html</a><br />
<br />
<br />
<br />
<br />
<br />
<br />Pevmahttp://www.blogger.com/profile/07698265905172078652noreply@blogger.com2tag:blogger.com,1999:blog-6352560843819453131.post-3690787806242212422014-03-24T15:21:00.001-07:002015-08-30T05:23:13.066-07:00Suricata - preparing 10Gbps network cards for IDPS and file extraction<br />
OS used/tested for this tutorial - Debian Wheezy and/or Ubuntu LTS 12.04,<br />
with kernel 3.2.0 and 3.5.0 respectively, and Suricata 2.0dev at the moment of this writing.<br />
<br />
<br />
<br />
This article consists of the following three major sections:<br />
<ul>
<li>Network card drivers and tuning</li>
<li>Kernel specific tuning</li>
<li>Suricata.yaml configuration (file extraction specific)</li>
</ul>
<br />
Network and system tools:<br />
<blockquote class="tr_bq">
apt-get install ethtool bwm-ng iptraf htop</blockquote>
<h2>
<i>Network card drivers and tuning</i></h2>
Our card is Intel 82599EB 10-Gigabit SFI/SFP+<br />
<br />
<br />
<blockquote class="tr_bq">
rmmod ixgbe<br />
sudo modprobe ixgbe FdirPballoc=3<br />
ifconfig eth3 up</blockquote>
Then we disable irqbalance and make sure it does not enable itself during reboot:<br />
<blockquote class="tr_bq">
killall irqbalance<br />
service irqbalance stop</blockquote>
<br />
<blockquote class="tr_bq">
apt-get install chkconfig<br />
chkconfig irqbalance off</blockquote>
Get the Intel network drivers from here (we will use them in a second) - <a href="https://downloadcenter.intel.com/default.aspx">https://downloadcenter.intel.com/default.aspx</a><br />
<br />
Download to your directory of choice, then unpack, compile and install:<br />
<br />
<blockquote class="tr_bq">
wget http://sourceforge.net/projects/e1000/files/ixgbe%20stable/3.18.7/ixgbe-3.18.7.tar.gz <br />tar -zxf ixgbe-3.18.7.tar.gz <br />cd /home/pevman/ixgbe-3.18.7/src <br />make clean && make && make install </blockquote>
<br />
Set irq affinity - do not forget to replace <b><i>eth3</i></b> below with the name of the network interface you are using: <br />
<blockquote class="tr_bq">
cd ../scripts/<br />
./set_irq_affinity eth3</blockquote>
<br />
<br />
You should see something like this:<br />
<blockquote class="tr_bq">
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ./set_irq_affinity eth3<br />
no rx vectors found on eth3<br />
no tx vectors found on eth3<br />
eth3 mask=1 for /proc/irq/101/smp_affinity<br />
eth3 mask=2 for /proc/irq/102/smp_affinity<br />
eth3 mask=4 for /proc/irq/103/smp_affinity<br />
eth3 mask=8 for /proc/irq/104/smp_affinity<br />
eth3 mask=10 for /proc/irq/105/smp_affinity<br />
eth3 mask=20 for /proc/irq/106/smp_affinity<br />
eth3 mask=40 for /proc/irq/107/smp_affinity<br />
eth3 mask=80 for /proc/irq/108/smp_affinity<br />
eth3 mask=100 for /proc/irq/109/smp_affinity<br />
eth3 mask=200 for /proc/irq/110/smp_affinity<br />
eth3 mask=400 for /proc/irq/111/smp_affinity<br />
eth3 mask=800 for /proc/irq/112/smp_affinity<br />
eth3 mask=1000 for /proc/irq/113/smp_affinity<br />
eth3 mask=2000 for /proc/irq/114/smp_affinity<br />
eth3 mask=4000 for /proc/irq/115/smp_affinity<br />
eth3 mask=8000 for /proc/irq/116/smp_affinity<br />
root@suricata:/home/pevman/ixgbe-3.18.7/scripts#</blockquote>
Now we have the latest drivers installed (at the time of this writing) and we have run the affinity script:<br />
<blockquote class="tr_bq">
*-network:1<br />
description: Ethernet interface<br />
product: 82599EB 10-Gigabit SFI/SFP+ Network Connection<br />
vendor: Intel Corporation<br />
physical id: 0.1<br />
bus info: pci@0000:04:00.1<br />
logical name: eth3<br />
version: 01<br />
serial: 00:e0:ed:19:e3:e1<br />
width: 64 bits<br />
clock: 33MHz<br />
capabilities: pm msi msix pciexpress vpd bus_master cap_list ethernet physical fibre<br />
configuration: autonegotiation=off broadcast=yes driver=ixgbe<u> </u><i><b><u>driverversion=3.18.7</u> </b></i>duplex=full firmware=0x800000cb latency=0 link=yes multicast=yes port=fibre promiscuous=yes<br />
resources: irq:37 memory:fbc00000-fbc1ffff ioport:e000(size=32)
memory:fbc40000-fbc43fff memory:fa700000-fa7fffff
memory:fa600000-fa6fffff</blockquote>
<br />
<br />
<br />
We need to disable all offloading on the network card in order for the IDS to see the traffic as it appears on the wire (without checksum offloading, tcp-segmentation-offloading and such). Otherwise your IDPS will not see all "natural" network traffic the way it is supposed to and will not inspect it properly.<br />
<br />
This influences the correctness of <b>ALL</b> outputs, including file extraction. So make sure all offloading features are <b>OFF</b>!!!<br />
<br />
When you first install the drivers and card your offloading settings might look like this:<br />
<blockquote class="tr_bq">
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# <i>ethtool -k eth3</i><br />
Offload parameters for eth3:<br />
rx-checksumming: on<br />
tx-checksumming: on<br />
scatter-gather: on<br />
tcp-segmentation-offload: on<br />
udp-fragmentation-offload: off<br />
generic-segmentation-offload: on<br />
generic-receive-offload: on<br />
large-receive-offload: on<br />
rx-vlan-offload: on<br />
tx-vlan-offload: on<br />
ntuple-filters: off<br />
receive-hashing: on<br />
root@suricata:/home/pevman/ixgbe-3.18.7/scripts#</blockquote>
<br />
So we disable all of them, like so (and we load balance the UDP flows for that particular network card):<br />
<br />
<blockquote class="tr_bq">
<blockquote class="tr_bq">
ethtool -K eth3 tso off<br />
ethtool -K eth3 gro off<br />
ethtool -K eth3 ufo off<br />
ethtool -K eth3 lro off<br />
ethtool -K eth3 gso off<br />
ethtool -K eth3 rx off<br />
ethtool -K eth3 tx off<br />
ethtool -K eth3 sg off<br />
ethtool -K eth3 rxvlan off<br />
ethtool -K eth3 txvlan off<br />
ethtool -N eth3 rx-flow-hash udp4 sdfn<br />
ethtool -N eth3 rx-flow-hash udp6 sdfn<br />
ethtool -C eth3 rx-usecs 1 rx-frames 0<br />
ethtool -C eth3 adaptive-rx off</blockquote>
</blockquote>
<br />
Your output should look something like this:<br />
<blockquote class="tr_bq">
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 tso off<br />
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 gro off<br />
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 lro off<br />
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 gso off<br />
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 rx off<br />
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 tx off<br />
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 sg off<br />
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 rxvlan off<br />
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -K eth3 txvlan off<br />
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -N eth3 rx-flow-hash udp4 sdfn<br />
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -N eth3 rx-flow-hash udp6 sdfn<br />
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -n eth3 rx-flow-hash udp6<br />
UDP over IPV6 flows use these fields for computing Hash flow key:<br />
IP SA<br />
IP DA<br />
L4 bytes 0 & 1 [TCP/UDP src port]<br />
L4 bytes 2 & 3 [TCP/UDP dst port]<br />
<br />
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -n eth3 rx-flow-hash udp4<br />
UDP over IPV4 flows use these fields for computing Hash flow key:<br />
IP SA<br />
IP DA<br />
L4 bytes 0 & 1 [TCP/UDP src port]<br />
L4 bytes 2 & 3 [TCP/UDP dst port]<br />
<br />
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -C eth3 rx-usecs 0 rx-frames 0 <br />
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# ethtool -C eth3 adaptive-rx off</blockquote>
<br />
Now we double-check and run ethtool again to verify that the offloading is OFF:<br />
<blockquote class="tr_bq">
root@suricata:/home/pevman/ixgbe-3.18.7/scripts# <i>ethtool -k eth3</i><br />
Offload parameters for eth3: <br />
rx-checksumming: off<br />
tx-checksumming: off<br />
scatter-gather: off<br />
tcp-segmentation-offload: off<br />
udp-fragmentation-offload: off<br />
generic-segmentation-offload: off<br />
generic-receive-offload: off<br />
large-receive-offload: off<br />
rx-vlan-offload: off<br />
tx-vlan-offload: off</blockquote>
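A quick scripted sanity check is to count the offload features still reported as "on". A minimal sketch (the sample lines below stand in for real "ethtool -k eth3" output, so it runs anywhere; on the sensor, substitute the live command):

```shell
# Sample offload report standing in for: ethtool -k eth3
sample='rx-checksumming: off
tcp-segmentation-offload: off
generic-receive-offload: off'
# Count features still reported as "on" - should be 0 on a properly tuned sensor
still_on=$(printf '%s\n' "$sample" | grep -c ': on$' || true)
echo "$still_on offload features still enabled"
```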
<br />
Ring parameters on the network card:<br />
<br />
<blockquote class="tr_bq">
root@suricata:~# ethtool -g eth3<br />
Ring parameters for eth3:<br />
Pre-set maximums:<br />
RX: 4096<br />
RX Mini: 0<br />
RX Jumbo: 0<br />
TX: 4096</blockquote>
<blockquote>
Current hardware settings:<br />
<b>RX: 512</b><br />
RX Mini: 0<br />
RX Jumbo: 0<br />
TX: 512</blockquote>
<br />
<br />
We can increase that to the pre-set maximum RX:<br />
<br />
<blockquote class="tr_bq">
root@suricata:~# ethtool -G eth3 rx 4096</blockquote>
<br />
Then we have a look again:<br />
<br />
<blockquote class="tr_bq">
root@suricata:~# ethtool -g eth3<br />
Ring parameters for eth3:<br />
Pre-set maximums:<br />
RX: 4096<br />
RX Mini: 0<br />
RX Jumbo: 0<br />
TX: 4096<br />
Current hardware settings:<br />
<b>RX: 4096</b><br />
RX Mini: 0<br />
RX Jumbo: 0<br />
TX: 512</blockquote>
<br />
<h2>
Making network changes permanent across reboots</h2>
<br />
On Ubuntu for example you can do:<br />
<blockquote class="tr_bq">
root@suricata:~# crontab -e </blockquote>
<br />
Add the following: <br />
<blockquote class="tr_bq">
# add cronjob at reboot - disable network offload<br />
@reboot /opt/tmp/disable-network-offload.sh</blockquote>
<br />
and your disable-network-offload.sh script (in this case under /opt/tmp/) will contain the following:<br />
<br />
<br />
<br />
<blockquote>
#!/bin/bash<br />
ethtool -K eth3 tso off<br />
ethtool -K eth3 gro off<br />
ethtool -K eth3 ufo off<br />
ethtool -K eth3 lro off<br />
ethtool -K eth3 gso off<br />
ethtool -K eth3 rx off<br />
ethtool -K eth3 tx off<br />
ethtool -K eth3 sg off<br />
ethtool -K eth3 rxvlan off<br />
ethtool -K eth3 txvlan off<br />
ethtool -N eth3 rx-flow-hash udp4 sdfn<br />
ethtool -N eth3 rx-flow-hash udp6 sdfn<br />
ethtool -C eth3 rx-usecs 1 rx-frames 0<br />
ethtool -C eth3 adaptive-rx off</blockquote>
<br />
<br />
Make the script executable with:<br />
<blockquote class="tr_bq">
chmod 755 disable-network-offload.sh</blockquote>
To make sure you have the ixgbe module always loaded at boot time you can add "<i><b>ixgbe</b></i>" to the <i><b>/etc/modules</b></i> file.<br />
<br />
<h2>
Kernel specific tuning</h2>
<br />
Certain kernel parameter adjustments can help as well:<br />
<br />
<blockquote class="tr_bq">
sysctl -w net.core.netdev_max_backlog=250000<br />
sysctl -w net.core.rmem_max=16777216<br />
sysctl -w net.core.rmem_default=16777216<br />
sysctl -w net.core.optmem_max=16777216<br />
<br /></blockquote>
<br />
<h2>
Making kernel changes permanent across reboots</h2>
<br />
For example:<br />
<blockquote class="tr_bq">
echo 'net.core.netdev_max_backlog=250000' >> /etc/sysctl.conf</blockquote>
<br />
Then reload the changes: <br />
<blockquote class="tr_bq">
sysctl -p</blockquote>
<br />
OR for all the above adjustments:<br />
<br />
<blockquote class="tr_bq">
echo 'net.core.netdev_max_backlog=250000' >> /etc/sysctl.conf<br />
echo 'net.core.rmem_max=16777216' >> /etc/sysctl.conf<br />
echo 'net.core.rmem_default=16777216' >> /etc/sysctl.conf<br />
echo 'net.core.optmem_max=16777216' >> /etc/sysctl.conf<br />
sysctl -p</blockquote>
<br />
<br />
<h2>
Suricata.yaml configuration (file extraction specific)</h2>
As of Suricata 1.2 - it is possible to detect and extract/store over 5000 types of files from HTTP sessions.<br />
<br />
Specific file extraction instructions can also be found in the <a href="https://redmine.openinfosecfoundation.org/projects/suricata/wiki/File_Extraction" target="_blank">official page documentation</a>. <br />
<br />
The following libraries are needed on the system running Suricata :<br />
<blockquote class="tr_bq">
apt-get install libnss3-dev libnspr4-dev</blockquote>
<br />
Suricata also needs to be compiled with file extraction enabled (not covered here).<br />
<br />
<b>In short</b>, these are the sections in suricata.yaml that can be tuned/configured and that affect file extraction and logging<br />
(the bigger the mem values, the better on a busy link):<br />
<br />
<br />
<blockquote class="tr_bq">
- eve-log:<br />
<b>enabled: yes</b><br />
type: file #file|syslog|unix_dgram|unix_stream<br />
filename: eve.json<br />
# the following are valid when type: syslog above<br />
#identity: "suricata"<br />
#facility: local5<br />
#level: Info ## possible levels: Emergency, Alert, Critical,<br />
## Error, Warning, Notice, Info, Debug<br />
types:<br />
- alert<br />
- http:<br />
extended: yes # enable this for extended logging information<br />
- dns<br />
- tls:<br />
extended: yes # enable this for extended logging information<br />
- files:<br />
<b>force-magic: yes</b> # force logging magic on all logged files<br />
<b> force-md5: yes </b> # force logging of md5 checksums<br />
#- drop<br />
- ssh </blockquote>
<br />
<br />
For file store to disk/extraction:<br />
<blockquote class="tr_bq">
- file-store:<br />
enabled: <b>yes</b> # set to yes to enable<br />
log-dir: files # directory to store the files<br />
force-magic: <b>yes</b> # force logging magic on all stored files<br />
force-md5: <b>yes </b> # force logging of md5 checksums<br />
#waldo: file.waldo # waldo file to store the file_id across runs</blockquote>
<br />
<br />
<blockquote class="tr_bq">
stream:<br />
memcap: <b>32mb</b><br />
checksum-validation: <b>no</b> # reject wrong csums<br />
inline: auto # auto will use inline mode in IPS mode, yes or no set it statically<br />
reassembly:<br />
memcap: <b>128mb</b><br />
depth: <b>1mb</b> # reassemble 1mb into a stream</blockquote>
<b> </b><br />
<b>depth: 1mb</b> means that within one reassembled TCP flow, the maximum size of a file that can be extracted is about 1mb.<br />
<br />
Both <b>stream.memcap </b>and<b> reassembly.memcap </b>(if reassembly is needed) must be big enough to accommodate, on the fly, the whole file being extracted, <b>PLUS </b>any other stream and reassembly tasks the engine needs to do while inspecting the traffic on a particular link. <br />
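For example (a sketch only - the exact memcap sizing depends on your traffic volume), a configuration meant to allow extraction of files up to roughly 100mb from a single flow might look like:

```
stream:
  memcap: 1gb
  checksum-validation: no
  reassembly:
    memcap: 4gb
    depth: 100mb      # allow up to ~100mb of one flow to be reassembled
```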
<br />
<blockquote class="tr_bq">
app-layer:<br />
protocols:<br />
....<br />
....<br />
http:<br />
enabled: yes<br />
<b> # memcap: 64mb</b></blockquote>
<br />
The default memory-usage limit for HTTP is <b>64mb</b>; that can be increased, e.g. <b>memcap: 4gb</b>, since HTTP is present everywhere and a low memcap on a busy HTTP link would limit the inspection and extraction size ability.<br />
<br />
<blockquote class="tr_bq">
libhtp:<br />
<br />
default-config:<br />
personality: IDS<br />
<br />
# Can be specified in kb, mb, gb. Just a number indicates<br />
# it's in bytes.<br />
request-body-limit: <b>3072</b><br />
response-body-limit: <b>3072</b></blockquote>
<br />
The default values above control how far into the HTTP request and response bodies Suricata tracks, and they also limit file inspection. They should be set to a much higher value:<br />
<br />
libhtp:<br />
<br />
default-config:<br />
personality: IDS<br />
<br />
# Can be specified in kb, mb, gb. Just a number indicates<br />
# it's in bytes.<br />
request-body-limit: <b>1gb</b><br />
response-body-limit: <b>1gb</b><br />
<br />
or 0 (which would mean unlimited):<br />
<br />
libhtp:<br />
<br />
default-config:<br />
personality: IDS<br />
<br />
# Can be specified in kb, mb, gb. Just a number indicates<br />
# it's in bytes.<br />
request-body-limit: <b>0</b><br />
response-body-limit: <b>0</b><br />
<br />
And then, of course, you need a rule loaded (example):<br />
<blockquote class="tr_bq">
alert http any any -> any any (msg:"PDF file Extracted"; filemagic:"PDF document"; filestore; sid:11; rev:11;)</blockquote>
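Other file keywords can be used in the same way; for instance, a hypothetical companion rule matching on the file name extension instead of the file magic (the sid value here is an arbitrary example):

```
alert http any any -> any any (msg:"EXE file extracted"; fileext:"exe"; filestore; sid:12; rev:1;)
```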
<br />
<br />
<br />
That's it.<br />
<br />
<br />Pevmahttp://www.blogger.com/profile/07698265905172078652noreply@blogger.com2