Suricata IDS/IPS/NSM is a highly scalable, modular and flexible platform, with numerous configuration options that give you a lot of control.
This blog post aims to give you an overview of the settings that have an impact on memory consumption and of how the suricata.yaml config settings affect the memory usage of Suricata and the system it runs on.
One of the always relevant questions with regards to performance tuning and production deployment is: what is the total memory consumption of Suricata? Or, to be more precise, what is the total memory that Suricata will consume/use, and how can that be calculated and configured more precisely?
The details of the answer are very relevant, since they will most certainly affect the deployment set up. Not setting up the configuration correctly can lead to RAM starvation, which in turn forces the use of swap, which would most likely make your particular set up suboptimal (to be frank - useless).
In this blog post we will try to walk through the relevant settings in a suricata.yaml configuration example and come up with an equation for the total memory consumption.
For this particular set up I use:
- af-packet running mode with 16 threads configuration
- runmode: workers
- latest dev edition (git - 2.1dev (rev dcbbda5)) of Suricata at the time of this writing
- IDS mode is used in this example
- Debian Jessie/Ubuntu LTS (the OS should not matter)
Let's dive into it...
MTU size does matter
How so? If you look at the max-pending-packets setting in suricata.yaml
max-pending-packets: 1024
that will lead to the following output in suricata.log:
(tmqh-packetpool.c:398) <Info> (PacketPoolInit) -- preallocated 1024 packets. Total memory 3321856
.....
.....which is 3244 bytes per packet, per thread (pool).
The size each packet takes in memory is sizeof(struct Packet_) + DEFAULT_PACKET_SIZE, so ~1.7K plus ~1.5K, or about 3.2K.
Total memory used would be:
<number_of_threads> * <(sizeof(struct Packet_) + DEFAULT_PACKET_SIZE)> * <max-pending-packets>
With the 1024 packets configured above and 16 threads that is 16 * 1024 * 3.2K ≈ 51MB. Raise max-pending-packets to 65534 and it becomes 16 * 65534 * 3.2K ≈ 3.20GB.
NOTE: That much RAM will be reserved right away
NOTE: The number of threads does matter as well :)
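If you prefer to have the machine do the math, here is a minimal Python sketch of the calculation above. The 1728-byte sizeof(struct Packet_) and 1514-byte DEFAULT_PACKET_SIZE are the values from this particular set up (see also the note at the end of the post), not universal constants:

def packet_pool_memory(threads, max_pending_packets,
                       packet_struct_size=1728, default_packet_size=1514):
    # memory reserved for the per-thread packet pools, following the formula
    # above: threads * max-pending-packets * (Packet_ struct + packet payload)
    per_packet = packet_struct_size + default_packet_size
    return threads * max_pending_packets * per_packet

print(packet_pool_memory(16, 1024) / 1024.0 / 1024.0)            # ~51MB for this config
print(packet_pool_memory(16, 65534) / 1024.0 / 1024.0 / 1024.0)  # ~3.2GB if you crank it up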
So why is the NIC MTU important?
The MTU setting on the NIC (IDS) interface is used by af-packet as the default packet size (aka DEFAULT_PACKET_SIZE) if no explicit default-packet-size is specified in suricata.yaml:
# Preallocated size for packet. Default is 1514 which is the classical
# size for pcap on ethernet. You should adjust this value to the highest
# packet size (MTU + hardware header) on your system.
#default-packet-size: 1514
Note that above "default-packet-size" is commented out/unset. In that case af-packet will use the MTU set on the NIC as the default packet size, which in this particular set up (NIC), if you run "ifconfig", is 1514.
So if you would like to play "big" and enable those 9KB jumbo frames as the MTU on your NIC, without actually needing them, you may end up with an unwanted side effect, to say the least :)
Defrag memory settings and consumption
defrag:
  memcap: 512mb
  hash-size: 65536
  trackers: 65535 # number of defragmented flows to follow
  max-frags: 65535 # number of fragments to keep (higher than trackers)
  prealloc: yes
  timeout: 60
The setting above from the defrag section of the suricata.yaml will result in the following if you check your suricata.log:
(defrag-hash.c:220) <Info> (DefragInitConfig) -- allocated 3670016 bytes of memory for the defrag hash... 65536 buckets of size 56
(defrag-hash.c:245) <Info> (DefragInitConfig) -- preallocated 65535 defrag trackers of size 168
(defrag-hash.c:252) <Info> (DefragInitConfig) -- defrag memory usage: 14679896 bytes, maximum: 536870912
Here we have (in bytes):
(defrag hash-size * 56) + (prealloc defrag trackers * 168)
In this case that would be a total of (65536 * 56) + (65535 * 168) = 14679896 bytes, or about 14MB,
which is the "defrag memory usage: 14679896 bytes" from the output above.
That much memory is immediately allocated/reserved.
The maximum memory usage allowed to be used by defrag will be 512MB.
NOTE: The defrag preallocation settings you configure must add up to less than the maximum allowed (defrag.memcap).
Host memory settings and consumption
host:
  hash-size: 4096
  prealloc: 10000
  memcap: 16777216
The settings above (the host memory settings affect IP reputation usage) from the host section of the suricata.yaml will result in the following if you check your suricata.log:
(host.c:212) <Info> (HostInitConfig) -- allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64
(host.c:235) <Info> (HostInitConfig) -- preallocated 10000 hosts of size 136
(host.c:237) <Info> (HostInitConfig) -- host memory usage: 1622144 bytes, maximum: 1622144
Pretty simple (in bytes):
(hash-size * 64) + (prealloc hosts * 136) =
(4096 * 64) + (10000 * 136) = 1622144 bytes ≈ 1.55MB allocated/reserved right away at start.
The maximum memory allowed is 16MB (16777216 bytes).
Ippair memory settings and consumption
ippair:
  hash-size: 4096
  prealloc: 1000
  memcap: 16777216
The settings above (the ippair memory settings affect xbits usage) from the ippair section of the suricata.yaml will result in the following if you check your suricata.log:
(ippair.c:207) <Info> (IPPairInitConfig) -- allocated 262144 bytes of memory for the ippair hash... 4096 buckets of size 64
(ippair.c:230) <Info> (IPPairInitConfig) -- preallocated 1000 ippairs of size 136
(ippair.c:232) <Info> (IPPairInitConfig) -- ippair memory usage: 398144 bytes, maximum: 16777216
Pretty simple as well (in bytes):
(hash-size * 64) + (prealloc ippairs * 136) =
(4096 * 64) + (1000 * 136) = 398144 bytes ≈ 0.38MB allocated/reserved immediately upon start.
The maximum memory allowed is 16MB (16777216 bytes).
Flow memory settings and consumption
flow:
  memcap: 1gb
  hash-size: 1048576
  prealloc: 1048576
  emergency-recovery: 30
  #managers: 1 # default to one flow manager
  #recyclers: 1 # default to one flow recycler thread
The setting above from the flow config section of the suricata.yaml will result in the following in your suricata.log:
[393] 7/6/2015 -- 15:37:55 - (flow.c:441) <Info> (FlowInitConfig) -- allocated 67108864 bytes of memory for the flow hash... 1048576 buckets of size 64
[393] 7/6/2015 -- 15:37:55 - (flow.c:465) <Info> (FlowInitConfig) -- preallocated 1048576 flows of size 280
[393] 7/6/2015 -- 15:37:55 - (flow.c:467) <Info> (FlowInitConfig) -- flow memory usage: 369098752 bytes, maximum: 1073741824
Here we have (in bytes):
(flow hash-size * 64) + (prealloc flows * 280), which in this case would be
(1048576 * 64) + (1048576 * 280) ≈ 344MB (the log line above reports a bit more than the bare formula).
The above is what is going to be immediately used/reserved at start up.
The max allowed usage will be 1024MB.
A piece of advice if I may - don't ever add zeros here if you do not need to. By don't need to - I mean if you do not see flow emergency mode counters increasing in your stats.log.
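The defrag, host, ippair and flow sections above all follow the same pattern: (hash-size * bucket size) + (prealloc * object size). Here is a small Python sketch of that pattern, using the bucket and object sizes printed in the log excerpts in this post (they can differ between Suricata versions); as noted, the flow log line reports slightly more than the bare formula gives:

def hash_prealloc_memory(hash_size, bucket_size, prealloc, obj_size):
    # memory reserved at start up: hash table buckets + preallocated objects
    return (hash_size * bucket_size) + (prealloc * obj_size)

print(hash_prealloc_memory(65536, 56, 65535, 168))      # defrag -> 14679896
print(hash_prealloc_memory(4096, 64, 10000, 136))       # host   -> 1622144
print(hash_prealloc_memory(4096, 64, 1000, 136))        # ippair -> 398144
print(hash_prealloc_memory(1048576, 64, 1048576, 280))  # flow   -> 360710144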
Prealloc-sessions settings and consumption
stream:
  memcap: 32mb
  checksum-validation: no # reject wrong csums
  prealloc-sessions: 20000
  inline: auto
The setting above from the prealloc sessions config section of the suricata.yaml will result in the following in your suricata.log:
(stream-tcp.c:377) <Info> (StreamTcpInitConfig) -- stream "prealloc-sessions": 20000 (per thread)
This translates into bytes as follows (the TcpSession structure is 192 bytes, a PoolBucket is 24 bytes):
(192 + 24) * prealloc-sessions * number of threads = memory use in bytes
In our case we have (192 + 24) * 20000 * 16 = 69120000 bytes ≈ 65.9MB. This amount will be immediately allocated upon start up.
NOTE: The number of threads does matter as well :)
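The same calculation as a quick Python sketch, with the struct sizes quoted above (192-byte TcpSession, 24-byte PoolBucket) baked in as defaults:

def prealloc_sessions_memory(prealloc_sessions, threads,
                             tcp_session_size=192, pool_bucket_size=24):
    # prealloc-sessions is per thread, so multiply by the thread count
    return (tcp_session_size + pool_bucket_size) * prealloc_sessions * threads

print(prealloc_sessions_memory(20000, 16) / 1024.0 / 1024.0)  # ~65.9MB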
af-packet ring size memory settings and consumption
use-mmap: yes
# Ring size will be computed with respect to max_pending_packets and number
# of threads. You can set manually the ring size in number of packets by setting
# the following value. If you are using flow cluster-type and have really network
# intensive single-flow you could want to set the ring-size independently of the number
# of threads:
ring-size: 2048
The ring-size setting above from the af-packet config section of the suricata.yaml controls the size of the buffer for each ring (one per thread) for af-packet, and will result in the following in your suricata.log:
[7636] 31/8/2015 -- 22:50:51 - (source-af-packet.c:1365) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=103 frame_size=1584 frame_nr=2060
[7636] 31/8/2015 -- 22:50:51 - (source-af-packet.c:1573) <Info> (AFPCreateSocket) -- Using interface 'eth0' via socket 7
[7636] 31/8/2015 -- 22:50:51 - (source-af-packet.c:1157) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth01 using socket 7
[7637] 31/8/2015 -- 22:50:51 - (source-af-packet.c:1365) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=103 frame_size=1584 frame_nr=2060
[7637] 31/8/2015 -- 22:50:51 - (source-af-packet.c:1573) <Info> (AFPCreateSocket) -- Using interface 'eth0' via socket 8
In general that would mean:
<number of threads> * <ring-size> * <(sizeof(struct Packet_) + DEFAULT_PACKET_SIZE)>
or in our case 16 * 2048 * 3514 ≈ 109MB
This is memory allocated/reserved immediately.
Above I say "in general". You might wonder where this comes from:
(source-af-packet.c:1365) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=103 frame_size=1584 frame_nr=2060
Why 2060 frames when we specified 2048? Why block_size/frame_size and what is their relation? A full, detailed description of that can be found here - https://www.kernel.org/doc/Documentation/networking/packet_mmap.txt (thanks regit)
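To see why you end up with 2060 frames instead of the 2048 requested, here is a rough Python sketch of the relationship described in the packet_mmap document linked above (not Suricata's exact AFPComputeRingParams code): frames are grouped into blocks, so the ring is rounded up to a whole number of blocks. The frame_size and block_size defaults below are simply the values from the log output:

import math

def ring_params(ring_size, frame_size=1584, block_size=32768):
    frames_per_block = block_size // frame_size                     # 20 here
    block_nr = int(math.ceil(ring_size / float(frames_per_block)))  # 103
    frame_nr = block_nr * frames_per_block                          # 2060
    return block_nr, frame_nr

print(ring_params(2048))  # -> (103, 2060), matching the log above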
Stream and reassembly memory settings and consumption
stream:
  memcap: 14gb
  reassembly:
    memcap: 20gb
The settings above from the stream and reassembly config sections of the suricata.yaml will result in the following in your suricata.log:
......
(stream-tcp.c:393) <Info> (StreamTcpInitConfig) -- stream "memcap": 15032385536
(stream-tcp.c:475) <Info> (StreamTcpInitConfig) -- stream.reassembly "memcap": 21474836480
......
The above is very straightforward. The stream and reassembly memcaps are in total 14GB + 20GB = 34GB.
This is the maximum memory allowed; it will not be allocated immediately.
Further below in the config section we have:
#raw: yes
#chunk-prealloc: 250
Q: What does raw mean and what is chunk-prealloc?
A: The 'raw' stream inspection (content keywords w/o http_uri etc) uses
'chunks'. This is again a preallocated memory block that lives in a pool.
Q: So what is the size of a "chunk"?
A: 4KB (4096 bytes)
So in this case above we have:
250 * 4096 = 1024000 bytes ≈ 0.98MB
This is deducted/taken from the memory allowed by the stream.reassembly.memcap value.
We also have prealloc segments (sizes in bytes):
#randomize-chunk-range: 10
#raw: yes
#chunk-prealloc: 250
#segments:
# - size: 4
# prealloc: 256
# - size: 16
# prealloc: 512
# - size: 112
# prealloc: 512
# - size: 248
# prealloc: 512
# - size: 512
# prealloc: 512
# - size: 768
# prealloc: 1024
# - size: 1448
# prealloc: 1024
# - size: 65535
# prealloc: 128
#zero-copy-size: 128
More detailed info about the above can be found in my other blog post here - http://pevma.blogspot.se/2014/06/suricata-idsips-tcp-segment-pool-size.html
NOTE: Do not forget that these settings (segments preallocation) are also deducted/taken from the memory allowed by the stream.reassembly.memcap value.
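To get a feel for how much of stream.reassembly.memcap the chunk and segment preallocation above claims at start up, here is a rough Python sketch. It only sums the payload sizes listed, ignoring the per-chunk/per-segment struct overhead, so treat it as a lower bound:

# (size, prealloc) pairs taken from the commented defaults above
segments = [(4, 256), (16, 512), (112, 512), (248, 512),
            (512, 512), (768, 1024), (1448, 1024), (65535, 128)]

chunk_prealloc_bytes = 250 * 4096                                  # raw chunks, ~0.98MB
segment_prealloc_bytes = sum(size * count for size, count in segments)

total = chunk_prealloc_bytes + segment_prealloc_bytes
print(total / 1024.0 / 1024.0)  # roughly 11.6MB, taken out of stream.reassembly.memcap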
App layer memory settings and consumption
app-layer:
  protocols:
    dns:
      # memcaps. Globally and per flow/state.
      global-memcap: 2gb
      state-memcap: 512kb
    ...
    ....
    http:
      enabled: yes
      memcap: 2gb
Here we have app-layer dns + http, or in this case 2GB + 2GB = 4GB.
Other settings that affect the memory consumption
...
detect-engine:
  - profile: medium
  - custom-values:
...
Some more information:
https://redmine.
The more rules you load, the bigger the effect of a change in this setting. For example, a switch from profile: medium to profile: high would be most evident if you try it with >10000 rules.
mpm-algo: ac
The pattern matcher algorithm is of importance too, of course. ac and ac-bs are the most performant, with ac-bs being less memory intensive but also less performant than ac.
Grand total generic memory consumption equation
<number_of_total_detection_threads> * <(1728 + default-packet-size)> * <max-pending-packets>
+
<defrag.memcap>
+
<host.memcap>
+
<ippair.memcap>
+
<flow.memcap>
+
<number_of_threads> * 216 * <prealloc-sessions>
+
[per enabled af-packet interface] <af-packet_number_of_threads> * <ring-size> * <(1728 + default-packet-size)>
+
<stream.memcap> + <stream.reassembly.memcap>
+
<app-layer.protocols.dns.global-memcap>
+
<app-layer.protocols.http.memcap>
=
Total memory that is configured and should be available to be used by Suricata
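The same equation expressed as a Python sketch, using the values from this post (the 1728-byte Packet_ size is for this particular build - see the NOTE below about Suricata 3.0; the sketch also assumes the af-packet interface uses the same thread count as the detection threads):

def total_memory(threads, max_pending_packets, default_packet_size,
                 defrag_memcap, host_memcap, ippair_memcap, flow_memcap,
                 prealloc_sessions, afp_ring_size,
                 stream_memcap, reassembly_memcap,
                 dns_global_memcap, http_memcap,
                 packet_struct_size=1728):
    per_packet = packet_struct_size + default_packet_size
    return (threads * per_packet * max_pending_packets
            + defrag_memcap + host_memcap + ippair_memcap + flow_memcap
            + threads * (192 + 24) * prealloc_sessions
            + threads * afp_ring_size * per_packet   # per enabled af-packet interface
            + stream_memcap + reassembly_memcap
            + dns_global_memcap + http_memcap)

MB = 1024 ** 2
GB = 1024 ** 3
print(total_memory(16, 1024, 1514,
                   512 * MB, 16 * MB, 16 * MB, 1 * GB,
                   20000, 2048,
                   14 * GB, 20 * GB,
                   2 * GB, 2 * GB) / float(GB))  # grand total in GB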
Thank you
NOTE:
As of the Suricata 3.0 development branch, sizeof(struct Packet_) is 936 bytes, not 1728.