Just recently (at the time of this writing), a few new config options for suricata.yaml were introduced in the dev branch (git) of Suricata IDPS:
stream:
  memcap: 32mb
  checksum-validation: yes      # reject wrong csums
  inline: auto                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 64mb
    depth: 1mb                  # reassemble 1mb into a stream
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes
    randomize-chunk-range: 10
    raw: yes
    chunk-prealloc: 250         # size 4KB
    segments:
      - size: 4
        prealloc: 256
      - size: 16
        prealloc: 512
      - size: 112
        prealloc: 512
      - size: 248
        prealloc: 512
      - size: 512
        prealloc: 512
      - size: 768
        prealloc: 1024
      - size: 1448
        prealloc: 1024
      - size: 65535
        prealloc: 128
and under the app layer protocols section (in suricata.yaml):

http:
  enabled: yes
  # memcap: 64mb
Stream segments memory preallocation - config option
This first one gives you advanced, granular control over your memory consumption, in terms of preallocating memory for segmented packets (of a certain size) that go through the stream reassembly engine. The patch's info:
commit b5f8f386a37f61ae0c1c874b82f978f34394fb91
Author: Victor Julien <victor@inliniac.net>
Date:   Tue Jan 28 13:48:26 2014 +0100

    stream: configurable segment pools

    The stream reassembly engine uses a set of pools in which preallocated
    segments are stored. There are various pools each with different packet
    sizes. The goal is to lower memory pressure. Until now, these pools were
    hardcoded.

    This patch introduces the ability to configure them fully from the yaml.
    There can be at max 256 of these pools.
In other words, to speed things up in Suricata, you could do some traffic profiling with the iptraf tool (apt-get install iptraf, then select "Statistical breakdowns", then select "By packet size", then the appropriate interface):
So, partly based on the pic above (you should also determine the packet-size breakdown from a TCP perspective), you could make some adjustments to the default config section in suricata.yaml:
segments:
  - size: 4
    prealloc: 256
  - size: 74
    prealloc: 65535
  - size: 112
    prealloc: 512
  - size: 248
    prealloc: 512
  - size: 512
    prealloc: 512
  - size: 768
    prealloc: 1024
  - size: 1276
    prealloc: 65535
  - size: 1425
    prealloc: 262140
  - size: 1448
    prealloc: 262140
  - size: 9216
    prealloc: 65535
  - size: 65535
    prealloc: 9216
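If iptraf is not handy, the same packet-size breakdown can be scripted. Below is a minimal sketch (the POOL_SIZES list and both helper functions are my own illustration, not part of Suricata) that reads a classic libpcap capture, e.g. one recorded with tcpdump -w, and counts how many packets would land in each configured pool, each packet counted against the smallest pool size that fits it:

```python
import struct
from collections import Counter

# Pool sizes taken from the adjusted segments config above (assumed list).
POOL_SIZES = [4, 74, 112, 248, 512, 768, 1276, 1425, 1448, 9216, 65535]

def packet_lengths(pcap_bytes):
    """Yield original packet lengths from a classic libpcap file."""
    magic = pcap_bytes[:4]
    if magic == b'\xd4\xc3\xb2\xa1':      # little-endian pcap magic
        endian = '<'
    elif magic == b'\xa1\xb2\xc3\xd4':    # big-endian pcap magic
        endian = '>'
    else:
        raise ValueError('not a classic pcap file')
    offset = 24                            # global header is 24 bytes
    while offset + 16 <= len(pcap_bytes):
        # per-packet header: ts_sec, ts_usec, incl_len, orig_len
        _, _, incl_len, orig_len = struct.unpack(
            endian + 'IIII', pcap_bytes[offset:offset + 16])
        yield orig_len
        offset += 16 + incl_len

def pool_breakdown(lengths):
    """Count packets per pool: smallest pool size that fits each packet."""
    counts = Counter()
    for n in lengths:
        pool = next((s for s in POOL_SIZES if n <= s), POOL_SIZES[-1])
        counts[pool] += 1
    return counts
```

Feed it a representative capture from the monitored interface and use the per-pool counts as a starting point for the prealloc values.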
Make sure you calculate your memory: all of this falls under the stream reassembly memcap set in the yaml, so that value naturally has to be big enough to accommodate these changes :).
For example, the changes above would need about 1955 MB of RAM out of the stream reassembly memcap set in suricata.yaml. So, for example, if the values are set like so:

stream:
  memcap: 2gb
  checksum-validation: yes      # reject wrong csums
  inline: auto                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 4gb

it will use 1955 MB for the preallocated segment packets, and roughly 2 GB will be left for the other reassembly tasks - for example, allocating segments and chunks that were not preallocated in the settings.
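As a sanity check of that figure, the footprint of the adjusted pools can be recomputed. This is a back-of-the-envelope sketch that only multiplies size by prealloc and ignores any per-segment bookkeeping overhead Suricata itself adds:

```python
# (size, prealloc) pairs from the adjusted segments config above.
segments = [
    (4, 256), (74, 65535), (112, 512), (248, 512),
    (512, 512), (768, 1024), (1276, 65535), (1425, 262140),
    (1448, 262140), (9216, 65535), (65535, 9216),
]

# Each pool preallocates `prealloc` segments of `size` bytes.
total_bytes = sum(size * prealloc for size, prealloc in segments)
print(f'{total_bytes / 1024 ** 2:.1f} MB')   # prints 1955.8 MB
```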
If you would like to be exact, you can run Suricata with the -v switch to enable verbosity, which gives you an exact picture of your segment pool usage (for example: run it for 24 hrs and then stop it with kill -15 pid_of_suricata):
(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 4 had a peak use of 96199 segments, more than the prealloc setting of 256
(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 16 had a peak use of 28743 segments, more than the prealloc setting of 512
(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 112 had a peak use of 96774 segments, more than the prealloc setting of 512
(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 248 had a peak use of 25833 segments, more than the prealloc setting of 512
(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 512 had a peak use of 24354 segments, more than the prealloc setting of 512
(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 768 had a peak use of 30954 segments, more than the prealloc setting of 1024
(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 1448 had a peak use of 139742 segments, more than the prealloc setting of 1024
(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 65535 had a peak use of 139775 segments, more than the prealloc setting of 128
(stream.c:182) <Info> (StreamMsgQueuesDeinit) -- TCP segment chunk pool had a peak use of 21676 chunks, more than the prealloc setting of 250

So then you can adjust the values accordingly for each segment pool.
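Those shutdown statistics are easy to post-process. Here is a hedged sketch (the regex and the suggest_prealloc helper are my own, written against the log format shown above, not a Suricata tool) that pulls out the peak use per pool so you can raise each prealloc to at least the observed peak:

```python
import re

# Matches the "-v" shutdown lines shown above (assumed stable format).
LINE_RE = re.compile(
    r'TCP segment pool of size (\d+) had a peak use of (\d+) segments')

def suggest_prealloc(log_text):
    """Map pool size -> peak segment use seen during the run."""
    return {int(size): int(peak)
            for size, peak in LINE_RE.findall(log_text)}

log = """\
TCP segment pool of size 1448 had a peak use of 139742 segments, more than the prealloc setting of 1024
TCP segment pool of size 65535 had a peak use of 139775 segments, more than the prealloc setting of 128
"""
print(suggest_prealloc(log))   # prints {1448: 139742, 65535: 139775}
```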
HTTP memcap option
In suricata.yaml you can also set an explicit limit for the HTTP-related memory usage of the inspection engine (separate from the stream and reassembly memcaps):
http:
  enabled: yes
  memcap: 4gb
These two config options add some more powerful ways of fine-tuning the already highly flexible Suricata IDPS.
Of course, when setting memcaps in suricata.yaml, you have to make sure the totals fit within the available RAM on your server/machine.... otherwise funny things happen :)