Sunday, February 23, 2014

Signing Windows installation MSI packages with a certificate

This article describes how to sign your Windows Installer (MSI) packages with a certificate. For that you need three things in place:
  1. p12 certificate file - Personal Information Exchange (.p12)
  2. the msi package
  3. Windows Software Development Kit (SDK) installed (for your respective windows installation)
In order to sign your package you need a Code Signing Certificate.
INFO: Certum offers free code-signing certificates for open source projects (Open Source Code Signing) at the time of this writing.

After you have all of the above three prerequisites in place, here are the two ways to sign a package, on Windows 7 and Windows 8 respectively.
(Execute the following and substitute your directory, names and password correctly!! In other words - substitute the bold text below plus the web link)

Windows 7

c:\Users\peter.manev>"C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\signtool.exe"  sign /v /f my-p12-file.p12 /p "PASSWORD-FOR-p12-HERE" /d "My install package is very cool and is about this" /du "" /t my-win-installer-package.msi

Windows 8

c:\Users\peter.manev>"C:\Program Files (x86)\Windows Kits\8.0\bin\x86\signtool.exe"  sign /v /f my-p12-file.p12 /p "PASSWORD-FOR-p12-HERE" /d "My install package is very cool and is about this" /du "" /t my-win-installer-package.msi

The result should be something like this:
The following certificate was selected:
    Issued to: Open Source Developer, Developers Team
    Issued by: Certum Level III CA
    Expires:   Thu Mar 20 08:07:06 2014

Done Adding Additional Store
Successfully signed: my-win-installer-package.msi

Number of files successfully Signed: 1
Number of warnings: 0
Number of errors: 0

Sunday, February 16, 2014

Suricata IDPS installation on OpenSUSE

This is a quick tutorial on how to install Suricata IDPS (latest dev edition from git) on OpenSUSE, with MD5/file-extraction and GeoIP features enabled.

For this tutorial we use OpenSUSE 13.1 (Bottle) (x86_64) 64-bit with kernel 3.11.6:

uname -a
Linux 3.11.6-4-desktop #1 SMP PREEMPT Wed Oct 30 18:04:56 UTC 2013 (e6d4a27) x86_64 x86_64 x86_64 GNU/Linux 

Step 1

Install the needed packages:
zypper install gcc zlib-devel libtool make libpcre1 autoconf automake gcc-c++ pcre-devel libz1 file-devel libnet1 libpcap1 libpcap-devel libnet-devel libyaml-devel libyaml-0-2 git-core wget libcap-ng0 libcap-ng-devel libmagic1 file-magic

Step 2

For MD5 functionality and file extraction capability:
zypper install mozilla-nss mozilla-nss-devel mozilla-nspr mozilla-nspr-devel mozilla-nss-tools

Step 3 

For the GeoIP functionality:
zypper install GeoIP libGeoIP-devel

Step 4

Git clone the latest dev branch, compile and configure (one-liner, copy/paste ready):

git clone git:// \
&& cd oisf/\
&&  git clone -b 0.5.x \
&& ./ \
&& ./configure --prefix=/usr/ --sysconfdir=/etc/ --localstatedir=/var/ \
--disable-gccmarch-native --enable-gccprotect \
--enable-geoip \
--with-libnss-libraries=/usr/lib64 \
--with-libnss-includes=/usr/include/nss3 \
&& make clean && make && make install \
&& ldconfig

You can change make install (above) to make install-full for an automated full set up -> directory creation, rule download and directory set up in suricata.yaml - everything ready to run!

Step 5

Some commands to confirm everything is in place:
which suricata
suricata --build-info
ldd `which suricata` 

Step 6 

Continue with the basic set-up of your networks, which rules to enable, and the other suricata.yaml config options... Basic Setup

After you are done with all the config options, you can start it like so:
suricata -c /etc/suricata/suricata.yaml -i enp0s3
Change your interface name accordingly!

If you get the following error:
 (util-magic.c:65) <Warning> (MagicInit) -- [ERRCODE: SC_ERR_FOPEN(44)] - Error opening file: "/usr/share/file/magic": No such file or directory

change the following line in your suricata.yaml from:
magic-file: /usr/share/file/magic
to:
magic-file: /usr/share/misc/magic
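One way to apply that change is a quick sed one-liner. The sketch below demonstrates it on a scratch copy so it can be run safely; on a real box you would point it at /etc/suricata/suricata.yaml (and back the file up first):

```shell
# demo on a throwaway copy; target /etc/suricata/suricata.yaml on a real system
f=$(mktemp)
echo 'magic-file: /usr/share/file/magic' > "$f"
sed -i 's|^magic-file: /usr/share/file/magic$|magic-file: /usr/share/misc/magic|' "$f"
cat "$f"   # magic-file: /usr/share/misc/magic
```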

That's all.

Saturday, February 15, 2014

Suricata - override config parameters on the command line

With the release of 2.0rc1, Suricata IDPS introduced the ability to override config parameters.
This is a brief article to give you an idea of how to override config parameters when you start Suricata on the command line, at will/on demand, without having to edit and save suricata.yaml for that.

This article follows the initial instruction posted HERE....PLUS some extra examples.

There are four sections in the article:
  • First Step
  • Overriding multiple parameters
  • Take it to the next level
  • Where to get the values from

First step

So how does it work? Simple - you use the "--set <parameter=value>" syntax:
suricata -c /etc/suricata/suricata.yaml  -i eth0 -v -S empty.rules --set threading.detect-thread-ratio=3

So imagine you start Suricata on the command line like so:
root@LTS-64-1:~/Work/tmp# suricata -c /etc/suricata/suricata.yaml  --af-packet -v -S empty.rules
 - (suricata.c:973) <Notice> (SCPrintVersion) -- This is Suricata version 2.0dev (rev f791d0f)
 - (util-cpu.c:170) <Info> (UtilCpuPrintSummary) -- CPUs/cores online: 2
In suricata.yaml, in your af-packet section, you have:
  - interface: eth0
    # Number of receive threads (>1 will enable experimental flow pinned
    # runmode)
    threads: 6

and you get :
 - (stream-tcp-reassemble.c:456) <Info> (StreamTcpReassemblyConfig) -- stream.reassembly "chunk-prealloc": 250
 - (tm-threads.c:2196) <Notice> (TmThreadWaitOnThreadInit) -- all 6 packet processing threads, 3 management threads initialized, engine started.

Then you can try the following:
root@LTS-64-1:~/Work/tmp# suricata -c /etc/suricata/suricata.yaml  --af-packet -v -S empty.rules --set  af-packet.0.threads=4
 - (suricata.c:973) <Notice> (SCPrintVersion) -- This is Suricata version 2.0dev (rev f791d0f)
 - (util-cpu.c:170) <Info> (UtilCpuPrintSummary) -- CPUs/cores online: 2

and you would get:
 - (tm-threads.c:2196) <Notice> (TmThreadWaitOnThreadInit) -- all 4 packet processing threads, 3 management threads initialized, engine started.


Now let's try to change some memory settings on the fly. If in your suricata.yaml you have:

stream:
  memcap: 32mb
  checksum-validation: yes      # reject wrong csums
  inline: auto                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 64mb
    depth: 1mb                  # reassemble 1mb into a stream
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes
    #randomize-chunk-range: 10
which are the default settings. When you start Suricata without overriding any values, it will most likely print something like this:
root@LTS-64-1:~/Work/tmp# suricata -c /etc/suricata/suricata.yaml  -i eth0 -v -S empty.rules
- (stream-tcp.c:373) <Info> (StreamTcpInitConfig) -- stream "prealloc-sessions": 2048 (per thread)
- (stream-tcp.c:389) <Info> (StreamTcpInitConfig) -- stream "memcap": 33554432
- (stream-tcp.c:395) <Info> (StreamTcpInitConfig) -- stream "midstream" session pickups: disabled
- (stream-tcp.c:401) <Info> (StreamTcpInitConfig) -- stream "async-oneside": disabled
- (stream-tcp.c:418) <Info> (StreamTcpInitConfig) -- stream "checksum-validation": enabled
- (stream-tcp.c:440) <Info> (StreamTcpInitConfig) -- stream."inline": disabled
- (stream-tcp.c:453) <Info> (StreamTcpInitConfig) -- stream "max-synack-queued": 5
- (stream-tcp.c:471) <Info> (StreamTcpInitConfig) -- stream.reassembly "memcap": 67108864
- (stream-tcp.c:489) <Info> (StreamTcpInitConfig) -- stream.reassembly "depth": 1048576

Let's say you want to raise the stream reassembly memcap setting because you are seeing a lot of drops and want to determine if this is the issue. Then you could try:
root@LTS-64-1:~/Work/tmp# suricata -c /etc/suricata/suricata.yaml  -i eth0 -v -S empty.rules --set stream.reassembly.memcap=512mb
- (stream-tcp.c:373) <Info> (StreamTcpInitConfig) -- stream "prealloc-sessions": 2048 (per thread)
 - (stream-tcp.c:389) <Info> (StreamTcpInitConfig) -- stream "memcap": 33554432
 - (stream-tcp.c:395) <Info> (StreamTcpInitConfig) -- stream "midstream" session pickups: disabled
 - (stream-tcp.c:401) <Info> (StreamTcpInitConfig) -- stream "async-oneside": disabled
 - (stream-tcp.c:418) <Info> (StreamTcpInitConfig) -- stream "checksum-validation": enabled
 - (stream-tcp.c:440) <Info> (StreamTcpInitConfig) -- stream."inline": disabled
 - (stream-tcp.c:453) <Info> (StreamTcpInitConfig) -- stream "max-synack-queued": 5
 - (stream-tcp.c:471) <Info> (StreamTcpInitConfig) -- stream.reassembly "memcap": 536870912
 - (stream-tcp.c:489) <Info> (StreamTcpInitConfig) -- stream.reassembly "depth": 1048576

and there it is: 512MB of stream reassembly memcap.

You could override all the variables in suricata.yaml that way. Another example:
root@LTS-64-1:~/Work/tmp# suricata -c /etc/suricata/suricata.yaml  -i eth0 -v -S empty.rules --set flow-timeouts.tcp.established=720

this would change the established TCP flow timeout to 720 seconds. The corresponding default section for the example above in suricata.yaml is:

flow-timeouts:
  default:
    new: 30
    established: 300
    closed: 0
    emergency-new: 10
    emergency-established: 100
    emergency-closed: 0
  tcp:
    new: 60
    established: 3600
    closed: 120
    emergency-new: 10
    emergency-established: 300
    emergency-closed: 20

Override multiple parameters

Sure, no problem:
root@LTS-64-1:~/Work/tmp# suricata -c /etc/suricata/suricata.yaml  -i eth0 -v -S empty.rules --set flow-timeouts.tcp.established=720 --set stream.reassembly.memcap=512mb

Take it to the next level

Here you go:
src/suricata --af-packet=${NIC_IN} -S /dev/null -c suricata.yaml -l "${TD}/logs" -D --pidfile="${TD}/" --set "logging.outputs.1.file.enabled=yes" --set "logging.outputs.1.file.filename=${TD}/logs/suricata.log" --set "af-packet.0.interface=eth2" --set "af-packet.0.threads=4" --set "flow.memcap=256mb" --set "stream.reassembly.memcap=512mb" --runmode=workers --set "af-packet.0.buffer-size=8388608"
Yep ... a one-liner :) - my favorite, compliments to Victor Julien.
You could use variables too! Handy... very handy, I believe.
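To sketch the variables idea (NIC_IN, TD and the chosen override values here are placeholders, mirroring the one-liner above):

```shell
# build the override flags once and reuse them (example values only)
NIC_IN=eth2
TD=/var/tmp/suricata-test
OVERRIDES="--set af-packet.0.interface=${NIC_IN} --set af-packet.0.threads=4 --set stream.reassembly.memcap=512mb"
echo suricata --af-packet="${NIC_IN}" -c suricata.yaml -l "${TD}/logs" ${OVERRIDES}
```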

Where to get the key/values from

(thanks to a friendly reminder from regit)

So how do you know what the key/value pairs are - aka, where do you get the key and value for af-packet.0.buffer-size=8388608?

key ->  af-packet.0.buffer-size
value -> 8388608
(the value is the one that you can adjust)

Easy - just issue a "suricata --dump-config" command on the pc/server where you have Suricata installed:

root@LTS-64-1:~# suricata --dump-config
15/2/2014 -- 10:39:02 - <Notice> - This is Suricata version 2.0rc1 RELEASE
host-mode = auto
default-log-dir = /var/log/suricata/
unix-command = (null)
unix-command.enabled = yes
outputs = (null)
outputs.0 = fast
outputs.0.fast = (null)
outputs.0.fast.enabled = yes
outputs.0.fast.filename = fast.log
outputs.0.fast.append = no
outputs.1 = eve-log
outputs.1.eve-log = (null)
outputs.1.eve-log.enabled = yes
outputs.1.eve-log.type = file
outputs.1.eve-log.filename = eve.json
outputs.1.eve-log.types = (null)
outputs.1.eve-log.types.0 = alert
outputs.1.eve-log.types.1 = http
outputs.1.eve-log.types.1.http = (null)
outputs.1.eve-log.types.1.http.extended = yes
outputs.1.eve-log.types.2 = dns
outputs.1.eve-log.types.3 = tls
outputs.1.eve-log.types.3.tls = (null)
outputs.1.eve-log.types.3.tls.extended = yes
outputs.1.eve-log.types.4 = files
outputs.1.eve-log.types.4.files = (null)
outputs.1.eve-log.types.4.files.force-magic = no
outputs.1.eve-log.types.4.files.force-md5 = no
vlan.use-for-tracking = true
flow-timeouts = (null)
flow-timeouts.default = (null)
flow-timeouts.default.new = 30
flow-timeouts.default.established = 300
flow-timeouts.default.closed = 0
flow-timeouts.default.emergency-new = 10
flow-timeouts.default.emergency-established = 100
flow-timeouts.default.emergency-closed = 0
flow-timeouts.tcp = (null)
flow-timeouts.tcp.new = 60
flow-timeouts.tcp.established = 3600
flow-timeouts.tcp.closed = 120
flow-timeouts.tcp.emergency-new = 10
flow-timeouts.tcp.emergency-established = 300
flow-timeouts.tcp.emergency-closed = 20
flow-timeouts.udp = (null)
flow-timeouts.udp.new = 30
flow-timeouts.udp.established = 300
flow-timeouts.udp.emergency-new = 10
flow-timeouts.udp.emergency-established = 100
flow-timeouts.icmp = (null)
flow-timeouts.icmp.new = 30
flow-timeouts.icmp.established = 300
flow-timeouts.icmp.emergency-new = 10
flow-timeouts.icmp.emergency-established = 100
stream = (null)
stream.memcap = 32mb
stream.checksum-validation = yes
stream.inline = auto
stream.reassembly = (null)
stream.reassembly.memcap = 64mb
stream.reassembly.depth = 1mb
stream.reassembly.toserver-chunk-size = 2560
stream.reassembly.toclient-chunk-size = 2560
stream.reassembly.randomize-chunk-size = yes


It will be a LONG list, but you get all the key/value pairs from it :)
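Since the list is long, it helps to filter it. A small sketch - here fed a few lines from the dump via a here-doc so it is self-contained, but normally you would pipe suricata --dump-config straight into the awk:

```shell
# pull one value out of "key = value" dump output
awk -F' = ' '$1 == "stream.reassembly.memcap" { print $2 }' <<'EOF'
stream.memcap = 32mb
stream.reassembly.memcap = 64mb
stream.reassembly.depth = 1mb
EOF
# prints: 64mb
```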

Tuesday, February 11, 2014

Github for beginners - how to start your own repository - step by step

This is a brief article on how to start your own github repository and push updates to it.

Powerful collaboration, code review, and code management for open source and private projects.

Create a new account and sign up at github

Create a git repo - log in to github and create a new repo,


then add a README, licenses, etc. (check/uncheck the selection boxes):

On a Linux machine, make sure you have:
apt-get install git-core

Then, in the directory where you have your files and folders (the directory with the code that you would like to upload to github for sharing):

git init
git add <every single file/dir one by one>
git commit -a -m "First commit message - initial upload"
git remote add origin
git fetch origin
git rebase origin/master
git push -u origin master
NOTE: Make sure you change the name of your repository accordingly!!
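If you want to rehearse the local part of the flow first (no remote involved; the identity values below are placeholders), a throwaway directory works well:

```shell
# local-only dry run of init/add/commit in a scratch directory
cd "$(mktemp -d)"
git init -q .
git config user.email "you@example.com"   # placeholder identity
git config user.name  "Your Name"
echo "hello" > README
git add README
git commit -q -m "First commit message - initial upload"
git log --oneline
```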

When you make changes in the code directory on your machine, update the github repo:

git commit -a -m 'Some meaningful message !'
git push -u origin master

This is enough to get you githubbing :) and most likely, the more you use it, the more you are going to like it. At least that is my case.

Sunday, February 9, 2014

Mass deploying and updating Suricata IDPS with Ansible

aka The Ansibility side of Suricata

Talking about multiple deployments of Suricata IDPS and how complicated it could be to do it all - from compiling and installing to configuring on multiple servers/locations... actually, it is not, with Ansible.

Ansible is a radically simple IT automation platform that makes your applications and systems easier to deploy. Avoid writing scripts or custom code to deploy and update your applications— automate in a language that approaches plain English, using SSH, with no agents to install on remote systems.

If you follow this article you should be able to update/upgrade multiple Suricata deployments with a push of a button. The Ansible playbook scripts set up in this article are available at github HERE, with detailed explanations inside suricata-deploy.yaml (which has nothing to do with Suricata's suricata.yaml).

This article targets Debian/Ubuntu like systems.

Why Ansible

Well, these are my reasons why Ansible:

  • Open Source - GPLv3.
  • Agent-less - no need for agent installation and management.
  • Python/yaml based.
  • Highly flexible and configurable management of systems.
  • Large number of ready to use modules for system management.
  • Custom modules can be added if needed - written in ANY language.
  • Secure transport and deployment of the whole execution process (SSL encrypted).
  • Fast, parallel execution (10/20/50... machines at a time).
  • Staggered deployment - continue with the deployment process only if
    the first batch succeeds.
  • Works over slow and geographically dispersed connections - "fire and
    forget" mode-  Ansible will start execution, and periodically log in
    and check if the task is finished, no need for always ON connection.
  • Fast, secure connection - for speed, Ansible can be configured to use a
    special SSL mode that is much faster than a regular ssh connection,
    while periodically (configurable) regenerating and using new encryption
    keys.
  • On-demand task execution - "push the button".
  • Roll-back on err - if the deployment fails for some reason, Ansible
    can be configured to roll back the execution.
  • Auto retries -  can be configured to automatically retry failed
    tasks...for a number of times or until a condition is met.
  • Cloud - integration modules to manage  cloud services exist.
  • All that until the tasks are done(or interrupted) or the default
    (configurable) Ansible connection limit times out.

...and it works (speaking from experience).

What you need: On the central management server

On Debian/Ubuntu systems
sudo apt-get install python-yaml python-jinja2 python-paramiko python-crypto python-keyczar ansible
NOTE: python-keyczar must be installed on both the control (central) AND the remote machines to use accelerated mode - fast execution.

Then, on most systems, you will find the config files under /etc/ansible/.
The two most important files are ansible.cfg and hosts.

For the purpose of this post/tutorial in the hosts file you can add:
#ssh SomeUser@
HP-Test1 ansible_ssh_host= ansible_ssh_user=SomeUser

Lines starting with "#" are comments.
The ansible_ssh_host value is the IP of the machine/server that will be remotely managed - in this particular case, a HoneyPot server.

Do not forget to add to your /etc/ssh/ssh_config (if you do not have a key, generate one - here is how)
  IdentityFile /root/.ssh/id_rsa_ansible_hp
  User SomeUser

What you need: On the remotely managed servers

On the devices that are going to be remotely managed (for the example in this tutorial), you need to have the following packages installed:
sudo apt-get install  python-crypto python-keyczar

Add the public key for the user "SomeUser" (in this case) under authorized_keys on that remote machine. An example path would be /home/SomeUser/.ssh/authorized_keys. In other words, password-less (without a pass-phrase) ssh key authentication.

Make sure "SomeUser" has password-less sudo as well.
Then, on the "central" machine (the one from which you will be managing everything else), you need to add the IdentityFile/User lines shown earlier to your ssh_config.


So let's see if everything is up and good to go. A quick check with Ansible's built-in "ping" module:
ansible -m ping HP-Test1

Notice our remote machine that we will manage - HP-Test1.

You can try as well:
ansible -m setup HP-Test1
 You will receive a full ansible inventory of the HP-Test1 machine.

Run it

The  set up in this article is available at github HERE , with detailed explanations inside the suricata-deploy.yaml.
All you need to do is git clone it and run it. Like so:

root@LTS-64-1:~/Work/test#git clone
You can do a "tree MassDeploySuricata" to have a quick look (at the time of this writing):

root@LTS-64-1:~/Work/test#cd MassDeploySuricata

root@LTS-64-1:~/Work/test/MassDeploySuricata# ansible-playbook -i /etc/ansible/hosts suricata-deploy.yaml

and your results could look like this:

The red lines in the pics above might look worrying; however, they appear because the machines in question are Xen virtuals, and in this particular case, if you manually run
ethtool -K eth0 rx off
it will return an error:
Cannot set device rx csum settings: Operation not supported

Do not use dashes in the "register" keyword -> suricata-process-rc-code
Instead, use underscores => suricata_process_rc_code
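For example, a hypothetical task using register (the task name and command are illustrative, not taken from the playbook in this post):

```yaml
- name: Check whether Suricata is running
  command: pgrep suricata
  register: suricata_process_rc_code   # underscores, not dashes
  ignore_errors: yes
```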

Some useful Ansible commands:
ansible-playbook -i /etc/ansible/hosts  suricata-deploy.yaml --limit HoneyPots --list-tasks
or just
ansible-playbook -i /etc/ansible/hosts  suricata-deploy.yaml --list-tasks

will list the tasks available for execution - that is, any task that is tagged, aka has a "tags:" entry, like pictured below:
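A tagged task would look roughly like this (hypothetical task; only the tags: part matters here):

```yaml
- name: Install build dependencies
  apt: pkg=build-essential state=present
  tags:
    - deploy-packages
```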

ansible-playbook -i /etc/ansible/hosts  suricata-deploy.yaml  -t deploy-packages
will deploy only the tasks tagged "deploy-packages"

ansible-playbook -i /etc/ansible/hosts -l HP-test1 suricata-deploy.yaml
will do all the tasks but only against host "HP-test1"

Some more very good and informative links:
Complete and excellent Ansible tutorial
Ansible Documentation

Thursday, February 6, 2014

Granularity in advance memory tuning for segments and http processing with Suricata

Just recently (at the time of this writing), a few new config options for suricata.yaml were introduced in the dev branch of Suricata IDPS (git):

stream:
  memcap: 32mb
  checksum-validation: yes      # reject wrong csums
  inline: auto                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 64mb
    depth: 1mb                  # reassemble 1mb into a stream
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes
    randomize-chunk-range: 10
    raw: yes
    chunk-prealloc: 250 #size 4KB
    segments:
      - size: 4
        prealloc: 256
      - size: 16
        prealloc: 512
      - size: 112
        prealloc: 512
      - size: 248
        prealloc: 512
      - size: 512
        prealloc: 512
      - size: 768
        prealloc: 1024
      - size: 1448
        prealloc: 1024
      - size: 65535
        prealloc: 128

and under the app layer protocols section (in suricata.yaml) ->

    http:
      enabled: yes
      # memcap: 64mb

Stream segments memory preallocation - config option

This first one gives you advanced granular control over your memory consumption, in terms of preallocating memory for segmented packets (of certain sizes) that go through the stream reassembly engine.

The patch's info:
commit b5f8f386a37f61ae0c1c874b82f978f34394fb91
Author: Victor Julien <>
Date:   Tue Jan 28 13:48:26 2014 +0100

    stream: configurable segment pools
    The stream reassembly engine uses a set of pools in which preallocated
    segments are stored. There are various pools each with different packet
    sizes. The goal is to lower memory presure. Until now, these pools were hardcoded.
    This patch introduces the ability to configure them fully from the yaml.
    There can be at max 256 of these pools.

In other words, to speed things up in Suricata, you could do some traffic profiling with the iptraf tool (apt-get install iptraf, then select "Statistical breakdowns", then "By packet size", then the appropriate interface):

So, partly based on the pic above (one should also determine the packet breakdown from a TCP perspective), you could do some adjustments to the default config section in suricata.yaml:

      - size: 4
        prealloc: 256
      - size: 74
        prealloc: 65535
      - size: 112
        prealloc: 512
      - size: 248
        prealloc: 512
      - size: 512
        prealloc: 512
      - size: 768
        prealloc: 1024
      - size: 1276
        prealloc: 65535
      - size: 1425
        prealloc: 262140
      - size: 1448
        prealloc: 262140
      - size: 9216
        prealloc: 65535
      - size: 65535
        prealloc: 9216

Make sure you calculate your memory use - this all falls under the stream reassembly memcap set in the yaml, so naturally it has to be big enough to accommodate those changes :).
For example, the changes in bold above would need 1955 MB of RAM out of the stream reassembly memcap value set in suricata.yaml. So, for example, if the values are set like so:
stream:
  memcap: 2gb
  checksum-validation: yes      # reject wrong csums
  inline: auto                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 4gb
it will use 1955MB for prealloc'd segment packets, and there will be roughly 2GB left for the other reassembly tasks - for example, allocating segments and chunks that were not prealloc'd in the settings.
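You can sanity-check that 1955 MB figure yourself - multiply each segment size by its prealloc count and sum the results. A quick sketch with the numbers from the adjusted example above:

```shell
# sum(size * prealloc) over the example pools, reported in whole MB
awk '{ bytes += $1 * $2 } END { printf "%d MB\n", int(bytes / (1024 * 1024)) }' <<'EOF'
4     256
74    65535
112   512
248   512
512   512
768   1024
1276  65535
1425  262140
1448  262140
9216  65535
65535 9216
EOF
# prints: 1955 MB
```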

If you would like to be exact, you can run Suricata with the -v switch to enable verbosity, giving you an exact picture of what your segment numbers are (for example: run it for 24 hrs, then stop it with kill -15 pid_of_suricata):

(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 4 had a peak use of 96199 segments, more than the prealloc setting of 256
(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 16 had a peak use of 28743 segments, more than the prealloc setting of 512
(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 112 had a peak use of 96774 segments, more than the prealloc setting of 512
(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 248 had a peak use of 25833 segments, more than the prealloc setting of 512
(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 512 had a peak use of 24354 segments, more than the prealloc setting of 512
(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 768 had a peak use of 30954 segments, more than the prealloc setting of 1024
(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 1448 had a peak use of 139742 segments, more than the prealloc setting of 1024
(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 65535 had a peak use of 139775 segments, more than the prealloc setting of 128
(stream.c:182) <Info> (StreamMsgQueuesDeinit) -- TCP segment chunk pool had a peak use of 21676 chunks, more than the prealloc setting of 250
So then you can adjust the prealloc values accordingly for all segment sizes.
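A small awk sketch can pull the size/peak pairs out of those shutdown messages - here it is fed two of the lines above inline; in practice you would run it over your saved Suricata log output:

```shell
# print "size peak" pairs from the segment-pool shutdown messages
awk '/TCP segment pool of size/ {
  for (i = 1; i <= NF; i++) {
    if ($i == "size") size = $(i + 1)
    if ($i == "use")  peak = $(i + 2)   # the token after "use of"
  }
  print size, peak
}' <<'EOF'
(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 4 had a peak use of 96199 segments, more than the prealloc setting of 256
(stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 1448 had a peak use of 139742 segments, more than the prealloc setting of 1024
EOF
```

For the two sample lines this prints "4 96199" and "1448 139742", which you can then compare against the prealloc values in your yaml.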

HTTP memcap option

In suricata.yaml you can set an explicit limit on HTTP-related memory usage (separate from stream and reassembly memory) in the inspection engine.

    http:
      enabled: yes
      memcap: 4gb

Those two config options add some more powerful ways of fine-tuning the already highly flexible Suricata IDPS.

Of course, when setting memcaps in suricata.yaml, make sure the totals fit within the RAM available on your server/machine... otherwise funny things happen :)

Sunday, February 2, 2014

Suricata IDPS and Common Information Model

Short and to the point.
This patch (shown below), provided in the latest git master at the moment of this writing by Eric Leblond, makes the log data generated by Suricata IDPS CIM compliant for Data Source Integration correlation.

In other words, when using the JSON output for logging in Suricata (available in the current git master and expected to reach maturity in Suricata 2.0), you can use Logstash and Kibana to query, filter and present log data in a way which follows the CIM.

The patch's info:
commit 7a9efd74e4d88e39c6671f6a0dda28ac931ffe10
Author: Eric Leblond <>
Date:   Thu Jan 30 23:33:45 2014 +0100

    json: sync key name with CIM
    This patch is synchronizing key name with Common Information Model.
    It updates key name following what is proposed in:
    The interest of these modifications is that using the same key name
    as other software will provide an easy to correlate and improve
    data. For example, geoip setting in logstash can be applied on
    all src_ip fields allowing geoip tagging of data.

How? You could try reading the following: