Sunday, February 9, 2014

Mass deploying and updating Suricata IDPS with Ansible

aka The Ansibility side of Suricata

Deploying Suricata IDPS in multiple locations sounds complicated - from compiling and installing to configuring on multiple servers/locations... actually, with Ansible, it is not.

Ansible is a radically simple IT automation platform that makes your applications and systems easier to deploy. Avoid writing scripts or custom code to deploy and update your applications - automate in a language that approaches plain English, using SSH, with no agents to install on remote systems.

If you follow this article you should be able to update/upgrade multiple Suricata deployments with the push of a button. The Ansible playbook scripts set up in this article are available at github HERE, with detailed explanations inside suricata-deploy.yaml (which has nothing to do with Suricata's suricata.yaml).

This article targets Debian/Ubuntu like systems.

Why Ansible

Well, these are my reasons why Ansible:

  • Open Source - GPLv3.
  • Agent-less - no need for agent installation and management.
  • Python/yaml based.
  • Highly flexible and configurable management of systems.
  • Large number of ready to use modules for system management.
  • Custom modules can be added if needed - written in ANY language.
  • Secure transport and deployment of the whole execution process (SSL encrypted).
  • Fast, parallel execution (10/20/50... machines at a time).
  • Staggered deployment - continue with the deployment process only if
    the first batch succeeds.
  • Works over slow and geographically dispersed connections - "fire and
    forget" mode-  Ansible will start execution, and periodically log in
    and check if the task is finished, no need for always ON connection.
  • Fast, secure connection - for speed Ansible can be configured to use a
    special SSL mode that is much faster than the regular ssh connection,
    while periodically (configurable) regenerating and using new encryption keys.
  • On-demand task execution - "push the button".
  • Roll-back on error - if the deployment fails for some reason, Ansible
    can be configured to roll back the execution.
  • Auto retries -  can be configured to automatically retry failed
    tasks...for a number of times or until a condition is met.
  • Cloud - integration modules to manage  cloud services exist.
  • All that until the tasks are done(or interrupted) or the default
    (configurable) Ansible connection limit times out.

...and it works (speaking from experience).

What you need: On the central management server

On Debian/Ubuntu systems
sudo apt-get install python-yaml python-jinja2 python-paramiko python-crypto python-keyczar ansible
NOTE: python-keyczar must be installed on both the control (central) AND remote machines in order to use accelerated mode - fast execution.

Then in most systems you will find the config files under /etc/ansible/
The two most important files are ansible.cfg and hosts

For the purpose of this post/tutorial, in the hosts file you can add:
#ssh SomeUser@
HP-Test1 ansible_ssh_host= ansible_ssh_user=SomeUser
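
If you manage several honeypots, they can also be grouped in the inventory - the --limit HoneyPots option used later in this article assumes such a group. A sketch (the IPs and the second host name here are placeholders):

```ini
# /etc/ansible/hosts - example inventory (placeholder values)
[HoneyPots]
HP-Test1 ansible_ssh_host=192.0.2.10 ansible_ssh_user=SomeUser
HP-Test2 ansible_ssh_host=192.0.2.11 ansible_ssh_user=SomeUser
```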

Lines starting with "#" are comments.
The value of ansible_ssh_host will be the IP of the machine/server that will be remotely managed - in this particular case, a HoneyPot server.

Do not forget to add to your /etc/ssh/ssh_config (if you do not have a key, generate one - here is how)
  IdentityFile /root/.ssh/id_rsa_ansible_hp
  User SomeUser

What you need: On the remotely managed servers

What you need to do on the devices that are going to be remotely managed (in this tutorial, for example) is to have the following packages installed:
sudo apt-get install  python-crypto python-keyczar

Add the public key for the user "SomeUser" (in this case) under the authorized_keys on that remote machine. An example location would be /home/SomeUser/.ssh/authorized_keys. In other words, password-less (without a pass-phrase) ssh key authentication.

Make sure "SomeUser" has password-less sudo as well.
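One way to set that up (a sketch - scope the permissions according to your own policy) is a sudoers drop-in:

```
# /etc/sudoers.d/someuser - edit with "visudo -f /etc/sudoers.d/someuser"
# Allows SomeUser to run sudo commands without a password prompt.
SomeUser ALL=(ALL) NOPASSWD: ALL
```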
Then on the "central" machine (the one where you would be managing everything else from) you need the ssh_config entry shown earlier.

So let's see if everything is up and good to go (some commands you can try):
ansible -m ping HP-Test1

Above we use the built-in "ping" module of Ansible against the remote machine that we will manage - HP-Test1.

You can try as well:
ansible -m setup HP-Test1
You will receive a full set of Ansible facts gathered from the HP-Test1 machine.

Run it

The set up in this article is available at github HERE, with detailed explanations inside the suricata-deploy.yaml.
All you need to do is git clone it and run it. Like so:

root@LTS-64-1:~/Work/test#git clone
You can do a "tree MassDeploySuricata" to have a fast look (at the time of this writing):

root@LTS-64-1:~/Work/test#cd MassDeploySuricata

root@LTS-64-1:~/Work/test/MassDeploySuricata# ansible-playbook -i /etc/ansible/hosts suricata-deploy.yaml

and your results could look like this:

The red lines in the pics above might be worrying to you; however, that is because the machines in question are Xen virtuals, and in this particular case, if you manually run
ethtool -K eth0 rx off
it will return an error:
Cannot set device rx csum settings: Operation not supported
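
If you prefer the play not to abort on such expected failures, Ansible's ignore_errors keyword can be used - a sketch (the task name and interface are illustrative):

```yaml
# Disable rx checksum offloading; unsupported on some Xen guests,
# so carry on even if the command fails.
- name: disable rx checksumming
  command: ethtool -K eth0 rx off
  ignore_errors: yes
```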

Do not use dashes in the "register" keyword -> suricata-process-rc-code
Instead use underscores => suricata_process_rc_code
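
For example, registering a task's result under a valid variable name could look like this (a sketch - the tasks themselves are illustrative, not taken from the playbook):

```yaml
# "register" stores the task result; the variable name must not
# contain dashes, so use underscores.
- name: check if Suricata is running
  command: pgrep suricata
  register: suricata_process_rc_code
  ignore_errors: yes

- name: show the return code
  debug: msg="pgrep returned {{ suricata_process_rc_code.rc }}"
```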

Some useful Ansible commands:
ansible-playbook -i /etc/ansible/hosts  suricata-deploy.yaml --limit HoneyPots --list-tasks
or just
ansible-playbook -i /etc/ansible/hosts  suricata-deploy.yaml --list-tasks

will list the tasks available for execution - that is, if any task is tagged, aka has a "tags" keyword.
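
A tagged task could look like this (a sketch with an illustrative package name):

```yaml
# Tasks with a "tags" keyword can be selected via -t / --tags
- name: install build dependencies
  apt: pkg=libpcre3-dev state=present
  tags:
    - deploy-packages
```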

ansible-playbook -i /etc/ansible/hosts  suricata-deploy.yaml  -t deploy-packages
will deploy only the tasks tagged "deploy-packages"

ansible-playbook -i /etc/ansible/hosts -l HP-Test1 suricata-deploy.yaml
will run all the tasks, but only against host "HP-Test1"

Some more very good and informational links:
Complete and excellent Ansible tutorial
Ansible Documentation
