Saturday, May 31, 2014

Logs per second on eve.json - the good and the bad news on a 10Gbps IDPS line inspection



I found this one-liner, which prints the number of logs per second written to eve.json:
tail -f  eve.json |perl -e 'while (<>) {$l++;if (time > $e) {$e=time;print "$l\n";$l=0}}'
I take no credit for it - I got it from commandlinefu.


tail -f  eve.json |perl -e 'while (<>) {$l++;if (time > $e) {$e=time;print "$l\n";$l=0}}'
1
193
3301
3402
3862
3411
3719
3467
3522
3127
3354
^C
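If Perl isn't your thing, here is a rough Python equivalent (my own sketch, not from commandlinefu) that counts lines per clock second from stdin the same way the one-liner does:

```python
#!/usr/bin/env python3
# Rough Python equivalent of the Perl one-liner above (my own sketch,
# not from commandlinefu). Usage: tail -f eve.json | python3 eve_rate.py
import sys
import time

def count_per_tick(events):
    """Mimic the one-liner: bump a counter per line; whenever the clock
    second advances, emit the count so far (the current line included)
    and reset. `events` is an iterable of (timestamp, line) pairs."""
    count, last = 0, 0
    for ts, _line in events:
        count += 1
        if ts > last:
            last = ts
            yield count
            count = 0

if __name__ == "__main__" and not sys.stdin.isatty():
    stream = ((int(time.time()), line) for line in sys.stdin)
    for n in count_per_tick(stream):
        print(n, flush=True)
```

Like the Perl version, the very first line always produces a "1", since the counter starts against a zero timestamp.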

Bear in mind this was at Saturday lunch time... 3-3.5K logs per second here translates to at least 4-4.5K logs per second on a working day.
I had "only" these logs enabled in the eve-log section of suricata.yaml - dns, http, alert and ssh - on a 10Gbps Suricata 2.0.1 IDS sensor:

  # "United" event log in JSON format
  - eve-log:
      enabled: yes
      type: file #file|syslog|unix_dgram|unix_stream
      filename: eve.json
      # the following are valid when type: syslog above
      #identity: "suricata"
      #facility: local5
      #level: Info ## possible levels: Emergency, Alert, Critical,
                   ## Error, Warning, Notice, Info, Debug
      types:
        - alert
        - http:
            extended: yes     # enable this for extended logging information
        - dns
        #- tls:
            #extended: yes     # enable this for extended logging information
        #- files:
            #force-magic: yes   # force logging magic on all logged files
            #force-md5: yes     # force logging of md5 checksums
        #- drop
        - ssh
      append: yes

If you also enable "files" and "tls", that will probably climb to about 5-6K logs per second (maybe even more, depending on the type of traffic) with this set up.

The good news:

eve.json logs in standard JSON format (JavaScript Object Notation), so there are A LOT of log analysis solutions - open source, free and/or commercial - that can digest and analyze JSON logs.

The bad news:

How many log analysis solutions can "really" handle 5K logs per second -
  • indexing, 
  • query, 
  • search, 
  • report generation, 
  • log correlation, 
  • filter searches by key fields,
  • nice graphs - "eye candy" for the management and/or customer,
all that while being fast?
(and like that on at least 20 days of data from a 10Gbps IDPS Suricata sensor)

...aka 18 mil log records per hour ...or 432 mil per day

Another perspective -> 54-70GB of logs a day...
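These figures are easy to sanity-check. The ~150-byte average record size below is my assumption (eve.json records vary a lot by event type), chosen to land inside the stated range:

```python
# Back-of-the-envelope check of the log volumes above. The average
# record size (150 bytes) is an assumption, not a measured value.
logs_per_second = 5_000
per_hour = logs_per_second * 3600        # 18,000,000 records
per_day = per_hour * 24                  # 432,000,000 records
avg_record_bytes = 150                   # assumed eve.json record size
gb_per_day = per_day * avg_record_bytes / 1024**3
print(f"{per_hour:,} per hour, {per_day:,} per day, ~{gb_per_day:.0f} GB/day")
```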


Conclusion

Deploying and tuning Suricata IDS/IPS is the first important step. Then you need to handle all the data that comes out of the sensor.
You should very carefully consider your goals, requirements and design, and do proof-of-concept test runs, before you end up in a production situation where you can't handle what you asked for :)




Saturday, May 24, 2014

Playing with memory consumption, algorithms and af_packet ring-size in Suricata IDPS




How selecting the correct pattern matcher algorithm can make the difference between 40% and 4% packet drops on 10Gbps traffic line inspection.

In this article I describe the specifics through which I was able to tune Suricata in IDS mode down to only 4.04% drops on a 10Gbps mirror port (ISP traffic) with 9K rules.

At the bottom of the post you will find the relevant configuration along with suricata.log. It is highly inadvisable to just copy/paste it, since every set up is unique - you would have to test what best suits your needs.

Set up


  • Suricata (from git, but basically 2.0.1) with AF_PACKET, 16 threads
  • 16 (2.7 GhZ) cores with Hyper-threading enabled
  • 64G RAM
  • Ubuntu LTS Precise (with upgraded kernel 3.14) -> Linux suricata 3.14.0-031400-generic #201403310035 SMP Mon Mar 31 04:36:23 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
  • Intel 82599EB 10-Gigabit SFI/SFP+ with CPU affinity (as described here)
  • 9K rules (standard ET-Pro, not changed or edited)
  • number of hosts in HOME_NET - /21 /19 /19 /18 = about 34K hosts
  • MTU 1522 on the IDS/listening  interface

Bummer... why is the MTU mentioned here? For a good reason!!
Bear with me for a sec and you will see why.

Tuning Stage I - af_packet and ring-size


Let's start with the af-packet section in suricata.yaml -> the ring-size variable.
With it you define how many packets can be buffered, on a per-thread basis.

Example:
ring-size: 100000
would mean that Suricata will create a buffer of 100K packets per thread.

In other words, if you have (in your suricata.yaml's af-packet section)
threads: 16
ring-size: 100000
that would mean 16 x 100K buffers = 1.6 mil packets in total.

So what does this mean for memory consumption?
Well here is where the MTU comes into play.

MTU size * ring-size * number of threads

or

1522 * 100000 * 16 = 2435200000 bytes ≈ 2.3 GB

So with that set up, Suricata will reserve about 2.3 GB of RAM right away at start up.
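As a quick check of the arithmetic (pure back-of-the-envelope; real per-frame allocation overhead will differ slightly):

```python
# af_packet buffer memory estimate: MTU * ring-size * threads.
mtu = 1522           # MTU of the listening interface
ring_size = 100_000  # packets buffered per thread
threads = 16
total = mtu * ring_size * threads
print(total)                      # 2435200000 bytes
print(round(total / 1024**3, 2))  # ~2.27 GiB
```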

FYI - with the current set up we have about 1.5 mil incoming pps (packets per second).





Tuning Stage II - pattern matcher algorithm (mpm-algo)


The mpm-algo variable in suricata.yaml selects which multi-pattern matcher (MPM) algorithm Suricata will use for signature (rule) group matching. It is very important, with a huge performance impact in combination with these settings:

sgh-mpm-context: single
sgh-mpm-context: full
profile: custom
profile: low
profile: medium
profile: high

More on this you can find HERE,
where profile: custom means you can specify the group counts yourself.

The algorithm selected through this article is:
mpm-algo: ac

Below you will find some test cases for memory consumption at Suricata start-up time.
(Only the values shown in each Case are changed; the rest of the suricata.yaml config
is the same and not touched during these test cases.)

Case 1

24GB RAM at start up

detect-engine:
  - profile: custom
  - custom-values:
      toclient-src-groups: 200
      toclient-dst-groups: 200
      toclient-sp-groups: 200
      toclient-dp-groups: 300
      toserver-src-groups: 200
      toserver-dst-groups: 400
      toserver-sp-groups: 200
      toserver-dp-groups: 250
  - sgh-mpm-context: single

af_packet ring-size: 1000000
16 threads


Notice: 1 mil ring size with sgh-mpm-context: single, that gave me 19% drops:
(util-device.c:185) <Notice> (LiveDeviceListClean) -- Stats for 'eth3':  pkts: 4997993133, drop: 949059741 (18.99%), invalid chksum: 0

Case 2

10GB RAM at start up

detect-engine:
  - profile: low
  - custom-values:
      toclient-src-groups: 200
      toclient-dst-groups: 200
      toclient-sp-groups: 200
      toclient-dp-groups: 300
      toserver-src-groups: 200
      toserver-dst-groups: 400
      toserver-sp-groups: 200
      toserver-dp-groups: 250
  - sgh-mpm-context: full

af_packet ring-size: 50000
16 threads


Case 3

26GB RAM at start up

detect-engine:
  - profile: high
  - custom-values:
      toclient-src-groups: 200
      toclient-dst-groups: 200
      toclient-sp-groups: 200
      toclient-dp-groups: 300
      toserver-src-groups: 200
      toserver-dst-groups: 400
      toserver-sp-groups: 200
      toserver-dp-groups: 250
  - sgh-mpm-context: full

af_packet ring-size: 50000
16 threads


Case 4

38GB RAM at start up

detect-engine:
  - profile: high
  - custom-values:
      toclient-src-groups: 200
      toclient-dst-groups: 200
      toclient-sp-groups: 200
      toclient-dp-groups: 300
      toserver-src-groups: 200
      toserver-dst-groups: 400
      toserver-sp-groups: 200
      toserver-dp-groups: 250
  - sgh-mpm-context: full

af_packet ring-size: 500000
16 threads

Notice: 500K ring size as compared to 50K in Case 3 and Case 2
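Plugging the two ring sizes into the Stage I formula shows where most of the extra start-up memory between Case 3 and Case 4 goes (a rough estimate; the detect-engine accounts for the rest):

```python
# af_packet buffer memory for Case 3 (50K ring) vs Case 4 (500K ring),
# using the MTU * ring-size * threads estimate from Stage I.
mtu, threads = 1522, 16
case3 = mtu * 50_000 * threads
case4 = mtu * 500_000 * threads
print(round(case3 / 1024**3, 1))            # ~1.1 GiB
print(round(case4 / 1024**3, 1))            # ~11.3 GiB
print(round((case4 - case3) / 1024**3, 1))  # ~10.2 GiB difference
```

That ~10 GiB difference lines up with the jump from 26GB to 38GB RAM at start up between the two cases.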


The best config that worked for me was Case 4 !!
4.04% drops

NOTE: depending on the number of rules, sgh-mpm-context: full can increase Suricata's start-up time to a few minutes...


I also tested a different algorithm -> ac-gfbs:
detect-engine:
  - profile: custom
  - custom-values:
      toclient-src-groups: 200
      toclient-dst-groups: 200
      toclient-sp-groups: 200
      toclient-dp-groups: 300
      toserver-src-groups: 200
      toserver-dst-groups: 400
      toserver-sp-groups: 200
      toserver-dp-groups: 250
  - sgh-mpm-context: full

....
mpm-algo: ac-gfbs

with a 200K af-packet ring size, but that gave me 45% drops...
 Stats for 'eth3':  pkts: 496407325, drop: 227155539 (45.76%), invalid chksum: 0


Bottom line:
testing and selecting the correct mpm-algo and ring-size buffers can have a huge performance impact on your configuration!

Below you will find the specifics of the suricata.yaml configuration, alongside the output of suricata.log as evidence.


Configuration


suricata --build-info
This is Suricata version 2.0dev (rev 7e8f80b)
Features: PCAP_SET_BUFF LIBPCAP_VERSION_MAJOR=1 PF_RING AF_PACKET HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1 HAVE_HTP_URI_NORMALIZE_HOOK HAVE_NSS HAVE_LIBJANSSON
SIMD support: SSE_4_2 SSE_4_1 SSE_3
Atomic intrisics: 1 2 4 8 16 byte(s)
64-bits, Little-endian architecture
GCC version 4.6.3, C version 199901
compiled with -fstack-protector
compiled with _FORTIFY_SOURCE=2
L1 cache line size (CLS)=64
compiled with LibHTP v0.5.11, linked against LibHTP v0.5.11
Suricata Configuration:
  AF_PACKET support:                       yes
  PF_RING support:                         yes
  NFQueue support:                         no
  IPFW support:                            no
  DAG enabled:                             no
  Napatech enabled:                        no
  Unix socket enabled:                     yes
  Detection enabled:                       yes

  libnss support:                          yes
  libnspr support:                         yes
  libjansson support:                      yes
  Prelude support:                         no
  PCRE jit:                                no
  libluajit:                               no
  libgeoip:                                no
  Non-bundled htp:                         no
  Old barnyard2 support:                   no
  CUDA enabled:                            no

  Suricatasc install:                      yes

  Unit tests enabled:                      no
  Debug output enabled:                    no
  Debug validation enabled:                no
  Profiling enabled:                       no
  Profiling locks enabled:                 no
  Coccinelle / spatch:                     yes

Generic build parameters:
  Installation prefix (--prefix):          /usr/local
  Configuration directory (--sysconfdir):  /usr/local/etc/suricata/
  Log directory (--localstatedir) :        /usr/local/var/log/suricata/

  Host:                                    x86_64-unknown-linux-gnu
  GCC binary:                              gcc
  GCC Protect enabled:                     no
  GCC march native enabled:                yes
  GCC Profile enabled:                     no


In suricata.yaml:


# If you are using the CUDA pattern matcher (mpm-algo: ac-cuda), different rules
# apply. In that case try something like 60000 or more. This is because the CUDA
# pattern matcher buffers and scans as many packets as possible in parallel.
#max-pending-packets: 1024
max-pending-packets: 65534

# Runmode the engine should use. Please check --list-runmodes to get the available
# runmodes for each packet acquisition method. Defaults to "autofp" (auto flow pinned
# load balancing).
#runmode: autofp
runmode: workers

...
...

# af-packet support
# Set threads to > 1 to use PACKET_FANOUT support
af-packet:
  - interface: eth3
    # Number of receive threads (>1 will enable experimental flow pinned
    # runmode)
    threads: 16
    # Default clusterid.  AF_PACKET will load balance packets based on flow.
    # All threads/processes that will participate need to have the same
    # clusterid.
    cluster-id: 98
    # Default AF_PACKET cluster type. AF_PACKET can load balance per flow or per hash.
    # This is only supported for Linux kernel > 3.1
    # possible value are:
    #  * cluster_round_robin: round robin load balancing
    #  * cluster_flow: all packets of a given flow are send to the same socket
    #  * cluster_cpu: all packets treated in kernel by a CPU are send to the same socket
    cluster-type: cluster_cpu
    # In some fragmentation case, the hash can not be computed. If "defrag" is set
    # to yes, the kernel will do the needed defragmentation before sending the packets.
    defrag: no
    # To use the ring feature of AF_PACKET, set 'use-mmap' to yes
    use-mmap: yes
    # Ring size will be computed with respect to max_pending_packets and number
    # of threads. You can set manually the ring size in number of packets by setting
    # the following value. If you are using flow cluster-type and have really network
    # intensive single-flow you could want to set the ring-size independantly of the number
    # of threads:
    ring-size: 500000
    # On busy system, this could help to set it to yes to recover from a packet drop
    # phase. This will result in some packets (at max a ring flush) being non treated.
    #use-emergency-flush: yes
    # recv buffer size, increase value could improve performance
    # buffer-size: 100000
    # Set to yes to disable promiscuous mode
    # disable-promisc: no
    # Choose checksum verification mode for the interface. At the moment
    # of the capture, some packets may be with an invalid checksum due to
    # offloading to the network card of the checksum computation.
    # Possible values are:
    #  - kernel: use indication sent by kernel for each packet (default)
    #  - yes: checksum validation is forced
    #  - no: checksum validation is disabled
    #  - auto: suricata uses a statistical approach to detect when
    #  checksum off-loading is used.
    # Warning: 'checksum-validation' must be set to yes to have any validation
    checksum-checks: kernel
    # BPF filter to apply to this interface. The pcap filter syntax apply here.
    #bpf-filter: port 80 or udp
    # You can use the following variables to activate AF_PACKET tap od IPS mode.
    # If copy-mode is set to ips or tap, the traffic coming to the current
    # interface will be copied to the copy-iface interface. If 'tap' is set, the
    # copy is complete. If 'ips' is set, the packet matching a 'drop' action
    # will not be copied.

...
...   
   
detect-engine:
  - profile: high
  - custom-values:
      toclient-src-groups: 200
      toclient-dst-groups: 200
      toclient-sp-groups: 200
      toclient-dp-groups: 300
      toserver-src-groups: 200
      toserver-dst-groups: 400
      toserver-sp-groups: 200
      toserver-dp-groups: 250
  - sgh-mpm-context: full
  - inspection-recursion-limit: 1500
   
...

....
mpm-algo: ac
....
....
   
rule-files:
 - trojan.rules
 - dns.rules
 - malware.rules
 - md5.rules
 - local.rules
 - current_events.rules
 - mobile_malware.rules
 - user_agents.rules

and the suricata.log - a 24 hour run inspecting a 10Gbps line with 9K rules
(at the bottom you will find the final stats:
Stats for 'eth3':  pkts: 125740002178, drop: 5075326318 (4.04%) ):

cat StatsByDate/suricata-2014-05-25.log
[26428] 24/5/2014 -- 01:46:01 - (suricata.c:1003) <Notice> (SCPrintVersion) -- This is Suricata version 2.0dev (rev 7e8f80b)
[26428] 24/5/2014 -- 01:46:01 - (util-cpu.c:170) <Info> (UtilCpuPrintSummary) -- CPUs/cores online: 16
[26428] 24/5/2014 -- 01:46:01 - (app-layer-htp.c:2218) <Info> (HTPConfigSetDefaultsPhase2) -- 'default' server has 'request-body-minimal-inspect-size' set to 33882 and 'request-body-inspect-window' set to 4053 after randomization.
[26428] 24/5/2014 -- 01:46:01 - (app-layer-htp.c:2233) <Info> (HTPConfigSetDefaultsPhase2) -- 'default' server has 'response-body-minimal-inspect-size' set to 33695 and 'response-body-inspect-window' set to 4218 after randomization.
[26428] 24/5/2014 -- 01:46:01 - (app-layer-htp-mem.c:59) <Info> (HTPParseMemcap) -- HTTP memcap: 6442450944
[26428] 24/5/2014 -- 01:46:01 - (app-layer-htp.c:2218) <Info> (HTPConfigSetDefaultsPhase2) -- 'apache' server has 'request-body-minimal-inspect-size' set to 34116 and 'request-body-inspect-window' set to 3973 after randomization.
[26428] 24/5/2014 -- 01:46:01 - (app-layer-htp.c:2233) <Info> (HTPConfigSetDefaultsPhase2) -- 'apache' server has 'response-body-minimal-inspect-size' set to 32229 and 'response-body-inspect-window' set to 4205 after randomization.
[26428] 24/5/2014 -- 01:46:01 - (app-layer-htp.c:2218) <Info> (HTPConfigSetDefaultsPhase2) -- 'iis7' server has 'request-body-minimal-inspect-size' set to 32040 and 'request-body-inspect-window' set to 4118 after randomization.
[26428] 24/5/2014 -- 01:46:01 - (app-layer-htp.c:2233) <Info> (HTPConfigSetDefaultsPhase2) -- 'iis7' server has 'response-body-minimal-inspect-size' set to 32694 and 'response-body-inspect-window' set to 4148 after randomization.
[26428] 24/5/2014 -- 01:46:01 - (app-layer-dns-udp.c:324) <Info> (DNSUDPConfigure) -- DNS request flood protection level: 500
[26428] 24/5/2014 -- 01:46:01 - (app-layer-dns-udp.c:336) <Info> (DNSUDPConfigure) -- DNS per flow memcap (state-memcap): 524288
[26428] 24/5/2014 -- 01:46:01 - (app-layer-dns-udp.c:348) <Info> (DNSUDPConfigure) -- DNS global memcap: 4294967296
[26428] 24/5/2014 -- 01:46:01 - (util-ioctl.c:99) <Info> (GetIfaceMTU) -- Found an MTU of 1500 for 'eth3'
[26428] 24/5/2014 -- 01:46:01 - (defrag-hash.c:212) <Info> (DefragInitConfig) -- allocated 3670016 bytes of memory for the defrag hash... 65536 buckets of size 56
[26428] 24/5/2014 -- 01:46:02 - (defrag-hash.c:237) <Info> (DefragInitConfig) -- preallocated 65535 defrag trackers of size 152
[26428] 24/5/2014 -- 01:46:02 - (defrag-hash.c:244) <Info> (DefragInitConfig) -- defrag memory usage: 13631336 bytes, maximum: 536870912
[26428] 24/5/2014 -- 01:46:02 - (tmqh-flow.c:76) <Info> (TmqhFlowRegister) -- AutoFP mode using default "Active Packets" flow load balancer
[26429] 24/5/2014 -- 01:46:02 - (tmqh-packetpool.c:142) <Info> (PacketPoolInit) -- preallocated 65534 packets. Total memory 228320456
[26429] 24/5/2014 -- 01:46:02 - (host.c:205) <Info> (HostInitConfig) -- allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64
[26429] 24/5/2014 -- 01:46:02 - (host.c:228) <Info> (HostInitConfig) -- preallocated 1000 hosts of size 112
[26429] 24/5/2014 -- 01:46:02 - (host.c:230) <Info> (HostInitConfig) -- host memory usage: 390144 bytes, maximum: 16777216
[26429] 24/5/2014 -- 01:46:02 - (flow.c:391) <Info> (FlowInitConfig) -- allocated 67108864 bytes of memory for the flow hash... 1048576 buckets of size 64
[26429] 24/5/2014 -- 01:46:02 - (flow.c:415) <Info> (FlowInitConfig) -- preallocated 1048576 flows of size 280
[26429] 24/5/2014 -- 01:46:02 - (flow.c:417) <Info> (FlowInitConfig) -- flow memory usage: 369098752 bytes, maximum: 1073741824
[26429] 24/5/2014 -- 01:46:02 - (reputation.c:459) <Info> (SRepInit) -- IP reputation disabled
[26429] 24/5/2014 -- 01:46:02 - (util-magic.c:62) <Info> (MagicInit) -- using magic-file /usr/share/file/magic
[26429] 24/5/2014 -- 01:46:02 - (suricata.c:1835) <Info> (SetupDelayedDetect) -- Delayed detect disabled
[26429] 24/5/2014 -- 01:46:04 - (detect-filemd5.c:275) <Info> (DetectFileMd5Parse) -- MD5 hash size 2143616 bytes
[26429] 24/5/2014 -- 01:46:05 - (detect.c:452) <Info> (SigLoadSignatures) -- 8 rule files processed. 9055 rules successfully loaded, 0 rules failed
[26429] 24/5/2014 -- 01:46:05 - (detect.c:2591) <Info> (SigAddressPrepareStage1) -- 9055 signatures processed. 1 are IP-only rules, 2299 are inspecting packet payload, 7541 inspect application layer, 0 are decoder event only
[26429] 24/5/2014 -- 01:46:05 - (detect.c:2594) <Info> (SigAddressPrepareStage1) -- building signature grouping structure, stage 1: preprocessing rules... complete
[26429] 24/5/2014 -- 01:46:05 - (detect.c:3217) <Info> (SigAddressPrepareStage2) -- building signature grouping structure, stage 2: building source address list... complete
[26429] 24/5/2014 -- 01:48:35 - (detect.c:3859) <Info> (SigAddressPrepareStage3) -- building signature grouping structure, stage 3: building destination address lists... complete
[26429] 24/5/2014 -- 01:48:35 - (util-threshold-config.c:1202) <Info> (SCThresholdConfParseFile) -- Threshold config parsed: 0 rule(s) found
[26429] 24/5/2014 -- 01:48:35 - (util-coredump-config.c:122) <Info> (CoredumpLoadConfig) -- Core dump size set to unlimited.
[26429] 24/5/2014 -- 01:48:35 - (util-logopenfile.c:209) <Info> (SCConfLogOpenGeneric) -- eve-log output device (regular) initialized: eve.json
[26429] 24/5/2014 -- 01:48:35 - (output-json.c:471) <Info> (OutputJsonInitCtx) -- returning output_ctx 0x5b418d90
[26429] 24/5/2014 -- 01:48:35 - (runmodes.c:672) <Info> (RunModeInitializeOutputs) -- enabling 'eve-log' module 'alert'
[26429] 24/5/2014 -- 01:48:35 - (runmodes.c:672) <Info> (RunModeInitializeOutputs) -- enabling 'eve-log' module 'http'
[26429] 24/5/2014 -- 01:48:35 - (runmodes.c:672) <Info> (RunModeInitializeOutputs) -- enabling 'eve-log' module 'dns'
[26429] 24/5/2014 -- 01:48:35 - (runmodes.c:672) <Info> (RunModeInitializeOutputs) -- enabling 'eve-log' module 'ssh'
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "management-cpu-set"
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'low'
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "receive-cpu-set"
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "decode-cpu-set"
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "stream-cpu-set"
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "detect-cpu-set"
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'high'
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "verdict-cpu-set"
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'high'
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "reject-cpu-set"
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'low'
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:217) <Info> (AffinitySetupLoadFromConfig) -- Found affinity definition for "output-cpu-set"
[26429] 24/5/2014 -- 01:48:35 - (util-affinity.c:265) <Info> (AffinitySetupLoadFromConfig) -- Using default prio 'medium'
[26429] 24/5/2014 -- 01:48:35 - (runmode-af-packet.c:198) <Info> (ParseAFPConfig) -- Enabling mmaped capture on iface eth3
[26429] 24/5/2014 -- 01:48:35 - (runmode-af-packet.c:266) <Info> (ParseAFPConfig) -- Using cpu cluster mode for AF_PACKET (iface eth3)
[26429] 24/5/2014 -- 01:48:35 - (util-runmodes.c:558) <Info> (RunModeSetLiveCaptureWorkersForDevice) -- Going to use 16 thread(s)
[26431] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 0
[26431] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth31" Module to cpu/core 0, thread id 26431
[26431] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26431] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26432] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 1
[26432] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth32" Module to cpu/core 1, thread id 26432
[26432] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26432] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26433] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 2
[26433] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth33" Module to cpu/core 2, thread id 26433
[26433] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26433] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26434] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 3
[26434] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth34" Module to cpu/core 3, thread id 26434
[26434] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26434] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26435] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 4
[26435] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth35" Module to cpu/core 4, thread id 26435
[26435] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26435] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26436] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 5
[26436] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth36" Module to cpu/core 5, thread id 26436
[26436] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26436] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26437] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 6
[26437] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth37" Module to cpu/core 6, thread id 26437
[26437] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26437] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26438] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 7
[26438] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth38" Module to cpu/core 7, thread id 26438
[26438] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26438] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26439] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 8
[26439] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth39" Module to cpu/core 8, thread id 26439
[26439] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26439] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26440] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 9
[26440] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth310" Module to cpu/core 9, thread id 26440
[26440] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26440] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26441] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 10
[26441] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth311" Module to cpu/core 10, thread id 26441
[26441] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26441] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26442] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 11
[26442] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth312" Module to cpu/core 11, thread id 26442
[26442] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26442] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26443] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 12
[26443] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth313" Module to cpu/core 12, thread id 26443
[26443] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26443] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26444] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 13
[26444] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth314" Module to cpu/core 13, thread id 26444
[26444] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26444] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26445] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 14
[26445] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth315" Module to cpu/core 14, thread id 26445
[26445] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26445] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26446] 24/5/2014 -- 01:48:35 - (util-affinity.c:319) <Info> (AffinityGetNextCPU) -- Setting affinity on CPU 15
[26446] 24/5/2014 -- 01:48:35 - (tm-threads.c:1337) <Info> (TmThreadSetupOptions) -- Setting prio -2 for "AFPacketeth316" Module to cpu/core 15, thread id 26446
[26446] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1730) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode
[26446] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1740) <Info> (ReceiveAFPThreadInit) -- Enabling zero copy mode by using data release call
[26429] 24/5/2014 -- 01:48:35 - (runmode-af-packet.c:527) <Info> (RunModeIdsAFPWorkers) -- RunModeIdsAFPWorkers initialised
[26447] 24/5/2014 -- 01:48:35 - (tm-threads.c:1343) <Info> (TmThreadSetupOptions) -- Setting prio 2 for "FlowManagerThread" thread , thread id 26447
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:371) <Info> (StreamTcpInitConfig) -- stream "prealloc-sessions": 375000 (per thread)
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:387) <Info> (StreamTcpInitConfig) -- stream "memcap": 15032385536
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:393) <Info> (StreamTcpInitConfig) -- stream "midstream" session pickups: disabled
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:399) <Info> (StreamTcpInitConfig) -- stream "async-oneside": disabled
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:416) <Info> (StreamTcpInitConfig) -- stream "checksum-validation": disabled
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:438) <Info> (StreamTcpInitConfig) -- stream."inline": disabled
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:451) <Info> (StreamTcpInitConfig) -- stream "max-synack-queued": 5
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:469) <Info> (StreamTcpInitConfig) -- stream.reassembly "memcap": 32212254720
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:487) <Info> (StreamTcpInitConfig) -- stream.reassembly "depth": 12582912
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:570) <Info> (StreamTcpInitConfig) -- stream.reassembly "toserver-chunk-size": 2585
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:572) <Info> (StreamTcpInitConfig) -- stream.reassembly "toclient-chunk-size": 2680
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp.c:585) <Info> (StreamTcpInitConfig) -- stream.reassembly.raw: enabled
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 4, prealloc 256
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 16, prealloc 512
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 112, prealloc 512
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 248, prealloc 512
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 512, prealloc 512
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 768, prealloc 1024
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 1448, prealloc 1024
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:425) <Info> (StreamTcpReassemblyConfig) -- segment pool: pktsize 65535, prealloc 128
[26429] 24/5/2014 -- 01:48:35 - (stream-tcp-reassemble.c:461) <Info> (StreamTcpReassemblyConfig) -- stream.reassembly "chunk-prealloc": 250
[26448] 24/5/2014 -- 01:48:35 - (tm-threads.c:1343) <Info> (TmThreadSetupOptions) -- Setting prio 2 for "SCPerfWakeupThread" thread , thread id 26448
[26449] 24/5/2014 -- 01:48:35 - (tm-threads.c:1343) <Info> (TmThreadSetupOptions) -- Setting prio 2 for "SCPerfMgmtThread" thread , thread id 26449
[26429] 24/5/2014 -- 01:48:35 - (tm-threads.c:2196) <Notice> (TmThreadWaitOnThreadInit) -- all 16 packet processing threads, 3 management threads initialized, engine started.
[26431] 24/5/2014 -- 01:48:35 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26431] 24/5/2014 -- 01:48:35 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26431] 24/5/2014 -- 01:48:35 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26431] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 6
[26431] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth31 using socket 6
[26432] 24/5/2014 -- 01:48:36 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26432] 24/5/2014 -- 01:48:36 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26432] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26432] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 7
[26432] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth32 using socket 7
[26433] 24/5/2014 -- 01:48:36 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26433] 24/5/2014 -- 01:48:36 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26433] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26433] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 8
[26433] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth33 using socket 8
[26434] 24/5/2014 -- 01:48:36 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26434] 24/5/2014 -- 01:48:36 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26434] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26434] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 9
[26434] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth34 using socket 9
[26435] 24/5/2014 -- 01:48:36 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26435] 24/5/2014 -- 01:48:36 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26435] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26435] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 10
[26435] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth35 using socket 10
[26436] 24/5/2014 -- 01:48:36 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26436] 24/5/2014 -- 01:48:36 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26436] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26436] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 11
[26436] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth36 using socket 11
[26437] 24/5/2014 -- 01:48:36 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26437] 24/5/2014 -- 01:48:36 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26437] 24/5/2014 -- 01:48:36 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26437] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 12
[26437] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth37 using socket 12
[26438] 24/5/2014 -- 01:48:37 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26438] 24/5/2014 -- 01:48:37 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26438] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26438] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 13
[26438] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth38 using socket 13
[26439] 24/5/2014 -- 01:48:37 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26439] 24/5/2014 -- 01:48:37 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26439] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26439] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 14
[26439] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth39 using socket 14
[26440] 24/5/2014 -- 01:48:37 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26440] 24/5/2014 -- 01:48:37 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26440] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26440] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 15
[26440] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth310 using socket 15
[26441] 24/5/2014 -- 01:48:37 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26441] 24/5/2014 -- 01:48:37 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26441] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26441] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 16
[26441] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth311 using socket 16
[26442] 24/5/2014 -- 01:48:37 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26442] 24/5/2014 -- 01:48:37 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26442] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26442] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 17
[26442] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth312 using socket 17
[26443] 24/5/2014 -- 01:48:37 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26443] 24/5/2014 -- 01:48:37 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26443] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26443] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 18
[26443] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth313 using socket 18
[26444] 24/5/2014 -- 01:48:37 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26444] 24/5/2014 -- 01:48:37 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26444] 24/5/2014 -- 01:48:37 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26444] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 19
[26444] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth314 using socket 19
[26445] 24/5/2014 -- 01:48:38 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26445] 24/5/2014 -- 01:48:38 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26445] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26445] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 20
[26445] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth315 using socket 20
[26446] 24/5/2014 -- 01:48:38 - (util-ioctl.c:180) <Info> (GetIfaceOffloading) -- Generic Receive Offload is unset on eth3
[26446] 24/5/2014 -- 01:48:38 - (util-ioctl.c:199) <Info> (GetIfaceOffloading) -- Large Receive Offload is unset on eth3
[26446] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1359) <Info> (AFPComputeRingParams) -- AF_PACKET RX Ring params: block_size=32768 block_nr=25001 frame_size=1584 frame_nr=500020
[26446] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1572) <Info> (AFPCreateSocket) -- Using interface 'eth3' via socket 21
[26446] 24/5/2014 -- 01:48:38 - (source-af-packet.c:452) <Info> (AFPPeersListReachedInc) -- All AFP capture threads are running.
[26446] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1155) <Info> (ReceiveAFPLoop) -- Thread AFPacketeth316 using socket 21
[26444] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth314
[26445] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth315
[26437] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth37
[26432] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth32
[26440] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth310
[26434] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth34
[26435] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth35
[26443] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth313
[26431] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth31
[26441] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth311
[26433] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth33
[26442] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth312
[26438] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth38
[26436] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth36
[26439] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth39
[26446] 24/5/2014 -- 01:48:38 - (source-af-packet.c:1066) <Info> (AFPSynchronizeStart) -- Starting to read on AFPacketeth316
[26429] 25/5/2014 -- 01:45:29 - (suricata.c:2300) <Notice> (main) -- Signal Received.  Stopping engine.
[26447] 25/5/2014 -- 01:45:30 - (flow-manager.c:561) <Info> (FlowManagerThread) -- 0 new flows, 0 established flows were timed out, 0 flows in closed state
[26429] 25/5/2014 -- 01:45:30 - (suricata.c:1025) <Info> (SCPrintElapsedTime) -- time elapsed 86215.055s
[26431] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth31) Kernel: Packets 8091169139, dropped 548918377
[26431] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth31) Packets 7541009393, bytes 5856264226024
[26431] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3174701772 TCP packets
[26432] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth32) Kernel: Packets 7523006674, dropped 129092719
[26432] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth32) Packets 7392869856, bytes 6039480366879
[26432] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3273049553 TCP packets
[26433] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth33) Kernel: Packets 7857365876, dropped 457724034
[26433] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth33) Packets 7398849607, bytes 6186600745188
[26433] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3254753683 TCP packets
[26434] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth34) Kernel: Packets 7939368989, dropped 328011859
[26434] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth34) Packets 7610498359, bytes 6023159311914
[26434] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3294782895 TCP packets
[26435] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth35) Kernel: Packets 7886105626, dropped 424755524
[26435] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth35) Packets 7460672617, bytes 6304951058805
[26435] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3301812001 TCP packets
[26436] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth36) Kernel: Packets 7807382993, dropped 258291463
[26436] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth36) Packets 7548467033, bytes 6347986611584
[26436] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3359138126 TCP packets
[26437] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth37) Kernel: Packets 7898330279, dropped 305037112
[26437] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth37) Packets 7592601391, bytes 6136634057356
[26437] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3263120334 TCP packets
[26438] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth38) Kernel: Packets 7653871283, dropped 193628126
[26438] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth38) Packets 7459608346, bytes 6164536552610
[26438] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3337037621 TCP packets
[26439] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth39) Kernel: Packets 7717771534, dropped 302582507
[26439] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth39) Packets 7414991895, bytes 6068675614996
[26439] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3256006501 TCP packets
[26440] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth310) Kernel: Packets 7955692240, dropped 339489700
[26440] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth310) Packets 7616019954, bytes 6170760218068
[26440] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3309626387 TCP packets
[26441] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth311) Kernel: Packets 8004841803, dropped 416027860
[26441] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth311) Packets 7588633565, bytes 6152477758719
[26441] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3229276967 TCP packets
[26442] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth312) Kernel: Packets 7908991181, dropped 282658592
[26442] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth312) Packets 7626056429, bytes 6374830613882
[26442] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3289082310 TCP packets
[26443] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth313) Kernel: Packets 7823655146, dropped 277468333
[26443] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth313) Packets 7546046278, bytes 6174538196484
[26443] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3264661076 TCP packets
[26444] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth314) Kernel: Packets 7661949338, dropped 161041160
[26444] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth314) Packets 7500367073, bytes 6191365130344
[26444] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3299756326 TCP packets
[26445] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth315) Kernel: Packets 8203393412, dropped 272996993
[26445] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth315) Packets 7930265587, bytes 6802539594416
[26445] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3258257071 TCP packets
[26446] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1807) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth316) Kernel: Packets 7807106665, dropped 377601959
[26446] 25/5/2014 -- 01:45:30 - (source-af-packet.c:1810) <Info> (ReceiveAFPThreadExitStats) -- (AFPacketeth316) Packets 7428994197, bytes 6140231305309
[26446] 25/5/2014 -- 01:45:30 - (stream-tcp.c:4643) <Info> (StreamTcpExitPrintStats) -- Stream TCP processed 3337023147 TCP packets
[26429] 25/5/2014 -- 01:45:31 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 4 had a peak use of 11396 segments, more than the prealloc setting of 256
[26429] 25/5/2014 -- 01:45:31 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 16 had a peak use of 17178 segments, more than the prealloc setting of 512
[26429] 25/5/2014 -- 01:45:31 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 112 had a peak use of 45436 segments, more than the prealloc setting of 512
[26429] 25/5/2014 -- 01:45:31 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 248 had a peak use of 12049 segments, more than the prealloc setting of 512
[26429] 25/5/2014 -- 01:45:31 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 512 had a peak use of 26386 segments, more than the prealloc setting of 512
[26429] 25/5/2014 -- 01:45:31 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 768 had a peak use of 23371 segments, more than the prealloc setting of 1024
[26429] 25/5/2014 -- 01:45:31 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 1448 had a peak use of 67781 segments, more than the prealloc setting of 1024
[26429] 25/5/2014 -- 01:45:31 - (stream-tcp-reassemble.c:502) <Info> (StreamTcpReassembleFree) -- TCP segment pool of size 65535 had a peak use of 67333 segments, more than the prealloc setting of 128
[26429] 25/5/2014 -- 01:45:31 - (stream.c:182) <Info> (StreamMsgQueuesDeinit) -- TCP segment chunk pool had a peak use of 13327 chunks, more than the prealloc setting of 250
[26429] 25/5/2014 -- 01:45:31 - (host.c:245) <Info> (HostPrintStats) -- host memory usage: 390144 bytes, maximum: 16777216
[26429] 25/5/2014 -- 01:45:44 - (detect.c:3890) <Info> (SigAddressCleanupStage1) -- cleaning up signature grouping structure... complete
[26429] 25/5/2014 -- 01:45:44 - (util-device.c:185) <Notice> (LiveDeviceListClean) -- Stats for 'eth3':  pkts: 125740002178, drop: 5075326318 (4.04%), invalid chksum: 0
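The 4.04% drop figure in the last line is simply the dropped packets divided by the total packets seen on eth3. A quick sanity check of the arithmetic:

```shell
# Recompute the drop percentage from the values Suricata reported:
# pkts: 125740002178, drop: 5075326318
awk 'BEGIN { printf "%.2f%%\n", 5075326318 / 125740002178 * 100 }'
```

which prints 4.04%, matching the figure in the `LiveDeviceListClean` stats line.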






Sunday, May 4, 2014

Elasticsearch - "failed to connect to master" error when changing/using a different IP address



It is a general rule of thumb to first check
/var/log/elasticsearch/elasticsearch.log
and
/var/log/logstash/logstash.log
whenever you experience any form of issue while using Kibana.

I stumbled upon this issue when I changed the IP/network of the interface of my test virtual machine, which holds an ELK (Elasticsearch/Logstash/Kibana) installation used for log analysis of Suricata IDPS events.

I managed to solve the issue based on these two sources:
https://github.com/elasticsearch/elasticsearch/issues/4194
http://www.concept47.com/austin_web_developer_blog/errors/elasticsearch-error-failed-to-connect-to-master/

The new IP address is 192.168.1.166; the old one was 10.0.2.15.
(Notice the errors in the logs below: Elasticsearch was still trying to connect to the old one.)

root@debian64:~/Work/# more /var/log/elasticsearch/elasticsearch.log
[2014-05-04 07:17:24,960][INFO ][node                     ] [Jamal Afari] version[1.1.0], pid[7178], build[2181e11/2014-03-25T15:59:51Z]
[2014-05-04 07:17:24,960][INFO ][node                     ] [Jamal Afari] initializing ...
[2014-05-04 07:17:24,964][INFO ][plugins                  ] [Jamal Afari] loaded [], sites []
[2014-05-04 07:17:27,828][INFO ][node                     ] [Jamal Afari] initialized
[2014-05-04 07:17:27,828][INFO ][node                     ] [Jamal Afari] starting ...
[2014-05-04 07:17:27,959][INFO ][transport                ] [Jamal Afari] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.1.166:9300]}
[2014-05-04 07:17:57,977][WARN ][discovery                ] [Jamal Afari] waited for 30s and no initial state was set by the discovery
[2014-05-04 07:17:57,978][INFO ][discovery                ] [Jamal Afari] elasticsearch/F9HgSmYJQcS6bxdgdeurAA
[2014-05-04 07:17:57,986][INFO ][http                     ] [Jamal Afari] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.1.166:9200]}
[2014-05-04 07:17:58,017][INFO ][node                     ] [Jamal Afari] started
[2014-05-04 07:18:01,026][WARN ][discovery.zen            ] [Jamal Afari] failed to connect to master [[Hellion][zcx2fIF2SrmwSYQ08la6PQ][LTS-64-1][inet[/10.0.2.15:9300]]], retrying...
org.elasticsearch.transport.ConnectTransportException: [Hellion][inet[/10.0.2.15:9300]] connect_timeout[30s]
    at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:718)
    at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:647)
    at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:615)
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:129)
    at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:338)
    at org.elasticsearch.discovery.zen.ZenDiscovery.access$500(ZenDiscovery.java:79)
    at org.elasticsearch.discovery.zen.ZenDiscovery$1.run(ZenDiscovery.java:286)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:701)
Caused by: org.elasticsearch.common.netty.channel.ConnectTimeoutException: connection timed out: /10.0.2.15:9300
    at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.processConnectTimeout(NioClientBoss.java:137)
    at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:83)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
    at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
.......
.......
.......
[2014-05-04 07:37:05,783][WARN ][discovery.zen            ] [Vivisector] failed to connect to master [[Hellion][zcx2fIF2SrmwSYQ08la6PQ][LTS-64-1][inet[/10.0.2.15:9300]]], retrying...
org.elasticsearch.transport.ConnectTransportException: [Hellion][inet[/10.0.2.15:9300]] connect_timeout[30s]
    at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:718)
    at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:647)
    at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:615)
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:129)
    at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:338)
    at org.elasticsearch.discovery.zen.ZenDiscovery.access$500(ZenDiscovery.java:79)
    at org.elasticsearch.discovery.zen.ZenDiscovery$1.run(ZenDiscovery.java:286)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:701)
Caused by: org.elasticsearch.common.netty.channel.ConnectTimeoutException: connection timed out: /10.0.2.15:9300
    at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.processConnectTimeout(NioClientBoss.java:137)
    at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:83)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
    at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    ... 3 more
   
   
That was giving me all sorts of weird errors and failed queries in Kibana. The root of the problem was that I had changed the IP address of the ELK server.

The solution is simple.
Find the Discovery section in /etc/elasticsearch/elasticsearch.yml
and change this line from:
# 1. Disable multicast discovery (enabled by default):
#
# discovery.zen.ping.multicast.enabled: false

to

# 1. Disable multicast discovery (enabled by default):
#
 discovery.zen.ping.multicast.enabled: false

In other words, only remove the "#" in front of "discovery.zen.ping.multicast.enabled: false".
Save and restart the service:
service elasticsearch restart
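The uncommenting step can also be scripted with sed. A sketch, demonstrated here on a sample snippet written to /tmp (on a real install the file is /etc/elasticsearch/elasticsearch.yml; back it up first):

```shell
# Demo file standing in for /etc/elasticsearch/elasticsearch.yml
cat > /tmp/es-discovery-sample.yml <<'EOF'
# 1. Disable multicast discovery (enabled by default):
#
# discovery.zen.ping.multicast.enabled: false
EOF

# Strip the leading "# " from the multicast setting only,
# leaving the surrounding comment lines untouched
sed -i 's/^# \(discovery\.zen\.ping\.multicast\.enabled: false\)/\1/' \
    /tmp/es-discovery-sample.yml

# Show the now-active setting
grep '^discovery' /tmp/es-discovery-sample.yml
```

This assumes GNU sed (as on Debian); the grep at the end should print the uncommented `discovery.zen.ping.multicast.enabled: false` line.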

Then everything went back to normal.
In /var/log/elasticsearch/elasticsearch.log:
   
[2014-05-04 07:37:07,936][INFO ][node                     ] [Vivisector] stopping ...
[2014-05-04 07:37:07,970][INFO ][node                     ] [Vivisector] stopped
[2014-05-04 07:37:07,971][INFO ][node                     ] [Vivisector] closing ...
[2014-05-04 07:37:07,979][INFO ][node                     ] [Vivisector] closed
[2014-05-04 07:37:09,685][INFO ][node                     ] [Vibraxas] version[1.1.0], pid[5291], build[2181e11/2014-03-25T15:59:51Z]
[2014-05-04 07:37:09,686][INFO ][node                     ] [Vibraxas] initializing ...
[2014-05-04 07:37:09,689][INFO ][plugins                  ] [Vibraxas] loaded [], sites []
[2014-05-04 07:37:12,597][INFO ][node                     ] [Vibraxas] initialized
[2014-05-04 07:37:12,597][INFO ][node                     ] [Vibraxas] starting ...
[2014-05-04 07:37:12,751][INFO ][transport                ] [Vibraxas] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.1.166:9300]}
[2014-05-04 07:37:15,777][INFO ][cluster.service          ] [Vibraxas] new_master [Vibraxas][esQHE1EtTuWVK9MVNiQ5jA][debian64][inet[/192.168.1.166:9300]], reason: zen-disco-join (elected_as_master)
[2014-05-04 07:37:15,806][INFO ][discovery                ] [Vibraxas] elasticsearch/esQHE1EtTuWVK9MVNiQ5jA
[2014-05-04 07:37:15,877][INFO ][http                     ] [Vibraxas] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.1.166:9200]}
[2014-05-04 07:37:16,893][INFO ][gateway                  ] [Vibraxas] recovered [16] indices into cluster_state
[2014-05-04 07:37:16,898][INFO ][node                     ] [Vibraxas] started
[2014-05-04 07:37:17,547][INFO ][cluster.service          ] [Vibraxas] added {[logstash-debian64-3408-4020][dTsgT1H9Srq6mUr_w5rpXQ][debian64][inet[/192.168.1.166:9301]]{client=true, data=false},}, reason: zen-disco-receive(join from node[[logstash-debian64-3408-4020][dTsgT1H9Srq6mUr_w5rpXQ][debian64][inet[/192.168.1.166:9301]]{client=true, data=false}])

 It is also highly recommended that you read the whole Discovery section in your elasticsearch.yml:
############################# Discovery #############################

# Discovery infrastructure ensures nodes can be found within a cluster
# and master node is elected. Multicast discovery is the default.

.....
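With multicast disabled, Elasticsearch 1.x falls back to unicast Zen discovery. As a sketch, for a single-node ELK box like the one in this post (the IP below is this server's own publish address from the log; adjust to taste), the relevant lines in elasticsearch.yml would look like:

```yaml
# Sketch, assuming a single-node setup (Elasticsearch 1.x):
discovery.zen.ping.multicast.enabled: false
# Optionally pin discovery to a known host list instead of multicast,
# so an IP change is an explicit, visible edit here:
discovery.zen.ping.unicast.hosts: ["192.168.1.166"]
```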