
PF_RING 5.5.2 Released


Changelog

  • Fix for corrupted VLAN tagged packets
  • Userspace bpf support (when using dna)
  • PF_RING-aware igb default moved to 4.0.17
  • Flow Control  rx/tx automatically disabled by the driver
  • Added DAQ drivers into RPM (http://packages.ntop.org)
  • New pfring_open() flag PF_RING_DNA_FIXED_RSS_Q_0 to send all traffic to queue 0 and select other queues with hw filters (DNA cards with hw filtering only)
  • Added check for modern libc versions
  • New pfdnacluster_mt_rss_frwd sample app (packet forwarding using libzero dna cluster for rx/balancing and standard dna with zero-copy on rss queues for tx)
  • Added ability to create a stats file under /proc/net/pf_ring/stats so that applications can report stats via the /proc filesystem (see the sketch after this list)
  • Added pfring_set_application_stats() for reporting stats
  • Added pfring_get_appl_stats_file_name() for getting the exact filename where the app sets the statistics
  • Updated pfcount and pfsend to report stats using these new primitives
  • pfcount: -v option changed
    •  -v 1: same as -v (in the previous version)
    •  -v 2: same as "-v 1" with the difference that the whole packet is printed in hex
  • Due to popular demand we moved from LGPL3 to LGPL2.1
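To give an idea of how the new statistics primitives fit together, here is a minimal sketch in C (it assumes a PF_RING build where pfring_set_application_stats() and pfring_get_appl_stats_file_name() are available; please double-check the exact prototypes against your pfring.h):

#include <stdio.h>
#include "pfring.h"

int main(void) {
  char stats[128], stats_path[256];
  pfring *ring = pfring_open("eth1", 1536 /* snaplen */, PF_RING_PROMISC);

  if (ring == NULL) return 1;
  pfring_set_application_name(ring, "stats-demo");
  pfring_enable_ring(ring);

  /* Free-format stats string exported under /proc/net/pf_ring/stats */
  snprintf(stats, sizeof(stats), "Packets: %u\nBytes: %u\n", 0, 0);
  pfring_set_application_stats(ring, stats);

  /* Ask PF_RING where the stats file has been created */
  if (pfring_get_appl_stats_file_name(ring, stats_path, sizeof(stats_path)) != NULL)
    printf("Stats exported in %s\n", stats_path);

  pfring_close(ring);
  return 0;
}

A real application would refresh the stats string periodically, as pfcount and pfsend now do.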

Introducing nBox 2.0 (aka how to use/configure ntop apps using a web GUI)


Years ago we decided to create the nBox appliance as a turn-key solution for those who are not fans of the command line. We have now decided to rewrite the nBox GUI to make it simpler, more modern, and usable by all ntop users to configure ntop, nProbe, n2disk, PF_RING and DNA.


 

In essence we have created a new web interface that can simplify your configurations, assist with complex things such as core affinity or DNA configuration, and let you focus on ntop applications rather than on their configuration. You can download the nBox packages from the packages site or, preferably, use the apt repository, adding it to your apt sources as described on the site.

 

Notes:

  • The nBox GUI is released under GPLv3: feel free to use it and enjoy it.
  • Initially we have packaged the nBox GUI for Linux Ubuntu, but we plan to port it soon to RedHat/CentOS.
  • We encourage you to test the nBox GUI and send us patches and bug reports.
  • Every night we build new/fresh packages, so you can keep your apps up-to-date with no hassle.
  • Unfortunately Ubuntu Linux updates the kernel very often. This requires that the PF_RING package we build uses the same kernel version as your box. Make sure that both kernel versions are the same; otherwise you need to update your kernel and align it to our version. You have been warned!

 

 

Filtering n2disk-captured Packets and Replaying them at 10 Gbit using the nBox


The nBox is not just a no-cost web GUI for ntop products: it's a totally new experience for dealing with pcap files. n2disk is able to index packets while capturing and then filter the captured packets. Once you have filtered your favourite packets (based on a BPF filter and a time span) you can download them to your PC or replay them at line rate (or at any speed you like). Even BPF filters are simplified with the nBox, thanks to the ability to drag and drop filtering expressions for error-free filters.

In essence, as you will see from the screenshots below, with the nBox you can forget the command line and easily accomplish your tasks with a few mouse clicks.


Configuring nDPI for Custom Protocol Detection


The first release of nDPI was basically a refresh of the OpenDPI library on which nDPI is built. Over the past few months we have made many changes including:

  • Port to various platforms including Linux, MacOSX, Windows and FreeBSD.
  • Enhancement of the demo pcapReader application both in terms of speed/features and encapsulations supported (for instance you can now analyse GTP-tunneled traffic).
  • Ability to compile nDPI for the Linux kernel so that you can use it for developing efficient kernel-based modules.
  • Various speed enhancements so that nDPI is now faster than its predecessor.
  • Added many protocols (we now support almost 160 protocols) ranging from "business" protocols such as SAP and Citrix to "desktop" protocols such as Dropbox and Spotify.
  • Ability to define port (and port range)-based protocol detection, so that you can complement protocol detection with classic port-based detection.

In addition to all this, we have recently added to nDPI the ability to support sub-protocols using string-based matching. This is because many new sub-protocols, such as Apple iCloud/iMessage, WhatsApp and many others, use HTTP(S) and can be detected by decoding the SSL certificate host or the HTTP "Host:" header. Thus we have decided to embed in nDPI an efficient string-matching library based on the popular Aho-Corasick algorithm for matching hundreds of thousands of sub-strings efficiently (i.e. fast enough to sustain 10 Gbit traffic on commodity hardware). You can now specify sub-protocols at runtime using a configuration file with the following format:

# Subprotocols
# Format:
# host:"<value>",host:"<value>",.....@<subproto>
host:"googlesyndacation.com"@Google
host:"venere.com"@Venere

in addition to port-based protocol detection using the following format:

#  Format:
#  <tcp|udp>:<port>,<tcp|udp>:<port>,.....@<proto>
tcp:81,tcp:8181@HTTP
udp:5061-5062@SIP
tcp:860,udp:860,tcp:3260,udp:3260@iSCSI
tcp:3000@ntop

You can test your custom configuration using the pcapReader application (use the -p option), or enhance your own application using the ndpi_load_protocols_file() nDPI API call, as in the sketch below.
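For reference, a minimal sketch of that API call follows; it assumes an already-initialised nDPI detection module, since the arguments of ndpi_init_detection_module() and the exact prototype of ndpi_load_protocols_file() vary slightly across nDPI releases (check ndpi_api.h in your tree):

#include "ndpi_api.h"   /* nDPI public API */

/* Load the custom protocol definitions (the protos file format shown above)
 * into an existing detection module; this is the same call pcapReader
 * performs when started with -p. The path name is just an example. */
static void load_custom_protocols(struct ndpi_detection_module_struct *ndpi_mod) {
  ndpi_load_protocols_file(ndpi_mod, (char *) "protos.txt");
}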

This said, every month new protocols are introduced and become popular, thus nDPI needs constant maintenance and enhancement. We need your help for developing new protocol dissectors. Please contact us if you want to join the nDPI team.

How to build yourself an nBox Probe and Packet Recorder


If you need a network probe or a packet recorder you have two options: grab a turn-key nBox, or build it yourself using our software. In the first case you will receive an optimised system, with the right motherboard/CPU/NIC for your monitoring tasks and all software preinstalled/configured. However, if you want to build your nBox yourself (e.g. you can reuse an old/spare server, or get a new one if you plan to address 10 Gbit monitoring) you can now do it. Below we describe how to build it step by step:

Hardware

  • Sandy-Bridge (or better)-based motherboard such as X9SLC.
  • Intel E3 or E5 CPU (both CPUs with the above motherboard can do 10 Gbit NetFlow and packet-to-disk).
  • At least 4 GB of RAM.
  • A DNA-aware card.
  • RAID controller and at least 8 x 10k RPM drives (for packet to disk only, not needed for flow monitoring).

Software

  • Ubuntu Server x64 LTS. This is our favourite distribution.
  • If you prefer CentOS/RedHat you can also use CentOS Server 6.x x64. We also support CentOS, but to date we have not yet ported the nBox package to it, and thus you need to use our apps from the command line.

Once you have configured the machine and installed the base operating system, depending on your OS go to:

and follow the instructions. In essence we have created an APT and YUM repository so you can use it in your favourite distro.

At this point your nBox is configured and you can point your browser to http://<your nbox IP> to access the nBox management interface.

For more information please refer to the nBox documentation and in particular:

Learning The ntop World of Apps


The main criticism of ntop is the lack of documentation. This is because we have to maintain many projects, have little time, and also because we prefer coding to writing documentation. We have decided to fill this gap and give a positive answer to your requests:

  • We have created the nBox GUI to enable you to use all our applications without the pain of compiling and configuring them. This is a free product that everyone can use to build their own measurement gear or just to start ntop using a web browser.
  • We have refreshed all manuals and tried to include all your comments.

You can find all the documentation in the section Documentation of this web site and in particular:

We plan to soon write the nDPI manual and then start working on the ntop one. Be patient.

As usual we await your comments and feedback. We apologise once again for keeping you waiting so long.

Who (Really) Needs Sub-microsecond Packet Timestamps?


Introduction

For years network adapter manufacturers have educated their customers that network monitoring applications can't live without hardware packet timestamps (i.e. the ability for the network adapter to report to the driver the time a given packet was sent or received). State-of-the-art FPGA-based network adapters [1, 2, 3] have hardware timestamps with a resolution of +/- ~10 nsec and an accuracy of +/- ~50 nsec, so monitoring applications can safely assume an accuracy of ~100 nsec, i.e. sub-usec measurements. Commodity adapters such as Intel 1 Gbit cards provide both RX and TX timestamps out of the box with IEEE 1588 time synchronisation, so the problem is at 10 Gbit (at least until Intel comes up with a 10G adapter with hardware timestamps).

 

Who Really Needs Sub-microsecond Packet Timestamps?

This is a good question. Everyone seems to want it, but in practice they might not need it. Let's clarify this point in a bit more detail. For RTT (Round-Trip Time) measurements (i.e. how long a packet takes from location X to location Y and back) over long distances (e.g. Italy to USA and back) the order of magnitude is msec (actually tenths/hundreds of msec), so usec precision is not needed. On a LAN it is not needed either, because if the probe packet used to measure RTT is originated/received on the same adapter, 1 Gbit commodity adapters can do the trick and PF_RING supports them. For one-way delay (i.e. measuring the time from A->B) on a WAN, 1G adapters + IEEE 1588 can do the trick (the delay is in msec); on a LAN, same as above.

So who really needs sub-microsecond hardware timestamps at 10 Gbit (at 1 Gbit we have a solution, as explained above)? Reading around on the Internet, it seems that one of the few markets where they are needed is microburst detection [1, 2], in particular on critical networks such as high-frequency trading and industrial plants.

 

Can ntop Provide Sub-microsecond Timestamps in Software at 10 Gbit?

In short: yes we can. When we developed our n2disk application at 10 Gbit, we faced the timestamp problem, as no commodity adapter supported hardware timestamps. We spent quite some time optimising this application and these are our findings:

  • We assume a server-class machine with a good motherboard (e.g. Dell, Supermicro, HP), not a toy PC. This guarantees that the clock on the board is of good quality.
  • The call to clock_gettime() used to read the timestamp in software takes ~30 nsec in our tests. As at 10 Gbit the minimum packet inter-arrival time at the maximum ingress rate (14.88 Mpps) is 67 nsec, reading the timestamp once per received packet is overkill (not to mention that the reported time would be shifted into the future with respect to the real packet arrival).
  • We decided to create a thread (we call it the pulse thread) that calls clock_gettime() at full speed and shares the time with the capture thread (a minimal sketch of the idea follows this list).
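This is not the actual n2disk code, but a minimal sketch of the pulse-thread idea: one thread keeps refreshing a shared nanosecond counter via clock_gettime(), and the capture path simply copies the latest value instead of issuing a system call per packet.

#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <time.h>

/* Shared "current time" in nanoseconds since the epoch: a single 64-bit
 * atomic avoids torn reads between the pulse thread and the capture thread. */
static _Atomic uint64_t pulse_ns;
static _Atomic int pulse_running = 1;

/* Pulse thread body: refresh the shared timestamp as fast as possible.
 * Started once at startup with: pthread_create(&t, NULL, pulse_loop, NULL); */
static void *pulse_loop(void *unused) {
  struct timespec now;
  (void) unused;
  while (atomic_load(&pulse_running)) {
    clock_gettime(CLOCK_REALTIME, &now);
    atomic_store(&pulse_ns, (uint64_t) now.tv_sec * 1000000000ULL + now.tv_nsec);
  }
  return NULL;
}

/* Capture path: stamp a packet without issuing any system call. */
static inline uint64_t packet_timestamp_ns(void) {
  return atomic_load(&pulse_ns);
}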

On our E3-1230 (CPU cost ~200 USD) starting n2disk as follows

n2disk10g -o /tmp/ -p 1024 -b 2048 -i dna0 --active-wait -C 1024 -w 0 -S 2 -c 4  -v -R 6 --nanoseconds

we can achieve both 10 Gbit to disk

25/Apr/2013 10:20:37 [n2disk.c:576] [PF_RING] Total stats: 90843997 pkts rcvd/90843997 pkts filtered/0 pkts dropped [0.0%]
25/Apr/2013 10:20:37 [n2disk.c:592] Capture Duration: 00:00:06
25/Apr/2013 10:20:37 [n2disk.c:594] Average Capture Throughout: 10.00 Gbit / 14.88 Mpps
25/Apr/2013 10:20:37 [n2disk.c:1593] [writer] Thread terminated
25/Apr/2013 10:20:37 [n2disk.c:3664] Writer thread terminated
25/Apr/2013 10:20:37 [n2disk.c:2805] Packet capture thread terminated
25/Apr/2013 10:20:37 [n2disk.c:3668] Reader thread terminated
25/Apr/2013 10:20:37 [n2disk.c:3673] Time thread terminated

and high-accuracy timestamps. In fact this is what happens:

< 30 nsec timestamps (as in the above test)

60 1366418032.342040270 6726 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366418032.342040355 6727 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366418032.342040430 6728 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366418032.342040502 6729 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366418032.342040613 6730 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366418032.342040728 6731 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366418032.342040767 6732 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366418032.342040890 6733 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366418032.342041036 6734 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366418032.342041238 6735 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366418032.342041427 6736 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366418032.342041610 6737 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366418032.342041685 6738 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366418032.342041835 6739 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366418032.342041982 6740 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366418032.342042056 6741 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366418032.342042167 6742 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366418032.342042327 6743 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366418032.342042441 6744 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366418032.342042515 6745 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)

As you can see, all packets have different timestamps.

100 nsec timestamps

60 1366417070.119899160 5183 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417070.119899386 5184 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417070.119899501 5185 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417070.119899615 5186 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417070.119899615 5187 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417070.119899731 5188 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417070.119899846 5189 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417070.119899960 5190 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417070.119900073 5191 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417070.119900187 5192 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417070.119900301 5193 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417070.119900417 5194 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417070.119900532 5195 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417070.119900646 5196 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417070.119900646 5197 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417070.119900762 5198 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417070.119900877 5199 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417070.119900989 5200 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417070.119901104 5201 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417070.119901218 5202 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)

Bad: some packets have the same timestamp.

500 nsec timestamps

60 1366417709.563877691 3466 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417709.563878226 3467 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417709.563878226 3468 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417709.563878226 3469 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417709.563878226 3470 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417709.563878226 3471 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417709.563878226 3472 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417709.563878763 3473 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417709.563878763 3474 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417709.563878763 3475 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417709.563878763 3476 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417709.563878763 3477 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417709.563878763 3478 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417709.563879297 3479 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417709.563879297 3480 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417709.563879297 3481 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417709.563879297 3482 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417709.563879297 3483 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417709.563879297 3484 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)
60 1366417709.563879832 3485 192.85.1.2 -> 192.0.0.1 IP Unknown (0xfd)

Very bad: too many packets have the same timestamp.

 

Conclusion

Using software timestamps and our "timestamp trick" you can achieve ~30 nsec timestamp precision, so that at 10 Gbit line rate all packets have a different timestamp (i.e. we stay below the 67 nsec inter-packet time). This means that you can use n2disk for detecting microbursts at 10 Gbit line rate (see the sketch after the list below), as:

  1. It can handle 14.88 Mpps with no drops when dumping them to disk with nsec timestamps
  2. You can avoid using hardware timestamps for sub-usec precision and leave them only for specific tasks where you need very accurate ~100 nsec timestamps. At this point in time however, we have not received any request from people who really need them, so we’re confident that our approach can be enough for most people.
  3. Hardware timestamps still make sense in those cases where you need a NIC with a GPS signal ingress, so that you can accurately sync the time over long distance with an accuracy better than what IEEE 1588 can offer you.
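As an illustration of point 1, the sketch below (not n2disk code) shows how nanosecond timestamps can be turned into microburst detection: packets are binned into fixed windows (a hypothetical 1 ms window and 90% threshold are used here) and a window is flagged when its byte count approaches what the nominal 10 Gbit rate allows.

#include <stdint.h>
#include <stdio.h>

#define WINDOW_NS  1000000ULL        /* 1 ms windows (hypothetical value)   */
#define LINK_BPS   10000000000ULL    /* 10 Gbit/s nominal link speed        */
/* Bytes the link can carry in one window */
#define WINDOW_CAPACITY_BYTES ((LINK_BPS / 8ULL) * WINDOW_NS / 1000000000ULL)

static uint64_t win_start_ns = 0, win_bytes = 0;

/* Feed one captured packet (nanosecond timestamp + wire length). */
void account_packet(uint64_t ts_ns, uint32_t wire_len) {
  if (win_start_ns == 0) win_start_ns = ts_ns;

  if (ts_ns - win_start_ns >= WINDOW_NS) {
    if (win_bytes > (WINDOW_CAPACITY_BYTES * 90) / 100)  /* >90% of capacity */
      printf("Microburst: %llu bytes in 1 ms starting at %llu ns\n",
             (unsigned long long) win_bytes, (unsigned long long) win_start_ns);
    win_start_ns += WINDOW_NS;  /* next window; idle gaps skipped for brevity */
    win_bytes = 0;
  }

  win_bytes += wire_len;
}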

It’s time for a completely new ntop. Say hello to ntopng.


15 years have passed since the first version of ntop. In 1998 network monitoring requirements were very different from today: few protocols (mostly in plain text) to monitor, IP was not yet "the only protocol", low network speeds, very few connected hosts, no iPhones yet, raspberry was still a fruit, Linux was still for geeks. In 2013 the whole picture is very different. One-gigabit links are now commodity (10 Gbit is around the corner), (too?) many hosts are interconnected and mobile, and application protocols (e.g. Spotify or Skype) are "the" protocols (TCP is a generic transport), so we need nDPI to figure out what is happening on the network.

The way the original ntop was designed was IMHO very advanced for that time, but today it is no longer so, for many reasons. Today people want a flexible network monitoring engine able to scale to multi-Gbit, using limited memory, immune to crashes "no matter what", scriptable and extensible, able to see what's happening in realtime with 1-second accuracy, capable of characterising hosts (call it host reputation if you wish) and of storing monitoring data in the cloud for (de-)centralised monitoring, even for those devices that have no disk space. Over the past years we have tried to address ntop's open issues, but the code base was too old, complicated and bug-prone. In essence it was time to start over, preserve the good things of ntop, and learn from mistakes. So basically we are looking forward by creating a new ntop, able to survive (hopefully) 15 more years and set new monitoring standards.

This is the motivation behind what I temporarily call ntopng (ntop next generation). The work to do is huge, but as you can see many things are already working.


The main design principles are:

  • Open source, self-contained with zero configuration, just like the original ntop.
  • ntopng is a cache, just like the original ntop, but contrary to its predecessor we leverage on Redis for implementing multi-level caching:
    • ntopng keeps in memory the current network traffic
    • Redis keeps the "recent network history"
    • (Optionally) Persistently dump traffic history on disk for long term traffic analysis.
  • nDPI centric: ports are no longer enough, as we want to identify application protocols even on non standard ports.
  • Ability to leverage PF_RING for monitoring millions of packets per second with no drops.
  • Written in C++, with a clean code layout. Occasionally some routines from the original ntop will be ported to ntopng, but the idea is to write everything from scratch in a clean-room fashion. The ntop code didn't have a real API and, after years of patches, it was so complicated that people were scared of touching it.
  • The web GUI is based on Twitter Bootstrap for modern, consistent, and mobile-friendly GUI.
  • The ntopng engine is scriptable in LuaJIT.
  • Web pages are written in Lua: everyone can write their own pages without having to code in C.
  • ntopng, as well as nProbe, leverages the MicroCloud for creating a comprehensive network view.

This said, the work to do is huge and it will take some time before ntopng is completed. This means that, if you want, we need your help to expedite its development. You can access the ntopng code here:

svn co https://svn.ntop.org/svn/ntop/trunk/ntopng/

The core is stable (we have tested it on Linux and OSX, but it will soon be tested/compiled on Windows) although it is still missing some pieces such as IPv6 support, historical charts, NetFlow/sFlow support and more reports. We encourage you to download and test it. Hopefully you can help us develop it, or at least test it. As you can imagine, we have no time to support the original ntop, as we are focusing on this new release. We plan to have an initial release by late May: time is limited, but we're confident we can include all the core features in this release and then refine it through the rest of the year.


PF_RING 5.5.3 Released


Today we have released a new maintenance version of PF_RING. We suggest that all users update if possible.

  • PF_RING Kernel module
    • Support for injecting packets to the stack
    • Added ability to balance tunneled/fragmented packets with the cluster
    • Improved init.d script
    • Packet len fix with GSO enabled, caplen fix with multiple clusters
    • Bug fixes for race condition with rss rehash, memory corruption, transparent mode and tx capture, kernels >= 3.7.
  • Drivers
    • Added PF_RING-aware driver for Chelsio cards (cxgb3-2.0.0.1)
    • New release for PF_RING-aware igb (igb-4.1.2)
  • DNA
    • Added support for Silicom 10 Gbit hw timestamping commodity NIC card
    • Added pfring_flush_tx_packets() for flushing queued tx packets (see the sketch after this changelog)
    • Fixes for cutting packets to snaplen, e1000-dna rx
  • Libzero
    • pfdnacluster_master support for multiple instances of multiple applications
    • Added dna_cluster_set_thread_name() to name the master rx/tx threads
    • Fix for direct forwarding with the DNA Cluster
    • Changed len to a ptr in the DNA Bouncer decision function to allow the user to change the forwarded packet content and length
  • Examples
    • Added ability to replay a packet with pfsend passing hex from stdin
    • Added pfwrite to the package
    • Fix for rate control with huge files in pfsend
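As a quick sketch of how the new TX flush primitive can be used (prototypes follow the pfring.h shipped with this release; treat the exact signatures as assumptions to verify): packets are queued with the per-packet flush flag set to 0 and then pushed to the wire in one shot at the end of the batch.

#include "pfring.h"

/* Queue a batch of packets on a DNA interface and flush them in one shot. */
static void send_batch(pfring *ring, char **pkts, unsigned int *lens, int n) {
  int i;

  for (i = 0; i < n; i++)
    pfring_send(ring, pkts[i], lens[i], 0 /* 0 = do not flush after each packet */);

  pfring_flush_tx_packets(ring);  /* push all queued packets to the wire */
}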

PF_RING 5.6.0 Released


This is to announce the release of PF_RING 5.6.0. We recommend that all users install this release, as we have fixed a couple of critical bugs.

Changelog:

  • PF_RING Kernel module
    • Fixed bug that prevented the PF_RING cluster from working properly with specific traffic
  • Documentation
  • Libzero
    • Fixed bug that caused the DNA bouncer to process the wrong packet
  • Examples
    • pfwrite
      • Added support for the microcloud so that for GTP traffic it is possible to dump the traffic of a specific IMSI (phone)
      • Added support for mobile networks (2G/3G/LTE) so that we can dump traffic of specific GTP tunnels
    • pfdump: added cluster id support (courtesy of Doug Burks)
  • Snort (PF_RING DAQ)
    • Added microcloud support for notifying into the microcloud those hosts that are victims/attackers

Comparison of Deep Packet Inspection (DPI) Tools for Traffic Classification


From time to time we receive emails from people asking how nDPI compares with other similar toolkits. Licio Marchetti has shared this report, Comparison of Deep Packet Inspection (DPI) Tools for Traffic Classification, written by the Universitat Politècnica de Catalunya, which says: "the best accuracy we obtained from NDPI (91 points), PACE (82 points), UPC MLA (79 points), and Libprotoident (78 points)". So nDPI looks in good shape :-)

This said, last week we improved the Bittorrent and Skype dissectors quite a bit and we have created a small test tool that demonstrates that we can build an inline application that, for instance, blocks Skype traffic (i.e. we believe that nDPI now scores much better than in this report). We're now focusing on the (overdue) ntopng release, but once that is done we will release a tool that demonstrates all this in practice. Thanks to the whole nDPI user community for the comments and code patches.

If interested in nDPI, you can also view this webinar organized by AlienVault: How to Improve Network Security with nDPI.

ntop is back: ntopng 1.0 just released


After 15 years since the introduction of the original ntop, it was time to start over with a new, modern ntop. We called it ntopng, ntop next generation. The goals of this new application are manifold:

  1. Released under GNU GPL3.
  2. Feature a modern, HTML5 and Ajax-based dynamic web interface (caveat: you need a modern browser to use ntopng).
  3. Small application engine, memory wise and crash proof.
  4. Ability to identify application protocols via nDPI, ntop’s open-source DPI (Deep Packet Inspection) framework.
  5. User's ability to script, extend, and modify ntopng pages by coding them in LuaJIT, a small yet lightning-fast language.
  6. Characterise HTTP traffic by leveraging on block.si categorisation services. ntopng comes with a licensing key, but you can acquire a private key by contacting info@block.si.
  7. Use of redis as data cache, for splitting the ntopng engine from data being saved.
  8. Ability to collect flows (sFlow, NetFlow and IPFIX) using nProbe as probe/proxy.
  9. Fast, very fast engine able to scale up to 10 Gbit on commodity PCs when using PF_RING/DNA.
  10. Support of Unix, BSD, MacOSX and Windows (including 7/8).

The ntopng engine is coded in C++ with web pages written in Lua. In the next weeks we will publish some development guidelines for those willing to contribute to this project and make ntopng even better. We are aware that many features are still missing, but they will come later this year as incremental updates. We will publish a roadmap in the coming weeks, and we encourage users and companies to contact us for including ntopng in their products and distributions. The idea is to create an ecosystem where everyone can contribute.

Download links:

Finally let us thank those who made all this possible, and in particular those who believed in us in the early ntop days and encouraged us to move forward. The list (in alphabetical order) is pretty long, so we apologise in advance if we forgot some of you:

We hope you will enjoy ntopng. Thank you all.

Tracking and Troubleshooting Mobile Phone Users (IMSI) using the MicroCloud


The microcloud is used extensively by mobile network operators. The reasons are manifold:

  • Data aggregation facilities offered in realtime by the microcloud.
  • Realtime user-to-tunnel mapping.
  • User traffic-to-user correlation.

Unfortunately, when a mobile network is populated by millions of active users (IMSIs), troubleshooting a problem can itself be a problem. Tools such as Wireshark that are used on fixed networks do not work because:

  • The network is distributed, so there is no single sniffing point; rather it is necessary to deploy our tools across the network, which might mean "across a whole country".
  • There is so much ingress traffic (multi-10 Gbit with modern LTE/4G networks).
  • Traffic is encapsulated in GTP tunnels that will then contain user traffic, so simple BPF filters won’t work.

For this reason we have developed some tools and nProbe extensions that simplify operations.

How to Dump an IMSI's Traffic to a pcap File

PF_RING comes with a tool named pfwrite that is a simple packet-to-disk tool (in essence it is a very entry-level version of n2disk). Tracking a user/IMSI on a mobile network is quite a dynamic activity, as users move and connect/disconnect from the mobile network. In essence it is as if a PC changed IP address several times during the day. We have enhanced nProbe to publish onto the microcloud when an IMSI user changes status, so that we can track it.


Suppose you start nProbe as follows (note that usually you need to start several nProbe instances in order to monitor a large network, each monitoring a portion of the traffic):

nprobe --tunnel --gtpv1-dump-dir /var/gtpv1/ --redis localhost --ucloud -i dna0

nProbe will publish into the microcloud information about IMSIs that connect/disconnect from the mobile network. pfwrite needs to be deployed at a location where user traffic flows, and it is started as:

pfwrite -m <IMSI to track> -w imsi.pcap

As soon as it starts up, it connects to the microcloud (local node) and fetches the GTP tunnels (if known) for the specified IMSI. Then it spawns a thread that subscribes to the microcloud and listens for events concerning the specified IMSI. This way the tool is able to dump to disk the packets of the specified IMSI independently of its status (connected or disconnected from the mobile network) and, more importantly, the IMSI is tracked while it changes its status over time. This happens without restarting the tool, just by exploiting the messages published by nProbe into the microcloud.

Realtime Layer-7 IMSI Traffic Aggregation

nProbe now supports an additional flag

--imsi-aggregation

(For instance: nprobe --tunnel --gtpv1-dump-dir /var/gtpv1/ --redis localhost --ucloud -i dna0 --imsi-aggregation)

that instructs nProbe to aggregate traffic per IMSI/application protocol onto the microcloud in realtime with 5-minute aggregation granularity. This means that whenever a flow expires, nProbe updates the counters for the flow protocol and the IMSI that generated the flow.


Through a companion tool, it is possible to put onto crontab the following entry

$ crontab -l|grep ggrega
*/5 * * * * /home/imsi/imsiAggregator.py --redis localhost  --epoch -2 --outdir /export/working_dir/imsi

that walks the microcloud every 5 minutes and dumps traffic to disk in text format as follows:

#
# Timestamp IMSI Granularity Protocol Packets Bytes Flows Duration(sec)
#
1374938100 XXXXX2001106796 300 Unknown 3 298 2 2
1374938100 XXXXX1100485374 300 HTTP 393 283553 13 114
1374938100 XXXXX2001110729 300 SSL 49 14269 10 18
1374938100 XXXXX2001338233 300 Skype 15 1411 1 7
1374938100 XXXXX1101335045 300 DNS 2 385 1 1
1374938100 XXXXX2001931139 300 Viber 17 1487 4 35

Note that we do not have just the number of bytes per IMSI, but also the application protocol discovered by nDPI. In essence you can answer questions like "who's using Viber on my mobile network?" or "how many active subscribers use Facebook?".

Conclusion

These are just two examples of what you can do with nprobe and the microcloud. Applications are almost infinite and in realtime. No more latency in your “monitoring answers” but rather know what is happening when it is happening. Without spending a fortune on database clusters or distributed storage infrastructure. All with the power of the microcloud.

Moving Towards ntopng 1.1


It has been a busy summer here at ntop. Since the initial ntopng 1.0 release, we have tried to fill the gap in terms of missing features with respect to the original ntop. This post is to update you on the new features of the upcoming 1.1 release, scheduled for this fall, that are currently available in the SVN development tree:

  • Ability to support multiple interfaces. This means that you can repeat "-i <interface>" on the command line multiple times, once per interface you want to add.
  • Use of HTTP sessions for opening multiple independent web views of the same ntopng.
  • Local hosts are now persistent (unless configured differently). This means that if host a.b.c.d is idle and is purged from memory, its state is saved in redis, and as soon as a.b.c.d starts making traffic again it is restored from the cache with all its previous counters (in the original ntop all counters started from zero again). Obviously you can restore a host at any time by simply searching for it in the search box.
  • (Most [we need some more work to update all reports]) reports now update counter values dynamically, so that you do not have to reload the page to see what happens.

  • Counters now have a trend indicator to immediately figure out which ones are changing with respect to the recent past.

  • Throughput now has a live graph so that you can see how the value changes over time.
  • Animated GeoMaps: we have introduced animated GeoMaps so that you can see where the traffic goes. The map is automatically centred on your location (if known) thanks to HTML 5.
  • Inside hosts and interface we have added various statistics that were not included in 1.0 such as packet distribution or host contacts (list of peers that contacted a specific host in the recent past).
  • All objects are now JSON-friendly, so that you can download for instance a snapshot of a host through it.
  • Top Hosts: as in ntopng everything is realtime with 1-second granularity, it is possible to depict what happens when it happens. No average values over 5 minutes as with NetFlow, but pure realtime data. ntopng now offers a new view that enables network administrators to see in a single place what traffic the top hosts are doing at any given time, through a dynamic, scrollable timeline.
  • Expired flows are now saved, if configured, in a SQLite database so that you can use SQL to play with them.

There are many other items we would like to include (alarms, PDF reports, cloud storage….) and, based on the development cycle, we will decide whether to put them in 1.1 or leave them for version 1.2. For sure we plan to soon release the specifications of the Lua API so that you can start customising ntopng.

This said, we are happy to read that ntopng has been downloaded by many users who are running it on very different hardware platforms (from the RaspberryPI and up) and distributions (we have noticed that Gentoo and Debian packages are now available). We encourage you to provide your feedback on this pre-release code so that we can address all open issues.

Why nProbe+JSON+ZMQ instead of native sFlow/NetFlow support in ntopng?


sFlow and NetFlow/IPFIX are the two leading network monitoring protocols used on the market today. They are binary protocols encapsulated over UDP, with data flowing (mono-directionally) from the probe (usually a physical network device or a software probe such as nProbe) to the collector (a PC that receives traffic and handles it or dumps it on a database). This architecture has been used for decades; it still makes sense from the device point of view, but not from the application (developer) point of view, for many reasons:

  1. The transport in NetFlow/sFlow has been created from the point of view of the device (probe) that has to send flows to all configured collectors. This means that all collectors will receive all the flows, and that all flows (regardless of their nature) will thus be sent to all collectors. Example: if you want to send to collector A only HTTP flows, and to collector B only VoIP flows, it is not possible. The probe will send everything to everyone. All the time. Imagine having a TV that is not tuned to your favourite channel at a given time, but that shows all the channels simultaneously.
  2. UDP has limitations with MTUs. The use of VPNs (with an MTU smaller than 1500 bytes) is relatively common, so probes have to deal with this problem. Another problem is that it is not possible to deliver data larger than 1400 bytes or so in a UDP packet. This means that a large HTTP cookie won't fit into a UDP packet and thus that you have to cut your information at a specific upper bound. Not nice, in particular if the information (URLs for instance) must be received uncut.
  3. NetFlowV9/IPFIX (and in part sFlow too) have been created with the concept of template, so the collector must store and understand the flow template in order to decode data. This means complications, errors, retransmission of templates, and so on.
  4. Due to the need to keep NetFlow templates small in size, sending a flow that contains an email header (Subject, From, To, CC, Bcc) can become a nightmare as this flow must be divided into sub flows all linked with a unique key. Example <MessageId, To Address 1>, <MessageId, To Address 2>, … Not so nice.
  5. The collector has to handle the probes' idiosyncrasies, with the result that flows coming from different probes might not necessarily have the same format (or flow template if you wish).
  6. Adding new fields (e.g. the Mime-Type to a HTTP Flow) to existing templates might require extra effort on the collector side.
  7. The probe cannot send partial flows easily or periodic updates (e.g. every sec a probe sends VoIP monitoring metrics) unless further specific templates are defined.

All the above facts have been enough to let us move to a different way of shipping monitoring data from the device to the collector. The application that uses monitoring data must:

  1. Receive data ready to be used. Handling templates is old fashioned and must be confined to a place near the probe; this complexity should not pollute all the actors that plan to use monitoring data.
  2. The data transport should sit on top of TCP, so that a probe can send arbitrarily long data without having to cut it or care about MTUs.
  3. The TCP-based transport must be connectionless, namely if the probe or the collector dies/disconnects the transport will handle the problem, as well as transparently handle a future reconnection. In essence we want the freedom of a connection-less protocol over a connection-oriented transport.
  4. Monitoring data should be divided into channels/topics, so that the app that has monitoring data publishes the available channels, and the data consumers subscribe to one or multiple channels and thus receive only the information they are interested in.
  5. The data format can change over time; new fields can be added/removed as needed. For instance if nProbe monitors a YouTube video, it should send the VideoID in the flow, but for non-YouTube traffic a flow can be emitted without such a field. In NetFlow doing that means creating as many templates as there are combinations, or sending templates with fields with empty values (which still take space at the network transport level).
  6. Receive data in a format that is plain and easy. For instance in NetFlow the flow start time (FIRST_SWITCHED) is the "sysUptime in msec at which the first packet of this Flow was switched". So the application is limited to ms precision, and in order to know this time we must first know the sysUpTime, do some math, and compute this time (see the sketch after this list). Odd and naïve I believe. If there is a field, its value must be immediately available and not computed from other fields that complicate the application logic.
  7. Interpret the fields it handles, and discard those that cannot be handled because they are unknown. This grants application evolution over time, so that new fields are added and only the old ones are handled by legacy apps that continue to work unmodified, while new apps can also handle the new fields.
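To make point 6 concrete, this is the arithmetic a NetFlow v9 collector has to perform just to obtain an absolute flow start time (a sketch using the standard v9 header fields; SysUptime wraparound is ignored):

#include <stdint.h>

/* Absolute flow start time (msec since the epoch) from a NetFlow v9 record:
 *   unix_secs      - export time taken from the packet header (seconds)
 *   sys_uptime     - device uptime taken from the packet header (msec)
 *   first_switched - FIRST_SWITCHED field of the flow record (msec of uptime) */
static uint64_t flow_start_epoch_ms(uint32_t unix_secs,
                                    uint32_t sys_uptime,
                                    uint32_t first_switched) {
  return (uint64_t) unix_secs * 1000ULL - (uint64_t) (sys_uptime - first_switched);
}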

ZMQ

In order to implement all this we made some design choices:

  1. Data is shipped in JSON format. Everyone can read and understand it, web browsers in particular. The format is human-friendly and easy to read, but in the near future we might move to compressed or binary (or both) formats for efficiency reasons. Each flow field is identified by a number as specified in the NetFlow RFC (FIRST_SWITCHED is mapped to 22), and the field value is printed in string format. For instance {8:"192.168.0.200",12:"64.243.24.160",15:"0.0.0.0",10:0,14:0,2:13,1:987,22:1379457349,21:1379457349,7:50141,11:80,6:27,4:6,5:0,16:0,17:3561,9:0,13:0} represents the flow [tcp] 64.243.24.160:80 -> 192.168.0.200:50142 [12 pkt/11693 bytes].
  2. nProbe can be used as a pure probe (i.e. it converts packets into flows) or as a proxy (i.e. it acts as an sFlow/NetFlow collector with respect to ntopng). In no case will ntopng receive raw flows: it receives them only via JSON.
  3. ZMQ is a great transport that allows ntopng to connect to nProbe and fetch data via ZMQ only for the topics it is interested in. Currently ntopng subscribes to the "flows" topic, but in the future this will change and become configurable as more topics can be subscribed to. So in the above picture the arrow from nProbe to ntopng depicts the information flow, but physically it is ntopng that connects (as a client) to nProbe, which instead acts as the data source. If nProbe or ntopng are restarted, the transport takes care of all these issues, so the apps do not see any degradation or have to explicitly reimplement reconnections.

As explained in the ntopng README file, nProbe and ntopng must be started as follows:

  1. Flow collection/generation (nProbe)
    nprobe --zmq "tcp://*:5556" -i eth1 -n none (probe mode)
    nprobe --zmq "tcp://*:5556" -i none -n none --collector-port 2055 (sFlow/NetFlow collector mode)
  2. Data Collector (ntopng)
    ntopng -i tcp://127.0.0.1:5556

This means that nProbe creates a TCP endpoint, available on all interfaces (* stands for all), active on port 5556. ntopng instead is instructed to connect via TCP to such an endpoint as a client (in essence it is the opposite of NetFlow/sFlow). To the same nProbe endpoint you can connect multiple consumers, or even your own ZMQ listener application such as the one sketched below.
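For instance, a minimal listener written against the standard libzmq C API could look like the following sketch (endpoint and topic name taken from the examples above; note that nProbe may send the topic and the JSON payload as separate frames, and error handling is omitted):

#include <stdio.h>
#include <zmq.h>

int main(void) {
  void *ctx = zmq_ctx_new();
  void *sub = zmq_socket(ctx, ZMQ_SUB);
  char json[8192];
  int len;

  zmq_connect(sub, "tcp://127.0.0.1:5556");        /* the nProbe --zmq endpoint */
  zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "flows", 5);  /* topic used by ntopng      */

  while ((len = zmq_recv(sub, json, sizeof(json) - 1, 0)) >= 0) {
    if (len > (int) sizeof(json) - 1) len = sizeof(json) - 1;  /* truncated msg */
    json[len] = '\0';
    printf("%s\n", json);  /* a frame: either the topic or a JSON-formatted flow */
  }

  zmq_close(sub);
  zmq_ctx_destroy(ctx);
  return 0;
}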

As said before, this is just the beginning. Using the above solution we can create new apps that would be much more complicated to develop by relying just on sFlow/NetFlow.

NOTE:

  1. In order to use this solution you MUST have a recent copy of nProbe built with ZMQ support. If unsure, please check this first (nprobe -h|grep zmq).
  2. This is an interesting thread about the use of JSON in network monitoring.
  3. We (at ntop) do not plan to discontinue sFlow/NetFlow/IPFIX support in our products. We just want to say that their complexity cannot be propagated to all apps, most of which live in a web browser or are coded in modern languages whose developers like to focus on the problem (network monitoring) rather than on how data is exchanged across monitoring apps. In a way, think of sFlow/NetFlow/IPFIX as an old serial port, and JSON as a USB port. You can use a serial-to-USB converter, but serial ports on PCs are now legacy. nProbe is our serial-to-USB converter, and ntopng is a USB-only app.

Using ntopng and nProbe on the BeagleBone (small is beautiful)


For years we have enjoyed pushing the limits of our software products (our nBox recorder is able to handle multi-10Gbit interfaces for instance), but our roots are not there. It all started in 2003 with this small PowerPC-based nBox


into which we first integrated nProbe. Now, after 10 years, it is time to rethink all this and try again. On the market there are several small and cheap platforms, such as the Raspberry Pi, the BeagleBone Black and the EdgeMax, that are ideal platforms for our apps. We have decided to start our endeavour with the BeagleBone. As we plan to release a new ntopng version in the near future, we decided to refresh our software and make sure it works out of the box on it.


The BeagleBone Black is a $45 ARM-powered board fast enough to run our apps (1 GHz CPU):

beaglebone:~$ cat /proc/cpuinfo 
processor	: 0
model name	: ARMv7 Processor rev 2 (v7l)
BogoMIPS	: 990.68
Features	: swp half thumb fastmult vfp edsp thumbee neon vfpv3 tls 
CPU implementer	: 0x41
CPU architecture: 7
CPU variant	: 0x3
CPU part	: 0xc08
CPU revision	: 2

Hardware	: Generic AM33XX (Flattened Device Tree)
Revision	: 0000
Serial		: 0000000000000000

and is equipped with 512 MB of RAM and 2 GB of storage. It comes with Ångström Linux and all you have to do is compile the apps. Both nProbe and ntopng compile out of the box from source code. For the sake of space we cover the ntopng compilation, which is more complex than nProbe due to its dependencies. The first step is to install the prerequisites as follows:

opkg install subversion libpcap rrdtool-dev
wget http://download.redis.io/redis-stable.tar.gz
tar xvzf redis-stable.tar.gz
cd redis-stable
make

Once that is done, it is time to compile ntopng as follows:

svn co https://svn.ntop.org/svn/ntop/trunk/ntopng/
cd ntopng
./configure
make

That’s all. Now you can start ntopng as you do on your Linux or Windows box.

nProbe
beaglebone:~/nProbe$ ./nprobe -v
Welcome to nprobe v.6.15.131010 ($Revision: 3730 $) for armv7l-unknown-linux-gnueabi
Copyright 2002-13 by Luca Deri <deri@ntop.org>
ntopng
beaglebone:~/ntopng$ ./ntopng -h
ntopng armv7l v.1.0.2 (r6859) - (C) 1998-13 ntop.org

Usage:
  ntopng 
  or
  ntopng [-m ] [-d ] [-e] [-g ] [-n mode] [-i <iface|pcap file>]
              [-w ] [-p ] [-P] [-d ]
              [-c ] [-r ]
              [-l] [-U ] [-s] [-v] [-C] [-F]
              [-B ] [-A ]
...

Resource usage is pretty low and there is plenty of room for running ntopng.

top - 17:37:59 up  2:00,  4 users,  load average: 0.05, 0.33, 0.58
Tasks: 114 total,   1 running, 111 sleeping,   2 stopped,   0 zombie
Cpu(s):  6.7%us,  3.8%sy,  0.0%ni, 88.8%id,  0.0%wa,  0.0%hi,  0.6%si,  0.0%st
Mem:    510820k total,   505692k used,     5128k free,    32464k buffers
Swap:        0k total,        0k used,        0k free,   337412k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                        
13519 nobody    20   0  114m  11m 5128 S  5.7  2.4   0:27.73 ntopng    
13550 deri      20   0  2628 1160  896 R  2.9  0.2   0:00.21 top      
13503 root      20   0 27760 1852 1116 S  0.6  0.4   0:04.10 redis-server

As future work items, after ntopng 1.1 has been released, we plan to optimise our apps for these low-cost platforms so that everyone can monitor their Internet connection at low cost without purchasing complex (to configure and maintain) and expensive devices.

We cannot tell you more, but more news will follow. Stay tuned!

PS. When used tethered the BeagleBone has 2 ethernet interfaces (you need one for management and one for receiving packets to analyse). You can add an extra ethernet interface using a USB Ethernet adapter if you need an extra monitoring port.

Upcoming ntop meetings: Nürnberg, Luxembourg, Pisa, Milano.


Next week is going to be a busy week for us, as we (Luca and Alfredo) will be making a short tour of Europe to present ntopng and the latest ntop apps.

We would like to meet ntop users and hear their feedback, criticism and suggestions.

See you next week!

ntopng Tutorial @ LinuxDay 2013


Last Saturday, 26th of October, we presented a tutorial on ntopng at the Italian LinuxDay 2013. The slides we used for this presentation can be used to learn the idea behind ntopng and its main design principles.

We are also glad that this presentation has been accepted for submission consideration at the Italy in a Day contest, so it might have the chance to become part of this upcoming movie.

ntopng 1.1 Released


This is to announce the release of ntopng 1.1. The main changes with respect to 1.0 include:

  • Enhanced web GUI with new menus and extension of previous sections.
  • Ability to specify multiple interfaces simultaneously (just repeat -i).
  • Performance improvements both in nDPI and the ntopng engine (yes multi-Gbit traffic analysis is possible).
  • Several enhancements to the flow collection interface (note that you need the very latest nProbe), which is now much faster and written in native C++ code.
  • Added Google Maps support and HTML 5 map geolocation support.
  • Ability to save flows (both collected and computed from packets) in SQLite format (-F).
  • Introduced data aggregations (-A) for clustering information based on homogeneous information (e.g. HTTP servers contacted or DNS hosts resolved).
  • Implemented passive OS detection by dissecting, via nDPI, HTTP request headers.
  • Added compatibility with embedded platforms such as RaspberryPi and BeagleBoard.
  • Added several new reports.
  • All report counters now have an activity icon.
  • Added icons in menu headers and HTML pages.
  • Extended host reporting information with new reports and enhancements to existing ones.
  • Fixed various interface and engine bugs.
  • Reduced memory usage.
  • Added an activity map for having 1-second visibility of host activities.

In the next release we will focus on various areas including (but not limited to):

  • Ability to deploy ntopng based sensors across the Internet while accessing them from a single GUI (read it as: create a centralized monitoring console based on a plethora of distributed ntopng monitoring instances).
  • Cloud support for remote data storage.
  • Custom reports for selected protocols such as VoIP and HTTP for providing detailed activity reports.
  • New graphical reports for depicting data that is currently not yet/properly displayed.
  • Ability to visualize stored/historical flows already saved by ntopng.
  • Comparison of hosts activities to spot similarities and non-standard behavior.

Enjoy!

Accelerating Suricata with PF_RING DNA


Below you can find an excerpt of the “Suricata (and the grand slam of) Open Source IDPS” article written by our friend Peter Manev (Suricata core team) describing how to install and configure PF_RING, DNA and Suricata.
The original blog entries can be found at Part One – PF_RING and Part Two – DNA.
————-

Part One – PF_RING

If you have pf_ring already installed, you might want to do:

sudo rmmod pf_ring

If you are not sure if you have pf_ring installed , you can do:

sudo modinfo pf_ring

Get the latest pf_ring sources:

svn export https://svn.ntop.org/svn/ntop/trunk/PF_RING/ pfring-svn-latest

Compile and install PF_RING

Next, enter the following commands for configuration and installation:
(!!! NOT AS ROOT !!!)

    cd pfring-svn-latest/kernel
    make && sudo make install
    cd ../userland/lib
    ./configure --prefix=/usr/local/pfring && make && sudo make install
    cd ../libpcap-1.1.1-ring
    ./configure --prefix=/usr/local/pfring && make && sudo make install
    cd ../tcpdump-4.1.1
    ./configure --prefix=/usr/local/pfring && make && sudo make install
    sudo ldconfig

  

Then we load the module:

sudo modprobe pf_ring

  
Elevate as root and check if you have everything you need - enter:

modinfo pf_ring && cat /proc/net/pf_ring/info

   
Increase the throttle rate of the ixgbe module:

modprobe ixgbe InterruptThrottleRate=4000

The default pf_ring setup will look something like this:

root@suricata:/var/og/suricata# cat /proc/net/pf_ring/info
PF_RING Version          : 5.6.2 ($Revision: exported$)
Total rings              : 16
Standard (non DNA) Options
Ring slots               : 4096
Slot version             : 15
Capture TX               : Yes [RX+TX]
IP Defragment            : No
Socket Mode              : Standard
Transparent mode         : Yes [mode 0]
Total plugins            : 0
Cluster Fragment Queue   : 0
Cluster Fragment Discard : 0

Notice the ring slots above. We would actually like to increase that in order to meet the needs of a high speed network that we are going to monitor with Suricata.

So we do:

rmmod pf_ring
modprobe pf_ring transparent_mode=0 min_num_slots=65534

root@suricata:/home/pevman/pfring-svn-latest# modprobe pf_ring transparent_mode=0 min_num_slots=65534

root@suricata:/home/pevman/pfring-svn-latest# cat /proc/net/pf_ring/info
PF_RING Version          : 5.6.2 ($Revision: exported$)
Total rings              : 0
Standard (non DNA) Options
Ring slots               : 65534
Slot version             : 15
Capture TX               : Yes [RX+TX]
IP Defragment            : No
Socket Mode              : Standard
Transparent mode         : Yes [mode 0]
Total plugins            : 0
Cluster Fragment Queue   : 0
Cluster Fragment Discard : 0

Notice the difference above  – Ring slots: 65534

Compile and install Suricata with PF_RING enabled

Get the latest Suricata dev branch:

git clone git://phalanx.openinfosecfoundation.org/oisf.git && cd oisf/ && git clone https://github.com/ironbee/libhtp.git -b 0.5.x

 Compile and install

./autogen.sh && LIBS=-lrt ./configure --enable-pfring --enable-geoip \
--with-libpfring-includes=/usr/local/pfring/include/ \
--with-libpfring-libraries=/usr/local/pfring/lib/ \
--with-libpcap-includes=/usr/local/pfring/include/ \
--with-libpcap-libraries=/usr/local/pfring/lib/ \
--with-libnss-libraries=/usr/lib \
--with-libnss-includes=/usr/include/nss/ \
--with-libnspr-libraries=/usr/lib \
--with-libnspr-includes=/usr/include/nspr \
&& sudo make clean && sudo make && sudo make install && sudo ldconfig

The "LIBS=-lrt" in front of "./configure" above is needed in case you get the following error without the use of "LIBS=-lrt":

checking for pfring_open in -lpfring... no

   ERROR! --enable-pfring was passed but the library was not found or version is >4, go get it
   from http://www.ntop.org/PF_RING.html

PF_RING – suricata.yaml tune up and configuration

The following values and variables in the default suricata.yaml need to be changed ->

We make sure we use runmode workers (feel free to try other modes and experiment with what is best for your specific setup):

#runmode: autofp
runmode: workers

Adjust the packet size:

# Preallocated size for packet. Default is 1514 which is the classical
# size for pcap on ethernet. You should adjust this value to the highest
# packet size (MTU + hardware header) on your system.
default-packet-size: 1522

Use custom profile in detect-engine with a lot more groups (high gives you about 15 groups per variable, but you can customize as needed depending on the network ranges you monitor ):

detect-engine:
  - profile: custom
  - custom-values:
      toclient-src-groups: 200
      toclient-dst-groups: 200
      toclient-sp-groups: 200
      toclient-dp-groups: 300
      toserver-src-groups: 200
      toserver-dst-groups: 400
      toserver-sp-groups: 200
      toserver-dp-groups: 250
  - sgh-mpm-context: full
  - inspection-recursion-limit: 3000

Adjust your defrag settings:
# Defrag settings:

defrag:
  memcap: 512mb
  hash-size: 65536
  trackers: 65535 # number of defragmented flows to follow
  max-frags: 65535 # number of fragments to keep
  prealloc: yes
  timeout: 30

Adjust your flow settings:

flow:
  memcap: 1gb
  hash-size: 1048576
  prealloc: 1048576
  emergency-recovery: 30

Adjust your per protocol timeout values:

flow-timeouts:

  default:
    new: 3
    established: 30
    closed: 0
    emergency-new: 10
    emergency-established: 10
    emergency-closed: 0
  tcp:
    new: 6
    established: 100
    closed: 12
    emergency-new: 1
    emergency-established: 5
    emergency-closed: 2
  udp:
    new: 3
    established: 30
    emergency-new: 3
    emergency-established: 10
  icmp:
    new: 3
    established: 30
    emergency-new: 1
    emergency-established: 10

Adjust your stream engine settings:

stream:
  memcap: 12gb
  checksum-validation: no      # reject wrong csums
  prealloc-sessions: 500000     # per thread
  midstream: true
  async-oneside: true
  inline: no                  # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 20gb
    depth: 12mb                  # reassemble 12mb into a stream
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes
    #randomize-chunk-range: 10

Make sure you enable suricata.log for troubleshooting if something goes wrong:
  outputs:
  - console:
      enabled: yes
  - file:
      enabled: yes
      filename: /var/log/suricata/suricata.log

The PF_RING section:

# PF_RING configuration. for use with native PF_RING support
# for more info see http://www.ntop.org/PF_RING.html
pfring:
  - interface: eth3
    # Number of receive threads (>1 will enable experimental flow pinned
    # runmode)
    threads: 16

    # Default clusterid.  PF_RING will load balance packets based on flow.
    # All threads/processes that will participate need to have the same
    # clusterid.
    cluster-id: 99

    # Default PF_RING cluster type. PF_RING can load balance per flow or per hash.
    # This is only supported in versions of PF_RING > 4.1.1.
    cluster-type: cluster_flow
    # bpf filter for this interface
    #bpf-filter: tcp
    # Choose checksum verification mode for the interface. At the moment
    # of the capture, some packets may be with an invalid checksum due to
    # offloading to the network card of the checksum computation.
    # Possible values are:
    #  - rxonly: only compute checksum for packets received by network card.
    #  - yes: checksum validation is forced
    #  - no: checksum validation is disabled
    #  - auto: suricata uses a statistical approach to detect when
    #  checksum off-loading is used. (default)
    # Warning: 'checksum-validation' must be set to yes to have any validation
    #checksum-checks: auto

We had these rules enabled:

rule-files:

 - md5.rules # 134 000 specially selected file md5s
 - dns.rules
 - malware.rules
 - local.rules
 - current_events.rules
 - mobile_malware.rules
 - user_agents.rules

Make sure you adjust your Network and Port variables:

  # Holds the address group vars that would be passed in a Signature.
  # These would be retrieved during the Signature address parsing stage.
  address-groups:

    HOME_NET: "[ HOME NET HERE ]"

    EXTERNAL_NET: "!$HOME_NET"

    HTTP_SERVERS: "$HOME_NET"

    SMTP_SERVERS: "$HOME_NET"

    SQL_SERVERS: "$HOME_NET"

    DNS_SERVERS: "$HOME_NET"

    TELNET_SERVERS: "$HOME_NET"

    AIM_SERVERS: "$EXTERNAL_NET"

    DNP3_SERVER: "$HOME_NET"

    DNP3_CLIENT: "$HOME_NET"

    MODBUS_CLIENT: "$HOME_NET"

    MODBUS_SERVER: "$HOME_NET"

    ENIP_CLIENT: "$HOME_NET"

    ENIP_SERVER: "$HOME_NET"

  # Holds the port group vars that would be passed in a Signature.
  # These would be retrieved during the Signature port parsing stage.
  port-groups:

    HTTP_PORTS: "80"

    SHELLCODE_PORTS: "!80"

    ORACLE_PORTS: 1521

    SSH_PORTS: 22

    DNP3_PORTS: 20000

Your app parsers:

# Holds details on the app-layer. The protocols section details each protocol.
# Under each protocol, the default value for detection-enabled and
# "parsed-enabled" is yes, unless specified otherwise.
# Each protocol covers enabling/disabling parsers for all ipprotos
# the app-layer protocol runs on.  For example "dcerpc" refers to the tcp
# version of the protocol as well as the udp version of the protocol.
# The option "enabled" takes 3 values: "yes", "no", "detection-only".
# "yes" enables both detection and the parser, "no" disables both, and
# "detection-only" enables detection only (parser disabled).
app-layer:
  protocols:
    tls:
      enabled: yes
      detection-ports:
        tcp:
          toserver: 443

      #no-reassemble: yes
    dcerpc:
      enabled: yes
    ftp:
      enabled: yes
    ssh:
      enabled: yes
    smtp:
      enabled: yes
    imap:
      enabled: detection-only
    msn:
      enabled: detection-only
    smb:
      enabled: yes
      detection-ports:
        tcp:
          toserver: 139
    # smb2 detection is disabled internally inside the engine.
    #smb2:
    #  enabled: yes
    dnstcp:
       enabled: yes
       detection-ports:
         tcp:
           toserver: 53
    dnsudp:
       enabled: yes
       detection-ports:
         udp:
           toserver: 53
    http:
      enabled: yes

Libhtp body limits:

      libhtp:

         default-config:
           personality: IDS

           # Can be specified in kb, mb, gb.  Just a number indicates
            # it's in bytes.
           request-body-limit: 12mb
           response-body-limit: 12mb

           # inspection limits
           request-body-minimal-inspect-size: 32kb
           request-body-inspect-window: 4kb
           response-body-minimal-inspect-size: 32kb
           response-body-inspect-window: 4kb

Run it

With all that done and in place, you can start Suricata like this (adjust the directory locations to your setup):

 LD_LIBRARY_PATH=/usr/local/pfring/lib suricata --pfring-int=eth3 \
 --pfring-cluster-id=99 --pfring-cluster-type=cluster_flow \
 -c /etc/suricata/peter-yaml/suricata-pfring.yaml -D -v

this would also work:

suricata --pfring-int=eth3 --pfring-cluster-id=99 --pfring-cluster-type=cluster_flow \
 -c /etc/suricata/peter-yaml/suricata-pfring.yaml -D -v

After you start Suricata with PF_RING, you can use htop and the information in suricata.log to determine if everything is OK.
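Another quick check (assuming a stock PF_RING install, which exposes its state under /proc/net/pf_ring) is to look at the PF_RING proc interface and confirm that the rings are attached:

cat /proc/net/pf_ring/info
ls /proc/net/pf_ring/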

EXAMPLE:

 [29966] 30/11/2013 — 14:29:12 – (util-cpu.c:170) <Info> (UtilCpuPrintSummary) — CPUs/cores online: 16
[29966] 30/11/2013 — 14:29:12 – (app-layer-dns-udp.c:315) <Info> (DNSUDPConfigure) — DNS request flood protection level: 500
[29966] 30/11/2013 — 14:29:12 – (defrag-hash.c:212) <Info> (DefragInitConfig) — allocated 3670016 bytes of memory for the defrag hash… 65536 buckets of size 56
[29966] 30/11/2013 — 14:29:12 – (defrag-hash.c:237) <Info> (DefragInitConfig) — preallocated 65535 defrag trackers of size 152
[29966] 30/11/2013 — 14:29:12 – (defrag-hash.c:244) <Info> (DefragInitConfig) — defrag memory usage: 13631336 bytes, maximum: 536870912
[29966] 30/11/2013 — 14:29:12 – (tmqh-flow.c:76) <Info> (TmqhFlowRegister) — AutoFP mode using default “Active Packets” flow load balancer
[29967] 30/11/2013 — 14:29:12 – (tmqh-packetpool.c:141) <Info> (PacketPoolInit) — preallocated 65534 packets. Total memory 229106864
[29967] 30/11/2013 — 14:29:12 – (host.c:205) <Info> (HostInitConfig) — allocated 262144 bytes of memory for the host hash… 4096 buckets of size 64
[29967] 30/11/2013 — 14:29:12 – (host.c:228) <Info> (HostInitConfig) — preallocated 1000 hosts of size 112
[29967] 30/11/2013 — 14:29:12 – (host.c:230) <Info> (HostInitConfig) — host memory usage: 390144 bytes, maximum: 16777216
[29967] 30/11/2013 — 14:29:12 – (flow.c:386) <Info> (FlowInitConfig) — allocated 67108864 bytes of memory for the flow hash… 1048576 buckets of size 64
[29967] 30/11/2013 — 14:29:13 – (flow.c:410) <Info> (FlowInitConfig) — preallocated 1048576 flows of size 280
[29967] 30/11/2013 — 14:29:13 – (flow.c:412) <Info> (FlowInitConfig) — flow memory usage: 369098752 bytes, maximum: 1073741824
…..
[29967] 30/11/2013 — 14:30:23 – (util-runmodes.c:545) <Info> (RunModeSetLiveCaptureWorkersForDevice) — Going to use 16 thread(s)
[30000] 30/11/2013 — 14:30:23 – (source-pfring.c:445) <Info> (ReceivePfringThreadInit) — (RxPFReth31) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30001] 30/11/2013 — 14:30:23 – (source-pfring.c:445) <Info> (ReceivePfringThreadInit) — (RxPFReth32) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30002] 30/11/2013 — 14:30:23 – (source-pfring.c:445) <Info> (ReceivePfringThreadInit) — (RxPFReth33) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30003] 30/11/2013 — 14:30:23 – (source-pfring.c:445) <Info> (ReceivePfringThreadInit) — (RxPFReth34) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30004] 30/11/2013 — 14:30:24 – (source-pfring.c:445) <Info> (ReceivePfringThreadInit) — (RxPFReth35) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30005] 30/11/2013 — 14:30:24 – (source-pfring.c:445) <Info> (ReceivePfringThreadInit) — (RxPFReth36) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30006] 30/11/2013 — 14:30:24 – (source-pfring.c:445) <Info> (ReceivePfringThreadInit) — (RxPFReth37) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30007] 30/11/2013 — 14:30:24 – (source-pfring.c:445) <Info> (ReceivePfringThreadInit) — (RxPFReth38) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30008] 30/11/2013 — 14:30:24 – (source-pfring.c:445) <Info> (ReceivePfringThreadInit) — (RxPFReth39) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30009] 30/11/2013 — 14:30:24 – (source-pfring.c:445) <Info> (ReceivePfringThreadInit) — (RxPFReth310) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30010] 30/11/2013 — 14:30:24 – (source-pfring.c:445) <Info> (ReceivePfringThreadInit) — (RxPFReth311) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30011] 30/11/2013 — 14:30:24 – (source-pfring.c:445) <Info> (ReceivePfringThreadInit) — (RxPFReth312) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30012] 30/11/2013 — 14:30:24 – (source-pfring.c:445) <Info> (ReceivePfringThreadInit) — (RxPFReth313) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30013] 30/11/2013 — 14:30:24 – (source-pfring.c:445) <Info> (ReceivePfringThreadInit) — (RxPFReth314) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30014] 30/11/2013 — 14:30:25 – (source-pfring.c:445) <Info> (ReceivePfringThreadInit) — (RxPFReth315) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[30015] 30/11/2013 — 14:30:25 – (source-pfring.c:445) <Info> (ReceivePfringThreadInit) — (RxPFReth316) Using PF_RING v.5.6.2, interface eth3, cluster-id 99
[29967] 30/11/2013 — 14:30:25 – (runmode-pfring.c:555) <Info> (RunModeIdsPfringWorkers) — RunModeIdsPfringWorkers initialised

…..
[29967] 30/11/2013 — 14:30:25 – (tm-threads.c:2191) <Notice> (TmThreadWaitOnThreadInit) — all 16 packet processing threads, 3 management threads initialized, engine started.

So after running for about 7 hrs:

root@suricata:/var/log/suricata# grep kernel stats.log |tail -32
capture.kernel_packets    | RxPFReth31                | 2313986783
capture.kernel_drops      | RxPFReth31                | 75254447
capture.kernel_packets    | RxPFReth32                | 2420204427
capture.kernel_drops      | RxPFReth32                | 23492323
capture.kernel_packets    | RxPFReth33                | 2412343682
capture.kernel_drops      | RxPFReth33                | 71202459
capture.kernel_packets    | RxPFReth34                | 2249712177
capture.kernel_drops      | RxPFReth34                | 15290216
capture.kernel_packets    | RxPFReth35                | 2272653367
capture.kernel_drops      | RxPFReth35                | 2072826
capture.kernel_packets    | RxPFReth36                | 2281254066
capture.kernel_drops      | RxPFReth36                | 118723669
capture.kernel_packets    | RxPFReth37                | 2430047882
capture.kernel_drops      | RxPFReth37                | 13702511
capture.kernel_packets    | RxPFReth38                | 2474713911
capture.kernel_drops      | RxPFReth38                | 6512062
capture.kernel_packets    | RxPFReth39                | 2299221265
capture.kernel_drops      | RxPFReth39                | 596690
capture.kernel_packets    | RxPFReth310               | 2398183554
capture.kernel_drops      | RxPFReth310               | 15623971
capture.kernel_packets    | RxPFReth311               | 2277348230
capture.kernel_drops      | RxPFReth311               | 62773742
capture.kernel_packets    | RxPFReth312               | 2693710052
capture.kernel_drops      | RxPFReth312               | 40213266
capture.kernel_packets    | RxPFReth313               | 2470037871
capture.kernel_drops      | RxPFReth313               | 406738
capture.kernel_packets    | RxPFReth314               | 2236636480
capture.kernel_drops      | RxPFReth314               | 714360
capture.kernel_packets    | RxPFReth315               | 2314829059
capture.kernel_drops      | RxPFReth315               | 1818726
capture.kernel_packets    | RxPFReth316               | 2271917603
capture.kernel_drops      | RxPFReth316               | 1200009

About 2% drops at 85% CPU usage, with about 3300 rules loaded and traffic inspected for matches against 134 000 file MD5s.
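If you want the aggregate drop percentage without doing the math by hand, a small awk sketch over the same stats.log output works (assuming the column layout shown above, where the counter is the last field):

grep kernel stats.log | tail -32 | awk '
  /kernel_packets/ { pkts  += $NF }
  /kernel_drops/   { drops += $NF }
  END { printf "dropped %.2f%% of %d packets\n", 100*drops/pkts, pkts }'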

On a side note

You could also use linux-tools to do some more analysis and performance tuning:

apt-get install linux-tools

Example: perf top
(hit enter)
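Beyond the live view, a short recording is often handier for comparing runs; this is standard perf usage, nothing Suricata-specific:

perf record -a -g -- sleep 30    # sample all CPUs for 30 seconds
perf report                      # browse the hottest functions afterwards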

Some more info found HERE and thanks to Regit HERE.

Your task of tuning up is not yet done. You could also do dry test runs with profiling enabled in Suricata, determine the most "expensive" rules, and tune them accordingly.
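For the record, rule profiling only works if Suricata was built with profiling support and the profiling section of suricata.yaml is enabled; a minimal sketch (flag and key names from the stock build system and the default suricata.yaml of this version, so double-check against your own copy):

./configure --enable-profiling --enable-pfring ...   # rebuild with profiling support

# suricata.yaml
profiling:
  rules:
    enabled: yes
    filename: rule_perf.log
    sort: avgticks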

Part Two – DNA

If you do not have PF_RING installed on your system, you should follow all of Part One except the "Run it" section. After that, come back and continue from here onwards.

NOTE: Know your network card. This setup uses an Intel 82599EB 10-Gigabit SFI/SFP+ adapter.

NOTE: When one application is using a DNA interface, no other application can use that same interface. For example, if you have Suricata running as in this guide and you try to run "./pfcount" on the same interface, it will fail, since the DNA interface is already in use. For cases where you would like multiple applications to use the same DNA interface, you should consider Libzero.

Compile

Once you have acquired your DNA license (how-to instructions are included with the license), cd to the src directory of your latest pfring pull:

cd /home/pevman/pfring-svn-latest/drivers/DNA/ixgbe-3.18.7-DNA/src
make

Configure

Elevate as root. Edit the script load_dna_driver.sh found in the directory below
(/pfring-svn-latest/drivers/DNA/ixgbe-3.18.7-DNA/src/load_dna_driver.sh).
Make changes in load_dna_driver.sh like so (we use only one DNA interface):

# Configure here the network interfaces to activate
IF[0]=dna0
#IF[1]=dna1
#IF[2]=dna2
#IF[3]=dna3

Leave rmmod like so (default):

# Remove old modules (if loaded)
rmmod ixgbe
rmmod pf_ring

Leave only two insmod lines uncommented

# We assume that you have compiled PF_RING
insmod ../../../../kernel/pf_ring.ko

Adjust the queues, use your own MAC address, increase the buffers, up the laser on the SFP:

# As many queues as the number of processors
#insmod ./ixgbe.ko RSS=0,0,0,0
insmod ./ixgbe.ko RSS=0 mtu=1522 adapters_to_enable=00:e0:ed:19:e3:e1 num_rx_slots=32768 FdirPballoc=3

Above we have 16 CPUs and we want to use 16 queues, so we enable only the adapter with this MAC address (a quick way to look it up is shown right below), bump up the rx slots, and comment out all the other insmod lines (besides the two shown above for pf_ring.ko and ixgbe.ko).

In the case above we enable 16 queues (because we have 16 CPUs) for the first port of the 10Gbps Intel network card.
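If you are not sure which MAC belongs to the port you want to enable (before the DNA driver is loaded the port is still visible under its kernel name, eth3 in this guide), either of these will tell you:

ethtool -P eth3                    # prints the permanent MAC address
cat /sys/class/net/eth3/address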

 +++++ CORNER CASE +++++

(the bonus round! with the help of Alfredo Cardigliano from ntop)

Question:
So what should you do if you have this scenario: a 32-core system with a
4-port 10Gbps network card and DNA, where the ports receive 1, 2, 6 and 1 Gbps
of traffic, respectively.

You would like to dedicate 4, 8, 16 and 4 queues (CPUs) per port. In other words:
Gbps of traffic (port 0,1,2,3)   ->  1, 2, 6, 1
Number of CPUs/queues dedicated  ->  4, 8, 16, 4

Answer:
Simple -> You should use

insmod ./ixgbe.ko RSS=4,8,16,4 ….

instead of :

insmod ./ixgbe.ko RSS=0 ….

+++++ END of the CORNER CASE +++++

Execute load_dna_driver.sh from the same directory it resides in.
(for this tutorial: /home/pevman/pfring-svn-latest/drivers/DNA/ixgbe-3.18.7-DNA/src):

./load_dna_driver.sh
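Afterwards it is worth confirming that the modules loaded cleanly and that the DNA interface showed up; a quick sanity check, assuming the standard PF_RING proc layout:

dmesg | tail                   # look for the pf_ring and ixgbe/DNA messages
cat /proc/net/pf_ring/info     # PF_RING version and settings
ip link show dna0              # the new DNA interface should be listed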

Make sure offloading is disabled (substitute the correct interface name below):

ethtool -K dna0 tso off
ethtool -K dna0 gro off
ethtool -K dna0 lro off
ethtool -K dna0 gso off
ethtool -K dna0 rx off
ethtool -K dna0 tx off
ethtool -K dna0 sg off
ethtool -K dna0 rxvlan off
ethtool -K dna0 txvlan off
ethtool -N dna0 rx-flow-hash udp4 sdfn
ethtool -N dna0 rx-flow-hash udp6 sdfn
ethtool -n dna0 rx-flow-hash udp6
ethtool -n dna0 rx-flow-hash udp4
ethtool -C dna0 rx-usecs 1000
ethtool -C dna0 adaptive-rx off
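The ethtool -K lines above can also be applied in one go with a small loop; a sketch, assuming the interface is dna0 as in this guide:

IFACE=dna0
for feature in tso gro lro gso rx tx sg rxvlan txvlan; do
    ethtool -K "$IFACE" "$feature" off
done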

Configuration in suricata.yaml

In suricata.yaml, make sure your pfring section looks like this:

# PF_RING configuration. for use with native PF_RING support
# for more info see http://www.ntop.org/PF_RING.html  #dna0@0
pfring:
  - interface: dna0@0
    # Number of receive threads (>1 will enable experimental flow pinned
    # runmode)
    #threads: 1

    # Default clusterid.  PF_RING will load balance packets based on flow.
    # All threads/processes that will participate need to have the same
    # clusterid.
    #cluster-id: 1

    # Default PF_RING cluster type. PF_RING can load balance per flow or per hash.
    # This is only supported in versions of PF_RING > 4.1.1.
    cluster-type: cluster_flow
    # bpf filter for this interface
    #bpf-filter: tcp
    # Choose checksum verification mode for the interface. At the moment
    # of the capture, some packets may be with an invalid checksum due to
    # offloading to the network card of the checksum computation.
    # Possible values are:
    #  - rxonly: only compute checksum for packets received by network card.
    #  - yes: checksum validation is forced
    #  - no: checksum validation is disabled
    #  - auto: suricata uses a statistical approach to detect when
    #  checksum off-loading is used. (default)
    # Warning: 'checksum-validation' must be set to yes to have any validation
    #checksum-checks: auto
  # Second interface
  - interface: dna0@1
    threads: 1
  - interface: dna0@2
    threads: 1
  - interface: dna0@3
    threads: 1
  - interface: dna0@4
    threads: 1
  - interface: dna0@5
    threads: 1
  - interface: dna0@6
    threads: 1
  - interface: dna0@7
    threads: 1
  - interface: dna0@8
    threads: 1
  - interface: dna0@9
    threads: 1
  - interface: dna0@10
    threads: 1
  - interface: dna0@11
    threads: 1
  - interface: dna0@12
    threads: 1
  - interface: dna0@13
    threads: 1
  - interface: dna0@14
    threads: 1
  - interface: dna0@15
    threads: 1
  # Put default values here
  #- interface: default
    #threads: 2

Rules enabled in suricata.yaml:

default-rule-path: /etc/suricata/et-config/
rule-files:
 - trojan.rules
 - dns.rules
 - malware.rules
 - local.rules
 - jonkman.rules

 - worm.rules
 - current_events.rules
 - mobile_malware.rules
 - user_agents.rules

For the rest of the suricata.yaml configuration (Suricata-specific settings such as timeouts, memory limits, fragmentation and reassembly limits, and so on) you can follow Part One (PF_RING) above.

Notice the DNA driver loaded:

 lshw -c Network
  *-network:1
       description: Ethernet interface
       product: 82599EB 10-Gigabit SFI/SFP+ Network Connection
       vendor: Intel Corporation
       physical id: 0.1
       bus info: pci@0000:04:00.1
       logical name: dna0
       version: 01
       serial: 00:e0:ed:19:e3:e1
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi msix pciexpress vpd bus_master cap_list ethernet physical fibre
       configuration: autonegotiation=off broadcast=yes driver=ixgbe driverversion=3.18.7-DNA duplex=full firmware=0x800000cb latency=0 link=yes multicast=yes port=fibre promiscuous=yes
       resources: irq:37 memory:fbc00000-fbc1ffff ioport:e000(size=32) memory:fbc40000-fbc43fff memory:fa700000-fa7fffff memory:fa600000-fa6fffff

Start Suricata with DNA

(make sure you adjust your directories in the command below)

suricata --pfring -c /etc/suricata/peter-yaml/suricata-pfring-dna.yaml -v -D

Some stats from suricata.log:

root@suricata:/home/pevman/pfring-svn-latest/userland/examples# more /var/log/suricata/suricata.log
[32055] 27/11/2013 — 13:31:38 – (suricata.c:932) <Notice> (SCPrintVersion) — This is Suricata version 2.0dev (rev 77b09fc)
[32055] 27/11/2013 — 13:31:38 – (util-cpu.c:170) <Info> (UtilCpuPrintSummary) — CPUs/cores online: 16
[32055] 27/11/2013 — 13:31:38 – (app-layer-dns-udp.c:315) <Info> (DNSUDPConfigure) — DNS request flood protection level: 500
[32055] 27/11/2013 — 13:31:38 – (defrag-hash.c:209) <Info> (DefragInitConfig) — allocated 3670016 bytes of memory for the defrag hash… 65536 buckets of size 56
[32055] 27/11/2013 — 13:31:38 – (defrag-hash.c:234) <Info> (DefragInitConfig) — preallocated 65535 defrag trackers of size 152
[32055] 27/11/2013 — 13:31:38 – (defrag-hash.c:241) <Info> (DefragInitConfig) — defrag memory usage: 13631336 bytes, maximum: 536870912
[32055] 27/11/2013 — 13:31:38 – (tmqh-flow.c:76) <Info> (TmqhFlowRegister) — AutoFP mode using default “Active Packets” flow load balancer
[32056] 27/11/2013 — 13:31:38 – (tmqh-packetpool.c:141) <Info> (PacketPoolInit) — preallocated 65534 packets. Total memory 288873872
[32056] 27/11/2013 — 13:31:38 – (host.c:205) <Info> (HostInitConfig) — allocated 262144 bytes of memory for the host hash… 4096 buckets of size 64
[32056] 27/11/2013 — 13:31:38 – (host.c:228) <Info> (HostInitConfig) — preallocated 1000 hosts of size 112
[32056] 27/11/2013 — 13:31:38 – (host.c:230) <Info> (HostInitConfig) — host memory usage: 390144 bytes, maximum: 16777216
[32056] 27/11/2013 — 13:31:38 – (flow.c:386) <Info> (FlowInitConfig) — allocated 67108864 bytes of memory for the flow hash… 1048576 buckets of size 64
[32056] 27/11/2013 — 13:31:38 – (flow.c:410) <Info> (FlowInitConfig) — preallocated 1048576 flows of size 376
[32056] 27/11/2013 — 13:31:38 – (flow.c:412) <Info> (FlowInitConfig) — flow memory usage: 469762048 bytes, maximum: 1073741824
[32056] 27/11/2013 — 13:31:38 – (reputation.c:459) <Info> (SRepInit) — IP reputation disabled
[32056] 27/11/2013 — 13:31:38 – (util-magic.c:62) <Info> (MagicInit) — using magic-file /usr/share/file/magic
[32056] 27/11/2013 — 13:31:38 – (suricata.c:1725) <Info> (SetupDelayedDetect) — Delayed detect disabled

….. 8010 rules loaded:

[32056] 27/11/2013 — 13:31:40 – (detect.c:453) <Info> (SigLoadSignatures) — 9 rule files processed. 8010 rules successfully loaded, 0 rules failed
[32056] 27/11/2013 — 13:31:40 – (detect.c:2589) <Info> (SigAddressPrepareStage1) — 8017 signatures processed. 1 are IP-only rules, 2147 are inspecting packet payload, 6625 inspect application lay
er, 0 are decoder event only
[32056] 27/11/2013 — 13:31:40 – (detect.c:2592) <Info> (SigAddressPrepareStage1) — building signature grouping structure, stage 1: adding signatures to signature source addresses… complete
[32056] 27/11/2013 — 13:31:40 – (detect.c:3218) <Info> (SigAddressPrepareStage2) — building signature grouping structure, stage 2: building source address list… complete
[32056] 27/11/2013 — 13:35:28 – (detect.c:3860) <Info> (SigAddressPrepareStage3) — building signature grouping structure, stage 3: building destination address lists… complete
[32056] 27/11/2013 — 13:35:28 – (util-threshold-config.c:1186) <Info> (SCThresholdConfParseFile) — Threshold config parsed: 0 rule(s) found
[32056] 27/11/2013 — 13:35:28 – (util-coredump-config.c:122) <Info> (CoredumpLoadConfig) — Core dump size set to unlimited.
[32056] 27/11/2013 — 13:35:28 – (util-logopenfile.c:168) <Info> (SCConfLogOpenGeneric) — fast output device (regular) initialized: fast.log
[32056] 27/11/2013 — 13:35:28 – (util-logopenfile.c:168) <Info> (SCConfLogOpenGeneric) — http-log output device (regular) initialized: http.log
[32056] 27/11/2013 — 13:35:28 – (util-logopenfile.c:168) <Info> (SCConfLogOpenGeneric) — tls-log output device (regular) initialized: tls.log
[32056] 27/11/2013 — 13:35:28 – (util-device.c:147) <Info> (LiveBuildDeviceList) — Adding interface dna0@0 from config file
[32056] 27/11/2013 — 13:35:28 – (util-device.c:147) <Info> (LiveBuildDeviceList) — Adding interface dna0@1 from config file
[32056] 27/11/2013 — 13:35:28 – (util-device.c:147) <Info> (LiveBuildDeviceList) — Adding interface dna0@2 from config file
[32056] 27/11/2013 — 13:35:28 – (util-device.c:147) <Info> (LiveBuildDeviceList) — Adding interface dna0@3 from config file
[32056] 27/11/2013 — 13:35:28 – (util-device.c:147) <Info> (LiveBuildDeviceList) — Adding interface dna0@4 from config file
[32056] 27/11/2013 — 13:35:28 – (util-device.c:147) <Info> (LiveBuildDeviceList) — Adding interface dna0@5 from config file
[32056] 27/11/2013 — 13:35:28 – (util-device.c:147) <Info> (LiveBuildDeviceList) — Adding interface dna0@6 from config file
[32056] 27/11/2013 — 13:35:28 – (util-device.c:147) <Info> (LiveBuildDeviceList) — Adding interface dna0@7 from config file
[32056] 27/11/2013 — 13:35:28 – (util-device.c:147) <Info> (LiveBuildDeviceList) — Adding interface dna0@8 from config file
[32056] 27/11/2013 — 13:35:28 – (util-device.c:147) <Info> (LiveBuildDeviceList) — Adding interface dna0@9 from config file
[32056] 27/11/2013 — 13:35:28 – (util-device.c:147) <Info> (LiveBuildDeviceList) — Adding interface dna0@10 from config file
[32056] 27/11/2013 — 13:35:28 – (util-device.c:147) <Info> (LiveBuildDeviceList) — Adding interface dna0@11 from config file
[32056] 27/11/2013 — 13:35:28 – (util-device.c:147) <Info> (LiveBuildDeviceList) — Adding interface dna0@12 from config file
[32056] 27/11/2013 — 13:35:28 – (util-device.c:147) <Info> (LiveBuildDeviceList) — Adding interface dna0@13 from config file
[32056] 27/11/2013 — 13:35:28 – (util-device.c:147) <Info> (LiveBuildDeviceList) — Adding interface dna0@14 from config file
[32056] 27/11/2013 — 13:35:28 – (util-device.c:147) <Info> (LiveBuildDeviceList) — Adding interface dna0@15 from config file
……..
……
[32056] 27/11/2013 — 13:35:28 – (runmode-pfring.c:555) <Info> (RunModeIdsPfringWorkers) — RunModeIdsPfringWorkers initialised
[32056] 27/11/2013 — 13:35:28 – (stream-tcp.c:374) <Info> (StreamTcpInitConfig) — stream “prealloc-sessions”: 2048 (per thread)
[32056] 27/11/2013 — 13:35:28 – (stream-tcp.c:390) <Info> (StreamTcpInitConfig) — stream “memcap”: 17179869184
[32056] 27/11/2013 — 13:35:28 – (stream-tcp.c:396) <Info> (StreamTcpInitConfig) — stream “midstream” session pickups: enabled
[32056] 27/11/2013 — 13:35:28 – (stream-tcp.c:402) <Info> (StreamTcpInitConfig) — stream “async-oneside”: disabled
[32056] 27/11/2013 — 13:35:28 – (stream-tcp.c:419) <Info> (StreamTcpInitConfig) — stream “checksum-validation”: disabled
[32056] 27/11/2013 — 13:35:28 – (stream-tcp.c:441) <Info> (StreamTcpInitConfig) — stream.”inline”: disabled
[32056] 27/11/2013 — 13:35:28 – (stream-tcp.c:454) <Info> (StreamTcpInitConfig) — stream “max-synack-queued”: 5
[32056] 27/11/2013 — 13:35:28 – (stream-tcp.c:472) <Info> (StreamTcpInitConfig) — stream.reassembly “memcap”: 25769803776
[32056] 27/11/2013 — 13:35:28 – (stream-tcp.c:490) <Info> (StreamTcpInitConfig) — stream.reassembly “depth”: 12582912
[32056] 27/11/2013 — 13:35:28 – (stream-tcp.c:573) <Info> (StreamTcpInitConfig) — stream.reassembly “toserver-chunk-size”: 2509
[32056] 27/11/2013 — 13:35:28 – (stream-tcp.c:575) <Info> (StreamTcpInitConfig) — stream.reassembly “toclient-chunk-size”: 2459
[32056] 27/11/2013 — 13:35:28 – (tm-threads.c:2191) <Notice> (TmThreadWaitOnThreadInit) — all 16 packet processing threads, 3 management threads initialized, engine started.

Results: after 45 min of running (and counting) on 10Gbps with 8010 rules (impressive) ->

root@suricata:/var/log/suricata# grep  kernel /var/log/suricata/stats.log | tail -32
capture.kernel_packets    | RxPFRdna0@01              | 467567844
capture.kernel_drops      | RxPFRdna0@01              | 0
capture.kernel_packets    | RxPFRdna0@11              | 440973548
capture.kernel_drops      | RxPFRdna0@11              | 0
capture.kernel_packets    | RxPFRdna0@21              | 435088258
capture.kernel_drops      | RxPFRdna0@21              | 0
capture.kernel_packets    | RxPFRdna0@31              | 453131090
capture.kernel_drops      | RxPFRdna0@31              | 0
capture.kernel_packets    | RxPFRdna0@41              | 469334903
capture.kernel_drops      | RxPFRdna0@41              | 0
capture.kernel_packets    | RxPFRdna0@51              | 430412652
capture.kernel_drops      | RxPFRdna0@51              | 0
capture.kernel_packets    | RxPFRdna0@61              | 438056484
capture.kernel_drops      | RxPFRdna0@61              | 0
capture.kernel_packets    | RxPFRdna0@71              | 428234219
capture.kernel_drops      | RxPFRdna0@71              | 0
capture.kernel_packets    | RxPFRdna0@81              | 452883734
capture.kernel_drops      | RxPFRdna0@81              | 0
capture.kernel_packets    | RxPFRdna0@91              | 469565553
capture.kernel_drops      | RxPFRdna0@91              | 0
capture.kernel_packets    | RxPFRdna0@101             | 442010263
capture.kernel_drops      | RxPFRdna0@101             | 0
capture.kernel_packets    | RxPFRdna0@111             | 451989862
capture.kernel_drops      | RxPFRdna0@111             | 0
capture.kernel_packets    | RxPFRdna0@121             | 452650397
capture.kernel_drops      | RxPFRdna0@121             | 0
capture.kernel_packets    | RxPFRdna0@131             | 464907229
capture.kernel_drops      | RxPFRdna0@131             | 0
capture.kernel_packets    | RxPFRdna0@141             | 443403243
capture.kernel_drops      | RxPFRdna0@141             | 0
capture.kernel_packets    | RxPFRdna0@151             | 432499371
capture.kernel_drops      | RxPFRdna0@151             | 0

Some htop stats

In the examples directory of your PF_RING sources (/pfring-svn-latest/userland/examples) there are some tools you can use to look at packet stats and such. Example:

root@suricata:/home/pevman/pfring-svn-latest/userland/examples# ./pfcount_multichannel -i dna0
Capturing from dna0
Found 16 channels
Using PF_RING v.5.6.2

=========================
Absolute Stats: [channel=0][280911 pkts rcvd][0 pkts dropped]
Total Pkts=280911/Dropped=0.0 %
280911 pkts – 238246030 bytes [140327.9 pkt/sec - 952.12 Mbit/sec]
=========================
Actual Stats: [channel=0][99895 pkts][1001.8 ms][99715.9 pkt/sec]
=========================
Absolute Stats: [channel=1][271128 pkts rcvd][0 pkts dropped]
Total Pkts=271128/Dropped=0.0 %
271128 pkts – 220184576 bytes [135440.8 pkt/sec - 879.94 Mbit/sec]
=========================
Actual Stats: [channel=1][91540 pkts][1001.8 ms][91375.9 pkt/sec]
=========================
Absolute Stats: [channel=2][251004 pkts rcvd][0 pkts dropped]
Total Pkts=251004/Dropped=0.0 %
251090 pkts – 210457632 bytes [125430.9 pkt/sec - 840.91 Mbit/sec]
=========================
Actual Stats: [channel=2][85799 pkts][1001.8 ms][85645.2 pkt/sec]
=========================
Absolute Stats: [channel=3][256648 pkts rcvd][0 pkts dropped]
Total Pkts=256648/Dropped=0.0 %
256648 pkts – 213116218 bytes [128207.4 pkt/sec - 851.69 Mbit/sec]
=========================
Actual Stats: [channel=3][86188 pkts][1001.8 ms][86033.5 pkt/sec]
=========================
Absolute Stats: [channel=4][261802 pkts rcvd][0 pkts dropped]
Total Pkts=261802/Dropped=0.0 %
261802 pkts – 225272589 bytes [130782.1 pkt/sec - 900.27 Mbit/sec]
=========================
Actual Stats: [channel=4][86528 pkts][1001.8 ms][86372.9 pkt/sec]
=========================
Absolute Stats: [channel=5][275665 pkts rcvd][0 pkts dropped]
Total Pkts=275665/Dropped=0.0 %
275665 pkts – 239259529 bytes [137707.3 pkt/sec - 956.17 Mbit/sec]
=========================
Actual Stats: [channel=5][91780 pkts][1001.8 ms][91615.5 pkt/sec]
=========================
Absolute Stats: [channel=6][295611 pkts rcvd][0 pkts dropped]
Total Pkts=295611/Dropped=0.0 %
295611 pkts – 231543496 bytes [147671.2 pkt/sec - 925.33 Mbit/sec]
=========================
Actual Stats: [channel=6][100521 pkts][1001.8 ms][100340.8 pkt/sec]
=========================
Absolute Stats: [channel=7][268374 pkts rcvd][0 pkts dropped]
Total Pkts=268374/Dropped=0.0 %
268374 pkts – 230010930 bytes [134065.1 pkt/sec - 919.21 Mbit/sec]
=========================
Actual Stats: [channel=7][91749 pkts][1001.8 ms][91584.5 pkt/sec]
=========================
Absolute Stats: [channel=8][312726 pkts rcvd][0 pkts dropped]
Total Pkts=312726/Dropped=0.0 %
312726 pkts – 286419690 bytes [156220.9 pkt/sec - 1144.64 Mbit/sec]
=========================
Actual Stats: [channel=8][86361 pkts][1001.8 ms][86206.2 pkt/sec]
=========================
Absolute Stats: [channel=9][275091 pkts rcvd][0 pkts dropped]
Total Pkts=275091/Dropped=0.0 %
275091 pkts – 229807313 bytes [137420.5 pkt/sec - 918.39 Mbit/sec]
=========================
Actual Stats: [channel=9][91118 pkts][1001.8 ms][90954.6 pkt/sec]
=========================
Absolute Stats: [channel=10][289441 pkts rcvd][0 pkts dropped]
Total Pkts=289441/Dropped=0.0 %
289441 pkts – 254843198 bytes [144589.0 pkt/sec - 1018.45 Mbit/sec]
=========================
Actual Stats: [channel=10][95537 pkts][1001.8 ms][95365.7 pkt/sec]
=========================
Absolute Stats: [channel=11][241318 pkts rcvd][0 pkts dropped]
Total Pkts=241318/Dropped=0.0 %
241318 pkts – 200442927 bytes [120549.4 pkt/sec - 801.04 Mbit/sec]
=========================
Actual Stats: [channel=11][82011 pkts][1001.8 ms][81864.0 pkt/sec]
=========================
Absolute Stats: [channel=12][300209 pkts rcvd][0 pkts dropped]
Total Pkts=300209/Dropped=0.0 %
300209 pkts – 261259342 bytes [149968.1 pkt/sec - 1044.09 Mbit/sec]
=========================
Actual Stats: [channel=12][101524 pkts][1001.8 ms][101342.0 pkt/sec]
=========================
Absolute Stats: [channel=13][293733 pkts rcvd][0 pkts dropped]
Total Pkts=293733/Dropped=0.0 %
293733 pkts – 259477621 bytes [146733.0 pkt/sec - 1036.97 Mbit/sec]
=========================
Actual Stats: [channel=13][97021 pkts][1001.8 ms][96847.1 pkt/sec]
=========================
Absolute Stats: [channel=14][267101 pkts rcvd][0 pkts dropped]
Total Pkts=267101/Dropped=0.0 %
267101 pkts – 226064969 bytes [133429.1 pkt/sec - 903.44 Mbit/sec]
=========================
Actual Stats: [channel=14][86862 pkts][1001.8 ms][86706.3 pkt/sec]
=========================
Absolute Stats: [channel=15][266323 pkts rcvd][0 pkts dropped]
Total Pkts=266323/Dropped=0.0 %
266323 pkts – 232926529 bytes [133040.5 pkt/sec - 930.86 Mbit/sec]
=========================
Actual Stats: [channel=15][91437 pkts][1001.8 ms][91273.1 pkt/sec]
=========================
Aggregate stats (all channels): [1463243.0 pkt/sec][15023.51 Mbit/sec][0 pkts dropped]
=========================
