
Best practices for using Bro IDS with PF_RING ZC. Reliably.


Zero-copy technologies such as PF_RING ZC allow applications to read packets directly in memory, with no intermediary involved, be it the kernel or a memory copy. This is why with ZC you can easily fill up a 10 Gbit line using a single thread and a single network card queue. The drawback of zero copy is that applications must be well behaved: the same packet is shared across multiple applications, so if one application pollutes the packet memory, the problem affects all consumers. The same happens when an application crashes and writes to (unexpected) packet memory locations, corrupting other processes’ memory.

You might say “ok, but apps should not crash or misbehave”. This is a fair statement, but sometimes it is not the case. For instance, many Bro users rely on PF_RING to accelerate packet capture, leveraging PF_RING’s speed and its ability to distribute traffic across multiple consumer queues using the zbalance_ipc clustering application. Unfortunately Bro workers sometimes die unexpectedly, and this requires zbalance_ipc to be restarted, as a dying worker might have corrupted the memory of the other workers. Even though in practice our “zbalance_ipc restart required” policy is over-cautious, as Bro is unlikely to corrupt the memory, we cannot risk that a fault on one worker creates problems for all the others.

To overcome this problem, there is a simple solution you can use to isolate the various packet consumers so that they can be restarted independently in case of a crash: load the dummy network driver, specifying a number of ‘fake’ interfaces that matches the number of application instances across which you want to load balance the traffic.

For example, for 4 consumers, do:

modprobe dummy numdummies=4
ifconfig dummy0 up
ifconfig dummy1 up
ifconfig dummy2 up
ifconfig dummy3 up

Run zbalance_ipc (make sure you use the latest PF_RING code), specifying the number of consumer applications and remapping each queue to a ‘safe’ dummy interface using -r <queue id>:<dummy interface> for each queue. In the example below zbalance_ipc load balances the traffic from p2p2 (-i zc:p2p2) across 4 queues/consumers (-n 4) using an IP-based hash function (-m 1), remapping queue 0 to the dummy interface dummy0 and so on.

zbalance_ipc -i zc:p2p2 -n 4 -m 1 -c 2 -r 0:dummy0 -r 1:dummy1 -r 2:dummy2 -r 3:dummy3

You can now run an application on each ‘dummy’ interface.

# ./pfcount -i dummy0
Using PF_RING v.6.3.0
Capturing from dummy0 [86:1E:71:67:FA:40][ifIndex: 56]
# Device RX channels: 1
# Polling threads:    1
Dumping statistics on /proc/net/pf_ring/stats/8712-dummy0.387
=========================
Absolute Stats: [1472571 pkts rcvd][0 pkts dropped]
Total Pkts=1472571/Dropped=0.0 %
1'472'571 pkts - 123'695'964 bytes
=========================

=========================
Absolute Stats: [2947157 pkts rcvd][0 pkts dropped]
Total Pkts=2947157/Dropped=0.0 %
2'947'157 pkts - 247'561'188 bytes [2'946'938.92 pkt/sec - 1'980.34 Mbit/sec]
=========================
Actual Stats: 1474586 pkts [1'000.07 ms][1'474'476.88 pps/0.99 Gbps]
=========================

Now all you have to do is to configure Bro to capture packets from the dummy interfaces instead of reading them directly from zbalance_ipc.
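For instance, if you manage Bro through BroControl, a minimal node.cfg sketch for a single-box cluster could look like the one below (host names and the number of workers are assumptions to adapt to your setup). This assumes Bro is linked against the PF_RING-aware libpcap, so that capturing from dummy0 still goes through PF_RING.

[manager]
type=manager
host=localhost

[proxy-1]
type=proxy
host=localhost

[worker-1]
type=worker
host=localhost
interface=dummy0

[worker-2]
type=worker
host=localhost
interface=dummy1

# workers 3 and 4 follow the same pattern, one per dummy interface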


Monitoring BitTorrent Traffic with ntopng


ntopng has been designed not just for network administrators, but also for small companies and in particular for families. How often have you seen traffic on your network that you did not expect and asked yourself what it was about? A good example is BitTorrent, a protocol that can be used for efficiently downloading files, and not just copyright-protected content (unfortunately this is how the protocol is usually perceived by the network community). If you are wondering what your colleagues/children are downloading using BitTorrent, ntopng can now help you.

In the latest development version, ntopng (thanks to nDPI) can now decode (and not just detect) BitTorrent traffic, extract the hashId of the files being searched/downloaded, and tell you what those files are. Of course, if you use -F this information is saved in MySQL so that you can run your own queries on it.
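If you prefer to query this information yourself, a hypothetical query against the flowsv4 table could look like the one below (column names and the nDPI protocol id are assumptions: check your schema with DESCRIBE flowsv4 and the BitTorrent id against your nDPI build):

mysql> SELECT INET_NTOA(IP_SRC_ADDR) AS src, INET_NTOA(IP_DST_ADDR) AS dst, INFO
    ->   FROM flowsv4
    ->  WHERE L7_PROTO = 37      -- assumed nDPI id for BitTorrent
    ->    AND INFO <> ''         -- INFO is assumed to hold the BitTorrent hashId
    ->  ORDER BY LAST_SWITCHED DESC
    ->  LIMIT 20;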

In case you have BitTorrent traffic on your network you can check it from the interface stats

Screen Shot 2016-02-28 at 09.24.15

or by looking at the flows. In the Info column you can see a hash

Screen Shot 2016-02-28 at 09.22.01

that is displayed in full when clicking on the blue Info button. In this case you will see the flow information, and the BitTorrent hash becomes a clickable hyperlink.

Screen Shot 2016-02-28 at 09.22.14

If you are wondering how to map the hashId to a file name (so you can know what file has been downloaded), you can click on the hash hyperlink and Google will tell you what file is being downloaded.

Screen Shot 2016-02-28 at 09.22.20

Now you know how to monitor your colleagues’/children’s downloads and decide whether they are appropriate or not.

Happy downloading!

 

Exploring Historical Data Using ntopng: Part 2


ntopng is able to deliver monitored traffic flow data to a MySQL server. We have already discussed how to configure ntopng to deliver this data in another blog post.

In this article we discuss the new features that allow you to dig deep into the flows dumped to MySQL using the ntopng web GUI. Earlier ntopng releases didn’t allow for thorough historical analyses: they only gave access to recorded flows, with limited sorting features.
With the advances made in the latest ntopng Pro Small Business it is possible to drill-down historical flows and obtain, among other things:

  • Talkers
    • Historical IPv4 and IPv6 talkers;
    • The peers list of each talker, together with the amount of traffic exchanged with any peer;
    • The application protocols (layer 7) traffic exchanged between a talker and any of its peers.
  • Layer-7 Application Protocols
    • Historical layer-7 application protocols;
    • The talkers list of each application protocol, together with the amount of traffic that involves each talker;
    • The peers list that exchanged traffic with a talker using any given application protocol.

All the information pointed out above can be sorted using multiple criteria such as traffic exchanged, number of packets and number of flows. Moreover, the search criteria generated automatically while drilling-down the data can be saved and re-used directly in the future.

Additionally, it is possible to download raw flows or even pcap files matching the search criteria. Pcap files can be downloaded if an nBox with n2disk has been configured via ntopng preferences.

In the remainder of this post we show how to use the ntopng web GUI to dig deep into the recorded flows.
Drill-down features will only be available if ntopng was started with a properly configured MySQL database specified via the -F modifier. We refer the reader to this post for a detailed explanation.
Assuming ntopng has been properly started and instructed to export monitored flow data to MySQL, extra tabs will become available in the historical page of both interfaces and local hosts.

 

Chart Tab

The historical page shows in its default tab a chart of the data. The chart is clickable and zoomable to go back in time and select a time-span of interest.

01-n-historical-chart

 

IPv4 and IPv6 Flows Tab

On the right of the Chart tab there are four additional tabs. The first two visualize monitored flows in the selected time-span. If no IPv4 (IPv6) flows are present in the observation period, then the corresponding tab will be automatically hidden. A handy download button is available to fetch a pipe-separated txt file with all the flows.

02-n-browse-download-ipv4-flows

Talkers Tab

As soon as the Talkers tab is selected, a dynamic table with the talkers of the selected time-span is automatically loaded. By default, talkers are sorted based on the amount of traffic generated. Table columns are clickable to specify a custom sort order.

03-n-historical-talkers

Next to each talker there is an icon that can be clicked to inspect the peers that have exchanged traffic with the talker in the selected time period. The peers list loaded is sorted, by default, in a decreasing order of traffic exchanged.

04-n-historical-talkers-peers

The icon on the right of each peer can be clicked to inspect the Layer-7 application protocols that were used by the talker and the selected peer.

05-n-historical-talkers-peers-applications

Both the talker itself and the application protocols it exchanged with any of its peers can be saved simply by clicking on the heart shown in the top breadcrumb. Saved items will be readily available in dropdown menus for future quick selections.

06-n-historical-talkers-favorites

 

Protocols Tab

A dynamic table showing Layer-7 application protocols, sorted by traffic volume, is loaded and shown as soon as the Protocols tab is selected. Different sort criteria can be selected by clicking on column headers.

07-n-historical-protocols
An icon is shown on the right of every application protocol and can be clicked to drill down into the talkers that have used that protocol. The resulting talkers list is shown in a table that, by default, is sorted according to the traffic volume.

08-n-historical-protocols-talkers

In order to go deeper and browse the list of peers that interacted with a talker using a given protocol, it suffices to click on any ‘double-arrow’ icon next to each talker. A new table with the peers list is shown.

09-n-historical-protocols-talker-peers

Layer-7 application protocol talkers and their peers lists can be saved by clicking on the heart icons shown in the navigation breadcrumb. Saved items will be shown in two dropdown menus.

10-n-historical-protocols-favorites

How to Build a 100$/€ “Augmented” NetFlow/IPFIX Probe


One of the main problems of flow-based devices is their high cost or poor monitoring capabilities (nothing beyond IPv4 packets and bytes). At ntop we believe that network visibility is much more than this, as people in 2016 want application performance, deep packet inspection, export to big-data systems and much more. We have been experimenting with low-cost hardware devices for a long time, but finding a powerful yet cheap device with embedded port-mirror capability isn’t that simple (or cheap). Finally we have found a solution for families and small businesses who want to see what’s happening on their network without spending much. The Ubiquiti EdgeRouter X (ntop has no relationship with Ubiquiti Networks, we’re just happy users) is a good device for our purposes; however, as it has only 128 MB of free RAM, we cannot run ntopng (at least the current version) on it, but nProbe works on it.

As depicted below, the nice thing about this device is that it comes with 5 ethernet ports that can work independently or grouped. This means you have plenty of ports for connecting your Internet connection and your LAN to it. The trick is to use the device transparently, so that you do not have to mess with IP addresses, DHCP or gateways. To do this you can configure the device to group the ports via the bridge br0, from the web GUI (shown in the screenshot below) or from the CLI.
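For reference, here is a rough sketch of the equivalent EdgeOS CLI commands (the interface names, and which port is left out of the bridge, are assumptions to adapt to your cabling):

configure
set interfaces bridge br0
set interfaces ethernet eth1 bridge-group bridge br0
set interfaces ethernet eth2 bridge-group bridge br0
set interfaces ethernet eth3 bridge-group bridge br0
set interfaces ethernet eth4 bridge-group bridge br0
commit
save
exit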


Screen Shot 2016-03-05 at 10.59.17

The difference between br0 and switch0 is that in the former case packets are bridged by Linux running on the device, whereas in the latter it is the hardware switch that does it. Although the second option performs better, the first one is what we are looking for: packets hit Linux and thus nProbe can see them, whereas in the other case we would see just broadcast/multicast packets.

EdgeRouterX

In essence you can use the first port for exporting flows, and group the other ports as an ethernet switch where you attach your LAN/Access Point and the Internet connection/xDSL Router. Then you need to do this:

  • Download the nProbe package for EdgeRouter X to your PC.
  • scp <nprobe package you downloaded>.deb ubnt@<router IP> (the default password is ubnt)
  • ssh ubnt@<router IP>
  • sudo su
  • dpkg -i <nprobe package you downloaded>.deb
ssh ubnt@192.168.1.8
Welcome to EdgeOS

By logging in, accessing, or using the Ubiquiti product, you
acknowledge that you have read and understood the Ubiquiti
License Agreement (available in the Web UI at, by default,
http://192.168.1.1) and agree to be bound by its terms.

ubnt@192.168.1.8's password: 
Linux ubnt 3.10.14-UBNT #1 SMP Fri Jan 29 20:03:40 PST 2016 mips
Welcome to EdgeOS
ubnt@ubnt:~$ sudo su
root@ubnt:/home/ubnt# dpkg -i nprobe_7.3.160305-4921_mipsel.deb 
(Reading database ... 33861 files and directories currently installed.)
Preparing to replace nprobe 7.3.160305-4921 (using nprobe_7.3.160305-4921_mipsel.deb) ...
Unpacking replacement nprobe ...
/sbin/ldconfig: /usr/local/lib/libhiredis.so.0.13 is not a symbolic link

/sbin/ldconfig: /usr/lib/libzmq.so.3 is not a symbolic link

Setting up nprobe (7.3.160305-4921) ...
Rebuilding ld cache...
/sbin/ldconfig: /usr/lib/libzmq.so.3 is not a symbolic link

Adding the nprobe startup script
Making the /etc/nprobe directory...
Making the /var/log/nprobe directory...
root@ubnt:/home/ubnt# nprobe --version

Welcome to nProbe v.7.3.160305 (r4921)

Copyright 2002-16 ntop.org

Build OS:      EdgeRouter X
SystemID:      E809B957499602D2
Edition:       nProbe Embedded
License:       6E0AB140A8B1596D1AA5DB7E4C80064D148871506412BF6396 [valid license]
License Type:  Permanent License 
Maintenance:   Until Sun Mar  5 12:57:44 2017 [362 days left]

nProbe is subject to the terms and conditions defined in
the LICENSE and EULA files that are part of this package.

nProbe also contains third party code:
Radix tree code - (C) The Regents of the University of Michigan
                      ("The Regents") and Merit Network, Inc.
sFlow collector - (C) InMon Inc.

Now what you need to do is to create a configuration file for nprobe and start it. Note that:

  • the EdgeRouter X is based on the mipsel architecture whereas other EdgeRouter models are not, so make sure you pick the right nProbe version.
  • nProbe on the EdgeRouter is the embedded edition (just like for the Raspberry Pi), which has the same features as the Pro version on x64 but is much cheaper (you can buy a license on our shop, unless you are an educational user, in which case you can get it for free, as with all our products). In total the hardware+software combination will cost ~100$/€.

As usual, you can instruct nProbe to send flows to ntopng for collection via ZMQ as follows.

ubiquity> # nprobe --zmq tcp://0.0.0.0:1234 -i br0 -n none

your PC>  # ntopng -i tcp://ubiquity:1234

Note that you can enable on nProbe all the standard information elements such as packets out-of-order/retransmissions, DPI, HTTP URLs, DNS query dissection, VoIP traffic analysis, and more. The fact that this solution is so cheap does not mean that it is limited: it includes the same features you can find on a more powerful machine; the only difference is the hardware platform, which makes it suitable for homes and small businesses rather than for an enterprise.
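As an illustration, if instead of ZMQ you were exporting NetFlow v9/IPFIX to a traditional collector, those extra elements could be requested with a custom -T template, roughly as sketched below (the collector address is a placeholder and the exact field names should be checked against the nProbe documentation for your build):

nprobe -i br0 -n 192.168.1.100:2055 \
       -T "%IPV4_SRC_ADDR %IPV4_DST_ADDR %L4_SRC_PORT %L4_DST_PORT %PROTOCOL %IN_BYTES %IN_PKTS %OUT_BYTES %OUT_PKTS %L7_PROTO_NAME %HTTP_URL %DNS_QUERY"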

Happy traffic analysis!

Advanced Flow Collection with ntopng and nProbe


In flow-based monitoring there are two main components: the probe (a.k.a. flow exporter) and the flow collector/analyser. NetFlow/sFlow usually follows a push paradigm: network devices have almost no memory/storage and thus send out data as soon as possible towards a collector. This architecture is suboptimal, as the probe pushes the same data to all collectors (i.e. collector X cannot tell the probe that it is interested only in HTTP-based flows; it has to collect everything and discard the unneeded information), and also because whenever a new collector has to be added, the probe has to be reconfigured (i.e. no dynamic attach/detach). Another issue is that the data is exchanged in the clear, meaning that anyone intercepting the flows sent by the probe can find out what happens in the monitored network; we are aware that you can set up a dedicated VLAN/VPN to avoid this, but this practice adds complexity.

ntopng has reversed this paradigm using a poll-mode architecture.

PollMode

Via ZMQ, ntopng dynamically subscribes to the probe and tells it what type of flow data it is interested in; the probe then sends ntopng only that information, instead of sending all flows as traditional probes do. This practice optimises network traffic and limits the CPU cycles to those really necessary to collect flows.

This architecture, however, is unable to operate across a NAT.

PushMode
In fact, if you run the ntopng collector on a public IP (e.g. on a cheap VPS host) and nProbe on a private network, ntopng is unable to connect to the probe and thus flow collection won’t work.

In the latest development versions of ntopng and nProbe, we have introduced several enhancements to address these issues. In particular:

  • nProbe/ntopng can now operate both in pull/push mode.
  • Flow information is now compressed and (optionally) encrypted: your privacy is preserved even when sending traffic over the Internet.
  • ntopng now subscribes to nProbe for second-based throughput statistics, so that you will see realtime throughput statistics in ntopng even if you configure the probe to aggregate flows at 1 minute or more.

Let’s see how to use flow collection. Suppose that you run ntopng on host X and nProbe on host Y.

Poll Mode
host X> ntopng -i "tcp://Y:1234" --zmq-encrypt-pwd myencryptionkey
host Y> nprobe -n none --zmq "tcp://*:1234" --zmq-encrypt-pwd myencryptionkey
Push Mode
host X> ntopng -i "tcp://Y:1234" --zmq-collector-mode --zmq-encrypt-pwd myencryptionkey
host Y> nprobe -n none --zmq "tcp://*:1234" --zmq-probe-mode --zmq-encrypt-pwd myencryptionkey
Notes:
  • All the ZMQ options use a double dash, e.g. --zmq, --zmq-encrypt-pwd, etc.
  • The --zmq-encrypt-pwd option is optional: if you set it, data is encrypted with the specified symmetric key. In ntopng, in case you have configured multiple probes, the same encryption key is used for all probes (i.e. you cannot set a per-probe encryption key).
  • Flows are always sent in compressed format. Space savings can range from -30/-40% up to -90%.
  • ntopng now automatically subscribes to nProbe for 1 second traffic updates.

In ntopng you will now see an enhanced view of your probes, showing not just traffic stats but also additional information such as the remote probe IP (even if behind a NAT), the public IP from which flows are collected, and the speed of the interface being monitored by the probe.

nProbe Stats

In essence you can now see in realtime your flows, traffic statistics and remote probe information, all while using as little bandwidth as possible and protecting your flow information with encryption.

We believe that with these enhancements we have created a very advanced flow-collection architecture, one that addresses the concerns of using the flow paradigm over the Internet and gives users a degree of flexibility not available with traditional probes/collectors.

Commoditizing 10/25/40/100 Gbit with PF_RING ZC on Intel FM10K


As you know we have been working at 100 Gbit for a while, not just in terms of network speed, but also in terms of redesigning existing applications to be more efficient and powerful (BTW stay tuned, as very soon we will introduce nProbe Cento). With the introduction of the new Intel FM10K ethernet controller family, it is now possible to support 10/25/40/100 Gbit using one single NIC (just replace the QSFP+ module to change network speed) on a product that is in the 1k USD range for a dual port. Another major feature of this product is the embedded programmable ethernet switch that can be used to filter/load balance/tap traffic using up to 16k rules. In essence, a dual-port NIC has two external ethernet ports (where you plug the ethernet cables) connected to the two internal ports (those you see with ifconfig on Linux) via the programmable switch. This way you can reduce the amount of traffic that hits the internal ports and cross-connect selected traffic to the external ethernet ports.

This said, we have added support for the FM10K in PF_RING ZC; the code is available on GitHub so that you can start playing with it while we are still optimising it. We have run some experiments on an FM10420-based NIC, provided courtesy of Silicom, using 40 Gbit direct-attached cables connected to an Intel XL710, on a low-end Supermicro server based on an Intel E3-1230 v3 @ 3.30GHz.

  • RX – 60-byte – 1 queue
    Actual Stats: [16’632’029 pkts rcvd][1’000.05 ms][16’631’031.13 pps][11.18 Gbps]
  • RX 1500-byte (no drop – 40G traffic generator based on XL710 is the limit) – 1 queue
    Actual Stats: [3’146’534 pkts rcvd][1’000.08 ms][3’146’272.85 pps][38.36 Gbps]
  • TX 60-byte – 1 queue
    TX rate: [current 19’076’813.77 pps/12.82 Gbps][average 18’945’661.37 pps/12.73 Gbps]
  • TX 1500-byte (E3 memory bandwidth is the limit) – 1 queue
    TX rate: [current 4’444’398.66 pps/54.19 Gbps][average 4’377’213.40 pps/53.37 Gbps]

The experiments confirmed that with one RSS queue it is possible to handle more than 10 Gbit with minimum-size packets, so that with 4 queues we can do 40 Gbit line rate. The embedded ethernet switch has been programmed (with a command line tool) to allow external traffic to hit the internal ports; as you can see from the results, we managed to generate ~53 Gbit of traffic, which of course cannot fully reach the external ports running at 40 Gbit.

These results are preliminary (apparently the Intel drivers do not yet fully support all features, jumbo frames for instance), but we have been using the NIC for some weeks and it is working reliably. Used at 40 Gbit, this new product is better than the XL710. We are now working on integrating the ethernet switch into PF_RING so that apps can drive it directly and thus fully benefit from this product.

If you are planning to monitor 10/25/40/100 Gbit networks, it’s definitely time to try out this new ZC driver and provide us with feedback. Enjoy!

PS. If you are wondering if you can run ntopng/nprobe/n2disk/zsend/zcount at 100 Gbit, the answer is yes.

How to Analyse MikroTik Traffic Using ntopng


MikroTik routers are pretty popular, in particular in the wireless community, and many users of the original ntop are familiar with them. With the advent of ntopng, we have decided not to support NetFlow natively in ntopng, due to the many “dialects” of the protocol, and to leave to nProbe the task of converting flows into something ntopng can understand. For this reason the workflow is the one depicted below:

Mikrotik

The first thing to do is to configure NetFlow (both v5 and v9 can be used) on the MikroTik; this can be done either from the GUI or from the command line. Suppose that both nProbe and ntopng run on the same PC at 192.168.8.20 and that nProbe collects flows on port 2055. From the command line the configuration is

/ip traffic-flow
set active-flow-timeout=1m enabled=yes
/ip traffic-flow target
add dst-address=192.168.8.20 port=2055 v9-template-timeout=1m

that should be reported as

[admin@MikroTik] > /ip traffic-flow print
              enabled: yes
           interfaces: all
        cache-entries: 64k
  active-flow-timeout: 1m
inactive-flow-timeout: 15s
[admin@MikroTik] > /ip traffic-flow target print detail
Flags: X - disabled
 0   src-address=0.0.0.0 dst-address=192.168.8.20 port=2055 version=9
     v9-template-refresh=20 v9-template-timeout=1m

At this point you need to start nProbe and ntopng on 192.168.8.20 as follows

nprobe -i none -n none -3 2055 --zmq tcp://127.0.0.1:1234
ntopng -i tcp://127.0.0.1:1234

nProbe will receive flows, convert them to ZMQ/JSON and send them to ntopng running on the same host. You can now access the ntopng GUI at http://192.168.8.20:3000 and see incoming flows.

Note that if you collect NetFlow:

  • Flows are emitted periodically (in the example above flows are cut at 1 min max duration, with a 15 sec idle timeout).
  • As data does not arrive at a constant rate as with packets (flows are received periodically), the network throughput shown at the bottom of the ntopng page is not as smooth as it is when capturing packets from a physical interface.

We remind you that nProbe requires a license (you can use either the community or the professional edition of ntopng), which you can find on our shop; if you belong to an educational or non-profit organisation we give it away for free.

 

Learn more about ntopng at RIPE72


RIPE

This week we will attend the RIPE 72 meeting in Copenhagen, DK. Thanks to Martin Winter (co-founder of NetDEF) we will speak about ntopng at two events on Thursday, May 26th:

  • At 11AM we will introduce ntopng at the Open Source Working Group.
  • At 3PM in room “Akvariet 2” we will run a two-hour tutorial about ntopng and the current and future developments we are carrying on.

These events are a good opportunity to learn more about our tools and to discuss extensions, future work items, and issues you would like us to tackle in the next release, not to mention a good occasion to meet in person rather than via the mailing list.

Hope to see you at RIPE this week!

 

 


Released nDPI 1.8


This is to announce the release of nDPI 1.8. In this version we have updated many protocol dissectors and simplified the API, and we have started to introduce changes that will be further improved in future versions. The whole changelog can be found below. Many thanks to all contributors!

Changelog

  • Recoded DNS and QUIC dissectors
  • Code passed checks of static code analysers
  • Added API wrappers (to be used in apps using nDPI) for substring-search
    • ndpi_init_automa()
    • ndpi_free_automa()
    • ndpi_add_string_to_automa()
    • ndpi_finalize_automa()
    • ndpi_match_string()
    • set_ndpi_malloc()
    • set_ndpi_free()
  • Added new ndpi_detection_giveup() API call to call before giving up for a given flow
  • Simplified API for init/term of the nDPI library
  • Simplified code of the ndpiReader test application
  • Added stronger checks for some dissectors to avoid buffer overflows
  • Fixed many memory-related bugs thanks to the ndpi-scapy tool
  • Added ability to extract BitTorrent hash (and eventually peerId)
  • Removed unused code to compile nDPI in Linux kernel (not a good idea to use DPI in kernels).
  • Added various packet encapsulations in ndpiReader
  • Improved dissectors
    • Tor
    • Dropbox
    • Skype
    • KakaoTalk
    • WhatsApp (Added WhatsApp Voice)
    • Microsoft
    • Viber
    • Google
    • MS OneDrive
    • SIP
    • TFTP
    • QQ
    • NetBIOS
    • HTTP (over IPv6)
    • 6in4tunnel
    • RTP
    • Ebay
    • HEP2 protocol detection support (sipcapture)
    • BitTorrent
    • Netflix
    • Amazon Cloud
    • Facebook
  • Removed some obsolete protocols (e.g. WinMX)
  • Added new dissectors
    • OpenDNS
    • Weibo
    • Mqtt (IoT protocol)
    • CoAP
    • HTTPDownload (to tag HTTP flows whose goal is to download files and not to transfer HTML)
    • MS Lync
    • Ubiquity AirControl 2

How to Build a 2×10 Gbit Packet Recorder using n2disk and PF_RING (2016 Update)


Back in 2014 we explained how to build a continuous packet recorder using n2disk and PF_RING. Since then computing architectures have progressed and we have added support for new ethernet controllers, so it’s now time to refresh that post for all those willing to build a box themselves. The specs below are for 2 x 10 Gbit; for 1 x 10 Gbit you can use half of the components in most cases.

  • CPU: we advise an Intel E5 with at least 3 GHz and 8 cores for all options (indexing and compression). Options include the E5-2667 v3 or E5-2687W v4. If you do not need anything but the pcap (i.e. no index or on-the-fly pcap compression) you can use an E3 processor (e.g. E3-1271 v3), which is significantly cheaper. The CPU type depends on the network adapter you plan to use (see below in this post).
  • RAM: fill up all the available memory channels so that all the available memory bandwidth is used. This means that, depending on the CPU you select, you need to use at least 4 memory modules. If unsure, check the CPU specs or, in the very worst case, fill up all the available memory slots.
  • RAID controller: we suggest an LSI or Adaptec controller with at least 1 GB of onboard memory cache. Note that CPUs often do not have enough slots for the network adapter and the RAID controller (in many servers the first CPU has often only one slot available). This means that if you need extra slots you need to buy two CPUs and in this case have one RAID controller and network adapter per node (so 2 CPUs, 2 RAID controllers, 2 network adapters with one ethernet port each).
  • Disks: we suggest 24 x 10k RPM SAS drives (the minimum is 16, but we advise not less than 20). If you only have 10 Gbit, you can use 8 (minimum) or 10 (suggested) disks. While SSDs and NVMe disks are faster than SAS drives (and thus you would need fewer of them), we do not advise using them as:
    • You need space so you will need many disks anyway.
    • Flash memories are guaranteed for a specified number of write cycles per day (write endurance), which does not make them suitable for permanent 24x7 writing.
  • Network adapters (A-Z) supported by PF_RING ZC. Both Myricom and Napatech support hardware timestamps and GPS synchronisation, whereas Intel does not.
    • Intel
      • If you need to merge 2 ports into one, usually you cannot go above 18 Mpps. Hence we advise you to use two NUMA nodes, where each node captures one traffic direction and merging happens at runtime during packet extraction.
    • Myricom
      • 10 Gbit Myricom NICs can merge packets in hardware up to 21 Mpps. In this case you can use only one NUMA node and save money on the RAID controller and CPU/RAM. If you need line rate instead, you need two controllers etc., as in the Intel case.
    • Napatech
      • Napatech NICs can merge 2×10 Gbit in hardware with no packet drops at any packet size. Although these NICs are more expensive than the others listed above, one NUMA node is enough, so you can save money on the server and thus the price gap decreases.

In conclusion:

  • We have provided a list of components you can use for building your own traffic recorder: they depend on the network adapter you plan to use and on the requested performance.
  • If you need the cheapest recorder and you do not care about hw timestamps or indexing, an E3 box with an Intel NIC is a good start.
  • For most users a single-NUMA-node E5 box with a Myricom adapter is the best compromise in price, performance and features.
  • For high-end users Napatech is definitely the best choice you can find on the market.

PF_RING 6.4 Just Released


This is to announce the release of PF_RING 6.4 that contains various improvements, new network adapters supported in ZC mode (including Intel 100 Gbit), and bug fixes. Developers can access the documentation for the PF_RING 6.4 API in Doxygen format.

Changelog

  • PF_RING Library
    • Improved Myricom support, new naming scheme to improve usability
    • Improved Napatech support, 100G support
    • Improved Accolade support
    • New Invea-Tech support
    • New API pfring_get_metadata to read ZC metadata
    • New pfring_get_interface_speed API
    • New API pfring_version_noring()
    • C++ wrapper improvements
    • Removed DNA legacy
  • ZC Library
    • New API pfring_zc_set_device_proc_stats to write /proc stats per device
    • New API pfring_zc_set_device_app_name to write the application name under /proc
    • New API pfring_zc_get_cluster_id to get the cluster ID from a queue
    • New API pfring_zc_check_device_license for reading interface license status
    • New API pfring_zc_get_queue_settings to read buffer len and metadata len from queue
    • New API pfring_zc_get_queue_speed to read the link speed
    • New pfring_zc_open_device flag PF_RING_ZC_DEVICE_NOT_PROMISC to open the device without setting promisc mode
    • New packet metadata flags, including IP/L4 checksum (when available from card offloads)
    • Improved pfring_zc_builtin_gtp_hash
  • PF_RING-aware Libpcap/Tcpdump
    • New libpcap v.1.7.4
    • New tcpdump v.4.7.4
    • Libnpcap support to let libpcap-based applications (i.e. tcpdump) read compressed .npcap files produced by n2disk
    • Native nanosecond timestamps support
    • Tcpdump patch to close the pcap handle in case of errors (this avoids breaking ZC queues)
  • PF_RING kernel module
    • Fixed BPF support on kernel 4.4.x
    • Fixed RSS support on Centos 6 (it was reporting the wrong number of queues, affecting RSS rehash)
    • Reworked promisc support: handling promisc through the pf_ring kernel module in order to automatically remove it even when applications drop privileges
    • VLAN ID fix in case of vlan stripping offload enabled (it was including priority bits)
  • Drivers
    • New i40e-zc v.1.5.18
    • New fm10k-zc v.0.20.1
    • Support for latest Ubuntu 16, RHEL 6.8, Centos 7
    • Fixed i40e-zc initialisation failures due to promisc reset
    • Fixed i40e-zc ‘transmit queue 0 timed out’
    • Fixed e1000e-zc memory leak
  • Examples
    • Added ability to reforge MAC/IP also when reading packets from pcap file/stdin in pfsend
    • Added -f option for replaying packets from pcap file in zsend
    • Added -o option to pfsend to specify an offset to be used with -b
    • Added -r option to use egress interfaces instead of queues in zbalance_ipc
  • Snort DAQ
    • Fixed DAQ-ZC buffer leak in IPC mode
    • Fixed DAQ_DP_ADD_DC support
    • Fixed support for DAQ < 2.0.6

n2disk 2.6 Just Released


This is to announce the release of n2disk 2.6. In this release we have made many changes to the indexing system, adding a new flow-based index that should improve packet retrieval as well as pave the way to the flow+packet+L7-inspection+index integration that will be completed with the next nProbe Cento release later this month. This will enable you to find packets based on the L7 protocol: for example you can do “host 192.168.1.3 and l7proto WhatsApp”. Stay tuned for the Cento release.
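For example, once a Layer-7-aware index has been built, an extraction filtered on the application protocol could look like the following (paths and the time range are placeholders):

npcapextract -t /storage -b "2016-06-01 10:00:00" -e "2016-06-01 10:05:00" \
             -o /tmp/whatsapp.pcap -f "host 192.168.1.3 and l7proto WhatsApp"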

Finally we would like to ask the community if there is interest in us releasing the code of various n2disk components to let people interact with n2disk. If you have a project/opinion please speak up!

Changelog

  • n2disk (recording)
    • Cento integration for metadata import (including L7 proto and flow-ID)
    • Added L7 protocol support to the index (when used in combination with Cento)
    • New flow-based index (-1 2) including support for flow-ID (64-bit)
    • New –not-promisc|-3 flag to capture traffic without promisc mode
    • New –capture-direction|-2 for specifying the capture direction
    • New –packet-slicing option for cutting packets after the specified header
    • Extended -n/-m options: -n/-m -1 means unlimited number of folders/files
    • Support for Ubuntu 16
    • Removed n2disk10gdna, n2disk10gzc is now n2disk10g
  • npcapextract (extraction)
    • Extended Fast-BPF filters with L7 support (syntax: l7proto <protocol>)
    • New -g option to set core affinity for the extraction thread
    • New -s option to set extraction snaplen
    • Filtering improvements: falling back to standard BPF in case the extraction filter is not supported by Fast-BPF
    • New -O option to write pcap to stdout (i.e. pipe the result to tshark -i - / wireshark -k -i -)
    • New -0 option to write an empty file on empty result (useful with -O)
    • Support for legacy and new index (both standard with L7 support and flow-based index)
    • Improved extraction with O_DIRECT support
    • Compressed .npcap extraction fix
    • Index file descriptors leak fix
    • Memory leak fix
  • Tools
    • New n2membenchmark tool for benchmarking system performance

Tweaking MySQL to Improve ntopng Flows Storage Space Usage


This is the first post in a series that gives hints on how to tweak MySQL settings to better accommodate the flows exported by ntopng. In particular, this post discusses how to improve disk space usage. Hopefully, further posts with tips and tricks on how to improve responsiveness and reduce query time will be published in the future.

ntopng MySQL flow export can be enabled using the -F command line option. Once enabled, it is possible to choose, from the web UI preferences panel, the number of days exported flows will be retained in MySQL. By default this value is set to 30 days. Users may choose to adjust this setting on the basis of their disk space availability and the quantity of exported flows.

However, if MySQL is not configured properly, disk space usage may grow indefinitely even if old flows are constantly deleted. Indeed, flow deletion does not automatically release disk space, nor does it ensure that newly arriving flows will take the place of the older, deleted ones.

 

innodb_file_per_table

To make sure disk space can be reclaimed and that new flows will take the place of deleted flows, innodb_file_per_table must be enabled.

To check whether innodb_file_per_table is enabled one can run the following command in a mysql shell

mysql> show variables like "innodb_file_per_table";
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| innodb_file_per_table |  ON   |
+-----------------------+-------+
1 row in set (0.00 sec)

Please note that enabling innodb_file_per_table will have no effect on already existing tables. Existing tables will need to be re-created using ALTER TABLE <table_name> ENGINE=InnoDB.

innodb_file_per_table is enabled by default on MySQL server >= 5.6.
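If it is OFF, a minimal sketch of how to turn it on and rebuild the existing ntopng tables is shown below (the my.cnf location varies across distributions):

# in my.cnf (or a file included from it), then restart MySQL
[mysqld]
innodb_file_per_table = 1

mysql> ALTER TABLE flowsv4 ENGINE=InnoDB;
mysql> ALTER TABLE flowsv6 ENGINE=InnoDB;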

 

Reclaiming Disk Space

Disk space can be reclaimed — provided that innodb_file_per_table is enabled — by running OPTIMIZE TABLE on the tables used by ntopng, namely flowsv4 and flowsv6. OPTIMIZE TABLE will create, for each table it is run on, a new identical empty table. Then it will copy, row by row, data from the old table to the new one. In this process a new .ibd tablespace is created and the space is reclaimed. Optimizing a table is costly both in terms of time (a new table is created out of the old one) and in terms of space usage (the new table needs to be fully created before the old one can be deleted). Therefore, optimizing a table is something that should be planned by — and agreed with — the DBA.
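For instance, on the database configured via -F:

mysql> OPTIMIZE TABLE flowsv4, flowsv6;

With InnoDB, MySQL will report “Table does not support optimize, doing recreate + analyze instead”: this is expected, and the space is reclaimed nonetheless.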

 

Reusing Disk Space

Disk space occupied by deleted flows is re-used automatically by newly arriving flows when using innodb_file_per_table. However, it should be noted that this may lead to fragmentation issues. Running an OPTIMIZE TABLE periodically will re-pack the flows in the most efficient way.

 

Closing Remarks

To improve ntopng flow storage space usage in MySQL it is strongly recommended to enable innodb_file_per_table. However, even though new records will take the place of deleted ones, it should be noted that disk space is not reclaimed automatically. This means that, for example, in an environment with an average of 1 million flows dumped to MySQL every week, setting a 7-day retention period will yield an approximately constant MySQL disk space usage — that is, the space required to accommodate 1 million flows. If, however, 2 million flows are generated in a particular week, then the space will grow and will not be reclaimed automatically. From that point on, disk space usage will stay large enough to accommodate 2 million flows even if an average of 1 million flows is received in future weeks. Reclaiming disk space requires an OPTIMIZE TABLE on tables flowsv4 and flowsv6.

Introducing nProbe 7.4


This is to announce the release of nProbe 7.4. We have worked hard in this version to improve it in several ways: better integration with ntopng, improved computation of network performance metrics, the ability to export data to big-data systems, and more reliable VoIP quality metrics. However, the biggest innovation in this release is probe scriptability using Lua (see the nProbe User’s Guide for all details).

You can now perform actions on flows (e.g. if you see a DNS query for host www.ntop.org then execute action X) and start moving nProbe into new territory where network visibility is combined with actions performed when specific situations occur. On a smaller scale, this is similar to what Bro has done for years with its scripting language, but using the power of Lua and sitting on top of NetFlow/IPFIX. We hope you will enjoy this feature, which lets you execute on the probe side actions that you used to perform, much less efficiently, on the collector side.

Below you can find the complete nProbe changelog.

Enjoy nProbe!


nProbe 7.4 Changelog

Main New Features

  • Lua scriptability and support for plugins: DHCP, DNS, IMAP, RADIUS, SIP, SMTP, GTPv1, POP, FTP, HTTP
  • Ability to drop HTTP flows via Lua
  • Ability to push flow information into Lua (e.g., flow.protocol, flow.l7_proto)
  • ZMQ data encryption to safely exchange information with ntopng
  • ZMQ data compression to reduce the bandwith consumed when interacting with ntopng
  • ZMQ probe mode to seamlessly work behind firewalls / NATs
  • HTTP full requests/responses content dump
  • Ability to specify traffic capture direction (tx, rx, both)
  • Flows dump to syslog in JSON format
  • Flows export to Apache Kafka via the export plugin
  • Implemented SSDP and NETBIOS plugins
  • Implemented CAPWAP protocol support
  • MIPSEL builds

New Options

  • --add-engineid-to-logfile to append the NetFlow engineId to dump files
  • --bind-export-interface to bind export socket to the specified interface
  • --capture-direction to specify packet capture direction
  • --cli-http-port command line
  • --disable-startup-checks to prevent nProbe public ip detection
  • --host to capture packets from pcap interfaces rather than from a mirror/tap
  • --json-to-syslog to export flows to syslog in JSON format
  • --notify-plugin to notify users of a plugin activity immediately and not when the flow has matched
  • --online-license-check to check nProbe license online
  • --with-minimal-nprobe to build nProbe with a minimum set of dependencies
  • --zmq-encrypt-pwd to encrypt ZMQ data
  • --zmq-probe-mode to implement ZMQ probe mode
  • --http-dump-dir to dump HTTP logs
  • --http-content-dump-dir to dump full HTTP requests content
  • --http-content-dump-response to dump both HTTP requests and responses content with --http-content-dump-dir
  • --http-content-dump-dir-layout to specify layout to be used with --http-content-dump-dir
  • --http-exec-cmd to execute a command whenever an HTTP dump directory has been written
  • --minute-expire to force active flows to expire when a minute is past

Plugin Extensions

  • Extended the export template with %BITTORENT_HASH, %PACKET_HASH, %SSL_SERVER_NAM, %UPSTREAM_SESSION_ID, %DOWNSTREAM_SESSION_ID, %SRC_AS_MAP and %DST_AS_MAP
  • Extended the export template to include longitude and latitude (%SRC_IP_LONG, %SRC_IP_LAT, %DST_IP_LONG and %DST_IP_LAT)
  • Implemented SIP RTP support, handling of early export and support of OPTIONS messages
  • Extended GTPV1 plugin support to field GTPV1_RAT_TYPE
  • Extended GTPV2 plugin support to fields GTPV2_C2S_S5_S8_GTPC_IP and GTPV2_S2C_S5_S8_GTPC_IP, GTPV2_PDN_IP, GTPV2_END_USER_IMEI, GTPV2_C2S_S5_S8_GTPU_TEID, GTPV2_C2S_S5_S8_GTPU_IP, GTPV2_S2C_S5_S8_GTPU_TEID, GTPV2_S2C_S5_S8_GTPU_IP, GTPV2_C2S_S5_S8_SGW_GTPU_TEID, GTPV2_S2C_S5_S8_SGW_GTPU_TEID, GTPV2_C2S_S5_S8_SGW_GTPU_IP and GTPV2_S2C_S5_S8_SGW_GTPU_IP
  • GTPV2 plugin check to the export bucket when response isn’t the right type for a request and when a response has been done from the same peer as the request
  • Implemented GTPV2 0x4A message type management
  • Implemented Diameter support of fields DIAMETER_CLR_CANCEL_TYPE and DIAMETER_CLR_FLAGS and 3GPP type messages (317, 319, 320, 321, 322 and 323)
  • Extended the Diameter plugin in order to export “Diameter Hop-by-Hop Identifier” information
  • Added DOT1Q_SRC_VLAN/DOT1Q_DST_VLAN for supporting Q-in-Q VLANs
  • HTTP plugin export of HTTP_X_FORWARDED_FOR and HTTP_VIA fields
  • Extended DNS plugin with multicast DNS

ZMQ

  • ZMQ event handling to send interface statistics to ntopng
  • ZMQ statistics in flow collector mode
  • Ability to use ZMQ zlib compression
  • Enhanced ZMQ statistics with bps/pps rates

Miscellaneous

  • Added black list support for netflow data when nprobe is in proxy mode (ipv4 – V5,V9,IPFIX)
  • Twitter heuristics to guess user activities
  • Implemented support for TCP fast retransmissions

Introducing nProbe Cento: a 1/10/40/100 Gbit NetFlow/IPFIX Probe, Traffic Classifier, and Packet Shunter


Traditionally ntop has focused on passive traffic analysis. However we have realized that the traffic monitoring world has changed and looking at network flows is no longer enough:

  • People want to enforce policies: if the network is hit by a security threat you need to stop it, without having to tweak router ACLs or deploy yet another box to carry out this task.
  • Combine visibility with security: flow-based analysis has to be combined with traffic introspection, an activity that tools like Bro, Suricata and Snort carry out. Unfortunately these applications are CPU-bound, so there are two viable ways to boost their performance: reduce the packet-processing cost (this was already done years ago), or reduce the ingress traffic by not forwarding to these applications traffic that does not make sense to process (e.g. YouTube packets or the payload of encrypted SSL packets).
  • 40 Gbit networks (or multi-10Gbit links) are common, if not mainstream, and 100 Gbit is becoming commodity, therefore we need to be able to accurately (i.e., no sampling) monitor traffic at these rates.
  • Effectively utilize all the CPU cores, as multi-processor and multi-core architectures are becoming cheaper every day (today you can buy a 22-physical-core CPU, or a 10-physical-core CPU for less than 1000$).
  • Deep packet inspection is now pervasive and thus there is the need to augment the old “host 192.168.2.222 and port 80” BPF syntax with Layer-7 applications: “host 192.168.2.222 and l7proto HTTP”. In essence, if packets matter to you, the application protocol is yet another dimension you want to explore.
  • Integrate packet capture, traffic aggregation, flow processing, and packet analysis into a single physical box, because rack space matters and because now we have the technology and experience to achieve this.

Given all the challenges above, and considering that we have the hardware and the technology (PF_RING ZC) to face them, we have decided to rethink traffic monitoring and to design nProbe Cento (cento in Italian means one hundred). Cento is not yet another flow-based probe: it is a compact application designed to do a limited number of tasks very, very fast, up to 100 Gbit. This is because we want to compute flows out of the totality of the ingress traffic.
So, yes, cento can operate at 100 Gbit unsampled and, optionally, do packet-to-disk recording with on-the-fly indexing that includes Layer-7 applications.

In addition, Cento supports packet shunting to:

  • Save precious disk space when doing packet-to-disk recording.
  • Significantly alleviate the load on IDS/IPS.

If you are wondering what we mean by packet shunting, imagine this. For protocols or flows you do not care much about (e.g. a Netflix video or SSL), you may want to save the first packets of the flow (in SSL they contain the certificate exchange, in HTTP you can see the URL and the response) to preserve flow visibility, but at the same time avoid processing all the remaining flow packets. Why would you want to fill up your disks with encrypted traffic? Why would you want to forward a Netflix video to your IDS/IPS?

Some Use Cases

nprobe_cento_full_duplex_tap_aggregator_and_flow_exporter

Passive Flow Probe (Packets to Flows) with (optional) Traffic Aggregation

nprobe_cento_100Gbps_probe_and_bridge

Inline Flow Probe With Traffic Enforcement Capabilities

nprobe_cento_100Gbps_probe_and_traffic_aggregator_

Passive Flow Probe with Zero Copy Packet-to-Disk with Shunting and on-the-fly Indexing up to the Layer-7

nprobe_cento_100Gbps_probe_and_traffic_balancer

Passive Flow Probe With Zero Copy Balancing to IDS/IPS, including Shunting and Layer-7-based Filtering

Performance Evaluation

In order to evaluate the performance we present some results we obtained on a low-end Intel E3 server priced (server and network adapters) in the sub-1000$ range. In all the tests we have used Intel 10 Gbit NICs and the kernel bypass technology PF_RING ZC.

Flow Generation, Layer-7 Traffic Filtering and HTTP(S) Traffic Filtering

# ./cento-bridge -i zc:enp6s0f0,zc:enp6s0f1 -b doc/bridge.example -B doc/banned.example -v 4 -D 1

[bridge] default = forward
[bridge] banned-hosts = discard
[bridge] Skype = discard
[banned-hosts] 'facebook.com'
[banned-hosts] 'live.com'

Input: 128 IPs
20/Jun/2016 12:49:18 [NetworkInterface.cpp:969] [zc:enp6s0f0,zc:enp6s0f1] [14'340'967 pps/9.64 Gbps][128/128/0 act/exp/drop flows][33'648'924/1'895 RX/TX pkt drops][14'340'966 TX pps]
20/Jun/2016 12:49:18 [cento.cpp:1363] Actual stats: 14'340'967 pps/540'125 drops

Input: 8K IPs
20/Jun/2016 12:47:06 [NetworkInterface.cpp:969] [zc:enp6s0f0,zc:enp6s0f1] [14'178'764 pps/9.53 Gbps][8'192/8'192/0 act/exp/drop flows][37'367'255/0 RX/TX pkt drops][14'178'754 TX pps]
20/Jun/2016 12:47:06 [cento.cpp:1363] Actual stats: 14'178'764 pps/687'835 drops

Input: 500K IPs
20/Jun/2016 12:48:09 [NetworkInterface.cpp:969] [zc:enp6s0f0,zc:enp6s0f1] [10'091'554 pps/6.78 Gbps][500'000/4'288/0 act/exp/drop flows][58'217'698/0 RX/TX pkt drops][10'090'447 TX pps]
20/Jun/2016 12:48:09 [cento.cpp:1363] Actual stats: 10'091'554 pps/4'756'488 drops

 

Flow generation, on-the-fly Layer-7 Traffic Indexing and Packet-to-Disk Recording

# cento-ids -i eno1 --aggregated-egress-queue --egress-conf doc/egress.example --dpi-level 2 -v 4

# n2disk -i zc:10@0 -o /storage --index --timeline-dir /storage --index-version 2

As the traffic is indexed with both flow index and DPI, you can extract traffic based on DPI as shown below.

# npcapprintindex -i /storage/1.pcap.idx
10 flows found
0) vlan: 0, vlan_qinq: 0, ipv4, proto: 6, 192.168.2.143:49276 -> 192.168.2.222:22, l7proto: SSH/SSH
1) vlan: 0, vlan_qinq: 0, ipv4, proto: 17, 192.168.2.136:3242 -> 239.255.255.250:1900, l7proto: UPnP/UPnP
2) vlan: 0, vlan_qinq: 0, ipv4, proto: 17, 192.168.2.143:17500 -> 255.255.255.255:17500, l7proto: Dropbox/Dropbox
3) vlan: 0, vlan_qinq: 0, ipv4, proto: 17, 192.168.2.143:17500 -> 192.168.2.255:17500, l7proto: Dropbox/Dropbox
4) vlan: 0, vlan_qinq: 0, ipv4, proto: 6, 192.168.2.143:50253 -> 192.168.2.222:22, l7proto: SSH/SSH
5) vlan: 0, vlan_qinq: 0, ipv4, proto: 6, 192.168.2.143:49821 -> 192.168.2.222:22, l7proto: SSH/SSH
6) vlan: 0, vlan_qinq: 0, ipv4, proto: 17, 192.168.2.222:55020 -> 131.114.18.19:53, l7proto: DNS/DNS
7) vlan: 0, vlan_qinq: 0, ipv4, proto: 6, 192.168.2.222:51584 -> 52.30.119.198:80, l7proto: HTTP/HTTP
8) vlan: 0, vlan_qinq: 0, ipv4, proto: 17, 192.168.2.222:46729 -> 131.114.18.19:53, l7proto: DNS/Google

# npcapextract -t /storage/ -b "2016-06-20 17:00:00" -e "2016-06-20 17:15:00" -o /tmp/output.pcap -f "host 192.168.2.222 and l7proto HTTP"
20/Jun/2016 17:17:14 [npcapextract.c:1822] Begin time: 2016-06-20 17:00:00, End time 2016-06-20 17:15:00
20/Jun/2016 17:17:14 [npcapextract.c:1865] 850 packets (845094 bytes) matched the filter in 0.019 sec.
20/Jun/2016 17:17:14 [npcapextract.c:1877] Dumped into 1 different output files.
20/Jun/2016 17:17:14 [npcapextract.c:1899] Total processing time: 0.019 sec.

# tcpdump -nr /tmp/output.pcap | head
reading from file /tmp/output.pcap, link-type EN10MB (Ethernet)
17:12:38.895425 IP 192.168.2.222.51584 > 52.30.119.198.80: Flags [S], seq 891001947, win 29200, options [mss 1460,sackOK,TS val 4205898 ecr 0,nop,wscale 7], length 0
17:12:38.947537 IP 52.30.119.198.80 > 192.168.2.222.51584: Flags [S.], seq 1298651289, ack 891001948, win 17898, options [mss 8961,sackOK,TS val 19396500 ecr 4205898,nop,wscale 8], length 0
17:12:38.947556 IP 192.168.2.222.51584 > 52.30.119.198.80: Flags [.], ack 1, win 229, options [nop,nop,TS val 4205911 ecr 19396500], length 0
17:12:38.947591 IP 192.168.2.222.51584 > 52.30.119.198.80: Flags [P.], seq 1:82, ack 1, win 229, options [nop,nop,TS val 4205911 ecr 19396500], length 81: HTTP: GET / HTTP/1.1
17:12:39.053921 IP 52.30.119.198.80 > 192.168.2.222.51584: Flags [.], ack 82, win 70, options [nop,nop,TS val 19396516 ecr 4205911], length 0
17:12:39.059192 IP 52.30.119.198.80 > 192.168.2.222.51584: Flags [P.], seq 1:1439, ack 82, win 70, options [nop,nop,TS val 19396517 ecr 4205911], length 1438: HTTP: HTTP/1.1 200 OK
17:12:39.059199 IP 192.168.2.222.51584 > 52.30.119.198.80: Flags [.], ack 1439, win 251, options [nop,nop,TS val 4205939 ecr 19396517], length 0
17:12:39.059961 IP 52.30.119.198.80 > 192.168.2.222.51584: Flags [P.], seq 1439:2877, ack 82, win 70, options [nop,nop,TS val 19396517 ecr 4205911], length 1438: HTTP
17:12:39.059966 IP 192.168.2.222.51584 > 52.30.119.198.80: Flags [.], ack 2877, win 274, options [nop,nop,TS val 4205939 ecr 19396517], length 0
17:12:39.112307 IP 52.30.119.198.80 > 192.168.2.222.51584: Flags [.], seq 10997:12445, ack 82, win 70, options [nop,nop,TS val 19396541 ecr 4205939], length 1448: HTTP

You can read more about Cento performance and use cases in this blog post or in the Cento User Guide. As a rule of thumb, keep in mind that Cento can process 10 Gbit (14.88 Mpps) per core, so with a 4-core CPU such as the Intel E3 you can monitor a 40 Gbit link.

Cento is not designed to be a sophisticated flow probe/collector: nProbe is already good at it and cento will not replace it. The idea is that if you need traffic visibility at line rate, with policy enforcement and packet-to-disk recording, then Cento is what you need. If instead you are looking for a sophisticated flow-based probe/collector able to dissect protocols such as VoIP, GTP and HTTP, then nProbe is the tool of choice.

For more information about availability, performance, and the full list of features, we invite you to visit the Cento web page. Awaiting your comments and feedback.


Announcing ntopng 2.4: Efficiency is Beauty


At ntop we are on a mission to develop enterprise-grade networking software, mostly open-source, and free of charge for non-profit/research organizations. Since our inception, we have been passionately and resiliently developing software to allow our users to monitor, protect, and preserve their network infrastructure. And we have been doing this in a relentless pursuit of the best and most efficient solution. We know that in the big-data era it is becoming increasingly easy to “add an extra appliance” — after all, it’s not that expensive — but this is not at the heart of our philosophy.

At the heart of our philosophy lies the belief that efficiency is beauty. Software must be light, optimized, and scalable enough to run on commodity hardware, pushing the “add an extra appliance” to a last resort. We believe that providing lighter, faster, and more scalable network monitoring software is the best way to deliver value to our users. We believe that such software is the catalyst for deploying enterprise-grade monitoring solutions at a fraction of the cost that would have come with conventional deployments. Software that can run seamlessly on top of commodity hardware, or even on virtual machines.

These beliefs have guided us through years of growth and innovation. During those years we have released a good number of successful software products. ntopng is one of the most widely known tools we have developed so far. Its journey began many years ago under the name of ntop. The new-generation status, ng, was earned a couple of years ago, when Luca Deri re-designed and re-implemented it ex novo. Luca’s decision to entirely re-code the software was driven by the necessity to provide a modular, modern tool that could exploit the most recent web/scripting technologies. After months of intense coding ntopng was ready, and it turned out to be an exceptionally modular piece of software composed of a heavy-lifting C/C++ core that interacts with Lua and JavaScript to present results to the user via an intuitive web interface.

We have released many ntopng versions since then, each one with interesting improvements and significant new features. Today, we are proud to announce ntopng version 2.4.

Here is what version 2.4 brings from a feature perspective:

  • Memory management, stability, and speed have been substantially improved
  • We have kept an eye on security and hardened the code to prevent privilege escalation and XSS
  • Alerts have been extended to include support for
    • Re-arming to avoid raising trains of identical alerts in short periods of time
    • Alert propagation to the infrastructure monitoring software Nagios
    • CIDR-based triggers to monitor the behavior of whole networks
    • The detection of suspicious probing attempts
  • Netfilter support has been added together with optional packet dropping features
  • Routing visibility is now possible through RIPE RIS
  • Availability of fine-grained historical data drill-down features, including top talkers, top applications, and interactions between hosts (more details here)
  • Integrations with other software
    • LDAP authentication support
    • Alert forwarding/withdrawal to Nagios
    • nBox integration to request full packet pcaps of monitored flows
    • Data export to Apache Kafka
  • We have extended and improved traffic monitoring
    • Visibility of TCP session throughput estimations and state breakdown (e.g., connections established, connections reset, etc.)
    • Goodput monitoring
    • Trends detection
    • Highlight of low-goodput flows and hosts
    • Visibility of hosts’ top-visited sites
  • Built-in support is now included for
    • GRE detunnelling
    • per-VLAN historical statistics
    • ICMP and ICMPv6 dissection
  • We have extended the set of supported OSes to include: Ubuntu 16, Debian 7, EdgeOS
  • There is also optional support for host categorization via the flashstart.it service

We encourage you to play with ntopng version 2.4. Review it, test it out, open an issue on GitHub, or send us an email. Binary packages are available for many platforms, including CentOS 6 and 7, Debian jessie and wheezy, Ubuntu 12/14/16, Raspbian, and Windows. If you are more interested in the source code, then you should visit our GitHub page.

Best Practices for Efficiently Running ntopng

The default ntopng configuration is suitable for most of our users, who deploy it on a home or small enterprise network (typically a /24 network) with a link speed <= 100 Mbit. This does NOT mean that ntopng cannot operate on faster/larger networks, but rather that on them it cannot be used without some extra configuration.

The first thing to modify are the -x/-X settings. You need to set them to double the maximum size you expect on your network. For example, if you expect to have (including both local and remote hosts) at most 35000 active hosts, you need to set -x to no less than 70000. It is better to have a larger value than a smaller one: small values mean that you will not be able to see all hosts, and performance will also be poor because ntopng was not tuned properly. Larger values require ntopng to use more memory, but having plenty of RAM is not a good argument for using extremely large values (e.g. -x 1000000 in the previous example), as you will waste resources for no reason.
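
As a minimal sketch of the sizing for the 35000-host example above (here we assume that -x caps the number of active hosts and -X the number of active flows, that traffic arrives on eth0, and that your network peaks at roughly 60000 concurrent flows; the interface name and the flow figure are placeholders to replace with your own values):

ntopng -i eth0 -x 70000 -X 120000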

Another parameter to set up is -m, which specifies the list of local networks. Please make sure you set the real networks you plan to use. Some users are lazy and set it to 0.0.0.0/0: this is not a good idea, as ntopng will save stats for all the hosts and you will thus exhaust disk space quickly.
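
As a sketch, assuming your site uses the (hypothetical) prefixes 192.168.1.0/24 and 10.10.0.0/16, the local networks are passed as a comma-separated list:

ntopng -m "192.168.1.0/24,10.10.0.0/16"

Only hosts falling within these prefixes are treated as local, which keeps the amount of persisted per-host data under control.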

Flow persistence is set up via -F. When flows are saved to MySQL or ElasticSearch, ntopng has to do extra work, and if the database is not fast enough this will introduce a bottleneck. Please pay attention to optimising this aspect, in particular if the DB runs on the same box as ntopng, where resources are shared.
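
As an illustration only (the exact -F syntax depends on the ntopng version, so double-check it with ntopng --help; all values below are hypothetical), dumping flows to a MySQL instance running on the same box might look like:

ntopng -F "mysql;localhost;ntopng;flows;ntopng;secret"

where the semicolon-separated fields are, in order, the backend, the DB host, the database name, the table prefix, the user and the password. If the database cannot keep up, consider moving it to a dedicated host or reducing the number of interfaces whose flows are dumped.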

Packet capture in ntopng has been designed to be as efficient as possible. We decided to have one processing thread per interface configured in ntopng. Depending on a) the CPU power, b) the number of hosts/flows, and c) the packet capture technology, the number of packets-per-second ntopng can process changes. On an x86 server with PF_RING (non ZC) you can expect to process about 1 Mpps/interface, with PF_RING ZC at least 2-3 Mpps/interface (usually much more, but typically not more than 5 Mpps). This means that if you want to monitor a 10 Gbit interface (or, even worse, a 40 Gbit one), you need to:

  • Use PF_RING ZC to accelerate packet capture.
  • Use RSS to virtualise the NIC into multiple virtual queues. For instance you can set RSS=4 to split the 10 Gbit interface into 4 virtual interfaces.
  • Start ntopng polling packets from all the virtual interfaces.

For example, suppose you have a 10/40 Gbit interface named eth1 and you use RSS=4 with PF_RING ZC. Then you need to start ntopng as: ntopng -i zc:eth1@0 -i zc:eth1@1 -i zc:eth1@2 -i zc:eth1@3

Note that in this case ntopng will try to bind threads to different cores, but as computer architectures vary depending on NUMA layout and CPU differences, we advise setting -g to the exact cores where you want to bind each interface polling thread. Make sure (on multi-CPU systems only) that you use physical cores on the same NUMA node where the network interface is plugged in. Of course you can use interface views (see the User’s Guide) to merge all the virtual interfaces into a single interface (but please understand that if you have millions of hosts this might become a bottleneck).
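
Putting it all together, one possible setup for the eth1 example above is sketched below. It assumes an Intel ixgbe-based NIC driven by the PF_RING ZC driver package, and that cores 0-3 sit on the same NUMA node as the NIC; the driver name, the RSS parameter and the core ids are assumptions to adapt to your system, and you should verify the exact -g syntax for your ntopng version with ntopng --help.

# load the ZC-aware driver with 4 RSS queues per port (driver and parameter names depend on your NIC)
insmod ./ixgbe.ko RSS=4,4

# poll the 4 virtual interfaces, binding each polling thread to its own physical core
ntopng -i zc:eth1@0 -i zc:eth1@1 -i zc:eth1@2 -i zc:eth1@3 -g 0,1,2,3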

We hope this post helps you optimise ntopng. If you have questions or suggestions, let’s start a discussion on GitHub.

ntopng 2.6 Roadmap

Now that we have released 2.4, it is time to plan for the next release and highlight the features we plan to implement, so we can start a discussion and get some feedback. The major changes we would like to introduce include:

  • Rework interface views to make them more efficient and not as expensive as they are today.
  • Add full support for sFlow/NetFlow so that we can keep per interface statistics as many other collectors do.
  • Introduce some “enterprise-oriented” features such as per-Autonomous-System statistics and traffic accounting, and create an alarm dashboard with full alarm support (open/closed).
  • Traffic interpretation: as of today we graph flows x, y, z, but flows are still too low-level. It would be nice to correlate them into higher-level activities such as “user X downloaded a file” or “the Dropbox folder of IP Y has synced”; in essence, continue the transition started with packets to flows towards something meaningful for humans.
  • Add full L2 support: keep a list of mac addresses, associate them to users/devices, implement layer-2 features such as ARP/DHCP monitoring.
  • Implement per-flow scripting so that we can execute actions in Lua at the flow level (e.g. trigger an alert when event Z happens).
  • Time based comparison (e.g. compare today’s traffic with what happened a week ago at the same time/day of the week) and reporting.
  • Integrate messenger bots to query ntopng from mobile devices and to distribute alerts to subscribers.

Anything else you would like to see in the next ntopng release? Willing to help? Please contact us if you are interested in helping with the development.

Flow-based Monitoring: nProbe Cento vs Standard/Pro

Since the introduction of nProbe Cento, we periodically receive emails from users wondering what the differences between these two applications are. This post clarifies the differences and better positions the two products.
The nProbe family is a set of flow-oriented applications, meaning that each packet is not handled individually but as part of a flow (e.g. a TCP connection or a UDP communication such as a VoIP call). This task is significantly more expensive than handling packets individually, because we need both to keep the flow state and to process packets in order, in addition to satisfying other constraints (e.g. making sure all packets of the same flow are sent to the same processing core). Traditionally ntop has its roots in the network monitoring world, where people want to passively (i.e. without modifying the network traffic being watched) monitor their traffic in order to find out things like top talkers or to troubleshoot problems. However, in the past couple of years we have received many requests from users willing to do more than that (e.g. selectively drop traffic of specific applications via DPI) in a flow-oriented fashion. The advent of 40 and 100 Gbit Ethernet has pushed us to redesign nProbe and create an addition to the nProbe family targeting selected users who need to both monitor and manipulate traffic in a flow-oriented fashion. This is how nProbe Cento was born.

Below you can find a feature comparison that positions the three products:

Feature | nProbe Standard | nProbe Pro | nProbe Cento
Max Processing Speed | 1 Gbit | 10 Gbit | 40/100 Gbit
Packet Processing Mode | Passive | Passive | Passive and Inline
Operating Systems | Linux and Windows | Linux and Windows | Linux
PF_RING (ZC) Integration | No | Yes | Yes
Platforms | ARM, MIPS, x64 | ARM, MIPS, x64 | x64
DPI Traffic Inspection | Yes (nDPI) | Yes (nDPI) | Yes (nDPI)
DNS/HTTP Traffic Dissection | No | Full (with DNS/HTTP plugins) | Limited to core attributes
Flow-Latency Measurements | Yes | Yes | Yes
Flow Collection (sFlow and NetFlow) | Yes | Yes | No
Policy-based Interface Bridging | No | No | Yes
Plugin Extensibility | No | Yes | No
Packet-to-Disk Integration | No | No | Yes (n2disk)
IDS/IPS Integration | No | No | Yes (with optional packet shunting)
Flow-based Interface Egress | No | No | Yes
Flow-based Packet Policy | No | No | Yes
Text/JSON/NetFlow v5/v9/IPFIX Export | Yes | Yes | Yes
Kafka Integration | No | Yes | Yes

One of the most popular questions we receive is whether plugins will be supported in Cento. Currently we have no plans for that, as they would introduce significant processing overhead that would prevent cento from running at 100 Gbit (this is supported on adequate hardware platforms, where you have at least 12 cores available for 100 Gbit line-rate processing). However we might consider adding support for additional protocol fields (e.g. Cento already dissects DNS/HTTP core attributes such as the DNS query and the HTTP URL) based on users’ feedback.

In summary, if you only need passive traffic monitoring at no more than 10 Gbit, then nProbe Standard/Pro is what you are looking for. If instead you need both flow-based traffic inspection and inline traffic management (e.g. selectively dropping Skype or Netflix traffic), or you want to add traffic metadata (i.e. application protocol and flow identifier) to packets recorded on disk, then Cento is the application to use.

You’re Invited to the ntop Users Meeting and (free) Tutorial

Earlier this year we held an ntop meetup in the USA. Now we want to invite you to attend the ntop users meeting that will take place on October 17th (2 PM-5 PM), during the SharkFest Europe 2016 conference. The idea is to meet the ntop community, present our tools, highlight future work items, and teach you how to master our tools. The ntop core team will be present at the event, and we would like to meet our users in person, as we need to learn what we should fix and what we should improve.

You can find the event agenda, location, and registration information here. The event is free of charge (drinks will be provided), but we ask you to register as space is limited. We thank Riverbed and the SharkFest community for making this event happen.

We’re excited to meet you next month at Sharkfest!

PS: You do NOT need to attend or be registered for SharkFest in order to attend this users meeting.
