Learning the ntopng Lua API


ntopng is open source, which means you can read its code and modify it according to the GPL license. The current ntopng architecture is based on three layers, where the top one is written in Lua and is used to render the web interface as well as to execute periodic activities. In essence, the C++/Lua API is a clean way to interact with and extend ntopng without having to code in C++.

So far we have used this API inside the ntop team without documenting it. This has been a mistake, as it has made life difficult for developers: they had to explore the existing code to figure out how the API worked.

We have now fixed all this and written documentation that developers can use, available at this URL. You can look at it and learn more about ntopng. Should you have questions, please send us comments on the ntop mailing list or send us pull requests on GitHub.


Introducing ntopng Edge (nEdge): Monitoring, Service Segmentation and Security for the Network Edge


The network edge, either wired or wireless, is becoming increasingly important, as most things now happen there: it is the place where devices are deployed. Security-wise, central firewalls are too far from the edge, and thus devices can roam freely – and potentially create trouble – in LANs without ever hitting a security device. The consequence is that LANs are becoming increasingly insecure, and the cloud is complicating all of this: encrypted connections – which are not inspectable by monitoring and security applications – are the perfect ingredient both for providing smart services to users and for creating trouble on networks.

ntopng Edge (nEdge, for short) solves this problem by “cleaning” network traffic right at the edge. nEdge does not enforce IDS-like security rules (which are of limited use today, as a significant part of the traffic is encrypted); it uses a novel approach that enables network administrators to enforce policies on the basis of users and Layer-7 application traffic.

nEdge is basically the widely-known monitoring tool ntopng with the ability to operate in inline mode to offer:

  • Ensured Internet Availability
    Network bandwidth is allocated either in fair mode (every host gets its slice of traffic, so that no single host on the network can use up all the Internet bandwidth) or in cap mode (user X cannot exceed bandwidth Y).
  • Service segmentation
    The implementation of service segmentation allows a new concept of security, that is, user X can use only protocols A, B, C regardless of the devices they run.
  • Insecure traffic blocking and alerting
    Protection is assured with the use of security-aware DNS services and blacklists to prevent users from accessing resources that have been marked as insecure such as malware sites.
  • Users and devices management
    Devices are bound to users either manually (i.e. device X is owned by user Y) or automatically through an embedded captive portal.

The hot features which characterize ntopng are still available in nEdge: an accurate per-flow view of the traffic, traffic views by host, Autonomous System and Operating System, the ability to generate traffic reports and alerts, and the automatic discovery of the devices in a network.

Contrary to all the other tools we have coded so far, nEdge takes over the control of your system and reconfigures (through its web GUI) all the network interfaces of the system to operate either as a bump-in-the-wire bridge or as a router. In bridge mode it acts as a fully transparent device that can be seamlessly deployed into an existing network to enhance security without changing existing network equipment and topology.

In routing mode, nEdge turns the system into an advanced router that supports multiple egress points. You can configure nEdge to use your preferred gateway, balance traffic across multiple gateways, and use a backup gateway when your main gateway is unavailable. In a nutshell, nEdge implements load-balancing, failover and multi-egress as only costly routers do.

That said, we are working towards a simplified version of nEdge that will be available this summer for low-end devices and that will finally bring security, malware protection, DPI and fair Internet access to all of us.

Stay tuned!


How to accelerate Bro with PF_RING FT


We have discussed many times the large amount of work IDSs have to carry out, and the high CPU load they require: this is the case of Suricata, due to the thousands of rules that need to be evaluated for every single packet, but it is also the case of the Bro Network Security Monitor.
In a previous post we have seen how to accelerate Suricata with PF_RING FT in a few steps. In that guide we leveraged the flow classification and L7 protocol detection provided by PF_RING FT to filter and shunt meaningless traffic, this way removing most of the (unneeded) load from the IDS.
In fact, as we have already seen, most of the Internet traffic is multimedia traffic (or encrypted traffic) that the IDS does not care about, and this traffic mainly consists of elephant flows.

In this post we will see how to enable PF_RING FT to accelerate Bro, with no change or recompilation of the application (PF_RING FT is already part of PF_RING: if you are using PF_RING or Libpcap-over-PF_RING as capture library, you are ready to go).

Running FT with Bro

First of all we need to compile Bro on top of PF_RING. PF_RING should be installed first:

cd ~
git clone https://github.com/ntop/PF_RING.git
cd ~/PF_RING/kernel
make && sudo make install
cd ~/PF_RING/userland/lib
./configure && make && sudo make install

Then we can compile Bro linking it to the PF_RING-aware libpcap:

cd ~
wget https://www.bro.org/downloads/release/bro-X.X.X.tar.gz
tar xvzf bro-X.X.X.tar.gz
cd bro-X.X.X
./configure --with-pcap=/usr/local/lib
make && make install

We can make sure that Bro is linked to the PF_RING-aware libpcap with the command below:

ldd /usr/local/bro/bin/bro | grep pcap
        libpcap.so.1 => /usr/local/lib/libpcap.so.1 (0x00007fa371e33000)

At this point we can set pf_ring as the capture method, and the interface that we want to use, in /usr/local/bro/etc/node.cfg. We can also set the number of threads and the CPU affinity in the same file. In this example we will use a basic configuration with just one thread; please read the PF_RING documentation for more details about the Bro configuration and about loading PF_RING and ZC drivers for maximum capture performance.

[worker-1]
type=worker
host=localhost
interface=eth1
lb_method=pf_ring
lb_procs=1
pin_cpus=2

As we said, PF_RING FT is already part of PF_RING; however, in order to be able to detect and filter L7 protocols, we need to install nDPI:

cd ~
git clone https://github.com/ntop/nDPI.git
cd nDPI
./autogen.sh
make && sudo make install

In order to filter or shunt out multimedia or other meaningless traffic, we just need to create a simple configuration file with the list of application/protocol names. PF_RING FT is based on the nDPI Deep Packet Inspection library for protocol detection, so you can specify any of the protocols detected by nDPI.

In the example below, we are listing a few L7 protocols that we want to discard, and we are telling the FT library that we want to shunt SSL, specifying the number of packets to forward for each session before discarding them:

cat /etc/pf_ring/ft-rules.conf
[filter]
YouTube = discard
Netflix = discard
Spotify = discard

[shunt]
SSL = 10

In order to enable L7 filtering, all we need to do is set the PF_RING_FT_CONF environment variable to the path of the configuration file with the filtering rules. This can be done in Bro using the env_vars setting in /usr/local/bro/etc/node.cfg.

[worker-1]
type=worker
host=localhost
interface=eth1
lb_method=pf_ring
lb_procs=1
pin_cpus=2
env_vars=PF_RING_FT_CONF=/etc/pf_ring/ft-rules.conf

At this point we are able to run Bro.

/usr/local/bro/bin/broctl
[BroControl] > start
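
Before running the validation below, you can also double-check that the worker is capturing, and later read its drop counters, using the BroControl netstats command:

/usr/local/bro/bin/broctl netstats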

Validation

To validate this work we used a huge PCAP file with mixed Internet traffic, replayed with disk2n at 4 Gbit/s (~500 Kpps). We created a simple Bro configuration with a single worker, in order to check the limit of a single processing unit, using PF_RING ZC drivers to reduce the packet capture overhead and remove any noise. We then ran a few tests with and without PF_RING FT filtering out all multimedia/meaningless data (Netflix, YouTube, Spotify, Google, Facebook, SSL) and compared the results.

Without PF_RING FT, Bro dropped 3.1 Mpkts out of the 7.1 Mpkts transmitted: 44% packet loss. After enabling PF_RING FT, Bro was able to process the same traffic dropping just 0.8 Mpkts out of the 7.1 Mpkts transmitted (12% packet loss), as a huge amount of data had been filtered out by FT. After the amazing results with Suricata, we have seen that PF_RING FT can provide a huge performance improvement with Bro as well, and all this with minimal effort.

Have fun!

ntopng and Time Series: From RRD to InfluxDB, new charts with Time Shift


One of the main concerns of our users is the ability to scale ntopng with a large number of hosts/protocols, and hence how to scale time series. As already discussed, RRD has many limitations as the number of time series increases, hence it was time to start exploring new paths. We decided to abstract the ntopng engine from RRD and thus open the engine up to new time series databases. This has enabled us to use InfluxDB to store time series instead of RRD, which (as already discussed) enables ntopng to scale both in the number of time series and in speed. While our work is still ongoing, this post will explain how to move to InfluxDB and to the new time series reports (they work with both RRD and InfluxDB, seamlessly, thanks to the engine abstraction).

Suppose that you have installed InfluxDB and created a database named ntopng as described in this readme (soon the database creation will be automated and this step won’t be necessary). You then need to tell ntopng, in the preferences, to use InfluxDB.
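
For instance, on InfluxDB 1.x the database can be created from the influx command-line client with a single statement (a quick sketch of what the readme above describes):

influx -execute "CREATE DATABASE ntopng"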


At this point you have moved ntopng to InfluxDB, and all new time series will be stored in InfluxDB. Note that currently we do not provide a way to migrate old RRD data to InfluxDB. You can switch back to RRD at any time using the same procedure. Now, regardless of the backend used for time series, you can enjoy the new time series charts that we are developing in ntopng 3.5.x.

The major changes are the following (we are developing new features daily, so the list will grow in the coming weeks):

  • Ability to zoom in (drill down) with the mouse by selecting the new time period on the graph.
  • Data points have been smoothed for better visualisation.
  • Ability to compare data with the past (time shift). On the above graph you can see (dotted grey line) the same metric over the previous period (as we are visualising one hour, it is the previous hour). Soon we will enable alert generation in case a graph deviates too much from the past.
  • For spiky charts such as the one depicted above, it is not simple to understand the trend. For this reason we have created a trend line, based on this work, that makes it easier to see where the traffic is heading.
  • Average and percentile lines are now placed on top of the graph and animated.
  • Colors for multi-chart graphs are based on pastel colours for better visualisation.

This is not all on charts and time series, but we believe it is enough to push you to test the new ntopng version before the stable release, and to report suggestions and bugs to us.

 

Enjoy!

Introducing per-Second Measurements in nProbe Flow Exports


The need to perform timely, per-second traffic measurements clashes with protocols such as NetFlow, where all counters are cumulative with respect to the flow lifetime. So if you have a flow that lasted 2 minutes and moved X bytes, you have no clue about the throughput of this flow across those 2 minutes. For this reason people started to shorten flow durations, with the drawback of putting a lot of pressure on probes, as well as increasing the disk space and flow record cardinality on collectors. In essence it was not a solution, nor even a good workaround.

In order to address requests coming from our user community, nProbe (Pro) has been extended to provide per-second flow byte counters using two new information elements:

[NFv9 57944][IPFIX 35632.472] %SRC_TO_DST_SECOND_BYTES   	Bytes/sec (src->dst)
[NFv9 57945][IPFIX 35632.473] %DST_TO_SRC_SECOND_BYTES   	Bytes/sec2 (dst->src)

As flows can potentially last a while, whereas space in IPFIX/NetFlow packets is limited, we have decided to export per-second counters only on disk/JSON, while exporting only the plain numeric byte counters on the wire.

Example:

nprobe -P /data -T "%IPV4_SRC_ADDR %IPV4_DST_ADDR %INPUT_SNMP %OUTPUT_SNMP %IN_PKTS %IN_BYTES %FIRST_SWITCHED %LAST_SWITCHED %L4_SRC_PORT %L4_DST_PORT %TCP_FLAGS %PROTOCOL  %SRC_TO_DST_SECOND_BYTES  %DST_TO_SRC_SECOND_BYTES" -i eth0

will export text files containing lines like

131.114.21.22|114.79.1.15|0|0|3|156|1380114078|1380114095|80|18151|17|6|52,,,,,,52,,,,,,,,,,,52|,,,,,,,,,,,,,,,,,,|HTTP
131.114.21.22|114.79.1.15|0|0|3|156|1380114078|1380114095|80|18144|17|6|52,,,,,,52,,,,,,,,,,,52|,,,,,,,,,,,,,,,,,,|HTTP
131.114.21.22|114.79.1.15|0|0|3|156|1380114079|1380114100|80|18156|17|6|52,,,,,,,52,,,,,,,,,,,,,,52|,,,,,,,,,,,,,,,,,,,,,,|HTTP
213.121.168.130|131.114.21.22|0|0|5|224|1380114081|1380114099|54306|80|19|6|92,,52,,,,,,,,,,,,,,,,80|52,52,,,,,,,,,,,,,,,,,40|HTTP

Let’s consider the first flow. You need to read values as follows:

The element “52,,,,,,52,,,,,,,,,,,52” means that during the first flow second (i.e. the one starting at 1380114078) 52 bytes were sent, during the following seconds there was no traffic (to shorten the flow format nProbe omits the 0s), and so on.
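
As a quick illustration, the comma-separated list of the first flow can be expanded into an explicit per-second series (empty slots read as 0 bytes) with a shell one-liner:

echo "52,,,,,,52,,,,,,,,,,,52" | awk -F, '{ for (i = 1; i <= NF; i++) print "second " i-1 ": " ($i == "" ? 0 : $i) " bytes" }'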

Per-second counters are useful to troubleshoot multimedia applications such as VoIP, where the codec is expected to send the same amount of data every second. See for instance the call below to understand how this works:

IPV4_SRC_ADDR|IPV4_DST_ADDR|INPUT_SNMP|OUTPUT_SNMP|IN_PKTS|IN_BYTES|FIRST_SWITCHED|LAST_SWITCHED|L4_SRC_PORT|L4_DST_PORT|TCP_FLAGS|PROTOCOL|SRC_TO_DST_SECOND_BYTES|DST_TO_SRC_SECOND_BYTES|L7_PROTO_NAME
212.97.59.76|10.6.4.71|0|0|5|3466|1187006259|1187006295|5061|5060|0|17|1055,,,,,,,,582,,,,,,,,,,,,,,603,,,,,,,,,,,,,,1226|1006,,,,,,,,876,,,,,,,,,,,,,,395,,,,,,,,,,,,,,940|SIP
10.6.4.71|212.97.59.80|0|0|933|261240|1187006267|1187006295|16418|52542|0|17|3920,9520,9240,9240,9520,9240,9240,9520,9240,9240,9520,9240,9240,9520,9240,9240,9520,9240,9240,9520,9240,9240,9520,9240,9240,9520,9240,9240,5320|4200,10000,10000,10000,10000,10000,10000,10000,10000,10000,10000,10000,10000,10000,10000,10000,10000,10000,10000,10000,10000,10000,10000,10000,10000,10000,10000,10000,6400|RTP

This will enable you to understand whether, from the network standpoint, everything worked as planned.

In addition to this, if you use “-b 1” nProbe will also print application protocol counters at every interval, dumping the total and partial layer-7 protocol counters.

03/Aug/2018 11:11:10 [nprobe.c:3228] L7 Proto                   Diff      Total
03/Aug/2018 11:11:10 [nprobe.c:3242] 	Unknown/0             164.12 KB  164.12 KB
03/Aug/2018 11:11:10 [nprobe.c:3242] 	POP3/2                 30.36 KB   30.36 KB
03/Aug/2018 11:11:10 [nprobe.c:3242] 	SMTP/3                  2.29 KB    2.29 KB
03/Aug/2018 11:11:10 [nprobe.c:3242] 	DNS/5                   2.95 KB    2.95 KB
03/Aug/2018 11:11:10 [nprobe.c:3242] 	HTTP/7                912.20 KB  912.20 KB
03/Aug/2018 11:11:10 [nprobe.c:3242] 	ICMP/81                   964 B      964 B
03/Aug/2018 11:11:10 [nprobe.c:3242] 	RTP/87                424.13 KB  424.13 KB
03/Aug/2018 11:11:10 [nprobe.c:3242] 	SIP/100                77.95 KB   77.95 KB
03/Aug/2018 11:11:10 [nprobe.c:3242] 	Google/126             85.39 KB   85.39 KB
03/Aug/2018 11:11:10 [nprobe.c:3242] 	Radius/146                622 B      622 B
03/Aug/2018 11:11:10 [nprobe.c:3242] 	H323/158                  160 B      160 B

This feature is currently implemented in nProbe 8.5.x and will be included in the next stable release, due late this summer.

Enjoy!

Introducing @ntop_community Telegram Group


While tools like GitHub and mailing lists can serve developers and experts, sometimes people look for quick help. For this reason we have created a new Telegram group called @ntop_community that you can use (both from your desktop and mobile) to ask the community for quick help. If you are an ntopng user, you can select the “Help and News” menu entry to jump to the Telegram channel.

We invite people to join and help support other users, as well as to send us feedback. Thank you!


Introducing n2n 2.4


As announced some months ago, we have resumed the development of n2n, a peer-to-peer VPN we developed some years ago to ease access to remote ntop installations behind firewalls, and that then evolved into a full-fledged application. After having put the project on hold for some years due to lack of time and new priorities, we have decided to resume its development. We have realised that many people have started to fork and code on n2n, and thus part of our work is to merge all these changes back into the main repository. However, the very first step we decided to take is to refresh the code, make it work again on modern operating system versions, and thus let packagers create new n2n package versions (some of them were still stuck with v1).

All the n2n code is available on GitHub for all supported platforms, and you can find binary packages on the http://packages.ntop.org ntop package repository.
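
As a minimal usage sketch (the community name, key, IP address and supernode host below are placeholders), you start a supernode on a publicly reachable host and then attach edge nodes to it:

supernode -l 7654
edge -d n2n0 -a 10.0.0.1 -c mycommunity -k mysecretkey -l supernode.example.com:7654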

The complete changelog of the 2.4 release is included below. We hope that people will start contributing to the project so we can make it better.

Thank you!

New features

  • Added deb/rpm packages
  • Added systemd configuration files
  • Added ability to read configuration files instead of using only the CLI (needed for packaging)
  • Added n2n Android app
  • Implemented a simple API to embed n2n in applications (in addition to using it stand-alone)

Improvements

  • Major code cleanup
  • Fixed compilation issues on MacOS
  • Fixed Linux segmentation fault

Introducing nDPI 2.4


This is to announce the release of nDPI 2.4, an incremental release that mainly introduces the concept of categories, in addition to new dissectors and bug fixes. In a nutshell, in order to limit the number of custom protocols defined as “if traffic goes from/to Internet domain X then this is protocol X”, all these protocols have been grouped into categories. This eases application developers’ lives, as they do not have to handle thousands of protocols, and it simplifies configuration. For instance, instead of having malware site X, site X1, site X2… you have the category “malware” that contains all malware sites. Your application (e.g. ntopng Edge) can simply block the malware category, and nDPI will classify all malware sites under that category.
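
For instance, as a quick sketch (the file path is hypothetical), name-based custom categories can be fed to the bundled ndpiReader application through the -c option listed in the changelog below:

ndpiReader -i eth0 -c /etc/ndpi/custom_categories.txt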

In this release we have further improved Intel Hyperscan support, as well as introduced new dissectors. As usual, you can find the complete changelog below. Enjoy!

 

New Supported Protocols and Services

  • Showmax.com
  • Musical.ly
  • RapidVideo
  • VidTO streaming service
  • Apache JServ Protocol
  • Facebook Messenger
  • FacebookZero protocol

Improvements

  • Improved YouTube support
  • Improved Netflix support
  • Updated Google Hangout detection
  • Updated Twitter address range
  • Updated Viber ports, subnet and domain
  • Updated AmazonVideo detection
  • Updated list of FaceBook sites
  • Initial Skype in/out support
  • Improved Tor detection
  • Improved hyperscan support and category definition
  • Custom categories loading, extended ndpiReader (-c <file>) for loading name-based categories

Fixes

  • Fixes for Instagram flows classified as Facebook
  • Fixed Spotify detection
  • Fixed minimum packet payload length for SSDP
  • Fixed length check in MSN, x-steam-sid, Tor certificate name
  • Increased client’s maximum payload length for SSH
  • Fixed end-of-line bounds handling
  • Fixed substring matching
  • Fix for handling IP address based custom categories
  • Repaired wrong timestamp calculation
  • Fixed memory leak
  • Optimized memory usage

Other/Changes

  • New API calls:
    • ndpi_set_detection_preferences()
    • ndpi_load_hostname_category()
    • ndpi_enable_loaded_categories()
    • ndpi_fill_protocol_category()
    • ndpi_process_extra_packet()
  • Skype CallIn/CallOut are now set as Skype.SkypeCallOut Skype.SkypeCallIn
  • Added support for SMTPS on port 587
  • Changed RTP from VoIP to Media category
  • Added site unavailable category
  • Added custom categories CUSTOM_CATEGORY_MINING, CUSTOM_CATEGORY_MALWARE, CUSTOM_CATEGORY_ADVERTISEMENT, CUSTOM_CATEGORY_BANNED_SITE
  • Implemented hash-based categories
  • Converted some not popular protocols to NDPI_PROTOCOL_GENERIC with category detection

Say hello to ntopng and nEdge 3.6: Timeseries with TimeShift and InfluxDB


The ntopng 3.6 release paves the way to metrics-based traffic analysis. We have finally put ntopng on top of a timeseries-independent layer that currently allows us to support RRD and InfluxDB, and in the future other backends. This means that you can now use ntopng as a time series datasource (see the timeseries API for further information) in addition to, for instance, using it as a flow exporter or a Grafana data source, or you can analyse data through the ntopng web interface, which has been greatly enhanced.

As you can see from the above chart you can now, for each time period, compare the current traffic (green) with the traffic of the same period in the past (dotted line). This allows you to see how your traffic has changed, and soon we will improve ntopng to trigger alerts whenever (as in the above picture) the traffic has changed significantly with respect to the past. In addition to that, we have introduced a trend line (in blue) that summarises the metric curve in a simple way and can be used as a baseline for comparisons. Zooming has also been reworked: you now zoom with the mouse by dragging an area on the chart, as you are used to doing with other tools. In the following releases we will further enhance ntopng to dump more accurate metrics by lowering the dump cycle time (e.g. for host nDPI data we dump counters every 5 minutes, which is reasonable for RRD but could be lowered when InfluxDB is used) so that graphs will be even smoother and traffic comparisons more accurate. We suggest using RRD on small hosts that do not have much traffic or many hosts to analyse, and InfluxDB on deployments with many metrics where you need the scalability that RRD cannot offer. Remember that you can select the timeseries backend from the ntopng “Timeseries” preference pane.

In addition to timeseries, we have eased debugging and troubleshooting by introducing configuration backup/restore, so that you can clone your ntopng hosts with ease in case of a crash. And in case you need to do a quick packet capture, instead of leaving the ntopng interface for the command line, you can now do it from within ntopng. Under each interface and host, in addition to the existing JSON link, you can now find a new element for streaming a quick capture to your browser.

If you click on the button from a host, only the traffic from/to that host will be dumped, whereas if you select the same feature from the network interface, all the traffic will be considered. You can also set an optional BPF filter to further refine the traffic you are interested in (e.g. “port 53” for DNS traffic only) and thus avoid downloading too much data.

In this release we have also greatly reworked SNMP support, added Ubuntu 18.04 support, moved from GeoIP to libmaxminddb for geolocation, improved Slack alerting, and made several changes that have greatly hardened ntopng. As usual, for all the details please refer to the complete changelog below.

Enjoy!

 


ntopng 3.6 Changelog

Improvements

  • Security
    • Access to the web user interface is controlled with ACLs
    • Secure ntopng cookies with SameSite and HttpOnly
    • HTTP cookie authentication
    • Improved random session id generation
  • Various SNMP improvements
    • Caching
    • Interfaces status change alerts
    • Device interfaces page
    • Devices and interfaces added to flows
    • Fixed several library memory leaks
    • Improved device and interface charts
    • Interfaces throughput calculation and visualization
    • Ability to delete all SNMP devices at once
  • Improved active devices discovery
    • OS detection via HTTP User-Agent
  • Alerts
    • Crypto miners alerts toggle
    • Detection and alerting of anomalous terminations
    • Module for sending telegram.org alerts
    • Slack
      • Configurable Slack channel names
      • Added Slack test button
  • Charts
    • Active flows vs local hosts chart
    • Active flows vs interface traffic chart
  • Ubuntu 18.04 support
  • Support for ElasticSearch 6 export
  • Added support for custom categories lists
  • Added ability to use the non-JIT Lua interpreter
  • Improved ntopng startup and shutdown time
  • Support for capturing from interface pairs with PF_RING ZC
  • Support for variable PPP header length
  • Migrated geolocation to GeoLite2 and libmaxminddb
  • Configuration backup and restore
  • Improved IE browser support
  • Using client SSL certificate for protocol detection
  • Optimized host/flows purging

nEdge

  • Netfilter queue fill level monitoring
  • Bridging support with VLANs
  • Added user members management page
  • Added systemd service alias to ntopng
  • Captive portal fixes
  • Informative captive portal (no login)
  • Improved captive portal support with WISPr XML
  • Disabled global DNS forging by default
  • Added netfilter stats RRDs
  • Fixed bad MAC traffic increment
  • Fixed slow shutdown/reboot
  • Fixed invalid banned site redirection
  • Fixed bad gateway status
  • Fixed gateway network unreachable when the gateway is down
  • Fixed SSL traffic not blocked when captive portal is active
  • Fixed invalid read during local DNS lookup
  • Workaround for dhclient bug stuck while a lease already exists

Fixes

  • SNMP
    • Fixed SNMP devices deletion
    • Fixed format for odd SNMP interfaces speed
    • Fixed SNMP community selection
  • Fixed MDNS decoding
  • Fixed login redirection
  • Fixed MAC manufacturers escaping
  • Fixed host validation errors
  • Fixed traffic throughput burst when loading a serialized host
  • Allowing multiple consecutive dots in password fields
  • Reworked shutdown to allow graceful periodic activities termination
  • Fixed validation error in profiles with spaces in names
  • Fixed old top talkers stats deletion
  • Fixed 32-bit integers pushed to Lua
  • Fixed service dependency from pfring
  • Fixes for enabling broken SSL certificate mismatch alerts
  • Fixed allowed interfaces users access
  • Fixes for crashes on Windows
  • Fixed lua platform dependent execution
  • Fixed subnet search in hist data explorer
  • Fixed flow devices and sflow mappings with SNMP
  • Fixed invalid login page encoding
  • LDAP fixes (overflow, invalid LDAP fields length)
  • Fixed encoding for local/LDAP UTF-8 passwords
  • Added POST timeout to prevent housekeeping from blocking indefinitely
  • Windows resize fixes
  • Fixed invalid uPnP URL
  • Fixed wrong hosts retrieval by pool id, OS, network, and country
  • Fixed JS errors with IE browser
  • Fixed custom categories matching

Workshop and Training: 20 Years of ntop


This is a message for the Italian-speaking community willing to attend our “20 years of ntop” workshop that will take place in Pisa, Italy, where ntop was born. If somebody is willing to help us organise an ntop event somewhere else, please contact us, as next year we might be able to arrange that too.

20 years ago the first version of ntop was released: an open source tool for monitoring network traffic through a web interface. A lot has happened in the 20 years since that first release, and the time has come to stop and tell the story. Not only what has happened so far: we also want to let the user community speak, explain what can be done with the tool, and present what we have planned next. As ntop is an open source tool, the spirit of this event is not to talk about ourselves, but to let our community talk.

More information at https://www.eventbrite.it/e/biglietti-20-anni-di-ntop-50373009026

Best Practices for the Collection of Flows with ntopng and nProbe


ntopng can be used to visualize traffic data that has been generated or collected by nProbe. Using ntopng with nProbe is convenient in several scenarios, including:

  • The visualization of NetFlow/sFlow data originated by routers, switches, and network devices in general. In this scenario, nProbe collects and parses NetFlow/sFlow traffic from the devices, and sends the resulting flows to ntopng for visualization.
  • The monitoring of physical network interfaces that are attached to remote systems. In this scenario, ntopng can neither directly monitor the network interfaces nor see their packets. One or multiple nProbe instances can be used to capture remote network interface traffic and send the resulting flows towards a central ntopng for analysis and visualization.

The following picture summarizes the two scenarios highlighted above and demonstrates that they can also be combined together.

This post complements the extensive documentation already available at https://www.ntop.org/guides/ntopng/case_study/using_with_nprobe.html and serves as a quick memorandum to effectively deploy ntopng and nProbe for the collection of flows.

 

Multiple nProbe to One ntopng

Collecting flows from multiple nProbe instances using a single ntopng can be useful to have a single place in charge of visualizing and archiving traffic data.

To collect flows from multiple nProbe instances, ntopng has to be started with an extra c (that stands for collector) at the end of the ZMQ endpoint, whereas every nProbe needs the option --zmq-probe-mode. In this configuration, the nProbe instances initiate the connection towards ntopng, which acts as a server, and not vice versa. Therefore, you must ensure ntopng is listening on the ANY address (that is, the wildcard * in the ZMQ endpoint address) or on another address that is reachable by the various nProbe instances.

An example of such a configuration is the following:

ntopng -i tcp://*:5556c
nprobe --zmq "tcp://<ip address of ntopng>:5556" --zmq-probe-mode -i eth1 -n none -T "@NTOPNG@"
nprobe --zmq "tcp://<ip address of ntopng>:5556" --zmq-probe-mode -i none -n none --collector-port 2055 -T "@NTOPNG@"
nprobe --zmq "tcp://<ip address of ntopng>:5556" --zmq-probe-mode -i none -n none --collector-port 6343 -T "@NTOPNG@"

 

NAT

IP reachability between nProbe and ntopng cannot always be taken for granted. Sometimes, it may be necessary for an ntopng to collect flows from an nProbe in a separate network, possibly behind a NAT or even shielded by a firewall. Similarly, it may be necessary for an ntopng behind a NAT to collect flows from an nProbe in another network. Luckily, to handle these scenarios, ntopng (and nProbe) can be configured to act as either the client or the server of the JSON-over-ZMQ communication, interchangeably. This avoids the insertion of lengthy, time-consuming, and possibly insecure rules in network devices, as it is enough to ensure the client can reach the server, while NATs will automatically handle the returning server-to-client part of the communication.

When both nProbe and ntopng are on the same network, or when ntopng is in another network but can reach nProbe, the following configuration should be used

ntopng -i tcp://<ip address of nProbe>:5556
nprobe --zmq "tcp://*:5556" -i eth1 -n none -T "@NTOPNG@"

When ntopng cannot reach nProbe, but nProbe can reach ntopng the configuration that should be used is

ntopng -i tcp://*:5556c
nprobe --zmq "tcp://<ip address of ntopng>:5556" --zmq-probe-mode -i eth1 -n none -T "@NTOPNG@"

Note that changing the client/server roles of ntopng and nProbe does not affect the subsequent flow collection so both configurations can be used interchangeably.

 

Templates

The set of flow fields that is sent from nProbe to ntopng is controlled using option -T. All the fields listed there will be sent to ntopng. The macro @NTOPNG@ is used as an alias for the minimum set of fields necessary for ntopng to operate correctly. The macro can be combined with additional fields as shown in this example:

nprobe --zmq "tcp://*:5556" -i eth1 -n none -T "@NTOPNG@ %IN_SRC_MAC %OUT_DST_MAC"

Symmetric-Key Encryption

A symmetric key can be used to encrypt the stream of flows that goes from nProbe to ntopng. The key needs to be specified both on ntopng and nProbe using option --zmq-encrypt-pwd:

ntopng -i "tcp://127.0.0.1:1234" --zmq-encrypt-pwd myencryptionkey
nprobe -i eth0 -n none --zmq "tcp://127.0.0.1:1234" --zmq-encrypt-pwd myencryptionkey

Data Compression

The stream of flows is automatically compressed by nProbe before being sent to ntopng. For debugging purposes, it may be useful to turn off compression. Compression can be turned off using option --zmq-disable-compression

nprobe -i eth0 -n none --zmq "tcp://127.0.0.1:1234" --zmq-disable-compression

Data Buffering

To optimize performance at high speeds, nProbe automatically buffers multiple flows into a single message. This makes it possible to achieve higher throughputs at the expense of a slightly higher latency and larger messages traveling on the network. Disabling buffering may be a good choice in low speed environments. Buffering can be disabled using option --zmq-disable-buffering

nprobe -i eth0 -n none --zmq "tcp://127.0.0.1:1234" --zmq-disable-buffering
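
The options above can be combined. For instance (addresses indicative), a probe-mode nProbe sending encrypted flows to a collector-mode ntopng could be configured as:

ntopng -i tcp://*:5556c --zmq-encrypt-pwd myencryptionkey
nprobe -i eth1 -n none --zmq "tcp://<ip address of ntopng>:5556" --zmq-probe-mode --zmq-encrypt-pwd myencryptionkey -T "@NTOPNG@"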

 

Introducing nProbe 8.6: Per-Second Measurements and Collection of Proprietary Flows


We are glad to announce nProbe 8.6, a new stable release. Among the main new features, this release brings:

  • Per-second measurements of flows traffic
  • Ability to collect proprietary flows (i.e. flows using non-standard information elements)

These new features come along with a wide range of extensions and improvements to the existing features and, last but not least, security and stability fixes.

Let’s have a brief look at the two main new features mentioned above.

Per-second Traffic Measurements

Getting cumulative measurements with respect to the flow lifetime does not always provide enough information to really understand certain traffic patterns. Traditional flow-monitoring technologies such as NetFlow, which just report cumulative measurements at the end of the flow (i.e. average traffic values), create a blind spot across the whole flow lifetime. Receiving a 2 GB flow with a 2-minute lifetime doesn’t tell anything about the actual pattern of the traffic. Were those 2 GB sent in a bunch of seconds right before the end of the flow? Or were they sent at a constant rate for 2 minutes?

Well, this release of nProbe offers extra visibility into the traffic by providing second-by-second flow byte counters. This means that you’ll be able to get, for every monitored flow, a timeseries with a point every second telling the exact volume of traffic generated by that flow.

Cool, isn’t it? Check out this blog post for a detailed description of this feature.

Collection of Proprietary Flows

Until the previous release, nProbe was able to collect some selected proprietary information elements in addition to the standard NetFlow and IPFIX ones. As our user community has asked us to support more proprietary fields, we have decided to change the nProbe engine so that it can be extended by means of a configuration file, instead of having to modify the application engine every time. As specified in the nProbe user’s guide, which covers all the details, nProbe is now able to collect proprietary flows from selected manufacturers including:

  • Alcatel-Lucent
  • Cisco
  • Gigamon
  • Ixia
  • Palo Alto
  • Procera
  • SonicWall

by simply defining the proprietary fields in a text file. You can refer to this page for details about the above manufacturers’ configuration files.

Changelog

The complete list of changes, including enhancements and fixes, is available below

Main New Features

  • Added second-by-second client-to-server and server-to-client flow bytes
    • https://www.ntop.org/nprobe/introducing-per-second-measurements-in-nprobe-flow-exports/
  • Implemented an embedded web server that can be optionally enabled to
    • Force a flush of all the active flows
    • Retrieve monitored traffic statistics
    • Query the nProbe version
  • Seamless support of ElasticSearch 5 and 6 and automatic template push
    • ElasticSearch version is automatically determined upon nProbe startup
    • A proper template is pushed to ElasticSearch on the basis of its version
  • Implemented modbus plugin

Extensions

  • Added support for the collection of NetFlow/IPFIX vendor-proprietary information elements through simple configuration files
  • Supported vendors include Sonicwall, Cisco, Ixia, and others
  • Configuration files published at https://github.com/ntop/nProbe/tree/master/custom_fields
  • The default NetFlow version is now V9
  • Plugins are disabled in collector mode
  • Improved support for Ubuntu18
  • Implemented SIP user agent dissection (client and server)
  • Implemented TCP/UDP/Other minimum flow size to discard flows below a certain size
  • nProbe runs with user ‘nprobe’ by default, falling back to nobody if user ‘nprobe’ does not exist
  • New NetFlow information elements %NAT_ORIGINATING_ADDRESS_REALM and %NAT_EVENT
  • L7_PROTO now exports the protocol in <master>.<application> format
  • Added fields %SRC_TO_DST_SECOND_BYTES and %DST_TO_SRC_SECOND_BYTES to export second-by-second flow bytes
  • Migrated geolocation support to GeoLite2 and libmaxminddb
  • Migrated the nProbe manual to HTML
  • Manual available at https://www.ntop.org/guides/nProbe/

New Options

  • --http-server to enable the embedded web server
  • --help-netflow to dump a long help including plugin and template information

Fixes

  • Checks to harden comparisons with partial strings
  • Further checks to avoid crossing certain memory boundaries
  • Checks to avoid loops with malformed SCTP packets
  • Fixes for flow start/end times and timestamp calculation in proxy mode
  • Fixed issues with SIP call ID in RTP flows
  • Fixed length calculation in IPFIX variable-length fields
  • Fixed ZMQ buffer termination when flushing ZMQ buffers
  • Fixed wrong %EXPORTER_IPV4_ADDRESS exported over ZMQ in case of NetFlow != v5
  • Fixed a race condition that was preventing all flows from being dumped to file
  • Fix to avoid dumped files being overwritten when -P is used with -F < 60
  • Added missing librdkafka support on CentOS 7

Using nProbe and ntopng for Collecting and Visualizing Sonicwall Flows


nProbe is both a probe and a NetFlow/sFlow collector. Recently, we have also added the ability to collect flows with proprietary information elements. This greatly improves nProbe’s flexibility, as any custom, vendor-proprietary information element can be understood, correctly parsed, and exported downstream.

Adding proprietary information elements to nProbe is a breeze. Indeed, it suffices to use a plain-text file with the element descriptions. That’s all. Once the fields have been loaded from the plain-text file, they can be treated as if they were regular fields. So, for example, they can be added to the nProbe template -T to have them exported to Kafka, ntopng, text files, or any other supported downstream store.

We maintain and make publicly available several plain-text files that contain element descriptions for several vendors, including Cisco, Gigamon and Sonicwall. These files can be fetched at https://github.com/ntop/nProbe/tree/master/custom_fields.

In this article we show how to properly configure nProbe to collect Sonicwall flows. However, the methodology described here is general and can be applied, mutatis mutandis, to any other vendor. It is also shown how the collected data can be visualized with ntopng.

Configuring nProbe

As anticipated above, nProbe needs a plain-text file with the element descriptions to understand Sonicwall proprietary fields. This file is available at https://github.com/ntop/nProbe/blob/master/custom_fields/Sonicwall/sonicwall_custom_fields.txt. Download the file somewhere on the filesystem (make sure to download it using the “raw” link), as you will have to feed it to nProbe using option --load-custom-fields.
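
For example, the raw file can be fetched directly into place (the destination directory matches the assumptions below):

wget -O /etc/nprobe/sonicwall_custom_fields.txt https://raw.githubusercontent.com/ntop/nProbe/master/custom_fields/Sonicwall/sonicwall_custom_fields.txt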

Now, for the sake of example, let’s assume:

  • One or more Sonicwall devices are exporting IPFIX with Extensions to the host running nProbe on port 2055.
  • Collected IPFIX with Extensions flows have to be sent to ntopng for the visualization via ZMQ on port 5556.
  • The file sonicwall_custom_fields.txt has been downloaded to  /etc/nprobe/.

According to the assumptions above, the configuration file for nProbe can be the following

--collector-port=2055
-n=none
-i=none
--load-custom-fields="/etc/nprobe/sonicwall_custom_fields.txt"
--zmq="tcp://127.0.0.1:5556"
--zmq-probe-mode=
-T="@NTOPNG@ %FLOW_TO_APPLICATION_ID %FLOW_TO_USER_ID %FLOW_TO_IPS_ID %IF_STAT_IF_NAME %IF_STAT_IF_TYPE %IF_STAT_IF_SPEED"

In the example, only a limited number of information elements (those listed in the template) is actually exported to ntopng. As you can see, they are treated as if they were regular fields.

That’s pretty much all for nProbe. Everything is set up for the collection of Sonicwall flows. Let’s now have a look at ntopng for the visualization, as there’s a juicy bonus here: the ability to visualize pie charts of proprietary Sonicwall application IDs and signatures.

Data Visualization with ntopng

In terms of configuration, nothing changes on the ntopng side. To collect flows coming from nProbe on port 5556, the minimum configuration needed for ntopng is a one-liner

--interface="tcp://127.0.0.1:5556c"

Start ntopng to have Sonicwall flows beautifully shown inside ntopng, along with their custom fields defined in the template.

There is also a pie chart with proprietary Sonicwall applications, available both at the interface level and for every host. This extra pie chart will automatically appear at the top of the Protocols page.

In addition, all the custom fields will appear in every flow details page. For example, the UDP flow below has been detected as General DNS by Sonicwall. ntopng augments this information by telling you that it is a Google flow, but it also shows the Sonicwall-detected application untouched inside the Additional Flow Elements.

 


Securing ntopng with SSL and Let’s Encrypt


As you know, the ntopng web interface supports both HTTP (default) and HTTPS. The reason why ntopng does not default to HTTPS is that we provide self-signed certificates that web browsers dislike. Fortunately, today you can create a free SSL certificate recognised by all browsers by using the Let’s Encrypt open certificate authority (CA). This article describes how you can do this in a few simple steps: for simplicity we limit our scope to Ubuntu/Debian, but on other distros the procedure is similar.

  1. Install certbot as described in this article
  2. Suppose that you want to run ntopng on a server named myntopng.ntop.org. Note that this host must have a public IP address and a web server installed such as Apache (with HTTP of course as we’re creating the certificate for HTTPS).
  3. Then type (as root) “certbot --apache -d myntopng.ntop.org” as shown in the example below
    root@myntopng:/home/deri/ntopng # certbot --apache  -d myntopng.ntop.org
    Saving debug log to /var/log/letsencrypt/letsencrypt.log
    Plugins selected: Authenticator apache, Installer apache
    Obtaining a new certificate
    Performing the following challenges:
    http-01 challenge for myntopng.ntop.org
    Waiting for verification...
    Cleaning up challenges
    Created an SSL vhost at /etc/apache2/sites-available/000-default-le-ssl.conf
    Enabled Apache socache_shmcb module
    Enabled Apache ssl module
    Deploying Certificate to VirtualHost /etc/apache2/sites-available/000-default-le-ssl.conf
    Enabling available site: /etc/apache2/sites-available/000-default-le-ssl.conf
    
    Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    1: No redirect - Make no further changes to the webserver configuration.
    2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
    new sites, or if you're confident your site works on HTTPS. You can undo this
    change by editing your web server's configuration.
    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 1
    
    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    Congratulations! You have successfully enabled https://myntopng.ntop.org
    
    You should test your configuration at:
    https://www.ssllabs.com/ssltest/analyze.html?d=myntopng.ntop.org
    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
  4. At this point Let’s Encrypt has created the certificate and modified the Apache configuration adding the path of the generated certificates
    SSLCertificateFile /etc/letsencrypt/live/myntopng.ntop.org/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/myntopng.ntop.org/privkey.pem
    
  5. Now you need to concatenate the private and public keys and place them into the httpdocs/ssl directory of the ntopng installation
    root@myntopng:/home/deri/ntopng # cat /etc/letsencrypt/live/myntopng.ntop.org/privkey.pem /etc/letsencrypt/live/myntopng.ntop.org/fullchain.pem > ./httpdocs/ssl/ntopng-cert.pem
  6. Then you need to restart ntopng with the -W flag, which allows you to specify the HTTPS port on which ntopng will be listening. In case you specify both -w (for HTTP) and -W (for HTTPS), whenever you connect to the HTTP port ntopng will redirect you to the HTTPS port. If you want to disable HTTP altogether, you need to specify “-w 0”.
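
For instance (port choice indicative), to serve the web interface over HTTPS only, on port 443, with HTTP disabled:

ntopng -w 0 -W 443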

Now that you know how to secure ntopng with HTTPS you have no excuse for using the insecure HTTP protocol.

Enjoy!

Promoting Traffic Visibility: from Application Protocols to Traffic Categories in nDPI and ntopng


Often we receive emails asking questions like: “How many protocols does nDPI support?” or “How do you position nDPI against commercial DPI toolkits A, B, C?”. Although these questions are reasonable, they do not grasp the significance of DPI. For years commercial toolkits have run the race for protocols: I have 200 protocols, I have 1000 protocols, I have 500. Yet when asked what they mean by the term “protocol”, people list traffic from/to sites like cnn.com or bbc.co.uk. But BBC is not a protocol: it is rather traffic (for instance HTTP or DNS) going towards *.bbc.co.uk hosts. So comparing DPI toolkits today based on the number of (so-called) protocols is a bad idea, as most of these are anything but protocols.

Actually, having many protocols is a pain. Computer scientists know what protocols are about, but if you ask a non-technical person to list the peer-to-peer protocols, I am not sure you will receive a correct answer. In addition, having too many protocols is bad: do you prefer to say “block all the social network traffic” or “block Facebook, Twitter…”? Which one is more error prone? This is not to mention that as soon as a new social network becomes popular you have to review all the settings of your app, whereas letting the DPI toolkit take care of this is transparent and thus a better solution. Finally, suppose you want to block all advertisement sites and update their list daily via an Internet feed: this would be a nightmare to maintain if each site were mapped to a protocol instead of a category.

To make this long story short, nDPI introduced categories in the latest release, and this has enabled us to make ntopng and nEdge better and more configurable. The problems that we are going to tackle include:

  • I want to block all advertisement sites (nEdge)
  • I want to trigger an alert whenever my employees access a malware site (ntopng, whereas in nEdge you have the bonus also to block this traffic)
  • I run a grocery shop which provides free WiFi to customers, and I want to prevent them from accessing with the WiFi sites of competitors as they are using them for comparing prices (nEdge)

While in nDPI you can manipulate categories through the API, in ntopng you can do that using the web interface. Going to the Protocols entry in the preferences menu, you can access all the known application protocols and bind them to categories (or modify the default category provided by nDPI).

You can also edit the categories by adding new hosts or IPs, accessing the Categories menu entry in the Preferences

and adding custom hosts

In addition to this, you can do it easily while looking at flows. Suppose you are watching a video and you see advertisements or tracking on your screen: you can block selected sites by listing the flows, then accessing the flow info of those flows whose host names look suspicious. As you can see from the picture below,

we have added a + icon that allows you to add the host to a selected category by clicking on it.

This is a simple procedure that should enable everyone to manipulate categories with the mouse rather than using the keyboard.

We hope that the introduction of this new feature will enable people to better categorise their traffic and see what really happens on the network. Remember that ntopng produces reports like the one below, so you can see host-by-host or network-by-network what is flowing on the network.

Enjoy!

 

20 Years of ntop: The Conference


Last Friday, Oct 26th, at the University of Pisa we celebrated 20 years of ntop open source code development and hacking culture. It was a success, with over 110 registered people and 24 people in the morning training session. We decided to celebrate this event where ntop was created, and where most of the team lives. The idea is to periodically repeat this event in other locations. The core of these meetings is the community, rather than the core team. The main feedback we received is that people want to know more about our tools and that training is a very important piece of all this. We discussed the history of ntop and how people use ntopng in security, and we also previewed the experiments we are carrying out, including the use of vocal interfaces for interacting with the tools, and how we are moving towards system introspection. As this was a domestic event, most of the presentations are in Italian and can be found below.

Luca Deri
20 Years of ntop

Guido Falsi
Open Source and ntop Packaging on FreeBSD

Piero Nicoletti
Network Traffic Troubleshooting and Analysis

Simone Bonetti
How to Monitor the Health of a Network

Antonio Pandolfi
Protocol Tunnelling Detection Using ntop

Georg Kostner
Real User Experience Monitoring [EN]

Cristiano Bozzi and Raffaele Borgese
ntop and the Boating World

Alfredo Cardigliano
14 Years of PF_RING [EN]

Francesco Staccini
Using Voice Interfaces in ntopng

Samuele Sabella
System and Network Monitoring in ntopng

Using nProbe for Collecting Ixia IPFIX with IxFlow extensions


Ixia allows enriching IPFIX records with value-added extensions. The additional information that can be exported, along with standard fields such as source and destination IP addresses, includes:

  • Geographical information such as region, latitude and city name
  • Application ID or name, device, browser and even SSL cipher used
  • Detail on application and handset (device) type for mobile users
  • HTTP URL and hostname for web activity tracking
  • HTTP and DNS metadata for rapid breach detection
  • Transaction Latency for application performance tracking

The latest version of nProbe provides full support for Ixia IPFIX with IxFlow extensions. This means that nProbe can be configured to export IxFlow fields as if they were regular NetFlow fields. How to do that? Well, it’s pretty straightforward: just fire up nProbe with the Ixia IxFlow configuration file we have prepared. The configuration file contains IxFlow field names, along with other data necessary for nProbe to properly decode IxFlow fields out of the IPFIX stream.

This is an excerpt of the configuration file linked above

L7_APP_ID NONE 3054 110 4 dump_as_uint
L7_APP_NAME NONE 3054 111 128 dump_as_ascii
SRC_IP_COUNTRY_CODE NONE 3054 120 2 dump_as_ascii

Field names in the first column can be used in the nProbe template as if they were regular fields. Once you’ve told nProbe to use the configuration file (option --load-custom-fields) you can start exporting custom IxFlow fields. For example, the IxFlow L7_APP_ID can be exported by nProbe simply by specifying it in the template, along with other fields: -T "@NTOPNG@ %L7_APP_ID".

The following example shows a more comprehensive nProbe configuration that loads the configuration file under ../nProbe-opensource/custom_fields/Ixia/ixia_custom_fields.txt, listens for incoming Ixia IPFIX with IxFlow extensions on --collector-port 2056, and outputs to text files (-D t) under /tmp (-P /tmp/) a series of IxFlow fields, including %L7_APP_ID and  %L7_APP_NAME.

nprobe --load-custom-fields ../nProbe-opensource/custom_fields/Ixia/ixia_custom_fields.txt -i none -n none --collector-port 2056 -T "@NTOPNG@ %L7_APP_ID %L7_APP_NAME %SRC_IP_COUNTRY_CODE %SRC_IP_COUNTRY_NAME %SRC_IP_REGION_CODE %SRC_IP_REGION_NAME %SRC_IP_CITY_NAME %SRC_IP_LATITUDE %SRC_IP_LONGITUDE %DEST_IP_COUNTRY_CODE %DEST_IP_COUNTRY_NAME %DEST_IP_REGION_CODE %DEST_IP_REGION_NAME %DEST_IP_CITY_NAME %DEST_IP_LATITUDE %DEST_IP_LONGITUDE %OS_DEVICE_ID %OS_DEVICE_NAME %BROWSER_ID %BROWSER_NAME %REV_OCTET_DELTA_COUNT %REV_PACKET_DELTA_COUNT %CONNECTION_ENCRYPTION_TYPE" -D t -P /tmp/

This is a text file output by nProbe (x.x and y.y are used to anonymize sensitive data):

L7_PROTO|IPV4_SRC_ADDR|IPV4_DST_ADDR|L4_SRC_PORT|L4_DST_PORT|IPV6_SRC_ADDR|IPV6_DST_ADDR|IP_PROTOCOL_VERSION|PROTOCOL|IN_BYTES|IN_PKTS|OUT_BYTES|OUT_PKTS|FIRST_SWITCHED|LAST_SWITCHED|SRC_VLAN|L7_APP_ID|L7_APP_NAME|SRC_IP_COUNTRY_CODE|SRC_IP_COUNTRY_NAME|SRC_IP_REGION_CODE|SRC_IP_REGION_NAME|SRC_IP_CITY_NAME|SRC_IP_LATITUDE|SRC_IP_LONGITUDE|DEST_IP_COUNTRY_CODE|DEST_IP_COUNTRY_NAME|DEST_IP_REGION_CODE|DEST_IP_REGION_NAME|DEST_IP_CITY_NAME|DEST_IP_LATITUDE|DEST_IP_LONGITUDE|OS_DEVICE_ID|OS_DEVICE_NAME|BROWSER_ID|BROWSER_NAME|REV_OCTET_DELTA_COUNT|REV_PACKET_DELTA_COUNT|CONNECTION_ENCRYPTION_TYPE
7|1.38.y.y|1.38.y.y|50633|80|::|::|4|6|676|3|0|0|1526982979|1526982979|0|144|facebook|IN|India|MH|Maharashtra|Mumbai|18.xxx|72.xxx|IN|India|MH|Maharashtra|Mumbai|18.xxx|72.xxx|3|MacOS|0|Chrome|0|0|Cleartext
7|1.38.y.y|1.38.y.y|80|50633|::|::|4|6|586|3|0|0|1526982979|1526982979|0|144|facebook|IN|India|MH|Maharashtra|Mumbai|18.xxx|72.xxx|IN|India|MH|Maharashtra|Mumbai|18.xxx|72.xxx|3|MacOS|0|Chrome|0|0|Cleartext
7|1.38.y.y|1.38.y.y|29194|80|::|::|4|6|664|3|586|3|1526982979|1526982979|0|144|facebook|IN|India|MH|Maharashtra|Mumbai|18.xxx|72.xxx|IN|India|MH|Maharashtra|Mumbai|18.xxx|72.xxx|3|MacOS|0|Chrome|0|0|Cleartext

Happy IxFlow parsing!

sFlow Collection and Analysis with nProbe and ntopng


sFlow, short for sampled Flow, is a sampling technology designed to export network device information, namely:

  • Interface counters (à la SNMP MIB-II);
  • Traffic packets (à la ERSPAN).

sFlow agents run on switches, routers, firewalls and other devices, and periodically export interface counters and traffic packets via UDP towards one or more sFlow collectors. sFlow, relying on sampling processes to periodically export counters and packets, is scalable and ultra-lightweight, and has been embedded into network devices by tens of vendors and manufacturers.

Contrary to NetFlow (please note that in sFlow parlance the word ‘flow’ has a totally different meaning with respect to what ‘flow’ means in NetFlow), which requires a stateful representation of all the network flows to operate, sFlow merely samples packets and counters, and thus its impact on network device memory and CPU is much lower. For this reason, it should be the technology of choice when carrying out certain network monitoring tasks. ntop team members have discussed this technology in detail during a couple of SharkFest conferences (SharkFest Europe, SharkFest US). The interested reader can download the presentation slides to gain a deeper understanding of sFlow, or even watch the YouTube video of the presentation:

ntop software tightly integrates with sFlow:

  • nProbe acts as an sFlow collector and can collect sFlow from tens or even hundreds of network devices, simultaneously;
  • ntopng receives collected sFlow data from nProbe and is in charge of providing visualizations and actionable insights from this data.

Hence, using the ntop software nProbe and ntopng, it is possible to easily and quickly set up a monitoring architecture for multiple sFlow-capable network devices in minutes.

Let’s see an example. Let’s say there are sFlow agents exporting sFlow towards port 6343 of a host running nProbe. Let’s also say an ntopng instance running on the same host is used for visualization and analysis. Configuring nProbe and ntopng is a breeze, and it merely boils down to:

nprobe -i none -n none --collector-port 6343 --zmq tcp://127.0.0.1:5556
ntopng -i tcp://127.0.0.1:5556

Commands above have the following meaning:

  • nprobe is neither collecting from a physical interface (-i none) nor exporting flows towards a downstream NetFlow collector (-n none); it’s just collecting incoming sFlow on port 6343 (--collector-port 6343). Collected data is then exported to an ntopng running on localhost port 5556 via ZMQ (--zmq tcp://127.0.0.1:5556).
  • ntopng is listening on port 5556 for incoming collected sFlow data (-i tcp://127.0.0.1:5556).

The ntopng web dashboard will shortly populate with collected data, including top senders and top destinations, as well as the top layer-7 application protocols detected from the traffic. Similarly, all the other ntopng pages will populate with rich data. For example, one can visualize the top layer-7 application protocols by visiting the interface page:

Nevertheless, ntopng not only provides visibility into the traffic that is traversing the network devices: it also provides visibility into the devices themselves. To access the list of devices that are currently actively exporting sFlow, select the “sFlow Exporters” entry from the “Devices” dropdown.

The devices list will be shown, along with a chart link, as well as data obtained via SNMP (when available). SNMP data shows up automatically when SNMP monitoring is configured in ntopng and there is a match between the IP address of the SNMP device and that of the sFlow device.

The address of every device listed can be clicked to access a summary of all the device interfaces, including their status and total traffic:

Similarly, the chart link provides visibility into the traffic of every interface of the selected network device. A handy stacked view provides a useful summary of the interface activity:

For example, from the chart above it is clear that the interface with id 8 accounts for most of the device traffic over the selected 6-hour time interval.

Happy sFlow collection!
