
Traffic Classification Using nDPI over DPDK


Last week we attended the DPDK Summit North America 2018 and talked about how to use nDPI over DPDK, a kernel-bypass toolkit similar to PF_RING. Those who did not attend the presentation can read the presentation slides.

As you will read, nDPI is a cross-platform deep packet inspection toolkit able to process about 10 Gbit/s of traffic with a single core on an Intel E3 CPU. Its code is portable across various architectures, it can be used from both user space and the kernel (the latter is not what we advise you to do), and it is available in many Linux distributions as well as on MacOS.

Enjoy!


Use Remote Assistance to Connect to ntopng Instances


A problem some ntop users have to face is how to remotely access an ntopng instance running behind a firewall. This can be solved using a VPN or other means that often require deploying an additional network service. Some of our users are familiar with n2n, an open source peer-to-peer VPN that ntop develops and maintains. In essence, with n2n it is possible to create a network overlay that allows you to access your assets in a secure way, regardless of your network configuration. For this reason we have merged n2n into ntopng, to enable you to remotely connect to your ntopng instances. The idea is not to create permanent access (something you can do when you set up n2n yourself), but rather to enable temporary ntopng access for troubleshooting and support.

 

When remote assistance is enabled, the local host where ntopng runs creates a virtual adapter with IP address 192.168.166.1, registers with the n2n supernode (a daemon that enables communications between two peers behind a NAT: conceptually it is like a router, but one unable to decrypt packets, only to deliver them to peers), and provides a script to be run on the remote end to allow administrators to connect to the ntopng instance. Both peers (or even just ntopng) can be behind a NAT, and n2n will take care of the communications, regardless of the local IP addresses of the remote user and of ntopng itself. When you install a recent ntopng development package (the next stable release will include it), you will notice a new menu entry

 


that will allow you to configure it.

Using a simple user interface you can enable remote ntopng access in a matter of clicks. Once remote assistance is enabled, you can download a connection script to send to those who want to connect remotely to this ntopng instance. The script requires n2n to be installed, and it connects to the remote ntopng instance as depicted above in this post. ntop provides a public supernode that everyone can use, but in the preferences you can configure your own supernode to implement remote access without relying on external nodes.
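For reference, the generated script essentially wraps an n2n edge invocation. A minimal sketch of what such a command can look like is shown below; the community, key and supernode values are illustrative placeholders, as the real ones are embedded in the script ntopng generates:

$ edge -a 192.168.166.2 -c <community> -k <key> -l supernode.example.org:7777

Once the edge is up, the remote administrator can point the browser to http://192.168.166.1:3000 (assuming the default ntopng port) to reach the instance.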

Please note that:

  • ntop does NOT have access to your remote instances; only you do.
  • ntop is NOT responsible for security violations, intruders, etc. Make sure you understand the risks of allowing remote access.
  • By providing remote assistance, you allow remote users to access the host where ntopng is running (e.g. files can be copied) and not just the ntopng web interface.
  • By enabling remote assistance you allow external users to bypass firewalls, NAT, etc., so make sure your network policies allow you to do that.
  • You should enable remote access only for the time you need to troubleshoot your remote instance. By default, ntopng will disable remote access after 24 hours, to prevent unwanted/permanent remote access. For permanent access, please set up a VPN such as n2n.

 

Happy remote troubleshooting!

Remote ntopng Authentication with RADIUS and LDAP


In large organizations, it is common to have a centralised authentication system usually named AAA (Authentication, Authorization and Accounting). Managing users typically involves the definition and enforcement of the rights to do some operations or to access certain resources in a network. Being able to grant (or deny) such rights using a centralized authentication system is the only viable solution when it comes to dealing with large organizations with hundreds, or even thousands, of users that periodically join and leave.

AAA protocols include Remote Authentication Dial-In User Service (RADIUS) and the Lightweight Directory Access Protocol (LDAP). Supporting RADIUS and LDAP was the only way to seamlessly integrate ntopng in infrastructures with existing users and policies, in particular to avoid the need to redefine (and keep updated) ntopng web users and privileges to match the existing users in the organisation.

RADIUS and LDAP are included in the current dev release as well as in the forthcoming 3.8 stable release. LDAP and RADIUS can be configured from the ntopng preferences, simply by selecting tab “User Authentication” and turning the corresponding switch to “On“. Once the switch is set to “On“, a series of protocol-specific configuration properties pop up.

This post briefly shows how each of these two protocols can be configured in ntopng.

LDAP

LDAP properties that can be configured are shown in the picture below.

  • LDAP Accounts Type allows choosing whether the login should be performed in an Active Directory (AD) or in a Posix environment.
  • LDAP Server Address specifies the IP address or name of the LDAP server:
    • The server must be preceded with prefix ldap:// when the connection is unencrypted or with ldaps:// when LDAP-over-SSL is enabled.
    • The server must be followed by a colon and a port number that indicates the port used by the LDAP server to listen for incoming connections.
  • LDAP Anonymous Binding indicates whether the server accepts anonymous binding requests or not. When anonymous binding is disabled, a couple of extra fields appear:
    • LDAP Bind DN, that is, the distinguished name used to perform the bind.
    • LDAP Bind Authentication Password, that is, the password used to perform the bind.
  • LDAP User Group specifies the group the user must belong to in order to be authenticated as a non-administrator ntopng user.
  • LDAP Admin Group specifies the group the user must belong to in order to be authenticated as an administrator ntopng user.
  • Follow Referrals specifies whether the client should automatically follow referrals returned by LDAP servers.
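To make the properties concrete, this is a hypothetical set of values for an LDAP-over-SSL server with non-anonymous binding (server name, DNs and group names are purely illustrative):

LDAP Accounts Type:                 Posix
LDAP Server Address:                ldaps://ldap.example.org:636
LDAP Anonymous Binding:             Off
LDAP Bind DN:                       cn=admin,dc=example,dc=org
LDAP Bind Authentication Password:  <password>
LDAP User Group:                    usersGroup
LDAP Admin Group:                   adminGroup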

A Configuration Example of SLAPD (Standalone LDAP Daemon) with phpLDAPadmin

The installation of slapd and phpLDAPadmin falls outside the scope of this example. There is already a tutorial that explains very well how to set up slapd and phpLDAPadmin on Ubuntu.

This example shows how phpLDAPadmin can be used to create a couple of users, one administrator and one unprivileged user, and let them log into ntopng with the right privileges. Specifically, the following users will be created:

  • ntopadmin with admin privileges
  • ntopuser with user privileges

Two groups will be associated with the created users:

  • adminGroup to identify administrators
  • usersGroup to identify unprivileged users

Those groups will be added both to the LDAP server and to ntopng to make sure the corresponding users will be able to log in with the right privileges.

Step by Step Setup

Once the installation of slapd and phpLDAPadmin is completed, one can point the browser to the running phpLDAPadmin instance and indicate the Login DN and password created during the installation.

Login DN and password are the same ntopng uses for the binding that, in this case, is not anonymous. Therefore, one must also specify them in the ntopng preferences as LDAP Bind DN and LDAP Bind Authentication Password, respectively, to let ntopng access the LDAP server.

After a successful login, one will find the hierarchical LDAP tree on the left of the page, and a series of actions on the right. “Create a Child Entry” can be used to start adding users.

To add users one can select “Create Generic Account” after clicking on “Create a child entry“.

 

Users can then be defined simply by populating the available fields as shown below. There is nothing really special. The only important thing to note is the “Common Name”, as that is the name ntopng uses when matching against the submitted username.

 

At this point users are created and it is time to add the adminGroup and usersGroup. One can create groups by selecting “Create Child Entry” and then “Generic: Posix Group“.

The two groups can be created simply by assigning the aforementioned names and making sure to tick the checkbox next to the user that has to be associated with the group.

Now everything is ready in phpLDAPadmin. The last thing to do is to configure ntopng to make sure the users will be logged in as desired. The configuration is shown in the picture below.

In particular, note that:

  • The LDAP Bind DN and Password are those also used when authenticating to the phpLDAPadmin page.
  • The names of LDAP User Group and LDAP Admin Group are the same as the group names created in phpLDAPadmin.
  • Groups and users are children of the LDAP Search Path in the tree.

RADIUS

RADIUS properties that can be configured are shown below.

  • RADIUS Server Address specifies the IP address or name of the RADIUS server and is followed by a colon and a number indicating the port on which the RADIUS server is listening.
  • RADIUS Secret is the shared secret between the ntopng host and the radius server, and is used to transmit obfuscated passwords.
  • RADIUS Admin Filter-Id is used to decide whether a user should be authenticated as an administrator. When the value of the Filter-Id Attribute-Value pair returned by RADIUS matches the one specified in the property field, the user is authenticated as an administrator; otherwise it is authenticated as an unprivileged user.

A Configuration Example of a FreeRadius RADIUS Server

Installation instructions of a FreeRadius server can be found here and fall outside the scope of this example. Here it is shown how FreeRadius can be configured with two users, namely ntopadmin and ntopuser, that will log into ntopng with administrative and non-administrative privileges, respectively.

FreeRadius users are configured in file /etc/freeradius/users. For each user, a cleartext password is configured. In addition, to make sure user ntopadmin will be an administrator, an extra Filter-Id “ntopAdmin” is added for that user.

# tail /etc/freeradius/users
# #
# # Last default: shell on the local terminal server.
# #
# DEFAULT
# Service-Type = Administrative-User
# On no match, the user is denied access.
ntopuser Cleartext-Password := "Password123"
ntopadmin Cleartext-Password := "Password456"
                    Filter-Id = "ntopAdmin"

Once the users are configured, the FreeRadius server must be configured with a secret shared with the host running ntopng. Assuming such host runs with IP 192.168.2.225, file /etc/freeradius/clients.conf can be configured as

#  tail /etc/freeradius/clients.conf
client 192.168.2.225 {
        secret    = "testing456"
        shortname = "develv5"
        nastype   = other
}

At this point FreeRadius can be restarted with

# /etc/init.d/freeradius restart

The last thing to do is to set up the ntopng RADIUS configuration page. Assuming the FreeRadius server has IP 192.168.2.165 and listens on the standard port 1812, one can use the following configuration

The RADIUS Secret specified is the same “testing456” indicated in file /etc/freeradius/clients.conf. Similarly, the RADIUS Admin Filter-Id is the same “ntopAdmin” specified in file /etc/freeradius/users.
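The setup can be quickly verified from the host running ntopng using the radtest utility that ships with FreeRADIUS (assuming it is installed; the 0 is the NAS port number):

$ radtest ntopadmin Password456 192.168.2.165 0 testing456

An Access-Accept reply carrying the Filter-Id "ntopAdmin" attribute confirms that this user will be granted administrator privileges by ntopng.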

Advanced SNMP Monitoring with ntopng


It has been a while since we added SNMP support to ntopng. The first release, presented in this blog post, implemented basic SNMP support. Since then we have coded various improvements and new features, with the aim of turning ntopng into an advanced SNMP monitor.

Among the extensions we have implemented are the following:

  • A cache to decouple the polling of devices from the browsing of polled data
    • Devices are polled periodically by ntopng with a background task that cycles them at 5-minute intervals and sends polled data to the cache
    • Polled data is fetched from the cache when users browse ntopng SNMP web pages, yielding almost-instantaneous response times
  • Ability to add multiple devices with a single action
    • ntopng can scan and automatically add all the SNMP devices of a /24 network
  • 64-bit SNMP v2c+ counters
  • Extended monitoring of SNMP devices
    • Details page
    • Stacked charts of top interfaces
    • Seen MAC addresses
  • Extended monitoring of SNMP device interfaces
    • Throughput
    • Last-change
    • Input and output bytes
    • Seen MAC addresses
  • Ability to alert when the status of an SNMP device interface changes
    • Useful to detect flapping interfaces or interface connections/disconnections

The most important improvements are discussed below.

Extended Monitoring of SNMP Devices

SNMP devices now have their own details page, with a handy menu to browse their interfaces, seen MAC addresses and historical charts.

The most useful historical chart is probably the stacked one that shows the top interfaces' traffic speed over a certain timeframe. Right below the chart, interface totals for the same timeframe are shown as well.

Hyperlinks on the interfaces can be clicked to access the details page of any of the monitored interfaces.

Extended Monitoring of Device Interfaces

Device interfaces are now shown in a dynamically-loaded, paginated table. The data shown is fetched from a cache which is populated by ntopng in the background. This means you will not have to wait for potentially long SNMP walks before seeing the results: ntopng does periodic walks in the background to keep the cache updated!

 

Among the newly added columns, “Throughput” is probably the most important as it provides an immediate way to see the current load of any interface. Wondering how this throughput is calculated? Well, ntopng takes the difference between the total traffic counters polled during the two most recent SNMP walks, and divides it by the time that separates these two polls. And if you are wondering when the most recent SNMP walks were performed, you can check the bottom of the page to see the exact dates and times.
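As an illustration of this computation, the same counter delta can be reproduced by hand with Net-SNMP (host address, community and interface index are hypothetical):

$ snmpget -v2c -c public 192.168.1.1 IF-MIB::ifHCInOctets.2   # returns, say, 1000000
$ sleep 10
$ snmpget -v2c -c public 192.168.1.1 IF-MIB::ifHCInOctets.2   # returns, say, 2250000

# (2250000 - 1000000) bytes / 10 s = 125000 B/s, i.e. 1 Mbit/s of input throughput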

Alerts on Interfaces Status Change

Interface status changes could unveil potentially critical issues, including:

  • A faulty interface that flaps, periodically going from up to down and vice versa
  • Someone connecting to or disconnecting from an interface

ntopng now has the ability to signal such events using alerts. Alerts are reported inside ntopng but can also be exported to third-party endpoints such as email, Syslog, Slack and Nagios.

Happy SNMP monitoring with ntopng!

ntopng Disk Requirements for Timeseries and Flows


Being able to do a priori estimations of the space that ntopng is going to use in a production environment is fundamental for the provisioning of the storage.

In this post we try to estimate the space used by ntopng to store timeseries and flows.

Timeseries

The number of timeseries generated by ntopng depends almost exclusively on the number of local hosts. Other timeseries generated, including those for the interfaces or SNMP devices, are generally orders of magnitude fewer than those generated for local hosts. For this reason, it is safe to take into account only local host timeseries when doing the math.

For every local host, ntopng generates a timeseries for the traffic plus an extra series of Layer-7 application protocol timeseries, one for each application protocol. These timeseries can be disabled from the preferences, but clearly we kept them enabled for the measurements in this post.

In the remainder of this section we discuss the space required by ntopng to store timeseries, as a function of the number of local hosts, both for RRDs and InfluxDB. One can either choose to use RRDs or InfluxDB from the ntopng preferences page. We refer the interested reader to the Appendix to see how these numbers are calculated.

 

RRD

RRD files are fixed in size; this means that they won't grow as new data points arrive. ntopng creates one RRD per timeseries. With both traffic and Layer-7 application protocols enabled, the space required to store data for every local host is highlighted in the following table.

RRD                   5-minute Resolution
Timeseries storage    500 KB / Local Host

InfluxDB

Contrary to RRD, InfluxDB timeseries grow in size as time goes by. For this reason, the following estimation is not only a function of the number of local hosts, but also of the number of days of monitoring. In addition, as InfluxDB allows choosing the monitoring resolution, we give the space required at two different resolutions, namely 10 and 60 seconds.

InfluxDB              10-second Resolution         60-second Resolution
Timeseries storage    450 KB / Local Host / Day    75 KB / Local Host / Day
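As a back-of-the-envelope example of how to use these figures (the host count and retention period are arbitrary), the storage needed for 5,000 local hosts monitored for 30 days at 10-second resolution is:

$ echo "5000 * 450 * 30 / 1024 / 1024" | bc -l   # KB -> GB
64.37...

that is, roughly 64 GB of InfluxDB storage.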

Flows

As we are going to announce soon, we have designed and implemented a high-speed, high-capacity, special-purpose database for the storage of flows. With this database, we are able to dump tens of thousands of flows per second to disk. The space used to store each flow is shown in the following table.

Flow Index
Flows storage         11 Bytes / Flow

The above value is an average value based on IPv4 traffic with some IPv6 flows. It can increase if you mostly have IPv6 traffic and long metadata strings stored in flows.
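For instance, at a sustained rate of 10,000 flows per second (an arbitrary figure), the flow storage grows at about:

$ echo "10000 * 11 * 86400 / 1000000000" | bc -l   # bytes/day -> GB/day
9.50...

that is, roughly 9.5 GB per day.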

Appendix

In this appendix we discuss the math we have done to calculate the estimations above.

InfluxDB

To do the estimations for InfluxDB we have considered an ntopng instance running in production in a real environment, monitoring a SPAN port with an average traffic of 444.84 Mbps and an average of 22,323 hosts, inclusive of approximately 4,000 local hosts. ntopng is monitoring Layer-7 applications and dumping timeseries data points with a 10-second resolution.

Data is obtained as follows:

  • InfluxDB Storage: 154.14 GB as shown in the ntopng runtime status page.
  • Time of monitoring: 3 months as obtained from the ntopng interface stats page.

The math is the following:

  • KB / Local Host / Day @ 10s = 154.14 GB / 3 Months / 4,000 local hosts = ((154.14 * 1024 * 1024) / 4000 / 90) = 450 KB / Local Host / Day
  • KB / Local Host / Day @ 60s = (KB / Local Host / Day @ 10s) / 6 = 75 KB / Local Host / Day

RRD

To do the estimations of RRD we have used an ntopng running in a production system that is collecting sFlow from nProbe. The system has seen approximately 2,000 local hosts and has Layer-7 timeseries generation enabled.

Data is obtained as follows:

  • Number of local hosts: /var/lib/ntopng/0/rrd $ find . -name "bytes.rrd" | wc -l = 1,989
  • Number of RRDs: /var/lib/ntopng/0/rrd $ find . -name "*.rrd" | wc -l = 25,506
  • Total size of RRDs: /var/lib/ntopng/0/ $ du -hs rrd/ = 989M

The math is the following:

  • 989 M / 1,989 Local Hosts = (989 / 1989) * 1024 = 500 KB / Local Host

Flow Index

Flow index estimations have been done using the very same host used for InfluxDB. To compute the number of bytes used by each flow stored in the flow database, we have done the following math.

First, we have counted the number of flows over an hour

$ ./nindex -d /var/lib/ntopng/0/flows/ -b 1544713200 -e 1544716800 -l 0
14/Dec/2018 21:29:05 [nindex.cpp:346] Search time range [Thu Dec 13 15:00:00 2018 -> Thu Dec 13 16:00:00 2018]
14/Dec/2018 21:29:05 [nindex.cpp:356] Performing record count (-l 0)
14/Dec/2018 21:29:05 [nindex.cpp:393] Query completed in 2.1 msec, with 16'962'614 hits returned

Then, we have counted the disk space used to store data for that particular hour

$ du -hs /var/lib/ntopng/0/flows/2018/12/13/15
175M /var/lib/ntopng/0/flows/2018/12/13/15

Finally, we have obtained the Bytes / Flow as = 175M / 16962614 = (175 * 1024 * 1024) / 16962614 = 11 B / Flow.

PS. Note that we have scanned ~17 million records in ~2 msec. Not too bad for a low-end system using a SATA drive that ntopng is writing to in the meantime.

Measuring ntopng+nProbe Flow Processing Performance


In this post we try to analyze the performance of nProbe and ntopng for the collection of NetFlow. ntopng and nProbe will be broken down into smaller functional units and such units will be analyzed to understand the maximum performance of every single task as well as of the overall collection architecture.

The machine used for the analysis is equipped with a 4-core Intel(R) Xeon(R) CPU E3-1230 v5 @ 3.40GHz with HT and has 16 GB of RAM.

To consistently simulate a NetFlow stream and have the ability to control the stream rate, we have recorded some actual NetFlow traffic using tcpdump, and then replayed the recorded traffic with pfsend, which allows looping the replay (-n 0) and specifying the replay rate (-r <rate>).

$ tcpdump -i eno1 udp and port 2055 -s 0 -w localNetflow.pcap
$ pfsend -i stack:eno1 -f ./localNetflow.pcap -n 0 -r 0.1

nProbe

This is the command we’ve used to analyze nProbe performance:

$ ./nprobe -i none -n none --collector-port 2055 --zmq tcp://127.0.0.1:5556 -b 1 --disable-cache

Using --disable-cache guarantees nProbe will transparently proxy the incoming flows without keeping them into the internal cache.

nProbe NetFlow collection can be summarized as follows:

  • Incoming NetFlow is collected by nProbe on a UDP socket
  • nProbe dissects the NetFlow packets and extracts the flows carried in them
  • Extracted flows are enqueued for export
  • Flows are dequeued to be actually exported, via ZMQ or other means, to downstream collectors

The following picture shows a graphical representation of the several steps listed above, and provides information on the maximum rate that can go through each step without drops.

Going above the maximum rate automatically translates into drops. In the following, we will explain how we’ve computed the drops that can occur:

  • On the UDP socket, when the socket buffer overflows because the incoming NetFlow rate is higher than the rate at which nProbe reads from the UDP socket
  • On the export queue, when the rate at which nProbe enqueues the flows for export is higher than the rate at which flows are dequeued for the actual export

UDP Socket Drops

To analyze UDP socket drops we have used pfsend to replay NetFlow traffic at different rates, specified with option -r. We have found that the maximum drop-free rate at which packets can be processed from the UDP socket equals 160 Mbps or, equivalently, 25 Kpps. To detect drops and find this maximum drop-free rate we have inspected /proc/net/udp, making sure the drop counters stayed at zero for a sufficiently long time.

This is an example of drops experienced in the socket buffers, as shown in the last column of the last row of the following output.

$ cat /proc/net/udp
sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode ref pointer drops
1181: 00000000:0044 00000000:0000 07 00000000:00000000 00:00000000 00000000 0 0 33368 2 ffff8fc0a12d0cc0 0
1224: 00000000:006F 00000000:0000 07 00000000:00000000 00:00000000 00000000 0 0 33007 2 ffff8fc0a12d0880 0
1236: E102A8C0:007B 00000000:0000 07 00000000:00000000 00:00000000 00000000 38 0 31512 2 ffff8fc09ef28880 0
1236: 0100007F:007B 00000000:0000 07 00000000:00000000 00:00000000 00000000 0 0 29241 2 ffff8fc099a18880 0
1236: 00000000:007B 00000000:0000 07 00000000:00000000 00:00000000 00000000 0 0 29235 2 ffff8fc099a18000 0
3168: 00000000:0807 00000000:0000 07 00000000:00068100 00:00000000 00000000 1006 0 1122119 2 ffff8fc08466bfc0 260932

Export Queue Drops

To analyze export queue drops, we have used nProbe output when run with option -b 1 to observe the “export queue full” errors and the queue fill level. The relevant output is the following

19/Dec/2018 15:07:01 [nprobe.c:3351] Flow drops: [export queue full=44524035][too many flows=0][ELK queue flow drops=0]
19/Dec/2018 15:07:01 [nprobe.c:3356] Export Queue: 511980/512000 [100.0 %]

The export queue full counter is incremented by one every time a flow is dropped because the export queue is full. We have been able to quantify the maximum drop-free rate at which flows can enter and leave the queue at 90 Kfps, that is, approximately 110 Mbps of NetFlow traffic, produced with pfsend as follows

$ sudo pfsend -i stack:eno1 -f ./localNetflow.pcap -n 0 -r 0.11

90 Kfps is also the maximum drop-free rate at which nProbe can operate without drops to collect NetFlow and export it via ZMQ.

ntopng

ntopng has been set up to receive flows via ZMQ as follows

./ntopng -i tcp://127.0.0.1:5556 -m "192.168.2.0/24" --dont-change-user --disable-login 1

ntopng flows processing can be summarized as follows:

  • Flows are received via ZMQ
  • The content of ZMQ JSON messages is parsed to reconstruct the flows
  • Reconstructed flows are added to the ntopng internal cache for further processing

The following picture shows a graphical representation of the several steps listed above, and provides information on the maximum rate that can go through each step.

We have observed that, without JSON parsing and internal cache processing, ntopng is able to collect flows from nProbe at 90 Kfps. However, when we enabled the JSON parsing and the internal cache, both necessary for ntopng to work properly, we experienced a lower maximum drop-free rate.

Specifically, enabling the JSON parsing alone, without any internal cache, brings the maximum processed flows per second down to 40 Kfps. Enabling also the internal cache costs another 5 Kfps, resulting in a maximum drop-free rate of 35 Kfps. This maximum drop-free rate can be obtained with pfsend with a rate of 40 Mbps

$ sudo pfsend -i stack:eno1 -f ./localNetflow.pcap -n 0 -r 0.04

 

Conclusion

When nProbe is not used in combination with ntopng, it can collect and export flows over ZMQ at a rate of 90 Kfps per interface, corresponding to a NetFlow rate of 110 Mbps.

When nProbe is used in combination with ntopng, the maximum number of flows per second that can be processed without drops is 35 Kfps, corresponding to a NetFlow rate of 40 Mbps.

If you need to scale up, just start multiple nProbe instances that send flows to multiple ntopng collector interfaces. It's that simple!
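A minimal sketch of such a horizontally-scaled setup (ports and ZMQ endpoints are illustrative; your exporters must be configured to spread their NetFlow across the two collector ports):

$ nprobe -i none -n none --collector-port 2055 --zmq tcp://127.0.0.1:5556
$ nprobe -i none -n none --collector-port 2056 --zmq tcp://127.0.0.1:5557
$ ntopng -i tcp://127.0.0.1:5556 -i tcp://127.0.0.1:5557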

Introducing nDPI 2.6: several new dissectors, DPDK and Hyperscan support


This is to announce the release of nDPI 2.6. Several dissectors have been improved and a few new ones have been added; we have also improved the detection logic (used when we have to guess the protocol due to incomplete data). This is also the first release of nDPI that natively supports Intel DPDK, and it further improves Intel Hyperscan support. Please find below the complete changelog.

Enjoy!

 

Changelog

  • New Supported Protocols and Services
    • New Bitcoin, Ethereum, ZCash, Monero dissectors all identified as Mining
    • New Signal.org dissector
    • New Nest Log Sink dissector
    • New UPnP dissector
    • Added support for SMBv1 traffic, split from SMBv23
  • Improvements
    • Improved Skype detection, merged Skype call in/out into Skype Call
    • Improved heuristics for Skype, Teredo, Netbios
    • Improved SpeedTest (Ookla) detection
    • Improved WhatsApp detection
    • Improved WeChat detection
    • Improved Facebook Messenger detection
    • Improved Messenger/Hangout detection
    • Improved SSL detection, prevent false positives
    • Improved guess for UDP protocols
    • Improved STUN detection
    • Better Hyperscan integration
    • Added more Ubuntu servers
    • Added missing categorization with giveup/guess
    • Optimizations for TCP flows that do not start with a SYN packet (early giveup)
  • Fixes
    • Fixed eDonkey false positives
    • Fixed Dropbox dissector
    • Fixed Spotify dissector
    • Fixed custom protocol loading
    • Fixed missing Application Data packet for TLS
    • Fixed buffer overflows
    • Fixed custom categories match by IP
    • Fixed category field not accounted in ndpi_get_proto_category
    • Fixed null pointer dereference in ndpi_detection_process_packet
    • Fixed compilation on Mac
  • Other
    • Deb and RPM packages: ndpi with shared libraries and binaries, ndpi-dev with headers and static libraries
    • Protocols now have an optional subprotocol: Spotify cannot have subprotocols, DNS can (DNS.Spotify)
    • New API functions:
      • ndpi_fill_ip_protocol_category to handle ICMP flows category
      • ndpi_flowv4_flow_hash and ndpi_flowv6_flow_hash to support the Community ID Flow Hashing (https://github.com/corelight/community-id-spec)
      • ndpi_protocol2id to print the protocol as ID
      • ndpi_get_custom_category_match to search host in custom categories
    • Changed ndpi_detection_giveup API: guess is now part of the call
    • Added DPDK support to ndpiReader
    • Removed Musical.ly protocol (service no longer used)
    • Custom categories have now priority over protocol related categories
    • Improved clang support

Introducing PF_RING 7.4: PF_RING FT, Containers and Virtual Functions Support


This is to announce the new PF_RING major release 7.4. This release includes many improvements to the PF_RING FT library, which is now more mature thanks to new API functionalities and features that provide more flexibility. This release also addresses many issues, and moves a step forward in the same direction as release 7.2 (which included full support for Containers and Namespaces), adding support for CoreOS containers and ZC Virtual Function drivers, technologies commonly available in cloud services.

This is the complete changelog:

  • PF_RING Library
    • New pfring_open PF_RING_DO_NOT_STRIP_FCS flag to disable FCS/CRC stripping (when supported by the adapter)
    • Improved support for cross-compilation
    • New PF_RING_FT_CONF environment variable to enable PF_RING FT support and load L7 filtering rules
    • New PF_RING_FT_PROTOCOLS environment variable to load L7 protocols when PF_RING FT for L7 filtering is enabled
  • ZC Library
    • New pfring_zc_open_device flag PF_RING_ZC_DO_NOT_STRIP_FCS to disable FCS/CRC stripping (when supported by the adapter)
    • New builtin hash function pfring_zc_builtin_5tuple_hash based on 5-tuple
    • Fixed SPSC queues BPF support
    • Fixed KVM/ivshmem support on Ubuntu 16
    • Fixed pfring_zc_recv_pkt_burst with ixgbe-zc drivers
  • FT Library
    • New pfring_ft_set_l7_detected_callback API to set a callback for classified flows/packets (L7 protocol detected)
    • New pfring_ft_set_default_action API to set the default action for classified L7 flows
    • New pfring_ft_flow_get_action API to get the computed/actual flow action asynchronously
    • New pfring_ft_create_table flow_lifetime_timeout parameter to configure the maximum flow duration
    • New pfring_ft_load_ndpi_protocols API to load custom nDPI protocols from a configuration file
    • New pfring_ft_is_ndpi_available API to check nDPI availability
    • Added active_flows to pfring_ft_stats to get the number of currently active flows
  • PF_RING-aware Libpcap
    • New pcap_get_pfring_handle API to get the PF_RING handle used by Libpcap
    • New PCAP_PF_RING_ALWAYS_SYNC_FD environment variable for applications not using the fd provided by pcap_get_selectable_fd
    • Fix for applications polling from the pcap selectable fd when ZC drivers are used
  • PF_RING Kernel Module
    • Updates to support kernel 4.18 or older
    • Fixed ‘stack’ TX capture in ZC mode
    • Fixed ifindex lookup
    • Fixed promiscuous mode corner cases
    • Fixed arm32 support
    • Fixed IPv6 support in software filtering rules
    • Fixed software hash rules
    • Fixed kernel clustering in case of non-IP packets (sporadically recognized as IP fragments when the fragments cache was enabled)
  • PF_RING Capture Modules
    • Timeline module fixes:
      • Fixed extraction of non-IP packets
      • Fixed permissions check when running as an unprivileged user, when the user has permissions on the filesystem
    • Accolade module update to support latest SDK API and features
    • Fixed Fiberblaze module bulk mode
  • ZC Drivers
    • New ixgbevf ZC driver
    • Drivers updates to support kernel 4.18 or older
    • Fixed sporadic crashes during application startup on high traffic rates
    • Fixed the DKMS packages
    • i40e ZC driver improvements:
      • Forcing symmetric RSS hash on old firmwares
      • Improved interrupts management to fix packets delivered in batches
      • Fixed interrupts management when multiple sockets are active on the same interface (RX+TX or RSS)
    • ixgbe ZC driver improvements:
      • Increased max MTU length to 16K
      • Fixed card reset due to kernel-space TX packets pending while the interface is in use by ZC
    • Improved hardware timestamp support for igb ZC (i350/82580 adapters)
  • nBPF 
    • Fixed ‘portrange’ token in BPF-like filters
  • Examples
    • New pftimeline example to extract traffic from a n2disk dump set using the pf_ring API
    • New pfsend -M <mac> option to forge the source MAC address
    • zbalance_ipc improvements:
      • Added -m 6 distribution function (interface X to queue X)
      • Added queues and TX interface stats under /proc (-p)
      • Fixed multiapp (fanout) distribution for more than 32 egress queues
    • ftflow improvements:
      • New -F option to load rules from a configuration file
      • New -p option to load custom protocols
      • Improved output (e.g. printing information including the flow action)
    • Improved ftflow_dpdk example, added bridging support
    • Fixed software filtering in pfcount (enabling full headers when filtering is enabled)
  • IDS Support (Snort/Bro)
    • Fixed Snort DAQ filtering API
    • Fixed cluster issues on Bro (due to a libpcap symbols issue)
  • Misc
    • CoreOS support, pf_ring module and drivers installation scripts
    • Improved ‘zbalance_ipc’ clusters management with systemd:
      • Service improvements to set the status after the cluster process is actually up and running
      • Fixed hugepages memory allocation in case of clusters not using ZC drivers
    • Improved service dependencies with systemd with respect to other ntop applications
    • Added GID to the hugepages configuration file to allow nonprivileged users to use ZC applications
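As a small usage sketch of the FT-related additions above (the interface name and file paths are illustrative):

# Load L7 filtering rules and custom protocol definitions into the ftflow example
$ ftflow -i zc:eth1 -F /etc/pf_ring/ft-rules.conf -p /etc/pf_ring/ft-protocols.conf

# Any PF_RING application can enable the same L7 filtering via the new environment variables
$ PF_RING_FT_CONF=/etc/pf_ring/ft-rules.conf PF_RING_FT_PROTOCOLS=/etc/pf_ring/ft-protocols.conf pfcount -i zc:eth1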

Introducing n2disk 3.2: towards 100 Gbit to disk


This is to announce a new n2disk release 3.2.

This release, besides addressing a few issues, includes new juicy features:

  • Multithreaded dump and support for multiple volumes. This is useful in a few cases:
    • If you want to record traffic above 30-40 Gbit/s to HDDs or SSDs, you should pay attention to the RAID controller limit. In fact, even if you use many disks in a RAID 0 configuration, many controllers are not able to scale above 30-40 Gbit/s of sustained write throughput. Load-balancing traffic across multiple controllers could be the solution in this case.
    • If your data retention policy requires you to keep a huge amount of data, in the order of Petabytes, you will probably face another RAID controller limit. For instance, many controllers on the market are able to handle a limited number of disks (often 32) in a single RAID 0 volume. Configuring multiple volumes, even on the same controller, could be the solution in this case.
    • If you want to record traffic to multiple “slow” volumes, like multiple HDDs without a RAID controller, or Network File Systems, load-balancing and dumping traffic in parallel to multiple volumes could be a good practice to improve the write performance.
    • If you want to record traffic at really high rates (100 Gbit/s and above), and you decided to use many fast NVMe SSDs, writing directly to those disks in parallel is probably the way to go. There are enterprise-grade Virtual RAID technologies available on the new Intel Scalable CPUs specifically designed for NVMe SSDs; however, this is not always available.
  • ZMQ export. This feature allows you to export traffic statistics and flow information through a ZMQ socket in JSON format. This is useful when recording traffic at high rates on interfaces with exclusive access (like those using PF_RING ZC or FPGA adapters), while still having visibility on that traffic on the same box. The ZMQ export lets you deliver data to ntopng for traffic visualization, in the same way ntopng is used with nProbe (a command-line sketch follows this list).
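A command-line sketch combining these two features, based on the options listed in the changelog below (interface, volumes, cores and the ZMQ endpoint are illustrative):

# Dump to two volumes in parallel with two writer cores, exporting stats and flows to ntopng over ZMQ
$ n2disk -i zc:eth1 -o /storage1/n2disk -o /storage2/n2disk -w 2,3 --zmq tcp://*:5556 --zmq-export-flows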

If you are interested in learning more about this release, below you can find the full changelog:

  • n2disk
    • Support for multithreaded dump to multiple volumes (multiple -o <volume> are allowed, and -w <cores> now accepts a comma-separated list of cores)
    • Support for interfaces aggregation (comma-separated list of interfaces in -i <interfaces>) also with non-standard interfaces (e.g. ZC/FPGA)
    • ZMQ support (new options: --zmq <socket>, --zmq-export-flows) to export traffic stats and flows (compatible with the ntopng ZMQ import)
    • Pcap files permissions are now set by default to user rw, group r only, to allow only n2disk and the n2disk group to read recorded data
    • Support for DPI when exporting flows with ZMQ, adding L7 protocol information to the index
    • Improved CPU utilization at low traffic rates
    • Improved uburst support
    • New --dont-change-user option to prevent n2disk from changing user
    • New --dump-fcs option to dump the FCS/CRC (when not stripped by the adapter)
    • Improved /proc stats: added FirstDumpedEpoch/LastDumpedEpoch/DumpedBytes to check the dump window, CaptureLoops as watchdog for the capture thread
    • Ability to specify a file with -f/-F <filter> to provide BPF filters
    • Improved memory allocation, removed minimum memory allocation
    • Executing command specified with --exec-cmd <script> after pcap and timeline/index have been created
    • Improved simulation mode: forging real packets to test the index speed, printing stats including AVG capture speed, opening a dummy pf_ring socket to print statistics
    • Fixed --strip-header-bytes
    • Fixed volume info parsing in case of long block device name
    • Fixed root folder creation when dropping privileges
    • Fixed pcap flushing during termination
    • Preventing n2disk from failing in case of mlock failure when o-direct is disabled
    • Fixed file size limit
    • Fixed segfault on startup binding to the NUMA node
    • Fixed hardware BPF (on supported adapters) when using bulk mode
  • disk2n
    • New --takeoff-time|-T <date and time> option to schedule traffic generation (this can be used to synchronise multiple instances)
  • npcapextract
    • Allow unprivileged users to run extractions as long as they have permissions on the filesystem
    • Fixed segfault in case of empty pcap files
    • Fixed extraction of packets not supported by the index (e.g. non IP)
  • Other Tools
    • New uburst_live tool to detect microbursts on live traffic without recording traffic
    • Improved n2membenchmark benchmarking tool, added buffer size parameter
    • Fixed npcapmanage segfault
  • Packages/Misc
    • Packages improvements: reworked user/group creation and removed userdel for security reasons when removing the package
    • Improved service dependencies, n2disk and disk2n services are now ‘PartOf’ the pf_ring service
    • Package for Ubuntu 18
    • PF_RING “timeline” module extraction fixes and improvements
    • Fixed init.d PID check, status and is-active

Say hello to nIndex: Personal Big Data System for Network Flows


Being able to store network flows is a very challenging task for generic databases. Networks are becoming faster and faster and, nowadays, flow-based analysis tools need to store tens, or even hundreds, of thousands of flows per second to keep up with SME and enterprise demands. Existing tools, such as relational databases, fail to accomplish this task. Unless you have unlimited resources available, tons of RAM and clusters of machines, chances are your database will choke, quickly becoming too slow to allow queries to be performed in a reasonable time. The number of users complaining about slow MySQL instances, both when ingesting flows and when performing analysis queries, was incredible.

Another option many people use is so-called big data. In essence, instead of solving the problem of efficiently storing data by exploiting the native properties of flows, big data systems maximise performance by distributing the data across various systems, shards, tables, etc. So in essence they do not solve the problem, but just move it, and gain performance because the database is no longer a unique entity: several components contribute to the performance, each working on a subset of the data.

At ntop we believe in simplicity, and thus we do not see a solution in creating a complex system that leverages various servers as the big data paradigm dictates. Instead we believe the solution is to study the properties of flows and exploit them for efficiency in both speed and storage space. For instance, an IPv4 address is a special number, with local IP addresses appearing at high frequency; likewise, the protocol is not just an 8-bit value but a special one, as you will see UDP/TCP/ICMP very often and all the other values seldom. For this reason, three years ago we started to design a new flow indexing system able to exploit bitmap indexes, which we call nIndex. However, contrary to what you can find in the database literature, we do not distinguish between data and index: the index itself includes the data, so we save space by simply avoiding storing the data and keeping just the index. Having indexes on all columns allows fast queries that are basically limited just by the I/O speed. Thanks to our new indexing system it is possible to deliver performance typical of a big data system on a single host, which can thus be self-contained and not rely on external systems that might be unreachable in case of network faults (i.e. exactly when the monitoring system is even more important).

Starting with ntopng 3.8 we have bundled nIndex with ntopng Enterprise, so that you can drill down from alerts/activities to flows and packets on a long-term storage system that has no external dependencies. Currently we consider this technology still in beta (until we receive enough feedback from our user base) and we want to consolidate it in the next major ntopng release scheduled for spring.

To use ntopng with the database, run it with option -F "nindex"

sudo ./ntopng -i eno1 -i tcp://127.0.0.1:5557c -F "nindex"

Database flows will appear under the charts, and will reflect the data shown as timeseries.

 

You can drill down to flows by clicking on the magnifying lens, or download the raw flows by clicking on the black document icons in the top right corner.

We encourage you to try it and give us feedback on anything you consider important for improving it.

 

Enjoy!

Drill Down Deeper: Using ntopng to Zoom In, Filter Out and Go Straight to the Packets


ntopng has grown significantly over the past years, providing an increasingly-interesting set of features to support network analysts and troubleshooters in their decisions. Among the most relevant features, it is worth mentioning that timeseries inspection pages have been redesigned and reworked profoundly to facilitate the drill-down of historical data. Similarly, a home-grown high-speed special-purpose flow database has been seamlessly integrated in ntopng to ease the storage and retrieval of historical flows.

However, the circle was not really closed. A piece was missing. Something that could take us down to the packets. A feature that could allow us to start the drill-down at the timeseries level and then, step by step, after opportune selections and filters, could allow us to fetch the matching traffic packets. Real packets. Those packets responsible for the generation of the timeseries we started the drill-down from.

Motivated by this, we have worked hard to add continuous traffic recording support to ntopng. We have been developing the n2disk traffic recording technology for several years now, and have finally created a strong tie between n2disk and ntopng.

Let’s briefly see how it works. Don’t forget to go through the manual for more detailed information.

First of all, you have to install package n2disk and restart ntopng. An extra “Traffic Recording” entry in the “Runtime Preferences” menu will appear. You have to add an n2disk license key there. Contact us to get a demo license or use our shop to purchase one (note that licenses are free for non-profit organisations and NGOs).

 

 

Once the license is set up, you can visit the interface page, tab “Traffic Recording”, to configure recording settings. You can tick the checkbox to enable or disable the traffic recording, configure the maximum disk space that should be allocated for the recording of the traffic, and even monitor the current storage utilisation. What happens when the maximum disk space is hit? Pretty simple: the oldest traffic will be overwritten by the newest.

When the traffic recording is enabled, an icon in the bottom-right corner of ntopng will appear to confirm this. Click that icon to access recording statistics and status.

And now the juicy part! Wondering how to download recorded traffic? Well, open up any of the charts page, including those of interfaces, local hosts and networks. Do the selection of interest, slice and dice using the mouse or the selectors. Finally, do you see the small top-right download button? Use that to download the pcap!

 

For the download, a dialog will ask you if you want to download the file immediately or if you want to schedule an extraction job.

You can select “Extract Now” to immediately start downloading the pcap file. Alternatively, you can select “Queue as Job” to let ntopng do the extraction. Once the file is extracted, ntopng archives it on the disk for later download and usage.

Wondering if you could also specify BPF filters? The answer is yes! Just click on “Advanced” and specify the filter there!
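For example, to restrict the extraction to the HTTPS traffic of a single host, a standard BPF expression such as the following (the address is illustrative) can be typed into the Advanced field:

host 192.168.2.222 and tcp port 443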

 

Finally, some words on the limitations. Currently, Layer-7 protocols cannot be specified when downloading a pcap. This means that you won't be able to download the traffic of a single Layer-7 application protocol as detected by nDPI. The traffic of all Layer-7 protocols will be downloaded, unless you create a BPF filter that you know matches the protocol of interest. But don't worry too much, we are already working on this to allow the download of specific nDPI-detected Layer-7 protocols.

And remember, pcap or it didn’t happen!

Welcome to ntopng 3.8 with continuous drill down: packets, flows, activities


We are happy to announce ntopng stable 3.8. This is the core of the next 4.0 release, as it integrates new features that will be consolidated in the next release scheduled for spring.

The main features include:

  • SQL database-free high-speed traffic indexing based on a new home-grown technology. As explained in this post, we managed to store compressed flow information on disk combined with high-speed retrieval. Just add “-F nindex” to ntopng to start using this new feature, currently available in the ntopng enterprise edition. You can read more here.
  • Continuous drill-down that allows you to start from activities and go down to flows and packets. All using the ntopng user interface, all with a few clicks. This finally merges pieces that ntop has developed for years as separate components, which are now available from a single place. Read more about continuous recording in ntopng.
  • Remote assistance for connecting to your ntopng instances, regardless of IP addresses, NAT, and firewalls. This thanks to ntop’s open source n2n peer-to-peer VPN.
  • Initial work towards traffic analysis with the implementation of statistical traffic indicators that will be exploited in the next major release to implement network behaviour analysis.

Enjoy!

 

Improvements

  • Alerts
    • Scan-detection for remote hosts
    • Configurable alerts for long-lived and elephant flows
    • InfluxDB export failed alerts
    • Remote-to-remote host alerts
    • Optional JSON alerts export to Syslog
  • Improved InfluxDB support
    • Handles slow and aborted queries
    • Uses authentication
  • Adds RADIUS and HTTP authenticators
  • Lua 5.3 support
    • Improved performance
    • Better memory management
    • Native support for 64-bit integers
    • Native support for bitwise operations
  • Adds the new libmaxminddb geolocation library
  • Storage utilization indicators
    • Global storage indicator to show the disk used by each interface
    • Per-interface storage indicator to show the disk used to store timeseries and flows
  • Support for Sonicwall PEN field names
  • Option to disable LDAP referrals
  • Requests and configures Keepalive support for ZMQ sockets
  • Three-way-handshake detection
  • Adds SNMP mac addresses to the search function

nEdge

  • Implement nEdge policies test page
  • Implement device presets
  • DNS
    • Add more DNS servers
    • Remove deprecated DNS

Fixes

  • Fixes missing flows dump on shutdown
  • HTTP dissection fixes
  • SNMP
    • Fix SNMP step when high resolution timeseries are enabled
    • Fixes SNMP devices permissions to prevent non-admins from deleting or adding devices
  • Properly handles endianness over ZMQ
  • Fixes early expiration of some TCP flows
  • Fixes non-deterministic expiration of flows

Honouring System Default Policies on ntop Packages


Many distributions provide mechanisms to let the system administrator decide whether newly installed packages should be enabled and/or started automatically. Previously, the ntop services were always enabled and started automatically after the first package installation, regardless of any system preferences. Now the ntop packages rely on system utilities to properly start, stop and restart services after installation, in order to correctly honor system policies.

Due to distribution-specific defaults, this is now the default behaviour of the services installed by the ntop packages:

                          Debian/Ubuntu    CentOS 7    Other
Started after Install     Yes              No          Yes
Enabled after Install     Yes              No          Yes
Restarted after Upgrade   Yes              Yes         Yes

Here are some instructions on how to modify the default behaviour on the supported systems.

On Debian/Ubuntu based distributions, the administrator can use the script policy-rc.d to define the default behavior for the services installed from packages. By default, all the services are enabled and started automatically, but this can be changed by creating the file /usr/sbin/policy-rc.d with the following contents:

#!/bin/sh
exit 101

and making it executable. Now all the new services (which honor the policy-rc.d policy) will not be automatically started or enabled. The specification for policy-rc.d can be found at the following URL.

On the other hand, on CentOS/Fedora based distributions, the default behaviour for installed services is to not enable and start them. Actually, on CentOS services are never started automatically after installation. However, the administrator can decide to change the default policy or to override the policy for some specific packages. CentOS relies on the standard systemd-preset mechanism to define such policies. The directory /usr/lib/systemd/system-preset contains the configuration files which define the policies of the system. In particular, the file /usr/lib/systemd/system-preset/90-default.preset contains the default presets for some common packages.

For example, in order to automatically start ntopng after installation, the file /usr/lib/systemd/system-preset/20-ntop.preset can be created with the following contents:

enable ntopng
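Presets are normally applied when a package first installs its unit files; to (re)apply the configured policy to an already-installed unit, systemd can be invoked directly (a sketch, assuming the ntopng package is installed):

$ sudo systemctl preset ntopng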

Now that you know all this, you can play with the new ntop dev packages that already implement this new behaviour. In the near future we will also port it to the stable packages to provide a consistent platform behaviour.

Introducing Ubuntu 18 Support for ntopng Edge (nEdge)


Six months after the first nEdge announcement, in response to our customers' feedback, nEdge now provides brand new features, like the ability to apply policies based on the device type, RADIUS integration for captive portal user authentication, the ability to add static routes when running in router mode, and the programmatic configuration of users and policies.

Today, one of the most requested features is finally ready: the support for Ubuntu 18.04!

Ubuntu 18.04 is the new LTS stable release of Ubuntu. It adopts a new environment for the network configuration, netplan, which replaces the standard /etc/network/interfaces file.

The nEdge package for Ubuntu 18 is now available for installation from the ntop development repositories (the stable release will be supported in the coming months).
Since nEdge runs headless, for new installations it is suggested to install the Ubuntu 18 server version, which provides a minimal environment.

nEdge on Ubuntu 18 provides the same features as its Ubuntu 16 counterpart. As always, nEdge supports running in a VM, so it is quite easy to test it without the need to purchase a dedicated device.

It is important to remember that, after the first setup, nEdge will alter the network interfaces configuration to reflect the one specified in the GUI. In case of issues reaching the device, the device recovery instructions can help.

Enjoy!

ntop at FOSDEM 2019: eBPF and High-Resolution Metrics


Hi all,

this is to invite all of our community to meet the ntop team at FOSDEM 2019, later this weekend.

We have two talks scheduled, and we'll be talking about system visibility and high-resolution network monitoring. Below you can find the talk schedule as well as the presentation slides we'll be using for our presentations.

We would like to meet our community and spend some time with you talking about our tools, and share with you some goodies we’ll bring along. Please show up at our talks, so that we can easily find each other and meet.

See you soon!


Network Traffic Analysis in ntopng (a.k.a. ntopng 2019 Roadmap)


Aut viam inveniam aut faciam, Hannibal 247-182 B.C.

For years ntopng has been a solution for collecting, analysing and visualising network traffic, but with a major limitation. It is so rich in data display and reporting that users need to be experts who know what they are looking for. If not, they get lost in all the data the web GUI offers, which is the opposite of what we tried to do.

It is now time to go beyond the simple threshold analysis currently implemented in ntopng (if metric X is above value Y, then alert; when back to a value below Y, we're back to normality), and move towards a better tool able to interpret data in an automatic and autonomous fashion. Ideally the ntopng user interface should be less rich in reports, and much more powerful in telling the user something like: “everything is working as expected; in case there is a problem, I will report it”. In the old ntop (non -ng), thanks to RRD we implemented exponential smoothing (Holt-Winters in our case) to detect anomalies. It is now time to implement time series analysis in ntopng too. This is to complement (not to replace) threshold-based detection. In essence, we still need thresholds for metrics we know (e.g. the DNS positive/error response ratio should be > 50%, otherwise there is probably something wrong happening), plus more comprehensive algorithms for detecting changes in behaviour that might be relevant to report to the users.

Contrary to the current trend in the industry, which is deploying machine learning even when it's not necessary, we'll do our best to create a traffic analysis solution able to give users the traffic analysis they expect without having to use clusters of machines, GPUs or costly cloud-based traffic analysis. It should work on a small Raspberry Pi as well as on a powerful server. Our work in 2018 on data indexing (BTW, we're still consolidating this work) and rich metric computation is paving the way to this implementation. Over 2000 years ago, when his generals told Hannibal that it was impossible to cross the Alps by elephant, he said: “I shall either find a way or make one”. This is the plan for data analysis in ntopng: either use the best of existing techniques for achieving our goal, or create something new for our community.

Stay tuned!

How to Detect Malware Hosts and Scanners Using ntopng


Hosts directly connected to the Internet are often contacted by scanners and malware hosts. For a few releases now, ntopng has integrated a blacklist that is refreshed daily. Whenever a host that is part of this list contacts your ntopng-monitored network, an alert is triggered and displayed in the flow alerts.

This feature allows you to see who has contacted you with (usually) bad things in mind. If, instead, you want to see in realtime which blacklisted hosts are contacting you, you can click on the hosts menu and select “Blacklisted Hosts” as shown in the picture below.

If you want to see in detail what these hosts did to you, you can drill down at flow level using the flow index (don’t forget to start ntopng with “-F nindex”) that shows you what flows have been reported between your host and the scanner.


As you can imagine from the above picture, the scanner is probing ports, as ntopng reports 1-packet TCP flows and an ICMP flow back that will likely contain a port unreachable message. To verify that this is the case, if you have enabled continuous traffic recording with ntopng, you can click on the pcap extract icon, which will extract packets from the above conversations between your host and the scanner. ntopng will open a dialog window that already contains the scanner IP address and the timeframe of your search. At this point just click on the extract button.

ntopng will then return a pcap file via HTTP that you can open with Wireshark to have evidence of what really happened.


We have done our best to simplify the whole investigation path and spare you the command line. Everything happens inside ntopng with a few mouse clicks.

Happy scanner hunting!

Identifying Suspicious Flows: Network Issues or Misbehaving Hosts ?


Starting from the latest 3.9 version, ntopng features a handy dropdown menu that allows you to filter flows on the basis of their current TCP state.

Being able to filter flows on the basis of their TCP state is particularly useful, as it allows separating normal flows from those that are suspicious or symptomatic of certain network issues. For example, one can unveil:

  • Flows that only have a client SYN. This can identify clients attempting to connect to a server that is no longer responding (down?) or misbehaving hosts that are performing SYN-scans.
  • SYN-RST only flows. In general, when a client sends a SYN and immediately gets a RST from the server, it means that the server port the client has tried to connect to is closed. Such RST could signal a server application that is no longer working (down?) but it could also highlight misbehaving hosts that are performing port scans.
  • Not established flows. Such flows are all those flows that are not ready to exchange data. They can be FIN-ned or RST-ted flows, or flows that have still to complete the initial three-way handshake.

For example, running a port scan with nmap as

simone@devel:~$ nmap -p- 192.168.2.223

causes the scanned host 192.168.2.223 to respond to devel with 65k+ RSTs, as the majority of its ports are closed. In this case, highlighting such flows is as easy as picking the SYN-RST Only dropdown entry.

Selecting any of the listed flows will actually confirm that the client has sent a SYN to the server which, in turn, has ACKnowledged the SYN and has RST-ed the connection.

Introducing libebpfflow: packet-less network traffic and container visibility based on eBPF


As previewed during our FOSDEM 2019 talk, this is to introduce libebpfflow, a new library for enabling network traffic and container visibility based on eBPF. Designed to be CPU and memory friendly (its presence is almost unnoticeable), it allows people to inspect network communications inside a system. It provides visibility for

  • processes
  • users
  • containers

Built from scratch on eBPF, it allows people to develop monitoring applications and network sensors without having to deal with packets. Sounds strange, but this is the idea: how to monitor networks without looking at packets.

The library has been designed to enable applications such as ntopng to provide system introspection, and also to be used in fields other than traffic monitoring, in particular cybersecurity. If you are interested, you can read this paper that describes how we successfully used it in network security.

libebpfflow is released under the LGPL license. Enjoy!

How to Track and Fight Malware, Ransomware, Botnets… in Network Traffic using ntopng


Malware blacklists are not something new to ntopng. ntopng (including ntopng Edge) has integrated the emerging threats blacklist https://rules.emergingthreats.net for a long time. The 3.6 stable release also introduced some webmining blacklists, which would flag online mining sites and generate alerts.

Despite these integrations, ntopng lacked the ability to inform the user about the lists currently in use and to let them verify the update status of each list. For these reasons, we've decided to implement the Category Lists page, which gives the user full visibility and control over the lists ntopng uses.

The page displays all the lists currently supported by ntopng. A status badge indicates if the list has been downloaded successfully or has encountered errors. A list is now a general concept not limited to malware: it simply associates a list of IPs/domains with a Category. In the future, user-supplied lists could be supported thanks to the flexibility of this model.

As you can see from the above image, lists are downloaded either daily or hourly according to the preference you set. This is because malware lists are continuously updated, and thus fresh information is essential for them to remain effective. The Num Hosts column reports the actual number of rules loaded from the list. Lists are updated on a daily basis by default; however, the update frequency can be changed from the edit dialog. It is also possible to disable each individual list. Another important improvement is the use of the disk to store the downloaded lists. In this way, downloading the lists on every startup is no longer required, and a host which is temporarily unable to download new lists can still use the previously downloaded ones.

With this update, only available in the latest development version of ntopng, we have also integrated some new powerful blacklists.

Category Lists and Custom Category Hosts are powerful features that increase the usability of ntopng in terms of visibility and threat detection.

Whenever an attack is detected, ntopng reports an alert like the one shown below, which you can use to track the problem. Remember that, if you have enabled continuous traffic recording in ntopng, you can download from within ntopng a pcap of the attack for full inspection.

If alerting is not enough and you wish to block such threats and to optimize the bandwidth usage, you’ll be pleased to know that ntopng Edge implements this and much more!
