
Introducing nDPI 4.0: DPI for CyberSecurity and Traffic Analysis


This is to announce nDPI 4.0. With this new stable release we have extended the scope of nDPI, which was originally conceived as a toolkit for detecting application protocols.

nDPI is now a modern packet-processing library that, in addition to DPI, includes self-contained streaming versions of popular data-analysis algorithms, efficient in both memory and processing speed.

This means that you can use nDPI as a layer on top of which to build your network traffic analysis application. Do not forget that nDPI is a packet-capture-neutral library: it does not include packet capture facilities, but it can sit on top of libpcap, PF_RING, DPDK or anything you like.

We have also boosted the cybersecurity features that were introduced in the 3.x series. These include:

  • Improved ETA (Encrypted Traffic Analysis).
  • Implementation of a new nDPI-unique fingerprint named JA3+, an improvement (read: fewer false positives) over the popular JA3.
  • An increased number of supported flow risks (currently 33 in total).
  • The ability to mask flow risks by extending the custom protocol definition.

In addition to all this, the 4.0 release has been boosted in terms of speed, with a 2.5x improvement with respect to the 3.x series. Below you can see a performance report comparing the previous 3.4 stable release with the current version 4.0:

  • v3.4 – nDPI throughput: 1.29 M pps / 3.35 Gb/sec
  • v4.0 – nDPI throughput: 3.35 M pps / 8.68 Gb/sec

Many new protocols (14) have been added, and detection of existing ones has been improved.

We would like to thank all developers and contributors, and in particular lnslbrty, IvanNardi and vel21ripn, for the time they donated to the project.

Finally many thanks to Braintrace for supporting nDPI development and triggering new ideas and features now included in this release.

Enjoy !


Introducing nProbe Cento 1.14


This is to announce a new release of ntop's 100 Gbit probe, nProbe Cento 1.14.

In this version we have integrated the latest features from nDPI, ntop's deep packet inspection engine, which is now 2.5x faster than the previous version. Flows are enriched with Flow Risks, which represent a set of issues detected by nDPI, and a Flow Score, computed from the risk severities, that indicates how bad each flow is.

The flow dump has also been improved by adding the Community ID (a flow identifier that is becoming a standard in the IDS world) and extended HTTP and DNS metadata.

This release also introduces performance optimizations and a few bug fixes, mainly related to memory leaks.

Changelog

New Features

  • Add support for dumping HTTP/DNS flow information to text files (--http-dump-dir and --dns-dump-dir options)
  • Add dump of Flow Risk and Score
  • Add Community ID export when dumping to text files
  • Add support for burst capture (when supported by the PF_RING interface) to improve capture performance
  • IDS mode (cento-ids):
    • Add ability (--local-processing option) to select traffic that should be processed locally by cento vs traffic that should be forwarded to the egress queues for processing by third party applications
    • Add option (--balanced-egress-type <type>) to select the distribution function when load balancing traffic to egress queues

Improvements

  • Optimize JSON serialization by using the nDPI serializer
  • Rework hosts data structures implementation (Radix Tree)
  • Improve packet processing statistics
  • IDS mode (cento-ids):
    • Optimize number of packets necessary to decide about egress
    • Check both master and application L7 protocol when filtering, with precedence to application protocol
    • Add --egress-queue-len parameter to control the queue size on egress

Fixes

  • Fix memory leaks
  • Fix buffer-overflow on decoded URLs
  • Fix and improve hostnames lookup (automa)
  • Fix format of exported metadata for flows with unknown L7 protocol
  • Fix sanity checks on egress packets (avoid corruptions)
  • Fix initialisation of IP filters (IDS mode)
  • Fix partial DPI guess on exported flows
  • Fix client/server information in dumped flow information
  • Fix v6 flows handling
  • Fix to avoid creation of empty files when dumping to disk
  • Fix to avoid dumping TCP flags in non TCP flows
  • Fix some counters wrapping at 32 bit

Misc

  • Change installed binaries path from /usr/local/bin to /usr/bin

nProbe 9.6 Released: IPS, ClickHouse, Observation Points, FreeBSD Support


This is to announce the release of nProbe 9.6, whose main features, including the new IPS mode, ClickHouse support, observation points and FreeBSD support, are detailed in the changelog below.

Enjoy !

Changelog

New Features

  • New support for FreeBSD/OPNsense/pfSense
  • New UI plugin for configuring nProbe in OPNsense
  • New IPS mode, supported both on Linux (based on Netfilter) and FreeBSD/OPNsense/pfSense (based on Netmap)
  • New support for ClickHouse and MariaDB (in addition to MySQL and other export formats)
  • New AWS VPC Flow Logs collection (via dump files)

New Command Line Options

  • Extend -E to support 16-bit observationDomainId (IPFIX)
  • Add --ips-mode to enable IPS mode
  • Add --zmq-publish-events to enable collection of events from ntopng, including IPS policies
  • Add --ignore-obs-domain-id-port to ignore probe port and observation domain id
  • Add --ja3plus to enable JA3+
  • Add --version-json for exporting the version and license information in JSON format
  • Add --host-labels to load host labels from file
  • Add -D 'T' dump format (compressed text)
  • Add --collector-reforge-timestamps for reforging collected timestamps

Extensions

  • Add %FLOW_VERDICT to report the verdict associated with the flow in IPS mode
  • Add %SRC_TO_DST_MAX_EST_THROUGHPUT %DST_TO_SRC_MAX_EST_THROUGHPUT to export per direction throughput
  • Add %SRC_HOST_LABEL %DST_HOST_LABEL to export host labels configured with --host-labels
  • Add %L7_RISK_SCORE for associating flow risk score with a flow
  • Add %SIP_REGISTER_MAX_RRD %SIP_REGISTER_NUM_OK %SIP_REGISTER_NUM_OTHER SIP IEs
  • Add %SRC_TO_DST_IAT_MIN %SRC_TO_DST_IAT_MAX %SRC_TO_DST_IAT_AVG %SRC_TO_DST_IAT_STDDEV %DST_TO_SRC_IAT_MIN %DST_TO_SRC_IAT_MAX %DST_TO_SRC_IAT_AVG %DST_TO_SRC_IAT_STDDEV min/max/avg/stddev packet IAT
  • Add %OBSERVATION_POINT_TYPE %OBSERVATION_POINT_ID for exporting Observation Point information
  • Add %L7_INFO with L7 flow information (used by ntopng)
  • Add collection of %IPV4_NEXT_HOP %IPV4_BGP_NEXT_HOP %FORWARDING_STATUS IEs

Improvements

  • Add support for decoding fragmented tunnelled packets
  • Improve Throughput calculation
  • Extend max template size to 256
  • Add handling of ingress VLAN in sFlow extended switch data
  • Enhance MPLS-tagged packet decoding
  • Improve dump to InfluxDB

Fixes

  • Fix crash when using –pcap-file-list with –zmq
  • Fix Win CLI option handling
  • Fix L2TP dissection of tunnels with the optional length field set
  • Fix -i DIR option (pcaps are read continuously until shutdown)
  • Fix handling of %EXPORTER_IPV4_ADDRESS in template when using @NTOPNG@
  • Fix support of large packets (> MTU) due to GRO/TSO/LRO
  • Fix RTP invalid memory allocation
  • Fix @NTOPNG@ template that caused TCP flags to be sent only on one direction, generating invalid security alerts
  • Fix/rework flow direction and %DIRECTION information element
  • Fix crash with too many templates defined

Misc

  • Add configuration Wizard (nprobe-config) for configuring nProbe
  • Windows now uses a virtual NT SERVICE\nprobe account
  • Add support for reading the configuration from both the configuration file and CLI parameters (at the same time)
  • Add scripts for configuring Netfilter with nProbe in IPS mode (installed under /usr/share/nprobe/netfilter/scripts)
  • Add/improve support for embedded systems, including:
    • OpenWRT
    • Ubiquiti (e.g. EdgeRouter X)
    • Raspberry (Raspbian)
  • Removed obsolete --ndpi-proto and --ndpi-proto-ports

Infrastructure Monitoring: Observing The Health and Status of Multiple ntopng Instances


Introduction

Quis custodiet ipsos custodes? (Juvenal). In other words: who will guard the guards themselves? If you use ntopng to monitor your network, you also need to make sure ntopng itself is monitored: in case of failure, ntopng will not report any alert, and the network administrator may interpret the silence as a sign of good health rather than as a lack of monitoring. Recent 4.3+ versions of ntopng have the capability to monitor other ntopng instances, whether they sit on the same LAN or are physically/geographically distributed. This capability, also referred to as infrastructure monitoring, provides live visibility of ntopng instances' status, as well as of the network interconnecting them.

Indeed, with infrastructure monitoring, ntopng periodically performs checks against configured instances to

  • Measure throughput and Round Trip Time (RTT)
  • Fetch key performance and security indicators (KPI/KSI) such as the number of engaged alerts

Measurements are also written as timeseries data to enable historical analysis. In addition, ntopng has the ability to trigger alerts when monitored instances become unreachable, or when their throughput or RTT falls outside expected/configured bounds.

Under the hood, checks are performed using the remote instances' REST APIs. In essence, the ntopng instance in charge of infrastructure monitoring periodically fetches the remote instances' data through their exposed REST APIs. Authentication and authorization are done using (randomly-generated) API tokens.
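
For reference, a check of the kind ntopng performs can be reproduced manually with curl. This is a hedged sketch: the host, port and endpoint path are illustrative (the actual routes are listed in the ntopng REST API documentation), while the Token authorization header is the mechanism described above:

# query a remote instance's REST API using its (randomly-generated) API token
curl -s -H "Authorization: Token 0123456789abcdef" \
  "http://milan-ntopng:3000/lua/rest/v1/get/ntopng/interfaces.lua"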


Infrastructure Monitoring in Practice

Let’s see an infrastructure monitoring example in practice. Let’s assume there are three ntopng instances running in Paris, Rome and Milan, respectively. The instance running in Rome is being used to monitor the other two instances in Paris and Milan.

To view the health and status of Paris and Milan from Rome, it suffices to go to Rome's System -> Pollers -> Infrastructure page. That page summarises the measured data for each of the two instances.

Under normal circumstances, green "UP" badges indicate that everything is working as expected, with remote instances reachable and with throughput and RTT within the expected bounds.

As soon as something unexpected happens, the status toggles to "ERROR", with ntopng immediately detecting the event and triggering the corresponding alerts. For the sake of example, below is what happens in the event of Paris becoming unreachable.

Adding Monitored Instances

To add monitored instances, click the plus button in the System -> Pollers -> Infrastructure page. An instance alias must be specified, along with the instance URL, token, and thresholds. The image below shows the addition of the Milan instance.

The URL is just the base ntopng URL and can be specified either as a numeric IP or as a symbolic name.

The token must be generated on the Milan instance. To generate it, visit Milan's Settings -> Users page: a "User Authentication Token" tab is available when editing each of the available users, from which the token can be generated or read.

Once the token has been generated on Milan, it can be cut-and-pasted straight into Rome.

The last two fields allow you to specify a throughput and an RTT threshold for the instance. Every time ntopng's measurements fall outside these thresholds, an alert is generated.

Final Remarks

In this post it has been shown how a single ntopng instance can be used to monitor multiple sibling instances and achieve live visibility of a whole infrastructure. However, the right approach is to create a mesh of monitored instances, so that each ntopng instance monitors the others: this yields a robust monitoring system without a central point of failure, and positively answers the "Quis custodiet ipsos custodes?" question. It also accounts for the fact that measurements between ntopng instances can be asymmetric: Rome -> Paris and Paris -> Rome can show very different throughput values, for instance. This is the key ingredient for reliably monitoring an ntopng-based monitoring system, and indirectly the monitored network infrastructure.

Enjoy !

Configuring nDPI Flow Risk Exceptions


One of the newest features of nDPI 4 is the ability to identify flow risks. Sometimes, however, you need to add exceptions, as some of those risks, while correctly detected, need to be ignored. Examples include:

  • An old device that speaks an outdated TLS version, that you cannot upgrade, and that you have done your best to protect.
  • A host name that looks like a DGA but isn't.
  • A service running on a non-standard port that works perfectly as is.

In order to address the need to specify exceptions to nDPI-identified flow risks, you can define a mask that turns off specific flow risks for selected IP addresses (CIDR is supported) and hostnames. nDPI allows you to specify a file where you can define custom protocols (please note that tools like ndpiReader, ntopng and nProbe all support custom protocols via this configuration file); flow risk masks are defined in the same file:

ip_risk_mask:192.168.1.0/24=0
ip_risk_mask:10.196.157.228=0
host_risk_mask:".local"=0
host_risk_mask:".msftconnecttest.com"=0

The syntax is pretty straightforward:

  • Token name: either ip_risk_mask (for IP addresses) or host_risk_mask (for hostnames).
  • Mask: the identified flow risk is ANDed with this mask before being reported. Note that in the example above a 0 mask is used, meaning that no risks will be generated for the specified IPs (whether flow source or destination) or for matching hostnames.

For instance, the above examples silence flow risks for the 192.168.1.0/24 network and the host 10.196.157.228, as well as for all hostnames ending with .local or .msftconnecttest.com.

You can define multiple rules, one per line, and nDPI will honour your choice. No more unwanted flow risk alerts.
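
As a quick usage sketch, the file can be loaded in ndpiReader (shipped with nDPI) via the -p option; the file name below is arbitrary:

# assuming the masks above are saved in protos.txt
ndpiReader -p protos.txt -i eth0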

Enjoy!

Introducing PF_RING 8.0: Batch Packet Processing and XDP Support


This is to announce the new PF_RING release 8.0. This new stable version includes enhancements for improving application performance, adding support for batch processing to the standard API as well (it was already available in the ZC API), and consolidates XDP support, which has been reworked to fully leverage the latest zero-copy support and buffer management, and to take full advantage of native batch capture.

This release also adds support for the latest kernels to the ZC drivers for Intel adapters, including those shipped with CentOS (8.4) and Ubuntu LTS (20), and it is integrated with the latest SDKs for the FPGA capture modules (Accolade, Napatech, Silicom/Fiberblaze).

A few more API extensions and improvements are included in this release, please check the full changelog below for the whole list. Enjoy!

Changelog

PF_RING Library

  • Add pfring_recv_burst API allowing batch receive (when supported by the capture module/adapter)
  • New zero-copy AF_XDP support (reworked), including pfring_recv_burst support
  • Fix breakloop when using pfring_loop

ZC Library

  • New pfring_zc_pkt_buff_data_from_cluster API to get the packet buffer providing packet handle and cluster
  • New pfring_zc_pkt_data_buff API to get the packet handle providing packet buffer and cluster
  • New pfring_zc_pkt_buff_pull_only API to remove data from the head room of a packet
  • Add PF_RING_ZC_BUFFER_HEAD_ROOM define (buffer head room size)
  • Add PF_RING_ZC_SEND_PKT_MULTI_MAX_QUEUES define (max number of queues in queues_mask)

FT Library

  • New pfring_ft_api_version API to get the API version
  • New pfring_zc_precompute_cluster_settings API to get memory information before allocating resources
  • Add VXLAN encapsulation support
  • Add tunnel_id to flow metadata
  • Add support for compiling examples with DPDK >=20
  • Fix L7 metadata with short flows

PF_RING-aware Libpcap

  • Set 5-tuple clustering as default when using clustering with libpcap

PF_RING Kernel Module

  • Support for kernel >=5.9
  • Add more info to /proc, including promisc mode and ZC slots info
  • Handle long interface names (an error is returned for interface names longer than 14 characters, the maximum supported by bind)
  • Fix channel selection when channel is unknown (e.g. VM)
  • Fix triple VLAN tags with hw offload/acceleration
  • Fix check on mapped memory size
  • Fix potential data race in SO_SET_APPL_NAME
  • Fix OOB access

PF_RING Capture Modules

  • Accolade library update (SDK 1_2_20210714)
  • Napatech library update (SDK 12.7.2.1)
  • Silicom/Fiberblaze library update (SDK 3_5_9_1)
  • Add steer_to_ring and ring_id fields to Accolade rules (accolade_hw_rule)
  • Add support for recv burst on Napatech adapters in chunk mode
  • Add PF_RING_PACKET_CAPTURE_PRIO env var to set hostBufferAllowance on Napatech adapters
  • Rename ACCOLADE_RING_BLOCKS env var to ANIC_RING_BLOCKS on Accolade adapters (default is now 16)
  • Fix Accolade drop counter when capturing from rings
  • Fix extraction of packets with nsec timestamps on Timeline module (n2disk dump)

ZC Drivers

  • New ice ZC driver v.1.3.2 (Intel Columbiaville / E810 adapters) with symmetric RSS support
  • Support latest kernels, including RH/CentOS 8.4 and Ubuntu 20, for all ZC drivers
  • i40e ZC driver update (v.2.13.10)
  • e1000e ZC driver update (v.3.8.7)

nBPF

  • New nBPF primitives device ID and interface ID to match metadata from Arista MetaWatch devices

Application Examples

  • pfcount
    • Add -B option (burst mode; see the example after this list)
  • pfsend
    • Add -n <num packets> support with -f <pcap>
    • Add support to reforge src/dst IP from pcap with -S and -D
  • ftflow
    • Add -E option to run extra DPI dissection (e.g. to print JA3 info)
  • zbalance_ipc
    • Add runtime reload of nDPI protocol configuration file
    • Add -m 7 option (sample distribution based on eth type)
    • Add default configuration file /etc/cluster/cluster.conf (when no option is specified)
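
As a quick smoke test of the new burst mode mentioned above, pfcount can be started with -B on a supported interface (the interface name below is illustrative):

# capture using batch receive where the capture module/adapter supports it
pfcount -i zc:eth1 -B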

Misc

  • Move libraries and utility scripts from /usr/local to /usr
  • Install pfring-aware tcpdump with packages
  • Add revision to pfring-dkms version

ntopng 5.0 Is Out: Modern Traffic Monitoring for AIOps and Cybersecurity


ntopng was initially designed as a tool for realtime network traffic monitoring. The idea was to create a DPI-based tool able to report traffic statistics. Over time we have added the ability to implement active monitoring checks, SNMP, and various other features. However, a fundamental point was missing: going beyond traffic reporting, towards traffic analysis. The current Grafana-like trend of having several large screens full of dashboards is the opposite of what we believe we should do. That approach requires network and security administrators to be trained extremely well just to understand whether the network is behaving properly.

ntopng instead inverts this perspective by implementing an AIOps platform, able to analyse network metrics in realtime and to collapse thousands of signals into a single comprehensive view of your network status, without the human intervention or training typical of machine learning tools. For instance, below you can see the service map, a representation of the interactions of local hosts. ntopng learns this map automatically and flags changes and policy violations. Humans can tune the system by selecting which services are not supposed to flow on the network, but the system has already built the knowledge base automatically, also identifying device types (is this a tablet or a router?) and the type of traffic such devices are expected to generate.

In the past months we have introduced the concept of score (you can read this paper for more information on the topic), a numerical value identifying how badly a host is currently behaving. When we compute the score, we take into account not only the alerts a host generates but also its behaviour: this way we can detect changes in behaviour that a human operator would be unable to spot, in particular in modern networks where there are several thousand signals to watch.

Anomalies, as well as all other alerts, are reported in ntopng on a new alerts dashboard that allows you to correlate each event with traffic and drill down to the flows that triggered it.

In essence, ntopng does not just report what happens: it also tells you what is wrong, and it can notify you on your preferred messaging application. This is a major milestone for ntopng and also for open source software.

Below you can find all details and changes of the 5.0 release.

Enjoy !

Main New Changes and Breakthroughs

During these 9 months we have invested a lot of time and energy in reworking the way alerts are handled in ntopng. Initially, host and flow alerts were processed entirely in Lua at fixed intervals of time. This architecture, although very flexible and extensible, had several drawbacks:

  • Lua is an interpreted language, so it is intrinsically more expensive in terms of CPU cycles than a compiled language
  • Processing at fixed intervals of time utilizes the CPUs poorly, with periods of high load and periods where the system is almost idle

To mitigate these drawbacks, we decided to move host and flow alerts from Lua to C++ and to process them continuously, rather than at fixed intervals, to better utilize the CPUs. It was a huge architectural change, but it eventually reduced the average load by 50% and significantly mitigated the load spikes originally caused by fixed-time processing. Here we describe how the new host and flow alerts work, and provide a guide that an interested developer can use as a reference to code new alerts.

Among the benefits of less CPU-intensive alerting is the ability to integrate more tightly with nDPI and its security risks, with ntopng 5.0 now triggering many novel security-related alerts. Alerts are also augmented with intelligence to detect attackers and victims, as well as abrupt changes in host behaviour, e.g. when the score indicator of compromise increases significantly.

The benefits of a reduced load, along with less spiky CPU activity, also proved fundamental to breaking the barrier of 100Kfps NetFlow collection. Indeed, ntopng and nProbe can collect, analyze, and dump NetFlow data at a rate exceeding 100K flows per second. Detailed performance figures can be found here.

However, being able to work at 100Kfps is not that useful unless there are easy and intuitive ways to drill down into data that quickly becomes humongous. For the sake of example, consider that a system operating at 100Kfps accumulates 360M new flows every hour. Therefore, to ease the way data can be drilled down, we added support for observation points in ntopng 5.0. This makes it possible to fluidly drill down into data originating at hundreds of routers. The rationale is that, although nowadays 100Kfps is becoming a requirement for NetFlow collection, it is unlikely that all those flows are generated by a single router. In general, flows come from multiple independent routers that together can sum up to 100Kfps.

Supported Distributions

ntopng 5.0 adds FreeBSD 11 and 12 to its supported platforms, including the popular firewalls OPNsense and pfSense.

Breaking Changes

  • To ensure optimal performance and scalability and to prevent uneven resource utilization, the maximum number of interfaces handled by a single ntopng instance has been reduced to
    • 16 (Enterprise M)
    • 32 (Enterprise L)
    • 8 (all other versions)
  • REST API v1/ is deprecated and will be dropped in the next stable release in favor of REST API v2/
  • The old alerts dashboard has been removed and replaced by an advanced alerts drilldown page with integrated charts

Changelog

The complete list of changes introduced with ntopng 5.0 is:

Breakthroughs
  • Advanced alerts engine with security features, including the detection of attackers and victims
    • Integration of 30+ nDPI security risks
    • Generation of the score indicator of compromise for hosts, interfaces and other network elements
  • Ability to collect flows from hundreds of routers by means of observation points
  • Anomaly detection based on Double Exponential Smoothing (DES) to uncover possibly suspicious behaviors in the traffic and in the score
  • Encrypted Traffic Analysis (ETA) with special emphasis on the TLS to uncover self-signed, expired, invalid certificates and other issues
New features
  • Ability to configure alert exclusions for individual hosts to mitigate false positives
  • FreeBSD / OPNsense / pfSense packages
  • Ability to see the TX/RX traffic breakdown both for physical interfaces and when receiving traffic from nProbe
  • Add support for ECS when exporting to Syslog
  • Improved TCP analysis, including analysis of TCP flows with zero window and low goodput
  • Ability to send alerts to Slack
  • Implementation of a token-based REST API access
Improvements
  • Reworked the execution of hosts and flows checks (formerly user scripts), yielding a reduced CPU load of about 50%
  • Improved 100Kfps+ NetFlow/sFlow collection performance
  • More flexible drilldown of nIndex historical flows
  • Migration to Bootstrap 5
  • Check malicious JA3 signatures against all TLS-based protocols
  • Reworked DoH/DoT handling
Fixes
  • Fixes SSRF and stored-XSS injected with malicious SSDP responses
  • Fixes several leaks in NetworkInterface


HowTo Monitor Customer Traffic in Managed Service Providers and ISPs


ISPs have provided Internet access to customers for years, with the only goal of connecting their users to the Internet. Managed Service Providers (MSP) and Managed Security Service Providers (MSSP) deliver network, services and infrastructure on customer premises, and have become relatively popular in the past few years. Over time, customers have started to ask for new services, including traffic monitoring, security (here MSSPs come into the scene) and visibility.

So if you are an MSP, MSSP or ISP wondering how to monitor customer traffic using ntop tools, this post can be your starting point.

Solution 1: Central Location with Static and Non Overlapping IPs

The simplest solution you can think of is depicted below:

For every network where service is provided, a mirror/TAP is used to duplicate traffic. One nProbe instance per network is used to monitor the mirrored customer traffic (note that the network can be distributed, hence nProbe instances can run on different hosts and locations), and the flows are delivered to the central ntopng via ZMQ. ntopng can be configured to collect flows on various ZMQ interfaces, one per probe, aggregated via the view interface. This way you maximize overall performance, as every interface is independent. In order to limit every user to seeing their own traffic, you need to configure in ntopng one user per customer, restricted to the IPs that customer owns. Example: supposing there is a user whose server has IP 192.168.160.10, then this is the configuration to use.

This solution works if customers do not have overlapping IPs and the IPs are assigned statically (i.e. they do not change over time).

In this case you will need one ntopng license and one nProbe license per host. Note that licenses are bound to the host, so you do not have to pay for multiple licenses if you start multiple nProbe instances on the same host. Configuration example (ntopng is active on host 172.16.100.10 and the nProbes at 192.168.1.2-192.168.1.4 capture traffic on interface eno1):

  • ntopng -i tcp://192.168.1.2:1234 -i tcp://192.168.1.3:1234 -i tcp://192.168.1.4:1234 -i view:all
  • nprobe -i eno1 -n none --zmq tcp://192.168.1.2:1234 (for 192.168.1.2; replicate it for all other nProbes)

Solution 2: Remote Sites and Overlapping IPs

This solution applies to service providers who have remote customer sites with routers/firewalls able to generate NetFlow/IPFIX (e.g. Mikrotik is a popular device used by many companies). As providers often "replicate" the same network for every customer, it is likely that the address plan inside each customer network is the same, and thus you need to keep the traffic divided per customer rather than merging it with the view interface. In this case you need to configure, on the central host where ntopng is running, one ZMQ interface per customer (i.e. each customer will have its own ZMQ interface, so traffic of different customers is not mixed). The nProbe instances collecting flows can run on the same host where ntopng is active, each collecting the traffic of an individual customer.

In this case, supposing both nProbe and ntopng run on the same host, you will need one ntopng Enterprise L Bundle license (able to support up to 32 ZMQ interfaces and thus 32 customers), which includes both nProbe and ntopng licenses. Configuration example (ntopng and nProbe are active on host 172.16.100.10):

  • ntopng -i tcp://127.0.0.1:1234 -i tcp://127.0.0.1:1235 -i tcp://127.0.0.1:1236
  • nprobe -3 2055 -n none --zmq tcp://127.0.0.1:1234 (customer A flows are collected on 172.16.100.10:2055)
  • nprobe -3 2056 -n none --zmq tcp://127.0.0.1:1235 (customer B flows are collected on 172.16.100.10:2056)
  • nprobe -3 2057 -n none --zmq tcp://127.0.0.1:1236 (customer C flows are collected on 172.16.100.10:2057)


In this case each customer account will be configured to restrict its view to its own ZMQ monitored interface.

Of course if you have more than 32 customers, you can replicate the above solution until all customers are monitored.

Final Remarks

This post has shown the main options you have to address the monitoring needs of your customers. Note that ntopng has the ability to deliver alerts remotely or to messaging systems, so you can also configure this feature per customer for a complete monitoring experience. Now it's time to play with the ntop tools and have fun bringing visibility to your customers in a cheap and effective way.


Enjoy !


How To Configure Flow and Packet Deduplication in nProbe


Sometimes traffic monitoring requires data deduplication: due to topology or hardware constraints, some network traffic is monitored by multiple devices, while other traffic is monitored by a single device only. This means that, unless some corrections are configured, traffic measurements are wrong and thus useless. Fortunately, we have implemented some features that allow you to avoid this problem by discarding duplicated traffic before it hits the collector. The collector is already busy with the various activities it has to carry on, so it is better to avoid duplicates at the source (i.e. at the nProbe side) rather than at the collector side, where deduplication rules can become complicated when multiple issues are mixed in the same network.

As there are multiple scenarios and solutions, below some use cases are listed to explain this in detail:

A. Packet Deduplication

This section applies when nProbe sniffs traffic and converts it into flows. For flow collection please move to section B. Remember that PF_RING comes out of the box with utilities for aggregating, distributing and dropping packets: see for instance zbalance for more information on this topic.

A1. Overlapping Packets/Networks

In some networks, when merging packets coming from various sources, there is some duplication, as some packets (e.g. those of a specific subnetwork) are observed by multiple probes. If this problem is not addressed, there is partial data duplication for those networks/hosts that are observed multiple times. The simplest solution is to use packet filtering to discard the packets that can be duplicated. For instance, suppose that nProbe A sees traffic of network 172.16.0.0/16 in addition to other traffic, and that nProbe B also sees traffic of network 172.16.0.0/16 in addition to other traffic that nProbe A does not see (i.e. the only overlap is on network 172.16.0.0/16). In this case, on either nProbe A or B (not on both!) you can add -f "not net 172.16.0.0/16".
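
A minimal sketch of this setup, assuming nProbe B captures on interface eth1 (the interface name and ZMQ endpoint are illustrative):

# nProbe B discards the overlapping network; nProbe A keeps exporting it
nprobe -i eth1 -f "not net 172.16.0.0/16" -n none --zmq tcp://127.0.0.1:1234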

A2. Consecutive Duplicated Packets

In some cases packets are observed twice, meaning that the hardware (e.g. a packet broker, or a mirror) emits the same packet twice. This is the simplest form of duplication, as you see some packets twice and others once. nProbe can deduplicate this traffic, discarding consecutive copies of the same packet, via the option --enable-ipv4-deduplication.
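
For example (interface name illustrative):

# discard back-to-back duplicates of the same IPv4 packet
nprobe -i eth1 --enable-ipv4-deduplication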

B. Flow Deduplication

This section applies to nProbe when used in flow collector mode (i.e. when a router creates flows that are collected by nProbe).

B1. Overlapping Flows

This is the same as A1, but for flows instead of packets. In case overlapping networks are observed by multiple probes, you need to set a filter on all collectors but one (which will be the one emitting flows for the duplicated network) to discard the duplicated flows. The option --collection-filter <filter> allows you to specify a flow collection filter. The filter can be a network or an AS; if you have multiple filters, you can separate them with a comma. Example: --collection-filter "192.168.0.0/24" means that flows where one of the peers (no matter whether source or destination) belongs to 192.168.0.0/24 are discarded. Instead, --collection-filter "!192.168.0.0/24" means that flows where none of the peers belong to 192.168.0.0/24 are discarded. You can also filter flows based on the autonomous system (remember to load the GeoIP data files). Example: --collection-filter "!as12345".
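
For instance (collection ports and ZMQ endpoints are illustrative), two collectors observing the same duplicated network could be configured as:

# collector 1: emits flows for the duplicated network (no filter)
nprobe -3 2055 -n none --zmq tcp://127.0.0.1:1234
# collector 2: discards flows where one of the peers belongs to the duplicated network
nprobe -3 2056 -n none --zmq tcp://127.0.0.1:1235 --collection-filter "192.168.0.0/24"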

B2. Partially Duplicated Flows

Due to high availability and routing, some flows can be observed more than once depending on traffic conditions. So flows can be constantly duplicated, or duplicated only under certain conditions (e.g. the main path is down and a backup path is observed). As there is no rule of thumb for discarding duplicated flows, the duplication being completely dynamic and unpredictable, the best option is --flow-deduplication <interval (sec)>. In essence, this creates a sliding time window of X seconds: if the same flow is observed multiple times within the window, only the first flow is emitted and the following copies are discarded. Reasonable values for the time interval are 15 or 30 seconds, to make sure that flows are deduplicated while keeping the deduplication cache from growing too large.
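
For example (collection port and ZMQ endpoint are illustrative):

# discard copies of the same flow observed within a 30-second window
nprobe -3 2055 -n none --zmq tcp://127.0.0.1:1234 --flow-deduplication 30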

Final Remarks

As you have read, there is no single solution to this problem, as there are many use cases. nProbe offers a plethora of options that should allow you to cover all of them.

We hope this article walked you through all the possible options nProbe offers. If you have questions or feedback, get in touch with us and let us know.

Enjoy!

HowTo Monitor Traffic in SMEs and Home Networks: A Primer


In the first part of this series of articles we focused on monitoring ISP and MSP traffic. Today we analyse network traffic in SMEs and home networks. The typical network layout of a home or a small business is depicted below.


The ISP provides a router for connecting to the Internet (e.g. xDSL or fibre) that usually also features an embedded access point used by phones, tablets and laptops. In order to monitor LAN traffic, the best solution is to replace the current switch with one that supports sFlow; however, most people do not pay much attention to what happens inside the LAN and rather focus on Internet traffic monitoring, as this is where threats and slowdowns happen. In order to monitor Internet traffic we need to hook a probe where this traffic flows, and thus we need to make some changes to the topology, shown in red in the figure below.

Namely:

  • Either replace the existing switch (not advised, as you need to change the wiring, or spend a lot of money if you have a switch with many ports) or, better, add a new switch with mirroring capabilities. Today they are very cheap (~30 Euro/35 US$) and simple to use (see for example the Zyxel GS1200-8 and TP-Link TL-SG105E), so the best option is to add a new switch between the router and the rest of the network.
  • Disable WiFi on the router and add an access point connected to the network. This is required because if you leave WiFi enabled on the router, that traffic will not pass through the mirror and thus will be invisible.
  • Pick an existing PC or add a new one (even a Raspberry Pi can be enough if you have up to a couple of Mbit of Internet bandwidth; otherwise better to use a more capable PC) for running ntopng. This PC needs two ethernet interfaces: one to connect the PC to the network, and the other (even a USB ethernet interface will work) to receive traffic from the mirror. Supposing this interface is named eth2, you need to start "ntopng -i eth2". That's all.

With this solution you can monitor the traffic, as well as the security, of the whole network at a relatively low hardware cost (< 100 Euro/US$), which we believe is an acceptable price for keeping your network healthy and safe.


Enjoy !

October 7th: Webinar on ntopng 5.0. You’re invited !


This is to invite you to the webinar about ntopng 5.0, released this summer. The idea is to walk through the new features and possibilities offered by this version.

We hope to see you all !

Webinar Content

ntopng was initially designed as a tool for real-time network traffic monitoring; with release 5.0 we have started its transition from a monitoring to an AIOps tool. We wanted to make it more accessible and intelligent, able to analyze network metrics in real-time and collapse tens or even thousands of metrics into a subset of actionable signals.

In this webinar we will show the new insights generated with:

  • Service maps, to detect expected and unexpected network services (e.g. lateral movements).
  • The 'score' indicator of compromise, which combines tens of indicators to detect misbehaving hosts.
  • Behavioural Encrypted Traffic Analysis (ETA).

We will also briefly introduce the newly supported platforms: FreeBSD, OPNsense and pfSense.

Event Details and Registration

You can register at this URL, where you will also find further details. The event will start at 4 PM CET (10 AM EST), will last one hour, and is free of charge.


Introducing ntop Professional Training Service


Many of you have been asking for professional training, in particular companies and large installations. Over the years we have produced many software applications that allow you to improve network visibility and block cybersecurity threats.

In this ever-growing ecosystem, we acknowledge that blog posts and webinars might not be sufficient for everyone. For this reason we have created a professional training service designed for people who want to master ntop products in their daily activities. The idea is to divide the training into 5 sessions of 90 minutes each, so that you can attend without having to leave your daily activities. At the end of the training you can apply for a certificate of proficiency, awarded if you pass the final exam.

Due to current travel restrictions, we are starting to offer this service online, but when possible we plan to run it in person as well. You can read more about topics, schedule and duration on this web page.

The first scheduled session will start November 16th, 2021. Make sure to join it !

Webinar on Traffic Analysis for Cybersecurity: Current State of the Art and Ongoing Developments


On October 28th at 4 PM CET / 10 AM EST we organised a webinar on cybersecurity. The idea was to describe in detail what we have implemented so far for tackling cybersecurity events, and what the future plans and ongoing developments are.

Topics included:

  • nDPI traffic analysis: flow risks and Encrypted Traffic Analysis (ETA).
  • Behavioural traffic analysis.
  • Combining nProbe and ntop with IPS facilities.
  • Beyond nProbe Agent: user and process analysis in monitored flows.

For those who missed the event, here you can find the presentation slides and the video recording of the webinar.

Introducing PF_RING ZC Support for Mellanox Adapters


PF_RING ZC is ntop’s high-speed zero-copy technology for high speed packet capture and processing. Until now ZC supported 10/40/100 Gbit adapters from Intel based on ASIC chips, in addition to the FPGA-based 100 Gbit adapters already supported by PF_RING including Accolade/Napatech/Silicom.

This post announces a new ZC driver, known as mlx, supporting a new family of 100 Gbit ASIC-based adapters, this time from Mellanox/NVIDIA, including the ConnectX-5 and ConnectX-6 adapters.

The supported ConnectX adapters from Mellanox, in combination with the new mlx driver, proved capable of high performance, letting our applications scale up to 100 Gbps with worst-case traffic, and of flexibility, with support for hardware packet filtering, traffic duplication and load-balancing, as we will see later in this post. All this in addition to interesting and useful features like nanosecond hardware timestamping.

Before diving into the details of Mellanox support, we want to list the main differences of this ZC driver with respect to all other adapters. Mellanox NICs can be logically partitioned into multiple independent virtual ports, typically one per application. This means, for instance, that:

  • You can start nProbe Cento and n2disk on top of the same Mellanox adapter port: nProbe Cento can tell the adapter to implement 8-queue RSS for its virtual adapter, while n2disk can use its virtual adapter with a single queue to avoid shuffling packets.
  • Traffic duplication: you can use the adapter to natively implement in-hardware packet duplication (in the above example both nProbe Cento and n2disk receive the same packets, duplicated in hardware). This is possible because each virtual adapter (created when an application opens a Mellanox NIC port) receives a (zero) copy of each incoming packet.
  • Packet filtering: as every application opens a virtual adapter, each application can specify independent in-hardware filtering rules (up to 32k per virtual adapter). This means, for instance, that cento could instruct the adapter to receive all traffic, while n2disk could discard in hardware, using a filtering rule, all traffic on TCP/80 and UDP/53 that is not relevant for the application.

All of the above happens in hardware, and you can start hundreds of applications on top of the same adapter port, each processing a portion of the traffic or all of it, based on the specified filtering rules. Please note that everything just described is per port, meaning that the same application can open different virtual adapter ports with different configurations. Later in this post you will read more about this feature of the ZC driver for Mellanox.

After this overview, it is now time to dig into the details and learn how to use ZC on top of Mellanox NICs.

Configuration

In addition to the standard pfring package installation (available by configuring one of our repositories at packages.ntop.org), the mlx driver requires the Mellanox OFED/EN SDK, to be downloaded and installed from the Download section of the Mellanox website.

cd MLNX_OFED_LINUX-5.4-1.0.3.0-ubuntu20.04-x86_64
./mlnxofedinstall --upstream-libs --dpdk

ConnectX-5 and ConnectX-6 adapters are supported by the driver; however, there is a minimum recommended firmware version for each adapter model. Please check the documentation for an updated list of supported adapters and firmware versions. This is the main difference with respect to other drivers: you will not find a dkms package to install (e.g. ixgbe-zc-dkms_5.5.3.7044_all.deb) as with Intel; once you have installed the Mellanox SDK as described below, PF_RING ZC is able to operate without installing any ZC driver for the Mellanox adapter.

After installing the SDK, it is possible to use the pf_ringcfg tool, part of the pfring package, to list the installed devices and check their compatibility.

apt install pfring
pf_ringcfg --list-interfaces
Name: eno1      Driver: e1000e     RSS:     1    [Supported by ZC]
Name: eno2      Driver: igb        RSS:     4    [Supported by ZC]
Name: enp1s0f0  Driver: mlx5_core  RSS:     8    [Supported by ZC]
Name: enp1s0f1  Driver: mlx5_core  RSS:     8    [Supported by ZC]

The same tool can be used to configure the adapter: this tool loads the required modules, configures the desired number of RSS queues, and restarts the pf_ring service.

pf_ringcfg --configure-driver mlx --rss-queues 1

The new mlx interfaces should now be available to applications. The pfcount tool can be used to list them.

pfcount -L -v 1
Name       SystemName Module  MAC               BusID         NumaNode  Status  License Expiration
eno1       eno1       pf_ring B8:CE:F6:8E:DD:5A 0000:01:00.0  -1        Up      Valid   1662797500
eno2       eno2       pf_ring B8:CE:F6:8E:DD:5B 0000:01:00.1  -1        Up      Valid   1662797500
mlx:mlx5_0 enp1s0f0   mlx     B8:CE:F6:8E:DD:5A 0000:00:00.0  -1        Up      Valid   1662797500
mlx:mlx5_1 enp1s0f1   mlx     B8:CE:F6:8E:DD:5B 0000:00:00.0  -1        Up      Valid   1662797500

pfcount can also be used to run a capture test, using the same interface name reported by the list.

pfcount -i mlx:mlx5_0

If multiple receive queues (RSS) are configured, the pfcount_multichannel tool should be used to capture traffic from all queues (using multiple threads).

pfcount_multichannel -i mlx:mlx5_0

Performance

During the tests we ran in our lab using a Mellanox ConnectX-5 on an Intel Xeon Gold 16-core @ 2.2/3.5 GHz, this adapter proved capable of capturing more than 32 Mpps (20 Gbps with worst-case 60-byte packets, 40 Gbps with an average packet size of 128 bytes) on a single core, and of scaling up to 100 Gbps using 16 cores with RSS enabled.

What is really interesting is the application performance: initial tests with nProbe Cento, the 100 Gbit NetFlow probe part of the ntop suite, showed that it is possible to process 100 Gbps of worst-case traffic (small packets) using 16 cores, or 40 Gbps using just 4 cores. Please note that this performance depends heavily on the traffic type (fewer cores are required for a bigger average packet size, for instance) and can change according to the input and the application configuration.

Packet transmission proved quite fast as well in our tests, delivering more than 16 Mpps per core and scaling linearly with the number of cores when using multiple queues (e.g. 64 Mpps with 4 cores).

Flexibility

An interesting feature of this adapter is the flexibility it provides when it comes to traffic duplication and load-balancing. In fact, as opposed to the ZC drivers for Intel and FPGA adapters, access to the device is not exclusive, and it is possible to capture (duplicate) the traffic from multiple applications. In addition, it is possible to apply a different load-balancing (RSS) configuration for each application. As an example, this allows us to run nProbe Cento and n2disk on the same traffic, where nProbe Cento load-balances the traffic across N streams/cores, while n2disk receives all the traffic as a single data stream.

In order to test this configuration, RSS should be enabled when configuring the adapter with pf_ringcfg, setting the number of queues that cento should use to load-balance the traffic across multiple threads.

pf_ringcfg --configure-driver mlx --rss-queues 8

Run cento by specifying the queues and the cores affinity.

cento -i mlx:mlx5_0@[0-7] --processing-cores 0,1,2,3,4,5,6,7

Run n2disk on the same interface. Please note that n2disk will configure the socket to use a single queue, as a single data stream is required for dumping PCAP traffic to disk. Please also note that packet timestamps are provided by the adapter and can be used to dump PCAP files with nanosecond timestamps.

n2disk -i mlx:mlx5_0 -o /storage -p 1024 -b 8192 --nanoseconds --disk-limit 50% -c 8 -w 9

Hardware Filtering

The last, but not least, feature we want to mention in this post is the hardware filtering capability. The number of filters is pretty high (64 thousand rules on ConnectX-5) and flexible for an ASIC adapter. In fact it is possible to:

  • Assign a unique ID that can be used to add and remove specific rules at runtime.
  • Compose rules by specifying which packet header field (protocol, src/dst IP, src/dst port, etc) should be used to match the rule.
  • Define drop or pass rules.
  • Assign a priority to the rule.

What is interesting here, besides the flexibility of the rules themselves, is the combination of traffic duplication and rule priority, which is applied across sockets. Just to mention an example, two applications capturing traffic from the same interface and setting pass rules matching the same traffic with the same priority will both receive that traffic; if the priorities differ, only the application that set the higher-priority rule receives it.

Please refer to the documentation for learning more about the filtering API and sample code.

License

The ZC driver for Mellanox requires a per-port license, similar to what happens with Intel adapters. The price of the Mellanox driver license is the same as the Intel ZC one, even though it is much richer in features than what you can do with Intel. You can purchase driver licenses online from the ntop shop or from an authorised reseller.

Final Remarks

In summary, a sub-1000$ Mellanox NIC can achieve the same performance as FPGA-based adapters at a fraction of the cost, and provides many more features and more freedom thanks to the concept of the virtual adapter.

Enjoy ZC for Mellanox !

n2n 3.0 is Here !


During the last year, long-discussed ideas turned into implemented functionalities, adding remarkably to n2n's rich feature set, each of them worthy of note. The level achieved made us think it justified a major release. Welcome, n2n 3.0 !

Starting from this stable platform, future versions of n2n's 3.x series will further promote its versatility while maintaining compatibility. To achieve this, development will mainly focus on areas outside the underlying core hole-punching protocol, and will include, but probably not be limited to, connection handling, management capabilities, build system tuning, as well as internal code structure.

For now, we would like to encourage you to have a look at the freshly released 3.0 yourself.

The following changelog intends to cause happy and eager anticipation.

Enjoy!

New Features

  • Federated supernodes to allow multiple supernodes for load balancing and fail-over (doc/Federation.md)
  • Automatic IP address assignment allows edges to draw IP addresses from the supernode (just skip -a; see the example after this list)
  • Allowed community names can be restricted by regular expressions (community.list file)
  • Network filter for rules (-R) allowing and denying specific traffic to tunnel
  • Experimental TCP support (-S2) lets edges connect to the supernodes via TCP in case firewalls block UDP (not available on Windows yet)
  • All four supported ciphers offer integrated versions rendering OpenSSL dependency non-mandatory (optionally still available)
  • MAC and IP address spoofing prevention
  • Network interface metric can be set by command-line option -x (Windows only)
  • Re-enabled local peer detection by multicast on Windows
  • Edge identifier (-I) helps to identify edges more easily in management port output
  • Optionally bind edge to one local IP address only (extension to -p)
  • A preferred local socket can be advertised to other edges for better local peer-to-peer connections (-e)
  • Optional edge user and password authentication (-J, -P, doc/Authentication.md)
  • Optional json format at management port allows for machine-driven handling such as .html page generation (scripts/n2n-httpd) or script-based evaluation (scripts/n2n-ctl)
  • Completely overhauled build system, including GitHub action runners performing code syntax and formal checks, creating and running test builds, providing binaries and packages as artifacts, and running verification tests

Improvements

  • Increased edges’ resilience to temporary supernode failure
  • Fixed a compression-related memory leak
  • Ciphers partly come with platform-specific hardware acceleration
  • Added a test framework (tools/test-*.c and tests/)
  • Clean-up management port output
  • Polished benchmark tool output
  • Spun off name resolution into a separate thread, avoiding lags
  • Added support for additional environment variables (N2N_COMMUNITY, N2N_PASSWORD, and N2N_FEDERATION)
  • Implemented new reload_communities command to make supernode hot-reload the -c provided community.list file, issued through management port
  • Reactivated send out of gratuitous ARP packet on establishing connection
  • Enhanced documentation (doc/ folder) including the man pages and command-line help text (-h and more detailed –help)
  • Self-monitoring time stamp accuracy for use on systems with less accurate clocks
  • Fixed man pages’ and config files’ paths
  • Code clean-up

Data Aggregation in ntopng: Host Pools vs Observation Points


ntopng allows users to aggregate data according to various criteria. In networking, IP addressing (network and mask/CIDR) and VLANs are the typical solutions to the problem of aggregating homogeneous hosts (e.g. hosts that carry on similar tasks). Sometimes, however, these aggregation facilities are not flexible enough, e.g. to cluster hosts that run the same operating system, or flows originated by the same router/switch.

In addition to typical network-based criteria such as IP and VLAN, ntopng implements two more data aggregation facilities.

Hosts Aggregation: Host Pools

A host pool is a logical aggregation of hosts, networks and MAC addresses (the latter facility is available only if L2 information is present). Pools are used to group hosts that share a common property. For instance, in ntopng there is a "Jailed Hosts" pool containing hosts that are considered dangerous (e.g. because their score has been too high for a long time). Pools are a host aggregation facility.

Flows Aggregation: Observation Points

In flow-based analysis (e.g. when ntopng collects flows created/collected by nProbe), in addition to pools it is often necessary to identify flows (not hosts) based on additional criteria. All flows, in addition to properties such as IP/port/bytes/packets, are also marked with the IP address of the flow exporter device that created them. However, the exporter IP might be too granular, as a single company location (e.g. site A) can have multiple probes (hence different IPs) that need to be aggregated. For this case, nProbe/ntopng implement the observation point concept: a numerical identifier used to mark flows coming from various exporters that need to be logically aggregated.
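
As a hedged sketch (the -E values and ZMQ endpoint are illustrative; see the nProbe user's guide for the exact syntax), two probes at the same site can mark their flows with the same observation point so that ntopng aggregates them:

# both site-A probes export flows marked with the same observation domain id (100)
nprobe -i eth1 -E 0:100 --zmq tcp://collector:5556
nprobe -i eth2 -E 0:100 --zmq tcp://collector:5556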

In conclusion, the observation point is a way to logically aggregate flows, whereas pools are used to aggregate hosts. For this reason they can be used simultaneously.

Instead, if you need to do the opposite, i.e. divide data into homogeneous groups, ntopng offers a disaggregation facility that implements this per interface.


Enjoy !

nDPI-based Traffic Enforcement on OPNsense/pfSense/Linux using nProbe


nProbe IPS is an inline application able both to export traffic statistics to NetFlow/IPFIX collectors as well as to ntopng, and to enforce network traffic using nDPI, ntop's deep packet inspection framework. This blog post shows how you can use a new graphical configuration tool we have developed to ease the configuration of IPS rules on OPNsense. Please note that nProbe IPS is also available on pfSense and Linux, where you need to configure it via the configuration file, as described later in this post and in the nProbe user's guide.

Once nProbe is installed as described on this page, you will see it listed in the Services page as depicted below.

nProbe can be used in two operational modes:

  • IPS mode: nProbe operates inline, meaning that packets are received by nProbe, which, based on the policy rules, decides whether they are forwarded or blocked.
  • Passive mode: nProbe observes the traffic flowing on the selected interface without interfering with the packets.

Enabling IPS Mode

The Interface dropdown menu allows network administrators to specify which interface nProbe will supervise. As we need to block traffic, IPS mode needs to be enabled by selecting the "Enable IPS Mode" checkbox. If you enable ntopng (either on the OPNsense box or on another host), you can specify an optional ZMQ endpoint through which ntopng will send events to nProbe: you can read more about integrating ntopng and nProbe IPS in the user's guide.

IPS rules are enabled only on traffic matching one of the peers specified in the “Local Networks” field. This means that if you have a connection host A:port a <-> host B:port b flowing on the specified network interface, such a connection will be policed only if at least one of the peers (either host A or host B) belongs to the local networks list. If neither peer is part of the list, the flow is not enforced and flows without restrictions. This way you can decide what traffic needs to be policed and what traffic can flow unpoliced.
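As a concrete illustration, the “at least one peer is local” test boils down to something like the following (a plain-Python sketch, not nProbe code; the CIDR list is an assumption):

```python
import ipaddress

# Illustrative local networks, as they might be entered in the
# "Local Networks" field (assumption: CIDR notation).
LOCAL_NETWORKS = [ipaddress.ip_network(n) for n in ("192.168.1.0/24", "10.0.0.0/8")]

def is_local(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in LOCAL_NETWORKS)

def flow_is_enforced(host_a: str, host_b: str) -> bool:
    # A flow is policed only if at least one of its peers is local;
    # otherwise it flows without restrictions.
    return is_local(host_a) or is_local(host_b)

print(flow_is_enforced("192.168.1.5", "93.184.216.34"))  # True: policed
print(flow_is_enforced("8.8.8.8", "93.184.216.34"))      # False: unpoliced
```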

 

Configuring IPS Policies

The default IPS configuration is set to Allow, meaning that nProbe will not block any packet. Through the user interface you can specify what traffic needs to be blocked. In particular you can specify:

  • The list of application protocols to be blocked. These include popular protocols such as Facebook, YouTube and Netflix, as well as any of the over 250 protocols supported by nDPI.
  • The list of protocol categories to be blocked. Blocking traffic based on individual application protocols can be complicated for people who are not familiar with network protocols, so it is possible to specify protocol categories instead. For instance, if you want to block all social network traffic, you can specify the “Social Network” category instead of selecting individual protocols such as TikTok, Pinterest or Snapchat.
  • The flow risks to be blocked by the IPS. For instance, you can configure nProbe to drop encrypted connections with self-signed certificates, malformed packets, or traffic containing clear-text credentials. You can use this feature to block connections that carry potential security risks.
  • The list of countries and continents that local hosts are forbidden to contact. This way network administrators can prevent contacts with unwanted locations. A typical example is when you need to protect resources based on location, for instance allowing servers to be accessed only by hosts located inside the local country.

Once policies are saved, they are immediately effective, with no need to restart the nProbe service.
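To summarise how the four policy dimensions above combine, here is a hedged sketch of the decision logic (hypothetical rule sets and names, not the actual nProbe configuration format or API):

```python
# Hypothetical policy sets; a real deployment would populate these from
# the OPNsense UI or the nProbe IPS configuration file.
BLOCKED_PROTOCOLS  = {"TikTok", "BitTorrent"}
BLOCKED_CATEGORIES = {"SocialNetwork"}
BLOCKED_RISKS      = {"self_signed_certificate", "clear_text_credentials"}
BLOCKED_COUNTRIES  = {"XX"}  # placeholder country codes forbidden to local hosts

def verdict(protocol, category, risks, peer_country):
    """Return 'drop' if any policy matches, 'forward' otherwise
    (the default policy is Allow)."""
    if protocol in BLOCKED_PROTOCOLS:
        return "drop"
    if category in BLOCKED_CATEGORIES:
        return "drop"
    if BLOCKED_RISKS & set(risks):  # any flagged flow risk blocks the flow
        return "drop"
    if peer_country in BLOCKED_COUNTRIES:
        return "drop"
    return "forward"

print(verdict("Netflix", "Media", [], "US"))                     # forward
print(verdict("Pinterest", "SocialNetwork", [], "US"))           # drop
print(verdict("TLS", "Web", ["self_signed_certificate"], "US"))  # drop
```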

The OPNsense user interface has been developed to simplify policy configuration, but if you are an advanced user, or if you do not run OPNsense (e.g. you use pfSense or Linux), you can still configure policies by editing the IPS configuration file located under /usr/local/etc/nprobe (on Linux, under /etc/nprobe). This way you can specify a more fine-grained setup by enabling multiple configurations, as described in the nProbe user’s guide.

 

Final Words

nProbe IPS is a versatile traffic policer based on deep packet inspection. It complements the native firewall by adding the ability to drop traffic based on application protocols, cybersecurity risks, and geofencing. Enjoy !

ntop MiniConf Italia 2021: December 16, 16:00 CET


This year we have organised various online events for our international community. Since we have many Italian-speaking users, we have decided to organise an event in Italian that will take place on December 16th.

On Thursday, December 16th at 4 PM we are holding an online event where we want to meet our community to:

  • Present what we have done during the current year.
  • Present our plans for the year to come.
  • Talk with our users to understand what they expect from us in the future.

The registration link is available at this address: once registered, you will receive the link to attend the event.

We look forward to seeing many of you!

ntop tools and Log4J Vulnerability


Recently we have received many inquiries about whether ntop tools are immune to the Log4J vulnerability. As you know, at ntop we take code security seriously, hence we confirm that:

  • In ntop we do not use Java or Log4J.
  • ntop tools are immune to the above vulnerability, hence no action or upgrade is required.

Enjoy !

A Gentle Introduction To Timeseries Similarity in nDPI (and ntopng)


Introduction

Let’s start from the end. In your organisation you probably have thousands of timeseries of various natures: SNMP interfaces, host traffic, protocols, etc. You would like to know which timeseries are similar, as this is necessary for addressing many different questions:

  • Host A and host B are two different hosts that have nothing in common but have the same traffic behaviour.
  • Host C is under attack: who else is also under attack?
  • SNMP interface X and interface Y are load balancing/sharing the same traffic: are their timeseries alike or not? Namely, is load balancing working as it should?
  • Is host Z behaving differently with respect to last week?

In essence we want to spot timeseries that are similar (ignore for a second the case where they are exactly the same, such as two timeseries with constant traffic) such as those shown below.

In the latest ntopng we have introduced (currently limited to SNMP interfaces, but it will soon be extended to other components) a new function that shows which SNMP interfaces have similar timeseries and marks them with a similarity score (the higher the score, the more similar they are). Using this feature you can find which ports in your list of devices behave the same way, and thus spot similarities (or unexpected non-similarities) in a matter of clicks. All this is implemented on top of nDPI, which features all the necessary tools, as described in the next section, without complex/heavy machine-learning techniques that would prevent ntop tools from running on all devices.

Detecting Timeseries Similarity in nDPI

nDPI is often known only for detecting application protocols. However, it also implements various methods and algorithms for analyzing data. Today we’ll talk about timeseries similarity detection in nDPI. A timeseries is a time-ordered list of data points. Detecting similarity in timeseries is a complicated task (see this document to read more about the subject), but at the same time we need an easy solution that can be implemented at low cost. nDPI implements data binning techniques that make this task easy. In networking, timeseries are all created at the same time (e.g. by ntopng), so there is no need to shift and align the series using algorithms such as dynamic time warping, unless you are comparing timeseries created in different timezones, which is not our case. Two timeseries can be compared as follows: for every point in the series, we compute the sum of the squared differences between the two timeseries, as shown in the table below.

 

Series A    Series B    Difference    Difference^2
   12          11            1              1
   34          32            2              4
   23          22            1              1
   43          45           -2              4
   23          23            0              0
                                 Total:    10
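In code, the comparison boils down to a few lines. Below is a minimal plain-Python sketch of the metric (an illustration, not the actual nDPI implementation):

```python
def ssd(series_a, series_b):
    """Sum of squared point-wise differences between two time-aligned
    series: 0 means identical, larger totals mean more dissimilar."""
    assert len(series_a) == len(series_b), "series must be time-aligned"
    return sum((a - b) ** 2 for a, b in zip(series_a, series_b))

# The two series from the table above: the total is 10.
print(ssd([12, 34, 23, 43, 23], [11, 32, 22, 45, 23]))  # 10
```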

nDPI implements bins, and thus once a timeseries has been converted into a bin it can easily be compared. Two timeseries are identical if the total is zero: the larger the total, the more different the timeseries. In many networks you need to compare thousands of series, hence a naive 1:1 comparison of every pair would be inefficient, having quadratic complexity. However, the algorithm can be sped up by implementing an early discard of the timeseries combinations that cannot match. This is possible as follows:

  • For each bin (read: timeseries) use nDPI to compute the average and standard deviation.
  • Represent each timeseries as a circle whose center sits on the X axis at a distance from the origin equal to the series average, and whose radius is the standard deviation.
  • Two timeseries are definitely not similar if the two circles do not intersect (the converse does not necessarily hold), as sketched below.
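Here is a minimal sketch of this pruning test in plain Python (nDPI computes the average and standard deviation natively; this is only an illustration of the idea):

```python
import statistics

def may_be_similar(series_a, series_b):
    """Early-discard test: model each series as a circle centered at its
    average, with a radius of its standard deviation. If the two circles
    do not intersect, the series cannot be similar and the full
    point-by-point comparison (e.g. ssd() above) can be skipped."""
    mean_a, mean_b = statistics.mean(series_a), statistics.mean(series_b)
    std_a, std_b = statistics.pstdev(series_a), statistics.pstdev(series_b)
    return abs(mean_a - mean_b) <= (std_a + std_b)

print(may_be_similar([12, 34, 23, 43, 23], [11, 32, 22, 45, 23]))  # True
```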

Using this technique it is possible to avoid comparisons that would not result in similarity and focus only on those that are likely to match. To make a long story short and see how all this works in practice, we have written a simple tool (this is the complete source code) that finds similarities using the timeseries written in RRD format by ntopng. To give you an idea of how long this process takes: an old dual-core Intel Core i3 can read 2500 host timeseries and compare them in less than a second. This is the same algorithm used by ntopng to compare the SNMP interface traffic mentioned earlier in this article, and it is as fast as this simple similarity tool.

Enjoy !
