
Short ntop Roadmap for 2022


Those who attended our latest 2021 webinar got a feel for ntop's plans for this year. In summary, we keep focusing on cybersecurity and visibility, and plan to further enhance our existing tools as follows:

  • nDPI: we plan to improve the detection of new threats and make nDPI more configurable by end users. The idea is that end users can extend the core via configuration files in order to catch malware or contacts with suspicious/infected hosts. We do not want to turn nDPI into a rule-based tool such as many IDSs that search for very specific events (if X and Y and Z and … then K), but rather stay more general by leveraging flow risks. Note that since the 4.0 release we have also significantly reduced memory usage and made nDPI faster, which benefits all users.
  • PF_RING: as we did with packets many years ago, we want to extend packet metadata and simplify system introspection by providing a simple and lightweight layer for observing processes, sockets, connections and users without using eBPF, which is not present in all Linux distributions and is often overwhelming, as it was designed to be general contrary to what we plan to achieve. Once this is done, we can integrate system introspection into tools such as ntopng and nProbe to further improve visibility and thus security.
  • nProbe: we are completing syslog support for turning syslog entries into flows (a sneak peek of non-NetFlow/IPFIX collection was demonstrated with AWS VPC log support), as some devices (e.g. Fortinet) provide more information via syslog than via NetFlow. In addition, we want to turn nProbe into a timeseries tool able to create timeseries from flow records (for instance from sFlow counter samples) and send them to timeseries databases such as InfluxDB. As already announced, we continue to address cybersecurity needs by adding features that turn nProbe into a lightweight network EDR tool. Finally, in particular on Windows, we are enhancing local system monitoring capabilities to provide a clue about which process accesses which system/network resources.
  • ntopng: we are targeting the 5.2 release in spring, which will bring better performance and reduced resource usage (memory and CPU), and will replace flow indexing based on nIndex with a full-fledged and scalable solution based on ClickHouse (stay tuned, we will announce it next week). We are modifying the web GUI to make it less table-oriented than it is today and more graphical, in order to implement better reports and greatly enhance analytics that have not changed in a long time. We also plan to create a query language for accessing ntopng information so that users can create new traffic checks and custom timeseries in seconds. This should pave the way to finally turning ntopng into a tool (also) for non-programmers, who can extend it as needed to address very custom needs (how much traffic has host X sent to hosts located in the EU using HTTPS that was not destined for a CDN?).
  • n2disk: we want to implement on-the-fly pcap encryption (i.e. during packet dump) as well as enhance indexing capabilities, perhaps exporting metadata to ClickHouse to be better integrated with ntopng.

This is not our complete roadmap for 2022, but what we plan to do for the coming months. Please feel free to contact us on Discord or Telegram to interact with the ntop team and provide feedback and directions.

Finally, we need to enlarge the ntop core team and we’re hiring. You can read more here: please apply if you like what we do and would like to be part of our team.

Enjoy!


HowTo Define nDPI Risk Exceptions for Networks and Domains


In the past couple of years we have added the concept of flow risk to nDPI, which allows issues with flows to be detected (for instance expired TLS certificates). Sometimes we need to silence some of these risks, as certain hosts/domain names produce risks that need to be ignored (for instance an outdated device that cannot be replaced and that has been properly protected by security policies). In ntopng you can disable them by clicking on the flow alert, which will open a window like the one below,

 

and review exclusions from the settings menu:

This feature is useful when a few exceptions need to be silenced, but it is limited when you want to set more exceptions based on domain names or networks (CIDR). Not to mention that other nDPI-based tools such as nProbe don’t feature this mechanism.

In order to address this need, you can specify a nDPI protocol file that contains exceptions as follows:

#
# Risk Exceptions
#
# ip_risk_mask:   used to mask flow risks for IP addresses
# host_risk_mask: used to mask exceptions for domain names and hosts
#
# Syntax: <name>=<64 bit mask to be put in AND with the risk>
#
# For IPs, the flow risk is put in AND (source IP mask OR destination IP mask)
# For Flows with a hostname (e.g. TLS) the risk is also put in AND with the host_risk_mask
ip_risk_mask:192.168.1.0/24=0
ip_risk_mask:10.196.157.228=0
host_risk_mask:".local"=0
host_risk_mask:".msftconnecttest.com"=0

where you can specify a list of networks and domain names for which risks can be masked. A 0 mask means all risks are masked, whereas you can specify a bitmap for masking specific risks. Here you can find the list of known risks: if, for CIDR 192.168.1.0/24, you want to mask the risk TLS Certificate Mismatch (id 10), you need to create a 64 bit mask with bit 10 set to zero and all other bits set to 1 (i.e. 0xFFFFFFFFFFFFFDFF in hex).
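For instance, combining the syntax above with that mask value, such a rule could look like the lines below. This is a minimal sketch: whether the file parser accepts hexadecimal values is an assumption, so the decimal form of the same mask is shown as an alternative.

# Mask only TLS Certificate Mismatch (id 10) for 192.168.1.0/24,
# leaving all other risks enabled (same mask value as discussed above)
ip_risk_mask:192.168.1.0/24=0xFFFFFFFFFFFFFDFF
# equivalent decimal value, in case hexadecimal is not accepted
# ip_risk_mask:192.168.1.0/24=18446744073709551103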

You can pass this protocol file to ntopng as

[--ndpi-protocols|-p] <file>.protos | Specify a nDPI protocol file
                                    | (eg. protos.txt)

and as

--ndpi-custom-protos <path>         | Custom nDPI protocols path.

in nProbe.
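For instance, a minimal sketch of how both tools could load the exceptions file shown above (the file path and interface name are hypothetical):

# ntopng: load the custom protocol file containing the risk exceptions
ntopng -i eth0 -p /etc/ntopng/protos.txt

# nProbe: the same file via --ndpi-custom-protos
nprobe -i eth0 --ndpi-custom-protos /etc/ntopng/protos.txt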

In addition to this, it is also possible to specify trusted certification authorities (CAs), often used to issue TLS certificates on company networks. This can be defined in the protocols file as follows:

# Custom certification authorities we trust
trusted_issuer_dn:"CN=813845657003339838, O=Code42, OU=TEST, ST=MN, C=US"

so that nDPI will not generate the TLS self-signed certificate risk for certificates issued by the above CA.

Enjoy !

Historical Traffic Analysis at Scale: Using ClickHouse with ntopng


Last year we announced the integration of ClickHouse, an open source high-speed database, with nProbe for high-speed flow collection and storage. Years before, we created nIndex, a columnar data indexing system that we integrated into ntopng, but that was just an index and not a “real” database. We have selected ClickHouse for a few reasons:

  • It is open source and developed by a vibrant community.
  • It is very efficient in both speed and size, which were the main features for which we created nIndex. This is very important as it allows several billion records to be stored on a single host with a single SSD, with sub-second queries, contrary to similar solutions that instead require a multi-node cluster, increasing costs and complexity.
  • It is MySQL compatible, meaning that those who have used ntopng with MySQL can become familiar with this new solution in no time, without having to learn something new (just remember that ClickHouse listens on port 9004 instead of MySQL’s 3306).

Today we announce the integration of ClickHouse in ntopng, which replaces the previous MySQL and nIndex integrations (you can migrate existing data using a utility we have created). Currently it is available in ntopng dev (with Enterprise M license or better, same as nIndex) and it will also be available in about two weeks in the upcoming 5.2 ntopng stable release.

Installing and Enabling ClickHouse in ntopng

In order to enable ClickHouse you need to

  • Update to the latest ntopng dev version.
  • Install ClickHouse using a binary package as described here. Please note that the clickhouse-client tool must be installed on the host where ntopng is running, whereas you can store data either on the same box where ntopng runs or on a remote host. Most people can put everything on the same host, but for large installations a separate ClickHouse host (or cluster) can be a better option.
  • Modify your ntopng.conf file and add “-F=clickhouse” in order to enable ntopng to send data to the ClickHouse instance installed on the same box using the default user (named default) and an empty password. For a more comprehensive setup, the -F option supports several additional parameters whose format is “-F clickhouse;<host[@port]|socket>;<dbname>;<user>;<pw>” (see the example below).
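As a minimal sketch, the relevant ntopng.conf lines could look as follows, assuming a local ClickHouse instance; host, database name and credentials are placeholders to be adapted to your setup:

# interface to monitor
-i=eth0
# export flows and alerts to ClickHouse on localhost,
# database "ntopng", user "default", empty password
-F=clickhouse;127.0.0.1;ntopng;default;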

Note that when you use ClickHouse, the database is used for both flow and alert storage, meaning that alert storage will be much more efficient than with the built-in SQLite backend, which is still used in non-ClickHouse setups.

Using ntopng with ClickHouse

As previously said, ClickHouse is in essence an nIndex replacement that brings many new features that only a database can offer. We have enhanced the existing historical flow explorer, which can now benefit from complex query operators that are finally supported. When you enable ClickHouse, alerts and flows are stored permanently in the database with a retention time configurable from the preferences page.

Data is written to ClickHouse periodically because, due to its nature, we cannot push data continuously as you would do with a standard database. This means that data is inserted in ~1 minute batches, so historical data is slightly delayed with respect to realtime. In the alert view you will not see any difference with respect to SQLite besides the cardinality and the overall speed.

The main difference is instead in the Historical Flow Explorer that can be found under the dashboard menu.

In the top left menu you can see some queries you can perform on the data. They are extensible via simple JSON scripts that you can find here. As you can see, it is pretty straightforward to add a new report: just drop your script in the historical folder and it will appear in your ntopng instance with no programming at all. At the bottom of the page you can find statistics about the query just executed. In the above example, 1.3 million records are returned in 0.15 sec on a single host with rotating drives, so you can imagine what you can do with flash storage. In the top right corner you can find a button that, preserving the filters that have been set, allows you to see the same results graphically by jumping into the analytics view, which is also extensible via JSON scripts as explained above.

You can set filters for all the available fields including comparison operators or contains for strings.

 

Example of filtering.

 

Dimensioning Storage

Under the system interface, on the Health -> ClickHouse page, you can find the storage being used. In our tests (flow and alert records are structured similarly) the average record size is about 30 bytes for flows and 88 bytes for alerts.

This means that with a 512 GB SSD that costs about 100 Euro/$ you can store 15 billion flows and 15 million alerts and still have free disk space available. Very good, isn’t it? Note that we partition records by day, so even with a long retention (e.g. 6 months or 1 year) search and insert performance is not affected. In essence you will notice a performance degradation only if you run queries that span several days or weeks; otherwise storing 3 days of data or 3 months of data does not change performance much.
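As a rough check of those numbers: 15 billion flows × 30 bytes ≈ 450 GB, plus 15 million alerts × 88 bytes ≈ 1.3 GB, which indeed fits on a 512 GB drive with some headroom left.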

Final Remarks

If you want to read more about ClickHouse support in ntopng please refer to the user’s guide. Other than this, we invite you to play with this new feature and report errors or improvements you would like us to accommodate before the next stable release.

Enjoy !

 

ntopng and ClickHouse: Lessons Learnt at California Institute of Technology


Caltech has been experimenting with ntopng on our network for slightly over a year now. We send a decent amount of traffic to ntopng, bursting up to 20 Gbps, utilising Cento to read the wire and forward the data to ntopng via PF_RING ZC. This configuration has been working pretty well, though we were encountering issues once we reached about 16 – 20 days of data retention, where ntopng would begin to drop data points from that point forward. I noticed InfluxDB would utilize 60% or more of available memory, even when using TSI indexes, as well as a considerable amount of CPU, and ntopng would also use quite a bit of available memory. We really wanted to eliminate data point drops and reduce the amount of memory being used, and went as far as looking into different solutions to increase performance: from isolating InfluxDB CPU and memory usage to separate cores and memory banks from ntopng, to using tuned to tweak various server performance settings, to potentially licensing InfluxDB Enterprise.

A few weeks ago I read about the work with ClickHouse and decided to implement the dev version of ntopng so we could give ClickHouse support a try.  This along with the implementation of InfluxDB and ClickHouse data stores on SSD has solved our dropped data points problem.

Implementing ClickHouse and InfluxDB with their data stores located on a mere 6 Gbps Micron SSD has proven more than sufficient so far. InfluxDB RAM usage now rarely exceeds 7% and InfluxDB CPU bursts rarely rise above 120% for very short periods of time, with no effect on 30 days of ntopng data point retention. ntopng also has reduced RAM usage, and I plan to continue to experiment by slowly and incrementally increasing the amount of time that data is retained; 30 days is nice, more is better. :)

Congratulations to the ntop team for all of their hard work continually increasing the performance and usefulness of ntopng for Information Security purposes on high speed networks.

 

Greg Grasmehr
Lead Information Security Analyst
California Institute of Technology (Caltech)


This is a report from one of our educational users who runs ntop tools on a large educational network. We encourage other users willing to share their experiences to contact us about publishing them on this blog.

 

Introducing nDPI 4.2: More Protocols and Robustness with -80% Memory


This is to announce the availability of nDPI 4.2 stable, which brings several improvements and a reduced per-flow memory footprint (about -80% with respect to 4.0). We have continued to improve the DPI engine, adding richer protocol metadata as well as support for many platforms. The continuous integration toolchain, along with fuzz testing, allowed us to improve the overall library robustness and reliability, which is a key feature when analyzing traffic, in particular for cybersecurity. In our vision, nDPI should be a traffic analysis layer sitting on top of packet capture toolkits such as PF_RING or DPDK that simplifies the design of applications, which can finally focus on what rather than on how. This said, nDPI is not just a toolkit for deep packet inspection, but also a comprehensive toolkit for data analysis. You can hear more about this during this presentation at FOSDEM 2022.

Below you can find the complete changelog.

Enjoy !

 

nDPI 4.2 (Feb 2022)

New Features

  • Add a “confidence” field indicating the reliability of the classification
  • Add risk exceptions for services and domain names via ndpi_add_domain_risk_exceptions()
  • Add ability to report whether a protocol is encrypted

New Supported Protocols and Services

  • Add protocol detection for:
    • Badoo
    • Cassandra
    • EthernetIP

Improvements

  • Significantly reduced memory footprint from 2.94 KB to 688 B per flow
  • Improve protocol detection for:
    • BitTorrent
    • ICloud Private Relay
    • IMAP, POP3, SMTP
    • Log4J/Log4Shell
    • Microsoft Azure
    • Pandora TV
    • RTP
    • RTSP
    • Salesforce
    • STUN
    • Whatsapp
    • QUICv2
    • Zoom
  • Add flow risk:
    • NDPI_CLEAR_TEXT_CREDENTIALS
    • NDPI_POSSIBLE_EXPLOIT (Log4J)
    • NDPI_TLS_FATAL_ALERT
    • NDPI_TLS_CERTIFICATE_ABOUT_TO_EXPIRE
  • Update WhatsAPP and Instagram addresses
  • Update the list of default ports for QUIC
  • Update WindowsUpdate URLs
  • Add support for the .goog Google TLD
  • Add googletagmanager.com
  • Add bitmaps and API for handling compressed bitmaps
  • Add JA3 in risk exceptions
  • Add entropy calculation to check for suspicious (encrypted) payload
  • Add extraction of hostname in SMTP
  • Add RDP over UDP dissection
  • Add support for TLS over IPV6 in Subject Alt Names field
  • Improve JSON and CSV serialization
  • Improve IPv6 support for almost all dissectors
  • Improve CI and unit tests, add arm64, armhf and s390x as part of CI
  • Improve WHOIS detection, reduce false positives
  • Improve DGA detection for skipping potential DGAs of known/popular domain names
  • Improve user agent analysis
  • Reworked HTTP protocol dissection including HTTP proxy and HTTP connect

Changes

  • TLS obsolete protocol is set when TLS < 1.2 (used to be 1.1)
  • Numeric IPs are not considered for DGA checks
  • Differentiate between standard Amazon stuff (i.e market) and AWS
  • Remove Playstation VUE protocol
  • Remove pandora.tv from Pandora protocol
  • Remove outdated SoulSeek dissector

Fixes

  • Fix race conditions
  • Fix dissectors to be big-endian friendly
  • Fix heap overflow in realloc wrapper
  • Fix errors in Kerberos, TLS, H323, Netbios, CSGO, Bittorrent
  • Fix wrong tuple comparison
  • Fix ndpi_serialize_string_int64
  • Fix Grease values parsing
  • Fix certificate mismatch check
  • Fix null-dereference read for Zattoo with IPv6
  • Fix dissectors initialization for XBox, Diameter
  • Fix confidence for STUN classifications
  • Fix FreeBSD support
  • Fix old GQUIC versions on big-endian machines
  • Fix aho-corasick on big-endian machines
  • Fix DGA false positive
  • Fix integer overflow for QUIC
  • Fix HTTP false positives
  • Fix SonarCloud-CI support
  • Fix clashes setting the hostname on similar protocols (FTP, SMTP)
  • Fix some invalid TLS guesses
  • Fix crash on ARM (Raspberry)
  • Fix DNS (including fragmented DNS) dissection
  • Fix parsing of IPv6 packets with extension headers
  • Fix extraction of Realm attribute in STUN
  • Fix support for START-TLS sessions in FTP
  • Fix TCP retransmissions for multiple dissectors
  • Fix DES initialisation
  • Fix Git protocol dissection
  • Fix certificate mismatch for TLS flows with no client hello observed
  • Fix old versions of GQUIC on big-endian machines

Misc

  • Add tool for generating automatically the Azure IP list

Welcome to ntopng 5.2: Historical Data Analysis, Better Performance and Alerting


Initially designed as a maintenance release, 5.2 brings many improvements to its processing engine, with over 3’000 code commits. The main goal is to enhance application scalability by optimising memory and CPU usage, while introducing a new persistency layer based on ClickHouse that replaces nIndex, a home-grown high-performance indexing system we introduced years ago. This layer enables ntopng 5.2 to store billions of flow records and alerts with limited disk space and sub-second response time, providing full visibility in terms of packets, flows and alerts.

In essence ntopng features nDPI-based cybersecurity traffic analysis, meaning that network activities are not just reported but also interpreted: in case of an incident you can start your analysis from alerts, then drill down to flows and eventually packets, all from within the ntopng user interface. In addition, we have integrated the traffic visibility provided by packets/sFlow/NetFlow with SNMP-based infrastructure visibility, so that you can leverage protocols such as LLDP and CDP (just introduced in 5.2) to see where your traffic flows in the company infrastructure.

The list of features is very long so you can read about them in the changelog below. If you have time, you can stop by the ntop stand at FOSDEM 2022 this Saturday, where we can show you this new release in detail and meet the ntop team.

In the coming weeks we will make plans for the next release, which we will discuss together.

Enjoy !

 

ntopng 5.2 (February 2022)

 

Breakthroughs

  • New ClickHouse support for storing historical data, replacing nIndex support (data migration available)
  • Advanced Historical Flow Explorer, with the ability to define custom queries using JSON-based configurations
  • New Historical Data Analysis page (including Score, Applications, Alerts, AS analysis), with the ability to define custom reports with charts
  • Enhanced drill down from charts and historical flow data and alerts to PCAP data
  • nEdge support for Ubuntu 20
  • Enhanced support for Observation Points

Improvements

  • Improve CPU utilization and memory footprint
  • Improve historical data retention management for flows and timeseries
  • Improve periodic activities handling, with support for strict and relaxed (delayed) tasks
  • Improve filtering and analysis of the historical flows
  • Improve alert explorer and filtering
  • Improve Enterprise dashboard look and feel
  • Improve the speedtest support and servers selection
  • Improve support for ping and continuous ping (ICMP) for active monitoring
  • Improve flow-direction handling
  • Improve localization (including DE and IT translations)
  • Improve IPS policies management
    • Add IPS activities logging (e.g. block, unblock)
  • Improve SNMP support
    • Optimize polling of SNMP devices
    • Improve SNMP v3 support
    • Add more information including version
    • Stateful SNMP alert to detect too many MACs on non-trunk
    • Perform fat MIBs poll on average every 15 minutes
    • Add preference to disable polling of SNMP fat MIBs
  • Add more information to the historical flow data, including Latency, AS, Observation Points, SNMP interface, Host Pools
  • Add detailed view of historical flows and alerts
  • Add support for nProbe field L7_INFO
  • Add ICMP flood alert
  • Add Checks exclusion settings for subnets and for hosts and domains globally
  • Add CDP support
  • Add more regression tests
  • Add support for obsolete client SSH version
  • Add support for ERSPAN version 2 (type III)
  • Add support for all the new nDPI Flow Risks added in nDPI 4.2
  • Add extra info to service and periodicity map hosts
  • Add Top Sites check
  • REST API
    • Getter for the bridge MIB
    • Getter for LLDP adjacencies
    • Check for BPF filters
    • Score charts timeseries and analysis

Changes

  • Encapsulated traffic is accounted for the length of the encapsulated packet and not of the original packet
  • Remove nIndex support, including the flow explorer
  • Remove MySQL historical flow explorer (export only)
  • Hide LDAP password from logs

Fixes

  • Fix a few memory leaks, double free, buffer overflow and invalid memory access
  • Fix SQLite initialization
  • Fix support for fragmented packets
  • Fix IP validation in modals
  • Fix netplan configuration manager
  • Fix blog notifications
  • Fix time range picker to support all browsers
  • Fix binary application transfer name in alerts
  • Fix glitches in chart drag operations
  • Fix pools edit/remove
  • Fix InfluxDB timeseries export
  • Fix ELK memory leak
  • Fix TLS version for obsolete TLS alerts when collecting flows
  • Fix fields conversion in timeseries charts filters
  • Fix some invalid nProbe field mapping
  • Fix hosts Geomap
  • Fix slow shutdown termination
  • Fix wrong Call-ID 0 with RTP streams with no SIP stream associated
  • Fix ping support for FreeBSD
  • Fix active monitoring interface list
  • Fix host names not always shown
  • Fix host pools stats
  • Fix UTF8 encoding issues in localization tools
  • Fix time/timezone in forwarded syslog messages
  • Fix unknown process alert
  • Fix nil DOM javascript error
  • Fix country not always shown in flow alerts
  • Fix non-initialized traffic profiles
  • Fix traffic profiles not working over ZMQ
  • Fix syslog collection
  • Fix async SNMP calls blocking the execution
  • Fix CPU stats timeseries
  • Fix InfluxDB attempts to always re-create retention policies
  • Fix REST API ts.lua returning 24h data
  • Fix processing of DNS packets under certain conditions
  • Fix invalid space in SNMP Hostnames
  • Fix REST API incompat. (/get/alert/severity/counters.lua, /get/alert/type/counters.lua)
  • Fix map layout not saved correctly
  • Fix LLDP topology for Juniper routers
  • Fix not authorized error when editing SNMP devices
  • Fix double 95perc, splitted avg and 95perc in sent/rcvd in charts
  • Fix inconsistent local/remote timeseries
  • Fix Risks generation in IPS policy configuration
  • Fix deletion of sub-interface
  • Fix deadline not honored when monitoring SNMP devices
  • Fix traffic profiles on L7 protocols
  • Fix TCP connection refused check
  • Fix failures when the DB is not reachable
  • Fix segfault with View interfaces
  • Fix hosts wrongly detected as Local
  • Fix missing throughputs in countries

Misc

  • Enforces proxy exclusions with env var no_proxy
  • Move Lua engine to 5.4
  • Major code review and cleanup

nEdge

  • Add support for Ubuntu 20
  • Add ability to logout when using the Captive Portal
  • Add per egress interface stats and timeseries
  • Add active DHCP leases in UI and REST API
  • Add daily/weekly/monthly quotas
  • Add service and periodicity maps and alerts
  • Fix Captive Portal not working due to invalid allowed interface
  • Fix addition of static DHCP leases
  • Fix factory reset
  • Fix reboot button

You’re invited at FOSDEM 2022 (5 and 6 February) in the ntop stand


As most of our users know, every year we used to meet the world of open source at FOSDEM in Brussels. Due to the pandemic, this yearly event has been moved online, so we invite you to attend it wherever you are. You can find more info at this page, but in summary we have two main events:

On Saturday we plan to show the latest tools we have developed, including ntopng 5.2 that we have just released. The idea is to highlight the main tool features, and discuss future plans and the roadmap.

On Sunday we will focus on nDPI and show how it can be used for traffic classification and cybersecurity.

We hope you can join and discuss with us. Please note that you can talk/chat using the online Matrix client. Please see this link for more information.

Hope to see you this weekend !

Using ntopng with Checkmk: A Tutorial


Today we’ll discuss the ntopng integration with Checkmk, a popular open source infrastructure monitoring tool to which ntopng adds traffic visibility.

If IT infrastructure monitoring and network usage monitoring were to see each other on Tinder, they would both for sure swipe right and match. Bringing the big-picture perspective of IT infrastructure monitoring together with the in-depth information from network usage monitoring is thus a logical step. That’s why ntop and tribe29, the developers of the IT monitoring solution Checkmk, partnered and jointly built a seamless integration of both tools.

The integration makes the data of talkers and listeners detected by ntopng directly available in  Checkmk. It adds the network flow information from ntopng to the respective hosts in Checkmk, so all data is available in one solution with several dashboards and graphing options. You will be able to find the root cause of problems faster and with less effort. The step-by-step guide below will lead you through the installation.

IT monitoring tools provide insights into servers, network devices, applications, containers and many other systems, and alert you when systems are not working as expected. They analyze metrics of hardware components and sensors, such as CPU, RAM and disk usage, as well as the operating system and applications. In case you want to get a better picture of the health and performance of a system after seeing it in ntopng, you can look for it in your IT infrastructure monitoring to gain more holistic insights.

But, you still would have to jump between ntopng and your IT infrastructure monitoring tool. By leveraging the REST API of ntopng, the integration into Checkmk puts an end to that. It takes the network flow information from ntopng and allocates it to hosts in the Checkmk monitoring. You have the information gathered by Checkmk combined with the most important traffic information from ntopng – all in one solution.

There are several use cases and, thus, the integration offers several views and dashboards in Checkmk. You can analyze hosts, applications or protocols that are communicating with each other and identify possible bottlenecks or anomalies. You can identify ‘top talkers’ and ‘top listeners’ in your network, for example, or see the network usage per host and other details. Also, Checkmk can import notifications from ntopng, so you can combine them with your infrastructure alerting.

Setting up the ntop Checkmk integration

Using the integration is fairly simple, but make sure you have the right versions of Checkmk and ntopng up and running. The integration only works with ntopng Professional or Enterprise version 4.2 or higher, because the REST API v1 that Checkmk and ntopng use to communicate is only supported from ntopng version 4.2 onwards. The ntopng integration is a paid add-on for the Checkmk Enterprise Edition and you need to use Checkmk version 2.0 or higher. If you just want to try Checkmk, there is a free trial of the Checkmk Enterprise Edition, which includes all features, but will be limited to 25 hosts after 30 days.

Preparation: Check and prepare ntopng parameters for Checkmk

Checkmk needs a user account in ntopng to access the data. You can limit the access given to Checkmk by using a ntopng user with limited access privileges. Depending on your ntopng environment, you might have some network interfaces that you do not want to share with Checkmk.

This tutorial uses the simplest option, which is using a ntopng user with admin access that gives Checkmk full access to all interfaces. In this example, the user is called ‘mhirschvogel’. You can still limit the access for different Checkmk users later in Checkmk. Besides a ntopng user, you need to know the host name and the TCP port of your ntopng server. The server hosting Checkmk must be able to reach your ntopng server, as well. If all that is given, switch to your Checkmk site and log into the user interface.

Step 1: Set up your ntopng user in Checkmk

  • Open your Checkmk site and click on Setup -> General -> Global settings.
  • Click on the ‘Ntopng (chargeable add-on)’ and then click on ‘Ntopng Connection Parameters (chargeable add-on)’.
  • Add the necessary parameters:
    • ‘Host address’ is the host name of your ntop server. The name must be DNS resolvable. If you just add the IP address of your ntop server you cannot use TLS, because your certificate will be invalid.
    • ‘Port number’ is the TCP port over which ntopng can be reached. The port is specified when ntopng is started. The default is 3000 without TLS, change it to 3001 if you use TLS.
    • ‘Protocol’: For security reasons, HTTPS is of course preferable over HTTP. If you use a self-signed certificate, you need to check the box to disable the SSL validation.
    • Under ‘User account for authentication’ add the user account of the ntopng user you would like to use to get the data from ntopng. As mentioned, this account is called ‘mhirschvogel’ in this case.
    • Under ‘ntopng username acquire data for’ I have to use the option ‘use the ntopng username as configured in the user settings’ and adjust the Checkmk user settings in the next step. Because my Checkmk user account is just my initials ‘mh’, I cannot use the same usernames for Checkmk and ntopng: ntopng has stricter naming conventions, and ‘mh’ would be too short. If you actually are using identical usernames in Checkmk and ntopng, you can use the option ‘Use the Checkmk username as ntopng username’.
    • The settings for me look like this:

Step 2: Add ntopng username to your Checkmk user

If you decide to use Checkmk and ntopng accounts with different names, you need to add the ntopng username for the Checkmk user you are using. If you went for identical names in the step before, you can skip this step.

  • To edit the user settings, go to Setup -> Users and select the properties of the Checkmk user that you are using for the integration (in my case user ‘mh’) by clicking on the pencil icon.
  • Add the ntopng name in the last line under account ‘identity’. If you cannot see the field ‘ntopng Username’ to do so, you probably did not select the option ‘use the ntopng username as configured in the user settings’ under ‘ntopng username acquire data for’ before. You need to go one step back and change that.
  • Add the name of your ntopng user, in my case ‘mhirschvogel’.
  • Click on ‘Save’. You will return to the user overview.
  • Accept the changes in Checkmk, so all these actions go into operation. Click on the highlighted field with the yellow exclamation point (!) in the top right corner. Click on ‘Activate on selected sites’.

This explicit activation for changes is a safety mechanism in Checkmk. All changes you are making in Checkmk need to be reviewed before they affect your monitoring. You must activate pending changes before they go into production.

Step 3: Check out the ntopng integration in Checkmk

  • Click on Monitor in the sidebar. If all has worked out, you should see a new topic named ‘Network statistics’ in Checkmk. This confirms that the integration is working. 

Step 4: Add hosts to Checkmk

A major difference compared to network flow monitoring is the fact that you have to proactively add hosts to your infrastructure monitoring. In case your Checkmk monitoring environment already contains hosts communicating in your network, you can skip this step.

If you just installed Checkmk, you do not have any hosts in your Checkmk environment. Add hosts either by monitoring them via built-in interfaces such as SNMP or installing the Checkmk agent on the host. Checkmk has several features to add and manage a large number of hosts.

In this tutorial, I will show how to add a host to Checkmk through the user interface and use a device providing data via SNMP as an example.

  • Go to  Setup -> Hosts, and click on ‘Add host’.
  • Add the name of your host under ‘Hostname’. If the name of your host is not DNS resolvable, you need to add the IP address, as well.
  • Because I want to use SNMP, I need to edit that under ‘Monitoring Agents’ in Checkmk. Activate the check box next to ‘SNMP’ and pick your SNMP version.
  • Checkmk assumes by default that your SNMP community is ‘public’ because it is also the default on most SNMP devices. If that is the case, you can leave the box ‘SNMP credentials’ unchecked. Otherwise, you have to check this box and add your SNMP credentials here.
  • After adding all the information, click on ‘Save & go to service configuration’.

  • Checkmk now automatically discovers any relevant monitoring services on that host. When you are monitoring with SNMP, Checkmk by default discovers all of the interfaces that are currently online, the uptime, and the SNMP Info check. Typically Checkmk will detect even more monitoring services automatically like CPU and memory utilization.
  • Click on ‘Fix all’. This adds all detected services and host labels to your monitoring dashboard and removes services that have vanished.
  • Again, accept the changes by clicking on the yellow exclamation point and ‘Activate on selected sites’. You added a host to Checkmk.

Step 5: Check your ntopng hosts in Checkmk

When you are done with adding hosts, you can check which hosts are set up in Checkmk and ntopng.

  • Go to Monitor > Network statistics > Ntop Hosts.
  • You should see an overview of all hosts that are monitored in Checkmk and also are visible in ntopng.

Besides my switch, I added two more hosts for which Checkmk received data from ntopng. You can inspect more details about a host by clicking on the entry ‘Ntopng integration of this host’ in the action menu. This menu is now also available on all other host views in Checkmk.

Opening the action menu item ‘Ntopng integration of this host’ will show the host-specific page ‘Network statistics and flows’ with several tabs for different perspectives. By default, Checkmk opens the ‘Host’ tab with basic information for the host and a summary of the most important details from the other tabs. You can now use the tabs to gain insights into your hosts and the way they communicate or you can click on ‘View data in ntopng’ to jump to this host in ntopng.

This is the end of this tutorial. You should now be able to use the ntop integration in Checkmk, and you should also know how to add hosts into Checkmk. That is just a start, of course. You find more details about the ntop integration in the chapter on the ntop integration in the Checkmk user guide. Checkmk supports bulk imports and has many more features. If you want to read more about the way Checkmk works in general, you can use the Checkmk beginner’s guide. You can also read more information about network monitoring with Checkmk.

Martin Hirschvogel, Director of Product

tribe29, the Checkmk company


Incident Analysis: How to Correlate Alerts with Flows and Packets


In incident analysis it is important to provide evidence of the problem at various levels of detail:

  • Alerts
    Alerts are the result of traffic analysis (in ntopng based on checks) that has detected specific indicators in traffic that triggered the alert. For instance, a host whose behavioural score has exceeded a given threshold, or a flow that is exfiltrating data.
  • Flows
    Flows are the result of aggregating packets belonging to the same connection and are used to compute alerts.
  • Packets
    This is the most granular data that contains evidence of the traffic that is aggregated in flows.

In terms of cardinality, a flow contains several packets, and an alert is the result of the computation of several flows. Alerts are important as they allow network operators to focus on a reduced set of information pre-computed by ntopng: this is nice both because the cardinality of the information is reduced and because ntopng has already discarded non-relevant information, focusing instead on more relevant data.

As described here, in ntopng it is possible to enable packet dump and extract pcaps containing packets matching several criteria such as all the packets of a flow, or a given host. In this post we’ll now show how you can move one step further and correlate alerts with flows so that you have a complete pipeline: Alerts -> Flows -> Packets.

In order to enable correlation, you need to enable flow and alert persistency in ClickHouse, which is done by starting ntopng as follows (you can read more information in the user’s guide):

  • ntopng -i ethX -F clickhouse

Once done, ntopng will persistently write flows and alerts into ClickHouse. You can decide to start your incident analysis from flows (flows -> alerts) or the other way round (alerts -> flows). In both cases the ntopng user interface allows you to do that without using complex filtering rules. In fact, for every flow that carries a potential alert (i.e. a non-zero flow score), inside the flow explorer you can click on the dropdown menu under the Actions column and select the item “Flow Alerts” to jump to the alert that has been produced by such a flow, as depicted below.

This will take you to the flow alerts page where the relevant alerts are displayed. In a similar way, for every alert, inside the alert explorer you can select under the Actions menu the item “Historical Flow Explorer” to display the flows that ntopng has used to report the alert.

In conclusion, you have a comprehensive solution for analyzing your traffic in a matter of mouse clicks. In case of an incident you can drill down from alerts to flows to packets. You can also download alerts, flows and packets by clicking on the relevant buttons, so that you can extract this data and import it into your favourite data analytics platform or analyse packets with Wireshark. All this from the ntopng GUI, with no shell expertise required.

Enjoy !

Dispatching Alerts: How to Master Notifications in ntopng


Alerts in ntopng are the result of traffic analysis based on checks. Checks detect that specific traffic indicators require attention: for instance a host whose behavioural score has exceeded a given threshold, or a flow that is exfiltrating data. Checks process traffic information with respect to a specific network element, and for this reason they are divided into families (e.g. host, interface, flow, …). Regardless of the family, a check can cover a security aspect or monitor network performance, and for this reason checks also belong to different categories (e.g. Network, Cybersecurity, Active Monitoring, …).

Checks can be enabled, disabled and configured (e.g. to set thresholds) through the Settings -> Checks page in ntopng (left menubar), where they are divided into multiple pages based on their family and marked based on their category.

 

Settings Checks

As a consequence of traffic processing, when something requires attention, checks trigger alerts. Alerts are stored by ntopng into an internal database (SQLite by default, or ClickHouse when enabled) and can be notified in multiple ways, including messaging systems (e.g. Telegram, Slack, Discord, Teams), logging systems (syslog, Elastic), callbacks (web hooks, shell scripts), email.

Alerts from the internal database can be inspected using the Alerts explorer (left sidebar, Alerts icon), which allows you to go back in time, and analyse them by means of a set of features including filtering on all alert fields. By default, alerts are stored in SQLite that is suitable for small/mid sites, whereas for large sites or for keeping alerts with long retention we advise to enable ClickHouse.

Triggered alerts are always stored in the alert database. In order to deliver alert notifications to a remote recipient, instead, a specific configuration is required. This process has been dramatically simplified in the latest ntopng (dev) version, to avoid the headaches related to the Pools and Recipients configuration as it used to be.

As a first step, an endpoint should be configured. This identifies a target able to receive notifications. For instance, in the case of Discord, this is represented by the WebHook URL, which is an identifier used to deliver messages to a Discord channel.

As a second step, for each endpoint at least one recipient should be configured (one or more recipients can be configured for the same endpoint). A recipient represents a destination for the notification.

The recipient configuration includes a filter to decide what kind of alerts should be delivered to that specific destination. For instance, it is possible to specify the Minimum Severity for alerts to be delivered to the recipient and one or more Check Categories.

In addition to this, more control over alert filtering is often required, especially when it comes to delivering alerts for Flows and Hosts, where the cardinality can be high and we are interested in receiving alert notifications for a subset of (interesting) hosts out of many alerts related to many hosts. To allow this, a set (one or more) of Host Pools can also be configured.

The Host Pools to be selected in the recipient configuration can be defined as usual in ntopng through the Pools menu, by specifying IPs, subnets, MAC addresses.

This is the current state of the art. In the coming weeks we will further improve alerting by adding new features such as:

  • ability to silence noisy alerts (e.g. do not send me a new alert if a similar one has been delivered in the recent past).
  • further alert filters (e.g. send to recipient X alerts that contain a specific value).

In essence we want to reduce alert noise, by delivering to specific recipients alerts they care about, and avoid sending specific alerts too often in case a problem is periodically repeating.

Enjoy !

How We Simplified Data Search in ntopng


ntopng users are familiar with the search box present at the top of each page. It was originally designed to find hosts and jump to their details page. Over the years we have added a lot of new information to ntopng, and limiting the search scope to hosts only was no longer a good idea. The image below shows how we have improved it. The new search is not limited to hosts but covers everything inside ntopng, like a mini embedded search engine. The first column shows the family of the returned result, such as an AS, a host or a network. Then we report the label of the result, highlighting the match in bold. The last column returns shortcuts to alerts and flows for the result (this requires ClickHouse to be enabled), which greatly reduces the number of clicks necessary to jump to the result.

For hosts we have also added the ability to search not just those that are currently active, but also to restore and search those that have been active in the past. You can identify them by the moon icon. This is a great improvement, as the search was previously limited to hosts in memory.

We believe that this new search facility will ease information retrieval and move ntopng towards a more user friendly tool.

Enjoy !

ntop Professional Training: May 2022


This is to announce that the next ntop professional training will take place in May 2022.

All those who are using ntop tools for business are invited to attend this session. The idea is to divide the training into 5 sessions of 90 minutes each, so that you can attend without having to leave your daily activities.

At this page you can read more about training content, costs, and registration information.

Make sure to join it !

ntop Conference 2022: Call for Speakers


This is to announce the dates of the ntop conference 2022 that will take place in Milan at UniBocconi: June 23rd conference, 24th training.

We are currently looking for speakers as we want to hear your voice. Topics include (but are not limited to):

  • Cybersecurity
  • IoT monitoring
  • Integration of Kibana/Grafana/CheckMK/Nagios with ntop tools
  • Attacks and DDoS
  • Sharing of experience monitoring networks using ntop tools
  • Encrypted traffic analysis
  • Deep Packet Inspection

All details are available at this page.

How PF_RING is Used to Fight Internet Censorship: Refraction Networking


Internet censorship is a global phenomenon (see Figure 1) that aims to throttle or entirely block access to certain Internet resources. National or regional governments impose Internet censorship by using sophisticated networking appliances—strategically placed at the edge of their networks at various Internet inter-connection points—capable of inspecting and discarding network packets destined to sites with restricted content. Users that try to evade censorship have traditionally relied on techniques based on “domain fronting” and VPNs. However, these censorship circumvention tools are increasingly becoming harder to deploy and do not offer strong censorship resistance guarantees since regimes that impose censorship can also block access to such technologies.

Figure 1. Internet Censorship has become a global phenomenon. Figure shows prevalence of censorship via DNS injection broken down by country as measured by the Satellite project. Source: censoredplanet.org

Refraction Networking is a new censorship circumvention technique that addresses some of the shortcomings of existing circumvention tools. Refraction Networking has evolved over the past 10 years from a series of lab prototypes (i.e., Decoy Routing, Telex, Cirripede, TapDance) to a system that concurrently serves millions of users worldwide. Rather than embracing the “cat and mouse game” that, say, VPN providers and repressive regimes are engaged in (e.g., by constantly recycling VPN endpoints to avoid blockage by censors), Refraction Networking places stations at the network’s core (e.g., within friendly ISPs, see Figure 2). Placing these stations in the “middle of the network” deters censoring entities from imposing any kind of practical censorship since their only option is to aggressively block access to the ISPs hosting the Refraction stations. This aggressive blockage would inflict collateral damage on censors themselves since they would be essentially cutting themselves off from large portions of the Internet.

Figure 2. Refraction networking. Placing “Refraction stations” in partnering ISPs allows users to access blocked sites in a covert manner, while elevating the difficulty for censors to block Internet content.

A key component for the reliable and efficient operation of the Refraction network is the ability to monitor packets at multi-10Gbps speeds and inspect them for the steganographically encoded signal that Refraction clients use to convey their intention to connect with a “blocked” site. The Refraction network currently (passively) monitors an aggregate traffic stream that exceeds 150Gbps, and the team has relied on PF_RING ZC’s unique capabilities to deliver packets to Refraction’s monitoring engines with zero loss, even when using commodity, off-the-shelf hardware and network NICs (such as Intel Ethernet network adapters). Further, the Refraction operators utilize ntop’s zbalance_ipc to efficiently distribute the traffic load into multiple PF_RING-based worker applications that serve Refraction’s global users. The Refraction community is grateful for the support that the PF_RING team has been providing since the onset of this circumvention technology.

At the moment, the Refraction coalition consists of universities, non-profit organizations and industry partners and serves upwards of one million users worldwide. Refraction monitoring stations are currently hosted at three separate networks, including a mid-size ISP, and the team is actively seeking new partners to expand its network operations and its user population. If you are interested in joining the coalition or contributing in other ways, please reach out at team@refraction.network.

Many thanks to the members of the Refraction team who contributed to this article and for having used PF_RING to fight Internet censorship.

HowTo Use TLS for Securing Flow Export/Collection


One of the main limitations of flow-based protocols such as IPFIX and NetFlow is that the traffic is sent in cleartext. This means that it can be observed in transit and that it is pretty simple to send fake flow packets that pollute the collected information. As of today, unencrypted protocols need to be avoided, and thus workarounds to this problem need to be identified. Often people use VPNs to export flows, but this is not a simple setup with cloud services or on complex networks, so people sometimes rely on ACLs (Access Control Lists), which do not address privacy and integrity concerns.

This article shows you how to use TLS with nProbe in order to implement secure flow collection and export. All you need is nProbe (dev), which can collect and export flows over UDP/TCP/SCTP (export only on Linux), and now over TLS. All you need to do is specify the transport in the --collector-port (flow collection) and --collector (flow export) command line options. Examples:

  • Suppose you want to collect flows over UDP on port 2055
    --collector-port udp://2055
  • Suppose you want to collect flows over TCP on port 2055 on localhost
    --collector-port tcp://127.0.0.1:2055
  • Suppose you want to export flows to remote collector 192.168.1.2 over TLS on port 2055
    --collector tls://192.168.1.2:2055

When collecting flows, nProbe opens the specified port and listens for TLS connections. In this setup it is necessary to provide a TLS certificate and private key that you can generate with services such as Let’s Encrypt or using a self-signed certificate that you can generate with

openssl req -x509 -nodes -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 365

that can be passed to nprobe with --tls-priv-key and --tls-cert command line options. A complete example of nProbe collecting flows over TLS on host mycollector.ntop.org port 2055 is:

nprobe --tls-priv-key key.pem --tls-cert cert.pem -n none -i none  --collector-port tls://mycollector.ntop.org:2055

You can now connect your remote nProbe capturing packets on eth0 and exporting flows to the above collector using the command below:

nprobe -i eth0 --collector tls://mycollector.ntop.org:2055

Note that nProbe verifies the TLS certificate, so:

  • You need to make sure that you use a hostname and not an IP address when specifying your collector IPs
  • Expired, self-signed, or insecure certificates are not accepted. In this case, you can force nProbe to ignore the above issues when connecting to a remote collector by adding --tls-insecure. Example: nprobe -i eth0 --collector tls://192.168.2.243:2055 --tls-insecure

Finally, do not forget that nProbe has long been able to deliver flows to ntopng via ZMQ in a secure and encrypted format.

If you want to read more about nProbe and TLS, please refer to the user’s guide.

Enjoy !


Registration for ntopConf 2022 (June 23-24) is now Open


This year the ntop community will meet in Milan, Italy on June 23-24. The conference will take place on the first day, whereas the second day will be used for training. We’ll be talking about network traffic monitoring and cybersecurity, and discuss future roadmap items. It is a good chance to get together after the pandemic restrictions, as well as for us to meet our community.

You can read more about this event and read the program at this page where you can also find the registration link.

Note: this is a free (no cost) event but registration is required.

Hope to see you next month in Milan !

How ntopng monitors IEC 60870-5-104 traffic


Busy times for OT analysts.

Last month the number of known OT (operational technology) malware families increased from five to seven. The first malware discovered is Industroyer2, which was caught in Ukraine. As is nowadays common, security companies name the malware they discover; that is why the second malware was assigned two names, Incontroller and Pipedream. This malware was discovered before it was deployed.

Industroyer2 [1] is an evolution of Industroyer1, first seen in 2014. Both variants target the electrical energy sector, specifically in Ukraine. As the malware uses commands from the industrial protocol IEC 60870-5-104, its traffic looks like legitimate communication during normal operation.

Incontroller [2] is a new set of malware components, targeting the LNG sector in the US. Similar to Industroyer2, Pipedream uses popular industrial protocols like OPC-UA and Modbus TCP. Furthermore, it uses built-in functionality from engineering tools made to interact with OT devices such as PLCs (programmable logic controllers).

Both malware families clearly show that the criminals behind them have evolved: they understand OT protocols and are able to use built-in functionality of legitimate software engineering tools like CODESYS.

What has not changed over the last years is that SCADA systems still control in a “fire and forget” fashion. A command is sent from the control system server to the client in the field. The client translates the event into a physical action, like opening or closing a circuit breaker. The command source is not verified by the client. Translated back to network traffic, this means that one packet containing a command is enough to disrupt the complete industrial process or power distribution.

Industroyer2 uses IEC-104, the short name for IEC 60870-5-104. IEC-104 is widely used in the European energy sector as well as in the utilities sector, for instance water or waste water treatment.
A characteristic of many industrial protocols is that, even though the protocol is standardised, its implementation can vary between manufacturers or even system integrators, meaning that IEC-104 as implemented by Hitachi Energy differs from how it is implemented by Siemens. Operators are familiar with this, but for network security monitoring it can be a challenge.

A further difficulty for monitoring is that one packet transporting the IEC-104 payload can contain multiple IEC-104 data messages, called APDUs. Therefore traditional signature-based detection on the TCP payload does not work. The payload needs to be parsed in order to understand what type of command each APDU contains:

Since the discovery of Industroyer2 in early April 2022, several reports analysing the malware have been published. They contain information on how the malware works, captured network data, and in most cases recommendations on how to deal with this type of malware. Having a closer look at the recommendations, or actionable items as they are nowadays called, they are high level items. Example:

Not much is actionable from my point of view, or a whole set of commercial products needs to be in place, products which most SMEs are not in a position to operate. I am therefore looking for ways to detect the malware with minimal effort.

Let's have a look at the environment. SCADA networks are highly deterministic: who is talking to whom and how, i.e. command and control patterns, is repeatable. For IEC-104 this means the same type and sequence of IEC-104 commands can be found in normal operation over time periods of a day, or across weekdays and weekend days.

Example 1, time period of 2 working days and one night (36 hours):

TypeID   Type        Description                                          Number of Occurrences
13       M_ME_NC_1   Measured value, short floating point number          1'184'834
30       M_SP_TB_1   Single-point information with time tag CP56Time2a    2
103      C_CS_NA_1   Clock synchronisation command                        1

The only command sent was for time synchronisation of the client.

Comparing operations data with the available malware data, the different behaviour of the malware becomes visible:


The malware sends command after command to the client device (ASDU=3), iterating through the IOAs, somewhat like checking different ports on a host and trying to log in.

TypeID   Type        Description
100      C_IC_NA_1   Interrogation command
45       C_SC_NA_1   Single command
46       C_DC_NA_1   Double command

From a defender's point of view, we obviously cannot block port 2404, nor the commands used by the malware, as some or all of these commands are used during normal operation by the control system itself.

But looking at the TypeID transitions, the malware can be distinguished from legitimate traffic:

Transitions             Normal Operations Traffic   Malware
M_ to M_                > 1000                      0
M_ to C_ or C_ to M_    > 0 && < 10                 0
C_ to C_                0                           > 10

In ntopng, three detection mechanisms are built in:

  • IEC Unexpected TypeID. As the TypeIDs in use are known to the operator, this check monitors for unknown or disallowed TypeIDs and raises an alert when one is observed.
  • IEC Invalid Transition. In this check, TypeID transitions are recorded over a predefined time period, the IEC60870 Learning Period, found under Settings / Preferences / Behaviour Analysis. An alert is generated if an unknown TypeID transition is detected.
  • IEC Invalid Command Transition also checks for transitions, but specifically transitions between commands. If the number of command-to-command transitions exceeds a threshold, an alert is generated.

All three checks can be found among the Flow Checks.

For "IEC Invalid Transition" ntopng needs a learning period in order to track transitions. The default is 6 hours, but most likely a longer learning period is necessary, e.g. 2 days.
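Outside of ntopng, the same idea can be sketched in a few lines of Python. This is illustrative only: the TypeID set, names and example below are ours and do not reflect ntopng's actual implementation.

# Command (control direction) TypeIDs mentioned in this post; extend as needed.
COMMAND_TYPEIDS = {45, 46, 100, 103}

def command_transitions(typeids):
    """Count C_ to C_ transitions in a sequence of observed IEC-104 TypeIDs."""
    count = 0
    for prev, cur in zip(typeids, typeids[1:]):
        if prev in COMMAND_TYPEIDS and cur in COMMAND_TYPEIDS:
            count += 1
    return count

observed = [13, 13, 100, 45, 46, 45, 46]   # monitoring traffic, then a command sweep
print(command_transitions(observed))        # prints 4: normal operation shows 0,
                                            # the malware captures show well above 10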

Sources:

  1. https://www.welivesecurity.com/2022/04/12/industroyer2-industroyer-reloaded/
  2. https://www.cisa.gov/uscert/ncas/alerts/aa22-103a

All ntopng versions include IEC support, so you can enjoy monitoring your network using ntop tools.

Enjoy !

ntop acknowledges Martin Scheu from switch.ch for having assisted throughout this IEC work and blog post.

Best Practices for Using ntop Tools on Containers


Many people use software containers to simplify application deployment. As you know, ntop tools are also available on Docker Hub for quick deployment using Docker or other container management tools such as Portainer or Kubernetes. When using containers, there are a few things to keep in mind (a combined example follows this list):

  1. Service Persistency
    ntopng relies on third party services such as Redis (required) and InfluxDB (optional) to operate. In order not to lose information at container restart, you need to persistently store their data, or configure ntop tools to rely on an external container that provides such services persistently.
  2. Filesystem Persistency
    ntopng data is usually stored in /var/lib/ntopng/ and this directory must be persistent across restarts. You can map it with -v to a local directory: “docker run -it -v /var/lib/ntopng/:/var/lib/ntopng/:rw ntop/ntopng:stable -i eth0”
  3. PF_RING
    In containers the kernel is shared, thus PF_RING must be loaded on the host and accessed by the containers. Please make sure that the PF_RING version is the same across host and containers; otherwise, when starting a container, you will see errors such as
    root@dell:/home/ntop# docker run -it -v /etc/ntopng.license:/etc/ntopng.license:ro ntop/ntopng:stable
    Starting redis-server: redis-server.
    [PF_RING] Wrong RING version: kernel is 20, libpfring was compiled with 18
    

    which appear when the kernel PF_RING module and the container application (using PF_RING) are not the same version.

  4. Packet Capture
    Container network interfaces are unable to see the host traffic. If you plan to deploy ntop tools in a container and monitor the host traffic, please consider using “--network=host” when starting the container.
  5. Licensing
    The license from the host is shared across all the running containers (i.e. with 1 license you can run ‘n’ containers). In order to do that you need to map the license file as follows “-v /etc/nprobe.license:/etc/nprobe.license:ro”
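Putting these points together, a typical (illustrative) invocation for monitoring the host traffic with persistent data and a shared license might look like “docker run -it --network=host -v /var/lib/ntopng/:/var/lib/ntopng/:rw -v /etc/ntopng.license:/etc/ntopng.license:ro ntop/ntopng:stable -i eth0”; adjust the paths, interface name and license file (e.g. nprobe.license for nProbe) to your environment.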

We hope this post will help you easily deploy ntop tools on containers.

Enjoy !

How to Configure Flow Risk Exclusions in nDPI and ntopng


Flow risks are the mechanism nDPI implements for detecting issues in network traffic; their theoretical design is documented in the paper Using Deep Packet Inspection in CyberTraffic Analysis that we wrote last year. While we are reworking the definition of risk exceptions in ntopng to make them fully configurable with a few clicks, you can already configure risk exceptions by adding them to a protos.txt file. Such a file can be passed to ntopng via its configuration file by adding a line such as

--ndpi-protocols=/etc/ntopng/protos.txt

and creating the /etc/ntopng/protos.txt file.

This said, how do we define flow risk exceptions in nDPI? These are the directives you have to use:

  • IP address based exceptions (caveat: for the time being only IPv4 is supported)
    ip_risk_mask:a.b.c.d/CIDR=mask
  • Hostname based exceptions
    host_risk_mask:"name"=mask
  • Custom protocol ports
    tcp|udp:port@ExistingProtocolName
  • Custom encryption certificate authorities
    trusted_issuer_dn:"CN=…."

Usage examples (a combined protos.txt follows this list):

  • Q. In my network I have HTTP running on 8008: how can I silence “Known Protocol on Non Standard Port” alerts?
    A. Add the following entry in the protos.txt file
    tcp:8008@HTTP
    Note that HTTP is the name of an existing protocol known to nDPI. Make sure the string case matches the existing protocol name. If the protocol name string does not exist, a new port-based protocol is defined in nDPI.
  • Q. My device 1.2.3.4 is old, and it has several cybersecurity issues (e.g. obsolete TLS ciphers). However the device is well protected in the network and thus such issues should be ignored. How can I silence them?
    A. Add the following entry in the protos.txt file (note: 0 means mask all exceptions)
    ip_risk_mask:1.2.3.4=0
  • Q. I see many DGA alerts for domain sms.it. How can I silence them?
    A. Add the following entry in the protos.txt file (note: 0 means mask all exceptions)
    host_risk_mask:".sms.it"=0
  • Q. In my network we have self-signed TLS certificates created with a custom CA. How can I tell nDPI not to generate these alerts?
    A. Open ntopng (or Wireshark) to see the issuer DN string inside the TLS flows, copy the string, and add a new line in the protos.txt file as follows:
    trusted_issuer_dn:"CN=813845657003339838, O=Code42, OU=TEST, ST=MN, C=US"
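Putting the answers above together, all of these exceptions live in the same protos.txt file, one directive per line:

tcp:8008@HTTP
ip_risk_mask:1.2.3.4=0
host_risk_mask:".sms.it"=0
trusted_issuer_dn:"CN=813845657003339838, O=Code42, OU=TEST, ST=MN, C=US"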

If you want to fully explore what you can define in the protos.txt file, please see this comprehensive example file that contains all the possible exceptions you can define.

Enjoy !

ntopConf2022: News, Announcements and Future Plans


Last week ntopConf 2022 was held in person in Milan at Bocconi University, and about 100 people attended.

Presentation material, including slides and videos, is available on the conference page, so even if you missed the event you can see what happened and what was presented.

In a nutshell:

    • This July we will release new software versions, including a major nProbe 10 release.
    • We are modifying our tools to accommodate the SaaS model, as some of our users provide services and we want to simplify their lives.
    • We are working on a low-cost hardware solution, to be introduced in Q3, that should be a turn-key option for SMEs needing a simple solution for traffic monitoring, cybersecurity and (optional) enforcement.
    • We are integrating ntop tools with Catchpoint products, in order to complement their solutions with traffic analysis.
    • We are also working with Greenbone Networks (the creators of OpenVAS) to create an open-source solution for traffic visibility and vulnerability assessment.
    • Finally, we are planning to integrate Microsoft-provided blacklists in ntopng and, in return, export ntopng-monitored flows to Azure. We are still making plans (in particular to avoid privacy concerns), but the idea is to enable our users to optionally contribute to cybersecurity reporting efforts and benefit from them through timely notifications about compromised IPs contacting ntop-monitored hosts.

There is a lot of work on the table: we invite you to comment on all this using our community channels, so that we know your opinion.

Finally, next year we'll celebrate 25 years since the introduction of the original ntop. We will probably meet in Pisa, where everything started, but no plans are set yet. We have one year of hard work in front of us. If you would like to join the ntop core team, please send your CV to jobs@ntop.org.
