
Ntop at Caltech: Network Security Monitoring


It is a pleasure to report the feedback of Greg, one of our many long-time users, who reported on LinkedIn how ntop tools have been used to monitor Caltech traffic.

NTOP for Network Security Monitoring

Enjoy !


ntop Cloud: Basic Concepts


We have designed the ntop Cloud as a way to securely interconnect customer applications deployed across hosts in heterogeneous environments that are not necessarily directly interconnected. Initially the goal of the ntop Cloud is to enable users to easily administer these applications: update/restart/stop/start them with a mouse click, reconfigure them, and supervise their activities. Future SaaS (Software as a Service) features are planned, but they are not a short-term goal.

The idea is to simplify application deployment, check application status regardless of the physical network, detect restarts, etc.: things that before the ntop Cloud you would do in a more complicated fashion (e.g. by connecting via SSH to the hosts where those applications run). A typical example is nProbe and n2disk running in a datacenter and ntopng deployed on a laptop connected via public Wi-Fi. Those instances cannot speak directly unless a VPN is used, and since IP addresses can be dynamic, connecting to the applications is not simple. With the ntop Cloud you can do this with a mouse click as shown below.

DISCLAIMER: the ntop Cloud is still under development; if you want to use it, you need to use a recent version of the ntop apps from the dev branch (non-stable).

The first thing to do is to register with the ntop Cloud by connecting to cloud.ntop.org and creating your account. As described in a previous post, the ntop Cloud is designed for security: modern features such as double encryption, 2FA (Two-Factor Authentication) and a per-user walled garden are implemented to make sure nobody but the user can connect to their instances.

After the initial login, if you have not yet configured your ntop Cloud, you need to click on the left menubar and download the cloud.conf file.

This file needs to be installed on all the hosts on which you use ntop applications (again from the dev branch for the time being). The installation is simple:

$ sudo su
# mkdir /etc/ntop
# cp cloud.conf /etc/ntop

If you want to name the node on which your apps are running, you can also create a file, as shown below, that contains a custom node name. Once done, restart the ntop applications to make them aware of the cloud configuration. This step needs to be performed only once, and all the ntop apps running on the same host will automatically be aware of the cloud without any per-application configuration.

$ cat cloud_node.conf 
{"instance_name":"www.ntop.org"}

 

 

You can check whether the connection with the cloud is established by looking at the logs, or in ntopng by looking at the top menu bar: a green icon means that everything is set up correctly; a red icon means that either you are disconnected from the cloud or the cloud configuration is not yet complete.

Once you have configured your applications, you can go back to cloud.ntop.org and see them in the web GUI.

At this point you can perform actions such as update or restart on your instances. As said, the ntop Cloud (including the cloud console) is still under development: your feedback is important for us to fix the glitches you may encounter.

Enjoy !

 

ntop Spring Webinar: ntop Cloud, LLM/AI, SmartNIC. April 30th 3PM CET / 9 AM EST


This is to invite you to the ntop spring webinar. The major webinar topics will include:

  • ntop Cloud
  • Usage of LLM (Large Language Models)/AI in ntop tools
  • SmartNIC support in ntop Products
  • Ongoing developments

You can register for the webinar at this page: the registration will include the instructions for joining the webinar.

Hope to see you online !

Fixing Packet Deduplication: Introducing nDedup


When it comes to monitoring a busy network, network monitoring tools can become bogged down, or even worse produce misleading information for your analysis, because of a hidden culprit: duplicate packets. Imagine a firehose of data streaming across your network: much of this data can be redundant, with identical packets being delivered multiple times due to retransmissions or mirroring configurations. For example, when a SPAN (Switch Port Analyzer) port is used to mirror both the ingress and egress directions of switch ports, the resulting mirrored traffic might contain up to 50% duplicates. These duplicates eat up bandwidth and processing power on your monitoring tools, not to mention disk resources when dumping traffic to disk.

This is where packet deduplication tools come to the rescue.

A dedicated packet deduplication tool sits between the network source and the monitoring tools, acting as a filter. Some network packet brokers can remediate this by finding and removing duplicate packets before they reach the analytics tools; however, the higher their capacity for filtering out duplicated packets, the higher the price usually is. If you look at the specs, there is a limited time window within which they are able to detect duplicates, bounded by the buffering capacity of the equipment, and in many cases they can only detect duplicates that arrive back-to-back.

For this reason we decided to develop a new utility, ndedup, able to detect and eliminate packet duplication in software, on a server with a couple of network interfaces acting as a bridge. This tool has no hard limit on the buffering capacity and time window used for detecting duplicates, as they mainly depend on the amount of RAM available on the system (and modern systems have plenty of RAM). Furthermore, it leverages our kernel-bypass zero-copy drivers, the PF_RING ZC library and optimised data structures to implement fast duplicate detection and packet forwarding, achieving high speed even with a large window size. It can run on top of any adapter (Intel, Mellanox/NVIDIA, Napatech) and natively supports RSS to scale performance up to 100 Gbps.

How does it work under the hood?

When a packet is received by the tool, a one-way hash function turns the packet into a strong, collision-resistant hash value. This hash is compared with the hashes of the packets that arrived within a pre-configured time window (for example the last 50 msec): if no match is found, the packet is forwarded to the twin interface to be delivered to the monitoring tools; otherwise it is discarded as a duplicate.
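
The following Python sketch (illustrative only, not ndedup's actual implementation) shows the idea of hashing packets and checking them against a sliding time window:

```python
import hashlib
import time
from collections import OrderedDict

class PacketDeduplicator:
    """Illustrative sliding-window deduplicator (not ndedup's real code).

    Keeps a hash for each recently seen packet; a packet whose hash was
    already seen within `window_msec` is reported as a duplicate.
    """

    def __init__(self, window_msec=50):
        self.window = window_msec / 1000.0
        self.seen = OrderedDict()  # packet hash -> arrival time (oldest first)

    def is_duplicate(self, packet: bytes, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # Expire hashes older than the configured time window
        while self.seen:
            _, oldest = next(iter(self.seen.items()))
            if now - oldest > self.window:
                self.seen.popitem(last=False)
            else:
                break
        digest = hashlib.sha1(packet).digest()  # collision-resistant hash
        if digest in self.seen:
            return True   # duplicate: drop instead of forwarding
        self.seen[digest] = now
        return False      # first sighting: forward to the twin interface
```

In the real tool the memory used by this table is what bounds the window size, which is why plenty of RAM allows a much larger detection window than hardware packet brokers.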

This makes the tool deployable as a transparent bridge in front of any appliance running network monitoring software, including n2disk, nProbe or ntopng.

Example – bridge interfaces eth1 and eth2 – window size 10 msec – link speed 10 Gbps:

ndedup -i zc:eth1 -o zc:eth2 -d 10 -s 10 -S 0 -g 1 -B

Example – bridge interfaces eth1 and eth2 – use 4 RSS queues – window size 10 msec – link speed 10 Gbps:

ndedup -i zc:eth1@[0-3] -o zc:eth2@[0-3] -d 10 -s 10 -S 0 -g 1:2:3:4 -B

Please check the user’s guide for the full list of options. Also check the PF_RING ZC user’s guide for configuring ZC drivers and RSS.

The ndedup tool is part of (and installed with) the n2disk package and it does not require an additional license to operate.

Enjoy!

Using ClickHouse Cloud with ntopng


We are happy to announce that starting from the latest ntopng dev (6.1) version, ntopng supports exporting data (flows & alerts) to ClickHouse Cloud. Below you can find a step-by-step guide.

Quick Start

First of all, let's create an account and service on ClickHouse Cloud (you can find the official guide here); remember to save the ClickHouse username and password used for accessing your database.

After that we have to jump to the ‘Connect’ section:

Then, we have to select MySQL, turn on “Enable the MySQL protocol” and collect three pieces of information needed for the ntopng configuration file:

  • The MySQL Username (another username, different from the one configured at the start)
  • The Hostname (used to connect to the right host)
  • The Port (the MySQL port used to connect and run queries)

After collecting this information, let's jump back to ntopng and edit the configuration file.

Here we have to add (or modify if already there) the -F option, by following the ntopng documentation (here):

  • clickhouse-cloud;<host[@[<tcp-port>,]<mysql-port>s]|socket>;<dbname>;<clickhouse-user>,<mysql-user>;<pw>;

So let’s suppose that our previously collected info are the following:

  • ClickHouse username: CH-USER (by default it should be the ‘default’ username; it is provided when creating the Service)
  • MySQL username: MYSQL-USER
  • TCP port: by default it’s 9440
  • MySQL port: 3306 (the default one). Note that you need to add an s after the port, to indicate that data is pushed over TLS.
  • Password: CH-PASSWORD (the password used to access the DB, provided when creating the Service)
  • Hostname: CH-HOST

Then our -F option should look like:

-F="clickhouse-cloud;CH-HOST@9440,3306s;ntopng;CH-USER,MYSQL-USER;CH-PASSWORD"
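
To make the field layout explicit, here is an illustrative Python snippet (hypothetical, not part of ntopng) that splits such a connection string into its components:

```python
def parse_clickhouse_cloud_option(value: str) -> dict:
    """Split an ntopng 'clickhouse-cloud' -F value into its fields.

    Illustrative only; mirrors the documented layout:
    clickhouse-cloud;<host>@<tcp-port>,<mysql-port>[s];<dbname>;<ch-user>,<mysql-user>;<password>
    """
    scheme, endpoint, dbname, users, password = value.split(";")
    host, ports = endpoint.split("@")
    tcp_port, mysql_port = ports.split(",")
    tls = mysql_port.endswith("s")          # trailing 's' = MySQL over TLS
    ch_user, mysql_user = users.split(",")
    return {
        "scheme": scheme,
        "host": host,
        "tcp_port": int(tcp_port),
        "mysql_port": int(mysql_port.rstrip("s")),
        "tls": tls,
        "dbname": dbname,
        "clickhouse_user": ch_user,
        "mysql_user": mysql_user,
        "password": password,
    }

cfg = parse_clickhouse_cloud_option(
    "clickhouse-cloud;CH-HOST@9440,3306s;ntopng;CH-USER,MYSQL-USER;CH-PASSWORD"
)
```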

Note:

  • Both data export and query connections are encrypted and protected by TLS.
  • clickhouse-client is needed on the local machine (where ntopng runs) in order to correctly push data to ClickHouse Cloud. This means that you need to install it as described here.

If you want you can read all this in the ntopng user’s guide.

Enjoy!

 

Using WeChat For Delivering ntopng Alerts


WeChat is a multi-purpose messaging, social media, and mobile payment app developed by Tencent in China. Our Chinese-speaking users have long requested an integration of ntopng with it, and this post announces it.

By integrating ntopng alerts with WeChat, users can conveniently access network notifications within a platform they are already comfortable with. Overall, integrating ntopng alerts with WeChat enhances the user experience by providing timely, centralised, and customisable notifications directly to users’ preferred communication platform.

So, we are happy to announce that it is now possible to configure a notification endpoint in ntopng to receive alerts as WeChat robot group messages. All you need to do is follow these instructions to obtain the WebHook URL that you will put in the endpoint page.
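
As an illustration (not the actual ntopng implementation), what a WeChat group robot expects is a small JSON document posted to the WebHook URL. The Python sketch below shows the payload shape; the field names follow the group-robot convention, and the URL and message are placeholders, so verify them against the instructions linked above:

```python
import json
from urllib import request

def build_wechat_alert(message: str) -> bytes:
    # Text-message payload in the WeChat group-robot format
    # (field names per the webhook convention; check the official
    # instructions for your robot type).
    return json.dumps({"msgtype": "text",
                       "text": {"content": message}}).encode("utf-8")

def send_alert(webhook_url: str, message: str) -> None:
    req = request.Request(webhook_url,
                          data=build_wechat_alert(message),
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)  # fire-and-forget; add error handling in production

# Example (hypothetical webhook key):
# send_alert("https://qyapi.weixin.qq.com/cgi-bin/webhook/send?key=XXX",
#            "[ntopng] Flow alert: suspicious host 192.0.2.1")
```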

Enjoy !

HowTo Use Cloud Licenses


As discussed in our spring webinar, it is now possible to use (in beta) cloud licenses with ntopng and nProbe. Contrary to standard licenses that are bound to a physical system (based on the systemId), cloud licenses are “floating”: the same license file can be used on multiple hosts, though not simultaneously (i.e. only one system at a time can use the license). This is good news for those who use containers or VMs, as they no longer have to pay attention to the systemId. If you want to use cloud licenses:

  • Make sure you are using the latest application version of the dev branch (non stable). The next stable release will integrate cloud licenses.
  • Register on https://cloud.ntop.org, create your account, and download the cloud.conf configuration file. This file needs to be installed on all hosts that want to use the cloud licenses or simply connect to the ntop cloud.
  • Note that in order to connect to the ntop cloud your host must be able to talk to https://cloud.ntop.org on port 8883.
  • On each host where you want to connect to the ntop cloud do:
    • mkdir /etc/ntop
    • cp cloud.conf /etc/ntop
  • Even if you have a cloud license, you need to generate a license file. When you generate an (ntopng or nProbe) license, make sure that you select “Cloud” as the license type.
  • Install the license file and deploy it on your host.
  • At ntopng/nProbe startup, the validation will be performed with the ntop cloud as follows:
    22/May/2024 21:51:07 [NtopPro.cpp:345] [LICENSE] Reading license from /etc/ntopng.license
    22/May/2024 21:51:07 [NtopPro.cpp:497] [LICENSE] /etc/ntopng.license: unable to validate license [License mismatch (check systemId, product version, or host date/time)]
    22/May/2024 21:51:07 [NtopPro.cpp:525] Validating the license with the ntop cloud...
    22/May/2024 21:51:12 [NtopPro.cpp:538] Cloud validation completed successfully
    

    Note that cloud licenses require permanent Internet connectivity, as the application must be connected to the ntop cloud in order to operate.

Enjoy !

ELLIO for ntopng: HowTo Prevent CyberAccidents Using Blacklists


Time is one of the main problems in cybersecurity. Detecting issues after they have happened can cost you money and resources to restore the system. The goal of network traffic monitoring tools is to show what is happening on a network. Traditionally, monitoring protocols such as IPFIX/NetFlow export monitoring data periodically and often limit their analysis to the protocol header; the flow collector is thus partially blind, as it is informed only after a certain event has happened, and with limited contextual information. In ntop tools we operate in real time with pre-labelled information, thanks to nDPI being able to mark flows with cyberscore and risk information.

However, with this approach we are real-time but late, as the goal is to prevent accidents at the first packet received. In other words, when host X contacts our network we should not have to wait until X does something wrong to label it as an “attacker” and block it: we need to block it before it can contact our network. Traditionally many people use IDSs (Intrusion Detection Systems) for this purpose, but they are effective only as long as they have a signature for a threat (what about zero-day attacks?), and even though they operate in real time they can only be used to block X after it has created a problem.

At ntop we have spent the last couple of years studying how blacklists can be used to mitigate this problem, and we are happy to announce that we have teamed up with ELLIO, a Czech cybersecurity company active in threat prevention, to extend ntop tools with ELLIO blacklists. We have used them for more than one year on production networks and verified that they are really effective, for both incident prevention and traffic analysis.

Inside ntopng we have integrated the ELLIO community feed, a free (for non-commercial use) and effective blacklist, updated daily, containing a long list (~220k IPs as of today) of low-reputation IPs.

For professionals, ELLIO has created a special blacklist for ntopng that is updated every 5 minutes and contains a list of malicious IPs used for mass exploitation, botnets, generic attacks, and other malicious activities that hit your network. Using this feed with ntopng you can be promptly alerted when a suspicious IP contacts your network, or you can block all these attacks if you use ntopng Edge.

Enjoy !


ELLIO and ntop partnership: combining cybersecurity with high-speed network traffic analysis


Prague, Czech Republic / Pisa, Italy, May 29, 2024: ELLIO, a provider of real-time, highly accurate intelligence for the filtering of unwanted network traffic and cybernoise, and ntop, a provider of open-source and commercial high-speed traffic monitoring applications, have announced a partnership to enhance visibility into malicious traffic originating from opportunistic scans and attacks within the network traffic monitoring tool ntopng.

ELLIO empowers ntopng’s users with advanced insights into mass exploitations, botnets, and other widespread activities on the Internet.

By integrating a highly accurate and real-time ELLIO: Feed, ntopng’s users gain deeper insights into their network traffic through real-time information on sources of mass exploitation, botnet activity and opportunistic attacks, even before traditional rule-based detections are available.

“Obtaining reliable and up-to-date information about mass exploits, botnets, and other widespread attacks is crucial for cybersecurity. These attacks easily disrupt normal network operations, affect service availability and performance, and overwhelm security teams,” said Vlad Iliushin, CEO at ELLIO.

In modern cybersecurity traffic analysis, the challenge is to anticipate problems before they happen. Blacklists are effective for blocking attackers, but they require high-quality, frequently updated data that is immune to false positives.

ELLIO: Feed is a threat list that is dynamically updated every minute and contains an average of up to 200,000 IP addresses currently associated with attackers, scans, and other malicious mass exploitation activities on the Internet. This database is constantly regenerated to ensure users have the most up-to-date information on emerging threats. ELLIO’s threat feed is supported by a powerful combination of an extensive internet sensor network operated by ELLIO, advanced ML algorithms, and real-time data processing. This mechanism enables highly reliable and fully automated threat detection delivery.

Free trial for all ntopng users
ntopng users with the latest version can enjoy a 30-day free trial of ELLIO: Feed integration by visiting this address: https://ellio.tech/ntop-feed-trial

ntopng is a network traffic monitoring tool that provides a web-based interface for real-time analysis and visualization of network usage. It helps users understand network performance, detect issues, and improve security by offering insights into traffic patterns, protocols, and active hosts.

About ntop
ntop is an engineering-driven company that provides software for network traffic analysis, capture-to-disk and traffic generation applications optimizing the performance of Commercial Off-The-Shelf (COTS) hardware. As a recognized leader in its field, ntop has become an industry-standard application, serving a diverse customer base that spans from individuals to key players in networking. For more information, visit https://www.ntop.org/

About ELLIO
ELLIO Technology is a cybersecurity company, streamlining cybersecurity teams’ focus on critical incidents by eliminating alerts from generic attacks and cybernoise distractions. With its extensive network of internet sensors and honeypots, ELLIO collects and analyzes internet traffic, identifies attack data while tagging exploits and vulnerabilities. Through their advanced ML engine, real-time data processing and in-depth research, ELLIO enables organizations to gain a clearer picture of cyber security attacks and incidents. ELLIO provides reliable and fully automated filtering of cyber noise and generic attacks at the network perimeter. It helps reduce “alert fatigue,” the overload caused by too many alerts and events in SIEM and SOAR tools. For more information, visit https://ellio.tech/

Upcoming Events: CheckMK Conference and Interop Tokyo


In the next couple of weeks we’ll be active in meeting our user community at two events:

Hope to meet our community in person !

InfluxDB v2 support in ntopng is Now (partially) Available


It’s been 3 years since InfluxDB v2 was released, and until a couple of months ago we didn’t plan to add support for it, for several reasons: the migration from SQL to the Flux query language, and v2 performance not being better than v1. In the meantime, InfluxData released InfluxDB v3, which is currently only supported on their cloud and not yet packaged as an on-prem product.

However, due to pressing requests and suggestions from our customers, we finally decided to add support as follows: since InfluxDB v2 still partially supports the v1 REST API (that’s how we implemented v1 support), why not just tune the few things that changed while still using the v1 REST API?

HowTo Configure InfluxDB 2.x

Now let’s see how to create and configure an InfluxDB bucket that supports the v1 API.

First of all, let’s log in to the InfluxDB v2 web interface and jump to the buckets section; then let’s create a new bucket named ntopng.

Now we need to enable support for the InfluxDB v1 REST API on this bucket; to do that, open a terminal and run the following command:

influx v1 auth create --read-bucket BUCKET_ID --write-bucket BUCKET_ID --username USERNAME_FOR_NTOPNG --password PASSWORD_FOR_NTOPNG --org YOUR_ORGANIZATION --token REST_API_TOKEN_PROVIDED_BY_INFLUX

Where:

  • BUCKET_ID is the bucket ID you can find in the InfluxDB GUI (e.g. if we use the ntopng bucket of the screenshot above, it’s 423e05d0910df7cb)
  • USERNAME_FOR_NTOPNG and PASSWORD_FOR_NTOPNG are the username and password that we are going to add shortly in the ntopng web interface (InfluxDB Authentication)
  • YOUR_ORGANIZATION is the organization name used by InfluxDB (I didn’t change it during my tests, so it was the one I entered when logging in to InfluxDB for the first time)
  • REST_API_TOKEN_PROVIDED_BY_INFLUX is the REST API token provided when logging in the first time; it can also be generated from the API TOKENS section of the InfluxDB GUI
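
To verify that the newly created v1-compatible credentials work before configuring ntopng, you can issue a query against the v1 REST API (the /query endpoint with HTTP basic authentication). The following Python sketch is illustrative only; it builds such a request without sending it:

```python
import base64
from urllib import parse, request

def build_v1_check(url: str, db: str, user: str, password: str) -> request.Request:
    """Build an InfluxDB 1.x-compatible 'SHOW MEASUREMENTS' request.

    Illustrative helper for checking the credentials created with
    `influx v1 auth create` before pointing ntopng at the bucket.
    """
    query = parse.urlencode({"db": db, "q": "SHOW MEASUREMENTS"})
    req = request.Request(f"{url}/query?{query}")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

req = build_v1_check("http://localhost:8086", "ntopng",
                     "USERNAME_FOR_NTOPNG", "PASSWORD_FOR_NTOPNG")
# request.urlopen(req) would return JSON listing the bucket's measurements
```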

Now that the InfluxDB part is fully configured, let’s move to ntopng.

Let’s jump directly to the Settings -> Preferences -> Timeseries tab.

 

Let’s select InfluxDB 1.x as the Timeseries Driver and fill in the various preferences as follows:

  • InfluxDB URL: the InfluxDB URL used for the REST API (by default, http://localhost:8086)
  • InfluxDB Database: the bucket name we previously created for ntopng (in the example above, it was named ntopng)
  • Enable InfluxDB Authentication
  • Username: the USERNAME_FOR_NTOPNG previously created by running the influx command
  • Password: the PASSWORD_FOR_NTOPNG previously created by running the influx command

And that’s it!

From now on, all data is going to be exported to InfluxDB v2 correctly!

Note: an important thing to know is that InfluxDB v2 removed support for the RETENTION, CREATE and DROP keywords from the REST API. For this reason, retention and the related dropping of old data have to be managed from the InfluxDB v2 web interface and cannot be handled by ntopng itself (as was possible with v1).

Enjoy!

Howto Build a (Cheaper) 100 Gbit Continuous Packet Recorder using Commodity Hardware


Those who follow this blog have probably read a few posts where we described how to build a 100 Gbit continuous packet recorder using n2disk and PF_RING, providing specs for recommended hardware and sample configurations (if you missed them, read part 1, part 2 and part 3). In those posts we recommended FPGA-based adapters (e.g. Napatech) with support for PCAP chunk mode (i.e. the ability for the NIC to collapse packets inside the adapter in pcap format, without the need to read packet-by-packet as with most network adapters), in addition to other nice features like hardware timestamping and link aggregation. Capturing traffic in chunk mode improves bus utilisation and reduces the number of CPU cycles (and PCIe bus transactions) required for copying and processing the data. This allows n2disk, our continuous traffic recorder, to process up to 50 Gbps per instance/stream, handling full 100 Gbps by load-balancing traffic to two instances/streams (in the same box).

As you probably know, we have pioneered packet capture on commodity hardware for many years, since the introduction of PF_RING, doing our best to get the most out of ASIC adapters (e.g. Intel) and close the gap with specialised and costly capture adapters. For this reason, in the past months we have worked hard to give our community a cheaper alternative for building a 100 Gbps recorder, using ASIC adapters in place of FPGAs.

For this project we selected an NVIDIA (formerly Mellanox) ASIC adapter from the ConnectX family, which is quite fast and provides nice features (including hardware timestamping and flexible packet filtering) at a price range similar to the Intel E810. This adapter, just like any other ASIC adapter on the market, does not support chunk mode, but rather uses one transaction per packet, delivering lower per-stream performance with respect to FPGA adapters. This means that in order to reach 100 Gbps we need to scale with many more (RSS) streams.

For this reason we have decided to change the internal n2disk architecture to handle multiple interfaces (and RSS streams) in a single process, by means of multiple capture threads. In order to keep the internal architecture simple, the configuration of multiple capture threads also requires multiple dump directories (and timelines), one per RSS stream. The extraction tool (npcapextract) can then be used to seamlessly extract and aggregate traffic from all the timelines at extraction time, using the nanosecond-resolution hardware timestamps to merge packets in the proper order.
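
Conceptually, merging packets from multiple time-sorted timelines is a classic k-way merge on the nanosecond timestamps. A minimal Python sketch of the idea (illustrative, not npcapextract's actual code):

```python
import heapq

def merge_timelines(*timelines):
    """Merge per-stream packet sequences into one time-ordered stream.

    Each timeline is assumed to be sorted by (nanosecond timestamp, packet);
    heapq.merge preserves the global order without loading everything
    into memory at once.
    """
    yield from heapq.merge(*timelines, key=lambda pkt: pkt[0])

stream1 = [(1000, "a"), (3000, "c")]   # (ts_ns, packet) per RSS stream
stream2 = [(2000, "b"), (4000, "d")]
merged = list(merge_timelines(stream1, stream2))
```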

The hardware required for building such a system consists of a CPU with 16+ cores at 3 GHz (in our tests we used a Xeon Gold 6526Y), with an optimal memory configuration (i.e. all memory channels populated) as already discussed in the previous posts.

Sample command for configuring 8 RSS queues on a ConnectX interface (the Linux interface name should be provided):

ethtool -L ens0f0 combined 8

Sample n2disk configuration file for capturing from 8 RSS queues (mlx_5@[0-7]):

--interface=mlx:mlx_5@[0-7]
--dump-directory=/storage1/n2disk/pcap
--dump-directory=/storage2/n2disk/pcap
--dump-directory=/storage3/n2disk/pcap
--dump-directory=/storage4/n2disk/pcap
--dump-directory=/storage5/n2disk/pcap
--dump-directory=/storage6/n2disk/pcap
--dump-directory=/storage7/n2disk/pcap
--dump-directory=/storage8/n2disk/pcap
--timeline-dir=/storage1/n2disk/timeline
--timeline-dir=/storage2/n2disk/timeline
--timeline-dir=/storage3/n2disk/timeline
--timeline-dir=/storage4/n2disk/timeline
--timeline-dir=/storage5/n2disk/timeline
--timeline-dir=/storage6/n2disk/timeline
--timeline-dir=/storage7/n2disk/timeline
--timeline-dir=/storage8/n2disk/timeline
--disk-limit=80%
--max-file-len=1024
--buffer-len=8192
--index
--writer-cpu-affinity=0
--writer-cpu-affinity=0
--writer-cpu-affinity=0
--writer-cpu-affinity=0
--writer-cpu-affinity=0
--writer-cpu-affinity=0
--writer-cpu-affinity=0
--writer-cpu-affinity=0
--reader-cpu-affinity=1
--reader-cpu-affinity=2
--reader-cpu-affinity=3
--reader-cpu-affinity=4
--reader-cpu-affinity=5
--reader-cpu-affinity=6
--reader-cpu-affinity=7
--reader-cpu-affinity=8
--indexer-cpu-affinity=9,10
--indexer-cpu-affinity=11,12
--indexer-cpu-affinity=13,14
--indexer-cpu-affinity=15,0
--indexer-cpu-affinity=25,26
--indexer-cpu-affinity=27,28
--indexer-cpu-affinity=29,30
--indexer-cpu-affinity=31,16
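
Since the per-queue options above follow a regular pattern, a small script can generate them. The following Python sketch is illustrative only (interface name, storage paths and core IDs must be adapted to your system; the indexer affinities, which follow the same pattern, are omitted for brevity):

```python
def n2disk_rss_config(queues: int = 8, storages: int = 8) -> str:
    """Generate the repetitive part of an n2disk multi-queue config.

    One dump directory, one timeline, one writer and one reader
    affinity per RSS queue, matching the sample configuration above.
    """
    lines = [f"--interface=mlx:mlx_5@[0-{queues - 1}]"]
    lines += [f"--dump-directory=/storage{i}/n2disk/pcap"
              for i in range(1, storages + 1)]
    lines += [f"--timeline-dir=/storage{i}/n2disk/timeline"
              for i in range(1, storages + 1)]
    lines += ["--disk-limit=80%", "--max-file-len=1024",
              "--buffer-len=8192", "--index"]
    lines += ["--writer-cpu-affinity=0"] * queues
    lines += [f"--reader-cpu-affinity={i}" for i in range(1, queues + 1)]
    return "\n".join(lines)
```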

From our tests, this configuration was able to capture, index and dump traffic to disk with no packet loss at more than 100 Mpps (average packet size of 100 bytes) at full 100 Gbps. It is worth mentioning that this is still lower performance than an FPGA with chunk mode, which is capable of 148.8 Mpps (the theoretical max throughput at 100 Gbps with 60-byte packets), but for many people it is probably enough to cope with real traffic (where the average packet size is usually much higher than 100 bytes). Also note that the configuration looks a bit more complicated than the one used with FPGA adapters, as it requires more threads, and setting the affinity for all of them is required to make sure n2disk takes full advantage of all CPU cores and delivers the best performance. In essence, with an FPGA adapter you have to take into account the cost of the network adapter, but you can save money on the system (as you can use a cheaper CPU) and you have the guarantee that no packet is lost, as FPGA adapters are more efficient and have gigabytes of on-board memory. This said, you now have a cheaper option for your high-speed packet-to-disk activities.

Enjoy!

HowTo Use nProbe to Detect and Shape Traffic Using DPI


Not all nProbe users know that nProbe can be used not just as a passive monitoring tool, but also for shaping and dropping network traffic based on DPI. Ryan Claridge has filled the gap by writing a great article that explains this in detail.

Enjoy !

 

You’re Invited to the ntop Community Call: Thu July 18th, 15:00 CET, 9:00 AM EST


This is to invite you to the next ntop community call, scheduled for Thu July 18th, 15:00 CET, 9:00 AM EST. The topics we would like to discuss with our community include:

  • Planning the next ntop Conference 2024/25: deciding the conference location, contents, format and details.
  • Discussing other potential community meetings (either in person or virtual).
  • A preview of the upcoming stable release scheduled for late July.
  • Feedback and Q&A

This event does not require registration and you can simply add it to your calendar using this link ntop Community Call (or just use this call link).

ntop and Endian Enter Partnership for Open Source OT Monitoring


ntop develops monitoring tools for IT and OT networks, whereas Endian is a leading Italian company that develops a Secure Digital Platform for OT networks. Both companies use and develop open source tools that can be a key value in OT networks where most tools are proprietary. This partnership allows both companies to complement each other and offer better tools for their user community.

The complete announcement can be found at this page.

Enjoy !


Extended Multilanguage Support in ntopng: Korean, Spanish and French


This is to announce that ntopng now enables users to use new languages: Korean, Spanish and French. We have also improved the German and Italian translations. The translation is done using an automatic tool, so we cannot guarantee that it is completely correct. Errors or typos can be reported as a GitHub issue: please open a ticket if you find problems.

To change language, click on the top right icon in ntopng and enter the admin page.

A popup will open; select Language and a list of available languages will appear; then select the desired language and click on “Change User Preferences”.

Enjoy !

재미있게 보내세요!

Divertirse!

Amusez-vous!

Advancements in Traffic Processing Using Programmable Hardware Flow Offload


This week we have presented at IEEE HPSR (the IEEE International Conference on High Performance Switching and Routing) our latest work, which shows how nProbe can benefit from the acceleration provided by modern SmartNICs to achieve multi-100 Gbit traffic processing (both passive and inline) on low-end servers while deep-packet inspecting traffic using nDPI.


If you want to know more about it, you can view the presentation slides or read the paper.

Your feedback is welcome. Enjoy !

HowTo Export ntopng Alarms to Checkmk Event Console


Checkmk is a popular platform for monitoring IT infrastructure. ntopng was integrated into Checkmk some time ago, enabling users to obtain traffic visibility in addition to classic bytes/packets metrics. As ntopng is able to produce traffic alerts, we have decided to extend it to export alert information to the Checkmk event console, where alerts are received. This guide will walk you through configuring ntopng and Checkmk to enable this functionality.

In order to do so, within ntopng it’s necessary to configure a new Endpoint as well as a new Recipient. Navigate to Alerts -> Notifications and add a new Endpoint as shown below. In the Host field, specify the IP address of the host where the Checkmk instance is running.

After adding the Endpoint, add a new Recipient, as you can see in the following image. You are free to customize the information to be sent based on your requirements.

Now we also need to add some configuration on the Checkmk side. First of all, go to Setup > General > Global settings > Edit global setting to modify the service levels setting. This setting allows you to assign an importance to every event based on the organization that sends it, and provides an additional parameter to filter events. We have decided to use this identifier to show the alert family. Map the numerical ID to the Description as follows for a comprehensive result.

Another crucial step is to add an event rule for the event console. To do so, navigate to Setup > Events > Event Console rule packs. First click on Add rule pack to create a rule pack, then click on Edit the rules in this pack. We can now add a new rule. The rule ID is required; other parameters can be customized based on your preferences. For example, you can filter alerts containing certain words or with specific service levels (e.g., flow alerts).

After applying the changes you have made, you should start receiving notifications from ntopng in the Event Console (Monitor > Event Console > Events). The following image shows what to expect in the Checkmk event console.

For a more specific view of the problem, click on the event ID to reach the event details screen.

For detailed documentation on how the event console works in Checkmk, refer to the Checkmk Event Console Documentation.

Enjoy !

Positioning ntopng vs nProbe for Traffic Analysis


Recently we have compared the use of nDPI in a realtime application (ntopng) and in a near-realtime one (nProbe). We captured a short pcap with some mixed traffic and analysed it with both applications. The expectation was to find comparable results between the two applications, but this happened only partially. This blog post explains the main differences between the two tools and why there are some discrepancies in the results.

In our tests, we configured both nProbe and ntopng to analyze the same pcap and write results to two different ClickHouse tables, so we could easily compare results. The first difference is the total traffic volume: nProbe is a NetFlow probe that accounts traffic from the IP layer up, not considering the Ethernet (and VLAN) layers, whereas ntopng accounts for all traffic. This means that if you want to compare the data, you need to deduct the Ethernet overhead from the ntopng figures.

nProbe ClickHouse SQL query:
SELECT IPV4_SRC_ADDR, IPV4_DST_ADDR, IPV6_SRC_ADDR, IPV6_DST_ADDR, L4_SRC_PORT, L4_DST_PORT, (IN_PKTS + OUT_PKTS) AS PKTS, (IN_BYTES + OUT_BYTES) AS BYTES FROM flows ORDER BY BYTES;

ntopng ClickHouse SQL query:
SELECT TOTAL_BYTES, IPV4_SRC_ADDR, IPV6_SRC_ADDR, IPV4_DST_ADDR, IPV6_DST_ADDR, IP_SRC_PORT, IP_DST_PORT, PACKETS, TOTAL_BYTES - (PACKETS * 14) AS T FROM flows ORDER BY T ASC;
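As a back-of-the-envelope check, the 14-byte deduction used in the ntopng query can be reproduced in a few lines of Python (a minimal sketch; the flow sizes are made up for illustration):

```python
# Each Ethernet frame carries a 14-byte header (without VLAN tag or FCS)
# that ntopng counts but a NetFlow probe such as nProbe does not.
ETH_HEADER_LEN = 14  # bytes

def ip_layer_bytes(total_bytes: int, packets: int) -> int:
    """Mirror the SQL expression TOTAL_BYTES - (PACKETS * 14)."""
    return total_bytes - packets * ETH_HEADER_LEN

# Example: a 10-packet flow accounted as 1540 bytes by ntopng
# corresponds to 1400 IP-layer bytes on the nProbe side.
print(ip_layer_bytes(1540, 10))  # -> 1400
```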

Another difference between the two applications lies in how nDPI works. For protocols detected via DPI (e.g. DNS or TLS) there are no differences. Instead, for protocols such as Ookla (speedtest) or BitTorrent, where detection is based on a cache filled with signalling data, timing is important. As ntopng is a pure realtime application, cache-based nDPI detection happens as soon as enough packets have been processed for a given flow; hence, in case of no match, a guess or a generic protocol (e.g. HTTP for Ookla) is used.

In the case of nProbe (and nProbe Cento), nDPI works the same way, but in case of no match the guess is made during flow export towards the collector. This means that with nProbe the chance of finding an entry in the nDPI cache is higher than with ntopng, because the guess can be made several seconds (or even a minute) after nDPI detection has completed. The result is that, for flows not detected via DPI, nProbe detection can be more accurate than ntopng's, simply because nProbe does not have the same time constraints that ntopng has.
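The effect of this timing difference can be sketched with a toy model. The names, timings, and cache layout below are purely illustrative, not nDPI internals:

```python
def classify(cache: dict, key: str, guess_time: float, cache_fill_time: float) -> str:
    """Return the cached protocol only if the cache entry already
    existed at the moment the guess was made."""
    if key in cache and cache_fill_time <= guess_time:
        return cache[key]
    return "generic (guessed)"

# A signalling flow populates the cache 5 seconds into the capture...
cache = {"192.0.2.1:8080": "Ookla"}
fill_t = 5.0

# ...so a realtime guess made at t=2s misses it, while a guess deferred
# to flow-export time (t=60s) finds it.
realtime = classify(cache, "192.0.2.1:8080", guess_time=2.0, cache_fill_time=fill_t)
deferred = classify(cache, "192.0.2.1:8080", guess_time=60.0, cache_fill_time=fill_t)
print(realtime)  # -> generic (guessed)
print(deferred)  # -> Ookla
```

The deferred guess is not smarter; it simply runs later, when the signalling traffic has had time to populate the cache.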

We hope this post has clarified how the two applications work, so that you can now understand why you might see small differences (that are not bugs, but ‘by design’) between the two applications.

Enjoy !

HowTo Extend ntopng with new Host/Flow Checks and Alerts


ntopng can be easily extended with new host/flow checks and alerts. They are developed in C++ with a few Lua files used by the UI to configure the check and format the emitted alerts.

In order to introduce you to their development, we have written a short guide that shows you step-by-step how to develop a simple check and alert. You can also look at a code example of a host check that triggers an alert when a server is contacted on a new port after a learning period.
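The core logic of that example check can be sketched in a few lines. Note that the real check is implemented in C++ against ntopng's internal APIs; the class and method names below are purely illustrative and simplify away the learning-period bookkeeping that ntopng performs:

```python
import time

class NewServerPortCheck:
    """Alert when a server is contacted on a port not observed
    during an initial learning period (illustrative sketch)."""

    def __init__(self, learning_secs: float):
        self.start = time.monotonic()
        self.learning_secs = learning_secs
        self.known_ports: set = set()

    def on_server_contacted(self, port: int) -> bool:
        """Return True if an alert should be triggered."""
        learning = (time.monotonic() - self.start) < self.learning_secs
        if port in self.known_ports:
            return False          # already seen: never alert
        self.known_ports.add(port)
        return not learning       # new port after learning -> alert
```

During the learning period every port is silently recorded; once learning is over, any contact on a previously unseen port raises an alert exactly once.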

If you have questions or are willing to discuss these topics, please feel free to start a discussion on our community channels.

Enjoy !

