
How ntopng Merges Vulnerability Scan with Traffic Monitoring for Better Cybersecurity


ntopng was initially designed as a passive traffic monitoring tool. Over the years we have added active monitoring features such as network discovery, SNMP, and now vulnerability scan. A network vulnerability scanner is a tool designed to identify vulnerabilities (often known as CVEs) in network services such as a web or SSH server by performing an active service scan.

In ntopng we have decided to complement passive traffic monitoring with active scanning because:

  • We want to identify vulnerabilities, so that network and security administrators can keep the network healthy.
  • Matching passive with active traffic analysis is something unique that only ntopng features. Doing this we are able to identify:
    • Active network services that are not in use and thus can be safely shut down.
    • How critical vulnerabilities are: a highly vulnerable service with almost no traffic exchanged is less problematic than a popular service with less severe vulnerabilities.
    • Hidden services, i.e. services for which we observe traffic but that are invisible to the scanner (i.e. the port appears closed).
  • We want to identify active hosts (with scanning) that do not send/receive meaningful traffic (i.e. traffic other than ARP or multicast) and that (probably) represent unused assets. They should be shut down when possible, as unused, and probably unmanaged, hosts can create security issues for the whole network.

How to Use the Vulnerability Scanner

The ntopng vulnerability scan is designed to be open and modular, so that we can add new components to the scanning engine over time. Currently it features the following modules:

  • TCP and UDP portscan
  • CVE and Vulners

To date all the above modules are based on nmap and, for vulnerabilities, on Vulscan: ntopng implements a GUI around these tools, matches the scan output with ntopng traffic analysis, and delivers alerts using the standard ntopng mechanisms.
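For reference, a manual scan roughly equivalent to what the CVE module performs can be run directly with nmap and a vulnerability NSE script such as vulners (this is just a sketch: the script must be available on your system, and the target address below is a made-up example):

nmap -sV --script vulners 192.168.1.10

ntopng takes care of scheduling such scans, parsing the output and correlating it with the monitored traffic, so normally there is no need to run nmap by hand.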

The Vulnerability Scan module can be accessed from the left sidebar under the Monitoring section. The first step is to add a host (or a set of hosts) to be scanned: you can define the scan type, the ports to scan (all ports if the port field is empty, or the set of specified ports), the host or the network to scan, and the scan periodicity (one shot, or a periodic ntopng-performed daily or weekly scan).

By default ntopng performs up to 4 concurrent scans; this value can be modified in the preferences (minimum 1, maximum 16) according to user needs.

Clicking on the report page it is possible to see a summary of the scan results, which can be printed or sent via email. Next to the port, a ghost icon is shown if the scanned port was not observed in the traffic monitored by ntopng.

In case of a CVE scan, the CVEs found (if any) are listed according to their severity (reported in round brackets), which is a number up to 10.

Clicking on the CVE badge redirects to the vulnerability database page that describes the CVE in detail.

Future Work

In the future we plan to develop new scanning modules, for instance for the popular OpenVAS tool (a.k.a. Greenbone Community Edition). Another issue we would like to address is the false positive rate of the scanning modules, as sometimes the engine reports CVEs that are not relevant for the scanned host.

Enjoy !

 


ntopng 6.0 Webinar Invitation: Nov 15th 3 PM CET / 9 AM EST


Last week we released ntopng 6.0, which contains many new features and a redesigned user interface. The goal of this webinar is to walk through this new release and show a demo of all the major changes we have introduced.

Please follow this link for webinar registration.

Hope to see you online !

nDPI: Internals and Frequent Questions


All ntop tools are based on nDPI, but not every user is familiar with nDPI internals. We often receive questions about it, so it's time to answer the most frequent ones.

  • Q: How does nDPI implement protocol detection?
    A: nDPI includes a list of protocol dissectors (356 as of today) that are able to dissect protocols such as WhatsApp or TLS. As soon as a new flow is submitted to nDPI, the library applies in sequence the dissectors that can potentially match the protocol (e.g. telnet is a TCP-based protocol and it will not be considered for UDP flows). We start from the dissector that is most likely to match based on the port number. This means that for traffic on TCP/22 nDPI will start with the SSH dissector and, if it does not match, continue with the others. Dissection completes as soon as a protocol matches, or when none of them matches, in which case the flow is labelled as Unknown.
  • Q: What is the nDPI release cycle?
    A: We cut a release approximately every 6-8 months; fixes and improvements happen on a daily basis (check the nDPI code on GitHub).
  • Q: Is nDPI running on all popular platforms?
    A: Yes, it runs on Linux, macOS, Windows… and also on not-so-popular platforms such as IBM mainframes. We support ARM, Intel, RISC… architectures.
  • Q: How many packets does nDPI need in order to implement detection?
    A: It depends on the protocol. For UDP-based protocols such as DNS one packet is enough; for more complex protocols such as TLS about 10 packets are needed. In any case, if after 15-20 packets nDPI has not detected the application protocol, the protocol is labelled as Unknown.
  • Q: Is nDPI detection only based on protocol dissectors?
    A: No, payload inspection is the main technique, but nDPI can also use IP addresses, ports, TLS certificates, etc. as protocol signatures. In this case, after detection is complete, nDPI will report whether the match was performed via payload inspection or by other means (e.g. IP address).
  • Q: Does nDPI contain list of known IP addresses?
    A: Yes, it includes lists of well-known IPs, such as those provided by Microsoft or Meta, for identifying known services.
  • Q: Can I extend nDPI by defining new protocols with a configuration file?
    A: Yes you can. See this file as an example for defining new protocols (a short sketch of the format is shown right after this list).
  • Q: Is nDPI able to detect VPNs?
    A: Yes, it can detect VPNs such as Tailscale, WireGuard, OpenVPN, FortiClient… and also in-app VPNs such as UltraSurf or OperaVPN.
  • Q: Is nDPI able to detect malware and viruses?
    A: It can detect anomalous behaviour that can be caused by malware, but nDPI is not a signature-based tool, so it does not include signatures for malware A or B. This is because signature-based tools have various limitations and are resource intensive, whereas nDPI has been designed to be used also on high-speed (100 Gbit+) networks.
  • Q: Is nDPI able to detect security issues?
    A: Yes, it can, by means of a technique called flow risk. It can identify 50+ threats (e.g. a host that is talking with a malware host).
  • Q: Is nDPI able to block traffic?
    A: No, nDPI is a passive traffic analysis library that does not manipulate packets. You can create applications on top of it for policing (i.e. blocking or shaping) traffic. Examples of such applications are ntopng Edge, nProbe IPS and nProbe Cento.
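As a reference for the custom protocols question above, the nDPI protocol configuration file uses simple one-line rules; a minimal sketch (the protocol name, ports and hostname below are made-up examples, see the linked file for the full syntax):

# Custom protocol matched on TCP ports 8400 and 8401
tcp:8400,tcp:8401@MyCustomProto
# Same protocol also matched on a hostname/SNI
host:"custom.example.com"@MyCustomProto

The file is then passed to nDPI-based applications (e.g. ndpiReader or ntopng) through their protocol file option.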

Any more questions? If so, please contact us on our Discord or Telegram community channels.

Enjoy !

HowTo Build a 100 Gbit NetFlow Sensor Using nProbe Cento


When it comes to monitoring a distributed network, to get a picture of the network traffic flowing through the uplinks or on critical network segments, NetFlow-like technologies are usually the answer.

nProbe Pro/Enterprise and nProbe Cento are software probes that can be used to build versatile sensors able to export flow information in many different formats, including NetFlow v5/v9/IPFIX, Kafka, Elasticsearch, ClickHouse, MySQL, CSV files, etc. All this at very high speed. nProbe Pro/Enterprise has been designed for low/mid rate (1/10 Gbps) while nProbe Cento has been designed to run at high speed (today we consider 100 Gbit a high-speed link).

This holds regardless of the collector, which can be a third-party NetFlow collector, or the ntop collector, ntopng, which takes care of traffic visualization, augmentation, behavioral analysis, alerting, and a myriad of other functionalities. By combining nProbe Cento with ntopng it is possible to build a fully fledged network monitoring solution for 100 Gbit distributed networks that provides full visibility.

A frequent question that we get from those willing to use nProbe Cento at high speed is “What kind of hardware do I need to be able to process 100 Gbps at full rate?”. With this post we want to provide some guidelines about hardware selection.

Network Adapter

In contrast with what happens when running n2disk at high speed, where FPGA adapters (like Napatech or Silicom/Fiberblaze) able to operate in segment mode are mandatory to get the best dump performance, nProbe Cento does not really require expensive adapters. A 100 Gbit probe can be built using commodity, under 1K$, ASIC adapters. What is mandatory here is support for symmetric RSS. RSS is used to spread the traffic load across multiple CPU cores by means of multiple data streams, splitting the physical interface into multiple logical interfaces where traffic is distributed according to a hash function computed on packet headers. Using RSS to scale, in combination with PF_RING ZC (Zero-Copy) drivers delivering maximum capture performance, guarantees no packet loss at full 100 Gbit when processing flows.

For this reason the list of recommended adapters to be used in combination with nProbe Cento at 100 Gbit includes:

  • NVIDIA/Mellanox Connect-X 5/6
  • Intel E810

CPU

Not all CPUs are alike: they have different frequencies, numbers of cores, cache sizes, cache levels, instruction sets, etc. However, in our experience, we can say that a modern CPU (for example a Xeon Gold 6346 3 GHz or an AMD EPYC 9124) is usually able to handle more than 10 Mpps (million packets per second) per CPU core. Considering the average Internet packet size, a 10 Gbit link usually carries 1-3 Mpps. In the worst-case scenario, a 10 Gbit link can carry up to 14.88 Mpps, and ten times that at 100 Gbit.

This means that in order to handle 100 Gbps in the worst case (about 148.8 Mpps, i.e. roughly 15 cores at 10 Mpps per core) we need a CPU with at least 16 cores at 3 GHz. Fewer cores may be sufficient on CPUs with a higher frequency and a large cache.

For instance, if we want to build an Intel-based system, we can use a Xeon Gold 6326 or 6346 or higher. If we want to build an AMD-based system, we can use an AMD EPYC 9124 or higher.

RAM

The RAM configuration for optimal performance mainly depends on the CPU itself:

  • Number of modules: this should match the number of memory channels supported by the CPU (check the CPU specs for this)
    • Intel Xeon Gold currently supports 8 memory channels
    • AMD EPYC supports 12 memory channels for most models
  • Speed: select the highest speed supported by the CPU (check the CPU specs for this)
  • Size: considering the minimum size per module (8-16 GB), picking the smallest available size usually works just fine (8x 8 GB = 64 GB is more than enough for nProbe Cento)

Storage

Many users are worried about the storage. Storage does not really matter when running nProbe Cento, as it does not use disk space when data is exported to external collectors using NetFlow, ZMQ, Kafka, or export formats other than CSV (which is actually written to the local disk). This means that a single small disk, or a 2-disk RAID-1 array if you want redundancy for the system disk, is fine.

 

Software Configuration

Configuring nProbe Cento is really simple. The actual options to be provided on the command line (or in the configuration file) may change depending on the working mode and export format, however on the capture side it's really straightforward. You should pay attention to 2 main options: the interface configuration (-i) and the CPU affinity (--processing-cores).

If you are using an Intel adapter and you have configured the ZC drivers with RSS, all you need to do is specify the RSS interfaces as below:

cento -i zc:eth1@0 -i zc:eth1@1 -i zc:eth1@2 -i zc:eth1@3 ...

You can also use a shortcut for this, which is convenient especially when running on 16+ RSS streams:

cento -i zc:eth1@[0-15]
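As a side note, with the ZC drivers the number of RSS queues is typically configured at driver load time through the PF_RING driver configuration; a minimal sketch for an Intel E810 (ice ZC driver) might look like the following (the file path and queue count are assumptions, please check the PF_RING documentation for your adapter):

# /etc/pf_ring/zc/ice/ice.conf
RSS=16

After reloading the driver, the 16 RSS queues appear as zc:eth1@0 … zc:eth1@15.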

If you are using a NVIDIA/Mellanox adapter, you can use a similar syntax:

cento -i mlx:mlx5_0@[0-15]

At this point, we just have to add the CPU affinity configuration, to make sure that nProbe Cento will use all the available cores by binding one thread per core (providing maximum scalability and overall performance). lstopo is a tool that is really useful for understanding the CPU topology and helps you select the right cores.

cento -i mlx:mlx5_0@[0-15] --processing-cores 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
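As a side note, lstopo comes with the hwloc package, and its text-only variant can be run on a headless server to check the core numbering before choosing the affinity list:

lstopo-no-graphics

This helps you avoid binding processing threads to hyper-threaded siblings or to cores on the wrong NUMA node.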

At this point you just have to add the options to control the export format.

Now you have all the ingredients to build your 100 Gbit sensor.

Enjoy!

Securing ClickHouse and MySQL Flow Storage


ntopng stores flow data in various databases, including MySQL, Elasticsearch and ClickHouse; ClickHouse is the database we have selected as it outpaces the others in terms of speed and disk space usage. ClickHouse is a columnar database and, while it is very fast during data access, it is optimised for batch data insertion. This means that ntopng imports flow data as follows:

  • High-cardinality data such as flows are saved in a temporary file and imported every minute using clickhouse-client. The default TCP communication port is 9000.
  • Low-cardinality data such as alerts are stored with SQL queries. In this case we perform queries using the MySQL driver that is part of ClickHouse, as ntopng does not have a native ClickHouse query engine. The default TCP communication port used by the MySQL driver is 9004.

Out of the box ClickHouse uses plaintext communications, but you can enable TLS in a few steps, as described in this document, and also secure the MySQL configuration. All settings are specified in /etc/clickhouse-server/config.xml as follows:

  • <tcp_port> is used for insecure clickhouse-client connections.
  • <tcp_port_secure> is used for secure (TLS) clickhouse-client connections.
  • <mysql_port> defines the port the MySQL driver is listening on. ClickHouse does not offer TLS support for the MySQL driver, hence if you want to secure communications you need to put a TLS proxy in front of that port.
    <!-- Port for interaction by native protocol with:
         - clickhouse-client and other native ClickHouse tools (clickhouse-benchmark, clickhouse-copier);
         - clickhouse-server with other clickhouse-servers for distributed query processing;
         - ClickHouse drivers and applications supporting native protocol
         (this protocol is also informally called as "the TCP protocol");
         See also 'tcp_port_secure' for secure connections.
    -->
    <tcp_port>9000</tcp_port>

    <!-- Compatibility with MySQL protocol.
         ClickHouse will pretend to be MySQL for applications connecting to this port.
    -->
    <mysql_port>9004</mysql_port>

    <!-- Native interface with TLS.
         You have to configure certificate to enable this interface.
         See the openSSL section below.
    -->
    <tcp_port_secure>9440</tcp_port_secure>

    <openSSL>
      <server> <!-- Used for https server AND secure tcp port -->
             <!-- openssl req -subj "/CN=localhost" -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout /etc/clickhouse-server/server.key -out /etc/clickhouse-server/server.crt -->
            <certificateFile>/etc/clickhouse-server/server.crt</certificateFile>
            <privateKeyFile>/etc/clickhouse-server/server.key</privateKeyFile>
            <!-- dhparams are optional. You can delete the <dhParamsFile> element.
                 To generate dhparams, use the following command:
                  openssl dhparam -out /etc/clickhouse-server/dhparam.pem 4096
                 Only file format with BEGIN DH PARAMETERS is supported.
              -->
            <!-- <dhParamsFile>/etc/clickhouse-server/dhparam.pem</dhParamsFile>-->
            <verificationMode>none</verificationMode>
            <loadDefaultCAFile>true</loadDefaultCAFile>
            <cacheSessions>true</cacheSessions>
            <disableProtocols>sslv2,sslv3</disableProtocols>
            <preferServerCiphers>true</preferServerCiphers>

            <invalidCertificateHandler>
                <!-- The server, in contrast to the client, cannot ask about the certificate interactively.
                     The only reasonable option is to reject.
                -->
                <name>RejectCertificateHandler</name>
            </invalidCertificateHandler>
        </server>
..
    </openSSL>

The -F flag has been enhanced to specify the use of secure ports by adding an ‘s’ after the port. Example:

  • -F “clickhouse;127.0.0.1@9000,9004;ntopng;default;” [Insecure]

  • -F “clickhouse;127.0.0.1@9440s,9004;ntopng;default;” [ClickHouse TLS, plain MySQL]

  • -F “clickhouse;127.0.0.1@9440s,9014s;ntopng;default;” [ClickHouse TLS, MySQL TLS (behind a TLS proxy active on port 9014)]

Note that the ‘s’ option is also available when using MySQL (without ClickHouse) with ntopng (i.e. -F “mysql;127.0.0.1;ntopng;root;“)
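As a quick sanity check before pointing ntopng at the secure port, you can verify that the TLS endpoint answers using clickhouse-client (host and port below match the configuration above):

clickhouse-client --secure --host 127.0.0.1 --port 9440 --query "SELECT 1"

If the certificate is self-signed you may also need to install it on the client side or relax certificate verification in the client configuration.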

It’s now time to connect to your ClickHouse (or MySQL) database over TLS for secure database communications.

Enjoy !

HowTo Monitor Network Interface Usage with NetFlow/IPFIX


SNMP is the de-facto protocol for monitoring network devices. Using it, it is possible to monitor “how much” a link is used. What is missing is “how” a link is used: namely, if my Internet link is full, which device, protocol or application is using it? ntopng was created to answer this question and to show in realtime what happens on a network interface.

In this blog post we will show you how to combine network interface usage monitoring with traffic analysis. Flow-based protocols such as sFlow and NetFlow/IPFIX allow network traffic to be measured while providing contextual information about the SNMP interface Id on which such traffic was observed. ntopng can poll network devices via SNMP and read interface counters as well as the interface speed. Interface usage is a simple proportion between the traffic metered via sFlow/NetFlow/IPFIX and the interface speed. The interface speed is read by default from SNMP (if configured in ntopng for the flow exporter device) and it can be customised by the user by setting a custom speed, as sometimes the physical and actual interface speeds differ. For instance, you may be connected to the Internet with a 1 Gbit Ethernet link (as reported by SNMP) but your contract has a cap of 100 Mbit (the speed to be used when computing interface usage).
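To make the proportion concrete (the numbers below are just an illustration): with the 100 Mbit cap above, a 5-minute interval in which NetFlow/IPFIX accounts 1.5 GB of downstream traffic corresponds to 1.5 GB × 8 / 300 s = 40 Mbit/s, i.e. 40% usage of the downlink.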

The rest of this post shows you how we have enhanced ntopng to compute interface usage. It is now possible, for each Flow Exporter (probe) interface, to specify the uplink and downlink speed, overriding (if necessary) the interface speed as read via SNMP. In addition, ntopng automatically creates a timeseries containing the interface usage, computed using the traffic received/sent via NetFlow/IPFIX and the specified interface speed.

You can configure the uplink and downlink interface speed by jumping to the Flow Exporters page and selecting the flow exporter you want to configure. Inside the exporter, you can specify a custom speed for each interface.

Next to the interface Id there is a cog icon (wheel): click there to jump to the interface configuration page, where you can configure a custom speed.

After setting the interface speeds, save the settings and jump to the Preferences (in the Settings menu), in the Timeseries section. Here there should be a section named “Exporter Timeseries”; be sure to have the “Interface Usage” preference toggled on.

And that’s it! Simple isn’t it?

From now on, a new timeseries is going to be available in the Flow Exporter Timeseries section (accessible from the chart icon, highlighted in green above) with the interface usage, in percentage.

When the usage is below 25% the bar is light green, between 25% and 50% it is dark green, between 50% and 75% it is yellow, and above 75% it is red.

Soon, the possibility to trigger alerts when the interface usage exceeds certain thresholds will also be available: stay tuned for more info.

Enjoy this new feature, and send us your feedback

Short 1-2Q24 Roadmap: ntop Cloud, Towards 200 Gbit, Cybersecurity, Low-end nBox


Happy new year everyone! Those who followed our November webinar already know that we're working on new features and improvements in our tools. Below you can find a short list of features we plan to implement by the end of spring:

    • ntop cloud. This is the major activity we're involved in. As already said, for the time being we do not plan to create a SaaS solution (yet) but rather a communication mechanism that allows users to interact with their instances regardless of how they have been deployed. In essence, it will enable users to:
      • Supervise all their deployed instances across the Internet or on private servers from a unique dashboard.
      • Get notified when some of them become inactive.
      • Receive notifications and alerts without having a browser open.
      • Manage licenses more easily, introducing the concept of service usage rather than per-systemId licensing as today.
      • Promote collaboration across instances, in particular for sharing information about attacks or anomalies.

      Developing the ntop cloud requires a lot of work, as it is a service that must be available all the time to all users, who are distributed across the globe; hence we plan to make an initial release, with subsequent improvements in sub-releases. The initial version will be available by the end of winter, but the full release of the ntop cloud will need some more time (3/4Q24 according to our plans).

    • 100 Gbit using commodity hardware. As our users know, we have been able to do traffic analysis at 100 Gbit using commodity hardware for a while. We have developed a new version of n2disk that is able to do 40/100 Gbit using commodity network adapters, and it is being tested before release. At the same time, we want to push traffic analysis and packet-to-disk to 200 Gbit using FPGA-based adapters; this will be released later this spring.
    • Cybersecurity analysis will be improved by adding new security indicators for understanding changes in behaviour and spotting anomalies that cannot be easily detected using the algorithms we have today. For the time being we're still using statistical methods, but we're also experimenting with ML to evaluate its pros/cons with respect to the methods we're using today.
    • OT monitoring will be improved. We have new partners we're working with that are integrating our tools in their OT products to provide visibility and cybersecurity at low cost (most OT solutions available today on the market are very expensive and not simple to use for people working in manufacturing).
    • Low-end nBox: we will introduce a new low-cost nBox that, thanks to hardware improvements and resource optimisation in our tools, fits on cheap x64 boxes and is able to monitor an SME network. The integration with the future ntop cloud will further ease deployment and remote supervision.
    • nProbe/nProbe Cento are being enhanced for monitoring mobile operator networks at 40/100 Gbit, in essence scaling up with speed while providing a software-based architecture able to monitor modern high-speed telco networks.

As you can see, we're busy working. It would be nice to have your feedback through our community channels.

 

Enjoy !

Using ntop in Education: South Panola School District


ntop tools are heavily used in education and we're glad to share a guest post that describes the lessons learnt deploying our tools in a public school district in Mississippi.

Enjoy !

South Panola School District's (SPSD) network continues to evolve to better serve the needs of its students and staff. Upon employment at SPSD, the district had less than 1 Gbps to the Internet and now boasts 3 Gbps. With more and more traffic flowing through our network, SPSD has a need to better monitor the traffic to determine more soundly the direction of our network's evolution.

Throughout SPSD's evolution multiple tools have been employed for this effort. Ultimately, the tools employed throughout the years have been found deficient or lacking in some regard, be it flexibility, retention, capability, or depth. Sometimes software was too complex or expensive and failed to survive beyond its trial use. Over time, SPSD concluded it needed to take a great amount of time and effort to determine the key points of data that were wanted, and ways to extrapolate and aggregate them into visualizations, as well as retain them for deep dives. Therein lay the issue: where would time afford such efforts? After leaving such goals on the backburner, SPSD discovered ntopng completely by accident.

SPSD’s discovery tied into an event wherein software and hardware costs were escalating with regards to firewalling and it needed a solution that was price efficient and capable. SPSD swapped its firewalls to an open-source firewall capable of meeting its needs and from there learned of ntopng’s existence. At this point, we decided to run ntopng on the firewalls themselves and quickly determined that ntopng was the answer to our needs. Soon, however, we realized that we were unable to take advantage of the full capabilities of ntopng due to hardware design.

Our firewalls were targeted at their task, firewalling, and ntopng, while light on resource usage for the task it was ultimately accomplishing, was in need of more resources than our firewalls could afford. It also didn’t help that it was in essence running on more than one firewall, making the data collected split between them. SPSD wanted the data all in one place. There were multiple ways to accomplish such, but ultimately SPSD had to come up with a sound plan to employ the advantages that ntopng could afford.

After the realization, and over no short amount of time, SPSD finally decided to buy the hardware necessary, from NICs to a rack-mount server with plenty of drive slots, and started to execute its plan. Overall the key areas we wanted to employ ntopng were in monitoring traffic entering and exiting the firewall, the network edge. SPSD also wanted to keep a few ports available for monitoring devices we might plug directly into the box running ntopng so as to determine issues with them or if they were inadvertent participants in malicious activity.

With the purchase of the new hardware, port mirrors set up on switches, and 10 Gbps links established (since most of the edge is a bunch of 10 Gbps links to and from our network, firewall and ISP), we began to determine the software necessary and the potential capabilities ntop.org's software could offer us. The most important capability, by far, in our opinion, was that ntop's products could, when afforded sufficient hardware, monitor at a line rate of 100 Gbps. SPSD is hard pressed to find any contender capable of the same without having to break the bank to attain the capability. Ntopng's capabilities don't end there, and it is not the only ntop.org software employed by SPSD either.

To monitor at line rate above 1 Gbps PF_RING ZC was determined necessary, for ntopng, nProbe Cento, and n2disk. N2disk facilitates packet captures (pcap files basically) at line rate provided your disk bandwidth is sufficient. nProbe Cento turns a stream of packets into flows in the same vein as NetFlow/IPFIX. Ntopng combines with other ntop software seamlessly, allowing us to create pcap files for whatever time period selected so long as n2disk was recording at the time. Ntopng also receives the flows from nProbe Cento allowing us to use ntopng’s capabilities to visualize and extrapolate the data into easier to consume visualizations and aggregations. Then there is nProbe, which can be used to collect flows from devices generating IPFIX/sFlow/NetFlow data. Again, ntopng eventually receives the data from nProbe so as to facilitate easier consumption. nProbe and nProbe Cento aren’t limited to just the uses we have employed either, being capable of forwarding the flows to any flow collector. nProbe Cento again has yet more capabilities that we are not currently employing due to lack of need.

At the end of all this SPSD has to consider how the data is retained, and for how long, and ntopng doesn’t disappoint here, exceeding our expectations in both regards. Ntopng has a way to limit the retention to a time period, as well as store the data such that the resolution is retained. We have ntopng export time series to influxdb and flows to ClickHouse, which ultimately store their data on a 12 disk RAID 10 of SATA SSDs. This allows SPSD to maintain high resolution historical data points for exploration upon necessity or to learn more about how the network’s usage is shifting over time, all of which facilitate future network design decisions. N2disk is now being run in such a way as to constantly capture data. Its design allows us to decide what percentage of a disk it can/should use and therefore we never overrun disk capacity. This does mean we are limited in how much pcap type data we retain but it is physically impossible to store all data sent and received over the network over a lengthy time period without major investments that are outside the reach of a school district. Besides that, the idea isn’t to have pcap’s from forever ago, but to be able to export the data for more in depth analysis as necessary. Again, ntopng facilitates such with a way to determine what time period of data is wanted. In general, SPSD believes pcaps are more an on demand thing for us, than something we want to retain for ages to come. Just as ntopng can export pcaps, it can export flows to time series as well. This means if we notice a set of flows are due to roll off but are necessary for some purpose, we can retain them manually. Finally, for graphs and visualizations, ntopng can take what amounts to some degree as a screenshot of the graph you currently have presented and store it indefinitely. All of these things are features we are using, or intend to use as time progresses and needs arise.

With all the talk of retention one might wonder what's involved with the live data, derived from recent packets and flows captured. Ntopng remains just as capable with live data as it does with historical data and provides many different ways to visualize and consume the data available. Extrapolating the wealth of data provided by ntop's products in any other fashion would be an unwieldy and time consuming task, likely resulting in poor assumptions being made with regards to SPSD's evolving network direction. Ntopng has enough features and capabilities to it that we may find ourselves discovering uses for years to come, especially considering that it will expand in capability and features itself as time progresses. Ntopng's reporting capabilities are still being explored and will definitely be of use to SPSD as we move forward. Ntopng's alerting capabilities remain to be explored to a greater depth and we find it necessary to customize in this regard to ultimately arrive at a set of alerts that are important to SPSD. Ntopng is facilitating insight into network applications crossing from our local network over to the internet. It provides us traffic graphs as well. With this list I've barely scratched the surface of what it can and does do for us. I know we'll make use of its maps and ASN correlation features over time for instance, in addition to all that which has so far been listed. We're even using its SNMP monitoring for a low resolution graph of traffic on certain interfaces within our network.

Using ntop.org’s products is by necessity a journey, as more uses continue to be discovered, and SPSD’s needs evolve. As networking evolves, traffic usage increases and changes, so does SPSD itself evolve. Ultimately, ntop.org’s products empower SPSD to monitor its network for critical information as well as malicious activity if such were to occur. SPSD intends to use ntop.org’s products to help us derive future directions to service the needs of our students and staff more efficiently as the network direction evolves.

The only question before us is can we even take advantage of all that ntop.org’s products offer us? Time will tell and every effort will be made, but with our evolving direction, ntop.org’s products evolving, and yet more capabilities being discovered within their products, it feels that SPSD will likely find that it’s still just at the tip of the iceberg and that more advantages could be provided should we just seek a bit further.

 


How Sampling and Throughput Calculation Works: NetFlow/IPFIX vs sFlow vs Packets


ntop tools are able to collect various types of flows: NetFlow/IPFIX (including dialects such as J-Flow and NetStream) and sFlow/NetFlow-Lite, in addition to packet capture/processing. We have decided to handle all these formats seamlessly, so that the user does not have to know their inner details. So what you do is set up the usual pipeline

where nProbe collects flows from devices (i.e. routers or switches) or turns packets into flows. In both cases nProbe delivers this information to ntopng, enriching the exported flows with additional data (e.g. nDPI) with respect to the original flow and removing the differences introduced by the various flow formats used by some vendors.

In the latest (dev) version of nProbe/ntopng, we have made some enhancements for users who collect flows. In the flow details we have added a new badge that shows you whether the displayed flow originates from NetFlow/IPFIX or sFlow. In case the flow is computed by packet capture (either on the nProbe or ntopng side) no badge is displayed.

This way users who collect traffic from multiple exporters on the same interface, can figure out the nature of the flow.

Packet sampling is optional in NetFlow/IPFIX whereas it is compulsory in sFlow. This means that when a flow is calculated on the flow device (i.e. router or switch), in case of packet sampling only a subset of the packets is used to account for traffic. Packet sampling is a smart technique for reducing the load on the monitoring device, but it is handled differently depending on the protocol:

  • NetFlow/IPFIX exporters usually do not store the packet sampling rate (if used) in the exported flows. If the router has a configured sampling rate of 1:x, it means that all the flows accounted on the device have the same sampling rate. As the sampling rate used is not always present in exported flows, you need to add “-S 1:x:1”. In case of unsampled NetFlow/IPFIX flows you do not need to specify -S at all.
  • sFlow is a bit more complex, as it has a variable sampling rate per exported flow. In this case, -S is not necessary as nProbe will automatically upscale traffic using the information that is part of the sFlow export.

In summary, the good news is that nProbe can seamlessly take care of flow sampling. The only caveat is when you collect sampled NetFlow/IPFIX flows generated by multiple exporters with a single nProbe instance: as -S is a global option for all exporters, you need to make sure that all the exporters use the same sampling rate. If this is not the case, please start multiple nProbe instances, one per flow sampling rate, as in the picture below.
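For instance, a collection-only nProbe instance for routers exporting NetFlow sampled at 1:100 could be started as follows, applying the -S rule described above (the collection port and ZMQ endpoint are just examples):

nprobe -i none -3 2055 --zmq tcp://127.0.0.1:1234 -S 1:100:1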

As already stated, ntopng can seamlessly handle all the above flows, as nProbe automatically upscales traffic to the correct sampling rate. This means that bytes/packets are correctly reported and multiplied transparently. Instead, when ntopng calculates the flow throughput it has to know more about the flow origin, as explained below.

In the above flow view you see the actual throughput, and inside the flow details you can see the current, peak and average (for the whole flow duration) throughput.

The actual throughput is computed as follows:

  • Packet interfaces: periodically (typically every 5 seconds) ntopng computes a bytes/packet delta with respect to the previous period and calculates the throughput.
  • sFlow flows: in this case a sFlow flow is basically a single packet upscaled with the specified dynamic flow sampling rate. For sFlow, the throughput is computed as for packet interfaces, because each received flow is basically a single packet and sFlow does not export any timing information except the time when the packet was sampled. This is not super accurate, as in sFlow there is no guarantee of receiving a constant data stream for a given flow; hence it is more accurate for flows that receive many samples than for flows that have received only a single sFlow sample.
  • NetFlow/IPFIX flows: the throughput is computed whenever a (sub-)flow is received by ntopng. As flows contain the flow start/end, the NetFlow/IPFIX throughput is computed as the average throughput of the flow, i.e. bytes / (flow_end – flow_start + 1). For this reason, the throughput of NetFlow/IPFIX flows is updated whenever a new (sub)flow is received, and not periodically (a worked example follows this list).
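As an illustration of the NetFlow/IPFIX formula above (the figures are made up): a flow record reporting 6,000,000 bytes with flow_end – flow_start = 59 seconds yields 6,000,000 / 60 = 100,000 bytes/s, i.e. roughly 800 kbit/s of average throughput for that (sub)flow.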

If you have read this far, we hope that all the above concepts are now clear. This post applies to the nProbe/ntopng versions in the dev branch. The above enhancements will be included in the next stable release.

Enjoy !

HowTo Monitor SNMP Interfaces Utilisation and Congestion Rate


Recently, we added the ability in ntopng to monitor link utilisation using NetFlow/IPFIX. In this post, we want to show you how we further improved those functionalities by leveraging SNMP to easily monitor the status of many devices (interfaces). SNMP is a well-known protocol used for monitoring network devices, and ntopng uses it to poll and gather information from them. ntopng computes the interface usage as a simple proportion between the traffic metered via SNMP and the interface speed. The interface speed is read by default from SNMP, but it can be customised by the user as sometimes the physical and actual interface speeds differ.

Let’s see now how it is possible to configure SNMP link speeds and monitor them to detect unexpected behaviours with SNMP polled devices.

Similar to what happens with the Flow Exporter speeds, by going to the SNMP page, selecting an interface of a device, and jumping to the configuration page (cog icon), it is possible to configure the link speed (Up and Down independently) of a specific interface.

As already mentioned above, this speed is automatically read using SNMP; however, the detected speed is not always correct. Consider the example of being connected to the Internet with a 1 Gbit Ethernet link while your SLA has a cap of 100 Mbit: in this case the right speed to configure is 100 Mbit.
After setting the interface speeds, save the settings, jump to the Preferences (under the Settings menu), in the SNMP section, and be sure to have the “SNMP Devices Timeseries” preference enabled.

From now on, a new timeseries is going to be available in the SNMP Interface Timeseries section (accessible from the chart icon, highlighted in green in the above picture) with the interface usage in percentage.

When the usage is below 25% the bar is light green, between 25% and 50% it is dark green, between 50% and 75% it is yellow, and above 75% it is red. However, while developing this feature, we realised that with hundreds, or even thousands, of interfaces it is impossible to go through all of them one by one and check whether everything is ok. In addition to that, having an interface “congested” (i.e. under high traffic load) for a short period may or may not represent a real issue: in most cases it does not, for example in the case of a backup where for a couple of hours a week the bandwidth of the interface is filled.
For this reason, we decided to gather all this information together and put it on a single page.

Here it is possible to navigate through time (by using the classic time navbar) and find out whether any interface had problems. In addition to the standard information (SNMP IP, Interface Name, etc.) we introduced a new metric called “Congestion Rate”, representing how much an interface is filled in the selected timeframe.
For those who like formulas, the Congestion Rate is computed in the following way: sum all the points of the timeseries with usage higher than 75% and divide by the total number of points in the timeseries in the selected timeframe. In practice, this is similar to an average.
After finding some issue with the Congestion Rate of an interface, it is possible to drill down and do follow-up investigations by directly clicking on the bar of the Top Congested Interfaces chart or by clicking on the Actions -> Timeseries button.
And this is not all. As promised, we also added the ability to trigger conditional alerts based on the interface usage. It is now possible, for both SNMP and Flow Exporters, to trigger alerts when the usage exceeds a specific configurable threshold by using Traffic Rules.

For your reference, you can read more about interface utilisation in the ntopng user's guide.

Enjoy this new feature, and send us your feedback!

Introducing ntopng Customised Reports


In ntopng 6.0, the Dashboard and Traffic Reports have been completely redesigned and rewritten from scratch with a new, flexible, template-based engine. In a previous webinar we demonstrated how neat and powerful the new engine is, with the ability to automatically generate periodic reports, and we promised to release a graphical editor for customising it, letting everyone create their own traffic view on both historical and live traffic data.

The graphical editor has been implemented and it is available in ntopng 6.1 (and later versions). In this video we demonstrate how to use the editor to build a custom Traffic Report in seconds (and a few easy steps). Enjoy!

 

ipt_geofence: Protecting Networks using Geofencing, Blocklists and Service Analysis

$
0
0

Last week the ntop team organised the network devroom at FOSDEM 2024, which took place in Brussels on Feb 2-3. During the devroom we presented a tool named ipt_geofence that we created for protecting our network infrastructure and for generating blacklists that can be used with ntop tools (this task is still ongoing). ipt_geofence is an open-source tool for Linux and FreeBSD that combines IP geofencing, service (e.g. SSH, web and mail) analysis, and blocklists in a single tool. It allows malicious hosts to be blocked, and hence protects services in a simple way, without having to use multiple tools and complex administration practices to implement what ipt_geofence offers out of the box.

These are the presentation slides used during the talk, and this is the source code in case you want to play with it.

 

Enjoy !

HowTo Analyse NetFlow/IPFIX/sFlow pcap Traces


Dumping sFlow/NetFlow/IPFIX flows in pcap format can be very useful for troubleshooting or for creating a compact traffic dump. For instance, you can dump flow traffic with n2disk (or wireshark, or tcpdump), store it in pcap format, and eventually share it via a shared disk or email. Flows are usually analysed live with nProbe/ntopng, but how can you analyse them when they are saved in pcap format and not captured from the wire?

The nProbe package includes a companion tool that allows flows to be extracted from a pcap file and reproduced as if they were sent on the wire.

Welcome to sendPcap: sFlow/NetFlow/IPFIX pcap flow replay
Copyright 2011-23 ntop.org

sendPcap -i <file>.pcap [-p <port>] [-n <num pkts>]
         [-d <destination IP>] [-l] [-1 <num>] [-s]
         [ -f <filter> ] [-6][-t <pps>]

Usage:
    -i <file>.pcap Pcap to reply
    -p <port>      Collector port
    -d <IP>        Collector IP address
    -f <filter>    BPF Filter
    -l             Replay the pcap in loop
    -1 <num>       Replay packets in bunches of <num>
    -s             Replay the pcap at the original speed
    -6             Send flows over IPv6
    -t <pps>       Replay this packet rate (packets/second)

This tool takes as input a pcap file containing the flows and reproduces it, resending the original flows to the specified local <port>. Please make sure that the pcap contains only flow packets: if this is not your case, please filter them (e.g. tcpdump -r unfiltered.pcap -w filtered.pcap "<filter>") before passing the pcap to the tool, or use -f <filter> to skip packets that do not contain flows.
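For example, assuming your exporters send NetFlow/IPFIX to UDP port 2055 and sFlow to UDP port 6343 (the standard ports; adjust the filter to your setup), a suitable pre-filtering command could be:

tcpdump -r unfiltered.pcap -w filtered.pcap "udp port 2055 or udp port 6343"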

Once you have your pcap ready, you need to start nProbe and ntopng. In the example below we assume that all tools run on the same host. Supposing you want to resend flows towards an nProbe instance collecting on port 2055, do:

  • ntopng -i tcp://127.0.0.1:1234 --disable-purge
  • nprobe -i none -3 2055 --zmq tcp://127.0.0.1:1234

Note that in ntopng we use the flag “--disable-purge”, which tells ntopng not to purge flows after they have been received. This is important, as reproducing flows from a pcap causes ntopng to receive flows with a date in the past: without the above flag, the flows would be immediately purged, as they are considered expired with respect to the current time, and therefore they would not appear in the ntopng interface. Of course this flag has to be used only for debugging purposes, as disabling purge increases memory usage because data is not purged when no longer necessary.

Once the tools are started, you can send the flows stored in the flows.pcap file as follows:

  • sendPcap -i flows.pcap -p 2055

By default sendPcap sends flows as fast as possible, and this can be too fast or unrealistic. For this reason you can add the flag -s to reproduce the flows at the same speed at which they were collected, or send them in batches (-1). You can also stress-test the collection pipeline using -l to reproduce the pcap in a loop.

Please note that flows keep the date of the original flow (i.e. the date/time is not reforged to the current date), so you can see the flows as they were originally sent.

Enjoy !

How we have Decreased ntopng Memory Usage by more than 60%


In this blog post we want to share our experience squeezing ntopng memory usage to fit into the small OT monitoring devices manufactured by our partner Endian. Just to give you an idea of the work we did, look at these two images taken on the same network at the same time of day, before and after our work.

After

As you can see, we managed to squeeze the memory usage from 4 GB down to 1.3 GB. Below we describe how we did it.

The challenge was to reduce memory usage while preserving the same functionalities of ntopng. The ntopng code (and that of other ntop components such as nDPI) is automatically tested with nightly test suites, automatic GitHub actions and Google fuzz testing. The chance of having a memory leak is very low, but before starting our activities we double-checked, and this was not our case (fortunately).

The architecture of ntopng is a bit complex, as the engine is written in C++ while periodic activities (e.g. minute checks or timeseries writes) and the web interface are written in Lua. This means that ntopng continuously spawns Lua virtual machines to execute these scripts and then terminates them. In Lua there is no chance of having a memory leak, but given the complexity of ntopng, every time we start a VM we have to load several modules that take some memory. One of the most resource-intensive Lua scripts is the one used to show alerts on screen, which was taking about 4 MB of RAM at every run. On the Lua side we have split scripts into smaller modules, removed potential circular dependencies (module A includes module B, which includes module A) and loaded only the minimum dependencies needed for a script to run. This has allowed us to decrease the memory usage of resource-intensive scripts to 1.3 MB. Please note that we spawn several VMs simultaneously at specific times (e.g. every hour we execute the hourly, 5-minute, minute and second scripts), so the individual VM memory usage must be multiplied by the number of VMs. This has decreased the resource usage (both memory and CPU), but not yet to the level we expected.

Another issue that we have tackled is heap fragmentation, a situation where the free memory space in a computer's heap becomes scattered or divided into small, non-contiguous blocks. The heap is the region of a computer's memory used for dynamic memory allocation, where programs can request and release memory as needed during runtime. As ntopng continuously spawns VMs, memory is continuously allocated/deallocated in small chunks, and this promotes fragmentation. Heap fragmentation can lead to inefficient memory usage, as it may become challenging for the memory allocator to find suitable contiguous blocks for allocation requests. In severe cases, it may cause the program to fail due to an inability to allocate the required memory. In short, fragmentation is a severe problem, as severe as a memory leak. In order to address this problem we have combined two techniques:

  • Avoid small Lua memory allocations by enlarging memory allocations to at least 16 bytes, and making sure that the allocated block is a power of two. This has simplified the work of the memory manager, as blocks are easier to compact and reclaim.
  • We have replaced the standard memory allocator (malloc/free/realloc) with a more efficient one to overcome these limitations. The best we have found are jemalloc and tcmalloc, which are more efficient and responsive than the original one (a minimal usage sketch is shown after this list). Please note in the above pictures that with the new memory manager the memory usage is more bursty than before, where after a while the memory usage was stable (but more than double). This said, we have to acknowledge Apple for their great compressed memory allocator that comes with macOS out of the box, as it did not suffer from fragmentation (as happens on Linux and Windows).
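As a practical note, an alternative allocator such as jemalloc can usually be tried without recompiling, by preloading it when starting ntopng; a minimal sketch, assuming a Debian/Ubuntu library path and the default configuration file location (both are assumptions, adjust them for your system):

LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 ntopng /etc/ntopng/ntopng.conf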

Finally, we have reworked and optimised some C++ classes to make wiser use of memory. In addition, we have compressed some hash tables used internally in ntopng by hashing the string key in a way that guarantees no false positives, while avoiding keeping in memory the large string-based keys that were taking more space.

We hope that you have enjoyed this post. The ntopng code is on GitHub, so you can see for yourself what we did in detail. We have some ideas for further improving the memory usage. Stay tuned!

If you want to test this resource-savvy version of ntopng, just update to the latest version on the dev branch. When the next stable release is cut, these changes will also be incorporated in the stable branch.

Enjoy !

Introducing nBox Mini


As previously announced, we have added a new entry to the nBox product list: the nBox Mini. This is a small rugged device with 1 and 2.5 Gbit Ethernet ports, designed to be used as a turnkey solution for monitoring small/mid-size networks (typically up to 255 hosts).

It is preconfigured to accept mirrored traffic (e.g. from a span port) or to act as a bump-in-the-wire (inline) device. It comes with ntopng pre-installed and configured through the nBox user interface. It can optionally also run nProbe to collect flows that can be visualised with ntopng. It has a fair amount of memory, a fast CPU and flash storage that can be used for persisting flows in ClickHouse. The device is CE certified and comes with a 3-year hardware warranty and advanced device replacement in case of failure.

You can buy it online at this page, or read all the specs on the nBox Mini landing page. You can contact us if you need a different version or configuration. Please do not forget that for heavier workloads we have the nBox NetFlow and the nBox Recorder.

Enjoy !

 

 


How ntopng Host Traffic Accounting Works


Although ntop has implemented rich network metrics over the years, the two most important metrics that people keep asking us about are volume (how much) and time (how long).

Timeseries offer a quick view of the traffic and allow people to immediately spot traffic peaks or the absence of transmissions. They are good for traffic analysis, but are too complex for producing accounting data and comparing usage over time. For this reason ntopng provides, for each local host, an additional feature that allows you to see immediately the amount of traffic and the time that a host has spent online.

As shown in the above picture, under the host submenu there is an icon (indicated by the arrow) that allows you to access daily/weekly/monthly traffic reports and see the difference with respect to the previous period. For example, the above picture shows the amount of traffic of a host for the current month (February), compared day-by-day with the previous month (January). Similar reports are available at the daily and weekly granularity.

Enjoy !

DoS Detection Using ntopng and NetFlow/IPFIX


Recently ntopng has been used in academia for detecting DoS (Denial of Service) attacks using NetFlow flows. This thesis (note that the document is written in Italian) shows how ntopng has been successfully used to collect flows and use them to detect DoS attacks.

Enjoy !

How Historical Traffic Behaviour Analysis Works

$
0
0

In ntopng we have implemented various techniques for analysing historical traffic. This post shows you the options available:

  1. In timeseries you can see the current traffic rate (line) or the traffic rate of the previous period of time (dotted). This allows you to visually spot when traffic deviates from the previous period (see for instance, in the chart below, the traffic drop that happened at 10 AM).

  2. You can trigger interface alerts based on statistical traffic analysis (exponential smoothing) when traffic deviates (up or down) from its baseline; a minimal sketch of this kind of check is shown after this list.

Note that when this happens you can trigger an alert by enabling the two behavioural checks below (see Settings -> Behavioural Checks).

  3. You can set Local Traffic Rules (under the Hosts menu) to trigger an alert when traffic goes above or below a given threshold or percentage (in the example below, when the current host traffic is less than 50% of the traffic of the previous hour).
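
Below is a minimal sketch of the exponential-smoothing baseline check mentioned in point 2. The smoothing factor and tolerance are illustrative values, not the parameters ntopng actually uses.

# Hypothetical sketch of an exponential-smoothing baseline check: alert
# when the measured rate deviates too much (up or down) from its baseline.
# ALPHA and TOLERANCE are illustrative, not ntopng's actual parameters.
ALPHA = 0.1        # smoothing factor
TOLERANCE = 0.5    # alert when the rate is 50% above/below the baseline

baseline = None

def check(rate_bps: float) -> None:
    global baseline
    if baseline is None:
        baseline = rate_bps            # bootstrap the baseline
        return
    if rate_bps > baseline * (1 + TOLERANCE):
        print(f"ALERT: {rate_bps:.0f} bps is above baseline {baseline:.0f} bps")
    elif rate_bps < baseline * (1 - TOLERANCE):
        print(f"ALERT: {rate_bps:.0f} bps is below baseline {baseline:.0f} bps")
    baseline = ALPHA * rate_bps + (1 - ALPHA) * baseline   # update the baseline

for sample in [100e6, 105e6, 98e6, 240e6, 30e6]:
    check(sample)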

In summary we have implemented both static and behavioural thresholds (you need both of them) to allow you to continuously detect hidden traffic issues.

Enjoy !

 

Announcing ntop Professional Training: May 2024


ntop tools range from packet capture to traffic analysis and processing, and sometimes it is not easy to keep up with product updates, as well as to master all the tools. This has been the driving force for organising ntop professional training.

This is to announce that we have scheduled the next ntop Professional Training session for May. It will take place online (Microsoft Teams) on the 14th, 16th, 21st, 23rd, 28th and 30th of May 2024 at 3.00 PM CET (9.00 AM EDT). The training will be held in English and each session lasts 90 minutes.

All registered attendees will receive, as part of the training, the webinar link and an ntopng Pro license that you can use after the training to improve your skills.

As an alternative to remote training, if you are attending the CheckMK Conference #10 you will have the chance to attend an in-person training session (the ntop and CheckMK trainings are two different events).

Here you can read more about all the training topics. Should you be interested in joining this session, please book your seat; attendees will have access to all session recordings. Should you have questions, please feel free to mail training@ntop.org.

ntop Cloud: Security Design and Architecture


In late 2023 we announced the beginning of a new project we call ntop Cloud. The first goal of this project is to enable ntop applications to communicate securely, regardless of the network topology where they are deployed. In essence, we want to create a new network overlay that allows ntop applications to communicate and share data. Some use cases:

  • Be notified when an ntop application is no longer active or, more generally, when it changes its status.
  • Implement a public web interface that allows administrators to supervise operations and set up remote instances with a mouse click.
  • Share malware/attack/alarm information among instances so that everything looks like one large distributed network that can cross firewalls and network boundaries. For instance, you can have nProbe running on your laptop connected via 5G, delivering flow data to a collector running in a datacenter. All topologies must be supported.

ntop Cloud Architecture

A simplified overview of the architecture is depicted below.

The core of the cloud is a set of message brokers federated in a cluster, so that you can connect your instances to the closest node (e.g. Europe if you are in France, or the US if you are in Virginia) and the brokers distribute messages in such a way that all your instances can communicate regardless of the node they are connected to. This way you can see all the active instances from a web console (the image below is a work in progress) and perform actions on them, such as updating the software or restarting them.

In the above table, instances run on several different networks, behind a firewall or exposed with a public IP address, and they all look alike, with the cloud hiding all differences in connectivity.

ntop Cloud Security Design

In order to implement this architecture and convince our users that the cloud has various benefits (note that enrolling in the ntop Cloud is not compulsory, and you can keep using our tools as you do today), we decided to do our best to make it secure, based on the following principles:

  • All the communications are TLS 1.3 encrypted and authenticated.
  • Each ntop Cloud user (and their application instances) is unable to talk, through the cloud, to the instances of other users.
  • As the message broker is shared among users, we want to make sure that even in the remote case where a user is able to listen to the messages of other users, they are unable to send/receive or understand the data.

In order to implement all this, every registered ntop Cloud user has a private configuration file (that will be copied into /etc/ntop/ on the sensors) containing the user's public/private keys, generated using the Curve25519 elliptic curve. The cloud manager has its own public/private key pair, and its public key is published in the ntop DNS:

$ host -t TXT cloud._pubkey.ntop.org
 cloud._pubkey.ntop.org descriptive text "b61aaccbf226f2095f48a7ca9d417791f71c4b37e28827cee376b4c9ff5d4c6a"
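
For illustration, the following sketch fetches and decodes that key programmatically; it assumes the dnspython package and is not code that ntop ships.

# Minimal sketch: fetch the ntop Cloud manager public key from DNS and
# decode it into raw bytes. Assumes the dnspython package is installed;
# this is an illustration, not code that ntop actually ships.
import dns.resolver   # pip install dnspython

answers = dns.resolver.resolve("cloud._pubkey.ntop.org", "TXT")
pubkey_hex = answers[0].strings[0].decode()   # hex string as published
pubkey = bytes.fromhex(pubkey_hex)            # 32-byte Curve25519 public key

print(f"Manager public key ({len(pubkey)} bytes): {pubkey_hex}")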

In essence, every message is encrypted twice: the inner message is AES-256 end-to-end encrypted (a fresh shared key and a random cryptographic nonce are created for every message) and it is transported on top of TLS 1.3 (a sketch of such a hybrid scheme is shown after the list below). This guarantees that:

  • When two ntop instances belonging to the same ntop Cloud user communicate, their traffic can be encrypted/decrypted only by that user.
  • When an ntop instance wants to send a message to the ntop Cloud manager (for instance to share blacklist information with all cloud users, e.g. when a new attacker IP is detected), the message is asymmetrically encrypted and can be decrypted only by the ntop Cloud, which must know the sender's identity and public key.
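
As a rough sketch of such a per-message hybrid scheme (Curve25519 key agreement plus AES-256-GCM), consider the following; the key names, the use of HKDF and the message format are our own assumptions and do not reflect ntop's actual implementation.

# Hypothetical sketch of a Curve25519 + AES-256-GCM hybrid encryption step,
# similar in spirit to the end-to-end layer described above. Key names,
# the HKDF parameters and the message format are assumptions only.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Long-term keys of the two peers (in the ntop Cloud these live in the
# per-user configuration file copied into /etc/ntop/).
sender_key = X25519PrivateKey.generate()
receiver_key = X25519PrivateKey.generate()

def encrypt_message(plaintext: bytes):
    # Derive a fresh AES-256 key for this message and encrypt it with a
    # random nonce; the result is then carried on top of TLS 1.3.
    shared = sender_key.exchange(receiver_key.public_key())
    salt = os.urandom(16)                       # fresh randomness per message
    aes_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=salt, info=b"ntop-cloud-demo").derive(shared)
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, plaintext, None)
    return salt, nonce, ciphertext

salt, nonce, ciphertext = encrypt_message(b"alert: new attacker IP detected")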

In order to avoid sharing user information with the cloud, when a user registers in the ntop Cloud by connecting to https://cloud.ntop.org, the user keys are generated locally inside the browser and are NEVER stored or shared with the ntop Cloud in clear text. In other words:

  • ntop Cloud users are the only ones who are responsible for keeping data safe. The ntop Cloud is just a secure transport that allows instances to communicate.
  • There is no way that the ntop Cloud can communicate with user instances as encryption keys have not been shared with the ntop Cloud.
  • When the user connects to the ntop Cloud GUI (shown above in this post), it is the web browser that encrypts/decrypts data and communicates directly with the user instances.

ntop Cloud Availability

You can start playing with the ntop Cloud if you wish, but we are not yet ready to release it. Basic communications are working, the web GUI is still very basic, and the network infrastructure is not the final one we plan to use in production. We have written this blog post so that you can provide us with early feedback, allowing us to address glitches before the final release.

Soon we will schedule a webinar where all of this is described in detail, so that our community can speak up and provide feedback.

Enjoy ! 
