Channel: ntop

Combining traffic recording with visibility at 100 Gbps


A few months ago, with ntopng 3.8, we introduced support for continuous traffic recording, which lets you drill down from timeseries-level historical data to raw packets. This is useful when troubleshooting a network issue or analysing a security event, as it combines traffic visibility with raw traffic analysis.

In order to record raw data, ntopng relies on the n2disk application, which can capture full-sized network packets at wire speed, up to 100 Gbps, from a live network interface and write them into pcap files without any packet loss. In a previous post we have seen how to build a (cheap) continuous packet recorder, providing instructions for configuring the storage to match the expected performance and for sizing it to get the desired data retention.

Enabling traffic recording in ntopng is really straightforward and requires just a few clicks in the Interface -> (Recording) page, as you can see in the screenshot below and as explained in the User's Guide. This works as long as you process traffic at low/mid rates (up to 1-2 Gbps) and the performance of commodity network adapters with standard drivers is sufficient. In this configuration, both ntopng and n2disk capture traffic from the same interface by relying on the Linux kernel to deliver a copy of the traffic to each of them.

However, if you need to process traffic at high rates (10/40 Gbit and above), you should consider capture technologies able to deliver higher performance, such as PF_RING ZC for Intel adapters or specialised FPGA adapters. As both PF_RING ZC and FPGA adapters are based on kernel-bypass technologies, the drawback is that they do not allow multiple applications to capture the same traffic at the same time: this means that you cannot run ntopng and n2disk on the same interface simultaneously.

In order to overcome this limitation, n2disk has been extended with a new feature: the ability to export flow metadata to ntopng, similar to what nProbe does. In fact, n2disk can be configured to capture raw packets, dump PCAP data to disk, and at the same time export flow metadata to ntopng through ZMQ. n2disk can do this at high rates thanks to its internal multithreaded architecture and to the optimised PF_RING FT flow-processing library, which provides high-speed Layer 7 flow classification.

n2disk ZMQ export

The n2disk service is configured by creating a configuration file /etc/n2disk/n2disk-<instance name>.conf and is controlled with the systemctl utility on operating systems based on the systemd service manager. For details about the n2disk configuration please refer to the n2disk User's Guide. In order to configure n2disk to export flow metadata, a ZMQ endpoint should be added to the configuration file using the --zmq <endpoint> and --zmq-export-flows options.
It is good practice to run n2disk as the ntopng user (-u <user> option), to make sure that the ntopng process can access the PCAP data recorded by n2disk and run traffic extractions. This user is created when installing the packaged version available at http://packages.ntop.org.
Please find below a sample configuration, let's call it /etc/n2disk/n2disk-nt0.conf, which is tied to the n2disk service instance n2disk@nt0 by the configuration file suffix. In this example n2disk aggregates traffic from 2 ports of a Napatech adapter, builds the index and timeline, and exports flow information through ZMQ.

--interface=nt:0,1
--dump-directory=/storage/n2disk/pcap
--timeline-dir=/storage/n2disk/timeline
--disk-limit=80%
--max-file-len=1000
--buffer-len=4000
--max-file-duration=60
--index
--snaplen=1536
--writer-cpu-affinity=0
--reader-cpu-affinity=1
--compressor-cpu-affinity=2,3,4,5
--index-on-compressor-threads
-u=ntopng
--zmq=tcp://127.0.0.1:5556
--zmq-probe-mode
--zmq-export-flows

This n2disk service can be started with systemctl as shown below. As you likely want the service to run again after a reboot, you also need to enable it.

systemctl enable n2disk@nt0
systemctl start n2disk@nt0

In order to process in ntopng the flow metadata coming from n2disk, you need to add a ZMQ collector interface to the ntopng configuration file /etc/ntopng/ntopng.conf. Please find below an example of a basic ntopng configuration with a ZMQ endpoint in collector mode (the trailing "c" in the endpoint stands for collector):

-i=tcp://*:5556c

This ntopng service can be (re)started with systemctl as below:

systemctl restart ntopng

As a last step, you need to set the n2disk instance as the external PCAP source in ntopng in the Interface -> Settings page, so that you can check the n2disk service status and drill down to the packet level in ntopng. In order to do this, select the proper instance from the Traffic Recording Provider dropdown in the collector interface settings page, as shown in the screenshot below.

At this point everything is set and you should be able to drill down from charts and activities to flows and packets, even while processing 40/100 Gbps! In fact, we have been able to process a full 100 Gbps using "bulk" capture mode (n2disk can work in this mode when used in combination with Napatech or Fiberblaze adapters) with multithreaded dumping to 8 NVMe disks in parallel on a 12-core 3 GHz Intel Xeon Gold, while sending flow metadata to ntopng. Please take a look at the Continuous Traffic Recording section to learn more about the integration of ntopng with n2disk.

Enjoy!


Packets vs eBPF/System Events: Positioning nProbe vs nProbe Agent


nProbe (and ntopng) is a traditional packet-based application, whose lifecycle is:

  • Capture a packet and dissect/decode it
  • Update the representation in memory of the network traffic (e.g. the flow table)
  • Export the information

Using packets for traffic analysis has several advantages, including:

  • Ability to analyse traffic using a port mirror/TAP without installing an agent on every monitored host, something that might be a nightmare if your network is heterogeneous.
  • Scalability issues were solved years ago (e.g. see PF_RING ZC), so monitoring a 40/100G network is no longer a problem.

However, while this model has worked for decades and is still a good one, there are now some new challenges:

  • As packets do not carry metadata, nProbe is unable, for instance, to report the user/application that produced the traffic (information that might be relevant when doing security analysis).
  • Limited visibility in virtualised environments (both VM- and container-based): many traffic flows are internal to the system, so by inspecting packets we only see a portion of the picture, being unable to monitor the flows internal to the system.

One of the reasons why protocols such as SNMP failed to become popular for host-based traffic monitoring and stayed limited to device (e.g. router/switch) management is the need to install an agent on every system. This creates problems in particular on proprietary systems (e.g. if you have purchased a costly database system, the manufacturer will either prevent you from installing software on it or void your support/warranty if you do) and on embedded systems such as a NAS. On the other hand, people demand tools able to provide advanced network visibility, not for all systems but in particular for some of them (those that are relevant for an organisation), where monitoring data should be rich and not just packet/byte counts.

This is the reason why we have developed nProbe Agent, a lightweight, event-based (read: non-packet-based) agent that you install on the hosts you want to monitor. This tool provides very detailed monitoring metrics, including latencies and user/process/container information, that can be used to complement the traditional packet-based traffic analysis provided by nProbe. The goal is to increase visibility on key company assets while avoiding the installation of an agent on all systems, which might be unfeasible or simply impractical.

In the picture above you can see a typical deployment where nProbe monitors all network traffic, and nProbe Agent is installed on key assets to provide detailed network visibility. Of course, if you want to monitor just one important asset, you can use the agent alone, which can export the asset's monitoring data without also deploying nProbe, unless you want to monitor additional hosts beyond that asset. The main advantages of the agent with respect to nProbe include:

  • Visibility of intra-process and intra-container communications. In the case of containerised environments, you can install the nProbe Agent in an additional container to monitor the whole system.
  • Low memory and CPU utilisation (typical CPU usage is 1-3%), independent of the traffic rate/volume.
  • Detailed latency and environmental information (e.g. process producing a certain traffic flow) read directly from the kernel.

In summary:

  • You can use the nProbe Agent to provide detailed monitoring information without having to deal with packets, a practice that falls short in virtualised environments.
  • nProbe-exported information can be complemented with metrics exported by the agent, in order to implement different levels of visibility based on the importance of specific assets.
  • One nProbe Agent instance can monitor all the containers/processes running on a system using low CPU/memory resources.
  • Using a hybrid nProbe/nProbe Agent approach you can selectively decide the level of visibility you need to monitor your assets.

Enjoy!

Merging Infrastructure and Traffic Monitoring: Integrating ntopng with Icinga


Icinga2 is an open source monitoring system which checks the availability of hosts and services, notifies users of outages, and generates performance data for reporting. Thanks to its scalability and extensibility, it has become very popular (as the Nagios successor) and suitable for monitoring complex environments, even across multiple locations.

Although popular, it falls short when it comes to monitoring how the network is being used by a certain host. There are several plugins for network monitoring available both in the Icinga Template Library and on the Icinga Exchange; however, they only provide very limited checks such as interface throughput, errors, or up-or-down status. Other plugins are available to monitor the status of services such as HTTP/HTTPS web servers or DNS servers, but again they don't tell you anything really valuable apart from a simple green-or-red/working-or-not-working flag.

We have the experience and the technology to provide additional, valuable information on how a particular host or service is being used, network-wise. For example, we can tell if a particular host is undergoing a SYN flood attack, if a certain web server is being accessed by a blacklisted or malware IP address, or if a DNS server is being abused. This technology is ready and comes with one of our most popular tools, ntopng. So the question was: how to deliver this extra knowledge from ntopng straight into Icinga2? Writing an Icinga2 plugin could do the trick.

The basic idea behind the plugin is that, given any Icinga2-monitored host with IP address a.b.c.d, it is possible to leverage the plugin to connect to ntopng and extract the network status and health of a.b.c.d, returning this information to Icinga2 which, in turn, can use it to decide on the status of the host and its services.

ntopng Icinga2 Plugin

The Icinga2 plugin we have decided to write connects to the ntopng REST API to query for host alerts. Specifically, it queries:

  • Host Engaged Alerts to capture ongoing host network issues (for example, the host is a victim of a SYN flood attack)
  • Host Flow Alerts to capture suspicious or malicious flows involving a particular host (for example, the host has been contacted by a blacklisted IP).
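As a sketch of the plugin logic, the check boils down to querying those alert counters for a host and mapping them onto the standard Icinga/Nagios exit codes. The endpoint path below is a made-up placeholder for illustration; refer to check_ntopng.py for the actual REST calls and parameters:

```python
#!/usr/bin/env python3
# Illustrative sketch of an Icinga2-style check against the ntopng REST API.
# The endpoint path below is an assumption, NOT the exact URL used by
# check_ntopng.py; only the OK/CRITICAL mapping idea comes from the article.
import json
import urllib.request

# Standard Nagios/Icinga plugin exit codes
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def fetch_host_alerts(base_url, host_ip):
    """Query a hypothetical ntopng endpoint for the alerts of one host."""
    url = f"{base_url}/lua/rest/host_alerts.lua?host={host_ip}"  # assumed path
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def plugin_status(engaged_alerts, flow_alerts):
    """Map alert counts to an Icinga service status, as the plugin does."""
    if engaged_alerts > 0 or flow_alerts > 0:
        return CRITICAL, f"CRITICAL - {engaged_alerts} engaged, {flow_alerts} flow alerts"
    return OK, "OK - no host alerts"
```

When Icinga2 runs the check, the exit code decides the service state, while the message string ends up in the service status output.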

The plugin code is available at https://github.com/ntop/ntopng/tree/dev/tools/icinga2 along with other files necessary for Icinga2 to properly interface with the plugin.

Setting Up the ntopng Icinga2 Plugin

To set up the plugin, assuming you’ve already an Icinga2 instance up and running, perform the following steps:

Download the plugin file check_ntopng.py into the PluginContribDir directory. The path to this directory can be found in the Icinga2 constants.conf file, typically located under /etc/icinga2/ on Linux.

cat /etc/icinga2/constants.conf | grep PluginContribDir

const PluginContribDir = "/usr/lib/nagios/plugins"

In this example, the PluginContribDir is /usr/lib/nagios/plugins.

Once the plugin is in place, it is necessary to download the file check_ntopng_command.conf into /etc/icinga2/conf.d/ or any other directory read by Icinga2 at startup. The file contains the definition of a CheckCommand object, necessary to tell Icinga2 how to interface with the plugin.

Then, download and place the file check_ntopng_service.conf in /etc/icinga2/conf.d/ or in any other directory which Icinga2 is aware of. This file contains the definition of two Service objects, one to check for host engaged alerts ("ntopng-icinga-host-health") and another to check for host flow alerts ("ntopng-icinga-host-flows-health"). These services are automatically applied to all the Icinga2-monitored hosts.

Finally, a bunch of constants should be configured to tell Icinga2 how to properly reach and authenticate to the ntopng REST API. Such constants go inside file constants.conf, the same file used above to locate the PluginContribDir directory.

The constants are the following:

# cat /etc/icinga2/constants.conf | grep Ntopng
/* Ntopng */
const NtopngHost = "127.0.0.1"
const NtopngPort = 3000
const NtopngInterfaceId = 0
const NtopngUser = "admin"
const NtopngPassword = "admin1"
const NtopngUseSsl = false
const NtopngUnsecureSsl = false

NtopngHost and NtopngPort tell Icinga2 how to connect to the ntopng REST API, and NtopngUseSsl whether SSL has to be used for the connection (setting NtopngUnsecureSsl to true prevents the plugin from checking SSL certificate validity). When ntopng authentication is enabled, NtopngUser and NtopngPassword indicate the user/password pair that Icinga2 will use to authenticate to the REST API. Finally, NtopngInterfaceId tells Icinga2 the id of the ntopng interface responsible for monitoring the traffic.

Running Plugin Checks

Let’s see now how this plugin works and what we can expect from it. Let’s assume Icinga2 monitored host with address 192.168.2.222 is trying to contact a malware host, maybe because it has been infected.

Before the contact, service "ntopng-icinga-host-flows-health" is OK

But as soon as 192.168.2.222 contacts a malware host (a contact which, for the sake of this example, has been simulated by pinging one of the blacklisted hosts), the service becomes CRITICAL and all the necessary notifications are sent by Icinga2.

At this point, to get additional information, one can jump into the ntopng web user interface and find this malware flow among the host flow alerts.

Similarly, let’s assume 192.168.2.222 has been configured, in ntopng, to be considered a victim of a SYN flood when it receives more than 10 SYN per second for three consecutive seconds.

Before the SYN flood, service "ntopng-icinga-host-health" is OK

But as soon as 192.168.2.222 becomes a victim of the SYN flood (simulated, for the sake of this example, with nmap -sS 192.168.2.222), the service becomes CRITICAL.

Again, to get additional information, one can jump into the ntopng web user interface, where this alert will show up among the host engaged alerts.

Conclusion

We believe this plugin is the first step towards making Icinga2 not just a tool which checks the availability of hosts and services, but also a tool which gives extended information on how such hosts and services are being used from a network perspective.

Feel free to try the plugin and give us feedback!

 

Using RFC8520 (MUD) to Enforce Hosts Traffic Policies in ntopng


RFC8520 (Manufacturer Usage Description, MUD) specifies the intended network behaviour of a network device from the manufacturer's standpoint. As it is defined in JSON format by the device manufacturer, it can be used for simple single-task devices, such as a printer or an access point, whose communications are simple and well defined. Typically a device specifies the URL of a MUD file in its DHCP requests [image below courtesy of osMUD]


that is defined by the manufacturer and specifies what IPs/ports the device can access. The URL is passed to an additional network component, named MUD manager, that downloads the file and reports the allowed policy for the device to the local policer (typically the network firewall), so that unexpected communications are not allowed. As you can see, this model can only work with simple devices whose behaviour can be mapped easily; it falls short with general-purpose devices, such as a tablet or a PC, where it is not possible for the manufacturer to decide the allowed device policy by restricting communications to specific sites and services.

Despite all these limitations, MUD is a good starting point for representing device behaviour in a simple JSON file that can be exchanged across hosts to enforce device network policies. In ntopng we have recently enhanced the alerting system to track misbehaving traffic flows, and with MUD we can take this to the next level by using it to track unwanted communications (in passive monitoring mode) or block them (in inline mode). Before doing this, we had to enhance MUD to make sure it works not just for single-task devices but also for general-purpose ones. The solution we adopted is to observe the device behaviour for some time (e.g. one day) in a sort of "training mode" and, past this time, use the learned behaviour to match network traffic flows. To make this effective, we have extended the MUD specification with extra information fields that make it suitable for generic devices as well. In particular, for non-local traffic flows we have replaced IPs/ports with nDPI information, and we use fingerprints (JA3 and HASSH, recently implemented in nDPI) to further characterise network traffic. A typical MUD block now looks like:

{
   "matches":{
     "ja3":{
       "source-fingerprint":"d3e1de2ca313c6c0a639f69cc3e924a4"
     },
     "ipv4":{
       "protocol":6
     },
     "ndpi_l7":{
       "application":"tls"
     }
   },
   "actions":{
     "forwarding":"accept"
   },
   "name":"from-ipv4-pc-ntop-64"
 }
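To illustrate how such a MUD entry could drive policy decisions, here is a sketch of a matcher. Only the matches/actions layout comes from the MUD block above; the flow-dictionary keys are our own illustrative naming, not ntopng's internal representation:

```python
# Sketch: match a traffic flow against the (extended) MUD ACL entry shown
# above. The flow dict keys (ja3, l4_proto, ndpi_app) are illustrative
# assumptions; only the "matches"/"actions" layout comes from the MUD block.

mud_entry = {
    "matches": {
        "ja3": {"source-fingerprint": "d3e1de2ca313c6c0a639f69cc3e924a4"},
        "ipv4": {"protocol": 6},
        "ndpi_l7": {"application": "tls"},
    },
    "actions": {"forwarding": "accept"},
    "name": "from-ipv4-pc-ntop-64",
}

def flow_matches(flow, entry):
    """Return True when every criterion in the MUD entry matches the flow."""
    m = entry["matches"]
    return (flow.get("ja3") == m["ja3"]["source-fingerprint"]
            and flow.get("l4_proto") == m["ipv4"]["protocol"]
            and flow.get("ndpi_app") == m["ndpi_l7"]["application"])

def verdict(flow, entries):
    """Accept flows matching a learned entry; flag everything else."""
    for e in entries:
        if flow_matches(flow, e):
            return e["actions"]["forwarding"]
    return "alert"  # unknown behaviour: alert (passive mode) or drop (inline)
```

In passive mode the "alert" outcome would trigger an ntopng alert, while an inline deployment could drop the flow instead.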

If you update to the latest ntopng build you will see (for local hosts) a new preference in the host page that allows you to tell ntopng to start monitoring the host traffic, in order to learn the device network behaviour and thus create a host MUD specification.

After you enable MUD recording you will see two new icons that allow you to download the MUD file, or to discard the known MUD host information and start over. This is work in progress, as we currently only generate the MUD without using it. In particular, the items under development include:

  • Ability to import an existing MUD file instead of learning it.
  • Trigger traffic alerts based on the known MUD behaviour.
  • As ntopng is able to discover network assets and map them to categories, we want to automate MUD generation by removing the drop-down menu where a network admin is asked for the device type (either generic or single-task device), as this information can be inferred by looking at network traffic.
  • Enable a MUD auto-learn mode, so that MUD generation is triggered whenever a local device appears on the network.

While we develop MUD support in ntopng, we would be glad if you could tell us what you think of the current plan, and send us ideas and comments.

 

Enjoy!

 

 

How Encryption Changed Network Traffic (Monitoring). Finally.


For years, traffic monitoring tools assumed traffic was in clear text. This is because, when the Internet was created, all the main protocols such as DNS, HTTP, SMTP, Telnet and POP were in clear text. With this practice it was easy to report, say, the breakdown of DNS response codes, or to detect brute-force attacks on HTTP authentication.

With the advent of traffic encryption, the (bad?) practice of inspecting traffic was no longer possible, and network developers got several headaches. Those who were unable to see the new opportunities brought by traffic encryption started asking questions like "is your tool able to decode SSL?" or "nDPI is useless with encrypted traffic, isn't it?". While these questions are legitimate, they indicate that the mindset of those asking them was simply wrong. Look at this chart, created by ntopng on a WiFi network earlier this summer.

The Internet we used to know is gone: DNS and the other major clear-text protocols are listed under Other, and basically all the rest of the traffic is encrypted. At ntop we believe that encryption opens up many opportunities and that it is very good news both for privacy and for network traffic application developers, because it lets us simplify many things. Take for instance the IDS market, where tools like Suricata and Zeek are very popular feeds for many cybersecurity companies. One of the core principles of these tools is to search for specific patterns in traffic streams and report an alert when a match is found. While this practice worked for a long time, today it is no longer a good idea. Look for instance at the Emerging Threats rules and you will notice that most of them are becoming useless.

$ grep ^alert emerging-all.rules | grep -v HTTP_ | grep -v FTP | grep -v POST | grep -v GET | grep -v urilen | grep -v http_uri | wc -l
    8499
$ grep #alert emerging-all.rules | wc -l
    6923
$ grep ^alert emerging-all.rules | wc -l
   17940
$ grep ^alert emerging-all.rules | grep DNS |wc -l
    1999

Out of about 18k rules, about 7k are commented out as obsolete, and ~8k of the remaining rules are not for HTTP/FTP protocols. Most of these remaining rules are either for obsolete apps (the one below was last updated in 2010)

alert tcp $HOME_NET any -> $EXTERNAL_NET any (msg:"ET CHAT Yahoo IM conference offer invitation"; flow: to_server,established; content:"YMSG"; nocase; depth: 4; content:"|00|P"; offset: 10; depth: 2; reference:url,doc.emergingthreats.net/2001262; classtype:policy-violation; sid:2001262; rev:5; metadata:created_at 2010_07_30, updated_at 2010_07_30;)

or for protocols such as DNS that are becoming encrypted (see RFC 8484, DNS Queries over HTTPS). Firefox already uses it, and when operating systems like iOS/Android or browsers like Chrome enable it by default, clear-text DNS will become a minor Internet protocol in terms of traffic volume.

So, given that traffic encryption is already here and will soon be pervasive, at ntop we have decided to live with it and make our tools encryption-friendly: like it or not, this is the future of Internet traffic. We have extended nDPI to handle encryption as a first-class citizen, and in the upcoming nDPI v3 version we have already implemented:

  • TLS JA3
  • TLS Certificate Fingerprint, validity, unsafe ciphers, SNI.
  • SSH HASSH and client/server signatures.

This allows all nDPI-based applications to use industry-standard techniques to fingerprint encrypted traffic.

In addition, we have enhanced nDPI to go beyond that and figure out what is happening behind an encrypted connection. Typical questions would be: is this SSH connection interactive, or is it an SCP file transfer (read: possible data exfiltration)? Is this TLS connection safe, or is malware using it? To answer the latter questions, Cisco Joy has been integrated into nDPI (see ndpiReader -J for more info), and we plan to go beyond this after the 3.0 release.
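As a toy illustration of the "interactive SSH vs file transfer" question: packet-size statistics alone already carry a lot of signal. The heuristic below is our own simplification for illustration, not the algorithm used by nDPI or Cisco Joy:

```python
# Toy heuristic: interactive SSH sessions are dominated by small keystroke
# packets in both directions, while an SCP-style transfer pushes many large,
# near-MTU packets in one direction. This is NOT nDPI/Joy's real algorithm;
# it only shows what traffic metrics can reveal without decrypting anything.

def classify_ssh(pkt_lens_c2s, pkt_lens_s2c, big_threshold=1000):
    """Classify an SSH flow from client->server / server->client packet sizes."""
    big_c2s = sum(1 for l in pkt_lens_c2s if l >= big_threshold)
    big_s2c = sum(1 for l in pkt_lens_s2c if l >= big_threshold)
    total = len(pkt_lens_c2s) + len(pkt_lens_s2c)
    if total == 0:
        return "unknown"
    # A majority of large packets suggests a bulk transfer
    if (big_c2s + big_s2c) / total > 0.5:
        return "file-transfer"
    return "interactive"
```

A real classifier would also use inter-arrival times and direction changes, which is exactly the kind of metric (SPLT/IAT) that nDPI v3 computes.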

In essence, due to encryption, most monitoring applications will be based on these principles:

  • As you can't inspect encrypted traffic anymore, validate it using fingerprints/signatures (abuse.ch is a great source).
  • Use IP blacklists to identify traffic coming from/going towards malware hosts (iplists.firehol.org for instance).
  • Use the TLS SNI to categorise encrypted traffic (nDPI has done this for a long time).
  • Use traffic metrics to characterise the encrypted traffic and figure out malware activities WITHOUT decoding it.
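A minimal sketch of the first three principles might look as follows. The blacklist entries and the domain suffix are made-up placeholders; a real deployment would load feeds such as abuse.ch (JA3) and iplists.firehol.org (IPs):

```python
# Sketch of fingerprint/blacklist-based validation of encrypted flows.
# The entries below are made up for illustration; load real feeds such as
# abuse.ch (JA3 fingerprints) and firehol (IP lists) in practice.

JA3_BLACKLIST = {"6734f37431670b3ab4292b8f60f29984"}  # example entry
IP_BLACKLIST = {"203.0.113.66"}                       # TEST-NET-3 address

def assess_flow(src_ip, dst_ip, ja3=None, sni=None):
    """Return a list of findings for an encrypted flow, without decrypting it."""
    findings = []
    if ja3 in JA3_BLACKLIST:
        findings.append("blacklisted JA3 fingerprint")
    if src_ip in IP_BLACKLIST or dst_ip in IP_BLACKLIST:
        findings.append("communication with blacklisted IP")
    if sni is not None and sni.endswith(".invalid"):  # placeholder category rule
        findings.append("SNI matches a flagged category")
    return findings
```

Note that nothing here touches the payload: fingerprints, addresses and the SNI are all visible on the wire even when the traffic is encrypted.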

nDPI is ready for these new challenges, and so are all ntop tools. It's time for you to start playing with them and get full traffic visibility in your network without asking "can you inspect encrypted traffic?": if you have read this article up to here, you now know the answer.

Enjoy!

PS. If you are now wondering whether many IDSs are obsolete with encrypted traffic, you know this answer too. It's time to rethink them, making them faster and focused on what matters today, not 15 years ago when they were first designed.

Introducing nDPI v3: Encrypted/Malware Traffic Analysis with Ease


Those who thought that DPI died with the advent of traffic encryption should play with nDPI v3, which we are introducing today. As already discussed, the pervasive use of encrypted traffic requires a new mindset when analysing network traffic. We decided to enhance nDPI with the best traffic analysis techniques available today, in particular Cisco Joy, and with facilities for calculating metrics such as entropy and standard deviation that can be used to identify hidden traffic properties that would otherwise be invisible. Thanks to all this, nDPI is now able to report whether an SSH connection is interactive or a file transfer, or whether a TLS connection hides malware activity. In essence, we have tried to turn encryption from a problem into a new opportunity: the goal is to create a baseline for developing traffic analysis applications without having to deal with low-level traffic details (nDPI takes care of them), letting developers focus on using the metrics rather than computing them.
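The kind of metrics mentioned above (nDPI exposes them natively through its C API, see the ndpi_data_XXX() calls in the changelog) can be illustrated in a few lines of Python. The function names here are ours, not nDPI's:

```python
# Python illustration of metrics nDPI v3 computes natively: Shannon entropy
# and standard deviation over payload bytes / packet lengths. High payload
# entropy is a strong hint that a stream is encrypted or compressed.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte: ~0 for constant data, ~8 for random data."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def stddev(values) -> float:
    """Population standard deviation of a sequence of numbers."""
    if not values:
        return 0.0
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
```

For example, the entropy of a run of identical bytes is 0, while a buffer containing every byte value exactly once scores the maximum of 8 bits per byte.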

Thanks to the SSH and TLS fingerprint additions, you can now create security-oriented applications in a few lines of code: lightweight (read: efficient/slim vs slow/fat) IDS/IPS applications using nDPI and PF_RING FT. You can create your mini-Zeek or mini-Suricata over the weekend, as we provide all the necessary ingredients. In order to demonstrate the new nDPI capabilities, we have enhanced the ndpiReader application to export all the new data we compute, and added the ability to export data in CSV, so you can use your favourite data analysis libraries to analyse traffic data with the new, richer metrics. For our Python users, Python bindings have been developed, so that they can start playing with DPI using their favourite language.

As people are using nDPI more and more for inline traffic analysis (read: IPS or traffic blocking), we have changed the library design: nDPI now reports the application protocol immediately, in case you just need to block (e.g. via JA3 blacklisting) or prioritise traffic ASAP, or it can wait until all metadata has been dissected (see the new API call ndpi_extra_dissection_possible()) whenever you need full metadata visibility. For instance, the TLS certificate fingerprint usually requires more than 10 packets, as the certificate is exchanged after the initial negotiation; this might be too late for an IPS, so you now have the choice to do what suits you.

This is all for v3. You can read in the changelog below what we did in detail. However, this is not yet the time for resting, as we need to look forward: there are many things in the pipeline, including for instance the nDPI OVS integration. Stay tuned, as the best is yet to come.

Enjoy!


Changelog

 

New Features

  • nDPI now reports the protocol ASAP, even when specific fields have not yet been dissected because the relevant packets have not yet been observed. This is important for inline applications that can immediately act on traffic. Applications that need full dissection must call the new API function ndpi_extra_dissection_possible() to check whether metadata dissection has been completed, or whether there is more to read before declaring it complete.
  • TLS (formerly identified as SSL in nDPI v2.x) is now dissected more deeply: certificate validity is extracted, as well as the certificate SHA-1.
  • ndpiReader can now export data in CSV format with option -C
  • Implemented Sequence of Packet Length and Time (SPLT) and Byte Distribution (BD) as specified by Cisco Joy (https://github.com/cisco/joy). This allows the detection of malware activities on encrypted TLS streams. Read more at https://blogs.cisco.com/security/detecting-encrypted-malware-traffic-without-decryption
    • Available as library and in ndpiReader with option -J
  • Promoted the usage of protocol categories, rather than protocol identifiers, to classify protocols. This allows application protocols to be clustered into families and thus better managed by users/developers, rather than dealing with hundreds of protocols unknown to most people.
  • Added Inter-Arrival Time (IAT) calculation used to detect protocol misbehaviour (e.g. slow-DoS detection)
  • Added data analysis features for computing metrics such as entropy, average, stddev and variance in a single and consistent place. This should ease traffic analysis in monitoring/security applications. New API calls such as ndpi_data_XXX() have been implemented to handle these calculations.
  • Initial release of Python bindings available under nDPI/python.
  • Implemented the search of human-readable strings, to promote data exfiltration detection
    • Available as library and in ndpiReader with option -e
  • Fingerprints: JA3 (TLS) and HASSH (SSH)
  • Implemented a library to serialize/deserialize data in both Type-Length-Value (TLV) and JSON format
    • Used by nProbe/ntopng to exchange data via ZMQ

New Supported Protocols and Services

  • DTLS (i.e. TLS over UDP)
  • Hulu
  • TikTok/Musical.ly
  • WhatsApp Video
  • DNSoverHTTPS
  • Datasaver
  • Line protocol
  • Google Duo and Hangout merged
  • WireGuard VPN
  • IMO
  • Zoom.us

Improvements

  • TLS
    • Organizations
    • Ciphers
    • Certificate analysis
  • Added PUBLISH/SUBSCRIBE methods to SIP
  • Implemented STUN cache to enhance matching of STUN-based protocols
  • Dissection improvements
    • Viber
    • WhatsApp
    • AmazonVideo
    • SnapChat
    • FTP
    • QUIC
    • OpenVPN support for UDP-based VPNs
    • Facebook Messenger mobile
    • Various improvements for STUN, Hangout and Duo
  • Added new categories: CUSTOM_CATEGORY_ANTIMALWARE, NDPI_PROTOCOL_CATEGORY_MUSIC, NDPI_PROTOCOL_CATEGORY_VIDEO, NDPI_PROTOCOL_CATEGORY_SHOPPING, NDPI_PROTOCOL_CATEGORY_PRODUCTIVITY and NDPI_PROTOCOL_CATEGORY_FILE_SHARING
  • Added NDPI_PROTOCOL_DANGEROUS classification

Fixes

  • Fixed the dissection of certain invalid DNS responses
  • Fixed Spotify dissection
  • Fixed false positives with FTP and FTP_DATA
  • Fix to discard STUN over TCP flows
  • Fixed MySQL dissector
  • Fix category detection due to missing initialization
  • Fix DNS rsp_addr missing in some tiny responses
  • Various hardening fixes

nProbe Cento 1.10 is Out


After the nDPI v3 release, today we have rolled out an incremental update of nProbe Cento. In addition to fixing a few issues, this release brings into Cento some of the fingerprints implemented by nDPI, so that we can move forward in combining security with network metrics. In the coming weeks we'll benchmark this new release and make plans for the v2 release due early next year.

Enjoy!

 

Changelog

Main New Features

  • Added JA3/TLS/SSH fingerprint export, provided by nDPI v3

Fixes

  • Fixed DNS dissection that caused wrong results
  • Fixed crash with some DNS queries
  • When a protocol is guessed, in the text file the protocol name now ends with _GUESS (example: TLS_GUESS)
  • Reworked JSON export to use the new serialization features of nDPI

Do You Know What Hackers Hide in SSL/TLS?


At ntop we believe that the future of traffic monitoring and network security will hinge on the ability to inspect the behaviour of encrypted communications. We are therefore fortunate that Sam Bocetta, a technical writer focused on network security and open source applications, accepted to write about encryption.

by Sam Bocetta

SSL/TLS authentication has been around for a while. As one of the first internet safety protocols, an SSL certificate, signified by a green padlock on the far left of the URL bar, is supposed to impart feelings of trust when internet users see that a website is authenticated. However, hackers, being the innovative mischief makers they are, have found a way to commit cyber crimes under the cover of a secure socket layer.

How Does SSL/TLS Authentication Work?

SSL uses a technology known as asymmetric encryption. With this type of encryption, there are two security keys: one public and one private. The public key shared via the SSL certificate tells all browsers how to encrypt the data. The private key resides on the website’s backend servers, where it’s decrypted to complete the request. Website owners are responsible for obtaining an SSL certificate from a proper authority, although some web host providers offer SSL encryption as part of their service. This type of encryption is critical for any website or application that involves the transfer of sensitive information like passwords, account numbers, and other financial data because it keeps outsiders from intercepting the transmission.
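To make the asymmetry concrete, here is a deliberately toy RSA example with tiny textbook primes (real TLS keys are 2048 bits or more; this only illustrates the public/private key split, not a usable cipher):

```python
# Toy RSA with tiny primes: illustrates public/private key asymmetry only.
# NEVER use key sizes like this in practice.
p, q = 61, 53
n = p * q              # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                 # public exponent (shared via the certificate)
d = pow(e, -1, phi)    # private exponent (stays on the server), Python 3.8+

message = 42
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key
plaintext = pow(ciphertext, d, n)  # only the private key holder can decrypt

assert plaintext == message
```

Whoever holds only the public pair (e, n) can encrypt but cannot feasibly recover d, which is why the private key must never leave the server.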

The Flaws Inherent in the System

Encryption is added on top of the HTTP protocol to create HTTPS, i.e. secure HTTP: HTTP running over Transport Layer Security (TLS). Occasionally, users will receive a message that the SSL certificates don't match. This can be due to a simple client/server mismatch or some other benign reason. However, enterprising hackers have found a way to get around the encryption by abusing TLS once the browsing session begins, and you won't get any error messages or warnings that it's happening. In 2017, the Cyren blog reported that 37 percent of malware was using HTTPS as a vehicle to introduce viruses. The malware is engineered as network packets in such a way that it can get past the initial encryption and hide in the end user's computer, infiltrate your corporate network, or act as a host on its own servers, from where it can infect systems with viruses remotely. Regular security measures don't always work because the malware payload is encrypted and may not be identified by firewalls or intrusion detection systems (IDSs). Most users assume that any website with a valid SSL certificate can be trusted, but this is not necessarily the case. Even app stores aren't safe: Chrome and other stores have been found to carry lookalike third-party security plugins. They are designed to look and function like legit ones, but they are used for crypto-jacking. So, make sure that you only use apps and plugins on your website that come from trusted developers, and try to download them directly from an official website.

Protecting Your Website

One of the first measures to take is to use one of the top web hosting services rated for security and uptime. When researching choices, look for cloud providers that offer network monitoring services, live scanning for viruses and malware, and strong uptime performance. You should also make use of a virtual private network (VPN) client when browsing the web to add another layer of encryption to your traffic. To make your enterprise network truly secure, you'll need to invest in more modern cybersecurity solutions like deep packet inspection (DPI) and SSL fingerprinting. With nDPI, ntop's open source DPI toolkit, a separate layer of scanning is added at the perimeter of your network that is responsible for decrypting incoming data, scanning it for known malicious threats, and then encrypting it again for final delivery to the user's browser. However, using DPI does introduce some privacy concerns for the users on your network, as their traffic is not truly encrypted from end to end. Also, turning on DPI will place a heavy, continuous strain on network resources that could hurt internet speeds across the organization. That's why SSL fingerprinting may be a better long-term solution. With SSL fingerprinting, metadata is extracted during the initial handshake between the browser and the back-end server to validate that no protocol changes have been injected between the two endpoints. Early methods for fingerprinting, including JA3, relied on a manually maintained database to track which fingerprints were safe and which were dangerous. But now some companies are going a step further by creating real-time fingerprint databases that get updated automatically to identify malware that could be hiding in SSL traffic.
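SSL fingerprinting like JA3 boils down to hashing the parameters a client advertises in its TLS ClientHello. A minimal sketch of a JA3-style digest (the parameter values below are illustrative, not taken from a real capture):

```python
import hashlib

def ja3_digest(version, ciphers, extensions, curves, point_formats):
    """Compute a JA3-style fingerprint: an MD5 of the ClientHello
    parameters, each list dash-joined and the fields comma-joined."""
    fields = [
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# Illustrative client parameters (TLS 1.2 = 771, cipher/extension IDs made up
# of plausible IANA values):
fp = ja3_digest(771, [4865, 4866, 49195], [0, 11, 10], [29, 23], [0])
print(fp)  # a 32-character hex string identifying this client configuration
```

Two clients with the same TLS stack produce the same digest, which is exactly why a fingerprint can be matched against a database of known-good or known-bad clients.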

Final Thoughts

SSL and other visible forms of validating websites are used to provide visitors and owners with peace of mind, especially when they’re engaged in eCommerce. Although this is an important part of overall security, you shouldn’t regard it as the only security measure you need to take. It works best when deployed as a portion of your overall security standard, in conjunction with regular threat monitoring.

 


Finding a Needle in a Haystack (was Traffic Disaggregation with Sub Interfaces in ntopng)


Network traffic moving across a link often contains various types of traffic, for example in large companies it can include a mix of traffic coming from:

  • Employees network
  • Core company servers
  • Guests network
  • Other

Analysing the traffic as a whole is usually complicated and, as a consequence, many things are hard to see: with a lot of heterogeneous traffic, specific patterns may be hard to identify. It is more convenient to split the traffic into smaller subsets based on traffic type and analyse each subset separately.

In many cases each subset is identified by a different VLAN. Sometimes it is seen by a different flow exporter (it is common to have a single nProbe instance collecting flows from multiple NetFlow/sFlow exporters and forwarding them to ntopng). In other cases, custom filters are needed to identify it.

In ntopng it is now possible to split the interface into several logical sub-interfaces, and divert packets or flows to one or more sub-interfaces based on a traffic disaggregation criterion. This criterion can be dynamic or static/custom.

Dynamic Disaggregation

Dynamic disaggregation can be used whenever we want to automatically create one sub-interface per traffic type and there is a 1:1 mapping between traffic type and packets or flows fields (e.g. the VLAN ID).

ntopng today supports the following disaggregation criteria:

  • VLAN ID: ntopng creates a sub-interface for each VLAN ID.
  • Probe IP: when nProbe is collecting flows from multiple NetFlow/sFlow exporters and forwarding them to a single ntopng interface via ZMQ, disaggregating traffic by Probe IP ntopng creates one sub-interface per exporter based on the %EXPORTER_IPV4_ADDRESS.
  • Interface: ntopng creates a sub-interface for %INPUT_SNMP and another for %OUTPUT_SNMP (a single flow will be duplicated on both interfaces).
  • Ingress Interface: ntopng creates a sub-interface for each %INPUT_SNMP.
  • VRF ID: ntopng creates a virtual interface for each %INGRESS_VRFID.

Please note that besides the VLAN ID, this kind of disaggregation requires the corresponding template field to be present in the collected flow, thus you need to properly configure the nProbe template.

Dynamic disaggregation can be configured in the Interface page under the Settings tab, by selecting the desired disaggregation criterion in the Dynamic Traffic Disaggregation dropdown and restarting ntopng.

Custom Disaggregation

In some cases dynamic disaggregation does not work well, as we have complex criteria to identify a traffic type, or simply there is no 1:1 mapping. In this case ntopng allows you to define a custom filter, using a BPF-like syntax, to disaggregate incoming traffic and divert it to logical sub-interfaces. Please note that overlapping is allowed in this case: a single packet or flow can be diverted to multiple sub-interfaces.

An extended BPF format is supported when defining custom filters: in addition to the standard BPF primitives (that apply to both packets and flows), NetFlow fields (and combinations of them) can also be used, including INPUT_SNMP, OUTPUT_SNMP, SRC_VLAN, DST_VLAN when collecting flows from nProbe.
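As an illustration, custom disaggregation rules might look like the following (the network prefixes and field values are invented for the example, and the exact syntax for the NetFlow fields may differ; please refer to the documentation):

```
# Employees network: plain BPF primitive (applies to packets and flows)
net 10.10.0.0/16

# Guests VLAN
vlan 20

# Flows seen on a specific exporter ingress port (extended NetFlow field)
INPUT_SNMP=5
```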

Custom disaggregation can be enabled by disabling Dynamic Traffic Disaggregation (setting it to None) and creating new rules in the Custom Traffic Disaggregation tab. Please keep in mind that ntopng should be restarted in order to apply the changes.

Please note that traffic directed to dynamic or custom sub-interfaces is not shown in the main interface by default. If you still want to have a view of the whole traffic in the main interface, you need to enable the Duplicate Disaggregated Traffic flag in the interface Settings page.

For further details about the configuration, please check the documentation.

 

Enjoy!

New Directions in Network Traffic Security: Homework for 2020


Summary

With today’s traffic, most network IDSs (NIDSs) have severe limitations in terms of visibility and can easily be circumvented by malware (for instance by running a known service on a non-default port, or the other way round); they thus need to be used together with traffic analysis applications to produce a comprehensive view of what is happening on the network. For this reason monitoring tools must integrate as many security features as possible, and be open to receiving alerts from external sources (e.g. IDSs), which are still useful on the (increasingly smaller) share of Internet traffic they can analyse effectively. HIDSs (host-based IDSs) will become increasingly important, as network probes/IDSs are mostly blind to lateral movements unless you have full network visibility. This is usually not the case: probes often analyse only the traffic from/to the Internet and know very little about internal LAN communications, and sFlow-like tools are not a viable option either. This article shows how the ntop 2020 roadmap takes these facts into account.

 

All Details

The pervasive use of encryption has finally changed the network traffic monitoring and security market. Simple packet payload inspection is no longer effective, and this is bad news for many IDSs/IPSs. Looking at the Zeek and Suricata protocol dissector lists, it is evident that most of the supported protocols have a hard time matching today’s traffic, simply because that traffic is no longer flowing on networks, or (take for instance RDP) because it has migrated to encryption, making the dissector basically useless on recent protocol versions. Someone might say that in the LAN there is still a fair amount of unencrypted traffic, but even this traffic will decrease: if even in-host container-based communications have to be encrypted, imagine how long people will accept unencrypted traffic on the wire. That said, protocol fingerprints such as JA3 and HASSH are nice-to-have features (i.e. you cannot rely 100% on them, as by the nature of how such fingerprints are computed you will have many false positives), but recent trends in TLS traffic analysis have shown that the idea of deciding whether an encrypted stream is good/bad based on the fingerprint alone is not very effective. The outcome is that without continuous traffic monitoring, security experts will have a hard time detecting, for instance, slow-DoS attacks as well as malware hidden in encrypted streams.

Below you can find a typical trace generated by a popular N-IDS.

Event 'tls' 
{  
   "timestamp":"2019-10-10T16:37:24.293378+0200",
   "dest_ip":"212.39.72.21",
   "src_port":57505,
   "tls":{  
      "subject":"C=BG, ST=Sofia, L=Sofia, O=Bulgarian Telecommunications Company, OU=IT, CN=*.vivacom.bg",
      "ja3s":{},
      "ja3":{},
      "issuerdn":"C=US, O=DigiCert Inc, OU=www.digicert.com, CN=DigiCert SHA2 High Assurance Server CA",
      "version":"TLSv1",
      "serial":"02:57:7E:6E:4D:E0:EF:70:80:D6:DF:5C:1F:CB:C6:EA",
      "fingerprint":"09:55:46:d2:52:68:d1:e6:cd:b1:2b:e0:ca:15:3f:05:65:3b:cd:ce",
      "notbefore":"2014-12-16T00:00:00",
      "notafter":"2018-02-16T12:00:00"
   },
   "src_ip":"10.214.164.115",
   "proto":"TCP",
   "flow_id":1487440626086869,
   "in_iface":"dummy0",
   "event_type":"tls",
   "dest_port":443
}

As you can see, the IDS basically reports nothing about the traffic. Even simple metrics such as the number of packets/bytes or the flow duration are omitted. This is not to mention DPI, which was not taken into account years ago and is not used at all. In many popular IDSs, for instance, you need to configure the TLS port: if you put non-TLS traffic on port 443, or TLS traffic on a port other than 443, you’re basically blind. This sounds like a huge problem, at least for us who maintain nDPI and understand the value of deep packet inspection. It also makes it hard for the consumers of these logs to decide whether this stream was good or bad. Information about inter-packet delay or fragmentation/out-of-order packets would definitely help in making a verdict on this flow.

This is a big problem, as the network security market is now populated by companies that analyse such logs, often using machine learning (ML) techniques, and decide about the health of the monitored network (to be honest, in this ML trend companies often label as ML statistical methods such as Holt-Winters, which have nothing to do with ML but are fashionable when used for “predictions”). The reason is that ML is based on features (i.e., in network traffic monitoring parlance, traffic metrics): if the input is poor, ML can’t go far with it.
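To make the notion of a “feature” concrete, here is a minimal sketch computing inter-arrival-time statistics for a flow from its packet timestamps, the kind of numeric input an ML model can actually consume (the timestamps are made up for the example):

```python
import statistics

def iat_features(timestamps):
    """Per-flow inter-arrival-time (IAT) features from packet timestamps
    (in seconds). These are the numeric inputs ('features') ML models use."""
    iats = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {
        "iat_min": min(iats),
        "iat_max": max(iats),
        "iat_avg": statistics.mean(iats),
        "iat_stddev": statistics.stdev(iats) if len(iats) > 1 else 0.0,
    }

# A flow with a suspiciously regular, beacon-like timing pattern:
feats = iat_features([0.0, 10.0, 20.0, 30.1, 40.0])
print(feats)
```

A near-zero standard deviation combined with a ~10 s average is the kind of signal a periodic command-and-control beacon would produce, and no amount of payload encryption hides it.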

So we’re back to square one: the evolution of this market is limited by the ability of tools to produce meaningful logs, i.e. rich features on which ML algorithms can do their best. For this reason, in the past years companies have started to create agents to be installed on hosts, producing very detailed information that is key when used to track host activities. The practice of installing agents on hosts is kind of unexpected news for us, who have been told for years to be completely passive and not to install anything on monitored hosts, but we have to cope with it.

If you have read until here, you might wonder what we plan to do at ntop. In our mind it is key to combine network and security monitoring: visibility means security plus monitoring, for the reasons explained above. All combined. So what we’re trying to develop is an ecosystem where:

  • nProbe Agent will evolve (today it focuses too much on containers and too little on security: this needs to change) and become a tool for implementing host visibility (yes, we’re thinking about a Windows port, but we’ve not yet made a plan). Unfortunately we have based our tool on eBPF, and RedHat (contrary to the rest of the distros) has decided to move eBPF support out of CentOS/RedHat 8 and ship it as a technology preview. So eBPF adoption seems mixed in the Linux community.
  • nProbe and nProbe Cento will be enriched with metrics such as those provided by nDPI, and will be used for monitoring network lateral movements in addition to what they do today.
  • ntopng will become the center of this ecosystem, able to collect data not only from ntop tools but also from the outside (read: non-ntop sources such as firewalls, HIDSs, NIDSs, anti-malware tools). Next week at Suricon we will talk about using ntopng as a Suricata web front-end, and using Suricata as a security feed for ntopng. This is just the first step, as in the upcoming ntopng v4 we plan to integrate additional external feeds and merge them seamlessly. People buy products from leading networking/security companies, and (as we did 15 years ago, when we opened the original ntop to the outside world by integrating SNMP, NetFlow and sFlow) we cannot tolerate the practice of having many monitoring consoles, instead of a single ntopng-based console that merges all the available information into a single view of the network. Note that we do NOT want to turn ntopng into a SIEM, but rather use and correlate external feeds to enrich our view of the network.

Aggressive schedule? Maybe, but if you are watching our development activities you will realise that we have been working hard for months to make this happen. Your feedback is valuable, so please let us know how you feel about it.

Enjoy!

ntopng & Suricata: Unifying Visibility with Security


This week we have presented at Suricon 2019 our work about unifying ntopng with Suricata.

In short:

  • Suricata is a great tool for analysing individual flows but
    • It lacks a GUI
    • It is blind to security threats when they use non-standard ports
    • It is mostly blind to encrypted traffic
    • It does not provide a comprehensive view of the network, focusing only on individual flows
    • It is able to dissect only about 20 protocols, compared to the 250+ supported by nDPI
    • It is blind with respect to containers
  • ntopng is great but
    • It does not offer signature-based security as Suricata does

So why not combine them together and create a comprehensive tool you can use to merge security and visibility? This allows people to avoid Elastic-based export+visualisation setups that do not natively merge information, and instead use tools such as Grafana or InfluxDB to create great dashboards with network and security data merged properly.

These are our presentation slides in case you are interested in the details. Please let us know what you think and enjoy!

Packet-less traffic analysis using Wireshark and libebpfflow


If you wonder how you can use Wireshark with containers, you now have a solution. This week we have presented at Sharkfest EU 2019 how we have integrated libebpfflow, our home-grown eBPF-based library for system introspection, with Wireshark. Thanks to our work it is now possible to analyse traffic in containerised environments with just a few clicks using Wireshark, our favorite network packet analyser. If you want to know more about our work you can read the whole story in our presentation slides, or immediately jump to the source code (yes, it’s open source of course).

Enjoy!

Spotting Plaintext Information in Network Protocols


In short: encryption does not always mean that all the information exchanged is really encrypted. Another myth is the belief that the equation “encryption = security” holds. Unfortunately this is not true.

This slide deck, which we presented at Sharkfest Europe 19, shows in practical terms what information is sent in clear text in popular protocols, as well as what information encrypted TLS traffic reports unencrypted.

Enjoy!

Exploring Physical Network Topologies Using ntopng


ntop tools are known for monitoring network traffic. However this traffic has to flow on physical networks and thus it is important to understand the physical network layout. LLDP (Link Layer Discovery Protocol) is a network protocol used to dynamically build network topologies and identify network device neighbours. In the latest ntopng dev build (that will be merged in the next v4 stable) we have enhanced the SNMP monitoring capabilities with LLDP support.

If your SNMP devices have LLDP enabled, ntopng now polls this information and builds an adjacency graph similar to the one below.

You can click on nodes (they represent SNMP devices) to zoom in on a specific device, or drag and zoom using the mouse as you would do with force-directed graphs. You can click on the Topology menu item to see a detailed adjacency view and identify device neighbours and connection ports.

In order to see a fully meshed topology you need to configure all your SNMP devices in ntopng so they can be periodically polled and the adjacency graph created.

Thanks to this new development you can map a monitored host to a physical port, and depict how this port is connected to the rest of the network (for instance at layer 2, which is invisible to tools such as traceroute).
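Conceptually, the adjacency graph is just an edge list built from the LLDP neighbour entries polled via SNMP. A minimal sketch with invented device names:

```python
from collections import defaultdict

# Hypothetical LLDP neighbour table as polled via SNMP:
# (local device, local port, remote device)
neighbours = [
    ("switch-core", "Gi0/1", "switch-floor1"),
    ("switch-core", "Gi0/2", "switch-floor2"),
    ("switch-floor1", "Gi0/24", "switch-core"),
]

graph = defaultdict(set)
for local, port, remote in neighbours:
    graph[local].add(remote)
    graph[remote].add(local)  # adjacency is bidirectional

print(sorted(graph["switch-core"]))
```

Note that the third entry is the reverse of the first: polling both endpoints yields the same edge twice, which is why a set-based adjacency map is convenient for deduplication.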

Enjoy!

How to use nDPI from CLI to analyse network traffic


Most people use nDPI indirectly, as it is part of ntopng and many other non-ntop tools. However not many people know that nDPI can also be used from the command line to analyse network traffic. This is useful to create scripts that automate the detection of specific issues. ndpiReader is a testing tool used to demonstrate the library features as well as to run validation tests. With this tool it is also possible to generate a report in CSV format that can be analysed with tools such as q.

Below you can find some practical examples of how this technique can be used in real life. Suppose we need to analyse some malware traffic in order to spot anomalies. A good starting point for sample pcap files is the CIC dataset or this website, or you can use any pcap you have already collected, such as those part of the nDPI test set.

From the CIC dataset, suppose you want to analyse what flows are affected by a Slow DoS attack.

$ ndpiReader -i dos_slow.pcap -C dos_slow.csv

This dumps the flow analysis results into a CSV file that contains the following fields:

$ head -1 ~/Downloads/Ricci/dos_slow.csv
#flow_id,protocol,first_seen,last_seen,duration,src_ip,src_port,dst_ip,dst_port,ndpi_proto_num,ndpi_proto,src2dst_packets,src2dst_bytes,src2dst_goodput_bytes,dst2src_packets,dst2src_bytes,dst2src_goodput_bytes,data_ratio,str_data_ratio,src2dst_goodput_ratio,dst2src_goodput_ratio,iat_flow_min,iat_flow_avg,iat_flow_max,iat_flow_stddev,iat_c_to_s_min,iat_c_to_s_avg,iat_c_to_s_max,iat_c_to_s_stddev,iat_s_to_c_min,iat_s_to_c_avg,iat_s_to_c_max,iat_s_to_c_stddev,pktlen_c_to_s_min,pktlen_c_to_s_avg,pktlen_c_to_s_max,pktlen_c_to_s_stddev,pktlen_s_to_c_min,pktlen_s_to_c_avg,pktlen_s_to_c_max,pktlen_s_to_c_stddev,client_info,server_info,tls_version,ja3c,tls_client_unsafe,ja3s,tls_server_unsafe,ssh_client_hassh,ssh_server_hassh

You can now run this query to find out the top 10 slow DoS flows

$ q -H -d ',' "select src_ip,src_port,dst_ip,dst_port,ndpi_proto,duration from ./dos_slow.csv where dst2src_goodput_ratio<10 order by duration desc limit 10"
172.16.0.1,54240,192.168.10.50,80,HTTP,1241.436
172.16.0.1,53816,192.168.10.50,80,HTTP,1239.08
172.16.0.1,53824,192.168.10.50,80,HTTP,1239.078
172.16.0.1,53834,192.168.10.50,80,HTTP,1239.077
172.16.0.1,53840,192.168.10.50,80,HTTP,1239.077
172.16.0.1,53846,192.168.10.50,80,HTTP,1239.076
172.16.0.1,53852,192.168.10.50,80,HTTP,1239.075
172.16.0.1,53858,192.168.10.50,80,HTTP,1239.074
172.16.0.1,53866,192.168.10.50,80,HTTP,1239.073
172.16.0.1,53872,192.168.10.50,80,HTTP,1239.072
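If q is not at hand, the same filter-and-sort can be sketched with Python's standard csv module (the embedded sample below is synthetic and carries only the columns used; real ndpiReader CSVs contain many more fields):

```python
import csv, io

# Synthetic two-row sample with only the columns used below.
data = """src_ip,src_port,dst_ip,dst_port,ndpi_proto,duration,dst2src_goodput_ratio
172.16.0.1,54240,192.168.10.50,80,HTTP,1241.436,2.5
172.16.0.1,40000,192.168.10.50,80,HTTP,12.3,55.0
"""

rows = csv.DictReader(io.StringIO(data))
# Keep flows with almost no server-side goodput, longest first, top 10.
slow = sorted(
    (r for r in rows if float(r["dst2src_goodput_ratio"]) < 10),
    key=lambda r: float(r["duration"]), reverse=True)[:10]
for r in slow:
    print(r["src_ip"], r["src_port"], r["dst_ip"], r["dst_port"],
          r["ndpi_proto"], r["duration"])
```

To run it against a real file, replace `io.StringIO(data)` with `open("dos_slow.csv")`.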

Suppose you want to know the amount of traffic the top IPs in netflix.pcap have spent watching NetFlix. First run ndpiReader as follows:

$ cd nDPI/tests/pcap
$ ../../example/ndpiReader -i netflix.pcap -C /tmp/netflix.csv

then do

$ q -H -d ',' "select src_ip,SUM(src2dst_bytes+dst2src_bytes) from /tmp/netflix.csv where ndpi_proto like '%NetFlix%' group by src_ip"
192.168.1.7,6151821

Possibilities are basically endless.

Enjoy!


Rethinking Network Flow Visualisation


Traffic monitoring applications often aggregate traffic into flows, which in essence is a way to divide traffic according to a 5-tuple key (protocol, source/destination IP and port). Flows are then aggregated, for instance according to IP address or protocol, and often represented with timeseries like the one below.

What is missing in all this is how the traffic is distributed over time: everything is flattened, protocols are merged (for instance according to the source IP address) and it is not possible to understand intra-flow relationships; for instance, to see that when I connect to a remote host, my PC first issues a DNS request to resolve the remote host IP, and then the SSH communication is created.
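The 5-tuple aggregation mentioned above can be sketched in a few lines (the packet records are invented for the example):

```python
from collections import defaultdict

def aggregate_flows(packets):
    """Group packets into flows keyed by the 5-tuple and sum counters."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        key = (pkt["proto"], pkt["src_ip"], pkt["src_port"],
               pkt["dst_ip"], pkt["dst_port"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += pkt["len"]
    return flows

pkts = [
    {"proto": "UDP", "src_ip": "192.168.1.33", "src_port": 5353,
     "dst_ip": "8.8.8.8", "dst_port": 53, "len": 74},
    {"proto": "TCP", "src_ip": "192.168.1.33", "src_port": 40001,
     "dst_ip": "10.0.0.1", "dst_port": 22, "len": 1500},
    {"proto": "TCP", "src_ip": "192.168.1.33", "src_port": 40001,
     "dst_ip": "10.0.0.1", "dst_port": 22, "len": 60},
]
flows = aggregate_flows(pkts)
print(len(flows))  # 2 flows: one DNS lookup, one SSH session
```

Note how the time dimension disappears: the DNS-then-SSH ordering visible in the packet list is gone from the per-flow counters, which is exactly the problem this post sets out to address.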

In order to represent these relationships, and thus understand more about flow interactions, we have created a new tool for nDPI named ndpi2timeline. This tool leverages the Google Chrome/Chromium tracing capabilities to show flow interactions. Usage is simple:

  1. Suppose that you have saved in test.pcap the traffic you want to analyse with nDPI.
  2. Type ndpiReader -C test.csv -i test.pcap to save in test.csv the result of nDPI in CSV format.
  3. You can now convert this CSV into a JSON file that can be visualised in Chrome according to this format. Type ndpi2timeline.py test.csv test.json
  4. Open Chrome and set as URL chrome://tracing/ then drag onto the Chrome window the file test.json generated at the step above, or simply load with the ‘Load’ button.
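Step 3 above boils down to emitting Chrome's Trace Event JSON format: one "complete" event (ph='X') per flow, with timestamps and durations expressed in microseconds. Here is a minimal sketch; the exact field mapping used by ndpi2timeline is our assumption, and the flow records are invented:

```python
import json

def flows_to_trace(flows):
    """Convert flow records into Chrome Trace Event JSON (chrome://tracing).
    Each flow becomes a 'complete' event: ph='X' with ts/dur in microseconds."""
    events = []
    for f in flows:
        events.append({
            "name": f["proto"],
            "cat": "flow",
            "ph": "X",                              # complete event
            "ts": int(f["first_seen"] * 1_000_000),  # start time (us)
            "dur": int(f["duration"] * 1_000_000),   # flow length (us)
            "pid": f["host"],                        # one group per host
            "tid": f["proto"],                       # one line per protocol
            "args": {"info": f.get("info", "")},
        })
    return json.dumps({"traceEvents": events})

trace = flows_to_trace([
    {"host": "192.168.1.33", "proto": "DNS", "first_seen": 0.000,
     "duration": 0.020, "info": "A query preceding the SSH session"},
    {"host": "192.168.1.33", "proto": "SSH", "first_seen": 0.025,
     "duration": 12.3},
])
```

Saving `trace` to a .json file and loading it in chrome://tracing shows the DNS lookup as a short bar immediately followed by the long SSH bar, which is precisely the intra-flow relationship the timeline view makes visible.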

The result is shown in the figure below (note that Chrome's tracing tool is normally used to trace processes, so the word 'Process' in the figure might be misleading). The traffic is sorted by host (192.168.1.33 in the example below) and for each protocol detected by nDPI a horizontal line is created. By clicking on a line, further information is reported at the bottom of the window (in the example, the DNS query that was performed).

 

This example shows that it is now possible to depict flow interactions on a timeline and interactively zoom/drag/drill-down with the mouse. In order to create a report that can be easily navigated, we have decided to collapse flows generated by the same host/protocol onto the same line; however this can be changed as necessary by modifying a couple of lines of code. All this is just an experiment, to demonstrate that the standard timeseries-based reports can be rethought. In particular, we believe that tracing tools such as Zipkin and Jaeger could very well be used to move this type of visualisation to the next level. After we complete these experiments, we'll likely integrate the results in ntopng.

Please send us your feedback. Enjoy!

Introducing Automatic Package Update in ntopng


One of the most useful features in applications is the ability to:

  • Update the application with a click, with no need to move to a terminal console.
  • Instruct the system to update the application as soon as a new version is available.

We have realised that many of our users have missed this feature in ntopng for a long time, so we decided to implement it. Currently it is part of the nightly builds, and it will be included in the next stable release. As this feature depends on the operating system, we have decided to implement it for all the Linux distributions (sorry, no Windows support) for which we package our tools, which can be found at http://packages.ntop.org.

Once you have installed the latest ntopng, you will see a new menu item under the wheel menu:

      

You can force the check for a new ntopng version and manually install it if one is available. When you trigger a check, allow a few seconds, as the check is performed via cron at one-minute granularity when requested by the user. Every night we check whether a new version is available, so in the morning you don't have to do it yourself manually.

In case you want ntopng to update itself overnight, you can find a new preference item under the “Updates” section. By default ntopng does NOT update itself overnight, as we do not want to restart ntopng unless necessary, but if you want you can enable this preference and let ntopng do everything automatically on your behalf.

Enjoy!

Introducing n2disk 3.4: 100 Gbit Traffic Dump to Disk


This is to announce the new n2disk 3.4 release. In addition to major performance optimisations with FPGA-based NICs, this release adds interesting new features, including the ability to filter traffic based on the application protocol, aggregate traffic from multiple (2+) ZC interfaces, better disk space management in case of multiple output folders (also from the same volume), and other useful options.

With the current n2disk release and adequate storage, it is now possible on FPGA-based NICs to dump over 40 Gbit of traffic with a single n2disk instance. This means that if you want to dump 100 Gbit to disk using NVMe drives, you can start 3 n2disk instances, each dumping a portion of the ingress traffic on a separate disk partition. With npcapextract you can then aggregate the dumped traffic into a single pcap file.

Enjoy!

Changelog

  • n2disk
    • New –l7-filter-conf option to enable l7 filtering (based on FT)
    • New –time-pulse-precision option to lower CPU load when high-precision is not required
    • Bulk/segment mode capture optimizations and fixes
    • Add support for multiple (more than 2) ZC interfaces
    • Add support for interface index encoding in timestamp (LSB)
    • Improve ‘–disk-limit’ support to take into account multiple folders from the same volume
    • Improve exported flows (ZMQ) in JSON/TLV format, add PEN
    • Improve disk utilization monitoring
    • New option –kernel-cluster-id|-K to set a kernel cluster
    • Improve egress queue flushing when idle with –export-flow-offload
    • Improve /proc stats, add number of exported flows and timeline path
    • Fix exported stats (ZMQ) JSON format
    • Fix for mountpoints with no device (e.g. NFS)
    • Fix storage status check on NFS without no_root_squash
    • Fix disk limit check when indexing is not enabled
  • disk2n
    • Add support for interface index encoding in timestamp (LSB) to select the egress interface
    • Add fully-cached mode to avoid continuously reading from disk when full pcap fits in memory
  • npcapextract
    • Increased max number of timelines from 10 to 16
    • New -C <num> option to stop extraction after <num> packets
    • Fix for filesystems with no I_DIRECT support
    • Improve permission check
  • Misc
    • Add -T option to print first/last TS only to npcapprintindex
    • Add more options to control n2membenchmark

Important Geolocation Changes in ntop Products


ntop products have been using geolocation databases provided by MaxMind for a long time, to augment network IP addresses with geographical coordinates (cities, countries) and information on the Autonomous Systems. ntop have been freely packaging and redistributing such databases in ntopng-data.

Unfortunately, new privacy regulations, such as GDPR and CCPA, place restrictions that impact our ability to continue distributing databases in ntopng-data. The reasons are the same that have impacted MaxMind's ability to do the redistribution, and are explained in detail on the following page.

Hence, starting late December 2019, in order to continue using geolocation in ntop software, you are required to register for a MaxMind account and obtain a license key to download geolocation databases. The ntopng-data package continues to exist; however, it no longer carries the databases but just the MaxMind download helper geoipupdate, which fetches databases from MaxMind servers using the license key.

Detailed instructions to continue using geolocation databases in ntop products are available here. The transition is pretty straightforward: there is nothing special to do, and everything will continue to function automatically and transparently once you have obtained a license key.

Encrypted Traffic Analysis: A Primer


Monitoring encrypted traffic is a must for providing visibility into modern traffic. For this reason we have put a lot of energy into extending nDPI so that it can be useful in this context. DPI (deep packet inspection) alone, however, is not enough for complete visibility, and thus we have started to add classification techniques and algorithms to nDPI to merge visibility and behavioural analysis. In fact, flow-based analysis is not enough to understand what's happening on a network without the big picture. And this is what we're doing in our tools, and in particular in ntopng v4, which will be introduced next month.

 

 

In this context, we have organised a series of seminars at the University of Pisa, Italy, where we cover some hot topics in cybersecurity. Yesterday over 200 people attended the first event about encrypted traffic analysis.

For those who have missed this event, these are the presentation slides. We hope you will enjoy the presentation, which describes various techniques we implemented and experimented with while carrying on our research.

Enjoy!
