
Using ElasticSearch to Store and Correlate Ntopng Alarms


With the introduction of ntopng endpoints and recipients, it is now possible to handle alerts in a flexible fashion. ntopng embeds a SQLite database for turn-key alert storage and reporting; however, in large organizations with many alerts the scalability of this solution is limited by the small number of records (16k) that can be handled. With the latest ntopng 4.1.x versions it is now possible to export alerts to an external ElasticSearch database (not available in the community edition). This post shows you how to use this integration in ntopng 4.1.x and, soon, 4.2.

 

As shown in the video, the first element to create is an ElasticSearch endpoint that points to the instance running in our datacenter or on the same host where ntopng is running.

At this point you need to define a recipient for this endpoint.

In order to instruct ntopng to send notifications to this recipient, you need to configure the pools that use it.

This is done by clicking on the icon highlighted by the arrow in the above picture, which will bring you to the pools page.

For each entity (Hosts, Flows, SNMP…) for which you want to deliver alerts, you need to click on edit and select from the dropdown menu the list of recipients to which notifications will be delivered. Note that there is always a built-in SQLite recipient, enabled and used by ntopng to display alerts in the web GUI.

To verify that notification delivery is working, check (see the picture below) whether the number of uses increases.

 

At this point alerts are stored in ElasticSearch and they can be visualised and explored using Kibana. In order to do that you first need to create an index pattern (menu “Stack Management” -> “Index Patterns”), selecting @timestamp as the time field as shown in the picture below.
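If you prefer the command line, you can also verify that alerts are actually reaching ElasticSearch before moving to Kibana. This is a minimal check, assuming ElasticSearch listens on localhost:9200 and that the index name configured in the ntopng endpoint contains ntopng (adjust both to your setup):

$ # List the indices created by ntopng and count the stored alerts
$ curl -s "http://localhost:9200/_cat/indices/*ntopng*?v"
$ curl -s "http://localhost:9200/*ntopng*/_count?pretty"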

Once this is done, you can visualize alerts and create beautiful dashboards with them.

Enjoy!


A Step-by-Step Guide on How to Write a ntopng Plugin from Scratch


In ntopng you can write plugins to extend it with custom features. This short tutorial explains how to do that step by step. Here we drive you through the creation of a plugin that generates alerts when an unexpected DNS server is observed: this is useful to spot hosts that have a custom DNS configured, or scanner applications.

The plugin source code described in this post can be found here and has been authored by Daniele Zulberti and Luca Argentieri. These are the steps to implement the plugin.

Step 1: Create the plugin folder

ntopng stores plugins under <installation dir>/scripts/plugins. Plugins are grouped into categories; in this case alerts/security is the correct one. So let’s create our unexpected_dns folder there: it will contain all the plugin’s sources and configurations.

$ cd <installation dir>/scripts/plugins/alerts/security
$ mkdir unexpected_dns

Step 2: Create the manifest.lua

manifest.lua contains basic plugin information such as name, description and version. It must return this information as a Lua table:

return {
    title = "Unexpected DNS",
    description = "Trigger an alert when an unexpected DNS server is detected",
    author = "Daniele Zulberti, Luca Argentieri",
    dependencies = {},
}

Table keys are:

  • title: The title of the plugin. This is used within the ntopng web GUI to identify and configure the plugin.
  • description: The description of the plugin. This is used within the ntopng web GUI to provide a plugin description to the user.
  • author: A string indicating the name of the author of the plugin.
  • dependencies: A Lua array of strings indicating which other plugins this plugin depends on. The array can be empty when the plugin has no dependencies.

Now, in the web GUI, under Developer/Plugins, click on the Reload Plugins button. If everything is ok you should see Unexpected DNS in the plugin list.

Plugins page:

Unexpected DNS plugin row:

Step 3: Plugin logic

The main part of the plugin lives under user_scripts. Every plugin can run scripts for various traffic elements. This script needs to analyse flows, so you must tell ntopng by creating the user_scripts/flow folder:

$ mkdir -p user_scripts/flow

Now we can start to write some code.

Move into user_scripts/flow and create unexpected_dns.lua; the file name must be the same as the plugin root directory.

local user_scripts = require("user_scripts")

local script = {
   -- Script category
   category = user_scripts.script_categories.security,
   -- Script execution priority (0 is the default)
   prio = 0,
   -- NOTE: hooks defined below
   hooks = {},
   -- Use this plugin only with this protocol (5 is the DNS ID in nDPI)
   l7_proto_id = 5,
   -- Specify the default value when clicking on the "Reset Default" button
   default_value = {
      items = {},
   },
   gui = {
   }
}

return script

The script variable contains the configuration of the script. It’s a Lua table where category, priority, hooks, l7_proto_id, default_value and gui are some of the keys.

    • category: specifies the category of the script; ours is under security, so you should set the value to user_scripts.script_categories.security (note that the root directory of the plugin is under alerts/security).
    • priority (prio in the code above): a number representing the script execution priority; the default is 0, and lower numbers have lower priority.
    • hooks: a Lua table with hook names as keys and callbacks as values. User Script Hooks are events or points in time. ntopng uses hooks to know when to call a user script. A user script defining a hook will get the hook callback called by ntopng. User scripts must register to at least one hook. For our purpose protocolDetected is enough: the script runs only when the DNS protocol is detected in a flow. The hooks are defined below the script table definition:
      
      local script = { ... }
      
      -- #################################################
      
      function script.hooks.protocolDetected(now, conf)
         io.write("DNS Protocol Detected\n")
      end

  • l7_proto_id: only execute the script for flows matching the given L7 application protocol. Set it to 5, the DNS ID in nDPI.
  • default_value: the default value for the script configuration. See User Scripts GUI. There is no default configuration for DNS servers, you can set it later in the web GUI configuration page of the script.
  • gui: a Lua table specifying the user script name, description and configuration. This data is used by ntopng to make the user script configurable from the User Scripts GUI (https://www.ntop.org/guides/ntopng/plugins/user_script_gui.html#web-gui). Set it as follows:

    gui = {
       i18n_title = "unexpected_dns.unexpected_dns_title",
       i18n_description = "unexpected_dns.unexpected_dns_description",
       input_builder = "items_list",
       item_list_type = "string",
       input_title = i18n("unexpected_dns.title"),
       input_description = i18n("unexpected_dns.description"),
    }
  • unexpected_dns.unexpected_dns_title, unexpected_dns.unexpected_dns_description, unexpected_dns.title and unexpected_dns.description are localisation keys. If the variable name has the i18n prefix, like i18n_title, ntopng automatically converts the localisation key to the current user language; otherwise you can force the conversion using the i18n() function. The localisation keys are stored under <plugin root directory>/locales, and every language has its own file; for example English has the en.lua file. See Localization.

Create the locales directory:

$ cd ../..
$ mkdir locales

and the en.lua:

return {
   unexpected_dns_description = "Trigger an alert when not allowed DNS server is detected",
   unexpected_dns_title = "Unexpected DNS",
   title = "Allowed DNS",
   description = "Comma separated values of allowed DNS IPs. Example: 8.8.8.8,8.8.4.4,1.1.1.1",
   status_unexpected_dns_description = "Unexpected DNS server found:",
   alert_unexpected_dns_title = "Unexpected DNS found"
}

Now you can test it by reloading the plugins and executing dig google.com @1.1.1.1 in the shell; dig is a useful tool for interrogating DNS name servers.

$ dig google.com @1.1.1.1

; <<>> DiG 9.16.1-Ubuntu <<>> google.com @1.1.1.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 33360
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;google.com.            IN  A

;; ANSWER SECTION:
google.com.     168 IN  A   216.58.209.46

;; Query time: 23 msec
;; SERVER: 1.1.1.1#53(1.1.1.1)
;; WHEN: mer ott 14 12:23:52 CEST 2020
;; MSG SIZE  rcvd: 55

Then take a look at the shell where you’ve launched ntopng: you should see a line saying DNS Protocol Detected.

Obviously, writing a line to the shell is not our goal. We want the script to trigger an alert when a request to a non-allowed DNS server is detected; to do so we need to add alert and status definitions, and then change the hook code.

Step 4: Create the alert definition

A plugin enables alerts to be generated. Every alert a plugin is willing to generate requires a file in the plugin subdirectory <plugin root directory>/alert_definitions/. The file contains all the information required to properly show, localise and format an alert. The name of the file has this format: alert_plugin_name.lua; in our case: alert_unexpected_dns.lua

The file must return a Lua table with the following keys:

  • alert_key: A constant uniquely identifying this alert.
  • i18n_title: A string indicating the title of the alert.
  • i18n_description (optional): Either a string with the alert description or a function returning an alert description string.
  • icon: A Font Awesome 5 icon shown next to the i18n_title.
  • creator (optional): this function creates the alert and prepares a JSON.

The alert_key is a constant uniquely identifying the alert. Constants are available in file <installation dir>/scripts/lua/modules/alert_keys.lua. The file contains a table alert_keys with two sub-tables:

  • ntopng
  • user

Plugins distributed with ntopng must have their alert_keys defined in sub-table ntopng. User plugins must have their alert_keys defined in sub-table user.

Sub-tables can be extended by adding new alert_keys to either the ntopng or the user table. Each alert_key has an integer number assigned which must be unique.
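As a sketch, extending the table to register our key could look as follows; the integer value 92 is purely illustrative and must be chosen so that it does not collide with any existing key:

local alert_keys = {
   ntopng = {
      -- ... the keys of the alerts shipped with ntopng ...
      alert_unexpected_dns_server = 92, -- hypothetical unique ID
   },
   user = {
      -- user-defined plugins register their keys here
   },
}

return alert_keys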

Alert description i18n_description can be either a string with the alert description or a function returning an alert description string. See Alert Definitions.

alert_unexpected_dns.lua:

local alert_keys = require ("alert_keys")

-- #################################################

local function createUnexpectedDNS(alert_severity, dns_info)
    local built = {
        alert_severity = alert_severity,
        alert_type_params = dns_info 
    }

    return built
end

-- #################################################

return {
    alert_key = alert_keys.ntopng.alert_unexpected_dns_server,
    i18n_title = "unexpected_dns.alert_unexpected_dns_title",
    icon = "fas fa-exclamation",
    creator = createUnexpectedDNS,
}

The createUnexpectedDNS function can have as many parameters as you need. Ours are:

  • alert_severity: tells ntopng the severity of the alert, one of {info, warning, error}. We set it in the status definition.
  • alert_type_params: the value that will go into the alert JSON.

Step 5: Create the status definition

A plugin enables one or more statuses to be set on certain flows. A flow can have multiple statuses set, and statuses can be associated to alerts. Flow statuses must be defined in the plugin subdirectory <plugin root directory>/status_definitions/ and they are set calling flow.triggerStatus. In the next step you can see the final code for unexpected_dns.lua. Definitions are written in Lua files, one file per status. The name of the file has this format: status_plugin_name.lua; in our case: status_unexpected_dns.lua

A flow status definition file must return a Lua table with the following keys:

  • status_key: A constant uniquely identifying this status.
  • i18n_title: A string indicating the title of the status.
  • i18n_description (optional): Either a string with the flow status description or a function returning a flow status description string.
  • alert_type (optional): When an alert is associated to the flow status, this key must be present. Key has the structure alert_consts.alert_types.<an alert key>, where <an alert key> is the name of a file created in Alert Definitions, without the .lua suffix.
  • alert_severity (optional): When an alert is associated to the flow status, this key indicates the severity of the alert. Key has the structure alert_consts.alert_severities.<alert severity>, where <alert severity> is one among the available alert severities.

status_unexpected_dns.lua:

local alert_consts = require("alert_consts")
local status_keys = require ("flow_keys")

return {
    status_key = status_keys.ntopng.status_unexpected_dns_server,
    alert_severity = alert_consts.alert_severities.error,
    alert_type = alert_consts.alert_types.alert_unexpected_dns,
    i18n_title = "unexpected_dns.unexpected_dns_title",
    i18n_description = "unexpected_dns.status_unexpected_dns_description",
}

In the end, this should be the plugin directory tree:

unexpected_dns/
├── alert_definitions
│   └── alert_unexpected_dns.lua
├── locales
│   └── en.lua
├── status_definitions
│   └── status_unexpected_dns.lua
├── user_scripts
│   └── flow
│       └── unexpected_dns.lua
└── manifest.lua

Step 6: Change and finish the script file

Now we can modify the hook in unexpected_dns.lua to detect non-allowed DNS servers.

-- flow_consts provides the status types and must be required at the top of the file
local flow_consts = require("flow_consts")

function script.hooks.protocolDetected(now, conf)
   if(table.len(conf.items) > 0) then
      local ok = 0
      local flow_info = flow.getInfo()
      local server_ip = flow_info["srv.ip"]

      for _, dns_ip in pairs(conf.items) do
         if server_ip == dns_ip then
            ok = 1
            break
         end
      end

      if ok == 0 then
         flow.triggerStatus(
            flow_consts.status_types.status_unexpected_dns.create(
               flow_consts.status_types.status_unexpected_dns.alert_severity,
               server_ip
            ),
            100, -- flow_score
            0, -- cli_score
            100 --srv_score
         )
      end
   end
end

The parameters of the hook are:

  • now: An integer indicating the current epoch.
  • conf: A table containing the user script configuration submitted by the user from the User Scripts GUI. Table can be empty if the script doesn’t require user-submitted configuration.

The if statement checks whether there are allowed DNS servers set by the user in the GUI. With flow.getInfo() we can retrieve the server IP by accessing the Lua table that the function returns.

We can simply scan the table and compare each value with the server IP, breaking as soon as a match is found. If there is no match we call flow.triggerStatus with these parameters:

  • flow_consts.status_types.status_unexpected_dns.create: this calls the createUnexpectedDNS function defined in the alert definition, so we pass the alert severity and the server IP.
  • flow_score, cli_score, srv_score: scores go from 0 to 100 and represent the relevance for the flow, the client and the server.

Test the final script

Now the plugin is ready to run. Reload the plugins and configure a list of DNS servers to test it:

  • go under Settings/User Scripts/
  • click on the Flows tab
  • click on the edit button.

  • search the plugin in All tab by using the search box in the upper right corner.
  • click on edit

  • Insert the IP list in CSV format in the input box.
  • click Apply

Now launch the dig command as done before. You should see a flow alert in the top bar near the local devices icon.

In the info of the alerted flow you can find the Flow Alerted row saying: Unexpected DNS server found: [Score: 100].

To check if an alert was triggered follow these steps:

  • go under Alerts/Detected Alerts/
  • click the Flow Alerts tab

you should see a line with the unexpected DNS alert description.

Final Words

The plugin is complete and working. See the Documentation for further information about ntopng and the plugins development. It is now time for you to write a plugin and contribute to the ntopng development.

Released nDPI 3.4: detection speed, statistical analysis, fuzzing, cybersecurity


This is to announce the release of nDPI 3.4 that is a major step ahead with respect to 3.2:

  • Detection speed has been greatly optimised
  • Many new functions for statistical protocol analysis have been introduced. This is to expand nDPI into traffic analysis beyond simple flow-based analysis.
  • Fuzzing and code analysis (credits to catenacyber and lnslbrty) made nDPI more stable and robust than ever
  • Completely rewritten QUIC dissector (credits to IvanNardi) with support of the latest protocol versions
  • Added 24 security risks to speed up the adoption of nDPI in cybersecurity; they can be used to detect obsolete protocol versions, invalid/outdated ciphers, encryption violations, insecure protocols and more.

Below you can find the complete changelog.

Enjoy!

 

Changelog

New Features

  • Completely reworked and extended QUIC dissector
  • Added flow risk concept to move nDPI towards result interpretation
  • Added ndpi_dpi2json() API call
  • Added DGA risk for names that look like a DGA
  • Added HyperLogLog cardinality estimator API calls
  • Added ndpi_bin_XXX API calls for bin handling
  • Fully fuzz-tested code, which has greatly improved reliability and robustness

New Supported Protocols and Services

  • QUIC
  • SMBv1
  • WebSocket
  • TLS: added ESNI support
  • SOAP
  • DNScrypt

Improvements

  • Python CFFI bindings
  • Various TLS extensions and fixes including extended metadata support
  • Added various pcap files for testing corner cases in protocols
  • Various improvements in JSON/Binary data serialisation
  • CiscoVPN
  • H323
  • MDNS
  • MySQL 8
  • IEC 60870-5-104
  • DoH/DoT dissection improvements
  • Office365 renamed to Microsoft365
  • Major protocol dissection improvement in particular with unknown traffic
  • Improvement in Telegram v6 protocol support
  • HTTP improvements to detect file download/upload and binary files
  • BitTorrent and WhatsApp dissection improvement
  • Spotify
  • Added detection of malformed packets
  • Fuzzy testing support has been greatly improved
  • SSH code cleanup

Fixes

  • Fixed various memory leaks and race conditions in protocol decoding
  • NATS, CAPWAP dissector
  • Removed HyperScan support that greatly simplified the code
  • ARM platform fixes on memory alignment
  • Wireshark extcap support
  • DPDK support
  • OpenWRT, OpenBSD support
  • MINGW compiler support

Misc

  • Created demo app for nDPI newcomers
  • Removed obsolete pplive and pando protocols

Introducing PF_RING 7.8: ZC support for new Intel adapters and much more


This is to announce a new PF_RING major release 7.8.

The main changes in this release include:

  • The new ice ZC driver supporting E800 Series 100 Gigabit Intel adapters.
  • Hardware timestamp support for packet trailers and keyframes generated by Arista 7150 Series and Metawatch devices. This also includes device information such as the Device ID and the Port ID.
  • BPF support for all ZC devices and queues, to filter both received and transmitted traffic (see the example after this list).
  • ZC API extensions to further simplify its use, which is one of the main advantages of this library, together with performance and flexibility.
  • FT (Flow Table) improvements for flow export with slicing, i.e. the ability to deliver periodic flow updates before flow termination. Application protocol detection has also been improved.
  • New libpcap v.1.9.1 and tcpdump v.4.9.3 including fixes for several CVEs.
  • Extended nBPF primitives to match Local or Remote IPs.
  • More sample applications and extensions to the existing ones to cover more use cases.

Below you can find the complete changelog.

Enjoy!

Changelog

  • PF_RING Library
    • Add support for Device ID and Port ID to the extended packet header
    • Add Arista 7150 Series hw timestamps support (keyframes and packet trailer parsing and stripping)
    • Add Metawatch Metamako hw timestamps support (packet trailer parsing and stripping)
    • errno EINTR is now returned on breakloop
    • Improve XDP support
    • Replace configure --enable-xdp with --disable-xdp (XDP enabled by default when supported)
  • ZC Library
    • New PF_RING_ZC_DEVICE_METAWATCH_TIMESTAMP flag to enable Metawatch hw timestamps
    • New pfring_zc_get_pool_id API to get the Pool ID
    • New pfring_zc_run_balancer_v2 pfring_zc_run_fanout_v2 APIs to support filtering functions
    • BPF support in ZC interfaces, standard interfaces and SPSC queues
    • Add support for BPF in TX queues
    • Builtin GTP hash now exposes GTP info (flags)
    • Fix CRC strip on ixgbevf
  • FT Library
    • New pfring_ft_flow_get_id API to get the flow ID
    • New PFRING_FT_IGNORE_HW_HASH flag to ignore hw packet hash
    • New PKT_FLAGS_FLOW_OFFLOAD_1ST packet flag (first packet of a flow)
    • Add support for flow slicing
    • New API pfring_ft_flow_get_users to get flow users (in case of slicing)
    • Improve application protocol detection
    • Fix bogus-IP headers parsing
  • PF_RING-aware Libpcap/Tcpdump
    • New libpcap v.1.9.1
    • New tcpdump v.4.9.3
    • stats.ps_recv now includes packets dropped due to out of buffer space
  • PF_RING Kernel Module
    • Fix channels with standard drivers
    • Fix 64-bit channel mask
    • Fix defragmentation of packets with ethernet padding
    • Fix unnecessary device mapping causing ifindex exhaustion
  • PF_RING Capture Modules
    • Update support for Fiberblaze adapters
    • Fix filtering with Accolade adapters
  • ZC Drivers
    • New ice ZC driver supporting E800 Series Intel adapters
    • Support for Ubuntu 20 LTS
    • Support for CentOS/RedHat 8.2
    • Fix queue attach/detach in ixgbe-zc
    • Support for kernel 5.4
  • nBPF
    • Add support for matching Local/Remote IP (new extended-BPF primitives)
    • Support uppercase AND/OR in extended-BPF filters
    • Fix extended-BPF grammar
  • Examples
    • New zfilter_mq_ipc sample app (packet filtering with multiple threads and fanout to multiple consumer processes)
    • ftflow:
      • New -H option to ignore hw hash setting PFRING_FT_IGNORE_HW_HASH
      • New -t option to print stats only
    • ftflow_dpdk
      • New -l option to run loopback tests
      • Add RX/TX ring size configuration
    • pfsend:
      • New -z option to precompute randomized sequence
      • New -W ID[,ID] option to forge VLAN and QinQ VLAN
    • zbalance_ipc:
      • New -x option to filter by VLAN ID
      • Add ability to set BPF to egress queues
      • Add ability to refresh BPF filters at runtime
      • New -G : option to forward GTP-C traffic to a specific queue
    • New zcount -f option to set BPF filters
    • New pfcount -F option (do not strip FCS)
    • New zcount/zcount_ipc -t option to read the packet payload
    • New pcount -e option to set the capture direction
    • Add VLAN to the flows dumped by ftflow
    • Fix transmission of small packets (less than 60 bytes)
    • Fix CPU affinity in ZC sample applications
  • Misc
    • Handle failures in service restart during package updates
    • Add linux headers dependency to the pfring-dkms package
    • Add actual version/revision to pfring-drivers-zc-dkms packages
    • Fix installed .so library and links
    • Fix ZC DAQ compilation and API update
    • Fix service scripts to avoid command injections

Introducing n2disk 3.6: full L7 support, fast flow export, replay rate control


This is to announce a new n2disk release 3.6.

This release adds full support for indexing and retrieving traffic based on the Layer-7 application protocol. This can now be enabled even when flow export is disabled, and it is possible to use the extraction tool to extract selected application traffic using the Layer-7 protocol as part of the nBPF filter.

n2disk is now also able to use the main storage as a cache, archiving pcap files by moving them from the fast storage to a slower one, even when the new “disk-limit” file schema is used. This is useful to handle peak hours with high throughput using a fast/small NVMe storage, moving data to a slower/larger/cheaper storage off peak, slowly or overnight.
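A minimal configuration sketch for this setup is shown below; the paths are illustrative, and -O is the archiving option mentioned in the changelog (see the n2disk documentation for the exact semantics):

--interface=eth1
--dump-directory=/nvme/pcap
--timeline-dir=/nvme/pcap
--disk-limit=80%
-O=/slow-storage/archive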

Flow export has been optimized to handle a high number of flows exported to ntopng while dumping traffic to disk. More settings have also been added to provide full control over flow termination and export.

The disk2n tool has also been improved: it is now possible to control the transmission rate by specifying the number of packets per second (e.g. --transmission-rate 100 for 100 packets/s), the bit rate (e.g. --transmission-rate 1.25Gbps for 1.25 Gigabit/s), or the relative speed (e.g. --transmission-rate 50% for 50% of the original traffic rate).

Below you can find the full changelog.

Enjoy!

Changelog

  • n2disk (dump)
    • Add support for Metawatch Metamako packet trailer (timestamp is added to packet header and index, device and port ID are exported as flow metadata using INPUT_SNMP/OUTPUT_SNMP/OBSERVATION_POINT_ID IEs)
    • Add support for Arista 7150 Series packet trailer and keyframes (timestamp is added to packet header and index)
    • New -E 2 option to enable application protocol (L7) indexing when ZMQ export is disabled
    • Add support for archiving to a slower storage (-O) when the --disk-limit dump schema is used
    • Set a default disk limit (auto computing 80% of free space + space already in use) when not configured
    • Increase maximum number of interfaces (up to 32)
    • Export FirstDumpedEpoch only when available
    • Fix access to latest deleted PCAP file epoch
    • Fix drop stats in PCAP mode (do not account drop in recv)
    • Fix index root folder (when a folder different from the dump folder is specified)
    • Fix -I<index path> with --disk-limit
    • Support for Ubuntu 20
  • Flow export
    • Add --lifetime-timeout and --idle-timeout options to control flow expiration
    • Optimize flow export with batch mode
    • Fix ZMQ message ID
  • npcapextract
    • Add support for L7 filtering using nBPF
  • disk2n (replay)
    • Add new --transmission-rate option to set the replay speed in bps, pps or % (relative to the original traffic speed)
  • Misc
    • Add -a option to npcapmove to generate absolute paths
    • Fix npcapmanage in case of relative paths
    • Fix logrotate configuration file permission

Introducing nProbe Cento 1.12: Combining Visibility and Cybersecurity at 100 Gbit


This is to announce the release of cento 1.12, a maintenance release for ntop’s 100 Gbit probe. In this version we have integrated support for the latest nDPI features, to combine processing speed with the latest innovations in application detection and cybersecurity. Cento’s JSON output has been greatly enhanced and now includes all the nDPI-dissected information, streaming JSON-based data to Kafka or ElasticSearch/Syslog consumers. This makes cento useful for cybersecurity analysis, combining visibility and security at 100 Gbit.

Enjoy!

Changelog

New Features

  • Core engine performance improvements
  • Added risk detection reported in flow dumps and ZMQ
  • Improved flow export over ZMQ: TLV format is the default now
  • Add support for ZMQ load-balancing and replication
  • Add support for ZMQ batch mode
  • Add ZMQ CURVE encryption (--zmq-encryption-key)
  • More Information Elements are now exported over ZMQ
  • New --hash-function|-H option to select the hash function for the flow table
  • New --snaplen|-l option to set the capture length
  • Add human readable TCP flags in JSON format
  • Improved flow export stats
  • Added Ubuntu 20 and CentOS 8 support

Changes

  • Information Elements exported over ZMQ now use PEN.NTOP-ID
  • QUIC improvements
  • DNS query is now returned also for MDNS
  • nDPI is now dynamically linked to avoid extensions and customisations
  • Updated flow offload support with Accolade adapters

Fixes

  • Fixed application protocol detection with nDPI (packets with no payload, packets timestamp)
  • Fixed a few bugs with the ZMQ export and statistics
  • Fixed package dependencies
  • Fixed packet length check
  • Fixed stats in PCAP mode
  • Fixed handling of IPv4 packets with zero header length
  • Fixed drop counter during application startup

Security-Centric Traffic Analysis


A few days ago we gave a short talk about cybersecurity at an Italian meetup. These are the presentation slides (in English) where you can read more about the steps we have taken to make our tools more cybersecurity-oriented.

Below you can also find the video, which is in Italian only (sorry about that).

Enjoy!

 

Introducing nProbe 9.2: Collection Pass-Through and Reforge, OpenWRT support, Flexible JSON-export


This is to announce the release of nProbe 9.2. The main new features of this release focus on flow collection speed and flexibility, in particular for modern JSON-based flow consumers. This enables applications relying on nProbe, e.g. ntopng, to scale up when collecting flows:

  • The new --collector-passthrough option allows the flow cache to be bypassed when flows are collected. This means that flows are forwarded to remote collectors unmodified (i.e. -T is not used) without placing them into the flow cache (i.e. flows are not merged by nProbe but forwarded as they are) for maximum speed; see the example after this list. In our tests this new feature greatly enhances flow collection and forwarding (~5x speedup), with flows being collected and exported at about 100k flows/sec per instance (and of course you can start multiple nProbe instances).
  • The new --collector-nf-reforge option allows incoming flows to be filtered according to the NetFlow/IPFIX interfaceId and reforged in terms of collector IP address. Thanks to this new option, it is possible to reconcile flows with the real sender IP in case of NAT, or to ignore flows created on network interfaces whose traffic needs to be discarded.
  • Better template handling when collecting from multiple routers: you can now simultaneously collect flows, per nProbe instance, coming from 128+ routers. This is very much needed when collecting from many small IoT devices sending flows to the same nProbe for conversion.
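As a sketch, a pass-through collection instance that forwards the collected flows to ntopng over ZMQ can be started as follows (ports are illustrative):

$ # Collect NetFlow/IPFIX on port 2055 and forward flows unmodified over ZMQ
$ nprobe -i none -3 2055 --collector-passthrough --zmq "tcp://*:5556"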

This new release also:

  • Supports the latest nDPI, making it possible to interpret flows and export the nDPI flow risk value, which distills flow information into security-oriented indicators that are very valuable for identifying cybersecurity issues.
  • Greatly enhances GTP and VoLTE support.
  • Improves OpenWRT support and optimises the code for running on embedded environments such as the new Nokia Beacon 6 home Wi-Fi.

Below you can find the complete changelog.

Enjoy!

ChangeLog

New Features and Command Line Options

  • Added Kafka and Syslog export when –collector-passthrough is used
  • Changed -p format to <Outer VLAN Id>.<Inner VLAN Id>/<proto>/<IP>/<port>/<TOS>/<SCTP StreamId>/<exporter IP>
  • Added the ability to specify a binding IPv4 address in collector mode (e.g. -3 127.0.0.1:1234)
  • Implemented --collector-nf-reforge for filtering and reforging collected NetFlow flows
  • Flow cache is now disabled by default in collection mode: replaced --disable-cache with --enable-collection-cache
  • Added --gtpv1-track-non-gtp-u-traffic and --gtpv2-track-non-gtp-u-traffic for non GTP-encapsulated user export in IE %FLOW_USER_NAME

Extensions

  • Added the ability to sniff from stdin by using -i -
  • Added %L7_PROTO_RISK %L7_PROTO_RISK_NAME
  • Added %TCP_WIN_MAX_IN %TCP_WIN_MAX_OUT IEs to @NTOPNG@
  • Added DNS/HTTP IEs to @NTOPNG@ in probe mode
  • Added collected flow lifetime export via ZMQ
  • Added IP-in-IP (IPv4 encapsulated in IPv6) support
  • Improved DNS plugin with additional records and NAPTR query type
  • Exporting %SEQ_PLEN as 8 bit element
  • Added TOS export via ZMQ
  • GTP traffic analysis improvements
  • Improved IMSI/APN traffic accounting and aggregation when using --imsi-apn-aggregation
  • Support for SIP over TCP (VoLTE)
  • Added IPv6 support in GTPv1
  • Added IPv4+IPv6 GTP-C v2 dissection
  • Improvement on GTP-C v1 dissection
  • Added support for %BGP_PREV_ADJACENT_ASN %BGP_NEXT_ADJACENT_ASN when collecting sFlow and Netflow
  • Added IPv6 PAA export
  • Support for overwriting element names with aliases provided by the user (case sensitive)

Bug Fixes

  • Fixed detection of multiple connections on the same port (RST) exporting multiple flows
  • Fixed EXPORTER_IPV6_ADDRESS
  • Fixed UNTUNNELED_IPV6_SRC_ADDR / UNTUNNELED_IPV6_DST_ADDR
  • Fixed dump of IPv6 flows to MySQL
  • Fixed shutdown crashes
  • Fixed kafka stats number overflow
  • Fixed multiple --collection-filter options
  • Fixed accounting of bidirectional flows in stats
  • Fixed export of empty data
  • Fixed invalid flow idle computation
  • Fixed CSV export (always print all columns)
  • Fixed AS lookup/calculation support for .mmdb files part of the ntopng-data package
  • Fixed bug that caused FLOW_USER_NAME to be empty
  • Fixed custom template elements support
  • Fixed SIP decoding with malformed packets
  • Fixed IPv6 dissection when encapsulated in GTP
  • Fixed application protocol detection with GTP
  • Fixed GTPv1 GTPV1_END_USER_IP field
  • Fixed drop count

Miscellaneous

  • Moved all binaries and libraries from /usr/local/ to /usr/
  • Plugins are now loaded from ./plugins, /usr/lib/nprobe/plugins, /usr/local/lib/nprobe/plugins
  • Added Ubuntu 20.04 support
  • Improved OpenWRT support
  • Windows fixes
  • Improved plugins SDK

Say Hello to ntopng 4.2: Flexible Alerting, Major Speedup, Scada, Cybersecurity


We are pleased to introduce ntopng 4.2, which brings several new features and breakthroughs while consolidating the changes introduced with 4.0. The main goals of this release include:

  • Enhance and simplify how alerts are delivered to consumers
  • Many internal components of ntopng have been rewritten to improve overall performance, reduce system load, and process more data while using less memory than 4.0.
  • Cybersecurity extensions have been greatly enhanced by leveraging the latest nDPI enhancements, which enabled the creation of several user scripts able to supervise many security aspects of modern systems.
  • Behavioral traffic analysis and lateral traffic movement detection for finding cybersecurity threats in traffic noise.
  • Initial Scada support with native IEC 60870-5-104 support. We acknowledge switch.ch for having supported this development.
  • Consolidation of Suricata and external alerts integration to further open ntopng to the integration of commercial security devices.
  • SNMP support has been enhanced in terms of speed, SNMPv3 protocol support, and variety of supported devices.
  • New REST API that enables the integration of ntopng with third party applications such as CheckMK.

Given the long list of new features and enhancements, we plan to write new posts, make videos and organise online training sessions to introduce our community to this new release.

Flexible Alerts Handling

The way alerts are delivered to interested recipients has been completely reworked. Before version 4.2, all the generated alerts were delivered to all recipients, causing issues such as:

  • Recipients flooded with too many alerts
  • Recipients getting alerts they’re not interested in

For these reasons, we wanted to rethink and redesign the way alerts are delivered to recipients, as described in the user’s guide. We wanted enough flexibility to:

  • Avoid flooding recipients with unwanted alerts, by introducing flexible alert delivery.
  • Selectively send alerts to a recipient subset based on:
    • Severity-based criteria (e.g., only send alerts with severity error or higher to that particular recipient)
    • Type-based criteria (e.g., only send security-related alerts to that particular recipient)

For example, the way alerts are now delivered to recipients allows you to create policies such as:

  • Send security-related alerts to an Elasticsearch instance managed by the SecOps
  • Send network-related alerts via email to the NetOps
  • Send ntopng login attempts and configuration changes on the Discord channel of the DevOps
  • Send alerts with severity error or higher to SecOps, NetOps, and DevOps together

See this post for a comprehensive discussion and additional examples.

Scalable SNMP v2c/v3 support

This 4.2 release also carries an almost-completely rewritten SNMP engine. The new engine

  • Supports SNMP v2c and v3
  • Features SNMP bulk requests to greatly improve speed
  • Polls multiple devices in parallel to increase throughput

This is a great step forward compared to the SNMP engine featured in version 4.0 which was definitely slower.

With the new engine it is also possible to enforce SNMP attack mitigation, which toggles the administrative status of an SNMP port to down when a malicious host is connected to it.

Additional New Features

Among the new features shipped with version 4.2 it is worth mentioning

  • Traffic Behavioral Analysis
    • Periodic Traffic
    • Lateral Movements
    • TLS with self-signed certificates, issuerDN, subjectDN
  • Support for Industrial IOT and Scada with modbus, DNP3 and IEC60870
  • Active monitoring
    • Support for ICMP v4/v6, HTTP, HTTPS and native Speedtest for measuring the available bandwidth.
    • Ability to generate alerts upon unreachable or slow hosts or services.
  • Detection of unexpected servers (DHCP, NTP, SMTP, DNS).
  • Services map.
  • Enhanced nIndex integration to maximize flow dump performance and provide better flow drill-down features.
  • MacOS package.

For a comprehensive list of features, changes, and fixes, have a look at the CHANGELOG.

So now it’s time for you to give version 4.2 a try! And feel free to join the discussion!

Howto Write a Telegram Alert Endpoint for ntopng


Telegram is a popular messaging application that many people use daily to do instant messaging and receive notifications. As of ntopng 4.2, it is now possible to deliver alerts to external entities including Slack, email and Discord.

This post will show you how the Telegram alert endpoint has been developed so that readers can learn how to contribute to the ntopng development by coding new integrations. For a complete guide about alert endpoints, please refer to the ntopng user’s guide, whereas the complete telegram endpoint source code can be found here.

We suppose that you have downloaded the ntopng code from GitHub. Once done, go to ntopng/scripts/plugins/endpoints: here you can find the folder where endpoints are stored. As for all the other endpoints, you need to create a new folder for your endpoint (e.g. “some_alert_endpoint”): for telegram we created a folder named telegram_endpoint. Then you need to create two files named

  • http_lint.lua
    This file is used to check (this is known as linting) the values of the parameters that will be passed to the endpoint through the ntopng web interface. It needs to have a single function, called script.getAdditionalParameters(http_lint), that returns the checks to be performed on the inputs.
  • manifest.lua
    It contains the title, the description and the author of the plugin.

Inside the telegram endpoint folder you also need to create a templates folder that will contain both the endpoint and the recipient HTML templates. These two templates are used to show the various input arguments, descriptions, etc. in the GUI for the endpoint and recipient configuration respectively.

The various template parameters (format, inputs and so on) shown in the GUI are taken from the locales folder, divided per language file, such as en.lua (English), it.lua (Italian) and so on. ntopng will choose the language set in the preferences panel for the ntopng web user.

Furthermore, there is a new directory named alert_endpoint, where we’re going to store the endpoint code. In this folder create a file named telegram.lua and put there all the code necessary to send telegram messages, format the parameters and so on.

The first part of this script is depicted below.

It defines the HTTP parameters used in the recipient and endpoint forms of telegram. Inside the XXX_params tables you need to specify the parameters passed in the input forms: endpoint_params will be used by the endpoint form template, and the other by the recipient form template. Additionally, it is required to create a function called telegram.format_recipient_params(recipient_params), responsible for formatting the output of the recipient parameters.
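As a minimal sketch, the parameter declaration can look like the following; the parameter names telegram_token and telegram_channel are assumptions for illustration, so check the actual telegram.lua in the ntopng sources for the exact fields:

local telegram = {}

-- Parameters shown in the endpoint configuration form (assumed names)
telegram.endpoint_params = {
   { param_name = "telegram_token" },   -- the Telegram bot token
}

-- Parameters shown in the recipient configuration form (assumed names)
telegram.recipient_params = {
   { param_name = "telegram_channel" }, -- the chat/channel to notify
}

-- Format the recipient parameters shown in the GUI
function telegram.format_recipient_params(recipient_params)
   return string.format("(%s)", recipient_params.telegram_channel or "")
end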

At this point you can create a function, called periodically, that is responsible for processing the queued alerts (generated by ntopng when specific traffic patterns are detected, e.g. when a host contacts a malware site), dequeueing them and delivering them to telegram. This function is named telegram.dequeueRecipientAlerts().

In the above code the ntop.recipient_dequeue() function is the one that will dequeue triggered alerts that need to be sent via telegram.

Once notifications have been dequeued, they can be delivered to Telegram via an HTTP POST according to the JSON format specified by the Telegram API.
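Such a delivery can be reproduced manually with curl, which is handy to verify your bot token and chat ID before wiring them into the endpoint (replace the placeholders with your own values):

$ curl -s -X POST "https://api.telegram.org/bot<token>/sendMessage" \
    -d chat_id=<chat_id> -d text="test alert from ntopng"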

We’re almost ready: we just need to create an additional function, used by the ntopng engine when, during recipient configuration, the administrator wants to check that the setup is correct by sending a test message.

That’s all: your telegram endpoint is now complete. As you can see, most of the work is required to create the web template and format the message that will be sent via HTTP POST. As most systems now feature a REST API, we believe that integrating additional endpoints should be pretty simple: clone the telegram endpoint code, rename telegram to my_endpoint and make a few cosmetic changes. Are you ready to contribute to the ntopng development by creating a new endpoint?

Enjoy!

You’re Invited to the ntop MiniConference 2020: November 24th, December 3rd and 10th


This year, due to the pandemic, we had to cancel our scheduled community event. Considering that we have introduced many new features in our tools, we would like to invite you to an online mini-conference divided into three distinct events. The first event is a general one where we briefly summarise what we have done in the individual tools, so people can have an overview of our work and where we would like to go. The other two events focus on specific tools, so people can join to learn in more detail what those features are about, what problems they address, and how to use them. Below you can find the schedule of the individual events.

November 24th

[Calendar: miniconference, Webinar URL, Duration: 90 min, 4 PM CET/10 AM EST]

  • Introduction: overview of 2020 changes and improvements, 2021 roadmap.
  • Update on nProbe 9.2, nProbe Cento 1.12, and nDPI 3.4
  • Overview of new features of ntopng 4.2
  • How to produce and deliver alerts to recipients and consumer applications
  • How to monitor ICS/Scada networks with ntopng
  • New features introduced with n2n 2.8
  • Update on PF_RING 7.8 and n2disk 3.6
  • Public discussion with the ntop community

December 3rd

[Calendar: ntopng, Webinar URL, Duration: 90 min, 4 PM CET/10 AM EST]

  • Using ntopng 4.2: real life scenarios that highlight how new features can be used in practical use cases
  • How to implement ntopng endpoints and alert scripts, for extending ntopng
  • Using ntopng Edge in production
  • Embedding ntopng in Cubro EXA8

December 10th

[Calendar: nprobe_n2disk, Webinar URL, Duration: 90 min, 4 PM CET/10 AM EST]

  • Traffic monitoring: how to use probes to deliver monitoring data to ntopng and external  applications.
  • Embedding nProbe: low-end OpenWRT-based systems and 40-100 Gbit cPacket cProbe
  • Mastering n2disk for efficient packet-to-disk, 100 Gbit and metadata generation

Notes

  • All events will be online (English language)
  • They are scheduled at 4 PM CET / 10 AM EST in order to enable everyone to join.
  • All events will be recorded in case somebody misses them.
  • You do not need to register to attend them: just click on the event webinar URL (each event has a different URL)

Using ntop tools on VyOS


VyOS is a popular open-source router and firewall platform based on Linux, and some of our users asked us to support it natively. This post explains how to achieve that in a few simple steps.

Prerequisites

As VyOS is based on Debian Linux, the easiest solution is to install precompiled Debian packages or compile the tools from source.

In order to do this you need to configure the Debian repositories, which are empty on VyOS. You need (as root) to edit /etc/apt/sources.list and put into it something like this:


deb http://mi.mirror.garr.it/mirrors/debian/ jessie main
deb-src http://mi.mirror.garr.it/mirrors/debian/ jessie main
deb http://archive.debian.org/debian jessie-backports main
deb http://security.debian.org/ jessie/updates main
deb-src http://security.debian.org/ jessie/updates main
deb http://mi.mirror.garr.it/mirrors/debian/ jessie-updates main
deb-src http://mi.mirror.garr.it/mirrors/debian/ jessie-updates main

As of today, we are using VyOS 1.2.x, which is based on Debian 8 (jessie). For different VyOS versions you might need a different Debian version, which you can find out by running the following command:

root@vyos:/home/vyos# lsb_release -a
No LSB modules are available.
Distributor ID:	Debian
Description:	Debian GNU/Linux 8.11 (jessie)
Release:	8.11
Codename:	jessie

Furthermore please make sure you use the best mirror for your country (in this example we used the Italian Debian mirror).

You are now ready to do

apt-get update

and your VyOS installation will now look like a Debian box where you can install your favorite packages.

How to install ntopng

At this point you have two options: you can install the precompiled binary packages (following the instructions at https://packages.ntop.org), or compile ntopng from source.

Installing Additional Packages

If you decided to use binary packages, you can also install additional ntop packages such as nProbe, which can turn your VyOS router into a full-fledged nDPI-based NetFlow/IPFIX probe, or into a remote probe for a ntopng installation running on a remote server.
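For instance, on a jessie-based VyOS, configuring the ntop repository and installing the packages boils down to something like the following sketch (the exact name of the repository installer package may differ; see https://packages.ntop.org for the instructions matching your Debian release):

$ wget https://packages.ntop.org/apt-stable/jessie/all/apt-ntop-stable.deb
$ dpkg -i apt-ntop-stable.deb
$ apt-get clean all
$ apt-get update
$ apt-get install pfring nprobe ntopng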

Enjoy!

Embedding ntop: Nokia Beacon and Ubiquiti UniFi Dream Machine


The latest generation of network devices is pretty powerful and open. This means that such devices ship with a Linux-based distribution such as OpenWRT or UniFi OS, and that it is possible to install third party software on them, as the CPU is pretty powerful and there is some storage and memory available for running additional applications. In this blog post we want to describe our experience with two of these devices on which it is possible to install ntop tools. This allows the network traffic to be monitored without installing additional equipment such as a network tap, or a port mirror that sends monitored traffic to a PC running monitoring tools. This is a real advantage, as with a simple software upgrade it is possible to have traffic visibility even on home networks.

Nokia Beacon

The Nokia WiFi Beacon is a family of WiFi mesh devices using OpenWRT whose source code is available at this URL.

We have ported nProbe to the Nokia WiFi Beacon 6 by cross-compiling on Ubuntu 18.04 LTS, building a .ipk OpenWRT package that can be installed with sudo opkg install https://packages.ntop.org/Nokia/Beacon6/nprobe_9.1.200819-1_ipq.ipk after you have ssh’d into your Beacon 6 device.

UniFi Dream Machine

Ubiquiti Dream Machine is a powerful network appliance based on UniFi OS, Ubiquiti’s Linux-based OS.

On this device it is possible to install native packages or even run containers. ntopng-udm is a prebuilt Docker image of ntopng, ready to run on the UDM or UDM Pro, that allows ntopng to run natively on the device.

There are many other devices that could be used in similar ways. At ntop we’re experimenting with embedded devices in order to bring visibility to the network edge, where monitoring tools are either missing or poor (always the same old monitoring tools based on IP/port/bytes/packets, with no DPI or quality indicators). Stay tuned, and attend our upcoming mini-conference 2020 to learn more about this topic.

 

Enjoy!

Using ntopng as network sensor for SecurityOnion (and integrated with Suricata)


SecurityOnion (SO) is a popular Linux distribution for threat hunting and security. It includes ElasticSearch as a backend for storing alerts, as well as a Kibana-based web interface. SO ships out of the box with a few sensors such as Suricata, a signature-based IDS used for flow analysis. To date SO does not include a tool able to merge network and security analysis, or that can collect input from sensors and provide a high-level consolidated alert (e.g. a DoS, versus the individual alerts generated by Suricata). As most of our users know, ntopng already integrates Suricata, and in this blog post we explain how to export this information into SO, for the best of both worlds including DPI visibility.

In order to use ntopng with SO, you need to use the latest 4.3 ntopng dev build. You can export directly to SO via the ElasticSearch ntopng endpoint (you need ntopng Enterprise for that), or export via the syslog endpoint (all versions, from the community edition up, will work) to SO and then import data into ElasticSearch via Logstash. In this blog post we’ll cover the first option; if you want to use the syslog way, make sure you export your data in JSON format.

As SO is basically a CentOS 7 distribution, you can install ntopng directly on your SO box (just follow the instructions at https://packages.ntop.org for installing ntop packages via yum), or on an external box that sends alerts to the ElasticSearch instance running on the SO box. If you follow this path, make sure you configure the firewall on SO to allow data to be sent from the external box to SO. You can do this as follows (in our case the SO box is active at IP 192.168.2.163 and our local network is 192.168.0.0/16):

[root@securityonion yum.repos.d]# so-allow
This program allows you to add a firewall rule to allow connections from a new IP address.

Choose the role for the IP or Range you would like to add

[a] - Analyst - ports 80/tcp and 443/tcp
[b] - Logstash Beat - port 5044/tcp
[e] - Elasticsearch REST API - port 9200/tcp
[f] - Strelka frontend - port 57314/tcp
[o] - Osquery endpoint - port 8090/tcp
[s] - Syslog device - 514/tcp/udp
[w] - Wazuh agent - port 1514/tcp/udp
[p] - Wazuh API - port 55000/tcp
[r] - Wazuh registration service - 1515/tcp

Please enter your selection:
e
Enter a single ip address or range to allow (example: 10.10.10.10 or 10.10.0.0/16):
192.168.0.0/16
Adding 192.168.0.0/16 to the elasticsearch_rest role. This can take a few seconds

You now need to go inside ntopng, switch the interface to System, choose Notifications in the sidebar, and define a new endpoint as follows

You can now define a recipient associated with this endpoint (still from the Notifications sidebar menu) as follows

Make sure you click on the Check button before creating it, to verify that everything is working from the connectivity standpoint. At this point you need to bind this recipient to the pools for which you want to export data to SO.

In essence this mechanism allows you to specify what alerts you want to send to SO: their severity, only those for specific hosts or interfaces, etc. This avoids flooding SO with too many alerts, exporting only those you care about.

Once this is done you’re basically ready: go to the SO web console and explore the data sent by ntopng.

Below you can see the network dashboard with nDPI-generated information in the top-right tile.

You can click on an nDPI protocol to drill down and see, for instance, which hosts are using Telegram.

Of course you can drill down at any time and explore raw alerts. Below you can see an example of a ntopng-generated alert

and in JSON format according to the latest Elastic Common Schema (ECS)

{
  "_index": "securityonion:so-ntopng-2020.11.23",
  "_type": "_doc",
  "_id": "6G-l9HUBxzCea2aR50WS",
  "_version": 1,
  "_score": null,
  "_source": {
    "ecs": {
      "version": "1.6.0"
    },
    "rule": {
      "name": "Low Goodput Ratio"
    },
    "organization": {
      "name": "ntop"
    },
    "network": {
      "protocol": "tls.amazon",
      "community_id": "1:lVT6PgEISWUJa00vjxX2fPcNKZo=",
      "transport": "tcp"
    },
    "message": "{\"srv_continent_name\":\"NA\",\"alert_type\":72,\"srv_addr\":\"52.0.218.127\",\"cli2srv_bytes\":204,\"first_seen\":1606127296,\"pool_id\":0,\"srv_localhost\":false,\"cli_city_name\":\"\",\"srv_asn\":14618,\"ifid\":2,\"cli_country_name\":\"\",\"cli_port\":35314,\"srv_location_lon\":-77,\"cli_asn\":0,\"cli_addr\":\"192.168.1.11\",\"cli_localhost\":true,\"cli_blacklisted\":false,\"srv_port\":443,\"srv_city_name\":\"\",\"alert_json\":\"{\"info\":\"\",\"status_info\":\"{\"hash_entry_id\":303,\"alert_generation\":{\"confset_id\":0,\"script_key\":\"low_goodput\",\"subdir\":\"flow\"},\"ntopng.key\":4104816922,\"goodput_ratio\":45.753425598145}\"}\",\"l7_proto\":178,\"community_id\":\"1:lVT6PgEISWUJa00vjxX2fPcNKZo=\",\"srv_country_name\":\"US\",\"l7_master_proto\":91,\"srv2cli_packets\":1,\"srv2cli_bytes\":161,\"srv_os\":\"\",\"flow_status\":12,\"proto.ndpi\":\"TLS.Amazon\",\"is_flow_alert\":true,\"alert_tstamp\":1606127313,\"cli_continent_name\":\"\",\"action\":\"store\",\"score\":10,\"alert_entity_val\":\"flow\",\"alert_entity\":4,\"srv_blacklisted\":false,\"proto\":6,\"alert_severity\":3,\"cli_os\":\"\",\"vlan_id\":0,\"srv_location_lat\":39,\"cli2srv_packets\":2}",
    "source": {
      "port": 35314,
      "ip": "192.168.1.11"
    },
    "destination": {
      "geo": {
        "location": {
          "lon": -77,
          "lat": 39
        },
        "country_iso_code": "US",
        "continent_name": "NA"
      },
      "as": {
        "number": 14618
      },
      "port": 443,
      "ip": "52.0.218.127"
    },
    "event": {
      "risk_score": 10,
      "created": "2020-11-23T10:28:33.0Z",
      "severity_label": "low",
      "kind": "alert",
      "category": "network",
      "module": "ntopng",
      "dataset": "alerts",
      "severity": 3
    },
    "@timestamp": "2020-11-23T10:28:33.0Z"
  },
  "fields": {
    "@timestamp": [
      "2020-11-23T10:28:33.000Z"
    ],
    "Push to TheHive": [
      "https://192.168.2.163/soctopus/thehive/case/6G-l9HUBxzCea2aR50WS"
    ]
  },
  "sort": [
    1606127313000
  ]
}

Of course if you want, you can create new Kibana dashboards for ntopng alerts, and extend the SO visualisation system with a few clicks.

You can learn more about ntopng endpoints and recipients in the ntopng user’s guide.

Enjoy!

Dec 3rd, ntop miniconf 2020 part II: ntopng


This is a reminder for the second part of our mini-conference 2020, scheduled for this Thursday, December 3rd, 4 PM CET/10 AM EST. This time we’ll focus on the latest ntopng 4.2 features. We have the pleasure of hosting our friends at Tribe29, who will preview how ntopng has been integrated with CheckMK; Nextworks and VerXo, who will talk about using ntopng and ntopng Edge in real use cases; and Cubro, who will present a new product that embeds ntopng.

Below you can find all details, including the webinar link and calendar entry.

[Calendar: ntopng, Webinar URL, Duration: 90 min, 4 PM CET/10 AM EST]

  • Using ntopng 4.2: real life scenarios that highlight how new features can be used in practical use cases
  • Jan Justus, CEO Tribe29: CheckMK integration with ntopng
  • Cristiano Bozzi, Nextworks: Using ntopng Edge in boatyards
  • Giordano Zambelli, VerXo: Using ntopng in pharmaceutical and industrial environments
  • Christian Ferenz, CEO Cubro, Embedding ntopng in Cubro Omnia 10 appliances

You can read more about the 2020 ntop mini-conference at this URL. The third and last part of our conference will take place December 10th.

Hope to see you!


Exploiting Arista MetaWatch with n2disk and ntopng: HighRes Timestamping and Analytics


Precise packet timestamping is a key feature for network traffic analysis and troubleshooting. Traditionally many people use FPGA-based NICs with precise timestamping (e.g. Napatech, Silicom), even though good precision can be obtained with PTP-based NICs such as many Intel network adapters. A better alternative to this practice is to avoid specialised adapters altogether and rely on existing network devices to timestamp packets.

Arista packet brokers with MetaWatch can be configured to add an extra trailer (Metamako) with metadata to every captured packet. In fact Arista 7150 Series devices are able to add packet trailers and generate keyframes to provide high-resolution timestamping, allowing for advanced network analysis and precise latency measurements. Arista MetaWatch devices are also able to include device information, such as the device ID and the incoming port ID, for captured packets, and thus identify the packet source; this information is then propagated to packet consumers. Below you can see an example of the packet trailer containing this information.
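
As a reference, this is the typical layout of the Metamako trailer; the field sizes below are as commonly documented for MetaWatch, but please treat them as an assumption and double-check the Arista documentation for your device and firmware version:

4 bytes   timestamp (seconds)
4 bytes   timestamp (nanoseconds)
1 byte    flags
2 bytes   device ID
1 byte    port ID

The trailer is appended after the original payload and covered by the recomputed Ethernet FCS.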

 

n2disk, ntop’s software tool for packet recording, is able to dump traffic and build an index on the fly, enabling quick traffic retrieval by specifying a time interval and BPF-like criteria. In addition to the 5-tuple, n2disk is able to index extended metadata, including that provided by MetaWatch devices, and to use the timestamp reported in the packet trailer. n2disk stores the device ID and interface ID in the packet index and allows using them when running traffic extractions. A typical use case is the ability to retrieve traffic that went through a specific port at a specific time in our network, in addition to traditional IP address, port and layer-7 based filters.

This can be enabled in n2disk by adding --extended-index 4 and --hw-timestamp metawatch to the configuration (for further information please read the documentation). Example:

--interface=eth1
--dump-directory=/storage
--timeline-dir=/storage
--disk-limit=90%
--index
--extended-index=4
--hw-timestamp=metawatch
--index-on-compressor-threads
--reader-cpu-affinity=0
--compressor-cpu-affinity=1
--writer-cpu-affinity=2
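
Once device and interface identifiers are part of the index, matching traffic can be retrieved from the timeline with npcapextract. Below is a minimal sketch: the time range, filter and output path are illustrative, while the timeline directory matches the --timeline-dir used above (please refer to the n2disk documentation for the extended-index filter syntax covering device/interface IDs):

npcapextract -t /storage -b "2020-12-01 10:00:00" -e "2020-12-01 10:05:00" \
             -f "host 192.168.1.10 and port 443" -o /tmp/extracted.pcap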

n2disk is also able to export flow metadata to ntopng, acting as a flow probe, similar to what nProbe or nProbe Cento do. In this configuration, when support for MetaWatch devices is enabled with --hw-timestamp metawatch, n2disk also exports device and port information by populating the %INPUT_SNMP, %OUTPUT_SNMP and %OBSERVATION_POINT_ID Information Elements. Example n2disk configuration file:

--interface=eth1
--dump-directory=/storage
--timeline-dir=/storage
--disk-limit=90%
--index
--extended-index=4
--hw-timestamp=metawatch
--index-on-compressor-threads
--reader-cpu-affinity=0
--compressor-cpu-affinity=1
--writer-cpu-affinity=2
--zmq=tcp://127.0.0.1:5556
--zmq-export-flows

ntopng configuration file example:

-i=tcp://*:5556c

In this configuration, device and ingress/egress port information are collected by ntopng and displayed in the flow details page as depicted below, and hardware timestamps are used in dumped pcaps. Note that the trailing c in the -i endpoint above instructs ntopng to act as a ZMQ collector.

In summary, thanks to Metamako support in ntop tools, it is possible to combine precise timestamping and packet-to-disk recording with real-time monitoring capabilities. This is an improvement with respect to hardware-based timestamping NICs, which provide just timestamping and offer no device visibility or mapping onto the actual network topology, a very useful feature to locate and troubleshoot network issues.

Enjoy!

Dec 10th, ntop miniconf 2020 part III: nProbe and n2disk (on embedded systems)


This is a reminder for the third and last part of our mini-conference 2020, scheduled for this Thursday, December 10th, 4 PM CET/10 AM EST. This time we’ll focus on the latest nProbe and n2disk features and provide a short practical tutorial. In addition we’ll cover ntopng alerts and endpoints. Finally, we’ll discuss how to embed ntop tools in small devices for ubiquitous monitoring.

Below you can find all details, including the webinar link and calendar entry.

[Calendar: nprobe_n2disk, Webinar URL, Duration: 90 min, 4 PM CET/10 AM EST]

  • Luca Deri: nProbe Traffic Monitoring and Embedding
  • Carlos Talbot: Embedding ntopng on Ubiquiti UDM
  • Matteo Biscosi: Using ntopng alerts and endpoints
  • Alfredo Cardigliano: n2disk deep dive
  • Marco Graziano: ntop Defender
  • Public discussion

You can read more about the 2020 ntop mini-conference at this URL.

Efficiently Detecting and Blocking SunBurst Malware


Earlier this month a new highly evasive malware named SunBurst has been disclosed. Some countermeasures have been published immediately, in particular some Snort/Suricata rules. We have analysed the rules trying to figure out whether ntop tools could detect and block SunBurst, and the answer is yes, you can. Let’s have a look at some of the rules. The first thing you can observe is that the rules are any/any, meaning that an IDS has to look into every single connection: this is because most IDSs do not use DPI as ntop tools do, hence they need to search everywhere instead of targeting the exact fields. This means that overall the tool performance is degraded, as even traffic that is not relevant has to be analysed, and that you might encounter false positives.

The rules below

alert tcp any any <> any 443 (msg:"APT.Backdoor.MSIL.SUNBURST"; content:"|16 03|"; depth:2; content:"avsvmcloud.com"; distance:0; sid:77600845; rev:1;) 
alert tcp any any <> any 443 (msg:"APT.Backdoor.MSIL.SUNBURST"; content:"|16 03|"; depth:2; content:"|55 04 03|"; distance:0; content:"digitalcollege.org"; within:50; sid:77600846; rev:1;) 
alert tcp any any <> any 443 (msg:"APT.Backdoor.MSIL.SUNBURST"; content:"|16 03|"; depth:2; content:"|55 04 03|"; distance:0; content:"freescanonline.com"; within:50; sid:77600847; rev:1;) 

are basically a TLS SNI (Server Name Indication) match that you can detect with nDPI.

Note that such rules are suboptimal, as they have been designed when traffic was mostly unencrypted, and so they are very primitive and limited in scope. See for instance what nDPI reports for such TLS traffic.
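
The report below can be reproduced by running ndpiReader (the sample application shipped with nDPI) against a capture of the suspicious traffic, e.g.:

$ ndpiReader -i avsvmcloud.com.pcap -v 2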

TCP 192.168.1.102:51293 <-> 20.140.0.1:443 [proto: 91/TLS][cat: Web/5][7 pkts/998 bytes <-> 6 pkts/1553 bytes][Goodput ratio: 52/74][1.74 sec][ALPN: h2;http/1.1][bytes ratio: -0.218 (Download)][IAT c2s/s2c min/avg/max/stddev: 0/109 253/420 1142/1033 447/434][Pkt Len c2s/s2c min/avg/max/stddev: 66/66 143/259 583/1215 180/428][Risk: ** Self-signed Certificate **][TLSv1.2][Client: avsvmcloud.com][JA3C: 2a26b1a62e40d25d4de3babc9d532f30][JA3S: 364ff14b04ef93c3b4cfa429d729c0d9][Issuer: CN=localhost][Subject: CN=localhost][Certificate SHA-1: D2:D1:B8:2B:15:FB:C9:51:B7:24:FF:56:B4:EF:9D:82:E2:E5:EA:B3][Validity: 2020-10-14 21:20:12 – 2022-12-17 11:32:25][Cipher: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384][Plen Bins: 33,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,33,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,33,0,0,0,0,0,0,0,0,0,0,0,0]

As you can see, this flow uses a self-signed TLS certificate, which is not a good sign either.

Other rules, like those below, are different in form but similar in spirit

alert tcp any any -> any any (msg:"APT.Backdoor.MSIL.SUNBURST"; content:"T "; offset:2; depth:3; content:"Host:"; content:"freescanonline.com"; within:100; sid:77600852; rev:1;) 
alert tcp any any -> any any (msg:"APT.Backdoor.MSIL.SUNBURST"; content:"T "; offset:2; depth:3; content:"Host:"; content:"deftsecurity.com"; within:100; sid:77600853; rev:1;) 
alert tcp any any -> any any (msg:"APT.Backdoor.MSIL.SUNBURST"; content:"T "; offset:2; depth:3; content:"Host:"; content:"thedoccloud.com"; within:100; sid:77600854; rev:1;) 
alert tcp any any -> any any (msg:"APT.Backdoor.MSIL.SUNBURST"; content:"T "; offset:2; depth:3; content:"Host:"; content:"virtualdataserver.com"; within:100; sid:77600855; rev:1;)

In this case these rules basically say: search HTTP traffic (even on non-standard ports) and raise an alert in case you find connections towards specific sites (e.g. freescanonline.com).

In summary, these are old-style rules designed for year-2000 protocols, and they need to be refreshed. These are the equivalent rules for nDPI:

$ cat sunburst.protos
#  Format:
#  <tcp|udp>:<port>,<tcp|udp>:<port>,.....@<proto>
#
#  Subprotocols
#  Format:
#  host:"<value>",host:"<value>",.....@<subproto>
#
#  IP based Subprotocols
#  Format:
#  ip:<value>,ip:<value>,.....@<subproto>

host:"avsvmcloud.com"@APT.Backdoor.MSIL.SUNBURST
host:"digitalcollege.org"@APT.Backdoor.MSIL.SUNBURST
host:"freescanonline.com"@APT.Backdoor.MSIL.SUNBURST
host:"freescanonline.com"@APT.Backdoor.MSIL.SUNBURST
host:"deftsecurity.com"@APT.Backdoor.MSIL.SUNBURST
host:"thedoccloud.com"@APT.Backdoor.MSIL.SUNBURST
host:"virtualdataserver.com"@APT.Backdoor.MSIL.SUNBURST

and you can now start ndpiReader as follows

$ ndpiReader -p sunburst.protos -i ~/avsvmcloud.com.pcap -v 2

...

Detected protocols:
APT.Backdoor.MSIL.SUNBURST packets: 13 bytes: 2551 flows: 1

Protocol statistics:
Acceptable 2551 bytes

JA3 Host Stats:
IP Address # JA3C
1 192.168.1.102 1

1 TCP 192.168.1.102:51293 <-> 20.140.0.1:443 [proto: 91.255/TLS.APT.Backdoor.MSIL.SUNBURST][cat: Web/5][7 pkts/998 bytes <-> 6 pkts/1553 bytes][Goodput ratio: 52/74][1.74 sec][ALPN: h2;http/1.1][bytes ratio: -0.218 (Download)][IAT c2s/s2c min/avg/max/stddev: 0/109 253/420 1142/1033 447/434][Pkt Len c2s/s2c min/avg/max/stddev: 66/66 143/259 583/1215 180/428][Risk: ** Self-signed Certificate **][TLSv1.2][Client: avsvmcloud.com][JA3C: 2a26b1a62e40d25d4de3babc9d532f30][JA3S: 364ff14b04ef93c3b4cfa429d729c0d9][Issuer: CN=localhost][Subject: CN=localhost][Certificate SHA-1: D2:D1:B8:2B:15:FB:C9:51:B7:24:FF:56:B4:EF:9D:82:E2:E5:EA:B3][Validity: 2020-10-14 21:20:12 - 2022-12-17 11:32:25][Cipher: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384][Plen Bins: 33,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,33,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,33,0,0,0,0,0,0,0,0,0,0,0,0]

Now you can use this technique in other tools such as ntopng as follows

ntopng -p sunburst.protos -i ~/avsvmcloud.com.pcap

Then inside ntopng you have to tell it that SunBurst is a malware by assigning (menu Settings -> Applications and Categories) the SunBurst protocol to the malware category.

Once done, ntopng detects it as malware

and triggers an alert

that can be sent via the endpoint/recipients mechanism to external applications, messaging apps, ElasticSearch or SecurityOnion.

If, in addition to detecting it, you also want to block it, just use ntopng Edge (which is basically ntopng inline) and you’re done.

Enjoy !

 

A Step-By-Step Guide for Protecting Your Network with nScrub


Distributed Denial of Service (DDoS) attacks represent a family of cyber-attacks that are more and more common nowadays. They aim to make a service unavailable by overwhelming the victim with high traffic volumes (this is the case of volumetric or amplification attacks based on UDP, ICMP, DNS, …) or a high number of requests (including TCP connection attacks like the SYN flood, or Layer 7 attacks able to exhaust the resources of the service at the application level). This differentiates them from other cyber-attacks like intrusion attacks or malware aiming to destroy, steal or compromise data. With the proliferation of IoT devices, the number and size of these attacks is exploding.

A traditional security device, like a firewall or an IPS, is not able to cope with such attacks, as it has not been designed to process and filter bad traffic at high volume/rate and with a high number of sessions. Those attacks are usually able to exhaust the resources of such security devices, which become the first point of failure.

A dedicated device should be used to protect the network (including traditional security devices) from DDoS attacks. Such a device is usually known as a mitigator or scrubber, and should be able, during an attack, to filter the traffic in order to make sure that 1. legitimate users are able to reach the service and 2. the service stays up and is able to serve legitimate requests.

nScrub is a software-based protection system able to mitigate DDoS attacks on commodity hardware. nScrub is able to process 10+ Gbit full-rate traffic on a low-end 4-6 core Xeon CPU, and scale to Terabit using multiple blades, load balancing the traffic through multiple links with mechanisms like ECMP. nScrub can be easily deployed as bump-in-the-wire (a transparent bridge, with hardware bypass support to handle system failures or maintenance) or as a router to implement on-demand traffic diversion.

This step-by-step guide aims to get you started with nScrub, guiding you from the software installation to the policy configuration in order to set up a mitigation box protecting your services in minutes.

1. Select the Hardware

nScrub is multithreaded and able to process more than 10 Gbps full-rate on a low-end CPU, load-balancing the traffic to 4-6 cores using RSS queues on Intel adapters supported by PF_RING ZC accelerated drivers. Just to mention a couple of sample hardware configurations, a Xeon E3-1230 v5 4-core 3 GHz or a Xeon E-2136 6-core 3 GHz should be fast enough for processing a 10 Gbit segment. As for the network adapter, Intel 82599 or X520 are usually recommended; the same chipsets are also available with a hardware bypass extension from Silicom (this helps a lot in reducing downtime during maintenance and setup, or in case of system failures).

2. Install the Software

Instructions for configuring the ntop repository are available at packages.ntop.org for your Linux distribution. In this post we will go through the configuration steps for Ubuntu; similar instructions also work on other distributions, including Debian and CentOS. The repository can be enabled as shown below.
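As a minimal sketch, on Ubuntu 20.04 this boils down to the following commands (the release path in the URL is an assumption: pick the one matching your distribution from packages.ntop.org):

wget https://packages.ntop.org/apt-stable/20.04/all/apt-ntop-stable.deb
sudo apt install ./apt-ntop-stable.deb
sudo apt-get update

With the repository in place, as a first step we need to install at least pfring (the packet capture driver) and nscrub (the mitigation tool):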

sudo apt-get install pfring nscrub

3. Configure the Driver

The pfring package includes a basic packet capture module that can be used with any adapter. In order to take advantage of the full PF_RING ZC acceleration on Intel adapters, a ZC driver needs to be installed, configured and loaded based on the adapter model. The pf_ringcfg tool installed with pfring takes care of this.

Check the adapter and driver model.

sudo pf_ringcfg --list-interfaces
Name: enp2s0f0             Driver: ixgbe      [Supported by ZC]   
Name: enp2s0f1             Driver: ixgbe      [Supported by ZC]

Install and configure the driver. The number of RSS queues (at most as many as the physical CPU cores available) should be configured to load-balance the traffic across multiple CPU cores and get optimal performance.

sudo pf_ringcfg --configure-driver ixgbe --rss-queues 4

Make sure that the driver has been loaded.

sudo pf_ringcfg --list-interfaces
Name: enp2s0f0             Driver: ixgbe      [Running ZC]   
Name: enp2s0f1             Driver: ixgbe      [Running ZC]

4. Configure the nScrub Service

The nScrub service configuration may differ a bit depending on the deployment mode (transparent bridge, routing mode with a single or two interfaces, etc.). In this guide we assume the most common configuration is used, which is the transparent bridge (bump in the wire).

The /etc/nscrub/nscrub.conf configuration file should be created as below.

# WAN interface name
--wan-interface=zc:enp2s0f0

# Internal interface name (optional when using routing mode)
--lan-interface=zc:enp2s0f1

# Unique ZC cluster ID
--cluster-id=99

# Processing thread(s) CPU core(s) affinity
--thread-affinity=2:3:4:5

# Time thread CPU core affinity
--time-source-affinity=1

# Other threads affinity
--other-affinity=0

# CLI/REST listening address
#--http-address=127.0.0.1

# Monitor queues
--aux-queues=2

# System log file path
--log-path=/var/log/nscrub/nscrub.log

Where:

  • --wan-interface is the interface towards the Internet, from which attacks are received
  • --lan-interface is the interface towards the local network, where the victim services are located
  • --thread-affinity is a list of cores that should be used for traffic processing, one for each RSS queue (in this configuration we are assuming a CPU with 6 cores, 4 of which will be used for traffic processing and 2 for auxiliary threads and applications)
  • --http-address specifies the IP address where the service will be listening for controlling the engine and configuring the policies using the CLI tool (nscrub-cli) or the RESTful API. This is set to localhost only by default for security reasons, preventing connections from remote boxes.
  • --aux-queues=2 specifies the number of traffic mirrors to which nScrub will send a (sampled) copy of the traffic, providing more visibility or traffic recording by analysing the traffic with external tools (e.g. ntopng and n2disk)

A REST API over HTTPS is provided by nScrub to control the engine; it is also used by the nscrub-cli tool. This requires the installation of an SSL certificate, which can be created by running the commands below.

openssl req -new -x509 -sha1 -extensions v3_ca -nodes -days 365 -out cert.pem
cat privkey.pem cert.pem > /usr/share/nscrub/ssl/ssl-cert.pem

Note: the licenses for both nScrub and PF_RING ZC for Intel (one for each interface) should be installed before running the service.

Enable and run the service.

systemctl enable nscrub
systemctl start nscrub
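
To verify that the service came up correctly, you can check its status and inspect the log file configured above:

systemctl status nscrub
tail /var/log/nscrub/nscrub.log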

After running the nScrub service in transparent bridge mode, all the traffic is just forwarded between the two interfaces as in a standard bridge. In order to enable traffic inspection and mitigation, we need to configure the hosts or networks to be protected, by specifying the desired protection policies.

5. Configure the Mitigation Policies

Configuring the protection policies requires the creation of one (or more) target, which is logically a set of IPs or subnets running homogeneous services. In fact nScrub supports multiple sets of protection policies, which are tied to different targets, implementing multi-tenancy and providing the flexibility of applying different policies based on the target type.

Targets and protection policies can be created and configured at runtime using the nscrub-cli tool or the REST API. An additional tool, nscrub-add, is also available for the impatient to create a basic configuration, simply specifying the service type and IP/subnet (example: nscrub-add WEBSERVER 192.168.1.1/32 web). In this guide we will use the nscrub-cli tool to create a custom configuration, to be able to fine-tune the target configuration. However the same tool can be used to modify a target configuration previously created with nscrub-add.

It is possible to run nscrub-cli on the same machine or on a different one (please make sure you configure --http-address in nscrub.conf with the right listening address); in the latter case please specify the address of nScrub with -c HOST:PORT.

Running the nscrub-cli command, an interactive prompt with auto-completion is presented where you can issue commands (type h for the help); alternatively, a batch mode is available (create a configuration file and load it with cat policy.conf | nscrub-cli).

add target mynet 10.10.10.0/24
add target mynet 192.168.1.0/24

target mynet type hsp

target mynet profile DEFAULT default drop
target mynet profile WHITE all accept enable
target mynet profile BLACK all drop enable
target mynet profile GRAY default drop

target mynet profile DEFAULT tcp syn check auto
target mynet profile DEFAULT tcp syn check_method rfc

target mynet profile DEFAULT udp checksum0 drop enable
target mynet profile DEFAULT udp fragment drop enable

target mynet profile DEFAULT udp dst 123 accept enable

target mynet profile DEFAULT dns request check_method forcetcp
target mynet profile DEFAULT dns request threshold 1000

target mynet profile DEFAULT icmp type 0 accept enable
target mynet profile DEFAULT icmp type 8 accept enable

In the above example a target mynet is created (an arbitrary name can be used) configuring a couple of subnets. The service type has been set to hsp (Hosting Service Provider), which is just a hint for the mitigation engine; you should not worry much about the effect of this, as we are going to manually configure the protection policies. After that, the default protection policies are configured for the four profiles, which are default (unknown traffic), black (bad traffic), white (good traffic) and gray (an extra profile that can be used for special policies). Each profile is tied to a list: IPs can be manually added to those lists (e.g. we can blacklist an IP by adding it to the black list). At this point the protection policies for each protocol can be configured (the protection algorithm for TCP, the allowed UDP ports and ICMP types, etc.); please check the help or the documentation for a full list of settings.

At this point the system is up and running, mitigating attacks towards the configured target.

6. Monitor the System

The easiest way to read statistics and make sure nScrub is actually processing traffic is to run the stats command in nscrub-cli. Statistics about a specific target can also be read by running target mynet stats in nscrub-cli. However, this provides current statistics only.

Historical statistics are available from the web GUI, which can be reached by pointing a browser to http://NSCRUB-IP:8880/monitor.html (please make sure you configure --http-address in nscrub.conf to allow connections from remote machines).

 

In addition to basic traffic statistics, nScrub is able to export a copy of the raw traffic to third-party applications via software queues, providing enhanced visibility. In the nscrub.conf configuration above we already enabled a couple of traffic mirrors with --aux-queues=2, thus we are able to use up to two applications to concurrently analyse the traffic.

It is possible to configure what traffic should be forwarded to both mirrors (respectively with IDs 0 and 1), and the sampling rate to avoid overwhelming the third-party applications with traffic in the worst case.

mirror 0 type all
mirror 0 sampling 1000
mirror 1 type discarded
mirror 1 sampling 1000

Please note that nScrub creates one software queue for each RSS queue/thread, thus in this case we will have 4 queues for each mirror. Please find below a sample configuration file for ntopng that you can use for analysing the traffic that is provided by nScrub. In this case we are configuring ntopng to capture from all queues in the mirror, and setting up a view interface in ntopng to provide aggregated data.

-i=zc:99@0
-i=zc:99@2
-i=zc:99@4
-i=zc:99@6
-i=view:all
-g=0
-y=1

At this point you can start ntopng and connect to the web GUI for analysing the traffic.

In a similar way it is possible to dump the traffic to a PCAP file using n2disk or tcpdump, capturing from zc:99@1, zc:99@3, zc:99@5, zc:99@7.

tcpdump -Q in -ni zc:99@1 -c 1000 -w attack.pcap

And you can analyse the PCAP in Wireshark or any traffic analysis tool.

Enjoy!

ntopng, InfluxDB and Grafana: A Step-By-Step Guide to Create Dashboards


Creating Grafana dashboards out of ntopng data basically boils down to:

  • Configuring ntopng to export timeseries data to InfluxDB
  • Configuring the Grafana InfluxDB datasource to extract timeseries data from InfluxDB
  • Adding Grafana Dashboards panels with ntopng data

This post aims at covering the topics above to serve as reference for those who want to create Grafana dashboards.

Configuring ntopng to Export Timeseries Data to InfluxDB

To configure ntopng to export timeseries data to InfluxDB, visit the ntopng Timeseries preferences page, and pick InfluxDB as driver. Then, it suffices to configure InfluxDB connection parameters. Once preferences are saved, ntopng will start exporting timeseries data to InfluxDB.
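
Before moving to Grafana, it is worth checking that data is actually flowing into InfluxDB. A quick sanity check with the influx CLI (assuming the database name ntopng configured in the preferences page):

$ influx
> SHOW DATABASES
> USE ntopng
> SHOW MEASUREMENTS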

Configuring the Grafana InfluxDB Datasource

The same InfluxDB connection parameters specified above to configure ntopng can also be used to create a Grafana InfluxDB datasource. To create the datasource, pick the Datasources entry under the Grafana configuration menu and add a new datasource of type InfluxDB. Then, it is enough to specify the InfluxDB connection parameters.

Clicking on “Save & Test” will automatically test the connection and save it.

NOTE: The Grafana ntopng plugin datasource is outdated and should not be used.

Adding Grafana Dashboards panels with ntopng data

Now that Grafana is properly set up to extract timeseries data from InfluxDB, panels with ntopng timeseries data can be added to dashboards.

Timeseries data are added to panels using the Grafana query builder. The query builder helps in constructing the classical SELECT-FROM-WHERE clauses to pick the right data.

Different queries need to be constructed, depending on whether a gauge or a counter is being charted. Gauges and counters are the two types of timeseries exported by ntopng:

  • Gauges are for things like the number of active flows, or active hosts
  • Counters are for continuously incrementing values such as bytes sent and received

Gauges

To chart a gauge, there is no need to take the derivative: data can be taken as-is. For the sake of example, a panel with the number of active flows for an interface can be created as follows.

Counters

To chart a counter, a query will have to take the non_negative_derivative. Indeed, continuously incrementing values are only meaningful when derived, that is, when compared with their adjacent values. For example, a panel charting the traffic of an interface can be created as follows.
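
As a reference, these are two InfluxQL queries roughly equivalent to what the query builder produces, one per timeseries type. The measurement and field names used here (iface:flows/num_flows and iface:traffic/bytes) are assumptions: use SHOW MEASUREMENTS and SHOW FIELD KEYS to discover the exact names exported by your ntopng instance.

Gauge (active flows, values taken as-is):

SELECT mean("num_flows") FROM "iface:flows" WHERE "ifid" = '0' AND $timeFilter GROUP BY time($__interval) fill(null)

Counter (interface traffic, derived into a rate):

SELECT non_negative_derivative(mean("bytes"), 1s) FROM "iface:traffic" WHERE "ifid" = '0' AND $timeFilter GROUP BY time($__interval) fill(null)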

A Complete Dashboard

A complete yet basic dashboard can look like the following.

The dashboard can be downloaded in JSON format from this link.

Enjoy !
