
Traffic directions, port mirrors and taps


Network taps have the ability to preserve traffic directions: based on the port you’re monitoring, it is possible to know if traffic is going A -> B or B -> A. With port mirrors you completely lose this information (unless you create one port mirror per direction, which is not possible on all network switches) as directions are mixed up, and thus the typical in/out breakdown charts don’t work.

In order to overcome this limitation, nProbe can mimic directions using MAC addresses. In essence, if you know the MAC addresses of your routers, you know whether your traffic is going towards your router or coming from it. The assumption is that router MAC addresses are reliable, which is not always true in these cybersec days.

  • Option 1
    Use V9/IPFIX to export the MAC address
    Using -T you can set the export template: if you add %IN_SRC_MAC and %OUT_DST_MAC you can see the MAC addresses in flows and let the collector compute the direction.
  • Option 2
    Use the MAC addresses to set interfaceIds and export the direction
    Supposing that your routers have MAC addresses 5c:49:79:75:4e:6a and 0a:30:62:56:00:1c, you can set in nProbe --if-networks "5c:49:79:75:4e:6a@2,0a:30:62:56:00:1c@2", which says: all the traffic with these source MACs is bound to interfaceId 2. Adding also --in-iface-idx 2 --out-iface-idx 0, which says: ingress traffic will have ifIndex 2, all the pieces are in place (see the sketch after this list). In this case the DIRECTION information element will be set according to the MAC address information.
  • Option 3
    If you set --in-iface-idx -1 --out-iface-idx -1, nProbe will set the interfaceId to the last two bytes of the MAC address, so you can use the same technique of option 1 even with NetFlow v5.
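A minimal sketch of option 2 follows (the capture interface, the router MAC addresses and the collector address below are placeholders to be adapted to your own setup):

nprobe -i eth1 -n 192.168.1.100:2055 --if-networks "5c:49:79:75:4e:6a@2,0a:30:62:56:00:1c@2" --in-iface-idx 2 --out-iface-idx 0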

In some cases, when taps or multiple interfaces are used (e.g. each monitoring a network link) and you need to merge the traffic of several ports, you can do it inside nProbe by leveraging PF_RING. If you start nprobe -i “eth0,eth1”, the traffic coming from both ports is first merged and then analysed by nProbe. Even in this case you can use the same MAC address trick explained above to mimic directions.
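For instance, a hedged sketch that merges two tap/mirror ports and exports the MAC addresses in the template (option 1) so that the collector can compute the direction; the interface names, collector address and the exact set of template fields are placeholders:

nprobe -i "eth0,eth1" -n 192.168.1.100:2055 -T "%IPV4_SRC_ADDR %IPV4_DST_ADDR %IN_PKTS %IN_BYTES %L4_SRC_PORT %L4_DST_PORT %PROTOCOL %IN_SRC_MAC %OUT_DST_MAC"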


Introducing Multi-language Support in ntopng


Traditionally all ntop tools have had their manuals and user interface in English. As some of our users are not really familiar with it, we have decided to introduce translations of the user interface so that those users are more comfortable when using ntopng. At the moment we have added support for Italian and German, but we might consider adding further languages in the future. When you first login to ntopng after installation you will notice that there is a new menu that allows you to set the language used by the admin user.

Inside the GUI you can also change the language by selecting the entry “Manage Users” in the settings menu (the one with the wheel icon) and setting the language in the user preferences.

 You can have a different language per user, and ntopng will honour this setting. Below you can see an example of the interface when set to Italian.

Multi-Language support is implemented in the Pro and Enterprise ntopng editions.

Improved nProbe Kafka Export Support: Theory and Practice


Kafka is a distributed messaging system widely used in the industry. Kafka can be deployed on just a small server but it can also scale up to span multiple datacenters. Given the scale and variety of possible Kafka deployments, it is desirable to have flexible, configurable producer applications able to adapt to and robustly feed any Kafka real-world deployment.

nProbe, thanks to its export plugin, can be configured as a Kafka producer for the streaming of monitored/collected flows to categories known as topics. The latest nProbe 8.3.x has been extended to allow:

  • Setting custom Kafka configuration properties (option --kafka-conf);
  • Configuring the number of Kafka producer threads (option --kafka-num-producers);
  • Batching multiple flows into Kafka message sets and reporting message transmission statistics.

Being able to set configuration properties is key as it allows one to decide whether high throughput is the name of the game, or whether a low-latency service is required. Among all the configuration properties available, for the sake of performance tuning, it is worth mentioning:

  • batch.num.messages : the minimum number of messages – flows, in the nProbe parlance – to wait to accumulate in the queue before sending off a message set.
  • queue.buffering.max.ms : how long to wait for batch.num.messages to fill up in the queue. A lower value improves latency at the cost of lower throughput and higher per-message overhead. A higher value improves throughput at the expense of latency.

Due to the non-realtime nature of flows – flows are periodic summaries of network connections – improving latency could be of little value. Conversely, optimizing throughput could be fundamental to stream tens of thousands of flows per second. For this reason, a set of experiments with queue.buffering.max.ms set to 1000 (i.e., one second) but with a variable batch.num.messages have been carried out. Results are presented in the remainder of this article, after a brief introduction that explains how to set custom configuration properties and the number of producer threads.

Setting Custom Configuration Properties

Custom configuration properties are specified using option --kafka-conf followed by a pair <name=value>. The option can be passed multiple times to set several properties. As properties are either global or specific to the topic, topic-related properties must be prefixed with the string "topic.".

The following example shows an nProbe instance started in collector mode that streams flows to a Kafka cluster and has three properties set:

simone@devel:~/nProbe$ ./nprobe -i none -n none --collector-port 2055 --kafka "192.168.2.129:9092;test7;none;0" --disable-cache --kafka-conf debug=msg --kafka-conf queue.buffering.max.ms=1000 --kafka-conf=topic.auto.commit.interval.ms=2000

Configuring The Number of Kafka Producer Threads

The number of producer threads is set using option --kafka-num-producers followed by a number. The default number of threads is 1 but it can be increased up to 8. An nProbe collector instance with two kafka producers can be started as

simone@devel:~/nProbe$ ./nprobe -i none -n none --collector-port 2055 --kafka "192.168.2.129:9092;test7;none;0" --disable-cache --kafka-num-producers 2

Performance Tuning Experiments

Setup

Performance numbers stem from tests using the following setup:

  • Producer Machine: Intel(R) Xeon(R) CPU E3-1230 v3 @ 3.30GHz, 16GB RAM
  • Kafka Machine: Virtual Machine, Intel(R) Core(TM) i5-4258U CPU @ 2.40GHz, 2GB RAM
    • Disk performance shortcut by setting the brokers’ flush configuration properties as so:
    • log.flush.interval.messages=10000000
    • log.flush.interval.ms=100000
    • Two brokers
    • One topic with two partitions and replication factor 2

Scope

Experiments have been configured in order to:

  1. Determine the size S of flow messages generated by nProbe and streamed to Kafka
  2. Assess the maximum number of S-size messages that Kafka can ingest per second
  3. Vary the number of nProbe producer threads and batch.num.messages to see how fast it can go and if it can export a number of flows per second that reaches the maximum number determined in 2.

Determining the Size of a Kafka Flow Message

To determine the size S of a Kafka flow message it suffices to instruct nprobe to actually export with batch.num.messages=1 to make sure every message set will contain one and only one flow:

simone@devel:~/nProbe$ ./nprobe -i none -n none --collector-port 2055 --kafka "192.168.2.129:9092;test7;none;0" --disable-cache --kafka-conf debug=msg --kafka-conf batch.num.messages=1
...
%7|1521802885.914|PRODUCE|rdkafka#producer-0| simone-VirtualBox-1:9092/0: produce messageset with 1 messages (304 bytes)
%7|1521802885.915|MSGSET|rdkafka#producer-0| simone-VirtualBox-1:9092/0: MessageSet with 1 message(s) delivered
%7|1521802886.915|PRODUCE|rdkafka#producer-0| simone-VirtualBox-1:9092/0: produce messageset with 1 messages (328 bytes)
%7|1521802886.915|PRODUCE|rdkafka#producer-0| simone-VirtualBox-1:9092/0: produce messageset with 1 messages (309 bytes)
%7|1521802886.915|MSGSET|rdkafka#producer-0| simone-VirtualBox-1:9092/0: MessageSet with 1 message(s) delivered
%7|1521802886.915|MSGSET|rdkafka#producer-0| simone-VirtualBox-1:9092/0: MessageSet with 1 message(s) delivered
%7|1521802886.918|PRODUCE|rdkafka#producer-1| simone-VirtualBox-1:9093/1: produce messageset with 1 messages (324 bytes)
%7|1521802886.918|PRODUCE|rdkafka#producer-1| simone-VirtualBox-1:9093/1: produce messageset with 1 messages (305 bytes)
...

As can be seen from the debug output above, a good estimate is S = 300 bytes.

Assessing the Maximum Number of Messages Ingested Per Second

To assess the maximum number of messages of size S=300 bytes that Kafka can ingest per second, the rdkafka_performance tool has been used. It is a command-line diagnostic tool useful for this kind of performance assessment. As anticipated above, property queue.buffering.max.ms is always set to 1000 (i.e., one second) while property batch.num.messages is varied between 30, 300, and 3000 to produce message sets with approximate sizes of 10 KB, 100 KB, and 1 MB, respectively.

The outcome of the experiments is the following:

simone@devel:~/librdkafka/examples$ ./rdkafka_performance -P -t test7 -s 300 -c 100000 -m "_____________Test1:TwoBrokers:100kmsgs:300bytes" -S 1 -b 192.168.2.129:9092 -v -X queue.buffering.max.ms=1000 -X batch.num.messages=3000
% 100000 messages produced (30000000 bytes), 100000 delivered (offset 0, 0 failed) in 3521ms: 28400 msgs/s and 8.52 MB/s, 0 produce failures, 0 in queue, no compression
simone@devel:~/librdkafka/examples$ ./rdkafka_performance -P -t test7 -s 300 -c 100000 -m "_____________Test1:TwoBrokers:100kmsgs:300bytes" -S 1 -b 192.168.2.129:9092 -v -X queue.buffering.max.ms=1000 -X batch.num.messages=300
% 100000 messages produced (30000000 bytes), 100000 delivered (offset 0, 0 failed) in 3924ms: 25480 msgs/s and 7.64 MB/s, 0 produce failures, 0 in queue, no compression
simone@devel:~/librdkafka/examples$ ./rdkafka_performance -P -t test7 -s 300 -c 100000 -m "_____________Test1:TwoBrokers:100kmsgs:300bytes" -S 1 -b 192.168.2.129:9092 -v -X queue.buffering.max.ms=1000 -X batch.num.messages=30
% 100000 messages produced (30000000 bytes), 100000 delivered (offset 0, 0 failed) in 9141ms: 10939 msgs/s and 3.28 MB/s, 0 produce failures, 0 in queue, no compression

The maximum insertion speed, 28400 msgs/s and 8.52 MB/s, is reached when batching 3000 messages in message sets with a size of approximately 1 MB.
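As a quick sanity check, this figure is consistent with the estimated message size: 28400 msgs/s × 300 bytes ≈ 8.5 MB/s.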

Testing nProbe Performance

To see how fast nProbe can go, and also to compare its speed with the maximum insertion speed measured above, several experiments have been run. Property batch.num.messages is varied between 30, 300, and 3000 to produce message sets with sizes of approximately 10 KB, 100 KB, and 1 MB, using either 1 or 2 producer threads.

Results for 1 producer thread:

simone@devel:~/nProbe$ ./nprobe -i none -n none --collector-port 2055 --kafka "192.168.2.129:9092;test7;none;0" --disable-cache --kafka-num-producers 1 --kafka-performance-test 10000 --kafka-conf debug=msg --kafka-conf queue.buffering.max.ms=1000 --kafka-conf batch.num.messages=3000 -b 1
23/Mar/2018 15:35:06 [nprobe.c:3217] Kafka producer #0 [msgs produced: 900150][msgs delivered: 900150][bytes delivered: 261254571][msgs failed: 0][msgs/s: 19568][MB/s: 5.68][produce failures: 29][queue len: 99943]
simone@devel:~/nProbe$ ./nprobe -i none -n none --collector-port 2055 --kafka "192.168.2.129:9092;test7;none;0" --disable-cache --kafka-num-producers 1 --kafka-performance-test 10000 --kafka-conf debug=msg --kafka-conf queue.buffering.max.ms=1000 --kafka-conf batch.num.messages=300 -b 1
23/Mar/2018 15:32:58 [nprobe.c:3217] Kafka producer #0 [msgs produced: 900301][msgs delivered: 900301][bytes delivered: 261259728][msgs failed: 0][msgs/s: 22507][MB/s: 6.53][produce failures: 15][queue len: 99795]
simone@devel:~/nProbe$ ./nprobe -i none -n none --collector-port 2055 --kafka "192.168.2.129:9092;test7;none;0" --disable-cache --kafka-num-producers 1 --kafka-performance-test 10000 --kafka-conf debug=msg --kafka-conf queue.buffering.max.ms=1000 --kafka-conf batch.num.messages=30 -b 1
23/Mar/2018 15:37:11 [nprobe.c:3217] Kafka producer #0 [msgs produced: 900097][msgs delivered: 900097][bytes delivered: 261240981][msgs failed: 0][msgs/s: 16982][MB/s: 4.93][produce failures: 15][queue len: 100000]

Results for 2 producer threads:

simone@devel:~/nProbe$ ./nprobe -i none -n none --collector-port 2055 --kafka "192.168.2.129:9092;test7;none;0" --disable-cache --kafka-num-producers 2 --kafka-performance-test 10000 --kafka-conf debug=msg --kafka-conf queue.buffering.max.ms=1000 --kafka-conf batch.num.messages=3000 -b 1
23/Mar/2018 15:42:24 [nprobe.c:3217] Kafka producer #0 [msgs produced: 400076][msgs delivered: 400076][bytes delivered: 116086398][msgs failed: 0][msgs/s: 9757][MB/s: 2.83][produce failures: 41][queue len: 99968]
23/Mar/2018 15:42:24 [nprobe.c:3217] Kafka producer #1 [msgs produced: 415511][msgs delivered: 415511][bytes delivered: 120603129][msgs failed: 0][msgs/s: 10134][MB/s: 2.94][produce failures: 2][queue len: 84538]
simone@devel:~/nProbe$ ./nprobe -i none -n none --collector-port 2055 --kafka "192.168.2.129:9092;test7;none;0" --disable-cache --kafka-num-producers 2 --kafka-performance-test 10000 --kafka-conf debug=msg --kafka-conf queue.buffering.max.ms=1000 --kafka-conf batch.num.messages=300 -b 1
23/Mar/2018 15:40:47 [nprobe.c:3217] Kafka producer #0 [msgs produced: 421172][msgs delivered: 421172][bytes delivered: 122256967][msgs failed: 0][msgs/s: 11383][MB/s: 3.30][produce failures: 2][queue len: 78877]
23/Mar/2018 15:40:47 [nprobe.c:3217] Kafka producer #1 [msgs produced: 400124][msgs delivered: 400124][bytes delivered: 116140787][msgs failed: 0][msgs/s: 10814][MB/s: 3.14][produce failures: 29][queue len: 99921]
simone@devel:~/nProbe$ ./nprobe -i none -n none --collector-port 2055 --kafka "192.168.2.129:9092;test7;none;0" --disable-cache --kafka-num-producers 2 --kafka-performance-test 10000 --kafka-conf debug=msg --kafka-conf queue.buffering.max.ms=1000 --kafka-conf batch.num.messages=30 -b 1
23/Mar/2018 15:39:09 [nprobe.c:3217] Kafka producer #0 [msgs produced: 400119][msgs delivered: 400119][bytes delivered: 116100328][msgs failed: 0][msgs/s: 8335][MB/s: 2.42][produce failures: 25][queue len: 99925]
23/Mar/2018 15:39:09 [nprobe.c:3217] Kafka producer #1 [msgs produced: 495282][msgs delivered: 495282][bytes delivered: 143787297][msgs failed: 0][msgs/s: 10318][MB/s: 3.00][produce failures: 0][queue len: 4768]
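
Summing the per-thread figures reported in the logs above, the aggregate rates are roughly:

  • 1 producer: 19568 msgs/s (batch 3000), 22507 msgs/s (batch 300), 16982 msgs/s (batch 30)
  • 2 producers: ~19891 msgs/s (batch 3000), ~22197 msgs/s (batch 300), ~18653 msgs/s (batch 30)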

Using 2 producer threads doesn’t seem to provide any performance gain. The best performance, [msgs/s: 22507][MB/s: 6.53], is obtained with a single producer and a batch size of 300, even though this is roughly 2 MB/s below the maximum speed reached with the performance assessment tool. This reduction can be explained by the other tasks performed by nProbe, which include NetFlow collection and the JSON encoding of exported data.

 

Protecting a Web Server from DDoS Attacks Using nScrub


nScrub is a software-based DDoS mitigation system based on PF_RING ZC, able to operate at 10 Gbit full rate (or at multiples of 10 Gbit, distributing the load across multiple modules) using commodity hardware, which makes it affordable in terms of price and deployment.

nScrub is easy to configure even for beginners and companies with no experience in DDoS mitigation: it can be deployed as a bump in the wire (i.e. no BGP or traffic tunneling necessary) or as a router for on-demand traffic diversion.

In this post we will go through the steps for installing and configuring nScrub as a bump in the wire, which is really straightforward. We will also learn how to create a basic configuration for protecting a web server.

1. Installation

Installation packages are provided at packages.ntop.org; please follow the steps on the website to configure the repository. In this post we will go through the configuration steps for Ubuntu 16, however it is possible to do the same on other Ubuntu/CentOS/Debian systems. As a first step we need to install at least the pfring, pfring-drivers-zc-dkms and nscrub packages:

apt-get install pfring pfring-drivers-zc-dkms nscrub

2. Configuration

Assuming that we want to run nScrub on a 4 core CPU, using an Intel ixgbe-based card, we need to configure the driver with 4 RSS queues for distributing the load across all the CPU cores:

mkdir -p /etc/pf_ring/zc/ixgbe
touch /etc/pf_ring/zc/ixgbe/ixgbe.start
echo "RSS=4,4" > /etc/pf_ring/zc/ixgbe/ixgbe.conf

Hugepages need to be configured, as they are used for memory allocation. The number of hugepages required mainly depends on the number of interface queues and the MTU. For instance, if you configure 4 RSS queues with an MTU of 1500 bytes, 1024 hugepages are enough; with 12 RSS queues you need to set up 2048 hugepages.

echo "node=0 hugepagenumber=1024" > /etc/pf_ring/hugepages.conf

Now you are ready to load pf_ring:

systemctl enable pf_ring
systemctl start pf_ring
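
Once pf_ring has started, you can optionally verify that the hugepages have actually been reserved (a standard Linux check, not an nScrub-specific command):

grep HugePages /proc/meminfo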

As the next step you need to configure the nScrub service, creating the /etc/nscrub/nscrub.conf configuration file. Assuming that you want to run nScrub as a bump in the wire, creating a transparent bridge between two interfaces eth1 and eth2, the configuration file is quite simple:

touch /etc/nscrub/nscrub.start
cat /etc/nscrub/nscrub.conf
--wan-interface=zc:eth1
--lan-interface=zc:eth2
--log-path=/var/tmp/nscrub.log

Before starting the nScrub service, we also need to create an SSL certificate, which is needed to enable the RESTful API over HTTPS used to configure the mitigation policies. This is also needed to be able to use nscrub-cli, the CLI tool with auto-completion, which uses the RESTful API as its communication channel. Please note that for security reasons the HTTP server listens on localhost only by default; you can bind it to a different IP by adding --http-address to the configuration file. In order to create an SSL certificate run:

openssl req -new -x509 -sha1 -extensions v3_ca -nodes -days 365 -out cert.pem
cat privkey.pem cert.pem > /usr/share/nscrub/ssl/ssl-cert.pem

Now we are ready to run the nScrub service:

systemctl start nscrub

3. Mitigation Policies Configuration

Now that the nScrub service is up and running, it’s time to configure traffic mitigation for a target server or subnet. This is done by creating a “target”, which is an nScrub internal instance (nScrub is multi-tenant!), and by configuring protection policies based on the target service. Targets and protection policies can be modified at runtime using the RESTful API or the nscrub-cli tool.

If you are running the nScrub service for the first time, using the default credentials, all you need to do is to run nscrub-cli with no arguments, and you get a command prompt.

nscrub-cli
localhost:8880>

Now you are ready to issue commands; you can start by typing ‘help’ for the full list.

In order to create a simple configuration for protecting a web server, first of all we need to create a target, providing a unique name and an IP or subnet in CIDR format for our server or subnet to protect.

add target my_web_server 192.168.1.0/24

We can also specify the service type (a few options are available) in order to help nScrub understand how to behave (if you are not sure just skip this setting):

target my_web_server type web

At this point we can setup our actual protection policies for this target.

Each target has a few protection profiles, which are based on the traffic source. Protection profiles are default, black, white, gray. The ‘default’ profile applies to all unknown source IPs, while the other profiles (black, white, gray) apply to the corresponding lists of source IPs. IPs can be added manually to those lists (e.g. we can blacklist a subnet by adding it to the black list, or avoid any mitigation for selected subnets by adding them to the white list), or they can be added automatically by nScrub based on the configured protection algorithms.

It is a common practice to just set the “drop all” policy to the black profile:

target my_web_server profile black all drop enable

And set the “accept all” policy to the white profile:

target my_web_server profile white all accept enable

All the actual protection policies for unknown sources go to the ‘default’ profile. It’s a common practice to configure this profile to drop traffic by default, unless it is recognized as legitimate traffic:

target my_web_server profile default default drop

For a web server we probably want to drop all UDP traffic:

target my_web_server profile default udp drop enable

An exception is DNS, which should be able to work over UDP if we have a server that is expected to answer requests. We probably want to forward all requests up to a certain rate (pps):

target my_web_server profile default dns request threshold 1000

In case that threshold is reached, we want to check all requests and forward only legitimate ones. We can do this by setting the DNS check method:

target my_web_server profile default dns request check_method forcetcp

What about TCP? nScrub has multiple active algorithms for checking TCP sessions and making sure that only legitimate connections reach the destination server. Those algorithms can be enabled by setting the TCP check method, for instance to RFC:

target my_web_server profile default tcp syn check_method rfc

Now you are ready to test the mitigation, running attacks towards the destination server.

It is possible to check live stats by running ‘stats’ in the nscrub-cli prompt, or by using a browser to access the web GUI at http://<host>:8880/monitor.html

Enjoy!

Released nDPI 2.2.2: 7 New Protocols, Many Improvements


This is to announce a minor nDPI release update that adds a few fixes and introduces support for popular cloud protocols such as the Google and Apple push services. Below you can find the complete changelog.

Enjoy!

Main New Features

  • Initial experimental Hyperscan support
  • ndpi_get_api_version API call to be used in applications that are dynamically linking with nDPI
  • --enable-debug-messages to enable debug information output
  • Increased number of protocols to 512

New Supported Protocols and Services

  • GoogleDocs
  • GoogleServices
  • AmazonVideo
  • ApplePush
  • Diameter
  • GooglePlus
  • WhatsApp file exchange

Improvements

  • WhatsApp detection
  • Amazon detection
  • Improved Google Drive
  • Improved Spotify support
  • Improved SNI matching when using office365
  • Improved HostShield VPN

Fixes

  • Fixed invalid RTP/Skype detection
  • Fixed possible out-of-bounds due to malformed DHCP packets
  • Fixed buffer overflow in function ndpi_debug_printf

Introducing nProbe 8.4: New Metrics and Extensions, Improved Kafka Support


This is to announce the release of nProbe 8.4 that introduces enhanced Kafka support and adds various extensions and stability fixes. We encourage all our users to move to this version. Below you can find the complete application changelog.

Enjoy !

Main New Features

  • Implements Kafka batching, options parsing, and variable number of producers
  • Adds Kafka messages transmission statistics

New Options

  • --plugin-dir to load plugins from the specified directory
  • --adj-from-as-path to get previous/next adjacent ASNs from BGP AS-path
  • --disable-sflow-upscale to disable sFlow upscaling

Extensions

  • Implemented ICMP network latency
  • Added ICMP type/code to flow keys to differentiate ICMP flows from the same peers
  • sFlow upscale now takes into account sample drops
  • Improved throughput calculations with NetFlow
  • Improved RTP metrics calculation

Fixes

  • Fixes fragmentation issues that could lead to crashes
  • Prevents leaks with multiple BGP updates
  • Fixes a crash when exporting option templates to Kafka
  • Fixes missing fields (e.g., FIREWALL_EVENT) in MySQL db columns
  • Preserve endianness of string_dump_as_hex NetFlow custom fields
  • Fixes overwrite of untunnelled addresses for tunnels after the first
  • Updates centos7 mysql dependency to work with either mysql or mariadb
  • Fixed invalid FTP detection
  • Fix for computing %DIRECTION even with reduced template IEs
  • Fixes wrong sFlow average scale estimation
  • Fix for wrapping ZMQ rates > 4Gbps
  • Fixed loop bug in plugin handling when multiple plugins are enabled

Welcome to ntopng 3.4: Improved Alerts/SNMP/Asset Discovery, InfluxDB/Prometheus Support


We’re happy to announce the release of ntopng 3.4, which introduces several enhancements and new features, some of which will be finalised in 3.6, due later this year. This version consolidates several months of work and paves the way to more radical changes planned for the next release. In particular, beta features present in this version include support for InfluxDB and Prometheus, so that you can use ntopng to export traffic data towards time-series databases (you can read more about InfluxDB and Prometheus). We have also revamped the alert implementation and introduced initial ntopng monitoring, before we extend this code to include host monitoring as well (we’re currently prototyping with eBPF, hoping it will serve all our host monitoring needs, including VM and container visibility). SNMP support has also been greatly enhanced, including support for many new device types.

We encourage you to play with it, and to join the development team. The whole changelog is listed below.

Enjoy!

 

Changelog

New features

  • Improved alerts generation
    • Send alerts via email
    • SNMP alerts on port status change
    • Alerts at ntopng startup/shutdown
    • ARP/IP re-assignments alerts
    • Beta support for InfluxDB and Prometheus
  • Multi-language support
    • English
    • Italian
    • German
  • “hide-from-top” to selectively hide hosts from top stats

Improvements

  • Discovery with SSH scan and MDNS dissection
  • SNMP devices support
  • HTML documentation with ReadTheDocs
  • ERSPAN Type 2 detunneling
  • per-AS network latency stats
  • TCP KeepAlive stats
  • Redis connection via Unix domain socket

Security Fixes

  • Disables CGI support in mongoose
  • Hardened options parsing

Fixes

  • Fixes memory leaks with SNMP
  • Fixes possible out-of-bounds reads with SSDP dissection

 

Using nProbe for Collecting Palo Alto Flows


nProbe is both a probe and a NetFlow/sFlow collector. As you all know, we have recently added the ability to collect flows with proprietary information elements. In addition, nProbe natively supports popular flow exporters such as Cisco NBAR and Palo Alto security devices. In this article we show you how to collect the latter flows in nProbe.

A typical Palo Alto flow is depicted below.

As explained in this document, the last two fields, with Ids 56701 and 56702, are respectively the App-ID and the User-ID. Typing ‘nprobe -H’ you can see all the information elements natively supported by the nProbe engine. As you can see

$ nprobe -H | grep -i Palo
[57899] %APPLICATION_NAME                                     Palo Alto App-Id
[57900] %USER_NAME                                            Palo Alto User-Id

the nProbe engine supports these proprietary elements, in addition to the standard post-NAT information elements. So, in order to collect these flows on port 2055 and dump them to /flows in text format, you can for instance use the following command:

nprobe -T "%IPV4_SRC_ADDR %IPV4_DST_ADDR %INPUT_SNMP %OUTPUT_SNMP %IN_PKTS %IN_BYTES %FIRST_SWITCHED %LAST_SWITCHED %L4_SRC_PORT %L4_DST_PORT %TCP_FLAGS %PROTOCOL  %POST_NAT_SRC_IPV4_ADDR %POST_NAT_DST_IPV4_ADDR   %POST_NAPT_DST_TRANSPORT_PORT %POST_NAPT_SRC_TRANSPORT_PORT %APPLICATION_ID %APPLICATION_NAME" -i none -n none -3 2055 -P /flows

A typical flow will look like

 

172.16.X.Y|X.X.X.X|500010000|8|42|19639|1524754795|1524754857|45829|7351|0|17|X.X.X.X|X.X.X.X|7351|52092|0|meraki-cloud-controller
Y.Y.Y.Y|11|500010000|0|0|1524754856|1524754856|123|19650|0|17|X.X.X.X|172.16.X.Y|123|123|0|ntp

Of course, in addition to dumping flows to a file, nProbe allows you to forward them to ntopng via ZMQ, or to export them to ElasticSearch and Kafka in JSON format.
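For instance, a hedged sketch of delivering the same collected flows to ntopng over ZMQ (the endpoint below is just a placeholder, and ntopng must be pointed at the matching ZMQ address):

nprobe -i none -n none -3 2055 --zmq "tcp://*:5556"
ntopng -i tcp://127.0.0.1:5556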

Enjoy!


ntopng goes Elastic: Introducing ElasticSearch 6 Support


As ntopng users know, out of the Elastic toolset ntopng supports both ElasticSearch and LogStash. You can enable them using the -F flag:

[--dump-flows|-F] <mode>  | Dump expired flows. Mode:
                         | es            Dump in ElasticSearch database
                         |   Format:
                         |   es;<mapping type>;<idx name>;<es URL>;<http auth>
                         |   Example:
                         |   es;ntopng;ntopng-%Y.%m.%d;http://localhost:9200/_bulk;
                         |   Notes:
                         |   The <idx name> accepts the strftime() format.
                         |   <mapping type>s have been removed starting at
                         |   ElasticSearch version 6. <mapping type>
                         |   values will therefore be ignored when using
                         |   versions greater than or equal to 6.
                         |
                         | logstash      Dump in LogStash engine
                         |   Format:
                         |   logstash;<host>;<proto>;<port>
                         |   Example:
                         |   logstash;localhost;tcp;5510

With ElasticSearch being one of the most widely used downstream stores for information, it was time for the ntop team to add ElasticSearch version 6 support to ntopng.

Many users were eager to take advantage of the new ElasticSearch 6 features that include:

  • More scalable searches across shards;
  • Index-time sorting;
  • Faster restarts and shard recoveries.

However, new features almost always come with breaking changes, and ElasticSearch 6 is no exception. Among the breaking changes it is worth mentioning the removal of multiple index mapping types. There is an interesting discussion that motivates the choice of removing multiple mapping types. Mapping types were initially supposed to be the equivalent of tables in SQL databases, but that was just a bad analogy: database tables are independent(*) of each other, whereas in ElasticSearch different mapping types are backed by the same Lucene field internally. Having multiple mapping types caused issues, for example when creating the same field with different types in the same index. Another issue was the creation of sparse data which, in turn, negatively affected Lucene’s compression abilities.

We believe that the removal of multiple mapping types is actually a good choice, as it allows for the greater flexibility that is typical of noSQL databases. However, it cost us some effort to adapt the ntopng code and to create a new ElasticSearch template specific to version 6. Today we are happy to announce that the latest ntopng version 3.5 has support for ElasticSearch 6. And for those of you that are still using version 5? No worries, both versions are transparently supported: ntopng queries ElasticSearch to obtain its version and behaves accordingly.
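For example, following the format shown in the help output above, flows can be dumped to a local ElasticSearch 6 instance with an option like the one below (index name and URL are placeholders; the mapping type field is simply ignored with version 6 and later):

--dump-flows="es;ntopng;ntopng-%Y.%m.%d;http://localhost:9200/_bulk;"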

So what’s next? If you want to try the latest ntopng 3.5 just install (or upgrade) a nightly build or build it from source. Happy indexing!

(*) OK, there are foreign keys that create ties among tables, but in general columns in one table have no bearing on columns with the same name in another table.

Introducing PF_RING FT: nDPI-based Flow Classification and Filtering for PF_RING and DPDK

$
0
0

Motivation

Most network monitoring and security applications are based on flow processing, which is in practice the activity of grouping packets based on common attributes (e.g. source and destination IP, source and destination port, protocol, etc.) and performing some analysis based on the collected information. What happens behind the scenes can be divided into a few major tasks:

  • capturing raw packets
  • decoding packet headers to extract flow attributes
  • classifying the packets based on flow attributes
  • (optionally) extracting L7 protocol information.

Introducing PF_RING FT

With PF_RING, and later on with PF_RING ZC (Zero Copy), we created a packet processing framework able to provide 1/10/100 Gbit line-rate packet capture to applications, based on both commodity NICs and specialized FPGA adapters. This dramatically reduced the overhead of the first major task a network monitoring application has to cope with: packet capture. PF_RING also includes support for packet decoding, with the ability to leverage metadata provided by the hardware to accelerate attribute extraction, not least the packet hash.

PF_RING FT takes this one step further: it assists the application in the flow classification activity. PF_RING FT implements a highly optimized flow table that can be used to keep track of flows and extract flow information up to L7. It provides many hooks to customize and extend flow processing in order to build any type of application on top of it, including probes, IDSs, IPSs and L7 firewalls.

Although PF_RING FT is distributed with PF_RING, it is possible to use the library with any third-party packet capture framework such as Intel DPDK, as its data-ingestion API is capture-agnostic.

The design and implementation of a flow processing application on top of PF_RING FT is quite straightforward. The code snippet below shows how to capture traffic and export flow information with PF_RING FT in a few lines of code.

/* Create the FT flow table */
ft = pfring_ft_create_table(0);

/* Register a callback invoked when a flow expires and is ready to be exported */
pfring_ft_set_flow_export_callback(ft, processFlow, NULL);

while (1) {
  /* Capture a packet with PF_RING and feed it to the flow table */
  if (pfring_recv(pd, &packet, 0, &header, 0) > 0)
    action = pfring_ft_process(ft, packet, &header);
}

void processFlow(pfring_ft_flow *flow, void *user){
  pfring_ft_flow_key *k = pfring_ft_flow_get_key(flow);
  pfring_ft_flow_value *v = pfring_ft_flow_get_value(flow);
  /* flow export here with metadata in k and v */
}

The pfring_ft_set_flow_export_callback() API in the code snippet above is just one example of the hooks provided by PF_RING FT. Through this mechanism it is possible to get notified when a new flow is created, when a flow expires, when a packet has been successfully classified, or when the L7 protocol has been identified.

PF_RING FT provides common flow information that can be extended with custom metadata defined by the application. Thanks to the native integration with the nDPI library, the built-in information includes the L7 protocol out of the box: the application itself does not need to deal with the nDPI library directly.

Suricata, Bro, Snort Acceleration

The PF_RING FT library also features a filtering/shunting mechanism that can be used, automatically or through custom code, to mark flows for traffic filtering. This can be used, for instance, to accelerate CPU-bound applications such as IDS/IPSs, including Suricata, Bro and Snort, by shunting flows based on the application protocol. In fact, discarding elephant flows is becoming a common practice for reducing the amount of traffic such applications need to inspect (typically multimedia traffic), dramatically reducing packet loss and improving system performance. PF_RING FT is used today by PF_RING to implement L7 flow shunting; this means that PF_RING-based or libpcap-based applications can take advantage of this technology without changing a single line of application code.

Performance

As we have seen, this library provides high flexibility and customization; however, we also really care about performance. While developing this library we applied all the lessons learnt implementing PF_RING ZC and nProbe Cento, creating a highly optimized engine capable of processing 14.88 Mpps (10 Gbit line rate) on a single core of a Xeon E3-1230v5, and 130 Mpps on 12 cores of a dual Xeon E5-2630v2 2.6GHz (not the fastest CPU on the market), as you can see from the test results on the product page.

Learning PF_RING FT

If you want to learn more about PF_RING FT, you can go to the PF_RING repository and walk through our code examples. You will see how to use this technology also with non-PF_RING packet processing libraries such as libpcap and DPDK. In essence, you can now focus on solving your problem instead of addressing time-consuming tasks such as packet capture, decoding and flow processing.

 

Introducing nBroker: Traffic Steering and Filtering on Intel RRC (FM10K)


Exactly two years ago we introduced Intel FM10K (FM10000) support in PF_RING ZC. The Intel FM10K ethernet controller family supports 10/25/40/100 Gbit on the same NIC, at a convenient price (sub-1000$ range), and it powers various NIC models manufactured by Silicom Inc.
The most interesting aspect of the FM10K is the programmability that this adapter provides. In fact, this adapter integrates an internal switch attached to the external ports (those physically connected to the cables) and to the internal ports (towards the CPU, those seen by the host OS) that can be instructed to filter and steer traffic between all the ports. In essence, a switch has been embedded into a NIC form factor.

As we know achieving full network visibility requires a combination of a wide range of monitoring tools, and it is crucial to efficiently deliver data in real-time to those tools with activities that include:

  • Efficient traffic steering from the network to the monitoring tools
  • Traffic filtering to perform selective analyses with the benefit of a reduced load on the CPU
  • Traffic blocking to implement policies in inline applications.

As mentioned before, we have already added support for the FM10K in PF_RING ZC; however, PF_RING ZC only manages the internal (host) interfaces. What was missing was the ability to control the switch component using a simple tool or API. This led to the development of the nBroker framework.

nBroker is a software application that can be used for traffic steering and filtering at 100 Gbps on Intel FM10K adapters. It consists of a daemon that drives the FM10K switch, and an API that can be used to configure steering and filtering rules by controlling the daemon. The communication with the daemon happens over a ZMQ channel, thus it is possible to control it from any programming language by implementing the simple communication protocol; however, a C library that takes care of the communication is also provided. In addition to the C library, a command-line tool with auto-completion is available, which is really convenient for scripting your filtering logic.

An IDS/IPS is an example of inline application that can take advantage of nBroker to offload traffic forwarding. In fact an IPS usually inspects all the traffic, and sometimes decides to whitelist (forward) or blacklist (drop) specific traffic. Such activities can be offloaded to the switch by means of steering and filtering rules.

Below is an example of whitelisting a specific source IP towards a specific destination port using nBroker, with both the command-line tool and the C API.

CLI

$ nbroker-cli
 tcp://127.0.0.1:5555> default port eth1 pass
 tcp://127.0.0.1:5555> default port eth2 pass
 tcp://127.0.0.1:5555> set port eth1 match shost 10.0.0.1 dport 80 steer-to eth2

C API

/* Forward all traffic on both ports by default */
nbroker_set_default_policy(broker, "eth1", NBROKER_POLICY_PASS);
nbroker_set_default_policy(broker, "eth2", NBROKER_POLICY_PASS);
/* Match source host 10.0.0.1/32 and destination port 80 */
match.shost.ip_version = 4;
match.shost.mask.v4 = 0xFFFFFFFF;
match.shost.host.v4 = inet_addr("10.0.0.1");
match.dport.low = htons(80);
rule_id = NBROKER_AUTO_RULE_ID;
/* Steer matching traffic received on eth1 to eth2 */
nbroker_set_steering_rule(broker, "eth1", &rule_id, &match, "eth2");

nBroker is available on github: existing PF_RING ZC users do not require an additional license in order to use it.

We would like to thank Silicom Inc for the support during this development work.

Webinar Invitation: ntop traffic analysis and flow collection with InfluxDB

How to use ntopng in compliance with GDPR


Today the General Data Protection Regulation (GDPR) (EU) 2016/679 is effective in the European Union. GDPR is designed to protect personal data and thus preserve privacy, in particular as specified in articles 13 to 22, and 34. As we manufacture tools for traffic monitoring, we have to make sure that our tools can be used in compliance with GDPR. In particular, we have implemented a couple of features that can be useful:

  • If you select “Preferences” from the ntopng menu and click on the “Misc” pane, you can access the preference for masking addresses.
    In essence you can configure ntopng to hide non-local host information from the screen (or vice-versa). This prevents network administrators from being able to visualise the remote hosts a local host is talking with. It hides sensitive information such as the site being contacted or the URL, but it allows you to keep an eye on local network activities (i.e. those that are under your administrative domain).
  • Right to erasure: GDPR requires that at any time a user can ask to delete from the database any information stored about that user. This facility has already been implemented in ntopng, so that network administrators can delete at any time the information about specific hosts or MAC addresses. You can do that by selecting “Manage Data” from the preferences menu, which will bring you to the form that implements the GDPR “Right to be Forgotten”.

In the near future we will implement pseudonymization features that hide sensitive information in network data. These features are still in progress and will probably be extended to other components such as nDPI and PF_RING. Stay tuned for details!

How ntop built a web-based traffic analysis and flow collection with InfluxDB


A couple of days ago InfluxData hosted an ntop webinar about how we have integrated InfluxDB into ntopng. Those who have not attended it can have a look at the presentation slides as well as watch the webinar recording.

In essence:

  • ntopng is based on RRD for timeseries
  • As networks grow, ntopng needs to store more time series at a finer granularity.
  • RRD is file based, which is a good thing as configuration is minimal, but it does not scale on mid/large networks.
  • We need an alternative, and found InfluxDB to be the best option available.
  • We’re now coding a thin time-series layer that allows people to decide which time-series engine to use, either RRD or InfluxDB (and in the future other technologies)
  • This presentation describes the problems we had to tackle with time series, and how we integrated InfluxDB.

Enjoy!

n2n is back !


Hi all, it is finally time to restart development activities on n2n, whose code is available at https://github.com/ntop/n2n. The advent of the cloud, privacy concerns on the Internet, and mobile users now producing a large amount of Internet traffic all call for a secure network overlay such as n2n to build upon.

Initially designed to solve the connectivity issues created by NATs, we believe it is now time to refresh n2n to serve modern user needs. The first activity we would like to do is to merge back into the n2n repository all the changes and patches made by various developers in the past few years, and to open the n2n repository to everyone who would like to contribute, so as to store all the code in one single place.

Then I want to split n2n into a library plus applications, so that apps can easily embed n2n and join the n2n overlay as necessary, without the complexity of starting the edge app. We also need to simplify n2n a bit (there are many CLI parameters we would like to make optional) and make it undetectable by DPI so that people will have a hard time blocking it. In essence there is a lot of work in front of us, and we need help. Please contact us if you want to join the n2n team.


Best Practices to Secure ntopng


After a fresh install, ntopng runs using a default, basic configuration. Such a configuration is meant to provide an up-and-running ntopng but does not attempt to secure it. Therefore, the default configuration should only be used for testing purposes in non-production environments.

Several things are required to secure ntopng and make it enterprise-proof. Those things include, but are not limited to, enabling an encrypted web access, restricting the web server access, and protecting the Redis server used by ntopng as a cache.

Here is the list of things required to secure ntopng.

Encrypted Web Access

By default, ntopng runs an HTTP server on port 3000. In production, it is recommended to disable HTTP and only leave HTTPS. To disable HTTP and enable HTTPS on port 443 the following options suffice:

--http-port=0
--https-port=443

Enabling HTTPS requires ntopng to be able to use a certificate and a private key for the encryption. Generation instructions are available in README.SSL.
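As a minimal sketch (file names and the destination path are assumptions; README.SSL remains the authoritative reference), a self-signed certificate and key can be generated and concatenated into the single PEM file ntopng expects:

openssl req -new -x509 -nodes -days 365 -keyout key.pem -out cert.pem
cat key.pem cert.pem > /usr/share/ntopng/httpdocs/ssl/ntopng-cert.pem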

 

Restrict Web Server Listening Address

The ntopng embedded web server listens on any address by default. This means that anyone who has IP reachability of the ntopng host can be served web content by the server. That does not imply anyone can access the ntopng web GUI — login credentials are required for the GUI — but it is never a good idea to leave a web server exposed to those who should not be entitled to access ntopng.

The listening address can be changed from any to another custom address, which can be the IP address of a host interface, or just the loopback address 127.0.0.1.

Listening address changes are indicated using a couple of ntopng configuration options, namely --http-port for HTTP and --https-port for HTTPS. For example to change the HTTP server listening address to only 127.0.0.1 and the listening address of the HTTPS server to 192.168.2.222, the following options can be used:

--http-port=:3000
--https-port=192.168.2.222:3001

The listening addresses can easily be verified with netstat on Unix. The any address is indicated with 0.0.0.0. This is the netstat output when the HTTP and HTTPS servers are listening on the any address:

simone@devel:~$ sudo netstat -polenta | grep 300
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN 65534 67324991 5480/ntopng off (0.00/0/0)
tcp 0 0 0.0.0.0:3001 0.0.0.0:* LISTEN 65534 67324992 5480/ntopng off (0.00/0/0)

This is the netstat output after the changes highlighted in the example above. The any address is no longer listed.

simone@devel:~$ sudo netstat -polenta | grep 300
tcp 0 0 127.0.0.1:3000 0.0.0.0:* LISTEN 65534 67323743 5808/ntopng off (0.00/0/0)
tcp 0 0 192.168.2.222:3001 0.0.0.0:* LISTEN 65534 67323744 5808/ntopng off (0.00/0/0)

Protected Redis Server Access

Password-Protected Redis Server

ntopng uses Redis as a cache for DNS names and other values. By default the Redis server listens only on the loopback address 127.0.0.1, but it is accessible without a password.

To secure the Redis server with a password, uncomment the requirepass line of the Redis configuration file and specify a secure (very long) password there:

simone@devel:~/ntopng$ sudo cat /etc/redis/redis.conf | grep requirepass
requirepass verylongredispassword

Once the password is set and the Redis server service has been restarted, the ntopng --redis option can be used to specify the password. To use the verylongredispassword in ntopng it suffices to use the following option:

--redis=127.0.0.1:6379:verylongredispassword

Redis Server Access via Local Unix Socket Files

Another way to secure the Redis server is to configure it to only accept connections via a local unix socket file, rather than on any TCP socket.

The relevant part of the Redis configuration to use just a local unix socket file is the following:

# 0 = Redis will not listen on a TCP socket
port 0

# Create a unix domain socket to listen on
unixsocket /var/run/redis/redis.sock

To tell ntopng to use the Redis unix socket file the same --redis option can be used as:

--redis=/var/run/redis/redis.sock

ntopng User with Limited Privileges

ntopng runs as user nobody by default. That user is meant to represent the user with the fewest permissions on the system. It is recommended to create a dedicated ntopng user to run ntopng with, so that even if there are other daemons running as nobody, none of them will ever be able to access files and data created by ntopng.
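A minimal sketch of creating such a dedicated, unprivileged user on Linux (exact useradd flags may vary across distributions):

sudo useradd -r -M -s /usr/sbin/nologin ntopng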

To run ntopng with another user just use option --user. For example to run ntopng with user ntopng specify:

--user=ntopng

Final Remarks

We do our best to develop code that is both efficient and safe from the security standpoint. However, we are humans and thus we may make mistakes that need to be fixed. When it comes to security, we take these problems seriously and expedite their fixes. Please contact us promptly whenever you feel there is a security issue we need to tackle and we’ll come back to you ASAP.

Enjoy ntopng!

How to accelerate Suricata, Bro, Snort with PF_RING FT


In a previous post we discussed the advantages of using specialized adapters featuring hardware flow offload for accelerating IDS applications. What we have learnt is that IDSs are typically CPU-bound applications, mainly because of the thousands of rules that need to be evaluated for every single packet (in addition, of course, to packet capture). This is the case for Suricata, Bro, Snort and other IDS/IPSs as well as other security applications. More than 2/3 of Internet traffic is multimedia traffic (mostly video, social networks and music streaming), consisting of a few flows, well known as elephant flows, carrying a lot of data. This is for example the case for Netflix and Youtube, which is the typical traffic an IDS doesn’t really care about. The same applies to encrypted traffic. Discarding elephant flows is becoming a common yet effective practice for reducing the amount of traffic an IDS/IPS needs to inspect, dramatically reducing packet loss and improving system performance.

This is why native bypass support has been added to Suricata: a user can write signatures using the bypass keyword to explicitly tell Suricata to skip all packets of the matching flows. In most cases Suricata uses eBPF for shunting elephant flows (as an alternative to local bypass, which is less efficient as packets need to be captured and processed by Suricata before being discarded); this means that the application injects filtering rules (5-tuples) into kernel space as soon as an elephant flow is detected. This approach has some limitations:

  1. It requires the user to write a Suricata ruleset able to detect all multimedia protocols and bypass matching flows.
  2. Packet parsing is not flexible, as eBPF programs cannot loop (this does not work with encapsulations, including VLAN and QinQ).
  3. It cannot keep flow state (making it complicated to handle flow expiration).

Last month we introduced PF_RING FT, a new framework that assists flow-processing applications with flow classification, implementing a highly optimized flow table with native support for L7 protocol detection and filtering/shunting capabilities. The L7 filtering engine provided by PF_RING FT can also be used to accelerate Suricata, Bro and Snort with little effort. Since PF_RING FT is already part of PF_RING, if you are using PF_RING or Libpcap-over-PF_RING as the capture library, no change or recompilation of the application is needed in order to use it. This means that it is possible to leverage the filtering capabilities of PF_RING FT to filter out multimedia (or any other meaningless) traffic, or to shunt flows, just by creating a simple configuration file where you list the application/protocol names: since PF_RING FT is based on the nDPI Deep Packet Inspection library for protocol detection, you can specify any of the protocols detected by nDPI!

Running FT with Suricata

In this post we will show how to use PF_RING FT to accelerate Suricata in IDS mode, filtering out multimedia traffic including Youtube, Netflix and Spotify.

First of all we need to compile Suricata on top of PF_RING. PF_RING should be installed first:

cd ~
git clone https://github.com/ntop/PF_RING.git
cd ~/PF_RING/kernel
make && sudo make install

cd ~/PF_RING/userland/lib
./configure && make && sudo make install
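
At this point you can optionally verify that the kernel module loads correctly; once loaded, /proc/net/pf_ring/info reports the PF_RING version:

sudo modprobe pf_ring
cat /proc/net/pf_ring/info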

Then we can install Suricata enabling the PF_RING capture module:

cd ~
git clone https://github.com/OISF/suricata
cd ~/suricata
git clone https://github.com/OISF/libhtp
./autogen.sh
LIBS="-lrt" ./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var \
  --enable-pfring --with-libpfring-includes=/usr/local/include \
  --with-libpfring-libraries=/usr/local/lib
make
sudo make install
sudo ldconfig

sudo make install-conf
sudo make install-rules

We can make sure that PF_RING support is enabled in Suricata with the command below:

suricata --build-info | grep PF_RING
PF_RING support:                         yes

As we said PF_RING FT is already part of PF_RING, however in order to be able to detect and filter L7 protocols, we need to install nDPI:

cd ~
git clone https://github.com/ntop/nDPI.git
cd ~/nDPI
./autogen.sh
make && sudo make install

Now we need to create a configuration file like the one below, listing all the L7 protocols that we want to discard or shunt (for shunted protocols we need to specify the number of packets to forward per flow before discarding the rest):

cat /etc/pf_ring/ids-rules.conf
[filter]
YouTube = discard
Netflix = discard
Spotify = discard

[shunt]
SSL = 10

At this point we are able to run Suricata. All we need to do to enable L7 filtering is set the PF_RING_FT_CONF environment variable to the path of the configuration file we created with the filtering rules:

PF_RING_FT_CONF=/etc/pf_ring/ids-rules.conf suricata --pfring-int=zc:eth1 -c /etc/suricata/suricata.yaml

Validation

In order to validate this work, we used a huge PCAP file (almost 10 GB) with mixed Internet traffic, and we replayed it using our disk2n application at full 10 Gigabit speed. We ran Suricata both in a standard configuration (no changes have been made to the default suricata.yaml) and with PF_RING FT configured to filter out all meaningless data (Netflix, Youtube, Spotify, Google, Facebook, SSL). In both cases we used PF_RING ZC drivers for optimal capture performance, in order to reduce the packet capture bottleneck and avoid any noise affecting the results.

In the first configuration Suricata dropped 5.6 Mpkts out of 7.06 Mpkts transmitted (79.4% of packet loss), while in the second configuration, with PF_RING FT enabled, Suricata dropped just 0.56 Mpkts (7.9% of packet loss), after 4.9 Mpkts have been pre-filtered by FT. This looks like a huge performance improvement, with minimal effort!

Have fun with PF_RING FT!

Introducing PF_RING 7.2, including PF_RING FT and nBroker


This is to announce a new PF_RING major release 7.2 that includes:

  • Support for Ubuntu 18 as well as the latest Debian and CentOS kernels.
  • Many improvements to the FPGA capture modules and the ZC library (that is now able to reserve head room for zero-copy traffic encapsulation/decapsulation, just to mention one).
  • Full support for Containers and Namespaces.

Besides many improvements and bug fixes, this release also introduces PF_RING FT, a highly optimized library that assists flow-processing applications with L7 classification and filtering, and nBroker, a framework for hardware-based traffic steering and filtering at 100 Gbit on Intel RRC (FM10K).

This is the complete changelog of the 7.2 release:

  • ZC Library
    • New API pfring_zc_pkt_buff_pull / pfring_zc_pkt_buff_push to manage buffer head room
    • New builtin hash pfring_zc_builtin_gre_hash with support for GRE tunneling
    • zbalance_ipc -m 5 option for enabling GRE hashing
    • Support for up to 64 queues in pfring_zc_send_pkt_multi and pfring_zc_distribution_func
    • Fix for attaching to ZC IPC queues from containers
  • FT Library (New)
    • L7 flow classification and filtering library
    • Event-driven capture-agnostic API
    • Sample applications
      • ftflow: flow records generation with PF_RING capture
      • ftflow_pcap: flow records generation with PCAP capture
      • ftflow_dpdk: flow records generation with DPDK capture
      • fttest: performance benchmarking tool
    • zbalance_ipc extension to process flows and filter packets
  • nBroker (New)
    • Traffic steering and filtering on Intel RRC (FM10K adapters)
    • Daemon to drive the adapter (nbrokerd)
    • API to configure the adapter using a C library (nbrokerlib)
    • Command-line tool with auto-completion to configure the adapter using scripts (nbroker-cli)
    • Low-level library used by nbrokerd to drive the adapter (rrclib)
  • PF_RING-aware Libpcap
    • PCAP_PF_RING_USERSPACE_BPF env var to force userspace filtering instead of kernel filtering
  • PF_RING Kernel Module
    • Full support for namespaces and containers
    • Fixed skbuff forwarding with fast-tx using reflect_packet
    • Fixed VLAN support in BPF with kernel extensions
    • Fixed support for NetXtreme cards with multiple queues
    • Fixed sw hash filtering for IPv6
    • Fixed intel_82599_perfect_filter_hw_rule VLAN byte order
    • Fixed huge rings (high number of slots or huge slot size)
    • Fixed VLAN offset and packet hash in case of QinQ and VLAN offload
    • Support for Ubuntu 18
    • Support for latest Centos 7 kernel
    • Support for latest Debian 8 kernel
  • PF_RING Capture Modules
    • Released source code for FPGA capture modules including Endace, Exablaze, Inveatech, Mellanox, Netcope
    • Accolade lib updates
      • New flag PF_RING_FLOW_OFFLOAD_NOUP to enable flow offload without flow updates (standard raw packets are received, flow id is in the packet hash)
      • Automatically generate the rule ID using rule_id = FILTERING_RULE_AUTO_RULE_ID
      • Support for accolade 200Ku Flex adapters
    • Fiberblaze lib updates
      • Packet recv in chunk mode
    • Fixed extraction from npcap/timeline in case of empty PCAP files in the dump set
    • Endace DAG updates
      • Setting extended_hdr.pkt_hash from ERF FlowID or Packet Signature extension headers if available
      • Support for pfring_set_application_name
      • Support for pfring_dag_findalldevs
    • Napatech lib updates
      • Support for sdk v10
  • Drivers
    • e1000e zc driver update v.3.4.0.2
    • i40e zc driver update v.2.4.6
    • ixgbe zc driver update v.5.3.7
    • igb zc driver update v.5.3.5.18
    • Fixed interrupts handling on i40e when in zc mode, this fixes the case where packets are received in batches of 4/8 packets
    • Using nbrokerd for initializing FM10K adapters and configuring the RRC switch
  • nBPF
    • Fixed rules constraints
  • Misc
    • Reworked init.d systemd support
    • New pf_ringctl script to manage pf_ring and drivers (this is used by init.d/systemd)
    • Documentation improvements, Doxygen integration with “read the docs”

You’re Invited to the “Monitoring with Time Series” Meetup: San Francisco June 27th


Hi all, this is to invite all of you living in San Francisco and the Bay Area to attend the “Monitoring with Time Series” meetup organised by our friends at InfluxData. I will be speaking about ntop, traffic monitoring, time series and InfluxDB. It will also be a good time to meet with our users, hear suggestions, and (perhaps) complaints. The Internet is a nice place, but a physical meeting has no price.

The meetup will take place at InfluxData HQ, 799 Market St Suite 400, San Francisco. The agenda is pretty straightforward:

  • 6:30 – 7:00 Food, drinks & chat
  • 7:00 – 7:05 Introduction
  • 7:15 – 7:45 How ntop built their high-speed based traffic analysis and flow collection with the use of InfluxDB – Luca Deri
  • 7:45 Q&A, More food, drinks, & chat

All details including the registration link are available here.

It will be fun. I hope to see you!

Using n2n to Steer your Internet Traffic and Circumvent Restrictions


Suppose that you are travelling abroad and you need to access some Internet sites that are not available abroad. Or suppose that you want to evade the restrictions of your ISP, of the hotel where you are currently staying, or of the WiFi hotspot you are using to connect to the Internet. The simplest thing to do is to open a VPN and you’re done. However VPNs are not very flexible and they require a single place where everybody meets and greets. n2n instead is based on the peer-to-peer paradigm and implements an adaptable network overlay that you can use to access your private assets, regardless of their actual location.

Ingredients:

  • A host with a public IP (or at least a static NAT for publishing the supernode port to the Internet) that can act as supernode.
  • A host sitting on a network back in your home country: it can be deployed on the same host where the supernode is running, or somewhere else. This host (which will run the n2n edge component) will act as a router for your Internet traffic. A Raspberry Pi is perfectly adequate for running both the supernode and the edge.

As you can see from the above picture, the goal of this exercise is to exit to the Internet through the purple path (and thus from your home network) rather than from your hotel room. This way you can evade the restrictions (orange path) of the hotel Internet provider, as well as be visible with a home-country IP rather than the remote hotel room IP.

The home Raspberry Pi (connected to the Internet via eth0) should be configured with NAT to mask the remote host IP:

edge -d n2n0 -c mynetwork -k mypassword -u 99 -g 99 -m DE:AD:BE:EF:91:10 -a 192.168.254.6 -p 50001 -l myPublicIP:9876 -r
iptables --table nat --delete-chain
iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
iptables --append FORWARD --in-interface n2n0 -j ACCEPT
iptables -A FORWARD -i eth0 -o n2n0 -j ACCEPT

In essence this creates a new n2n0 interface that will NAT all the ingress traffic and let you access the Internet through the Raspberry Pi.

On the PC/Mac laptop you use in your hotel room you can do the following (we assume that the local gateway is 192.168.2.1):

# edge -d n2n0 -c mynetwork -k mypassword -u 99 -g 99 -m DE:AD:BE:EF:91:10 -a 192.168.254.20 -p 50001 -l myPublicIP:9876 -r
# route del default
# route add myPublicIP/32 gw 192.168.2.1
# route add default gw 192.168.254.6

In essence you use the local Internet connectivity only for reaching the supernode, while all the rest of the traffic passes through the n2n tunnel.

You can verify the IP address you are using to access the Internet with services like ipinfo.io, and you will see that the Internet believes you’re no longer abroad, but back home. In future n2n versions we will reduce the number of parameters as well as auto-configure routing to make all this even simpler.
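For example, a quick check from the laptop once the tunnel and routes are in place (assuming curl is installed; ipinfo.io is the service mentioned above):

curl ipinfo.io/ip

The returned address should now be the public IP of your home connection rather than the hotel one.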

Enjoy!
