
ntop and Kentik bring nProbe to the Cloud


Traditionally nProbe is used as a host-based network monitoring probe able to produce “augmented” flow records including performance monitoring, security and visibility information.

We have a common vision with Kentik of how network instrumentation needs to evolve beyond “just” bytes and packets-based NetFlow, and of how that can enable users to understand network performance and security challenges.

This year, we entered a partnership with Kentik to leverage nProbe to export rich network metrics to the Kentik Detect big data network analytics cloud platform, and we’re proud to announce the first product based on this collaboration.

Today Kentik has introduced a cloud-based product named NPM (Network Performance Monitoring) that uses their network-savvy, big data-based analytics and nProbe as a source of augmented flows, allowing ops staff to detect and pinpoint network and application problems based on actual user performance (as observed by nProbe).

ntop is proud to see nProbe being used in partnership with Kentik to address new monitoring challenges of, and in, the cloud.


Introducing nBPF: line-rate hardware packet filtering (yes Wireshark at 100G is possible)


Modern network adapters, such as those from Exablaze, Napatech and Silicom (Intel FM10K), support hardware filters. Unfortunately, every vendor has its own way to set filters: there is no unified API and no support for BPF-like filters. Most of the network monitoring community, however, is used to setting filters with BPF, and so powerful hardware filtering is present but unused.

This has been the driving force for developing nBPF (ntop BPF). We realised that most of the time filters involve IPs, ports and protocols, which are exactly the features that hardware-based filters support. Thus we have written from scratch a BPF interpreter able to convert a subset of BPF (one that should satisfy most user needs). Filtering happens in hardware whenever possible, leaving the final cleanup to software-based filtering (as with classic BPF), and only when necessary.
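To make the split concrete, here is a toy Python sketch of the idea (the real nBPF is C code inside PF_RING): AND-ed filter primitives that the NIC can express (hosts, ports, protocols) are offloaded, and whatever remains stays in the software BPF stage.

```python
# Toy sketch of the nBPF hardware/software split; illustration only.
HW_KEYWORDS = {"host", "src host", "dst host",
               "port", "src port", "dst port", "proto"}

def split_filter(primitives):
    """primitives: list of (keyword, value) pairs, e.g. ("src host", "1.2.3.4").
    Returns (hardware_rules, software_rules)."""
    hw = [p for p in primitives if p[0] in HW_KEYWORDS]
    sw = [p for p in primitives if p[0] not in HW_KEYWORDS]
    return hw, sw

hw, sw = split_filter([("proto", "tcp"),
                       ("src host", "1.2.3.4"),
                       ("vlan", "10")])      # vlan: not offloadable here
print(hw)  # [('proto', 'tcp'), ('src host', '1.2.3.4')] -> NIC rules
print(sw)  # [('vlan', '10')] -> classic software BPF cleanup
```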

Libpcap-based applications such as Wireshark or tcpdump can immediately take advantage of nBPF by simply using the libpcap-over-PF_RING library. For example, suppose that you want to use Wireshark with nBPF on interface nt:0. All you have to do is:

  1. Install the prerequisite libraries/drivers for your NIC.
  2. git clone https://github.com/ntop/PF_RING.git
  3. Compile PF_RING including the libpcap library under PF_RING/userland/libpcap
  4. Install PF_RING. We assume that the libpcap-over-PF_RING library is installed under /usr/local/lib
  5. git clone https://github.com/ntop/n2disk.git
  6. cd PF_RING/tools
  7. sudo su
  8. ./n2if up -i nt:0 -d napatech0
  9. LD_LIBRARY_PATH=/usr/local/lib/ wireshark

At this point the Wireshark capture window will show a new adapter named napatech0, on which you can capture traffic. Supposing you want to capture TCP packets sent by host 1.2.3.4, you just have to set your filter in Wireshark as usual


and voilà. Instead of flooding Wireshark with packets, your favourite packet analyser will receive only the packets you are interested in, as all filtering happens in hardware. People who have used Wireshark at 10 Gbit or more know that with this technology you can finally analyse live traffic, something that was not possible until today.

If you want to see a demo of nBPF you can watch the video below.

nBPF is released in source code as part of PF_RING. You can find the code on GitHub, together with implementation notes and READMEs for the various supported NICs.

This is just a preview of nBPF. We will present it extensively at the upcoming Sharkfest Europe conference later this month, where ntop organises a free half-day workshop (you do not have to be registered for Sharkfest to attend it). See you soon!

Filtering Terabytes of pcaps using nBPF and Wireshark


In a previous post we introduced our new nBPF library, which is able to convert a BPF filter into hardware rules for offloading traffic filtering to the network card. We did not mention that the same engine can be used for accelerating traffic extraction from an indexed dump set produced by n2disk. n2disk is a traffic recording application able to produce multiple PCAP files (a per-file limit on duration or size can be used to control the file size) together with an index (for accelerating extraction) and a timeline (for keeping all the files in chronological order).

Until last month the only way to efficiently extract traffic out of the terabytes of raw data produced by n2disk in PCAP format was the npcapextract tool, which can be used from the command line or through our nBox GUI. With this tool you can retrieve specific packets matching a BPF filter in a given time interval very quickly. This proved to be really useful; however, the difficulty of integrating this tool with other applications led us to the development of a PF_RING “timeline” module that can be used to seamlessly do the same job using the PF_RING API (and consequently also the PCAP API, thanks to our libpcap-over-PF_RING).

One of the most common use cases for the timeline module is the Wireshark integration. In fact, it is very convenient to run Wireshark directly on an n2disk timeline, specifying a BPF filter to extract a small portion of the whole dump set, and to start the analysis while the extraction is still in progress.

In order to test this module you should install PF_RING and n2disk following the instructions at http://packages.ntop.org. Please note that the extraction module needs an index and a timeline in order to work, thus you should instruct n2disk to create them on the fly using the -I and -A options. For additional options please refer to the n2disk documentation. Example:

n2disk -i eth1 -o /storage/n2disk/eth1 -I -A /storage/n2disk/eth1/timeline

In order to tell PF_RING that you want to select the timeline module, you should use the “timeline:” prefix followed by the timeline path as the interface name. In addition, it is mandatory to provide a BPF filter that begins with the time interval, expressed using the “start” and “end” tokens, followed by the actual packet filter (a subset of the BPF syntax is supported; please refer to the n2disk documentation), as in the example below:

pfcount -i timeline:/storage/n2disk/eth1/timeline -f "start 2016-09-22 8:40:53 and end 2016-09-22 10:43:54 and host 192.168.2.130"
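The time tokens are plain text inside the filter string; the small Python sketch below (for illustration, not part of PF_RING) shows how such a filter decomposes into a start time, an end time and the residual packet filter:

```python
import re
from datetime import datetime

def parse_timeline_filter(flt):
    """Split a 'start <ts> and end <ts> and <bpf>' filter (the shape used
    with the PF_RING timeline module) into its three components."""
    m = re.match(r"start (.+?) and end (.+?) and (.+)", flt)
    if m is None:
        raise ValueError("filter must begin with 'start ... and end ...'")
    fmt = "%Y-%m-%d %H:%M:%S"
    start = datetime.strptime(m.group(1), fmt)
    end = datetime.strptime(m.group(2), fmt)
    return start, end, m.group(3)

start, end, bpf = parse_timeline_filter(
    "start 2016-09-22 8:40:53 and end 2016-09-22 10:43:54 and host 192.168.2.130")
print(bpf)  # host 192.168.2.130
```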

If you want to do the same in Wireshark, since you cannot use “timeline:” as the interface name (Wireshark lets you choose PCAP files and devices as traffic sources, but it is not aware of n2disk timelines), you have to create a virtual interface bound to your actual timeline and select it as the traffic source. The libpcap-over-PF_RING library will do all the rest. In order to create the virtual interface you should use the “n2if” tool, part of the PF_RING package:

n2if up -t /storage/n2disk/eth1/timeline -d timeline0

After creating the virtual interface bound to the timeline, you should be able to run the extraction using Wireshark (or tshark). Please note that you should set the LD_LIBRARY_PATH environment variable to the libpcap-over-PF_RING installation path (default is /usr/local/lib/) in order to force Wireshark to load the correct libpcap. Please also note that the Wireshark shipped by most distributions is linked against libpcap.so.0.8, thus you probably need to create an ad-hoc symlink:

ln -s /usr/local/lib/libpcap.so /usr/local/lib/libpcap.so.0.8

At this point you should be able to run Wireshark providing the virtual interface created with n2if and a BPF filter containing the time interval as described above:

LD_LIBRARY_PATH=/usr/local/lib/ tshark -i timeline0 -f "start 2016-09-22 8:40:53 and end 2016-09-22 10:43:54 and host 192.168.2.130"

If you are using the Wireshark GUI, you should run just the wireshark command without any option:

LD_LIBRARY_PATH=/usr/local/lib/ wireshark

Select the virtual interface from the GUI:


Set a capture filter specifying time interval and BPF:


Then you can start the packet capture (extraction) and start analysing your traffic:


We will also present this at the upcoming Sharkfest Europe conference, where we will organise a free half-day workshop. See you next week!

See You Next Week at the ntop Users Meeting


This is to renew the invitation to meet you next week at the ntop users meeting colocated with Sharkfest Europe. The event is free of charge but seats are limited. More information can be found here.

Hope to see you next week at the workshop!

ntop Users Meeting 2016 Retrospective

ntopng MySQL Flow Export: Increase the Maximum Number of Open Files


ntopng uses partitioned MySQL tables when storing flows. As MySQL needs a file handle for each partition and its index, it is important to make sure that open_files_limit is large enough to allow the process to keep all these files open. Typically, the default open_files_limit value works out of the box, but some packages/distributions keep this number pretty low.

When the current value is too low, ntopng can show errors such as

[MySQLDB.cpp:55] ERROR: MySQL error: Out of resources when opening file './ntopng/flowsv6#P#p23.MYD' (Errcode: 24 - Too many open files) [rc=-1]

The current value can be checked from a MySQL shell

mysql> SHOW VARIABLES LIKE 'open%';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| open_files_limit | 256   |
+------------------+-------+
1 row in set (0.02 sec)

mysql>

To increase open_files_limit it is necessary to change the MySQL configuration file, adding an entry under [mysqld]:

[mysqld]
open_files_limit = 1024
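As a rule of thumb, each MyISAM partition costs two file handles (data plus index), plus one per table for the table definition. The sketch below estimates a safe lower bound; the figures and the safety margin are hypothetical, not ntopng code:

```python
def min_open_files(tables, partitions_per_table, safety_margin=1.5):
    """Back-of-the-envelope estimate: every MyISAM partition needs a .MYD
    (data) and a .MYI (index) handle, plus one .frm per table; the margin
    accounts for connections, temporary tables, etc."""
    handles = tables * (partitions_per_table * 2 + 1)
    return int(handles * safety_margin)

# e.g. an IPv4 and an IPv6 flow table with 32 partitions each (hypothetical)
print(min_open_files(tables=2, partitions_per_table=32))  # 195
```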

Editing the configuration file may not be sufficient. Depending on the OS/platform, it can also be necessary to tune the system-wide resource limits using ulimit.

Stream That Flow: How to Publish nProbe/Cento Flows in a Kafka Cluster


Apache Kafka can be used across an organization to collect data from multiple sources and make them available in a standard format to multiple consumers, including Hadoop, Apache HBase, and Apache Solr. The integration of nProbe (and its ultra-high-speed sibling nProbe Cento) with the Kafka messaging system makes them good candidate sources of network data. Delivering network data to a redundant, scalable, and fault-tolerant messaging system such as Kafka enables companies to protect their data even in flight, that is, before it has been consolidated in a database.

An impatient reader who is eager to use Cento to deliver flows to a Kafka cluster with a broker at address 127.0.0.1:9092, on a topic named “topicFlows”, can use the following command

ntopPC:~/code/cento$ ./cento -i eth0 --kafka "127.0.0.1:9092;topicFlows"

Readers who are interested in learning more about Cento and Kafka should continue reading this article, which starts by describing the Cento-Kafka publishing mechanism and then moves to a real configuration example. Finally, the appendix describes how to set up Kafka in both a single- and a multi-broker fashion.

Cento will be used in the remainder of this article to carry on the discussion. Examples and configurations given work, mutatis mutandis, also for nProbe.

Publishing nProbe Cento Flows to Kafka

Cento publishes flow “messages” into a Kafka cluster by sending them to one or more Kafka brokers responsible for a given topic. Both the topic and the list of Kafka brokers are specified using a command line option. Initially, Cento tries to contact one or more user-specified brokers to retrieve the Kafka cluster metadata. The metadata includes, among other things, the full list of brokers in the cluster that are responsible for a given topic, and the available topic partitions. Cento uses the retrieved broker list to push flow “messages” to the various partitions in a round-robin fashion.

Cento also features optional message compression and a configurable message acknowledgement policy. The acknowledgement policy makes it possible to avoid waiting for acknowledgements altogether, to wait only for the Kafka leader's acknowledgement, or to wait for an acknowledgement from every replica.

Setting Up Cento

Let’s say Cento has to monitor interface eth1 and has to export generated flows to Kafka topic “topicFlows”. The following command will do the magic

ntopPC:~/code/cento$ ./cento -i eth1 --kafka "127.0.0.1:9092,127.0.0.1:9093,127.0.0.1:9094;topicFlows"

The command above assumes flows have to be exported to topic topicFlows and that there are three brokers listening on localhost, on ports 9092, 9093 and 9094 respectively. These Kafka brokers are also shown in the picture below, together with the running Cento instance
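The --kafka argument packs the broker list and the topic into one string: brokers are comma-separated and the topic follows a semicolon. A quick Python sketch of that format (for illustration; Cento parses it in C):

```python
def parse_kafka_arg(arg):
    """Split a 'host:port[,host:port...];topic' string into
    (broker_list, topic)."""
    brokers, topic = arg.split(";", 1)
    return [b.strip() for b in brokers.split(",")], topic

brokers, topic = parse_kafka_arg(
    "127.0.0.1:9092,127.0.0.1:9093,127.0.0.1:9094;topicFlows")
print(brokers)  # ['127.0.0.1:9092', '127.0.0.1:9093', '127.0.0.1:9094']
print(topic)    # topicFlows
```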


Three Kafka Brokers (top and bottom left). Cento exporting to the brokers (bottom right).

Consuming Cento Flows

For the sake of example, the command-line script kafka-console-consumer.sh, available in the bin/ folder of the Kafka distribution, is used to read messages from a Kafka topic (see the appendix for instructions on how to get it). In order to consume the flows published to topicFlows we can use the script as follows

ntopPC:~/kafka/kafka_2.11-0.10.1.0$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic topicFlows --from-beginning

Below is an image that shows flows that are being consumed from the Kafka cluster.


Three Kafka Brokers (top and bottom left). Cento exporting to the brokers (bottom right). The command line consumer (middle right)

Note that flows can be consumed by any custom application that can interact with Kafka.

Cento and Kafka Topics

Cento takes as input a topic, that is, a string representing the generated stream of data inside Kafka. Topics can be partitioned and replicated. With reference to the topic being used, Cento has the following behavior:

  • If the topic doesn’t exist in the Kafka cluster, Cento creates it with a single partition and replication factor 1.
  • If the topic exists in the Kafka cluster, Cento uses it and sends flows to the whole set of partitions in a round-robin fashion.

So, a user who is OK with a single-partition topic can simply fire up Cento, which will create it. If a more complex topic configuration is needed, the topic has to be created in advance using, for example, the helper script kafka-topics.sh. We refer the interested reader to the appendix below for a detailed discussion of the Kafka setup and its topics.
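Round-robin dispatch over the partition list is straightforward; this small sketch mimics the behaviour described above (an illustration, not Cento's code):

```python
from itertools import cycle

class RoundRobinDispatcher:
    """Spreads messages evenly over a fixed list of topic partitions."""
    def __init__(self, partitions):
        self._next = cycle(partitions)

    def partition_for(self, _message):
        # every message goes to the next partition in turn
        return next(self._next)

rr = RoundRobinDispatcher([0, 1, 2])
print([rr.partition_for(f"flow-{i}") for i in range(6)])  # [0, 1, 2, 0, 1, 2]
```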

 

Appendix

Setting Up Kafka

Prerequisites

Getting Kafka

The latest version of Kafka can be downloaded from https://kafka.apache.org/downloads. Download the latest tar archive (kafka_2.11-0.10.1.0.tgz at the time of writing), extract it, and navigate into the decompressed folder.

ntopPC:~/kafka$ wget http://apache.panu.it/kafka/0.10.1.0/kafka_2.11-0.10.1.0.tgz
ntopPC:~/kafka$ tar xfvz kafka_2.11-0.10.1.0.tgz
ntopPC:~/kafka$ cd kafka_2.11-0.10.1.0/
ntopPC:~/kafka/kafka_2.11-0.10.1.0$ ls
bin config libs LICENSE logs NOTICE site-docs

Starting Zookeeper

Kafka uses Zookeeper to store a great deal of status information, including, but not limited to, the topics managed by each broker. A healthy Zookeeper installation consists of at least three distributed nodes. In this example, we will just start a quick-and-dirty Zookeeper on the same machine that will run Kafka. This represents a single point of failure and should absolutely be avoided in production environments.

To start Zookeeper we can simply use the one that is shipped with Kafka, as follows

ntopPC:~/kafka/kafka_2.11-0.10.1.0$ ./bin/zookeeper-server-start.sh config/zookeeper.properties
[...]
[2016-10-23 12:50:02,402] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)

Starting a Kafka Broker

Now that Zookeeper is up and running it is possible to start a Kafka broker by using the default configuration found under config/server.properties. Upon startup, the broker will contact the Zookeeper instance to exchange and agree on some status variables.

ntopPC:~/kafka/kafka_2.11-0.10.1.0$ ./bin/kafka-server-start.sh config/server.properties
[...]
[2016-10-23 12:52:01,510] INFO [Kafka Server 0], started (kafka.server.KafkaServer)

Creating a Kafka test Topic

A simple test Kafka topic with just one partition and replication factor 1 can be created using the kafka-topics.sh helper script.

ntopPC:~/kafka/kafka_2.11-0.10.1.0$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

Graphically, topic creation is shown in the following image.


The running kafka broker (top left); the running zookeeper (top right) and the command issued to create the test topic (bottom).

Producing Messages on a test Topic

A command-line producer available in the bin/ folder can be used to check whether the Kafka deployment can receive messages. The script sends messages (one per line) until Ctrl+C is pressed to exit. In the following example two “ntop test” messages are produced and sent to the broker listening on localhost port 9092.

ntopPC:~/kafka/kafka_2.11-0.10.1.0$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
ntop test 01
ntop test 02

Consuming messages on a test Topic

A command-line consumer available in the bin/ folder can be used to read messages from a Kafka topic. In the following snippet the script is used to consume the two “ntop test” messages produced above. The script will wait for new messages until Ctrl+C is pressed to exit.

ntopPC:~/kafka/kafka_2.11-0.10.1.0$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
ntop test 01
ntop test 02

Following is an image that shows how to consume messages from the topic.

Top: the kafka broker (left) and the Zookeeper (right). Bottom: The command-line consumer


Multi-Broker

A real Kafka cluster consists of multiple brokers. The remainder of this section shows how to add two extra Kafka brokers to the cluster that is already running the broker started above. In order to add them, two configuration files must be created. They can be copied from the default configuration file found in config/server.properties.

ntopPC:~/kafka/kafka_2.11-0.10.1.0$ cp config/server.properties config/server-1.properties
ntopPC:~/kafka/kafka_2.11-0.10.1.0$ cp config/server.properties config/server-2.properties

The two files will carry configuration information for two different brokers:

  • server-1.properties configures a broker as follows: broker id 1; listen on localhost port 9093; log in /tmp/kafka-logs-1
  • server-2.properties configures a broker as follows: broker id 2; listen on localhost port 9094; log in /tmp/kafka-logs-2

The relevant part of the configuration files is the following.

ntopPC:~/kafka/kafka_2.11-0.10.1.0$ cat config/server-{1,2}.properties | egrep '(id|listeners|logs)'
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1
# listeners = security_protocol://host_name:port
# listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:9093
# it uses the value for "listeners" if configured. Otherwise, it will use the value
#advertised.listeners=PLAINTEXT://your.host.name:9092
log.dirs=/tmp/kafka-logs-1
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=2
# listeners = security_protocol://host_name:port
# listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:9094
# it uses the value for "listeners" if configured. Otherwise, it will use the value
#advertised.listeners=PLAINTEXT://your.host.name:9092
log.dirs=/tmp/kafka-logs-2
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
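Only three keys differ between the per-broker files; they can be generated mechanically, as in this sketch (paths and ports taken from the example above):

```python
def broker_overrides(broker_id, port, log_base="/tmp/kafka-logs"):
    """Return the per-broker settings that must differ from the
    defaults in config/server.properties."""
    return {
        "broker.id": str(broker_id),
        "listeners": f"PLAINTEXT://:{port}",
        "log.dirs": f"{log_base}-{broker_id}",
    }

for bid, port in ((1, 9093), (2, 9094)):
    print(broker_overrides(bid, port))
```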

The two additional brokers can be started normally. They will discover each other, and the already-running broker, through Zookeeper.

ntopPC:~/kafka/kafka_2.11-0.10.1.0$ bin/kafka-server-start.sh config/server-1.properties
ntopPC:~/kafka/kafka_2.11-0.10.1.0$ bin/kafka-server-start.sh config/server-2.properties

Creating a Partitioned Topic

A replicated test topic named “topicFlowsPartitioned” can be created using the kafka-topics.sh helper script. Once created, the status of the topic can be queried using the same helper.

ntopPC:~/kafka/kafka_2.11-0.10.1.0$ ./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 3 --topic topicFlowsPartitioned
Created topic "topicFlowsPartitioned".
ntopPC:~/kafka/kafka_2.11-0.10.1.0$ bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic topicFlowsPartitioned
Topic:topicFlowsPartitioned PartitionCount:3 ReplicationFactor:3 Configs:
 Topic: topicFlowsPartitioned Partition: 0 Leader: 2 Replicas: 2,1,0 Isr: 2,1,0
 Topic: topicFlowsPartitioned Partition: 1 Leader: 0 Replicas: 0,2,1 Isr: 0,2,1
 Topic: topicFlowsPartitioned Partition: 2 Leader: 1 Replicas: 1,0,2 Isr: 1,0,2

The creation of the partitioned topic is also shown in the image below


The three Kafka brokers (top and bottom left) and the creation of a replicated topic (bottom right)

Producing and consuming messages on the replicated topic works exactly as already shown for the single-partition topic. The following image shows the execution of the producer and consumer scripts


The three Kafka brokers (top and bottom left) and the production/consumption of messages in a replicated topic (bottom right)


Monitoring VoIP Traffic with nProbe and ntopng


VoIP applications usually limit their monitoring capabilities to the generation of CDRs (Call Data Records), which are used to produce billing/consumption data. In essence you know how many calls a certain user/number has made, their duration, etc. While this information can be enough for basic monitoring, it is not enough for guaranteeing reliable call quality, as these systems are essentially blind with respect to voice quality. Wireshark can analyse both call signalling and voice, but it is a troubleshooting tool: it cannot be used for permanent monitoring, just for analysing specific situations when a problem shows up.

Fortunately you can complement CDRs with realtime VoIP traffic monitoring and voice quality analysis using nProbe Pro (with the VoIP plugin) and ntopng. All you need to do is send nProbe (usually via a span port or network tap) the VoIP traffic (or, if you want, all your network traffic including VoIP) for analysis. Via Redis, nProbe is able to analyse SIP, RTP and RTCP, and to correlate SIP with RTP, so that you know, for each RTP stream, which call it belongs to.

Supposing that the traffic to monitor is received on eth1, all you need to do is start the following applications (in the example below they have been started on the same machine, but via ZMQ flows can be sent remotely over the network in an encrypted format)

# nprobe -i eth1 -T "%IPV4_SRC_ADDR %IPV4_DST_ADDR %IPV4_NEXT_HOP %INPUT_SNMP %OUTPUT_SNMP %IN_PKTS %IN_BYTES %FIRST_SWITCHED %LAST_SWITCHED %L4_SRC_PORT %L4_DST_PORT %TCP_FLAGS %PROTOCOL %L7_PROTO @SIP@ @RTP@" --redis localhost --zmq tcp://127.0.0.1:1234

$ ntopng -i tcp://127.0.0.1:1234

In essence nProbe analyses the traffic and sends ntopng the flows via ZMQ. ntopng collects the flows and displays them on the user interface. Inside the nProbe template there are two special information elements, @SIP@ and @RTP@: they are wildcards for various information elements such as SIP caller/called or RTP codecs. This way users can forget about VoIP details and focus on monitoring traffic.
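Conceptually, @SIP@ and @RTP@ behave like macros that expand into the full field lists reported later in this post. A toy expansion (field lists abbreviated; not nProbe's actual code):

```python
# Abbreviated macro tables; the complete field lists appear later in the post.
MACROS = {
    "@SIP@": ["%SIP_CALL_ID", "%SIP_CALLING_PARTY", "%SIP_CALLED_PARTY"],
    "@RTP@": ["%RTP_SSRC", "%RTP_IN_JITTER", "%RTP_MOS"],
}

def expand_template(template):
    """Replace each @MACRO@ token with its field list; % fields pass through."""
    out = []
    for token in template.split():
        out.extend(MACROS.get(token, [token]))
    return out

print(expand_template("%IPV4_SRC_ADDR %IPV4_DST_ADDR @SIP@ @RTP@"))
```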


As you can see, ntopng interprets the VoIP information and presents it in a user-friendly way. It reports not just CDRs, but can also analyse voice quality by computing a pseudo-MOS. This enables network administrators to spot calls with bad quality and look for a solution to the problem. If you enable in ntopng the export to ElasticSearch and/or MySQL, you can dump call information persistently to a database, or use Kibana to create a dashboard about VoIP calls.

# StartTime[epoch]	EndTime[epoch]	SIP_Server[ascii:32]	ClientIP[ascii:32]	CallId[ascii:64]	CallingParty[ascii:64]	CalledParty[ascii:64]	RTPInfo[ascii:64]	SIPFailureCode[uint]	ReasonCause[uint]	Packets[uint]	CallState[ascii:64]	StateMachine[ascii]
#
1481205353	1481205353	212.13.205.165	10.96.5.59	b654d999b6321def238939c5d48ce777@10.37.129.2	brix <sip:brix@testyourvoip.net>	<sip:tyv@212.13.205.165>	10.37.129.2:50070,212.13.205.165:50422	200	0	12105	CALL_COMPLETED	INVITE=1481205353,TRYING=0,RINGING=1481205353,INV_RSP=1481205353,BYE=1481205353,CANCEL=0
1481205353	1481205353	10.96.5.59	212.13.205.165	400ad5a7a28e421f355369a9fc8910ee@192.168.1.101	brix <sip:brix@testyourvoip.net>	<sip:tyv@212.13.205.165>	212.13.205.165:50390,0.0.0.0:0	200	0	5	CALL_IN_PROGRESS	INVITE=0,TRYING=0,RINGING=1481205353,INV_RSP=1481205353,BYE=0,CANCEL=0

Above you can find an excerpt of a call sample. The whole list of SIP

%SIP_CALL_ID               	SIP call-id
%SIP_CALLING_PARTY         	SIP Call initiator
%SIP_CALLED_PARTY          	SIP Called party
%SIP_RTP_CODECS            	SIP RTP codecs
%SIP_INVITE_TIME           	SIP time (epoch) of INVITE
%SIP_TRYING_TIME           	SIP time (epoch) of Trying
%SIP_RINGING_TIME          	SIP time (epoch) of RINGING
%SIP_INVITE_OK_TIME        	SIP time (epoch) of INVITE OK
%SIP_INVITE_FAILURE_TIME   	SIP time (epoch) of INVITE FAILURE
%SIP_BYE_TIME              	SIP time (epoch) of BYE
%SIP_BYE_OK_TIME           	SIP time (epoch) of BYE OK
%SIP_CANCEL_TIME           	SIP time (epoch) of CANCEL
%SIP_CANCEL_OK_TIME        	SIP time (epoch) of CANCEL OK
%SIP_RTP_IPV4_SRC_ADDR     	SIP RTP stream source IP
%SIP_RTP_L4_SRC_PORT       	SIP RTP stream source port
%SIP_RTP_IPV4_DST_ADDR     	SIP RTP stream dest IP
%SIP_RTP_L4_DST_PORT       	SIP RTP stream dest port
%SIP_RESPONSE_CODE         	SIP failure response code
%SIP_REASON_CAUSE          	SIP Cancel/Bye/Failure reason cause
%SIP_C_IP                  	SIP C IP adresses
%SIP_CALL_STATE            	SIP Call State

and RTP

%RTP_SSRC                       RTP Sync Source ID
%RTP_FIRST_SEQ             	First flow RTP Seq Number
%RTP_FIRST_TS              	First flow RTP timestamp
%RTP_LAST_SEQ              	Last flow RTP Seq Number
%RTP_LAST_TS               	Last flow RTP timestamp
%RTP_IN_JITTER             	RTP jitter (ms * 1000)
%RTP_OUT_JITTER            	RTP jitter (ms * 1000)
%RTP_IN_PKT_LOST           	Packet lost in stream (src->dst)
%RTP_OUT_PKT_LOST          	Packet lost in stream (dst->src)
%RTP_IN_PKT_DROP           	Packet discarded by Jitter Buffer (src->dst)
%RTP_OUT_PKT_DROP          	Packet discarded by Jitter Buffer (dst->src)
%RTP_IN_PAYLOAD_TYPE       	RTP payload type
%RTP_OUT_PAYLOAD_TYPE      	RTP payload type
%RTP_IN_MAX_DELTA          	Max delta (ms*100) between consecutive pkts (src->dst)
%RTP_OUT_MAX_DELTA         	Max delta (ms*100) between consecutive pkts (dst->src)
%RTP_SIP_CALL_ID           	SIP call-id corresponding to this RTP stream
%RTP_MOS                   	RTP pseudo-MOS (value * 100) (average both directions)
%RTP_IN_MOS                	RTP pseudo-MOS (value * 100) (src->dst)
%RTP_OUT_MOS               	RTP pseudo-MOS (value * 100) (dst->src)
%RTP_R_FACTOR              	RTP pseudo-R_FACTOR (value * 100) (average both directions)
%RTP_IN_R_FACTOR           	RTP pseudo-R_FACTOR (value * 100) (src->dst)
%RTP_OUT_R_FACTOR          	RTP pseudo-R_FACTOR (value * 100) (dst->src)
%RTP_IN_TRANSIT            	RTP Transit (value * 100) (src->dst)
%RTP_OUT_TRANSIT           	RTP Transit (value * 100) (dst->src)
%RTP_RTT                   	RTP Round Trip Time (ms)
%RTP_DTMF_TONES            	DTMF tones sent (if any) during the call

information elements are reported above.

Thanks to PF_RING, nProbe makes it possible to monitor large VoIP networks with tens of thousands of concurrent calls using a low-cost x86-based server. Advanced users can also instruct nProbe to create call logs (add --sip-dump-dir <dump dir> to the nProbe command line) in addition to exporting data to ntopng.
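nProbe exports both the pseudo-MOS and the R-factor multiplied by 100 (see the %RTP_MOS and %RTP_R_FACTOR fields above). How exactly nProbe computes them is not shown here, but the standard E-model mapping from R-factor to MOS (ITU-T G.107) gives an idea of the relationship:

```python
def r_to_mos(r):
    """Standard E-model mapping from an R-factor (0-100) to an estimated
    MOS; shown for illustration, not necessarily nProbe's exact code."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

# an exported R_FACTOR value of 9300 means R = 93, i.e. an excellent call
print(round(r_to_mos(93), 2))  # 4.41
```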

Happy VoIP traffic monitoring!


Flow-Based Monitoring, Troubleshooting and Security using nProbe


nProbe is a tool that has been developed over the last 10 years, and thus it has been extended and improved year by year. However many users, even those who have been using it for a long time, might not know all its features. Next week at Flocon 2017, I will give a talk about nProbe. The idea is to position nProbe (e.g. against the popular YAF tool), highlight what people can do with it (in addition to traffic monitoring and troubleshooting), and show that nProbe is much more than a network sensor.

I invite those who will attend the conference to show up, so we can meet in person. Hope to see you.

Clustering Network Devices using ntopng Host Pools


In computer networks, devices are identified by an IP address and a MAC address. The IP can be dynamically assigned (so it might not be persistent), whereas the MAC is (in theory) unique and persistent, and thus suitable for identifying a device. Non-technical users do not know these low-level details, and in general it makes sense to cluster devices using other criteria. VLANs are a way to logically group devices belonging to the same administrative domain, but this is still a low-level network property.

When administering a network, we have realised that we need a way to cluster devices into logical groups that have nothing to do with network-level properties such as the IP address. In order to address this need, in ntopng (development version only at the moment, but soon also in the stable release) we have implemented what we call Host Pools. They are logical groups of devices that can be identified by IP address and/or MAC. In order to define a host pool you need to select the interface view and click on the host pool icon.

There you can define a pool by setting its name, and on the membership tab you can see the devices belonging to the pool. Remember that you can set both the MAC and the IP address (or network).

At this point, in the host view you can see the pool associated with a host, depicted in the green badge next to the IP address/network it belongs to.

It is worth remarking that host pools are a logical cluster of devices that do not have to belong to the same network. For example, you can group all the printers of your company, all mobile phones, etc. In essence this is a way to cluster devices and to easily spot those that are unknown and thus suspicious (e.g. a new MAC we have not listed, which can hide a device that should not have been connected) or simply ungrouped. You can list all active devices belonging to a pool by clicking on the pool badge. For example, if you want to list all local devices that do not belong to any pool, just click on the pool badge of a host not belonging to any pool, then from the “Filter Hosts” menu select local hosts only and you will see the host list.
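To make the idea concrete, here is a toy lookup (not ntopng's code; pool definitions are hypothetical) that classifies a device into a pool by MAC or by IP/network membership:

```python
import ipaddress

POOLS = {  # hypothetical pool definitions
    "Printers":  {"macs": {"aa:bb:cc:dd:ee:ff"}, "nets": set()},
    "Guest LAN": {"macs": set(), "nets": {"192.168.100.0/24"}},
}

def pool_of(mac, ip, pools=POOLS):
    """Return the first pool whose MAC list or networks match the device,
    or None for an ungrouped (and possibly suspicious) device."""
    addr = ipaddress.ip_address(ip)
    for name, members in pools.items():
        if mac in members["macs"]:
            return name
        if any(addr in ipaddress.ip_network(n) for n in members["nets"]):
            return name
    return None

print(pool_of("aa:bb:cc:dd:ee:ff", "10.0.0.7"))       # Printers
print(pool_of("00:11:22:33:44:55", "192.168.100.9"))  # Guest LAN
print(pool_of("00:11:22:33:44:55", "10.0.0.9"))       # None
```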

We plan to further expand the host pool concept before the next stable ntopng release, by easing the association of hosts to pools, by adding multi-pool support (e.g. my smartphone should belong both to the “Luca’s Devices” and the “Smartphone” pools), and by applying actions to pools (e.g. executing the Lua script action.lua when a new device belonging to pool MyPool appears on the network). In a future blog post, we’ll discuss how host pools are used by ntopng in bridge mode with the captive portal, to automatically bind network devices to users.

Stay tuned, and send us the suggestions or enhancements you would like to see by opening a ticket on GitHub.

Positioning PF_RING ZC vs DPDK


Last week I met some PF_RING ZC and DPDK users. The idea was to answer questions about PF_RING (for the existing ZC users) and to understand (for the DPDK users) whether it was a good idea to jump on ZC for future projects or stay on DPDK. The usual question people ask is: can you position ZC vs DPDK? The answer is not a simple yes/no, so let’s start from the beginning.

When PF_RING was created, we envisioned an API, consistent across network adapters, giving people the ability to code an application once, forget the hardware details of the NIC being used, and deploy it everywhere. Seamlessly, without changing a single line of code, recompiling, or anything else a non-developer is unable to do. This means that you can code your application on your laptop using the WiFi NIC for testing and deploy it on a 100 Gbit NIC simply by changing the device name from -i eth1 to -i zc:eth13. We have spent a lot of time making sure that the above statement also holds for FPGA-based NICs such as Accolade, Fiberblaze or Napatech. This is the idea: developers should NOT pay attention to the underlying hardware, to memory allocation/deallocation, to the packet lifecycle, or to differences between NIC API releases. Instead they should pay attention to the application they are developing.

DPDK, instead, is based on the assumption that you will very likely be using an Intel NIC (PF_RING supports Intel NICs, and we like them of course, but we do not want to be an Intel shop, as part of the freedom we want to give our developers is the ability to hop onto the best NIC they want to use/can afford for a project), that you are a skilled developer (sorry, but the DPDK API is all but simple), that you are coding your application from scratch and thus can use all the DPDK API calls to allocate/manage packets, and that you are aware of the NIC you are sitting on. A good example is the Intel X710/XL710, the current flagship 10/40 Gbit adapter family from Intel. When you enable jumbo frames, the NIC returns RX packets as 2K-long segments (so for an ingress 5K packet you will receive two partial 2K buffers and the remaining 1K in a following buffer), and if you want to TX a 9K packet you need to send a partial 8K buffer plus the rest in the following buffer. In essence the developer must know this, prepare the application to handle these cases, and make sure that when moving to another NIC that does not work this way (e.g. the Intel X520/X540) the application can handle 1-buffer jumbo frames. In PF_RING ZC, instead, the library allocates the memory buffers according to the MTU: regardless of the NIC you use, the library will always return full packets (i.e. the segmentation into multiple buffers is not exposed to the user, who will always deal with a single jumbo packet), and the only thing a developer has to do is make sure the application can handle jumbo packets. For PF_RING, hiding these low-level details is compulsory for granting seamless application execution across network adapters, and we believe it is a big relief for developers.
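To illustrate what the ZC library hides, here is a toy sketch of the segmentation/reassembly a DPDK application must perform itself on such NICs. The 2 KB receive-segment size is taken from the description above; the function names and structure are invented for illustration:

```python
RX_SEG = 2048  # X710/XL710 deliver received jumbo frames as 2 KB segments

def segment(packet, seg_size=RX_SEG):
    """Split a packet the way the NIC would deliver it to the application."""
    return [packet[i:i + seg_size] for i in range(0, len(packet), seg_size)]

def reassemble(segments):
    """Join the per-NIC receive segments back into one full packet,
    which is what the ZC library does internally before handing the
    packet to the application as a single buffer."""
    return b"".join(segments)
```

A 5000-byte ingress frame thus arrives as two 2048-byte segments plus one 904-byte remainder, and the reassembly step is exactly the code an application must not forget when it is written directly against the NIC.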

Other usual questions DPDK users ask us are: 1) DPDK is free whereas ZC has a license cost, and 2) DPDK is, in some benchmarks, 1-2% faster than ZC. As for 1), we offer free support to everyone, which, combined with the fact that you can use non-super-skilled developers and a smaller development team than with DPDK, is a fee you may be willing to pay. As for 2), you can read here that the performance is basically the same (sometimes ZC is even more efficient than DPDK), so it is not really an argument.

Conclusion: we let developers choose the API they like most. ntop is not Intel of course: we are a small team, focused on creating simple technology that can be used by everyone, providing timely support, and maintaining it over the years (the PF_RING project was started in 2003). But being small is sometimes a value, as we can speak directly with our users, without anybody in the middle. We do not want to convince people to move from DPDK to ZC, but just to make them aware that neither performance nor overall development costs are arguments against our tools.

Collecting Proprietary Flows with nProbe


nProbe was originally designed as an efficient tool able to capture traffic packets and transform them into flows. Call it a network probe or sensor. Over the years we have added the ability to collect flows (i.e. nProbe is both a probe and a collector), so that nProbe can now act as a probe, a collector, and also a proxy converting flows across formats. For instance, you can collect IPFIX flows and export them in NetFlow v9. All this follows the standards, as confirmed by the IPFIX interoperability tests.

Until now we focused on collecting standard flow fields (i.e. those defined in the NetFlow/IPFIX RFCs), but as vendors often use custom fields (i.e., in IPFIX, those with a non-zero PEN), we receive many requests about supporting information element (IE) X or Y. Initially we handled them as exceptions (e.g. nProbe has supported a few Cisco NBAR, Palo Alto and IXIA IEs for many years), but this was not a long-term solution.

For this reason we have decided to enhance nProbe Pro (unfortunately the standard nProbe lacks this feature, as we rely on mechanisms not present in that version) with the ability for users to define at runtime new fields they want to support during flow export. As nProbe is based on the concept of a template (-T command line option), it is now possible to extend the list of available IEs with custom ones. These new IEs can be collected and exported like any other native IE such as IP address and port. For instance, if you send nProbe flows that contain a custom IE carrying the name of the process that generated flow X, nProbe can collect the field and export it, for instance in JSON-based formats (e.g. Apache Kafka and ElasticSearch), but it cannot use it for anything more than this. To support this, nProbe Pro has a new command line option

--load-custom-fields <file> | Load custom templates from the specified file.

that allows users to pass a configuration file at runtime. For instance, have a look at a simple configuration file. Fields are tab-separated and you can define them as follows:
  • Field name. This is the string that will be used in the template definition (-T) and when exported to MySQL and JSON.
  • PEN. Use 0 if this is a standard field, or a custom enterprise number if this IE is coming from a specific vendor.
  • Field Id, is the numeric identifier that is used to identify the field in the flow template.
  • Len is the field length (in bytes).
  • Format specifies how to represent the field when exporting it (e.g. in JSON) or dumping it to a text file (-P).
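As a purely illustrative example (the field name, PEN and field Id below are invented, not taken from any real vendor), a line of such a configuration file following the five columns above could look like this:

```
# NAME          PEN     FIELD_ID  LEN  FORMAT
PROCESS_NAME    99999   65        16   %s
```

Here a 16-byte vendor field with Id 65 would be exported as a string (%s) under the name PROCESS_NAME.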

If you are wondering how to fill in these values, I suggest you look at the flow specs provided by your flow device vendor, or, if you do not have them, use Wireshark to capture the flow template and dissect it.

With this enhancement you can now collect custom IEs with nProbe simply by providing a configuration file and specifying with -T what to export. For instance do

nprobe -i none -3 2055 --load-custom-fields custom_fields -P /flows -T "%IPV4_SRC_ADDR %IPV4_DST_ADDR %IPV4_NEXT_HOP %INPUT_SNMP %OUTPUT_SNMP %IN_PKTS %IN_BYTES %FIRST_SWITCHED %LAST_SWITCHED %L4_SRC_PORT %L4_DST_PORT %TCP_FLAGS %PROTOCOL %VENDOR_PROPRIETARY_65"

for collecting flows with a proprietary field Id 65 and dumping them in text format under /flows. For the time being, this new feature is present only in the development version of nProbe; it will be included in the next stable version.

Happy flow collection!

PS. If you are wondering if these IEs can be sent to ntopng via ZMQ, the answer is yes.

What Is a Microburst and How to Detect It?


It’s not uncommon to see network administrators struggling to track down packet drops at the interface level on network equipment while seeing a low average link utilisation. In the end it often turns out to be due to a phenomenon (well) known as microburst. While forwarding data between network links, network equipment absorbs spikes with buffers; when the buffers fill much more quickly than they empty because of a line-rate burst, they overflow and packet loss occurs (yes, you drop packets even though your link is, on average, lightly used).

It’s clear, then, that having a tool able to monitor our network for bursts in real time is crucial for identifying potential capacity issues. On the other hand, the tools we use every day for monitoring our networks won’t show much, as they are unable to measure microsecond-scale bursts. They provide data at a resolution of seconds, which is enough for measuring average bandwidth, but definitely not enough for detecting microbursts, as these last for a fraction of a second and even a 1-second average can hide them.
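To make the idea concrete, here is a minimal sketch of window-based burst detection: slice packet timestamps into fixed microsecond-scale windows, compute each window's bit rate, and flag the windows exceeding a percentage of the link speed. The names and structure are illustrative, not n2disk's actual implementation:

```python
def detect_microbursts(packets, win_usec, link_mbps, threshold_pct):
    """packets: iterable of (timestamp_sec, frame_len_bytes).
    Returns the start times (sec) of windows whose rate exceeds
    threshold_pct percent of the link speed."""
    win = win_usec / 1e6                              # window size in seconds
    limit_bps = link_mbps * 1e6 * threshold_pct / 100.0
    buckets = {}                                      # window index -> bits
    for ts, length in packets:
        w = int(ts / win)
        buckets[w] = buckets.get(w, 0) + length * 8
    return [w * win for w, bits in sorted(buckets.items())
            if bits / win > limit_bps]
```

For example, on a 100 Mbit/s link with a 10,000 usec window and a 90% threshold, 100 full-size frames squeezed into a single 10 ms window (a 120 Mbit/s instantaneous rate) are flagged even though the per-second average stays well below the link capacity.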

Here at ntop we have received many requests on this topic over the last years. This led to the development of a technology able to continuously compute the traffic rate, detect unexpected data bursts and report them as soon as they occur. This technology has been integrated into n2disk, our traffic recording application, as well as into standalone tools able to analyse both live traffic and PCAP files. The goal is to combine microburst detection with packet-to-disk capabilities in a single tool, so you can use one box (one tap, one application) to do both things instead of two specialised apps.

The microburst detection feature in n2disk lets you specify the traffic rate limit as a percentage of the link speed; when this threshold is exceeded, the system generates a log entry.

MICRO-BURST DETECTION
[--uburst-detection] | Enable microburst detection.
[--uburst-log] | Microbursts log file.
[--uburst-win-size] | Window size for microburst check (usec).
[--uburst-link-speed] <mbit/s> | Link speed (Mbit/s).
[--uburst-threshold] | Traffic threshold wrt link speed (percentage).

Example:

# n2disk -i zc:eth1 -o /storage/ --uburst-detection --uburst-link-speed 100 --uburst-threshold 90 --uburst-win-size 10000 --uburst-log /var/tmp/n2disk/uburst.log

# cat /var/tmp/n2disk/uburst.log
Start End Duration Kbit Peak-Mbit/s
1477408107.351533705 1477408107.366453800 0.014920095 1378 95.971
1477408110.330377529 1477408110.341709397 0.011331868 1058 94.741

Please bear in mind that:

  • Microbursts are computed on a per link basis.
  • Microbursts require precise timestamping: it is better to use a card with hardware timestamps, even though any Intel card using our PF_RING ZC drivers and our software timestamping technology is usually precise enough for microsecond-based measurements.
  • Using standard kernel drivers leads to unreliable results, due to the buffering happening on the machine itself caused by the mechanisms that move packets from the card to the application.
  • Port mirroring usually introduces buffering/timing changes: better to use a network tap!

Meet ntop on April 28th @ Microsoft Munich


This year we’ve accepted the invitation from Wuerth-Phoenix to be part of their Roadshows 2017 and talk about network and system monitoring. The first workshop will be in Munich, Germany on April 28th.

All ntop users are invited to come and talk about our monitoring tools.

 

ntop on April 28th at Microsoft in Munich

How does IT become a true service enabler? How will monitoring evolve in the age of Industry 4.0? What vendor-independent news can be expected in the near future? A new form of management and monitoring of IT services and work processes is becoming a central asset for modern enterprise IT, in order to provide customers with internally delivered and externally sourced IT services of the highest quality. These highly topical themes are the focus of the “IT System Management Roadshow”, which stops in Munich on April 28th, where I will also speak as a guest about approaches to intelligent network monitoring for IoT practice with ntop.

The number of participants is limited. The programme and registration options can be found at www.wuerth-phoenix.com/roadshow

 

Filling the Pipe: Exporting ntopng Flows to Logstash


Logstash comes in very handy when it is necessary to manipulate or augment data before the actual consolidation. Typical examples of augmentation include IP address to customer ID mappings and geolocation, just to name a few.

ntopng natively supports network flows export to Logstash. The following video tutorial demonstrates this feature.


Capture, Filter, Extract Traffic using Wireshark and PF_RING


Last year we introduced our new nBPF library able to:
1. Convert a BPF filter to hardware rules for offloading traffic filtering to the network card, making it possible to analyse traffic at 100G.
2. Accelerate traffic extraction from an indexed dump set produced by n2disk, our traffic recording application able to produce multiple PCAP files together with an index.

Along with that library we released a tool, n2if, able to create virtual interfaces to be used in Wireshark, making it possible to implement line-rate hardware packet filtering at 100G and to filter terabytes of pcaps with Wireshark.

In the last months we decided to take another step towards a better integration with Wireshark by creating an extcap module. The extcap interface is a plugin-based mechanism that allows external executables to be used as traffic sources when the capture interface is not a standard network interface directly recognised by Wireshark. This means that there is no longer any need for external tools to create special virtual interfaces, and linking Wireshark to our libpcap is no longer necessary, as everything is based on plugins.

The ntopdump extcap module can be used both to open PF_RING interfaces (i.e. even those that are not listed by ifconfig) and to extract traffic from a n2disk dump set in Wireshark, with a few clicks inside the Wireshark GUI.

To get started with the ntopdump module, you need to compile the module and copy it to the extcap path where Wireshark looks for extcap plugins. This is not needed if you are using the PF_RING binary package, which ships it pre-packaged and installed in the directory where Wireshark will search for it.

cd PF_RING/userland/wireshark/extcap/
make
cp ntopdump /usr/lib/x86_64-linux-gnu/wireshark/extcap/

In the example above the extcap folder is /usr/lib/x86_64-linux-gnu/wireshark/extcap/; if you install Wireshark from sources it will probably be /usr/local/lib/wireshark/extcap/. In any case, you can read the actual extcap folder from the Wireshark menu:

“Help” -> “About Wireshark” -> “Folders” -> “Extcap path”

At this point you are ready to start Wireshark and use the ntopdump module. Once you open Wireshark, you will see two additional interfaces, “PF_RING interface” and “n2disk timeline”. Before starting the capture, please configure the interface you want to use by clicking on the “configuration” icon of the corresponding interface.

We will present this and other ntop technologies usable in Wireshark at the upcoming Sharkfest ’17 US in Pittsburgh, where we will organise an ntop meetup open to all of our users who want to hear about the latest things we have developed and future roadmap items.

 

Network Security Analysis Using ntopng


Most security-oriented traffic analysts rely on IDSs such as Bro or Suricata for network security. While we believe those are good solutions, we have a different opinion on this subject. In fact, we believe it is possible to use network traffic monitoring tools like ntopng to spot many security issues for which an IDS would be too complex/heavy to use (if usable at all). What many of our users are asking for is the ability to highlight scenarios where there is a potential security issue to be analysed in more detail using more security-oriented tools, while keeping a lightweight approach that an IDS cannot offer: an IDS can be very verbose and information-oriented, rather than providing an overall picture of the network status and helping to understand real issues. For instance, is a ping to a host a real problem? We don’t think so, but most IDSs would mark it as a warning for “information disclosure”. In the end you will have your hard drive filled up by many security logs like these, which probably won’t make your network more secure, but will for sure generate many security alerts that will often be ignored.

These presentation slides give you an idea of what you can expect today from ntopng from the security viewpoint. This is just the beginning: it is a revamp of old concepts we prototyped years ago that have a new life in the current ntopng. And this is not all, as in the coming months we plan to make ntopng more powerful and able to go beyond this initial step.

Stay tuned!

PF_RING 6.6 Just Released


After almost one year of development, we are announcing the release of PF_RING 6.6. In this release we have worked on several areas:

  • Introduced nBPF, a software packet-filtering component similar to BPF, that is able to exploit hardware packet filtering capabilities of modern network adapters and transparently deliver these facilities to user-space applications such as nProbe and ntopng, or non-ntop applications such as Wireshark and Suricata.
  • Improved the PF_RING ZC Intel 40 Gbit drivers to transparently give users the ability to use these NICs without having to pay attention to low-level details as with other solutions (e.g. jumbo frames on these NICs are handled in a very complicated way) while still using a NIC-independent library.
  • Added support for Silicom/Fiberblaze NICs (10/40/100 Gbit) that can be transparently used via ZC both in packet mode (process one packet at a time) and in batch mode (process multiple packets at a time, which can greatly accelerate applications such as n2disk).
  • Endace NICs are now natively supported by PF_RING ZC.
  • Accolade and Myricom ZC driver support has been greatly enhanced and updated to support all their latest NICs.
  • Created a Wireshark Extcap module named ntopdump that we have presented at the Sharkfest EU 2016.
  • All the FPGA-based NICs supported by PF_RING ZC now load vendor runtime libraries dynamically, with the advantage that you do not need to link your PF_RING application against these libraries, thus improving portability and reliability across the various runtime library versions.

See the complete changelog for all details:

  • PF_RING Library
    • New pfring_findalldevs/pfring_freealldevs API for listing all interfaces supported by pf_ring
    • New timeline module based on libnpcap for seamlessly extracting traffic from a n2disk dumpset using the pf_ring API
    • Dynamic capture modules loading with dlopen support
    • Improved pfring_set_bpf_filter to set hw rules when supported by the network card thanks to the nBPF engine
  • ZC Library
    • New pfring_zc_set_bpf_filter/pfring_zc_remove_bpf_filter API for setting BPF filters to device queues
    • Fixed pfring_zc_queue_is_full for device queues
    • Flushing SPSC queues when a consumer attaches (RX only)
  • PF_RING-aware Libpcap/Tcpdump
    • Support for extracting traffic from a n2disk dumpset using libpcap
    • tcpdump upgrade to v.4.9.0
  • PF_RING kernel module
    • Support for latest ubuntu and centos stable kernels
    • Support for SCTP and ICMP packet parsing
    • Packet hash improvements
    • Added tunneled IP version to packet metadata
    • Added IP version to sw filters
    • New kernel cluster hash types for tunneled traffic
    • QinQ VLAN parsing
    • Removed deprecated kernel plugins support
    • Promisc fix in case of multiple devices in a single socket
  • Drivers
    • Support for latest ubuntu and centos stable kernels
    • FPGA modules/libraries are now loaded at runtime using dlopen
    • RSS support on Intel i211
    • Jumbo frames support on i40e
    • i40e tx optimisations
    • i40e interrupts fixes in case of RSS
    • Fiberblaze capture module with chunk mode support
    • Exablaze capture module
    • Accolade improvements
    • Endace DAG update and support for streams
    • Myricom ports aggregation fixes, new syntax myri:<port>,<port>
  • nBPF
    • New nBPF filtering engine supporting an extended subset of the BPF syntax (tunneled traffic and l7 protocols are supported)
    • nBPF support for hw filtering on Fiberblaze cards
    • nBPF support for hw filtering on Intel FM10K cards (Silicom PE3100G2DQIR)
    • nBPF support for hw filtering on Exablaze cards
    • nBPF support for hw filtering on Napatech cards and NTPL generation
    • Support for “start <time> and end <time> and <bpf>” when extracting from a n2disk timeline
    • Support for vlan [id], mpls [label], gtp
  • Examples
    • pfcount:
      • ability to list interfaces with -L (-v 1 for more info)
      • ability to dump traffic on PCAP file with -o
    • psend:
      • option to force flush per packet (-F)
      • options to specify src/dst IP for packet forging (-S/-D)
      • option to forge packets on the fly instead of at preprocessing time (-O)
      • option to randomize generated ips sequence (-z)
      • ability to generate IPv6 traffic (-V 6)
      • ability to generate mixed v4 and v6 traffic (-V 0)
      • TCP/UDP checksum when reforging
    • zbalance_ipc
      • option to use hw aggregation when supported by the card (-w)
      • IP-based filtering with ZMQ support for rules injection
  • Wireshark
    • New extcap module ‘ntopdump’ for Wireshark 2.x
  • Misc
    • Improved systemd support (Ubuntu 16)

Introducing n2disk 2.8 with Microburst Detection


Together with PF_RING 6.6, today we also released n2disk 2.8. In this release we introduced support for microburst detection to spot traffic bursts, which is crucial for identifying potential capacity issues and troubleshooting packet loss in network equipment. We also improved our “fast” BPF engine by extending the supported primitives and improving the ability to match tunneled traffic. More tools have been added for working with the dump set, for instance for moving part of the dump set to an external storage, or for deleting PCAP files in a specified time interval.

Below is the complete changelog with all details.

Changelog

  • n2disk (recording)
    • Performance improvements for 40 Gbit packet-to-disk with both Intel and FPGA-based NICs.
    • Support for microburst detection
    • n2disk renamed to n2disk5g, n2disk10g renamed to n2disk, n2disk10g and n2disknt are now symlinks to n2disk
    • Fast BPF support for rules with relative byte match (e.g. udp[9]!=0x0a)
    • Improved tunnels parsing, support for PPTP GRE
    • Implemented –exec-cmd for executing a command when a pcap file has been dumped
    • Improved systemd scripts
    • New –daemon option
    • Maximum supported Napatech segment size moved from 1MB to 4MB
    • Changed -z N limit from “num files – 1” to “num files”, to improve indexing load distribution
    • 64bit counters print fix
    • Support of Silicom/FiberBlaze NICs in burst mode for 10/40 Gbit packet-to-disk.
  • npcapextract (extraction)
    • Fixed extraction with empty filter
    • Fix for -P option
  • Tools
    • New npcapmanage utility to delete pcap files from timeline in a time interval
    • New npcapmove utility to move a pcap file with its index and timeline to a new storage path
    • New myritool utility for Myricom cards
    • Improved nttool utility for Napatech cards, added card S/N, removed ntlib dep (dlopen support)
    • Fixed index header dump v2 in npcapindex

Introducing nScrub: Powerful yet Affordable DDoS Mitigation


ntop has always tried to make the Internet a better place by developing many open-source network monitoring tools and releasing all its software at no cost to non-profits and education. A few years ago, Qurium/VirtualRoad, a Swedish foundation offering secure hosting to independent online news outlets and human rights organisations, contacted us. After years spent mitigating attacks using proprietary appliances and servers running customised Linux kernel code based on netfilter, they had reached the conclusion that those solutions were not affordable, flexible, or fast enough. Their experience with the “scrubbing market” over the years had been full of frustrations: lack of transparency, overpriced solutions, vendor lock-in, poor documentation, expensive support, lack of interest in tracking the attacks… The day they wanted to upgrade their infrastructure to handle 40 Gbps attacks, they were confronted with the sad reality: they could not even dream of affording it.

Since then, they invested their energy in understanding what was needed to build the best traffic scrubber: the magic that identifies and drops attack traffic at very high speeds. The first obvious finding was the need for a technology capable of moving packets between the interfaces of an affordable network adapter at line-speed without using too many CPU cycles. That is when they reached us at ntop and a new adventure started. In a few months we drafted, prototyped, and created the roadmap for a multi-tenant scrubbing system: nScrub.

nScrub is a software-based DDoS mitigation tool based on PF_RING ZC, our flexible packet processing framework, able to operate at 10 Gbps line-rate using commodity hardware (Intel NICs and standard servers). Every packet that reaches nScrub interacts with more than twenty filters and scrubbing algorithms developed to mitigate known Denial of Service attacks against web applications, game servers and DNS servers.

Key features:

  • Transparent bridge (bump-in-the-wire) or routing (BGP diversion) working mode
  • Hardware bypass support
  • Multitenancy, to protect heterogeneous services
  • Historical data
  • Web-based RRD-style historical graphs
  • PCAP dump on request
  • Event-driven scriptable engine
  • Active sessions verification for protocols including TCP and DNS
  • Flexible blacklists and whitelists
  • Firewall-like filtering
  • Anomaly detection based on traffic behavior
  • Pattern matching, HTTP filtering
  • Rate limiting based on source, destination, protocol
  • Plugins support for easy extensibility

Today, which is World Press Freedom Day, we are glad to release nScrub 1.0. All the information for getting started is available on the nScrub page. As usual, nScrub is free for non-profit and educational users.
