
Network Monitoring Deep Dive: Interview with Scott Schweitzer


In early August, Scott Schweitzer interviewed me about network monitoring and packet capture. The conversation was very broad, covering topics ranging from packet capture, network traffic analysis and deep packet inspection, to IoT (Internet of Things) and cybersecurity.

You can hear my view on this market, what we are doing at ntop to tackle new challenges, and what we envisage the (hardware) networking industry should provide developers in terms of new products. After almost 20 years in this industry, looking back at the past 5-10 years I see very few changes besides speed increases, and this is not good news.

You can hear the whole podcast here.

Enjoy!


When Live is not Enough: Connecting ntopng and nProbe via MySQL for Historical Flows Exploration


Using nProbe in combination with ntopng is a common practice. The benefits of this combination are manifold and include:

  • A complete decoupling of monitoring activities (taking place on the nProbe) from visualization tasks (taking place on ntopng);
  • The capability of building distributed deployments where multiple (remote) nProbe instances send monitored data towards one or more ntopng instances for visualization;
  • Comprehensive support for the collection, harmonization and visualization of heterogeneous flow export protocols and technologies, including NetFlow v5/v9, IPFIX (NetFlow v10) and sFlow;
  • Full support for any proprietary technology that sends custom fields over NetFlow v5/v9/v10, with visualization of the data;
  • Harmonization of diverse physical network interfaces and flow export protocols and technologies into a single, clear, JSON format sent over ZMQ to ntopng.

ntopng and nProbe communicate with each other via a publish-subscribe mechanism implemented over ZMQ. The data exchanged contains both interface updates (e.g., the number of bytes and packets monitored) and network flows, obtained by monitoring physical interfaces (NIC cards) or by processing flow export technologies such as NetFlow.

Since the flows sent over ZMQ are only those that are active or recently expired, one has to store and archive them systematically for later access and analysis. Presently, ntopng offers rich historical flow exploration features when it is instructed to archive flows to MySQL (see part 1 and part 2 of the tutorial “Exploring Historical Data Using ntopng”). However, there are cases where MySQL flow export must be done directly by nProbe. Such cases include, but are not limited to:

  • The capability of creating a database column for each nProbe template field — ntopng creates a fixed set of database columns;
  • A MySQL database that is closer to, or more efficiently reached from, nProbe than from ntopng;
  • A low-end device running ntopng that cannot deal with massive/batched database insertions.

In the cases above, it is still desirable to have full access to the ntopng historical flow exploration features. Therefore ntopng must work seamlessly even when operating on top of a database created and used by nProbe for flow export.

Fortunately, this interoperability is accomplished transparently by means of database table views. Just two things are required. The first is to instruct ntopng to connect to the nProbe database using the special mysql-nprobe prefix in the -F option. The second is to make sure nProbe creates the minimum set of database columns required by ntopng, by specifying the macro @NTOPNG@ inside the nProbe template. This macro expands to the following set of fields:

%L7_PROTO %IPV4_SRC_ADDR %IPV4_DST_ADDR %L4_SRC_PORT %L4_DST_PORT %IPV6_SRC_ADDR %IPV6_DST_ADDR %IP_PROTOCOL_VERSION %PROTOCOL %IN_BYTES %IN_PKTS %OUT_BYTES %OUT_PKTS %FIRST_SWITCHED %LAST_SWITCHED %SRC_VLAN

Following is a working example that illustrates ntopng and nProbe configurations. Clearly, database connection parameters (host, user and password, schema name, and table name) must be the same on both sides.

./nprobe -i eno1 -T "@NTOPNG@" --mysql="localhost:ntopng:nf:root:root" --zmq tcp://127.0.0.1:5556 --zmq-probe-mode
./ntopng -i tcp://*:5556c -F "mysql-nprobe;localhost;ntopng;nf;root;root"

Note that when ntopng operates in this mode, it won’t export flows (nobody wants the same flows stored twice in the database). It will just visualize them.

Happy flow hunting!

20 Years of ntop and Beyond


This month marks 20 years since I started the ntop project. Initially it was a hobby project, born from the desire to understand what was really flowing on a network, after having spent 5 years playing with OSI, which was clearly a dead end (whoever used FTAM to download a file and compared it with FTP/NFS or drag-and-drop on a Mac desktop understands what I mean), even for me who had just graduated from university.

My initial idea behind ntop was to create a simple tool able to provide network visibility without having to deal with complicated network protocols (you’re all used to IP, but in the late 90s many other non-IP protocols existed, such as AppleTalk, IPX and SNA, as well as non-Ethernet encapsulations such as Token-Ring and FDDI). This triggered my interest in creating tools able to operate on commodity hardware, simple to use and install. Today it’s probably normal to buy a PC on Amazon, install Linux and run your monitoring tools, but years ago it was not like that.

Since then, many tools have been created. Most of them are home-grown, such as PF_RING and nProbe; others are orphans we adopted, such as nDPI. If you are wondering what the next steps for ntop will be, you won’t have to wait too long, as we will soon introduce two new tools:

  • nDB, a very high-speed index/database for networking data, able to index millions of records per second and store hundreds of billions of records on a single box with sub-second response time (remember that with MySQL-like tools you can insert fewer than 50k records/sec, about two orders of magnitude less, not to mention that once you have millions of records your database becomes very slow), without requiring the typical big-data headaches and costs (data sharding, clusters and distributed systems for storing networking data are not the best answer in terms of complexity, and the trend towards cloud-based systems is just a way to hide all this mess behind a per-service price tag).
  • An embedded, inline ntopng for families and businesses, able not just to monitor but also to enforce network policies, complementing the security features provided by firewalls (which are configurable, but unable to stop your printer from doing BitTorrent or your children from accessing inappropriate or malware sites).

We’ll come to this soon. The message is that after 20 years we’re not tired, but we’re looking at the next thing, not for tomorrow but for the years to come. In the past 5 years we have consolidated many technologies ntop developed previously, and because of this we’re now ready to move forward again.

Thanks to all of you who have been following our activities for a long time, and to those who sent me messages for this anniversary.

PS. We’ll organise a workshop/meetup during Sharkfest EU on Nov 7th, 6 PM. Details will follow, but in the meantime try to be there.

Announcing ntopng and Grafana Integration


This is to announce the release of the ntopng Grafana datasource that you can find on the grafana website. Using this plugin you can create a Grafana dashboard that fetches data from ntopng in a matter of clicks.

To set up the datasource, visit the Grafana Data Sources page and select the green Add a datasource button. Select ntopng as the datasource Type in the page that opens.

The HTTP url must point to a running ntopng instance, at the endpoint /lua/modules/grafana. The Access method must be set to Direct. An example of a full HTTP url, assuming an ntopng instance running on localhost port 3001, is the following:

http://localhost:3001/lua/modules/grafana

Tick Basic Auth if your ntopng instance has authentication enabled and specify a username-password pair in the User and Password fields. The pair must identify an ntopng user. Leave the Basic Auth checkbox unticked if ntopng has no authentication (--disable-login).

Finally, hit the Save and Test button to verify the datasource is working properly. A green Success: Data source is working message appears to confirm the datasource is properly set up.

Supported metrics

Once the datasource is set up, ntopng metrics can be charted in any Grafana dashboard.

Supported metrics are:

  • Interface metrics
  • Host metrics

Metrics that identify an interface are prefixed with interface_, followed by the actual interface name. Similarly, metrics that identify a host are prefixed with host_, followed by the actual host IP address.

Interface and host metrics have a suffix that contains the type of metric (i.e., traffic for traffic rates and totals, or allprotocols for Layer-7 application protocol rates). The type of metric is followed by the unit of measure (i.e., bps for bits per second, pps for packets per second, and bytes).

Interface Metrics

Supported interface metrics are:

  • Traffic rates, in bits and packets per second
  • Traffic totals, both in Bytes and packets
  • Application protocol rates, in bits per second

Host Metrics

Supported host metrics are:

  • Traffic rate in bits per second
  • Traffic total in Bytes
  • Application protocol rates in bits per second.

You’re Invited to the ntop and Wireshark Users Group Meeting


On November 7th we will be organising the ntop meetup during the Sharkfest EU 2017 that will take place in Portugal. You can find all details here.

This year we will be focusing on cybersecurity, IoT and user traffic monitoring, as well as on Wireshark. In fact, during our talk at Sharkfest we won’t have enough time to explain in detail all our activities aimed at turning (or complementing) Wireshark into an effective monitoring tool rather than just a packet dissector.

We welcome all users of our community (attendance of Sharkfest EU is not necessary to participate) to this event, which is totally free of charge and a great place for talking about our common interests.

Hope to see you there!

ntopng Grafana Integration: The Beauty of Data Visualization


Summary

  • Grafana is one of the most widely known platforms for metrics monitoring (and alerting);
  • ntopng version 3.1 natively integrates with Grafana thanks to a datasource plugin which is freely available;
  • This article explains how to install and configure the ntopng datasource plugin, and how to build a dashboard for the visualization of ntopng-generated metrics;
  • A video tutorial is available as well.

Introduction

Grafana is an open platform for analytics and visualization. An extremely well-engineered architecture makes it completely agnostic to the storage where data resides. This means that you can build beautiful dashboards by simultaneously pulling points from data sources such as ntopng, MySQL and InfluxDB, just to name a few. Grafana interacts with tens of different data sources by means of datasource plugins, which provide a standardized way to deliver points to Grafana. ntopng implements one of those datasource plugins to expose metrics of monitored interfaces and hosts, including throughput (bps and pps) and Layer-7 application protocols (e.g., Facebook, YouTube, etc.).

Exposed Metrics

ntopng exposes metrics for monitored interfaces as well as for monitored hosts. Each metric is identifiable with a unique, self-explanatory string. In general, interface metrics are prefixed with the string interface_ while host metrics are prefixed with the string host_. Similarly, a suffix indicates the measurement unit. Specifically, _bps and _pps are used for bit and packet rates (i.e., the number of bits and packets per second), whereas _total_bytes and _total_packets are used for the total number of bytes and packets over time, respectively.

Currently, the supported metrics cover traffic as well as Layer-7 application protocol measurements.

Traffic metrics exposed are:

  • interface_<interface name>_traffic_bps
  • interface_<interface name>_traffic_total_bytes
  • interface_<interface name>_traffic_pps
  • interface_<interface name>_traffic_total_packets
  • host_<host ip>_interface_<interface name>_traffic_bps
  • host_<host ip>_interface_<interface_name>_traffic_total_bytes

Layer-7 application protocol metrics exposed are:

  • interface_<interface_name>_allprotocols_bps
  • host_<host ip>_interface_<interface_name>_allprotocols_bps
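
For instance, substituting a monitored interface named eno1 (the interface used in the dashboard examples later in this article) and a hypothetical host 192.168.1.10 into the patterns above yields metric names such as the following (the names are illustrative; in practice the Grafana query editor autocompletes the metrics actually exposed by your ntopng instance):

interface_eno1_traffic_bps
interface_eno1_traffic_total_bytes
interface_eno1_allprotocols_bps
host_192.168.1.10_interface_eno1_traffic_bps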

To be able to use the aforementioned metrics inside Grafana dashboards, the ntopng datasource plugin must be installed and configured as explained below.

Configuring the ntopng Datasource

Prerequisites

  • A running instance of Grafana version 4 or above;
  • A running instance of ntopng version 3.1 or above.

Grafana and ntopng run on Linux and Windows, either on physical, virtualized or containerized environments. For Grafana installation instructions see Installing Grafana. ntopng can either be built from source, or installed as a package.

Installing the ntopng Datasource Plugin

Installing the ntopng Datasource plugin is as easy as

$ grafana-cli plugins install ntop-ntopng-datasource

Upon successful installation you will receive a confirmation message, and you will have to restart Grafana:

installing ntop-ntopng-datasource @ x.y.z
from url: https://grafana.com/api/plugins/ntop-ntopng-datasource/versions/x.y.z/download

Installed ntop-ntopng-datasource successfully

Restart grafana after installing plugins .

After restarting Grafana, you can connect to its web User Interface (UI) and visit the Plugins page. ntopng will be listed under the datasources tab.

Configuring the ntopng Datasource

A new datasource with type ntopng will be available once the ntopng datasource plugin is installed. Multiple ntopng datasources can be created to connect to several running ntopng instances. The list of configured datasources is available at the Grafana ‘Data Sources’ page. The following image shows two ntopng datasources configured to connect to two different ntopng instances running on separate machines.

Adding a new ntopng datasource is a breeze. Just hit the ‘+ Add datasource’ button inside the Grafana ‘Data Sources’ page. This will open an ‘Edit Data Source’ page that can be used to specify ntopng connection parameters.

To configure the ntopng datasource, select ntopng as the datasource Type and give it a mnemonic Name that will help you identify the datasource connection. The Url in the HTTP settings must point to a running ntopng instance, at the endpoint /lua/modules/grafana. For example, to connect to an ntopng running on host devel on port 3001, you have to use the url http://devel:3001/lua/modules/grafana.

The Access method must be set to direct. Tick Basic Auth if your ntopng instance has authentication enabled and specify a username-password pair in the User and Password fields. The pair must identify an ntopng user. Leave the Basic Auth checkbox unticked if ntopng has no authentication (--disable-login).

Finally, hit the button Save and Test to verify the datasource is working properly. A green message Success: Data source is working will appear to confirm the datasource is properly set up.

The following screenshot highlights the connection to an ntopng instance running on host devel on port 3001.
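
Before hitting Save and Test, a quick command-line check can help rule out connectivity or authentication problems. This is just an illustrative sketch (host, port and the admin:admin credentials are placeholders, and the exact response body depends on the ntopng version); any HTTP response, as opposed to a connection error, indicates that Grafana will be able to reach the endpoint:

curl -u admin:admin -i http://devel:3001/lua/modules/grafana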

 

Building a Dashboard

Once the datasource is properly set up, you can visualize ntopng timeseries in any of your Grafana dashboards. Dashboards are flexible ensembles of panels. Each panel is meant to visualize a single timeseries. Panels are added in any dashboard by clicking on the ‘Add Row’ button that will allow you to choose among the available panel types.

Currently, ntopng provides timeseries that can be used effectively to build ‘Graph’ and ‘Singlestat’ panels.

Adding an Interface Speed Panel

To add an interface speed panel, select ‘Graph’ among the available panel types. A graph panel with random data will be automatically added to the dashboard. Click on the ‘Panel Title’ and select ‘Edit’. A configuration page like the following will appear:

There is a ‘Test data: random walk’ timeseries with random data by default. Drop it by clicking on the bin. To add ntopng metrics select one of the ntopng datasources configured from the ‘Panel Data Source’ dropdown. In the following image, an ntopng datasource named lab-monitor is selected:

Once the datasource is selected, you can click the ‘Add query’ button and start typing a metric name. Autocompletion will automatically show all the available metrics matching the typed text. In the image above, the interface eno1 bps metric is picked among all the available timeseries. As soon as the metric is chosen, the chart is populated. However, as shown below, the chart is still pretty basic and some extra work is needed to configure the y-axis unit of measure as well as the title.

To change the chart title select tab ‘General’ and input the title:

More importantly, to set the unit of measure of the y-axis, select the ‘Axes’ tab and pick ‘bits/sec‘ from the ‘Unit’ dropdown.

The final result is shown in the picture below

Adding an Interface Layer-7 Application Protocols Panel

To add an interface application protocols panel, the instructions above apply. Just make sure to select the interface metric ending in _allprotocols_bps. In addition, as this metric carries more than one timeseries (one per application protocol), it is recommended to stack them by ticking the ‘Stack’ checkbox under the ‘Display’ tab.

The final result will appear similar to the following image

Adding the Interface Average Speed Panel

Using a ‘Singlestat’ panel it is possible to crunch a metric using an aggregation function. To visualize the average speed, you can add a ‘Singlestat’ panel, select the interface traffic timeseries, and configure avg as ‘Stat’ in the ‘Options’ tab, as well as bits/sec in the ‘Unit’.

A Full ntopng Grafana Dashboard

By putting together all the panels introduced above, you can build a complete dashboard as the one shown here

Remember that you can combine panels created with ntopng with panels created from other datasources (e.g., MySQL or InfluxDB). There is no limit to how you can combine panels to create dashboards!

Conclusion

ntopng features a handy datasource plugin that exposes monitored metrics to Grafana. Visualizing ntopng metrics in Grafana will allow you to show ntopng data inside the beautiful Grafana UI, and will give you enough flexibility to mix and match ntopng data with other data sources.

 

Introducing PF_RING 7.0 with Hardware Flow Offload


This is to announce the new PF_RING major release 7.0. In addition to many improvements to the capture modules, driver upgrades and container isolation, the main change of this release is the ability to offload flow processing to the network card (when supported by the underlying hardware).

Flow offload is a great feature for cutting the CPU load of applications that do intensive flow processing, as it makes it possible to let the network card handle activities like flow classification (updating flow statistics) and shunting (discarding or bypassing flows according to the application verdict). This saves CPU for further processing (e.g. DPI), or for running multiple applications on the same box (a NetFlow probe and traffic recording, or an IDS). With flow offload enabled, the capture stream can deliver both raw packets (with metadata including the flow ID) and flow records (in the form of periodic flow stats updates), and a specific flow can be shunted by providing its flow ID.

Flow offload is currently supported by 10/40G Accolade Technology adapters of the ANIC-Ku Series (tested on ANIC-20/40Ku and ANIC-80Ku); however, PF_RING provides a generic API that is hardware agnostic, as always.

Soon we will post news on how to accelerate applications by leveraging flow offload. This not only reduces the CPU load, but also opens up many new opportunities, such as combining flow-based analysis and packet-to-disk on the same box. Those attending Suricon 2017 can hear how Suricata benefits from this new technology to move this IDS to 40/100 Gbit.
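
To give an idea of how an application consumes this, here is a minimal, unofficial sketch built around the new flags listed in the changelog below (the Accolade interface name anic:0 is illustrative, and error handling is reduced to the bare minimum):

#include <stdio.h>
#include <string.h>
#include <pfring.h>

int main(void) {
  int i;
  /* Open the capture device with hardware flow offload enabled */
  pfring *ring = pfring_open("anic:0", 1536 /* snaplen */,
                             PF_RING_PROMISC | PF_RING_FLOW_OFFLOAD);
  if (ring == NULL) return 1;

  pfring_enable_ring(ring);

  for (i = 0; i < 100; i++) {
    u_char *buffer = NULL;
    struct pfring_pkthdr hdr;
    memset(&hdr, 0, sizeof(hdr));

    if (pfring_recv(ring, &buffer, 0, &hdr, 1 /* wait for packet */) > 0) {
      if (hdr.extended_hdr.flags & PKT_FLAGS_FLOW_OFFLOAD_UPDATE) {
        /* The buffer carries a generic_flow_update struct with periodic flow stats */
        printf("Received a flow update\n");
      } else if (hdr.extended_hdr.flags & PKT_FLAGS_FLOW_OFFLOAD_PACKET) {
        /* Raw packet: the flow id computed by the card is reported in pkt_hash */
        printf("Raw packet of %u bytes, flow id %u\n", hdr.len, hdr.extended_hdr.pkt_hash);
      }
    }
  }

  pfring_close(ring);
  return 0;
}

With PF_RING_FLOW_OFFLOAD_NOUPDATES the same code would only see raw packets carrying the flow id, which is enough for taking shunting decisions.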

This is the complete changelog of the 7.0 release:

  • PF_RING Library
    • Flow offload support
    • New PF_RING_FLOW_OFFLOAD pfring_open() flag to enable hw flow offload on supported cards (received buffers are native metadata)
    • New PF_RING_FLOW_OFFLOAD_NOUPDATES pfring_open() flag to disable flow updates with hw flow offload enabled: only standard raw packets with a flow id are received
    • New PKT_FLAGS_FLOW_OFFLOAD_UPDATE packet flag to indicate flow metadata in the received buffer (generic_flow_update struct)
    • New PKT_FLAGS_FLOW_OFFLOAD_PACKET packet flag to indicate raw packet with flow_id in pkt_hash
    • New PKT_FLAGS_FLOW_OFFLOAD_MARKER packet flag to indicate marked raw packet
    • Fixes for ARM systems
  • ZC Library
    • New pfring_zc_set_app_name API
    • PF_RING_ZC_PKT_FLAGS_FLOW_OFFLOAD flag to enable hw flow offload
    • Fixed BPF filters in SPSC queues
    • Fixed hugepages cleanup in case of application dropping privileges
    • Fixed sigbus error on hugepages allocation failure on numa systems
    • Fixed multiple clusters allocation in a single process
  • PF_RING-aware Libpcap/Tcpdump
    • Libpcap update v.1.8.1
    • Tcpdump update v.4.9.2
  • PF_RING Kernel Module
    • Docker/containers namespaces isolation support
    • Fixed capture on Raspberry Pi
    • Implemented support for VLAN filtering based on the interface name (device.VLAN_ID, where an ID of 0 accepts only untagged packets)
    • New cluster types cluster_per_flow_ip_5_tuple/cluster_per_inner_flow_ip_5_tuple to balance 5 tuple with IP traffic, src/dst mac otherwise
    • Fixed hash rule last match, new hash_filtering_rule_stats.inactivity stats
  • PF_RING Capture Modules
    • Accolade flow offload support
    • New hw_filtering_rule type accolade_flow_filter_rule to discard or mark a flow
    • Netcope support
    • New hw_filtering_rule type netcope_flow_filter_rule to discard a flow
    • Improved Fiberblaze support
    • pfring_get_device_clock support
    • Ability to set native filters by passing a BPF string prefixed with ‘fbcard:’
    • Fixed TX memory management
    • Fixed subnet BPF filters
    • Fixed drop counter
    • Fixed capture mode
    • Fixed sockets not enabled
    • FPGA error detection
    • Endace DAG update
    • npcap/timeline module compressed pcap extraction fix
  • Drivers
    • ixgbe-zc driver update v.5.0.4
    • i40e-zc driver update v.2.2.4
  • nBPF
    • Fixed nBPF parser memory leak
  • Examples
    • New pfsend option -L to forge VLAN IDs
    • zbalance_ipc improvements
    • Ability to dump output to log file (-l)
    • Fixed privileges drop (-D)
  • Misc
    • Fixed systemd dependencies, renamed pfring.service to pf_ring.service
    • New /etc/pf_ring/interfaces.conf configuration file for configuring management and capture interfaces

ntop User’s Group Meeting at Sharkfest EU 2017


Those who have not been able to attend our ntop meeting at Sharkfest Europe 2017 can find our presentation slides below

We need your feedback, and we would be glad if our community could give us guidance on the next steps. So please don’t be shy and send us an email about 2018 plans. Thank you!


ntop is Now Operational Again: We Apologise for the Inconvenience


Yesterday a major outage at our service provider took down many European websites, including ntop (web and email). Our services are now operational again. We sincerely apologise for this issue.

PS. As you can guess, it’s time to move to another provider. A downtime of about a day is not reasonable.

Using nDPI to Turn Wireshark Into a Traffic Monitoring Tool

Implementing PF_RING-based Hardware Flow Offload in Suricata


Last month we integrated hardware flow offload in PF_RING 7.0. This week at Suricon 2017, Alfredo presented the integration of hardware flow offload with Suricata and demonstrated that this technology can significantly reduce packet drops and CPU load. Below you can see how both NetFlow traffic analysis and Suricata benefit from this work.

Hardware Flow Offload with Netflow

Hardware Flow Offload with Suricata

Should you be interested in reading the full story, these are the presentation slides. We remind you that the PF_RING source code is available on GitHub, where you can also find pfflow, a demo application that demonstrates flow offload.

Enjoy!

Announcing nDPI 2.2


Today we are glad to release nDPI stable version 2.2. This minor release presents several fixes and adds support for a handful of new protocols. It also features custom application categories, allowing users to create personalized mappings between protocols and categories.

The full list of changes introduced with this release is:

Main New Features
  • Custom protocol categories to allow personalization of protocols-categories mappings
  • DHCP fingerprinting
  • HTTP User Agent discovery
New Supported Protocols and Services
  • ICQ (instant messaging client)
  • YouTube Upload
  • LISP
  • SoundCloud
  • Sony PlayStation
  • Nintendo (switch) gaming protocol
Improvements
  • Windows 10 detection from UA and indentation
  • Determine STUN flows that turn into RTP
  • Fixes for iQIYI and 1kxun
  • Android fingerprint
  • Added DHCP class identifier support

nProbe 8.2 stable is out – A Wink At Next-Gen ASA Firewalls


We are pleased to announce that the new 8.2 release of nProbe is out. This release features full Cisco ASA NetFlow support. ASA firewalls are the industry’s first threat-focused next-generation firewalls, and they export a rich set of information through NetFlow. Being able to collect ASA data using nProbe gives you an advantage over collectors that only interpret standard NetFlow. Collected data can also be sent to ntopng over ZMQ to create a very effective solution for the monitoring and visualization of firewall-generated data.
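
As a minimal sketch (addresses, ports and the ZMQ endpoint are illustrative, and the option names follow the collection examples elsewhere on this blog), an nProbe instance that collects ASA NetFlow on UDP port 2055 and makes it available to ntopng over ZMQ can be started along these lines:

./nprobe -i none -n none --collector-port 2055 --zmq "tcp://*:5556"

ntopng can then attach to that endpoint with an interface such as -i tcp://<nprobe-host>:5556 to visualize the collected ASA flows.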

ZMQ-based data export has been greatly improved in this release, too. ZMQ, a high-performance asynchronous messaging library, has always been used to send collected and monitored data from nProbe to ntopng in a JSON-encoded format. Nonetheless, some peculiarities of the JSON-encoded format were preventing ultra-high throughputs from being reached when exchanging data over ZMQ. This release heavily uses batching and compression to remove any possible bottleneck occurring in ZMQ communications.

nProbe binary packages are available at http://packages.ntop.org/.

The full list of new features and changes present in this release is the following:

Main New Features
  • Support for multiple --zmq endpoints to load-balance exported flows in a round-robin fashion
  • Full support for NetFlow exported by ASA, including firewall events and cumulative counters
  • MySQL database interoperability with ntopng using template -T “@NTOPNG@”
New Options
  • Added --plugin-dir <dir> for loading plugins from the specified directory
Extensions
  • bgpNextHop support
  • sFlow
    • Improved sFlow upscale algorithm and added a heuristic to work around sFlow exporter bugs
    • Fixed throughput calculation and upsampling of sFlow traffic
  • Full systemd support for Debian, Ubuntu, Centos, and Raspbian
  • Fixed wrong flow first/last calculations when collecting IPFIX
  • Added support for flowDurationMillis; fixed a bug to properly handle flowStart/flowEndMillis

Announcing ntopng 3.2 – The First Move Towards Active Network Monitoring


Today we are glad to announce the new 3.2 stable release of ntopng. Among the most important new features available in this release there is, without any doubt, an advanced network device discovery functionality. Historically, ntopng has always been a fully passive monitoring tool. This release aims at complementing the information gathered from purely passive packet capture with precious extra bits of data obtained by actively searching for devices. Network device discovery glues together multiple techniques and heuristics, including ARP pinging, SNMP querying, SSDP discovery and MDNS name resolution. By combining the pieces of information obtained by actively probing network devices, ntopng is not only able to discover them, but also to understand the services they provide as well as the operating systems they run.

This is what the outcome of an active network device discovery looks like in ntopng.

For a detailed explanation of all the techniques and heuristics implemented, we refer the interested reader to this article.

ntopng release 3.2 is also significantly more efficient when it comes to handling big traffic volumes. Indeed, I/O operations have been reduced to a great extent, thus alleviating the pressure on disks and, at the same time, making the software run faster. In addition, all the periodic activities that crunch host and interface traffic into timeseries data are now run by a thread pool that orchestrates their execution in parallel, to fully leverage any modern multi-core system. Together, the reduced I/O and the parallel execution of periodic activities make ntopng noticeably more responsive, also when browsing the web user interface.

As usual, ntopng installation instructions can be found at packages.ntop.org.

The complete list of changes introduced in this release is the following:

New features
  • Support for the official ntopng Grafana datasource plugin
    • Plugin available at: https://grafana.com/plugins/ntop-ntopng-datasource
  • Network devices discovery
    • Discovery of smartphones, laptops, IoT devices, routers, smart TVs, etc
    • Device type and operating system detection
    • ARP scan, SSDP dissection, Multicast DNS (MDNS) resolution
    • DHCP fingerprinting
  • Adds an active flows page to the AS details
  • Bridge mode
    • Enforcement of global per-pool time and byte quotas
    • Support of per-host traffic shapers
    • Added support for banned sites detection with informative splash screen
    • Implement per-host/mac/pool flow drop count
  • nDPI traffic categories and RRDs
  • Implements MySQL database interoperability between ntopng and nProbe
Improvements
  • Flows sent by nProbe over ZMQ
    • Batched, compressed ZMQ flow format to optimize data exchange
    • Use of post-nat src/dst addresses and ports
    • Handles multiple balanced ZMQ endpoints
  • Periodic tasks performed by a thread-pool to optimize cores utilization
  • Hosts and devices are walked in batches to greatly reduce Lua VM memory
  • Full systemd support for Debian, Ubuntu, Centos, and Raspbian
  • Extended sFlow support to include sample packet drops and counter stats in interface views
  • Stacked applications and categories charts for ASes, Networks, etc
Security Fixes
  • More restrictive permissions for created files and directories
  • Fix of a possible read beyond the end of the payload in dissectHTTP

PF_RING and Network Namespaces


Last week we made a couple of presentations at LinuxLab 2017 where we spoke about Containers, focusing on Network Namespaces support in PF_RING, and User and IoT-oriented Network Traffic Monitoring on Embedded Devices.

With the advent of Containers, process isolation has become extremely easy and effective, to the point that ordinary Virtual Machines are being reconsidered. Many ntop users today run traffic monitoring applications in Docker, so it is important to understand how Containers work and how to make the best use of them. Network isolation is provided by Network Namespaces, a native feature of the Linux kernel that virtualizes the network stack. In this talk we looked at what exactly happens under the hood of Network Namespaces, focusing on raw packet capture, and we learnt that even when we are not running containers on Linux, we are still running inside a namespace.
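
For readers who have never looked at namespaces directly, the following minimal illustration uses standard iproute2 commands (nothing ntop-specific) to show that each namespace gets its own, isolated view of the network stack:

# Create a namespace and list its interfaces: only an isolated loopback is visible
ip netns add demo
ip netns exec demo ip link show

# Any capture tool started inside the namespace only sees that namespace's traffic
ip netns exec demo tcpdump -i lo

# Clean up
ip netns del demo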

Those who have not been able to attend this event can find our presentation slides below.

PF_RING_and_Network_Namespaces


Introducing n2disk 3.0


This is to announce n2disk 3.0, which is more than a maintenance release, as it:

  • Consolidates pre-existing functionalities
  • Adds extraction security features that pave the way to GDPR support.
  • Adds flow offload support
  • Simplifies storage management to avoid headaches during the n2disk configuration

During our last meeting at Sharkfest EU we talked about Hardware Flow Offload. In essence, applications running on top of PF_RING and (supported) FPGA adapters are now able to offload flow processing to the network card, which can be programmed to:

  1. Keep flow state, doing (basic) flow classification in hw.
  2. Periodically provide information such as hash, packets, bytes, first/last packet timestamp and TCP flags to the application.
  3. Drop/bypass/prioritize flow packets.

This technology dramatically reduces CPU utilization in applications like our nProbe Cento NetFlow generator, or in IDSs like Suricata. With this new release we have also added Flow Offload support to n2disk, for a better and faster integration with NetFlow applications like nProbe Cento. Thanks to this integration, n2disk is able to record raw data while feeding nProbe Cento with flow updates. Optionally, when nDPI is enabled in nProbe Cento for L7 protocol detection, n2disk can also be instructed to forward raw traffic, using a feedback queue to shunt flow packets as soon as the nDPI engine detects the protocol. All this allows you to do traffic recording and NetFlow generation at high speed on the same box with really low CPU utilization!

 

Those familiar with the n2disk configuration have likely spent some time finding the right dump-set sizing configuration. Until the previous n2disk version, in order to configure the maximum space on disk to be used for PCAP files (n2disk overwrites old files when the maximum data retention is reached), the user was required to set

  • (A) The maximum file size
  • (B) The maximum number of folders containing PCAP files (this is needed to improve the filesystem performance)
  • (C) The maximum number of files per folder.

As a result, the maximum amount of disk space that n2disk could use was A x B x C. This is not really user friendly, and there is also another limitation: A x B x C is the *maximum* disk space that n2disk is able to use; in practice n2disk may create PCAP files smaller than the configured file size (e.g. when the index timeline, which cuts PCAP files into time slots, is enabled), with the result that the dump set usually contains less data than the maximum specified.


With this new n2disk release it is now possible to simply specify the disk space to use, either as an absolute value (MBytes) or as a percentage of the disk size. n2disk tracks disk usage and dynamically computes the number of folders and files to keep on disk in order to match exactly the configured disk utilisation.
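
As a hypothetical before/after comparison (interface name and dump directory are illustrative, and all other options are omitted), the difference boils down to replacing the -m/-n sizing with the new --disk-limit option listed in the changelog below:

# Old approach: disk usage bounded indirectly via max files (-m) and max directories (-n)
n2disk -i eth1 -o /storage/n2disk -m 100 -n 50

# n2disk 3.0: bound disk usage directly, as an absolute value (MBytes) or a percentage
n2disk -i eth1 -o /storage/n2disk --disk-limit 80%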

n2disk 3.0 also introduces PAM support, which allows you to integrate multiple authentication schemes (including LDAP, for instance) for granting traffic retrieval capabilities to selected users/groups. This is yet another step towards implementing measures that meet the data protection principles defined by the GDPR regulation.

This is the complete changelog of the 3.0 release:

  • n2disk
    • Dynamic disk management: new --disk-limit option to specify the max amount of disk space to use (MByte or %), instead of using -m (max number of files) and -n (max number of directories), which is less flexible.
    • Raw packets and flow updates export based on the new PF_RING 7 flow offload support. This also includes a feedback queue for raw packets shunting (when used in combination with nProbe Cento and with DPI enabled for instance).
    • Support for kill -USR1 to close and flush the current pcap in order to make live traffic immediately available
    • Microburst detection now works also in multithreaded capture mode (ZC) and segment mode (FPGA capture)
    • New --reader-threads-queue-len option to configure the queue length in multithreaded capture
    • Microseconds in timeline file names are now always printed with 6 digits
    • Fixed drop percentage stats
    • Fixed threads synchronisation
  • disk2n
    • Fixed nanosecond pcap files replay
  • npcapextract
    • Support for PAM authentication for running extractions
    • New ‘-o -’ option to write to stdout
    • Fixed extraction of huge packets (e.g. captured with gro)
    • Fixed npcap open mode from ‘read/write’ to ‘read only’
    • Fixed extraction on compressed pcaps
  • Tools
    • New npcapdecompress ‘-o -’ option to write to stdout
    • New npcapprintindex option -c to check index sanity
    • New npcaprepair tool for repairing indexes and the timeline
    • npcapmode now creates relative paths
  • Misc
    • Fixed a few corner cases in the init.d scripts
    • Fixed systemd dependencies

Released nBox 2.6 Now Featuring a New Centralised Manager


This is to introduce the new nBox stable release 2.6, which includes many security enhancements, a reworked service management system with full systemd support (available on the latest CentOS/Ubuntu releases), and the new NxN user interface to monitor the status of all ntop applications running on distributed appliances in a single place and to facilitate centralized management.

The NxN manager includes a dashboard where you can add your nBox appliances; it will automatically show all services running on each appliance, including information such as the traffic actually being processed and the disk utilisation. The dashboard also lets you control the applications, with the ability to start or stop each application instance.

If you are running raw traffic recording on multiple appliances, the NxN manager also provides a wizard that guides you through the steps of retrieving traffic from all remote boxes, specifying the time interval and traffic filter only once. The GUI runs the extractions on the remote boxes and provides you with the results in a single centralized place as soon as the data is ready.

Enjoy!

Network Monitoring 101: A Beginner’s Guide to Understanding ntop Tools


The first important step to start with network monitoring is to analyze what we want to monitor and how to deploy the monitoring solution in the existing network.

Here are some important questions to ask ourselves before starting the actual monitoring:

  • Do we need to monitor the entire network or just a specific segment?
  • Do we already have network appliances with network flow export capabilities (e.g. NetFlow/sFlow devices)?
  • Can we use port mirroring of a switch or a network TAP?
  • Where are we deploying our network monitoring appliances to get visibility on the traffic of interest?
  • Do we have NAT routers which possibly hide IP/MAC addresses? Can we place our monitoring appliance before the actual NAT/router?
  • Do we need full L7 application traffic analysis capabilities, or can we go with a simpler port-based approach?

In this article we will see how to deploy ntopng and optionally nProbe to fulfill some common network analysis requirements.

nProbe is a powerful network probe. It supports many standard flow formats such as NetFlow, sFlow and IPFIX. nProbe itself does not provide a Graphical User Interface (GUI); when coupled with ntopng, however, it allows us to monitor traffic and display it in the ntopng GUI.

ntopng is a full-featured network monitoring tool. It provides a web GUI to access accurate monitoring data, with detailed views on active hosts, flows, IP addresses, MAC addresses and Autonomous Systems. It can be used to monitor and report live throughput, network and application latencies, Round Trip Time (RTT), TCP statistics (retransmissions, out-of-order packets, packet loss), and bytes and packets transmitted.

Let’s now review some of the most common network monitoring use cases with ntopng.

Monitoring Netflow/sFlow Traffic

If our network appliances support NetFlow/sFlow export, we can send flow data to a remote server running ntopng. This setup is not appropriate if we need detailed L7 application dissection or per-packet realtime analysis. We will need to set up nProbe as an intermediate flow collector, which in turn will send flows to ntopng. The ntopng dashboard will actually not be “real-time”, as there are some export timeouts involved (both in nProbe and in the NetFlow appliance).

In this example we have a NetFlow-capable router, which we have to configure to export flows to nProbe. From the router admin interface (typically a web GUI), we configure NetFlow export towards the server running nProbe by specifying its IP address as well as a port, say 2055.
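
If the router is configured from a CLI rather than a web GUI, the equivalent configuration on a classic Cisco IOS device would look roughly like the following (a hypothetical example: the nProbe host address 172.16.1.100 and the interface name are illustrative, and the exact commands vary across vendors and firmware versions):

interface GigabitEthernet0/0
 ip flow ingress
 ip flow egress
!
ip flow-export version 9
ip flow-export destination 172.16.1.100 2055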

Now we have to configure nProbe to receive the Netflow data. We do this by creating the file /etc/nprobe/nprobe-none.conf on the nProbe host:

--zmq="tcp://*:5556"
-i=none
-n=none
--collector-port=2055

We are also telling nProbe to send the flows to ntopng. This intermediate step is needed as ntopng does not talk NetFlow, so nProbe acts as a translator.

Now we have to set up ntopng. Let’s install ntopng and configure it to receive flows from nProbe. Let’s modify /etc/ntopng/ntopng.conf:

-i="tcp://127.0.0.1:5556"
--local-networks="172.16.1.0/24,172.16.2.0/24"

We are also telling ntopng which are the local networks to monitor, in this example 172.16.1.0/24 and 172.16.2.0/24. ntopng will mark hosts belonging to those networks as “local”, enabling their historical data to be saved to disk.

After setting up the configuration files, we have to enable and start the system services:

systemctl enable ntopng
systemctl enable nprobe@none
systemctl restart ntopng
systemctl restart nprobe@none

If we have many NetFlow appliances, we can direct all of them to export flows to our single nProbe instance. In ntopng, we can then split the incoming traffic by using Dynamic Interfaces Disaggregation in the ntopng preferences.

Monitoring a Port Mirror/TAP

In this example we have an appliance which mirrors the packets using a SPAN port. With this setup we can perform full L7 packet analysis and get a realtime view of the traffic.

We only need to set up ntopng to listen on the network interface connected to the SPAN port. Ideally, the network interface should not have an IP address, as it should only be used to receive the mirrored traffic.

The ntopng setup is really simple: we only need to tell it to monitor the interface connected to the SPAN port. Supposing the interface is eth1, the corresponding /etc/ntopng/ntopng.conf file will be:

-i=eth1
--local-networks="192.168.1.0/24"

Remember to restart the ntopng service after applying the changes.

Monitoring Multiple Locations

We may need to deploy multiple probes in our network to capture traffic at different points. We can use a single ntopng to gather all the information from the probes. Let’s assume we have two nProbe instances running on two different hosts, one at IP 192.168.10.10 and the other on 192.168.20.20. ntopng runs on a separate host with IP address 1.2.3.4. We assume that the probes read traffic from local SPAN ports, connected on the interface eth0 of each probe.

The first nprobe configuration (/etc/nprobe/nprobe-eth0.conf on host 192.168.10.10) is:

--zmq="tcp://1.2.3.4:5556"
-i=eth0

The second nprobe configuration (/etc/nprobe/nprobe-eth0.conf on host 192.168.20.20) is:

--zmq="tcp://1.2.3.4:5557"
-i=eth0

The ntopng configuration file /etc/ntopng/ntopng.conf will contain:

-i="tcp://*:5556c"
-i="tcp://*:5557c"
--local-networks="192.168.10.0/24,192.168.20.0/24"

We enable and start the nprobe@eth0 service on both probes, then enable and start the ntopng service. The ntopng GUI will show two network interfaces, each one represents one remote probe and its related traffic.

For advanced nProbe and ntopng communication see https://www.ntop.org/nprobe/advanced-flow-collection-with-ntopng-and-nprobe/ .

Introducing nProbe Cento 1.4 with Hardware Flow Offload


This is to announce the new 1.4 stable release of nProbe Cento. The most important feature that comes with this new version is definitely the support for hardware flow offloading, along with various bug fixes and an improved NetFlow template definition.

We recently discussed the benefits of hardware flow offloading in another blog post. Hardware flow offloading alleviates, to a great extent, the pressure put on the CPU by intensive tasks such as classification (associating single packets with flows for accounting and deep packet inspection). Basically, hardware flow offloading means that the network adapter keeps a stateful flow table that is constantly updated to account for every single packet. Periodically, flow updates with aggregated information (e.g., total bytes and packets) are reported to nProbe Cento, which just has to annotate this information for downstream export. In practical terms, this translates into a less-loaded CPU that can be used for other activities, such as raw traffic recording or Intrusion Detection and Intrusion Prevention Systems (IDS/IPS).

Currently, hardware flow offload is supported by 10/40G Accolade Technology adapters of the ANIC-Ku Series (tested on ANIC-20/40Ku and ANIC-80Ku).
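
As a minimal, hypothetical invocation (the PF_RING device name anic:0 for an Accolade card is illustrative, and flow export options are omitted), enabling offload amounts to adding the new --flow-offload switch listed below:

cento -i anic:0 --flow-offload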

The full list of changes shipped with 1.4 stable is:

Main New Features

  • Full support for the new systemd service manager
  • Support for Accolade adapters with hardware flow offloading capabilities
  • Support for multiple aggregated egress queues and devices
  • Egress policies can be applied on a per-egress queue/device basis and can be configured with HTTP REST API calls
  • Handles HUP signals to reload configuration files and policy rules
  • flowStartMilliseconds and flowEndMilliseconds for precise flow timestamping
  • Support for AWS virtual interfaces
  • Added support for RFC 5103 to export reverse counters in IPFIX
  • Support for IPFIX 64-bit counters
  • Added BPF support in ZC
  • Added DNS query type in nDPI
  • Implements VLAN to interface index mapping

New Options

  • --bpf-filter to filter monitored traffic using BPF syntax
  • --flow-offload to enable hardware flow offload on Accolade adapters
  • Implemented IPFIX unidirectional flow support with --uniflow
  • Added --max-socket-tx-buffer to specify the TX buffer size and to slow down export when the TX buffer is > 50% full
  • Non-blocking UDP export is now disabled by default; it can be enabled with --send-dont-wait
  • Added --flow-delay and --count-delay to throttle the flow export rate
  • --template-send-pkts to control the export frequency of flow templates
  • --vlan-iface-map to map VLAN IDs to INPUT/OUTPUT interface ids
  • Implemented --human-readable-tcpflags to dump TCP flags to text files in a human readable format
  • Implemented --sample-rate to perform packet sampling
  • Added the ability to specify the binding between id interfaces and networks with --if-networks
  • Added --iface-id to control INPUT/OUTPUT interface ids in exported flows
  • --csv-separator to control the character that separates columns in generated text files
  • Export flows in JSON format to syslog with --json-to-syslog
  • Added --trace-log to dump traces on a log file

Is your Android phone safe? nDPI will tell you


A few weeks ago I added support for GoogleServices detection in nDPI, and I wanted to test the code with real traffic. For this reason I started to play with a few Android phones, in order to test the code on various OS releases and implementations. This is what I found out. The testbed was very simple: disable 3G/4G, start a packet sniffer application such as tcpdump/Wireshark to dump all traffic, connect the phone to a WiFi hotspot, and wait less than 1 minute without doing anything (no applications started). Then analyze the pcap with nDPI to see what the phone did just by being connected to the WiFi. Below I report the results for two entry-level phones: a Samsung A5 and a Wiko Lenny 3.
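
The analysis step can be reproduced with the ndpiReader test application that ships with nDPI; a hypothetical run on one of the captured traces (the pcap file name is illustrative) looks like this, and produces output similar to what is reported below:

ndpiReader -i samsung_a5.pcap -v 1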

Samsung A5

Detected protocols:
Unknown packets: 24 bytes: 2358 flows: 2
HTTP packets: 26 bytes: 14879 flows: 2
DHCP packets: 2 bytes: 1180 flows: 1
ICMP packets: 2 bytes: 3028 flows: 1
SSL packets: 36 bytes: 5220 flows: 4
Facebook packets: 44 bytes: 5594 flows: 6
Dropbox packets: 10 bytes: 1404 flows: 1
Google packets: 52 bytes: 7490 flows: 10
WhatsApp packets: 2 bytes: 363 flows: 1
Amazon packets: 10 bytes: 784 flows: 3
Telegram packets: 13 bytes: 1336 flows: 2
QUIC packets: 3 bytes: 4176 flows: 2
GoogleServices packets: 18 bytes: 2703 flows: 2

1 TCP 192.168.2.38:46556 <-> 192.168.2.1:80 [proto: 7/HTTP][13 pkts/987 bytes <-> 11 pkts/13766 bytes][Host: 192.168.2.1]
2 TCP 192.168.2.38:35056 <-> 52.210.33.72:5223 [proto: 91/SSL][10 pkts/1138 bytes <-> 7 pkts/2548 bytes][client: samsung.com][server: *.push.samsungosp.com]
3 TCP 192.168.2.38:45021 <-> 216.58.198.4:443 [proto: 91.126/SSL.Google][8 pkts/1745 bytes <-> 8 pkts/1347 bytes][client: www.google.com]
4 ICMP 192.168.2.38:0 <-> 192.168.2.1:0 [proto: 81/ICMP][1 pkts/1514 bytes <-> 1 pkts/1514 bytes]
5 TCP 192.168.2.38:41983 <-> 31.13.86.2:443 [proto: 91.119/SSL.Facebook][13 pkts/1967 bytes <-> 9 pkts/1015 bytes]
6 UDP 216.58.198.1:443 -> 192.168.2.38:54769 [proto: 188/QUIC][2 pkts/2784 bytes -> 0 pkts/0 bytes]
7 TCP 192.168.2.38:59877 <-> 108.177.96.188:5228 [proto: 91.239/SSL.GoogleServices][8 pkts/1554 bytes <-> 8 pkts/952 bytes][client: mtalk.google.com]
8 TCP 192.168.2.38:45494 <-> 31.13.86.34:443 [proto: 91.119/SSL.Facebook][7 pkts/1113 bytes <-> 5 pkts/643 bytes][client: mqtt-mini.facebook.com]
9 TCP 192.168.2.38:53058 <-> 162.125.66.1:80 [proto: 7.121/HTTP.Dropbox][5 pkts/645 bytes <-> 5 pkts/759 bytes][Host: www.dropbox.com]
10 UDP 216.58.198.1:443 -> 192.168.2.38:52545 [proto: 188/QUIC][1 pkts/1392 bytes -> 0 pkts/0 bytes]
11 ICMP 192.168.2.38:0 -> 216.58.198.1:0 [proto: 81.126/ICMP.Google][2 pkts/1180 bytes -> 0 pkts/0 bytes]
12 UDP 192.168.2.1:67 -> 192.168.2.38:68 [proto: 18/DHCP][2 pkts/1180 bytes -> 0 pkts/0 bytes]
13 TCP 192.168.2.38:38150 <-> 172.217.21.3:80 [proto: 7.126/HTTP.Google][5 pkts/670 bytes <-> 5 pkts/440 bytes][Host: connectivitycheck.gstatic.com]
14 TCP 192.168.2.38:33486 <-> 149.154.167.91:443 [proto: 91.185/SSL.Telegram][4 pkts/562 bytes <-> 2 pkts/318 bytes]
15 TCP 192.168.2.38:55213 <-> 172.217.17.46:80 [proto: 7.126/HTTP.Google][5 pkts/515 bytes <-> 3 pkts/289 bytes][Host: clients3.google.com]
16 TCP 136.243.146.196:443 <-> 192.168.2.38:59726 [proto: 91/SSL][8 pkts/714 bytes <-> 1 pkts/60 bytes]
17 TCP 52.210.33.72:5223 <-> 192.168.2.38:35029 [proto: 178/Amazon][3 pkts/352 bytes <-> 3 pkts/180 bytes]
18 TCP 149.154.167.91:443 <-> 192.168.2.38:32860 [proto: 91.185/SSL.Telegram][6 pkts/396 bytes <-> 1 pkts/60 bytes]
19 TCP 185.63.145.1:443 <-> 192.168.2.38:41318 [proto: 91/SSL][3 pkts/229 bytes <-> 3 pkts/180 bytes]
20 TCP 216.58.198.4:443 <-> 192.168.2.38:44774 [proto: 91.126/SSL.Google][5 pkts/330 bytes <-> 1 pkts/60 bytes]
21 UDP 192.168.2.38:25651 <-> 192.168.2.1:53 [proto: 5.142/DNS.WhatsApp][1 pkts/76 bytes <-> 1 pkts/287 bytes][Host: e15.whatsapp.net]
22 TCP 2.23.81.94:443 <-> 192.168.2.38:44761 [proto: 91/SSL][3 pkts/291 bytes <-> 1 pkts/60 bytes]
23 TCP 31.13.86.34:443 <-> 192.168.2.38:45466 [proto: 91.119/SSL.Facebook][2 pkts/163 bytes <-> 2 pkts/120 bytes]
24 UDP 192.168.2.38:14913 <-> 192.168.2.1:53 [proto: 5.119/DNS.Facebook][1 pkts/82 bytes <-> 1 pkts/127 bytes][Host: mqtt-mini.facebook.com]
25 UDP 192.168.2.38:30549 <-> 192.168.2.1:53 [proto: 5.119/DNS.Facebook][1 pkts/82 bytes <-> 1 pkts/122 bytes][Host: edge-mqtt.facebook.com]
26 UDP 192.168.2.38:32514 <-> 192.168.2.1:53 [proto: 5.126/DNS.Google][1 pkts/79 bytes <-> 1 pkts/119 bytes][Host: clients3.google.com]
27 UDP 192.168.2.38:9876 <-> 192.168.2.1:53 [proto: 5.239/DNS.GoogleServices][1 pkts/76 bytes <-> 1 pkts/121 bytes][Host: mtalk.google.com]
28 UDP 192.168.2.38:35465 <-> 192.168.2.1:53 [proto: 5.126/DNS.Google][1 pkts/89 bytes <-> 1 pkts/105 bytes][Host: connectivitycheck.gstatic.com]
29 UDP 192.168.2.38:44543 <-> 192.168.2.1:53 [proto: 5.126/DNS.Google][1 pkts/89 bytes <-> 1 pkts/105 bytes][Host: connectivitycheck.gstatic.com]
30 UDP 192.168.2.38:37248 <-> 8.8.8.8:53 [proto: 5.126/DNS.Google][1 pkts/74 bytes <-> 1 pkts/90 bytes][Host: www.google.com]
31 UDP 192.168.2.38:40550 <-> 8.8.4.4:53 [proto: 5.126/DNS.Google][1 pkts/74 bytes <-> 1 pkts/90 bytes][Host: www.google.com]
32 TCP 31.13.86.2:443 <-> 192.168.2.38:41957 [proto: 91.119/SSL.Facebook][1 pkts/100 bytes <-> 1 pkts/60 bytes]
33 TCP 52.222.146.9:80 <-> 192.168.2.38:52465 [proto: 7.178/HTTP.Amazon][1 pkts/66 bytes <-> 1 pkts/60 bytes]
34 TCP 52.222.149.122:80 <-> 192.168.2.38:36676 [proto: 7.178/HTTP.Amazon][1 pkts/66 bytes <-> 1 pkts/60 bytes]
35 TCP 151.101.114.202:80 <-> 192.168.2.38:57157 [proto: 7/HTTP][1 pkts/66 bytes <-> 1 pkts/60 bytes]

Wiko Lenny

  Unknown              packets: 6             bytes: 540           flows: 2            
  DNS                  packets: 30            bytes: 4348          flows: 15           
  HTTP                 packets: 9             bytes: 582           flows: 2            
  MDNS                 packets: 1             bytes: 439           flows: 1            
  SSDP                 packets: 33            bytes: 11765         flows: 3            
  DHCP                 packets: 5             bytes: 2281          flows: 2            
  QQ                   packets: 2             bytes: 220           flows: 1            
  IGMP                 packets: 1             bytes: 60            flows: 1            
  SSL                  packets: 104           bytes: 31401         flows: 6            
  ICMPV6               packets: 15            bytes: 1354          flows: 7            
  Dropbox              packets: 4             bytes: 2196          flows: 2            
  YouTube              packets: 95            bytes: 43326         flows: 8            
  Google               packets: 448           bytes: 171720        flows: 25           
  Spotify              packets: 2             bytes: 172           flows: 1            
  Amazon               packets: 130           bytes: 45409         flows: 11           
  PlayStore            packets: 432           bytes: 136001        flows: 8            
  GoogleServices       packets: 1681          bytes: 1118643       flows: 30           


Protocol statistics:
  1	TCP 192.168.2.49:52565 <-> 172.217.23.106:443 [proto: 91.239/SSL.GoogleServices][360 pkts/580391 bytes <-> 345 pkts/31695 bytes][client: play.googleapis.com]
  2	TCP 192.168.2.49:33912 <-> 216.58.205.138:443 [proto: 91.239/SSL.GoogleServices][127 pkts/19561 bytes <-> 142 pkts/184398 bytes][client: www.googleapis.com][server: *.googleapis.com]
  3	TCP 192.168.2.49:57865 <-> 172.217.23.106:443 [proto: 91.239/SSL.GoogleServices][129 pkts/86039 bytes <-> 151 pkts/29208 bytes][client: play.googleapis.com][server: *.googleapis.com]
  4	TCP 192.168.2.49:49034 <-> 216.58.198.3:443 [proto: 91.126/SSL.Google][77 pkts/5760 bytes <-> 75 pkts/98578 bytes][client: www.gstatic.com][server: *.google.com]
  5	TCP 192.168.2.49:33654 <-> 216.58.198.46:443 [proto: 91.228/SSL.PlayStore][80 pkts/20527 bytes <-> 96 pkts/34572 bytes][client: android.clients.google.com]
  6	TCP 192.168.2.49:36186 <-> 172.217.21.42:443 [proto: 91.239/SSL.GoogleServices][33 pkts/3444 bytes <-> 45 pkts/51108 bytes][client: playatoms-pa.googleapis.com][server: *.googleapis.com]
  7	TCP 192.168.2.49:50007 <-> 216.58.198.46:443 [proto: 91.228/SSL.PlayStore][31 pkts/18332 bytes <-> 41 pkts/9437 bytes][client: android.clients.google.com]
  8	TCP 192.168.2.49:42811 <-> 172.217.23.106:443 [proto: 91.239/SSL.GoogleServices][16 pkts/19096 bytes <-> 16 pkts/5144 bytes][client: play.googleapis.com][server: *.googleapis.com]
  9	TCP 192.168.2.49:35466 <-> 54.192.2.18:80 [proto: 7.178/HTTP.Amazon][23 pkts/2425 bytes <-> 21 pkts/21223 bytes][Host: api.ntracecloud.com]
  10	TCP 192.168.2.49:36148 <-> 216.58.198.46:443 [proto: 91.228/SSL.PlayStore][28 pkts/10959 bytes <-> 34 pkts/8169 bytes][client: android.clients.google.com]
  11	TCP 192.168.2.49:52066 <-> 216.58.198.46:443 [proto: 91.228/SSL.PlayStore][33 pkts/7237 bytes <-> 30 pkts/11619 bytes][client: android.clients.google.com][server: *.google.com]
  12	TCP 192.168.2.49:36262 <-> 216.58.198.3:443 [proto: 91.126/SSL.Google][14 pkts/1962 bytes <-> 13 pkts/12222 bytes][client: www.gstatic.com]
  13	TCP 192.168.2.49:56772 <-> 216.58.205.138:443 [proto: 91.239/SSL.GoogleServices][24 pkts/4408 bytes <-> 24 pkts/9647 bytes][client: www.googleapis.com][server: *.googleapis.com]
  14	UDP 192.168.2.49:54039 <-> 172.217.21.42:443 [proto: 188.239/QUIC.GoogleServices][13 pkts/8381 bytes <-> 10 pkts/5250 bytes][Host: youtubei.googleapis.com]
  15	TCP 192.168.2.49:60384 <-> 172.217.21.42:443 [proto: 91.239/SSL.GoogleServices][24 pkts/2597 bytes <-> 20 pkts/10078 bytes][client: chromecontentsuggestions-pa.googleapis.com][server: *.googleapis.com]
  16	TCP 192.168.2.49:35527 <-> 216.58.205.138:443 [proto: 91.239/SSL.GoogleServices][10 pkts/10641 bytes <-> 10 pkts/1930 bytes][client: www.googleapis.com]
  17	UDP 192.168.0.254:1025 -> 239.255.255.250:1900 [proto: 12/SSDP][30 pkts/11166 bytes -> 0 pkts/0 bytes]
  18	TCP 192.168.2.49:53793 <-> 64.233.167.81:443 [proto: 91.126/SSL.Google][18 pkts/3262 bytes <-> 17 pkts/7243 bytes][client: 9][server: sandbox.google.com]
  19	TCP 192.168.2.49:49444 <-> 216.58.198.13:443 [proto: 91.126/SSL.Google][21 pkts/3027 bytes <-> 16 pkts/6582 bytes][client: accounts.google.com][server: accounts.google.com]
  20	UDP 192.168.2.49:36491 <-> 172.217.17.238:443 [proto: 188.124/QUIC.YouTube][7 pkts/3698 bytes <-> 6 pkts/4932 bytes][Host: www.youtube.com]
  21	TCP 192.168.2.49:47901 <-> 13.250.83.167:443 [proto: 91/SSL][11 pkts/1873 bytes <-> 9 pkts/6688 bytes][client: s2ssn.toolkits.mobi]
  22	TCP 192.168.2.49:38412 <-> 13.250.83.167:443 [proto: 91/SSL][13 pkts/1673 bytes <-> 10 pkts/6755 bytes][client: s2ssn.toolkits.mobi]
  23	TCP 192.168.2.49:55740 <-> 216.58.198.42:443 [proto: 91.239/SSL.GoogleServices][19 pkts/2219 bytes <-> 14 pkts/5806 bytes][client: datasaver.googleapis.com][server: *.googleapis.com]
  24	UDP 192.168.2.49:59432 <-> 172.217.21.42:443 [proto: 188.239/QUIC.GoogleServices][7 pkts/3304 bytes <-> 5 pkts/4482 bytes][Host: youtubei.googleapis.com]
  25	UDP 192.168.2.49:34223 <-> 172.217.21.110:443 [proto: 188.124/QUIC.YouTube][7 pkts/3296 bytes <-> 5 pkts/4482 bytes][Host: i.ytimg.com]
  26	UDP 192.168.2.49:51529 <-> 172.217.21.110:443 [proto: 188.124/QUIC.YouTube][7 pkts/3296 bytes <-> 5 pkts/4482 bytes][Host: i.ytimg.com]
  27	TCP 192.168.2.49:57348 <-> 172.217.22.238:443 [proto: 91.228/SSL.PlayStore][13 pkts/1877 bytes <-> 11 pkts/5900 bytes][client: android.clients.google.com][server: *.google.com]
  28	TCP 192.168.2.49:35346 <-> 216.58.198.4:443 [proto: 91.126/SSL.Google][16 pkts/2296 bytes <-> 15 pkts/5237 bytes][client: www.google.com][server: www.google.com]
  29	TCP 192.168.2.49:45081 <-> 216.58.205.138:443 [proto: 91.239/SSL.GoogleServices][15 pkts/2187 bytes <-> 12 pkts/5337 bytes][client: www.googleapis.com][server: *.googleapis.com]
  30	TCP 192.168.2.49:41528 <-> 52.74.157.239:443 [proto: 91/SSL][11 pkts/1346 bytes <-> 8 pkts/6174 bytes][client: pks.a.mobimagic.com][server: a.mobimagic.com]
  31	TCP 192.168.2.49:37590 <-> 216.58.205.138:443 [proto: 91.239/SSL.GoogleServices][10 pkts/2199 bytes <-> 9 pkts/5060 bytes][client: www.googleapis.com][server: *.googleapis.com]
  32	TCP 192.168.2.49:52510 <-> 216.58.198.46:443 [proto: 91.228/SSL.PlayStore][16 pkts/2987 bytes <-> 15 pkts/3729 bytes][client: android.clients.google.com]
  33	TCP 192.168.2.49:41381 <-> 172.217.17.238:443 [proto: 91.124/SSL.YouTube][10 pkts/952 bytes <-> 8 pkts/5237 bytes][client: www.youtube.com][server: *.google.com]
  34	TCP 192.168.2.49:40804 <-> 172.217.21.110:443 [proto: 91.124/SSL.YouTube][10 pkts/948 bytes <-> 8 pkts/5238 bytes][client: i.ytimg.com][server: *.google.com]
  35	TCP 192.168.2.49:40814 <-> 172.217.21.110:443 [proto: 91.124/SSL.YouTube][10 pkts/948 bytes <-> 8 pkts/5238 bytes][client: i.ytimg.com][server: *.google.com]
  36	TCP 192.168.2.49:51964 <-> 172.217.23.66:443 [proto: 91.126/SSL.Google][11 pkts/1490 bytes <-> 7 pkts/4166 bytes][client: www.googleadservices.com][server: www.googleadservices.com]
  37	TCP 192.168.2.49:60891 <-> 172.217.17.226:443 [proto: 91.126/SSL.Google][10 pkts/1424 bytes <-> 8 pkts/4232 bytes][client: www.googleadservices.com][server: www.googleadservices.com]
  38	TCP 192.168.2.49:58471 <-> 172.217.17.234:443 [proto: 91.239/SSL.GoogleServices][10 pkts/961 bytes <-> 7 pkts/4298 bytes][client: translate.googleapis.com][server: *.googleapis.com]
  39	TCP 192.168.2.49:60391 <-> 172.217.21.42:443 [proto: 91.239/SSL.GoogleServices][9 pkts/894 bytes <-> 7 pkts/4299 bytes][client: youtubei.googleapis.com][server: *.googleapis.com]
  40	TCP 192.168.2.49:60395 <-> 172.217.21.42:443 [proto: 91.239/SSL.GoogleServices][9 pkts/894 bytes <-> 7 pkts/4299 bytes][client: youtubei.googleapis.com][server: *.googleapis.com]
  41	TCP 192.168.2.49:60388 <-> 172.217.21.42:443 [proto: 91.239/SSL.GoogleServices][9 pkts/894 bytes <-> 7 pkts/4297 bytes][client: youtubei.googleapis.com][server: *.googleapis.com]
  42	TCP 192.168.2.49:56231 <-> 216.58.213.228:443 [proto: 91.126/SSL.Google][12 pkts/2468 bytes <-> 13 pkts/2547 bytes][client: www.google.com]
  43	TCP 192.168.2.49:40628 <-> 35.156.170.184:80 [proto: 7.178/HTTP.Amazon][7 pkts/1319 bytes <-> 5 pkts/2979 bytes][Host: setting.rayjump.com]
  44	TCP 192.168.2.49:57549 <-> 35.158.23.155:80 [proto: 7.178/HTTP.Amazon][7 pkts/1319 bytes <-> 5 pkts/2979 bytes][Host: setting.rayjump.com]
  45	TCP 192.168.2.49:47091 <-> 34.209.7.180:80 [proto: 7.178/HTTP.Amazon][6 pkts/1718 bytes <-> 5 pkts/1719 bytes][Host: strategy.lmobi.net]
  46	TCP 192.168.2.49:52468 <-> 34.209.7.180:80 [proto: 7.178/HTTP.Amazon][6 pkts/1718 bytes <-> 5 pkts/1719 bytes][Host: strategy.lmobi.net]
  47	TCP 192.168.2.49:41006 <-> 14.215.138.67:443 [proto: 91/SSL][11 pkts/2577 bytes <-> 8 pkts/701 bytes]
  48	TCP 192.168.2.49:60319 <-> 13.250.83.167:443 [proto: 91/SSL][7 pkts/1534 bytes <-> 7 pkts/1091 bytes][client: s2ssn.toolkits.mobi]
  49	TCP 192.168.2.49:41343 <-> 64.233.166.114:80 [proto: 7.126/HTTP.Google][11 pkts/855 bytes <-> 13 pkts/1360 bytes][Host: check.googlezip.net]
  50	TCP 192.168.2.49:49508 <-> 35.158.23.155:80 [proto: 7.178/HTTP.Amazon][6 pkts/1261 bytes <-> 4 pkts/926 bytes][Host: setting.rayjump.com]
  51	TCP 192.168.2.49:51382 <-> 35.156.170.184:80 [proto: 7.178/HTTP.Amazon][6 pkts/1261 bytes <-> 4 pkts/926 bytes][Host: setting.rayjump.com]
  52	TCP 192.168.2.49:41344 <-> 64.233.166.114:80 [proto: 7.126/HTTP.Google][10 pkts/797 bytes <-> 6 pkts/890 bytes][Host: check.googlezip.net]
  53	UDP 192.168.2.1:67 -> 192.168.2.49:68 [proto: 18/DHCP][2 pkts/1180 bytes -> 0 pkts/0 bytes]
  54	UDP 0.0.0.0:68 -> 255.255.255.255:67 [proto: 18/DHCP][3 pkts/1101 bytes -> 0 pkts/0 bytes][Host: android-6f3c341a80a91fd2]
  55	UDP 192.168.2.20:17500 -> 192.168.2.255:17500 [proto: 121/Dropbox][2 pkts/1098 bytes -> 0 pkts/0 bytes]
  56	UDP 192.168.2.20:17500 -> 255.255.255.255:17500 [proto: 121/Dropbox][2 pkts/1098 bytes -> 0 pkts/0 bytes]
  57	TCP 192.168.2.49:59918 <-> 34.253.50.28:80 [proto: 7.178/HTTP.Amazon][5 pkts/565 bytes <-> 4 pkts/458 bytes][Host: wp.360overseas.com]
  58	TCP 192.168.2.49:42715 <-> 183.61.51.77:443 [proto: 91/SSL][5 pkts/588 bytes <-> 4 pkts/401 bytes]
  59	TCP 192.168.2.49:41345 <-> 64.233.166.114:80 [proto: 7.126/HTTP.Google][7 pkts/478 bytes <-> 4 pkts/288 bytes]
  60	TCP 192.168.2.49:56174 <-> 172.217.17.227:80 [proto: 7.126/HTTP.Google][4 pkts/457 bytes <-> 3 pkts/289 bytes][Host: connectivitycheck.gstatic.com]
  61	TCP 192.168.2.49:60385 <-> 172.217.21.42:443 [proto: 91.126/SSL.Google][5 pkts/338 bytes <-> 3 pkts/214 bytes]
  62	TCP 216.58.198.46:443 <-> 192.168.2.49:55897 [proto: 91.126/SSL.Google][7 pkts/462 bytes <-> 1 pkts/60 bytes]
  63	ICMPV6 [fe80::b2a2:e7ff:fed4:53eb]:0 <-> [fd00::5e49:79ff:fe75:4e6a]:0 [proto: 102/ICMPV6][3 pkts/258 bytes <-> 3 pkts/234 bytes]
  64	TCP 192.168.2.49:60392 <-> 172.217.21.42:443 [proto: 91.126/SSL.Google][4 pkts/272 bytes <-> 3 pkts/206 bytes]
  65	UDP 192.168.2.20:5353 -> 224.0.0.251:5353 [proto: 8/MDNS][1 pkts/439 bytes -> 0 pkts/0 bytes]
  66	UDP 192.168.2.20:52355 -> 239.255.255.250:1900 [proto: 12/SSDP][2 pkts/432 bytes -> 0 pkts/0 bytes]
  67	TCP 13.229.191.253:443 <-> 192.168.2.49:37442 [proto: 91.178/SSL.Amazon][4 pkts/357 bytes <-> 1 pkts/60 bytes]
  68	TCP 192.168.2.49:40808 <-> 172.217.21.110:443 [proto: 91.126/SSL.Google][4 pkts/272 bytes <-> 2 pkts/140 bytes]
  69	UDP [fd00::b039:4ad6:2420:d62b]:31386 <-> [fd00::5e49:79ff:fe75:4e6a]:53 [proto: 5/DNS][1 pkts/99 bytes <-> 1 pkts/298 bytes][Host: api.ntracecloud.com]
  70	UDP [fd00::b039:4ad6:2420:d62b]:29094 <-> [fd00::5e49:79ff:fe75:4e6a]:53 [proto: 5.239/DNS.GoogleServices][1 pkts/107 bytes <-> 1 pkts/269 bytes][Host: playatoms-pa.googleapis.com]
  71	UDP [fd00::b039:4ad6:2420:d62b]:38428 <-> [fd00::5e49:79ff:fe75:4e6a]:53 [proto: 5.239/DNS.GoogleServices][1 pkts/104 bytes <-> 1 pkts/266 bytes][Host: datasaver.googleapis.com]
  72	UDP [fd00::b039:4ad6:2420:d62b]:18910 <-> [fd00::5e49:79ff:fe75:4e6a]:53 [proto: 5.239/DNS.GoogleServices][1 pkts/99 bytes <-> 1 pkts/261 bytes][Host: play.googleapis.com]
  73	UDP [fd00::b039:4ad6:2420:d62b]:56350 <-> [fd00::5e49:79ff:fe75:4e6a]:53 [proto: 5.239/DNS.GoogleServices][1 pkts/98 bytes <-> 1 pkts/260 bytes][Host: www.googleapis.com]
  74	TCP 13.229.191.253:443 <-> 192.168.2.49:42818 [proto: 91.178/SSL.Amazon][3 pkts/291 bytes <-> 1 pkts/60 bytes]
  75	UDP 192.168.2.49:35080 <-> 192.168.2.1:53 [proto: 5.239/DNS.GoogleServices][1 pkts/102 bytes <-> 1 pkts/248 bytes][Host: chromecontentsuggestions-pa.googleapis.com]
  76	UDP [fd00::b039:4ad6:2420:d62b]:52614 <-> [fd00::5e49:79ff:fe75:4e6a]:53 [proto: 5.228/DNS.PlayStore][1 pkts/106 bytes <-> 1 pkts/242 bytes][Host: android.clients.google.com]
  77	UDP 192.168.2.49:59979 <-> 192.168.2.1:53 [proto: 5/DNS][1 pkts/79 bytes <-> 1 pkts/251 bytes][Host: s2ssn.toolkits.mobi]
  78	UDP 192.168.2.49:65069 <-> 192.168.2.1:53 [proto: 5/DNS][1 pkts/79 bytes <-> 1 pkts/251 bytes][Host: s2ssn.toolkits.mobi]
  79	UDP [fd00::b039:4ad6:2420:d62b]:1066 <-> [fd00::5e49:79ff:fe75:4e6a]:53 [proto: 5/DNS][1 pkts/99 bytes <-> 1 pkts/231 bytes][Host: setting.rayjump.com]
  80	UDP [fd00::b039:4ad6:2420:d62b]:9270 <-> [fd00::5e49:79ff:fe75:4e6a]:53 [proto: 5/DNS][1 pkts/99 bytes <-> 1 pkts/231 bytes][Host: setting.rayjump.com]
  81	UDP [fd00::b039:4ad6:2420:d62b]:28384 <-> [fd00::5e49:79ff:fe75:4e6a]:53 [proto: 5/DNS][1 pkts/99 bytes <-> 1 pkts/231 bytes][Host: setting.rayjump.com]
  82	UDP [fd00::b039:4ad6:2420:d62b]:27354 <-> [fd00::5e49:79ff:fe75:4e6a]:53 [proto: 5/DNS][1 pkts/99 bytes <-> 1 pkts/227 bytes][Host: pks.a.mobimagic.com]
  83	TCP 14.17.43.118:80 <-> 192.168.2.49:42429 [proto: 7/HTTP][4 pkts/264 bytes <-> 1 pkts/60 bytes]
  84	UDP 192.168.2.49:15073 <-> 192.168.2.1:53 [proto: 5.239/DNS.GoogleServices][1 pkts/87 bytes <-> 1 pkts/233 bytes][Host: playatoms-pa.googleapis.com]
  85	ICMPV6 [fe80::5e49:79ff:fe75:4e6a]:0 -> [ff02::1]:0 [proto: 102/ICMPV6][2 pkts/316 bytes -> 0 pkts/0 bytes]
  86	UDP [fd00::b039:4ad6:2420:d62b]:16924 <-> [fd00::5e49:79ff:fe75:4e6a]:53 [proto: 5.239/DNS.GoogleServices][1 pkts/134 bytes <-> 1 pkts/181 bytes][Host: phonedeviceverification-pa-prod.sandbox.googleapis.com]
  87	UDP 192.168.2.49:51685 <-> 8.8.8.8:53 [proto: 5.239/DNS.GoogleServices][1 pkts/84 bytes <-> 1 pkts/230 bytes][Host: datasaver.googleapis.com]
  88	UDP 192.168.2.49:58121 <-> 192.168.2.1:53 [proto: 5.239/DNS.GoogleServices][1 pkts/84 bytes <-> 1 pkts/230 bytes][Host: datasaver.googleapis.com]
  89	UDP 192.168.2.49:50330 <-> 192.168.2.1:53 [proto: 5.239/DNS.GoogleServices][1 pkts/83 bytes <-> 1 pkts/229 bytes][Host: youtubei.googleapis.com]
  90	UDP 192.168.2.49:53840 <-> 192.168.2.1:53 [proto: 5.239/DNS.GoogleServices][1 pkts/83 bytes <-> 1 pkts/229 bytes][Host: youtubei.googleapis.com]
  91	UDP 192.168.2.49:57173 <-> 8.8.8.8:53 [proto: 5.239/DNS.GoogleServices][1 pkts/83 bytes <-> 1 pkts/229 bytes][Host: youtubei.googleapis.com]
  92	UDP 192.168.2.49:35247 <-> 192.168.2.1:53 [proto: 5.228/DNS.PlayStore][1 pkts/86 bytes <-> 1 pkts/222 bytes][Host: android.clients.google.com]
  93	UDP 192.168.2.49:51185 <-> 192.168.2.1:53 [proto: 5.124/DNS.YouTube][1 pkts/75 bytes <-> 1 pkts/221 bytes][Host: www.youtube.com]
  94	UDP 192.168.2.49:36133 <-> 192.168.2.1:53 [proto: 5.124/DNS.YouTube][1 pkts/71 bytes <-> 1 pkts/212 bytes][Host: i.ytimg.com]
  95	UDP [fd00::b039:4ad6:2420:d62b]:34377 <-> [fd00::5e49:79ff:fe75:4e6a]:53 [proto: 5/DNS][1 pkts/94 bytes <-> 1 pkts/169 bytes][Host: xvlczajjgoxwaw]
  96	UDP [fd00::b039:4ad6:2420:d62b]:20197 <-> [fd00::5e49:79ff:fe75:4e6a]:53 [proto: 5.126/DNS.Google][1 pkts/104 bytes <-> 1 pkts/158 bytes][Host: www.googleadservices.com]
  97	TCP 14.17.43.118:80 <-> 192.168.2.49:44735 [proto: 7/HTTP][3 pkts/198 bytes <-> 1 pkts/60 bytes]
  98	UDP [fd00::b039:4ad6:2420:d62b]:58378 <-> [fd00::5e49:79ff:fe75:4e6a]:53 [proto: 5/DNS][1 pkts/91 bytes <-> 1 pkts/166 bytes][Host: ltzilbhvwhv]
  99	UDP [fd00::b039:4ad6:2420:d62b]:56108 <-> [fd00::5e49:79ff:fe75:4e6a]:53 [proto: 5/DNS][1 pkts/88 bytes <-> 1 pkts/163 bytes][Host: aporeczc]
  100	UDP [fd00::b039:4ad6:2420:d62b]:36288 <-> [fd00::5e49:79ff:fe75:4e6a]:53 [proto: 5/DNS][1 pkts/104 bytes <-> 1 pkts/146 bytes][Host: xvlczajjgoxwaw.fritz.box]
  101	UDP [fd00::b039:4ad6:2420:d62b]:3062 <-> [fd00::5e49:79ff:fe75:4e6a]:53 [proto: 5/DNS][1 pkts/98 bytes <-> 1 pkts/146 bytes][Host: strategy.lmobi.net]
  102	UDP [fd00::b039:4ad6:2420:d62b]:42097 <-> [fd00::5e49:79ff:fe75:4e6a]:53 [proto: 5/DNS][1 pkts/101 bytes <-> 1 pkts/143 bytes][Host: ltzilbhvwhv.fritz.box]
  103	UDP [fd00::b039:4ad6:2420:d62b]:60852 <-> [fd00::5e49:79ff:fe75:4e6a]:53 [proto: 5/DNS][1 pkts/98 bytes <-> 1 pkts/140 bytes][Host: aporeczc.fritz.box]
  104	UDP [fd00::b039:4ad6:2420:d62b]:28941 <-> [fd00::5e49:79ff:fe75:4e6a]:53 [proto: 5/DNS][1 pkts/98 bytes <-> 1 pkts/130 bytes][Host: wp.360overseas.com]
  105	UDP 192.168.2.49:53682 <-> 192.168.2.1:53 [proto: 5.126/DNS.Google][1 pkts/84 bytes <-> 1 pkts/138 bytes][Host: www.googleadservices.com]
  106	UDP [fd00::b039:4ad6:2420:d62b]:1123 <-> [fd00::5e49:79ff:fe75:4e6a]:53 [proto: 5.48/DNS.QQ][1 pkts/94 bytes <-> 1 pkts/126 bytes][Host: mazu.3g.qq.com]
  107	UDP [fd00::b039:4ad6:2420:d62b]:50361 <-> [fd00::5e49:79ff:fe75:4e6a]:53 [proto: 5.126/DNS.Google][1 pkts/99 bytes <-> 1 pkts/115 bytes][Host: accounts.google.com]
  108	UDP 192.168.2.49:64356 <-> 192.168.2.1:53 [proto: 5.126/DNS.Google][1 pkts/89 bytes <-> 1 pkts/117 bytes][Host: connectivitycheck.gstatic.com]
  109	UDP [fd00::b039:4ad6:2420:d62b]:21466 <-> [fd00::5e49:79ff:fe75:4e6a]:53 [proto: 5.126/DNS.Google][1 pkts/95 bytes <-> 1 pkts/111 bytes][Host: www.gstatic.com]
  110	UDP [fd00::b039:4ad6:2420:d62b]:1325 <-> [fd00::5e49:79ff:fe75:4e6a]:53 [proto: 5.126/DNS.Google][1 pkts/94 bytes <-> 1 pkts/110 bytes][Host: www.google.com]
  111	UDP 192.168.2.49:48336 <-> 192.168.2.1:53 [proto: 5.126/DNS.Google][1 pkts/89 bytes <-> 1 pkts/105 bytes][Host: connectivitycheck.gstatic.com]
  112	UDP 192.168.2.49:39859 <-> 192.168.2.1:53 [proto: 5.239/DNS.GoogleServices][1 pkts/84 bytes <-> 1 pkts/100 bytes][Host: translate.googleapis.com]
  113	UDP 192.168.2.49:60478 <-> 192.168.2.1:53 [proto: 5.126/DNS.Google][1 pkts/79 bytes <-> 1 pkts/95 bytes][Host: check.googlezip.net]
  114	UDP 192.168.2.20:57621 -> 192.168.2.255:57621 [proto: 156/Spotify][2 pkts/172 bytes -> 0 pkts/0 bytes]
  115	UDP 192.168.2.20:49606 -> 239.255.255.250:1900 [proto: 12/SSDP][1 pkts/167 bytes -> 0 pkts/0 bytes]
  116	UDP 192.168.2.49:48155 <-> 192.168.2.1:53 [proto: 5.126/DNS.Google][1 pkts/74 bytes <-> 1 pkts/90 bytes][Host: www.google.com]
  117	ICMPV6 [::]:0 -> [ff02::1:ffd4:53eb]:0 [proto: 102/ICMPV6][2 pkts/156 bytes -> 0 pkts/0 bytes]
  118	ICMPV6 [fe80::b2a2:e7ff:fed4:53eb]:0 -> [ff02::2]:0 [proto: 102/ICMPV6][2 pkts/140 bytes -> 0 pkts/0 bytes]
  119	TCP 54.230.0.218:80 <-> 192.168.2.49:47761 [proto: 7.178/HTTP.Amazon][1 pkts/66 bytes <-> 1 pkts/60 bytes]
  120	ICMPV6 [fd00::5e49:79ff:fe75:4e6a]:0 -> [fd00::b039:4ad6:2420:d62b]:0 [proto: 102/ICMPV6][1 pkts/86 bytes -> 0 pkts/0 bytes]
  121	ICMPV6 [fd00::b039:4ad6:2420:d62b]:0 -> [ff02::1:ff75:4e6a]:0 [proto: 102/ICMPV6][1 pkts/86 bytes -> 0 pkts/0 bytes]
  122	ICMPV6 [::]:0 -> [ff02::1:ff20:d62b]:0 [proto: 102/ICMPV6][1 pkts/78 bytes -> 0 pkts/0 bytes]
  123	IGMP 192.168.2.1:0 -> 224.0.0.1:0 [proto: 82/IGMP][1 pkts/60 bytes -> 0 pkts/0 bytes]

As you can see, the results are quite different. The Samsung phone is essentially behaving as I would expect:

  • The phone checked Internet connectivity (connectivitycheck.gstatic.com)
  • The installed apps connected home (e.g. Facebook, Telegram, WhatsApp)
  • The phone connected to Samsung push services (push.samsungosp.com)

So in essence this is expected behaviour, with nothing suspicious going on.

The Wiko phone, instead, is a totally different story (a short sketch for enumerating the contacted domains follows this list):

  • The apps installed by the manufacturer connected home (e.g. 360overseas.com)
  • Some analytics information was shared (ssl.google-analytics.com)
  • The phone connected a few times to the CPU manufacturer, probably to check for updates (e.g. mepodownload.mediatek.com)
  • The phone, even though it was in Europe, started to connect to Chinese websites to check weather, location or other information (weather.jstinno.com, loc.map.baidu.com). Note that this was a stock phone with no Baidu account whatsoever.
  • Even the time was checked against Asian NTP servers (asia.pool.ntp.org).
  • The phone connected to other unknown sites (pmir.3g.qq.com, t1.hshh.org, pks.a.mobimagic.com), for reasons I cannot explain.
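
For completeness, here is a minimal sketch (not part of the original analysis) of how the list of domains contacted by the phone can be enumerated from the capture. It assumes Scapy is available and uses wiko.pcap as a purely hypothetical file name:

# Minimal sketch: list the DNS query names seen in a capture, to answer
# "who does this phone talk to?" at a glance. "wiko.pcap" is a placeholder.
from scapy.all import rdpcap, DNSQR

names = set()
for pkt in rdpcap("wiko.pcap"):
    if pkt.haslayer(DNSQR):
        # qname is a bytes field with a trailing dot (e.g. b"loc.map.baidu.com.")
        names.add(pkt[DNSQR].qname.decode(errors="replace").rstrip("."))

for name in sorted(names):
    print(name)

Of course DNS only shows the names being resolved; pairing it with the nDPI flow list above gives the full picture of which hosts were actually contacted.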

Summary

Although this report cannot be considered exhaustive, the conclusion is that not all Android phones are alike. While the Samsung is a reasonable device that behaves as I would expect, the Wiko phone does things I would not have anticipated. Beyond the questionable practice of using the owner's data plan to contact sites the user never requested, this phone is probably leaking some information. For instance, what is this connection to loc.map.baidu.com doing?

POST /offline_loc HTTP/1.1
Content-Type: application/x-www-form-urlencoded; charset=utf-8
Accept-Charset: UTF-8
Accept-Encoding: gzip
Host: loc.map.baidu.com
User-Agent: Dalvik/2.1.0 (Linux; U; Android 6.0; LENNY3 Build/MRA58K)
Connection: Keep-Alive
Content-Length: 137

req=uOup7PD47aPjrPD7htHUjZicmpmc4svrsbLDtbqw5JeGiqj_rN3fpITVjdGcgJOYiZGYx6m348-Kl5vEzc7ala64oqno6P_os_fnoPO0vrWtsK6p_D4frHb.|tp=3&qt=conf

HTTP/1.1 200 OK
Cache-Control: max-age=86400
Content-Encoding: gzip
Content-Length: 39
Content-Type: text/plain
Date: Tue, 16 Jan 2018 14:07:17 GMT
Expires: Wed, 17 Jan 2018 14:07:18 GMT
Http_x_bd_logid64: 14237345982061197261
P3p: CP=" OTI DSP COR IVA OUR IND COM "
Server: nginx
Set-Cookie: BAIDUID=46A9A8928A9A7D17BF9672DB0DC1D421:FG=1; max-age=31536000; expires=Wed, 16-Jan-19 14:07:17 GMT; domain=.baidu.com; path=/; version=1
Vary: Accept-Encoding

{"ofl":0,"ver":"1"}

Bottom line: if you plan to purchase a new Android phone, you had better look at security and privacy rather than limiting yourself to price and features.

