Channel: ntop

Monitoring IoT and Fog Computing: Challenges and Solutions


Since last year we have been designing a solution for monitoring IoT and Fog computing devices. This is becoming a hot topic, since these devices are more and more often used to mount large Internet attacks and because our privacy can be affected by this new computing trend. While we do not have a complete solution ready, we have some preliminary results and lessons learnt that are worth sharing with our community.

This is a presentation we created on this subject, which has been shown at the Wurth-Phoenix Roadshow (by the way, if you are in the Frankfurt area at the end of this month you will have the chance to see this presentation live and meet us).

Enjoy!


Monitoring Network Devices with ntopng and SNMP


Summary

  • SNMP is widely used for network monitoring.
  • Being able to remotely monitor network devices is fundamental to having a clear picture of present and past network health.
  • ntopng systematically interacts with SNMP devices to provide historical and real-time insights on the network.

ntopng SNMP support

Simple Network Management Protocol (SNMP) is one of the de-facto standards used to remotely monitor network devices such as routers, switches and servers, just to name a few. With ntopng Pro it is possible to consistently and programmatically interact with those devices to have a real-time view of their status, as well as to build historical records for future investigations and troubleshooting.

Overview of configured SNMP devices

ntopng represents an effective way to have a clear, centralized view of multiple devices. Indeed, a dedicated SNMP menu provides instantaneous access to all the configured devices, and allows the administrator to add/remove devices from the pool.

Configured devices are listed along with their address, description, location, and other information. The rightmost column gives access to device-specific actions.

Adding a new SNMP device

An “Add New Device” link is available at the bottom of the “SNMP Devices” page. The addition of a new SNMP device is straightforward, as it only requires specifying the device IP address and SNMP community. Upon successful addition, the device will appear in the list of devices.
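Before adding a device, it can be handy to verify from the command line that the device actually answers SNMP with the community you plan to use. Below is a minimal sketch using the standard net-snmp tools; the device IP and the public community are just examples and not something ntopng requires:

snmpwalk -v 1 -c public 192.168.2.169 system

If the walk returns the system description, contact and uptime, ntopng will be able to retrieve the same information once the device has been added.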

 

Real-time inspection of an SNMP device

A details page is available for every configured SNMP device, simply by clicking on the “Device IP” hyperlink. Access to the details page triggers a series of SNMP queries aimed at retrieving the health and status of the device of interest. Retrieved information includes, but is not limited to, device description, contact, and uptime, along with its interfaces, connected MAC addresses and transferred bytes. A handy warning is shown when multiple MAC addresses are seen on a non-trunk port, or when slow devices are attached to high-speed ports.

Historical inspection of an SNMP device

Historical SNMP data is accessible by clicking the “Chart” icon that is available for every SNMP device as well as for each of its interfaces. If the chart icon is not visible, historical SNMP timeseries have to be enabled from the ntopng preferences.

The chart above shows a stacked view of all the device interfaces. Single device interfaces can be selected as well. In this case, a breakdown between ingress and egress traffic is visualized.

 

Mapping a host to SNMP devices

Another useful feature provided by ntopng is the ability to probe SNMP devices with the aim of detecting on which devices and interfaces a particular host has been seen. This lookup is automatically performed when accessing any host details page, provided that there is at least one SNMP device configured.

In the image above, host 192.168.2.222 has been found to be connected to interface 3 of the SNMP device 192.168.2.169.

Conclusion

This post demonstrates how ntopng can be used to systematically interact with SNMP devices to monitor their health and status. Data is visualized in real-time but also recorded for historical analyses. Currently SNMPv1 is supported, but soon we will add support for v2c. In the near future we will add the ability to trigger alerts based on SNMP (e.g. when a port changes status), and we’ll add support for proprietary MIBs out of the box, so that you can use ntopng alerts to be notified when a printer is running out of paper, or the router CPU is too heavily loaded.

Detecting and Fighting Ransomware Using ntopng (yes including WannaCry)


These days many people are talking about ransomware, and in particular about the problems created by WannaCry. Some ntop users contacted us asking if they could use our tools for detecting and stopping ransomware. While the best solution to these issues is to properly implement network security (that is a process, not a product in our opinion) by designing the network properly and keeping hosts updated, it is usually possible to use ntopng to detect infections, block most of them, and have a list of hosts that might have been compromised. If you run ntopng in passive mode (i.e. you send ntopng traffic from a span port or a tap) you can be notified when suspicious or blacklisted hosts contact (or are contacted by) local hosts, either by looking at the ntopng alert dashboard or on the go through the Slack integration (see menu Preferences > External Alerts Report > Slack Integration inside ntopng, or read this document for details).

In the latest ntopng 2.5.x series we have implemented (see menu Preferences > Alerts > Security Alerts > Enable Hosts Malware Blacklists)

the ability for ntopng to download nightly a list of blacklisted hosts that is used to detect potential security issues or, if using ntopng in inline mode, to automatically block this traffic and thus avoid infections.

If you go to the historical data explorer menu, you can browse past flows and see flow details. As you can see, in DNS flows you have the query name. This can help you find out, for instance, which hosts are querying WannaCry hosts.

If you are using ntopng in inline mode, you can also use ntopng to force the use of specified DNS servers (i.e. ntopng captures the DNS query, reforges the packet so that the DNS query is sent to the DNS server set in the preferences, and masks the response back to the DNS client, which is then unable to figure out that the response has been served by another DNS server).

This way you can prevent local hosts from contacting potential malware sites by simply setting as default a DNS such as Norton ConnectSafe (note that we’re not affiliated with Norton, this is just an example), so that whenever a DNS query for a potentially dangerous name is performed, instead of sending back the requested host IP in the response, the IP of a landing page is returned and thus your host is unable to talk with the infected site.
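For reference, a minimal sketch of how ntopng is typically put inline: it is started on a pair of bridged interfaces using the bridge: interface syntax (eth1 and eth2 below are just example interface names, and bridging support must be available in your ntopng build):

ntopng -i bridge:eth1,eth2

Once inline, the blacklist-based blocking and the DNS enforcement described above can be enabled from the preferences.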

These are just a few examples of what you can do with ntopng to secure your network and monitor security infections. If you are interested, we have written a tutorial that describes all this in more detail.

 

 

 

Webinar: Security Monitoring with 1:1 NetFlow and 100% Packet Capture


On May 23rd and 25th, together with Napatech, we have organised two webinars about monitoring network traffic using flow-based technologies. We will be talking about:

  • 100 Gbit network traffic monitoring.
  • Flow-based monitoring including nProbe Cento.
  • 100% packet capture with no loss, combining Napatech NICs and PF_RING ZC.

You can register here to save your seat. We hope that many of our users will attend these webinars.

Say hello to nDPI 2.0 (with wireshark integration)


nDPI 2.0 is a major release that:

  • Consolidates the API, in particular for guessing new protocols or notifying nDPI that for a given flow there are no more packets to dissect.
  • Introduces nDPI support into Wireshark by means of a Lua script and an extcap plugin. Available via an extcap interface, the plugin sends Wireshark the nDPI-detected protocols by adding an Ethernet packet trailer that is then interpreted and displayed inside the Wireshark GUI by the companion Lua script (a minimal loading sketch follows this list). If you’re planning to attend Sharkfest US 2017, we will present the tool in detail.
  • Introduces support for many new protocols and adds enhancements to existing dissectors, as described below.
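As a rough sketch of how the companion Lua script can be loaded (the script name ndpi.lua and its location in the nDPI sources are assumptions; the extcap binary must additionally be placed in Wireshark’s extcap directory):

wireshark -X lua_script:ndpi.lua

The -X lua_script: switch is the standard Wireshark way of loading an additional Lua script at startup.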

New Supported Protocols and Services

  • STARTTLS
  • IMAPS
  • DNScrypt
  • QUIC (Quick UDP Internet Connections)
  • AMQP (Advanced Message Queueing Protocol)
  • Ookla (SpeedTest)
  • BJNP
  • AFP (Apple Filing Protocol)
  • SMPP (Short Message Peer-to-Peer)
  • VNC
  • OpenVPN
  • OpenDNS
  • RX protocol (used by AFS)
  • CoAP and MQTT (IoT specific protocols)
  • Cloudflare
  • Office 365
  • OCS
  • MS Lync
  • Ubiquity AirControl 2
  • HEP (Extensible Encapsulation Protocol)
  • WhatsApp Voice vs WhatsApp (chat, no voice)
  • Viber
  • Wechat
  • Github
  • Hotmail
  • Slack
  • Instagram
  • Snapchat
  • MPEG TS protocol
  • Twitch
  • KakaoTalk Voice and Chat
  • Meu
  • EAQ
  • iQIYI media service
  • Weibo
  • PPStream

Improvements to Existing Dissectors

  • SSH client/server version dissection
  • Improved SSL dissection
  • SSL server certificate detection
  • Added dissection of 802.1Q double-tagged (Q-in-Q) VLAN packets
  • Improved NetBIOS dissection
  • Improved Skype detection
  • Improved Netflix traffic detection
  • Improved HTTP subprotocol matching
  • Implemented DHCP host name extraction
  • Updated Facebook detection by IP server ranges
  • Updated Twitter networks
  • Improved Microsoft detection
  • Enhanced Google detection
  • Improved BT-uTP protocol dissection
  • Added detection of Cisco datalink layer (Cisco hDLC and Cisco SLARP)

For future releases we plan to make nDPI more flexible and richer in terms of protocol categorization, as well as to enrich it with new protocols and extensions.

Should you be interested in contributing to the library or joining the team, please speak up!

Enjoy.

Filling the Pipe: Exporting ntopng Flows to Logstash


Logstash comes in very handy when it is necessary to manipulate or augment data before the actual consolidation. Typical examples of augmentation include IP address to customer ID mappings and geolocation, just to name a few.

ntopng natively supports exporting network flows to Logstash. The following video tutorial demonstrates this feature.
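For reference, a minimal command-line sketch of the setup shown in the video. Flow export is enabled through ntopng’s -F switch; the exact connection string below (driver name, host, protocol and port) is an assumption, so double-check it against ntopng --help or the documentation:

ntopng -i eth0 -F "logstash;localhost;tcp;5510"

On the Logstash side, a matching TCP input with a JSON codec is expected to receive the flows before they are transformed and shipped to the final datastore.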

Introducing ntopng 3.0


If you have enjoyed ntopng 2.x, we believe you will like 3.0 even more, as we have worked on this release for almost one year. We have modified many things: improved security in ntopng (in these cybersecurity days this is the least we could do), added layer-2 visibility, improved metrics calculations, added alerts support (even on the go), significantly improved the Windows version (yes, Windows 10 is supported out of the box), improved performance, reworked the GUI in many aspects, significantly improved the inline traffic mode, and improved FreeBSD support.

As many professional users have requested specific features such as high-speed flow export to databases or improved reports, we have created a new version named ntopng Enterprise that is now part of the ntopng product family (alongside the Community and Professional editions). You can find all the differences in features across the various product editions on the ntopng page. If you are an existing ntopng Pro user, you can contact us for upgrade information.

If you are interested in learning more about the 3.0 improvements, below you can find the whole changelog.

Enjoy ntopng!

 

3.0 Changelog

  • New features
    • Layer-2 Devices
      • MAC devices page
      • Implemented MAC last seen tracking in redis
      • Manufacturer filter and sort
    • Host pools (logical groups of hosts)
    • Logstash flow export extension
    • Implemented data anonymization: hosts and top sites
    • Implements CPU load average and memory usage
    • Virtual Interfaces
      • ZMQ: disaggregate based on probeIP or ingress interfaceId
      • Packet: disaggregate on VLANId
    • ElasticSearch and MySQL flow export statistics
    • Tiny Flows
    • Alerts
      • Implements alerts on a per-interface per-vlan basis
      • Global alert thresholds for all local hosts/interfaces/local networks
      • LUA alerts generation
      • Adds hosts stateful syn attacks alerts
      • Visualization/Retrieval of Host Alerts
      • Added the ability to generate alert when ntopng detects traffic produced by malware hosts
      • Slack integration: send alerts to slack
      • Alerts for anomalous flows
      • Host blacklisted alerts
      • Alerts delete by type, older than, by host
      • SSL certificates mismatch alerts generation
    • Implement SSL/TLS handshake detection
    • Integrated MDNS support
    • Implemented DHCP dissection for name resolution
    • Traffic bridging
      • Per host pool, per host pool member policies
      • Per L7 protocol category policies
      • Flashstart categories to block
      • Time and Traffic quotas
      • Support for Google SafeSearch DNS
      • Ability to set custom DNS
    • Captive portal
      • Limited lifetime users
      • Support for pc, kindle, android, ipad devices
    • SNMP
      • Periodic SNMP device monitoring and polling
      • Historical SNMP timeseries
      • Host-to-SNMP devices mapping
    • Daily/Weekly/Monthly Traffic Report: per host, interface, network
    • Added ability to define host blacklists
    • DNS flow characterization with FlashStart (www.flashstart.it)
    • Flow LUA scripts: on flow creation, protocol detected, expire
    • Periodic MySQL flows aggregation
    • Batched MySQL flows insertions
    • sFlow device/interface counters
    • Implementation of flow devices stats
  • Improvements
    • Allows web server binding to system ports for non-privileged users
    • Improved VLAN support
    • Improved IPv6 support
    • Implements a script to add users from the command line
    • View interfaces rework
    • Reported number of Layer-2 devices in ntopng footer
    • Preferences re-organization and search
    • Adds RIPE integration for Autonomous Systems
    • Search host by custom name
    • Move to the UTF-8 encoding
    • Make real-time statistics refresh time configurable (footer, dashboard)
    • Adds support for localization (i18n)
    • Traffic bridging: improved stability
    • Traffic profiles: improved stability and data persistence
    • Charts
    • Improved historical graphs
    • Traffic report rework and optimizations
    • Improves the responsiveness and interactivity of historical exploration (ajax)
    • Stacked top hosts
    • Add ZMQ flows/sec graph
    • Profiles graphs
    • Implemented ICMP detailed stats for local hosts
    • ASN graphs: traffic and protocols history
    • ARP requests vs. replies sent and received by hosts
    • Implement host TCP flags distribution
    • DNS packets ratio
    • FlashStart category graphs
    • Added ARP protocol in interface statistics
    • SNMP port graphs
  • VoIP (nProbe required)
    • Changes and rework for SIP and RTP protocol
    • Adds VoIP SIP to RTP flow search
    • Improves VoIP visualization (RTP)
  • Security Fixes
    • Disable TLS 1.0 (vulnerable) in mongoose
    • Disabled insecure ciphers in SSL (when using ntopng over SSL)
    • Hardens the code to prevent SQL injections
    • Enforce POST form CSRF to prevent programmer mistakes
    • Strict GET and POST parameters validation to prevent XSS
    • Prevent HTTP splitting attacks
    • Force default admin password change

Introducing nProbe 8.0, the ntopng flow companion


The current nProbe 8.0 release contains many changes with respect to the 7.x series. We have optimised the code, added the ability to collect non-standard fields (e.g. Cisco AVC), improved Kafka export, and reworked many tiny details to make the tool a stable solution for all those looking for a flexible and versatile flow probe and collector.

For all those interested, below you can find the main changes we have implemented in the past months. In summary, we have made nProbe better by adding new extensions, opening it to new encapsulations, and extending its collection capabilities (a short command-line sketch follows the changelog).

  • Main New Features
    • Implemented realtime interface stats via ZMQ to ntopng
    • Reworked packet fragmentation support that was not properly rebuilding packet fragments
    • Many tiny bugs fixed that increase stability and metrics reliability
    • Implemented BPF filtering with PF_PACKET directional sockets
    • Added VXLAN support
    • Created multiple kafka publishers to enhance performance
    • Implemented options template export via Kafka
    • Added support for collection of IXIA URI and Host
    • Added @SIP@ and @RTP@ plugin shortcuts for VoIP analysis
    • Improved SSL dissection
    • Added support for GTPv2 PCO
    • Added support for IPFIX flowEndMilliSeconds when observationTimeMilliSeconds is used (often in Cisco ASA)
    • Added ability to export sFlow interface counters via ZMQ
    • Added drop counters (export/ELK/too many flows)
    • Added kflow export (kentik.com)
  • New Options
    • --upscale-traffic to scale sampled sFlow traffic
    • --kafka-enable-batch and --kafka-batch-len to batch flow export to Kafka
    • --load-custom-fields to support custom fields shipped with NetFlow (see http://www.ntop.org/nprobe/collecting-proprietary-flows-with-nprobe/)
    • --max-num-untunnels to decapsulate up to 16 tunnelling levels
    • --vlanid-as-iface-idx to use the VLAN tag as the interface index
    • --zmq-disable-compression to disable ZMQ data compression
  • Extensions
    • Implemented min/avg/max throughput with %SRC_TO_DST_MIN_THROUGHPUT %SRC_TO_DST_AVG_THROUGHPUT %SRC_TO_DST_MAX_THROUGHPUT %DST_TO_SRC_MIN_THROUGHPUT %DST_TO_SRC_AVG_THROUGHPUT %DST_TO_SRC_MAX_THROUGHPUT
    • Added support in collection of %IN_SRC_MAC %OUT_DST_MAC %FRAGMENTS %CLIENT_NW_LATENCY_MS %SERVER_NW_LATENCY_MS %APPL_LATENCY_MS %RETRANSMITTED_IN_PKTS %RETRANSMITTED_OUT_PKTS %OOORDER_IN_PKTS %OOORDER_OUT_PKTS
    • Split %FRAGMENTS into %SRC_FRAGMENTS and %DST_FRAGMENTS
    • Added %NPROBE_IPV4_ADDRESS to export the IP address of the nProbe sensor, whereas %EXPORTER_IPV4_ADDRESS contains the IP address of the flow exporter (e.g. the router that generated the exported flow)
    • Implemented %ICMP_IPV4_TYPE, %ICMP_IPV4_CODE, %FLOW_DURATION_MILLISECONDS, %FLOW_DURATION_MICROSECONDS, %FLOW_START_MICROSECONDS, %FLOW_END_MICROSECONDS
    • VXLAN VNI exported in %UPSTREAM_TUNNEL_ID and %DOWNSTREAM_TUNNEL_ID
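As promised above, a quick sketch that puts a few of the listed options together (interface name, ZMQ endpoint and the chosen template fields are just examples):

./nprobe -i eth0 --zmq "tcp://127.0.0.1:5556" --vlanid-as-iface-idx --zmq-disable-compression -T "%IPV4_SRC_ADDR %IPV4_DST_ADDR %L4_SRC_PORT %L4_DST_PORT %PROTOCOL %IN_BYTES %OUT_BYTES %SRC_TO_DST_AVG_THROUGHPUT %DST_TO_SRC_AVG_THROUGHPUT"

The -T template simply lists the fields to export, so any of the extensions above can be added or removed as needed.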

Integrating ntopng with Grafana


Last week the NYC Metrics and Monitoring meetup invited ntop to give a talk. The topic was how to open ntopng so that it can become a gateway for producing network metrics that could be used by popular applications and frameworks such as Snap-io, Prometheus or Influx. The first result of this activity is the integration of ntopng with Grafana that we plan to complete in July.

Here you can see the presentation slides, where you can get an idea of the work we’re doing. If you are interested in using this preliminary work, you can find all the details here.

Enjoy!

How to Enhance Wireshark with DPI, latency measurement and more


This week at Sharkfest US 17, we have presented the ntop contributions to wireshark. In particular:

  • How to use nDPI to complement Wireshark traffic classification
  • How to capture on a remote box at 10/40/100 Gbit and stream traffic securely to Wireshark via SSH
  • Same as above but extracting packets from TBytes (of pcaps)  using pcap indexes
  • How to turn wireshark into a traffic monitoring tool able to measure traffic and network latency.

For those who have not attended the session (the recording will appear soon on the Sharkfest web site), you can have a look at the presentation slides or go to GitHub to look at the code we have developed for enhancing Wireshark.

Happy wiresharking!

How to Monitor and Troubleshoot an Unfamiliar Network


At ntop we use Wireshark to dissect traffic and to learn how to make our tools better. We’re not typical packet-oriented users, however, as we want to see traffic as a whole and not packet-by-packet. This has been the motivation for contributing to Wireshark and extending it towards a more monitoring-oriented tool.

Above you can see the video (and slides) of our presentation at the Sharkfest US 2017 conference.

 

 

How to use ntopng for Realtime Traffic Analysis on Fritz!Box Routers


Fritz!Box routers are popular devices that many people use to connect to the Internet. Inside these routers there is a hidden page, http://192.168.2.1/html/capture.html, that is not reachable from the router web admin pages but can be accessed directly by typing the whole URL into a web browser (replace the 192.168.2.1 IP address with your Fritz!Box router IP if you have changed it). This page can be used to dump router traffic in pcap format.


While pcaps are good for troubleshooting, most people need to know what is happening on their network in realtime, so that they can spot for instance bandwidth hogs or high-latency communications. In essence, we need to tell ntopng to analyse the traffic flowing through our router. This is exactly what the fritzdump.sh script does: it connects to your Fritz!Box router, starts the packet capture process and spawns ntopng to analyze the network traffic read from a pipe (see the picture below).
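If you just want a quick look without the script, the simplest (offline) variant is to save a capture from the page above into a pcap file and point ntopng at it, as ntopng accepts a pcap file as its -i interface (the file name below is just an example). fritzdump.sh does essentially the same thing, but in realtime through a pipe and with the router login handled for you:

ntopng -i /tmp/fritzbox.pcap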

 

This is a great solution for home and small business users, who can monitor their network traffic in realtime without having to deploy network probes, taps, etc. that are often not affordable on small networks, both in terms of complexity and sometimes also of price.

Enjoy!

Network Monitoring Deep Dive: Interview with Scott Schweitzer


In early August, Scott Schweitzer interviewed me about network monitoring and packet capture. The conversation has been very broad, and I have covered various topics ranging from packet capture and network traffic analysis to deep packet inspection, IoT (Internet of Things) and cybersecurity.

You can hear my view on this market, what we’re doing at ntop to tackle new challenges, as well as what we envisage the (hardware) networking industry should provide developers in terms of new products. This is because, after almost 20 years in this industry, looking back at the past 5-10 years I see very few changes besides speed increases. And this is not good news.

You can hear the whole podcast here.

Enjoy!

When Live is not Enough: Connecting ntopng and nProbe via MySQL for Historical Flows Exploration


Using nProbe in combination with ntopng is a common practice. The benefits of this combination are manifold and include:

  • A complete decoupling of monitoring activities (taking place on the nProbe) from visualization tasks (taking place on ntopng);
  • The capability of building distributed deployments where multiple (remote) nProbe instances send monitored data towards one or more ntopng instances for visualization;
  • A comprehensive support for the collection, harmonization and visualization of heterogeneous flow export protocols and technologies, including NetFlow v5/v9, IPFIX (NetFlow v10) and sFlow;
  • Full support for any proprietary technology that sends custom fields over NetFlow v5/v9/v10, with visualization of the data;
  • Harmonization of diverse physical network interfaces and flow export protocols and technologies into a single, clear, JSON format sent over ZMQ to ntopng.

ntopng and nProbe communicate with each other via a publish-subscribe mechanism implemented over ZMQ. Exchanged data contains both interface updates (e.g., the number of bytes and packets monitored) as well as network flows, obtained by monitoring physical interfaces (NIC cards) or by processing flow export technologies such as NetFlow.

Since the flows sent over ZMQ are only those that are active or recently expired, one has to store and archive them systematically for later access and analyses. Presently, ntopng offers rich historical flow exploration features when it is instructed to archive flows to MySQL (see part 1 and part 2 of the tutorial “Exploring Historical Data Using ntopng”). However, there are cases where MySQL flow export must be done directly by nProbe. Such cases include, but are not limited to:

  • The capability of creating a database column for each nProbe template field — ntopng creates a fixed set of database columns;
  • A MySQL database that is closer or more effectively reached from nProbe rather than from ntopng;
  • A low-end ntopng device that couldn’t deal with massive/batched database insertions.

In the cases above, it is still desirable to have full access to the ntopng historical flow exploration features. Therefore ntopng must work seamlessly even when operating on top of a database created and used by nProbe for flow export.

Fortunately, this interoperability is accomplished transparently by means of database table views. Just a couple of things are required. The first is to instruct ntopng to connect to the nProbe database using the special mysql-nprobe prefix in the -F option. The second is to ensure nProbe creates the minimum set of database columns required by ntopng, by specifying the macro @NTOPNG@ inside the nProbe template. This macro expands to the following set of fields:

%L7_PROTO %IPV4_SRC_ADDR %IPV4_DST_ADDR %L4_SRC_PORT %L4_DST_PORT %IPV6_SRC_ADDR %IPV6_DST_ADDR %IP_PROTOCOL_VERSION %PROTOCOL %IN_BYTES %IN_PKTS %OUT_BYTES %OUT_PKTS %FIRST_SWITCHED %LAST_SWITCHED %SRC_VLAN

Following is a working example that illustrates ntopng and nProbe configurations. Clearly, database connection parameters (host, user and password, schema name, and table name) must be the same on both sides.

./nprobe -i eno1 -T "@NTOPNG@" --mysql="localhost:ntopng:nf:root:root" --zmq tcp://127.0.0.1:5556 --zmq-probe-mode
./ntopng -i tcp://*:5556c -F "mysql-nprobe;localhost;ntopng;nf;root;root"

Note that when ntopng operates in this mode, it won’t export flows (no one wants the same flows stored twice in the database). It will just visualize them.

Happy flow hunting!

20 Years of ntop and Beyond


This month marks 20 years since I started the ntop project. Initially it was a hobby project, born from the wish to understand what was really flowing on a network after having spent 5 years playing with OSI, which was clearly a dead end (whoever used FTAM to download a file and compared it with FTP/NFS or drag-and-drop on a Mac desktop understands what I mean), even for me who had just graduated from university.

My initial idea behind ntop was to create a simple tool able to provide network visibility without having to deal with complicated network protocols (you’re all used to IP, but in the late 90s many other non-IP protocols existed, such as AppleTalk, IPX, SNA… and non-Ethernet encapsulations such as Token-Ring, FDDI…). This triggered my interest in creating tools able to operate on commodity hardware, simple to use and install. Today it’s probably normal to buy a PC on Amazon, install Linux and run your monitoring tools, but years ago it was not like that.

Since then, many tools have been created. Most of them are home-grown, such as PF_RING and nProbe; others are orphans we adopted, such as nDPI. If you are wondering what the next steps in ntop will be, you won’t have to wait too long, as soon we’ll introduce two new tools:

  • nDB, a very high-speed index/database for networking data, able to index millions of records/sec and store hundreds of billions of records on a single box with sub-second response time (remember that with MySQL-like tools you can insert < 50k records/sec, so 2 orders of magnitude less, not to mention that when you have millions of records your DB will be very slow) without requiring the typical big-data headaches and costs (data sharding, clusters and distributed systems for storing networking data aren’t the best answer in terms of complexity, and the trend towards cloud-based systems is a way to hide all this mess behind a per-service price tag).
  • Embedded ntopng inline for families and businesses, able not just to monitor but to enforce network policies, and complement security features provided by firewalls (that are configurable but unable to stop your printer from doing BitTorrent or your children from accessing inappropriate or malware sites).

We’ll come to this soon. The message is that after 20 years we’re not tired, but we’re looking at the next thing, not for tomorrow but for the years to come. In the past 5 years we have consolidated many technologies ntop developed previously, and because of this we’re now ready to move forward again.

Thanks to all of you who have been following our activities for a long time, and to those who sent me messages for this anniversary.

PS. We’ll organise a workshop/meetup during Sharkfest EU on Nov 7th, 6 PM. Details will follow, but in the meantime try to be there.


Announcing ntopng and Grafana Integration


This is to announce the release of the ntopng Grafana datasource that you can find on the grafana website. Using this plugin you can create a Grafana dashboard that fetches data from ntopng in a matter of clicks.

To set up the datasource, visit the Grafana Datasources page and select the green button Add a datasource. Select ntopng as the datasource Type in the page that opens.

The HTTP url must point to a running ntopng instance, to the endpoint /lua/modules/grafana. The Access method must be set to Direct. An example of a full HTTP url, assuming there is an ntopng instance running on localhost, port 3001, is the following:

http://localhost:3001/lua/modules/grafana

Tick Basic Auth if your ntopng instance has authentication enabled and specify a username-password pair in the fields User and Password. The pair must identify an ntopng user. Leave the Basic Auth checkbox unticked if ntopng has no authentication (--disable-login).

Finally, hit the button Save and Test to verify the datasource is working properly. A green message Success: Data source is working appears to confirm the datasource is properly set up.

Supported metrics

Once the datasource is set up, ntopng metrics can be charted in any Grafana dashboard.

Supported metrics are:

  • Interface metrics
  • Host metrics

Metrics that identify an interface are prefixed with interface_ followed by the actual interface name. Similarly, metrics that identify a host are prefixed with host_ followed by the actual host IP address.

Interface and host metrics have a suffix that contains the type of metric (i.e., traffic for traffic rates and traffic totals, or allprotocols for layer-7 application protocol rates). The type of metric is followed by the unit of measure (i.e., bps for bits per second, pps for packets per second, and bytes).

Interface Metrics

Supported interface metrics are:

  • Traffic rates, in bits and packets per second
  • Traffic totals, both in Bytes and packets
  • Application protocol rates, in bits per second

Host Metrics

Supported host metrics are:

  • Traffic rate in bits per second
  • Traffic total in Bytes
  • Application protocol rates in bits per second.

You’re Invited to the ntop and Wireshark Users Group Meeting


On November 7th we will be organising the ntop meetup during the Sharkfest EU 2017 that will take place in Portugal. You can find all details here.

This year we will be focusing on cybersecurity, IoT and user traffic monitoring, as well as on Wireshark. In fact, during our talk at Sharkfest we won’t have enough time to explain in detail all our activities for turning (or complementing) Wireshark into an effective monitoring tool and not just a packet dissector.

We welcome all users of our community (attendance of Sharkfest EU is not necessary to participate) to this event that is totally free of charge and a great place for talking about our common interests.

Hope to see you there!

ntopng Grafana Integration: The Beauty of Data Visualization


Summary

  • Grafana is one of the most widely known platforms for metrics monitoring (and alerting);
  • ntopng version 3.1 natively integrates with Grafana thanks to a datasource plugin which is freely available;
  • This article explains how to install and configure the ntopng datasource plugin, and how to build a dashboard for the visualization of ntopng-generated metrics.

Introduction

Grafana is an open platform for analytics and visualization. An extremely well-engineered architecture makes it completely agnostic to the storage where data resides. This means that you can build beautiful dashboards by simultaneously pulling points from data sources such as ntopng, MySQL and InfluxDB, just to name a few. Grafana interacts with tens of different data sources by means of datasource plugins. Those plugins provide a standardized way to deliver points to Grafana. ntopng implements one of those datasource plugins to expose metrics of monitored interfaces and hosts, including throughput (bps and pps) and Layer-7 application protocols (e.g., Facebook, YouTube, etc.).

Exposed Metrics

ntopng exposes metrics for monitored interfaces as well as for monitored hosts. Each metric is identifiable with a unique, self-explanatory string. In general, interface metrics are prefixed with the string interface_ while host metrics are prefixed with the string host_. Similarly, a suffix indicates the measurement unit. Specifically, _bps and _pps are used for bit and packet rates (i.e., the number of bits and packets per second), whereas _total_bytes and _total_packets are used for the total number of bytes and packets over time, respectively.

Currently, the supported metrics cover traffic as well as Layer-7 application protocol metrics.

Traffic metrics exposed are:

  • interface_<interface name>_traffic_bps
  • interface_<interface name>_traffic_total_bytes
  • interface_<interface name>_traffic_pps
  • interface_<interface name>_traffic_total_packets
  • host_<host ip>_interface_<interface name>_traffic_bps
  • host_<host ip>_interface_<interface_name>_traffic_total_bytes

Layer-7 application protocol metrics exposed are:

  • interface_<interface_name>_allprotocols_bps
  • host_<host ip>_interface_<interface_name>_allprotocols_bps

To be able to use the aforementioned metrics inside Grafana dashboards, the ntopng datasource plugin must be installed and configured as explained below.

Configuring the ntopng Datasource

Prerequisites

  • A running instance of Grafana version 4 or above;
  • A running instance of ntopng version 3.1 or above.

Grafana and ntopng run on Linux and Windows, either on physical, virtualized or containerized environments. For Grafana installation instructions see Installing Grafana. ntopng can either be built from source, or installed as a package.

Installing the ntopng Datasource Plugin

Installing the ntopng Datasource plugin is as easy as

$ grafana-cli plugins install ntop-ntopng-datasource

Upon successful installation, you will receive a confirmation message and you will have to restart Grafana:

installing ntop-ntopng-datasource @ x.y.z
from url: https://grafana.com/api/plugins/ntop-ntopng-datasource/versions/x.y.z/download

Installed ntop-ntopng-datasource successfully

Restart grafana after installing plugins . 

After restarting Grafana, you can connect to its web User Interface (UI) and visit the Plugins page. ntopng will be listed under the datasources tab.

Configuring the ntopng Datasource

A new datasource of type ntopng will be available once the ntopng datasource plugin is installed. Multiple ntopng datasources can be created to connect to several running ntopng instances. The list of configured datasources is available in the Grafana ‘Data Sources’ page. The following image shows two ntopng datasources configured with the aim of connecting to two different ntopng instances running on separate machines.

Adding a new ntopng datasource is a breeze. Just hit the ‘+ Add datasource’ button inside the Grafana ‘Data Sources’ page. This will open an ‘Edit Data Source’ page that can be used to specify ntopng connection parameters.

To configure the ntopng datasource select ntopng as the datasource Type and give it a mnemonic Name that will help you identify the datasource connection. The Url in the HTTP settings must point to a running ntopng instance, to the endpoint /lua/modules/grafana. For example, to connect to an ntopng running on host devel on port 3001, you have to use the url http://devel:3001/lua/modules/grafana.

The Access method must be set to direct. Tick Basic Auth if your ntopng instance has authentication enabled and specify a username-password pair in the fields User and Password. The pair must identify an ntopng user. Leave the Basic Auth checkbox unticked if ntopng has no authentication (--disable-login).

Finally, hit the button Save and Test to verify the datasource is working properly. A green message Success: Data source is working will appear to confirm the datasource is properly set up.
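If the test fails, it can help to query the endpoint directly from the command line and check that ntopng answers. Here is a minimal sketch using curl; host, port and the admin:admin credentials are just the values used in this example, not defaults to rely on:

curl -u admin:admin "http://devel:3001/lua/modules/grafana"

An HTTP error or an empty reply usually points to a wrong URL, a firewall in between, or credentials that do not identify a valid ntopng user.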

The following screenshot highlights the connection to an ntopng instance running on host devel on port 3001.

 

Building a Dashboard

Once the datasource is properly set up, you can visualize ntopng timeseries in any of your Grafana dashboards. Dashboards are flexible ensembles of panels. Each panel is meant to visualize a single timeseries. Panels are added in any dashboard by clicking on the ‘Add Row’ button that will allow you to choose among the available panel types.

Currently, ntopng provides timeseries that can be used effectively to build ‘Graph’ and ‘Singlestat’ panels.

Adding an Interface Speed Panel

To add an interface speed panel, select ‘Graph’ in the available panel types. A graph panel with random data will be automatically added to the dashboard. Click on the ‘Panel Title’ and select ‘Edit’. A configuration page as the following will appear:

There is a ‘Test data: random walk’ timeseries with random data by default. Drop it by clicking on the bin. To add ntopng metrics select one of the ntopng datasources configured from the ‘Panel Data Source’ dropdown. In the following image, an ntopng datasource named lab-monitor is selected:

Once the datasource is selected, you can click the ‘Add query’ button and start typing a metric name. Autocompletion will automatically show all the available metrics matching the typed text. In the image above, interface eno1 bps is picked among all the available timeseries. As soon as the metric is chosen, a chart will be populated. However, as shown below, the chart is still pretty basic and some extra work is needed to configure the axis unit of measure as well as the title.

To change the chart title select tab ‘General’ and input the title:

More important, to set the unit of measure of the y-axis select tab ‘Axes’ and pick ‘bits/sec‘ from the ‘Unit’ dropdown.

The final result is shown in the picture below

Adding an Interface Layer-7 Application Protocols Panel

To add an interface application protocols panel, the above instructions apply. Just make sure to select the interface metric ending in _allprotocols_bps. In addition, as this metric carries more than one timeseries (one per application protocol), it is recommended to stack them by ticking the ‘Stack’ checkbox under the ‘Display’ tab.

The final result will appear similar to the following image

Adding the Interface Average Speed Panel

Using a ‘Singlestat’ panel it is possible to crunch a metric using an aggregation function. To visualize the average speed, you can add a ‘Singlestat’ panel, select the interface traffic timeseries, and configure avg as ‘Stat’ in the ‘Options’ tab, as well as bits/sec in the ‘Unit’.

A Full ntopng Grafana Dashboard

By putting together all the panels introduced above, you can build a complete dashboard as the one shown here

Remember that you can combine panels created with ntopng with panels created from other datasources (e.g., MySQL or InfluxDB). There is no limit on how you can combine panels to create dashboards!

Conclusion

ntopng features a handy datasource plugin that exposes monitored metrics to Grafana. Visualizing ntopng metrics in Grafana will allow you to show ntopng data inside the beautiful Grafana UI, and will give you enough flexibility to mix and match ntopng data with other data sources.

 

 

Introducing PF_RING 7.0 with Hardware Flow Offload


This is to announce a new PF_RING major release, 7.0. In addition to many improvements to the capture modules, driver upgrades and container isolation, the main change of this release is the ability to offload flow processing to the network card (when supported by the underlying hardware).

Flow offload is a great feature for cutting the CPU load of applications doing intensive flow processing, as it is possible to let the network card handle activities like flow classification (updating flow statistics) and shunting (discarding or bypassing flows according to the application verdict). This saves CPU for further processing (e.g. DPI), or for running multiple applications on the same box (a NetFlow probe and traffic recording, or an IDS). With flow offload enabled, it is possible to receive from the capture stream both raw packets (with metadata including the flow ID) and flow records (in the form of periodic flow stats updates), and it is possible to shunt a specific flow by providing its flow ID.

Flow offload is currently supported by 10/40G Accolade Technology adapters of the ANIC-Ku Series (tested on ANIC-20/40Ku, ANIC-80Ku), however PF_RING provides a generic API that is hardware agnostic, as always.

Soon we will post news on how to accelerate applications by leveraging flow offload. This not only reduces the CPU load, but opens up many new opportunities, such as combining flow-based analysis and packet-to-disk recording on the same box. For those who will attend Suricon 2017, you can hear how Suricata benefits from this new technology to move this IDS to 40/100 Gbit.

This is the complete changelog of the 7.0 release:

  • PF_RING Library
    • Flow offload support
    • New PF_RING_FLOW_OFFLOAD pfring_open() flag to enable hw flow offload on supported cards (received buffers are native metadata)
    • New PF_RING_FLOW_OFFLOAD_NOUPDATES pfring_open() flag to disable flow updates with hw flow offload enabled: only standard raw packets with a flow id are received
    • New PKT_FLAGS_FLOW_OFFLOAD_UPDATE packet flag to indicate flow metadata in the received buffer (generic_flow_update struct)
    • New PKT_FLAGS_FLOW_OFFLOAD_PACKET packet flag to indicate raw packet with flow_id in pkt_hash
    • New PKT_FLAGS_FLOW_OFFLOAD_MARKER packet flag to indicate marked raw packet
    • Fixes for ARM systems
  • ZC Library
    • New pfring_zc_set_app_name API
    • PF_RING_ZC_PKT_FLAGS_FLOW_OFFLOAD flag to enable hw flow offload
    • Fixed BPF filters in SPSC queues
    • Fixed hugepages cleanup in case of application dropping privileges
    • Fixed sigbus error on hugepages allocation failure on numa systems
    • Fixed multiple clusters allocation in a single process
  • PF_RING-aware Libpcap/Tcpdump
    • Libpcap update v.1.8.1
    • Tcpdump update v.4.9.2
  • PF_RING Kernel Module
    • Docker/containers namespaces isolation support
    • Fixed capture on Raspberry Pi
    • Implemented support for VLAN filtering based on interface name (<device>.<VLAN ID>, where VLAN ID = 0 accepts only untagged packets)
    • New cluster types cluster_per_flow_ip_5_tuple/cluster_per_inner_flow_ip_5_tuple to balance 5 tuple with IP traffic, src/dst mac otherwise
    • Fixed hash rule last match, new hash_filtering_rule_stats.inactivity stats
  • PF_RING Capture Modules
    • Accolade flow offload support
    • New hw_filtering_rule type accolade_flow_filter_rule to discard or mark a flow
    • Netcope support
    • New hw_filtering_rule type netcope_flow_filter_rule to discard a flow
    • Improved Fiberblaze support
    • pfring_get_device_clock support
    • Ability to set native filters by setting as BPF string ‘fbcard:’
    • Fixed TX memory management
    • Fixed subnet BPF filters
    • Fixed drop counter
    • Fixed capture mode
    • Fixed sockets not enabled
    • FPGA error detection
    • Endace DAG update
    • npcap/timeline module compressed pcap extraction fix
  • Drivers
    • ixgbe-zc driver update v.5.0.4
    • i40e-zc driver update v.2.2.4
  • nBPF
    • Fixed nBPF parser memory leak
  • Examples
    • New pfsend option -L to forge VLAN IDs
    • zbalance_ipc improvements
    • Ability to dump output to log file (-l)
    • Fixed privileges drop (-D)
  • Misc
    • Fixed systemd dependencies, renamed pfring.service to pf_ring.service
    • New /etc/pf_ring/interfaces.conf configuration file for configuring management and capture interfaces

Network Device Discovery. Part 1: Active Discovery


Since its introduction in 1998, ntop(ng) has been a purely passive network monitoring tool (well, besides DNS address resolution, if enabled). Recently we have complemented it with active device discovery, in order to find out whether there are silent devices in our network, and what services/OS our devices are featuring. In this article we will analyze how active discovery works, leaving the analysis of passive discovery to a future article.

Active discovery can be started on demand from the menu

 

 

or from the network preferences to enable periodic discovery

The result of network discovery is stored in redis and kept for future use. It allows you to figure out the device type (is it a printer, a router or a tablet?), as well as the OS/features and advertised network services.

 

Above you can see a scan of a small home network. As you can see, ntopng has recognised the router and, for my iMac, detected the network services as well as the exact computer model and type.

If you are wondering how active discovery works in ntopng, unless you want to read its source code, below you can find a brief summary (a small command-line sketch of some of these steps follows the list).

  1. The first thing ntopng does is an “ARP ping” to all local subnet hosts to identify the active assets. ntopng reads the IP subnet/mask from the network interface (if the interface has no subnet, for instance when you have a span port, active discovery won’t work) and starts pinging all devices. We do this via ARP as it’s a reliable method, contrary to ICMP ping that might be disabled on some hosts. At the end of this process we have a list of active hosts that is compared against the list of devices that you see under Devices -> Layer 2. If there is a device that answers the ARP ping but was not listed under L2, we mark it as a ghost device (i.e. a device that is active on our local subnet but that is silent/ghost).
  2. We do an SSDP discovery so that devices (not all devices will answer to SSDP) can tell us the services they advertise. This works by sending a message to a multicast address and waiting for responses. Note that we might receive answers from hosts that do NOT belong to our subnet, as they have seen the SSDP request: this is a good way to figure out if our local LAN is serving multiple networks, or if somebody has created a private network (without permission?). So if you do that, make sure your devices are silent (note that ARP messages are sent in broadcast, so even without SSDP you will be discovered by the network administrators), otherwise your secret network overlay will be detected. Also note that via SSDP it is possible to learn the device capabilities, as well as the icon that is eventually stored inside the network device and that is shown by ntopng next to the IP address.
  3. While SSDP responses are being received, for all active hosts discovered via ARP, ntopng sends an SNMP request to learn more about the device. As the standard “public” community is used, we might discover only a part of the devices. SNMP helps detecting features of devices such as printers or access points/routers.
  4. As small networks do not usually have a DNS, we use MDNS to resolve names (see for instance fritz.box) for local hosts when DNS is not available. Via MDNS it is also possible to learn (in particular for Apple devices/phones) the advertised services and the device model/type. Recent OSX versions are much more verbose than old releases.
  5. Once all this is done, we merge the information collected so far and produce discovery information as you can see in the above screen. The discovery information is kept in redis, so it survives across ntopng restarts.
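As mentioned above, here is a small sketch that reproduces a couple of these steps manually with standard command-line tools, just to illustrate the idea (these are not the exact calls ntopng performs internally; the interface name and IP address are examples):

arping -c 1 -I eth0 192.168.2.222   # ARP-ping a single host on the local subnet
avahi-browse -a -t                  # list services advertised via MDNS/Bonjour

The SSDP and SNMP steps are analogous: a multicast M-SEARCH request for the former, and a query with the “public” community for the latter.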

In summary

  • If you have “hidden” devices on your network using a different subnet than the one you are supposed to use, make sure those devices are really hidden. In all cases ARP will be broadcasted, so you will be caught sooner or later.
  • If you have installed in your network a NAT router that hides your private devices, ntopng will figure this out, for instance by looking at the User-Agent HTTP header, which reports the OS and not just the browser. Beware of this, or always use SSL (which is unlikely to happen consistently, as your OS sends many spontaneous requests), so you will be caught too.
  • SSDP/MDNS are very verbose protocols and advertise more information than you might think. Consider this when planning network security.

Finally, we need your help! In our networks there are not so many different device types. Please send us reports from discovery runs in your network so we can improve the discovery process, or, if you can, send us a pull request with enhancements. Thank you!

 

 
