
How to stop tracking web crawlers


This guide explains how to prevent common web crawlers from inflating your visitor traffic log and appearing in your website tracking reports.

A web crawler is a bot (script) that periodically scans websites, typically to index web pages. Other bots extract information from websites for data mining.

Blocking website crawlers from being logged by the Visitor Tracker

Blocking a bot from being logged does not prevent it from accessing your website; it simply tells the tracker system to ignore the bot's visits.
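If you want to discourage crawlers from visiting the site at all, that is handled separately from the tracker, typically with a robots.txt file at the site root. A minimal example (note that only well-behaved crawlers honor these directives):

```
# robots.txt placed at the site root.
# Asks all compliant crawlers not to crawl any page.
User-agent: *
Disallow: /
```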

  1. Navigate to the “My Projects” page. Locate the project for which you want to stop logging web crawlers and click its “edit” link.
  2. Find the “Log Filter” drop-down menu and select “Do NOT Log Robot Visits”.
  3. Scroll to the bottom of the page and click the “Update” button.

On the “My Projects” page, you can now confirm that the feature is enabled by the corresponding robot icon displayed under the project's status.
