
Controlling Crawling and Indexing

Automated web crawlers are powerful tools that help crawl and index content on the web. As a webmaster, you may wish to guide them towards your useful content and away from irrelevant content. The methods described in these documents are the de facto web-wide standards for controlling the crawling and indexing of web-based content. They consist of the robots.txt file, which controls crawling, and the robots meta tag and the X-Robots-Tag HTTP header, which control indexing. The robots.txt standard predates Google and is the accepted method of controlling the crawling of a website.
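As an illustrative sketch (the hostname and directory shown are hypothetical), a robots.txt file served from the root of a site might allow all crawlers everywhere except one directory:

    # Apply to all crawlers.
    User-agent: *
    # Keep crawlers out of the /private/ directory; everything else may be crawled.
    Disallow: /private/

    # Optionally point crawlers at the sitemap.
    Sitemap: https://www.example.com/sitemap.xml

Note that robots.txt controls crawling only; a URL that is disallowed here may still appear in search results if it is linked from elsewhere, which is why indexing is controlled separately with the directives described below.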

This document describes how the robots.txt crawler-control directives and the indexing directives are currently used at Google. These directives are generally supported by all major web crawlers and search engines.
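For example (noindex is one of several possible directive values), a page can ask not to be included in the index either with a robots meta tag in its HTML or, for non-HTML files such as PDFs, with the equivalent HTTP response header:

    <!-- Placed in the <head> of the HTML page: -->
    <meta name="robots" content="noindex">

    # Sent as an HTTP response header, e.g. for a PDF file:
    X-Robots-Tag: noindex

For these indexing directives to be seen, the page must be crawlable; blocking the same URL in robots.txt would prevent crawlers from ever reading the directive.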