Block Search indexing with 'noindex'
You can prevent a page from appearing in Google Search by including a
noindex meta tag in the page's HTML code, or by returning a
noindex header in the HTTP
response. When Googlebot next crawls that page and sees the tag or header, it will drop
the page entirely from Google Search results, regardless of whether other sites link to it.
noindex is useful if you don't have root access to your server, as it
allows you to control indexing of your site on a page-by-page basis.
There are two ways to implement
noindex: as a meta tag and as an HTTP response
header. They have the same effect; choose the method that is more convenient for your site.
Meta tag
To prevent most search engine web crawlers from indexing a page on your site, place
the following meta tag into the
<head> section of your page:
<meta name="robots" content="noindex">
To prevent only Google web crawlers from indexing a page:
<meta name="googlebot" content="noindex">
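To verify that a page actually carries one of these directives, you can parse its HTML for the relevant meta tags. Here is a minimal sketch using Python's standard-library html.parser; the function name has_noindex is illustrative, not part of any Google tooling:

```python
from html.parser import HTMLParser


class NoindexMetaParser(HTMLParser):
    """Scans a page for robots/googlebot meta tags carrying noindex."""

    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attr_map = dict(attrs)
        name = (attr_map.get("name") or "").lower()
        content = (attr_map.get("content") or "").lower()
        # Both the generic "robots" tag and the Google-specific
        # "googlebot" tag can carry the noindex directive.
        if name in ("robots", "googlebot") and "noindex" in content:
            self.noindex = True


def has_noindex(html: str) -> bool:
    parser = NoindexMetaParser()
    parser.feed(html)
    return parser.noindex
```

Note that this only inspects the HTML; a page can also be excluded via the X-Robots-Tag HTTP header described below, which such a check would miss.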
Be aware that some search engine web crawlers might interpret the
noindex directive differently, so your page might
still appear in results from other search engines.
HTTP response header
Instead of a meta tag, you can also return an
X-Robots-Tag header with a value of
noindex or none in your response. Here's an example of an
HTTP response with an
X-Robots-Tag instructing crawlers not to index a page:
HTTP/1.1 200 OK
(…)
X-Robots-Tag: noindex
(…)
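How you set this header depends on your web server or application framework. As a language-neutral sketch, here is a hypothetical minimal Python WSGI application that attaches the header to every response it serves:

```python
def app(environ, start_response):
    """A minimal WSGI app whose responses are all marked noindex."""
    headers = [
        ("Content-Type", "text/html; charset=utf-8"),
        # Tell crawlers not to index this page, equivalent to a
        # <meta name="robots" content="noindex"> tag in the HTML.
        ("X-Robots-Tag", "noindex"),
    ]
    start_response("200 OK", headers)
    return [b"<html><body>Not for search results</body></html>"]
```

In a real deployment you would typically set the header only on the specific paths you want excluded, or configure it at the web-server layer instead of in application code.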
Help us spot your meta tags
We have to crawl your page in order to see meta tags and HTTP headers. If a page is still appearing in results, it's probably because we haven't crawled the page since you added the tag; you can request that Google recrawl a page using the URL Inspection tool. Another possibility is that your robots.txt file is blocking the URL from Google web crawlers, so they can't see the tag. To unblock your page, edit your robots.txt file; you can edit and test your robots.txt using the robots.txt Tester tool.
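The robots.txt interaction can be checked programmatically: if a URL is disallowed for a crawler, that crawler never fetches the page and therefore never sees the noindex tag or header. A small sketch using Python's built-in urllib.robotparser; the robots.txt content and URLs below are made up for illustration:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that blocks the /private/ directory
# for all crawlers.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The home page may be fetched, so a noindex tag there would be seen.
print(parser.can_fetch("Googlebot", "https://example.com/"))

# Pages under /private/ may not be fetched, so a noindex tag on them
# would never be read and the URL could still appear in results.
print(parser.can_fetch("Googlebot", "https://example.com/private/secret.html"))
```

This is exactly the failure mode described above: to let crawlers see the noindex directive, the page must not be blocked by robots.txt.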