It's been a while since we published this blog post, and some of the information may be outdated. In particular, rel="canonical" is no longer recommended for syndicated content.
Tuesday, December 15, 2009
We've recently discussed several ways of
handling duplicate content on a single website;
today we'll look at ways of handling similar duplication across different websites, across
different domains. For some sites, there are legitimate reasons to duplicate content across
different websites—for instance, to migrate to a new domain name using a web server that
cannot create server-side redirects. To help with issues that arise on such sites, we're
announcing our support of the
cross-domain rel="canonical" link element.
Ways of handling cross-domain content duplication:
Choose your preferred domain
When confronted with duplicate content, search engines will generally take one version and
filter the others out. This can also happen when multiple domain names are involved, so while
search engines are generally pretty good at choosing something reasonable, many webmasters
prefer to make that decision themselves.
Reduce in-site duplication
Before starting on cross-site duplicate content questions, make sure to
handle duplication within your site
first.
Enable crawling and use 301 (permanent) redirects where possible
Where possible, the most important step is often to use appropriate
301 redirects.
These redirects send visitors and search engine crawlers to your preferred domain and make it
very clear which URL should be indexed. This is generally the preferred method as it gives clear
guidance to everyone who accesses the content. Keep in mind that in order for search engine
crawlers to discover these redirects, none of the URLs in the redirect chain can be disallowed
via a
robots.txt file.
Don't forget to handle your www / non-www preference with appropriate redirects and in
Webmaster Tools.
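As a minimal sketch of the redirect setup described above, assuming an Apache server with mod_rewrite enabled and the hypothetical domains example-old.com and example-new.com, an .htaccess file on the old domain might look like this:

```apache
# .htaccess on the old domain (example-old.com) -- hypothetical names for illustration.
# Redirect every URL 1:1 to the new domain with a 301 (permanent) redirect,
# consolidating the www / non-www variants at the same time.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?example-old\.com$ [NC]
RewriteRule ^(.*)$ https://www.example-new.com/$1 [R=301,L]
```

Note that for crawlers to discover these redirects, the old URLs must remain crawlable: don't disallow them in the old domain's robots.txt file.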
Use the cross-domain rel="canonical" link element
There are situations where it's not easily possible to set up redirects. This could be the case
when you need to move your website from a server that does not feature server-side redirects.
In a situation like this, you can use the
rel="canonical" link element
across domains to specify the exact URL of whichever domain is preferred for indexing. While
the rel="canonical" link element is seen as a hint and not an absolute command,
we do try to follow it where possible.
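As a sketch of what this looks like in practice, with hypothetical URLs: a duplicate page on the old domain would include a link element in its head pointing at the preferred URL on the other domain.

```html
<!-- On the duplicate page, e.g. https://www.example-old.com/product.html -->
<!-- (hypothetical URLs for illustration) -->
<head>
  <title>Product page</title>
  <link rel="canonical" href="https://www.example-new.com/product.html">
</head>
```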
Still have questions?
Do the pages have to be identical?
No, but they should be similar. Slight differences are fine.
For technical reasons I can't include a 1:1 mapping for the URLs on my sites. Can I just
point the rel="canonical" at the home page of my preferred site?
No; this could result in problems. A mapping from old URL to new URL for each URL on the old site
is the best way to use rel="canonical".
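To illustrate the per-URL mapping with hypothetical URLs: each page on the old site should point to its own counterpart on the new site, never collectively at the home page.

```html
<!-- On https://old.example.com/widgets/blue -->
<link rel="canonical" href="https://new.example.com/widgets/blue">

<!-- On https://old.example.com/widgets/red -->
<link rel="canonical" href="https://new.example.com/widgets/red">

<!-- Don't do this on every page: -->
<!-- <link rel="canonical" href="https://new.example.com/"> -->
```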
I'm offering my content / product descriptions for syndication. Do my publishers need to use
rel="canonical"?
We leave this up to you and your publishers. If the content is similar enough, it might make
sense to use rel="canonical", if both parties agree.
My server can't do a 301 (permanent) redirect. Can I use
rel="canonical" to move my site?
If it's at all possible, you should work with your webhost or web server to do a
301 redirect. Keep in mind that we treat rel="canonical" as a hint,
and other search engines may handle it differently. But if a 301 redirect is
impossible for some reason, then a rel="canonical" may work for you. For more
information, see our
guidelines on moving your site.
Should I use a noindex robots
meta tag
on pages with a rel="canonical" link element?
No. The two pages would not be equivalent with regard to indexing: one would be allowed
while the other would be blocked. Additionally, it's important that these pages are not
disallowed from crawling through a robots.txt file; otherwise, search engine crawlers will not
be able to discover the rel="canonical" link element.
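As a sketch of the crawlable setup described above, a robots.txt file on the old (duplicate) domain should leave the pages open to crawling, for example:

```
# robots.txt on the old domain -- hypothetical sketch.
# Don't disallow the duplicate pages; that would hide the
# rel="canonical" link element from crawlers entirely:
#   User-agent: *
#   Disallow: /
# Instead, leave everything crawlable (an empty Disallow allows all):
User-agent: *
Disallow:
```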
We hope this makes it easier for you to handle duplicate content in a user-friendly way. Are
there still places where you feel that duplicate content is causing your sites problems? Let us
know in the
Webmaster Help Forum!
Posted by
John Mueller,
Webmaster Trends Analyst, Google Zürich