Sharing advice from our London site clinic
Tuesday, April 12, 2011
We recently hosted our second site clinic, this time at TechHub in London, UK. Like
last year, here's our
summary of the topics that came up.
Title tags and meta description tags are easy ways to improve your site's visibility in Google
search results, yet we still see webmasters not fully utilizing their potential. We have help
available on writing
good page titles
and
descriptions,
which you can read to brush up on the subject. That said, you can
ignore the meta keywords tag,
at least as far as Google is concerned.
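As a quick illustration (the shop and its wording are invented placeholders, not a site from the clinic), a page's head section might declare these tags like so:

```html
<head>
  <!-- Shown as the clickable headline of your result in Google search -->
  <title>Fresh flowers delivered in London | Example Florist</title>
  <!-- Often used as the snippet shown under the title -->
  <meta name="description" content="Order hand-tied bouquets online for same-day delivery across London.">
  <!-- Ignored by Google, so there's no need to agonize over it -->
  <meta name="keywords" content="flowers, florist, london">
</head>
```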
One way Google's algorithms determine the context of content on a page is by looking at the
page's
headings.
The way semantic markup is used throughout a site, including h1, h2,
and h3 tags, helps us understand the priorities of a site's content. You shouldn't
fret over every single heading tag, though; using common sense is the way to go.
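For example, a hypothetical article (our invention, not one reviewed at the clinic) could lay out its headings with one main topic and progressively narrower subsections:

```html
<!-- One h1 for the page's main topic -->
<h1>Growing tomatoes at home</h1>
<h2>Choosing a variety</h2>
<h2>Planting and care</h2>
<h3>Watering</h3>
<h3>Fertilizing</h3>
```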
Just as we recommend structuring your pages logically, it is similarly important to structure the
whole website, particularly by linking to related documents within your site as necessary. This
helps both users and search engine bots explore all the content you provide. To augment this,
be sure to provide a regularly updated
Sitemap,
which can be conveniently
linked to from your site's robots.txt file
for automatic discovery by Google and other search engines.
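As a minimal sketch (the domain is a placeholder), the robots.txt reference is a single line giving the Sitemap's full URL:

```
# http://www.example.com/robots.txt
Sitemap: http://www.example.com/sitemap.xml
```

Search engines that fetch your robots.txt can then discover the Sitemap automatically, without you submitting it anywhere.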
Duplicate content
and
canonicalization
issues were discussed for many websites reviewed at the site clinic. Duplicate content within a
website is generally not a problem, but can make it more difficult for search engines to
properly index your content and serve the right version to users. There are two common ways to
signal which versions of your content you prefer: by using
301 redirects
to point to your preferred versions, or by using the
rel="canonical" link element. If you're wondering whether your preferred domain
should be the www or non-www version, we recommend you check out the
preferred domain setting in Webmaster Tools.
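For instance (the URLs are placeholders), a printer-friendly duplicate could identify the preferred version of the page like this:

```html
<!-- In the head of http://www.example.com/product?print=1 -->
<link rel="canonical" href="http://www.example.com/product">
```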
Another commonly seen issue is that some sites have error pages which do not return an error
HTTP result code, but instead return the HTTP success code 200. Only documents that
are actually available should reply with the HTTP success result code 200. When a
page no longer exists, it should return a 404 (Not found) response. The header
response of any URL can be checked using
Fetch as Googlebot
in Webmaster Tools, or with third-party tools such as the
Live HTTP Headers Firefox addon
or web-sniffer.net.
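If you prefer the command line, a tool such as curl (not mentioned in the post, but widely available) can fetch just the headers; for example, a missing page should reply along these lines:

```
$ curl -I http://www.example.com/no-such-page
HTTP/1.1 404 Not Found
Content-Type: text/html; charset=UTF-8
```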
Ranking for misspelled queries (for example, local business names containing typos) seems to be
an area of concern. In some cases,
Google's automatic spelling correction
gets the job done for users by suggesting the correct spelling. It isn't a wise idea to stuff
a site's content with every typo imaginable. It's also not advisable to hide this or any other
type of content using JavaScript, CSS or similar techniques. These methods are in violation of
Google's Webmaster Guidelines
and we may take appropriate action against a site that employs them. If you're not sure how
Googlebot "sees" your pages, for example, when using lots of JavaScript, you can get a better
idea by looking at the text-only version of the cached copy in Google web search results.
Users love fast websites. That's why
webpage loading speed
is an important consideration for your users. We offer a wide range of tools and recommendations
to help webmasters understand the performance of their websites and how to improve them. The
easiest way to get started is to use
Page Speed Online,
which is the web-based version of our popular
Page Speed Chrome extension.
Our
Let's make the web faster page
has a great list of resources from Google and elsewhere for improving website speed, which we
recommend you read.
We'd like to thank the
TechHub
team, who helped us facilitate the event, and give a big thank you to all participants. We hope
you found the presentation and Q&A session interesting. We've embedded the presentation below.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],[],[[["\u003cp\u003eProperly utilize title tags and meta descriptions to improve site visibility in Google search results, while ignoring meta keywords.\u003c/p\u003e\n"],["\u003cp\u003eStructure webpages logically with headings (h1, h2, h3) and the entire website with internal links and a regularly updated sitemap for better navigation and indexing.\u003c/p\u003e\n"],["\u003cp\u003eAddress duplicate content issues by using 301 redirects or rel="canonical" link element to signal preferred content versions and set preferred domain (www or non-www).\u003c/p\u003e\n"],["\u003cp\u003eEnsure error pages return a 404 (Not found) HTTP result code instead of a 200 success code for unavailable pages.\u003c/p\u003e\n"],["\u003cp\u003eFocus on webpage loading speed as it significantly impacts user experience and consider using Google's tools and recommendations for optimization.\u003c/p\u003e\n"]]],["The core topics discussed at the site clinic included optimizing site visibility using title and meta description tags, properly utilizing heading tags (`h1`, `h2`, `h3`), and structuring websites with internal linking and updated sitemaps. Duplicate content and canonicalization were addressed, recommending `301` redirects or `rel=\"canonical\"` for preferred content versions. The importance of correct HTTP error codes (e.g., `404`) and website loading speed, was stressed, along with avoiding keyword stuffing and hidden content.\n"],null,["# Sharing advice from our London site clinic\n\nTuesday, April 12, 2011\n| It's been a while since we published this blog post. Some of the information may be outdated (for example, some images may be missing, and some links may not work anymore).\n\n\nWe recently hosted our second site clinic, this time at TechHub in London, UK. Like\n[last year](/search/blog/2010/03/sharing-advice-from-our-site-clinic), here's our\nsummary of the topics that came up.\n\n- Title tags and meta description tags are easy ways to improve your site's visibility in Google search results, yet we still see webmasters not fully utilizing their potential. We have a bit of help available about writing [good page titles](/search/docs/appearance/title-link) and [descriptions](/search/blog/2007/09/improve-snippets-with-meta-description) which you can read to brush up on the subject. That said, you can [ignore the meta keywords](/search/blog/2009/09/google-does-not-use-keywords-meta-tag), at least as far as Google is concerned.\n- One way Google's algorithms determine the context of content on a page is by looking at the page's [headings](https://www.w3.org/TR/html401/struct/global.html#h-7.5.5). The way semantic markup is used throughout a site, including `h1`, `h2`, and `h3` tags, helps us to understand the priorities of a site's content. One should not fret, though, about every single `H` tag. Using common sense is the way to go.\n- Just as we recommend you structuring pages logically, it is similarly important to structure the whole website, particularly by linking to related documents within your site as necessary. 
This helps both users and search engine bots explore all the content you provide. To augment this, be sure to provide a regularly updated [Sitemap](/search/docs/crawling-indexing/sitemaps/build-sitemap), which can be conveniently [linked to from your site's robots.txt file](https://www.sitemaps.org/protocol.php#submit_robots) for automatic discovery by Google and other search engines.\n- [Duplicate content](/search/docs/advanced/guidelines/duplicate-content) and [canonicalization](/search/docs/crawling-indexing/consolidate-duplicate-urls) issues were discussed for many websites reviewed at the site clinic. Duplicate content within a website is generally not a problem, but can make it more difficult for search engines to properly index your content and serve the right version to users. There are two common ways to signal what your preferred versions of your content are: By using [`301` redirects](/search/docs/crawling-indexing/301-redirects) to point to your preferred versions, or by using the [`rel=\"canonical\"`](/search/docs/crawling-indexing/consolidate-duplicate-urls) `link` element. If you're concerned about setting your preferred domain in terms of whether to use www or non-www, we recommend you check out the related feature for [setting the preferred domain feature in Webmaster Tools](https://www.google.com/support/webmasters/bin/answer.py?answer=44231).\n- Another commonly seen issue is that some sites have error pages which do not return an error HTTP result code, but instead return the HTTP success code `200`. Only documents that are actually available should reply with the HTTP success result code `200`. When a page no longer exists, it should return a `404 (Not found)` response. Header responses of any URL can be checked using [Fetch as Googlebot](https://www.google.com/support/webmasters/bin/answer.py?answer=158587 class=) in Webmaster Tools or using third party tools such as the [Live HTTP Headers Firefox addon](https://addons.mozilla.org/en-US/firefox/addon/live-http-headers/) or web-sniffer.net.\n- Ranking for misspelled queries, for example, local business names including typos, seems to be an area of concern. In some cases, [Google's automatic spelling correction](https://www.google.com/intl/en/help/features_list.html#spell) gets the job done for users by suggesting the correct spelling. It isn't a wise idea to stuff a site's content with every typo imaginable. It's also not advisable to hide this or any other type of content using JavaScript, CSS or similar techniques. These methods are in violation of [Google's Webmaster Guidelines](/search/docs/essentials) and we may take appropriate action against a site that employs them. If you're not sure how Googlebot \"sees\" your pages, for example, when using lots of JavaScript, you can get a better idea by looking at the text-only version of the cached copy in Google web search results.\n- Users love fast websites. That's why [webpage loading speed](/search/blog/2010/04/using-site-speed-in-web-search-ranking) is an important consideration for your users. We offer a wide range of tools and recommendations to help webmasters understand the performance of their websites and how to improve them. The easiest way to get started is to use [Page Speed Online](https://pagespeed.googlelabs.com/), which is the web-based version of our popular [Page Speed Chrome extension](https://code.google.com/speed/page-speed/docs/using_chrome). 
Our [Let's make the web faster page](https://code.google.com/speed/tools) has great list of resources from Google and elsewhere for improving website speed, which we recommend you to read.\n\n\nWe'd like to thank the\n[TechHub](https://www.techhub.com/)\nteam, who helped us facilitate the event, and give a big thank you to all participants. We hope\nyou found the presentation and Q\\&A; session interesting. We've embedded the presentation below.\n\n\nAnd as we mentioned in the site clinic, sign up at the\n[Google Webmaster Help Forum](https://support.google.com/webmasters/community)\nto discuss any further questions you might have and keep an eye on our\n[Webmaster Central Blog](/search/blog).\n\nWritten by\n[Kaspar Szymanski](https://profiles.google.com/110192192554903281760/about),\n[Pierre Far](/search/blog/authors/pierre-far),\n[Sven Naumann](https://profiles.google.com/105424734647625719803/about)"]]