[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["必要な情報がない","missingTheInformationINeed","thumb-down"],["複雑すぎる / 手順が多すぎる","tooComplicatedTooManySteps","thumb-down"],["最新ではない","outOfDate","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["サンプル / コードに問題がある","samplesCodeIssue","thumb-down"],["その他","otherDown","thumb-down"]],[],[[["Google open-sourced their robots.txt parser and is retiring support for undocumented and unpublished rules (like `noindex`) on September 1, 2019."],["Unsupported rules like `crawl-delay`, `nofollow`, and `noindex` were never documented by Google and their usage is contradicted by other rules in almost all robots.txt files."],["Webmasters relying on the `noindex` directive in robots.txt should switch to alternatives like `noindex` in robots meta tags, `404/410` status codes, or password protection."],["Google provides alternative options for removing URLs from search results, including disallowing crawling in robots.txt and using the Search Console Remove URL tool."],["Developers and webmasters can provide feedback and ask questions through GitHub, Twitter, and the Webmaster Community."]]],["Google open-sourced its robots.txt parser, allowing for custom rules like \"unicorns: allowed.\" The parser will retire code handling unsupported rules like `noindex` on September 1, 2019. Alternatives to `noindex` in robots.txt include `noindex` in meta tags, 404/410 HTTP status codes, password protection, `Disallow` in robots.txt, and the Search Console Remove URL tool. Google analyzed robots.txt rule usage and found unsupported rules are rarely used effectively.\n"]]