Key points:

- The Robots Exclusion Protocol (REP), used for controlling web crawler access, is becoming an internet standard after 25 years as a de-facto standard, and Google is leading the effort to formalize it.
- Google open-sourced the C++ library it has used for 20 years to parse and match rules in robots.txt files, incorporating its accumulated experience with edge cases.
- The open-source package includes a testing tool, `robots_main`, for easily verifying robots.txt rules.
- Developers are encouraged to use the library and to share what they build, or send feedback, via GitHub and Twitter.