Google Launches Effort to Make Robots Exclusion Protocol an Internet Standard, Open Sources Robots.txt Parser

Website owners have been excluding web crawlers with the Robots Exclusion Protocol (REP), via robots.txt files, for 25 years. More than 500 million websites use robots.txt files to talk to bots, according to Google's data. Until now, however, the REP has never been an official Internet standard: there has been no documented specification for writing the rules correctly according to the protocol.
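To illustrate the kind of rules the protocol covers, here is a minimal sketch using Python's standard-library `urllib.robotparser` module to evaluate a hypothetical robots.txt file (the example domain and paths are illustrative, not from the article):

```python
from urllib.robotparser import RobotFileParser

# A minimal, hypothetical robots.txt: block all crawlers from /private/
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A path under /private/ is disallowed for any user agent
print(parser.can_fetch("*", "https://example.com/private/secret.html"))  # False
# Paths not matched by a Disallow rule remain crawlable
print(parser.can_fetch("*", "https://example.com/index.html"))  # True
```

Because the REP had no formal specification, edge cases (unknown directives, encoding, very large files) have historically been handled differently by different parsers, which is part of what Google's standardization effort and open-sourced parser aim to resolve.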

Source: Google Launches Effort to Make Robots Exclusion Protocol an Internet Standard, Open Sources Robots.txt Parser

