Google Launches Effort to Make Robots Exclusion Protocol an Internet Standard, Open Sources Robots.txt Parser

Website owners have been using the Robots Exclusion Protocol (REP) to keep web crawlers out via robots.txt files for 25 years. According to Google's data, more than 500 million websites use robots.txt files to communicate with bots. Yet until now there has never been an official Internet standard: no documented specification for writing the rules correctly according to the protocol.
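For readers unfamiliar with the format, here is a minimal sketch of how REP rules are evaluated, using Python's standard-library `urllib.robotparser` rather than Google's newly open-sourced C++ parser. The robots.txt content and the example.com URLs are hypothetical, chosen purely for illustration.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content; a real file is served from the
# site root, e.g. https://example.com/robots.txt
rules = """
User-agent: *
Disallow: /private/
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# Paths under /private/ are disallowed for all user agents ("*"),
# everything else is allowed.
print(parser.can_fetch("*", "https://example.com/private/page.html"))  # False
print(parser.can_fetch("*", "https://example.com/index.html"))         # True
```

Well-behaved crawlers perform exactly this kind of check before fetching a URL; the standardization effort aims to pin down how edge cases in such rules should be interpreted.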

Source: Google Launches Effort to Make Robots Exclusion Protocol an Internet Standard, Open Sources Robots.txt Parser
