Google has recently announced a new web crawler called “GoogleOther,” created to reduce the workload on Googlebot, its primary search index crawler.
This addition to Google’s crawling operations should streamline the process of automatically discovering and scanning websites, work carried out by programs commonly known as robots or spiders.
Googlebot is responsible for building the index used in Google Search. GoogleOther, on the other hand, is a generic web crawler that will be utilized by various product teams at Google to obtain publicly accessible content from websites.
Google Search Analyst Gary Illyes shared more details in a LinkedIn post, revealing that the primary aim of GoogleOther is to handle non-essential tasks that are currently being performed by Googlebot, freeing it up to focus on building the search index.
These include jobs such as research and development (R&D) crawls, which are not directly related to search indexing. By taking over some of Googlebot’s R&D crawls, GoogleOther frees up crawl capacity for Googlebot.
GoogleOther shares the same infrastructure as Googlebot, including features and limitations such as host load limits, robots.txt rules, HTTP protocol version, and fetch size. Essentially, GoogleOther is Googlebot operating under a different name.
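Because GoogleOther honors robots.txt like any other well-behaved Google crawler, it can be controlled with standard rules targeting its user agent token. Below is a minimal sketch, assuming you wanted to keep GoogleOther out of a hypothetical /experiments/ directory while leaving Googlebot unaffected; the directory path is purely illustrative.

```
# Block only GoogleOther from a hypothetical section of the site
User-agent: GoogleOther
Disallow: /experiments/

# Googlebot (and therefore search indexing) remains unaffected
User-agent: Googlebot
Disallow:
```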
The introduction of GoogleOther is unlikely to significantly impact websites, as it operates using the same infrastructure and limitations as Googlebot. However, it is a notable development in Google’s ongoing efforts to optimize and streamline its web crawling processes.
If you’re concerned about GoogleOther, there are several ways to monitor it:

- Regularly review server logs to identify requests made by GoogleOther (see the sketch after this list).
- Update your robots.txt file to include specific rules for GoogleOther if necessary.
- Keep an eye on crawl stats within Google Search Console.
- Track your website’s performance metrics to identify any potential correlations with GoogleOther’s crawling activities.
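As a starting point for that log review, here is a minimal Python sketch that counts GoogleOther requests in an access log. It assumes the common Apache/Nginx combined log format and a hypothetical file path (access.log); adjust both for your server, and treat the matching rule as an assumption to verify against the user-agent strings you actually see.

```python
import re
from collections import Counter

# Hypothetical path; point this at your server's access log.
LOG_PATH = "access.log"

# Combined log format: IP - - [timestamp] "METHOD path HTTP/x" status size "referer" "user-agent"
LINE_RE = re.compile(r'"[A-Z]+ (?P<path>\S+) HTTP/[\d.]+" \d{3} \S+ "[^"]*" "(?P<agent>[^"]*)"')

hits = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = LINE_RE.search(line)
        # Count only requests whose user-agent string mentions GoogleOther.
        if match and "GoogleOther" in match.group("agent"):
            hits[match.group("path")] += 1

# Show the most-requested paths, i.e. where GoogleOther spends its crawl activity.
for path, count in hits.most_common(20):
    print(f"{count:6d}  {path}")
```

A simple count like this is usually enough to tell whether GoogleOther’s activity is negligible or worth a closer look in your crawl stats.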