{"id":5585,"date":"2026-03-31T23:47:31","date_gmt":"2026-03-31T15:47:31","guid":{"rendered":"http:\/\/longzhuplatform.com\/?p=5585"},"modified":"2026-03-31T23:47:31","modified_gmt":"2026-03-31T15:47:31","slug":"google-explains-googlebot-byte-limits-and-crawling-architecture-via-sejournal-mattgsouthern","status":"publish","type":"post","link":"http:\/\/longzhuplatform.com\/?p=5585","title":{"rendered":"Google Explains Googlebot Byte Limits And Crawling Architecture via @sejournal, @MattGSouthern"},"content":{"rendered":"<div id=\"narrow-cont\"> <p>Google\u2019s Gary Illyes published a blog post explaining how Googlebot\u2019s crawling systems work. The post covers byte limits, partial fetching behavior, and how Google\u2019s crawling infrastructure is organized.<\/p> <p>The post references episode 105 of the Search Off the Record podcast, where Illyes and Martin Splitt discussed the same topics. Illyes adds more details about crawling architecture and byte-level behavior.<\/p> <h2>What\u2019s New<\/h2> <h3>Googlebot Is One Client Of A Shared Platform<\/h3> <p>Illyes describes Googlebot as \u201cjust a user of something that resembles a centralized crawling platform.\u201d<\/p> <p>Google Shopping, AdSense, and other products all send their crawl requests through the same system under different crawler names. Each client sets its own configuration, including user agent string, robots.txt tokens, and byte limits.<\/p> <p>When Googlebot appears in server logs, that\u2019s Google Search. Other clients appear under their own crawler names, which Google lists on its crawler documentation site.<\/p> <h3>How The 2 MB Limit Works In Practice<\/h3> <p>Googlebot fetches up to 2 MB for any URL, excluding PDFs. PDFs get a 64 MB limit. Crawlers that don\u2019t specify a limit default to 15 MB.<\/p> <p>Illyes adds several details about what happens at the byte level.<\/p> <p>He says HTTP response headers count toward the 2 MB limit. 
When a page exceeds 2 MB, Googlebot doesn\u2019t reject it. The crawler stops at the cutoff and sends the truncated content to Google\u2019s indexing systems and the Web Rendering Service (WRS).<\/p> <p>Those systems treat the truncated file as if it were complete. Anything past 2 MB is never fetched, rendered, or indexed.<\/p> <p>Every external resource referenced in the HTML, such as CSS and JavaScript files, gets fetched with its own separate byte counter. Those files don\u2019t count toward the parent page\u2019s 2 MB. Media files, fonts, and what Google calls \u201ca few exotic files\u201d are not fetched by WRS.<\/p> <h3>Rendering After The Fetch<\/h3> <p>The WRS processes JavaScript and executes client-side code to understand a page\u2019s content and structure. It pulls in JavaScript, CSS, and XHR requests but doesn\u2019t request images or videos.<\/p> <p>Illyes also notes that the WRS operates statelessly, clearing local storage and session data between requests. Google\u2019s JavaScript troubleshooting documentation covers implications for JavaScript-dependent sites.<\/p> <h3>Best Practices For Staying Under The Limit<\/h3> <p>Google recommends moving heavy CSS and JavaScript to external files, since those get their own byte limits. Meta tags, title tags, link elements, canonicals, and structured data should appear higher in the HTML. On large pages, content placed lower in the document risks falling below the cutoff.<\/p> <p>Illyes flags inline base64 images, large blocks of inline CSS or JavaScript, and oversized menus as examples of what could push pages past 2 MB.<\/p> <p>The 2 MB limit \u201cis not set in stone and may change over time as the web evolves and HTML pages grow in size.\u201d<\/p> <h2>Why This Matters<\/h2> <p>The 2 MB limit and the 64 MB PDF limit were first documented as Googlebot-specific figures in February. HTTP Archive data showed most pages fall well below the threshold. 
This blog post adds the technical context behind those numbers.<\/p> <p>The platform description explains why different Google crawlers behave differently in server logs and why the 15 MB default differs from Googlebot\u2019s 2 MB limit. These are separate settings for different clients.<\/p> <p>HTTP header details matter for pages near the limit. Google states headers consume part of the 2 MB limit alongside HTML data. Most sites won\u2019t be affected, but pages with large headers and bloated markup might hit the limit sooner.<\/p> <h2>Looking Ahead<\/h2> <p>Google has now covered Googlebot\u2019s crawl limits in documentation updates, a podcast episode, and a dedicated blog post within a two-month span. Illyes\u2019 note that the limit may change over time suggests these figures aren\u2019t permanent.<\/p> <p>For sites with standard HTML pages, the 2 MB limit isn\u2019t a concern. Pages with heavy inline content, embedded data, or oversized navigation should verify that their critical content is within the first 2 MB of the response.<\/p> <hr\/> <p><em>Featured Image: Sergei Elagin\/Shutterstock<\/em><\/p> <\/div> ","protected":false},"excerpt":{"rendered":"<p>Google\u2019s Gary Illyes published a blog post explaining how Googlebot\u2019s crawling systems work. The post covers byte limits, partial fetching behavior, and how Google\u2019s crawling infrastructure is organized. The post references episode 105 of the Search Off the Record podcast, where Illyes and Martin Splitt discussed the same topics. 
Illyes adds more details about crawling [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":5586,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[16],"tags":[20872,20871,4675,211,75,8553,8064,90,80],"class_list":["post-5585","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-accessibility","tag-architecture","tag-byte","tag-crawling","tag-explains","tag-google","tag-googlebot","tag-limits","tag-mattgsouthern","tag-sejournal"],"acf":[],"_links":{"self":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/posts\/5585","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5585"}],"version-history":[{"count":0,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/posts\/5585\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=\/wp\/v2\/media\/5586"}],"wp:attachment":[{"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5585"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5585"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/longzhuplatform.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5585"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}