GoogleBot still crawling HTTP/1.1 years after website moved to HTTP/2
-
Whole website moved to https://www. HTTP/2 version 3 years ago.
When we review log files, it is clear that - for the home page - GoogleBot continues to only access via HTTP/1.1 protocol
-
Robots file is correct (simply allowing all and referring to the https://www. sitemap)
-
Sitemap is referencing https://www. pages including homepage
-
Hosting provider has confirmed server is correctly configured to support HTTP/2 and provided evidence of accessing via HTTP/2 working
-
301 redirects set up for non-secure and non-www versions of website all to https://www. version
-
Not using a CDN or proxy
-
GSC reports home page as correctly indexed (with https://www. version canonicalised) but does still have the non-secure version of website as the referring page in the Discovery section. GSC also reports homepage as being crawled every day or so.
We totally understand it can take time to update the index, but we are at a complete loss to understand why GoogleBot continues to access only via HTTP/1.1 rather than HTTP/2.
A possibly related issue - and of course what is causing concern - is that new pages of the site seem to index and perform well in the SERPs ... except the home page. It never makes it to page 1 (other than for the brand name), despite rating multiples higher on content, speed etc. than other pages, which still get indexed in preference to the home page.
Any thoughts, further tests, ideas, direction or anything will be much appreciated!
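For reference, this is roughly how we tally the protocol per Googlebot hit when reviewing the logs. It is a simplified sketch: the sample lines and the combined log format here are illustrative, not our real log, so point it at your own access log file in practice.

```python
import re

# Count HTTP protocol versions used by Googlebot in an access log.
# SAMPLE_LOG is made up for illustration (Apache/nginx combined log format);
# in practice, read the lines from your real access log instead.
SAMPLE_LOG = """\
66.249.66.1 - - [01/Mar/2024:06:00:00 +0000] "GET / HTTP/1.1" 200 5123 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.66.2 - - [01/Mar/2024:06:05:00 +0000] "GET /about HTTP/2.0" 200 4210 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
203.0.113.9 - - [01/Mar/2024:06:06:00 +0000] "GET / HTTP/2.0" 200 5123 "-" "Mozilla/5.0"
"""

# The request line is the first quoted field: "METHOD PATH PROTOCOL".
REQUEST_RE = re.compile(r'"(?P<method>\S+) (?P<path>\S+) (?P<proto>HTTP/[\d.]+)"')

def googlebot_protocol_counts(log_text: str) -> dict:
    """Return {protocol: hit count} for lines whose user agent mentions Googlebot."""
    counts = {}
    for line in log_text.splitlines():
        if "Googlebot" not in line:
            continue
        match = REQUEST_RE.search(line)
        if match:
            proto = match.group("proto")
            counts[proto] = counts.get(proto, 0) + 1
    return counts

print(googlebot_protocol_counts(SAMPLE_LOG))
# With the sample above: {'HTTP/1.1': 1, 'HTTP/2.0': 1}
```

Note that nginx and Apache record HTTP/2 requests as "HTTP/2.0" in the request line, so that is the string to look for.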
-
-
@john1408 said in GoogleBot still crawling HTTP/1.1 years after website moved to HTTP/2:
@AKCAC said in GoogleBot still crawling HTTP/1.1 years after website moved to HTTP/2:
Whole website moved to https://www. HTTP/2 version 3 years ago. When we review log files, it is clear that - for the home page - GoogleBot continues to only access via HTTP/1.1 protocol [...]
It's baffling that GoogleBot persists with HTTP/1.1 for the homepage despite a proper setup. Consider digging further into Google Search Console for indexing insights, and reach out to Google Support for help resolving this unusual behaviour.
-
-
@AKCAC said in GoogleBot still crawling HTTP/1.1 years after website moved to HTTP/2:
Whole website moved to https://www. HTTP/2 version 3 years ago. When we review log files, it is clear that - for the home page - GoogleBot continues to only access via HTTP/1.1 protocol [...]
First off, it's great that your entire website made the transition to HTTPS and HTTP/2 three years ago. That's definitely a step in the right direction for performance and security.
Since your hosting provider has confirmed that the server is configured correctly for HTTP/2 and you've got the 301 redirects set up properly, it's puzzling why GoogleBot is still sticking to HTTP/1.1 for accessing the homepage. One thing you might want to double-check is if there are any specific directives in your server configuration that could be affecting how GoogleBot accesses your site. Sometimes, even seemingly minor configurations can have unintended consequences.
Regarding the non-secure version of your website still showing up in the Discovery section of Google Search Console (GSC), despite the homepage being correctly indexed with the HTTPS version, it could be a matter of Google's index taking some time to catch up. However, it's worth investigating further to ensure there aren't any lingering issues causing this discrepancy.
As for the home page not ranking as well in SERPs compared to other pages, despite having better content and speed, this could be due to a variety of factors. It's possible that Google's algorithms are prioritizing other pages for certain keywords or that there are specific technical issues with the homepage that are affecting its visibility.
In terms of next steps, I'd recommend continuing to monitor the situation closely and perhaps reaching out to Google's support team for further assistance. They may be able to provide additional insights or suggestions for resolving these issues.
Overall, it sounds like you've done a thorough job of troubleshooting so far, but sometimes these technical SEO mysteries require a bit of persistence to unravel. Keep at it, and hopefully, you'll be able to get to the bottom of these issues soon!
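To illustrate the kind of server directive worth double-checking, here is a minimal nginx-style sketch. This assumes nginx (your stack may differ), and every hostname and certificate path is a placeholder; it simply shows HTTP/2 enabled on the TLS listeners alongside the 301 redirects you described.

```nginx
# Hypothetical nginx sketch -- hostnames and certificate paths are placeholders.

# Redirect all plain-HTTP traffic to the canonical https://www. host.
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://www.example.com$request_uri;
}

# Redirect the non-www HTTPS host to the www host.
server {
    listen 443 ssl http2;
    server_name example.com;
    ssl_certificate     /etc/ssl/example.crt;
    ssl_certificate_key /etc/ssl/example.key;
    return 301 https://www.example.com$request_uri;
}

# Canonical host, with HTTP/2 enabled on the listener.
# Clients can still negotiate HTTP/1.1 via ALPN, so HTTP/1.1 hits in the
# logs do not by themselves mean the server is misconfigured.
server {
    listen 443 ssl http2;
    server_name www.example.com;
    ssl_certificate     /etc/ssl/example.crt;
    ssl_certificate_key /etc/ssl/example.key;
    # ... site configuration ...
}
```

The point of checking this is to confirm that the `http2` flag is present on every TLS listener, not just the canonical one, and that no redirect hop lands on a listener without it.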
-
-
@AKCAC said in GoogleBot still crawling HTTP/1.1 years after website moved to HTTP/2:
Whole website moved to https://www. HTTP/2 version 3 years ago. When we review log files, it is clear that - for the home page - GoogleBot continues to only access via HTTP/1.1 protocol [...]
It seems like you've taken several steps to ensure the correct protocol (HTTP/2) for your website, and it's puzzling that GoogleBot still accesses the home page via HTTP/1.1. A few additional suggestions:
Crawl Rate Settings: Check your Google Search Console (GSC) for crawl rate settings. Google might be intentionally crawling your site slowly.
Server Logs: Reanalyze server logs to confirm that GoogleBot is indeed accessing via HTTP/1.1 for the home page. This could help identify patterns or anomalies.
Mobile Usability: Ensure your home page is mobile-friendly. Google tends to prioritize mobile indexing.
Fetch and Render Tool: Use GSC's Fetch and Render tool to see how Google renders your home page. It might provide insights into how Google sees your page.
Structured Data and Markup: Ensure structured data and markup on your home page are correct and up-to-date.
Manual Submission: Consider manually requesting indexing for your home page through GSC.
Regarding the new pages performing well compared to the home page, it might be worth revisiting your on-page SEO elements and analyzing the competition for relevant keywords.
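On the server-logs point, a small script like the sketch below can pull out the timestamp and protocol of each Googlebot hit on the homepage, which makes patterns easier to spot. The log format and sample lines are assumed for illustration, not taken from your site; substitute your own access log.

```python
import re

# List (timestamp, protocol) for every Googlebot request to the homepage ("/").
# SAMPLE_LOG is illustrative (combined log format); read your real log in practice.
SAMPLE_LOG = """\
66.249.66.1 - - [01/Mar/2024:06:00:00 +0000] "GET / HTTP/1.1" 200 5123 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.66.1 - - [02/Mar/2024:07:30:00 +0000] "GET /blog/post HTTP/2.0" 200 9000 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
"""

# Timestamp sits in [brackets]; the request line is the first quoted field.
LINE_RE = re.compile(
    r'\[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) (?P<proto>HTTP/[\d.]+)"'
)

def googlebot_homepage_hits(log_text: str) -> list:
    """Return (timestamp, protocol) pairs for Googlebot hits on '/'."""
    hits = []
    for line in log_text.splitlines():
        if "Googlebot" not in line:
            continue
        m = LINE_RE.search(line)
        if m and m.group("path") == "/":
            hits.append((m.group("ts"), m.group("proto")))
    return hits

for ts, proto in googlebot_homepage_hits(SAMPLE_LOG):
    print(ts, proto)
# Sample output: 01/Mar/2024:06:00:00 +0000 HTTP/1.1
```

If every homepage hit shows HTTP/1.1 while other URLs show HTTP/2.0, that is a useful data point to include when raising the issue with Google.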
-