Martin Splitt mentioned in a LinkedIn post[1], as a follow-up to this, that crawl budget may apply to larger sites.
> That was a pretty defensive stance in 2018 and, to be fair, using server-side rendering still likely gives you a more robust and faster-for-users setup than CSR, but in general our queue times are significantly lower than people assumed and crawl budget only applies to very large (think 1 million pages or more) sites and matter mostly to those, who have large quantities of content they need updated and crawled very frequently (think hourly tops).
We have also tested smaller websites and found that Google consistently renders them all. What was very surprising about this research is how quickly the render occurred after the page was crawled.
The issue I was referencing today was one I originally submitted against the Ocaml.org open-source website's source code. I was quite surprised when I clicked the link and saw someone else's issue. The Wayback Machine shows the original issue from the tweet. It looks like they removed the entire project and then added a new one. I wonder how many references to GitHub will end up broken or out of context (similar to link rot on the general web).
[1] https://www.linkedin.com/feed/update/urn:li:activity:7224438...