Hacker News

What would you do if you required SEO enhancement AND dynamic loading of content? Are you supposed to just let that portion of the site go without indexing? Surely there are sites that have both requirements.

What's the alternative?



Progressive enhancement. It is a web development best practice.

You will find that "dynamic loading of content" doesn't automatically mean "no content served by HTML under any circumstances". This is an error perpetuated by these JavaScript-only frameworks.

For example, bustle.com - there is absolutely no customer experience reason for the website not to have the content loaded at the HTML layer and then progressively enhanced with the customer experience additions. The content here isn't tied exclusively to the behaviour layer.


"The content here isn't tied exclusively to the behaviour layer."

Can you elaborate on a situation where the content and the behaviour are tied together, and what you would do in that case?

From my understanding, Facebook's BigPipe loads content in modules to reduce user perceived latency. If I'm building X site and wanted that same behavior (since I've heard on several occasions that there is a direct correlation between page load times and user engagement), is my only option to sacrifice SEO?


Facebook's BigPipe is nothing more than a client-side hack to work around a limitation of their server-side architecture.

Both Yahoo and Amazon - that I personally know of - have an infrastructure where components on the page are rendered separately and in parallel, and are stitched together on the HTML layer. The render time is then down to the rendering time of the slowest component, or the slowest dependency chain of components.

Loading content in with JavaScript after the HTML page load is always going to be slower, and perceivably so. Look at Twitter and Airbnb: both have written about how much faster they get content to the user using progressive enhancement.

If you decide that the HTML layer isn't the right layer for content, you are working against the strengths of the Web. And of course, that leads down a path where you sacrifice SEO and robustness.

Your time is better spent figuring out why your server takes so long to generate content, and putting in steps to reduce the server-side labour.

The JavaScript include approach isn't quicker. Bustle.com for example, takes 10 seconds to show the first page - that's horrific.


Okay, I'm sold on HTML being where the content should be on page load. My next question is are there any frameworks that can assist with this stitching together of content? I'm afraid my ignorance is showing here, but I can't off the top of my head list any.

The web paradigm that I've grown up with is the single-threaded dynamic content generation one, most recently using MVC, but with any server-side logic. The concept of parallel rendering of content and a "stitching" together of HTML is new to me.

I'm also curious as to the best practices surrounding page linking when the behavior specifies something like no screen flash. It seems like all that content (like, the whole page) would have to be loaded with AJAX, and then you're right back where you started with loading content with JS. Maybe it's forgivable as long as the initial page load returns a complete set of content?

Perhaps there is a place where I can learn more about the logistics of progressive enhancement.

Thanks for being willing to answer my questions about this. It's something I've always wanted hashed out in my head from an opponent of these frameworks.


"are there any frameworks that can assist with this stitching together of content"

Mainstream, no. These are not typical use-cases for sites until they reach a gigantic scale.

Plus, even before you get to that level, there's heaps you can do with caching at different levels, pre-calculating and pre-generating. There are so many optimisations at various levels of your stack, and then there's scaling across hardware. WordPress and Wikipedia don't need parallelisation of HTML components yet.

"The concept of parallel rendering of content and a "stitching" together of HTML is new to me."

It was new to me until I joined Yahoo. The approach is to break a page down into modular, independent sections, run some sort of parallelisation process, and, when all the responses are received, render the page with those generated components.

There's probably a variety of hacks in each major platform that will allow things to be parallelised. If you want to parallelise at the HTTP level, then PHP's curl_multi is an option: http://php.net/manual/en/function.curl-multi-init.php

It's probably possible to cobble together something with node.js too. Node.js receives the HTTP request, turns that into a series of service calls that return HTML, makes those calls asynchronously, waits for them all to return (this is where Node.js excels), then renders an HTML page skeleton, replacing placeholders with the responses from the various services. With a decent promise library that waits for a number of calls to finish, this is quite a compact approach, I guess.
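A minimal sketch of that Node.js approach, with the service calls stubbed as local async functions; the component names and the `{{placeholder}}` syntax here are made up for illustration:

```javascript
// Each "service" would be an HTTP call returning a rendered HTML
// fragment; they are stubbed as local async functions here.
const components = {
  header: async () => '<header>Site header</header>',
  article: async () => '<article>Main content</article>',
  comments: async () => '<section>Comments</section>',
};

async function renderPage(skeleton, components) {
  // Kick off every component request at once...
  const names = Object.keys(components);
  const fragments = await Promise.all(names.map((name) => components[name]()));
  // ...then replace each {{name}} placeholder with its fragment.
  return names.reduce(
    (html, name, i) => html.replace(`{{${name}}}`, fragments[i]),
    skeleton
  );
}

const skeleton = '<html><body>{{header}}{{article}}{{comments}}</body></html>';
```

The total wait is roughly the slowest single call rather than the sum of all calls, which is the whole point of the parallel stitch.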

Almost anything that allows asynchronous operations using resources outside the current request handler can be fashioned into a component-parallelisation stack.

"I'm also curious as to the best practices surrounding page linking when the behavior specifies something like no screen flash."

No screen flash is impossible, due to the nature of the Web. The browser has to receive the HTML first before it can know what dependent resources are needed. The problem you are trying to solve here is to minimise the perceived time between the HTML arriving and enough of the CSS loading for an incremental render to paint a close-enough-to-look-complete page.

Loading the content after the CSS is one way of doing that, which replaces the screen-flash delay with a blank screen. That's the JavaScript-app approach.

I don't like that, because it delays the appearance of content.

The perception of screen flash can be minimised, mostly by decreasing the amount of traffic crossing the wire until a good-enough repaint can happen. There are various tricks and hacks for minimising this, but due to the nature of the web it cannot be completely eliminated using Web technologies; it can only be traded for other issues.

Tricks I'd consider: reduce the amount of CSS needed to render the page, and break the CSS up into a primary rendering and a secondary, more detailed rendering. The primary rendering is just a basic layout plus styling for the main elements. Perhaps a careful inline style or two, and judicious display: nones and overflow: hiddens to minimise page assets moving around as incremental CSS rendering happens. Also, if you want to get serious, techniques for deferring CSS, JavaScript and images of content outside the current viewport are an option. Yahoo loved deferring the loading of avatar images in an article's comments area until after onload. I've seen that technique used on tech publication websites, though I can't remember a specific site off the top of my head.
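A sketch of that image-deferral technique (not Yahoo's actual code): images ship with a hypothetical data-src attribute instead of src, and a tiny script promotes them once the main page has loaded.

```javascript
// Promote deferred images: copying data-src into src triggers the
// real download. `images` can be any list of element-like objects
// with a `dataset.src` property.
function loadDeferredImages(images) {
  for (const img of images) {
    if (img.dataset && img.dataset.src) {
      img.src = img.dataset.src;
      delete img.dataset.src;
    }
  }
  return images;
}

// In the browser, run it after the main page load has finished:
// window.addEventListener('load', () =>
//   loadDeferredImages(document.querySelectorAll('img[data-src]')));
```

Until the script runs, the images cost nothing on the wire, which is exactly what you want for content below the fold.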

"Perhaps there is a place where I can learn more about the logistics of progressive enhancement."

The process is about thinking about a site one layer at a time. Get it functional at each level: HTML with links and form posts, the CSS presentational level, then JavaScript enhancements and usability improvements. Like building a skyscraper, you get the foundation right first.
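A tiny, hypothetical sketch of that layering: the HTML form works through a plain POST on its own, and the script only takes over when the behaviour layer is actually available.

```javascript
// Upgrade a working HTML form to an in-page submit, but only when
// the environment supports it. If enhancement isn't possible, the
// form's plain action/method post remains the fallback.
function enhanceForm(form, submitFn) {
  if (typeof form.addEventListener !== 'function') return false;
  form.addEventListener('submit', (event) => {
    event.preventDefault(); // keep the page, skip the full reload
    submitFn(form.action, form.method);
  });
  return true;
}
```

The key property is that nothing breaks when the script never runs: the enhancement is additive, not load-bearing.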

But before that, it takes understanding what the core use-cases for the site are. This is about tasks a visitor can complete, something that's tied into key product indicators and metrics. I doubt bustle.com uses page loading performance as its primary business success factor. It is more likely to be about customer activity: how long did they visit, how many articles did they read, any social activity.

It's figuring out the primary services and functionality of the site, and building that to not rely on JavaScript, or CSS. Primary services are those that, if you didn't provide them, you'd have no business.

For secondary functionality and use-cases, those that complement or are related to the primary functionality, it can be argued on a case-by-case basis whether a quick Ajax solution is sufficient. But most of the time, when you get progressive enhancement right, it becomes just a natural development technique.

Gov.uk have a good explanation of progressive enhancement here: https://www.gov.uk/service-manual/making-software/progressiv...



