This is a great point. It might seem extreme, but I would advocate never using the User-Agent string to make decisions about what to serve a client. There is too much hackery and history that clouds up the User-Agent (such as every browser identifying itself as Mozilla), and it's almost always a proxy for something else that you actually want to test for.
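To illustrate why UA sniffing is so brittle: here's a minimal sketch (the User-Agent strings are real-world examples, the check is the kind people actually write) showing how a naive substring test misfires, since every major browser claims to be "Mozilla" and Chrome's string also contains "Safari".

```python
# Real-world User-Agent strings: every major browser says "Mozilla",
# and Chrome's string also contains "Safari".
chrome_ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
             "AppleWebKit/537.36 (KHTML, like Gecko) "
             "Chrome/120.0.0.0 Safari/537.36")
safari_ua = ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
             "AppleWebKit/605.1.15 (KHTML, like Gecko) "
             "Version/17.0 Safari/605.1.15")

def looks_like_safari(ua):
    # The kind of check that seems reasonable but misidentifies Chrome.
    return "Safari" in ua

print(looks_like_safari(chrome_ua))  # True -- wrong conclusion
print(looks_like_safari(safari_ua))  # True
```

This is exactly the "proxy for something else" problem: what you usually want to know is whether a feature is available, not which brand name appears in the string.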
In some rare situations, it's unavoidable, but even then I'd urge trying to rearchitect the solution to avoid it.
That makes sense, though in an ideal world there would be an abstracted header that said "hey, I'm not gonna render JS the way a regular browser will, so send me something prerendered". Then you could write something that would actually be future-proof and work with other search engines.
The way Google suggests doing it actually seems a little nefarious, as it hard-codes the solution to Google instead of working for any search engine.
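The vendor-neutral version of this idea could be sketched as follows (the header name `X-Prefer-Prerendered` is invented here for illustration; no such standard header exists): the client declares it won't execute JS, and the server picks the variant to serve without ever touching the User-Agent.

```python
# Sketch of a hypothetical vendor-neutral scheme: the client sends an
# explicit "X-Prefer-Prerendered" header (invented name) instead of the
# server sniffing the User-Agent for known crawlers.
def choose_variant(headers):
    """Return which variant of the page to serve."""
    if headers.get("X-Prefer-Prerendered") == "1":
        return "prerendered-html"  # static snapshot for non-JS clients
    return "js-app"                # normal client-rendered page

print(choose_variant({"X-Prefer-Prerendered": "1"}))  # prerendered-html
print(choose_variant({}))                             # js-app
```

Any crawler, not just Google's, could opt in by sending the header, which is what makes it future-proof.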
You do realize the link you gave is from 2006? More recent recommendations do not include that.
[EDIT] OK, since I was downvoted I will clarify my point: https://developers.google.com/webmasters/ajax-crawling/docs/... This is the recommended practice for crawling JavaScript-generated pages; there is no need to look up spider IP addresses, as someone mentioned.
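For context, the scheme that doc describes (since deprecated by Google) works roughly like this: a "pretty" URL with a `#!` hash-bang maps to an "ugly" URL containing `_escaped_fragment_`, which the crawler fetches so the server can return a prerendered snapshot. A minimal sketch of the mapping (percent-encoding of the fragment is omitted for brevity):

```python
# Sketch of the AJAX crawling scheme's URL mapping: the crawler replaces
# "#!fragment" with "?_escaped_fragment_=fragment" (or "&..." if the URL
# already has a query string) and fetches that instead.
def to_crawler_url(pretty_url):
    if "#!" in pretty_url:
        base, frag = pretty_url.split("#!", 1)
        sep = "&" if "?" in base else "?"
        return base + sep + "_escaped_fragment_=" + frag
    return pretty_url

print(to_crawler_url("https://example.com/page#!key=value"))
# https://example.com/page?_escaped_fragment_=key=value
```

Note the server still has to recognize the `_escaped_fragment_` parameter, but it never has to guess at the client from the User-Agent.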