Have you actually studied the JSONB capabilities of PostgreSQL and related features such as partial indexes? These things can be used at scale and there is at least one company that has millions of partial indexes in production.
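The idea scales down, too. Here's a sketch of the pattern using Python's stdlib sqlite3 (SQLite also has JSON functions and partial indexes, so it's runnable anywhere; in Postgres you'd use a jsonb column, the `->>` operator, and the same `CREATE INDEX ... WHERE` shape). Table and field names are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, body TEXT)")  # JSON kept as text

# Partial index: only rows whose status is 'open' get indexed, so the
# index stays tiny even if the table is huge -- the trick the comment
# above is referring to.
conn.execute("""
    CREATE INDEX idx_open_user ON events (json_extract(body, '$.user'))
    WHERE json_extract(body, '$.status') = 'open'
""")

conn.execute("INSERT INTO events (body) VALUES (?)", ('{"user": "a", "status": "open"}',))
conn.execute("INSERT INTO events (body) VALUES (?)", ('{"user": "b", "status": "closed"}',))

rows = conn.execute(
    "SELECT json_extract(body, '$.user') FROM events "
    "WHERE json_extract(body, '$.status') = 'open'"
).fetchall()
print(rows)  # [('a',)]
```

The "millions of partial indexes" trick is a generalization of this: one small index per tenant or per hot predicate, instead of one giant index over everything.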
And you might want to look at how Salesforce has implemented a highly scalable architecture on top of Oracle RDBMS servers. That includes a document store as well as a dynamic denormalized schema layer that works really well for reporting.
Heck, I've implemented a persistent memcache server on top of SQLite and it scaled pretty well, because modern servers and network infrastructure just happen to do most of the work needed.
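The core of a setup like that is small. This is not the commenter's actual implementation, just a minimal sketch of a persistent memcached-style get/set store over SQLite; the class name and schema are hypothetical:

```python
import sqlite3

class SQLiteCache:
    """Toy persistent key-value cache backed by a single SQLite table."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, value BLOB)"
        )

    def set(self, key, value):
        # INSERT OR REPLACE gives memcached-style last-write-wins set semantics.
        self.conn.execute("INSERT OR REPLACE INTO cache VALUES (?, ?)", (key, value))
        self.conn.commit()

    def get(self, key):
        row = self.conn.execute(
            "SELECT value FROM cache WHERE key = ?", (key,)
        ).fetchone()
        return row[0] if row else None

cache = SQLiteCache()
cache.set("greeting", b"hello")
print(cache.get("greeting"))  # b'hello'
```

A real server would put a network protocol and some concurrency handling in front of this, but the point stands: the database engine does most of the heavy lifting.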
It is not easy to learn PostgreSQL well enough to evaluate it, but more and more people are doing so and finding that it is the ideal open source persistence solution for the vast majority of business uses.
Honestly you can say any of these things about MongoDB. Their marketing team sure does. Did you know MongoDB is web scale?
I'm not insulting Postgres (actually a big fan), but Postgres added JSON support years after MongoDB shipped. By that metric, the document store in Postgres is less tried-and-true than MongoDB. Does that mean it's the wrong choice? No. Age of product is not always a good qualifier.
I'll take "less mature" over "data loss" every day and twice on Sunday.
Whatever else you might say about Postgres, that community cares about the integrity of its users' data over pretty much every other consideration. I don't think the same can be said of Mongo, or any other "NoSQL" anything.
You can reimplement 95% of the "document model" with a relational database just fine, and you'll actually come out ahead given all its advantages.
Unstructured documents are very rare. Querying arbitrarily against them is even rarer. For the fields in a document that do need querying, pull them out into their own columns and add a few indexes and you're good. That's been my experience, and I've been around a while.
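That "pull the queried fields out" pattern looks something like this. A hypothetical sketch (table, column, and field names are invented), again using stdlib sqlite3 for runnability, though the same schema works in any relational database:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")

# Promote the two fields we actually query (customer_id, status) to real,
# indexed columns; keep the rest of the document as an opaque JSON blob.
conn.execute("""
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id TEXT NOT NULL,
        status      TEXT NOT NULL,
        doc         TEXT NOT NULL
    )
""")
conn.execute("CREATE INDEX idx_cust_status ON orders (customer_id, status)")

doc = {"customer_id": "c42", "status": "shipped", "items": [{"sku": "x", "qty": 2}]}
conn.execute(
    "INSERT INTO orders (customer_id, status, doc) VALUES (?, ?, ?)",
    (doc["customer_id"], doc["status"], json.dumps(doc)),
)

# Fast, indexed lookup on the promoted columns; full document comes along.
rows = conn.execute(
    "SELECT doc FROM orders WHERE customer_id = ? AND status = ?",
    ("c42", "shipped"),
).fetchall()
print(json.loads(rows[0][0])["items"][0]["qty"])  # 2
```

You keep the flexibility of a document payload while the handful of hot fields get normal relational indexing, constraints, and query planning.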