I agree with starting with a monolith and a shared database; I’ve done that quite successfully in the past. I would just add that if scaling becomes an issue, I wouldn’t consider sharding my first option: it’s more of a last resort. I would prefer to scale the shared database vertically and optimize it as much as possible. Another strategy I’ve adopted is avoiding `JOIN` and `ORDER BY`, as they stress your database’s precious CPU and I/O. `JOIN` also adds coupling between tables, which I find hard to refactor once it’s in place.
I don't understand: how do you avoid `JOIN` and `ORDER BY`?
Well, with `ORDER BY`, if your result set isn't huge, you can just sort it on the client side, although sorting 100 rows on the database side isn't expensive either. But if you need, say, the latest 500 records out of a million (a very common use case), you have to sort on the database side. Also, with proper indices, the database can sometimes avoid any explicit sort.
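To illustrate that last point, here's a minimal sketch using SQLite from Python (hypothetical `events` table and index names; other engines behave similarly). With an index on the sort column, the query plan walks the index in order instead of building a temporary sort structure, so `ORDER BY ... LIMIT 500` never materializes or sorts the full million rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, created_at INTEGER, payload TEXT)"
)
conn.execute("CREATE INDEX idx_events_created_at ON events(created_at)")

# Ask SQLite how it would execute the "latest 500" query.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id, payload FROM events ORDER BY created_at DESC LIMIT 500"
).fetchall()

detail = " ".join(row[-1] for row in plan)
print(detail)
# The plan scans idx_events_created_at backwards; there is no
# "USE TEMP B-TREE FOR ORDER BY" step, i.e. no explicit sort.
```

Without the index, the same `EXPLAIN QUERY PLAN` shows a temp B-tree sort step, which is exactly the CPU/IO cost being discussed above.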
Do you just prefer to duplicate everything in every table instead of `JOIN`ing them? I've done some denormalization to improve performance, but that was the last thing I would do, only when there was no other recourse, because it makes it very likely that the database will contain logically inconsistent data, and that causes lots of headaches. Fixing bugs in software is easier. Fixing bugs in data is hard and requires a lot of analytical work, and sometimes manual work.
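A tiny sketch of the inconsistency risk described above (hypothetical `users`/`orders` schema, SQLite via Python): the user's name is copied onto each order at write time so reads never need a `JOIN`, but once the canonical row is updated and someone forgets to update the copies, the database holds two "truths":

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         user_id INTEGER,
                         user_name TEXT);  -- denormalized copy of users.name
""")

conn.execute("INSERT INTO users VALUES (1, 'Alice')")
# Write path duplicates the name so the read path can skip the JOIN:
conn.execute("INSERT INTO orders VALUES (10, 1, 'Alice')")

# Later the user is renamed, but nothing refreshes the copy in orders:
conn.execute("UPDATE users SET name = 'Alicia' WHERE id = 1")

canonical = conn.execute("SELECT name FROM users WHERE id = 1").fetchone()[0]
copied = conn.execute("SELECT user_name FROM orders WHERE id = 10").fetchone()[0]
print(canonical, copied)  # the two values now disagree
```

Reconciling that kind of drift after the fact is the "analytical and manual work" mentioned above: a code bug is one fix and a deploy, while stale copies have to be found and repaired row by row.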