> When someone suggests this kind of thing, I'll ask them "how do we diagnose performance problems with this technology when there are 100,000 concurrent users and millions of data elements?"
I don't understand; the exact same performance diagnostics work in both cases. Why is this different? There's nothing intrinsically less performant about this approach. You really think your checkerboard tables and long lists of columns with names like "VALUE12" and "VALUE13", and the multiple different kinds of key/value pairs you jammed in there for different clients -- you think those perform better!?
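To make "the exact same performance diagnostics work in both cases" concrete, here's a minimal sketch (SQLite in memory; the table and column names are made up for illustration). The same diagnostic -- asking the planner with EXPLAIN QUERY PLAN -- answers the question equally well for a generic attribute table and a wide "VALUE12"-style table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Generic attribute table (the approach under discussion)
cur.execute("CREATE TABLE attrs (entity_id INTEGER, name TEXT, value TEXT, "
            "PRIMARY KEY (entity_id, name))")
# "Checkerboard" wide table with anonymous VALUE12/VALUE13-style columns
cur.execute("CREATE TABLE wide (entity_id INTEGER PRIMARY KEY, "
            "value12 TEXT, value13 TEXT)")

queries = [
    "SELECT value FROM attrs WHERE entity_id = 7 AND name = 'color'",
    "SELECT value12 FROM wide WHERE entity_id = 7",
]

# The same diagnostic works on both schemas: ask the planner.
plans = [cur.execute("EXPLAIN QUERY PLAN " + q).fetchall() for q in queries]
for plan in plans:
    print(plan)  # both report an indexed SEARCH, not a full-table SCAN
```

Both lookups resolve through an index (the composite primary key in one case, the rowid in the other), and the diagnostic workflow is identical either way.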
> 100,000 concurrent users
Do you actually have 100,000 concurrent users? Really? You don't, do you? You just kinda hope you will eventually. And again: this approach is not worse for that.
> millions of data elements
This is absolute peanuts for any modern database system. It's weird that this is your extreme example.
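For a sense of scale, here's a rough sketch (again SQLite in memory, with made-up attribute names): three million attribute rows, and an indexed point lookup still returns effectively instantly.

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE attrs (entity_id INTEGER, name TEXT, value TEXT, "
            "PRIMARY KEY (entity_id, name))")

# 1,000,000 entities x 3 attributes = 3 million "data elements"
cur.executemany("INSERT INTO attrs VALUES (?, ?, ?)",
                ((i, n, str(i)) for i in range(1_000_000)
                 for n in ("color", "size", "owner")))
con.commit()

start = time.perf_counter()
row = cur.execute("SELECT value FROM attrs "
                  "WHERE entity_id = 654321 AND name = 'size'").fetchone()
elapsed = time.perf_counter() - start
print(row, elapsed)  # indexed lookup; typically well under a millisecond
```

And that's an embedded single-file engine; a server-class database with proper indexes treats this volume as trivial.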