This very much depends on the SQL engine you are talking about - many early SQL engines literally compiled stored procedures and didn't allow the dynamism you imply, and some still offer such features.
Some SQL engines are more sensitive to this problem than others (depending on whether they cache plans in the first place) - SQL Server famously uses parameter sniffing for performance, which has the positive implication of skipping work, and the negative one of skipping work you might actually need to do.
A stored procedure and a prepared statement aren't the same, though. I'm not sure how persuasive an argument from how one is optimized is meant to be for the other.
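To make the distinction concrete, here's a minimal sketch of the prepared/parameterized-statement side using Python's sqlite3 (the table name `t` is made up for illustration). Note that SQLite doesn't have stored procedures at all, which rather underscores that the two concepts are separate: a parameterized statement is just fixed SQL text with placeholders, compiled client-session-side and re-bound per call.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

# A parameterized statement: the SQL text with placeholders stays fixed,
# only the bound values change from call to call. sqlite3 keeps an
# internal cache of compiled statements keyed on the SQL text, so
# repeating the same text reuses the compiled form.
stmt = "INSERT INTO t (val) VALUES (?)"
for v in ("a", "b", "c"):
    conn.execute(stmt, (v,))

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

How an engine optimizes a statement like this (plan caching, sniffing the bound values, etc.) is a separate question from how it compiles a stored procedure, which is the point above.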
My experience has long been that in almost every case where a "database performance" problem occurs while an ORM is in use, presumptively refactoring to eliminate the ORM almost immediately reveals a buried n+1 or other trivially pathological pattern. Even when the ORM's implementor knows to try to avoid that pattern, the ORM's interface typically is not expressive enough to disambiguate it in any case. (Fair do's: in the uncommon case where the interface is that expressive, one does usually find the implementation accounts for it.)
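For anyone unfamiliar with the n+1 shape, here's a minimal sketch using Python's sqlite3 (the authors/books schema is made up for illustration). The first version is the query pattern a lazy-loading ORM will often generate behind an innocent-looking loop; the second is the single join you'd write by hand.

```python
import sqlite3

# Hypothetical schema just for the demo: authors and their books.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'A'), (2, 'B');
    INSERT INTO book VALUES (1, 1, 'X'), (2, 1, 'Y'), (3, 2, 'Z');
""")

# n+1 shape: one query for the parent rows, then one more per parent.
# That's 1 + len(authors) round trips, scaling linearly with row count.
authors = conn.execute("SELECT id, name FROM author ORDER BY id").fetchall()
n_plus_1 = []
for author_id, name in authors:
    titles = [t for (t,) in conn.execute(
        "SELECT title FROM book WHERE author_id = ? ORDER BY id",
        (author_id,))]
    n_plus_1.append((name, titles))

# Equivalent single query with a join: one round trip, regardless of
# how many authors there are.
joined = conn.execute("""
    SELECT a.name, b.title
    FROM author a JOIN book b ON b.author_id = a.id
    ORDER BY a.id, b.id
""").fetchall()
```

Against a local in-memory SQLite the difference is invisible; against a real server, each of those per-author queries is a network round trip, which is exactly the pathology that tends to surface once you pull the query out of the ORM and look at it.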
Hence my earlier reference to the "Vietnam" paper, which dissects in extensive (if unfortunately mostly fruitless) detail the sunk cost fallacy at the heart of this problem.
(Belatedly revisiting after the edit window to note, when I say 'refactoring to eliminate...' above, I should be clear I mean experimentally eliminating ORM calls for the poorly performing query and not across an entire codebase, or any other such ocean-boiling nonsense. In a sane world this would all be reasonably implicit, but we live here, so.)