I've noticed this as well, though you can sometimes avoid it by being more explicit and actually saying things like "can you write these endpoints in a way that ___, ___, and ____." Or I'll mention context that I'm worried the LLM will miss (for example, pointing out that functions already exist for certain things).
The broader a request is, the more likely I am to get a bunch of bloat. I think this is partly because the LLM will always try to solve the entire problem from your initial prompt. Instead of stopping to ask for clarification, it'll just move forward with something that technically works. I find it's better to break things into smaller steps so that you can "intervene" if it starts to go wrong.
I think one issue you can run into with clever abstractions is that they can be harder to fix/change when something is wrong with their fundamental assumptions (or those assumptions change later). Something like this happened at my work a while back. If I had written the code, it would probably have been a few really long/ugly functions, and the fix would have only meant changing a few lines in and after the SQL query. Instead, the logic was so deeply intertwined with the code structure that there was no simple way to fix it short of straight-up rewriting it (it was written in a functional style, with a bunch of functions taking other functions as arguments and returning functions as output, which also made debugging really tough).
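To illustrate (a made-up Python sketch with hypothetical names, not the actual code from work), this is roughly the shape I mean: the behavior lives in how the functions compose, so changing one baked-in assumption means re-plumbing the chain instead of editing a couple of lines:

    # Hypothetical sketch: functions that take functions and return functions.
    from functools import reduce

    def filter_by(pred):
        # Returns a function that filters rows.
        def step(rows):
            return [r for r in rows if pred(r)]
        return step

    def map_field(key, fn):
        # Returns a function that rewrites one field on every row.
        def step(rows):
            return [{**r, key: fn(r[key])} for r in rows]
        return step

    def pipeline(*steps):
        # Composes the steps into yet another function.
        return lambda rows: reduce(lambda acc, step: step(acc), steps, rows)

    process = pipeline(
        filter_by(lambda r: r["active"]),
        map_field("total", lambda t: round(t * 1.1, 2)),  # assumption baked into the composition
    )

    print(process([{"active": True, "total": 100.0}, {"active": False, "total": 50.0}]))
    # [{'active': True, 'total': 110.0}]

The "boring" version is one long function with a loop, where a wrong assumption is usually a two-line diff.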
It also depends on how big the consequences of failure/bugs are. Sometimes bugs just aren't a huge deal, so it's a worthwhile trade-off to make development easier in exchange for potentially increasing the chance of them appearing.
I think one issue is that some people just find very different things intuitive. Low cognitive load for one person might be high cognitive load for another.
Because of some quirk of the way my brain works, giant functions with thousands of lines of code don't really present a high cognitive load for me, while lots of smaller functions do. My "working memory" is very low (so I have trouble seeing the "big picture" while hopping from function to function), while "looking through a ton of text" comes relatively easily to me.
I have coworkers who tend to use functional programming, and even though it's been years now and I technically understand it, it always presents a ton of friction for me: I have to stop and spend a while figuring out exactly what the code is saying (and "mentally translating" it into a form that makes more sense to me). I don't think this is necessarily because their code inherently presents a higher cognitive load - I think it's easier for them to mentally process it, while my brain has an easier time looking at a lot of lines of code, provided the logic within is very simple.
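For a tiny (made-up) example of the kind of translation I mean:

    # Hypothetical example, just to show the two styles side by side.
    from functools import reduce

    nums = [1, 2, 3, 4, 5, 6]

    # The style my coworkers would write - I technically understand it,
    # but I have to stop and decode it:
    total = reduce(lambda acc, n: acc + n,
                   map(lambda n: n * n,
                       filter(lambda n: n % 2 == 0, nums)),
                   0)

    # The form I "mentally translate" it into - more lines, simpler logic:
    total = 0
    for n in nums:
        if n % 2 == 0:
            total += n * n

    print(total)  # 56 either way

Neither version is objectively "lower load"; it just depends which one your brain parses for free.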