Where this style applies the most is with hardcoded assets: initializations and runtime behaviors that require an unknown degree of flexibility.
When someone is trained in OOP, the requirement "flexible behavior" triggers a search for answers through polymorphism, but polymorphism is much more structured than a copy-paste-modify code path. It creates limits to scope and requires more explicit names for things. That is the point, of course. You have to write more code and do more complex things to get the same level of flexibility, and it'll be harder to debug since the call stack will have to jump around throughout the layers of abstraction. In a "time from idea to production code" latency analysis, OOP structures lose to copy-paste.
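To make the contrast concrete, here's a minimal sketch (all names hypothetical): the same two pickup behaviors, once through a polymorphic interface and once as copy-paste-modify functions. The second variant in each style is just the first with one line changed.

```python
from abc import ABC, abstractmethod

# OOP route: every variant needs an interface, a class, and a name.
class Pickup(ABC):
    @abstractmethod
    def apply(self, player: dict) -> None: ...

class HealthPickup(Pickup):
    def apply(self, player: dict) -> None:
        player["hp"] += 25

class SpeedPickup(Pickup):
    def apply(self, player: dict) -> None:
        player["speed"] += 1.5

# Copy-paste route: duplicate the function, tweak the line that differs.
def apply_health_pickup(player: dict) -> None:
    player["hp"] += 25

def apply_speed_pickup(player: dict) -> None:  # copied from above, one edit
    player["speed"] += 1.5
```

The copy-paste version goes from idea to working code faster; the polymorphic version buys you a bounded scope and a named contract, at the cost of the indirection described above.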
And that's why, across the bulk of game codebases I've seen, there tends to be a big jump from hardcoded assets straight into data-driven, ECS-style approaches. Small games start with the former, and if they get big enough they switch to the latter. The ECS approach still exacts a debugging penalty, because more bugs will live in data, where they're harder to analyze with IDE tooling; but more behaviors are encoded as explicit patterns with limited scope, which is good in a team environment, and it's possible to go data-driven in an incremental rewrite: find all the truly redundant stuff, replace it with a parameterized pattern, expose the pattern as data. For the stuff that isn't easily parameterized, consider writing a small state machine interpreter so that an imperative program definition can live in data. In the entity system, add some notion of dynamic composition of state and behavior at initialization. Between those three you can cover just about everything, and you never have to do it 100% to ship: it's there to assist the things that benefit from additional structure, which probably isn't all of the game. (Though in the AAA environment the data-driven approach can tend to get out of hand - it's a move that lets designers avoid needing explicit code support for longer and longer periods, with the predictable result of enterprise anti-patterns that abuse scripting in ways that are much harder to debug than the hardcoded equivalent.)
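The first step - replace redundant hardcoded stuff with a parameterized pattern, then expose it as data - might look like this sketch (enemy names and fields are hypothetical). What used to be three copy-pasted spawn functions becomes three rows in a table and one function.

```python
# The data that used to be spread across three near-identical functions.
ENEMY_DEFS = {
    "grunt":  {"hp": 10, "speed": 2.0, "attack": "melee"},
    "archer": {"hp": 6,  "speed": 1.5, "attack": "ranged"},
    "brute":  {"hp": 40, "speed": 0.8, "attack": "melee"},
}

def spawn_enemy(kind: str, x: float, y: float) -> dict:
    """One parameterized pattern, driven by the table above."""
    params = ENEMY_DEFS[kind]
    return {"kind": kind, "x": x, "y": y, **params}
```

From here, moving `ENEMY_DEFS` out to a JSON or config file is trivial, and adding a fourth enemy no longer touches code at all.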
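For the second step, a small state machine interpreter, one possible shape (everything here is an illustrative assumption, not a prescribed design) is a table of states, each with an action and a transition rule, so the imperative behavior lives in data rather than in a hand-written update function:

```python
# A behavior defined as data: a start state plus a state table.
PATROL_BRAIN = {
    "start": "walk_left",
    "states": {
        "walk_left":  {"action": lambda e: e.update(x=e["x"] - 1),
                       "next":   lambda e: "walk_right" if e["x"] <= 0 else "walk_left"},
        "walk_right": {"action": lambda e: e.update(x=e["x"] + 1),
                       "next":   lambda e: "walk_left" if e["x"] >= 3 else "walk_right"},
    },
}

def tick(brain: dict, entity: dict) -> None:
    """Interpreter: run the current state's action, then transition."""
    state = brain["states"][entity.setdefault("state", brain["start"])]
    state["action"](entity)
    entity["state"] = state["next"](entity)
```

In a real tool the lambdas would be replaced by opcode names or expressions stored in the data file; the point is that the interpreter stays tiny while the program definitions move into content.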
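And the third step, dynamic composition of state and behavior at initialization, can be sketched as follows (component names are hypothetical): an entity is assembled from a list of component factories named in data, rather than picked from a fixed class hierarchy.

```python
# Registry of component factories; data refers to these by name.
COMPONENTS = {
    "position": lambda: {"x": 0.0, "y": 0.0},
    "health":   lambda: {"hp": 100},
    "physics":  lambda: {"vx": 0.0, "vy": 0.0, "mass": 1.0},
}

def make_entity(component_names: list) -> dict:
    """Compose an entity's state at init time from named components."""
    return {name: COMPONENTS[name]() for name in component_names}
```

A pickup that can't be hurt is `make_entity(["position"])`; a crate with physics is `make_entity(["position", "physics"])` - new combinations come from data, not from new classes.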
This distinction is also basically why you don't see Jon Blow, for example, really get excited about discussing the runtime architecture of his game projects: if the bulk of the game is assets all the way down, there's nothing to talk about, and the only part that conceivably could be exciting is some of the core algorithms that drive the interactivity.
It's also why so many gamedevs are apologetic about their code: half of them try to escape this reality by finding a magic mix of abstractions (which eventually blows up and causes rewrites once sufficiently complex behavior hits the codebase), while the other half run with it and stay in the local optimum of inlined, hardcoded, primitive-heavy functions.
My experience 100% mirrors the code-vs-data trade-off. The games I work on are all data-driven from the start, but they also could never have been made hardcoded in the first place.