This is called the actor model, and we actually implemented it in a prototype game engine once. We made one thread per CPU core and collected messages in "cycles" - where you'd process a batch of messages in parallel, which could produce the next batch. Messages would be sorted/grouped by target object address and then partitioned among the thread pool for execution. Most of the architecture was wait-free (no locks/atomics) so it really was quite fast.
This resulted in a C++ framework where you could write 'typical' OOP code and it would automatically be deconstructed into call-graphs and safely scheduled for execution across any number of threads in a completely deterministic way, without locks.
We abandoned the prototype because the "typical" OO behaviour of having deep call graphs is not actually something we want to keep, making the purpose of the framework kind of shaky.
Also, while performance was good, the characteristics weren't quite suitable for games -- it worked best if you had large objects, receiving multiple messages each, doing computationally complex work. In our games though, we typically have huge numbers of very simple objects, so batching messages by class instead of by object typically gives the best performance.
Follow-up articles in that series are going to show how to make it multi-threaded and OO-compliant.
"Typical" bad OOP code is a threading nightmare because the flow of control and flow of data becomes impossible to follow - it's all just random objects calling random objects calling random objects. Not spaghetti code, but spaghetti flow.
To write threadable OOP, you need to avoid fine-grained polymorphism and deep call graphs. Instead of doing work immediately by calling out to the next object, return some results to your caller and let them make the call. This also allows your caller to collect large batches of work à la DOD, which often actually increases performance (much better I$ and D$ usage) AND is much closer to old-school OO message-passing theory, and it helps in writing simple, decoupled components.
I'd go so far as to say that even if you're not targeting multiple threads, accounting for them in this way (NOT the stupid "just put a lock on each object" way) results in much better/simpler code.
It's because every single article that's promoting ECS does so by comparing it against incorrect inheritance-based code.
The number of times you see newbies jumping on the ECS bandwagon because "inheritance is bad", while they don't yet understand OOP, ECS, procedural, relational, or functional approaches... is infuriating.
Do both. Teach how to use composition. Teach how to use the relational model. Don't avoid teaching either by using misleading sales tactics.
The fact that the original code is a straw-man is discussed. The fact that the flexibility is being removed is discussed too, along with why it was there, and the article touches on better ways to re-achieve it later. It's mentioned that these were going to be covered in a follow-up.
You need to work on your speed reading skills before throwing shade...
> The fact that the original code is a straw-man is discussed.
Yes, but instead of timing it against the improvements the grandparent author made to the straw man, you still timed it against his straw man.
So I'll quote you: "You need to work on your speed reading skills before throwing shade..."
> The fact that the flexibility is being removed is discussed too
You state: "Why do these frameworks exist then? Well to be fair, they enable dynamic, runtime composition. Instead of GameObject types being hard-coded, they can be loaded from data files. This is great to allow game/level designers to create their own kinds of objects... However, in most game projects, you have a very small number of designers on a project and a literal army of programmers, so I would argue it's not a key feature."
Which fails to recognize that the whole point of the original code - written by a game engine developer - is to demonstrate exactly how to do this with both performance and flexibility for any small team.
> along with why it was there and touches on better ways to re-achieve it later. It's mentioned that these were going to be covered in a follow-up.
Without it being there, I cannot judge it. But the fact that you removed features and barely managed to match the ECS for performance is... not promising.
The point of an ECS is to be a very efficient solution to the run-time composition problem. You removed run-time composition and still couldn't technically outperform it with your solution. You are comparing apples to oranges and still losing on arguably the single most important aspect of the entire problem space!
Welcome to HN! (I'm a moderator here.) Please stick to civil, substantive comments, regardless of how wrong someone else is (or you feel they are). Swipes like "calm your titties" (or even telling people to "work on their reading skills", as you did upthread) are the sort of thing we ban accounts for. We're trying hard to prevent this place from sinking into a toxic swamp, as so much of the internet has become. Doom is probably inevitable in the long run, but we're still hoping to stave it off for a while.
No, some logic simply isn't cleanly decomposable. Plus, the main problem here is that objects let you get away with implicit state (e.g. if (this.x == 0) doA(); else doB();) for long enough that by the time you realize you need explicit state, it's usually distributed quite a bit.