> I was taught that to allocate memory was to summon death itself to ruin your performance. A single call to malloc() during any frame is likely to render your game unplayable. Any sort of allocations that needed to happen with any regularity required writing a custom, purpose-built allocator, usually either a fixed-size block allocator using a freelist, or a greedy allocator freed after the level ended.
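For readers who haven't seen one, the "fixed-size block allocator using a freelist" the quote mentions is tiny: a static pool carved into equal blocks, with free blocks chained through their own first bytes. This is a minimal sketch (all names and sizes are made up here, not from the article); alloc and free are O(1) pointer swaps with no call to malloc().

```c
#include <stddef.h>

#define BLOCK_SIZE  64
#define BLOCK_COUNT 256

static unsigned char pool[BLOCK_COUNT][BLOCK_SIZE];
static void *free_head;

static void pool_init(void) {
    free_head = NULL;
    for (size_t i = 0; i < BLOCK_COUNT; i++) {
        /* the first bytes of a free block hold the "next" pointer */
        *(void **)pool[i] = free_head;
        free_head = pool[i];
    }
}

static void *pool_alloc(void) {
    void *b = free_head;
    if (b) free_head = *(void **)b;  /* pop the freelist */
    return b;
}

static void pool_free(void *b) {
    *(void **)b = free_head;         /* push back onto the freelist */
    free_head = b;
}
```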
Where do people get their opinions from? It seems like opinions now spread like memes - someone you respect, or who has done something notable, says it, and you repeat it without verifying any of their points. It seems like gamedev has the biggest "C++ bad and we should all program in C" community out there.
If you want a good malloc impl just use tcmalloc or jemalloc and be done with it
I'm a sometimes real-time programmer (not games - sound, video, and cable/satellite crypto). malloc(), even on Linux, is anathema to real-time coding, because deep in the malloc libraries are mutexes that can cause priority inversion. If you want to avoid the sorts of heisenbugs that occur once a week and cause weird sound burbles, you don't malloc on the fly - instead you pre-allocate from non-real-time code and run your own buffer lists.
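A hedged sketch of that "pre-allocate, then run your own buffer lists" pattern (names and sizes are invented for illustration): all the malloc() calls happen at init, before the thread goes real-time, and the real-time path only moves indices around. This single-threaded stack would need a wait-free queue to hand buffers between threads.

```c
#include <stdlib.h>

#define NBUF     16
#define BUF_SIZE 4096

typedef struct {
    void *buf[NBUF];
    int   free_idx[NBUF];
    int   top;                /* stack of free buffer indices */
} buf_pool;

/* Called from non-real-time code: the only place malloc() appears. */
static int buf_pool_init(buf_pool *p) {
    p->top = 0;
    for (int i = 0; i < NBUF; i++) {
        p->buf[i] = malloc(BUF_SIZE);
        if (!p->buf[i]) return -1;
        p->free_idx[p->top++] = i;
    }
    return 0;
}

/* Real-time safe in this sketch: no syscalls, no locks, no allocation. */
static void *buf_get(buf_pool *p) {
    return p->top ? p->buf[p->free_idx[--p->top]] : NULL;
}

static void buf_put(buf_pool *p, void *b) {
    for (int i = 0; i < NBUF; i++)
        if (p->buf[i] == b) { p->free_idx[p->top++] = i; return; }
}
```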
Mutexes shouldn't be able to cause priority inversion: there's enough information there to resolve the inversion (you know the priority of every thread waiting on the mutex) unless the scheduler doesn't care to. I guess I don't know how the Linux scheduler works, though.
But it's not safe to do anything with unbounded time on a realtime thread, and malloc takes unbounded time. You should also mlock() any large pieces of memory you're using, or at least touch them first, to avoid swapins.
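The mlock()/pre-touch advice above can be sketched like this (a best-effort illustration, not anyone's production code - the function name is made up). mlock() pins the pages; if it fails (e.g. RLIMIT_MEMLOCK), the fallback is the "at least touch them first" option from the comment, which faults every page in before the real-time thread ever reads it.

```c
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

static void *rt_buffer(size_t size) {
    void *p = malloc(size);   /* at startup, never in the RT thread */
    if (!p) return NULL;

    /* Best effort: may fail without CAP_IPC_LOCK / a raised `ulimit -l`,
     * in which case we fall back to just pre-touching the pages. */
    mlock(p, size);

    long page = sysconf(_SC_PAGESIZE);
    for (size_t off = 0; off < size; off += (size_t)page)
        ((volatile char *)p)[off] = 0;  /* fault in every page now */
    return p;
}
```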
If you have to wait on a mutex to get access to a shared resource (like the bookkeeping inside your malloc's heap), then you have to wait in order to make progress - and if the thread holding it is at a lower priority and gets preempted by something lower than you but higher than it, then you can't make progress (unless your mutex gives the holding thread a temporary priority boost when a higher-priority thread contends for the mutex, i.e. priority inheritance).
(this is not so much an issue with linux but with your threading library)
I'm completely in agreement that you shouldn't be mallocing - that was kind of my point. If you just got a key change from the cable stream and you can't get it decoded within your few-millisecond window before the on-the-wire crypto changes, you're screwed. (I chased one of these once that only happened once a month, when you paid your cable bill .....)
> (this is not so much an issue with linux but with your threading library)
If your threading library isn't capable of handling priority inheritance, then it's probably Linux's fault for not making that easy enough to do. This is a serious issue on AMP (aka big.LITTLE) processors: if everything waits on the slow cores with no inheritance, then everything will be slow.
Aside from the performance implications being very real (even today, the best first step to micro-optimize is usually to kill/merge/right-size as many allocations as possible), up through ~2015 the dominant consoles still had very little memory and no easy way to compact it. Every single non-deterministic malloc was a small step towards death by fragmentation. (And every deterministic malloc would see major performance gains with no usability loss if converted to e.g. a per-frame bump allocator, so in practice any malloc you were doing was non-deterministic.)
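The "per-frame bump allocator" mentioned above is about as simple as allocators get - a sketch under assumed names and sizes: every allocation is one cursor bump, failure is deterministic (no fragmentation), and the whole frame's memory is freed at once by resetting the cursor.

```c
#include <stddef.h>

#define ARENA_SIZE (1u << 20)   /* 1 MiB frame arena */

static unsigned char arena[ARENA_SIZE];
static size_t        cursor;

static void *frame_alloc(size_t n) {
    size_t aligned = (cursor + 15) & ~(size_t)15;  /* 16-byte align */
    if (aligned + n > ARENA_SIZE) return NULL;     /* deterministic failure */
    cursor = aligned + n;
    return arena + aligned;
}

/* Called once at end of frame: "frees" everything at once. */
static void frame_reset(void) { cursor = 0; }
```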
If this person was taught game dev any time before about 2005, that would have still been relevant knowledge. Doing a large malloc or causing paging could have slaughtered game execution, especially during streaming.
> Doing a large malloc or causing paging could have slaughtered game execution, especially during streaming.
... it still does? I had a case a year or so ago (on then-latest Linux / GCC / etc.) where a very sporadic allocation of 40-something bytes (very exactly: inserting a couple of int64s into an unordered_map at the wrong time) in a real-time thread was enough to go from "ok" to "unusable".
Modern engines generally have a memory handler, which means that mallocs are usually couched in some type of asset management. You are also discouraged from suddenly extending the working memory of the scene. When I was doing gamedev, even then, there was no reason for a big malloc, because everything was already done for you, with good guardrails.
If you go way back into the archives of the blog's author, probably about ten years now, you will find another memory-related rant on how multisampled VST instrument plugins should be simple and "just" need mmap.
I did, in fact, call him out on that. I did not know exactly how those plugins worked then (though I have a much better idea now), but I already knew that it couldn't be so easy. The actual VST devs I shared it with concurred.
But it looks like he's simply learned more ways of blaming his tools since then.
As always, there is some truth to it - the problem with the MSVCRT malloc described in this blog article is living proof of that - but these days it's definitely not a rule that holds in 100% of cases. Modern allocators are really fast.