> I don't know realistically what Obj C uses in the osx cpu architecture but I'm almost certain it's not going to be as simple as just retargeting your compiler...
Objective C is in mainline GCC. You retarget your compiler and port the runtime. As far as I know, any target with GCC support can run Objective C, given a runtime. The important parts of the runtime require kilobytes, not megabytes, of space.
Remember that Objective C was originally a very thin layer on top of C. You take all the [method calls] and replace them with calls to objc_msgSend(). The overhead is fairly small: your object instances need the “isa” pointer, which is basically just a pointer to the class's method table (think vtable). The objc_msgSend() function reads that table and forwards the method call to the correct implementation.
My experience is that Objective C binaries typically have a much smaller footprint than C++ binaries. That’s obviously not some kind of rule, but it reflects idiomatic usage of Objective C. In C++ you'd use a std::vector<T> which is a templated class, which gets instantiated for every different T you use in your program. In Objective C, you’d use NSArray, which is monomorphic.
This all should be completely unsurprising: Objective C first appeared in the 1980s, on hardware far smaller than today's, so of course it doesn't need much memory.
> GCC doesn't do modern Objective-C, only what they got from NeXT days.
It's definitely a lot more than the NeXT days. GCC got fast enumeration, exceptions, garbage collection (if you really want it), synthesized properties, etc. These are the "Objective C 2.0" features, which were released in 2007, back before Apple was shipping Clang.
GCC doesn't have ARC, array / dictionary literals, or the other newer features that make up "modern" Objective C. But those are basically just sugar for the appropriate calls to +[NSArray arrayWithObjects:...] or -[retain] / -[release], etc., and what GCC supports is still definitely a lot more than Objective C from the NeXT days.
Debating whether some language feature can exist on an MCU is somewhat beside the point - the point was about code size bloat, not about whether you can fake one instruction set with another...
You can fake e.g. multiply with a for loop and an add instruction (1 or 2 cycles per add), but that will run orders of magnitude slower than a 2- or 3-cycle multiply instruction...
So the point about the OS X architecture is not really addressed: if you have a runtime that cannot be easily ported, or an OS X CPU architecture for which the generated instruction sequences are prohibitive, then it doesn't matter whether GCC can target an old version of Objective C. It is, as stated, not that simple.