One thing I’ve always found tricky when reversing C64 code is self-modifying code; pretty much every game and demo uses STA to patch operands at runtime. Does the auto-analysis flag writes into code regions, or is that something you’d handle manually via the VICE debugger integration?
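For context, here's a minimal sketch of the kind of heuristic I mean — a naive static pass that flags STA absolute instructions whose operand address lands back inside the loaded code region. It's purely illustrative (byte-wise scan, made-up load address, no instruction-length tracking), not how any real disassembler does it:

```python
CODE_START = 0x0801  # typical C64 PRG load address; just an assumption here
STA_ABS = 0x8D       # 6502 opcode for "STA $hhll" (store A to absolute address)

def flag_self_mod(code: bytes, base: int = CODE_START):
    """Return (instr_addr, target_addr) pairs where an STA absolute
    writes back into the loaded code region itself."""
    hits = []
    end = base + len(code)
    i = 0
    while i < len(code) - 2:
        if code[i] == STA_ABS:
            # 6502 operands are little-endian: low byte first
            target = code[i + 1] | (code[i + 2] << 8)
            if base <= target < end:
                hits.append((base + i, target))
        i += 1  # byte-wise scan; a real pass would follow instruction lengths
    return hits
```

A pass like this throws false positives (data bytes that happen to be 0x8D), which is exactly why I'm curious whether the tool tracks actual writes at runtime instead.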
There's something more to it, for me: the confusion between identity and role isn't just psychological, it's social. People ask "what do you do?" and mean "who are you?" It takes either courage or a real crisis to separate those two questions. AI is forcing that crisis at industrial scale. Maybe that's not entirely a bad thing.
It's crazy to see a 400B model running on an iPhone. But as the information density and architectural efficiency of smaller models continue to improve, high-quality, real-time inference on mobile is going to become trivial.
They will. Either new architectures will come out that give us greater efficiency, or we'll hit a point where the main lever left is shoving more training time onto these weights to get more capability per byte. Something similar is already happening organically with efficient token use; see for instance https://github.com/qlabs-eng/slowrun.
The "if" is fair. But when scaling hits diminishing returns, the field is forced to look at architectures with better capacity-per-parameter tradeoffs. It's happened before; maybe it'll happen again now.