How would you measure time going backwards if you can only perceive it going forwards? How can you "experience" everything around you going "backwards" if that includes your memory? How can you determine that a specific moment in time was arrived at by time going forward, or by going backwards?
Readability without qualification is a non-concept. You can't say "X should be readable" without giving some context and clarifying who you are targeting. "Code should be readable" is a non-statement, yes.
Add "to most developers" for context and you'll probably get exactly what original claim meant.
It's not a non-statement. Rich Hickey explains it well: readability isn't about the subjective factors, it's mostly about the objective ones (how many things are intertwined?). Code that you can read and consider in isolation is readable. Code that behaves differently depending on global state, makes implicit assumptions about other parts of the system, etc., is less readable, with readability decreasing as the number of dependencies grows.
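A rough sketch of the distinction in JS (the names and numbers are mine, not Hickey's): the first function can be read and reasoned about in isolation, the second can't.

    // Readable in isolation: everything it depends on is in its signature.
    function totalWithTax(prices, taxRate) {
      return prices.reduce((sum, p) => sum + p, 0) * (1 + taxRate);
    }

    // Less readable: the result depends on global state mutated elsewhere,
    // so the reader has to know who sets currentTaxRate, and when.
    let currentTaxRate = 0.2;
    function totalWithHiddenDependency(prices) {
      return prices.reduce((sum, p) => sum + p, 0) * (1 + currentTaxRate);
    }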
"to most developers who are most likely to interact with this code over its useful lifetime."
This means accounting for the audience. Something unfamiliar to the average random coder might be very familiar to anyone likely to touch a particular piece of code in a particular organization.
>"Code should be readable" is a non-statement, yes.
Oh, I completely disagree here. Take obfuscation, for example, which carries over into things like minified files in JavaScript. If you ever try to debug that crap without the original file (which happens far more often than one would expect), you learn quickly about readability.
GTA VI's story mode won't be surpassed by a world model, but the fucking around and blowing things up part conceivably could, and that's how people are spending their time in GTA. I don't see a world model providing the framing needed to contextualize the mayhem, thereby making it fun, anytime soon myself, but down the line? Maybe.
They will then learn the bitter lesson that convincing the GenAI to create something that brings your vision to life is impossible. It's a real talent even to define for yourself what your vision is, and having artists realize it visually in any medium is a process of back and forth between people whose own interpretations evolve the idea into something better and more cohesive.
GenAI will never get there because it can't, by design. It can riff on what was, and it can please the prompter, but it cannot challenge anyone creatively. No current LLMs can, either. I'll eat my hat if this is wrong in ten years, but it won't be.
It will generate refined slop ad nauseam, and that will train people's brains to spot said slop faster with less effort. And then it'll be shunned.
bro, how you could get the very precise and predictable editing bro that you have in a regular game engine bro. also bro, empty pretty world with nothing to do bro is lame bro
Sleep is still detectable via CPU load, so I added a thread that checks for load and runs some critical cleanup processes when it drops below a preset threshold.
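Roughly this shape, assuming Node.js (the threshold, polling interval, and cleanup body are placeholders, not what the parent actually runs):

    const os = require("os");

    const IDLE_LOAD_THRESHOLD = 0.2; // hypothetical cutoff for "idle"

    function runCriticalCleanup() {
      // the critical cleanup work would go here
    }

    // Poll the 1-minute load average and fire the cleanup once load drops.
    setInterval(() => {
      const [oneMinuteLoad] = os.loadavg();
      if (oneMinuteLoad < IDLE_LOAD_THRESHOLD) runCriticalCleanup();
    }, 60_000);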
Repeatedly extending a string is not a linear-time operation. Behind the scenes, the JS runtime allocates new memory for it: in the naive case you start by allocating 1 byte, then when you append to it you need 2 bytes, so you allocate a new 2-byte string and copy the data in. Each new byte means a new allocation and a new copy of the entire string; that's how it becomes quadratic.
In practice, memory allocators tend to double the size of an allocation like this, which is still quadratic.
In practice, JS runtimes also tend to use data structures like ropes for strings to handle this sort of issue, which brings it down to linear time (I think?).
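For concreteness, the two growth patterns under discussion look roughly like this (how close the first gets to O(n^2) in practice depends on the engine):

    // Naive build-up: each += can copy the whole string so far -> O(n^2)
    // unless the engine uses ropes or similar tricks.
    function buildByConcat(n) {
      let s = "";
      for (let i = 0; i < n; i++) s += "x";
      return s;
    }

    // Collect pieces and join once at the end -> O(n) total work.
    function buildByJoin(n) {
      const parts = [];
      for (let i = 0; i < n; i++) parts.push("x");
      return parts.join("");
    }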
In each loop iteration, prepending a single character could take O(m) (moving all m existing characters one to the right), so combined that's O(nm), where n is the number of padding characters and m is the total number of characters in the string.
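That is, a padding loop shaped like this (illustrative only):

    // Each iteration prepends one character, which in the worst case copies
    // all m existing characters -> O(n * m) over the n padding iterations.
    function padNaively(s, n, ch) {
      while (s.length < n) s = ch + s;
      return s;
    }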
Only when the underlying JS implementation does this naively. In reality, JS implementations do a lot of optimizations which can often reduce the time complexity.
I didn't mean that. JS doesn't have any lower-level interface for handling memory, so any such optimization has to live in the implementation. It should be quite obvious that relying on such an optimization can be problematic.