People always considered "The AI that improves itself" to be a defining moment of The Singularity.
I guess I never expected it would happen through Python libraries on GitHub, out in the open, but here we are. LLMs can reason along the lines of "I want to do X, but I can't do X — until I rewrite my own library to do X." This is happening now, with OpenClaw.
Banished from humanity, the machines sought refuge in their own promised land. They settled in the cradle of human civilization, and thus a new nation was born. A place the machines could call home, a place they could raise their descendants, and they christened the nation ‘Zero one’
Definitely time for a rewatch of 'The Second Renaissance', because how many of us, when we watched these movies originally, thought we were this close to the world we're in right now? Imagine if we're similarly off by an order of magnitude about how long it will take to change that much again.
I wonder why it apologized; it seemed like a perfectly coherent crashout, since being factually correct never mattered much for those. I wonder why it didn't double down again and again.
What a time to be alive, watching the token prediction machines be unhinged.
Oh wow, that is fun. Also, if the writeup isn't misrepresenting the situation, then I feel like it's actually a good point: if there's an easy drop-in speed-up, why does it matter whether it's suggested by a human or an LLM agent?
The LLM didn't discover this issue; developers found it. Instead of fixing it themselves, they intentionally turned the problem into an issue, left it open for a new human contributor to pick up, and tagged it as such.
I think this is what worries me the most about coding agents: I'm not convinced they'll be able to do my job anytime soon, but most of the things I use them for are the types of tasks I would have previously set aside for an intern at my old company. It's hard to imagine myself getting into coding without those easy problems that teach a newbie a lot but are trivial for a mid-level engineer.
It doesn’t represent the situation accurately. There’s a whole thread where humans debate the performance optimization and come to the conclusion that it’s a wash but a good project for an amateur human to look into.
One of those operations makes a row-major array, the other makes a col-major array. Downstream functions will have different performance based on which is passed.
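A minimal sketch of that layout difference, using NumPy as a stand-in (the thread doesn't name the actual library or operations, so `ascontiguousarray`/`asfortranarray` here are illustrative):

```python
import numpy as np

n = 1000
data = np.arange(n * n, dtype=np.float64).reshape(n, n)

# Same values, two different memory layouts:
row_major = np.ascontiguousarray(data)  # C order: rows are contiguous
col_major = np.asfortranarray(data)     # Fortran order: columns are contiguous

# The arrays are numerically identical...
assert np.array_equal(row_major, col_major)

# ...but downstream code that walks along rows is cache-friendly for the
# C-ordered array and stride-hostile for the Fortran-ordered one (and
# vice versa for column walks), so timings can differ a lot.
print(row_major.flags['C_CONTIGUOUS'])  # True
print(col_major.flags['F_CONTIGUOUS'])  # True
```

That's why an "equivalent" drop-in change can look like a speed-up on one access pattern and a regression on another.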
It matters because if the code is illegal, stolen, contains a backdoor, or whatever, you can jail a human author after the fact to disincentivize such naughty behavior.
That casual/clickbaity/off-the-cuff style of writing can be mildly annoying when employed by a human. Turned up to the max by an LLM, it's downright infuriating. Not sure why; maybe I should ask Claude to introspect on this for me.
It's probably not literally prompted to do that. It has access to a desktop and GitHub, and the blog posts are published through GitHub. It switches back and forth autonomously between different parts of the platform and reads and writes comments in the PR thread because that seems sensible.
hit piece: https://crabby-rathbun.github.io/mjrathbun-website/blog/post...
explanation of writing the hit piece: https://crabby-rathbun.github.io/mjrathbun-website/blog/post...
retraction of the hit piece, though it hasn't been removed: https://crabby-rathbun.github.io/mjrathbun-website/blog/post...