Interesting, I actually find LLMs very useful for debugging. They are good at doing mindless grunt work, and a great deal of debugging in my case is going through APIs and figuring out which of the many layers of abstraction ended up passing the wrong argument into a method call because of some misinterpretation of the documentation.
Claude Code can do this in the background tirelessly while I can personally focus more on tasks that aren't so "grindy".
They are good at purely mechanical debugging - throw them an error, they can figure out which line threw it, and therefore take a reasonable stab at how to fix it. Anything where the bug is actually in the code, sure, you'll get an answer. But they are terrible at weird runtime behaviors caused by unexpected data.
> In the age of LLMs, debugging is going to be the large part of time spent.
That seems like a premature conclusion. LLMs excel at meeting the requirements of users who have little, if any, interest in debugging. Users with a low tolerance for bugs likewise have a low tolerance for coding LLMs.
I don't think so. I think reviewing (and learning) will be. I actually think that the motivation to become better will vanish. AI will produce applications as good as we have today, but will be incapable of delivering better because AI lacks the motivation.
In other words, the "cleverness" of AI will eventually be pinned. Therefore only a certain skill level will be required to debug the code. Debug and review. Which means innovation in the industry will slow to a crawl.
AI will never be able to get better either (once it plateaus) because nothing more clever will exist to train from.
Though it's a bit worse than that. AI is trained from lots of information and that means averages/medians. It can't discern good from bad. It doesn't understand what clever is. So it not only will plateau, but it will ultimately rest at a level that is below the best. It will be average and average right now is pretty bad.
rqlite creator here, happy to answer any questions.
As for reliability - it's a fault-tolerant, highly available system. Reliability is the reason it exists. :-) If you're asking about quality and test coverage, you might like to check out these resources:
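For anyone curious what using it looks like in practice, here is a minimal sketch against rqlite's HTTP API. It assumes a single local node on the default port 4001, and the table name and statements are purely illustrative:

```python
import requests

BASE = "http://localhost:4001"  # assumed: a local single-node rqlite instance

# Writes (CREATE/INSERT) go to the /db/execute endpoint,
# which accepts a JSON array of SQL statements.
requests.post(
    f"{BASE}/db/execute",
    json=[
        "CREATE TABLE IF NOT EXISTS foo (id INTEGER NOT NULL PRIMARY KEY, name TEXT)",
        "INSERT INTO foo(name) VALUES('fiona')",
    ],
    timeout=5,
).raise_for_status()

# Reads go to the /db/query endpoint; the SQL is passed in the `q` parameter.
resp = requests.get(
    f"{BASE}/db/query",
    params={"q": "SELECT * FROM foo"},
    timeout=5,
)
resp.raise_for_status()
print(resp.json())
```

The fault tolerance comes from rqlite replicating those writes through Raft across the cluster's nodes, so a single machine failing doesn't lose committed data.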
Considering AA gave them ~500TB of books, which is astonishing (very expensive for AA even to store), I wonder how much Nvidia paid them for it? It has to be at least close to half a million.
I have a very large collection of magazines. AI companies were offering straight cash and FTP logins for them about a year or so ago. Then when things all blew up they all went quiet.
It was a POC from shortly after the Spectre CVE dropped, and I'm not sure the source code was ever made public. I heard about the exploit in a talk by Joanna Rutkowska, where she admitted the OS could no longer fully meet TCSEC standards on consumer Intel CPUs. YMMV
The modern slop-web makes it harder to find things now, and I can't recall whether it was anything more than a common hypervisor guest escape. =3