
In the age of LLMs, debugging is going to be the largest part of the time spent.

Interesting; I actually find LLMs very useful for debugging. They are good at doing mindless grunt work, and a great deal of debugging in my case is going through APIs and figuring out which of the many layers of abstraction ended up passing the wrong argument into a method call because of some misinterpretation of the documentation.
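
A made-up miniature of the pattern, with every name invented: the caller correctly passes milliseconds, a wrapper misreads the docs and forwards them as seconds, and the bottom layer finally objects.

    def api_call(timeout_seconds):
        # The underlying API expects seconds and rejects absurd values.
        if timeout_seconds >= 600:
            raise ValueError(f"timeout of {timeout_seconds}s looks like ms")

    def service_layer(timeout_ms):
        # The wrapper's author misread the docs and forwards ms as-is.
        api_call(timeout_ms)

    try:
        service_layer(timeout_ms=5000)
    except ValueError as e:
        # Finding which layer dropped the unit conversion is exactly the
        # kind of mindless tracing an LLM can grind through.
        print("caught:", e)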

Claude Code can do this in the background tirelessly while I can personally focus more on tasks that aren't so "grindy".


They are good at purely mechanical debugging: throw them an error and they can figure out which line threw it, and therefore take a reasonable stab at how to fix it. Anything where the bug is actually visible in the code, sure, you'll get an answer. But they are terrible at weird runtime behaviors caused by unexpected data.
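
A contrived contrast, all names invented: the first bug announces itself with a traceback an LLM can chase, while the second just returns quietly wrong numbers.

    def average(xs):
        return sum(xs) / len(xs)

    try:
        average([])  # ZeroDivisionError: the traceback pinpoints the line
    except ZeroDivisionError as e:
        print("mechanical bug:", e)

    def parse_amounts(rows):
        # Silently drops rows with unexpected formatting instead of
        # failing, so totals drift and there is no error to hand over.
        return [float(r) for r in rows if r.replace(".", "", 1).isdigit()]

    print(parse_amounts(["10.5", "1,200", "N/A"]))  # quietly loses "1,200"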

> In the age of LLMs, debugging is going to be the largest part of the time spent.

That seems like a premature conclusion. LLMs excel at meeting the requirements of users who have little, if any, interest in debugging. Users who have a low tolerance for bugs likewise have a low tolerance for coding LLMs.


I don't think so; I think reviewing (and learning) will be. I actually think the motivation to become better will vanish. AI will produce applications as good as the ones we have today, but it will be incapable of delivering better ones, because AI lacks that motivation.

In other words, the "cleverness" of AI will eventually be pinned. Therefore only a certain skill level will be required to debug the code. Debug and review. Which means innovation in the industry will slow to a crawl.

AI will never be able to get better either (once it plateaus) because nothing more clever will exist to train from.

Though it's a bit worse than that. AI is trained on a huge amount of information, which means averages and medians. It can't discern good from bad, and it doesn't understand what clever is. So it will not only plateau, it will ultimately settle at a level below the best. It will be average, and average right now is pretty bad.


> In the age of LLMs, debugging is going to be the largest part of the time spent.

That seems like a premature conclusion. LLMs are quite good at debugging, and much faster than people.


Nftables has a really good doc site: https://wiki.nftables.org/wiki-nftables/index.php/Main_Page. I wouldn't rely on any book.


https://toni.cunyat.net/2019/11/nftables-vs-pf-ipv4-filterin.... According to this article, it depends on the use case.


Has anyone tried using distributed versions of SQLite, such as rqlite? How reliable is it?


rqlite creator here, happy to answer any questions.

As for reliability - it's a fault-tolerant, highly available system. Reliability is the reason it exists. :-) If you're asking about quality and test coverage, you might like to check out these resources:

- https://rqlite.io/docs/design/

- https://rqlite.io/docs/design/#blog-posts

- https://philipotoole.com/how-is-rqlite-tested/
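
And for anyone who just wants to kick the tires, here's a minimal sketch of talking to a node over the HTTP data API. It assumes a single node running locally on the default port 4001; the table and values are made up.

    # Minimal sketch, not production code: talk to one rqlite node over
    # its HTTP data API. Assumes a local node on the default port 4001.
    import json
    import urllib.parse
    import urllib.request

    BASE = "http://localhost:4001"

    def execute(statements):
        # Writes (CREATE/INSERT/...) go to /db/execute as a JSON array.
        req = urllib.request.Request(
            f"{BASE}/db/execute",
            data=json.dumps(statements).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def query(sql):
        # Reads go to /db/query with the SQL in the q= parameter.
        url = f"{BASE}/db/query?q={urllib.parse.quote(sql)}"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    execute(["CREATE TABLE IF NOT EXISTS foo (id INTEGER PRIMARY KEY, name TEXT)"])
    execute(["INSERT INTO foo(name) VALUES('fiona')"])
    print(query("SELECT * FROM foo"))

The same calls work with curl; this is just the raw API, not one of the client libraries.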


Forgejo does all that while being lightweight and run by a non-profit. GitLab is awfully resource-hungry.


> GitLab is awfully resource-hungry.

Yes... and no.

GitLab doesn't make sense for a low-volume setup (a single private user or a small org) because it's a big boat in itself.

But when you reach a certain org size (hundreds of users, thousands of repos), it's impressive how well it behaves with such modest resource requirements!


Forgejo scales too, even for a large org it's a perfect choice.


Considering AA gave them ~500TB of books, which is astonishing (that's very expensive for AA even to store), I wonder how much Nvidia paid them for it? It has to be at least close to half a million.


I have a very large collection of magazines. AI companies were offering straight cash and FTP logins for them about a year or so ago. Then when things all blew up they all went quiet.


How did AI companies find your collection?


Most workloads are cloud-native these days, so a k8s/Docker rootkit would make a lot more sense.


> persistence, I guess no point in having any

The most obvious reason would be the fear that the vulnerability the attacker used to gain initial access gets patched. Persistence is required.


This is the first time I'm hearing of a sandbox escape in QubesOS. Can you link the source?


It was a PoC from shortly after the Spectre CVE dropped, and I'm not sure the source code ever made it into the public. I heard about the exploit in a talk by Joanna Rutkowska, where she admitted the OS could no longer fully meet TCSEC standards on consumer Intel CPUs. YMMV

The modern slop-web makes things harder to find now, and I can't recall specifically whether it was anything more than a common hypervisor guest escape. =3


With Secure Boot enabled, is it mandatory for kernel modules to be signed with the same key before they can be loaded? I was not aware of this.

