
> In my view the most productive people of every field are not incentivized by money and would do it anyway.

The idea that money is not an effective incentive to drive behavior is wishful thinking. Even just among devs, even just among devs who truly love programming, most would be doing very different work, and working for different organizations (or none at all) if money weren't the driver.

> Hence UBI here would mean that the dev would not have to monetize.

Ok, but the dev might still want to monetize, and we're back to the original question.


> Even just among devs, even just among devs who truly love programming, most would be doing very different work, and working for different organizations (or none at all) if money weren't the driver.

Somehow I can imagine that a world where the brightest minds of a generation didn't spend their prime optimizing ad clicks wouldn't necessarily be a complete disaster.


Optimizing ad clicking is profitable and the thing that would [partially] pay for UBI. If that stops happening, money/value stop being created. The market is not zero-sum.

It's good to talk about UBI, but people taking it seriously have no idea how to fund it.


That's right, much of the market is negative sum.

> Ok, but the dev might still want to monetize, and we're back to the original question.

It's alright. Those who would like to monetize can. There are others who wouldn't, and UBI would utilize that surplus talent, which otherwise has to perform tasks it isn't skilled at just to earn a living.


> most would be doing very different work, and working for different organizations (or none at all) if money weren't the driver.

With UBI I wouldn't be surprised if those would be even more productive doing something else they want. And others who couldn't do the CS curriculum even though they would have loved to because they had to find a job quickly would plausibly be in their place instead.

I really view UBI as something that oils society's gears: people face less friction getting to the spot they're best at. People who want to do nothing will not slow us down anymore. And jobs that nobody wants to do would finally be paid by how much they suck instead of by how much money your parents had to spend on your education.

> Ok, but the dev might still want to monetize, and we're back to the original question

I don't really see the issue. We're far from having a shortage of ways to make people pay: ads, paywalls, soft paywalls, begging, rate limits. What's the issue with those? I certainly don't like them as a user and as a member of society, but I'm fine with people doing that.

Especially with UBI in place: if the dev puts up a paywall, they have to compete with people who plausibly have much more freedom of time and mind to allocate to a free FOSS alternative. So in the end it becomes less profitable to be adversarial toward end users.


> And others who couldn't do the CS curriculum even though they would have loved to because they had to find a job quickly would plausibly be in their place instead.

Unfortunately, also wishful thinking. A particular kind of wishful thinking endemic to naturally highly curious, academic achievers (not a dig, I am one). But -- and if you don't understand this, spending some time teaching at universities makes it abundantly clear -- most of the world is nothing like this. They aren't being held back from their natural passions and curiosities by the demands of living. They would not suddenly flourish under UBI.

> With UBI I wouldn't be surprised if those would be even more productive doing something else they want.

For the people who do naturally love creating and are good at it, they might be "even more productive" in one sense -- creating more stuff that they, personally, value. And personally I'd love to do that, but it doesn't maximize value across society. That's one of the main things money is: a constraint forcing the production of consensus value. In a world of infinite resources that ceases to matter, but we're still very far from that.

> People who want to do nothing will not slow us down anymore.

Who do you think is supporting them? Until we have robots taking care of everyone for free, support is still a cost levied on other humans.


I am aware that most of the world isn't like this. But I am also aware that there are many people who more than anything want to share things they made, have a positive impact, etc. In other words: there are 10x engineers and 10x altruists, and some are even both. I am convinced that they collectively could make basically unlimited progress on things we all agree on: fewer sick people, more happy people, less waste, a better environment, etc. I'm sure you've seen some random genius on YouTube who built things in their backyard that are normally only buildable by conglomerates with advanced logistics. I just want them to not have to worry about an algorithm and sponsors, and to have spaces accommodated for them to work together on things.

> it doesn't maximize value across society

Well, you'd have to define "value" here. I am sure GDP would plummet because bullshit jobs would plummet. The current society is doing maybe a decent job at producing but a terrible job at spreading it "across society". We still have millions of people dying every year of very preventable causes just because of a lack of coordination. I think this would be better if we had less noise in our daily lives caused by a system so inefficient that we have bullshit jobs.


> The idea that money is not an effective incentive to drive behavior is wishful thinking

It is obviously an incentive. But I think it's not an effective one and has many morally bad side effects.

I highly recommend taking a look at the work of Daniel Pink related to money as an incentive. See The Puzzle Of Motivation (~20min) https://www.youtube.com/watch?v=rrkrvAUbU9Y


This is really nice work, as are the other posts.

If the author stops by, I'd be interested to hear about the tech used.


My first thought was I'd love to see everything, from the unicorns to the mid-level successes to the failures, all laid out in one big infographic.

But "cognate" is not.

This is the standard (for good reason) in code golf.


Thanks for introducing me to code golf :-)


There are nice niche golfing languages as well, for your continued entertainment!


See also: https://code.golf/ for a relatively recent gamified site with rankings


this is so good


So the stuff that agents would excel at is essentially just the "checklist" part of the job? Check A, B, C, possibly using tools X, Y, Z, possibly multi-step checks but everything still well-defined.

Whereas finding novel exploits would still be the domain of human experts?
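
To make "well-defined" concrete, I'm picturing something like this sketch (tool commands and names are purely illustrative, not a real framework):

    # Each check: a name, a tool command, and a predicate over its output.
    CHECKS = [
        ("weak_tls", "nmap --script ssl-enum-ciphers -p 443 target",
         lambda out: "TLSv1.0" in out),
        ("dir_listing", "curl -s http://target/",
         lambda out: "Index of /" in out),
    ]

    def run_checklist(run_tool):
        # run_tool: whatever executes a command string and returns its output
        return [name for name, cmd, hit in CHECKS if hit(run_tool(cmd))]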


I'm bullish on novel exploits too but I'm much less confident in the prediction. I don't think you can do two network pentests and not immediately reach the conclusion that the need for humans to do significant chunks of that work at all is essentially a failure of automation.

With more specificity: I would not be at all surprised if the "industry standard" netpen was 90%+ agent-mediated by the end of this year. But I also think that within the next 2-3 years, that will be true of web application testing as well, which is in a sense a limited (but important and widespread) instance of "novel vulnerability" discovery.


Well, agents can't discover bypass attacks because they don't have memory. That was what DNCs (Differentiable Neural Computers) [1] tried to accomplish. Correlating scan metrics with analytics is, by the way, a great task for DNCs and something they are good at, due to how their (not so precise) memory works. Not so much, though, at understanding branch logic and its consequences.

However, I currently believe that forensic investigations will change post-LLMs, because they're very good at translating arbitrary bytecode and assembly (NASM, Intel syntax, etc.) to example code (in any language). It doesn't have to be 100% correct in those translations; that's why LLMs can be really helpful for the discovery phase after an incident. Check out the Ghidra MCP server, which is insane to see in real time [2].
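
To make that concrete, here's the kind of translation I mean, hand-written for illustration (not actual Ghidra/MCP output):

    # A small x86 routine:
    #   mov eax, [esp+4]   ; load the first (stack) argument
    #   imul eax, eax      ; square it
    #   ret
    # ...and the kind of example code an LLM might hand back:
    def square(x: int) -> int:
        return x * x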

[1] https://github.com/JoergFranke/ADNC

[2] https://github.com/LaurieWired/GhidraMCP


The lack of memory issue is already being solved architecturally, and ARTEMIS is a prime example. Instead of relying on the model's context window (which is "leaky"), they use structured state passed between iterations. It's not a DNC per se, but it is a functional equivalent of long-term memory. The agent remembers it tried an SQL injection an hour ago not because it's in the context, but because it's logged in its knowledge base. This allows for chaining exploits, which used to be the exclusive domain of humans.
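
A minimal sketch of that pattern (not ARTEMIS's actual code; the file name and schema here are made up):

    import json
    import pathlib

    STATE = pathlib.Path("agent_state.json")   # persists across iterations

    def load_state():
        return json.loads(STATE.read_text()) if STATE.exists() else {"tried": []}

    def remember(state, attempt):
        # attempt: e.g. {"endpoint": "/login", "attack": "sqli"}
        state["tried"].append(attempt)
        STATE.write_text(json.dumps(state))

    def already_tried(state, attempt):
        # consult the log, not the model's context window
        return attempt in state["tried"]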


Can you be more specific about the kind of "bypass attack" you think an agent can't find? Like, provide a schematic example?


OpenSSL's Heartbleed is a good example. Or pretty much any vulnerability that needs an understanding of how memset or malloc works, or anything where you have to use leaky functions to create a specific offset, because that's where the return address (EIP) sits, so that you can modify/exploit that jmp or cmp call.

These kinds of things are very hard for LLMs because they tend to forget way too much important information about both the code (in the branching sense) and the program (in the memory sense).

I can't provide a schematic for this, but it's pretty common in binary exploitation CTF events, and kind of mandatory knowledge about exploit development.
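
The generic shape of it, though, looks roughly like this pwntools snippet (the binary name, the 72-byte offset, and the target address are all hypothetical):

    from pwn import flat, p64, process

    OFFSET = 72       # hypothetical: bytes from buffer start to saved return address
    WIN = 0x401196    # hypothetical: address we want execution to jump to

    io = process("./vuln")                   # hypothetical vulnerable binary
    io.sendline(flat({OFFSET: p64(WIN)}))    # pad to the offset, then overwrite the return
    io.interactive()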

I listed some nice CTFs we did with our group in case you want to know more about these things [1]. Regarding LLMs and this bypass/side-channel attack topic, I'd refer to the Fusion CTF [2] specifically, because it covers a lot of examples.

[1] https://cookie.engineer/about/writeups.html

[2] https://exploit.education/fusion/


Wait, I don't understand why Heartbleed is at all hard for an agent loop to uncover. There's a pattern for these attacks (we found one in nginx in the ordinary course of a web app pentest at Matasano --- and we didn't find it based on code, though I don't concede that an LLM would have a hard time uncovering these kinds of issues in code either).

I think people are coming to this with the idea that a pentesting agent is pulling all its knowledge of vulnerabilities and testing patterns out of its model weights. No. The whole idea of a pentesting agent is that the agent code --- human-mediated code that governs the LLM --- encodes a large amount of knowledge about how attacks work.
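
Schematically, the kind of stimulus/response check I mean (the framing and names are stand-ins; real Heartbleed sits behind a TLS handshake):

    import socket

    def probe_disclosure(host, port, payload=b"A" * 16, claimed_len=4096):
        # Claim a far larger payload length than we actually send, then
        # check whether the echo contains bytes we never provided.
        with socket.create_connection((host, port), timeout=5) as s:
            s.sendall(claimed_len.to_bytes(2, "big") + payload)
            echo = s.recv(65535)
        return len(echo) > len(payload)   # extra bytes back => likely memory disclosure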


I think I'd distinguish between source code audits (where LLMs are already pretty good at spotting bugs, if you can convince them to) and exploit development here.

The former is already automated in large part with fuzz testing of all kinds, so you wouldn't need an LLM if you knew what you were doing and had a TDD workflow or similar that checks against memory leaks (say, with valgrind or similar approaches).
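
For example, something as simple as (the binary name is a placeholder):

    import subprocess

    def leak_free(binary="./tests"):
        # --error-exitcode makes valgrind exit non-zero when it reports errors
        result = subprocess.run(
            ["valgrind", "--leak-check=full", "--error-exitcode=1", binary],
            capture_output=True, text=True,
        )
        return result.returncode == 0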

The latter part is what I was referring to, where I initially had hope that DNCs could help, and where I'd say that right now LLMs cannot discover these things, only repeat and translate them (e.g. similar vulnerabilities discovered in the past by humans, in another programming language).

I'm talking specifically about discovery here because transformers lose symbolic inference, and that's why you can't use them for exploit generation. At least I wasn't able to make them work for the DARPA challenges, and had to use an AlphaGo based model combined with a CPPN and some techniques that worked in ES/HyperNEAT.

I suppose what I'm trying to say is that there's a missing understanding of memory and time when it comes to LLMs. And that is usually manually encoded/governed, as you put it, by humans. And I would not count that as an LLM doing it, because you could have just automated the tool use without an LLM and gotten identical results. (Thinking, e.g., of an MCP for kernel memory maps or, say, valgrind or AFL.)


We're talking about different things here. A pentesting agent directly tests running systems. It's a (much) smarter version of Burp Scanner. It's going to find memory disclosure vulnerabilities the same way pentesters do, by stimulus/response testing. You can do code/test fusion to guide stimulus/response, which will make them more efficient, but the limiting factor here isn't whether transformers lose symbolic inference.

Remember, the competition here is against human penetration testers. Humans are extremely lossy testing agents!

If the threshold you're setting is "LLMs can eradicate memory disclosure bugs by statically analyzing codebases to the point of excluding those vulnerabilities as valid propositions", no, of course that isn't going to happen. But nothing on the table today can do that either! That's not the right metric.


> Humans are extremely lossy testing agents!

Ha, I laughed at that one. I suppose you're right :D


With exploits, you'll have to go through the rote stuff of checklisting over and over, until you see aberrations across those checklists and connect the dots.

If that part of the job is automated away, I wonder how the talent and skill for finding those exploits will evolve.


They suck at collecting the bounty money because they can't legally own a bank account.


I am not saying this to be mean, because these feel like good faith questions. But they also sound like questions rooted in a purely logical view of the world, divorced from experience.

That is, I don't believe it is possible that you've had real world experience with alcoholics, because if you had, it would be obvious why it doesn't work the way you are asking about. Some addictions are just too powerful. It is not a matter of having failed to treat the root cause. It's a matter of acknowledging that, for some people, the only solution to alcohol is not to consume any. It doesn't mean they don't also try to treat and understand deeper emotional reasons for their drinking.


There's lots of research indicating that the brain after addiction just isn't the same as before addiction [1][2]. So while there might have been a root cause before, the effects of addiction are still present even if the root cause isn't an issue anymore.

[1]: https://med.stanford.edu/news/insights/2025/08/addiction-sci...

[2]: https://www.rockefeller.edu/news/35742-newly-discovered-brai...


> Approximately 4.6 years of continuous play, every second, to see a single jackpot win.

This seems pretty reasonable, actually! Somehow it makes the 320M seem manageable.



I feel like "technically, no" but "practically, yes".

Somehow the distinction of just adding a tag / using filters doesn't communicate the cultural/process distinction in the same way.

