rogerrogerr's comments on Hacker News

> Parliament cannot restate the entire legal corpus each session.

This is probably bad for some reasons, but I have wondered about a system where the legislature does have to restate the entire legal corpus every session. Like, everything resets at the end of the year and whatever didn’t get passed in the last 12 months is no longer law. Paired with some kind of rule that says you can’t just vote to pass “everything that already exists”.

In my fantasy, there would be less weird cruft in the law, and less bikeshedding about stuff that really should be done.


In reality, there would probably be almost as much archaic weird cruft, plus more new ill-considered cruft snuck into each of the many annual bills that would mostly serve to readopt large sections of the prior law verbatim, and those would end up being must-pass legislation considered on short deadlines.

Similarly, I sometimes daydream about those ancient law systems that were so simple that they could be engraved on a stone pillar and placed in a public area. Could you imagine a legal system that can be read and understood in minutes instead of the sprawling, byzantine thing that takes years of law school to comprehend?

Seems to be one of those things where you think about them for a minute and go, huh, that'd be neat, but then you think about it some more how it'd play out in the real world, and that's when it starts to get actually interesting.

Maybe not every 12 months / session, but I definitely think having a way to "sunset" certain types of laws would be a net positive on society.

"If no one is charged with $crime for n years, it goes away unless explicitly renewed" would fix some of the weirder laws still on the books, but by definition wouldn't really change much.

No, it wouldn't fix anything, but it would result in annoying periodic pro forma charges being filed.

When you take a measure and tie a control function to it, you make it a target. When you do that, Goodhart’s law applies, and when the specific connection is as simple as “maintaining a criminal law requires charging someone under it with at least X frequency”, the failure mode is obvious.


They’ll never reveal the data, because that would reveal this is all built on stolen work.

Some of the models DO reveal the data, and it's still built on "stolen work" in that it's unlicensed scrapes of the Web. Here's an example:

https://huggingface.co/allenai/OLMo-2-0325-32B

Here's one of their training mixes: https://huggingface.co/datasets/allenai/dolma3_pool - which includes 8 trillion tokens from Common Crawl.


"quietly discloses"

Can we vote to ban an adverb?


The idea that anything can still be buried in a filing is equally ridiculous

"hidden in a press release"

"Quietly" has not so quietly become an AI slop calling card.

I think it’s bad enough that more people are using it too. Maddening.

Please write a blog post about the experience of making / hosting / paying for this. I’d love to hear about it.

> Get reimbursed for the receipts when you retire.

Holy crap, you can do this? I always assumed for some reason you had to pay for expenses with an HSA in the year they were incurred.


That's for an FSA (which is similar to but distinct from an HSA).

I do

A couple times a week my freaking VP is announcing some new tool he vibecoded and talked to no one about.

I’m sure they’re all riddled with security issues, but am I gonna go be the one pointing it out? Heck no.


we love to say things like this, but... most security issues are in fact BYPASSABLE - virtualization, firewalls, autorollbacks, ro-filesystems and so on are just some of the tools we have on our belts

decades of WordPress have taught us that insecure apps can 100% be securely deployed

it's a bit of an art, most recently educated devops/sre ppl suck at it, but it's doable

...aeons ago, in a former life, we ran production apps that got hacked weekly, and nobody batted an eye at it. Backup servers recreated from secure ro-images were spun up with the last-clean-app version; occasionally we had fun disassembling whatever reverse shells and other malware got beached on our systems (but couldn't "swim", bc everything we ran was "too exotic" for them to figure out the next steps of a proper attack). Development and business continued as usual with zero interruptions, etc


If you go against every principle (defense in depth, security through obscurity), maybe you should ask yourself "am I willing to be on the record saying this when my company gets hacked?"

There can be multiple reasons a system crumbles; do you want to be behind one of them... intentionally?


100%. I'm willing to prioritize what matters at the right time. If "inner-system security" is not the right priority, and security can be attained better at the "outer-system" level, we should have the balls to say it. fuckitol

Imagine if your doctor said "we don't really need to do this if some other guy or nurse does a right job, so fuck it".

In other critical professions you don't want to screw up, because when you lose your license you're legally unemployable. Maybe it's time to require a license to be a programmer. We used to have a strong culture, but those days are gone and the stakes are higher. Putting people at risk because you think a VP can vibe-code an insecure app and then it's everybody else's responsibility to ship it securely?


you got everything I said wrong: I'm familiar with security and infrastructure best practices, and I'm confident I/we can securely deploy almost any vibe-coded crap someone can throw at us - we understand security, we understand defense-in-depth, we understand the subtle trade-offs of why security by obscurity is usually a bad idea (and when it does help) etc.

sure, if the vibe-coded sloptopus does bank transfers and stuff, properly carving these pieces out of it might require actual engineering work before containerizing it - but if someone is willing to pay for it, it can be done

some "toy" example: take a crappy app that stores llm keys in config files that the llm agents themselves can edit - after isolating it, put an llm proxy in front of it and have those keys be short-lived proxy keys with aggressive rate limits and monitoring etc etc
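A minimal sketch of that proxy-key idea in Python (all names, TTLs, and limits here are illustrative, not any real provider's API): the agent only ever sees a short-lived proxy key, while the real provider key, the expiry, and the per-key call budget live inside the proxy.

```python
import secrets
import time


class LLMKeyProxy:
    """Toy sketch: issues short-lived, rate-limited proxy keys.

    The real provider key never leaves this process; agents only ever
    hold a proxy key that the proxy can expire or revoke at will.
    """

    def __init__(self, real_key: str, ttl_seconds: int = 300, max_calls: int = 20):
        self._real_key = real_key          # stays inside the proxy
        self._ttl = ttl_seconds
        self._max_calls = max_calls
        self._issued = {}                  # proxy_key -> [expiry, calls_used]

    def issue_key(self) -> str:
        """Hand a fresh short-lived key to an agent."""
        proxy_key = "pk-" + secrets.token_hex(16)
        self._issued[proxy_key] = [time.monotonic() + self._ttl, 0]
        return proxy_key

    def authorize(self, proxy_key: str) -> bool:
        """Check a key on every upstream call; revoke on expiry or overuse."""
        entry = self._issued.get(proxy_key)
        if entry is None:
            return False
        expiry, used = entry
        if time.monotonic() > expiry or used >= self._max_calls:
            self._issued.pop(proxy_key, None)
            return False
        entry[1] += 1
        return True
```

Even if the agent leaks its config file, the blast radius is a key that dies in minutes and caps out after a handful of calls; monitoring and per-key logging would hang off `authorize` in a real deployment.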

isolation, injecting proper monitoring into the code of apps, putting proxies between the app and its APIs, and layers between the app and the infra it runs on or touches, etc

and these things now can be mostly cookbook-ified / automated 90% of the way too

as long as you can chop things into little pieces and ensure short-lived and granular access to valuable data, you can 100% run totally insecure and buggy code reliably and get value from it

it's engineering and understanding security from first principles [and a culture around it - that _is_ the HARD af bit though...] instead of just believing in "secure app best practices" from the "holy scriptures". Secure apps are hackable, and insecure apps can be unhackable; heck, even mil systems run on unpatched old software everywhere, they're just properly insulated. The components are insecure, but the system as a whole can be perfectly secure


If you believe in unhackable, maybe you're not familiar with security enough...

ffs, u get the point... "under threat models x, z & q that are considered for scenarios ..."

anything deployed is hackable ofc, question is just the profit/risk ratio a business tolerates/prefers, and what backup plans exist to "reboot" after fatal incidents

nothing's perfect in the real world but most things are survivable

reducing all risk is the same as reducing all opportunity for profit - and in a much truer sense than it seems ...as you also reduce the adversary's risk to profit from you, so by pursuing too-low risk you head towards negative-sum games (as security has costs) that on average we all lose from playing


And there are, like, six of them.

> it's still a bar that can be passed with human intelligence

Can you expand on this?


As a developer becomes better, they become better than an LLM, able to deal with more complex things than an LLM can handle. Some people will never be able to pass that bar, but others will.

If there is ever AGI (I don't think it can be achieved with the current architecture; it needs another AI breakthrough), then we might not be able to surpass it, much like chess currently.


Yes, we have an infinite amount of knowledge work that needs done. But if AI is better at it than humans, we aren’t going to use humans.

We don’t use chimpanzees for any knowledge work today, even though they’d be better at it than some other animals.


I think the evidence that AI is better at knowledge work without a human in the loop... is very limited.

Humans with many agents will be more productive, but the tendency for these models has been to regress to the mean when it comes to strategic insights.


So far, I think you're right. But the rate of progress just seems so crazy that I'm not seeing any moats that look fundamental. I hope I'm wrong and you're right.
