
They are quite literally negotiable: https://isrg.formstack.com/forms/rate_limit_adjustment_reque...

There are also a bunch of rate limit exemptions that automatically apply whenever you "renew" a cert: https://letsencrypt.org/docs/rate-limits/#non-ari-renewals. That is, whenever you request a cert and there is already an issued certificate for the same set of identifiers.


Your comment is 100% correct, but I just want to point out that this doesn't negate the risks of bob's approach here.

LE wouldn't see this as a legitimate reason to raise rate limits, and such a request takes weeks to handle anyway.

Indeed, some rate limits don't apply to renewals, but some still do.


Because with Next.js, Vercel was able to turn the frontend stack into a really shitty backend stack as well. And it's particularly shitty to deploy, so they're in the business of doing that for you.

The world has not decided that spyware can't be produced. Mostly, the powers that be treat it like weapons of war.

That is, companies can make and sell it as long as they only sell it to governments and only the ones that we like.


What other Markdown viewers or editors support URL schemes that just execute code? And not in a browser sandbox, but in the same security context Notepad itself is running in.

Funnily enough, the core Windows API here that brings with it support for every URL scheme under the sun is plain old ShellExecute() from the mid-90s IE-in-the-shell era when such support was thought reasonable. (I actually still think it’s reasonable, just not with the OS architectures we have now or had then.)

I used to enjoy it much more before it became just another podcast extolling the virtues of AI-assisted coding. I have too many of those already.

I appreciate their treatment of the current AI boom cycle. Just last night they had Evan Ratliff on from the Shell Game podcast[1], and it was a great episode. They're not breathlessly hyping AI and trying to make a quick buck off it; instead they seem to be taking an honest, rigorous look at it (which is sadly pretty rare) and talking about the successes as well as the failures. Personally, I don't always agree with their takes; I'm more firmly in Ed Zitron's camp that this is all a massive financial scam, that it isn't really good for much, and that it will do a lot more harm than good in the long run. They're less negatively biased than that, which is fine.

[1] https://www.shellgame.co/


How would they? This is AI, it has to move faster than you can even ask security questions, let alone answer them.


Option 4 as well; that's how we do it at work and it's been great. However, it can't really be "someone on the team knows Nix": anyone working on Ops will need Nix skills in order to be effective.


Why this fixation on Nix? You don't need Nix to run bare metal.


Nix makes sure that everything is exactly as you declared, and that in case of [INSERT APOCALYPTIC EVENT], you'll be able to recover much faster.

If you're interacting with stateful systems (which you usually are with this kind of command), --dry-run can still have a race condition.

The tool tells you what it would do in the current situation; you take a look and confirm that that's alright. Then you run it again without --dry-run, in a potentially different situation.

That's why I prefer Terraform's approach of having a "plan" mode. It doesn't just tell you what it would do but does so in the form of a plan it can later execute programmatically. Then, if any of the assumptions made during planning have changed, it can abort and roll back.

As a nice bonus, this pattern gives a good answer to the problem of having "if dry_run:" sprinkled everywhere: You have to separate the planning and execution in code anyway, so you can make the "just apply immediately" mode simply execute(plan()).
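
Roughly, something like this (a minimal sketch in Python with made-up names, not any particular tool's API): the plan is just data, execution re-checks its assumptions, and immediate mode really is execute(plan()).

    from dataclasses import dataclass

    @dataclass
    class Action:
        verb: str    # e.g. "delete" or "create"
        target: str  # the object the verb applies to

    def plan(current, desired):
        # Pure function: compare states and emit the actions needed to reconcile them.
        return [Action("delete", t) for t in current - desired] + \
               [Action("create", t) for t in desired - current]

    def execute(actions, current):
        for a in actions:
            # Abort if an assumption made during planning no longer holds.
            if a.verb == "delete" and a.target not in current:
                raise RuntimeError(f"stale plan: {a.target} is already gone")
            print(f"{a.verb} {a.target}")  # stand-in for the real side effect

    current, desired = {"a", "b"}, {"b", "c"}
    actions = plan(current, desired)  # --dry-run stops here and just shows the plan
    execute(actions, current)         # "apply immediately" mode is execute(plan())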


>That's why I prefer Terraform's approach of having a "plan" mode. It doesn't just tell you what it would do but does so in the form of a plan it can later execute programmatically. Then, if any of the assumptions made during planning have changed, it can abort and roll back.

Not to take anything away from your comment, but just to add a related story... the previous big AWS outage had an unforeseen race condition between their DNS Planner and DNS Enactors:

>[...] Right before this event started, one DNS Enactor experienced unusually high delays needing to retry its update on several of the DNS endpoints. As it was slowly working through the endpoints, several other things were also happening. First, the DNS Planner continued to run and produced many newer generations of plans. Second, one of the other DNS Enactors then began applying one of the newer plans and rapidly progressed through all of the endpoints. The timing of these events triggered the latent race condition. When the second Enactor (applying the newest plan) completed its endpoint updates, it then invoked the plan clean-up process, which identifies plans that are significantly older than the one it just applied and deletes them. At the same time that this clean-up process was invoked, the first Enactor (which had been unusually delayed) applied its much older plan to the regional DDB endpoint, overwriting the newer plan. The check that was made at the start of the plan application process, which ensures that the plan is newer than the previously applied plan, was stale by this time due to the unusually high delays in Enactor processing. [...]

previous HN thread: https://news.ycombinator.com/item?id=45677139


Overkill, I'm sure, for many things, but I'm curious whether there's a TLA+-style solution for this sort of thing. It feels like there could be, although it depends how well modelled things are (also aware this is a 30-second thought and lots of better-qualified people work on this full time).



And just like that, you find yourself implementing a compiler (specs to plan) and a virtual machine (plan to actions)!


> And just like that, you find yourself implementing a compiler (specs to plan) and a virtual machine (plan to actions)!

Not just any compiler, but one with a non-typesafe, ad-hoc, informally specified grammar and a bunch of unspecified or under-specified behaviour.

Not sure if we can call this a win :-)


Greenspun's tenth rule in action!


It can be type-safe and testable with free monads.


This is why I think things like devops benefit from the traditional computer science education. Once you see the pattern, whatever project you were assigned looks like something you've done before. And your users will appreciate the care and attention.


I think you're already doing that? The only thing that's added is serializing the plan to a file and then deserializing it to make the changes.


Yeah, any time you're translating "user args" and "system state" into actions + execution and supporting a "dry run" preview, it seems like you only really have two options: the "ad-hoc, quick-and-dirty, informal implementation", or the "let's actually separate the planning, assumption checking, and state checking from the execution" design.


I was thinking that he's describing implementing an initial algebra for a functor (≈AST) and an F-Algebra for evaluation. But I guess those are different words for the same things.


I like that idea! For an application like Terraform, Ansible or the like, it seems ideal.

For something like in the article, I’m pretty sure a plan mode is overkill though.

Planning mode must involve making a domain specific language or data structure of some sort, which the execution mode will interpret and execute. I’m sure it would add a lot of complexity to a reporting tool where data is only collected once per day.


No need to overthink it. In any semi-modern language you can (de)serialize anything to and from JSON, so it's really not that hard. The only thing you need to do is have a representation for the plan in your program. Which I will argue is probably the least error-prone way to implement --dry-run anyway (as opposed to sprinkling branches everywhere).
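
For example (a rough sketch; the verbs and handler names here are made up, not any real tool's schema), the plan can be nothing more than a list of verb/argument records that round-trips through JSON:

    import json

    # A plan is just data: a list of steps, each a verb plus its arguments.
    plan = [
        {"verb": "create_dir", "args": {"path": "/tmp/reports"}},
        {"verb": "delete_file", "args": {"path": "/tmp/reports/old.txt"}},
    ]

    # Plan mode / --dry-run writes the plan out for review...
    with open("plan.json", "w") as f:
        json.dump(plan, f, indent=2)

    # ...and apply mode reads it back and dispatches each step.
    with open("plan.json") as f:
        steps = json.load(f)

    handlers = {
        "create_dir": lambda args: print("create", args["path"]),    # stand-ins for
        "delete_file": lambda args: print("delete", args["path"]),   # real side effects
    }

    for step in steps:
        handlers[step["verb"]](step["args"])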


> you can (de)serialize anything to and from JSON, so it's really not that hard

First, it is hard, especially in an at least somewhat portable manner.

Second, serialization only matters if you cannot (storage, IPC) pass data around in memory anyway. That's not the problem raised, though. Whatever the backing implementation, the plan ultimately consists of some instructions (verbs in the parent) over objects (arguments in the parent). Serializing instructions in any other way than dropping non-portable named references requires one to define an execution language, which is not an easy feat.

> The only thing you need to do is have a representation for the plan in your program.

That "only" is doing heavier lifting than you probably realize. Such a representation, which is by the way specified to be executable bidirectionally (rollback capabilities), is a full-blown program, so you end up implementing a language spec, codegen and execution engines. In cases of relatively simple business models, that is going to be the majority of the engineering effort.


> First, it is hard, especially in at least somewhat portable manner.

I'm curious what portability concerns you've run into with JSON serialization. Unless you need to deal with binary data for some reason, I don't immediately see an issue.

> Such representation, which is by the way specified to be executable bidirectionally (roll back capabilities), is a full blown program

Of course this depends on the complexity of your problem, but I'd imagine this could be as simple as a few configuration flags for some problems. You have a function to execute the process that takes the configuration and a function to roll back that takes the same configuration. This does tie the representation very closely to the program itself so it doesn't work if you want to be able to change the program and have previously generated "plans" continue to work.


> I'm curious what portability concerns you've run into with JSON serialization.

The hard part concerns the instructions, and it is not the technical implementation of serializing in-memory data structures into a serialization format (be it JSON or something bespoke) that is the root of the complexity.

> You have a function to execute the process that takes the configuration and a function to roll back that takes the same configuration.

Don't forget granularity and state tracking. The opposite of a seemingly simple operation like "set config option foo to bar" is not a straightforward inverse: you need to track the previous value. Does the dry run stop at computing the final value for foo and leave possible access-control issues to surface during the real run, or does it perform a "write nothing" operation to catch those?
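
In sketch form (made-up names, Python), even the trivial case needs that bookkeeping: the plan step has to capture the previous value at planning time, and apply has to re-check it before writing so that the rollback stays valid.

    config = {"foo": "old"}

    def plan_set(key, new_value):
        # Capture the previous value so the step has a well-defined inverse.
        return {"key": key, "old": config.get(key), "new": new_value}

    def apply_set(step):
        # Re-check the assumption captured at plan time before writing.
        if config.get(step["key"]) != step["old"]:
            raise RuntimeError("value changed since planning; refusing to apply")
        config[step["key"]] = step["new"]

    def rollback_set(step):
        config[step["key"]] = step["old"]

    step = plan_set("foo", "bar")  # a dry run stops here; access-control issues stay hidden
    apply_set(step)
    rollback_set(step)             # restores "old"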

> This does tie the representation very closely to the program itself so it doesn't work if you want to be able to change the program and have previously generated "plans" continue to work.

Why serialize then? Dump everything into one process space and call the native functions. Serialization implies strictly controlled, out-of-band interfaces, which is a fragile implementation of codegen+interpreter machinery.


Right, but you still have to define every "verb" your plan will have, its "arguments", etc. No need to write a parser (even Java can serialize/deserialize stuff), as you say, but you have to meta-engineer the tool, not just script a series of commands.


It's not strictly related to the original theme, but I want to mention this.

Ansible's implementation is okay, but not perfect (and this is difficult to implement properly). For cases like file changes it works, but if you install a package and rely on it later, the --check run will fail. So I find myself adding conditions like "is this a --check run?"

Ansible is treated as an idempotent tool, which it's not. If I delete a package from the list, it will pollute the system until I create a set of "tear-down" tasks.

Probably, Nix is a better alternative.


Yes! I'm currently working on a script that modifies a bunch of sensitive files, and this is the approach I'm taking to make sure I don't accidentally lose any important data.

I've split the process into three parts:

1. Walk the filesystem, capture the current state of the files, and write out a plan to disk.

2. Make sure the state of the files from step 1 has not changed, then execute the plan. Capture the new state of the files. Additionally, log all operations to disk in a journal.

3. Validate that no data was lost or unexpectedly changed using the captured file state from steps 1 and 2. Manually look at the operations log (or dump it into an LLM) to make sure nothing looks off.

These three steps can be three separate scripts, or three flags to the same script.
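
In rough outline it looks something like this (a simplified Python sketch with made-up helper names; content hashes stand in for "file state" and the real edits are elided):

    import hashlib, json
    from pathlib import Path

    def snapshot(paths):
        # Capture the current state of every file as a content hash.
        return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

    def execute(plan, before):
        # Refuse to run if anything changed between planning and execution.
        if snapshot(plan["paths"]) != before:
            raise RuntimeError("files changed since the plan was written")
        journal = []
        for p in plan["paths"]:
            journal.append({"op": "modify", "path": p})  # stand-in for the real edit
        return journal

    def validate(before, after, journal):
        # Diff the captured states and eyeball the journal for anything unexpected.
        changed = [p for p in before if before[p] != after.get(p)]
        print("changed files:", changed)
        print(json.dumps(journal, indent=2))

    paths = ["notes.txt"]                         # hypothetical target files
    before = snapshot(paths)                      # step 1
    journal = execute({"paths": paths}, before)   # step 2
    validate(before, snapshot(paths), journal)    # step 3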


I think it's configurable, but my experience with Terraform is that by default, when you `terraform apply`, it refreshes state, which seems tantamount to running a new plan. I.e. it's not simply executing what's in the plan; it's effectively running a fresh plan and using that. The plan is more like a preview.


That is the default, but the correct (and poorly documented and supported) way to use terraform is to save the plan and re-use it when you apply. See the -out parameter to terraform plan, and then never apply again without it.


Totally agree, and this is covered in an (identically named?) Google Research blog [1].

Just last week I was writing a demo-focused Python file called `safetykit.py`, which has this as its first demo:

    def praise_dryrun(dryrun: bool = True) -> None:
        ...
The snippet which demonstrates the plan-then-execute pattern I have is this:

    import glob
    import os

    def gather(paths):
        # Expand each glob pattern into the concrete list of matching files (pure planning).
        files = []
        for pattern in paths:
            files.extend(glob.glob(pattern))
        return files

    def execute(files):
        # The only function with side effects.
        for f in files:
            os.remove(f)

    # tmp_dir and dryrun are defined elsewhere in the file.
    files = gather([os.path.join(tmp_dir, "*.txt")])
    if dryrun:
        print(f"Would remove: {files}")
    else:
        execute(files)
I introduced dry-run at my company and I've been happy to see it spread throughout the codebase, because it's a coding practice that more than pays for itself.

[1] https://www.gresearch.com/news/in-praise-of-dry-run/


G-Research is a trading firm, not Google Research.


The G stands for "Google", does it not?


There is no relation.


> That's why I prefer Terraform's approach of having a "plan" mode. It doesn't just tell you what it would do but does so in the form of a plan it can later execute programmatically. Then, if any of the assumptions made during planning have changed, it can abort and roll back.

And how do you imagine doing that for the "rm" command?


In my case, I’d use a ZFS snapshot. Many equivalent tools exist on different OSes and filesystems as well.


Really? Should I be snapshotting the volume before every "rm"? Even if it's part of routine file exchanges between machines? (As happens on many production lines, especially older ones.)

I think the current semantics of "rm" work fine. But I understand the new world where we'll perhaps be deleting single files using Terraform, or clusters of machines, or possibly LLMs/AI agents.


Oh, I don't think we should change the semantics of "rm"--not because reversibility is unimportant (shameless self-promotion: it is: https://blog.zacbentley.com/post/on-reversibility/), but because baking it into "rm" is the wrong layer.

Folks usually want reversibility in the context of a logical set of operations, like a Terraform apply. For shell commands like "rm", the logical set of operations might be a session (having a ZFS snapshot taken on terminal session start, with a sane auto-delete/age-out rotation, would be super useful! I might script that up in my shell profile in fact) or a script, or a task/prompt/re-prompt of an AI agent. But yeah, it definitely shouldn't happen at the level of a singular "rm" call.

Since filesystem snapshots (in most snapshot-capable filesystems, not just ZFS) are very simple to create, and are constant-time or otherwise extremely fast to perform, the overhead of taking this approach wouldn't be too bad.


I had a similar (but not as good) thought, which was to separate out the action from the planning in code and then inject the action system. So --dry-run would pass the ConsoleOutput() action interface, but without it, it passes a LiveExecutor() (I'm sure there's a better name).

Assuming our system is complex enough. I guess it sits between if dry_run and execute(plan()) in its complexity.
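
Something like this, maybe (a quick Python sketch; ConsoleOutput/LiveExecutor are just the placeholder names from above, not a real library):

    import os

    class ConsoleOutput:
        # Dry-run "executor": only describes what would happen.
        def remove(self, path):
            print(f"would remove {path}")

    class LiveExecutor:
        # Real executor: actually performs the action.
        def remove(self, path):
            os.remove(path)

    def cleanup(paths, executor):
        # The logic is written once against whichever executor was injected.
        for p in paths:
            executor.remove(p)

    cleanup(["/tmp/a.txt"], ConsoleOutput())   # with --dry-run
    # cleanup(["/tmp/a.txt"], LiveExecutor())  # without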


I'm a happy Heroic user but I don't mind them porting GOG Galaxy. Makes for a smoother migration for people coming from Windows, for example.


The article names a lot of other things that AI is being used for besides scamming the elderly, such as making us distrust everything we see online, generating sexually explicit pictures of women without their consent, stealing all kinds of copyrighted material, driving autonomous killer drones and more generally sucking the joy out of everything.

And I think I'm inclined to agree. There are a small number of things that have gotten better because of AI (certain kinds of accessibility tech) and a huge pile of things that just suck now. The internet, by comparison, feels like a clear net positive to me, even with all the bad it enables.


Here's the thing with AI: especially as it becomes more AGI-like, it will encompass all human behaviors. This will lead to the bad behaviors becoming especially noticeable, since bad actors quickly realized this is a force multiplication factor for them.

This is something everyone needs to think about when discussing AI safety. Even ANI applications carry a lot of potential societal risks and they may not be immediately evident. I know with the information superhighway few expected it to turn into a dopamine drip feed for advertising dollars, yet here we are.


> bad actors quickly realized this is a force multiplication factor for them

You'd think we would have learned this lesson when we failed to implement email charges that netted to $0 for balanced send/receive patterns, and thereby ushered in a couple of decades of spam, only eventually solved by centralization (Google).

Driving the cost of anything valuable to zero inevitably produces an infinite torrent of volume.


AI doesn't encompass any "human behaviours"; the humans controlling it do. Grok doesn't generate nude pictures of women because it wants to; it does it because people tell it to and it has (or had) no instructions to the contrary.


If it can generate porn, it can do so because it was explicitly trained on porn. Therefore the system was designed to generate porn. It can't just materialize a naked body without having seen millions of them. They do not work that way.


Not all pictures of naked people are porn.


I hate to be a smartass, but do you read the stuff you type out?

>Grok doesn't generate nude pictures of women because it wants to,

I don't generate chunks of code because I want to. I do it because that's how I get paid and like to eat.

What's interesting with LLMs is that they are more like human behaviors than any other software. First, you can't tell non-AI (not just genAI) software to generate a picture of a naked woman; it doesn't have that capability. So after that, you have models that are trained on content such as naked people. I mean, that's something humans are trained on, unless we're blind I guess. If you take a data set encompassing all human behaviors, which we do, then the model will have human-like behaviors.

It's in post-training that we add instructions to the contrary. Much like, if you live in America, you're taught that seeing naked people is worse than murdering someone, and that if someone creates a naked picture of you, your soul has been stolen. With those cultural biases programmed into you, you will find it hard to do things like paint a picture of a naked person as art. This would be OpenAI's models. And if you're a person who wanted to rebel, or lived in a culture that accepted nudity, then you wouldn't have a problem with it.

How many things do you do because society programmed you that way, and you're unable to think outside that programming?


You’re way off base. It can also create sexually explicit pictures of men.


Not sure if you're being sarcastic, but women are disproportionately affected by this compared to men.


That sounds like it could be true, but do you have any actual evidence of that?


This is one of those things that are hard to get statistics on due to the nature of the subject, but going to any website that features AI-generated content, like CivitAI, will show you a lot more naked AI-generated women than men, and that the images of women are of much better quality than those of men. None of the people actually exist, of course, but some things stem from this:

1. There are probably AI portals that are OK with uploading nonconsensual sexual images of people. I am not about to go looking for those, but the ratio of women to men on those sites is likely similar.

2. The fact that the quality of the women is better than the quality of the men speaks to vastly more training being done on women.

3. Because there's so much training on women, it's just easier to use AI for nefarious purposes on women than on men (you have to find custom-trained LoRAs to get male anatomy right, for example).

I did try to look for statistics out of curiosity, but most just cite a number without evidence.

https://www.pbs.org/newshour/show/women-face-new-sexual-hara...

https://verfassungsblog.de/deepfakes-ncid-ai-regulation/

https://www.csis.org/analysis/left-shoulder-worries-ai


Obviously there is more interest in generating images of naked women, since naked women look better than naked men. It’s not some kind of patriarchal conspiracy.


It is obvious, but again that's subjective (I'm a straight male so of course I find it to be true but I'm not sure straight women would agree). The person I was responding to was asking if evidence existed, so I was curious to see if evidence did indeed exist.


In addition to AI-specific data, the existing volume and consumption patterns for non-AI pornography can be extrapolated to AI, I think, with high confidence.


Source: I have eyes


>The internet by comparison feels like a clear net positive to me, even with all the bad it enables.

When I think of the internet, I think of malware, porn, social media manipulating people, flame wars, "influencers", and more.

It is also used to scam the elderly, share photoshopped sexually explicit pictures of men, women, and children without their consent, steal all kinds of copyrighted material, and definitely suck the joy out of everything. Revenge porn wasn't started in 2023 with OpenAI. And just look at Meta's current case about Instagram being addictive and harmful to children. If "AI" is a tech carcinogen, then the internet is a nuclear reactor, spewing radioactive material every which way. But hey, it keeps the lights on! Clearly, a net positive.

Let's just be intellectually consistent, that's all I'm saying.


[flagged]


It's true, making these things easier and faster and more accessible really doesn't matter.


That's a bonkers take.

Am I misunderstanding you or are you somehow saying anything done in the past is fine to do more of?


Poe's Law, mate.

