Yes, what Postel's Law is about. That's the whole point of contrasting it with Hyrum's Law, no?
Hyrum's Law points out that sometimes the new field is a breaking change in the liberal scenario as well: if you used to just ignore the field and now you don't, a client that was already including it will see a change in behavior. At least by being strict (not accepting empty arrays, extra fields, empty strings, incorrect types that could be coerced, etc.), you know that expanding the domain of valid inputs won't conflict with some unexpected-but-previously-papered-over stuff that current clients are sending.
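A minimal Go sketch of the strict approach (the payload and field names here are made up): the decoder rejects any field it doesn't know about, so the set of accepted inputs stays exactly the documented one, and adding a field later can't collide with junk clients were already sending.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// User is a hypothetical request payload with exactly the fields we accept today.
type User struct {
	Name string `json:"name"`
}

// parseStrict refuses any field we don't currently understand, so the domain
// of valid inputs is exactly what we documented -- nothing papered over.
func parseStrict(body string) (User, error) {
	var u User
	dec := json.NewDecoder(strings.NewReader(body))
	dec.DisallowUnknownFields()
	err := dec.Decode(&u)
	return u, err
}

func main() {
	_, err := parseStrict(`{"name":"ada","nickname":"a"}`) // extra field: rejected
	fmt.Println(err != nil) // true
	u, err := parseStrict(`{"name":"ada"}`)
	fmt.Println(u.Name, err == nil) // ada true
}
```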
I don't think that interpretation makes that much sense. Isn't it a bit too... obvious that you shouldn't just crash and/or corrupt data on invalid input? If the law were essentially "Don't crash or corrupt data on invalid input", it would seem to me that an even better law would be: "Don't crash or corrupt data." Surely there aren't too many situations where we'd want to avoid crashing because of bad input, but we'd be totally fine crashing or corrupting data for some other (expected) reason.
So, I think not crashing because of invalid input is probably too obvious to be a "law" bearing someone's name. IMO, it must be asserting that we should try our best to do what the user/client means so that they aren't frustrated by having to be perfect.
I actually don't think it's that obvious at all (unless you are a senior engineer). It's like the classic joke:
A QA engineer walks into a bar and orders a beer.
She orders 2 beers.
She orders 0 beers.
She orders -1 beers.
She orders a lizard.
She orders a NULLPTR.
She tries to leave without paying.
Satisfied, she declares the bar ready for business. The first customer comes in and orders a beer. They finish their drink, and then ask where the bathroom is.
The bar explodes.
It's usually not obvious when starting to write an API just how malformed the data could be. It's kind of a subconscious bias to sort of assume that the input is going to be well-formed, or at least malformed in predictable ways.
I think the cure for this is another "law"/maxim: "Parse, don't validate." The first step in handling external input is to try to squeeze it into as strict a structure with as many invariants as possible, and, failing that, return an error.
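A quick Go sketch of the idea (the type and its invariant are invented for illustration): the only way to obtain a NonEmptyName is through the parsing function, so downstream code never has to re-check the invariant.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// NonEmptyName can only be constructed via ParseName (the field is unexported,
// so other packages can't build one directly). Any code holding a NonEmptyName
// knows the invariant -- trimmed, non-empty -- already holds.
type NonEmptyName struct{ value string }

// ParseName squeezes raw input into the strict type, or returns an error.
func ParseName(raw string) (NonEmptyName, error) {
	trimmed := strings.TrimSpace(raw)
	if trimmed == "" {
		return NonEmptyName{}, errors.New("name must be non-empty")
	}
	return NonEmptyName{value: trimmed}, nil
}

func (n NonEmptyName) String() string { return n.value }

func main() {
	if n, err := ParseName("  ada  "); err == nil {
		fmt.Println(n) // ada
	}
	_, err := ParseName("   ")
	fmt.Println(err != nil) // true
}
```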
Hmm. Fair point. It's entirely possible that it's not obvious and that the "law" is almost a "reminder" of sorts to not assume you're getting well-formed inputs.
I'm still skeptical that this is the case with Postel's Law, but I do see that it's possible to read it that way. I guess I could always go do some research to prove it one way or the other, but... nah.
And yes, "Parse, don't validate." is one of my absolute favorite maxims (good word choice, by the way; I would've struggled on choosing a word for it here).
Right, even for senior engineers this can be hard to get right in practice. "Parse, don't validate" is certainly one approach to the problem. Choosing languages that force you to get it right is another.
This is also a problem, IMO, in having this optimization in PHP. Anonymous functions are instances of a Closure class, which means that the `===` operator should return false for `foo() === foo()` just like it would for `new MyClass() === new MyClass()`.
But, since when has PHP ever prioritized correctness or consistency over trivial convenience? (I know it's anti-cool these days to hate on PHP, but I work with PHP all the time and it's still a terrible language even in 2026)
I never understood why people think somehow PHP is fine now, and I've had that opinion expressed several times on HN. The best I can make out is that people's expectations are so dismal now that they're like "Well new versions fixed 2 of the 5 worst problems I noticed, so that's good right?"
Because PHP is an amazing backend language for making CRUD apps. Always has been.
It has great web frameworks, a good gradual typing story and is the easiest language to deploy.
You can start with simple shared hosting, copy your files onto the server, and you're done. No Docker, nothing.
Sure, it has warts, but so do all mainstream programming languages. I find it more pleasant than TypeScript, which suffers from long compile times and a crazy complex type system.
The only downside is that PHP as a job means lots of legacy code. It's a solid career, but you will rarely if ever have interesting programming projects.
It’s a “terrible” language? That’s news to me. What’s “terrible” about it?
> `new MyClass() === new MyClass()`
Does that look like the code you’re writing for some reason? Because I’ve seen 100k loc enterprise PHP apps that not once ran into that as an issue. The closest would be entities in an ORM for which there were other features for this anyway.
I'm especially angry that if you go to reddit.com in a mobile browser, it will sometimes fully block you from certain subreddits (not just NSFW ones) and tell you that you can only access it from the app. Meanwhile, you can easily visit the exact same subreddit by typing old.reddit.com/r/whatever. The outright lying bothers me so much. I refuse to be desensitized to lying just because everyone is lying all the time; it's still really wrong, and they really should be ashamed of themselves.
When you say "meme", it sounds like it might not be true. But, a few years ago I handed my stepson a USB flash drive with some files on it, he plugged it into his laptop and the very first thing he did was launch Google Chrome and then not have any clue what to do to access the files (it was a Windows laptop).
One of the most enraging things about life since 2005-ish is that no matter how private and careful I am, it doesn't even matter because every other inconsiderate fool I know and interact with will HAPPILY let some random company have access to THEIR contacts--which includes me--in order to play Farmville for a month until they get bored of that and offer up my private information to the next bullshit ad company that asks for their contacts.
It used to frustrate me that people didn't care about their own privacy, because I genuinely didn't want evil people to hurt them. But, it's even more angering that people don't have the common decency to consider whether their friends and family would want them sharing their phone numbers, email addresses, photos of them, etc.
Yep. If someone is trying to make you do something, or stop doing something, or buy something, your first question should always be "Why?".
Why would someone try to force me off of my browser (that has ad-blocking and tracker-blocking mitigations) and on to a locked-down app that may want permission to run in the background, display notifications, access my files or camera, etc?
Maybe it really is to "improve my experience"... yeah, right.
The idea that making things immutable somehow fixes concurrency issues always made me chuckle.
I remember reading and watching Rich Hickey talking about Clojure's persistent data structures and thinking: okay, that's great. Another thread can't change the data that my thread has, because I'll just be using the old copy and they'll have a new, different copy. But now my two threads are working with different versions of reality... that's STILL a logic bug in many cases.
That's not to say it doesn't help at all, but it's EXTREMELY far from "share xor mutate" solving all concurrency issues/complexity. Sometimes data needs to be synchronized between different actors. There's no avoiding that. Sometimes devs don't notice it because they use a SQL database as the centralized synchronizer, but the complexity is still there once you start seeing the effects of your DB's transaction isolation level (e.g., repeatable read vs. read committed).
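The "different versions of reality" problem can be shown deterministically, no goroutines required. This is a hand-rolled Go sketch (the Account type is invented): each actor works from its own immutable snapshot, there's no data race anywhere, and an update still gets lost.

```go
package main

import "fmt"

// Account is treated as an immutable snapshot: "mutating" it returns a fresh copy.
type Account struct{ Balance int }

func (a Account) Deposit(n int) Account { return Account{Balance: a.Balance + n} }

// lostUpdate plays out two actors working from the same starting snapshot.
func lostUpdate() int {
	shared := Account{Balance: 100}

	// Each actor takes its own (immutable) snapshot of the shared state...
	snapA, snapB := shared, shared

	// ...derives a new value from that snapshot...
	resultA := snapA.Deposit(10)
	resultB := snapB.Deposit(20)

	// ...and writes back. No data race, no torn write, but whoever writes
	// back last wins, and one deposit silently vanishes: a logic bug that
	// immutability alone did nothing to prevent.
	shared = resultA
	shared = resultB
	return shared.Balance
}

func main() {
	fmt.Println(lostUpdate()) // 120, not the 130 both actors intended
}
```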
It's not that shared-xor-mutate magically solves everything, it's that shared-and-mutate magically breaks everything.
Same thing with goto and pointers. Goto kills structured programming and pointers kill memory safety. We're doing fine without either.
Use transactions when you want to synchronise between threads. If your language doesn't have transactions, it probably can't, because it already handed out shared mutation, and now it's too late to put the genie back in the bottle.
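Go has no built-in transactions, so the everyday stand-in is making the whole read-modify-write cycle atomic with a lock. A sketch (the Ledger type is invented for illustration):

```go
package main

import (
	"fmt"
	"sync"
)

// Ledger serialises read-modify-write cycles with a mutex -- a crude
// stand-in for a transaction in a language without STM.
type Ledger struct {
	mu      sync.Mutex
	balance int
}

// Deposit performs the whole read-modify-write atomically with respect to
// other calls, so no update can be lost.
func (l *Ledger) Deposit(n int) {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.balance += n
}

func (l *Ledger) Balance() int {
	l.mu.Lock()
	defer l.mu.Unlock()
	return l.balance
}

func main() {
	l := &Ledger{balance: 100}
	var wg sync.WaitGroup
	for _, n := range []int{10, 20} {
		wg.Add(1)
		go func(n int) { defer wg.Done(); l.Deposit(n) }(n)
	}
	wg.Wait()
	fmt.Println(l.Balance()) // 130: both deposits survive
}
```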
> This, we realized, is just part and parcel of an optimistic TM system that does in-place writes.
+5 insightful. Programming language design is all about having the right nexus of features. Having all the features or the wrong mix of features is actually an anti-feature.
In our present context, most mainstream languages have already handed out shared mutation. To my eye, this is the main reason so many languages have issues with writing async/parallel/distributed programs. It's also why Rust has an easier time of it: they didn't just hand out shared mutation. And also why Erlang has the best time of it: they built the language around no shared mutation.
> It's a good article but I think you need to start explaining structured concurrency from the very core of it: why it exists in the first place.
I disagree. Not every single article or essay needs to start from kindergarten and walk us up through quantum theory. It's okay to set a minimum required background and write to that.
As a seasoned dev, every time I have to dive into a new language or framework, I'll often want to read about styles and best practices that the community is coalescing around. I promise there is no shortage at all of articles about Swift concurrency aimed at junior devs for whom their iOS app is the very first real programming project they've ever done.
I'm not saying that level of article/essay shouldn't exist. I'm just saying there's more than enough. I almost NEVER find articles that are targeting the "I'm a newbie to this language/framework, but not to programming" audience.
> I promise there is no shortage at all of articles about Swift concurrency aimed at junior devs for whom their iOS app is the very first real programming project they've ever done.
You’d be surprised. Modern Swift concurrency is relatively new and the market for Swift devs is small. Finding good explainers on basic Swift concepts isn’t always easy.
I’m extremely grateful to the handful of Swift bloggers who regularly share quality content.
Paul Hudson is the main guy right now, although his stuff is still a little advanced for me. Sean Allen on youtube does great video updates and tutorials.
I haven't written any Go in many years (way before generics), but I'm shocked that something so implicit and magical is now valid Go syntax.
I didn't look up this syntax or its rules, so I'm just reading the code totally naively. Am I to understand that the `user` variable in the final return statement is not really being treated as a value, but as a reference? Because the second part of the return (json.NewDecoder(resp.Body).Decode(&user)) sure looks like it's going to change the value of `user`. My brain wants to think it's "too late" to set `user` to anything by then, because the value was already read out (because I'm assuming the tuple is being constructed by evaluating its arguments left-to-right, like I thought Go's spec enforced for function arg evaluation). I would think that the returned value would be: `(nil, return-value-of-Decode-call)`.
I'm obviously wrong, of course, but whereas I always found Go code to at least be fairly simple--albeit tedious--to read, I find this to be very unintuitive and fairly "magical" for Go's typical design sensibilities.
No real point, here. Just felt so surprised that I couldn't resist saying so...
> I would think that the returned value would be: `(nil, return-value-of-Decode-call)`.
`user` is typed as a struct, so it's always going to be a struct in the output, it can't be nil (it would have to be `*User`). And Decoder.Decode mutates the parameter in place. Named return values essentially create locals for you. And since the function does not use naked returns, it's essentially saving space (and adding some documentation in some cases, though here the value is nil) for this:
func fetchUser(id int) (User, error) {
    var user User
    var err error
    resp, err := http.Get(fmt.Sprintf("https://api.example.com/users/%d", id))
    if err != nil {
        return user, err
    }
    defer resp.Body.Close()
    return user, json.NewDecoder(resp.Body).Decode(&user)
}
Yeah, not really an expert, but my understanding is that naming the return value automatically declares the variable (initialized to its zero value) and places it into the function's scope.
I think that for the user example it works because Decode is operating on the same memory as the named return variable, since it's handed &user.
I like the idea of having named returns, since it's common to return many items as a tuple in Go functions, and I think it's clearer to have those named than leaving it to the user, especially if it's returning many of the same primitive type like ints/floats.
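For example, a small sketch (divmod is a made-up function, not from the thread): with two ints coming back, the names quot and rem document which is which right in the signature, and a reader at the call site doesn't have to guess the order.

```go
package main

import (
	"errors"
	"fmt"
)

// divmod: the named results quot and rem say which int is which, and they
// also act as pre-declared, zero-valued locals inside the function body.
func divmod(a, b int) (quot, rem int, err error) {
	if b == 0 {
		err = errors.New("division by zero")
		return // naked return: quot and rem are still their zero values
	}
	quot, rem = a/b, a%b
	return
}

func main() {
	q, r, err := divmod(7, 3)
	fmt.Println(q, r, err) // 2 1 <nil>
}
```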