fl4tul4's comments

Yes, sir, blog away, we need your wisdom.

"There's so much oil in here, US is about to invade this dish!"

--Chef Ramsay


Haven't you heard about the millions of dollars spent trying to 'convince' the CEO to do so?


Yes, we heard the CEO say that Mozilla is not going to take the millions of dollars to do so.

There are things to be angry at Mozilla about, but I'm not sure how you can read the exact opposite of what was said in this particular case.


Got a source?



Just to be accurate, then: "it" refers to Mozilla (any and all products).

It'll be the last straw for me.


The strange case of "TLA+" topics reaching HN's front page for no apparent reason.


The reason is obvious: LLMs got good at writing them, the initial cost to write one went down 99%, the toil of keeping them up to date with the code went down 99%, and some people noticed. The value proposition went from ‘you must be joking, unless we’re AWS and you’re proposing an IAM code change’ to ‘plan mode may also emit a TLA+ spec as an intermediate implementation artifact’.


Just like with code, the issue was never about writing a TLA+ spec. It was about writing a good TLA+ spec. Anyone can press a piano key, but not everyone can write the Moonlight Sonata.


Yes, but TLA+, if you come from traditional programming languages, is quite harsh. I have two books on it and keep coming back to it every once in a while, but every single time a lot of the investment goes into the exact syntax of certain things. Better tooling like AI autocomplete is a huge step there: I can still think about the spec without getting stuck every single time on minor details.
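
To give a concrete example of the kind of detail I mean (a toy module I just made up, not from either book), even something trivial has notation that's easy to forget between sessions: the primed variables, the [][Next]_x form, the module delimiters.

    ---- MODULE Counter ----
    EXTENDS Naturals
    VARIABLE x

    Init == x = 0
    Next == x' = (x + 1) % 3      \* x' is the value of x in the next state
    Spec == Init /\ [][Next]_x    \* "always take a Next step, or stutter"

    TypeOK == x \in 0..2          \* an invariant TLC can check
    ====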


The devil is in the details. That's the whole point of formalism: there should be no ambiguity, so that what's written is exactly what you meant to write. If you generate a spec and can't verify that what was generated is what you intended, the whole point of writing it is moot.


Yes and no: relevant details. If the formalism bogs you down in tangential or irrelevant details, it's a poor fit.

But I don't think that's the case here. Most programmers don't have much experience with specs, and much less experience with formalized specs, especially if they come from an imperative background. I think those from declarative backgrounds will be much more at home with TLA+. The imperative style causes people to obsess over the "how", while the declarative style focuses more on the "what", and specs are more of a matter of "what" than "how".
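
A made-up sketch of what I mean by "what" over "how" (module and names invented for illustration): the property below just says that at most one process is ever in the critical section; nothing in the spec says how a lock would achieve that.

    ---- MODULE Mutex ----
    EXTENDS FiniteSets
    VARIABLE inCS                 \* the set of processes in the critical section
    Procs == {"p1", "p2"}

    Init  == inCS = {}
    Enter == \E p \in Procs : inCS = {} /\ inCS' = inCS \cup {p}
    Leave == \E p \in Procs : p \in inCS /\ inCS' = inCS \ {p}
    Next  == Enter \/ Leave

    MutualExclusion == Cardinality(inCS) <= 1   \* the "what": never two at once
    ====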


It can be both. Here is one example that provides the how instead of the what:

https://www.rfc-editor.org/rfc/rfc5321.html#section-3.1

A lot of declarative languages do a lot of hard work designing an imperative kernel that supports the declarative constructs. It just takes skill to separate the two, just as it takes skill to split code between state, interfaces, and implementations.


I understand the semantics; I forget the syntax. So better tooling is great, because it lets even someone like me use TLA+ after having forgotten the syntax details within two weeks.


> if you come from traditional programming languages, is quite harsh

This may be the result of failing to adequately distinguish between the spec and the implementation. PlusCal, as I understand it, is an attempt to bridge this gap for programmers who don't yet have a good grasp of this distinction. So where in TLA+ you would model the observable behavior of a system in terms of transitions in a state machine, PlusCal gives users a language that looks more like implementation. (Frankly, I find the PlusCal approach more distracting and opaque, but I'm also a more abstract thinker than most.)
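
A rough, invented illustration of the gap (a hypothetical module, simplified): the PlusCal fragment reads like code, while the TLA+ the translator emits for it is a bare state-transition relation.

    ---- MODULE Incr ----
    EXTENDS Naturals

    (* --algorithm incr
    variables x = 0;
    begin
      Bump: x := x + 1;
    end algorithm; *)

    \* The translator produces TLA+ along these lines (simplified):
    \*   Init == x = 0 /\ pc = "Bump"
    \*   Bump == pc = "Bump" /\ x' = x + 1 /\ pc' = "Done"
    ====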


It's not strange at all.

The second, and highly upvoted, story on HN right now is "AI will make formal verification go mainstream"[1].

[1] https://news.ycombinator.com/item?id=46294574


It's interesting, btw, that Martin Kleppmann lists a couple of proof assistants but no model checkers like TLA+. Given how extensively he works on distributed systems, I would have thought those would be on the list as well.


Fast-forward to 2025.

The same problems still exist, exacerbated by the prevalence of LLMs and the absence of any detection mechanisms.

A recipe for disaster.


Not to disagree with your point, but why must literally every discussion have a mention of AI/LLMs?

Is it possible not to bring them up and still have a deep conversation?


I guess the whole discussion (in 1963, in 2025) is about 'knowledge acquisition' (or lack thereof). He mentions the Brazilian students memorising 'stuff' without understanding - as a former Brazilian educator, I can tell you that when I was working there in 2010-2020 it hadn't changed and, to my point, got worse in recent years. I think a lot of students care about 'getting a diploma' without actually learning anything, but my main concern is fairness: how could I tell good students apart from 'devious' ones?


The education system is a prime example of Goodhart's Law, and I'm surprised at how little is done to avoid it.

I guess that as long as possessing a piece of paper stating "Mr. White jumped through all the hoops we put in front of him" is a baseline requirement for many jobs, we will always have this problem.

At least in tech, the piece of paper helps, but hobby projects, external contributions, past job experience, and referrals are what matter most.

But in more and more countries even working at a supermarket requires a degree, so non-academically inclined people will keep finding ways to pass with as little effort as possible (and any learning takes effort). So I can't really blame them.


> why must literally every discussion have a mention of AI/LLMs?

Your sentiment is right, but it's not applicable in this case.

A Teacher who does not really understand what he is teaching can easily have LLMs generate lectures/notes/etc. and pass them along to students without any thought put into them. A Student, for his part, can simply have LLMs generate answers to all of his problem sets and pass them along to the teacher.

The above would be a disaster for the overall spread of science in society.


A teacher who doesn't understand what they teach. Who put that teacher there in the first place?


That is one of the points of the essay. You just need the appropriate credentials to qualify as a Teacher (e.g. a B.Ed/M.Ed), not necessarily "domain understanding" in the Feynman sense.


I have very limited experience with LLMs, and no recent experience teaching. But every time I hear about the problem of students using LLMs, I have two thoughts:

1) When they get out of school, no one can stop them from using LLMs. So banning LLMs now is not a way to teach students how to cope in the future.

2) LLMs are (duh!) often wrong. So treat what the LLMs say as hypotheses, and teach the students how to test those hypotheses. Or if the LLMs are being used to write essays, have the students edit the output for clarity, form, etc. Exams might be given orally, or at least in a situation where the students don't have access to an LLM.


Just noticed this - will read it later. Looks interesting.


This hit hard, for some reason:

>> $127,419 USD raised of $450,000 goal · 788 donations

Go play lute with the angels, noble sir.


Working with a partner has never been a problem, don't worry.


RIP Bob Redford, your movies will stay with us forever.

